Core plasma design of the compact helical reactor with a consideration of the equipartition effect
NASA Astrophysics Data System (ADS)
Goto, T.; Miyazawa, J.; Yanagi, N.; Tamura, H.; Tanaka, T.; Sakamoto, R.; Suzuki, C.; Seki, R.; Satake, S.; Nunami, M.; Yokoyama, M.; Sagara, A.; the FFHR Design Group
2018-07-01
An integrated physics analysis of the plasma operation scenario of the compact helical reactor FFHR-c1 has been conducted. The DPE method, which predicts radial profiles in a reactor by direct extrapolation from reference experimental data, has been extended to incorporate the equipartition effect. A close investigation of the plasma operation regime has been carried out, and a candidate plasma operation point of FFHR-c1 has been identified within the parameter regime already confirmed in LHD experiments with respect to MHD equilibrium, MHD stability and neoclassical transport.
NASA Astrophysics Data System (ADS)
Lee, Kurnchul; Venugopal, Vishnu; Girimaji, Sharath S.
2016-08-01
Return-to-isotropy and kinetic-potential energy equipartition are two fundamental pressure-moderated energy-redistributive processes in anisotropic compressible turbulence. The pressure-strain correlation tensor redistributes energy among the various Reynolds stress components, and pressure-dilatation is responsible for energy reallocation between dilatational kinetic and potential energies. The competition and interplay between these pressure-based processes are investigated in this study. Direct numerical simulations (DNS) of low-turbulent-Mach-number dilatational turbulence are performed employing the hybrid thermal lattice Boltzmann method (HTLBM). It is found that a tendency towards equipartition precedes the proclivity for isotropization. The evolution towards equipartition has a collateral but critical effect on return-to-isotropy. The preferential transfer of energy from strong (rather than weak) Reynolds stress components to potential energy accelerates the isotropization of dilatational fluctuations. Understanding these pressure-based redistributive processes is critical for developing insight into the character of compressible turbulence.
Holographic equipartition from first order action
NASA Astrophysics Data System (ADS)
Wang, Jingbo
2017-12-01
Recently, the idea that gravity is emergent has attracted much attention. The "Emergent Gravity Paradigm" is a program that develops this idea from the thermodynamic point of view, expressing the Einstein equation in the language of thermodynamics. A key equation in this paradigm is holographic equipartition, which says that, in all static spacetimes, the degrees of freedom on the boundary equal those in the bulk. The time evolution of spacetime is then driven by the departure from holographic equipartition. In this paper, we obtain holographic equipartition and its generalization from the first-order formalism, that is, with the connection and its conjugate momentum taken as the canonical variables. The final results have a structure similar to those of the metric formalism, providing another proof of holographic equipartition.
Holographic equipartition and the maximization of entropy
NASA Astrophysics Data System (ADS)
Krishna, P. B.; Mathew, Titus K.
2017-09-01
The accelerated expansion of the Universe can be interpreted as a tendency to satisfy holographic equipartition. It can be expressed by a simple law, ΔV = Δt (N_surf − ε N_bulk), where V is the Hubble volume in Planck units, t is the cosmic time in Planck units, and N_surf/bulk is the number of degrees of freedom on the horizon/in the bulk of the Universe. We show that this holographic equipartition law effectively implies the maximization of entropy. In the cosmological context, a system that obeys the holographic equipartition law behaves as an ordinary macroscopic system that proceeds to an equilibrium state of maximum entropy. We consider the standard ΛCDM model of the Universe and show that it is consistent with the holographic equipartition law. Analyzing the entropy evolution, we find that it also proceeds to an equilibrium state of maximum entropy.
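The law quoted above can be checked numerically. The sketch below verifies, for a flat ΛCDM model in Planck-type units (c = ħ = G = k_B = 1), that dV/dt equals N_surf − ε N_bulk, using the standard choices N_surf = A/L_p², T = H/2π, and the Komar energy E = (ρ + 3p)V; the density parameters are illustrative assumptions, not fits.

```python
import numpy as np

# Check of the holographic equipartition law dV/dt = N_surf - eps*N_bulk
# for flat LambdaCDM in Planck units. Parameter values are illustrative.
H = 1.0                          # Hubble rate (sets the unit of time)
Omega_m, Omega_L = 0.3, 0.7      # matter / cosmological-constant fractions
rho_c = 3.0 * H**2 / (8.0 * np.pi)
rho_m, rho_L = Omega_m * rho_c, Omega_L * rho_c

# Left-hand side: rate of change of the Hubble volume V = (4 pi / 3) H^-3,
# using Hdot = -4 pi (rho + p); only matter contributes to rho + p here.
H_dot = -4.0 * np.pi * rho_m
dV_dt = -4.0 * np.pi * H_dot / H**4

# Right-hand side: surface minus bulk degrees of freedom.
N_surf = 4.0 * np.pi / H**2                                   # horizon area / L_p^2
T = H / (2.0 * np.pi)                                         # horizon temperature
E_komar = (rho_m - 2.0 * rho_L) * (4.0 * np.pi / 3.0) / H**3  # (rho + 3p) V < 0
N_bulk = -2.0 * E_komar / T                                   # |E| / (T/2), eps = +1
rhs = N_surf - N_bulk

print(dV_dt, rhs)  # the two sides agree
```

The agreement is exact because the law is algebraically equivalent to the second Friedmann equation, Ḣ + H² = −(4π/3)(ρ + 3p).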
Thermodynamic laws and equipartition theorem in relativistic Brownian motion.
Koide, T; Kodama, T
2011-06-01
We extend stochastic energetics to a relativistic system. The thermodynamic laws and the equipartition theorem are discussed for a relativistic Brownian particle, and the first and second laws of thermodynamics in this formalism are derived. The relation between the relativistic equipartition relation and the rate of heat transfer is discussed in the relativistic case together with the nature of the noise term.
The Two Faces of Equipartition
NASA Astrophysics Data System (ADS)
Sanchez-Sesma, F. J.; Perton, M.; Rodriguez-Castellanos, A.; Campillo, M.; Weaver, R. L.; Rodriguez, M.; Prieto, G.; Luzon, F.; McGarr, A.
2008-12-01
Equipartition is good. Beyond its philosophical implications, in many instances of statistical physics it implies that the available kinetic and potential elastic energy, in phase space, is distributed in the same fixed proportions among the possible "states". There are at least two distinct and complementary descriptions of such states in a diffuse elastic wave field u(r,t). One asserts that u may be represented as an incoherent isotropic superposition of incident plane waves of different polarizations. Each type of wave has an appropriate share of the available energy. This definition, introduced by Weaver, is similar to the room-acoustics notion of a diffuse field, and it suffices to permit prediction of field correlations. The other description assumes that the degrees of freedom of the system, in this case the kinetic energy densities, are all incoherently excited with equal expected amplitude. This definition, introduced by Maxwell, is also familiar from room acoustics using the normal modes of vibration within an arbitrarily large body. Usually, to establish whether an elastic field is diffuse and equipartitioned, only the first description has been applied, which requires the separation of dilatational and shear waves using carefully designed experiments. When the medium is bounded by an interface, waves of other modes, for example Rayleigh waves, complicate the measurement of these energies. As a consequence, it can be advantageous to use the second description. Moreover, when an elastic field is diffuse and equipartitioned, each spatial component of the energy densities is linked to the corresponding component of the imaginary part of the Green function at the source. Accordingly, one can use the second description to retrieve the Green function and obtain more information about the medium. The equivalence between the two descriptions of equipartition is given for an infinite space and extended to the case of a half-space.
These two descriptions are equivalent thanks to the relationship between average autocorrelations and the imaginary part of the Green function at the source. Preliminary results from data sets from Chilpancingo, Mexico, and the TauTona Gold Mine, South Africa, strongly suggest that equipartition, which guarantees the diffuse nature of seismic fields, has more than one face. Acknowledgements. Partial support from DGAPA-UNAM, Project IN114706, Mexico; from Project MCyT CGL2005-05500-C02/BTE, Spain; from the project DyETI of INSU-CNRS, France; and from the Instituto Mexicano del Petróleo is greatly appreciated.
Deviation from the law of energy equipartition in a small dynamic-random-access memory
NASA Astrophysics Data System (ADS)
Carles, Pierre-Alix; Nishiguchi, Katsuhiko; Fujiwara, Akira
2015-06-01
A small dynamic-random-access memory (DRAM) coupled with a high-charge-sensitivity electrometer based on a silicon field-effect transistor is used to study the law of equipartition of energy. By statistically analyzing the movement of single electrons in the DRAM under various temperature and voltage conditions in thermal equilibrium, we observe behavior that differs from the prediction of the law of equipartition of energy: when the charging energy of the DRAM capacitor is comparable to or smaller than the thermal energy kBT/2, random electron motion is governed entirely by thermal energy; on the other hand, when the charging energy becomes higher relative to the thermal energy kBT/2, random electron motion is suppressed, which indicates a deviation from the law of equipartition of energy. Since the law of equipartition is analyzed using the DRAM, one of the most familiar devices, we believe our results apply universally to electronic devices.
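The crossover described above is set by comparing the single-electron charging energy e²/2C of the storage node with k_B T/2. A minimal sketch, with an assumed order-of-magnitude capacitance (not the actual device in the paper):

```python
# Compare the single-electron charging energy e^2 / 2C of a small storage
# node with the thermal energy k_B T / 2. The capacitance below is an
# assumed, illustrative value, not taken from the paper's device.
E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K

def charging_vs_thermal(capacitance_f, temperature_k):
    """Return (E_c, k_B*T/2) in joules for a node of the given capacitance."""
    e_c = E_CHARGE**2 / (2.0 * capacitance_f)
    return e_c, 0.5 * K_B * temperature_k

# At C ~ 1 aF and room temperature the charging energy dominates: this is
# the regime in which the paper reports suppressed electron motion.
e_c, e_th = charging_vs_thermal(1e-18, 300.0)
print(e_c, e_th)
```

For larger capacitances (e.g. 1 fF) the ordering reverses and thermal energy rules the electron motion, matching the first regime in the abstract.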
Tsai, V.C.
2010-01-01
Recent derivations have shown that when noise in a physical system has its energy equipartitioned into the modes of the system, there is a convenient relationship between the cross-correlation of time series recorded at two points and the Green's function of the system. Here, we show that even when energy is not fully equipartitioned and modes are allowed to be degenerate, a similar (though less general) property holds for equations with wave-equation structure. This property can be used to understand why certain seismic noise correlation measurements are successful despite known degeneracy and lack of equipartition on the Earth. No claim to original US government works. Journal compilation © 2010 RAS.
Temperature profile and equipartition law in a Langevin harmonic chain
NASA Astrophysics Data System (ADS)
Kim, Sangrak
2017-09-01
The temperature profile in a Langevin harmonic chain is explicitly derived and the validity of the equipartition law is checked. First, we point out that the temperature profile obtained in previous studies does not agree with the equipartition law: in thermal equilibrium, the temperature profile deviates from a uniform distribution, in violation of the equipartition law, particularly at the ends of the chain. The matrix connecting the temperatures of the heat reservoirs to the temperatures of the harmonic oscillators turns out to be a probability matrix. By explicitly calculating the power spectrum of the probability matrix, we show that the discrepancy comes from neglecting the power spectrum at higher frequencies ω, which correspond to decaying modes associated with imaginary wave numbers q.
ERIC Educational Resources Information Center
Confrey, Jere; Maloney, Alan
2015-01-01
Design research studies provide significant opportunities to study new innovations and approaches and how they affect the forms of learning in complex classroom ecologies. This paper reports on a two-week long design research study with twelve 2nd through 4th graders using curricular materials and a tablet-based diagnostic assessment system, both…
A novel look at energy equipartition in globular clusters
NASA Astrophysics Data System (ADS)
Bianchini, P.; van de Ven, G.; Norris, M. A.; Schinnerer, E.; Varri, A. L.
2016-06-01
Two-body interactions play a major role in shaping the structural and dynamical properties of globular clusters (GCs) over their long-term evolution. In particular, GCs evolve towards a state of partial energy equipartition that induces a mass dependence in their kinematics. Using a set of Monte Carlo cluster simulations evolved in quasi-isolation, we show that the stellar mass dependence of the velocity dispersion σ(m) can be described by an exponential function, σ² ∝ exp(-m/m_eq), with the parameter m_eq quantifying the degree of partial energy equipartition of the systems. This simple parametrization successfully captures the behaviour of the velocity dispersion at lower as well as higher stellar masses, that is, in the regime where the system is expected to approach full equipartition. We find a tight correlation between the degree of equipartition reached by a GC and its dynamical state, indicating that clusters more than about 20 core relaxation times old have reached a maximum degree of equipartition. This equipartition-dynamical state relation can be used as a tool to characterize the relaxation condition of a cluster with a kinematic measure of the m_eq parameter. Conversely, the mass dependence of the kinematics can be predicted knowing the relaxation time solely on the basis of photometric measurements. Moreover, any deviation from this tight relation could be used as a probe of a peculiar dynamical history of a cluster. Finally, our novel approach is important for the interpretation of state-of-the-art Hubble Space Telescope proper motion data, for which the mass dependence of the kinematics can now be measured, and for the application of modelling techniques that take into consideration multimass components and mass segregation.
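Because ln σ² is linear in m under the parametrization above, m_eq can be recovered from binned kinematic data with a simple linear fit. A minimal sketch on noiseless synthetic data (the mass grid and parameter values are illustrative assumptions):

```python
import numpy as np

# Recover the equipartition parameter m_eq from the parametrization
# sigma^2(m) = sigma0^2 * exp(-m / m_eq). Values below are synthetic.
m = np.linspace(0.2, 1.4, 25)        # stellar masses (solar masses, assumed)
m_eq_true, sigma0 = 1.5, 10.0        # assumed equipartition parameter, km/s
sigma2 = sigma0**2 * np.exp(-m / m_eq_true)

# ln sigma^2 is linear in m with slope -1/m_eq, so a degree-1 fit suffices.
slope, _ = np.polyfit(m, np.log(sigma2), 1)
m_eq_fit = -1.0 / slope
print(m_eq_fit)
```

With real proper-motion data the same fit would be weighted by the dispersion uncertainties in each mass bin.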
Kinematics of Globular Cluster: new Perspectives of Energy Equipartition from N-body Simulations
NASA Astrophysics Data System (ADS)
Kim, Hyunwoo; Pasquato, Mario; Yoon, Suk-jin
2018-01-01
Globular clusters (GCs) evolve dynamically through gravitational two-body interactions between stars. We investigated the evolution towards energy equipartition in GCs using direct N-body simulations with NBODY6. If a GC reaches full energy equipartition, the velocity dispersion as a function of stellar mass becomes a power law with exponent -1/2. However, our N-body simulations never reach full equipartition, consistent with the results of Trenti & van der Marel (2013). Instead, we found that in simulations with a shallow mass spectrum the best-fitting exponent becomes positive slightly before the core collapse time. This inversion is a new result, which can be used as a kinematic predictor of core collapse. We are currently exploring applications of this inversion indicator to the detection of intermediate-mass black holes.
Kinetic theory of binary particles with unequal mean velocities and non-equipartition energies
NASA Astrophysics Data System (ADS)
Chen, Yanpei; Mei, Yifeng; Wang, Wei
2017-03-01
The hydrodynamic conservation equations and constitutive relations for a binary granular mixture composed of smooth, nearly elastic spheres with non-equipartitioned energies and different mean velocities are derived. This research aims to build a three-dimensional kinetic theory characterizing the behavior of two species of particles subject to different forces. The standard Enskog method is employed, assuming a Maxwell velocity distribution for each species of particles. The collision components of the stress tensor and the other parameters are calculated from the zeroth- and first-order approximations. Our results demonstrate that three factors, namely the differences between the two granular masses, temperatures and mean velocities, all play important roles in the stress-strain relation of the binary mixture, indicating that the assumption of energy equipartition and equal mean velocities may not be acceptable. The collision frequency and the solid viscosity increase monotonically with each granular temperature. The zeroth-order approximation to the energy dissipation varies greatly with the mean velocities of both species of spheres, reaching its peak value at the maximum of their relative velocity.
No energy equipartition in globular clusters
NASA Astrophysics Data System (ADS)
Trenti, Michele; van der Marel, Roeland
2013-11-01
It is widely believed that globular clusters evolve over many two-body relaxation times towards a state of energy equipartition, so that the velocity dispersion scales with stellar mass as σ ∝ m^-η with η = 0.5. We show here that this is incorrect, using a suite of direct N-body simulations with a variety of realistic initial mass functions and initial conditions. No simulated system ever reaches a state close to equipartition. Near the centre, the luminous main-sequence stars reach a maximum η_max ≈ 0.15 ± 0.03. At large times, all radial bins converge to an asymptotic value η_∞ ≈ 0.08 ± 0.02. The development of this `partial equipartition' is strikingly similar across our simulations, despite the range of different initial conditions employed. Compact remnants tend to have higher η than main-sequence stars (but still η < 0.5), due to their steeper (evolved) mass function. The presence of an intermediate-mass black hole (IMBH) decreases η, consistent with our previous findings of a quenching of mass segregation under these conditions. All these results can be understood as a consequence of the Spitzer instability for two-component systems, extended by Vishniac to a continuous mass spectrum. Mass segregation (the tendency of heavier stars to sink towards the core) has often been studied observationally, but energy equipartition has not. With the advent of high-quality proper motion data sets from the Hubble Space Telescope, it is now possible to measure η for real clusters. Detailed data-model comparisons open up a new observational window on globular cluster dynamics and evolution. A first comparison of our simulations to observations of Omega Cen yields good agreement, supporting the view that globular clusters are not generally in energy equipartition. Modelling techniques that assume equipartition by construction (e.g. multi-mass Michie-King models) are approximate at best.
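The exponent η in σ ∝ m^-η is measured as the negative slope of a log-log fit of dispersion against mass. A minimal sketch on synthetic, noiseless data chosen near the paper's partial-equipartition value (all numbers are illustrative assumptions):

```python
import numpy as np

# Measure the equipartition exponent eta in sigma ~ m^-eta via a log-log fit.
# Synthetic values illustrate the 'partial equipartition' regime (eta << 0.5).
m = np.linspace(0.2, 0.9, 20)        # main-sequence stellar masses (assumed)
eta_true = 0.15                      # near the paper's eta_max ~ 0.15
sigma = 8.0 * m**(-eta_true)         # km/s, noiseless synthetic dispersions

slope, _ = np.polyfit(np.log(m), np.log(sigma), 1)
eta_fit = -slope
print(eta_fit)
```

A fitted η well below 0.5, as here, is the signature of partial rather than full equipartition.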
Intrinsic Brightness Temperatures of AGN Jets
NASA Astrophysics Data System (ADS)
Homan, D. C.; Kovalev, Y. Y.; Lister, M. L.; Ros, E.; Kellermann, K. I.; Cohen, M. H.; Vermeulen, R. C.; Zensus, J. A.; Kadler, M.
2006-05-01
We present a new method for studying the intrinsic brightness temperatures of the parsec-scale jet cores of active galactic nuclei (AGNs). Our method uses observed superluminal motions and observed brightness temperatures for a large sample of AGNs to constrain the characteristic intrinsic brightness temperature of the sample as a whole. To study changes in intrinsic brightness temperature, we assume that the Doppler factors of individual jets are constant in time, as justified by their relatively small changes in observed flux density. We find that in their median-low brightness temperature state, the sources in our sample have a narrow range of intrinsic brightness temperatures centered on a characteristic temperature, T_int ≈ 3×10^10 K, which is close to the value expected for equipartition, when the energy in the radiating particles equals the energy stored in the magnetic fields. However, in their maximum brightness state, we find that sources in our sample have a characteristic intrinsic brightness temperature greater than 2×10^11 K, which is well in excess of the equipartition temperature. In this state, we estimate that the energy in radiating particles exceeds the energy in the magnetic field by a factor of ~10^5. We suggest that the excess of particle energy when sources are in their maximum brightness state is due to injection or acceleration of particles at the base of the jet. Our results suggest that the common method of estimating jet Doppler factors by using a single measurement of observed brightness temperature, the assumption of equipartition, or both may lead to large scatter or systematic errors in the derived values.
Core shifts, magnetic fields and magnetization of extragalactic jets
NASA Astrophysics Data System (ADS)
Zdziarski, Andrzej A.; Sikora, Marek; Pjanka, Patryk; Tchekhovskoy, Alexander
2015-07-01
We study the effect of radio-jet core shift, which is a dependence of the position of the jet radio core on the observing frequency. We derive a new method of measuring the jet magnetic field based on both the value of the shift and the observed radio flux, which complements the standard method that assumes equipartition. Using both methods, we re-analyse the blazar sample of Zamaninasab et al. We find that equipartition is satisfied only if the jet opening angle in the radio core region is close to the values found observationally, ≃0.1-0.2 divided by the bulk Lorentz factor, Γj. Larger values, e.g. 1/Γj, would imply magnetic fields much above equipartition. A small jet opening angle in turn implies a magnetization parameter ≪1. We determine the jet magnetic flux taking this effect into account. We find that the transverse-averaged jet magnetic flux is fully compatible with the model of jet formation by black hole (BH) spin-energy extraction with accretion proceeding through a magnetically arrested disc (MAD). We calculate the average jet mass-flow rate corresponding to this model and find that it constitutes a substantial fraction of the mass accretion rate. This suggests a jet composition with a large fraction of baryons. We also calculate the average jet power, and find it moderately exceeds the accretion power, Ṁc², reflecting BH spin-energy extraction. We find that our results for radio galaxies at low Eddington ratios are compatible with MADs but require a low radiative efficiency, as predicted by standard accretion models.
The holographic principle, the equipartition of energy and Newton’s gravity
NASA Astrophysics Data System (ADS)
Sadiq, M.
2017-12-01
Assuming the equipartition of energy to hold on a holographic sphere, Erik Verlinde demonstrated that Newton's gravity follows as an entropic force. Some comments are in order about Verlinde's assumptions in his derivation. It is pointed out that the holographic principle allows freedom up to a free scale factor in the choice of the Planck-scale area while still leading to classical gravity. The similarity of this free parameter to the Immirzi parameter of loop quantum gravity is discussed. We point out that the equipartition of energy is built into the holographic principle and therefore need not be assumed from the outset.
On the Equipartition of Kinetic Energy in an Ideal Gas Mixture
ERIC Educational Resources Information Center
Peliti, L.
2007-01-01
A refinement of an argument due to Maxwell for the equipartition of translational kinetic energy in a mixture of ideal gases with different masses is proposed. The argument is elementary, yet it may work as an illustration of the role of symmetry and independence postulates in kinetic theory. (Contains 1 figure.)
Brightness temperature - obtaining the physical properties of a non-equipartition plasma
NASA Astrophysics Data System (ADS)
Nokhrina, E. E.
2017-06-01
The limit on the intrinsic brightness temperature, attributed to the `Compton catastrophe', has been established to be 10^12 K. A somewhat lower limit, of the order of 10^11.5 K, is implied if we assume that the radiating plasma is in equipartition with the magnetic field - the idea that explained why the observed cores of active galactic nuclei (AGNs) stayed below a limit lower than the `Compton catastrophe'. Recent observations with unprecedentedly high resolution by RadioAstron have revealed a systematic excess over the observed brightness temperature limit. We propose means of estimating the degree of the non-equipartition regime in AGN cores. Coupled with core-shift measurements, the method allows us to independently estimate the magnetic field strength and the particle number density at the core. We show that the ratio of magnetic energy to radiating plasma energy is of the order of 10^-5, which means the flow in the core is dominated by the particle energy. We show that the magnetic field obtained from brightness temperature measurements may be underestimated. For relativistic jets with small viewing angles we propose a non-uniform magnetohydrodynamic model and obtain a magnetic field amplitude about two orders of magnitude higher than that of the uniform model. These magnetic field amplitudes are consistent with the limiting magnetic field suggested by the `magnetically arrested disc' model.
NASA Astrophysics Data System (ADS)
Nutto, C.; Steiner, O.; Schaffenberger, W.; Roth, M.
2012-02-01
Context. Observations of waves at frequencies above the acoustic cut-off frequency have revealed vanishing wave travel-times in the vicinity of strong magnetic fields. This detection of apparently evanescent waves, instead of the expected propagating waves, has remained a riddle. Aims: We investigate the influence of a strong magnetic field on the propagation of magneto-acoustic waves in the atmosphere of the solar network. We test whether mode conversion effects can account for the shortening in wave travel-times between different heights in the solar atmosphere. Methods: We carry out numerical simulations of the complex magneto-atmosphere representing the solar magnetic network. In the simulation domain, we artificially excite high frequency waves whose wave travel-times between different height levels we then analyze. Results: The simulations demonstrate that the wave travel-time in the solar magneto-atmosphere is strongly influenced by mode conversion. In a layer enclosing the surface sheet defined by the set of points where the Alfvén speed and the sound speed are equal, called the equipartition level, energy is partially transferred from the fast acoustic mode to the fast magnetic mode. Above the equipartition level, the fast magnetic mode is refracted due to the large gradient of the Alfvén speed. The refractive wave path and the increasing phase speed of the fast mode inside the magnetic canopy significantly reduce the wave travel-time, provided that both observing levels are above the equipartition level. Conclusions: Mode conversion and the resulting excitation and propagation of fast magneto-acoustic waves is responsible for the observation of vanishing wave travel-times in the vicinity of strong magnetic fields. In particular, the wave propagation behavior of the fast mode above the equipartition level may mimic evanescent behavior. 
The present wave propagation experiments provide an explanation of vanishing wave travel-times as observed with multi-line high-cadence instruments. Movies are available in electronic form at http://www.aanda.org
Intermittent Fermi-Pasta-Ulam Dynamics at Equilibrium
NASA Astrophysics Data System (ADS)
Campbell, David; Danieli, Carlo; Flach, Sergej
The equilibrium value of an observable defines a manifold in the phase space of an ergodic and equipartitioned many-body system. A typical trajectory pierces that manifold infinitely often as time goes to infinity. We use these piercings to measure both the relaxation time of the lowest-frequency eigenmode of the Fermi-Pasta-Ulam chain and the fluctuations of the subsequent dynamics in equilibrium. We show that previously obtained scaling laws for equipartition times are modified at low energy density due to an unexpected slowing down of the relaxation. The dynamics in equilibrium is characterized by a power-law distribution of excursion times far off equilibrium, with diverging variance. The long excursions arise from sticky dynamics close to regular orbits in phase space. Our method is generalizable to large classes of many-body systems. The authors acknowledge financial support from IBS (Project Code IBS-R024-D1).
NASA Astrophysics Data System (ADS)
Webb, Jeremy J.; Vesperini, Enrico
2017-01-01
We make use of N-body simulations to determine the relationship between two observable parameters that are used to quantify mass segregation and energy equipartition in star clusters. Mass segregation can be quantified by measuring how the slope of a cluster's stellar mass function α changes with clustercentric distance r, and then calculating δ_α = dα(r)/d ln(r/r_m), where r_m is the cluster's half-mass radius. The degree of energy equipartition in a cluster is quantified by η, which is a measure of how the stellar velocity dispersion σ depends on stellar mass m via σ(m) ∝ m^-η. Through a suite of N-body star cluster simulations with a range of initial sizes, binary fractions, orbits, black hole retention fractions, and initial mass functions, we present the co-evolution of δ_α and η. We find that measurements of the global η are strongly affected by the radial dependence of σ and the mean stellar mass, and that the relationship between η and δ_α depends mainly on the cluster's initial conditions and the tidal field. Within r_m, where these effects are minimized, we find that η and δ_α initially share a linear relationship. However, once the degree of mass segregation increases such that the radial dependence of σ and the mean stellar mass become a factor within r_m, or the cluster undergoes core collapse, the relationship breaks down. We propose a method for determining η within r_m from an observational measurement of δ_α. In cases where η and δ_α can be measured independently, this new method offers a way of measuring the cluster's dynamical state.
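The mass-segregation measure δ_α = dα(r)/d ln(r/r_m) is simply the slope of the local mass-function index α against ln(r/r_m). A minimal sketch on invented bin values (the radii, r_m and α values are illustrative assumptions, not simulation output):

```python
import numpy as np

# Compute delta_alpha = d alpha / d ln(r / r_m) by fitting the local
# mass-function slope alpha against ln(r / r_m) over radial bins.
r_m = 3.0                                         # half-mass radius (pc), assumed
r = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # bin radii (pc), assumed
alpha = np.array([-0.8, -1.0, -1.2, -1.4, -1.6])  # local MF slope per bin, assumed

delta_alpha, _ = np.polyfit(np.log(r / r_m), alpha, 1)
print(delta_alpha)
```

A negative δ_α, as here, indicates a mass function that is flatter near the centre than in the outskirts, i.e. mass segregation.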
NASA Astrophysics Data System (ADS)
Bulyzhenkov, I. E.
2018-02-01
Translational ordering of the internal kinematic chaos provides the Special Relativity referents for the geodesic motion of warm thermodynamic bodies. With identical mathematics, the relativistic physics of the low-speed transport of time-varying heat-energies differs from Newton's physics of steady masses without internal degrees of freedom. General Relativity predicts geodesic changes of the internal heat-energy variable under free gravitational fall and the geodesic turn in the radial field center. Internal heat variations enable cyclic dynamics of decelerated falls and accelerated takeoffs of inertial matter and its structural self-organization. The coordinate speed of the ordered spatial motion reaches its maximum under the equipartition of relativistic internal and translational kinetic energies. Observable predictions are discussed for the verification/falsification of the principle of equipartition as a new basis for ordered motion and self-organization in external fields, including gravitational, electromagnetic, and thermal ones.
Brown-York quasilocal energy in Lanczos-Lovelock gravity and black hole horizons
NASA Astrophysics Data System (ADS)
Chakraborty, Sumanta; Dadhich, Naresh
2015-12-01
A standard candidate for quasilocal energy in general relativity is the Brown-York energy, which is essentially a two-dimensional surface integral of the extrinsic curvature on the two-boundary of a spacelike hypersurface referenced to flat spacetime. Several years ago, one of us conjectured that the black hole horizon is defined by equipartition of gravitational and non-gravitational energy. By employing the above definition of the quasilocal Brown-York energy, we have verified the equipartition conjecture for static charged and charged axisymmetric black holes in general relativity. We have further generalized the Brown-York formalism to all orders in Lanczos-Lovelock theories of gravity and have verified the conjecture for pure Lovelock charged black holes in all even d = 2m + 2 dimensions, where m is the degree of the Lovelock action. It turns out that the equipartition conjecture works only for pure Lovelock, and not for Einstein-Lovelock, black holes.
Sarshar, Mohammad; Wong, Winson T.; Anvari, Bahman
2014-01-01
Optical tweezers have become an important instrument in force measurements associated with various physical, biological, and biophysical phenomena. Quantitative use of optical tweezers relies on accurate calibration of the stiffness of the optical trap. Using the same optical tweezers platform operating at 1064 nm and beads of two different diameters, we present a comparative study of the viscous drag force, equipartition theorem, Boltzmann statistics, and power spectral density (PSD) methods for calibrating the stiffness of a single-beam gradient-force optical trap at trapping laser powers in the range of 0.05 to 1.38 W at the focal plane. The equipartition theorem and Boltzmann statistics methods demonstrate a linear stiffness with trapping laser powers up to 355 mW when used in conjunction with video position sensing. The PSD of a trapped particle's Brownian motion or measurements of the particle displacement against known viscous drag forces can be reliably used for stiffness calibration of an optical trap over a greater range of trapping laser powers. The viscous drag calibration method produces results relevant to applications where the trapped particle undergoes large displacements and, at a given position-sensing resolution, can be used for stiffness calibration at higher trapping laser powers than the PSD method. PMID:25375348
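The equipartition-theorem calibration mentioned above uses ⟨x²⟩ = k_B T / k for a harmonically trapped bead, so the stiffness is k = k_B T / var(x). A minimal sketch on synthetic Gaussian positions (the stiffness, temperature and sample size are assumed values; a real calibration would use measured bead positions):

```python
import numpy as np

# Equipartition-theorem calibration of optical-trap stiffness:
# <x^2> = k_B T / k, hence k = k_B T / var(x). All values are synthetic.
K_B = 1.380649e-23            # Boltzmann constant, J/K
T = 295.0                     # bath temperature, K (assumed)
k_true = 2.0e-6               # assumed trap stiffness, N/m

rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(K_B * T / k_true), size=200_000)  # positions (m)
k_est = K_B * T / np.var(x)   # equipartition estimate of the stiffness
print(k_est)
```

Because the estimate depends only on the position variance, it needs no knowledge of the drag coefficient, but it is sensitive to detector noise, which inflates var(x) and biases k low.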
Tail resonances of Fermi-Pasta-Ulam q-breathers and their impact on the pathway to equipartition
NASA Astrophysics Data System (ADS)
Penati, Tiziano; Flach, Sergej
2007-06-01
Upon initial excitation of a few normal modes the energy distribution among all modes of a nonlinear atomic chain (the Fermi-Pasta-Ulam model) exhibits exponential localization on large time scales. At the same time, resonant anomalies (peaks) are observed in its weakly excited tail for long times preceding equipartition. We observe a similar resonant tail structure also for exact time-periodic Lyapunov orbits, coined q-breathers due to their exponential localization in modal space. We give a simple explanation for this structure in terms of superharmonic resonances. The resonance analysis agrees very well with numerical results and has predictive power. We extend a previously developed perturbation method, based essentially on a Poincaré-Lindstedt scheme, in order to account for these resonances, and in order to treat more general model cases, including truncated Toda potentials. Our results give a qualitative and semiquantitative account for the superharmonic resonances of q-breathers and natural packets.
Quenching and anisotropy of hydromagnetic turbulent transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karak, Bidya Binay; Brandenburg, Axel; Rheinhardt, Matthias
2014-11-01
Hydromagnetic turbulence affects the evolution of large-scale magnetic fields through mean-field effects like turbulent diffusion and the α effect. For stronger fields, these effects are usually suppressed or quenched, and additional anisotropies are introduced. Using different variants of the test-field method, we determine the quenching of the turbulent transport coefficients for the forced Roberts flow, isotropically forced non-helical turbulence, and rotating thermal convection. We see significant quenching only when the mean magnetic field is larger than the equipartition value of the turbulence. Expressing the magnetic field in terms of the equipartition value of the quenched flows, we obtain for the quenching exponents of the turbulent magnetic diffusivity about 1.3, 1.1, and 1.3 for Roberts flow, forced turbulence, and convection, respectively. However, when the magnetic field is expressed in terms of the equipartition value of the unquenched flows, these quenching exponents become about 4, 1.5, and 2.3, respectively. For the α effect, the exponent is about 1.3 for the Roberts flow and 2 for convection in the first case, but 4 and 3, respectively, in the second. In convection, the quenching of turbulent pumping follows the same power law as turbulent diffusion, while for the coefficient describing the Ω×J effect nearly the same quenching exponent is obtained as for α. For forced turbulence, turbulent diffusion proportional to the second derivative along the mean magnetic field is quenched much less, especially for larger values of the magnetic Reynolds number. However, we find that in corresponding axisymmetric mean-field dynamos with dominant toroidal field the quenched diffusion coefficients are the same for the poloidal and toroidal field constituents.
The relation between the mass-to-light ratio and the relaxation state of globular clusters
NASA Astrophysics Data System (ADS)
Bianchini, P.; Sills, A.; van de Ven, G.; Sippel, A. C.
2017-08-01
The internal dynamics of globular clusters (GCs) is strongly affected by two-body interactions that bring the systems to a state of partial energy equipartition. Using a set of Monte Carlo cluster simulations, we investigate the role of the onset of energy equipartition in shaping the mass-to-light ratio (M/L) in GCs. Our simulations show that the M/L profiles cannot be considered constant and that their specific shape strongly depends on the dynamical age of the clusters. Dynamically younger clusters display a central peak up to M/L ≃ 25 M⊙/L⊙ caused by the retention of dark remnants; this peak flattens out for dynamically older clusters. Moreover, we find that the global values of M/L also correlate with the dynamical state of a cluster, quantified by either the number of relaxation times a system has experienced, nrel, or the equipartition parameter meq: clusters closer to full equipartition (higher nrel or lower meq) display a lower M/L. We show that the decrease of M/L is primarily driven by the dynamical ejection of dark remnants, rather than by the escape of low-mass stars. The predictions of our models are in good agreement with observations of GCs in the Milky Way and M31, indicating that differences in relaxation state alone can explain variations of M/L up to a factor of ≃3. Our characterization of M/L as a function of relaxation state is of primary relevance for the application and interpretation of dynamical models.
NASA Astrophysics Data System (ADS)
Komatsu, Nobuyoshi
2017-11-01
A power-law corrected entropy based on quantum entanglement is considered to be a viable black-hole entropy. In this study, as an alternative to the Bekenstein-Hawking entropy, a power-law corrected entropy is applied to Padmanabhan's holographic equipartition law to thermodynamically examine an extra driving term in the cosmological equations for a flat Friedmann-Robertson-Walker universe at late times. Deviations from the Bekenstein-Hawking entropy generate an extra driving term (proportional to the αth power of the Hubble parameter, where α is a dimensionless constant for the power-law correction) in the acceleration equation, which can be derived from the holographic equipartition law. Interestingly, the value of the extra driving term in the present model is constrained by the second law of thermodynamics. From the thermodynamic constraint, the order of the driving term is found to be consistent with the order of the cosmological constant measured by observations. In addition, the driving term tends to be constant-like when α is small, i.e., when the deviation from the Bekenstein-Hawking entropy is small.
Accretion in Radiative Equipartition (AiRE) Disks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yazdi, Yasaman K.; Afshordi, Niayesh, E-mail: yyazdi@pitp.ca, E-mail: nafshordi@pitp.ca
2017-07-01
Standard accretion disk theory predicts that the total pressure in disks at typical (sub-)Eddington accretion rates becomes radiation pressure dominated. However, radiation pressure dominated disks are thermally unstable. Since these disks are observed in approximate steady state over the instability timescale, our accretion models in the radiation-pressure-dominated regime (i.e., inner disk) need to be modified. Here, we present a modification to the Shakura and Sunyaev model, where the radiation pressure is in equipartition with the gas pressure in the inner region. We call these flows accretion in radiative equipartition (AiRE) disks. We introduce the basic features of AiRE disks and show how they modify disk properties such as the Toomre parameter and the central temperature. We then show that the accretion rate of AiRE disks is limited from above and below, by Toomre and nodal sonic point instabilities, respectively. The former leads to a strict upper limit on the mass of supermassive black holes as a function of cosmic time (and spin), while the latter could explain the transition between hard and soft states of X-ray binaries.
Thermodynamic method for generating random stress distributions on an earthquake fault
Barall, Michael; Harris, Ruth A.
2012-01-01
This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
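The core construction described above — drawing independent Fourier modes from Gaussians whose variances follow a prescribed power spectral density, rather than the flat equipartition spectrum that would diverge — can be sketched in one dimension (the k⁻² spectrum below is an illustrative placeholder, not the paper's derived formula):

```python
import numpy as np

# Hedged sketch: treat independent Fourier modes as thermodynamic degrees of
# freedom and give each a Gaussian amplitude whose variance follows a chosen
# power spectral density. A flat (equipartition) spectrum would predict
# infinite stress; the decaying spectrum keeps the field finite.
def random_stress_1d(n=1024, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)              # mode wavenumbers (cycles/sample)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** -1.0             # amplitude ~ sqrt(PSD), PSD ~ k^-2
    coeffs = amp * (rng.normal(size=k.size) + 1j * rng.normal(size=k.size))
    coeffs[0] = 0.0                     # zero-mean stress perturbation
    return np.fft.irfft(coeffs, n=n)

stress = random_stress_1d()
```

The actual method works on a two-dimensional fault surface and uses a spectrum derived from earthquake scaling relations; this scalar version only illustrates the mode-by-mode sampling idea.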
NASA Astrophysics Data System (ADS)
Dermer, Charles D.; Yan, Dahai; Zhang, Li; Finke, Justin D.; Lott, Benoit
2015-08-01
Fermi-LAT analyses show that the γ-ray photon spectral indices Γ_γ of a large sample of blazars correlate with the νF_ν peak synchrotron frequency ν_s according to the relation Γ_γ = d − k log ν_s. The same function, with different constants d and k, also describes the relationship between Γ_γ and the peak Compton frequency ν_C. This behavior is derived analytically using an equipartition blazar model with a log-parabola description of the electron energy distribution (EED). In the Thomson regime, k = k_EC = 3b/4 for external Compton (EC) processes and k = k_SSC = 9b/16 for synchrotron self-Compton (SSC) processes, where b is the log-parabola width parameter of the EED. The BL Lac object Mrk 501 is fit with a synchrotron/SSC model given by the log-parabola EED, and is best fit away from equipartition. Corrections are made to the spectral-index diagrams for a low-energy power-law EED and departures from equipartition, as constrained by absolute jet power. Analytic expressions are compared with numerical values derived from self-Compton and EC scattered γ-ray spectra from Lyα broad-line region and IR target photons. The Γ_γ versus ν_s behavior in the model depends strongly on b, with progressively and predictably weaker dependences on γ-ray detection range, variability time, and isotropic γ-ray luminosity. Implications for blazar unification and blazars as ultra-high-energy cosmic-ray sources are discussed. Arguments by Ghisellini et al. that the jet power exceeds the accretion luminosity depend on the doubtful assumption that we are viewing at the Doppler angle.
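The quoted relation Γ_γ = d − k log ν_s, with k = 3b/4 (EC) or k = 9b/16 (SSC) in the Thomson regime, is simple enough to evaluate directly; the values of d and b used below are arbitrary placeholders for illustration, not fitted blazar parameters:

```python
import math

# Evaluate the analytic photon-index relation from the abstract:
#   Gamma_gamma = d - k * log10(nu_s), with k = (3/4) b for external Compton
#   and k = (9/16) b for synchrotron self-Compton (Thomson regime).
def photon_index(nu_s_hz, d, b, process="EC"):
    k = 0.75 * b if process == "EC" else (9.0 / 16.0) * b
    return d - k * math.log10(nu_s_hz)

# Placeholder numbers (not from the paper): d = 10, b = 1, nu_s = 1e13 Hz.
gamma_ec = photon_index(1e13, d=10.0, b=1.0, process="EC")
gamma_ssc = photon_index(1e13, d=10.0, b=1.0, process="SSC")
```

Because k_SSC < k_EC for the same b, the SSC branch predicts a flatter dependence of photon index on peak frequency, which is the behavior the paper exploits to separate the two processes.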
NASA Astrophysics Data System (ADS)
Peeters, A. G.; Angioni, C.; Strintzi, D.
2009-03-01
The comment addresses questions raised on the derivation of the momentum pinch velocity due to the Coriolis drift effect [A. G. Peeters et al., Phys. Rev. Lett. 98, 265003 (2007)]. These concern the definition of the gradient, and the scaling with the density gradient length. It will be shown that the turbulent equipartition mechanism is included within the derivation using the Coriolis drift, with the density gradient scaling being the consequence of drift terms not considered in [T. S. Hahm et al., Phys. Plasmas 15, 055902 (2008)]. Finally, the accuracy of the analytic models is assessed through a comparison with the full gyrokinetic solution.
Sgr A* Emission Parametrizations from GRMHD Simulations
NASA Astrophysics Data System (ADS)
Anantua, Richard; Ressler, Sean; Quataert, Eliot
2018-06-01
Galactic Center emission near the vicinity of the central black hole, Sagittarius (Sgr) A*, is modeled using parametrizations involving the electron temperature, which is found from general relativistic magnetohydrodynamic (GRMHD) simulations to be highest in the disk-outflow corona. Jet-motivated prescriptions generalizing equipartition of particle and magnetic energies, e.g., by scaling relativistic electron energy density to powers of the magnetic field strength, are also introduced. GRMHD jet (or outflow)/accretion disk/black hole (JAB) simulation postprocessing codes IBOTHROS and GRMONTY are employed in the calculation of images and spectra. Various parametric models reproduce spectral and morphological features, such as the sub-mm spectral bump in electron temperature models and asymmetric photon rings in equipartition-based models. The Event Horizon Telescope (EHT) will provide unprecedentedly high-resolution 230+ GHz observations of the "shadow" around Sgr A*'s supermassive black hole, which the synthetic models presented here will reverse-engineer. Both electron temperature and equipartition-based models can be constructed to be compatible with EHT size constraints for the emitting region of Sgr A*. This program sets the groundwork for devising a unified emission parametrization flexible enough to model disk, corona and outflow/jet regions with a small set of parameters including electron heating fraction and plasma beta.
Symmetry blockade and its breakdown in energy equipartition of square graphene resonators
NASA Astrophysics Data System (ADS)
Wang, Yisen; Zhu, Zhigang; Zhang, Yong; Huang, Liang
2018-03-01
The interaction between flexural modes due to nonlinear potentials is critical to the heat conductivity and mechanical vibration of two-dimensional materials such as graphene. Much effort has been devoted to understanding the underlying mechanism. In this paper, we examine solely the out-of-plane flexural modes and identify their energy flow pathway during the equipartition process. In particular, the modes are grouped into four classes by their distinct symmetries. The couplings are significantly larger within a class than between classes, forming symmetry blockades. As a result, the energy first flows to the modes in the same symmetry class. Breakdown of the symmetry blockade, i.e., inter-class energy flow, starts when the displacement profile becomes complex and the inter-class couplings take on non-negligible values. The equipartition time follows the stretched exponential law and survives in the thermodynamic limit. These results bring a fundamental understanding of the Fermi-Pasta-Ulam problem in two-dimensional systems with complex potentials and clearly reveal the physical picture of the dynamical interactions between the flexural modes, which is crucial to understanding their contribution to high thermal conductivity and the mechanism of energy dissipation that may intrinsically limit the quality factor of the resonator.
NASA Technical Reports Server (NTRS)
Zweibel, Ellen G.; Mckee, Christopher F.
1995-01-01
Molecular clouds are observed to be partially supported by turbulent pressure. The kinetic energy of the turbulence is directly measurable, but the potential energy, which consists of magnetic, thermal, and gravitational potential energy, is largely unseen. We have extended previous results on equipartition between kinetic and potential energy to show that it is likely to be a very good approximation in molecular clouds. We have used two separate approaches to demonstrate this result: for small-amplitude perturbations of a static equilibrium, we have used the energy principle analysis of Bernstein et al. (1958); this derivation applies to perturbations of arbitrary wavelength. To treat perturbations of a nonstatic equilibrium, we have used the Lagrangian analysis of Dewar (1970); this analysis applies only to short-wavelength perturbations. Both analyses assume conservation of energy. Wave damping has only a small effect on equipartition if the wave frequency is small compared to the neutral-ion collision frequency; for the particular case we considered, radiative losses have no effect on equipartition. These results are then incorporated in a simple way into analyses of cloud equilibrium and global stability. We discuss the effect of Alfvenic turbulence on the Jeans mass and show that it has little effect on the magnetic critical mass.
NASA Astrophysics Data System (ADS)
Eftekhari, T.; Berger, E.; Zauderer, B. A.; Margutti, R.; Alexander, K. D.
2018-02-01
We present continued radio and X-ray observations of the relativistic tidal disruption event Swift J164449.3+573451 extending to δt ≈ 2000 days after discovery. The radio data were obtained with the Very Large Array (VLA) as part of a long-term program to monitor the energy and dynamical evolution of the jet and to characterize the parsec-scale environment around a previously dormant supermassive black hole. We combine these data with Chandra observations and demonstrate that the X-ray emission following the sharp decline at δt ≈ 500 days is likely due to the forward shock. We constrain the synchrotron cooling frequency and the microphysical properties of the outflow for the first time. We find that the cooling frequency evolves through the optical/NIR band at δt ≈ 10–200 days, corresponding to ε_B ≈ 10^−3, well below equipartition; the X-ray data demonstrate that this deviation from equipartition holds to at least δt ≈ 2000 days. We thus recalculate the physical properties of the jet over the lifetime of the event, no longer assuming equipartition. We find a total kinetic energy of E_K ≈ 4 × 10^51 erg and a transition to non-relativistic expansion on the timescale of our latest observations (700 days). The density profile is approximately R^−3/2 at ≲0.3 pc and ≳0.7 pc, with a plateau at intermediate scales, characteristic of Bondi accretion. Based on its evolution thus far, we predict that Sw 1644+57 will be detectable at centimeter wavelengths for decades to centuries with existing and upcoming radio facilities. Similar off-axis events should be detectable to z ∼ 2, but with a slow evolution that may inhibit their recognition as transient events.
GneimoSim: A Modular Internal Coordinates Molecular Dynamics Simulation Package
Larsen, Adrien B.; Wagner, Jeffrey R.; Kandel, Saugat; Salomon-Ferrer, Romelia; Vaidehi, Nagarajan; Jain, Abhinandan
2014-01-01
The Generalized Newton Euler Inverse Mass Operator (GNEIMO) method is an advanced method for internal coordinates molecular dynamics (ICMD). GNEIMO includes several theoretical and algorithmic advancements that address longstanding challenges with ICMD simulations. In this paper we describe the GneimoSim ICMD software package that implements the GNEIMO method. We believe that GneimoSim is the first software package to include advanced features such as the equipartition principle derived for internal coordinates, and a method for including the Fixman potential to eliminate systematic statistical biases introduced by the use of hard constraints. Moreover, by design, GneimoSim is extensible and can be easily interfaced with third-party force field packages for ICMD simulations. Currently, GneimoSim includes interfaces to the LAMMPS, OpenMM, and Rosetta force field calculation packages. The availability of a comprehensive Python interface to the underlying C++ classes and their methods provides a powerful and versatile mechanism for users to develop simulation scripts to configure the simulation and control the simulation flow. GneimoSim has been used extensively for studying the dynamics of protein structures, refinement of protein homology models, and for simulating large-scale protein conformational changes with enhanced sampling methods. GneimoSim is not limited to proteins and can also be used for the simulation of polymeric materials. PMID:25263538
NASA Astrophysics Data System (ADS)
Qu, Rui; Liu, Shu-Shen; Zheng, Qiao-Feng; Li, Tong
2017-03-01
Concentration addition (CA) was proposed as a reasonable default approach for the ecological risk assessment of chemical mixtures. However, CA cannot predict the toxicity of a mixture in some effect zones if not all components have definite effective concentrations at the given effect, for example when some components induce hormesis. In this paper, we developed a new method for the toxicity prediction of various types of binary mixtures: an interpolation method based on Delaunay triangulation (DT) and Voronoi tessellation (VT), together with a training set of direct equipartition ray design (EquRay) mixtures, referred to simply as IDVequ. First, the EquRay was employed to design the basic concentration compositions of five binary mixture rays. The toxic effects of single components and mixture rays at different times and various concentrations were determined by time-dependent microplate toxicity analysis. Second, the concentration-toxicity data of the pure components and the various mixture rays served as a training set. The DT triangles and VT polygons were constructed from the concentration vertices of the training set. The toxicities of unknown mixtures were predicted by linear interpolation and natural-neighbor interpolation of the vertices. The IDVequ successfully predicted the toxicities of various types of binary mixtures.
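The interpolation step can be sketched with SciPy, whose LinearNDInterpolator builds the Delaunay triangulation internally; the toy additive response surface and the nearest-neighbor fallback below are illustrative assumptions, not the IDVequ implementation:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

# Sketch of Delaunay-based toxicity interpolation. Training data map a
# binary mixture composition (conc_A, conc_B) to an observed effect; the
# additive response used here is invented purely for demonstration.
rng = np.random.default_rng(1)
train_xy = rng.uniform(0.0, 1.0, size=(200, 2))       # mixture compositions
train_effect = train_xy[:, 0] + 2.0 * train_xy[:, 1]  # toy additive response

linear = LinearNDInterpolator(train_xy, train_effect)    # Delaunay + linear
nearest = NearestNDInterpolator(train_xy, train_effect)  # fallback off-hull

def predict(points):
    vals = linear(points)
    mask = np.isnan(vals)          # LinearNDInterpolator returns NaN
    if mask.any():                 # outside the convex hull of the data
        vals[mask] = nearest(points[mask])
    return vals

query = np.array([[0.5, 0.5], [0.2, 0.7]])
pred = predict(query)
```

Because linear interpolation over a triangulation reproduces any affine surface exactly, the toy predictions match the additive response inside the convex hull of the training points.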
Improved surface-wave retrieval from ambient seismic noise by multi-dimensional deconvolution
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; Ruigrok, Elmer; van der Neut, Joost; Draganov, Deyan
2011-01-01
The methodology of surface-wave retrieval from ambient seismic noise by crosscorrelation relies on the assumption that the noise field is equipartitioned. Deviations from equipartitioning degrade the accuracy of the retrieved surface-wave Green's function. A point-spread function, derived from the same ambient noise field, quantifies the smearing in space and time of the virtual source of the Green's function. By multidimensionally deconvolving the retrieved Green's function by the point-spread function, the virtual source becomes better focused in space and time and hence the accuracy of the retrieved surface-wave Green's function may improve significantly. We illustrate this with a numerical example and discuss the advantages and limitations of this new methodology.
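The principle — removing the point-spread function from the retrieved Green's function by regularized spectral division — can be illustrated in one dimension (the real method is multidimensional; this scalar sketch with an invented impulse and Gaussian blur only shows the idea):

```python
import numpy as np

# Regularized spectral-division deconvolution: the retrieved response is
# the true Green's function blurred by a point-spread function (PSF);
# dividing the spectra (with water-level damping eps) sharpens the
# virtual source back toward an impulse.
def deconvolve(retrieved, psf, eps=1e-3):
    R = np.fft.rfft(retrieved)
    P = np.fft.rfft(psf)
    G = R * np.conj(P) / (np.abs(P) ** 2 + eps * np.max(np.abs(P)) ** 2)
    return np.fft.irfft(G, n=retrieved.size)

n = 128
i = np.arange(n)
true_green = np.zeros(n)
true_green[30] = 1.0                                     # ideal impulse
psf = np.exp(-0.5 * (np.minimum(i, n - i) / 2.0) ** 2)   # blur, centered at 0
retrieved = np.fft.irfft(np.fft.rfft(true_green) * np.fft.rfft(psf), n=n)
sharpened = deconvolve(retrieved, psf)
```

The damping term eps prevents division by near-zero spectral values, the same stabilization issue the multidimensional method must handle.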
NASA Astrophysics Data System (ADS)
Trenti, Michele
2010-09-01
Intermediate Mass Black Holes (IMBHs) are objects of considerable astrophysical significance. They have been invoked as possible remnants of Population III stars, precursors of supermassive black holes, sources of ultra-luminous X-ray emission, and emitters of gravitational waves. The centers of globular clusters, where they may have formed through runaway collapse of massive stars, may be our best chance of detecting them. HST studies of velocity dispersions have provided tentative evidence, but the measurements are difficult and the results have been disputed. It is thus important to explore and develop additional indicators of the presence of an IMBH in these systems. In a Cycle 16 theory project we focused on the fingerprints of an IMBH derived from HST photometry. We showed that an IMBH leads to a detectable quenching of mass segregation. Analysis of HST-ACS data for NGC 2298 validated the method, and ruled out an IMBH of more than 300 solar masses. We propose here to extend the search for IMBH signatures from photometry to kinematics. The velocity dispersion of stars in collisionally relaxed stellar systems such as globular clusters scales with main-sequence mass as σ ∝ m^α. A value α = −0.5 corresponds to equipartition. Mass-dependent kinematics can now be measured from HST proper motion studies (e.g., α = −0.21 for Omega Cen). Preliminary analysis shows that the value of α can be used as an indicator of the presence of an IMBH. In fact, the quenching of mass segregation is a result of the degree of equipartition that the system attains. However, detailed numerical simulations are required to quantify this. Therefore we propose (a) to carry out a new, larger set of realistic N-body simulations of star clusters with IMBHs, primordial binaries and stellar evolution to predict in detail the expected kinematic signatures, and (b) to compare these predictions to datasets that are becoming available.
Considerable HST resources have been invested in proper motions studies of some dozen clusters, but theoretical simulations are generally not performed as part of such programs. Our methods are complementary to other efforts to detect IMBHs in globulars, and will allow new constraints to be derived from HST data that are already being obtained.
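Measuring the equipartition exponent α in σ ∝ m^α amounts to a straight-line fit in log-log space; a toy sketch with noiseless synthetic data assuming the α = −0.21 value quoted above (the mass bins and normalization are invented):

```python
import numpy as np

# Fit sigma = c * m^alpha by linear regression in log-log space:
#   log(sigma) = alpha * log(m) + log(c).
# alpha = -0.5 would indicate full energy equipartition.
mass = np.array([0.3, 0.5, 0.7, 0.9, 1.1])   # Msun, illustrative bins
sigma = 10.0 * mass ** -0.21                 # km/s, noiseless toy data
alpha, log_c = np.polyfit(np.log(mass), np.log(sigma), 1)
```

Real proper-motion data carry per-bin uncertainties, so a weighted fit and error propagation on α would be needed before comparing against N-body predictions.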
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raskin, Cody; Owen, J. Michael
Creating spherical initial conditions in smoothed particle hydrodynamics simulations that are spherically conformal is a difficult task. Here, we describe two algorithmic methods for evenly distributing points on surfaces that, when paired, can be used to build three-dimensional spherical objects with optimal equipartition of volume between particles, commensurate with an arbitrary radial density function. We demonstrate the efficacy of our method against stretched lattice arrangements on the metrics of hydrodynamic stability, spherical conformity, and the harmonic power distribution of gravitational settling oscillations. We further demonstrate how our method is highly optimized for simulating multi-material spheres, such as planets with core–mantle boundaries.
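One common recipe for spreading points quasi-uniformly on a sphere is the Fibonacci spiral; it is used here as a simple stand-in for the surface-distribution step described above, not as the authors' own algorithm:

```python
import numpy as np

# Fibonacci-spiral point distribution on the unit sphere: latitudes are
# spaced uniformly in cos(theta) and longitudes advance by the golden
# angle, giving near-equal area per point.
def fibonacci_sphere(n):
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i      # golden-angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n               # uniform in cos(theta)
    r = np.sqrt(1.0 - z * z)
    return np.column_stack((r * np.cos(phi), r * np.sin(phi), z))

pts = fibonacci_sphere(500)
```

Nesting such shells at radii chosen from the cumulative mass of a radial density profile is one way to approximate the volume-equipartition goal the abstract describes.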
Beam dynamics pre-study for the RFQ of SPPC p-Linac
NASA Astrophysics Data System (ADS)
Liu, Jing; Lu, Yuanrong; Li, Haipeng; Su, Jiancang; Liu, Xiaolong
2018-02-01
A proton-proton collider with a center-of-mass energy of more than 70 TeV is the second stage of the CEPC-SPPC program. As proposed, the SPPC injector chain will use a 1.2 GeV p-Linac and three synchrotrons: a 10 GeV p-RCS, a 180 GeV MSS and a 2.1 TeV SS. Peking University is responsible for the preliminary conceptual design of the room-temperature part of the SPPC p-Linac. This paper focuses on the beam dynamics studies performed for the 325 MHz RFQ. As the first accelerating structure after the ion source and the front end of the whole SPPC, the RFQ plays an important role in the initial transverse focusing and longitudinal bunching of the beam. Based on the New Four Section Procedure strategy, as well as the matched and equipartitioning design method, a 3 MeV RFQ designed with the Parmteq code is introduced. The cavity length of the RFQ is 3.6 m and the transmission efficiency is 98%. In this design scheme, the 40 mA proton beam from the 50 keV ion source is accelerated to 3 MeV within a 3.8 m length, a sixty-fold energy gain. The results of the analyses show that the RFQ design is reliable and meets all the SPPC p-Linac requirements.
Supernova neutrinos and antineutrinos: ternary luminosity diagram and spectral split patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fogli, Gianluigi; Marrone, Antonio; Tamborra, Irene
2009-10-01
In core-collapse supernovae, the ν_e and ν̄_e species may experience collective flavor swaps to non-electron species ν_x, within energy intervals limited by relatively sharp boundaries ("splits"). These phenomena appear to depend sensitively upon the initial energy spectra and luminosities. We investigate the effect of generic variations of the fractional luminosities (l_e, l_ē, l_x) with respect to the usual "energy equipartition" case (1/6, 1/6, 1/6), within an early-time supernova scenario with fixed thermal spectra and total luminosity. We represent the constraint l_e + l_ē + 4l_x = 1 in a ternary diagram, which is explored via numerical experiments (in single-angle approximation) over an evenly spaced grid of points. In inverted hierarchy, single splits arise in most cases, but an abrupt transition to double splits is observed for a few points surrounding the equipartition one. In normal hierarchy, collective effects turn out to be unobservable at all grid points but one, where single splits occur. Admissible deviations from equipartition may thus induce dramatic changes in the shape of supernova (anti)neutrino spectra. The observed patterns are interpreted in terms of initial flavor polarization vectors (defining boundaries for the single/double split transitions), lepton number conservation, and minimization of potential energy.
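The fractional-luminosity constraint above can be sampled on an evenly spaced grid, as the numerical experiments described in the abstract require. Below is a minimal sketch (not the authors' code; the grid step and helper name are assumptions) of generating points satisfying l_e + l_ē + 4l_x = 1:

```python
import numpy as np

def ternary_grid(step=0.05):
    """Evenly spaced grid of fractional luminosities (l_e, l_ebar, l_x)
    obeying l_e + l_ebar + 4*l_x = 1, where the four non-electron
    species share l_x each. Values are non-negative up to
    floating-point rounding; the step size is an assumed choice."""
    pts = []
    for le in np.arange(0.0, 1.0 + 1e-9, step):
        for lebar in np.arange(0.0, 1.0 - le + 1e-9, step):
            lx = (1.0 - le - lebar) / 4.0
            pts.append((le, lebar, lx))
    return np.array(pts)

grid = ternary_grid(0.1)
# The equipartition point (1/6, 1/6, 1/6) satisfies the constraint:
# 1/6 + 1/6 + 4*(1/6) = 1.
```

Each grid point can then serve as an initial condition for a single-angle flavor-evolution run, with the equipartition point as the reference case.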
NASA Astrophysics Data System (ADS)
Li, Guang-Xing; Burkert, Andreas
2018-02-01
The interplay between gravity, turbulence and the magnetic field determines the evolution of the molecular interstellar medium (ISM) and the formation of stars. In spite of growing interest, the importance of the magnetic field over multiple scales remains poorly understood. We derive the magnetic energy spectrum (a measure that constrains the multiscale distribution of the magnetic energy) and compare it with the gravitational energy spectrum derived in Li & Burkert. In our formalism, the gravitational energy spectrum is purely determined by the surface density probability density function (PDF), and the magnetic energy spectrum is determined by both the surface density PDF and the magnetic field-density relation. If regions have density PDFs close to P(Σ) ~ Σ^{-2} and a universal magnetic field-density relation B ~ ρ^{1/2}, we expect a multiscale near-equipartition between gravity and the magnetic fields. This equipartition is found to hold in NGC 6334, where estimates of magnetic fields over multiple scales (from 0.1 pc to a few parsec) are available. However, the current observations are still limited in sample size. In the future, it is necessary to obtain multiscale measurements of magnetic fields from different clouds with different surface density PDFs and apply our formalism to further study the gravity-magnetic field interplay.
Turbulent equipartition pinch of toroidal momentum in spherical torus
NASA Astrophysics Data System (ADS)
Hahm, T. S.; Lee, J.; Wang, W. X.; Diamond, P. H.; Choi, G. J.; Na, D. H.; Na, Y. S.; Chung, K. J.; Hwang, Y. S.
2014-12-01
We present a new analytic expression for the turbulent equipartition (TEP) pinch of toroidal angular momentum originating from the magnetic field inhomogeneity of spherical torus (ST) plasmas. Starting from a conservative modern nonlinear gyrokinetic equation (Hahm et al 1988 Phys. Fluids 31 2670), we derive an expression for the pinch-to-momentum-diffusivity ratio without using the usual tokamak approximation B ∝ 1/R, which has been previously employed for TEP momentum pinch derivations in tokamaks (Hahm et al 2007 Phys. Plasmas 14 072302). Our new formula is evaluated for model equilibria of the National Spherical Torus eXperiment (NSTX) (Ono et al 2001 Nucl. Fusion 41 1435) and the Versatile Experiment Spherical Torus (VEST) (Chung et al 2013 Plasma Sci. Technol. 15 244). Our result predicts a stronger inward pinch for both cases, as compared to the prediction based on the tokamak formula.
NASA Astrophysics Data System (ADS)
Zabusky, Norman J.
2005-03-01
This paper is mostly a history of the early years of nonlinear and computational physics and mathematics. I trace how the counterintuitive result of near-recurrence to an initial condition in the first scientific digital computer simulation led to the discovery of the soliton in a later computer simulation. The 1955 report by Fermi, Pasta, and Ulam (FPU) described their simulation of a one-dimensional nonlinear lattice which did not show energy equipartition. The 1965 paper by Zabusky and Kruskal showed that the Korteweg-de Vries (KdV) nonlinear partial differential equation, a long wavelength model of the α-lattice (or cubic nonlinearity), derived by Kruskal, gave quantitatively the same results obtained by FPU. In 1967, Zabusky and Deem showed that a localized short wavelength initial excitation (then called an "optical" and now a "zone-boundary mode" excitation) of the α-lattice revealed "n-curve" coherent states. If the initial amplitude was sufficiently large, energy equipartition followed in a short time. The work of Kruskal and Miura (KM), Gardner and Greene (GG), and myself led to the appreciation of the infinity of denumerable invariants (conservation laws) for Hamiltonian systems and to a procedure by GGKM in 1967 for solving KdV exactly. The nonlinear science field exponentiated in diversity of linkages (as described in Appendix A). Included were pure and applied mathematics and all branches of basic and applied physics, including the first nonhydrodynamic application to optical solitons, as described in a brief essay (Appendix B) by Hasegawa. The growth was also manifest in the number of meetings held and institutes founded, as described briefly in Appendix D. Physicists and mathematicians in Japan, the USA, and the USSR (in the latter two, people associated with plasma physics) contributed to the diversification of the nonlinear paradigm, which continues worldwide to the present.
The last part of the paper (and Appendix C) discusses visiometrics: the visualization and quantification of simulation data, e.g., projection to lower dimensions, to facilitate understanding of nonlinear phenomena for modeling and prediction (or design). Finally, I present some recent developments that are linked to my early work by: Dritschel (vortex dynamics via contour dynamics/surgery in two and three dimensions); Friedland (pattern formation by synchronization in Hamiltonian nonlinear wave, vortex, and plasma systems, etc.); and the author ("n-curve" states and energy equipartition in an FPU lattice).
Dynamical energy equipartition of the Toda model with additional on-site potentials
NASA Astrophysics Data System (ADS)
Zhang, Zhenjun; Tang, Chunmei; Kang, Jing; Tong, Peiqing
2017-09-01
Project supported by the National Natural Science Foundation of China (Grant Nos. 11575087 and 11305045) and the Fundamental Research Funds for the Central Universities, China (Grant No. 2017B17114).
Tsallis and Kaniadakis statistics from a point of view of the holographic equipartition law
NASA Astrophysics Data System (ADS)
Abreu, Everton M. C.; Ananias Neto, Jorge; Mendes, Albert C. R.; Bonilla, Alexander
2018-02-01
In this work, we have illustrated the difference between the Tsallis and Kaniadakis entropies through cosmological models obtained from the formalism proposed by Padmanabhan, called the holographic equipartition law. Similarly to the formalism proposed by Komatsu, we have obtained an extra driving constant term in the Friedmann equation if we deform the Tsallis entropy by Kaniadakis' formalism. We initially considered the Tsallis entropy as the black-hole (BH) area entropy. This constant term may drive the universe into an accelerated or decelerated mode. On the other hand, if we start with the Kaniadakis entropy as the BH area entropy and then modify the kappa expression by Tsallis' formalism, the same absolute value but with the opposite sign is obtained. In the opposite limit, no driving inflation term for the early universe was derived from either deformation.
Li, Wenjin
2018-02-28
The transition path ensemble consists of reactive trajectories and possesses all the information necessary for understanding the mechanism and dynamics of important condensed-phase processes. However, a quantitative description of the properties of the transition path ensemble is far from established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were estimated for the first time in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates. However, the energies distributed on a pair of conjugate coordinates remained equal. Higher energies were observed on several coordinates that are strongly coupled to the reaction coordinate, while the energy on the rest was almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified. These quantitative analyses of energy distributions provide new insights into the transition path ensemble.
Designing an experiment to measure cellular interaction forces
NASA Astrophysics Data System (ADS)
McAlinden, Niall; Glass, David G.; Millington, Owain R.; Wright, Amanda J.
2013-09-01
Optical trapping is a powerful tool in life science research and is becoming commonplace in many microscopy laboratories and facilities. The force applied by the laser beam on the trapped object can be accurately determined, allowing any external forces acting on the trapped object to be deduced. We aim to design a series of experiments that use an optical trap to measure and quantify the interaction force between immune cells. In order to cause minimum perturbation to the sample, we plan to directly trap T cells and remove the need to introduce exogenous beads to the sample. This poses a series of challenges and raises questions that need to be answered in order to design a set of effective end-point experiments. A typical cell is large compared to the beads normally trapped, and highly non-uniform - can we reliably trap such objects and prevent them from rolling and re-orientating? In this paper we show how a spatial light modulator can produce a triple-spot trap, as opposed to a single-spot trap, giving complete control over the object's orientation and preventing it from rolling due, for example, to Brownian motion. To use an optical trap as a force transducer to measure an external force, one must first have a reliably calibrated system. The optical trapping force is typically measured either using the theory of equipartition and observing the Brownian motion of the trapped object, or using an escape-force method, e.g. the viscous drag force method. In this paper we examine the relationship between force and displacement, as well as measuring the maximum displacement from the equilibrium position before an object falls out of the trap, hence determining the conditions under which the different calibration methods should be applied.
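The equipartition calibration mentioned above relates the trap stiffness to the Brownian position fluctuations via (1/2)k⟨x²⟩ = (1/2)k_B T. A minimal sketch of that estimator on synthetic data, assuming a recorded position trace and with all numerical values purely illustrative:

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness_equipartition(positions_m, temperature_k=295.0):
    """Estimate trap stiffness k from the equipartition theorem:
    (1/2) k <x^2> = (1/2) k_B T, with x measured from the mean."""
    x = np.asarray(positions_m)
    return KB * temperature_k / np.var(x)

# Synthetic check: positions drawn from the Boltzmann distribution of a
# harmonic trap with known stiffness k_true (illustrative values).
rng = np.random.default_rng(0)
k_true = 1e-6  # N/m
T = 295.0
sigma = np.sqrt(KB * T / k_true)
x = rng.normal(0.0, sigma, 200_000)
k_est = trap_stiffness_equipartition(x, T)
```

In practice the trace must be long enough for the variance to converge, and drift must be removed first; for large displacements the escape-force methods mentioned above become the appropriate alternative.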
NASA Astrophysics Data System (ADS)
Yan, Dahai; Zeng, Houdun; Zhang, Li
2012-08-01
The detections of X-ray emission from the kiloparsec-scale jets of blazars and radio galaxies could imply the existence of high-energy electrons in these extended jets, and these electrons could produce high-energy emission through the inverse Compton (IC) process. In this paper, we study the non-variable hard TeV emission from a blazar. The multiband emission consists of two components: (i) the traditional synchrotron self-Compton (SSC) emission from the inner jet; (ii) the emission produced via SSC and IC scattering of cosmic microwave background (CMB) photons (IC/CMB) and extragalactic background light (EBL) photons by relativistic electrons in the extended jet under the stochastic acceleration scenario. Such a model is applied to 1ES 1101-232. The results indicate the following. (i) The non-variable hard TeV emission of 1ES 1101-232, which is dominated by IC/CMB emission from the extended jet, can be reproduced well by using three characteristic values of the Doppler factor (δD = 5, 10 and 15) for the TeV-emitting region in the extended jet. (ii) In the cases of δD = 15 and 10, the physical parameters can achieve equipartition (or quasi-equipartition) between the relativistic electrons and the magnetic field. In contrast, the physical parameters largely deviate from equipartition for the case of δD = 5. Therefore, we conclude that the TeV emission region of 1ES 1101-232 in the extended jet should be moderately or highly beamed.
NASA Astrophysics Data System (ADS)
Uma, B.; Swaminathan, T. N.; Ayyaswamy, P. S.; Eckmann, D. M.; Radhakrishnan, R.
2011-09-01
A direct numerical simulation (DNS) procedure is employed to study the thermal motion of a nanoparticle in an incompressible Newtonian stationary fluid medium with the generalized Langevin approach. We consider both Markovian (white noise) and non-Markovian (Ornstein-Uhlenbeck noise and Mittag-Leffler noise) processes. Initial locations of the particle are at various distances from the bounding wall to delineate wall effects. At thermal equilibrium, the numerical results are validated by comparing the calculated translational and rotational temperatures of the particle with those obtained from the equipartition theorem. The nature of the hydrodynamic interactions is verified by comparing the velocity autocorrelation functions and mean square displacements with analytical results. Numerical predictions of wall interactions with the particle in terms of mean square displacements are compared with analytical results. In the non-Markovian Langevin approach, an appropriate choice of colored noise is required to satisfy the power-law decay in the velocity autocorrelation function at long times. The results obtained using non-Markovian Mittag-Leffler noise simultaneously satisfy the equipartition theorem and the long-time behavior of the hydrodynamic correlations for a range of memory correlation times. The Ornstein-Uhlenbeck process does not provide the appropriate hydrodynamic correlations. Comparing our DNS results to the solution of a one-dimensional generalized Langevin equation, we observe that when the thermostat adheres to the equipartition theorem, the characteristic memory time in the noise is consistent with the inherent time scale of the memory kernel. The performance of the thermostat with respect to equilibrium and dynamic properties for various noise schemes is discussed.
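The equipartition validation described above compares particle temperatures against the set fluid temperature via (3/2)k_B T = (1/2)m⟨|v|²⟩. A minimal illustration of the translational case on a synthetic Maxwell-Boltzmann velocity trace (the particle mass and all numerical values are assumptions, not taken from the paper):

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def translational_temperature(velocities, mass):
    """Equipartition: (3/2) k_B T = (1/2) m <|v|^2>, averaged over an
    ensemble or time series of 3D velocity samples for one particle."""
    v2 = np.mean(np.sum(np.asarray(velocities) ** 2, axis=1))
    return mass * v2 / (3.0 * KB)

# Synthetic check: Maxwell-Boltzmann components at a set temperature.
rng = np.random.default_rng(1)
m = 4.0e-18        # kg, illustrative nanoparticle mass
T_set = 300.0
sigma_v = np.sqrt(KB * T_set / m)   # per-component thermal speed
v = rng.normal(0.0, sigma_v, (100_000, 3))
T_est = translational_temperature(v, m)
```

The rotational check is analogous, with the moment of inertia in place of the mass and angular velocities in place of linear ones.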
Examining the High-energy Radiation Mechanisms of Knots and Hotspots in Active Galactic Nucleus Jets
NASA Astrophysics Data System (ADS)
Zhang, Jin; Du, Shen-shi; Guo, Sheng-Chu; Zhang, Hai-Ming; Chen, Liang; Liang, En-Wei; Zhang, Shuang-Nan
2018-05-01
We compile the radio–optical–X-ray spectral energy distributions (SEDs) of 65 knots and 29 hotspots in 41 active galactic nucleus jets to examine their high-energy radiation mechanisms. Their SEDs can be fitted with single-zone leptonic models, except for the hotspot of Pictor A and six knots of 3C 273. The X-ray emission of 1 hotspot and 22 knots is well explained as synchrotron radiation under the equipartition condition; these usually have lower X-ray and radio luminosities than the others, which may be due to a lower beaming factor. An inverse Compton (IC) process is invoked to explain the X-ray emission of the other SEDs. Without the equipartition condition, their X-ray emission can be attributed to the synchrotron self-Compton process, but the derived jet powers (P_jet) are not correlated with L_k, and most of them exceed L_k by more than three orders of magnitude, where L_k is the jet kinetic power estimated from their radio emission. Under the equipartition condition, the X-ray emission is well interpreted with the IC process on cosmic microwave background photons (IC/CMB). In this scenario, the derived P_jet of knots and hotspots are correlated with and comparable to L_k. These results suggest that the IC/CMB model may be a promising interpretation of the X-ray emission. In addition, a tentative knot-hotspot sequence in the synchrotron peak-energy-peak-luminosity plane is observed, similar to the blazar sequence, which may be attributed to the different cooling mechanisms of electrons.
Nanoparticle Brownian motion and hydrodynamic interactions in the presence of flow fields
Uma, B.; Swaminathan, T. N.; Radhakrishnan, R.; Eckmann, D. M.; Ayyaswamy, P. S.
2011-01-01
We consider the Brownian motion of a nanoparticle in an incompressible Newtonian fluid medium (quiescent or fully developed Poiseuille flow) with the fluctuating hydrodynamics approach. The formalism considers situations where both the Brownian motion and the hydrodynamic interactions are important. The flow results have been modified to account for compressibility effects. Different nanoparticle sizes and nearly neutrally buoyant particle densities are also considered. Tracked particles are initially located at various distances from the bounding wall to delineate wall effects. The results for thermal equilibrium are validated by comparing the predictions for the temperatures of the particle with those obtained from the equipartition theorem. The nature of the hydrodynamic interactions is verified by comparing the velocity autocorrelation functions and mean square displacements with analytical and experimental results where available. The equipartition theorem for a Brownian particle in Poiseuille flow is verified for a range of low Reynolds numbers. Numerical predictions of wall interactions with the particle in terms of particle diffusivities are consistent with results, where available.
New measurements of photospheric magnetic fields in late-type stars and emerging trends
NASA Technical Reports Server (NTRS)
Saar, S. H.; Linsky, J. L.
1986-01-01
The magnetic fields of late-type stars are measured using the method of Saar et al. (1986). The method includes radiative transfer effects and compensation for line blending; the photospheric magnetic field parameters are derived by comparing observed and theoretical line profiles using an LTE code that includes line saturation and the full Zeeman pattern. The preliminary mean active region magnetic field strengths (B) and surface area coverages for 20 stars are discussed. A trend of increasing B towards the cooler dwarf stars is observed, and the linear correlation between B and the equipartition value of the magnetic field strength suggests that the photospheric gas pressure determines the photospheric magnetic field strengths. A tendency toward larger filling factors at larger stellar angular velocities is also detected.
Kinematic fingerprint of core-collapsed globular clusters
NASA Astrophysics Data System (ADS)
Bianchini, P.; Webb, J. J.; Sills, A.; Vesperini, E.
2018-03-01
Dynamical evolution drives globular clusters towards core collapse, which strongly shapes their internal properties. Diagnostics of core collapse have so far been based on photometry only, namely on the study of the concentration of the density profiles. Here, we present a new method to robustly identify core-collapsed clusters based on the study of their stellar kinematics. We introduce the kinematic concentration parameter, ck, the ratio between the global and local degree of energy equipartition reached by a cluster, and show through extensive direct N-body simulations that clusters approaching core collapse and in the post-core collapse phase are strictly characterized by ck > 1. The kinematic concentration provides a suitable diagnostic to identify core-collapsed clusters, independent from any other previous methods based on photometry. We also explore the effects of incomplete radial and stellar mass coverage on the calculation of ck and find that our method can be applied to state-of-the-art kinematic data sets.
ERIC Educational Resources Information Center
Fine, Leonard
2005-01-01
A brief description of the work and life of the great physicist Albert Einstein is presented. The photoelectric paper he wrote in 1905 led him to the study of fluctuations in the energy density of radiation and from there to the incomplete nature of the equipartition theorem of classical mechanics, which failed to account for…
Extending the Peak Bandwidth of Parameters for Softmax Selection in Reinforcement Learning.
Iwata, Kazunori
2016-05-11
Softmax selection is one of the most popular methods for action selection in reinforcement learning. Although various recently proposed methods may be more effective with full parameter tuning, implementing a complicated method that requires the tuning of many parameters can be difficult. Thus, softmax selection is still worth revisiting, considering the cost savings of its implementation and tuning. In fact, this method works adequately in practice with only one parameter appropriately set for the environment. The aim of this paper is to improve the variable setting of this method to extend the bandwidth of good parameters, thereby reducing the cost of implementation and parameter tuning. To achieve this, we take advantage of the asymptotic equipartition property in a Markov decision process to extend the peak bandwidth of softmax selection. Using a variety of episodic tasks, we show that our setting is effective in extending the bandwidth and that it yields a better policy in terms of stability. The bandwidth is quantitatively assessed in a series of statistical tests.
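Softmax (Boltzmann) action selection, as discussed above, weights each action by the exponential of its value estimate divided by a temperature parameter; it is this parameter whose useful bandwidth the paper seeks to extend. A minimal sketch (the function names are ours, not the paper's):

```python
import numpy as np

def softmax_policy(q_values, tau):
    """Boltzmann/softmax action probabilities: p(a) ∝ exp(Q(a)/tau).
    tau is the temperature parameter; subtracting max(Q) before
    exponentiating keeps the computation numerically stable."""
    q = np.asarray(q_values, dtype=float)
    z = np.exp((q - q.max()) / tau)
    return z / z.sum()

def select_action(q_values, tau, rng):
    """Sample an action index from the softmax distribution."""
    p = softmax_policy(q_values, tau)
    return rng.choice(len(p), p=p)

q = [1.0, 2.0, 0.5]
p_cold = softmax_policy(q, 0.1)    # low temperature: near-greedy
p_hot = softmax_policy(q, 100.0)   # high temperature: near-uniform
```

Low temperatures exploit the current value estimates; high temperatures explore uniformly. The paper's variable setting aims to widen the range of tau values for which this trade-off remains effective.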
Educational and Vocational Goals of Rural Youth in the South.
ERIC Educational Resources Information Center
Sperry, Irwin V.; And Others
The objectives of the study were to (1) compare educational goals of rural youth and their parents and (2) determine the relationships of the similarities and differences to such factors as geographic area, state, sex, level of living, residence, family size, and club membership. A survey sample, selected from an equipartitioned universe…
Relaxation processes in a low-order three-dimensional magnetohydrodynamics model
NASA Technical Reports Server (NTRS)
Stribling, Troy; Matthaeus, William H.
1991-01-01
The time asymptotic behavior of a Galerkin model of 3D magnetohydrodynamics (MHD) has been interpreted using the selective decay and dynamic alignment relaxation theories. A large number of simulations have been performed that scan a parameter space defined by the rugged ideal invariants: energy, cross helicity, and magnetic helicity. It is concluded that the time asymptotic state can be interpreted as a relaxation to minimum energy. A simple decay model, based on absolute equilibrium theory, is found to predict a mapping of initial onto time asymptotic states, and to accurately describe the long-time behavior of the runs when magnetic helicity is present. Attention is also given to two processes, operating on time scales shorter than selective decay and dynamic alignment, in which the ratio of kinetic to magnetic energy relaxes to values of O(1). The faster of the two processes takes states initially dominated by magnetic energy to a state of near-equipartition between kinetic and magnetic energy through power-law growth of the kinetic energy. The other process takes states initially dominated by kinetic energy to the near-equipartitioned state through exponential growth of the magnetic energy.
Noninvasive determination of optical lever sensitivity in atomic force microscopy
NASA Astrophysics Data System (ADS)
Higgins, M. J.; Proksch, R.; Sader, J. E.; Polcik, M.; Mc Endoo, S.; Cleveland, J. P.; Jarvis, S. P.
2006-01-01
Atomic force microscopes typically require knowledge of the cantilever spring constant and optical lever sensitivity in order to accurately determine the force from the cantilever deflection. In this study, we investigate a technique to calibrate the optical lever sensitivity of rectangular cantilevers that does not require contact to be made with a surface. This noncontact approach utilizes the method of Sader et al. [Rev. Sci. Instrum. 70, 3967 (1999)] to calibrate the spring constant of the cantilever in combination with the equipartition theorem [J. L. Hutter and J. Bechhoefer, Rev. Sci. Instrum. 64, 1868 (1993)] to determine the optical lever sensitivity. A comparison is presented between sensitivity values obtained from conventional static mode force curves and those derived using this noncontact approach for a range of different cantilevers in air and liquid. These measurements indicate that the method offers a quick, alternative approach for the calibration of the optical lever sensitivity.
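The noncontact calibration above combines a spring constant (e.g. from the Sader method) with the thermal deflection signal: writing the deflection as x = S·V, equipartition k⟨x²⟩ = k_B T gives S = sqrt(k_B T / (k⟨V²⟩)). A minimal sketch on synthetic data (all numerical values are illustrative, not from the paper):

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def invols_from_thermal(k_spring, volt_signal, temperature_k=295.0):
    """Optical lever sensitivity S (m/V) from equipartition: with
    deflection x = S*V, k <x^2> = k_B T gives
    S = sqrt(k_B T / (k <V^2>)). Assumes k_spring is already known
    (e.g. from the Sader method) and that volt_signal is a thermal
    deflection trace; the mean is removed before taking the variance."""
    v = np.asarray(volt_signal)
    v = v - v.mean()
    return np.sqrt(KB * temperature_k / (k_spring * np.mean(v ** 2)))

# Synthetic check with a known sensitivity S_true (illustrative values).
rng = np.random.default_rng(2)
k = 0.1            # N/m, soft cantilever
T = 295.0
S_true = 50e-9     # 50 nm/V
x_rms = np.sqrt(KB * T / k)
volts = rng.normal(0.0, x_rms / S_true, 500_000)
S_est = invols_from_thermal(k, volts, T)
```

In a real measurement the thermal peak would be isolated in the frequency domain first, so that instrument noise outside the cantilever resonance does not inflate ⟨V²⟩; this sketch skips that step.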
Radhakrishnan, Ravi; Yu, Hsiu-Yu; Eckmann, David M.; Ayyaswamy, Portonovo S.
2017-01-01
Traditionally, the numerical computation of particle motion in a fluid is resolved through computational fluid dynamics (CFD). However, resolving the motion of nanoparticles poses additional challenges due to the coupling between the Brownian and hydrodynamic forces. Here, we focus on the Brownian motion of a nanoparticle coupled to adhesive interactions and confining-wall-mediated hydrodynamic interactions. We discuss several techniques that are founded on the basis of combining CFD methods with the theory of nonequilibrium statistical mechanics in order to simultaneously conserve thermal equipartition and to show correct hydrodynamic correlations. These include the fluctuating hydrodynamics (FHD) method, the generalized Langevin method, the hybrid method, and the deterministic method. Through the examples discussed, we also show a top-down multiscale progression of temporal dynamics from the colloidal scales to the molecular scales, and the associated fluctuations and hydrodynamic correlations. While the motivation and the examples discussed here pertain to nanoscale fluid dynamics and mass transport, the methodologies presented are rather general and can be easily adapted to applications in convective heat transfer.
Residual Energy Spectrum of Solar Wind Turbulence
NASA Astrophysics Data System (ADS)
Chen, C. H. K.; Bale, S. D.; Salem, C. S.; Maruca, B. A.
2013-06-01
It has long been known that the energy in velocity and magnetic field fluctuations in the solar wind is not in equipartition. In this paper, we present an analysis of 5 yr of Wind data at 1 AU to investigate the reason for this. The residual energy (difference between energy in velocity and magnetic field fluctuations) was calculated using both the standard magnetohydrodynamic (MHD) normalization for the magnetic field and a kinetic version, which includes temperature anisotropies and drifts between particle species. It was found that with the kinetic normalization, the fluctuations are closer to equipartition, with a mean normalized residual energy of σ_r = -0.19 and mean Alfvén ratio of r_A = 0.71. The spectrum of residual energy, in the kinetic normalization, was found to be steeper than both the velocity and magnetic field spectra, consistent with some recent MHD turbulence predictions and numerical simulations, having a spectral index close to -1.9. The local properties of residual energy and cross helicity were also investigated, showing that globally balanced intervals with small residual energy contain local patches of larger imbalance and larger residual energy at all scales, as expected for nonlinear turbulent interactions.
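The two quantities reported above are simple moments of the fluctuation time series: σ_r = (E_v - E_b)/(E_v + E_b) and r_A = E_v/E_b, with the magnetic fluctuations first converted to Alfvén (velocity) units. A minimal sketch on toy data (not Wind data; the temperature-anisotropy corrections of the kinetic normalization are omitted here):

```python
import numpy as np

def residual_energy_stats(dv, db_alfven):
    """Normalized residual energy sigma_r = (E_v - E_b)/(E_v + E_b) and
    Alfven ratio r_A = E_v/E_b, from velocity fluctuations dv and
    magnetic fluctuations already expressed in Alfven (velocity) units,
    i.e. db/sqrt(mu_0 * rho) in the standard MHD normalization."""
    ev = np.mean(np.sum(np.asarray(dv) ** 2, axis=1))
    eb = np.mean(np.sum(np.asarray(db_alfven) ** 2, axis=1))
    return (ev - eb) / (ev + eb), ev / eb

# Toy fluctuations with more magnetic than kinetic energy, as typically
# observed at 1 AU (amplitudes are illustrative).
rng = np.random.default_rng(3)
dv = rng.normal(0.0, 1.0, (10_000, 3))
db = rng.normal(0.0, 1.2, (10_000, 3))
sigma_r, r_a = residual_energy_stats(dv, db)
```

Note the two measures are equivalent: σ_r = (r_A - 1)/(r_A + 1), so σ_r = -0.19 and r_A = 0.71 in the abstract describe the same degree of magnetic excess.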
Intermittent many-body dynamics at equilibrium
NASA Astrophysics Data System (ADS)
Danieli, C.; Campbell, D. K.; Flach, S.
2017-06-01
The equilibrium value of an observable defines a manifold in the phase space of an ergodic and equipartitioned many-body system. A typical trajectory pierces that manifold infinitely often as time goes to infinity. We use these piercings to measure both the relaxation time of the lowest frequency eigenmode of the Fermi-Pasta-Ulam chain, as well as the fluctuations of the subsequent dynamics in equilibrium. The dynamics in equilibrium is characterized by a power-law distribution of excursion times far off equilibrium, with diverging variance. Long excursions arise from sticky dynamics close to q -breathers localized in normal mode space. Measuring the exponent allows one to predict the transition into nonergodic dynamics. We generalize our method to Klein-Gordon lattices where the sticky dynamics is due to discrete breathers localized in real space.
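The piercing-based measurement above reduces to recording the durations between successive sign changes of an observable about its equilibrium value; the distribution of these excursion times is then examined for power-law tails. A minimal sketch of the bookkeeping (the observable here is a toy signal, not the FPU chain itself):

```python
import numpy as np

def excursion_times(signal, eq_value, dt=1.0):
    """Durations between successive piercings of the equilibrium
    manifold, detected as sign changes of (signal - eq_value);
    dt is the sampling interval of the trajectory."""
    s = np.sign(np.asarray(signal) - eq_value)
    crossings = np.flatnonzero(s[:-1] * s[1:] < 0)
    return np.diff(crossings) * dt

# Toy check: a sine pierces its mean twice per period, so every
# excursion lasts half a period.
t = np.linspace(0.0, 100.0, 10_001)
tau = excursion_times(np.sin(t), 0.0, dt=t[1] - t[0])
```

For the chaotic many-body trajectory, the histogram of these durations (rather than a single value, as in this toy case) carries the heavy power-law tail produced by sticking events near q-breathers.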
NASA Astrophysics Data System (ADS)
Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.
2008-12-01
It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.
Studies of Nonlinear Problems. I
DOE R&D Accomplishments Database
Fermi, E.; Pasta, J.; Ulam, S.
1955-05-01
A one-dimensional dynamical system of 64 particles with forces between neighbors containing nonlinear terms has been studied on the Los Alamos computer MANIAC I. The nonlinear terms considered are quadratic, cubic, and broken linear types. The results are analyzed into Fourier components and plotted as a function of time. The results show very little, if any, tendency toward equipartition of energy among the degrees of freedom.
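The experiment described above is easy to reproduce in miniature. This sketch uses a smaller chain with the quadratic nonlinearity, integrated with velocity Verlet; the chain length, coupling, and time step are illustrative choices, not the original MANIAC I setup:

```python
import numpy as np

def fpu_mode_energies(n=32, alpha=0.25, steps=20000, dt=0.05):
    """Integrate an FPU chain with a quadratic ("alpha") nonlinearity and
    fixed ends, started with all energy in the lowest linear mode, and
    return the harmonic energy of each normal mode at the final time."""
    k = np.arange(1, n + 1)
    omega = 2.0 * np.sin(k * np.pi / (2.0 * (n + 1)))        # linear mode frequencies
    modes = np.sqrt(2.0 / (n + 1)) * np.sin(
        np.outer(np.arange(1, n + 1), k) * np.pi / (n + 1))  # orthonormal mode shapes
    q = 4.0 * modes[:, 0].copy()                             # all energy in mode 1
    p = np.zeros(n)

    def force(q):
        d = np.diff(np.concatenate(([0.0], q, [0.0])))       # spring extensions
        f = d + alpha * d**2                                 # linear + quadratic force law
        return f[1:] - f[:-1]                                # net force on each particle

    a = force(q)
    for _ in range(steps):                                   # velocity-Verlet integration
        p += 0.5 * dt * a
        q += dt * p
        a = force(q)
        p += 0.5 * dt * a

    Q, P = modes.T @ q, modes.T @ p                          # project onto linear modes
    return 0.5 * (P**2 + (omega * Q)**2)
```

At these parameters the energy remains concentrated in the lowest few modes instead of equipartitioning over all of them, which is the paradoxical behavior the abstract reports.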
3C 279 IN OUTBURST IN 2015 JUNE: A BROADBAND SED STUDY BASED ON THE INTEGRAL DETECTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bottacini, Eugenio; Böttcher, Markus; Pian, Elena
2016-11-20
Blazars radiate from radio through gamma-ray frequencies and thereby make ideal targets for multifrequency studies. Such studies allow the properties of the emitting jet to be constrained. 3C 279 is among the most notable blazars and is therefore the subject of extensive multifrequency campaigns. We report the results of a campaign ranging from near-IR to gamma-ray energies that targeted an outburst of 3C 279 in 2015 June. The campaign pivots around the detection in only 50 ks by INTEGRAL, whose IBIS/ISGRI data pin down the high-energy component of the spectral energy distribution (SED) between Swift-XRT data and Fermi-LAT data. The overall SED from near-IR to gamma rays can be well represented by either a leptonic or a lepto-hadronic radiation transfer model. Even though the data are equally well represented by the two models, their inferred parameters challenge the physical conditions in the jet. In fact, the leptonic model requires a magnetic field far below equipartition with the relativistic particle energy density. In contrast, equipartition may be achieved with the lepto-hadronic model, although this implies an extreme total jet power close to the Eddington luminosity.
NASA Astrophysics Data System (ADS)
Basu, A.; Das, B.; Middya, T. R.; Bhattacharya, D. P.
2017-01-01
The phonon growth characteristic in a degenerate semiconductor has been calculated under low-temperature conditions. If the lattice temperature is high, the energy of the intravalley acoustic phonon is negligibly small compared to the average thermal energy of the electrons. Hence one can traditionally assume the electron-phonon collisions to be elastic and approximate the Bose-Einstein (B.E.) distribution for the phonons by the simple equipartition law. However, in the present analysis at low lattice temperatures, the interaction of the nonequilibrium electrons with the acoustic phonons becomes inelastic and the simple equipartition law for the phonon distribution is not valid. Hence the analysis takes into account the inelastic collisions and the complete form of the B.E. distribution. The high-field distribution function of the carriers, given by the Fermi-Dirac (F.D.) function at the field-dependent carrier temperature, has been approximated by a well-tested model that overcomes the intrinsic problem of correctly evaluating the integrals involving products and powers of the Fermi function. The results thus obtained are more reliable than the rough estimates one would obtain by using the exact F.D. function together with oversimplified approximations.
Simulation of the small-scale magnetism in main-sequence stellar atmospheres
NASA Astrophysics Data System (ADS)
Salhab, R. G.; Steiner, O.; Berdyugina, S. V.; Freytag, B.; Rajaguru, S. P.; Steffen, M.
2018-06-01
Context. Observations of the Sun tell us that its granular and subgranular small-scale magnetism has significant consequences for global quantities such as the total solar irradiance or the convective blueshift of spectral lines. Aims: In this paper, properties of the small-scale magnetism of four cool stellar atmospheres, including the Sun, are investigated, in particular its effects on the radiative intensity and flux. Methods: We carried out three-dimensional radiation magnetohydrodynamic simulations with the CO5BOLD code in two different settings: with and without a magnetic field. These are thought to represent states of high and low small-scale magnetic activity of a stellar magnetic cycle. Results: We find that the presence of small-scale magnetism increases the bolometric intensity and flux in all investigated models. The surplus in radiative flux of the magnetic over the magnetic field-free atmosphere increases with increasing effective temperature, Teff, from 0.47% for spectral type K8V to 1.05% for the solar model, but decreases for effective temperatures above solar. The degree of evacuation of the magnetic flux concentrations monotonically increases with Teff, as does their depression of the visible optical surface, that is, the Wilson depression. Nevertheless, the strength of the field concentrations on this surface stays remarkably unchanged at ≈1560 G throughout the considered range of spectral types. With respect to the surrounding gas pressure, the field strength is close to (thermal) equipartition for the Sun and spectral type F5V but is clearly sub-equipartition for K2V, and more so for K8V. The magnetic flux concentrations appear most conspicuous for model K2V owing to their high brightness contrast. Conclusions: For mean magnetic flux densities of approximately 50 G, we expect the small-scale magnetism of stars in the spectral range from F5V to K8V to produce a positive contribution to their bolometric luminosity.
The modulation seems to be most effective for early G-type stars.
Understanding exoplanet populations with simulation-based methods
NASA Astrophysics Data System (ADS)
Morehead, Robert Charles
The Kepler candidate catalog represents an unprecedented sample of exoplanet host stars. This dataset is ideal for probing the populations of exoplanet systems and exploring their architectures. Confirming transiting exoplanet candidates through traditional follow-up methods is challenging, especially for faint host stars. Most of Kepler's validated planets relied on statistical methods to separate true planets from false positives. Multiple transiting planet systems (MTPS) have been previously shown to have low false-positive rates, and over 850 planets in MTPSs have been statistically validated so far. We show that the period-normalized transit duration ratio (xi) offers additional information that can be used to establish the planetary nature of these systems. We briefly discuss the observed distribution of xi for the Q1-Q17 Kepler Candidate Search. We also use xi to develop a Bayesian statistical framework combined with Monte Carlo methods to determine which pairs of planet candidates in an MTPS are consistent with the planet hypothesis for a sample of 862 MTPSs that include candidate planets, confirmed planets, and known false positives. This analysis proves to be efficient and advantageous in that it only requires catalog-level bulk candidate properties and galactic population modeling to compute the probabilities of a myriad of feasible scenarios composed of background and companion stellar blends in the photometric aperture, without needing additional observational follow-up. Our results agree with the previous results of a low false-positive rate in the Kepler MTPSs. This implies, independently of any other estimates, that most of the MTPSs detected by Kepler are planetary in nature, but that a substantial fraction could be orbiting stars other than the putative target star, and therefore may be subject to significant error in the inferred planet parameters resulting from unknown or mismeasured stellar host attributes.
We also apply approximate Bayesian computation (ABC) using forward simulations of the Kepler planet catalog to simultaneously constrain the distributions of mutual inclination between the planets, orbital eccentricity, the underlying number of planets per planetary system, and the fraction of stars that host planet systems in a subsample of Kepler candidate planets, using SimpleABC, a Python package we developed as a general-purpose framework for ABC analysis. For our investigation into planet architectures, we limit the sample to candidates in orbits from 10 to 320 days, where the false-positive contamination rate is expected to be low. We test two models. The first is an Independent Eccentricity (e) Model, where mutual inclination and e are drawn from Rayleigh distributions with dispersions σ_im and σ_e, the number of planets per planetary system is drawn from a Poisson distribution with mean λ, and the fraction of stars with planetary systems is drawn from a two-state categorical distribution parameterized by η_p. We also test an Equipartition Model identical to the Independent e Model, except that σ_e is linked to σ_im by a scaling factor γ_e. For the Independent e Model, we find σ_im = 5.51° (+8.00/−3.35), σ_e = 0.03 (+0.05/−0.01), λ = 6.62 (+7.74/−3.36), and η_p = 0.20 (+0.18/−0.11). For the Equipartition Model, we find σ_im = 1.15° (+0.56/−0.33), γ_e = 1.38 (+1.89/−0.93), λ = 2.25 (+0.56/−0.29), and η_p = 0.56 (+0.08/−0.11). These results, especially for the Equipartition Model, are in good agreement with previous studies. However, deficiencies in our single-population models suggest that at least one additional subpopulation of planet systems is needed to explain the Kepler sample, providing further confirmation of the so-called "Kepler Dichotomy".
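The forward-model-plus-rejection logic of ABC can be illustrated with a toy version of one of the ingredients above. This sketch is a bare-bones rejection sampler with the sample mean as summary statistic; it stands in for, and is far simpler than, the SimpleABC framework and the full Kepler forward model, and the prior range and tolerance are our own illustrative choices:

```python
import numpy as np

def abc_rayleigh(observed, n_draws=20000, eps=0.05, seed=1):
    """Minimal ABC rejection sampler: infer the dispersion of a
    Rayleigh distribution (e.g. of mutual inclinations) by keeping
    parameter draws whose simulated summary matches the data."""
    rng = np.random.default_rng(seed)
    s_obs = observed.mean()                             # observed summary statistic
    accepted = []
    for _ in range(n_draws):
        sigma = rng.uniform(0.1, 10.0)                  # flat prior on the dispersion
        sim = rng.rayleigh(sigma, size=observed.size)   # forward simulation
        if abs(sim.mean() - s_obs) < eps:               # rejection step
            accepted.append(sigma)
    return np.array(accepted)                           # approximate posterior draws
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is the key attraction of ABC for complex catalog-level forward models.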
Exploring the Internal Dynamics of Globular Clusters
NASA Astrophysics Data System (ADS)
Watkins, Laura L.; van der Marel, Roeland; Bellini, Andrea; Luetzgendorf, Nora; HSTPROMO Collaboration
2018-01-01
The formation histories and structural properties of globular clusters are imprinted on their internal dynamics. Energy equipartition results in velocity differences for stars of different mass and leads to mass segregation, which produces different spatial distributions for stars of different mass. Intermediate-mass black holes significantly increase the velocity dispersions at the centres of clusters. By combining accurate measurements of their internal kinematics with state-of-the-art dynamical models, we can characterise both the velocity dispersion and mass profiles of clusters, tease apart the different effects, and understand how clusters may have formed and evolved. Using proper motions from the Hubble Space Telescope Proper Motion (HSTPROMO) Collaboration for a set of 22 Milky Way globular clusters, and our discrete dynamical modelling techniques designed to work with large, high-quality datasets, we are studying a variety of internal cluster properties. We will present the results of theoretical work on simulated clusters that demonstrates the efficacy of our approach, and preliminary results from application to real clusters.
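The mass dependence of the velocity dispersion mentioned above can be made concrete with a one-line scaling relation; a minimal sketch, in which the function name and the partial-equipartition exponent eta are our own illustrative conventions rather than the collaboration's code:

```python
def dispersion_ratio(m1, m2, eta=0.5):
    """Ratio sigma(m1)/sigma(m2) of velocity dispersions for stars of
    masses m1 and m2 under the scaling sigma ∝ m**(-eta). Full energy
    equipartition (equal mean kinetic energy per star) corresponds to
    eta = 0.5; clusters with only partial equipartition have
    0 < eta < 0.5."""
    return (m1 / m2) ** (-eta)
```

For example, under full equipartition a 0.4 solar-mass star moves about 1.41 times (a factor sqrt(2)) faster than a 0.8 solar-mass star, which is the kind of mass-dependent kinematic signature proper-motion surveys look for.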
Statistical thermodynamics of a two-dimensional relativistic gas.
Montakhab, Afshin; Ghodrat, Malihe; Barati, Mahmood
2009-03-01
In this paper we study a fully relativistic model of a two-dimensional hard-disk gas. This model avoids the general problems associated with relativistic particle collisions and is therefore an ideal system in which to study relativistic effects in statistical thermodynamics. We study this model using molecular-dynamics simulation, concentrating on the velocity distribution functions. We obtain results for the x and y components of velocity in the rest frame (Γ) as well as in the moving frame (Γ′). Our results confirm that the Jüttner distribution is the correct generalization of the Maxwell-Boltzmann distribution. We obtain the same "temperature" parameter β for both frames, consistent with a recent study of a limited one-dimensional model. We also address the controversial topic of temperature transformation. We show that while local thermal equilibrium holds in the moving frame, statistical methods such as distribution functions or the equipartition theorem are ultimately inconclusive in deciding on a correct temperature transformation law (if any).
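The two-dimensional Jüttner distribution discussed above can be written down and normalized numerically. The following sketch uses our own conventions (units c = m = 1, theta = kT/mc²), not the authors' exact notation; for theta << 1 the curve approaches the 2D Maxwell-Boltzmann speed distribution:

```python
import numpy as np

def juttner_speed_pdf(v, theta=0.2):
    """2D Jüttner speed distribution in units c = m = 1:
    f(v) ∝ gamma(v)**4 * v * exp(-gamma(v)/theta), obtained from the
    momentum-space form exp(-gamma/theta) via p = gamma*v and
    dp/dv = gamma**3. Normalized numerically on 0 < v < 1."""
    gamma = lambda u: 1.0 / np.sqrt(1.0 - np.asarray(u)**2)
    unnorm = lambda u: gamma(u)**4 * u * np.exp(-(gamma(u) - 1.0) / theta)
    grid = np.linspace(1e-6, 1.0 - 1e-6, 20001)   # speeds are bounded by c = 1
    norm = np.trapz(unnorm(grid), grid)           # numerical normalization
    return unnorm(np.asarray(v)) / norm
```

Unlike the Maxwell-Boltzmann distribution, this density vanishes identically for v ≥ c, which is what makes it the physically sensible relativistic generalization.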
Scotti, A.; Beardsley, R.; Butman, B.
2006-01-01
A self-consistent formalism to estimate baroclinic energy densities and fluxes resulting from the propagation of internal waves of arbitrary amplitude is derived using the concept of available potential energy. The method can be applied to numerical, laboratory or field data. The total energy flux is shown to be the sum of the linear energy flux ∫ u′p′ dz (primes denote baroclinic quantities), plus contributions from the non-hydrostatic pressure anomaly and the self-advection of kinetic and available potential energy. Using highly resolved observations in Massachusetts Bay, it is shown that due to the presence of nonlinear internal waves periodically propagating in the area, ∫ u′p′ dz accounts for only half of the total flux. The same data show that equipartition of available potential and kinetic energy can be violated, especially when the nonlinear waves begin to interact with the bottom. © 2006 Cambridge University Press.
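The linear term of the flux decomposition above is a simple depth integral; a minimal sketch of its evaluation from discrete profiles (function name and trapezoidal quadrature are our own choices):

```python
import numpy as np

def linear_energy_flux(u_prime, p_prime, z):
    """Depth-integrated linear baroclinic energy flux, int u'p' dz,
    from discrete profiles of baroclinic velocity u' and pressure p'
    on depth levels z, via the trapezoidal rule. This is only the
    first, linear term of the total flux; the abstract shows that for
    nonlinear waves the non-hydrostatic pressure anomaly and the
    self-advected kinetic and available potential energies contribute
    comparably."""
    u_prime, p_prime, z = map(np.asarray, (u_prime, p_prime, z))
    return np.trapz(u_prime * p_prime, z)
```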
A representative survey of the dynamics and energetics of FR II radio galaxies
NASA Astrophysics Data System (ADS)
Ineson, J.; Croston, J. H.; Hardcastle, M. J.; Mingo, B.
2017-05-01
We report the first large, systematic study of the dynamics and energetics of a representative sample of Fanaroff-Riley type II (FR II) radio galaxies with well-characterized group/cluster environments. We used X-ray inverse-Compton and radio synchrotron measurements to determine the internal radio-lobe conditions, and these were compared with external pressures acting on the lobes, determined from measurements of the thermal X-ray emission of the group/cluster. Consistent with previous work, we found that FR II radio lobes are typically electron dominated by a small factor relative to equipartition, and are overpressured relative to the external medium in their outer parts. These results suggest that there is typically no energetically significant proton population in the lobes of FR II radio galaxies (unlike for FR Is), and so for this population, inverse-Compton modelling provides an accurate way of measuring total energy content and estimating jet power. We estimated the distribution of Mach numbers for the population of expanding radio lobes, finding that at least half of the radio galaxies are currently driving strong shocks into their group/cluster environments. Finally, we determined a jet power-radio luminosity relation for FR II radio galaxies based on our estimates of lobe internal energy and Mach number. The slope and normalization of this relation are consistent with theoretical expectations, given the departure from equipartition and environmental distribution for our sample.
NASA Astrophysics Data System (ADS)
Bruni, G.; Gómez, J. L.; Casadio, C.; Lobanov, A.; Kovalev, Y. Y.; Sokolovsky, K. V.; Lisakov, M. M.; Bach, U.; Marscher, A.; Jorstad, S.; Anderson, J. M.; Krichbaum, T. P.; Savolainen, T.; Vega-García, L.; Fuentes, A.; Zensus, J. A.; Alberdi, A.; Lee, S.-S.; Lu, R.-S.; Pérez-Torres, M.; Ros, E.
2017-08-01
Context. RadioAstron is a 10 m orbiting radio telescope mounted on the Spektr-R satellite, launched in 2011, performing Space Very Long Baseline Interferometry (SVLBI) observations supported by a global ground array of radio telescopes. With an apogee of 350 000 km, it offers for the first time the possibility of performing μas-resolution imaging in the cm-band. Aims: The RadioAstron active galactic nuclei (AGN) polarization Key Science Project (KSP) aims at exploiting the unprecedented angular resolution provided by RadioAstron to study jet launching/collimation and magnetic-field configuration in AGN jets. The targets of our KSP are some of the most powerful blazars in the sky. Methods: We present observations at 22 GHz of 3C 273, performed in 2014, designed to reach a maximum baseline of approximately nine Earth diameters. Reaching an angular resolution of 0.3 mas, we study a particularly low activity state of the source, and estimate the nuclear region brightness temperature, comparing with the extreme one detected one year before during the RadioAstron early science period. We also make use of the VLBA-BU-BLAZAR survey data, at 43 GHz, to study the kinematics of the jet in a 1.5-yr time window. Results: We find that the nuclear brightness temperature is two orders of magnitude lower than the exceptionally high value detected in 2013 with RadioAstron at the same frequency (1.4 × 10^13 K, source-frame), and even one order of magnitude lower than the equipartition value. The kinematics analysis at 43 GHz shows that a new component was ejected 2 months after the 2013 epoch, visible also in our 22 GHz map presented here. Consequently, this component was located upstream of the core during the brightness temperature peak. Fermi-LAT observations for the period 2010-2014 do not show any γ-ray flare in conjunction with the passage of the new component by the core at 43 GHz.
Conclusions: These observations confirm that the previously detected extreme brightness temperature in 3C 273, exceeding the inverse Compton limit, is a short-lived phenomenon caused by a temporary departure from equipartition. Thus, the availability of interferometric baselines capable of providing μas angular resolution does not systematically imply measured brightness temperatures over the known physical limits for astrophysical sources. The reduced image (FITS file) is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/604/A111
The development of truncated inviscid turbulence and the FPU-problem
NASA Astrophysics Data System (ADS)
Ooms, G.; Boersma, B. J.
As is well known, Fermi, Pasta and Ulam [1] studied the energy redistribution between the linear modes of a one-dimensional chain of particles connected by weakly nonlinear springs. To their surprise, no apparent tendency toward equipartition of energy was observed in their numerical experiments. Much more is now known about this problem (see, for instance, the recent book by Gallavotti [2] or the review by Campbell et al. [3] in the focus issue on the FPU problem in the journal Chaos).
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.
2016-12-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method that allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: The capability of different misfit functionals to image wave speed anomalies and source distribution. Possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.
On the Foundation of Equipartition in Supernova Remnants
NASA Astrophysics Data System (ADS)
Urošević, Dejan; Pavlović, Marko Z.; Arbutina, Bojan
2018-03-01
A widely accepted paradigm is that equipartition (eqp) between the energy density of cosmic rays (CRs) and the energy density of the magnetic field cannot be sustained in supernova remnants (SNRs). However, our 3D hydrodynamic supercomputer simulations, coupled with a nonlinear diffusive shock acceleration model, provide evidence that eqp may be established at the end of the Sedov phase of evolution, in which most SNRs spend the longest portion of their lives. We introduce the term "constant partition" for any constant ratio between the CR energy density and the energy density of the magnetic field in an SNR, while the term "equipartition" should be reserved for the case of approximately equal values of the energy density of ultra-relativistic electrons only (or CRs in total) and the energy density of the magnetic field (which is also a constant partition, to within an order of magnitude). Our simulations suggest that this approximate constant partition exists in all but the youngest SNRs. We speculate that since evolved SNRs at the end of the Sedov phase of evolution can reach eqp between CRs and magnetic fields, they may be responsible for initializing this type of eqp in the interstellar medium. Additionally, we show that eqp between the electron component of CRs and the magnetic field may be used for calculating the magnetic field strength directly from observations of synchrotron emission from SNRs. The values of magnetic field strengths in SNRs given here are approximately 2.5 times lower than the values calculated by Arbutina et al.
Turbulent dynamo in a collisionless plasma
NASA Astrophysics Data System (ADS)
Rincon, François; Califano, Francesco; Schekochihin, Alexander A.; Valentini, Francesco
2016-04-01
Magnetic fields pervade the entire universe and affect the formation and evolution of astrophysical systems from cosmological to planetary scales. The generation and dynamical amplification of extragalactic magnetic fields through cosmic times (up to microgauss levels reported in nearby galaxy clusters, near equipartition with kinetic energy of plasma motions, and on scales of at least tens of kiloparsecs) are major puzzles largely unconstrained by observations. A dynamo effect converting kinetic flow energy into magnetic energy is often invoked in that context; however, extragalactic plasmas are weakly collisional (as opposed to magnetohydrodynamic fluids), and whether magnetic field growth and sustainment through an efficient turbulent dynamo instability are possible in such plasmas is not established. Fully kinetic numerical simulations of the Vlasov equation in a 6D-phase space necessary to answer this question have, until recently, remained beyond computational capabilities. Here, we show by means of such simulations that magnetic field amplification by dynamo instability does occur in a stochastically driven, nonrelativistic subsonic flow of initially unmagnetized collisionless plasma. We also find that the dynamo self-accelerates and becomes entangled with kinetic instabilities as magnetization increases. The results suggest that such a plasma dynamo may be realizable in laboratory experiments, support the idea that intracluster medium turbulence may have significantly contributed to the amplification of cluster magnetic fields up to near-equipartition levels on a timescale shorter than the Hubble time, and emphasize the crucial role of multiscale kinetic physics in high-energy astrophysical plasmas.
Inter-station coda wavefield studies using a novel icequake database on Erebus volcano
NASA Astrophysics Data System (ADS)
Chaput, J. A.; Campillo, M.; Roux, P.; Aster, R. C.
2013-12-01
Recent theoretical advances pertaining to the properties of multiply scattered wavefields have yielded a plethora of numerical and controlled source studies aiming to better understand what information may be derived from these otherwise chaotic signals. Practically, multiply scattered wavefields are difficult to compare to numerically derived models due to a combination of source paucity/directionality and array density limitations, particularly in passive seismology scenarios. Furthermore, in situations where data quantities are abundant, such as for ambient noise correlations, it remains very difficult to recover pseudo-Green's function symmetry in the ballistic components of the wavefield, let alone in the coda of the correlations. In this study, we use a large network of short period and broadband instruments on Erebus volcano to show that actual Green's function recovery is indeed possible in some cases. We make use of a large database of small impulsive icequakes distributed randomly on the summit plateau and, using fundamental theoretical properties of equipartitioned wavefields and interstation icequake coda correlations, are able to directly derive notoriously difficult quantities such as the bulk elastic mean free path for the volcano, demonstrations of correlation coda symmetry and its dependence on the number of icequakes used, and a theoretically predicted coherent backscattering amplification factor associated with weak localization. We furthermore show that stable equipartition and H^2/V^2 ratios may be consistently observed for icequake coda, and we perform simple depth inversions of these frequency dependent quantities to compare with known structures.
Magnetized Turbulent Dynamo in Protogalaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leonid Malyshkin; Russell M. Kulsrud
The prevailing theory for the origin of cosmic magnetic fields is that they have been amplified to their present values by the turbulent dynamo inductive action in the protogalactic and galactic medium. Up to now, in calculations of the turbulent dynamo, it has been customary to assume that there is no back reaction of the magnetic field on the turbulence as long as the magnetic energy is less than the turbulent kinetic energy. This assumption leads to the kinematic dynamo theory. However, the applicability of this theory to protogalaxies is rather limited. The reason is that in protogalaxies the temperature is very high, and the viscosity is dominated by magnetized ions. As the magnetic field strength grows in time, the ion cyclotron time becomes shorter than the ion collision time, and the plasma becomes strongly magnetized. As a result, the ion viscosity becomes the Braginskii viscosity. Thus, in protogalaxies the back reaction sets in much earlier, at field strengths much lower than those which correspond to field-turbulence energy equipartition, and the turbulent dynamo becomes what we call the magnetized turbulent dynamo. In this paper we lay the theoretical groundwork for the magnetized turbulent dynamo. In particular, we predict that the magnetic energy growth rate in the magnetized dynamo theory is up to ten times larger than that in the kinematic dynamo theory. We also briefly discuss how the Braginskii viscosity can aid the development of the inverse cascade of magnetic energy after energy equipartition is reached.
Global Energetics in Solar Flares and Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.
2017-08-01
We present a statistical study of the energetics of coronal mass ejections (CMEs) and compare it with the magnetic, thermal, and nonthermal energy dissipated in flares. The physical parameters of CME speeds, mass, and kinetic energies are determined with two independent methods, i.e., the traditional white-light scattering method using LASCO/SOHO data, and the EUV dimming method using AIA/SDO data. We analyze all 860 GOES M- and X-class flare events observed during the first 7 years (2010-2016) of the SDO mission. The new ingredients of our CME modeling include: (1) CME geometry in terms of a self-similar adiabatic expansion; (2) DEM analysis of the CME mass over the entire coronal temperature range; (3) deceleration of the CME due to gravity, which controls the kinetic and potential CME energy as a function of time; (4) the critical speed that separates eruptive and confined CMEs; (5) the relationship between the center-of-mass motion during EUV dimming and the leading-edge motion observed in white-light coronagraphs. Novel results are: (1) physical parameters obtained from both the EUV dimming and white-light methods can be reconciled; (2) the equipartition of CME kinetic and thermal flare energy; (3) the Rosner-Tucker-Vaiana scaling law. We find that the two methods in EUV and white-light wavelengths are highly complementary and yield more complete models than either method alone.
Breakdown of equipartition in diffuse fields caused by energy leakage
NASA Astrophysics Data System (ADS)
Margerin, L.
2017-05-01
Equipartition is a central concept in the analysis of random wavefields which stipulates that in an infinite scattering medium all modes and propagation directions become equally probable at long lapse time in the coda. The objective of this work is to examine quantitatively how this conclusion is affected in an open waveguide geometry, with a particular emphasis on seismological applications. To carry out this task, the problem is recast as a spectral analysis of the radiative transfer equation. Using a discrete ordinate approach, the smallest eigenvalue and associated eigenfunction of the transfer equation, which control the asymptotic intensity distribution in the waveguide, are determined numerically with the aid of a shooting algorithm. The inverse of this eigenvalue may be interpreted as the leakage time of the diffuse waves out of the waveguide. The associated eigenfunction provides the depth and angular distribution of the specific intensity. The effect of boundary conditions and scattering anisotropy is investigated in a series of numerical experiments. Two propagation regimes are identified, depending on the ratio H∗ between the thickness of the waveguide and the transport mean free path in the layer. The thick layer regime H∗ > 1 has been thoroughly studied in the literature in the framework of diffusion theory and is briefly considered. In the thin layer regime H∗ < 1, we find that both boundary conditions and scattering anisotropy leave a strong imprint on the leakage effect. A parametric study reveals that in the presence of a flat free surface, the leakage time is essentially controlled by the mean free time of the waves in the layer in the limit H∗ → 0. By contrast, when the free surface is rough, the travel time of ballistic waves propagating through the crust becomes the limiting factor. For fixed H∗, the efficacy of leakage, as quantified by the inverse coda quality factor, increases with scattering anisotropy.
For sufficiently thin layers H∗≈ 1/5, the energy flux is predominantly directed parallel to the surface and equipartition breaks down. Qualitatively, the anisotropy of the intensity field is found to increase with the inverse non-dimensional leakage time, with the scattering mean free time as time scale. Because it enhances leakage, a rough free surface may result in stronger anisotropy of the intensity field than a flat surface, for the same bulk scattering properties. Our work identifies leakage as a potential explanation for the large deviation from isotropy observed in the coda of body waves.
NASA Astrophysics Data System (ADS)
Tzeferacos, P.; Rigby, A.; Bott, A.; Bell, A.; Bingham, R.; Casner, A.; Cattaneo, F.; Churazov, E.; Forest, C.; Katz, J.; Koenig, M.; Li, C.-K.; Meinecke, J.; Petrasso, R.; Park, H.-S.; Remington, B.; Ross, J.; Ryutov, D.; Ryu, D.; Reville, B.; Miniati, F.; Schekochihin, A.; Froula, D.; Lamb, D.; Gregori, G.
2017-10-01
The universe is permeated by magnetic fields, with strengths ranging from a femtogauss in the voids between the filaments of galaxy clusters to several teragauss in black holes and neutron stars. The standard model for cosmological magnetic fields is the nonlinear amplification of seed fields via turbulent dynamo. We have conceived experiments to demonstrate and study the turbulent dynamo mechanism in the laboratory. Here, we describe the design of these experiments through large-scale 3D FLASH simulations on the Mira supercomputer at ANL, and the laser-driven experiments we conducted with the OMEGA laser at LLE. Our results indicate that turbulence is capable of rapidly amplifying seed fields to near equipartition with the turbulent fluid motions. This work was supported in part from the ERC (FP7/2007-2013, No. 256973 and 247039), and the U.S. DOE, Contract No. B591485 to LLNL, FWP 57789 to ANL, Grant No. DE-NA0002724 and DE-SC0016566 to the University of Chicago, and DE-AC02-06CH11357 to ANL.
Albert, A; André, M; Anghinolfi, M; Anton, G; Ardid, M; Aubert, J-J; Avgitas, T; Baret, B; Barrios-Martí, J; Basa, S; Bertin, V; Biagi, S; Bormuth, R; Bourret, S; Bouwhuis, M C; Bruijn, R; Brunner, J; Busto, J; Capone, A; Caramete, L; Carr, J; Celli, S; Chiarusi, T; Circella, M; Coelho, J A B; Coleiro, A; Coniglione, R; Costantini, H; Coyle, P; Creusot, A; Deschamps, A; De Bonis, G; Distefano, C; Di Palma, I; Domi, A; Donzaud, C; Dornic, D; Drouhin, D; Eberl, T; El Bojaddaini, I; Elsässer, D; Enzenhöfer, A; Felis, I; Folger, F; Fusco, L A; Galatà, S; Gay, P; Giordano, V; Glotin, H; Grégoire, T; Gracia Ruiz, R; Graf, K; Hallmann, S; van Haren, H; Heijboer, A J; Hello, Y; Hernández-Rey, J J; Hößl, J; Hofestädt, J; Hugon, C; Illuminati, G; James, C W; de Jong, M; Jongen, M; Kadler, M; Kalekin, O; Katz, U; Kießling, D; Kouchner, A; Kreter, M; Kreykenbohm, I; Kulikovskiy, V; Lachaud, C; Lahmann, R; Lefèvre, D; Leonora, E; Lotze, M; Loucatos, S; Marcelin, M; Margiotta, A; Marinelli, A; Martínez-Mora, J A; Mele, R; Melis, K; Michael, T; Migliozzi, P; Moussa, A; Nezri, E; Organokov, M; Păvălaş, G E; Pellegrino, C; Perrina, C; Piattelli, P; Popa, V; Pradier, T; Quinn, L; Racca, C; Riccobene, G; Sánchez-Losa, A; Saldaña, M; Salvadori, I; Samtleben, D F E; Sanguineti, M; Sapienza, P; Schüssler, F; Sieger, C; Spurio, M; Stolarczyk, Th; Taiuti, M; Tayalati, Y; Trovato, A; Turpin, D; Tönnis, C; Vallage, B; Van Elewyck, V; Versari, F; Vivolo, D; Vizzoca, A; Wilms, J; Zornoza, J D; Zúñiga, J
2017-01-01
A novel algorithm to reconstruct neutrino-induced particle showers within the ANTARES neutrino telescope is presented. The method achieves a median angular resolution of [Formula: see text] for shower energies below 100 TeV. Applying this algorithm to 6 years of data taken with the ANTARES detector, 8 events with reconstructed shower energies above 10 TeV are observed. This is consistent with the expectation of about 5 events from atmospheric backgrounds, but also compatible with diffuse astrophysical flux measurements by the IceCube collaboration, from which 2-4 additional events are expected. A [Formula: see text] C.L. upper limit on the diffuse astrophysical neutrino flux with a value per neutrino flavour of [Formula: see text] is set, applicable to the energy range from 23 TeV to 7.8 PeV, assuming an unbroken [Formula: see text] spectrum and neutrino flavour equipartition at Earth.
Monte Carlo calculations of diatomic molecule gas flows including rotational mode excitation
NASA Technical Reports Server (NTRS)
Yoshikawa, K. K.; Itikawa, Y.
1976-01-01
The direct simulation Monte Carlo method was used to solve the Boltzmann equation for flows of an internally excited nonequilibrium gas, namely, of rotationally excited homonuclear diatomic nitrogen. The semi-classical transition probability model of Itikawa was investigated for its ability to simulate flow fields far from equilibrium. The behavior of diatomic nitrogen was examined for several different nonequilibrium initial states that are subjected to uniform mean flow without boundary interactions. A sample of 1000 model molecules was observed as the gas relaxed to a steady state starting from three specified initial states. The initial states considered are: (1) complete equilibrium, (2) nonequilibrium, equipartition (all rotational energy states are assigned the mean energy level obtained at equilibrium with a Boltzmann distribution at the translational temperature), and (3) nonequipartition (the mean rotational energy is different from the equilibrium mean value with respect to the translational energy states). In all cases investigated the present model satisfactorily simulated the principal features of the relaxation effects in nonequilibrium flow of diatomic molecules.
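The relaxation toward rotational-translational equipartition described above can be caricatured by the phenomenological Jeans relaxation equation, dE_rot/dt = (E_eq − E_rot)/τ. This is a sketch, not the semi-classical transition-probability model of Itikawa used in the paper; units with k_B = 1 are assumed:

```python
def relax_rotational_energy(e_rot0, t_trans, tau, dt=1e-3, t_end=10.0):
    """Jeans-type relaxation: dE_rot/dt = (E_eq - E_rot)/tau. For a
    rigid-rotor diatomic (2 rotational degrees of freedom) the
    equipartition value is E_eq = k_B * T_trans per molecule; here
    energies are measured in units of k_B, so E_eq = T_trans."""
    e_eq = t_trans
    e_rot, t = e_rot0, 0.0
    while t < t_end:
        e_rot += (e_eq - e_rot) / tau * dt   # explicit Euler step
        t += dt
    return e_rot

# Start far from equipartition: cold rotation, hot translation
e_final = relax_rotational_energy(e_rot0=0.1, t_trans=1.0, tau=1.0)
print(f"E_rot after 10 relaxation times: {e_final:.4f}")  # -> ~1.0
```

The exponential approach to the equipartition value mirrors the steady states the DSMC molecules relax to from the three initial conditions listed in the abstract.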
Temperature for a dynamic spin ensemble
NASA Astrophysics Data System (ADS)
Ma, Pui-Wai; Dudarev, S. L.; Semenov, A. A.; Woo, C. H.
2010-09-01
In molecular dynamics simulations, temperature is evaluated, via the equipartition principle, by computing the mean kinetic energy of atoms. There is no similar recipe yet for evaluating temperature of a dynamic system of interacting spins. By solving semiclassical Langevin spin-dynamics equations, and applying the fluctuation-dissipation theorem, we derive an equation for the temperature of a spin ensemble, expressed in terms of dynamic spin variables. The fact that definitions for the kinetic and spin temperatures are fully consistent is illustrated using large-scale spin dynamics and spin-lattice dynamics simulations.
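The molecular dynamics recipe referred to above is T = 2⟨KE⟩/(3 N k_B). It can be illustrated by drawing Maxwell-Boltzmann velocities at a known temperature and recovering that temperature from the kinetic energy; the atom mass and sample size are arbitrary choices for the sketch:

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def kinetic_temperature(velocities, mass):
    """Equipartition estimate: (3/2) N k_B T = sum of 0.5 m v^2."""
    ke = sum(0.5 * mass * (vx*vx + vy*vy + vz*vz) for vx, vy, vz in velocities)
    return 2.0 * ke / (3.0 * len(velocities) * K_B)

# Draw Maxwell-Boltzmann velocities for iron atoms at a target 300 K
random.seed(42)
mass = 9.27e-26                        # iron atom mass [kg]
sigma = math.sqrt(K_B * 300.0 / mass)  # per-component velocity spread
vels = [(random.gauss(0, sigma), random.gauss(0, sigma), random.gauss(0, sigma))
        for _ in range(20000)]
print(f"T = {kinetic_temperature(vels, mass):.1f} K")  # close to 300 K
```

The paper's point is that no equally simple recipe existed for a spin ensemble; the spin temperature it derives plays the role that this kinetic estimator plays for atomic velocities.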
V.L.A. Observations of Solar Active Regions. I. The Slowly Varying Component
1980-08-01
bremsstrahlung accounts for the highly polarized radiation. In this situation the magnetic energy density H²/(8π) ≈ 10⁴ erg cm⁻³ vastly exceeds the equipartition value inferred from the virial theorem, for the thermal kinetic energy density in the "coronal condensations" is 3NkT ≈ 5 erg cm⁻³.
A Morphological Analysis of Gamma-Ray Burst Early-optical Afterglows
NASA Astrophysics Data System (ADS)
Gao, He; Wang, Xiang-Gao; Mészáros, Peter; Zhang, Bing
2015-09-01
Within the framework of the external shock model of gamma-ray burst (GRB) afterglows, we perform a morphological analysis of the early-optical light curves to directly constrain model parameters. We define four morphological types, i.e., the reverse shock-dominated cases with/without the emergence of the forward shock peak (Type I/Type II), and the forward shock-dominated cases without/with ν_m crossing the band (Type III/IV). We systematically investigate all of the Swift GRBs that have optical detection earlier than 500 s and find 3/63 Type I bursts (4.8%), 12/63 Type II bursts (19.0%), 30/63 Type III bursts (47.6%), 8/63 Type IV bursts (12.7%), and 10/63 Type III/IV bursts (15.9%). We perform Monte Carlo simulations to constrain model parameters in order to reproduce the observations. We find that the favored value of the magnetic equipartition parameter in the forward shock (ε_B,f) ranges from 10⁻⁶ to 10⁻², and the reverse-to-forward ratio of ε_B (R_B) is about 100. The preferred electron equipartition parameter ε_e (in both the reverse and forward shocks) is 0.01, which is smaller than the commonly assumed value, e.g., 0.1. This could mitigate the so-called "efficiency problem" for the internal shock model, if ε_e during the prompt emission phase (in the internal shocks) is large (say, ∼0.1). The preferred R_B value is in agreement with the results in previous works that indicate a moderately magnetized baryonic jet for GRBs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lifeng, E-mail: walfe@nuaa.edu.cn; Hu, Haiyan
The thermal vibration of a rectangular single-layered graphene sheet is investigated by using a rectangular nonlocal elastic plate model with quantum effects taken into account when the law of energy equipartition is unreliable. The relation between the temperature and the root-mean-squared (RMS) amplitude of vibration at any point of the rectangular single-layered graphene sheet in the simply supported case is derived first from the rectangular nonlocal elastic plate model, with the strain gradient of the second order taken into consideration so as to characterize the effect of the microstructure of the graphene sheet. Then, the RMS amplitude of thermal vibration of a rectangular single-layered graphene sheet simply supported on an elastic foundation is derived. The study shows that the RMS amplitude of the rectangular single-layered graphene sheet predicted from the quantum theory is lower than that predicted from the law of energy equipartition. The maximal relative difference of RMS amplitude of thermal vibration appears at the sheet corners. The microstructure of the graphene sheet has little effect on the thermal vibrations of lower modes, but exhibits an obvious effect on the thermal vibrations of higher modes. The quantum effect is more important for the thermal vibration of higher modes in the case of smaller sides and lower temperature. The relative difference of maximal RMS amplitude of thermal vibration of a rectangular single-layered graphene sheet decreases monotonically with an increase of temperature. The absolute difference of maximal RMS amplitude of thermal vibration of a rectangular single-layered graphene sheet increases slowly with increasing Winkler foundation modulus.
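The gap between the law of energy equipartition and the quantum prediction can be seen for a single vibrational mode: classically its mean thermal energy is k_B T, while the Planck result is suppressed once ħω ≳ k_B T. The 1 THz mode frequency below is an assumed illustrative value, not one from the paper:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant [J/K]
HBAR = 1.054571817e-34  # reduced Planck constant [J s]

def mean_thermal_energy(omega, temp, quantum=True):
    """Mean thermal energy of one harmonic mode (zero-point energy
    excluded). Classical equipartition gives k_B*T; the quantum (Planck)
    result is suppressed when hbar*omega >> k_B*T."""
    if not quantum:
        return K_B * temp
    x = HBAR * omega / (K_B * temp)
    return HBAR * omega / math.expm1(x)

omega = 2 * math.pi * 1e12  # assumed 1 THz flexural mode
for temp in (30.0, 300.0):
    q = mean_thermal_energy(omega, temp)
    c = mean_thermal_energy(omega, temp, quantum=False)
    print(f"T = {temp:5.0f} K, quantum/classical energy ratio = {q / c:.3f}")
```

The ratio falls well below unity at low temperature, which is the qualitative reason the quantum RMS amplitude in the paper is lower than the equipartition prediction, especially for high-frequency modes.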
Magnetic field formation in the Milky Way like disc galaxies of the Auriga project
NASA Astrophysics Data System (ADS)
Pakmor, Rüdiger; Gómez, Facundo A.; Grand, Robert J. J.; Marinacci, Federico; Simpson, Christine M.; Springel, Volker; Campbell, David J. R.; Frenk, Carlos S.; Guillet, Thomas; Pfrommer, Christoph; White, Simon D. M.
2017-08-01
The magnetic fields observed in the Milky Way and nearby galaxies appear to be in equipartition with the turbulent, thermal and cosmic ray energy densities, and hence are expected to be dynamically important. However, the origin of these strong magnetic fields is still unclear, and most previous attempts to simulate galaxy formation from cosmological initial conditions have ignored them altogether. Here, we analyse the magnetic fields predicted by the simulations of the Auriga Project, a set of 30 high-resolution cosmological zoom simulations of Milky Way-like galaxies, carried out with a moving-mesh magnetohydrodynamics code and a detailed galaxy formation physics model. We find that the magnetic fields grow exponentially at early times owing to a small-scale dynamo with an e-folding time of roughly 100 Myr in the centre of haloes until saturation occurs around z = 2-3, when the magnetic energy density reaches about 10 per cent of the turbulent energy density with a typical strength of 10-50 μG. In the galactic centres, the ratio between magnetic and turbulent energies remains nearly constant until z = 0. At larger radii, differential rotation in the discs leads to linear amplification that typically saturates around z = 0.5-0. The final radial and vertical variations of the magnetic field strength can be well described by two joint exponential profiles, and are in good agreement with observational constraints. Overall, the magnetic fields have little effect on the global evolution of the galaxies, as it takes too long to reach equipartition. We also demonstrate that our results are well converged with numerical resolution.
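The growth history described above (an e-folding time of roughly 100 Myr followed by saturation at about 10 per cent of the turbulent energy density) can be sketched as capped exponential amplification; the seed energy ratio below is an assumption for illustration:

```python
import math

def dynamo_growth_time(seed_ratio, efold_myr=100.0, sat_ratio=0.1, dt_myr=1.0):
    """Small-scale dynamo sketch: the magnetic-to-turbulent energy ratio
    grows exponentially with the given e-folding time until it reaches
    the saturation level. Returns the time to saturation in Myr."""
    ratio, t = seed_ratio, 0.0
    while ratio < sat_ratio:
        ratio *= math.exp(dt_myr / efold_myr)
        t += dt_myr
    return t

# Assumed seed field at 1e-10 of the turbulent energy density
t_sat = dynamo_growth_time(1e-10)
print(f"saturation after ~{t_sat:.0f} Myr")  # ~ ln(1e9) * 100 Myr ~ 2100 Myr
```

Roughly two Gyr to saturation is consistent with the paper's finding that saturation in halo centres occurs by z = 2-3.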
NASA Astrophysics Data System (ADS)
Emeriau-Viard, Constance; Brun, Allan Sacha
2017-10-01
During the PMS, the structure and rotation rate of stars evolve significantly. We wish to assess the consequences of these drastic changes on the stellar dynamo, internal magnetic field topology and activity level by means of HPC simulations with the ASH code. To answer this question, we develop 3D MHD simulations that represent specific stages of stellar evolution along the PMS. We choose five different models characterized by the radius of their radiative zone following an evolutionary track, from 1 Myr to 50 Myr, computed by a 1D stellar evolution code. We introduce a seed magnetic field in the youngest model and then carry it through all subsequent simulations. First of all, we study the consequences that the increase of rotation rate and the change of geometry of the convective zone have on the dynamo field that exists in the convective envelope. The magnetic energy increases, the topology of the magnetic field becomes more complex and the axisymmetric magnetic field becomes less predominant as the star ages. The computation of the fully convective MHD model shows that a strong dynamo develops, with the ratio of magnetic to kinetic energy reaching equipartition and even super-equipartition states in the faster rotating cases. Magnetic fields resulting from our MHD simulations possess a mixed poloidal-toroidal topology with no obvious dominant component. We also study the relaxation of the vestigial dynamo magnetic field within the radiative core and find that it satisfies stability criteria. Hence it does not experience a global reconfiguration and instead slowly relaxes by retaining its mixed poloidal-toroidal topology.
Energy Budget of Forming Clumps in Numerical Simulations of Collapsing Clouds
NASA Astrophysics Data System (ADS)
Camacho, Vianey; Vázquez-Semadeni, Enrique; Ballesteros-Paredes, Javier; Gómez, Gilberto C.; Fall, S. Michael; Mata-Chávez, M. Dolores
2016-12-01
We analyze the physical properties and energy balance of density enhancements in two SPH simulations of the formation, evolution, and collapse of giant molecular clouds. In the simulations, no feedback is included, so all motions are due either to the initial decaying turbulence or to gravitational contraction. We define clumps as connected regions above a series of density thresholds. The resulting full set of clumps follows the generalized energy equipartition relation, σ_v/R^(1/2) ∝ Σ^(1/2), where σ_v is the velocity dispersion, R is the "radius," and Σ is the column density. We interpret this as a natural consequence of gravitational contraction at all scales rather than virial equilibrium. Nevertheless, clumps with low Σ tend to show a large scatter around equipartition. In more than half of the cases, this scatter is dominated by external turbulent compressions that assemble the clumps rather than by small-scale random motions that would disperse them. The other half does actually disperse. Moreover, clump sub-samples selected by means of different criteria exhibit different scalings. Sub-samples with narrow Σ ranges follow Larson-like relations, although characterized by their respective values of Σ. Finally, we find that (I) clumps lying in filaments tend to appear sub-virial, (II) high-density cores (n ≥ 10⁵ cm⁻³) that exhibit moderate kinetic energy excesses often contain sink ("stellar") particles and the excess disappears when the stellar mass is taken into account in the energy balance, and (III) cores with kinetic energy excess but no stellar particles are truly in a state of dispersal.
Barvinsky, A O
2007-08-17
The density matrix of the Universe for the microcanonical ensemble in quantum cosmology describes an equipartition in the physical phase space of the theory (sum over everything), but in terms of the observable spacetime geometry this ensemble is peaked about the set of recently obtained cosmological instantons limited to a bounded range of the cosmological constant. This suggests the mechanism of constraining the landscape of string vacua and a possible solution to the dark energy problem in the form of the quasiequilibrium decay of the microcanonical state of the Universe.
Phlegethon flow: A proposed origin for spicules and coronal heating
NASA Technical Reports Server (NTRS)
Schatten, Kenneth H.; Mayr, Hans G.
1986-01-01
A model was developed for the mass, energy, and magnetic field transport into the corona. The focus is on the flow below the photosphere which allows the energy to pass into, and be dissipated within, the solar atmosphere. The high flow velocities observed in spicules are explained. A treatment following the work of Bailyn et al. (1985) is examined. It was concluded that within the framework of the model, energy may dissipate at a temperature comparable to the temperature where the waves originated, allowing for an equipartition solution of atmospheric flow, departing the Sun at velocities approaching the maximum Alfven speed.
Radiative Efficiency of Collisionless Accretion
NASA Astrophysics Data System (ADS)
Gruzinov, Andrei V.
1998-07-01
The radiative efficiency, η ≡ L/(Ṁc²), of a slowly accreting black hole is estimated using a two-temperature model of accretion. The radiative efficiency depends on the magnetic field strength near the Schwarzschild radius. For weak magnetic fields, i.e., β⁻¹ ≡ B²/(8πp) ≲ 10⁻³, the low efficiency η ~ 10⁻⁴ that is assumed in some theoretical models is achieved. For β⁻¹ > 10⁻³, a significant fraction of the viscous heat is dissipated by electrons and radiated away, resulting in η > 10⁻⁴. At equipartition magnetic fields, β⁻¹ ~ 1, we estimate η ~ 10⁻¹.
Neutrino signal of electron-capture supernovae from core collapse to cooling.
Hüdepohl, L; Müller, B; Janka, H-T; Marek, A; Raffelt, G G
2010-06-25
An 8.8 M⊙ electron-capture supernova was simulated in spherical symmetry consistently from collapse through explosion to essentially complete deleptonization of the forming neutron star. The evolution time (∼9 s) is short because high-density effects suppress our neutrino opacities. After a short phase of accretion-enhanced luminosities (∼200 ms), luminosity equipartition among all species becomes almost perfect and the spectra of ν_e and ν_{μ,τ} become very similar, ruling out the neutrino-driven wind as an r-process site. We also discuss consequences for neutrino flavor oscillations.
Experimental Study of Short-Time Brownian Motion
NASA Astrophysics Data System (ADS)
Mo, Jianyong; Simha, Akarsh; Riegler, David; Raizen, Mark
2015-03-01
We report our progress on the study of short-time Brownian motion of optically-trapped microspheres. In earlier work, we observed the instantaneous velocity of microspheres in gas and in liquid, verifying a prediction by Albert Einstein from 1907. We now report a more accurate test of the energy equipartition theorem for a particle in liquid. We also observe boundary effects on Brownian motion in liquid by setting a wall near the trapped particle, which changes the dynamics of the motion. We find that the velocity autocorrelation of the particle decreases faster as the particle gets closer to the wall.
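The equipartition prediction being tested above is that each velocity component of the trapped particle carries k_B T/2 of energy, so v_rms = sqrt(k_B T/m) per component. A sketch with assumed bead parameters, not the actual experimental values:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def rms_velocity_1d(temp, radius_m, density):
    """Equipartition prediction for one velocity component of a sphere:
    0.5 * m * <v^2> = 0.5 * k_B * T  ->  v_rms = sqrt(k_B * T / m)."""
    mass = (4.0 / 3.0) * math.pi * radius_m**3 * density
    return math.sqrt(K_B * temp / mass)

# Assumed 1-micron-diameter silica bead in water at room temperature
v = rms_velocity_1d(295.0, 0.5e-6, 2200.0)
print(f"v_rms ~ {v * 1e3:.2f} mm/s")  # of order a few mm/s
```

Comparing this prediction with the measured velocity variance is the essence of the energy equipartition test the abstract describes; near a wall, hydrodynamic coupling changes the velocity autocorrelation but not this equilibrium variance.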
NASA Astrophysics Data System (ADS)
Baldwin, A. T.; Watkins, L. L.; van der Marel, R. P.; Bianchini, P.; Bellini, A.; Anderson, J.
2016-08-01
We make use of the Hubble Space Telescope proper-motion catalogs derived by Bellini et al. to produce the first radial velocity dispersion profiles σ(R) for blue straggler stars (BSSs) in Galactic globular clusters (GCs), as well as the first dynamical estimates for the average mass of the entire BSS population. We show that BSSs typically have lower velocity dispersions than stars with mass equal to the main-sequence turnoff mass, as one would expect for a more massive population of stars. Since GCs are expected to experience some degree of energy equipartition, we use the relation σ ∝ M^(−η), where η is related to the degree of energy equipartition, along with our velocity dispersion profiles to estimate BSS masses. We estimate η as a function of cluster relaxation from recent Monte Carlo cluster simulations by Bianchini et al. and then derive an average mass ratio M_BSS/M_MSTO = 1.50 ± 0.14 and an average mass M_BSS = 1.22 ± 0.12 M⊙ from 598 BSSs across 19 GCs. The final error bars include any systematic errors that are random between different clusters, but not any potential biases inherent to our methodology. Our results are in good agreement with the average mass of M_BSS = 1.22 ± 0.06 M⊙ for the 35 BSSs in Galactic GCs in the literature with properties that have allowed individual mass determination. Based on proprietary and archival observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
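The mass estimate follows from inverting σ ∝ M^(−η): M_BSS/M_MSTO = (σ_BSS/σ_MSTO)^(−1/η). A sketch with an assumed degree of equipartition η and an assumed dispersion ratio (illustrative numbers, not the paper's fitted values):

```python
def mass_ratio_from_dispersions(sigma_ratio, eta):
    """Invert sigma ∝ M^(-eta): M_BSS/M_MSTO = (sigma ratio)^(-1/eta).
    eta = 0 means no equipartition, eta = 0.5 full equipartition."""
    return sigma_ratio ** (-1.0 / eta)

# Assumed: BSS dispersion is 85% of the turnoff-star value, and partial
# energy equipartition with eta = 0.4 (both numbers are illustrative)
ratio = mass_ratio_from_dispersions(0.85, 0.4)
print(f"M_BSS / M_MSTO ~ {ratio:.2f}")  # heavier than the turnoff stars
```

A modest dispersion deficit thus translates into a substantially larger inferred mass, which is why the lower BSS dispersions quoted above imply a more massive population.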
THE EFFECT OF UNRESOLVED BINARIES ON GLOBULAR CLUSTER PROPER-MOTION DISPERSION PROFILES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bianchini, P.; Norris, M. A.; Ven, G. van de
2016-03-20
High-precision kinematic studies of globular clusters (GCs) require an accurate knowledge of all possible sources of contamination. Among other sources, binary stars can introduce systematic biases in the kinematics. Using a set of Monte Carlo cluster simulations with different concentrations and binary fractions, we investigate the effect of unresolved binaries on proper-motion dispersion profiles, treating the simulations like Hubble Space Telescope proper-motion samples. Since GCs evolve toward a state of partial energy equipartition, more-massive stars lose energy and decrease their velocity dispersion. As a consequence, on average, binaries have a lower velocity dispersion, since they are more-massive kinematic tracers. We show that, in the case of clusters with high binary fractions (initial binary fractions of 50%) and high concentrations (i.e., closer to energy equipartition), unresolved binaries introduce a color-dependent bias in the velocity dispersion of main-sequence stars of the order of 0.1-0.3 km s⁻¹ (corresponding to 1%-6% of the velocity dispersion), with the reddest stars having a lower velocity dispersion, due to the higher fraction of contaminating binaries. This bias depends on the ability to distinguish binaries from single stars, on the details of the color-magnitude diagram and the photometric errors. We apply our analysis to the HSTPROMO data set of NGC 7078 (M15) and show that no effect ascribable to binaries is observed, consistent with the low binary fraction of the cluster. Our work indicates that binaries do not significantly bias proper-motion velocity-dispersion profiles, but should be taken into account in the error budget of kinematic analyses.
NASA Astrophysics Data System (ADS)
Margerin, Ludovic
2013-01-01
This paper presents an analytical study of the multiple scattering of seismic waves by a collection of randomly distributed point scatterers. The theory assumes that the energy envelopes are smooth, but does not require perturbations to be small, thereby allowing the modelling of strong, resonant scattering. The correlation tensor of seismic coda waves recorded at a three-component sensor is decomposed into a sum of eigenmodes of the elastodynamic multiple scattering (Bethe-Salpeter) equation. For a general moment tensor excitation, a total number of four modes is necessary to describe the transport of seismic waves polarization. Their spatio-temporal dependence is given in closed analytical form. Two additional modes transporting exclusively shear polarizations may be excited by antisymmetric moment tensor sources only. The general solution converges towards an equipartition mixture of diffusing P and S waves which allows the retrieval of the local Green's function from coda waves. The equipartition time is obtained analytically and the impact of absorption on Green's function reconstruction is discussed. The process of depolarization of multiply scattered waves and the resulting loss of information is illustrated for various seismic sources. It is shown that coda waves may be used to characterize the source mechanism up to lapse times of the order of a few mean free times only. In the case of resonant scatterers, a formula for the diffusivity of seismic waves incorporating the effect of energy entrapment inside the scatterers is obtained. Application of the theory to high-contrast media demonstrates that coda waves are more sensitive to slow rather than fast velocity anomalies by several orders of magnitude. Resonant scattering appears as an attractive physical phenomenon to explain the small values of the diffusion constant of seismic waves reported in volcanic areas.
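At full elastic equipartition, mode counting fixes the S-to-P energy ratio at 2(α/β)³, where α and β are the P- and S-wave speeds; for a Poisson solid (α/β = √3) this gives about 10.4. A one-line check of this classical result:

```python
import math

def s_to_p_energy_ratio(vp, vs):
    """Mode-density argument: at equipartition the S-to-P energy ratio in
    an elastic body is 2 * (vp/vs)**3 (two S polarizations, one P)."""
    return 2.0 * (vp / vs) ** 3

# Poisson solid: vp/vs = sqrt(3)
ratio = s_to_p_energy_ratio(math.sqrt(3.0), 1.0)
print(f"E_S / E_P = {ratio:.2f}")  # ~10.39
```

The strong dominance of shear energy in the equipartitioned coda is what makes the diffuse P-S mixture described above a useful reference state for Green's function retrieval.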
Kinematics of Parsec-scale Jets of Gamma-Ray Blazars at 43 GHz within the VLBA-BU-BLAZAR Program
NASA Astrophysics Data System (ADS)
Jorstad, Svetlana G.; Marscher, Alan P.; Morozova, Daria A.; Troitsky, Ivan S.; Agudo, Iván; Casadio, Carolina; Foord, Adi; Gómez, José L.; MacDonald, Nicholas R.; Molina, Sol N.; Lähteenmäki, Anne; Tammi, Joni; Tornikoski, Merja
2017-09-01
We analyze the parsec-scale jet kinematics from 2007 June to 2013 January of a sample of γ-ray bright blazars monitored roughly monthly with the Very Long Baseline Array at 43 GHz. In a total of 1929 images, we measure apparent speeds of 252 emission knots in 21 quasars, 12 BL Lacertae objects (BL Lacs), and 3 radio galaxies, ranging from 0.02c to 78c; 21% of the knots are quasi-stationary. Approximately one-third of the moving knots execute non-ballistic motions, with the quasars exhibiting acceleration along the jet within 5 pc (projected) of the core, and knots in BL Lacs tending to decelerate near the core. Using the apparent speeds of the components and the timescales of variability from their light curves, we derive the physical parameters of 120 superluminal knots, including variability Doppler factors, Lorentz factors, and viewing angles. We estimate the half-opening angle of each jet based on the projected opening angle and scatter of intrinsic viewing angles of knots. We determine characteristic values of the physical parameters for each jet and active galactic nucleus class based on the range of values obtained for individual features. We calculate the intrinsic brightness temperatures of the cores, T_b,int^core, at all epochs, finding that the radio galaxies usually maintain equipartition conditions in the cores, while ∼30% of the T_b,int^core measurements in the quasars and BL Lacs deviate from equipartition values by a factor >10. This probably occurs during transient events connected with active states. In the Appendix, we briefly describe the behavior of each blazar during the period analyzed.
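The derivation of Lorentz factors and viewing angles from apparent knot speeds rests on the standard beaming relations β_app = β sinθ/(1 − β cosθ) and δ = 1/[Γ(1 − β cosθ)]. A sketch with assumed jet parameters, not values from the paper:

```python
import math

def apparent_speed(gamma, theta_rad):
    """Apparent transverse speed in units of c:
    beta_app = beta * sin(theta) / (1 - beta * cos(theta))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return beta * math.sin(theta_rad) / (1.0 - beta * math.cos(theta_rad))

def doppler_factor(gamma, theta_rad):
    """Relativistic Doppler factor: delta = 1 / (gamma * (1 - beta*cos(theta)))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta_rad)))

# Assumed blazar-like jet: Lorentz factor 10 viewed 5 degrees off-axis
theta = math.radians(5.0)
print(f"beta_app = {apparent_speed(10.0, theta):.1f} c, "
      f"delta = {doppler_factor(10.0, theta):.1f}")
```

A modest Lorentz factor viewed close to the line of sight already yields strongly superluminal apparent speeds, which is why the observed range up to 78c implies highly relativistic, well-aligned jets.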
Fermi/LAT observations of lobe-dominant radio galaxy 3C 207 and possible radiation region of γ-rays
NASA Astrophysics Data System (ADS)
Guo, Sheng-Chu; Zhang, Hai-Ming; Zhang, Jin; Liang, En-Wei
2018-06-01
3C 207 is a lobe-dominant radio galaxy with a one-sided jet and bright knots spanning kpc to Mpc scales, which have been resolved in the radio, optical and X-ray bands. This target was confirmed as a γ-ray emitter with Fermi/LAT, but it is uncertain whether the γ-ray emission region is the core or the knots due to the low spatial resolution of Fermi/LAT. We present an analysis of its Fermi/LAT data acquired during the past 9 years. Unlike the radio and optical emission from the core, the γ-ray emission is found to be steady, with no flux variation detected above the 2σ confidence level. This likely implies that the γ-ray emission is from its knots. We collect the radio, optical and X-ray data of knot-A, the knot closest to the core at 1.4″, and compile its spectral energy distribution (SED). Although the single-zone synchrotron+SSC+IC/CMB model that assumes knot-A is at rest can reproduce the SED in the radio-optical-X-ray band, the predicted γ-ray flux is lower than the LAT observations and the derived magnetic field strength deviates from the equipartition condition by 3 orders of magnitude. Assuming that knot-A is moving relativistically, its SED from the radio to γ-ray bands is well represented by the single-zone synchrotron+SSC+IC/CMB model under the equipartition condition. These results suggest that the γ-ray emission may be from knot-A via the IC/CMB process and that the knot moves relativistically. The jet power derived from our model parameters is also roughly consistent with the kinetic power estimated from radio data.
FR II radio galaxies at low frequencies - I. Morphology, magnetic field strength and energetics.
Harwood, Jeremy J; Croston, Judith H; Intema, Huib T; Stewart, Adam J; Ineson, Judith; Hardcastle, Martin J; Godfrey, Leith; Best, Philip; Brienza, Marisa; Heesen, Volker; Mahony, Elizabeth K; Morganti, Raffaella; Murgia, Matteo; Orrú, Emanuela; Röttgering, Huub; Shulevski, Aleksandar; Wise, Michael W
2016-06-01
Due to their steep spectra, low-frequency observations of Fanaroff-Riley type II (FR II) radio galaxies potentially provide key insights into the morphology, energetics and spectrum of these powerful radio sources. However, limitations imposed by the previous generation of radio interferometers at metre wavelengths have meant that this region of parameter space remains largely unexplored. In this paper, the first in a series examining FR IIs at low frequencies, we use LOFAR (LOw Frequency ARray) observations between 50 and 160 MHz, along with complementary archival radio and X-ray data, to explore the properties of two FR II sources, 3C 452 and 3C 223. We find that the morphology of 3C 452 is that of a standard FR II rather than of a double-double radio galaxy as had previously been suggested, with no remnant emission being observed beyond the active lobes. We find that the low-frequency integrated spectra of both sources are much steeper than expected based on traditional assumptions and, using synchrotron/inverse-Compton model fitting, show that the total energy content of the lobes is greater than previous estimates by a factor of around 5 for 3C 452 and 2 for 3C 223. We go on to discuss possible causes of these steeper-than-expected spectra and provide revised estimates of the internal pressures and magnetic field strengths for the intrinsically steep case. We find that the ratio between the equipartition magnetic field strengths and those derived through synchrotron/inverse-Compton model fitting remains consistent with previous findings and show that the observed departure from equipartition may in some cases provide a solution to the spectral versus dynamical age disparity.
A KPC-Scale X-Ray Jet in the BL Lac Source S5 2007+777
NASA Technical Reports Server (NTRS)
Sambruna, Rita M.; Donato, Davide; Cheung, C.C.; Tavecchio, F.; Maraschi, L.
2008-01-01
X-ray jets in AGN are commonly observed in FRII and FRI radio galaxies, but rarely in BL Lacs, most probably due to their orientation close to the line of sight and the ensuing foreshortening effects. Only three BL Lacs are known so far to contain a kpc-scale X-ray jet. In this paper, we present evidence for the existence of a fourth extended X-ray jet in the classical radio-selected source S5 2007+777, which, owing to its hybrid FRI/II radio morphology, has been classified as a HYMOR (HYbrid MOrphology Radio source). Our Chandra ACIS-S observations of this source revealed an X-ray counterpart to the 19"-long radio jet. Interestingly, the X-ray properties of the kpc-scale jet in S5 2007+777 are very similar to those observed in FRII jets. First, the X-ray morphology closely mirrors the radio one, with the X-rays being concentrated in the discrete radio knots. Second, the X-ray continuum of the jet/brightest knot is described by a very hard power law, with photon index Γ_x ≈ 1. Third, the optical upper limit from archival HST data implies a concave radio-to-X-ray SED. If the X-ray emission is attributed to IC/CMB with equipartition, strong beaming (δ = 13) is required, implying a very large scale (Mpc) jet. The beaming requirement can be somewhat relaxed assuming a magnetic field lower than equipartition. Alternatively, synchrotron emission from a second population of very high-energy electrons is viable. Comparison to other HYMOR jets detected with Chandra is discussed, as well as general implications for the origin of the FRI/II division.
IS THE SMALL-SCALE MAGNETIC FIELD CORRELATED WITH THE DYNAMO CYCLE?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karak, Bidya Binay; Brandenburg, Axel, E-mail: bbkarak@nordita.org
2016-01-01
The small-scale magnetic field is ubiquitous at the solar surface—even at high latitudes. From observations we know that this field is uncorrelated (or perhaps even weakly anticorrelated) with the global sunspot cycle. Our aim is to explore the origin, and particularly the cycle dependence, of such a phenomenon using three-dimensional dynamo simulations. We adopt a simple model of a turbulent dynamo in a shearing box driven by helically forced turbulence. Depending on the dynamo parameters, large-scale (global) and small-scale (local) dynamos can be excited independently in this model. Based on simulations in different parameter regimes, we find that, when only the large-scale dynamo is operating in the system, the small-scale magnetic field generated through shredding and tangling of the large-scale magnetic field is positively correlated with the global magnetic cycle. However, when both dynamos are operating, the small-scale field is produced from both the small-scale dynamo and the tangling of the large-scale field. In this situation, when the large-scale field is weaker than the equipartition value of the turbulence, the small-scale field is almost uncorrelated with the large-scale magnetic cycle. On the other hand, when the large-scale field is stronger than the equipartition value, we observe an anticorrelation between the small-scale field and the large-scale magnetic cycle. This anticorrelation can be interpreted as a suppression of the small-scale dynamo. Based on our studies we conclude that the observed small-scale magnetic field in the Sun is generated by the combined mechanisms of a small-scale dynamo and tangling of the large-scale field.
Torrens, Francisco; Castellano, Gloria
2014-06-05
Pesticide residues in wine were analyzed by liquid chromatography-tandem mass spectrometry. Retentions are modelled by structure-property relationships. Bioplastic evolution is an evolutionary perspective conjugating the effect of acquired characters and the evolutionary indeterminacy-morphological determination-natural selection principles; its application to design the co-ordination index barely improves correlations. Fractal dimensions and the partition coefficient differentiate pesticides. Classification algorithms are based on information entropy and its production. Pesticides allow a structural classification by nonplanarity and the number of O, S, N and Cl atoms and cycles; different behaviours depend on the number of cycles. The novelty of the approach is that the structural parameters are related to retentions. When applying the procedures to moderate-sized sets, excessive results appear, compatible with the data suffering a combinatorial explosion. However, the equipartition conjecture selects the criterion resulting from classification between hierarchical trees. Information entropy permits classifying compounds in agreement with principal component analyses. Periodic classification shows that pesticides in the same group present similar properties; those also in the same period show maximum resemblance. The advantage of the classification is that it predicts the retentions for molecules not included in the categorization. The classification extends to phenyl/sulphonylureas, and the application will be to predict their retentions.
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel
2011-06-01
Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
An explosion model for the formation of the radio halo of NGC 891
NASA Astrophysics Data System (ADS)
You, Jun-han; Allen, R. J.; Hu, Fu-xing
1987-06-01
The explosion model for the formation of the radio halo of NGC 891 proposed here is mainly based on two physical assumptions: (a) the relativistic electrons belong to two families, a halo family and a disk family, the disk family originating in supernova events throughout the disk and the halo family in a violent explosion of the galactic nucleus in the distant past; and (b) energy equipartition, that is, the magnetic energy density is proportional to the number density of stars. On these two assumptions, the main observed features of the radio halo of NGC 891 can be satisfactorily explained.
An explosion model for the formation of the radio halo of NGC 891
NASA Astrophysics Data System (ADS)
You, Jun-Han; Allen, R. J.; Hu, Fu-Xing
1986-06-01
The explosion model for the formation of the radio halo of NGC 891 proposed here is mainly based on two physical assumptions: (1) the relativistic electrons belong to two families, a halo family and a disk family, the disk family originating in supernova events throughout the disk, and the halo family in a violent explosion of the galactic nucleus in the distant past; and (2) energy equipartition, where the magnetic energy density is proportional to the number density of stars. On these two assumptions, the main observed features of the radio halo of NGC 891 can be satisfactorily explained.
NASA Astrophysics Data System (ADS)
Zapiór, Maciej; Martínez-Gómez, David
2016-02-01
Based on the data collected by the Vacuum Tower Telescope located in the Teide Observatory in the Canary Islands, we analyzed the three-dimensional (3D) motion of so-called knots in a solar prominence of 2014 June 9. Trajectories of seven knots were reconstructed, giving information about the 3D geometry of the magnetic field. Helical motion was detected. From the equipartition principle, we estimated the lower limit of the magnetic field in the prominence to be ≈1-3 G and, from Ampère's law, the lower limit of the electric current to be ≈1.2 × 10^9 A.
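The equipartition lower limit quoted here follows from balancing kinetic and magnetic energy densities, B^2/(2μ0) = ρv^2/2, so B = v·sqrt(μ0·ρ). A minimal sketch in SI units; the prominence density and knot speed below are assumed illustrative values, not the paper's measurements.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T m / A

def equipartition_b_field(rho_kg_m3, v_m_s):
    """Lower-limit field (tesla) from B^2/(2*mu0) = rho*v^2/2."""
    return v_m_s * math.sqrt(MU0 * rho_kg_m3)

# Assumed prominence plasma density ~2e-10 kg/m^3 and knot speed ~20 km/s
B = equipartition_b_field(2e-10, 2e4)
print(f"B >= {B * 1e4:.2f} G")  # 1 T = 1e4 G
```

With these placeholder numbers the limit comes out at a few gauss, the same order as the ≈1-3 G range reported above.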
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fläschner, G.; Ruschmeier, K.; Schwarz, A., E-mail: aschwarz@physnet.uni-hamburg.de
The sensitivity of atomic force microscopes is fundamentally limited by the cantilever temperature, which can be, in principle, determined by measuring its thermal spectrum and applying the equipartition theorem. However, the mechanical response can be affected by the light field inside the cavity of a Fabry-Perot interferometer due to light absorption, radiation pressure, photothermal forces, and laser noise. By evaluating the optomechanical Hamiltonian, we are able to explain the peculiar distance dependence of the mechanical quality factor as well as the appearance of thermal spectra with symmetrical Lorentzian as well as asymmetrical Fano line shapes. Our results can be applied to any type of mechanical oscillator in an interferometer-based detection system.
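The equipartition step mentioned above reduces, for a harmonic cantilever mode of stiffness k, to (1/2)k⟨x²⟩ = (1/2)k_B·T, so the temperature follows from the measured thermal displacement variance. A minimal sketch with assumed example numbers (not taken from the paper):

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def temperature_from_thermal_motion(stiffness_n_m, mean_square_disp_m2):
    """Equipartition: (1/2) k <x^2> = (1/2) kB T  =>  T = k <x^2> / kB."""
    return stiffness_n_m * mean_square_disp_m2 / KB

# Assumed: a k = 1 N/m cantilever showing 64 pm rms thermal displacement
T = temperature_from_thermal_motion(1.0, (64e-12) ** 2)
print(f"T ≈ {T:.0f} K")
```

The paper's point is that cavity optomechanical effects distort ⟨x²⟩ and the spectral line shape, so this naive inversion can misreport the true cantilever temperature.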
Thermodynamics and statistical mechanics. [thermodynamic properties of gases
NASA Technical Reports Server (NTRS)
1976-01-01
The basic thermodynamic properties of gases are reviewed and the relations between them are derived from the first and second laws. The elements of statistical mechanics are then formulated and the partition function is derived. The classical form of the partition function is used to obtain the Maxwell-Boltzmann distribution of kinetic energies in the gas phase and the equipartition of energy theorem is given in its most general form. The thermodynamic properties are all derived as functions of the partition function. Quantum statistics are reviewed briefly and the differences between the Boltzmann distribution function for classical particles and the Fermi-Dirac and Bose-Einstein distributions for quantum particles are discussed.
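The equipartition theorem derived from the Maxwell-Boltzmann distribution implies ⟨(1/2)mv²⟩ = (3/2)k_B·T for translational motion. A quick Monte Carlo check, sampling each Cartesian velocity component from its Gaussian distribution of variance k_B·T/m; the N2 molecular mass is a standard value, the rest is illustrative:

```python
import math
import random

KB = 1.380649e-23  # Boltzmann constant, J/K
random.seed(0)

def mean_kinetic_energy(mass_kg, temp_k, n=200000):
    """Sample Maxwell-Boltzmann velocities (each component Gaussian with
    variance kB*T/m) and average the translational kinetic energy."""
    sigma = math.sqrt(KB * temp_k / mass_kg)
    total = 0.0
    for _ in range(n):
        vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
        total += 0.5 * mass_kg * (vx * vx + vy * vy + vz * vz)
    return total / n

m_n2 = 4.65e-26  # mass of one N2 molecule, kg
e_mean = mean_kinetic_energy(m_n2, 300.0)
print(e_mean / (KB * 300.0))  # converges to 3/2, i.e. (1/2) kB T per dof
```

Each of the three translational degrees of freedom contributes (1/2)k_B·T, which is the most elementary instance of the general equipartition statement reviewed above.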
Holographic shell model: Stack data structure inside black holes?
NASA Astrophysics Data System (ADS)
Davidson, Aharon
2014-03-01
Rather than tiling the black hole horizon by Planck area patches, we suggest that bits of information inhabit, universally and holographically, the entire black core interior, a bit per a light sheet unit interval of order Planck area difference. The number of distinguishable (tagged by a binary code) configurations, counted within the context of a discrete holographic shell model, is given by the Catalan series. The area entropy formula is recovered, including Cardy's universal logarithmic correction, and the equipartition of mass per degree of freedom is proven. The black hole information storage resembles, in the count procedure, the so-called stack data structure.
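The Catalan counting described above can be checked numerically: the standard asymptotic C_n ~ 4^n / (n^{3/2}·sqrt(π)) gives ln C_n ≈ n·ln 4 − (3/2)·ln n − (1/2)·ln π, whose leading term plays the role of the area term and whose −(3/2)·ln n piece matches the quoted universal logarithmic correction. A sketch (the entropy identification follows the abstract; the asymptotic formula itself is standard combinatorics):

```python
import math

def catalan(n):
    """n-th Catalan number via the closed form C_n = binom(2n, n) / (n + 1)."""
    return math.comb(2 * n, n) // (n + 1)

n = 200
exact = math.log(catalan(n))
approx = n * math.log(4) - 1.5 * math.log(n) - 0.5 * math.log(math.pi)
print(exact, approx)  # agree to O(1/n)
```

For n = 200 the exact and asymptotic values of ln C_n already agree to better than one percent of a nat, showing how quickly the logarithmic correction becomes the dominant subleading term.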
Welch, Kyle J; Hastings-Hauss, Isaac; Parthasarathy, Raghuveer; Corwin, Eric I
2014-04-01
We have constructed a macroscopic driven system of chaotic Faraday waves whose statistical mechanics, we find, are surprisingly simple, mimicking those of a thermal gas. We use real-time tracking of a single floating probe, energy equipartition, and the Stokes-Einstein relation to define and measure a pseudotemperature and diffusion constant and then self-consistently determine a coefficient of viscous friction for a test particle in this pseudothermal gas. Because of its simplicity, this system can serve as a model for direct experimental investigation of nonequilibrium statistical mechanics, much as the ideal gas epitomizes equilibrium statistical mechanics.
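The self-consistent chain described above (equipartition defines a pseudotemperature; the Stokes-Einstein relation D = k_B·T/γ then yields a friction coefficient) can be sketched as follows. The probe mass, velocity variance, and diffusion constant are placeholder values, not measurements from the experiment:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def pseudotemperature(mass_kg, mean_square_speed_1d):
    """Equipartition for one translational degree of freedom:
    (1/2) m <v^2> = (1/2) kB T  =>  T = m <v^2> / kB."""
    return mass_kg * mean_square_speed_1d / KB

def friction_coefficient(mass_kg, mean_square_speed_1d, diffusion_m2_s):
    """Stokes-Einstein, D = kB*T / gamma, solved for gamma with the
    equipartition pseudotemperature substituted for T."""
    t_eff = pseudotemperature(mass_kg, mean_square_speed_1d)
    return KB * t_eff / diffusion_m2_s

# Assumed macroscopic probe numbers, for illustration only:
gamma = friction_coefficient(mass_kg=1e-6, mean_square_speed_1d=1e-4,
                             diffusion_m2_s=1e-6)
print(gamma)  # effective viscous friction, kg/s
```

Note that k_B cancels in the final friction coefficient (γ = m⟨v²⟩/D), which is why a driven, athermal system can still be assigned a self-consistent pseudothermal description.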
Granular gases of rod-shaped grains in microgravity.
Harth, K; Kornek, U; Trittel, T; Strachauer, U; Höme, S; Will, K; Stannarius, R
2013-04-05
Granular gases are convenient model systems to investigate the statistical physics of nonequilibrium systems. In the literature, one finds numerous theoretical predictions, but only few experiments. We study a weakly excited dilute gas of rods, confined in a cuboid container in microgravity during a suborbital rocket flight. With respect to a gas of spherical grains at comparable filling fraction, the mean free path is considerably reduced. This guarantees a dominance of grain-grain collisions over grain-wall collisions. No clustering was observed, unlike in similar experiments with spherical grains. Rod positions and orientations were determined and tracked. Translational and rotational velocity distributions are non-Gaussian. Equipartition of kinetic energy between translations and rotations is violated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taroyan, Youra; Williams, Thomas
The interaction of an intergranular downdraft with an embedded vertical magnetic field is examined. It is demonstrated that the downdraft may couple to small magnetic twists, leading to an instability. The descending plasma exponentially amplifies the magnetic twists when it decelerates with depth due to increasing density. Most efficient amplification is found in the vicinity of the level where the kinetic energy density of the downdraft reaches equipartition with the magnetic energy density. Continual extraction of energy from the decelerating plasma and growth in the total azimuthal energy occurs as a consequence of the wave-flow coupling along the downdraft. The presented mechanism may drive vortices and torsional motions that have been detected between granules and in simulations of magnetoconvection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tu, Fei-Quan; Chen, Yi-Xin, E-mail: fqtuzju@foxmail.com, E-mail: yxchen@zimp.zju.edu.cn
It has been shown by Padmanabhan that the Friedmann equation of the FRW universe can be derived from the idea that cosmic space is emergent as cosmic time progresses and that our universe is expanding towards the state of holographic equipartition. In this note, we give a general relationship between the horizon entropy and the number of degrees of freedom on the surface, which can be applied to quantum gravity. We also obtain the corresponding dynamic equations by using the idea of emergence of space in f(R) theory and deformed Hořava-Lifshitz (HL) theory.
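In the holographic-equipartition picture invoked here, the number of surface degrees of freedom on the Hubble horizon is N_sur = A/L_p² = 4πR_H²/L_p² with R_H = c/H0. A rough numerical sketch; the Hubble rate below is an assumed round value (roughly 68 km/s/Mpc), and the point is only the order of magnitude:

```python
import math

C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
H0 = 2.2e-18       # Hubble rate today, s^-1 (assumed, ~68 km/s/Mpc)

L_P2 = HBAR * G / C**3   # Planck length squared, m^2
R_H = C / H0             # Hubble radius, m

# Surface degrees of freedom in Padmanabhan's counting:
N_sur = 4 * math.pi * R_H**2 / L_P2
print(f"N_sur ≈ {N_sur:.2e}")
```

The enormous value (~10^122) is the same number that appears in holographic discussions of the cosmological constant, which is what makes the equipartition bookkeeping nontrivial.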
NASA Astrophysics Data System (ADS)
Nishiguchi, Katsuhiko; Ono, Yukinori; Fujiwara, Akira
2014-07-01
We report the observation of thermal noise in the motion of single electrons in an ultimately small dynamic random access memory (DRAM). The nanometer-scale transistors that compose the DRAM resolve the thermal noise in single-electron motion. A complete set of fundamental tests conducted on this single-electron thermal noise shows that the noise perfectly follows all the aspects predicted by statistical mechanics, which include the occupation probability, the law of equipartition, a detailed balance, and the law of kT/C. In addition, the counting statistics on the directional motion (i.e., the current) of the single-electron thermal noise indicate that the individual electron motion follows the Poisson process, as it does in shot noise.
Single-electron thermal noise.
Nishiguchi, Katsuhiko; Ono, Yukinori; Fujiwara, Akira
2014-07-11
We report the observation of thermal noise in the motion of single electrons in an ultimately small dynamic random access memory (DRAM). The nanometer-scale transistors that compose the DRAM resolve the thermal noise in single-electron motion. A complete set of fundamental tests conducted on this single-electron thermal noise shows that the noise perfectly follows all the aspects predicted by statistical mechanics, which include the occupation probability, the law of equipartition, a detailed balance, and the law of kT/C. In addition, the counting statistics on the directional motion (i.e., the current) of the single-electron thermal noise indicate that the individual electron motion follows the Poisson process, as it does in shot noise.
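The law of kT/C tested in this work states that any capacitance C in thermal equilibrium carries a voltage fluctuation ⟨ΔV²⟩ = k_B·T/C, equivalently a charge fluctuation sqrt(k_B·T·C). A sketch for an assumed attofarad-scale storage node (the capacitance value is illustrative of single-electron devices, not taken from the paper):

```python
import math

KB = 1.380649e-23       # Boltzmann constant, J/K
E_CHARGE = 1.602177e-19  # elementary charge, C

def rms_voltage_noise(capacitance_f, temp_k):
    """kT/C law: <dV^2> = kB*T/C for any capacitor in equilibrium."""
    return math.sqrt(KB * temp_k / capacitance_f)

def rms_electron_number(capacitance_f, temp_k):
    """Charge fluctuation sqrt(kB*T*C) expressed in electrons."""
    return math.sqrt(KB * temp_k * capacitance_f) / E_CHARGE

c_node = 1e-18  # 1 aF, assumed
print(rms_voltage_noise(c_node, 300.0))    # volts
print(rms_electron_number(c_node, 300.0))  # electrons
```

At attofarad capacitances the room-temperature charge fluctuation is below one electron, which is the regime where individual thermally activated electron hops, rather than a continuous noise voltage, become the observable.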
Stochastic control and the second law of thermodynamics
NASA Technical Reports Server (NTRS)
Brockett, R. W.; Willems, J. C.
1979-01-01
The second law of thermodynamics is studied from the point of view of stochastic control theory. We find that the feedback control laws which are of interest are those which depend only on average values, and not on sample path behavior. We are led to a criterion which, when satisfied, permits one to assign a temperature to a stochastic system in such a way as to have Carnot cycles be the optimal trajectories of optimal control problems. Entropy is also defined and we are able to prove an equipartition of energy theorem using this definition of temperature. Our formulation allows one to treat irreversibility in a quite natural and completely precise way.
Dynamic Modeling of the Madison Dynamo Experiment
NASA Astrophysics Data System (ADS)
Truitt, J. L.; Forest, C. B.; Wright, J. C.
1999-11-01
This work focuses on a computer simulation of the magnetohydrodynamic equations applied in the geometry of the Madison Dynamo Experiment. An integration code is used to evolve both the magnetic field and the velocity field numerically in spherical coordinates using a pseudo-spectral algorithm. The focus is to realistically model an experiment to be undertaken by the Madison Dynamo Experiment Group. The first flows studied are the well-documented ones of Dudley and James. The main goals of the simulation are to observe the dynamo effect with the back-reaction allowed, to observe the equipartition of magnetic and kinetic energy due to theoretically proposed turbulent effects, and to isolate and study the α and β effects.
Universal nonlinear small-scale dynamo.
Beresnyak, A
2012-01-20
We consider an astrophysically relevant nonlinear MHD dynamo at large Reynolds numbers (Re). We argue that it is universal in the sense that magnetic energy grows at a rate which is a constant fraction C(E) of the total turbulent dissipation rate. On the basis of locality bounds we claim that this "efficiency of the small-scale dynamo", C(E), is a true constant for large Re and is determined only by strongly nonlinear dynamics at the equipartition scale. We measured C(E) in numerical simulations and observed a value around 0.05 in the highest resolution simulations. We address the issue of C(E) being small, unlike the Kolmogorov constant, which is of order unity. © 2012 American Physical Society
Towards a Full Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.
2015-12-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, for instance, wavefield diffusivity and equipartitioning, zero attenuation, etc., that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation we attempt to develop a method that consistently accounts for noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.
Analyzing Students’ Level of Understanding on Kinetic Theory of Gases
NASA Astrophysics Data System (ADS)
Nurhuda, T.; Rusdiana, D.; Setiawan, W.
2017-02-01
The purpose of this research is to analyze students' level of understanding of the kinetic theory of gases. The method used is descriptive analytic, with 32 students at the 11th grade of one high school in Bandung city as a sample. The sample was taken using a random sampling technique. The data collection tool used is an essay test with 23 questions. The instrument was used to identify students' level of understanding and was judged by four expert judges before it was employed, reducing it from 27 to 23 questions for data collection. The questions used probe conceptual understanding, including the competence to explain, extrapolate, translate and interpret. The kinetic theory of gases material tested includes the ideal gas law, kinetic molecular theory and the equipartition of energy. The results show that, on a 0-4 scale of understanding levels, 19% of the students have partial understanding at the 3rd level and 81% of them have partial understanding with a specific misconception at the 2nd level. For future research, it is suggested that these conceptual difficulties be addressed with an Interactive Lecture Demonstrations teaching model coupled with teaching materials based on multi-visualization, because the kinetic theory of gases is a microscopic concept.
The Role of Magnetic Field Dissipation in the Black Hole Candidate Sagittarius A*
NASA Astrophysics Data System (ADS)
Coker, Robert F.; Melia, Fulvio
2000-05-01
The compact, nonthermal radio source Sgr A* at the Galactic center appears to be coincident with a ~2.6 × 10^6 M_solar pointlike object. Its energy source may be the release of gravitational energy as gas from the interstellar medium descends into its deep potential well. However, simple attempts at calculating the radiative spectrum and flux based on this picture have come tantalizingly close to the observations, yet have had difficulty in accounting for the unusually low efficiency in this source. Regardless of whether the radiating particles in the accretion flow are thermal or nonthermal, there now appear to be two principal reasons for this low conversion rate of dissipated energy into radiation: (1) the plasma separates into two temperatures, with the protons attaining a significantly higher temperature than that of the radiating electrons; and (2) the magnetic field B is subequipartition, which reduces the magnetic bremsstrahlung emissivity, and therefore the overall power of Sgr A*. In this paper, we investigate the latter with a considerable improvement over what has been attempted before. In particular, rather than calculating B based on some presumed model (e.g., equipartition with the thermal energy of the gas), we instead infer its distribution with radius empirically with the requirement that the resulting spectrum matches the observations. Our assumed Ansatz for B(r) is motivated in part by earlier calculations of the expected magnetic dissipation rate due to reconnection in a compressed flow. We find reasonable agreement with the observed spectrum of Sgr A* as long as its distribution consists of three primary components: an outer equipartition field, a roughly constant field at intermediate radii (~10^3 Schwarzschild radii), and an inner dynamo (more or less within the last stable orbit for a nonrotating black hole), which increases B to about 100 G. The latter component accounts very well for the observed submillimeter hump in this source.
Magnetic flux concentrations from turbulent stratified convection
NASA Astrophysics Data System (ADS)
Käpylä, P. J.; Brandenburg, A.; Kleeorin, N.; Käpylä, M. J.; Rogachevskii, I.
2016-04-01
Context. The formation of magnetic flux concentrations within the solar convection zone leading to sunspot formation is unexplained. Aims: We study the self-organization of initially uniform sub-equipartition magnetic fields by highly stratified turbulent convection. Methods: We perform simulations of magnetoconvection in Cartesian domains representing the uppermost 8.5-24 Mm of the solar convection zone with the horizontal size of the domain varying between 34 and 96 Mm. The density contrast in the 24 Mm deep models is more than 3 × 10^3 or eight density scale heights, corresponding to a little over 12 pressure scale heights. We impose either a vertical or a horizontal uniform magnetic field in a convection-driven turbulent flow in set-ups where no small-scale dynamos are present. In the most highly stratified cases we employ the reduced sound speed method to relax the time step constraint arising from the high sound speed in the deep layers. We model radiation via the diffusion approximation and neglect detailed radiative transfer in order to concentrate on purely magnetohydrodynamic effects. Results: We find that super-equipartition magnetic flux concentrations are formed near the surface in cases with moderate and high density stratification, corresponding to domain depths of 12.5 and 24 Mm. The size of the concentrations increases as the box size increases and the largest structures (20 Mm horizontally near the surface) are obtained in the models that are 24 Mm deep. The field strength in the concentrations is in the range of 3-5 kG, almost independent of the magnitude of the imposed field. The amplitude of the concentrations grows approximately linearly in time. The effective magnetic pressure measured in the simulations is positive near the surface and negative in the bulk of the convection zone.
Its derivative with respect to the mean magnetic field, however, is positive in most of the domain, which is unfavourable for the operation of the negative effective magnetic pressure instability (NEMPI). Simulations in which a passive vector field is evolved do not show a noticeable difference from magnetohydrodynamic runs in terms of the growth of the structures. Furthermore, we find that magnetic flux is concentrated in regions of converging flow corresponding to a large-scale supergranulation convection pattern. Conclusions: The linear growth of large-scale flux concentrations implies that their dominant formation process is a tangling of the large-scale field rather than an instability. One plausible mechanism that can explain both the linear growth and the concentration of the flux in the regions of converging flow is flux expulsion. A possible reason for the absence of NEMPI is that the derivative of the effective magnetic pressure with respect to the mean magnetic field has an unfavourable sign. Furthermore, there may not be sufficient scale separation, which is required for NEMPI to work. Movies associated with Figs. 4 and 5 are available in electronic form at http://www.aanda.org
Multicomponent lattice Boltzmann model from continuum kinetic theory.
Shan, Xiaowen
2010-04-01
We derive from the continuum kinetic theory a multicomponent lattice Boltzmann model with intermolecular interaction. The resulting model is found to be consistent with the model previously derived from a lattice-gas cellular automaton [X. Shan and H. Chen, Phys. Rev. E 47, 1815 (1993)] but applies in a much broader domain. A number of important insights are gained from the kinetic theory perspective. First, it is shown that even in the isothermal case, the energy equipartition principle dictates the form of the equilibrium distribution function. Second, thermal diffusion is shown to exist and the corresponding diffusivities are given in terms of macroscopic parameters. Third, the ordinary diffusion is shown to satisfy the Maxwell-Stefan equation at the ideal-gas limit.
Duplication and segregation of the actin (MreB) cytoskeleton during the prokaryotic cell cycle.
Vats, Purva; Rothfield, Lawrence
2007-11-06
The bacterial actin homolog MreB exists as a single-copy helical cytoskeletal structure that extends between the two poles of rod-shaped bacteria. In this study, we show that equipartition of the MreB cytoskeleton into daughter cells is accomplished by division and segregation of the helical MreB array into two equivalent structures located in opposite halves of the predivisional cell. This process ensures that each daughter cell inherits one copy of the MreB cytoskeleton. The process is triggered by the membrane association of the FtsZ cell division protein. The cytoskeletal division and segregation events occur before and independently of cytokinesis and involve specialized MreB structures that appear to be intermediates in this process.
Observation of Brownian motion in liquids at short times: instantaneous velocity and memory loss.
Kheifets, Simon; Simha, Akarsh; Melin, Kevin; Li, Tongcang; Raizen, Mark G
2014-03-28
Measurement of the instantaneous velocity of Brownian motion of suspended particles in liquid probes the microscopic foundations of statistical mechanics in soft condensed matter. However, instantaneous velocity has eluded experimental observation for more than a century since Einstein's prediction of the small length and time scales involved. We report shot-noise-limited, high-bandwidth measurements of Brownian motion of micrometer-sized beads suspended in water and acetone by an optical tweezer. We observe the hydrodynamic instantaneous velocity of Brownian motion in a liquid, which follows a modified energy equipartition theorem that accounts for the kinetic energy of the fluid displaced by the moving bead. We also observe an anticorrelated thermal force, which is conventionally assumed to be uncorrelated.
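The modified equipartition theorem reported here replaces the bead mass m by an effective mass m* = m + m_f/2, where m_f is the mass of the displaced fluid, so v_rms = sqrt(k_B·T/m*). A sketch with assumed bead parameters (a silica-like bead in water; the numbers are illustrative, not the experiment's):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def rms_velocity_in_liquid(radius_m, rho_particle, rho_fluid, temp_k):
    """Modified equipartition: the effective mass m* = m + m_f/2 adds half
    the mass of the displaced fluid to the bead mass, lowering v_rms."""
    vol = 4.0 / 3.0 * math.pi * radius_m**3
    m_star = vol * (rho_particle + 0.5 * rho_fluid)
    return math.sqrt(KB * temp_k / m_star)

# Assumed: 1 um radius bead, density 2200 kg/m^3, in water at 295 K
v = rms_velocity_in_liquid(1e-6, 2200.0, 1000.0, 295.0)
print(f"v_rms ≈ {v * 1e6:.0f} um/s")
```

Compared with the bare-mass prediction sqrt(k_B·T/m), the displaced-fluid term reduces the rms velocity by roughly 10% for these densities, which is the size of the hydrodynamic correction the measurement resolves.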
NASA Astrophysics Data System (ADS)
Peng, Sum Chee; Mohanty, Samarendra; Gupta, P. K.; Kishen, Anil
2007-02-01
Failure of endodontic treatment is commonly due to enterococcal infection. In this study, the influence of treatment of a type-I collagen membrane by chemical agents commonly used in endodontic treatment on Enterococcus faecalis cell adherence was evaluated. In order to determine the change in the number of adhering bacteria after chemical treatment, confocal laser scanning microscopy was used. For this, an overnight culture of E. faecalis in All Culture broth was applied to the chemically treated type-I collagen membrane. It was found that Ca(OH)2-treated groups had a statistically significant (p = 0.05) increase in the population of adherent bacteria. The change in adhesion force between bacteria and collagen was determined by using optical tweezers (1064 nm). For this experiment, the type-I collagen membrane was soaked for 5 min in a medium that contained 50% All Culture medium and 50% saturated Ca(OH)2. The membrane was spread on a coverslip, onto which a diluted bacterial suspension was added. The force of the laser tweezers on the bacteria was estimated at different trap power levels using the viscous drag method, and the trapping stiffness was calculated using the equipartition theorem method. The presence of Ca(OH)2 was found to increase the cell-substrate adherence force from 0.38 pN to >2.1 pN. Together, these experiments show that it is highly probable that the increase in adherence to collagen was due to stronger adhesion in the presence of Ca(OH)2.
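The equipartition-theorem calibration used here sets (1/2)κ⟨x²⟩ = (1/2)k_B·T, so the trap stiffness is κ = k_B·T/⟨x²⟩, and a linear-trap estimate then converts a displacement into a force. A sketch; the position variance and displacement below are placeholder numbers, not the study's data:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(temp_k, position_variance_m2):
    """Equipartition calibration: (1/2) kappa <x^2> = (1/2) kB T."""
    return KB * temp_k / position_variance_m2

def restoring_force(kappa_n_m, displacement_m):
    """Linear (Hookean) trap estimate of the restoring force."""
    return kappa_n_m * displacement_m

# Assumed: 15 nm rms position fluctuation of a trapped cell at 300 K,
# and a 200 nm displacement from the trap centre before escape.
kappa = trap_stiffness(300.0, (15e-9) ** 2)
force = restoring_force(kappa, 200e-9)
print(f"kappa = {kappa * 1e6:.1f} pN/um, F = {force * 1e12:.2f} pN")
```

With these placeholder numbers the force comes out at a few piconewtons, the same order as the sub-pN to pN adhesion forces reported in the abstract.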
NASA Technical Reports Server (NTRS)
Keohane, Jonathan Wilmore
1998-01-01
Thesis submitted to the faculty of the Graduate School of the University of Minnesota in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Part I discusses the spatial correlation between the x-ray and radio morphologies of Cas A, and in the process addresses: the effect of inhomogeneous absorption on the apparent x-ray morphology, the interaction between the SNR and a molecular cloud, and the rapid move toward equipartition between the magnetic and gas energy densities. Discussion of the x-ray/radio correlation continues in Chapter 5, where we present a new, deep ROSAT HRI image of Cas A. Chapter 7 presents ASCA spectra, with non-thermal spectral fits for 13 of the youngest SNRs in the Galaxy.
Electromagnetic gauge as an integration condition: De Broglie's argument revisited and expanded
NASA Astrophysics Data System (ADS)
Costa de Beauregard, O.
1992-12-01
Einstein's mass-energy equivalence law, argues de Broglie, by fixing the zero of the potential energy of a system, ipso facto selects a gauge in electromagnetism. We examine how this works in electrostatics and in magnetostatics and bring in, as a “trump card,” the familiar, but highly peculiar, system consisting of a toroidal magnet m and a current coil c, where none of the mutual energy W resides in the vacuum. We propose the principle of a crucial test for measuring the fractions of W residing in m and in c; if the latter is nonzero, the (fieldless) vector potential has physicality. Also, using induction for transferring energy from the magnet to a superconducting current, we prove that W is equipartitioned between m and c.
Mechanics and statistics of the worm-like chain
NASA Astrophysics Data System (ADS)
Marantan, Andrew; Mahadevan, L.
2018-02-01
The worm-like chain model is a simple continuum model for the statistical mechanics of a flexible polymer subject to an external force. We offer a tutorial introduction to it using three approaches. First, we use a mesoscopic view, treating a long polymer (in two dimensions) as though it were made of many groups of correlated links or "clinks," allowing us to calculate its average extension as a function of the external force via scaling arguments. We then provide a standard statistical mechanics approach, obtaining the average extension by two different means: the equipartition theorem and the partition function. Finally, we work in a probabilistic framework, taking advantage of the Gaussian properties of the chain in the large-force limit to improve upon the previous calculations of the average extension.
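As a companion to the tutorial described above, the following sketch uses the widely known Marko-Siggia interpolation formula for the worm-like chain, F*Lp/(kB*T) = x + 1/(4(1-x)^2) - 1/4 with x = <z>/L. This is standard WLC lore rather than an expression taken from the tutorial itself; in the large-force limit it reproduces the Gaussian-fluctuation behaviour x -> 1 - sqrt(kB*T/(4*F*Lp)).

```python
def ms_force(x):
    """Dimensionless force F*Lp/(kB*T) at fractional extension x (0 <= x < 1),
    Marko-Siggia interpolation formula."""
    return x + 0.25 / (1.0 - x) ** 2 - 0.25

def extension(f_dimless, tol=1e-12):
    """Invert the interpolation formula for x by bisection
    (ms_force is monotonically increasing on [0, 1))."""
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ms_force(mid) < f_dimless:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Fractional extension at dimensionless force f = 10,
# compared with the large-force asymptote 1 - 1/sqrt(4f)
x = extension(10.0)
x_asym = 1.0 - (1.0 / 40.0) ** 0.5
```

Already at f = 10 the asymptote agrees with the full interpolation formula to within about one percent of the contour length.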
Structure of thermal pair clouds around gamma-ray-emitting black holes
NASA Technical Reports Server (NTRS)
Liang, Edison P.
1991-01-01
Using certain simplifying assumptions, the general structure of a quasi-spherical thermal pair-balanced cloud surrounding an accreting black hole is derived from first principles. Pair-dominated hot solutions exist only for a restricted range of the viscosity parameter. These results are applied as examples to the 1979 HEAO 3 gamma-ray data of Cygnus X-1 and the Galactic center. Values are obtained for the viscosity parameter lying in the range of about 0.1-0.01. Since the lack of synchrotron soft photons requires the magnetic field to be typically less than 1 percent of the equipartition value, a magnetic field cannot be the main contributor to the viscous stress of the inner accretion flow, at least during the high gamma-ray states.
Double Scaling in the Relaxation Time in the β-Fermi-Pasta-Ulam-Tsingou Model
NASA Astrophysics Data System (ADS)
Lvov, Yuri V.; Onorato, Miguel
2018-04-01
We consider the original β-Fermi-Pasta-Ulam-Tsingou system; numerical simulations and theoretical arguments suggest that, for a finite number of masses, a statistical equilibrium state is reached independently of the initial energy of the system. Using ensemble averages over initial conditions characterized by different Fourier random phases, we numerically estimate the time scale of equipartition and we find that for very small nonlinearity it matches the prediction based on exact wave-wave resonant interaction theory. We derive a simple formula for the nonlinear frequency broadening and show that when the phenomenon of overlap of frequencies takes place, a different scaling for the thermalization time scale is observed. Our result supports the idea that the Chirikov overlap criterion identifies a transition region between two different relaxation time scalings.
Solving the flatness problem with an anisotropic instanton in Hořava-Lifshitz gravity
NASA Astrophysics Data System (ADS)
Bramberger, Sebastian F.; Coates, Andrew; Magueijo, João; Mukohyama, Shinji; Namba, Ryo; Watanabe, Yota
2018-02-01
In Hořava-Lifshitz gravity a scaling isotropic in space but anisotropic in spacetime, often called "anisotropic scaling," with the dynamical critical exponent z = 3, lies at the base of its renormalizability. This scaling also leads to a novel mechanism of generating scale-invariant cosmological perturbations, solving the horizon problem without inflation. In this paper we propose a possible solution to the flatness problem, in which we assume that the initial condition of the Universe is set by a small instanton respecting the same scaling. We argue that the mechanism may be more general than the concrete model presented here. We rely simply on the deformed dispersion relations of the theory, and on equipartition of the various forms of energy at the starting point.
X-rays from the radio halo of Virgo A = M87
NASA Technical Reports Server (NTRS)
1985-01-01
The purpose of this study is to investigate in more detail the associated X-ray and radio emission in the Virgo A halo discovered by SGF. Improved Einstein HRI data and new radio maps obtained with the Very Large Array are described, and the relation between the X-ray and radio structures is carefully examined. Several possible explanations are presented for the X-ray emission. The inverse Compton model is found to be viable only if the magnetic field is variable and substantially weaker than the equipartition value. The principal alternative is excess thermal X-rays due to compression of the intracluster medium by the radio lobe. In either case, the association of such prominent radio and X-ray structures is unique among known radio galaxies.
NASA Astrophysics Data System (ADS)
Silveira, Ana J.; Abreu, Charlles R. A.
2017-09-01
Sets of atoms collectively behaving as rigid bodies are often used in molecular dynamics to model entire molecules or parts thereof. This is a coarse-graining strategy that eliminates degrees of freedom and supposedly admits larger time steps without abandoning the atomistic character of a model. In this paper, we rely on a particular factorization of the rotation matrix to simplify the mechanical formulation of systems containing rigid bodies. We then propose a new derivation for the exact solution of torque-free rotations, which are employed as part of a symplectic numerical integration scheme for rigid-body dynamics. We also review methods for calculating pressure in systems of rigid bodies with pairwise-additive potentials and periodic boundary conditions. Finally, simulations of liquid phases, with special focus on water, are employed to analyze the numerical aspects of the proposed methodology. Our results show that energy drift is avoided for time step sizes up to 5 fs, but only if a proper smoothing is applied to the interatomic potentials. Despite this, the effects of discretization errors are relevant, even for smaller time steps. These errors induce, for instance, a systematic failure of the expected equipartition of kinetic energy between translational and rotational degrees of freedom.
Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2006-01-01
The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well-separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods have been shown to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs.
The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
Decorrelation dynamics and spectra in drift-Alfven turbulence
NASA Astrophysics Data System (ADS)
Fernandez Garcia, Eduardo
Motivated by the inability of one-fluid magnetohydrodynamics (MHD) to explain key turbulence characteristics in systems ranging from the solar wind and interstellar medium to fusion devices like the reversed field pinch, this thesis studies magnetic turbulence using a drift-Alfven model that extends MHD by including electron density dynamics. Electron effects play a significant role in the dynamics by changing the structure of turbulent decorrelation in the Alfvenic regime (where fast Alfvenic propagation provides the fastest decorrelation of the system): besides the familiar counter-propagating Alfvenic branches of MHD, an additional branch tied to the diamagnetic and eddy-turn-over rates enters in the turbulent response. This kinematic branch gives hydrodynamic features to turbulence that is otherwise strongly magnetic. Magnetic features are observed in the RMS frequency, energy partitions, cross-field energy transfer and in the turbulent response, whereas hydrodynamic features appear in the average frequency, self-field transfer, turbulent response and finally the wavenumber spectrum. These features are studied via renormalized closure theory and numerical simulation. The closure calculation naturally incorporates the eigenmode structure of the turbulent response in specifying spectral energy balance equations for the magnetic, kinetic and internal (density) energies. Alfvenic terms proportional to cross correlations and involved in cross-field transfer compete with eddy-turn-over, self-transfer, auto-correlation terms. In the steady state, the kinematic terms dominate the energy balances and yield a 5/3 Kolmogorov spectrum (as observed in the interstellar medium) for the three field energies in the strong turbulence, long wavelength limit. Alfvenic terms establish equipartition of kinetic and magnetic energies.
In the limit where wavelengths are short compared to the gyroradius, the Alfvenic terms bring the internal and magnetic energies into equipartition, resulting in a steep (-2) spectral fall-off for those energies, while the largely uncoupled kinetic modes still obey a 5/3 law. From the numerical simulations, the response function of drift-Alfven turbulence is measured. Here, a statistical ensemble is constructed from small perturbations of the turbulent amplitudes at fixed wavenumber. The decorrelation structure arising from the eigenmode calculation is verified in the numerical measurement.
Relativistic equipartition via a massive damped sliding partition
NASA Astrophysics Data System (ADS)
Crawford, Frank S.
1993-04-01
A cylinder partitioned by a massive sliding slab undergoing nonrelativistic damped one-dimensional (1D) motion under bombardment from the left (i=1) and right (i=2) by particles having rest mass m_i, speed v_i, relativistic momentum (magnitude) p_i, and (let c ≡ 1) total energy E_i = (p_i^2 + m_i^2)^(1/2) is considered herein. The damped slab of mass M transforms the system from its initial p_i distributions (i=1,2) to a state, first, of pressure (P) equilibrium with P1=P2 but temperatures T1≠T2, and then to P-T equilibrium with P1=P2 and T1=T2, given by the (1D) "first moment" equipartition relation <p_i v_i> = κT (κ is Boltzmann's constant).
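A hedged numerical check of the 1D relativistic "first moment" equipartition relation discussed above, which takes the form <p v> = κT. Momenta are drawn from the 1D relativistic Boltzmann (Jüttner) distribution f(p) ∝ exp(-E/κT) by rejection sampling, in units with c = 1 and rest mass m = 1; the temperature and sample size are arbitrary illustrative choices.

```python
import math
import random

def sample_momentum(kT, m=1.0, pmax=3.0, rng=random):
    """Rejection-sample one momentum from f(p) ~ exp(-E(p)/kT),
    E = sqrt(p^2 + m^2). The uniform proposal on [-pmax, pmax] works
    because exp(-(E - m)/kT) <= 1 everywhere; for kT << m the tail
    beyond pmax is negligible."""
    while True:
        p = rng.uniform(-pmax, pmax)
        e = math.sqrt(p * p + m * m)
        if rng.random() < math.exp(-(e - m) / kT):
            return p

random.seed(7)
kT = 0.1
ps = [sample_momentum(kT) for _ in range(50000)]
# v = p/E for a relativistic particle, so <p v> = <p^2 / E>
pv_mean = sum(p * p / math.sqrt(p * p + 1.0) for p in ps) / len(ps)
print(f"<p v> = {pv_mean:.4f}  (first-moment relation predicts kT = {kT})")
```

An integration by parts of ∫ p (p/E) e^(-E/κT) dp shows the relation is exact for any rest mass, which the sample average reproduces to within statistical error.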
Experimental Observation of a Current-Driven Instability in a Neutral Electron-Positron Beam.
Warwick, J; Dzelzainis, T; Dieckmann, M E; Schumaker, W; Doria, D; Romagnani, L; Poder, K; Cole, J M; Alejo, A; Yeung, M; Krushelnick, K; Mangles, S P D; Najmudin, Z; Reville, B; Samarin, G M; Symes, D D; Thomas, A G R; Borghesi, M; Sarri, G
2017-11-03
We report on the first experimental observation of a current-driven instability developing in a quasineutral matter-antimatter beam. Strong magnetic fields (≥1 T) are measured, by means of a proton radiography technique, after the propagation of a neutral electron-positron beam through a background electron-ion plasma. The experimentally determined equipartition parameter of ε_B ≈ 10^(-3) is typical of values inferred from models of astrophysical gamma-ray bursts, in which the relativistic flows are also expected to be pair dominated. The data, supported by particle-in-cell simulations and simple analytical estimates, indicate that these magnetic fields persist in the background plasma for thousands of inverse plasma frequencies. The existence of such long-lived magnetic fields can be related to analog astrophysical systems, such as those prevalent in lepton-dominated jets.
Equilibrium Reconstruction on the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samuel A. Lazerson, D. Gates, D. Monticello, H. Neilson, N. Pomphrey, A. Reiman, S. Sakakibara, and Y. Suzuki
Equilibrium reconstruction is commonly applied to axisymmetric toroidal devices. Recent advances in computational power and equilibrium codes have allowed for reconstructions of three-dimensional fields in stellarators and heliotrons. We present the first reconstructions of finite-beta discharges in the Large Helical Device (LHD). The plasma boundary and magnetic axis are constrained by the pressure profile from Thomson scattering. This results in a calculation of plasma beta without a priori assumptions of the equipartition of energy between species. Saddle loop arrays place additional constraints on the equilibrium. These reconstructions utilize STELLOPT, which calls VMEC. The VMEC equilibrium code assumes good nested flux surfaces. Reconstructed magnetic fields are fed into the PIES code, which relaxes this constraint, allowing for the examination of the effect of islands and stochastic regions on the magnetic measurements.
Sturm und Drang: The turbulent, magnetic tempest in the Galactic center
NASA Astrophysics Data System (ADS)
Lacki, Brian C.
2014-05-01
The Galactic center central molecular zone (GCCMZ) bears similarities with extragalactic starburst regions, including a high supernova (SN) rate density. As in other starbursts like M82, the frequent SNe can heat the ISM until it is filled with a hot (~4 × 10^7 K) superwind. Furthermore, the random forcing from SNe stirs up the wind, powering Mach 1 turbulence. I argue that a turbulent dynamo explains the strong magnetic fields in starbursts, and I predict an average B ~ 70 μG in the GCCMZ. I demonstrate how the SN driving of the ISM leads to equipartition between various pressure components in the ISM. The SN-heated wind escapes the center, but I show that it may be stopped in the Galactic halo. I propose that the Fermi bubbles are the wind's termination shock.
A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars
2016-11-01
We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives a slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Similar differences have been observed between adaptive-mesh-refinement codes with CT and smoothed-particle hydrodynamics codes with divergence cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques and, when coupled to a moving mesh, can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
Undergraduate Labs for Biological Physics: Brownian Motion and Optical Trapping
NASA Astrophysics Data System (ADS)
Chu, Kelvin; Laughney, A.; Williams, J.
2006-12-01
We describe a set of case-study-driven labs for an upper-division biological physics course. These labs are motivated by case studies and consist of inquiry-driven investigations of Brownian motion and optical-trapping experiments. Each lab incorporates two innovative educational techniques to drive the process and application aspects of scientific learning. Case studies are used to encourage students to think independently and apply the scientific method to a novel lab situation. Student input from this case study is then used to decide how to best do the measurement, guide the project and ultimately evaluate the success of the program. Where appropriate, visualization and simulation using VPython is used. Direct visualization of Brownian motion allows students to directly calculate Avogadro's number or the Boltzmann constant. Following case-study-driven discussion, students use video microscopy to measure the motion of latex spheres in fluids of different viscosities to arrive at a good approximation of NA or kB. Optical trapping (laser tweezer) experiments allow students to investigate the consequences of 100-pN forces on small particles. The case study consists of a discussion of the Boltzmann distribution and equipartition theorem followed by a consideration of the shape of the potential. Students can then use video capture to measure the distribution of bead positions to determine the shape and depth of the trap. This work supported by NSF DUE-0536773.
NASA Astrophysics Data System (ADS)
Zhang, Yiqing; Wang, Lifeng; Jiang, Jingnong
2018-03-01
Vibrational behavior is very important for nanostructure-based resonators. In this work, an orthotropic plate model together with a molecular dynamics (MD) simulation is used to investigate the thermal vibration of rectangular single-layered black phosphorus (SLBP). Two bending stiffnesses, two Poisson's ratios, and one shear modulus of SLBP are calculated using the MD simulation. The natural frequency of the SLBP predicted by the orthotropic plate model agrees very well with the one obtained from the MD simulation. The root-mean-square (RMS) amplitude of the SLBP is obtained by the MD simulation and by the orthotropic plate model considering the law of energy equipartition. The RMS amplitude of the thermal vibration of the SLBP is predicted well by the orthotropic plate model compared to the MD results. Furthermore, the thermal vibration of the SLBP with an initial stress is also well described by the orthotropic plate model.
Non-equilibrium surface tension of the vapour-liquid interface of active Lennard-Jones particles
NASA Astrophysics Data System (ADS)
Paliwal, Siddharth; Prymidis, Vasileios; Filion, Laura; Dijkstra, Marjolein
2017-08-01
We study a three-dimensional system of self-propelled Brownian particles interacting via the Lennard-Jones potential. Using Brownian dynamics simulations in an elongated simulation box, we investigate the steady states of vapour-liquid phase coexistence of active Lennard-Jones particles with planar interfaces. We measure the normal and tangential components of the pressure tensor along the direction perpendicular to the interface and verify mechanical equilibrium of the two coexisting phases. In addition, we determine the non-equilibrium interfacial tension by integrating the difference of the normal and tangential components of the pressure tensor and show that the surface tension as a function of strength of particle attractions is well fitted by simple power laws. Finally, we measure the interfacial stiffness using capillary wave theory and the equipartition theorem and find a simple linear relation between surface tension and interfacial stiffness with a proportionality constant characterized by an effective temperature.
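A minimal sketch of the capillary-wave/equipartition relation underlying the stiffness measurement mentioned above: equipartition assigns each interfacial Fourier mode h_q a mean-square amplitude <|h_q|^2> = kB T / (A γ q^2), so the stiffness γ follows from the small-q part of the measured height spectrum. The box size, temperature, and stiffness below are illustrative reduced-unit numbers, not the paper's values.

```python
import math

def mode_msa(q, area, kT, stiffness):
    """Equipartition mean-square amplitude of the capillary mode with
    wavenumber q: <|h_q|^2> = kT / (A * gamma * q^2)."""
    return kT / (area * stiffness * q * q)

def stiffness_from_spectrum(qs, msas, area, kT):
    """Recover the interfacial stiffness by averaging kT / (A q^2 <|h_q|^2>)
    over the measured modes."""
    ests = [kT / (area * q * q * s) for q, s in zip(qs, msas)]
    return sum(ests) / len(ests)

L, kT, gamma = 20.0, 0.8, 0.5          # reduced Lennard-Jones units, illustrative
area = L * L
qs = [2.0 * math.pi * n / L for n in range(1, 6)]   # smallest allowed wavenumbers
spectrum = [mode_msa(q, area, kT, gamma) for q in qs]
gamma_est = stiffness_from_spectrum(qs, spectrum, area, kT)
```

In a real simulation the spectrum would come from Fourier-transforming instantaneous interface heights; here the synthetic spectrum simply verifies that the inversion recovers the input stiffness.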
e^± Pair Loading and the Origin of the Upstream Magnetic Field in GRB Shocks
NASA Technical Reports Server (NTRS)
Ramirez-Ruiz, Enrico; Nishikawa, Ken-Ichi; Hededal, Christian B.
2006-01-01
We investigate here the effects of plasma instabilities driven by rapid e^± pair cascades, which arise in the environment of GRB sources as a result of back-scattering of a seed fraction of their original spectrum. The injection of e^± pairs induces strong streaming motions in the ambient medium. One therefore expects the pair-enriched medium ahead of the forward shock to be strongly sheared on length scales comparable to the radiation front thickness. Using three-dimensional particle-in-cell simulations, we show that plasma instabilities driven by these streaming e^± pairs are responsible for the excitation of near-equipartition, turbulent magnetic fields. Our results reveal the importance of the electromagnetic filamentation instability in ensuring an effective coupling between e^± pairs and ions, and may help explain the origin of large upstream fields in GRB shocks.
Energy localization in the phi^4 oscillator chain.
Ponno, A; Ruggiero, J; Drigo, E; De Luca, J
2006-05-01
We study energy localization in a finite one-dimensional phi^4 oscillator chain with initial energy in a single oscillator of the chain. We numerically calculate the effective number of degrees of freedom sharing the energy on the lattice as a function of time. We find that for energies smaller than a critical value, energy equipartition among the oscillators is reached in a relatively short time. On the other hand, above the critical energy, a decreasing number of particles sharing the energy is observed. We give an estimate of the effective number of degrees of freedom as a function of the energy. Our results suggest that localization is due to the appearance, above threshold, of a breather-like structure. Analytic arguments are given, based on the averaging theory and the analysis of a discrete nonlinear Schrödinger equation approximating the dynamics, to support and explain the numerical results.
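A hedged illustration of one common estimator for the "effective number of degrees of freedom sharing the energy" discussed above: the exponential of the spectral entropy of the normalized energy distribution, n_eff = exp(-Σ w_i ln w_i) with w_i = E_i / Σ_j E_j. The paper's precise estimator may differ; this is the standard participation measure used in equipartition studies.

```python
import math

def effective_dof(energies):
    """Effective number of degrees of freedom sharing the energy:
    exp of the Shannon entropy of the normalized energy weights.
    Ranges from 1 (fully localized) to N (full equipartition)."""
    total = sum(energies)
    weights = [e / total for e in energies if e > 0.0]
    entropy = -sum(w * math.log(w) for w in weights)
    return math.exp(entropy)

# Limiting cases: full equipartition over N oscillators gives n_eff = N,
# while energy localized on a single site gives n_eff = 1.
n = 32
equipartitioned = [1.0] * n
localized = [1.0] + [0.0] * (n - 1)
print(effective_dof(equipartitioned), effective_dof(localized))
```

Tracking this quantity in time distinguishes relaxation toward equipartition (n_eff growing toward N) from breather-dominated localization (n_eff saturating at a small value).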
The atmosphere of a dirty-clathrate cometary nucleus - A two-phase, multifluid model
NASA Astrophysics Data System (ADS)
Marconi, M. L.; Mendis, D. A.
1983-10-01
The dynamical and thermal structure of the gas atmosphere of a dirty-clathrate cometary nucleus is given a self-consistent, transonic multifluid solution in which, although the heavy neutral and ion species are treated as a single fluid in the collision-dominated region, the photoproduced H is treated separately. The thermal profile of the atmosphere thus obtained is entirely different from those predicted by the earlier single-fluid models as well as the multifluid models which assumed equipartition of energy between electrons and ions. While the electron gas, like the neutrals and the ions, cools due to expansion, its main mode of energy loss in the inner coma is by way of inelastic collisions with the predominant H2O molecule. The high electron temperature in the outer coma also decreases the efficiency of electron removal by dissociative recombination, thereby increasing electron density throughout the coma.
Nonlinear waves and shocks in relativistic two-fluid hydrodynamics
NASA Astrophysics Data System (ADS)
Haim, L.; Gedalin, M.; Spitkovsky, A.; Krasnoselskikh, V.; Balikhin, M.
2012-06-01
Relativistic shocks are present in a number of objects where violent processes are accompanied by relativistic outflows of plasma. The magnetization parameter σ = B^2/(4πnmc^2) of the ambient medium varies over a wide range. Shocks with low σ are expected to substantially enhance the magnetic fields in the shock front. In non-relativistic shocks the magnetic compression is limited by nonlinear effects related to the deceleration of the flow. Two-fluid analysis of perpendicular relativistic shocks shows that the nonlinearities are suppressed for σ << 1 and the magnetic field reaches nearly equipartition values when the magnetic energy density is of the order of the ion energy density, B_eq^2 ~ 4πn m_i c^2 γ. A large cross-shock potential eφ/(m_i c^2 γ_0) ~ B^2/B_eq^2 develops across the electron-ion shock front. This potential is responsible for electron energization.
Simulations of Magnetic Fields in Tidally Disrupted Stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guillochon, James; McCourt, Michael, E-mail: jguillochon@cfa.harvard.edu
2017-01-10
We perform the first magnetohydrodynamical simulations of tidal disruptions of stars by supermassive black holes. We consider stars with both tangled and ordered magnetic fields, for both grazing and deeply disruptive encounters. When the star survives disruption, we find its magnetic field is amplified by a factor of up to 20, but see no evidence for a self-sustaining dynamo that would yield arbitrary field growth. For stars that do not survive, and within the tidal debris streams produced in partial disruptions, we find that the component of the magnetic field parallel to the direction of stretching along the debris stream only decreases slightly with time, eventually resulting in a stream where the magnetic pressure is in equipartition with the gas. Our results suggest that the returning gas in most (if not all) stellar tidal disruptions is already highly magnetized by the time it returns to the black hole.
Turbulent transports over tundra
NASA Technical Reports Server (NTRS)
Fitzjarrald, David R.; Moore, Kathleen E.
1992-01-01
An extensive period of eddy correlation surface flux measurements was conducted at a site distant from the coast on the western Alaskan tundra. The surface exchange of heat and moisture over tundra during the summer was limited by a strong resistance to transfer from the upper soil layer through the ground cover, with canopy resistances to evaporation observed to be approximately 200 s/m. Though July 1988 was anomalously warm and dry in the region and August was close to normal temperature and rainfall, there was no appreciable difference in the canopy resistance between the periods. During the dry sunny period at the end of July, the observed evaporation rate was 2 mm/d. High canopy resistance led to an approximate equipartition of net radiation between latent and sensible heat, each accounting for 40 percent of the available energy, with heat balance apparently going into soil heat flux.
Simulation Study of Magnetic Fields Generated by the Electromagnetic Filamentation Instability
NASA Technical Reports Server (NTRS)
Nishikawa, K.-I.; Ramirez-Ruiz, E.; Hardee, P.; Hededal, C. B.; Mizuno, Y.; Fishman, G. J.
2007-01-01
We have investigated the effects of plasma instabilities driven by rapid e^± pair cascades, which arise in the environment of GRB sources as a result of back-scattering of a seed fraction of the original spectrum. The injection of e^± pairs induces strong streaming motions in the ambient medium. One therefore expects the pair-enriched medium ahead of the forward shock to be strongly sheared on length scales comparable to the radiation front thickness. Using three-dimensional particle-in-cell simulations, we show that plasma instabilities driven by these streaming e^± pairs are responsible for the excitation of near-equipartition, turbulent magnetic fields. Our results reveal the importance of the electromagnetic filamentation instability in ensuring an effective coupling between e^± pairs and ions, and may help explain the origin of large upstream fields in GRB shocks.
Slow-Mode MHD Wave Penetration into a Coronal Null Point due to the Mode Transmission
NASA Astrophysics Data System (ADS)
Afanasyev, Andrey N.; Uralov, Arkadiy M.
2016-11-01
Recent observations of magnetohydrodynamic oscillations and waves in solar active regions revealed their close link to quasi-periodic pulsations in flaring light curves. The nature of that link has not yet been understood in detail. In our analytical modelling we investigate propagation of slow magnetoacoustic waves in a solar active region, taking into account wave refraction and transmission of the slow magnetoacoustic mode into the fast one. The wave propagation is analysed in the geometrical acoustics approximation. Special attention is paid to the penetration of waves in the vicinity of a magnetic null point. The modelling has shown that the interaction of slow magnetoacoustic waves with the magnetic reconnection site is possible due to the mode transmission at the equipartition level where the sound speed is equal to the Alfvén speed. The efficiency of the transmission is also calculated.
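The transmission condition discussed above, mode conversion at the equipartition level where the sound speed equals the Alfvén speed, can be illustrated numerically. The sketch below is not from the paper: the stratification profiles, scale heights, and field strength are all assumed, purely to show how such a layer would be located in a model atmosphere.

```python
import numpy as np

MU0 = 4e-7 * np.pi      # vacuum permeability [H/m]
GAMMA = 5.0 / 3.0       # adiabatic index

def sound_speed(p, rho):
    """Adiabatic sound speed c_s = sqrt(gamma * p / rho)."""
    return np.sqrt(GAMMA * p / rho)

def alfven_speed(B, rho):
    """Alfven speed v_A = B / sqrt(mu0 * rho)."""
    return B / np.sqrt(MU0 * rho)

# Toy stratification: density, pressure, and field fall with height z
# (all functional forms and magnitudes are assumptions for illustration)
z = np.linspace(0.0, 50e6, 1000)            # height [m]
rho = 1e-12 * np.exp(-z / 50e6)             # mass density [kg/m^3]
p = 1e-2 * np.exp(-z / 50e6)                # gas pressure [Pa]
B = 1e-3 * np.exp(-z / 20e6)                # field strength [T]

cs, va = sound_speed(p, rho), alfven_speed(B, rho)
# Equipartition level: first height where v_A drops below c_s
idx = np.argmax(va < cs)
print(f"equipartition layer near z = {z[idx] / 1e6:.1f} Mm")
```

Below this layer the Alfvén speed dominates and the slow and fast branches are well separated; near it the characteristic speeds coincide, which is where the transmission between modes becomes efficient.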
Laboratory evidence of dynamo amplification of magnetic fields in a turbulent plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tzeferacos, P.; Rigby, A.; Bott, A. F. A.; Bell, A. R.; Bingham, R.; Casner, A.; Cattaneo, F.; Churazov, E. M.; Emig, J.; Fiuza, F.; Forest, C. B.; Foster, J.; Graziani, C.; Katz, J.; Koenig, M.; Li, C.-K.; Meinecke, J.; Petrasso, R.; Park, H.-S.; Remington, B. A.; Ross, J. S.; Ryu, D.; Ryutov, D.; White, T. G.; Reville, B.; Miniati, F.; Schekochihin, A. A.; Lamb, D. Q.; Froula, D. H.; Gregori, G.
2018-02-09
Magnetic fields are ubiquitous in the Universe. The energy density of these fields is typically comparable to the energy density of the fluid motions of the plasma in which they are embedded, making magnetic fields essential players in the dynamics of the luminous matter. The standard theoretical model for the origin of these strong magnetic fields is through the amplification of tiny seed fields via turbulent dynamo to the level consistent with current observations. However, experimental demonstration of the turbulent dynamo mechanism has remained elusive, since it requires plasma conditions that are extremely hard to re-create in terrestrial laboratories. Here we demonstrate, using laser-produced colliding plasma flows, that turbulence is indeed capable of rapidly amplifying seed fields to near equipartition with the turbulent fluid motions. These results support the notion that turbulent dynamo is a viable mechanism responsible for the observed present-day magnetization.
Hydrodynamics of a cold one-dimensional fluid: the problem of strong shock waves
NASA Astrophysics Data System (ADS)
Hurtado, Pablo I.
2005-03-01
We study a shock wave induced by an infinitely massive piston propagating into a one-dimensional cold gas. The cold gas is modelled as a collection of hard rods which are initially at rest, so the temperature is zero. Most of our results are based on simulations of a gas of rods with binary mass distribution, and we particularly focus on the case of spatially alternating masses. We find that the properties of the resulting shock wave are in striking contrast with those predicted by hydrodynamic and kinetic approaches, e.g., the flow-field profiles relax algebraically toward their equilibrium values. In addition, most relevant observables characterizing local thermodynamic equilibrium and equipartition decay as a power law of the distance to the shock layer. The exponents of these power laws depend non-monotonically on the mass ratio. Similar interesting dependences on the mass ratio also characterize the shock width, density and temperature overshoots, etc.
The Properties of Extragalactic Radio Jets
NASA Astrophysics Data System (ADS)
Finke, Justin
2018-01-01
I show that by assuming a standard Blandford-Konigl jet, it is possible to determine the speed (bulk Lorentz factor) and orientation (angle to the line of sight) of self-similar parsec-scale blazar jets by using four measured quantities: the core radio flux, the extended radio flux, the magnitude of the core shift between two frequencies, and the apparent jet opening angle. Once the bulk Lorentz factor and angle to the line of sight of a jet are known, it is possible to compute their Doppler factor, magnetic field, and intrinsic jet opening angle. I use data taken from the literature and marginalize over nuisance parameters associated with the electron distribution and equipartition, to compute these quantities, albeit with large errors. The results have implications for the resolution of the TeV BL Lac Doppler factor crisis and the production of jets from magnetically arrested disks.
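As a minimal numerical companion to the procedure described above: once the bulk Lorentz factor Γ and the viewing angle θ have been determined from the four observables, the Doppler factor follows from the standard relativistic beaming formula. The values of Γ and θ below are illustrative assumptions, not fitted quantities from the paper.

```python
import numpy as np

def doppler_factor(gamma, theta):
    """Standard beaming formula: delta = 1 / (Gamma * (1 - beta * cos(theta)))."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

gamma = 10.0                # bulk Lorentz factor (assumed)
theta = np.radians(5.0)     # angle to the line of sight (assumed)
delta = doppler_factor(gamma, theta)
print(f"delta = {delta:.2f}")
```

For a jet pointed nearly along the line of sight (θ of a few degrees, Γ ~ 10), the Doppler factor is of order Γ, which is why small viewing-angle errors propagate strongly into the derived intrinsic quantities.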
Baby supernovae through the looking glass at long wavelengths.
NASA Astrophysics Data System (ADS)
Chandra, Poonam; Ray, Alak
2004-09-01
We emphasize the importance of observations of young supernovae in wide radio band. We argue on the basis of observational results that only high- or only low-frequency data is not sufficient to get full physical picture of the shocked plasma. In SN 1993J, the composite spectrum obtained with Very Large Array (VLA) and Giant Metrewave Radio Telescope (GMRT), around day 3200, shows observational evidence of synchrotron cooling, which leads us to the direct determination of the magnetic field independent of the equipartition assumption, as well as the relative strengths of the magnetic field and relativistic particle energy densities. The GMRT low-frequency light curves of SN 1993J suggest the modification in the radio emission models developed on the basis of VLA data alone. The composite radio spectrum of SN 2003bg on day 350 obtained with GMRT plus VLA strongly supports internal synchrotron self absorption as the dominant absorption mechanism.
Experimental Observation of a Current-Driven Instability in a Neutral Electron-Positron Beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warwick, J.; Dzelzainis, T.; Dieckmann, M. E.
2017-11-03
Here, we report on the first experimental observation of a current-driven instability developing in a quasineutral matter-antimatter beam. Strong magnetic fields (≥ 1 T) are measured, by means of a proton radiography technique, after the propagation of a neutral electron-positron beam through a background electron-ion plasma. The experimentally determined equipartition parameter of ε_B ≈ 10^(-3) is typical of values inferred from models of astrophysical gamma-ray bursts, in which the relativistic flows are also expected to be pair dominated. The data, supported by particle-in-cell simulations and simple analytical estimates, indicate that these magnetic fields persist in the background plasma for thousands of inverse plasma frequencies. The existence of such long-lived magnetic fields can be related to analog astrophysical systems, such as those prevalent in lepton-dominated jets.
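For orientation, an equipartition parameter such as the ε_B quoted above compares the magnetic energy density to a reference energy density, here taken as the beam kinetic energy density. The sketch below recovers the ~10^-3 order of magnitude only because the beam density and Lorentz factor are assumed values chosen for illustration; they are not measurements from the experiment.

```python
import numpy as np

MU0 = 4e-7 * np.pi       # vacuum permeability [H/m]
M_E = 9.109e-31          # electron mass [kg]
C = 2.998e8              # speed of light [m/s]

def epsilon_B(B, n, gamma):
    """Magnetic energy density over kinetic energy density of a pair beam."""
    u_B = B**2 / (2.0 * MU0)                  # magnetic energy density [J/m^3]
    u_kin = (gamma - 1.0) * n * M_E * C**2    # kinetic energy density [J/m^3]
    return u_B / u_kin

B = 1.0          # field at the ~1 T scale quoted in the abstract
n = 1.0e19       # assumed pair number density [m^-3]
gamma = 600.0    # assumed beam Lorentz factor
eps = epsilon_B(B, n, gamma)
print(f"epsilon_B ~ {eps:.1e}")
```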
Magnetic dynamo action in two-dimensional turbulent magneto-hydrodynamics
NASA Technical Reports Server (NTRS)
Fyfe, D.; Joyce, G.; Montgomery, D.
1976-01-01
Two-dimensional magnetohydrodynamic turbulence is explored by means of numerical simulation. Previous analytical theory, based on non-dissipative constants of the motion in a truncated Fourier representation, is verified by following the evolution of highly non-equilibrium initial conditions numerically. Dynamo action (conversion of a significant fraction of turbulent kinetic energy into long-wavelength magnetic field energy) is observed. It is conjectured that in the presence of dissipation and external forcing, a dual cascade will be observed for zero-helicity situations. Energy will cascade to higher wave numbers simultaneously with a cascade of mean square vector potential to lower wave numbers, leading to an omni-directional magnetic energy spectrum which varies as k^(-3) at lower wave numbers, simultaneously with a buildup of magnetic excitation at the lowest wave number of the system. Equipartition of kinetic and magnetic energies is expected at the highest wave numbers in the system.
Mirror Instability in the Turbulent Solar Wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hellinger, Petr; Landi, Simone; Verdini, Andrea
2017-04-01
The relationship between decaying strong turbulence and the mirror instability in a slowly expanding plasma is investigated using two-dimensional hybrid expanding box simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we start with a spectrum of large-scale, linearly polarized, random-phase Alfvénic fluctuations that have energy equipartition between kinetic and magnetic fluctuations and a vanishing correlation between the two fields. A turbulent cascade rapidly develops; magnetic field fluctuations exhibit a Kolmogorov-like power-law spectrum at large scales and a steeper spectrum at sub-ion scales. The imposed expansion (taking a strictly transverse ambient magnetic field) leads to the generation of an important perpendicular proton temperature anisotropy that eventually drives the mirror instability. This instability generates large-amplitude, nonpropagating, compressible, pressure-balanced magnetic structures in the form of magnetic enhancements/humps that reduce the perpendicular temperature anisotropy.
Magnetic dynamo activity in mechanically driven compressible magnetohydrodynamic turbulence
NASA Technical Reports Server (NTRS)
Shebalin, John V.; Montgomery, David
1989-01-01
Magnetic dynamo activity in a homogeneous, dissipative, polytropic, two-dimensional, turbulent magneto-fluid is simulated numerically. The magneto-fluid is, in a number of cases, mechanically forced so that energy input balances dissipation, thereby maintaining constant energy. In the presence of a mean magnetic field, a magneto-fluid whose initial turbulent magnetic energy is zero quickly arrives at a state of non-zero turbulent magnetic energy. If the mean magnetic field energy density is small, the turbulent magnetic field can achieve a local energy density more than four hundred times larger; if the mean magnetic field energy density is large, then equipartition between the turbulent magnetic and kinetic energy is achieved. Compared to the presence of a mean magnetic field, compressibility appears to have only a marginal effect in mediating the transfer of turbulent kinetic energy into magnetic energy.
Diffuse Waves and Energy Densities Near Boundaries
NASA Astrophysics Data System (ADS)
Sanchez-Sesma, F. J.; Rodriguez-Castellanos, A.; Campillo, M.; Perton, M.; Luzon, F.; Perez-Ruiz, J. A.
2007-12-01
The Green function can be retrieved from averaging cross correlations of motions within a diffuse field. In fact, it has been shown that for an elastic, inhomogeneous, anisotropic medium under equipartitioned, isotropic illumination, the average cross correlations are proportional to the imaginary part of the Green function. For instance, coda waves are due to multiple scattering and their intensities follow diffusive regimes. Coda waves and the noise sample the medium and effectively carry information along their paths. In this work we explore the consequences of assuming both source and receiver at the same point. From the observational side, the autocorrelation is proportional to the energy density at a given point. On the other hand, the imaginary part of the Green function at the source itself is finite because the singularity of the Green function is restricted to the real part. The energy density at a point is proportional to the trace of the imaginary part of the Green function tensor at the source itself. The Green function availability may allow establishing the theoretical energy density of a seismic diffuse field generated by a background equipartitioned excitation. We study an elastic layer with a free surface overlying a half space and compute the imaginary part of the Green function for various depths. We show that the resulting spectrum is indeed closely related to the layer dynamic response, and the corresponding resonant frequencies are revealed. One implication of the present findings lies in the fact that spatial variations may be useful in detecting the presence of a target by its signature in the distribution of diffuse energy. These results may be useful in assessing the seismic response of a given site if strong ground motions are scarce. It suffices to have a reasonable illumination from micro-earthquakes and noise. We consider that the imaginary part of the Green function at the source is a spectral signature of the site.
The relative importance of the peaks of this energy spectrum, ruling out nonlinear effects, may influence the seismic response for future earthquakes. Partial support from DGAPA-UNAM, Project IN114706, Mexico; from Project MCyT CGL2005-05500-C02/BTE, Spain; from project DyETI of INSU-CNRS, France; and from the Instituto Mexicano del Petróleo is greatly appreciated.
Modeling Blazar Spectra by Solving an Electron Transport Equation
NASA Astrophysics Data System (ADS)
Lewis, Tiffany; Finke, Justin; Becker, Peter A.
2018-01-01
Blazars are luminous active galaxies across the entire electromagnetic spectrum, but the spectral formation mechanisms, especially the particle acceleration, in these sources are not well understood. We develop a new theoretical model for simulating blazar spectra using a self-consistent electron number distribution. Specifically, we solve the particle transport equation considering shock acceleration, adiabatic expansion, stochastic acceleration due to MHD waves, Bohm diffusive particle escape, synchrotron radiation, and Compton radiation, where we implement the full Compton cross-section for seed photons from the accretion disk, the dust torus, and 26 individual broad lines. We use a modified Runge-Kutta method to solve the second-order equation, including development of a new mathematical method for normalizing stiff steady-state ordinary differential equations. We show that our self-consistent, transport-based blazar model can qualitatively fit the IR through Fermi γ-ray data for 3C 279 with a single-zone, leptonic configuration. We use the solution for the electron distribution to calculate multi-wavelength SEDs for 3C 279. We calculate the particle and magnetic field energy densities, which suggest that the emitting region is not always in equipartition (a common assumption), but sometimes matter dominated. The stratified broad line region (based on ratios in quasar reverberation mapping, and thus adding no free parameters) improves our estimate of the location of the emitting region, increasing it by a factor of ~5. Our model provides a novel view into the physics at play in blazar jets, especially the relative strength of the shock and stochastic acceleration, where our model is well suited to distinguish between these processes, and we find that the latter tends to dominate.
Field theory of the inverse cascade in two-dimensional turbulence
NASA Astrophysics Data System (ADS)
Mayo, Jackson R.
2005-11-01
A two-dimensional fluid, stirred at high wave numbers and damped by both viscosity and linear friction, is modeled by a statistical field theory. The fluid's long-distance behavior is studied using renormalization-group (RG) methods, as begun by Forster, Nelson, and Stephen [Phys. Rev. A 16, 732 (1977)]. With friction, which dissipates energy at low wave numbers, one expects a stationary inverse energy cascade for strong enough stirring. While such developed turbulence is beyond the quantitative reach of perturbation theory, a combination of exact and perturbative results suggests a coherent picture of the inverse cascade. The zero-friction fluctuation-dissipation theorem (FDT) is derived from a generalized time-reversal symmetry and implies zero anomalous dimension for the velocity even when friction is present. Thus the Kolmogorov scaling of the inverse cascade cannot be explained by any RG fixed point. The β function for the dimensionless coupling ĝ is computed through two loops; the ĝ^3 term is positive, as already known, but the ĝ^5 term is negative. An ideal cascade requires a linear β function for large ĝ, consistent with a Padé approximant to the Borel transform. The conjecture that the Kolmogorov spectrum arises from an RG flow through large ĝ is compatible with other results, but the accurate k^(-5/3) scaling is not explained and the Kolmogorov constant is not estimated. The lack of scale invariance should produce intermittency in high-order structure functions, as observed in some but not all numerical simulations of the inverse cascade. When analogous RG methods are applied to the one-dimensional Burgers equation using an FDT-preserving dimensional continuation, equipartition is obtained instead of a cascade, in agreement with simulations.
The Nasal Geometry of the Reindeer Gives Energy-Efficient Respiration
NASA Astrophysics Data System (ADS)
Magnanelli, Elisa; Wilhelmsen, Øivind; Acquarone, Mario; Folkow, Lars P.; Kjelstrup, Signe
2017-01-01
Reindeer in the arctic region live under very harsh conditions and may face temperatures below 233 K. Therefore, efficient conservation of body heat and water is important for their survival. Alongside their insulating fur, the reindeer nasal mechanism for heat and mass exchange during respiration plays a fundamental role. We present a dynamic model to describe the heat and mass transport that takes place inside the reindeer nose, where we account for the complicated geometrical structure of the subsystems that are part of the nose. The model correctly captures the trend in experimental data for the temperature, heat and water recovery in the reindeer nose during respiration. As a reference case, we model a nose with a simple cylindrical-like geometry, where the total volume and contact area are the same as those determined in the reindeer nose. A comparison of the reindeer nose with the reference case shows that the nose geometry has a large influence on the velocity, temperature and water content of the air inside the nose. For all investigated cases, we find that the total entropy production during a breathing cycle is lower for the reindeer nose than for the reference case. The same trend is observed for the total energy consumption. The reduction in the total entropy production caused by the complicated geometry is higher (up to -20 %) at more extreme ambient conditions, when energy efficiency is presumably more important for the maintenance of energy balance in the animal. In the literature, a hypothesis has been proposed, which states that the most energy-efficient design of a system is characterized by equipartition of the entropy production. In agreement with this hypothesis, we find that the local entropy production during a breathing cycle is significantly more uniform for the reindeer nose than for the reference case. This suggests that natural selection has favored designs that give uniform entropy production when energy efficiency is an issue. 
Animals living in the harsh arctic climate, such as the reindeer, can therefore serve as inspiration for a novel industrial design with increased efficiency.
Filamentary structures in dense plasma focus: Current filaments or vortex filaments?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soto, Leopoldo, E-mail: lsoto@cchen.cl; Pavez, Cristian; Moreno, José
2014-07-15
Recent observations of an azimuthally distributed array of sub-millimeter size sources of fusion protons and correlation between extreme ultraviolet (XUV) images of filaments with neutron yield in PF-1000 plasma focus have re-kindled interest in their significance. These filaments have been described variously in literature as current filaments and vortex filaments, with very little experimental evidence in support of either nomenclature. This paper provides, for the first time, experimental observations of filaments on a table-top plasma focus device using three techniques: framing photography of visible self-luminosity from the plasma, schlieren photography, and interferometry. Quantitative evaluation of the density profile of filaments from interferometry reveals that their radius closely agrees with the collision-less ion skin depth. This is a signature of relaxed state of a Hall fluid, which has significant mass flow with equipartition between kinetic and magnetic energy, supporting the "vortex filament" description. This interpretation is consistent with empirical evidence of an efficient energy concentration mechanism inferred from nuclear reaction yields.
Relative distribution of cosmic rays and magnetic fields
NASA Astrophysics Data System (ADS)
Seta, Amit; Shukurov, Anvar; Wood, Toby S.; Bushby, Paul J.; Snodin, Andrew P.
2018-02-01
Synchrotron radiation from cosmic rays is a key observational probe of the galactic magnetic field. Interpreting synchrotron emission data requires knowledge of the cosmic ray number density, which is often assumed to be in energy equipartition (or otherwise tightly correlated) with the magnetic field energy. However, there is no compelling observational or theoretical reason to expect such a tight correlation to hold across all scales. We use test particle simulations, tracing the propagation of charged particles (protons) through a random magnetic field, to study the cosmic ray distribution at scales comparable to the correlation scale of the turbulent flow in the interstellar medium (≃100 pc in spiral galaxies). In these simulations, we find that there is no spatial correlation between the cosmic ray number density and the magnetic field energy density. In fact, their distributions are approximately statistically independent. We find that low-energy cosmic rays can become trapped between magnetic mirrors, whose location depends more on the structure of the field lines than on the field strength.
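The test-particle approach described above can be sketched in a few lines. This toy version replaces the random, mirror-forming field of the paper with a uniform field and a single proton, and merely demonstrates the standard Boris pusher that such simulations typically build on (an assumption for illustration; the paper's actual integrator is not specified here).

```python
import numpy as np

Q, M = 1.602e-19, 1.673e-27          # proton charge [C] and mass [kg]
B = np.array([0.0, 0.0, 1.0e-9])     # uniform ~1 nT field [T], interstellar scale
dt = 1.0                             # time step [s]; omega_c * dt ~ 0.1 here
v = np.array([1.0e5, 0.0, 2.0e4])    # initial velocity [m/s]

# Boris rotation vectors (constant here because B is uniform and E = 0)
t_vec = (Q * dt / (2.0 * M)) * B
s_vec = 2.0 * t_vec / (1.0 + t_vec @ t_vec)

speed0 = np.linalg.norm(v)
for _ in range(20000):
    # Boris half-rotation: v' = v + v x t, then v_new = v + v' x s
    v_prime = v + np.cross(v, t_vec)
    v = v + np.cross(v_prime, s_vec)

# A pure magnetic rotation conserves kinetic energy; check the numerical drift
rel_err = abs(np.linalg.norm(v) - speed0) / speed0
print(f"relative speed change after 20000 steps: {rel_err:.2e}")
```

Because the magnetic force does no work, the particle's speed (and here its parallel velocity component) is conserved to machine precision; in the non-uniform fields of the paper, it is the conservation of the magnetic moment that produces the mirror trapping described above.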
To, Kiwing
2014-06-01
We investigate experimentally the steady state motion of a millimeter-sized granular polyhedral object on vertically vibrating platforms of flat, conical, and parabolic surfaces. We find that the position distribution of the granular object is related to the shape of the platform, just like that of a Brownian particle trapped in a potential at equilibrium, even though the granular object is intrinsically not at equilibrium due to inelastic collisions with the platform. From the collision dynamics, we derive the Langevin equation which describes the motion of the object under an effective potential that equals the gravitational potential along the platform surface. The potential energy is found to agree with the equilibrium equipartition theorem while the kinetic energy does not. Furthermore, the granular temperature is found to be higher than the effective temperature associated with the average potential energy, suggesting the presence of heat transfer from the kinetic part to the potential part of the granular object.
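The potential-energy equipartition reported above can be illustrated with an ordinary equilibrium Langevin simulation in a harmonic trap, where the theorem predicts 0.5 k <x^2> = 0.5 k_B T. All parameters below are assumptions chosen for a quick run; this is a textbook equilibrium reference, not a model of the vibrated granular experiment itself.

```python
import numpy as np

rng = np.random.default_rng(1)
k, kBT, gamma_f = 2.0, 1.0, 1.0      # spring constant, thermal energy, friction
dt, nsteps = 1e-3, 1_000_000

x = 0.0
xs = np.empty(nsteps)
for i in range(nsteps):
    # Euler-Maruyama step of overdamped Langevin dynamics:
    # dx = -(k / gamma) x dt + sqrt(2 kBT / gamma) dW
    x += -(k / gamma_f) * x * dt \
         + np.sqrt(2.0 * kBT / gamma_f * dt) * rng.standard_normal()
    xs[i] = x

mean_pot = 0.5 * k * np.mean(xs**2)
print(f"<U> = {mean_pot:.3f}  (equipartition predicts {0.5 * kBT:.3f})")
```

The simulated mean potential energy converges to k_B T / 2, the equilibrium benchmark against which the abstract's finding (potential part equilibrium-like, kinetic part not) is stated.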
Modeling the Solar Convective Dynamo and Emerging Flux
NASA Astrophysics Data System (ADS)
Fan, Y.
2017-12-01
Significant advances have been made in recent years in global-scale fully dynamic three-dimensional convective dynamo simulations of the solar/stellar convective envelopes to reproduce some of the basic features of the Sun's large-scale cyclic magnetic field. It is found that the presence of the dynamo-generated magnetic fields plays an important role for the maintenance of the solar differential rotation, without which the differential rotation tends to become anti-solar (with a faster rotating pole instead of the observed faster rotation at the equator). Convective dynamo simulations are also found to produce emergence of coherent super-equipartition toroidal flux bundles with a statistically significant mean tilt angle that is consistent with the mean tilt of solar active regions. The emerging flux bundles are sheared by the giant cell convection into a forward leaning loop shape with its leading side (in the direction of rotation) pushed closer to the strong downflow lanes. Such asymmetric emerging flux pattern may lead to the observed asymmetric properties of solar active regions.
Simulations of a binary-sized mixture of inelastic grains in rapid shear flow.
Clelland, R; Hrenya, C M
2002-03-01
In an effort to explore the rapid flow behavior associated with a binary-sized mixture of grains and to assess the predictive ability of the existing theory for such systems, molecular-dynamic simulations have been carried out. The system under consideration is composed of inelastic, smooth, hard disks engaged in rapid shear flow. The simulations indicate that nondimensional stresses decrease with an increase in d_L/d_S (ratio of large particle diameter to small particle diameter) or a decrease in ν_L/ν_S (area fraction ratio), as is also predicted by the kinetic theory of Willits and Arnarson [Phys. Fluids 11, 3116 (1999)]. Furthermore, the level of quantitative agreement between the theoretical stress predictions and simulation data is good over the entire range of parameters investigated. Nonetheless, the molecular-dynamic simulations also show that the assumption of an equipartition of energy rapidly deteriorates as the coefficient of restitution is decreased. The magnitude of this energy difference is found to increase with the difference in particle sizes.
Statistical analysis of Hasegawa-Wakatani turbulence
NASA Astrophysics Data System (ADS)
Anderson, Johan; Hnat, Bogdan
2017-06-01
Resistive drift wave turbulence is a multipurpose paradigm that can be used to understand transport at the edge of fusion devices. The Hasegawa-Wakatani model captures the essential physics of drift turbulence while retaining the simplicity needed to gain a qualitative understanding of this process. We provide a theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent events in Hasegawa-Wakatani turbulence with enforced equipartition of energy between large-scale zonal flows and small-scale drift turbulence. We find that for a wide range of adiabatic index values, the stochastic component representing the small-scale turbulent eddies of the flow, obtained from the autoregressive integrated moving average model, exhibits super-diffusive statistics, consistent with intermittent transport. The PDFs of large events (above one standard deviation) are well approximated by the Laplace distribution, while small events often exhibit a Gaussian character. Furthermore, there exists a strong influence of zonal flows, for example, via shearing and then viscous dissipation maintaining a sub-diffusive character of the fluxes.
Giant Metrewave Radio Telescope Observations of Head–Tail Radio Galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sebastian, Biny; Lal, Dharam V.; Rao, A. Pramesh, E-mail: biny@ncra.tifr.res.in
We present results from a study of seven large known head–tail radio galaxies based on observations using the Giant Metrewave Radio Telescope at 240 and 610 MHz. These observations are used to study the radio morphologies and distribution of the spectral indices across the sources. The overall morphology of the radio tails of these sources is suggestive of random motions of the optical host around the cluster potential. The presence of multiple bends and wiggles in several head–tail sources is possibly due to precessing radio jets. We find steepening of the spectral index along the radio tails. The prevailing equipartition magnetic field also decreases along the radio tails of these sources. These steepening trends are attributed to the synchrotron aging of plasma toward the ends of the tails. The dynamical ages of these sample sources have been estimated to be ~10^8 yr, which is a factor of six more than the age estimates from the radiative losses due to synchrotron cooling.
Magnetic Field Generation, Particle Energization and Radiation at Relativistic Shear Boundary Layers
NASA Astrophysics Data System (ADS)
Liang, Edison; Fu, Wen; Spisak, Jake; Boettcher, Markus
2015-11-01
Recent large-scale Particle-in-Cell (PIC) simulations have demonstrated that in unmagnetized relativistic shear flows, strong transverse d.c. magnetic fields are generated and sustained by ion-dominated currents on the opposite sides of the shear interface. Instead of dissipating the shear flow free energy via turbulence formation and mixing, as is usually found in MHD simulations, the kinetic results show that the relativistic boundary layer stabilizes itself via the formation of a robust vacuum gap supported by a strong magnetic field, which effectively separates the opposing shear flows, as in a maglev train. Our new PIC simulations have extended the runs to many tens of light-crossing times of the simulation box. Both the vacuum gap and the supporting magnetic field remain intact. The electrons are energized to reach energy equipartition with the ions, with 10% of the total energy in electromagnetic fields. The dominant radiation mechanism is similar to that of a wiggler, due to oscillating electron orbits around the boundary layer.
The radial gradients and collisional properties of solar wind electrons
NASA Technical Reports Server (NTRS)
Ogilvie, K. W.; Scudder, J. D.
1977-01-01
The plasma instrument on Mariner 10 carried out measurements of electron density and temperature in the interplanetary medium between heliocentric distances of 0.85 and 0.45 AU. Due to the stable coronal configuration and low solar activity during the period of observation, the radial variations of these quantities could be obtained. The power-law exponent of the core temperature was measured to be -0.3 ± 0.04, and the halo temperature was found to be almost independent of heliocentric distance. The exponent of the power law for the density variation was 2.5 ± 0.2 and the extrapolated value at 1 AU was consistent with measured values during the same period. Calculations of the core electron self-collision time, and the core-halo equipartition time were made as a function of radial distance. These measurements indicate a macroscale picture of a Coulomb-collisional core and a collisionless isothermal halo. Extrapolating back to the Sun, core and halo temperatures become equal at a radial distance of approximately 2-15 solar radii.
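The inward extrapolation described above is a one-line power-law calculation. In this sketch the 1 AU temperatures are hypothetical round numbers chosen for illustration (they are not the Mariner 10 values); only the -0.3 core exponent and the radially constant halo come from the abstract.

```python
# Hypothetical 1 AU electron temperatures (illustrative, NOT the measured values):
T_CORE_1AU = 1.3e5      # K, core temperature at 1 AU
T_HALO     = 4.0e5      # K, halo temperature, taken radially constant
ALPHA      = 0.3        # measured core power-law exponent: T_core ~ r^(-0.3)
AU_IN_RSUN = 215.0      # solar radii per AU

# Solve T_CORE_1AU * r**(-ALPHA) = T_HALO for r (r in AU):
r_eq_au = (T_CORE_1AU / T_HALO) ** (1.0 / ALPHA)
print(f"core and halo temperatures meet at ~{r_eq_au * AU_IN_RSUN:.1f} solar radii")
```

With these assumed temperatures the crossing radius lands near 5 solar radii, inside the 2-15 solar radii range quoted in the abstract.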
NASA Astrophysics Data System (ADS)
To, Kiwing
2014-06-01
We investigate experimentally the steady state motion of a millimeter-sized granular polyhedral object on vertically vibrating platforms of flat, conical, and parabolic surfaces. We find that the position distribution of the granular object is related to the shape of the platform, just like that of a Brownian particle trapped in a potential at equilibrium, even though the granular object is intrinsically not at equilibrium due to inelastic collisions with the platform. From the collision dynamics, we derive the Langevin equation which describes the motion of the object under an effective potential that equals the gravitational potential along the platform surface. The potential energy is found to agree with the equilibrium equipartition theorem while the kinetic energy does not. Furthermore, the granular temperature is found to be higher than the effective temperature associated with the average potential energy, suggesting the presence of heat transfer from the kinetic part to the potential part of the granular object.
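The comparison between the granular position statistics and an equilibrium Brownian particle can be illustrated with a minimal Langevin integration in a harmonic trap. This is a sketch in arbitrary units; the trap stiffness, damping, temperature, and time step are all assumptions, and the scheme is a simple Euler-Maruyama discretization rather than the paper's collision-based derivation.

```python
import math
import random

def equilibrium_msd(k=1.0, gamma=1.0, kT=1.0, dt=1e-2, steps=400_000, seed=1):
    """Euler-Maruyama integration of an underdamped Langevin particle in a
    harmonic trap V(x) = k x^2 / 2; returns the time-averaged <x^2>.
    Equipartition predicts <x^2> = kT / k for the potential part."""
    rng = random.Random(seed)
    x, v, acc = 0.0, 0.0, 0.0
    noise = math.sqrt(2.0 * gamma * kT * dt)   # fluctuation-dissipation amplitude
    for _ in range(steps):
        v += (-gamma * v - k * x) * dt + noise * rng.gauss(0.0, 1.0)
        x += v * dt
        acc += x * x
    return acc / steps

msd = equilibrium_msd()
print(f"<x^2> = {msd:.3f}  (equipartition prediction: kT/k = 1.0)")
```

For a true equilibrium bath the kinetic part would satisfy the same theorem; the paper's point is that for the vibrated granular object the potential part does while the kinetic part does not.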
Energy partition, scale by scale, in magnetic Archimedes Coriolis weak wave turbulence.
Salhi, A; Baklouti, F S; Godeferd, F; Lehner, T; Cambon, C
2017-02-01
Magnetic Archimedes Coriolis (MAC) waves are omnipresent in several geophysical and astrophysical flows such as the solar tachocline. In the present study, we use linear spectral theory (LST) and investigate the energy partition, scale by scale, in MAC weak wave turbulence for a Boussinesq fluid. At the scale k^{-1}, the maximal frequencies of magnetic (Alfvén) waves, gravity (Archimedes) waves, and inertial (Coriolis) waves are, respectively, V_{A}k, N, and f. By using the induction potential scalar, which is a Lagrangian invariant for a diffusionless Boussinesq fluid [Salhi et al., Phys. Rev. E 85, 026301 (2012)], we derive a dispersion relation for the three-dimensional MAC waves, generalizing previous ones including that of f-plane MHD "shallow water" waves [Schecter et al., Astrophys. J. 551, L185 (2001)]. A solution for the Fourier amplitude of perturbation fields (velocity, magnetic field, and density) is derived analytically considering a diffusive fluid for which both the magnetic and thermal Prandtl numbers are one. The radial spectra of kinetic, S_{κ}(k,t), magnetic, S_{m}(k,t), and potential, S_{p}(k,t), energies are determined considering initial isotropic conditions. For magnetic Coriolis (MC) weak wave turbulence, it is shown that, at large scales such that V_{A}k/f≪1, the Alfvén ratio S_{κ}(k,t)/S_{m}(k,t) behaves like k^{-2} if the rotation axis is aligned with the magnetic field, in agreement with previous direct numerical simulations [Favier et al., Geophys. Astrophys. Fluid Dyn. (2012)], and like k^{-1} if the rotation axis is perpendicular to the magnetic field. At small scales, such that V_{A}k/f≫1, there is an equipartition of energy between magnetic and kinetic components.
For magnetic Archimedes weak wave turbulence, it is demonstrated that, at large scales (V_{A}k/N≪1), there is an equipartition of energy between magnetic and potential components, while at small scales (V_{A}k/N≫1), the ratio S_{p}(k,t)/S_{κ}(k,t) behaves like k^{-1} and S_{κ}(k,t)/S_{m}(k,t)=1. Also, for MAC weak wave turbulence, it is shown that, at small scales (V_{A}k/sqrt[N^{2}+f^{2}]≫1), the ratio S_{p}(k,t)/S_{κ}(k,t) behaves like k^{-1} and S_{κ}(k,t)/S_{m}(k,t)=1.
A Fokker-Planck based kinetic model for diatomic rarefied gas flows
NASA Astrophysics Data System (ADS)
Gorji, M. Hossein; Jenny, Patrick
2013-06-01
A Fokker-Planck based kinetic model is presented here, which also accounts for the internal energy modes characteristic of diatomic gas molecules. The model is based on a Fokker-Planck approximation of the Boltzmann equation for monatomic molecules, whereas phenomenological principles were employed for the derivation. It is shown that the model honors the equipartition theorem in equilibrium and fulfills the Landau-Teller relaxation equations for internal degrees of freedom. The objective behind this approximate kinetic model is accuracy at reasonably low computational cost. This can be achieved due to the fact that the resulting stochastic differential equations are continuous in time; therefore, no collisions between the simulated particles have to be calculated. Moreover, because of the devised energy-conserving time integration scheme, it is not required to resolve the collisional scales, i.e., the mean collision time and the mean free path of molecules. This, of course, gives rise to much more efficient simulations with respect to other particle methods, especially the conventional direct simulation Monte Carlo (DSMC), for small and moderate Knudsen numbers. To examine the new approach, first the computational cost of the model was compared with that of DSMC, where significant speed-up could be obtained for small Knudsen numbers. Second, the structure of a high-Mach shock (in nitrogen) was studied, and the good performance of the model for such out-of-equilibrium conditions could be demonstrated. Finally, a hypersonic flow of nitrogen over a wedge was studied, where good agreement with respect to DSMC (with a level-to-level transition model) for vibrational and translational temperatures is shown.
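The equipartition property the model honors can be spelled out for a diatomic gas. This is a sketch of the textbook result (1/2 kT per quadratic degree of freedom), not of the Fokker-Planck model itself; the function name and the vibration flag are my own illustrative choices.

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
R   = 8.314462618    # molar gas constant, J/(mol K)

def diatomic_energy_per_molecule(T, vibration_active=False):
    """Classical equipartition: 1/2 kT per quadratic degree of freedom.
    A rigid diatomic has 3 translational + 2 rotational DOF; an active
    vibrational mode adds 2 more (kinetic + potential)."""
    dof = 5 + (2 if vibration_active else 0)
    return 0.5 * dof * K_B * T

e300 = diatomic_energy_per_molecule(300.0)     # rigid rotor, e.g. N2 near 300 K
cv_molar = 2.5 * R                             # corresponding molar heat capacity
gamma = (cv_molar + R) / cv_molar              # adiabatic index Cp/Cv
print(f"<E> = {e300:.3e} J, Cv = {cv_molar:.2f} J/(mol K), gamma = {gamma:.2f}")
```

Landau-Teller relaxation, which the model also satisfies, governs how the internal (rotational/vibrational) energy approaches this equilibrium share.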
Temporal evolution of the Green's function reconstruction in the seismic coda
NASA Astrophysics Data System (ADS)
Clerc, V.; Roux, P.; Campillo, M.
2013-12-01
In the presence of multiple scattering, the wavefield evolves towards an equipartitioned state, equivalent to ambient noise. CAMPILLO and PAUL (2003) reconstructed the surface wave part of the Green's function between three pairs of stations in Mexico. The data indicate that the time asymmetry between the causal and acausal parts of the Green's function is less pronounced when the correlation is performed in the later windows of the coda. These results on the correlation of diffuse waves provide another perspective on the reconstruction of the Green's function, one that is independent of the source distribution and suggests that, if the time of observation is long enough, a single source could be sufficient. The paper by ROUX et al. (2005) provides a theoretical frame for the reconstruction of the Green's function in a homogeneous medium. In a multiple-scattering medium with a single source, scatterers behave as secondary sources according to the Huygens principle. Coda waves correspond to the multiple-scattering regime, which can be approximated by diffusion for long lapse times. We express the temporal evolution of the correlation function between two receivers as a function of the secondary sources. We are able to predict the effect of the persistence of the net flux of energy observed by CAMPILLO and PAUL (2003) in numerical simulations. This method is also effective for retrieving the scattering mean free path. We perform a partial reconstruction of the Green's function in a strongly scattering medium in numerical simulations. The prediction of the flux asymmetry allows defining the parts of the coda that provide the same information as ambient-noise cross correlation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feiden, Gregory A.; Chaboyer, Brian, E-mail: gregory.a.feiden.gr@dartmouth.edu, E-mail: brian.chaboyer@dartmouth.edu
2013-12-20
Magnetic fields are hypothesized to inflate the radii of low-mass stars—defined as less massive than 0.8 M_☉—in detached eclipsing binaries (DEBs). We investigate this hypothesis using the recently introduced magnetic Dartmouth stellar evolution code. In particular, we focus on stars thought to have a radiative core and a convective outer envelope by studying in detail three individual DEBs: UV Psc, YY Gem, and CU Cnc. Our results suggest that the stabilization of thermal convection by a magnetic field is a plausible explanation for the observed model-radius discrepancies. However, the surface magnetic field strengths required by the models are significantly stronger than those estimated from observed coronal X-ray emission. Agreement between the model-predicted surface magnetic field strengths and those inferred from X-ray observations can be found by assuming that the magnetic field sources its energy from convection. This approach makes the transport of heat by convection less efficient and is akin to the reduced convective mixing length methods used in other studies. Predictions for the metallicity and magnetic field strengths of the aforementioned systems are reported. We also develop an expression relating a reduction in the convective mixing length to a magnetic field strength in units of the equipartition value. Our results are compared with those from previous investigations that incorporated magnetic fields to explain the radius inflation of low-mass DEBs. Finally, we explore how magnetic fields might affect mass determinations using asteroseismic data and the implications of magnetic fields for exoplanet studies.
Magnetic field evolution in dwarf and Magellanic-type galaxies
NASA Astrophysics Data System (ADS)
Siejkowski, H.; Soida, M.; Chyży, K. T.
2018-03-01
Aims: Radio observations of low-mass galaxies reveal, in many cases, surprisingly high levels of magnetic field. The mass and kinematics of such objects do not favour the development of effective large-scale dynamo action. We check whether the cosmic-ray-driven dynamo can be responsible for the measured magnetization in this class of poorly investigated objects. We also investigate how starburst events, affecting either the whole galactic disk or only part of it, influence the magnetic field evolution. Methods: We created a model of a dwarf/Magellanic-type galaxy described by a gravitational potential with two components: the stars and the dark-matter halo. The model is evolved by solving the three-dimensional (3D) magnetohydrodynamic equations with an additional cosmic-ray component, which is approximated as a fluid. The turbulence is generated in the system via supernova explosions manifested by the injection of cosmic rays. Results: The cosmic-ray-driven dynamo works efficiently enough to amplify the magnetic field even in low-mass dwarf/Magellanic-type galaxies. The e-folding times of magnetic energy growth are 0.50 and 0.25 Gyr for the slow (50 km s-1) and fast (100 km s-1) rotators, respectively. The amplification is suppressed as the system reaches equipartition between the kinetic, magnetic, and cosmic-ray energies. A starburst episode amplifies the magnetic field, but only for as long as the increased star formation activity lasts. We find that a substantial amount of gas is expelled from the galactic disk, and that the starburst events increase the efficiency of this process.
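The quoted e-folding times translate directly into amplification timescales before equipartition saturates the growth. The 10^6 energy-amplification factor below is an illustrative assumption for a weak seed field; only the 0.50 and 0.25 Gyr e-folding times come from the abstract.

```python
import math

def amplification_time(tau_gyr, energy_growth_factor):
    """Time (Gyr) for exponential magnetic-energy growth E(t) = E0 * exp(t/tau)
    to reach a given amplification factor, assuming the kinematic phase lasts
    until equipartition."""
    return tau_gyr * math.log(energy_growth_factor)

# e-folding times from the abstract; the 1e6 amplification is assumed.
for tau, label in [(0.50, "slow rotator (50 km/s)"), (0.25, "fast rotator (100 km/s)")]:
    t = amplification_time(tau, 1e6)
    print(f"{label}: ~{t:.1f} Gyr to amplify the magnetic energy 1e6x")
```

With these numbers the fast rotator reaches a given amplification in half the time of the slow one, consistent with the factor-of-two ratio of e-folding times.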
Fermi Large Area Telescope observations of the supernova remnant HESS J1731-347
NASA Astrophysics Data System (ADS)
Yang, Rui-zhi; Zhang, Xiao; Yuan, Qiang; Liu, Siming
2014-07-01
Context. HESS J1731-347 has been identified as one of the few TeV-bright shell-type supernova remnants (SNRs). These remnants are dominated by nonthermal emission, and the nature of the TeV emission has been continuously debated for nearly a decade. Aims: We carry out detailed modeling of the radio to γ-ray spectrum of HESS J1731-347 to constrain the magnetic field and energetic particle sources, which we compare with those of the other TeV-bright shell-type SNRs explored before. Methods: Four years of data from Fermi Large Area Telescope (LAT) observations of regions around this remnant are analyzed, leading to no detection correlated with the source discovered in the TeV band. The Markov chain Monte Carlo method is used to constrain parameters of one-zone models for the overall emission spectrum. Results: Based on the 99.9% upper limits of fluxes in the GeV range, one-zone hadronic models with an energetic proton spectral slope greater than 1.8 can be ruled out, which favors a leptonic origin for the γ-ray emission, making this remnant a sibling of the brightest TeV SNR RX J1713.7-3946, the Vela Junior SNR RX J0852.0-4622, and RCW 86. The best-fit leptonic model has an electron spectral slope of 1.8 and a magnetic field of ~30 μG, which is at least a factor of 2 higher than those of RX J1713.7-3946 and RX J0852.0-4622, posing a challenge to the distance estimate and/or the energy equipartition between energetic electrons and the magnetic field of this source. A measurement of the shock speed would address this challenge and has implications for the magnetic field evolution and electron acceleration driven by SNR shocks.
Very Large Array OH Zeeman Observations of the Star-forming Region S88B
NASA Astrophysics Data System (ADS)
Sarma, A. P.; Brogan, C. L.; Bourke, T. L.; Eftimova, M.; Troland, T. H.
2013-04-01
We present observations of the Zeeman effect in OH thermal absorption main lines at 1665 and 1667 MHz taken with the Very Large Array toward the star-forming region S88B. The OH absorption profiles toward this source are complicated, and contain several blended components toward a number of positions. Almost all of the OH absorbing gas is located in the eastern parts of S88B, toward the compact continuum source S88B-2 and the eastern parts of the extended continuum source S88B-1. The ratio of 1665/1667 MHz OH line intensities indicates the gas is likely highly clumped, in agreement with other molecular emission line observations in the literature. S88B appears to present a similar geometry to the well-known star-forming region M17, in that there is an edge-on eastward progression from ionized to molecular gas. The detected magnetic fields appear to mirror this eastward transition; we detected line-of-sight magnetic fields ranging from 90 to 400 μG, with the lowest values of the field to the southwest of the S88B-1 continuum peak, and the highest values to its northeast. We used the detected fields to assess the importance of the magnetic field in S88B by a number of methods: we calculated the ratio of thermal to magnetic pressures, we calculated the critical field necessary to completely support the cloud against self-gravity and compared it to the observed field, and we calculated the ratio of mass to magnetic flux in terms of the critical value of this parameter. All these methods indicated that the magnetic field in S88B is dynamically significant, and should provide an important source of support against gravity. Moreover, the magnetic energy density is in approximate equipartition with the turbulent energy density, again pointing to the importance of the magnetic field in this region.
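The pressure comparisons described above all start from the magnetic energy density. A minimal sketch for the reported field range, in cgs units (note that the measured line-of-sight field is a lower limit on the total field, so these are lower limits on the magnetic pressure):

```python
import math

def magnetic_pressure_cgs(b_gauss):
    """Magnetic pressure (= magnetic energy density) B^2 / (8*pi) in erg cm^-3."""
    return b_gauss ** 2 / (8.0 * math.pi)

# Line-of-sight field strengths reported for S88B: 90-400 microgauss.
p_low  = magnetic_pressure_cgs(90e-6)
p_high = magnetic_pressure_cgs(400e-6)
print(f"P_B(90 uG)  = {p_low:.2e} erg/cm^3")
print(f"P_B(400 uG) = {p_high:.2e} erg/cm^3")
```

Approximate equipartition with turbulence then means the turbulent energy density (1/2) rho sigma_v^2 is of the same order as these values.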
The concept of temperature in space plasmas
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2017-12-01
Independently of the initial distribution function, once the system is thermalized, its particles are stabilized into a specific distribution function parametrized by a temperature. Classical particle systems in thermal equilibrium have their phase-space distribution stabilized into a Maxwell-Boltzmann function. In contrast, space plasmas are particle systems frequently described by stationary states out of thermal equilibrium, namely, their distribution is stabilized into a function that is typically described by kappa distributions. The temperature is well-defined for systems at thermal equilibrium or stationary states described by kappa distributions. This is based on the equivalence of the two fundamental definitions of temperature, that is (i) the kinetic definition of Maxwell (1866) and (ii) the thermodynamic definition of Clausius (1862). This equivalence holds either for Maxwellians or kappa distributions, leading also to the equipartition theorem. The temperature and kappa index (together with density) are globally independent parameters characterizing the kappa distribution. While there is no equation of state or any universal relation connecting these parameters, various local relations may exist along the streamlines of space plasmas. Observations revealed several types of such local relations among plasma thermal parameters.
NASA Technical Reports Server (NTRS)
Stepinski, Tomasz F.; Reyes-Ruiz, Mauricio; Vanhala, Harri A. T.
1993-01-01
A hydromagnetic dynamo provides the best mechanism for contemporaneously producing magnetic fields in a turbulent solar nebula. We investigate the solar nebula in the framework of a steady-state accretion disk model and establish the criteria for a viable nebular dynamo. We have found that typically a magnetic gap exists in the nebula, the region where the degree of ionization is too small for the magnetic field to couple to the gas. The location and width of this gap depend on the particular model; the supposition is that gaps cover different parts of the nebula at different evolutionary stages. We have found, from several dynamical constraints, that the generated magnetic field is likely to saturate at a strength equal to equipartition with the kinetic energy of turbulence. Maxwell stress arising from a large-scale magnetic field may significantly influence nebular structure, and Maxwell stress due to small-scale fields can actually dominate other stresses in the inner parts of the nebula. We also argue that the bulk of nebular gas, within the scale height from the midplane, is stable against Balbus-Hawley instability.
The Cosmic Battery in Astrophysical Accretion Disks
NASA Astrophysics Data System (ADS)
Contopoulos, Ioannis; Nathanail, Antonios; Katsanikas, Matthaios
2015-06-01
The aberrated radiation pressure at the inner edge of the accretion disk around an astrophysical black hole imparts a relative azimuthal velocity on the electrons with respect to the ions which gives rise to a ring electric current that generates large-scale poloidal magnetic field loops. This is the Cosmic Battery established by Contopoulos and Kazanas in 1998. In the present work we perform realistic numerical simulations of this important astrophysical mechanism in advection-dominated accretion flows, ADAFs. We confirm the original prediction that the inner parts of the loops are continuously advected toward the central black hole and contribute to the growth of the large-scale magnetic field, whereas the outer parts of the loops are continuously diffusing outward through the turbulent accretion flow. This process of inward advection of the axial field and outward diffusion of the return field proceeds all the way to equipartition, thus generating astrophysically significant magnetic fields on astrophysically relevant timescales. We confirm that there exists a critical value of the magnetic Prandtl number between unity and 10 in the outer disk above which the Cosmic Battery mechanism is suppressed.
Jacobson, Daniel; Stratt, Richard M
2014-05-07
Because the geodesic pathways that a liquid follows through its potential energy landscape govern its slow, diffusive motion, we suggest that these pathways are logical candidates for the title of a liquid's "inherent dynamics." Like their namesake "inherent structures," these objects are simply features of the system's potential energy surface and thus provide views of the system's structural evolution unobstructed by thermal kinetic energy. This paper shows how these geodesic pathways can be computed for a liquid of linear molecules, allowing us to see precisely how such molecular liquids mix rotational and translational degrees of freedom into their dynamics. The ratio of translational to rotational components of the geodesic path lengths, for example, is significantly larger than would be expected on equipartition grounds, with a value that scales with the molecular aspect ratio. These and other features of the geodesics are consistent with a picture in which molecular reorientation adiabatically follows translation: molecules largely thread their way through narrow channels available in the potential energy landscape.
Route to thermalization in the α-Fermi–Pasta–Ulam system
Onorato, Miguel; Vozella, Lara; Lvov, Yuri V.
2015-01-01
We study the original α-Fermi–Pasta–Ulam (FPU) system with N = 16, 32, and 64 masses connected by nonlinear quadratic springs. Our approach is based on resonant wave–wave interaction theory; i.e., we assume that, in the weakly nonlinear regime (the one in which Fermi was originally interested), the large-time dynamics is ruled by exact resonances. After a detailed analysis of the α-FPU equation of motion, we find that the first nontrivial resonances correspond to six-wave interactions. Those are precisely the interactions responsible for the thermalization of the energy in the spectrum. We predict that, for small-amplitude random waves, the timescale of such interactions is extremely large, of the order of 1/ε^8, where ε is the small parameter in the system. The wave–wave interaction theory is not based on any threshold: equipartition is predicted for arbitrarily small nonlinearity. Our results are supported by extensive numerical simulations. A key role in our finding is played by the Umklapp (flip-over) resonant interactions, typical of discrete systems. The thermodynamic limit is also briefly discussed. PMID:25805822
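The weakly nonlinear α-FPU dynamics described above can be integrated directly. The following is a minimal velocity-Verlet sketch, not the authors' code: the chain size, α, time step, and single-mode initial excitation are all illustrative assumptions, and the energy-drift check only verifies the integrator, not the 1/ε^8 thermalization time (which is far beyond this run length).

```python
import math

def fpu_alpha_step(q, p, alpha, dt):
    """One velocity-Verlet step of the alpha-FPU chain with fixed ends
    (q[0] = q[-1] = 0).  Force on site n comes from the two adjacent
    quadratic-nonlinear springs."""
    def forces(q):
        f = [0.0] * len(q)
        for n in range(1, len(q) - 1):
            dr, dl = q[n + 1] - q[n], q[n] - q[n - 1]
            f[n] = (dr - dl) + alpha * (dr * dr - dl * dl)
        return f
    f = forces(q)
    p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
    q = [qi + dt * pi for qi, pi in zip(q, p)]
    f = forces(q)
    p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
    return q, p

def energy(q, p, alpha):
    kin = sum(pi * pi for pi in p) / 2.0
    pot = sum((q[n + 1] - q[n]) ** 2 / 2.0 + alpha * (q[n + 1] - q[n]) ** 3 / 3.0
              for n in range(len(q) - 1))
    return kin + pot

# Excite only the lowest mode of an N = 32 chain at small amplitude
# (the weakly nonlinear regime the abstract discusses).
N, alpha, dt = 32, 0.25, 0.05
q = [0.1 * math.sin(math.pi * n / (N + 1)) for n in range(N + 2)]
p = [0.0] * (N + 2)
e0 = energy(q, p, alpha)
for _ in range(10_000):
    q, p = fpu_alpha_step(q, p, alpha, dt)
drift = abs(energy(q, p, alpha) - e0) / e0
print(f"relative energy drift after 10000 steps: {drift:.1e}")
```

Tracking the per-mode energies of such a run over much longer times is how the predicted slow route to equipartition is probed numerically.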
Fluctuations in the DNA double helix
NASA Astrophysics Data System (ADS)
Peyrard, M.; López, S. C.; Angelov, D.
2007-08-01
DNA is not the static entity suggested by the famous double-helix structure. It shows large fluctuational openings, in which the bases, which contain the genetic code, are temporarily open. Therefore it is an interesting system for studying the effect of nonlinearity on the physical properties of a system. A simple model of DNA at a mesoscopic scale can be investigated by computer simulation, in the same spirit as the original work of Fermi, Pasta and Ulam. These calculations raise fundamental questions in statistical physics because they show a temporary breaking of the equipartition of energy: regions with large-amplitude fluctuations are able to coexist with regions where the fluctuations are very small, even when the model is studied in the canonical ensemble. This phenomenon can be related to nonlinear excitations in the model. The ability of the model to describe the actual properties of DNA is discussed by comparing theoretical and experimental results for the probability that base pairs open at a given temperature in specific DNA sequences. These studies give us indications about the proper description of the effect of the sequence in the mesoscopic model.
Shock Corrugation by Rayleigh-Taylor Instability in Gamma-Ray Burst Afterglow Jets
NASA Astrophysics Data System (ADS)
Duffell, Paul C.; MacFadyen, Andrew I.
2014-08-01
Afterglow jets are Rayleigh-Taylor unstable and therefore turbulent during the early part of their deceleration. There are also several processes which actively cool the jet. In this Letter, we demonstrate that if cooling significantly increases the compressibility of the flow, the turbulence collides with the forward shock, destabilizing and corrugating it. In this case, the forward shock is turbulent enough to produce the magnetic fields responsible for synchrotron emission via small-scale turbulent dynamo. We calculate light curves assuming the magnetic field is in energy equipartition with the turbulent kinetic energy and discover that dynamic magnetic fields are well approximated by a constant magnetic-to-thermal energy ratio of 1%, though there is a sizeable delay in the time of peak flux as the magnetic field turns on only after the turbulence has activated. The reverse shock is found to be significantly more magnetized than the forward shock, with a magnetic-to-thermal energy ratio of the order of 10%. This work motivates future Rayleigh-Taylor calculations using more physical cooling models.
Direct simulation of compressible turbulence in a shear flow
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.
1991-01-01
Compressibility effects on the turbulence in homogeneous shear flow are investigated. The growth of the turbulent kinetic energy was found to decrease with increasing Mach number: a phenomenon which is similar to the reduction of turbulent velocity intensities observed in experiments on supersonic free shear layers. An examination of the turbulent energy budget shows that both the compressible dissipation and the pressure-dilatation contribute to the decrease in the growth of kinetic energy. The pressure-dilatation is predominantly negative in homogeneous shear flow, in contrast to its predominantly positive behavior in isotropic turbulence. The different signs of the pressure-dilatation are explained by theoretical consideration of the equations for the pressure variance and density variance. Previously, the following results were obtained for isotropic turbulence: (1) the normalized compressible dissipation is of O(M_t^2); and (2) there is approximate equipartition between the kinetic and potential energies associated with the fluctuating compressible mode. Both of these results were substantiated in the case of homogeneous shear. The dilatation field is significantly more skewed and intermittent than the vorticity field. Strong compressions seem to be more likely than strong expansions.
NASA Technical Reports Server (NTRS)
Kulsrud, Russell M.; Anderson, Stephen W.
1992-01-01
The fluctuation spectrum that must arise in a mean field dynamo generation of galactic fields if the initial field is weak is considered. A kinetic equation for its evolution is derived and solved. The spectrum evolves by transfer of energy from one magnetic mode to another by interaction with turbulent velocity modes. This kinetic equation is valid in the limit that the rate of evolution of the magnetic modes is slower than the reciprocal decorrelation time of the turbulent modes. This turns out to be the case by a factor greater than 3. Most of the fluctuation energy concentrates on small scales, shorter than the hydrodynamic turbulent scales. The fluctuation energy builds up to equipartition with the turbulent energy in times that are short compared to the e-folding time of the mean field. The turbulence becomes strongly modified before the dynamo amplification starts. Thus, the kinematic assumption of the mean dynamo theory is invalid, and the galactic field must therefore have a primordial origin, although it may subsequently be modified by dynamo action.
Magnetoacoustic Waves in a Stratified Atmosphere with a Magnetic Null Point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarr, Lucas A.; Linton, Mark; Leake, James, E-mail: lucas.tarr.ctr@nrl.navy.mil
2017-03-01
We perform nonlinear MHD simulations to study the propagation of magnetoacoustic waves from the photosphere to the low corona. We focus on a 2D system with a gravitationally stratified atmosphere and three photospheric concentrations of magnetic flux that produce a magnetic null point with a magnetic dome topology. We find that a single wavepacket introduced at the lower boundary splits into multiple secondary wavepackets. A portion of the packet refracts toward the null owing to the varying Alfvén speed. Waves incident on the equipartition contour surrounding the null, where the sound and Alfvén speeds coincide, partially transmit, reflect, and mode-convert between branches of the local dispersion relation. Approximately 15.5% of the wavepacket’s initial energy (E_input) converges on the null, mostly as a fast magnetoacoustic wave. Conversion is very efficient: 70% of the energy incident on the null is converted to slow modes propagating away from the null, 7% leaves as a fast wave, and the remaining 23% (0.036 E_input) is locally dissipated. The acoustic energy leaving the null is strongly concentrated along field lines near each of the null’s four separatrices. The portion of the wavepacket that refracts toward the null, and the amount of current accumulation, depends on the vertical and horizontal wavenumbers and the centroid position of the wavepacket as it crosses the photosphere. Regions that refract toward or away from the null do not simply coincide with regions of open versus closed magnetic field or regions of particular field orientation. We also model wavepacket propagation using a WKB method and find that it agrees qualitatively, though not quantitatively, with the results of the numerical simulation.
Global Energetics of Solar Flares. VI. Refined Energetics of Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.
2017-09-01
In this study, we refine the coronal mass ejection (CME) model that was presented in an earlier study of the global energetics of solar flares and associated CMEs and apply it to all (860) GOES M- and X-class flare events observed during the first seven years (2010-2016) of the Solar Dynamics Observatory (SDO) mission. The model refinements include (1) the CME geometry in terms of a 3D volume undergoing self-similar adiabatic expansion, (2) the solar gravitational deceleration during the propagation of the CME, which discriminates between eruptive and confined CMEs, (3) a self-consistent relationship between the CME center-of-mass motion detected during EUV dimming and the leading-edge motion observed in white-light coronagraphs, (4) the equipartition of the CME’s kinetic and thermal energies, and (5) the Rosner-Tucker-Vaiana scaling law. The refined CME model is entirely based on EUV-dimming observations (using Atmospheric Imager Assembly (AIA)/SDO data) and complements the traditional white-light scattering model (using Large-Angle and Spectrometric Coronagraph Experiment (LASCO)/Solar and Heliospheric Observatory data), and both models are independently capable of determining fundamental CME parameters. Comparing the two methods, we find that (1) LASCO is less sensitive than AIA in detecting CMEs (in 24% of the cases), (2) CME masses below m_cme ≲ 10^14 g are underestimated by LASCO, (3) AIA and LASCO masses, speeds, and energies agree closely in the statistical mean after the elimination of outliers, and (4) the CME parameters speed v, emission measure-weighted flare peak temperature T_e, and length scale L are consistent with the following scaling laws: v ∝ T_e^(1/2), v ∝ m_cme^(1/4), and m_cme ∝ L^2.
The unique, optically-dominated quasar jet of PKS 1421-490
NASA Astrophysics Data System (ADS)
Gelbord, J. M.; Marshall, H. L.; Worrall, D. M.; Birkinshaw, M.; Lovell, J. E. J.; Ojha, R.; Godfrey, L.; Schwartz, D. A.; Perlman, E. S.; Georganopoulos, M.; Murphy, D. W.; Jauncey, D. L.
2004-12-01
We report the discovery of extremely strong optical and X-ray emission associated with a knot in the radio jet of PKS 1421-490. The SDSS g' = 17.8 magnitude makes this the second-brightest optical jet known. The jet-to-core flux ratio in the X-ray band is unusually large (3.7), and the optical flux ratio (~300) is unprecedented. The broad-band spectrum of the knot is flat from the radio through the optical bands, and has a similar slope with a lower normalization in the X-ray band. This emission is difficult to interpret without resorting to extreme model parameters or physically unlikely scenarios (flat electron distributions, non-equipartition magnetic fields, huge Doppler factors, etc.). We discuss several alternative models for the radio-to-X-ray continuum, including pure synchrotron, synchrotron plus inverse Compton scattering of cosmic microwave background photons, and a decelerating jet. JMG was supported under Chandra grant GO4-5124X to MIT from the CXC. HLM was supported under NASA contract SAO SV1-61010 for the Chandra X-Ray Center (CXC).
Stellar fibril magnetic systems. I - Reduced energy state
NASA Technical Reports Server (NTRS)
Parker, E. N.
1984-01-01
The remarkable fibril structure of the magnetic fields at the surface of the sun (with fibrils compressed to 1,000-2,000 gauss) lies outside existing statistical theories of magnetohydrodynamic turbulence. The total energy of the fibril field is enhanced by a factor of more than 100 above the energy for the mean field in a continuum state. The magnetic energy density within a fibril is of the order of 100 times the local kinetic energy density, so that no simple application of equipartition principles is possible. It is pointed out that the total energy of the atmosphere (thermal + gravitational + magnetic) is reduced by the fibril state of the field by avoiding the magnetic inhibition of the convective overturning, suggesting that the formation of the observed intense fibril state may be in response to the associated energy reduction. Calculation of the minimum total energy of a polytropic atmosphere permeated by magnetic fibrils yields theoretical fibril fields of the order of 1-5 kilogauss when characteristics appropriate to the solar convective zone are introduced, in rough agreement with the actual fields of 1-2 kilogauss. The polytrope model, although crude, establishes that a large reduction in total energy is made possible by the fibril state.
Exploring the Variability of the Flat-spectrum Radio Source 1633+382. II. Physical Properties
NASA Astrophysics Data System (ADS)
Algaba, Juan-Carlos; Lee, Sang-Sung; Rani, Bindu; Kim, Dae-Won; Kino, Motoki; Hodgson, Jeffrey; Zhao, Guang-Yao; Byun, Do-Young; Gurwell, Mark; Kang, Sin-Cheol; Kim, Jae-Young; Kim, Jeong-Sook; Kim, Soon-Wook; Park, Jong-Ho; Trippe, Sascha; Wajima, Kiyoaki
2018-06-01
The flat-spectrum radio quasar 1633+382 (4C 38.41) showed a significant increase of its radio flux density during the period 2012 March–2015 August, which correlates with γ-ray flaring activity. Multi-frequency simultaneous very long baseline interferometry (VLBI) observations were conducted as part of the interferometric monitoring of gamma-ray bright active galactic nuclei (iMOGABA) program and supplemented with additional radio monitoring observations with the OVRO 40 m telescope, the Boston University VLBI program, and the Submillimeter Array. The epochs of the maxima for the two largest γ-ray flares coincide with the ejection of two respective new VLBI components. Analysis of the spectral energy distribution indicates a higher turnover frequency after the flaring events. The evolution of the flare in the turnover frequency-turnover flux density plane probes the adiabatic losses in agreement with the shock-in-jet model. The derived synchrotron self-absorption magnetic fields, of the order of 0.1 mG, do not seem to change dramatically during the flares, and are much weaker, by a factor of 10^4, than the estimated equipartition magnetic fields, indicating that the source of the flare may be associated with a particle-dominated emitting region.
ACCELERATION OF COMPACT RADIO JETS ON SUB-PARSEC SCALES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sang-Sung; Lobanov, Andrei P.; Krichbaum, Thomas P.
2016-08-01
Jets of compact radio sources are highly relativistic and Doppler boosted, making studies of their intrinsic properties difficult. Observed brightness temperatures can be used to study the intrinsic physical properties of relativistic jets, and constrain models of jet formation in the inner jet region. We aim to observationally test such inner jet models. The very long baseline interferometry (VLBI) cores of compact radio sources are optically thick at a given frequency. The distance of the core from the central engine is inversely proportional to the frequency. Under the equipartition condition between the magnetic field energy and particle energy densities, the absolute distance of the VLBI core can be predicted. We compiled the brightness temperatures of VLBI cores at various radio frequencies of 2, 8, 15, and 86 GHz. We derive the brightness temperature on sub-parsec scales in the rest frame of the compact radio sources. We find that the brightness temperature increases with increasing distance from the central engine, indicating that the intrinsic jet speed (the Lorentz factor) increases along the jet. This implies that the jets are accelerated in the (sub-)parsec regions from the central engine.
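As a rough illustration of the quantity compiled above: the observed brightness temperature of a Gaussian VLBI component can be estimated from its flux density and angular size. The sketch below is ours, not the authors' pipeline; the numerical coefficient follows from T_b = 2 ln2 c² S / (π k ν² θ²) evaluated in the units indicated, and the example values are purely illustrative.

```python
# Hedged sketch: brightness temperature of a Gaussian VLBI component.
# Coefficient 1.22e12 comes from T_b = 2*ln2*c^2*S/(pi*k*nu^2*theta^2)
# with S in Jy, nu in GHz, theta (FWHM) in milliarcseconds.

def brightness_temperature(s_jy, nu_ghz, theta_mas):
    """Observed brightness temperature (K) of a Gaussian component."""
    return 1.22e12 * s_jy / (nu_ghz ** 2 * theta_mas ** 2)

# Illustrative example: a 1 Jy core of size 0.1 mas observed at 15 GHz
tb = brightness_temperature(1.0, 15.0, 0.1)
```

Comparing such observed values (after correcting for redshift and Doppler boosting) with the equipartition brightness temperature is what constrains the intrinsic jet speed.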
Active processes in one dimension
NASA Astrophysics Data System (ADS)
Demaerel, Thibaut; Maes, Christian
2018-03-01
We consider the thermal and athermal overdamped motion of particles in one-dimensional geometries where discrete internal degrees of freedom (spin) are coupled with the translational motion. Adding a driving velocity that depends on the time-dependent spin constitutes the simplest model of active particles (run-and-tumble processes) where the violation of the equipartition principle and of the Sutherland-Einstein relation can be studied in detail even when there is generalized reversibility. We give an example (with four spin values) where the irreversibility of the translational motion manifests itself only in higher-order (than two) time correlations. We derive a generalized telegraph equation as the Smoluchowski equation for the spatial density for an arbitrary number of spin values. We also investigate the Arrhenius exponential law for run-and-tumble particles; due to their activity the slope of the potential becomes important in contrast to the passive diffusion case and activity enhances the escape from a potential well (if that slope is high enough). Finally, in the absence of a driving velocity, the presence of internal currents such as in the chemistry of molecular motors may be transmitted to the translational motion and the internal activity is crucial for the direction of the emerging spatial current.
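The generalized telegraph equation mentioned above can be written explicitly in the simplest two-speed case. For a particle moving at ±v₀ with tumbling rate α, a standard calculation (symbols are ours) gives for the spatial density ρ(x, t):

```latex
\partial_t^2 \rho + 2\alpha\,\partial_t \rho = v_0^2\,\partial_x^2 \rho ,
```

which interpolates between ballistic motion at short times and diffusion with effective coefficient D = v₀²/(2α) at times long compared with 1/α.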
Revisiting gamma-ray burst afterglows with time-dependent parameters
NASA Astrophysics Data System (ADS)
Yang, Chao; Zou, Yuan-Chuan; Chen, Wei; Liao, Bin; Lei, Wei-Hua; Liu, Yu
2018-02-01
The relativistic external shock model of gamma-ray burst (GRB) afterglows has been established with five free parameters, i.e., the total kinetic energy E, the equipartition parameters for electrons ε_e and for the magnetic field ε_B, the number density of the environment n and the index of the power-law distribution of shocked electrons p. Many modified models have been constructed to account for the variety of GRB afterglows, such as the wind-medium environment (letting n change with radius) and the energy-injection model (letting the kinetic energy change with time). In this paper, by assuming all four parameters (except p) change with time, we obtain a set of formulas for the dynamics and radiation, which can be used as a reference for modeling GRB afterglows. Some interesting results are obtained. For example, in some spectral segments, the radiated flux density does not depend on the number density or the profile of the environment. As an application, through modeling the afterglow of GRB 060607A, we find that it can be interpreted in the framework of the time-dependent parameter model within a reasonable range.
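For reference, the constant-parameter baseline from which such time-dependent generalizations start is the standard slow-cooling, uniform-medium synchrotron scaling set of Sari, Piran & Narayan (1998); proportionalities only, prefactors omitted:

```latex
\nu_m \propto \epsilon_e^{2}\,\epsilon_B^{1/2}\,E^{1/2}\,t^{-3/2},\qquad
\nu_c \propto \epsilon_B^{-3/2}\,E^{-1/2}\,n^{-1}\,t^{-1/2},\qquad
F_{\nu,\max} \propto \epsilon_B^{1/2}\,E\,n^{1/2} .
```

Promoting E, ε_e, ε_B and n to functions of time modifies each of these power laws accordingly.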
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitiashvili, I. N.; Mansour, N. N.; Wray, A. A.
Magnetic fields are usually observed in the quiet Sun as small-scale elements that cover the entire solar surface (the “salt-and-pepper” patterns in line-of-sight magnetograms). By using 3D radiative MHD numerical simulations, we find that these fields result from a local dynamo action in the top layers of the convection zone, where extremely weak “seed” magnetic fields (e.g., from 10^−6 G) can locally grow above the mean equipartition field to a stronger than 2000 G field localized in magnetic structures. Our results reveal that the magnetic flux is predominantly generated in regions of small-scale helical downflows. We find that the local dynamo action takes place mostly in a shallow, about 500 km deep, subsurface layer, from which the generated field is transported into the deeper layers by convective downdrafts. We demonstrate that the observed dominance of vertical magnetic fields at the photosphere and horizontal fields above the photosphere can be explained by small-scale magnetic loops produced by the dynamo. Such small-scale loops play an important role in the structure and dynamics of the solar atmosphere and their detection in observations is critical for understanding the local dynamo action on the Sun.
Lateral interactions and non-equilibrium in surface kinetics
NASA Astrophysics Data System (ADS)
Menzel, Dietrich
2016-08-01
Studies modelling reactions between surface species frequently use Langmuir kinetics, assuming that the layer is in internal equilibrium and that the chemical potential of adsorbates corresponds to that of an ideal gas. Coverage dependences of reacting species and of site blocking are usually treated with simple power-law coverage dependences (linear in the simplest case), neglecting the strong lateral interactions in adsorbate and co-adsorbate layers, which may influence the kinetics considerably. My research group has in the past investigated many co-adsorbate systems and simple reactions in them. We have collected a number of examples where strong deviations from simple coverage dependences exist, in blocking, promoting, and selecting reactions. Interactions can range from those between next neighbors to larger distances, and can be quite complex. In addition, internal equilibrium in the layer as well as equilibrium distributions over product degrees of freedom can be violated. The latter effect leads to non-equipartition of energy over molecular degrees of freedom (for products) or non-equal response to those of reactants. While such behavior can usually be described by dynamic or kinetic models, the deeper reasons require detailed theoretical analysis. Here, a selection of such cases is reviewed to exemplify these points.
Dynamically important magnetic fields near supermassive black holes in radio-loud AGN
NASA Astrophysics Data System (ADS)
Savolainen, Tuomas; Zamaninasab, Mohammad; Clausen-Brown, Eric; Tchekhovskoy, Alexander
The powerful radio jets ejected from the vicinity of accreting supermassive black holes in active galactic nuclei are thought to be formed by magnetic forces. However, there is little observational evidence of the actual strength of the magnetic fields in the jet-launching region, and in the accretion disks, of AGN. We have collected from the literature jet magnetic field estimates determined by very long baseline interferometry observations of the opacity-driven core-shift effect for 76 blazars and radio galaxies. We show that the jet magnetic flux of these radio-loud AGN tightly correlates with their accretion disk luminosity -- over seven orders of magnitude in accretion power. Moreover, the estimated magnetic flux threading the black hole quantitatively agrees with the saturation value expected in the magnetically arrested disk scenario. This implies that black holes in many, if not most, of the radio-loud AGN are surrounded by accretion disks that have dynamically important magnetic fields. Such disks behave very differently from the standard model disks with sub-equipartition magnetic fields, which may have important consequences for attempts to interpret disk spectral energy distributions or signatures of the possible black hole shadow in mm-VLBI images.
The two-stage dynamics in the Fermi-Pasta-Ulam problem: From regular to diffusive behavior
NASA Astrophysics Data System (ADS)
Ponno, A.; Christodoulidi, H.; Skokos, Ch.; Flach, S.
2011-12-01
A numerical and analytical study of the relaxation to equilibrium of both the Fermi-Pasta-Ulam (FPU) α-model and the integrable Toda model, when the fundamental mode is initially excited, is reported. We show that the dynamics of both systems is almost identical on the short term, when the energies of the initially unexcited modes grow in geometric progression with time, through a secular avalanche process. At the end of this first stage of the dynamics, the time-averaged modal energy spectrum of the Toda system stabilizes to its final profile, well described, at low energy, by the spectrum of a q-breather. The Toda equilibrium state is clearly shown to describe well the long-living quasi-state of the FPU system. On the long term, the modal energy spectrum of the FPU system slowly detaches from the Toda one by a diffusive-like rising of the tail modes, and eventually reaches the equilibrium flat shape. We find a simple law describing the growth of tail modes, which enables us to estimate the time-scale to equipartition of the FPU system, even when, at small energies, it becomes unobservable.
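The setup described above can be sketched numerically. The following is our own minimal illustration, not the authors' code: an FPU-α chain with fixed ends, the fundamental mode initially excited, integrated with velocity-Verlet; all parameter values (N, α, amplitude, time step) are illustrative.

```python
import math

def fpu_alpha_energy(q, p, alpha):
    """Total energy of the FPU-alpha chain with fixed ends q[0] = q[-1] = 0."""
    e = 0.5 * sum(pi * pi for pi in p)
    for i in range(len(q) - 1):
        d = q[i + 1] - q[i]
        e += 0.5 * d * d + alpha * d ** 3 / 3.0
    return e

def fpu_force(q, alpha):
    """Force on each interior particle; end particles stay fixed."""
    f = [0.0] * len(q)
    for i in range(1, len(q) - 1):
        dl = q[i] - q[i - 1]
        dr = q[i + 1] - q[i]
        f[i] = dr - dl + alpha * (dr * dr - dl * dl)
    return f

def simulate(n=32, alpha=0.25, amp=1.0, dt=0.05, steps=2000):
    """Excite the fundamental mode, integrate with velocity-Verlet,
    and return (initial energy, final energy)."""
    q = [amp * math.sin(math.pi * i / (n - 1)) for i in range(n)]
    q[0] = q[-1] = 0.0
    p = [0.0] * n
    e0 = fpu_alpha_energy(q, p, alpha)
    f = fpu_force(q, alpha)
    for _ in range(steps):
        p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
        q = [qi + dt * pi for qi, pi in zip(q, p)]
        q[0] = q[-1] = 0.0
        f = fpu_force(q, alpha)
        p = [pi + 0.5 * dt * fi for pi, fi in zip(p, f)]
    return e0, fpu_alpha_energy(q, p, alpha)
```

Tracking the harmonic modal energies of such a run over long times is what reveals the secular avalanche, the quasi-state, and the eventual diffusive approach to equipartition discussed above.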
21cm Absorption Line Zeeman Observations And Modeling Of Physical Conditions In M16
NASA Astrophysics Data System (ADS)
Kiuchi, Furea; Brogan, C.; Troland, T.
2011-01-01
We present detailed 21 cm HI absorption line observations of M16 using the Very Large Array. The M16 "pillars of creation" are classic examples of the interaction of the ISM with radiation from young, hot stars. Magnetic fields can affect these interactions, and the 21 cm Zeeman effect reveals magnetic field strengths in the photodissociation regions associated with the pillars. The present results yield a 3-sigma upper limit on the line-of-sight magnetic field of about 300 microgauss. This limit is consistent with a total field strength of 500 microgauss, required in the molecular gas if magnetic energies and turbulent energies in the pillars are in equipartition. Most likely, magnetic fields do not play a dominant role in the dynamics of the M16 pillars. Another goal of this study is to determine the distribution of cold HI in the M16 region and to model the physical conditions in the neutral gas in the pillars. We used the spectral synthesis code Cloudy 08.00 for this purpose. We adopted the results of a published Cloudy HII region model and extended this model into the neutral gas to derive physical conditions therein.
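The equipartition field quoted above follows from balancing the magnetic and turbulent kinetic energy densities, B²/(8π) = ½ρσ², i.e. B = σ√(4πρ). A minimal sketch of this estimate (our own; the density, velocity dispersion, and mean molecular weight values are illustrative, not the paper's):

```python
import math

M_H = 1.67e-24  # hydrogen atom mass, g

def equipartition_field_gauss(n_h2, sigma_kms, mu=2.8):
    """B (gauss) for magnetic/turbulent energy equipartition.

    n_h2      : H2 number density in cm^-3
    sigma_kms : 3D turbulent velocity dispersion in km/s
    mu        : mean mass per H2 molecule in units of m_H (includes He)
    """
    rho = mu * M_H * n_h2      # mass density, g cm^-3
    sigma = sigma_kms * 1.0e5  # cm/s
    return sigma * math.sqrt(4.0 * math.pi * rho)

# Illustrative molecular-pillar conditions: n ~ 1e4 cm^-3, sigma ~ 2 km/s
b_microgauss = equipartition_field_gauss(1.0e4, 2.0) * 1.0e6
```

With these illustrative inputs the estimate lands in the range of a few hundred microgauss, the same order as the field strengths discussed for the pillars.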
PLASMA TURBULENCE AND KINETIC INSTABILITIES AT ION SCALES IN THE EXPANDING SOLAR WIND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hellinger, Petr; Trávnícek, Pavel M.; Matteini, Lorenzo
The relationship between a decaying strong turbulence and kinetic instabilities in a slowly expanding plasma is investigated using two-dimensional (2D) hybrid expanding box simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we start with a spectrum of large-scale, linearly polarized, random-phase Alfvénic fluctuations that have energy equipartition between kinetic and magnetic fluctuations and vanishing correlation between the two fields. A turbulent cascade rapidly develops; magnetic field fluctuations exhibit a power-law spectrum at large scales and a steeper spectrum at ion scales. The turbulent cascade leads to an overall anisotropic proton heating: protons are heated in the perpendicular direction and, initially, also in the parallel direction. The imposed expansion leads to generation of a large parallel proton temperature anisotropy which is at later stages partly reduced by turbulence. The turbulent heating is not sufficient to overcome the expansion-driven perpendicular cooling and the system eventually drives the oblique firehose instability in a form of localized nonlinear wave packets which efficiently reduce the parallel temperature anisotropy. This work demonstrates that kinetic instabilities may coexist with strong plasma turbulence even in a constrained 2D regime.
Stellar feedback strongly alters the amplification and morphology of galactic magnetic fields
NASA Astrophysics Data System (ADS)
Su, Kung-Yi; Hayward, Christopher C.; Hopkins, Philip F.; Quataert, Eliot; Faucher-Giguère, Claude-André; Kereš, Dušan
2018-01-01
Using high-resolution magnetohydrodynamic simulations of idealized, non-cosmological galaxies, we investigate how cooling, star formation and stellar feedback affect galactic magnetic fields. We find that the amplification histories, saturation values and morphologies of the magnetic fields vary considerably depending on the baryonic physics employed, primarily because of differences in the gas density distribution. In particular, adiabatic runs and runs with a subgrid (effective equation of state) stellar feedback model yield lower saturation values and morphologies that exhibit greater large-scale order compared with runs that adopt explicit stellar feedback and runs with cooling and star formation but no feedback. The discrepancies mostly lie in gas denser than the galactic average, which requires cooling and explicit fragmentation to capture. Independent of the baryonic physics included, the magnetic field strength scales with gas density as B ∝ n2/3, suggesting isotropic flux freezing or equipartition between the magnetic and gravitational energies during the field amplification. We conclude that accurate treatments of cooling, star formation and stellar feedback are crucial for obtaining the correct magnetic field strength and morphology in dense gas, which, in turn, is essential for properly modelling other physical processes that depend on the magnetic field, such as cosmic ray feedback.
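The B ∝ n^(2/3) scaling quoted above follows in one line from flux freezing under isotropic compression: for a gas parcel of fixed mass M and radius R,

```latex
B R^2 = \mathrm{const}, \qquad M \propto \rho R^3 = \mathrm{const}
\;\Rightarrow\; R \propto \rho^{-1/3}
\;\Rightarrow\; B \propto R^{-2} \propto \rho^{2/3} .
```

A shallower exponent would instead indicate anisotropic (e.g. field-aligned) compression, which is one reason the measured slope is diagnostic of the amplification mechanism.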
Multi-scale virtual view on the precessing jet SS433
NASA Astrophysics Data System (ADS)
Monceau-Baroux, R.; Porth, O.; Meliani, Z.; Keppens, R.
2014-07-01
Observations of SS433 infer how an X-ray binary gives rise to a corkscrew-patterned relativistic jet. The X-ray binary SS433 is well known over a large range of scales, for which we perform 3D simulations and radio mappings. For our study we use special-relativistic hydrodynamics with a relativistic effective polytropic index. We use parameters extracted from observations to impose the thermodynamical conditions of the ISM and jet. We follow the kinetic and thermal energy content of the various ISM and jet regions. Our simulation simultaneously follows the evolution of the population of electrons which are accelerated by the jet. The evolving spectrum of these electrons, together with an assumed equipartition between dynamic and magnetic pressure, gives input for estimating the radio emission from our simulation. Ray tracing along a direction of sight then produces radio mappings of our data. Single snapshots are realized to compare with VLA observations as in Roberts et al. (2008). A radio movie is realized to compare with the 41-day movie made with the VLBA instrument. Finally, a larger-scale simulation explores the discrepancy in opening angle, between 10 and 20 degrees, between the large-scale observations of SS433 and its close-in observations.
NASA Astrophysics Data System (ADS)
Hurd, Alan J.; Ho, Pauline
The experiments described here indicate when one of Nature's best fractals -- the Brownian trail -- becomes nonfractal. In most ambient fluids, the trail of a Brownian particle is self-similar over many decades of length. For example, the trail of a submicron particle suspended in an ordinary liquid, recorded at equal time intervals, exhibits apparently discontinuous changes in velocity from macroscopic lengths down to molecular lengths: the trail is a random walk with no velocity memory from one step to the next. In ideal Brownian motion, the kinks in the trail persist to infinitesimal time intervals, i.e., it is a curve without tangents. Even in real Brownian motion in a liquid, the time interval must be shortened to approximately 10^−8 s before the velocity appears continuous. In sufficiently rarefied environments, this time resolution at which a Brownian trail is rectified from a curve without tangents to a smoothly varying trajectory is greatly lengthened, making it possible to study the kinetic regime by dynamic light scattering. Our recent experiments with particles in a plasma have demonstrated this capability. In this regime, the particle velocity persists over a finite step length allowing an analogy to an ideal gas with Maxwell-Boltzmann velocities; the particle mass could be obtained from equipartition. The crossover from ballistic flight to hydrodynamic diffusion was also seen.
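The mass determination mentioned above rests on the equipartition theorem, ½m⟨v²⟩ = (3/2)k_B T. A minimal sketch of that inversion (ours, with illustrative function names; in the experiment ⟨v²⟩ would come from the measured ballistic velocity distribution):

```python
import math

K_B = 1.38e-16  # Boltzmann constant, erg/K

def mass_from_equipartition(t_kelvin, v_rms):
    """Particle mass (g) from (1/2) m <v^2> = (3/2) k_B T,
    with v_rms the 3D rms velocity in cm/s."""
    return 3.0 * K_B * t_kelvin / v_rms ** 2

def v_rms_from_mass(t_kelvin, mass_g):
    """Inverse relation, useful as a consistency check."""
    return math.sqrt(3.0 * K_B * t_kelvin / mass_g)
```

The two functions are exact inverses of one another, which makes the round trip a convenient sanity check on units.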
TENTATIVE EVIDENCE FOR RELATIVISTIC ELECTRONS GENERATED BY THE JET OF THE YOUNG SUN-LIKE STAR DG Tau
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ainsworth, Rachael E.; Ray, Tom P.; Taylor, Andrew M.
2014-09-01
Synchrotron emission has recently been detected in the jet of a massive protostar, providing further evidence that certain jet formation characteristics for young stars are similar to those found for highly relativistic jets from active galactic nuclei. We present data at 325 and 610 MHz taken with the Giant Metrewave Radio Telescope of the young, low-mass star DG Tau, an analog of the Sun soon after its birth. This is the first investigation of a low-mass young stellar object at such low frequencies. We detect emission with a synchrotron spectral index in the proximity of the DG Tau jet and interpret this emission as a prominent bow shock associated with this outflow. This result provides tentative evidence for the acceleration of particles to relativistic energies due to the shock impact of this otherwise very low-power jet against the ambient medium. We calculate the equipartition magnetic field strength B_min ≈ 0.11 mG and particle energy E_min ≈ 4 × 10^40 erg, which are the minimum requirements to account for the synchrotron emission of the DG Tau bow shock. These results suggest the possibility of low energy cosmic rays being generated by young Sun-like stars.
Steep radio spectra in high-redshift radio galaxies
NASA Technical Reports Server (NTRS)
Krolik, Julian H.; Chen, Wan
1991-01-01
The generic spectrum of an optically thin synchrotron source steepens by 0.5 in spectral index from low frequencies to high whenever the source lifetime is greater than the energy-loss timescale for at least some of the radiating electrons. Three effects tend to decrease the frequency ν_b of this spectral bend as the source redshift increases: (1) for a fixed bend frequency ν* in the rest frame, ν_b = ν*/(1 + z); (2) losses due to inverse Compton scattering off the microwave background rise with redshift as (1 + z)^4, so that, for fixed residence time in the radiating region, the energy of the lowest energy electron that can cool falls rapidly with increasing redshift; and (3) if the magnetic field is proportional to the equipartition field and the emitting volume is fixed or slowly varying, flux-limited samples induce a selection effect favoring low ν* at high z, because higher redshift sources require higher emissivity to be included in the sample, and hence have stronger implied fields and more rapid synchrotron losses. A combination of these effects may explain the trend observed in the 3CR sample for higher redshift radio galaxies to have steeper spectra, and the successful use of ultrasteep spectrum surveys to locate high-redshift galaxies.
NASA Astrophysics Data System (ADS)
Aragonès, Àngels; Maxit, Laurent; Guasch, Oriol
2015-08-01
Statistical modal energy distribution analysis (SmEdA) extends classical statistical energy analysis (SEA) to the mid-frequency range by establishing power balance equations between modes in different subsystems. This circumvents the SEA requirement of modal energy equipartition and enables applying SmEdA to cases of low modal overlap and locally excited subsystems, and to deal with complex heterogeneous subsystems as well. Yet widening the range of application of SEA comes at a price: models can become large, because the number of modes per subsystem grows considerably as the frequency increases. Therefore, it would be worthwhile to have at one's disposal tools for a quick identification and ranking of the resonant and non-resonant paths involved in modal energy transmission between subsystems. It will be shown that previously developed graph theory algorithms for transmission path analysis (TPA) in SEA can be adapted to SmEdA and prove useful for that purpose. The case of airborne transmission between two cavities separated by homogeneous and ribbed plates will first be addressed to illustrate the potential of the graph approach. A more complex case representing transmission between non-contiguous cavities in a shipbuilding structure will also be presented.
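In the spirit of the graph-based path ranking described above, here is a toy sketch (ours, not the authors' algorithm): nodes stand for modes of the subsystems, edge weights are illustrative attenuations (higher meaning weaker coupling), and Dijkstra's algorithm extracts the dominant, i.e. minimum-weight, transmission path.

```python
import heapq

def dominant_path(graph, src, dst):
    """Dijkstra: return (total_weight, path) of the minimum-weight path
    from src to dst in a dict-of-dicts weighted digraph."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        w, node, path = heapq.heappop(pq)
        if node == dst:
            return w, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (w + cost, nxt, path + [nxt]))
    return float("inf"), []

# Toy cavity-plate-cavity network; node names and weights are invented.
net = {
    "c1_m1": {"p_m1": 3.0, "p_m2": 1.0},
    "c1_m2": {"p_m2": 4.0},
    "p_m1": {"c2_m1": 2.0},
    "p_m2": {"c2_m1": 1.5},
}
w, path = dominant_path(net, "c1_m1", "c2_m1")
```

A real SmEdA graph would carry many more nodes per subsystem and weights derived from the modal coupling loss factors, but the ranking principle is the same.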
NASA Technical Reports Server (NTRS)
Neugebauer, M.
1976-01-01
Data obtained by OGO 5 are used to confirm IMP 6 observations of an inverse dependence of the helium-to-hydrogen temperature ratio in the solar wind on the ratio of solar-wind expansion time to the Coulomb-collision equipartition time. The analysis is then extended to determine the relation of the difference between the hydrogen and helium bulk velocities (the differential flow vector) with the ratio between the solar-wind expansion time and the time required for Coulomb collisions to slow down a beam of ions passing through a plasma. It is found that the magnitude of the differential flow vector varies inversely with the time ratio when the latter is small and approaches zero when it is large. These results are shown to suggest a model of continuous preferential heating and acceleration of helium (or cooling and deceleration of hydrogen), which is cancelled or limited by Coulomb collisions by the time the plasma has reached 1 AU. Since the average dependence of the differential flow vector on the time ratio cannot explain all the systematic variations of the vector observed in corotating high-velocity streams, it is concluded that additional helium acceleration probably occurs on the leading edge of such streams.
Properties of Decaying Plasma Turbulence at Subproton Scales
NASA Astrophysics Data System (ADS)
Olshevsky, Vyacheslav; Servidio, Sergio; Pucci, Francesco; Primavera, Leonardo; Lapenta, Giovanni
2018-06-01
We study the properties of plasma turbulence at subproton scales using kinetic electromagnetic three-dimensional simulations with nonidentical initial conditions. Particle-in-cell modeling of the Taylor–Green vortex has been performed, starting from three different magnetic field configurations. All simulations exhibit very similar energy evolution in which the large-scale ion flows and magnetic structures deteriorate and transfer their energy into particle heating. Heating is more intense for electrons, decreasing the initial temperature ratio and leading to temperature equipartition between the two species. A full turbulent cascade, with a well-defined power-law shape at subproton scales, is established within a characteristic turnover time. Spectral indices for magnetic field fluctuations in two simulations are close to α_B ≈ 2.9, and are steeper in the remaining case with α_B ≈ 3.05. Energy is dissipated by a complex mixture of plasma instabilities and magnetic reconnection and is milder in the latter simulation. The number of magnetic nulls, and the dissipation pattern observed in this case, differ from the two others. Spectral indices for the kinetic energy deviate from the magnetic spectra by ≈1 in the first simulation, and by ≈0.75 in the two other runs. The difference between the magnetic and electric slopes confirms the previously observed value of α_B − α_E ≈ 2.
Future Gamma-Ray Imaging of Solar Eruptive Events
NASA Technical Reports Server (NTRS)
Shih, Albert
2012-01-01
Solar eruptive events, the combination of large solar flares and coronal mass ejections (CMEs), accelerate ions to tens of GeV and electrons to hundreds of MeV. The energy in accelerated particles can be a significant fraction (up to tens of percent) of the released energy and is roughly equipartitioned between ions and electrons. Observations of the gamma-ray signatures produced by these particles interacting with the ambient solar atmosphere probe the distribution and composition of the accelerated population, as well as the parameters and abundances of the ambient atmosphere, ultimately revealing information about the underlying physics. Gamma-ray imaging provided by RHESSI showed that the interacting approximately 20 MeV/nucleon ions are confined to flare magnetic loops rather than precipitating from a large CME-associated shock. Furthermore, RHESSI images show a surprising, significant spatial separation between the locations where accelerated ions and electrons are interacting, thus indicating a difference in acceleration or transport processes for the two types of particles. Future gamma-ray imaging observations, with higher sensitivity and greater angular resolution, can investigate more deeply the nature of ion acceleration. The technologies being proven on the Gamma-Ray Imager/Polarimeter for Solar flares (GRIPS), a NASA balloon instrument, are possible approaches for future instrumentation. We discuss the GRIPS instrument and the future of studying this aspect of solar eruptive events.
NASA Astrophysics Data System (ADS)
Baena, M.; Perton, M.; Molina-Villegas, J. C.; Sanchez-Sesma, F. J.
2013-12-01
In order to improve the understanding of the seismic response of the Mexico City Valley, we propose to perform a tomographic study of the seismic wave velocities. For that purpose, we used a collection of acceleration seismograms (corresponding to earthquakes with magnitudes ranging from 4.5 to 8.1 and various epicentral distances to the City) recorded since 1985 at 83 stations distributed across the Valley. The H/V spectral ratios (obtained from average autocorrelations) strongly suggest that these ground motions belong to a 3D generalized diffuse field. Thus, we interpret the cross-correlations between the signals of station pairs as proportional to the imaginary part of the corresponding Green function. Finally, the dispersion curves are constructed from the Green function, which leads to the tomography. Other tomographies have already been made around the world using either the seismic coda or seismic noise. We use instead the ensemble of many earthquakes from distant sources that have undergone multiple scattering by the heterogeneities of the Earth, and assume the wave fields are equipartitioned. The purpose of the present study is to describe the different steps of the data processing by using synthetic models. The wave propagation within an alluvial basin is simulated using the Indirect Boundary Element Method (IBEM) in a 2D configuration for the propagation of P and SV waves. The theoretical Green function for a station pair is obtained by placing a unit force at one station and a receiver at the other. The valley illumination is composed of incoming waves, which are simulated using distant independent sources and several diffractors. The data processing is validated by the correct retrieval of the theoretical Green function. We present here the in-plane Green function for the P-SV case and show the dispersion curves constructed from the cross-correlations compared with analytic results for a layer over a half-space. ACKNOWLEDGEMENTS. 
This study is partially supported by AXA Research Fund and by DGAPA-UNAM under Project IN104712.
NASA Astrophysics Data System (ADS)
HESS Collaboration; Abramowski, A.; Acero, F.; Aharonian, F.; Akhperjanian, A. G.; Anton, G.; Balenderan, S.; Balzer, A.; Barnacka, A.; Becherini, Y.; Becker, J.; Bernlöhr, K.; Birsin, E.; Biteau, J.; Bochow, A.; Boisson, C.; Bolmont, J.; Bordas, P.; Brucker, J.; Brun, F.; Brun, P.; Bulik, T.; Büsching, I.; Carrigan, S.; Casanova, S.; Cerruti, M.; Chadwick, P. M.; Charbonnier, A.; Chaves, R. C. G.; Cheesebrough, A.; Cologna, G.; Conrad, J.; Couturier, C.; Daniel, M. K.; Davids, I. D.; Degrange, B.; Deil, C.; Dickinson, H. J.; Djannati-Ataï, A.; Domainko, W.; O'C. Drury, L.; Dubus, G.; Dutson, K.; Dyks, J.; Dyrda, M.; Egberts, K.; Eger, P.; Espigat, P.; Fallon, L.; Fegan, S.; Feinstein, F.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Förster, A.; Füßling, M.; Gajdus, M.; Gallant, Y. A.; Garrigoux, T.; Gast, H.; Gérard, L.; Giebels, B.; Glicenstein, J. F.; Glück, B.; Göring, D.; Grondin, M.-H.; Häffner, S.; Hague, J. D.; Hahn, J.; Hampf, D.; Harris, J.; Hauser, M.; Heinz, S.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hillert, A.; Hinton, J. A.; Hofmann, W.; Hofverberg, P.; Holler, M.; Horns, D.; Jacholkowska, A.; Jahn, C.; Jamrozy, M.; Jung, I.; Kastendieck, M. A.; Katarzyński, K.; Katz, U.; Kaufmann, S.; Khélifi, B.; Klochkov, D.; Kluźniak, W.; Kneiske, T.; Komin, Nu.; Kosack, K.; Kossakowski, R.; Krayzel, F.; Laffon, H.; Lamanna, G.; Lenain, J.-P.; Lennarz, D.; Lohse, T.; Lopatin, A.; Lu, C.-C.; Marandon, V.; Marcowith, A.; Masbou, J.; Maurin, G.; Maxted, N.; Mayer, M.; McComb, T. J. L.; Medina, M. C.; Méhault, J.; Moderski, R.; Mohamed, M.; Moulin, E.; Naumann, C. L.; Naumann-Godo, M.; de Naurois, M.; Nedbal, D.; Nekrassov, D.; Nguyen, N.; Nicholas, B.; Niemiec, J.; Nolan, S. J.; Ohm, S.; de Oña Wilhelmi, E.; Opitz, B.; Ostrowski, M.; Oya, I.; Panter, M.; Paz Arribas, M.; Pekeur, N. 
W.; Pelletier, G.; Perez, J.; Petrucci, P.-O.; Peyaud, B.; Pita, S.; Pühlhofer, G.; Punch, M.; Quirrenbach, A.; Raue, M.; Reimer, A.; Reimer, O.; Renaud, M.; de los Reyes, R.; Rieger, F.; Ripken, J.; Rob, L.; Rosier-Lees, S.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Sanchez, D. A.; Santangelo, A.; Schlickeiser, R.; Schulz, A.; Schwanke, U.; Schwarzburg, S.; Schwemmer, S.; Sheidaei, F.; Skilton, J. L.; Sol, H.; Spengler, G.; Stawarz, Ł.; Steenkamp, R.; Stegmann, C.; Stinzing, F.; Stycz, K.; Sushch, I.; Szostek, A.; Tavernet, J.-P.; Terrier, R.; Tluczykont, M.; Valerius, K.; van Eldik, C.; Vasileiadis, G.; Venter, C.; Viana, A.; Vincent, P.; Völk, H. J.; Volpe, F.; Vorobiov, S.; Vorster, M.; Wagner, S. J.; Ward, M.; White, R.; Wierzcholska, A.; Zacharias, M.; Zajczyk, A.; Zdziarski, A. A.; Zech, A.; Zechlin, H.-S.; Ali, M. O.
2012-09-01
Context. In some galaxy clusters, powerful active galactic nuclei (AGN) have blown bubbles with cluster-scale extent into the ambient medium. The main pressure support of these bubbles is not known to date, but cosmic rays are a viable possibility. For such a scenario, copious gamma-ray emission is expected as a tracer of cosmic rays from these systems. Aims: Hydra A, the closest galaxy cluster hosting a cluster-scale AGN outburst, located at a redshift of 0.0538, is investigated for gamma-ray emission with the High Energy Stereoscopic System (H.E.S.S.) array and the Fermi Large Area Telescope (Fermi-LAT). Methods: Data obtained in 20.2 h of dedicated H.E.S.S. observations and 38 months of Fermi-LAT data, gathered by its usual all-sky scanning mode, have been analyzed to search for a gamma-ray signal. Results: No signal has been found in either data set. Upper limits on the gamma-ray flux are derived and compared to models. These are the first limits on gamma-ray emission ever presented for galaxy clusters hosting cluster-scale AGN outbursts. Conclusions: The non-detection of Hydra A in gamma-rays has important implications for the particle populations and physical conditions inside the bubbles in this system. For the case of bubbles mainly supported by hadronic cosmic rays, the most favorable scenario, which involves full mixing between cosmic rays and the embedding medium, can be excluded. However, hadronic cosmic rays still remain a viable pressure-support agent to sustain the bubbles against the thermal pressure of the ambient medium. The largest population of highly energetic electrons, which are relevant for inverse-Compton gamma-ray production, is found in the youngest inner lobes of Hydra A. The limit on the inverse-Compton gamma-ray flux excludes a magnetic field below half of the equipartition value of 16 μG in the inner lobes.
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-04-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. 
By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lappa, Marcello, E-mail: marcello.lappa@strath.ac.uk
The relevance of non-equilibrium phenomena, nonlinear behavior, gravitational effects and fluid compressibility in a wide range of problems related to high-temperature gas dynamics, especially in thermal, mechanical and nuclear engineering, calls for a concerted approach using the tools of the kinetic theory of gases, statistical physics, quantum mechanics, thermodynamics and mathematical modeling in synergy with advanced numerical strategies for the solution of the Navier–Stokes equations. The reason behind such a need is that in many instances of relevance in this field one witnesses a departure from canonical models and the resulting inadequacy of standard CFD approaches, especially those traditionally used to deal with thermal (buoyancy) convection problems. Starting from microscopic considerations and typical concepts of molecular dynamics, passing through the Boltzmann equation and its known solutions, we show how it is possible to remove past assumptions and elaborate an algorithm capable of targeting the broadest range of applications. Moving beyond the Boussinesq approximation, the Sutherland law and the principle of energy equipartition, the resulting method allows most of the fluid properties (density, viscosity, thermal conductivity, heat capacity, diffusivity, etc.) to be derived in a rational and natural way while keeping empirical contamination to a minimum. Special attention is also devoted to the well-known pressure issue. With the application of the so-called multiple pressure variables concept and a projection-like numerical approach, difficulties with this term in the momentum equation are circumvented by allowing the hydrodynamic pressure to decouple from its thermodynamic counterpart. 
The final result is a flexible and modular framework that on the one hand is able to account for all the molecular (translational, rotational and vibrational) degrees of freedom and their effective excitation, and on the other hand can guarantee adequate interplay between molecular and macroscopic-level entities and processes. Performance is demonstrated by computing some incompressible and compressible benchmark test cases for thermal (gravitational) convection, which are then extended to the high-temperature regime taking advantage of the newly developed features.
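As a minimal illustration of the energy-equipartition principle that the method moves beyond (this is the textbook ideal-gas consequence, not the paper's solver): each fully excited degree of freedom contributes R/2 to the molar heat capacity at constant volume.

```python
R = 8.314462618  # molar gas constant, J/(mol K)

def cv_cp_gamma(dof):
    """Ideal-gas molar heat capacities from equipartition:
    each fully excited degree of freedom contributes R/2 to c_v."""
    cv = 0.5 * dof * R
    cp = cv + R           # Mayer's relation for an ideal gas
    return cv, cp, cp / cv

# Diatomic gas with 3 translational + 2 rotational DOF (vibrations frozen):
cv, cp, gamma = cv_cp_gamma(5)
print(cv, cp, gamma)  # gamma = 7/5 = 1.4
```

Once vibrational modes become effectively excited at high temperature, dof grows and gamma drops, which is one reason a fixed-property CFD model breaks down in the high-temperature regime discussed above.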
The Gamma-Ray Emitting Radio-Loud Narrow-Line Seyfert 1 Galaxy PKS 2004-447 II. The Radio View
NASA Technical Reports Server (NTRS)
Schulz, R.; Kreikenbohm, A.; Kadler, M.; Ojha, R.; Ros, E.; Stevens, J.; Edwards, P. G.; Carpenter, B.; Elsaesser, D.; Gehrels, N.;
2016-01-01
Context. Gamma-ray-detected radio-loud narrow-line Seyfert 1 (gamma-NLS1) galaxies constitute a small but interesting sample of the gamma-ray-loud AGN. The radio-loudest gamma-NLS1 known, PKS 2004-447, is located in the southern hemisphere and is monitored in the radio regime by the multiwavelength monitoring programme TANAMI. Aims. We aim for the first detailed study of the radio morphology and long-term radio spectral evolution of PKS 2004-447, which are essential for understanding the diversity of the radio properties of gamma-NLS1s. Methods. The TANAMI VLBI monitoring program uses the Australian Long Baseline Array (LBA) and telescopes in Antarctica, Chile, New Zealand, and South Africa to monitor the jets of radio-loud active galaxies in the southern hemisphere. Lower-resolution radio flux density measurements at multiple radio frequencies over four years of observations were obtained with the Australia Telescope Compact Array (ATCA). Results. The TANAMI VLBI image at 8.4 GHz shows an extended one-sided jet with a dominant compact VLBI core. Its brightness temperature is consistent with equipartition, but it is an order of magnitude below that of other gamma-NLS1s, with the sample value varying over two orders of magnitude. We find a compact morphology with a projected large-scale size of 11 kpc and a persistent steep radio spectrum with moderate flux-density variability. Conclusions. PKS 2004-447 appears to be a unique member of the gamma-NLS1 sample. It exhibits blazar-like features, such as a flat featureless X-ray spectrum and a core-dominated, one-sided parsec-scale jet with indications of relativistic beaming. However, the data also reveal properties atypical for blazars, such as a radio spectrum and large-scale size consistent with compact steep-spectrum (CSS) objects, which are usually associated with young radio sources. These characteristics are unique among all gamma-NLS1s and extremely rare among gamma-ray-loud AGN.
Comparison between two methods for forward calculation of ambient noise H/V spectral ratios
NASA Astrophysics Data System (ADS)
Garcia-Jerez, A.; Luzón, F.; Sanchez-Sesma, F. J.; Santoyo, M. A.; Albarello, D.; Lunedei, E.; Campillo, M.; Iturrarán-Viveros, U.
2011-12-01
The analysis of horizontal-to-vertical spectral ratios of ambient noise (NHVSR) is a valuable tool for seismic prospecting, particularly when both a dense spatial sampling and a low-cost procedure are required. Unfortunately, the computation method still lacks a unanimously accepted theoretical basis, and different approaches are currently being used for inversion of the ground structure from the measured H/V curves. Two major approaches for forward calculation of NHVSRs in a layered medium are compared in this work. The first one was developed by Arai and Tokimatsu (2004) and recently improved by Albarello and Lunedei (2011). It consists of a description of the wavefield as generated by Far Surface point Forces (FSF method). The second one is based on the work of Sánchez-Sesma et al. (2011), who consider ambient noise as a Diffuse WaveField (DWF method), taking advantage of the proportionality between its Fourier-transformed autocorrelation (power spectrum) and the imaginary part of the Green function when source and receiver coincide. In both methods, the NHVSR is written as (P_H/P_V)^{1/2}, where P_H and P_V are the horizontal and vertical power spectra. In the FSF method these quantities are given by

P_V ∝ Σ_m (1 + ½ χ_m² α²) (A_Rm / k_Rm)²
P_H ∝ Σ_m {(1 + ½ χ_m² α²) (A_Rm / k_Rm)² χ_m² + ½ α² (A_Lm / k_Lm)²}

where k_Rm, χ_m and A_Rm are the wavenumber, ellipticity and medium response of the m-th Rayleigh wave mode; k_Lm and A_Lm correspond to the m-th Love wave mode, and α is the horizontal-to-vertical load ratio of the ambient noise sources. Some common factors are omitted in the expressions of P_V and P_H. On the other hand, the DWF method deals with the full wavefield, including both surface and body waves. In order to make the comparison easier, and taking into account that surface waves are often the dominant components in wide spectral ranges, body wave contributions are neglected here. 
In this case, the P_H and P_V power spectra for the DWF method reduce to the simple expressions

P_V = ½ Σ_m A_Rm
P_H = ½ Σ_m (A_Rm χ_m² + A_Lm)

Thus, the main difference between these methods is the way in which the amount of energy injected into each surface wave mode is established: either following the energy equipartition principle (DWF method) or controlled by surface point loads with angle tan⁻¹ α (FSF method). These methods have been numerically compared for a simple structure consisting of a plane layer overlying a halfspace. The S-wave velocity contrast was varied between 2 and 6, and the Poisson ratio of the layer from 0.25 to 0.45 (using 0.25 for the halfspace). We set α = 1 for the FSF method. Although both methods provide NHVSRs of similar shape and very close peak frequencies, the peaks obtained from DWF are significantly smoother. Both ratios become constant as frequency increases, but their high-frequency limits differ too. References: Albarello, D. & E. Lunedei (2011). Near Surface Geophysics 9, in press. Arai, H. & K. Tokimatsu (2004). Bull. Seismol. Soc. Am. 94, 53-63. Sánchez-Sesma, F. J., M. Rodríguez, U. Iturrarán-Viveros, F. Luzón, M. Campillo, L. Margerin, A. García-Jerez, M. Suarez, M. A. Santoyo & A. Rodríguez-Castellanos (2011). Geophys. J. Int. 186, 221-225.
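The two sets of expressions can be evaluated side by side. The sketch below computes both H/V ratios for a pair of hypothetical surface-wave modes; all modal quantities are illustrative placeholders, not outputs of the FSF or DWF codes compared in the abstract.

```python
import numpy as np

# Hypothetical modal quantities for two modes at one frequency (illustrative):
# A_R, k_R, chi: Rayleigh medium response, wavenumber, ellipticity;
# A_L, k_L: Love medium response and wavenumber.
A_R = np.array([1.0, 0.3]); k_R = np.array([0.8, 1.5]); chi = np.array([0.9, 0.5])
A_L = np.array([1.2, 0.4]); k_L = np.array([0.7, 1.4])

def nhvsr_fsf(alpha=1.0):
    """H/V from far surface point forces (FSF expressions above);
    alpha is the horizontal-to-vertical load ratio of the sources."""
    pv = np.sum((1 + 0.5 * chi**2 * alpha**2) * (A_R / k_R)**2)
    ph = np.sum((1 + 0.5 * chi**2 * alpha**2) * (A_R / k_R)**2 * chi**2
                + 0.5 * alpha**2 * (A_L / k_L)**2)
    return np.sqrt(ph / pv)

def nhvsr_dwf():
    """H/V from a diffuse (equipartitioned) wavefield, surface waves only."""
    pv = 0.5 * np.sum(A_R)
    ph = 0.5 * np.sum(A_R * chi**2 + A_L)
    return np.sqrt(ph / pv)

print(nhvsr_fsf(1.0), nhvsr_dwf())
```

Note how the DWF ratio has no free source parameter: the modal energies are fixed by equipartition, whereas the FSF ratio depends on the assumed load ratio α.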
FARADAY ROTATION STRUCTURE ON KILOPARSEC SCALES IN THE RADIO LOBES OF CENTAURUS A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feain, I. J.; Ekers, R. D.; Norris, R. P.
2009-12-10
We present the results of an Australia Telescope Compact Array 1.4 GHz spectropolarimetric aperture synthesis survey of 34 deg² centered on Centaurus A-NGC 5128. A catalog of 1005 extragalactic compact radio sources in the field to a continuum flux density of 3 mJy beam⁻¹ is provided, along with a table of Faraday rotation measures (RMs) and linear polarized intensities for the 28% of sources with high signal to noise in linear polarization. We use the ensemble of 281 background polarized sources as line-of-sight probes of the structure of the giant radio lobes of Centaurus A. This is the first time such a method has been applied to radio galaxy lobes, and we explain how it differs from the conventional methods that are often complicated by depth and beam depolarization effects. Assuming a magnetic field strength in the lobes of 1.3 B₁ μG, where B₁ = 1 is implied by equipartition between magnetic fields and relativistic particles, the upper limit we derive on the maximum possible difference between the average RM of the 121 sources behind Centaurus A and the average RM of the 160 sources along sightlines outside Centaurus A implies an upper limit on the volume-averaged thermal plasma density in the giant radio lobes of ⟨n_e⟩ < 5 × 10⁻⁵ B₁⁻¹ cm⁻³. We use an RM structure function analysis and report the detection of a turbulent RM signal, with rms σ_RM = 17 rad m⁻² and scale size 0.°3, associated with the southern giant lobe. We cannot verify whether this signal arises from turbulent structure throughout the lobe or only in a thin skin (or sheath) around the edge, although we favor the latter. The RM signal is modeled as possibly arising from a thin skin with a thermal plasma density equivalent to the Centaurus intragroup medium density and a coherent magnetic field that reverses its sign on a spatial scale of 20 kpc. 
For a thermal density of n₁ × 10⁻³ cm⁻³, the skin magnetic field strength is 0.8 n₁⁻¹ μG.
Shapes, spectra and new methods in nonlinear spatial optics
NASA Astrophysics Data System (ADS)
Sun, Can
For a myriad of optical applications, the quality of the light source is poor and the beam is inherently spatially partially-coherent. For this broad class of systems, wave dynamics depends not only on the wave intensity, but also on its distribution of spatial frequencies. Unfortunately, this entire spectrum of problems has often been overlooked, for reasons of theoretical ease or experimental difficulty. Here, we remedy this by demonstrating a novel experimental setup which, for the first time, allows arbitrary modulation of the spatial spectrum of light to obtain any distribution of interest. Using modulation instability as an example, we isolate the effect of different spectral shapes and observe distinct beam dynamics. Next, we turn to a thermodynamic description of the long-term evolution of statistical fields. For quantum systems, a major consequence is Bose-Einstein condensation. However, recent theoretical studies have suggested that quantum mechanics is not necessary for the condensation process: classical waves with random phases can also self-organize into a coherent state. Starting from a random ensemble, nonlinear interactions can lead to a turbulent energy cascade towards longer spatial scales. In complete analogy with the kinetics of a gas system, there is a statistical dynamics of waves in which particle velocities map to wavepacket k-vectors while collisions are mimicked by four-wave mixing. As with collisions, each wave interaction is formally reversible, yet entropy principles mandate that the ensemble evolves towards an equilibrium state of maximum disorder. The result is an equipartition of energy, in the form of a Rayleigh-Jeans spectrum, with information about the condensation process recorded in small-scale fluctuations. Here, we give the first experimental observation of the condensation of classical waves in any medium. 
Using classical light in a self-defocusing photorefractive, we observe all aspects of the condensation process, including the population of a coherent state, spectral redistribution towards the Rayleigh-Jeans spectrum, and formal reversibility of the interactions. The latter is proved experimentally by introducing a digital "Maxwell's Demon" to reverse (phase-conjugate) the momentum of each wavepacket and recover the original "thermal cloud". The results integrate digital and physical methods of nonlinear processing, confirm fundamental ideas in wave turbulence, and greatly extend the range of Bose-Einstein theory.
Deep inelastic scattering as a probe of entanglement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharzeev, Dmitri E.; Levin, Eugene M.
Using nonlinear evolution equations of QCD, we compute the von Neumann entropy of the system of partons resolved by deep inelastic scattering at a given Bjorken x and momentum transfer q² = −Q². We interpret the result as the entropy of entanglement between the spatial region probed by deep inelastic scattering and the rest of the proton. At small x the relation between the entanglement entropy S(x) and the parton distribution xG(x) becomes very simple: S(x) = ln[xG(x)]. In this small-x, large-rapidity-Y regime, all partonic microstates have equal probabilities: the proton is composed of an exponentially large number exp(ΔY) of microstates that occur with equal and exponentially small probabilities exp(−ΔY), where Δ is defined by xG(x) ~ 1/x^Δ. For this equipartitioned state, the entanglement entropy is maximal, so at small x, deep inelastic scattering probes a maximally entangled state. Here, we propose the entanglement entropy as an observable that can be studied in deep inelastic scattering. This will require event-by-event measurements of hadronic final states, and would allow one to study the transformation of entanglement entropy into the Boltzmann one. We estimate that the proton is represented by the maximally entangled state at x ≤ 10⁻³; this kinematic region will be amenable to studies at the Electron Ion Collider.
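The equipartition argument above can be checked directly: a state of N = exp(ΔY) equiprobable microstates has the maximal entropy ln N, which is exactly ln[xG(x)] when xG(x) ~ 1/x^Δ and x = exp(−Y). A minimal numerical sketch (the values of Δ and Y are illustrative):

```python
import math

def shannon_entropy(probs):
    """S = -sum p ln p for a discrete distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

Delta, Y = 0.3, 10.0            # illustrative: xG(x) ~ 1/x^Delta at x = e^(-Y)
N = round(math.exp(Delta * Y))  # number of equiprobable partonic microstates
S = shannon_entropy([1.0 / N] * N)
# For the equipartitioned state, S = ln N: the maximal entropy for N states
assert abs(S - math.log(N)) < 1e-9
print(S)
```

Any non-uniform distribution over the same N microstates gives a strictly smaller S, which is the sense in which the small-x state is maximally entangled.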
NASA Astrophysics Data System (ADS)
Ferraro, F. R.; Lanzoni, B.; Raso, S.; Nardiello, D.; Dalessandro, E.; Vesperini, E.; Piotto, G.; Pallanca, C.; Beccari, G.; Bellini, A.; Libralato, M.; Anderson, J.; Aparicio, A.; Bedin, L. R.; Cassisi, S.; Milone, A. P.; Ortolani, S.; Renzini, A.; Salaris, M.; van der Marel, R. P.
2018-06-01
The parameter A+, defined as the area enclosed between the cumulative radial distribution of blue straggler stars (BSSs) and that of a reference population, is a powerful indicator of the level of BSS central segregation. As part of the Hubble Space Telescope UV Legacy Survey of Galactic globular clusters (GCs), here we present the BSS population and the determination of A+ in 27 GCs observed out to about one half-mass radius. In combination with 21 additional clusters discussed in a previous paper, this provides us with a global sample of 48 systems (corresponding to ∼32% of the Milky Way GC population), for which we find a strong correlation between A+ and the ratio of cluster age to the current central relaxation time. Tight relations have also been found with the core radius and the central luminosity density, which are expected to change with the long-term cluster dynamical evolution. An interesting relation is emerging between A+ and the ratio of the BSS velocity dispersion to that of main-sequence turn-off stars, which measures the degree of energy equipartition experienced by BSSs in the cluster. These results provide further confirmation that BSSs are invaluable probes of GC internal dynamics and that A+ is a powerful dynamical clock.
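The definition of A+ as an area between two cumulative radial distributions lends itself to a short numerical sketch. The radial profiles below are synthetic, and the choice of integration variable (log of radius) and outer limit are assumptions for illustration, not the paper's exact prescription.

```python
import numpy as np

def a_plus(r, cdf_bss, cdf_ref):
    """Area enclosed between the cumulative radial distribution of BSSs and
    that of a reference population, integrated here in log10(r) by the
    trapezoidal rule (a sketch of the definition, not the published recipe)."""
    x = np.log10(r)
    y = cdf_bss - cdf_ref
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Illustrative inputs: BSSs more centrally segregated than the reference stars.
r = np.logspace(-2, 0, 200)    # radii in units of the half-mass radius
cdf_bss = r**0.3               # rises faster toward the centre (both reach 1 at r=1)
cdf_ref = r**0.6
print(a_plus(r, cdf_bss, cdf_ref))  # positive => BSSs are segregated
```

A larger positive A+ means the BSS distribution rises faster toward the centre, i.e. a dynamically older cluster in the "dynamical clock" reading.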
Physical Conditions in Ultra-fast Outflows in AGN
NASA Astrophysics Data System (ADS)
Kraemer, S. B.; Tombesi, F.; Bottorff, M. C.
2018-01-01
XMM-Newton and Suzaku spectra of Active Galactic Nuclei (AGN) have revealed highly ionized gas, in the form of absorption lines from H-like and He-like Fe. Some of these absorbers, ultra-fast outflows (UFOs), have radial velocities of up to 0.25c. We have undertaken a detailed photoionization study of high-ionization Fe absorbers, both UFOs and non-UFOs, in a sample of AGN observed by XMM-Newton. We find that the heating and cooling processes in UFOs are Compton-dominated, unlike the non-UFOs. Both types are characterized by force multipliers on the order of unity, which suggest that they cannot be radiatively accelerated in sub-Eddington AGN, unless they were much less ionized at their point of origin. However, such highly ionized gas can be accelerated via a magneto-hydrodynamic (MHD) wind. We explore this possibility by applying a cold MHD flow model to the UFO in the well-studied Seyfert galaxy, NGC 4151. We find that the UFO can be accelerated along magnetic streamlines anchored in the accretion disk. In the process, we have been able to constrain the magnetic field strength and the magnetic pressure in the UFO and have determined that the system is not in magnetic/gravitational equipartition. Open questions include the variability of the UFOs and the apparent lack of non-UFOs in UFO sources.
PKS 1954-388: RadioAstron Detection on 80,000 km Baselines and Multiwavelength Observations
NASA Astrophysics Data System (ADS)
Edwards, P. G.; Kovalev, Y. Y.; Ojha, R.; An, H.; Bignall, H.; Carpenter, B.; Hovatta, T.; Stevens, J.; Voytsik, P.; Andrianov, A. S.; Dutka, M.; Hase, H.; Horiuchi, S.; Jauncey, D. L.; Kadler, M.; Lisakov, M.; Lovell, J. E. J.; McCallum, J.; Müller, C.; Phillips, C.; Plötz, C.; Quick, J.; Reynolds, C.; Schulz, R.; Sokolovsky, K. V.; Tzioumis, A. K.; Zuga, V.
2017-04-01
We present results from a multiwavelength study of the blazar PKS 1954-388 at radio, UV, X-ray, and gamma-ray energies. A RadioAstron observation at 1.66 GHz in June 2012 resulted in the detection of interferometric fringes on baselines of 6.2 Earth diameters. This implies a source-frame brightness temperature greater than 2 × 10¹² K, well in excess of both the equipartition and inverse-Compton limits, indicating Doppler boosting in the core. An 8.4 GHz TANAMI VLBI image, made less than a month after the RadioAstron observations, is consistent with a previously reported superluminal motion for a jet component. Flux density monitoring with the Australia Telescope Compact Array confirms previous evidence for long-term variability that increases with observing frequency. A search for more rapid variability revealed no evidence for significant day-scale flux density variation. The ATCA light curve reveals a strong radio flare beginning in late 2013, which peaks higher, and earlier, at higher frequencies. Comparison with the Fermi gamma-ray light curve indicates this followed 9 months after the start of a prolonged gamma-ray high state, a radio lag comparable to that seen in other blazars. The multiwavelength data are combined to derive a spectral energy distribution, which is fitted by a one-zone synchrotron self-Compton (SSC) model with the addition of external Compton (EC) emission.
NASA Astrophysics Data System (ADS)
Nagai, H.; Fujita, Y.; Nakamura, M.; Orienti, M.; Kino, M.; Asada, K.; Giovannini, G.
2017-11-01
We present Very Long Baseline Array polarimetric observations of the innermost jet of 3C 84 (NGC 1275) at 43 GHz. Significant polarized emission is detected at the hotspot of the innermost restarted jet, which is located 1 pc south of the radio core. While the previous report presented a hotspot at the southern end of the western limb, the hotspot location has moved to the southern end of the eastern limb. Faraday rotation is detected across the entire bandwidth of the 43 GHz band. The measured rotation measure (RM) is at most (6.3 ± 1.9) × 10⁵ rad m⁻² and might be slightly time variable, by a factor of a few on the timescale of a month. Our measured RM and the RM previously reported from the CARMA and SMA observations cannot be consistently explained by a spherical accretion flow with a power-law density profile. We propose that a clumpy/inhomogeneous ambient medium is responsible for the observed RM. Using an equipartition magnetic field, we derive an electron density of 2 × 10⁴ cm⁻³. Such an electron density is consistent with the clouds of the narrow-line emission region around the central engine. We also discuss the magnetic field configuration from the black hole scale to the parsec scale and the origin of the low polarization.
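For readers wanting to reproduce the order of magnitude, a sketch of the standard Faraday-rotation inversion for a uniform screen. The field strength and path length below are illustrative assumptions, not the equipartition values fitted in the paper.

```python
def electron_density_from_rm(rm, b_par_uG, path_pc):
    """Invert the standard Faraday-rotation relation for a uniform screen:
    RM = 0.812 * n_e * B_par * L, with RM in rad/m^2, n_e in cm^-3,
    B_par (line-of-sight field) in microgauss, and L in parsec."""
    return rm / (0.812 * b_par_uG * path_pc)

# Illustrative numbers: RM ~ 6e5 rad/m^2 through ~0.1 pc of gas threaded
# by a ~10 mG (= 1e4 microgauss) line-of-sight field.
n_e = electron_density_from_rm(6e5, 1e4, 0.1)
print(n_e)  # electron density in cm^-3 under these assumptions
```

With a clumpy medium, as proposed in the abstract, the effective path length through Faraday-active gas shrinks and the inferred density rises accordingly.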
Structural State and Elastic Properties of Perovskites in the Earth's Mantle
NASA Astrophysics Data System (ADS)
Ross, N. L.; Angel, R. J.; Zhao, J.
2005-12-01
Recent advances in laboratory-based single-crystal X-ray diffraction techniques for measuring the intensities of diffraction from crystals held in situ at high pressures in the diamond-anvil cell have been used to determine the role of polyhedral compression in the response of 2:4 and 3:3 GdFeO3-type perovskites to high pressure [1]. These new data clearly demonstrate that, contrary to previous belief, the compression of the octahedral sites is significant and that the evolution of the perovskite structure with pressure is controlled by a new principle: that of equipartition of bond-valence strain across the structure [2]. This new paradigm, together with the minimal information available from high-pressure powder diffraction studies, may provide the possibility of predicting the structural state and elastic properties of perovskites of any composition at mantle pressures and temperatures. Cation partitioning between silicate perovskites and other phases should then be predictable through the application of a Brice-style model [3]. The geochemical implications of this type of analysis will be presented, as well as the possibility of extending these measurements to higher pressures. References: [1] e.g. Zhao, Ross & Angel (2004) Phys Chem Miner 31:299; Ross, Zhao & Angel (2004) J Solid State Chem 177:1276. [2] Zhao, Ross & Angel (2004) Acta Cryst B60:263. [3] e.g. Walter et al. (2004) Geochim Cosmochim Acta 68:4267; Blundy & Wood (1994) Nature 372:452.
Multiphase complete exchange: A theoretical analysis
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1993-01-01
Complete Exchange requires each of N processors to send a unique message to each of the remaining N-1 processors. For a circuit-switched hypercube with N = 2^d processors, the Direct and Standard algorithms for Complete Exchange are optimal for very large and very small message sizes, respectively. For intermediate sizes, a hybrid Multiphase algorithm is better. This carries out Direct exchanges on a set of subcubes whose dimensions are a partition of the integer d. The best such algorithm for a given message size m could hitherto only be found by enumerating all partitions of d. The Multiphase algorithm is analyzed assuming a high-performance communication network. It is proved that only algorithms corresponding to equipartitions of d (partitions in which the maximum and minimum elements differ by at most 1) can possibly be optimal. The run times of these algorithms plotted against m form a hull of optimality. It is proved that, although there is an exponential number of partitions, (1) the number of faces on this hull is Θ(√d), (2) the hull can be found in Θ(√d) time, and (3) once it has been found, the optimal algorithm for any given m can be found in Θ(log d) time. These results provide a very fast technique for minimizing communication overhead in many important applications, such as matrix transpose, Fast Fourier Transform, and ADI.
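The equipartition result makes the search space tiny: for each number of phases j there is exactly one equipartition of d, with parts floor(d/j) and ceil(d/j). A sketch of the enumeration, with a deliberately simple per-phase cost model that is an illustrative assumption, not the paper's timing analysis:

```python
def equipartitions(d):
    """All partitions of d whose parts differ by at most 1: for j parts,
    r = d mod j parts equal ceil(d/j) and the rest equal floor(d/j)."""
    result = []
    for j in range(1, d + 1):
        q, r = divmod(d, j)
        result.append([q + 1] * r + [q] * (j - r))
    return result

def runtime(partition, m, ts=1.0, tb=0.01):
    """Hypothetical cost model: each phase is a Direct exchange on a subcube
    of dimension di, costing (2**di - 1) message start-ups, with message
    volume grown by combining across the remaining d - di dimensions."""
    d = sum(partition)
    return sum((2**di - 1) * (ts + tb * m * 2**(d - di)) for di in partition)

d, m = 6, 64
best = min(equipartitions(d), key=lambda p: runtime(p, m))
print(best, runtime(best, m))
```

Only d candidate partitions are evaluated here instead of the exponentially many general partitions of d, which is the practical payoff of the optimality proof.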
NASA Astrophysics Data System (ADS)
Agarwal, A.; Mohan, P.; Gupta, Alok C.; Mangalam, A.; Volvach, A. E.; Aller, M. F.; Aller, H. D.; Gu, M. F.; Lähteenmäki, A.; Tornikoski, M.; Volvach, L. N.
2017-07-01
We studied the pc-scale core shift effect using radio light curves of three blazars, S5 0716+714, 3C 279 and BL Lacertae, which were monitored at five frequencies (ν) between 4.8 and 36.8 GHz using the University of Michigan Radio Astronomical Observatory (UMRAO), the Crimean Astrophysical Observatory (CrAO) and Metsähovi Radio Observatory for over 40 yr. Flares were Gaussian fitted to derive the time delays between observed frequencies for each flare (Δt), the peak amplitudes (A) and their half widths. Using A ∝ ν^α, we infer α in the range -16.67 to 2.41, and using Δt ∝ ν^(1/k_r), we infer k_r ∼ 1, which is employed, in the context of equipartition between magnetic and kinetic energy densities, for parameter estimation. From the estimated core position offset (Ω_rν) and the core radius (r_core), we infer that the opacity model may not be valid in all cases. The mean magnetic field strengths at 1 pc (B_1) and at the core (B_core) are in agreement with previous estimates. We apply the magnetically arrested disc model to estimate black hole spins in the range 0.15-0.9 for these blazars, indicating that the model is consistent with the expected accretion mode in such sources. The power-law-shaped power spectral density has slopes of -1.3 to -2.3 and is interpreted in terms of multiple shocks or magnetic instabilities.
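The k_r estimate from flare time lags amounts to a slope fit in log-log space. A minimal sketch (the band endpoints follow the abstract's 4.8-36.8 GHz range, but the intermediate frequencies, the lag normalization, and the sign convention Δt ∝ ν^(-1/k_r) are assumptions of this sketch; the data are synthetic and noise-free):

```python
import math

def fit_kr(freqs_ghz, lags):
    """Least-squares slope of log(lag) vs log(freq); k_r = -1/slope
    under the convention lag ∝ freq**(-1/k_r)."""
    xs = [math.log(f) for f in freqs_ghz]
    ys = [math.log(t) for t in lags]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope

freqs = [4.8, 8.0, 14.5, 22.2, 36.8]                  # GHz (intermediate bands assumed)
kr_true = 1.0
lags = [10.0 * f ** (-1.0 / kr_true) for f in freqs]  # hypothetical, noise-free lags
kr = fit_kr(freqs, lags)                              # recovers k_r = 1
```

With real, noisy lags an uncertainty on k_r would follow from the regression residuals.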
Deep inelastic scattering as a probe of entanglement
Kharzeev, Dmitri E.; Levin, Eugene M.
2017-06-03
Using nonlinear evolution equations of QCD, we compute the von Neumann entropy of the system of partons resolved by deep inelastic scattering at a given Bjorken x and momentum transfer q² = -Q². We interpret the result as the entropy of entanglement between the spatial region probed by deep inelastic scattering and the rest of the proton. At small x the relation between the entanglement entropy S(x) and the parton distribution xG(x) becomes very simple: S(x) = ln[xG(x)]. In this small-x, large-rapidity-Y regime, all partonic microstates have equal probabilities: the proton is composed of an exponentially large number exp(ΔY) of microstates that occur with equal and exponentially small probabilities exp(-ΔY), where Δ is defined by xG(x) ~ 1/x^Δ. For this equipartitioned state, the entanglement entropy is maximal, so at small x, deep inelastic scattering probes a maximally entangled state. Here, we propose the entanglement entropy as an observable that can be studied in deep inelastic scattering. This will require event-by-event measurements of hadronic final states, and would allow study of the transformation of entanglement entropy into the Boltzmann one. We estimate that the proton is represented by the maximally entangled state at x ≤ 10⁻³; this kinematic region will be amenable to studies at the Electron-Ion Collider.
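The equipartition statement can be verified with a toy computation: for n equiprobable microstates the von Neumann entropy reduces to the Shannon entropy ln n, so with n ≈ exp(ΔY) microstates one recovers S ≈ ΔY = ln[xG(x)]. A minimal sketch (the values of Δ and x are illustrative only):

```python
import math

def shannon_entropy(probs):
    """S = -sum p ln p; equals ln n when all n probabilities are equal."""
    return -sum(p * math.log(p) for p in probs if p > 0)

delta, x = 0.3, 1e-3             # illustrative values of Δ and Bjorken x
Y = math.log(1.0 / x)            # rapidity
n = round(math.exp(delta * Y))   # number of equiprobable microstates
S = shannon_entropy([1.0 / n] * n)
# S = ln n ≈ delta * Y = ln[x G(x)] for x G(x) ~ x**(-delta)
```

The small discrepancy between S and ΔY here comes only from rounding n to an integer.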
NASA Astrophysics Data System (ADS)
Padoan, Paolo; Nordlund, Åke; Kritsuk, Alexei G.; Norman, Michael L.; Li, Pak Shing
2007-06-01
The Padoan and Nordlund model of the stellar initial mass function (IMF) is derived from low-order statistics of supersonic turbulence, neglecting gravity (e.g., gravitational fragmentation, accretion, and merging). In this work, the predictions of that model are tested using the largest numerical experiments of supersonic hydrodynamic (HD) and magnetohydrodynamic (MHD) turbulence to date (~1000³ computational zones) and three different codes (Enzo, Zeus, and the Stagger code). The model predicts a power-law distribution for large masses, related to the turbulence-energy power-spectrum slope and the shock-jump conditions. This power-law mass distribution is confirmed by the numerical experiments. The model also predicts a sharp difference between the HD and MHD regimes, which is recovered in the experiments as well, implying that the magnetic field, even below energy equipartition on the large scale, is a crucial component of the process of turbulent fragmentation. These results suggest that the stellar IMF of primordial stars may differ from that in later epochs of star formation, due to differences in both gas temperature and magnetic field strength. In particular, we find that the IMF of primordial stars born in turbulent clouds may be narrowly peaked around a mass of order 10 M⊙, as long as the column density of such clouds is not much in excess of 10²² cm⁻².
Exact relations for energy transfer in self-gravitating isothermal turbulence
NASA Astrophysics Data System (ADS)
Banerjee, Supratik; Kritsuk, Alexei G.
2017-11-01
Self-gravitating isothermal supersonic turbulence is analyzed in the asymptotic limit of large Reynolds numbers. Based on the inviscid invariance of total energy, an exact relation is derived for homogeneous (not necessarily isotropic) turbulence. A modified definition for the two-point energy correlation functions is used to comply with the requirement of detailed energy equipartition in the acoustic limit. In contrast to the previous relations (S. Galtier and S. Banerjee, Phys. Rev. Lett. 107, 134501 (2011), 10.1103/PhysRevLett.107.134501; S. Banerjee and S. Galtier, Phys. Rev. E 87, 013019 (2013), 10.1103/PhysRevE.87.013019), the current exact relation shows that the pressure dilatation terms play practically no role in the energy cascade. Both the flux and source terms are written in terms of two-point differences. Sources enter the relation in the form of mixed second-order structure functions. Unlike the kinetic and thermodynamic potential energies, the gravitational contribution is absent from the flux term. An estimate shows that, for the isotropic case, the correlation between density and gravitational acceleration may play an important role in modifying the energy transfer in self-gravitating turbulence. The exact relation is also written in an alternative form in terms of two-point correlation functions, which is then used to describe the scale-by-scale energy budget in spectral space.
PCA/HEXTE Observations of Coma and A2319
NASA Technical Reports Server (NTRS)
Rephaeli, Yoel
1998-01-01
The Coma cluster was observed in 1996 for 90 ks by the PCA and HEXTE instruments aboard the RXTE satellite, the first simultaneous pointed measurement of Coma in the broad 2-250 keV energy band. The high sensitivity achieved during this long observation allows precise determination of the spectrum. Our analysis of the measurements clearly indicates that in addition to the main thermal emission from hot intracluster gas at kT = 7.5 keV, a second spectral component is required to best fit the data. If thermal, it can be described with a temperature of 4.7 keV, contributing about 20% of the total flux. The additional spectral component can also be described by a power law, possibly due to Compton scattering of CMB photons by relativistic electrons. This interpretation is based on the diffuse radio synchrotron emission, which has a spectral index of 2.34, within the range allowed by fits to the RXTE spectral data. A Compton origin of the measured nonthermal component would imply that the volume-averaged magnetic field in the central region of Coma is B ≈ 0.2 μG, a value deduced directly from the radio and X-ray measurements (and thus free of the usual assumption of energy equipartition). Barring the presence of unknown systematic errors in the RXTE source or background measurements, our spectral analysis yields considerable evidence for Compton X-ray emission in the Coma cluster.
HELICITY CONSERVATION IN NONLINEAR MEAN-FIELD SOLAR DYNAMO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pipin, V. V.; Sokoloff, D. D.; Zhang, H.
It is believed that magnetic helicity conservation is an important constraint on large-scale astrophysical dynamos. In this paper, we study a mean-field solar dynamo model that employs two different formulations of magnetic helicity conservation. In the first approach, the evolution of the averaged small-scale magnetic helicity is largely determined by the local induction effects due to the large-scale magnetic field, turbulent motions, and the turbulent diffusive loss of helicity. In this case, the dynamo model shows that the typical strength of the large-scale magnetic field generated by the dynamo is much smaller than the equipartition value for a magnetic Reynolds number of 10⁶. This is the so-called catastrophic quenching (CQ) phenomenon. In the literature, this is considered to be typical for various kinds of solar dynamo models, including the distributed-type and the Babcock-Leighton-type dynamos. The problem can be resolved by the second formulation, which is derived from the integral conservation of the total magnetic helicity. In this case, the dynamo model shows that magnetic helicity propagates with the dynamo wave from the bottom of the convection zone to the surface. This prevents CQ because of the local balance between the large-scale and small-scale magnetic helicities. Thus, the solar dynamo can operate in a wide range of magnetic Reynolds numbers up to 10⁶.
Computing the Dynamic Response of a Stratified Elastic Half Space Using Diffuse Field Theory
NASA Astrophysics Data System (ADS)
Sanchez-Sesma, F. J.; Perton, M.; Molina Villegas, J. C.
2015-12-01
The analytical solution for the dynamic response of an elastic half-space to a normal point load at the free surface is due to Lamb (1904). For a tangential force, we have Chao's (1960) formulae. For an arbitrary load at any depth within a stratified elastic half-space, the resulting elastic field can be given in the same fashion, by using an integral representation in the radial-wavenumber domain. Typically, computations use the discrete wave number (DWN) formalism, and Fourier analysis allows for the solution in the space and time domains. Experimentally, these elastic Green's functions might be retrieved from correlations of ambient vibrations, assuming a diffuse field. In practice, the field may not be totally diffuse, and only parts of the Green's functions, associated with surface or body waves, are retrieved. In this communication, we explore the computation of Green's functions for a layered medium on top of a half-space using a set of equipartitioned elastic plane waves. Our formalism includes body and surface waves (Rayleigh and Love waves). The latter correspond to the classical representations in terms of normal modes in the asymptotic case of large separation distances between source and receiver. This approach allows Green's functions to be computed faster than with DWN, and the surface and body wave contributions to be separated in order to better represent experimentally retrieved Green's functions.
Jose, Davis; Weitzel, Steven E.; Baase, Walter A.; Michael, Miya M.; von Hippel, Peter H.
2015-01-01
Here we use our site-specific base analog mapping approach to study the interactions and binding equilibria of cooperatively bound clusters of the single-stranded DNA binding protein (gp32) of the T4 DNA replication complex with longer ssDNA (and dsDNA) lattices. We show that in cooperatively bound clusters the binding free energy appears to be equipartitioned among the gp32 monomers of the cluster, so that all bind to the ssDNA lattice with comparable affinity, but also that the outer domains of the gp32 monomers at the ends of the cluster can fluctuate on and off the lattice and that clusters of gp32 monomers can slide along the ssDNA. We also show that at very low binding densities gp32 monomers bind to the ssDNA lattice at random, but that cooperatively bound gp32 clusters bind preferentially at the 5′-end of the ssDNA lattice. We use these results and the gp32 monomer-binding results of the companion paper to propose a detailed model for how gp32 might bind to and interact with ssDNA lattices in its various binding modes, and also consider how these clusters might interact with other components of the T4 DNA replication complex. PMID:26275774
Does the Boltzmann Principle Need a Dynamical Correction?
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2004-11-01
In an attempt to derive thermodynamics from classical mechanics, an approximate expression for the equilibrium temperature of a finite system has been derived (M. Bianucci, R. Mannella, B. J. West and P. Grigolini, Phys. Rev. E 51: 3002 (1995)) which differs from the one that follows from the Boltzmann principle S = k ln Ω(E) via the thermodynamic relation 1/T = ∂S/∂E by additional terms of "dynamical" character, which are argued to correct and generalize the Boltzmann principle for small systems (here Ω(E) is the area of the constant-energy surface). In the present work, the underlying definition of temperature in the Fokker-Planck formalism of Bianucci et al. is investigated and shown to coincide with an approximate form of the equipartition temperature. Its exact form, however, is strictly related to the "volume" entropy S = k ln Φ(E) via the thermodynamic relation above for systems of any number of degrees of freedom (Φ(E) is the phase-space volume enclosed by the constant-energy surface). This observation explains and clarifies the numerical results of Bianucci et al. and shows that a dynamical correction for either the temperature or the entropy is unnecessary, at least within the class of systems considered by those authors. Explicit analytical and numerical results for a particle coupled to a small chain (N ~ 10) of quartic oscillators are also provided to further illustrate these facts.
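The relation between the volume entropy and the equipartition temperature can be checked numerically for a system whose phase-space volume is known in closed form. A minimal sketch with k_B = 1, using the monatomic ideal gas, Φ(E) ∝ E^(3N/2), for which the volume-entropy temperature 1/T = dS/dE coincides exactly with the equipartition value T = 2E/(3N):

```python
import math

def volume_entropy_temperature(phi, E, dE=1e-6):
    """T from 1/T = dS/dE with S = ln(Phi(E)) (k_B = 1), via central difference."""
    dlnphi_dE = (math.log(phi(E + dE)) - math.log(phi(E - dE))) / (2 * dE)
    return 1.0 / dlnphi_dE

N = 10                                  # number of particles
phi = lambda E: E ** (1.5 * N)          # Phi(E) ∝ E**(3N/2); the prefactor drops out of dS/dE
E = 30.0
T = volume_entropy_temperature(phi, E)  # equipartition predicts T = 2E/(3N) = 2.0
```

For small systems the surface entropy k ln Ω(E) and the volume entropy k ln Φ(E) differ, which is exactly the regime the paper addresses.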
RAiSE III: 3C radio AGN energetics and composition
NASA Astrophysics Data System (ADS)
Turner, Ross J.; Shabala, Stanislav S.; Krause, Martin G. H.
2018-03-01
Kinetic jet power estimates based exclusively on observed monochromatic radio luminosities are highly uncertain due to confounding variables and a lack of knowledge about some aspects of the physics of active galactic nuclei (AGNs). We propose a new methodology to calculate the jet powers of the largest, most powerful radio sources based on combinations of their size, lobe luminosity, and shape of their radio spectrum; this approach avoids the uncertainties encountered by previous relationships. The outputs of our model are calibrated using hydrodynamical simulations and tested against independent X-ray inverse-Compton measurements. The jet powers and lobe magnetic field strengths of radio sources are found to be recovered using solely the lobe luminosity and spectral curvature, enabling the intrinsic properties of unresolved high-redshift sources to be inferred. By contrast, the radio source ages cannot be estimated without knowledge of the lobe volumes. The monochromatic lobe luminosity alone is incapable of accurately estimating the jet power or source age without knowledge of the lobe magnetic field strength and size, respectively. We find that, on average, the lobes of the Third Cambridge Catalogue of Radio Sources (3C) have magnetic field strengths approximately a factor of three lower than the equipartition value, inconsistent with equal energy in the particles and the fields at the 5σ level. The particle content of 3C radio lobes is discussed in the context of complementary observations; we do not find evidence favouring an energetically dominant proton population.
NASA Astrophysics Data System (ADS)
Potter, William J.
2017-02-01
We calculate the severe radiative energy losses which occur at the base of black hole jets using a relativistic fluid jet model, including in situ acceleration of non-thermal leptons by magnetic reconnection. Our results demonstrate that including a self-consistent treatment of radiative energy losses is necessary to perform accurate magnetohydrodynamic simulations of powerful jets, and that jet spectra calculated via post-processing are liable to vastly overestimate the amount of non-thermal emission. If no more than 95 per cent of the initial total jet power is radiated away by the plasma as it travels along the length of the jet, we can place a lower bound on the magnetization of the jet plasma at the base of the jet. For typical powerful jets, we find that the plasma at the jet base is required to be highly magnetized, with at least 10 000 times more energy contained in magnetic fields than in non-thermal leptons. Using a simple power-law model of magnetic reconnection, motivated by simulations of collisionless reconnection, we determine the allowed range of the large-scale average reconnection rate along the jet by restricting the total radiative energy losses incurred and the distance at which the jet first comes into equipartition. We calculate analytic expressions for the cumulative radiative energy losses due to synchrotron and inverse-Compton emission along jets, and derive analytic formulae for the constraint on the initial magnetization.
PKS 1954–388: RadioAstron detection on 80,000 km baselines and multiwavelength observations
Edwards, P. G.; Kovalev, Y. Y.; Ojha, R.; ...
2017-04-26
Here, we present results from a multiwavelength study of the blazar PKS 1954–388 at radio, UV, X-ray, and gamma-ray energies. A RadioAstron observation at 1.66 GHz in June 2012 resulted in the detection of interferometric fringes on baselines of 6.2 Earth diameters. This corresponds to a source-frame brightness temperature greater than 2 × 10¹² K, well in excess of both the equipartition and inverse-Compton limits, implying the existence of Doppler boosting in the core. An 8.4-GHz TANAMI VLBI image, made less than a month after the RadioAstron observations, is consistent with a previously reported superluminal motion for a jet component. Flux density monitoring with the Australia Telescope Compact Array confirms previous evidence for long-term variability that increases with observing frequency. A search for more rapid variability revealed no evidence for significant day-scale flux density variation. The ATCA light curve reveals a strong radio flare beginning in late 2013, which peaks higher, and earlier, at higher frequencies. Comparison with the Fermi gamma-ray light curve indicates this followed ~9 months after the start of a prolonged gamma-ray high state, a radio lag comparable to that seen in other blazars. The multiwavelength data are combined to derive a spectral energy distribution, which is fitted by a one-zone synchrotron self-Compton (SSC) model with the addition of external Compton (EC) emission.
A GLOBAL GALACTIC DYNAMO WITH A CORONA CONSTRAINED BY RELATIVE HELICITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, A.; Mangalam, A., E-mail: avijeet@iiap.res.in, E-mail: mangalam@iiap.res.in
We present a model for a global axisymmetric turbulent dynamo operating in a galaxy with a corona that treats the parameters of turbulence driven by supernovae and by magneto-rotational instability under a common formalism. The nonlinear quenching of the dynamo is alleviated by the inclusion of small-scale advective and diffusive magnetic helicity fluxes, which allow the gauge-invariant magnetic helicity to be transferred outside the disk and consequently to build up a corona during the course of dynamo action. The time-dependent dynamo equations are expressed in a separable form and solved through an eigenvector expansion constructed using the steady-state solutions of the dynamo equation. The parametric evolution of the dynamo solution allows us to estimate the final structure of the global magnetic field and the saturated value of the turbulence parameter α_m, even before solving the dynamical equations for the evolution of magnetic fields in the disk and the corona, along with α-quenching. We then solve these equations simultaneously to study the saturation of the large-scale magnetic field, its dependence on the small-scale magnetic helicity fluxes, and the corresponding evolution of the force-free field in the corona. The quadrupolar large-scale magnetic field in the disk is found to reach equipartition strength within a timescale of 1 Gyr. The large-scale magnetic field obtained in the corona is much weaker than the field inside the disk and has only a weak impact on the dynamo operation.
TeV-detected young pulsar wind nebulae
NASA Astrophysics Data System (ADS)
Cillis, Analia; Torres, D. F.; Martin, J.; de Oña, E.
2014-01-01
More than 20 young pulsar wind nebulae (PWNe) have been detected at very high energies (VHE) by the current Imaging Atmospheric Cherenkov Telescopes (IACTs). Such sources constitute the largest population of Galactic sources in this energy range. They are associated with very energetic, young pulsars and usually show extended emission up to a few tens of parsecs. In this work we present a spectral characterization of the young PWNe detected at VHE, using a time-dependent model spanning over 20 decades in frequency. The PWNe that have been studied in this work are: the Crab Nebula, G54.1+0.3, G0.9+0.1, G21.5-0.9, MSH 15-52, G292.2-0.5, Kes 75, HESS J1356-645, CTA 1, and HESS J1813-178. Other young PWNe that have been detected at VHE have not been incorporated due to controversies in the association between the PWN and pulsar, or a lack of observational data at radio and X-ray frequencies. Among the most robust findings, which are not affected by the uncertainties of the model, is that all PWNe detected at TeV energies are particle dominated, with magnetic fractions that do not exceed a few percent; none of these young PWNe is in equipartition. With respect to the spectrum of particle injection, our results suggest that the process of acceleration at the termination shock of the pulsar wind, followed by cooling, advection, and diffusion of the accelerated particles, is common in young PWNe.
Weibull thermodynamics: Subexponential decay in the energy spectrum of cosmic-ray nuclei
NASA Astrophysics Data System (ADS)
Tomaschitz, Roman
2017-10-01
The spectral number density of cosmic-ray nuclei is shown to be a multiply broken power law with a subexponential spectral cutoff. To this end, a spectral fit is performed to data sets covering the 1 GeV-10¹¹ GeV interval of the all-particle cosmic-ray spectrum. The flux points of the ultra-high-energy spectral tail measured with the Telescope Array indicate a Weibull cutoff exp(-(E/(k_B T))^σ) and permit a precise determination of the cutoff temperature k_B T = (2.5 ± 0.1) × 10¹⁰ GeV and the spectral index σ = 0.66 ± 0.02. Based on the spectral number density inferred from the least-squares fit, the thermodynamics of this stationary non-equilibrium system, a multi-component mixture of relativistic nuclei, is developed. The derivative of entropy with respect to internal energy defines the effective temperature of the nuclei, ∂S/∂U = 1/T_eff, with k_B T_eff ≈ 16.1 GeV, and the functional dependence between the cutoff temperature in the Weibull exponential and the effective gas temperature is determined. The equipartition ratio is found to be U/(N k_B T_eff) ≈ 0.30. The isochoric and isobaric heat capacities of the nuclear gas are calculated, as well as the isothermal and adiabatic compressibilities and the isobaric expansion coefficient, and it is shown that this non-equilibrated relativistic gas mixture satisfies the thermodynamic inequalities.
A three-phase amplification of the cosmic magnetic field in galaxies
NASA Astrophysics Data System (ADS)
Martin-Alvarez, Sergio; Devriendt, Julien; Slyz, Adrianne; Teyssier, Romain
2018-06-01
Arguably the main challenge of galactic magnetism studies is to explain how the interstellar medium of galaxies reaches energetic equipartition despite the extremely weak primordial magnetic fields predicted to thread the intergalactic medium. Previous numerical studies of isolated galaxies suggest that fast dynamo amplification might suffice to bridge the gap, spanning many orders of magnitude in strength, between the weak early-Universe magnetic fields and the ones observed in high-redshift galaxies. To better understand their evolution in the cosmological context of hierarchical galaxy growth, we probe the amplification process undergone by the cosmic magnetic field within a spiral galaxy to unprecedented accuracy by means of a suite of constrained-transport magnetohydrodynamical adaptive mesh refinement cosmological zoom simulations with different stellar feedback prescriptions. A galactic turbulent dynamo is found to be naturally excited in this cosmological environment, being responsible for most of the amplification of the magnetic energy. Indeed, we find that the magnetic energy spectra of simulated galaxies display telltale inverse cascades. Overall, the amplification process can be divided into three main phases, which are related to different physical mechanisms driving galaxy evolution: an initial collapse phase, an accretion-driven phase, and a feedback-driven phase. While different feedback models affect the magnetic field amplification differently, all tested models prove to be subdominant at early epochs, before the feedback-driven phase is reached. Thus the three-phase evolution paradigm is found to be quite robust vis-à-vis feedback prescriptions.
High-resolution hybrid simulations of turbulence from inertial to sub-proton scales
NASA Astrophysics Data System (ADS)
Franci, Luca; Hellinger, Petr; Landi, Simone; Matteini, Lorenzo; Verdini, Andrea
2015-04-01
We investigate the properties of turbulence from MHD scales to ion scales by means of two-dimensional, large-scale, high-resolution hybrid particle-in-cell simulations, which, to our knowledge, constitute the most accurate hybrid simulations of ion-scale turbulence presented to date. We impose an initial ambient magnetic field perpendicular to the simulation box, and we add a spectrum of large-scale, linearly polarized Alfvén waves, balanced and Alfvénically equipartitioned on average. When turbulence is fully developed, we observe an inertial range characterized by the power spectrum of perpendicular magnetic field fluctuations following a Kolmogorov law with spectral index close to -5/3, while the proton bulk velocity fluctuations exhibit a shallower slope, with index close to -3/2. Both these trends hold over a full decade. A definite transition is observed at a scale of the order of the proton inertial length, beyond which both spectra steepen, with the perpendicular magnetic field still exhibiting a power law with spectral index of about -3 over another full decade. The spectrum of perpendicular electric fluctuations follows that of the proton bulk velocity at MHD scales and reaches a sort of plateau at small scales. The turbulent nature of our data is also supported by the presence of intermittency, revealed by the non-Gaussianity of the probability distribution functions of the MHD primitive variables, which increases as kinetic scales are approached. All these features are in good agreement with solar wind observations.
NASA Technical Reports Server (NTRS)
Contopoulos, Ioannis; Kazanas, Demosthenes; Christodoulou, Dimitris M.
2007-01-01
We reinvestigate the generation and accumulation of magnetic flux in optically thin accretion flows around active gravitating objects. The source of the magnetic field is the azimuthal electric current associated with the Poynting-Robertson drag on the electrons of the accreting plasma. This current generates magnetic field loops which open up because of the differential rotation of the flow. We show through simple numerical simulations that what regulates the generation and accumulation of magnetic flux near the center is the value of the plasma conductivity. Although the conductivity is usually considered to be effectively infinite for the fully ionized plasmas expected near the inner edge of accretion disks, the turbulence of those plasmas may actually render them much less conducting due to the presence of anomalous resistivity. We have discovered that if the resistivity is sufficiently high throughout the turbulent disk while it is suppressed interior to its inner edge, an interesting steady-state process is established: accretion carries and accumulates magnetic flux of one polarity inside the inner edge of the disk, whereas magnetic diffusion releases magnetic flux of the opposite polarity to large distances. In this scenario, magnetic flux of one polarity grows and accumulates at a steady rate in the region inside the inner edge, up to the point of equipartition when it becomes dynamically important. We argue that this inward growth and outward expulsion of oppositely directed magnetic fields that we propose may account for the approx. 30 min cyclic variability observed in the galactic microquasar GRS 1915+105.
Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.
NASA Astrophysics Data System (ADS)
Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.
2016-12-01
Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, convergence towards the surface wave Green's functions is obtained under the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources gives more possibilities when it comes to interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), which is known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher modes from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
The radio sources CTA 21 and OF+247: The hot spots of radio galaxies
NASA Astrophysics Data System (ADS)
Artyukh, V. S.; Tyul'bashev, S. A.; Chernikov, P. A.
2013-06-01
The physical conditions in the radio sources CTA 21 and OF+247 are studied assuming that the low-frequency spectral turnovers are due to synchrotron self-absorption. The physical parameters of the radio sources are estimated using a technique based on a nonuniform synchrotron source model. It is shown that the magnetic-field distributions in the dominant compact components of these radio sources are strongly inhomogeneous. The magnetic fields at the centers of the sources are B ~ 10⁻¹ G, and the fields are two to three orders of magnitude weaker at the periphery. The magnetic field averaged over the compact component is B ~ 10⁻³ G, and the density of relativistic electrons is n_e ~ 10⁻³ cm⁻³. Assuming equipartition of the energies of the magnetic field and relativistic particles, averaged over the source, ⟨E_H⟩ = ⟨E_e⟩ ~ 10⁻⁷-10⁻⁶ erg cm⁻³. The energy density of the magnetic field exceeds that of the relativistic electrons at the centers of the radio sources. The derived parameters of CTA 21 and OF+247 are close to those of the hot spots in the radio galaxy Cygnus A. On this basis, it is suggested that CTA 21 and OF+247 are radio galaxies at an early stage of their evolution, when the hot spots (dominant compact radio components) have appeared and the radio lobes (weak extended components) are still being formed.
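The conversion behind such estimates is the cgs relation u_B = B²/(8π) between field strength and magnetic energy density; at equipartition the relativistic-particle energy density is set equal to u_B. A minimal sketch of the round trip (the input value is illustrative only):

```python
import math

def magnetic_energy_density(B_gauss):
    """u_B = B**2 / (8*pi), in erg cm^-3 for B in gauss (cgs units)."""
    return B_gauss ** 2 / (8.0 * math.pi)

def equipartition_field(u_erg_cm3):
    """Invert u = B**2/(8*pi): the field whose energy density equals u."""
    return math.sqrt(8.0 * math.pi * u_erg_cm3)

u = magnetic_energy_density(1e-3)  # B ~ 10^-3 G gives u_B ≈ 4e-8 erg cm^-3
B = equipartition_field(u)         # round trip recovers 10^-3 G
```

The total equipartition energy density is then ⟨E_H⟩ + ⟨E_e⟩ = 2u_B.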
NASA Astrophysics Data System (ADS)
Hertzog, A.; Vial, F.
2001-10-01
This study is the companion paper of Vial et al. [this issue]. A campaign of ultra-long-duration, superpressure balloons in the equatorial lower stratosphere was held in September 1998. By design, these balloons drift on isopycnic surfaces. Pressure and position were measured every 12 min, which enabled us to infer the characteristics of gravity waves with periods between 1 hour and 1 day in this region of the atmosphere. The intrinsic-frequency spectra of horizontal wind fluctuations exhibit a -2 slope, while the spectrum associated with vertical-wind fluctuations is flat. Significant inhomogeneity of the wave activity is observed, and the variance of the highest-frequency waves is found to be linked to the position of the balloons with respect to the Intertropical Convergence Zone. On average, the total energy associated with gravity waves in the period range studied in this paper is found to be ~7 J kg^-1. Calculations of momentum flux have also been undertaken. It appears that there is an approximate equipartition of flux between eastward- and westward-propagating gravity waves and that the absolute value of the flux is 8-12 × 10^-3 m^2 s^-2 at 20 km. A larger flux is also observed above convective regions. These values suggest that gravity waves may carry the largest part of the Eliassen-Palm flux required for the driving of the quasi-biennial oscillation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagai, H.; Kino, M.; Fujita, Y.
2017-11-01
We present Very Long Baseline Array polarimetric observations of the innermost jet of 3C 84 (NGC 1275) at 43 GHz. A significant polarized emission is detected at the hotspot of the innermost restarted jet, which is located 1 pc south of the radio core. While the previous report presented a hotspot at the southern end of the western limb, the hotspot location has moved to the southern end of the eastern limb. Faraday rotation is detected within the entire bandwidth of the 43 GHz band. The measured rotation measure (RM) is at most (6.3 ± 1.9) × 10^5 rad m^-2 and might be slightly time variable on the timescale of a month by a factor of a few. Our measured RM and the RM previously reported by the CARMA and SMA observations cannot be consistently explained by a spherical accretion flow with a power-law profile. We propose that a clumpy/inhomogeneous ambient medium is responsible for the observed RM. Using an equipartition magnetic field, we derive an electron density of 2 × 10^4 cm^-3. Such an electron density is consistent with the cloud of the narrow-line emission region around the central engine. We also discuss the magnetic field configuration from the black hole scale to the parsec scale and the origin of the low polarization.
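The link between RM and electron density can be illustrated with the standard uniform-Faraday-screen formula, RM = 0.812 n_e B_∥ L. This is a simplification, not the authors' clumpy-medium model, and the screen field strength and depth used below are illustrative assumptions:

```python
def rotation_measure(n_e, B_par_uG, L_pc):
    """Faraday rotation measure of a uniform screen, RM = 0.812 n_e B_par L,
    with n_e in cm^-3, B_par in microgauss, L in pc; RM in rad m^-2."""
    return 0.812 * n_e * B_par_uG * L_pc

def electron_density(RM, B_par_uG, L_pc):
    """Invert the uniform-screen formula for n_e (cm^-3)."""
    return RM / (0.812 * B_par_uG * L_pc)

# Quoted RM of 6.3e5 rad m^-2 with an assumed 40 uG field over 1 pc
# gives n_e of order 1e4 cm^-3, comparable to the derived 2e4 cm^-3:
print(f"{electron_density(6.3e5, 40.0, 1.0):.1e}")
```

In the paper the field is instead fixed by the equipartition argument, so the uniform-screen version above should be read only as a consistency check on the orders of magnitude.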
Unusual energy properties of leaky backward Lamb waves in a submerged plate.
Nedospasov, I A; Mozhaev, V G; Kuznetsova, I E
2017-05-01
It is found that leaky backward Lamb waves, i.e. waves with negative energy-flux velocity, propagating in a plate submerged in a liquid possess extraordinary energy properties distinguishing them from any other type of wave in isotropic media. Namely, the total time-averaged energy flux along the waveguide axis is equal to zero for these waves, owing to the opposite directions of the longitudinal energy fluxes in the adjacent media. This property raises the fundamental question of how to correctly define and calculate the energy velocity in such an unusual case. A procedure of calculation based on incomplete integration of the energy flux density, over the plate thickness alone, is applied. The derivative of the angular frequency with respect to the wave vector, usually referred to as the group velocity, turns out to be close to the energy velocity defined in this way in the part of the frequency range where the backward mode exists in the free plate. The existence region of the backward mode is formally enlarged for the submerged plate in comparison to the free plate, as a result of the liquid-induced hybridization of propagating and nonpropagating (evanescent) Lamb modes. It is shown that Rayleigh's principle (i.e. equipartition of the total time-averaged kinetic and potential energies for time-harmonic acoustic fields) is violated due to the leakage of Lamb waves, despite the media being nondissipative.
Photodissociation pathways and lifetimes of protonated peptides and their dimers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aravind, G.; Klaerke, B.; Rajput, J.
2012-01-07
Photodissociation lifetimes and fragment channels of gas-phase, protonated YA_n (n = 1, 2) peptides and their dimers were measured with 266 nm photons. The protonated monomers were found to have a fast dissociation channel with an exponential lifetime of ~200 ns, while the protonated dimers show an additional slow dissociation component with a lifetime of ~2 μs. Laser power dependence measurements enabled us to ascribe the fast channel in the monomer and the slow channel in the dimer to a one-photon process, whereas the fast dimer channel is from a two-photon process. The slow (one-photon) dissociation channel in the dimer was found to result in cleavage of the H-bonds after energy transfer through these H-bonds. In general, the dissociation of these protonated peptides is non-prompt, and the decay time was found to increase with the size of the peptides. Quantum RRKM calculations of the microcanonical rate constants also confirmed the statistical nature of the photodissociation processes in the dipeptide monomers and dimers. The classical RRKM expression gives a rate constant as an analytical function of the number of active vibrational modes in the system, estimated separately on the basis of the equipartition theorem. It demonstrates encouraging results in predicting fragmentation lifetimes of protonated peptides. Finally, we present the first experimental evidence for a photo-induced conversion of tyrosine-containing peptides into a monocyclic aromatic hydrocarbon along with a formamide molecule, both found in space.
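The classical RRKM expression mentioned above has the closed form k(E) = ν (1 − E0/E)^(s−1). A minimal sketch of how the lifetime grows with the number of active modes s; the frequency factor, barrier, and energies below are illustrative, not the paper's fitted values:

```python
def rrkm_classical_rate(E, E0, nu, s):
    """Classical RRKM microcanonical rate constant
    k(E) = nu * (1 - E0/E)**(s - 1), where E is the internal energy,
    E0 the dissociation threshold, nu a frequency factor (s^-1), and
    s the number of active vibrational modes."""
    if E <= E0:
        return 0.0
    return nu * (1.0 - E0 / E) ** (s - 1)

# More active modes dilute the internal energy and slow the decay,
# consistent with the decay time increasing with peptide size:
for s in (10, 20, 30):
    k = rrkm_classical_rate(E=4.0, E0=2.0, nu=1e13, s=s)
    print(f"s = {s}: lifetime ~ {1.0 / k:.1e} s")
```

The cited calculations are quantum RRKM with proper state counting; the classical form above only captures the qualitative mode-number dependence.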
DOE Office of Scientific and Technical Information (OSTI.GOV)
Offner, Stella S. R.; Arce, Héctor G., E-mail: stella.offner@yale.edu
2014-03-20
We investigate protostellar outflow evolution, gas entrainment, and star formation efficiency using radiation-hydrodynamic simulations of isolated, turbulent low-mass cores. We adopt an X-wind launching model, in which the outflow rate is coupled to the instantaneous protostellar accretion rate and evolution. We vary the outflow collimation angle from θ = 0.01-0.1 and find that even well-collimated outflows effectively sweep up and entrain significant core mass. The Stage 0 lifetime ranges from 0.14-0.19 Myr, which is similar to the observed Class 0 lifetime. The star formation efficiency of the cores spans 0.41-0.51. In all cases, the outflows drive strong turbulence in the surrounding material. Although the initial core turbulence is purely solenoidal by construction, the simulations converge to approximate equipartition between solenoidal and compressive motions due to a combination of outflow driving and collapse. When compared to a simulation of a cluster of protostars, which is not gravitationally centrally condensed, we find that the outflows drive motions that are mainly solenoidal. The final turbulent velocity dispersion is about twice the initial value of the cores, indicating that an individual outflow is easily able to replenish turbulent motions on sub-parsec scales. We post-process the simulations to produce synthetic molecular line emission maps of ^12CO, ^13CO, and C^18O and evaluate how well these tracers reproduce the underlying mass and velocity structure.
The X-ray emission mechanism of large scale powerful quasar jets: Fermi rules out IC/CMB for 3C 273.
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, Eileen T.
2013-12-01
The process responsible for the Chandra-detected X-ray emission from the large-scale jets of powerful quasars is not yet clear. The two main models are inverse Compton scattering off the cosmic microwave background photons (IC/CMB) and synchrotron emission from a population of electrons separate from those producing the radio-IR emission. These two models imply radically different conditions in the large-scale jet in terms of jet speed, kinetic power, and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the larger-scale environment. Georganopoulos et al. (2006) proposed a diagnostic based on a fundamental difference between these two models: the production of synchrotron X-rays requires multi-TeV electrons, while the IC/CMB model requires a cutoff in the electron energy distribution below TeV energies. This has significant implications for the γ-ray emission predicted by these two models. Here we present new Fermi observations that put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that clearly violates the flux expected from the IC/CMB X-ray interpretation, found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source. Further, the Fermi upper limit constrains the Doppler beaming factor to δ < 9, assuming equipartition fields, and possibly as low as δ < 5, assuming no major deceleration of the jet from knots A through D1.
NASA Astrophysics Data System (ADS)
Lee, Shiu-Hang; Maeda, Keiichi; Kawanaka, Norita
2018-05-01
Neutron star mergers (NSMs) eject energetic subrelativistic dynamical ejecta into circumbinary media. Analogous to supernovae and supernova remnants, the NSM dynamical ejecta are expected to produce nonthermal emission by electrons accelerated at a shock wave. In this paper, we present the expected radio and X-ray signals from this mechanism, taking into account nonlinear diffusive shock acceleration (DSA) and magnetic field amplification. We suggest that the NSM is unique as a DSA site, where the seed relativistic electrons are abundantly provided by the decays of r-process elements. The signal is predicted to peak a few hundred to a thousand days after the merger, determined by the balance between the decrease of the number of seed electrons and the increase of the dissipated kinetic energy due to the shock expansion. While the resulting flux can ideally reach the maximum expected from near-equipartition, the available kinetic energy dissipation rate of the NSM ejecta limits the detectability of such a signal. It is likely that the radio and X-ray emission is overwhelmed by other mechanisms (e.g., an off-axis jet) for an observer placed in the jet direction (i.e., for GW170817). However, for an off-axis observer, to be discovered once a number of NSMs are identified, the dynamical ejecta component is predicted to dominate the nonthermal emission. While the detection of this signal is challenging even with near-future facilities, it potentially provides a robust probe of the creation of r-process elements in NSMs.
THE CONTRIBUTION OF FERMI -2LAC BLAZARS TO DIFFUSE TEV–PEV NEUTRINO FLUX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Abraham, K.; Ackermann, M.
2017-01-20
The recent discovery of a diffuse cosmic neutrino flux extending up to PeV energies raises the question of which astrophysical sources generate this signal. Blazars are one class of extragalactic sources which may produce such high-energy neutrinos. We present a likelihood analysis searching for cumulative neutrino emission from blazars in the 2nd Fermi-LAT AGN catalog (2LAC) using the 2009-12 IceCube neutrino data set, which was optimized for the detection of individual sources. In contrast to those in previous searches with IceCube, the populations investigated contain up to hundreds of sources, the largest one being the entire blazar sample in the 2LAC catalog. No significant excess is observed, and upper limits for the cumulative flux from these populations are obtained. These constrain the maximum contribution of 2LAC blazars to the observed astrophysical neutrino flux to 27% or less between around 10 TeV and 2 PeV, assuming equipartition of flavors on Earth and a single power-law spectrum with a spectral index of -2.5. We can also exclude that 2LAC blazars (and their subpopulations) emit more than 50% of the observed neutrinos, up to a spectral index as hard as -2.2 in the same energy range. Our result takes into account the fact that the neutrino source count distribution is unknown, and it does not assume strict proportionality of the neutrino flux to the measured 2LAC γ-ray signal for each source. Additionally, we constrain recent models for neutrino emission by blazars.
NASA Astrophysics Data System (ADS)
Kobzar, Oleh; Niemiec, Jacek; Pohl, Martin; Bohdan, Artem
2017-08-01
A non-resonant cosmic ray (CR) current-driven instability may operate in the shock precursors of young supernova remnants and be responsible for magnetic-field amplification, plasma heating and turbulence. Earlier simulations demonstrated magnetic-field amplification, and in kinetic studies a reduction of the relative drift between CRs and thermal plasma was observed as backreaction. However, all published simulations used periodic boundary conditions, which do not account for mass conservation in decelerating flows and only allow the temporal development to be studied. Here we report results of fully kinetic particle-in-cell simulations with open boundaries that permit inflow of plasma on one side of the simulation box and outflow at the other end, hence allowing an investigation of both the temporal and the spatial development of the instability. Magnetic-field amplification proceeds as in studies with periodic boundaries and, observed here for the first time, the reduction of relative drifts causes the formation of a shock-like compression structure at which a fraction of the plasma ions are reflected. Turbulent electric field generated by the non-resonant instability inelastically scatters CRs, modifying and anisotropizing their energy distribution. Spatial CR scattering is compatible with Bohm diffusion. Electromagnetic turbulence leads to significant non-adiabatic heating of the background plasma maintaining bulk equipartition between ions and electrons. The highest temperatures are reached at sites of large-amplitude electrostatic fields. Ion spectra show supra-thermal tails resulting from stochastic scattering in the turbulent electric field. Together, these modifications in the plasma flow will affect the properties of the shock and particle acceleration there.
The Efficiency of Magnetic Field Amplification at Shocks by Turbulence
NASA Technical Reports Server (NTRS)
Ji, Suoqing; Oh, S. Peng; Ruszkowski, M.; Markevitch, M.
2016-01-01
Turbulent dynamo field amplification has often been invoked to explain the strong field strengths in thin rims in supernova shocks (~100 μG) and in radio relics in galaxy clusters (~μG). We present high-resolution magnetohydrodynamic simulations of the interaction between pre-shock turbulence, clumping and shocks, to quantify the conditions under which turbulent dynamo amplification can be significant. We demonstrate numerically converged field amplification which scales with Alfven Mach number, B/B0 ∝ M_A, up to M_A ~ 150. This implies that the post-shock field strength is relatively independent of the seed field. Amplification is dominated by compression at low M_A, and by stretching (turbulent amplification) at high M_A. For high M_A, the B-field grows exponentially and saturates at equipartition with turbulence, while the vorticity jumps sharply at the shock and subsequently decays; the resulting field is orientated predominantly along the shock normal (an effect only apparent in 3D and not in 2D). This agrees with the radial field bias seen in supernova remnants. By contrast, for low M_A, field amplification is mostly compressional, relatively modest, and results in a predominantly perpendicular field. The latter is consistent with the polarization seen in radio relics. Our results are relatively robust to the assumed level of gas clumping. They imply that the turbulent dynamo may be important for supernovae, but is only consistent with the field strength, and not the geometry, for cluster radio relics. For the latter, this implies strong pre-existing B-fields in the ambient cluster outskirts.
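The seed-field independence follows directly from the scaling B/B0 ∝ M_A, since M_A itself scales as 1/B0. A back-of-the-envelope sketch, in which the density, shock velocity, and the order-unity prefactor are all assumed for illustration:

```python
import math

def alfven_mach(v_shock, B0, rho):
    """Alfven Mach number M_A = v_shock / v_A, with v_A = B0 / sqrt(4 pi rho)
    (Gaussian units: cm/s, G, g cm^-3)."""
    return v_shock * math.sqrt(4.0 * math.pi * rho) / B0

def postshock_field(B0, v_shock, rho, M_A_sat=150.0):
    """Sketch of B = B0 * min(M_A, 150) with an assumed prefactor of one:
    below saturation the product B0 * M_A is independent of the seed field."""
    return B0 * min(alfven_mach(v_shock, B0, rho), M_A_sat)

rho = 1.67e-24   # ~1 proton per cm^3 (assumed)
v = 3.0e8        # 3000 km/s shock, in cm/s (assumed)
# Seed fields differing by a factor of 10 yield the same post-shock field:
print(postshock_field(1e-4, v, rho), postshock_field(1e-5, v, rho))
```

For the chosen parameters both seed fields give M_A below the M_A ~ 150 saturation, so the two post-shock values coincide; once M_A saturates, the memory of the seed field returns.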
NASA Astrophysics Data System (ADS)
Akahori, Takuya; Kato, Yuichi; Nakazawa, Kazuhiro; Ozawa, Takeaki; Gu, Liyi; Takizawa, Motokazu; Fujita, Yutaka; Nakanishi, Hiroyuki; Okabe, Nobuhiro; Makishima, Kazuo
2018-06-01
We report the Australia Telescope Compact Array 16 cm observation of CIZA J1358.9-4750. Recent X-ray studies imply that this galaxy cluster is composed of merging, binary clusters. Using the EW367 configuration, we found no significant diffuse radio emission in and around the cluster. An upper limit of the total radio power at 1.4 GHz is ~1.1 × 10^22 W Hz^-1 in 30 square arcminutes, which is a typical size for radio relics. It is known that an empirical relation holds between the total radio power and the X-ray luminosity of the host cluster. The upper limit is about one order of magnitude lower than the power expected from this relation. Very young (~70 Myr) shocks with low Mach numbers (~1.3), which are often seen at an early stage of merger simulations, are suggested by the previous X-ray observation. The shocks may generate cosmic-ray electrons with a steep energy spectrum, which is consistent with the non-detection of a bright (>10^23 W Hz^-1) relic in this 16 cm band observation. Based on the assumption of energy equipartition, the upper limit gives a magnetic field strength below 0.68 f (D_los/1 Mpc)^-1 (γ_min/200)^-1 μG, where f is the ratio of the cosmic-ray total energy density to the cosmic-ray electron energy density, D_los is the depth of the shock wave along the sightline, and γ_min is the lower cutoff Lorentz factor of the cosmic-ray electron energy spectrum.
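The quoted field limit is a scaling relation in three parameters. A minimal sketch that implements exactly the scaling quoted in the abstract (the function name is ours, and the exponents are taken at face value from the quoted expression):

```python
def b_field_upper_limit(f=1.0, d_los_mpc=1.0, gamma_min=200.0):
    """Upper limit on the magnetic field (microgauss) with the scaling
    quoted above: B < 0.68 f (D_los / 1 Mpc)^-1 (gamma_min / 200)^-1 uG,
    where f is the total-CR to CR-electron energy-density ratio."""
    return 0.68 * f / d_los_mpc / (gamma_min / 200.0)

# Fiducial parameters reproduce the quoted 0.68 uG; a shock twice as deep
# along the sightline halves the limit:
print(b_field_upper_limit(), b_field_upper_limit(d_los_mpc=2.0))
```

The point of the parametrization is that the limit tightens for deeper shocks and larger γ_min, while a proton-dominated cosmic-ray population (f > 1) loosens it.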
NASA Astrophysics Data System (ADS)
Pushkarev, A. B.; Kovalev, Y. Y.
2015-10-01
We have measured the angular sizes of radio cores of active galactic nuclei (AGNs) and analysed their sky distributions and frequency dependences to study synchrotron opacity in AGN jets and the strength of angular broadening in the interstellar medium. We have used archival very long baseline interferometry (VLBI) data of more than 3000 compact extragalactic radio sources observed at frequencies, ν, from 2 to 43 GHz to measure the observed angular size of VLBI cores. We have found a significant increase in the angular sizes of the extragalactic sources seen through the Galactic plane (|b| ≲ 10°) at 2, 5 and 8 GHz, about one-third of which show significant scattering. These sources are mainly detected in directions to the Galactic bar, the Cygnus region and a region with galactic longitudes 220° ≲ l ≲ 260° (the Fitzgerald window). The strength of interstellar scattering of the AGNs is found to correlate with the Galactic Hα intensity, free-electron density and Galactic rotation measure. The dependence of scattering strengths on source redshift is insignificant, suggesting that the dominant scattering screens are located in our Galaxy. The observed angular size of Sgr A* is found to be the largest among thousands of AGNs observed over the sky; we discuss possible reasons for this surprising result. Excluding extragalactic radio sources with significant scattering, we find that the angular size of opaque cores in AGNs typically scales as ν^-1, confirming predictions of a conical synchrotron jet model with equipartition.
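The ν^-1 core-size scaling can be expressed as a one-line model. In this sketch the 1 GHz normalization and the exact index are illustrative parameters, not fitted values from the survey:

```python
def core_angular_size(nu_ghz, theta_1ghz_mas=1.0, k=1.0):
    """Apparent size of the synchrotron-opaque VLBI core,
    theta(nu) = theta_1GHz * nu**-k; k ~ 1 is the prediction of a
    conical jet in equipartition."""
    return theta_1ghz_mas * nu_ghz ** (-k)

# The opaque core shrinks with frequency across the 2-43 GHz VLBI bands:
for nu in (2.0, 8.0, 43.0):
    print(f"{nu:4.0f} GHz -> {core_angular_size(nu):.3f} mas")
```

Interstellar scattering instead broadens images roughly as ν^-2, which is why the scattered sources near the Galactic plane must be excluded before fitting k.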
Galaxy formation with BECDM - I. Turbulence and relaxation of idealized haloes
NASA Astrophysics Data System (ADS)
Mocz, Philip; Vogelsberger, Mark; Robles, Victor H.; Zavala, Jesús; Boylan-Kolchin, Michael; Fialkov, Anastasia; Hernquist, Lars
2017-11-01
We present a theoretical analysis of some unexplored aspects of relaxed Bose-Einstein condensate dark matter (BECDM) haloes. This type of ultralight bosonic scalar field dark matter is a viable alternative to the standard cold dark matter (CDM) paradigm, as it makes the same large-scale predictions as CDM and potentially overcomes CDM's small-scale problems via a galaxy-scale de Broglie wavelength. We simulate BECDM halo formation through mergers, evolved under the Schrödinger-Poisson equations. The formed haloes consist of a soliton core supported against gravitational collapse by the quantum pressure tensor and an asymptotic r^-3 NFW-like profile. We find a fundamental relation of the core-to-halo mass with the dimensionless invariant Ξ ≡ |E|/M^3/(Gm/ℏ)^2, namely M_c/M ≃ 2.6 Ξ^(1/3), linking the soliton to global halo properties. Beyond r ≥ 3.5 r_c core radii, we find equipartition between potential, classical kinetic and quantum gradient energies. The haloes also exhibit a conspicuous turbulent behaviour driven by the continuous reconnection of vortex lines due to wave interference. We analyse the turbulence 1D velocity power spectrum and find a k^-1.1 power law. This suggests that the vorticity in BECDM haloes is homogeneous, similar to thermally-driven counterflow BEC systems from condensed matter physics, in contrast to the k^-5/3 Kolmogorov power law seen in mechanically-driven quantum systems. The mode where the power spectrum peaks is approximately the soliton width, implying that the soliton-sized granules carry most of the turbulent energy in BECDM haloes.
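The core-halo relation above is a simple power law in the invariant Ξ. A minimal sketch (the Ξ values below are illustrative, not taken from the simulations):

```python
def core_to_halo_mass_ratio(xi):
    """Soliton core-to-halo mass ratio M_c/M ~ 2.6 * Xi**(1/3), where
    Xi = |E| / M**3 / (G*m/hbar)**2 is the dimensionless halo invariant
    (E: halo energy, M: halo mass, m: boson mass)."""
    return 2.6 * xi ** (1.0 / 3.0)

# The cube-root dependence makes the ratio only weakly sensitive to Xi:
for xi in (1e-9, 1e-6, 1e-3):
    print(f"Xi = {xi:.0e} -> M_c/M ~ {core_to_halo_mass_ratio(xi):.4f}")
```

Because Ξ combines energy, mass, and boson mass into one dimensionless number, a single measured halo (E, M) fixes the expected soliton mass for a given m.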
NASA Astrophysics Data System (ADS)
Bercik, David John
2002-11-01
Three-dimensional numerical simulations are used to study the dynamic interaction between magnetic fields and convective motions near the solar surface. The magnetic field is found to be transported by convective motions from granules to the intergranular lanes, where it collects and is compressed. A convective instability causes the upper levels of magnetic regions to be evacuated, compressing the field beyond equipartition values, and forming “flux tubes” or “flux sheets”. The degree to which the field is compressed controls how much convective transport is suppressed within the flux structure, and ultimately determines whether the magnetic feature appears brighter or darker than its surroundings. For this reason, the continuum intensity is not a good tracer of the lifetimes of magnetic features, since their bright/dark signature is transient in nature. Larger magnetic structures form at sites where a granule submerges and the surrounding field is pushed into the resulting dark hole. These micropores are devoid of flow in their interior and cool by radiating radially. The convective downflows that collar the micropore heat its edges by lateral radiation, but fail to penetrate far enough into the interior to prevent an overall cooling, and therefore darkening, of the micropore. Magnetic features undergo numerous mergers or splittings during their lifetimes as a result of being pushed and squeezed by the expansion of adjacent granules. Larger structures survive for several convective turnover times, but smaller structures are too weak to resist convective motions, and are destroyed on a convective time scale.
Local Group dSph radio survey with ATCA - II. Non-thermal diffuse emission
NASA Astrophysics Data System (ADS)
Regis, Marco; Richter, Laura; Colafrancesco, Sergio; Profumo, Stefano; de Blok, W. J. G.; Massardi, Marcella
2015-04-01
Our closest neighbours, the Local Group dwarf spheroidal (dSph) galaxies, are extremely quiescent and dim objects, in which thermal and non-thermal diffuse emission has so far escaped detection. In order to study the dSph interstellar medium, deep observations are required. These could reveal non-thermal emission associated with the very low level of star formation, or with particle dark matter annihilating or decaying in the dSph halo. In this work, we employ radio observations of six dSphs, conducted with the Australia Telescope Compact Array in the frequency band 1.1-3.1 GHz, to test for the presence of a diffuse component over typical scales of a few arcmin and at an rms sensitivity below 0.05 mJy beam^-1. We observed the dSph fields with both a compact array and long baselines. Short spacings led to a synthesized beam of about 1 arcmin and were used for the extended emission search. The high-resolution data mapped background sources, which in turn were subtracted in the short-baseline maps to reduce their confusion limit. We found no significant detection of a diffuse radio continuum component. After a detailed discussion of the modelling of the cosmic ray (CR) electron distribution and of the dSph magnetic properties, we present bounds on several physical quantities related to the dSphs, such as the total radio flux, the angular shape of the radio emissivity, the equipartition magnetic field, and the injection and equilibrium distributions of CR electrons. Finally, we discuss the connection to far-infrared and X-ray observations.
The NuSTAR view on Hard-TeV BL Lacs
NASA Astrophysics Data System (ADS)
Costamante, L.; Bonnoli, G.; Tavecchio, F.; Ghisellini, G.; Tagliaferri, G.; Khangulyan, D.
2018-05-01
Hard-TeV BL Lacs are a new type of blazars characterized by a hard intrinsic TeV spectrum, locating the peak of their gamma-ray emission in the spectral energy distribution (SED) above 2-10 TeV. Such high energies are problematic for the Compton emission, using a standard one-zone leptonic model. We study six examples of this new type of BL Lacs in the hard X-ray band with NuSTAR. Together with simultaneous observations with the Neil Gehrels Swift Observatory, we fully constrain the peak of the synchrotron emission in their SED, and test the leptonic synchrotron self-Compton (SSC) model. We confirm the extreme nature of 5 objects also in the synchrotron emission. We do not find evidence of additional emission components in the hard X-ray band. We find that a one-zone SSC model can in principle reproduce the extreme properties of both peaks in the SED, from X-ray up to TeV energies, but at the cost of i) extreme electron energies with very low radiative efficiency, ii) conditions heavily out of equipartition (by 3 to 5 orders of magnitude), and iii) not accounting for the simultaneous UV data, which then should belong to a different emission component, possibly the same as the far-IR (WISE) data. We find evidence of this separation of the UV and X-ray emission in at least two objects. In any case, the TeV electrons must not "see" the UV or lower-energy photons, even if coming from different zones/populations, or the increased radiative cooling would steepen the VHE spectrum.
The Discovery of γ-Ray Emission from the Blazar RGB J0710+591
NASA Astrophysics Data System (ADS)
Acciari, V. A.; Aliu, E.; Arlen, T.; Aune, T.; Bautista, M.; Beilicke, M.; Benbow, W.; Böttcher, M.; Boltuch, D.; Bradbury, S. M.; Buckley, J. H.; Bugaev, V.; Byrum, K.; Cannon, A.; Cesarini, A.; Ciupik, L.; Cui, W.; Dickherber, R.; Duke, C.; Falcone, A.; Finley, J. P.; Finnegan, G.; Fortson, L.; Furniss, A.; Galante, N.; Gall, D.; Gibbs, K.; Gillanders, G. H.; Godambe, S.; Grube, J.; Guenette, R.; Gyuk, G.; Hanna, D.; Holder, J.; Hui, C. M.; Humensky, T. B.; Imran, A.; Kaaret, P.; Karlsson, N.; Kertzman, M.; Kieda, D.; Konopelko, A.; Krawczynski, H.; Krennrich, F.; Lang, M. J.; Lamerato, A.; LeBohec, S.; Maier, G.; McArthur, S.; McCann, A.; McCutcheon, M.; Moriarty, P.; Mukherjee, R.; Ong, R. A.; Otte, A. N.; Pandel, D.; Perkins, J. S.; Petry, D.; Pichel, A.; Pohl, M.; Quinn, J.; Ragan, K.; Reyes, L. C.; Reynolds, P. T.; Roache, E.; Rose, H. J.; Roustazadeh, P.; Schroedter, M.; Sembroski, G. H.; Senturk, G. Demet; Smith, A. W.; Steele, D.; Swordy, S. P.; Tešić, G.; Theiling, M.; Thibadeau, S.; Varlotta, A.; Vassiliev, V. V.; Vincent, S.; Wagner, R. G.; Wakely, S. P.; Ward, J. E.; Weekes, T. C.; Weinstein, A.; Weisgarber, T.; Williams, D. A.; Wissel, S.; Wood, M.; Zitzer, B.; Ackermann, M.; Ajello, M.; Antolini, E.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Borgland, A. W.; Bouvier, A.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Carrigan, S.; Casandjian, J. M.; Cavazzuti, E.; Cecchi, C.; Çelik, Ö.; Charles, E.; Chekhtman, A.; Cheung, C. C.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Conrad, J.; Dermer, C. D.; de Palma, F.; Silva, E. do Couto e.; Drell, P. S.; Dubois, R.; Dumora, D.; Farnier, C.; Favuzzi, C.; Fegan, S. 
J.; Fortin, P.; Frailis, M.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Gehrels, N.; Germani, S.; Giebels, B.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grenier, I. A.; Grove, J. E.; Guiriec, S.; Hays, E.; Horan, D.; Hughes, R. E.; Jóhannesson, G.; Johnson, A. S.; Johnson, W. N.; Kamae, T.; Katagiri, H.; Kataoka, J.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Lee, S.-H.; Llena Garde, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Makeev, A.; Mazziotta, M. N.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nolan, P. L.; Norris, J. P.; Nuss, E.; Ohno, M.; Ohsugi, T.; Omodei, N.; Orlando, E.; Ormes, J. F.; Paneque, D.; Panetta, J. H.; Pelassa, V.; Pepe, M.; Pesce-Rollins, M.; Piron, F.; Porter, T. A.; Rainò, S.; Rando, R.; Razzano, M.; Reimer, A.; Reimer, O.; Ripken, J.; Rodriguez, A. Y.; Roth, M.; Sadrozinski, H. F.-W.; Sanchez, D.; Sander, A.; Scargle, J. D.; Sgrò, C.; Siskind, E. J.; Smith, P. D.; Spandre, G.; Spinelli, P.; Strickman, M. S.; Suson, D. J.; Takahashi, H.; Tanaka, T.; Thayer, J. B.; Thayer, J. G.; Thompson, D. J.; Tibaldo, L.; Torres, D. F.; Tosti, G.; Tramacere, A.; Usher, T. L.; Vasileiou, V.; Vilchez, N.; Vitale, V.; Waite, A. P.; Wang, P.; Winer, B. L.; Wood, K. S.; Yang, Z.; Ylinen, T.; Ziegler, M.
2010-05-01
The high-frequency-peaked BL Lacertae object RGB J0710+591 was observed in the very high-energy (VHE; E > 100 GeV) wave band by the VERITAS array of atmospheric Cherenkov telescopes. The observations, taken between 2008 December and 2009 March and totaling 22.1 hr, yield the discovery of VHE gamma rays from the source. RGB J0710+591 is detected at a statistical significance of 5.5 standard deviations (5.5σ) above the background, corresponding to an integral flux of (3.9 ± 0.8) × 10^-12 cm^-2 s^-1 (3% of the Crab Nebula's flux) above 300 GeV. The observed spectrum can be fit by a power law from 0.31 to 4.6 TeV with a photon spectral index of 2.69 ± 0.26_stat ± 0.20_sys. These data are complemented by contemporaneous multiwavelength data from the Fermi Large Area Telescope, the Swift X-ray Telescope, the Swift Ultra-Violet and Optical Telescope, and the Michigan-Dartmouth-MIT observatory. Modeling the broadband spectral energy distribution (SED) with an equilibrium synchrotron self-Compton model yields a good statistical fit to the data. The addition of an external-Compton component to the model neither improves the fit nor brings the system closer to equipartition. The combined Fermi and VERITAS data constrain the properties of the high-energy emission component of the source over 4 orders of magnitude and give measurements of the rising and falling sections of the SED.
NASA Astrophysics Data System (ADS)
Macek, Wiesław M.; Wawrzaszek, Anna; Kucharuk, Beata
2018-01-01
Turbulence is a complex behavior that is ubiquitous in space, including the environments of the heliosphere and the magnetosphere. Our studies of solar wind turbulence, including the heliosheath and even the heliospheric boundaries, as well as regions beyond the ecliptic plane, have shown that turbulence is intermittent throughout the heliosphere. As is known, turbulence in space plasmas often exhibits substantial deviations from normal Gaussian distributions. Therefore, we analyze the fluctuations of plasma and magnetic field parameters in the magnetosheath behind the Earth's bow shock as well. Based on THEMIS observations, we have already suggested that turbulence behind the quasi-perpendicular shock is more intermittent, with larger kurtosis, than that behind quasi-parallel shocks. Following this study, we present a detailed analysis of intermittent anisotropic turbulence in the magnetosheath, depending on various characteristics of the plasma behind the bow shock and now also near the magnetopause. In particular, for very high Alfvénic Mach numbers and high plasma beta, we find clearly non-Gaussian statistics in the directions perpendicular to the magnetic field. On the other hand, for directions parallel to this field the kurtosis is small and the plasma is close to equilibrium. However, the level of intermittency for the outgoing fluctuations seems to be similar to that for the ingoing fluctuations, which is consistent with approximate equipartition of energy between the oppositely propagating Alfvén waves. We hope that the differences in the characteristic behavior of these fluctuations in various regions of space plasmas can help to detect complex structures in future space missions.
Gravity or turbulence? IV. Collapsing cores in out-of-virial disguise
NASA Astrophysics Data System (ADS)
Ballesteros-Paredes, Javier; Vázquez-Semadeni, Enrique; Palau, Aina; Klessen, Ralf S.
2018-06-01
We study the dynamical state of massive cores by using a simple analytical model, an observational sample, and numerical simulations of collapsing massive cores. From the analytical model, we find that cores increase their column density and velocity dispersion as they collapse, resulting in a time evolution path in the Larson velocity dispersion-size diagram from large sizes and small velocity dispersions to small sizes and large velocity dispersions, while they tend toward equipartition between gravitational and kinetic energy. From the observational sample, we find that: (a) cores with substantially different column densities do not follow a Larson-like linewidth-size relation. Instead, cores with higher column densities tend to be located in the upper-left corner of the Larson velocity dispersion σ_{v,3D}-size R diagram, a result explained in the hierarchical and chaotic collapse scenario. (b) Cores appear to have overvirial energy budgets. Finally, our numerical simulations reproduce the behavior predicted by the analytical model and depicted in the observational sample: collapsing cores evolve towards larger velocity dispersions and smaller sizes as they collapse and increase their column density. More importantly, however, they also exhibit overvirial states. This apparent excess of kinetic energy is due to the assumption that the gravitational energy is given by that of an isolated homogeneous sphere. The excess disappears when the gravitational energy is correctly calculated from the actual spatial mass distribution. We conclude that the observed energy budget of cores is consistent with their non-thermal motions being driven by their self-gravity and in the process of dynamical collapse.
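The "homogeneous-sphere" assumption the authors caution about can be made concrete: for a uniform sphere E_grav = -3GM²/(5R), and with 3D velocity dispersion σ the virial ratio is α = 2E_kin/|E_grav|. A minimal sketch with illustrative numbers (not values from the observational sample):

```python
# Hedged sketch: virial ratio under the uniform-sphere approximation.
#   |E_grav| = 3*G*M**2 / (5*R)
#   E_kin    = 0.5*M*sigma_3d**2
#   alpha    = 2*E_kin / |E_grav| = 5*sigma_3d**2*R / (3*G*M)
# Inputs below are illustrative placeholders, not data from the paper.

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
PC = 3.086e16          # m

def virial_ratio(mass_msun, radius_pc, sigma3d_kms):
    m = mass_msun * M_SUN
    r = radius_pc * PC
    s = sigma3d_kms * 1e3
    e_kin = 0.5 * m * s**2
    e_grav_abs = 3.0 * G * m**2 / (5.0 * r)   # uniform-sphere |E_grav|
    return 2.0 * e_kin / e_grav_abs

# A core in energy equipartition (E_kin ≈ |E_grav|) has alpha ≈ 2, which
# naively reads as "overvirial" even though the core is simply collapsing.
print(virial_ratio(100.0, 0.1, 1.0))
```

The point of the abstract is that replacing the uniform-sphere |E_grav| with the value computed from the actual (centrally concentrated) mass distribution raises |E_grav| and removes the apparent overvirial excess.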
Duty-cycle and energetics of remnant radio-loud AGN
NASA Astrophysics Data System (ADS)
Turner, Ross J.
2018-05-01
Deriving the energetics of remnant and restarted active galactic nuclei (AGNs) is much more challenging than for active sources due to the complexity of accurately determining the time since the nucleus switched off. I resolve this problem using a new approach that combines spectral ageing and dynamical models to tightly constrain the energetics and duty-cycles of dying sources. Fitting the shape of the integrated radio spectrum yields the fraction of the source age for which the nucleus is active; this, in addition to the flux density, source size, axis ratio, and properties of the host environment, provides a constraint on dynamical models describing the remnant radio source. This technique is used to derive the intrinsic properties of the well-studied remnant radio source B2 0924+30. This object is found to spend 50_{-12}^{+14} Myr in the active phase and a further 28_{-5}^{+6} Myr in the quiescent phase, to have a jet kinetic power of 3.6_{-1.7}^{+3.0} × 10^{37} W, and a lobe magnetic field strength below equipartition at the 8σ level. The integrated spectra of restarted and intermittent radio sources are found to yield a `steep-shallow' shape when the previous outburst occurred within 100 Myr. The duty-cycle of B2 0924+30 is hence constrained to be δ < 0.15 by finding the shortest time to a previous comparable outburst that does not appreciably modify the remnant spectrum. The time-averaged feedback energy imparted by AGNs into their host galaxy environments can in this manner be quantified.
Electromagnetic versus Lense-Thirring alignment of black hole accretion discs
NASA Astrophysics Data System (ADS)
Polko, Peter; McKinney, Jonathan C.
2017-01-01
Accretion discs and black holes (BHs) have angular momenta that are generally misaligned, which can lead to warped discs and bends in any jets produced. We examine whether a disc that is misaligned at large radii can be aligned more efficiently by the torque of a Blandford-Znajek (BZ) jet than by Lense-Thirring (LT) precession. To obtain a strong result, we assume that these torques maximally align the disc, rather than cause precession or disc tearing. We consider several disc states, including radiatively inefficient thick discs, radiatively efficient thin discs, and super-Eddington accretion discs. The magnetic field strength of the BZ jet is chosen either from standard equipartition arguments or from magnetically arrested disc (MAD) simulations. We show that standard thin accretion discs can reach spin-disc alignment out to large radii long before LT would play a role, because the slow infall gives even a weak BZ jet time to align the disc. We show that geometrically thick radiatively inefficient discs and super-Eddington discs in the MAD state reach spin-disc alignment near the BH when density profiles are shallow as in magnetohydrodynamical simulations, while the BZ jet aligns discs with steep density profiles (as in advection-dominated accretion flows) out to larger radii. Our results imply that the BZ jet torque should affect the cosmological evolution of BH spin magnitude and direction, spin measurements in active galactic nuclei and X-ray binaries, and the interpretations of Event Horizon Telescope observations of discs or jets in strong-field gravity regimes.
Misaligned Accretion and Jet Production
NASA Astrophysics Data System (ADS)
King, Andrew; Nixon, Chris
2018-04-01
Disk accretion onto a black hole is often misaligned from its spin axis. If the disk maintains a significant magnetic field normal to its local plane, we show that dipole radiation from Lense–Thirring precessing disk annuli can extract a significant fraction of the accretion energy, sharply peaked toward small disk radii R (as R^{-17/2} for fields with constant equipartition ratio). This low-frequency emission is immediately absorbed by surrounding matter or refracted toward the regions of lowest density. The resultant mechanical pressure, dipole angular pattern, and much lower matter density toward the rotational poles create a strong tendency to drive jets along the black hole spin axis, similar to the spin-axis jets of radio pulsars, also strong dipole emitters. The coherent primary emission may explain the high brightness temperatures seen in jets. The intrinsic disk emission is modulated at Lense–Thirring frequencies near the inner edge, providing a physical mechanism for low-frequency quasi-periodic oscillations (QPOs). Dipole emission requires nonzero hole spin, but uses only disk accretion energy. No spin energy is extracted, unlike the Blandford–Znajek process. Magnetohydrodynamic/general-relativistic magnetohydrodynamic (MHD/GRMHD) formulations do not directly give radiation fields, but can be checked post-process for dipole emission and therefore self-consistency, given sufficient resolution. Jets driven by dipole radiation should be more common in active galactic nuclei (AGN) than in X-ray binaries, and in low accretion-rate states than high, agreeing with observation. In non-black hole accretion, misaligned disk annuli precess because of the accretor's mass quadrupole moment, similarly producing jets and QPOs.
Galaxy formation with BECDM - I. Turbulence and relaxation of idealized haloes.
Mocz, Philip; Vogelsberger, Mark; Robles, Victor H; Zavala, Jesús; Boylan-Kolchin, Michael; Fialkov, Anastasia; Hernquist, Lars
2017-11-01
We present a theoretical analysis of some unexplored aspects of relaxed Bose-Einstein condensate dark matter (BECDM) haloes. This type of ultralight bosonic scalar field dark matter is a viable alternative to the standard cold dark matter (CDM) paradigm, as it makes the same large-scale predictions as CDM and potentially overcomes CDM's small-scale problems via a galaxy-scale de Broglie wavelength. We simulate BECDM halo formation through mergers, evolved under the Schrödinger-Poisson equations. The formed haloes consist of a soliton core supported against gravitational collapse by the quantum pressure tensor and an asymptotic r^{-3} NFW-like profile. We find a fundamental relation of the core-to-halo mass with the dimensionless invariant Ξ ≡ |E|/M^3/(Gm/ħ)^2, namely M_c/M ≃ 2.6 Ξ^{1/3}, linking the soliton to global halo properties. For radii r ≥ 3.5 r_c (core radii), we find equipartition between potential, classical kinetic and quantum gradient energies. The haloes also exhibit a conspicuous turbulent behaviour driven by the continuous reconnection of vortex lines due to wave interference. We analyse the turbulence 1D velocity power spectrum and find a k^{-1.1} power law. This suggests that the vorticity in BECDM haloes is homogeneous, similar to thermally-driven counterflow BEC systems from condensed matter physics, in contrast to the k^{-5/3} Kolmogorov power law seen in mechanically-driven quantum systems. The mode where the power spectrum peaks is approximately the soliton width, implying that the soliton-sized granules carry most of the turbulent energy in BECDM haloes.
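The quoted core-halo relation can be evaluated directly from its two pieces, Ξ = |E|/M³/(Gm/ħ)² and M_c ≃ 2.6 Ξ^{1/3} M. A minimal sketch in SI units; the numerical inputs below are illustrative placeholders, not simulation values:

```python
# Hedged sketch of the core-halo relation quoted above:
#   Xi = |E| / M**3 / (G*m/hbar)**2,    M_c/M ≈ 2.6 * Xi**(1/3).
# Inputs (halo energy, halo mass, boson mass) are arbitrary placeholders.

G = 6.674e-11          # m^3 kg^-1 s^-2
HBAR = 1.055e-34       # J s

def core_mass(e_abs, m_halo, m_boson):
    """Soliton core mass from halo energy |E|, halo mass and boson mass (SI)."""
    xi = e_abs / m_halo**3 / (G * m_boson / HBAR) ** 2
    return 2.6 * xi ** (1.0 / 3.0) * m_halo

# The relation implies M_c scales as |E|**(1/3) at fixed halo and boson mass:
m1 = core_mass(1.0e52, 1.0e41, 1.0e-58)
m8 = core_mass(8.0e52, 1.0e41, 1.0e-58)
print(m8 / m1)   # 8**(1/3) = 2.0
```

The cube-root scaling is the testable content: octupling the halo's total energy magnitude at fixed M and m doubles the predicted soliton mass.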
The NuSTAR view on hard-TeV BL Lacs
NASA Astrophysics Data System (ADS)
Costamante, L.; Bonnoli, G.; Tavecchio, F.; Ghisellini, G.; Tagliaferri, G.; Khangulyan, D.
2018-07-01
Hard-TeV BL Lacs are a new type of blazar characterized by a hard intrinsic TeV spectrum, which locates the peak of their gamma-ray emission in the spectral energy distribution (SED) above 2-10 TeV. Such high energies are problematic for Compton emission in a standard one-zone leptonic model. We study six examples of this new type of BL Lac in the hard X-ray band with NuSTAR. Together with simultaneous observations with the Neil Gehrels Swift Observatory, we fully constrain the peak of the synchrotron emission in their SED and test the leptonic synchrotron self-Compton (SSC) model. We confirm the extreme nature of five objects also in the synchrotron emission. We do not find evidence of additional emission components in the hard X-ray band. We find that a one-zone SSC model can in principle reproduce the extreme properties of both peaks in the SED, from X-ray up to TeV energies, but at the cost of (i) extreme electron energies with very low radiative efficiency, (ii) conditions heavily out of equipartition (by three to five orders of magnitude), and (iii) not accounting for the simultaneous UV data, which then should belong to a different emission component, possibly the same as the far-IR (WISE) data. We find evidence of this separation of the UV and X-ray emission in at least two objects. In any case, the TeV electrons must not `see' the UV or lower-energy photons, even if these come from different zones/populations, or the increased radiative cooling would steepen the very-high-energy spectrum.
Intermittent Anisotropic Turbulence Detected by THEMIS in the Magnetosheath
NASA Astrophysics Data System (ADS)
Macek, W. M.; Wawrzaszek, A.; Kucharuk, B.; Sibeck, D. G.
2017-12-01
Following our previous study of Time History of Events and Macroscale Interactions during Substorms (THEMIS) data, we consider intermittent turbulence in the magnetosheath under various conditions of the magnetized plasma behind the Earth's bow shock and now also near the magnetopause. Namely, we look at the fluctuations of the components of the Elsässer variables in the plane perpendicular to the scale-dependent background magnetic field and along the local average ambient magnetic field. We have shown that Alfvén fluctuations often exhibit strongly anisotropic, non-gyrotropic, intermittent turbulent behavior, resulting in substantial deviations of the probability density functions from a normal Gaussian distribution, with a large kurtosis. In particular, for very high Alfvénic Mach numbers and high plasma beta, we have clear anisotropy with non-Gaussian statistics in the transverse directions. However, along the magnetic field, the kurtosis is small and the plasma is close to equilibrium. On the other hand, intermittency becomes weaker for moderate Alfvén Mach numbers and lower values of the plasma beta. The degree of intermittency of turbulence for the outgoing fluctuations propagating relative to the ambient magnetic field is usually similar to that for the ingoing fluctuations, which is in agreement with approximate equipartition of energy between these oppositely propagating Alfvén waves. We believe that the different characteristics of this intermittent anisotropic turbulent behavior in various regions of space and astrophysical plasmas can help identify nonlinear structures responsible for deviations of the plasma from equilibrium.
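The intermittency diagnostic used here, kurtosis, can be illustrated on synthetic data: a Gaussian signal has excess kurtosis near 0, while a signal with rare large bursts (heavy tails) has strongly positive kurtosis. A minimal sketch with purely synthetic data, not THEMIS measurements:

```python
# Hedged sketch: excess kurtosis as an intermittency diagnostic.
# Gaussian fluctuations -> excess kurtosis ≈ 0; bursty, heavy-tailed
# fluctuations -> excess kurtosis >> 0. All data below are synthetic.

import random

def excess_kurtosis(x):
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n   # second central moment
    m4 = sum((v - mean) ** 4 for v in x) / n   # fourth central moment
    return m4 / m2**2 - 3.0                    # 0 for a Gaussian

random.seed(0)
gaussian = [random.gauss(0.0, 1.0) for _ in range(100_000)]
# A crude "intermittent" signal: mostly quiet, with rare 10x bursts.
intermittent = [v * (10.0 if random.random() < 0.01 else 1.0) for v in gaussian]

print(excess_kurtosis(gaussian))      # close to 0
print(excess_kurtosis(intermittent))  # strongly positive
```

This is the sense in which "large kurtosis" in the abstract signals departures from Gaussian statistics in the perpendicular directions.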
A new look at sunspot formation using theory and observations
NASA Astrophysics Data System (ADS)
Losada, I. R.; Warnecke, J.; Glogowski, K.; Roth, M.; Brandenburg, A.; Kleeorin, N.; Rogachevskii, I.
2017-10-01
Sunspots are of basic interest in the study of the Sun. Their relevance ranges from being an activity indicator of magnetic fields to being the places where coronal mass ejections and flares erupt. They are therefore also an important ingredient of space weather. Their formation, however, is still an unresolved problem in solar physics. Observations provide only 2D surface information near the spot, and it is debatable how to infer deep structures and properties from local helioseismology. For a long time, it was believed that flux tubes rising from the bottom of the convection zone are the origin of the bipolar sunspot structure seen on the solar surface. However, this theory has been challenged, in particular recently by new surface observations, helioseismic inversions, and numerical models of convective dynamos. In this article we discuss another theoretical approach to the formation of sunspots: the negative effective magnetic pressure instability. This is a large-scale instability, in which the total (kinetic plus magnetic) turbulent pressure can be suppressed in the presence of a weak large-scale magnetic field, leading to a converging downflow, which eventually concentrates the magnetic field within it. Numerical simulations of forced stratified turbulence have been able to produce strong super-equipartition flux concentrations, similar to sunspots at the solar surface. In this framework, sunspots would only form close to the surface due to the instability's constraints on stratification and rotation. Additionally, we present some ideas from local helioseismology, where we plan to use the Hankel analysis to study the pre-emergence phase of a sunspot and to constrain its deep structure and formation mechanism.
Ultra-High Resolution Observations Of Selected Blazars
NASA Astrophysics Data System (ADS)
Hodgson, Jeffrey A.
2015-01-01
Active Galactic Nuclei are the luminous centres of active galaxies that produce powerful relativistic jets from central supermassive black holes (SMBH). When these jets are oriented towards the observer's line of sight, they become very bright, very variable and very energetic. These sources are known as blazars, and Very Long Baseline Interferometry (VLBI) provides a direct means of observing into the heart of these objects. VLBI performed at 3 mm with the Global mm-VLBI Array (GMVA) and at 7 mm with the Very Long Baseline Array (VLBA) allows some of the highest angular resolution images of blazars to be produced. In this thesis, we present the first results of an ongoing monitoring program of blazars known to emit at γ-ray energies. The physical processes that produce these jets and the γ-ray emission are still not well known. The jets are thought to be produced by converting gravitational energy around the black hole into relativistic particles that are accelerated away at near the speed of light. However, the exact mechanisms for this and the role that magnetic fields play are not fully clear. Similarly, γ-rays have long been known to be emitted from blazars, and their production is often related to the up-scattering of synchrotron radiation from the jet. However, the origin of the seed photons for the up-scattering (either from within the jet itself or from an external photon field) and the location of the γ-ray emission regions have remained inconclusive. In this thesis, we aim to describe the likely location of γ-ray emission in jets, the physical structure of blazar jets, the location of the VLBI features relative to the origin of the jet and the nature of the magnetic field, both of the VLBI-scale jet and in the region where the jet is produced. We present five sources that have been monitored at 3 mm using the GMVA from 2008 until 2012.
These sources have been analysed with near-in-time 7 mm maps from the Very Long Baseline Array (VLBA), γ-ray light curves from the Fermi/LAT space telescope and cm- to mm-wave total-intensity light curves. One source, OJ 287, has additionally been analysed with monthly imaging at 7 mm with the VLBA and near-in-time 2 cm VLBI maps. We use these resources to analyse high angular resolution structural and spectral changes and to see if they correlate with flaring (both radio and γ-ray) activity and with VLBI component ejections. By spectrally decomposing sources, we can determine the spatially resolved magnetic field structure in the jets at the highest resolutions yet achieved and at frequencies that are near or above the turnover frequency for synchrotron self-absorption (SSA). We compute magnetic field estimates from SSA theory and by assuming equipartition between magnetic fields and relativistic particle energies. All sources analysed exhibit downstream quasi-stationary features which sometimes show higher brightness temperatures and flux density variability than the VLBI "core", and which we interpret as recollimation or oblique shocks. We find that γ-ray flaring, mm-wave radio flaring and changes in opacity from optically thick to optically thin are in many cases consistent with component ejections past both the VLBI "core" and these quasi-stationary downstream features. We find decreasing apparent brightness temperatures and Doppler factors as a function of increased "core" separation, which is interpreted as consistent with a slowly accelerating jet over the de-projected inner ˜10-20 pc. Assuming equipartition between magnetic energy and relativistic particle energy, the magnetic field strengths within the jets at these scales are, on average, between B ˜ 0.3-0.9 G, with the highest strengths found within the VLBI "core".
From the observed gradient in magnetic field strengths, we can place the mm-wave "core" ˜1-3 pc downstream of the base of the jet. Additionally, we estimate the magnetic field at the base of the jet to be B_apex ˜ 3000-18000 G. We computed theoretical estimates based on jet production under magnetically arrested disks (MAD) and find our estimates to be consistent. In the BL Lac source OJ 287, we included monthly 7 mm and near-in-time 2 cm VLBA maps to provide full kinematics and increased spectral coverage. Following a previously reported radical change in the inner-jet PA of ˜100°, we find PAs unusually discrepant with the previous jet direction, following very different trajectories. The source exhibits a downstream quasi-stationary feature that at times has higher brightness temperatures than the "core". The source also exhibited a large change in apparent component speeds compared with previous epochs, which we propose could be due to changes in jet pressure causing changes in the location of downstream recollimation or oblique shocks and hence their line-of-sight viewing angle. The addition of 2 cm VLBA data allows for a comparison of magnetic fields derived from SSA and equipartition. The magnetic field estimates are consistent within 20%, with B_SSA ≥ 1.6 G and B_equi ≥ 1.2 G in the "core" and B_SSA ≤ 0.4 G and B_equi ≤ 0.3 G in the stationary feature. Gamma-ray emission appears to originate in the "core" and the stationary feature. The decrease in magnetic field strengths places the mm-wave "core" downstream of the jet base by ≤6 pc and likely outside of the broad-line region (BLR). This, combined with the results for the other sources, is consistent with γ-rays being produced in the vicinity of the VLBI "core" or in further-downstream stationary features, which are likely over a parsec downstream of the central black hole, favouring the scenario of photons being up-scattered within the relativistic jet.
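The equipartition estimates above rest on the condition that the magnetic energy density equals the relativistic-particle energy density. A minimal sketch in Gaussian-cgs units; the particle energy density used below is an arbitrary placeholder, not one derived from the VLBI data discussed in this thesis:

```python
# Hedged sketch: "equipartition" means the magnetic energy density
# u_B = B**2 / (8*pi) equals the relativistic-particle energy density u_e,
# so B_eq = sqrt(8*pi*u_e). The u_e value below is illustrative only.

import math

def b_equipartition(u_particles_erg_cm3):
    """Field strength (Gauss) satisfying B^2/(8 pi) = u_particles (erg cm^-3)."""
    return math.sqrt(8.0 * math.pi * u_particles_erg_cm3)

# u_e ~ 3.2e-3 erg cm^-3 gives a field of order the ~0.3 G jet values quoted:
print(b_equipartition(3.2e-3))
```

Inverting the same relation shows why field strength falling with distance down the jet maps onto a distance estimate for the "core": the measured B at each location fixes the energy density there, and the gradient anchors the jet apex.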
Location of γ-ray emission and magnetic field strengths in OJ 287
NASA Astrophysics Data System (ADS)
Hodgson, J. A.; Krichbaum, T. P.; Marscher, A. P.; Jorstad, S. G.; Rani, B.; Marti-Vidal, I.; Bach, U.; Sanchez, S.; Bremer, M.; Lindqvist, M.; Uunila, M.; Kallunki, J.; Vicente, P.; Fuhrmann, L.; Angelakis, E.; Karamanavis, V.; Myserlis, I.; Nestoras, I.; Chidiac, C.; Sievers, A.; Gurwell, M.; Zensus, J. A.
2017-01-01
Context. The γ-ray BL Lac object OJ 287 is known to exhibit inner-parsec "jet-wobbling", high degrees of variability at all wavelengths and quasi-stationary features, including an apparent (≈100°) position-angle change in projection on the sky plane. Aims: Sub-50 micro-arcsecond resolution 86 GHz observations with the global mm-VLBI array (GMVA) supplement ongoing multi-frequency VLBI blazar monitoring at lower frequencies. Using these maps, together with cm/mm total intensity and γ-ray observations from Fermi-LAT from 2008-2014, we aim to determine the location of γ-ray emission and to explain the inner-mas structural changes. Methods: Observations with the GMVA offer approximately double the angular resolution compared with 43 GHz VLBA observations and enable us to observe above the synchrotron self-absorption peak frequency. Fermi-LAT γ-ray data were reduced and analysed. The jet was spectrally decomposed at multiple locations along the jet. From this, we could derive estimates of the magnetic field using equipartition and synchrotron self-absorption arguments. How the field decreases down the jet provided an estimate of the distance to the jet apex and an estimate of the magnetic field strength at the jet apex and in the broad line region. Combined with accurate kinematics, we attempt to locate the site of γ-ray activity, radio flares, and spectral changes. Results: Strong γ-ray flares appeared to originate from either the so-called core region, a downstream stationary feature, or both, with γ-ray activity significantly correlated with radio flaring in the downstream quasi-stationary feature. Magnetic field estimates were determined at multiple locations along the jet, with the magnetic field found to be ≥1.6 G in the core and ≤0.4 G in the downstream quasi-stationary feature. 
We therefore found upper limits on the location of the VLBI core as ≲6.0 pc from the jet apex and determined an upper limit on the magnetic field near the jet base of the order of thousands of Gauss. The 3 mm GMVA data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/597/A80
Dynamo action and magnetic buoyancy in convection simulations with vertical shear
NASA Astrophysics Data System (ADS)
Guerrero, G.; Käpylä, P. J.
2011-09-01
Context. A hypothesis for sunspot formation is the buoyant emergence of magnetic flux tubes created by the strong radial shear at the tachocline. In this scenario, the magnetic field has to exceed a threshold value before it becomes buoyant and emerges through the whole convection zone. Aims: We follow the evolution of a random seed magnetic field with the aim of studying under what conditions it is possible to excite the dynamo instability and whether the dynamo-generated magnetic field becomes buoyantly unstable and emerges to the surface as expected in the flux-tube context. Methods: We perform numerical simulations of compressible turbulent convection that include a vertical shear layer. Like the solar tachocline, the shear is located at the interface between the convective and stable layers. Results: We find that shear and convection are able to amplify the initial magnetic field and form large-scale elongated magnetic structures. The magnetic field strength depends on several parameters such as the shear amplitude, the thickness and location of the shear layer, and the magnetic Reynolds number (Rm). Models with deeper and thicker tachoclines allow longer storage and are more favorable for generating a mean magnetic field. Models with higher Rm grow faster but saturate at slightly lower levels. Whenever the toroidal magnetic field reaches amplitudes greater than a threshold value, which is close to the equipartition value, it becomes buoyant and rises into the convection zone, where it expands and forms mushroom-shaped structures. Some emergence events, i.e. those with the largest amplitudes of the initial field, are able to reach the very uppermost layers of the domain. These episodes are able to modify the convective pattern, forming either broader convection cells or convective eddies elongated in the direction of the field. However, in none of these events does the field preserve its initial structure.
The back-reaction of the magnetic field on the fluid is also observed as lower values of the turbulent velocity and as perturbations of approximately three per cent in the shear profile. Conclusions: The results indicate that buoyancy is a common phenomenon when the magnetic field is amplified through dynamo action in a narrow layer. It is, however, very hard for the field to rise to the surface without losing its initial coherence.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, constraints are imposed on the failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods; the variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.
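The stochastic design idea described above can be illustrated with a tiny Monte Carlo: treat load and strength as random variables and estimate the failure rate for a given design margin. The distributions and numbers below are illustrative assumptions, not values from the report:

```python
# Hedged sketch: Monte Carlo reliability estimate for a load/strength model.
# A design "fails" when a sampled load exceeds the sampled strength.
# Normal distributions and all parameter values are illustrative only.

import random

def failure_probability(mean_load, sd_load, mean_strength, sd_strength,
                        n=200_000, seed=1):
    rng = random.Random(seed)
    failures = sum(
        rng.gauss(mean_load, sd_load) > rng.gauss(mean_strength, sd_strength)
        for _ in range(n)
    )
    return failures / n

# A mean-valued design (strength equal to the mean load) fails about half
# the time; adding margin drives the failure rate down and the weight up,
# which is the inverted-S trade-off described in the abstract.
print(failure_probability(100.0, 10.0, 100.0, 5.0))
print(failure_probability(100.0, 10.0, 140.0, 5.0))
```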
A flexible layout design method for passive micromixers.
Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui
2012-10-01
This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. Unlike trial-and-error methods, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. The dependence on the experience of the designer is therefore weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, spatially periodic design, and the effects of the Péclet number and Reynolds number on the obtained layouts. The capability of this design method is validated by several comparisons between the obtained layouts and the optimized designs in recently published literature, where the value of the mixing measure is improved by up to 40.4% for one cycle of the micromixer.
Aircraft digital control design methods
NASA Technical Reports Server (NTRS)
Powell, J. D.; Parsons, E.; Tashker, M. G.
1976-01-01
Variations in design methods for aircraft digital flight control are evaluated and compared. The methods fall into two categories: those where the design is done in the continuous domain (or s plane) and those where the design is done in the discrete domain (or z plane). Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the uncompensated s-plane design method, which was acceptable above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
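One common route in the "design in the s plane, then discretize" category is the bilinear (Tustin) transform. A minimal sketch, assuming a first-order lag 1/(τs + 1) as the continuous design (this plant and the chosen sample rate are illustrative, not from the report):

```python
# Hedged sketch: bilinear (Tustin) discretization of H(s) = 1/(tau*s + 1).
# Substituting s -> (2/T)*(1 - z^-1)/(1 + z^-1) with c = 2*tau/T gives
#   y[k] = b0*u[k] + b1*u[k-1] - a1*y[k-1],
#   b0 = b1 = 1/(1 + c),  a1 = (1 - c)/(1 + c).

def tustin_first_order(tau, T):
    """Difference-equation coefficients (b0, b1, a1) for 1/(tau*s + 1)."""
    c = 2.0 * tau / T
    b0 = 1.0 / (1.0 + c)
    return b0, b0, (1.0 - c) / (1.0 + c)

def step_response(tau, T, n):
    b0, b1, a1 = tustin_first_order(tau, T)
    y, u_prev, y_prev = [], 0.0, 0.0
    for _ in range(n):
        u = 1.0                                   # unit step input
        y_now = b0 * u + b1 * u_prev - a1 * y_prev
        y.append(y_now)
        u_prev, y_prev = u, y_now
    return y

# At 100 samples per time constant, the discrete step response after one
# time constant is close to the continuous value 1 - e**-1 ≈ 0.632.
print(step_response(1.0, 0.01, 100)[-1])
```

As the abstract notes, the fidelity of such discretized designs degrades as the sample period grows; rerunning this sketch with larger T shows the discrete response drifting away from the continuous one.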
Cooling and stabilization of graphene nanoplatelets in high vacuum
NASA Astrophysics Data System (ADS)
Nagornykh, Pavel
The study of 2D materials is a rapidly growing area of research, where the ability to isolate and probe an individual single-layer specimen is of high importance. The levitation approach serves as a natural solution to this problem and can be used in ways complementary to the standard techniques. Experiments, including the study of properties at high or close-to-melting temperatures, stretching, folding, vibration and functionalization, can be conducted on levitated 2D materials. As a first step towards realizing all these ideas, one needs to develop and test a system allowing control over the thermal state and orientation of monolayer flakes. In this thesis, I present the results of implementing a parametric feedback cooling scheme in a quadrupole ion trap for stabilization and cooling of graphene nanoplatelets. I have tested the feedback and shown that it allows levitated graphene nanoplatelets to be stabilized in high vacuum conditions (<1 microTorr) with trapped lifetimes longer than a week. Cooling of the center-of-mass motion to temperatures below 20 K for all translational degrees of freedom was observed. I have also studied the coupling of DC patch potentials, which were found to be present in the high vacuum chamber. Their effect on cooling was studied, and a protocol for minimizing the noise coupling created by the DC fields was designed. We have shown that by varying DC voltages on a set of auxiliary DC electrodes placed near the trap, one can balance out the DC fields and achieve the lowest cooling temperature. The settings corresponding to this temperature were measured to have a slow drift in time. The ability to tune the settings to balance this drift without breaking the vacuum was studied and found to be a viable solution for drift cancellation. In addition, our effort in characterizing the flakes is presented.
It was shown that the flake discharge quantization observed during the initial pumping down of the high vacuum chamber allows absolute values of the flake mass and charge to be extracted. I also discuss the issues experienced with estimating the shape of the flake, as well as its temperature based on the equipartition theorem. Finally, I discuss preliminary data on the precession and reorientation of the flakes in the presence of circularly polarized light (CPL) and DC stray fields. The dependence of flake orientation on the offset from the nulling settings is observed and is explained in terms of a basic model of a solid charged disk in the presence of two torques created by the CPL and DC stray fields.
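The equipartition temperature estimate mentioned above assigns (1/2)k_B T to each translational degree of freedom, so a tracked particle with measured one-axis velocity variance σ_v² has T = m σ_v²/k_B. A minimal sketch; the flake mass and velocity below are illustrative placeholders, not values from the thesis:

```python
# Hedged sketch: effective temperature of one translational degree of freedom
# from the equipartition theorem, (1/2)*m*<v**2> = (1/2)*k_B*T, i.e.
#   T = m * sigma_v**2 / k_B.
# The mass and rms velocity below are illustrative, not measured values.

K_B = 1.380649e-23     # J/K

def equipartition_temperature(mass_kg, sigma_v_m_s):
    """Effective temperature of one translational DOF."""
    return mass_kg * sigma_v_m_s**2 / K_B

# e.g. a 1e-17 kg flake with ~5 mm/s rms velocity along one axis comes out
# near the sub-20 K regime quoted in the abstract:
print(equipartition_temperature(1e-17, 5e-3))
```

The thesis notes that this estimate is only as good as the knowledge of the flake mass, which is why the discharge-quantization mass measurement matters for the temperature calibration.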
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-12
... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...
Bortz, John; Shatz, Narkis
2011-04-01
The recently developed generalized functional method provides a means of designing nonimaging concentrators and luminaires for use with extended sources and receivers. We explore the mathematical relationships between optical designs produced using the generalized functional method and edge-ray, aplanatic, and simultaneous multiple surface (SMS) designs. Edge-ray and dual-surface aplanatic designs are shown to be special cases of generalized functional designs. In addition, it is shown that dual-surface SMS designs are closely related to generalized functional designs and that certain computational advantages accrue when the two design methods are combined. A number of examples are provided. © 2011 Optical Society of America
The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter
NASA Technical Reports Server (NTRS)
Townsend, Barbara K.
1987-01-01
A control-system design method, quadratic optimal cooperative control synthesis (CCS), is applied to the design of a stability and control augmentation system (SCAS). The CCS design method differs from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot model to create the desired performance. The design method, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and linear quadratic regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs that compare favorably with the frequency-domain approach.
The application of mixed methods designs to trauma research.
Creswell, John W; Zhang, Wanqing
2009-12-01
Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers design investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. To apply these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its mixed methods procedures were noted. Finally, drawing on other available mixed methods designs, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.
Educating Instructional Designers: Different Methods for Different Outcomes.
ERIC Educational Resources Information Center
Rowland, Gordon; And Others
1994-01-01
Suggests new methods of teaching instructional design based on literature reviews of other design fields including engineering, architecture, interior design, media design, and medicine. Methods discussed include public presentations, visiting experts, competitions, artifacts, case studies, design studios, and internships and apprenticeships.…
Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.
Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo
2016-07-01
During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On the one hand, producing a visual artefact has a number of advantages: it helps designers to externalise their thoughts and acts as a common language between different stakeholders. On the other hand, an inappropriate visualisation method can hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify (a) current HCI design methods and (b) approaches to selecting these methods. The resulting design methods are filtered to create a list of visualisation methods only. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised under 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Game Methodology for Design Methods and Tools Selection
ERIC Educational Resources Information Center
Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois
2014-01-01
Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. The automatic regridding method was developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbations of the design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
Plasmoids in relativistic reconnection, from birth to adulthood: first they grow, then they go
NASA Astrophysics Data System (ADS)
Sironi, Lorenzo; Giannios, Dimitrios; Petropoulou, Maria
2016-10-01
Blobs, or quasi-spherical emission regions containing relativistic particles and magnetic fields, are often assumed ad hoc in emission models of relativistic astrophysical jets, yet their physical origin is still not well understood. Here, we employ a suite of large-scale 2D particle-in-cell simulations in electron-positron plasmas to demonstrate that relativistic magnetic reconnection can naturally account for the formation of quasi-spherical plasmoids filled with high-energy particles and magnetic fields. Our simulations extend to unprecedentedly long temporal and spatial scales, so we can capture the asymptotic physics independently of the initial setup. We characterize the properties of the plasmoids, continuously generated as a self-consistent by-product of the reconnection process: they are in rough energy equipartition between particles and magnetic fields; the upper energy cutoff of the plasmoid particle spectrum is proportional to the plasmoid width w, corresponding to a Larmor radius ˜0.2 w; the plasmoids grow in size at ˜0.1 of the speed of light, with most of the growth happening while they are still non-relativistic (`first they grow'); their growth is suppressed once they get accelerated to relativistic speeds by the field line tension, up to the Alfvén speed (`then they go'). The largest plasmoids reach a width wmax ˜ 0.2 L independently of the system length L, they have nearly isotropic particle distributions and contain the highest energy particles, whose Larmor radius is ˜0.03 L. The latter can be regarded as the Hillas criterion for relativistic reconnection. We briefly discuss the implications of our results for the high-energy emission from relativistic jets and pulsar winds.
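The Hillas-type criterion quoted above (highest-energy particles confined with Larmor radius ~0.03 L) can be turned into a quick numerical estimate. For a relativistic particle of unit charge, E = q B r_L, which in Gaussian units reduces to E[eV] ≈ 300 B[G] r_L[cm]. A minimal sketch with illustrative numbers (the field strength and layer length below are placeholders, not values from the paper):

```python
def hillas_energy_eV(B_gauss, r_larmor_cm):
    """Maximum energy of a relativistic particle of unit charge whose
    Larmor radius r_L = E/(qB) fits the region: E = q B r_L.
    In Gaussian units this reduces to E[eV] ~ 300 * B[G] * r_L[cm]."""
    Q_ESU = 4.80320425e-10    # elementary charge, esu
    ERG_PER_EV = 1.602176634e-12
    return Q_ESU * B_gauss * r_larmor_cm / ERG_PER_EV

# Illustrative: a reconnection layer of length L = 1e15 cm with B = 1 G;
# the paper's criterion puts the largest Larmor radius at ~0.03 L.
E_max = hillas_energy_eV(1.0, 0.03 * 1e15)
```

With these placeholder numbers the confinement limit lands near 10^16 eV, showing how the ~0.03 L criterion directly caps the particle spectrum for a given field and system size.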
Radio to gamma-ray variability study of blazar S5 0716+714
Rani, B.; Krichbaum, T. P.; Fuhrmann, L.; ...
2013-03-13
In this paper, we present the results of a series of radio, optical, X-ray, and γ-ray observations of the BL Lac object S5 0716+714 carried out between April 2007 and January 2011. The multifrequency observations were obtained using several ground- and space-based facilities. The intense optical monitoring of the source reveals faster repetitive variations superimposed on a long-term variability trend on a time scale of ~350 days. Episodes of fast variability recur on time scales of ~60-70 days. The intense and simultaneous activity at optical and γ-ray frequencies favors the synchrotron self-Compton mechanism for the production of the high-energy emission. Two major low-peaking radio flares were observed during this high optical/γ-ray activity period. The radio flares are characterized by a rising and a decaying stage and agree with the formation of a shock and its evolution. We found that the evolution of the radio flares requires a geometrical variation in addition to intrinsic variations of the source. Different estimates yield robust and self-consistent lower limits of δ ≥ 20 and an equipartition magnetic field B_eq ≥ 0.36 G. Causality arguments constrain the size of the emission region to θ ≤ 0.004 mas. We found a significant correlation between flux variations at radio frequencies and those at optical and γ-ray frequencies. The optical/GeV flux variations lead the radio variability by ~65 days. The longer time delays between low-peaking radio outbursts and optical flares imply that the optical flares are precursors of the radio ones. An orphan X-ray flare challenges simple one-zone emission models. Finally, we also describe the spectral energy distribution modeling of the source from simultaneous data taken through different activity periods.
The variation of polyhedral compressibilities between structures
NASA Astrophysics Data System (ADS)
Ross, N. L.; Angel, R. J.; Zhao, J.; Vanpeteghem, C.
2006-05-01
In their influential book "Comparative Crystal Chemistry", Hazen and Finger [1] concluded that "a given type of polyhedron has nearly constant bulk modulus within estimated experimental error, independent of structure". Advances in the precision of experimental high-pressure diffraction measurements over the ensuing two decades allow us to re-examine this hypothesis. In particular, the discovery that the response of the perovskite structure to high pressures is controlled by the equipartition of bond-valence strain between the A and B cation sites within the structure [2] explicitly implies that the octahedral compressibility depends not only upon the octahedral cation, but also upon the compressibility of the cation-oxygen bonds of the extra-framework (nominally dodecahedral) site. Thus the octahedral compressibility of a B cation changes with the A cation. For example, the compressibility of the Ga-O bonds in LaGaO3 is 2.43(7) x 10^-3 GPa^-1, whereas it is 1.81 x 10^-3 GPa^-1 in NdGaO3. The compressibilities of Al-O bonds in perovskites range between 1.62(9) and 1.87(13) x 10^-3 GPa^-1. A more extreme example is provided by the difference in octahedral compressibilities between ABO3 perovskites and their protonated analogues AB(OH)6. In CaSnO3 the average compressibility of the Sn-O bonds within the octahedra is 1.61(11) x 10^-3 GPa^-1, whereas the Sn-O bonds in MnSn(OH)6 are incompressible within the uncertainties of the measurement. References: [1] Hazen, Finger (1982) Comparative Crystal Chemistry. John Wiley and Sons. [2] Zhao, Ross, & Angel (2004) Acta Cryst. B60:263. [3] Vanpeteghem et al. (2006) Geophys. Res. Lett. 33:L03306. [4] Ross et al. (1990) Amer. Mineral. 75:739.
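As a rough consistency check on bond-compressibility values like those above, a polyhedral bulk modulus can be estimated from the mean linear (bond) compressibility by assuming isotropic compression, so that the volume compressibility is about three times the linear one. A sketch using the CaSnO3 value quoted in the abstract (the factor of 3 is the standard isotropic approximation, not a result from the paper):

```python
def polyhedral_bulk_modulus_GPa(beta_bond_per_GPa):
    """Polyhedral bulk modulus from a mean linear (bond) compressibility,
    assuming isotropic compression so that beta_V ~ 3 * beta_l and
    K = 1 / beta_V."""
    return 1.0 / (3.0 * beta_bond_per_GPa)

# Sn-O bonds in CaSnO3 from the abstract: beta_l ~ 1.61e-3 GPa^-1.
K_SnO6 = polyhedral_bulk_modulus_GPa(1.61e-3)  # ~207 GPa
```

The ~200 GPa scale recovered here is typical of stiff octahedra, illustrating why small differences in bond compressibility between structures translate into measurable differences in polyhedral bulk moduli.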
Microgravity experiments on a granular gas of elongated grains
NASA Astrophysics Data System (ADS)
Harth, K.; Trittel, T.; Kornek, U.; Höme, S.; Will, K.; Strachauer, U.; Stannarius, R.
2013-06-01
Granular gases represent well-suited systems in which to investigate statistical granular dynamics. The literature comprises numerous investigations of ensembles of spherical or irregularly shaped grains; mainly computer models, analytical theories, and experiments restricted to two dimensions have been reported. In three dimensions, the gaseous state can only be maintained by strong external excitation, e.g. vibrations or electromagnetic fields, or in microgravity. A steady state in which the dynamics of a weakly disturbed granular gas are governed by particle-particle collisions is hard to realize with spherical grains due to clustering. We present the first study of a granular gas of elongated cylinders in three dimensions. The mean free path is considerably reduced with respect to spheres at comparable filling fractions. The particles can be tracked in 3D over a sequence of frames. In a homogeneous steady state, we find non-Gaussian velocity distributions and a lack of equipartition of kinetic energy. We discuss the relations between energy input and vibrating-plate accelerations.
THERMAL PLASMA IN THE GIANT LOBES OF THE RADIO GALAXY CENTAURUS A
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Sullivan, S. P.; Feain, I. J.; McClure-Griffiths, N. M.
2013-02-20
We present a Faraday rotation measure (RM) study of the diffuse, polarized radio emission from the giant lobes of the nearest radio galaxy, Centaurus A. After removal of the smooth Galactic foreground RM component, using an ensemble of background source RMs located outside the giant lobes, we are left with a residual RM signal associated with the giant lobes. We find that the most likely origin of this residual RM is thermal material mixed throughout the relativistic lobe plasma. The alternative possibility of a thin-skin/boundary layer of magnetoionic material swept up by the expansion of the lobes is highly unlikely, since it requires, at least, an order-of-magnitude enhancement of the swept-up gas over the expected intragroup density on these scales. Strong depolarization observed from 2.3 to 0.96 GHz also supports the presence of a significant amount of thermal gas within the lobes, although depolarization solely due to RM fluctuations in a foreground Faraday screen on scales smaller than the beam cannot be ruled out. Considering the internal Faraday rotation scenario, we find a thermal gas number density of ~10^-4 cm^-3, implying a total gas mass of ~10^10 M_⊙ within the lobes. The thermal pressure associated with this gas (with temperature kT ~ 0.5 keV, obtained from recent X-ray results) is approximately equal to the non-thermal pressure, indicating that over the volume of the lobes there is approximate equipartition between the thermal gas, radio-emitting electrons, and magnetic field (and potentially any relativistic protons present).
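The density inference above follows from the standard Faraday rotation relation for a uniform magnetized slab, RM [rad m^-2] = 0.812 n_e [cm^-3] B_∥ [μG] L [pc]. A minimal sketch inverting this for the electron density (the residual RM, field strength and path length below are illustrative placeholders, not the paper's fitted values):

```python
def electron_density(rm_rad_m2, B_par_uG, path_pc):
    """Mean thermal electron density (cm^-3) from a Faraday rotation
    measure, assuming a single uniform slab:
        RM [rad m^-2] = 0.812 * n_e [cm^-3] * B_par [uG] * L [pc]."""
    return rm_rad_m2 / (0.812 * B_par_uG * path_pc)

# Illustrative slab: a 10 rad/m^2 residual RM through a ~100 kpc lobe
# with a line-of-sight field of ~1 uG.
n_e = electron_density(10.0, 1.0, 1e5)
```

With these placeholder numbers the inferred density is of order 10^-4 cm^-3, the same scale quoted in the abstract for the internal-Faraday-rotation scenario.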
NASA Astrophysics Data System (ADS)
Gómez, José L.; Lobanov, Andrei P.; Bruni, Gabriele; Kovalev, Yuri Y.; Marscher, Alan P.; Jorstad, Svetlana G.; Mizuno, Yosuke; Bach, Uwe; Sokolovsky, Kirill V.; Anderson, James M.; Galindo, Pablo; Kardashev, Nikolay S.; Lisakov, Mikhail M.
2016-02-01
We present the first polarimetric space very long baseline interferometry (VLBI) imaging observations at 22 GHz. BL Lacertae was observed on 2013 November 10 with the RadioAstron space VLBI mission, including a ground array of 15 radio telescopes. The instrumental polarization of the space radio telescope is found to be less than 9%, demonstrating the polarimetric imaging capabilities of RadioAstron at 22 GHz. Ground-space fringes were obtained up to a projected baseline distance of 7.9 Earth diameters, allowing us to image the jet in BL Lacertae with a maximum angular resolution of 21 μas, the highest achieved to date. We find evidence for emission upstream of the radio core, which may correspond to a recollimation shock at about 40 μas from the jet apex, in a pattern that includes other recollimation shocks at approximately 100 and 250 μas from the jet apex. Polarized emission is detected in two components within the innermost 0.5 mas from the core, as well as in some knots 3 mas downstream. Faraday rotation analysis, obtained by combining RadioAstron 22 GHz and ground-based 15 and 43 GHz images, shows a gradient in rotation measure and Faraday-corrected polarization vector as a function of position angle with respect to the core, suggesting that the jet in BL Lacertae is threaded by a helical magnetic field. The intrinsic de-boosted brightness temperature in the unresolved core exceeds 3 × 10^12 K, suggesting, at the very least, a departure from equipartition of energy between the magnetic field and radiating particles.
Radio Observations of the Tidal Disruption Event XMMSL1 J0740-85
NASA Astrophysics Data System (ADS)
Alexander, K. D.; Wieringa, M. H.; Berger, E.; Saxton, R. D.; Komossa, S.
2017-03-01
We present radio observations of the tidal disruption event (TDE) candidate XMMSL1 J0740-85 spanning 592 to 875 days post X-ray discovery. We detect radio emission that fades from an initial peak flux density at 1.6 GHz of 1.19 ± 0.06 mJy to 0.65 ± 0.06 mJy, suggesting an association with the TDE. This makes XMMSL1 J0740-85, at d = 75 Mpc, the nearest TDE with detected radio emission to date and only the fifth TDE with radio emission overall. The observed radio luminosity rules out a powerful relativistic jet like that seen in the relativistic TDE Swift J1644+57. Instead, we infer from an equipartition analysis that the radio emission most likely arises from a non-relativistic outflow similar to that seen in the nearby TDE ASASSN-14li, with a velocity of about 10^4 km s^-1 and a kinetic energy of about 10^48 erg, expanding into a medium with a density of about 10^2 cm^-3. Alternatively, the radio emission could arise from a weak, initially relativistic but decelerated jet with an energy of ~2 × 10^50 erg, or (for an extreme disruption geometry) from the unbound debris. The radio data for XMMSL1 J0740-85 continue to support the previous suggestion of a bimodal distribution of common non-relativistic isotropic outflows and rare relativistic jets in TDEs (in analogy with the relation between Type Ib/c supernovae and long-duration gamma-ray bursts). The radio data also provide a new measurement of the circumnuclear density on a sub-parsec scale around an extragalactic supermassive black hole.
Origin and Evolution of Magnetic Field in PMS Stars: Influence of Rotation and Structural Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emeriau-Viard, Constance; Brun, Allan Sacha, E-mail: constance.emeriau@cea.fr, E-mail: sacha.brun@cea.fr
During stellar evolution, especially in the pre-main-sequence phase, stellar structure and rotation evolve significantly, causing major changes in the dynamics and global flows of the star. We wish to assess the consequences of these changes for the stellar dynamo, internal magnetic field topology, and activity level. To do so, we have performed a series of 3D HD and MHD simulations with the ASH code. We choose five different models characterized by the radius of their radiative zone, following an evolutionary track computed by a 1D stellar evolution code. These models characterize stellar evolution from 1 to 50 Myr. By introducing a seed magnetic field in the fully convective model and spreading its evolved state through all four remaining cases, we observe systematic variations in the dynamical properties and in the magnetic field amplitude and topology of the models. The five MHD simulations develop a strong dynamo field that can reach an equipartition state between the kinetic and magnetic energies, and even superequipartition levels in the faster-rotating cases. We find that the magnetic field amplitude increases as it evolves toward the zero-age main sequence. Moreover, the magnetic field topology becomes more complex, with a decreasing axisymmetric component and a nonaxisymmetric one becoming predominant. The dipolar components decrease as the rotation rate and the size of the radiative core increase. The magnetic fields possess a mixed poloidal-toroidal topology with no obvious dominant component. Moreover, the relaxation of the vestige dynamo magnetic field within the radiative core is found to satisfy MHD stability criteria. Hence, it does not experience a global reconfiguration but slowly relaxes while retaining its mixed stable poloidal-toroidal topology.
Transport in a toroidally confined pure electron plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crooks, S.M.; O'Neil, T.M.
1996-07-01
O'Neil and Smith [T.M. O'Neil and R.A. Smith, Phys. Plasmas 1, 8 (1994)] have argued that a pure electron plasma can be confined stably in a toroidal magnetic field configuration. This paper shows that the toroidal curvature of the magnetic field of necessity causes slow cross-field transport. The transport mechanism is similar to magnetic pumping and may be understood by considering a single flux tube of plasma. As the flux tube undergoes poloidal E×B drift rotation about the center of the plasma, the length of the flux tube and the magnetic field strength within it oscillate, producing corresponding oscillations in T_∥ and T_⊥. The collisional relaxation of T_∥ toward T_⊥ produces a slow dissipation of electrostatic energy into heat and a consequent expansion (cross-field transport) of the plasma. In the limit where the cross section of the plasma is nearly circular, the radial particle flux is given by Γ_r = (1/2) ν_{⊥,∥} T (r/ρ_0)^2 n / (−e ∂Φ/∂r), where ν_{⊥,∥} is the collisional equipartition rate, ρ_0 is the major radius at the center of the plasma, and r is the minor radius measured from the center of the plasma. The transport flux is first calculated using this simple physical picture and then by solving the drift-kinetic Boltzmann equation. The latter calculation is not limited to a plasma with a circular cross section. © 1996 American Institute of Physics.
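The near-circular-limit flux in the abstract can be transcribed directly into code. This is only a symbolic transcription of the quoted formula, useful for checking how the flux scales with each parameter; units and signs follow the paper's conventions (for electrons the charge factor −e ∂Φ/∂r is positive in a confining well):

```python
def radial_flux(nu_equip, T, r, rho0, n, e, dPhi_dr):
    """Radial particle flux in the near-circular limit:
        Gamma_r = (1/2) * nu * T * (r/rho0)^2 * n / (-e * dPhi/dr)
    with nu the collisional equipartition rate, rho0 the major radius
    at the plasma centre and r the minor radius from the centre."""
    return 0.5 * nu_equip * T * (r / rho0) ** 2 * n / (-e * dPhi_dr)
```

The (r/ρ_0)² factor makes the transport vanish in the straight-field limit ρ_0 → ∞, consistent with the paper's claim that toroidal curvature is the cause of the cross-field transport.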
Cosmic Ray Acceleration by a Versatile Family of Galactic Wind Termination Shocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bustard, Chad; Zweibel, Ellen G.; Cotter, Cory, E-mail: bustard@wisc.edu
2017-01-20
There are two distinct breaks in the cosmic ray (CR) spectrum: the so-called "knee" around 3 × 10^15 eV and the so-called "ankle" around 10^18 eV. Diffusive shock acceleration (DSA) at supernova remnant (SNR) shock fronts is thought to accelerate galactic CRs to energies below the knee, while an extragalactic origin is presumed for CRs with energies beyond the ankle. CRs with energies between 3 × 10^15 and 10^18 eV, which we dub the "shin," have an unknown origin. It has been proposed that DSA at galactic wind termination shocks, rather than at SNR shocks, may accelerate CRs to these energies. This paper uses the galactic wind model of Bustard et al. to analyze whether galactic wind termination shocks may accelerate CRs to shin energies within a reasonable acceleration time and whether such CRs can subsequently diffuse back to the Galaxy. We argue for acceleration times on the order of 100 Myr rather than a few billion years, as assumed in some previous works, and we discuss prospects for magnetic field amplification at the shock front. Ultimately, we generously assume that the magnetic field is amplified to equipartition. This formalism allows us to obtain analytic formulae, applicable to any wind model, for CR acceleration. Even with generous assumptions, we find that very high wind velocities are required to set up the necessary conditions for acceleration beyond 10^17 eV. We also estimate the luminosities of CRs accelerated by outflow termination shocks, including estimates for the Milky Way wind.
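The ~100 Myr acceleration-time scale argued for above can be sanity-checked with the textbook DSA estimate, not the paper's own formalism: for a strong parallel shock with equal Bohm diffusion coefficients upstream and downstream, t_acc ≈ 20 D/u_sh², with D = r_L c/3. The numbers below (particle energy, field, shock speed) are illustrative placeholders:

```python
def dsa_acceleration_time_s(E_eV, B_gauss, u_sh_cm_s):
    """Order-of-magnitude DSA acceleration time for a strong parallel
    shock with Bohm diffusion on both sides:
        t_acc ~ 20 * D / u_sh^2,  D = r_L * c / 3,  r_L = E / (q B)."""
    C = 2.99792458e10                 # speed of light, cm/s
    r_L = E_eV / (300.0 * B_gauss)    # cm, from E[eV] ~ 300 B[G] r_L[cm]
    D = r_L * C / 3.0                 # Bohm diffusion coefficient, cm^2/s
    return 20.0 * D / u_sh_cm_s**2

# Illustrative: 1e17 eV protons at a 1000 km/s termination shock with a
# (generously amplified) 1 uG field.
t_acc = dsa_acceleration_time_s(1e17, 1e-6, 1e8)
SEC_PER_MYR = 3.156e13
```

With these placeholders the estimate comes out at a couple hundred Myr, the same order as the abstract's argued time scale, and it shows why reaching shin energies demands fast winds: t_acc falls as 1/u_sh².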
DISCOVERY OF HIGH-ENERGY AND VERY HIGH ENERGY {gamma}-RAY EMISSION FROM THE BLAZAR RBS 0413
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aliu, E.; Archambault, S.; Arlen, T.
2012-05-10
We report on the discovery of high-energy (HE; E > 0.1 GeV) and very high energy (VHE; E > 100 GeV) γ-ray emission from the high-frequency-peaked BL Lac object RBS 0413. VERITAS, a ground-based γ-ray observatory, detected VHE γ rays from RBS 0413 with a statistical significance of 5.5 standard deviations (σ) and a γ-ray flux of (1.5 ± 0.6_stat ± 0.7_syst) × 10^-8 photons m^-2 s^-1 (~1% of the Crab Nebula flux) above 250 GeV. The observed spectrum can be described by a power law with a photon index of 3.18 ± 0.68_stat ± 0.30_syst. Contemporaneous observations with the Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope detected HE γ rays from RBS 0413 with a statistical significance of more than 9σ, a power-law photon index of 1.57 ± 0.12_stat +0.11/-0.12_syst, and a γ-ray flux between 300 MeV and 300 GeV of (1.64 ± 0.43_stat +0.31/-0.22_syst) × 10^-5 photons m^-2 s^-1. We present the results from Fermi-LAT and VERITAS, including spectral energy distribution modeling of the γ-ray, quasi-simultaneous X-ray (Swift-XRT), ultraviolet (Swift-UVOT), and R-band optical (MDM) data. We find that, if conditions close to equipartition are required, both the combined synchrotron self-Compton/external-Compton and the lepto-hadronic models are preferred over a pure synchrotron self-Compton model.
NASA Astrophysics Data System (ADS)
Su, Kung-Yi; Hopkins, Philip F.; Hayward, Christopher C.; Faucher-Giguère, Claude-André; Kereš, Dušan; Ma, Xiangcheng; Robles, Victor H.
2017-10-01
Using high-resolution simulations with explicit treatment of stellar feedback physics based on the FIRE (Feedback In Realistic Environments) project, we study how galaxy formation and the interstellar medium (ISM) are affected by magnetic fields, anisotropic Spitzer-Braginskii conduction and viscosity, and sub-grid metal diffusion from unresolved turbulence. We consider controlled simulations of isolated (non-cosmological) galaxies but also a limited set of cosmological 'zoom-in' simulations. Although simulations have shown significant effects from these physics with weak or absent stellar feedback, the effects are much weaker than those of stellar feedback when the latter is modelled explicitly. The additional physics have no systematic effect on galactic star formation rates (SFRs). In contrast, removing stellar feedback leads to SFRs being overpredicted by factors of ˜10-100. Without feedback, neither galactic winds nor volume-filling hot-phase gas exist, and discs tend to runaway collapse to ultra-thin scaleheights with unphysically dense clumps congregating at the galactic centre. With stellar feedback, a multi-phase, turbulent medium with galactic fountains and winds is established. At currently achievable resolutions and for the investigated halo mass range 1010-1013 M⊙, the additional physics investigated here (magnetohydrodynamic, conduction, viscosity, metal diffusion) have only weak (˜10 per cent-level) effects on regulating SFR and altering the balance of phases, outflows or the energy in ISM turbulence, consistent with simple equipartition arguments. We conclude that galactic star formation and the ISM are primarily governed by a combination of turbulence, gravitational instabilities and feedback. We add the caveat that active galactic nucleus feedback is not included in the present work.
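The "simple equipartition arguments" invoked above amount to comparing the magnetic energy density against the turbulent kinetic energy density of the ISM. A minimal sketch of that comparison (the density, turbulent velocity and field strength are illustrative Milky-Way-like placeholders, not values from the simulations):

```python
import math

def magnetic_energy_density(B_gauss):
    """u_B = B^2 / (8 pi), in erg/cm^3 (Gaussian units)."""
    return B_gauss**2 / (8.0 * math.pi)

def turbulent_energy_density(n_H_cm3, v_turb_cm_s):
    """u_turb = (1/2) rho v^2 for a hydrogen gas, in erg/cm^3."""
    M_H = 1.6726e-24  # hydrogen mass, g
    return 0.5 * n_H_cm3 * M_H * v_turb_cm_s**2

# Illustrative ISM numbers: n ~ 1 cm^-3, v_turb ~ 10 km/s, B ~ 5 uG.
ratio = magnetic_energy_density(5e-6) / turbulent_energy_density(1.0, 1e6)
```

For these typical values the two energy densities agree to within a factor of order unity, which is the sense in which magnetic fields saturate near equipartition with turbulence rather than dominating the SFR budget.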
A Unified Picture of Mass Segregation in Globular Clusters
NASA Astrophysics Data System (ADS)
Watkins, Laura
2017-08-01
The sensitivity, stability and longevity of HST have opened up an exciting new parameter space: we now have velocity measurements, in the form of proper motions (PMs), for stars from the tip of the red giant branch to a few magnitudes below the main-sequence turnoff for a large sample of globular clusters (GCs). For the very first time, we have the opportunity to measure both kinematic and spatial dependences on stellar mass in GCs. The formation and evolution histories of GCs are poorly understood, as are their intermediate-mass black hole populations and binary fractions. However, the current structure and dynamical state of a GC are directly determined by its past history and its components, so by understanding the former we can gain insight into the latter. Quantifying variations in spatial structure for stars of different mass is extremely difficult with photometry alone, as datasets are inhomogeneous and incomplete. We require kinematic data for stars that span a range of stellar masses, combined with proper dynamical modelling. We now have the data in hand, but still lack the models needed to maximise the scientific potential of our HST datasets. Here, we propose to extend existing single-mass discrete dynamical-modelling tools to include kinematic and spatial variations with stellar mass, and to verify the upgrades using mock data generated from N-body models. We will then apply the models to HST PM data and directly quantify energy equipartition and mass segregation in the GCs. The theoretical phase of the project is vital for the success of the subsequent data analysis, and will serve as a benchmark for future observational campaigns with HST, JWST and beyond.
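Quantifying energy equipartition in a cluster is commonly done by fitting the scaling σ ∝ m^(−η) between velocity dispersion and stellar mass, where η = 0.5 corresponds to full equipartition and η = 0 to none. A minimal sketch of recovering η by a least-squares fit in log-log space, verified on synthetic data (the function name and the synthetic values are this sketch's own, not the proposal's tools):

```python
import math

def equipartition_exponent(masses, dispersions):
    """Least-squares slope of log(sigma) vs log(m), returned as eta in
    sigma ~ m^(-eta); full equipartition corresponds to eta = 0.5."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(s) for s in dispersions]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic check: partial equipartition with eta = 0.2.
masses = [0.2, 0.4, 0.6, 0.8]                  # solar masses
sigmas = [10.0 * m ** -0.2 for m in masses]    # km/s
eta = equipartition_exponent(masses, sigmas)   # ~0.2
```

Real N-body models predict only partial equipartition (η well below 0.5), which is exactly the kind of prediction the proposed mock-data verification step is designed to test against HST proper motions.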
NASA Astrophysics Data System (ADS)
Jankovic, I.; Maghrebi, M.; Fiori, A.; Dagan, G.
2017-02-01
Natural gradient steady flow of mean velocity U takes place in heterogeneous aquifers of random logconductivity Y = ln K, characterized by the univariate PDF f(Y) and autocorrelation ρ_Y. Solute transport is analyzed through the breakthrough curve (BTC) at planes at distance x from the injection plane. The study examines the impact of permeability structures sharing the same f(Y) and ρ_Y, but differing in higher-order statistics (integral scales of variograms of Y classes), upon the numerical solution of flow and transport. Flow and transport are solved for 3D structures, rather than the 2D models adopted in most previous works. We considered a few permeability structures, including the widely employed multi-Gaussian, the connected and disconnected fields introduced by Zinn and Harvey [2003], and a model characterized by equipartition of the correlation scale among Y values. We also consider the impact of statistical anisotropy of Y, the shape of ρ_Y, and local diffusion. The main finding is that, unlike in 2D, the prediction of the BTC of ergodic plumes by numerical and analytical models for different structures is quite robust, displaying a seemingly universal behavior, and can be used with confidence in applications. However, as a prerequisite the basic parameters K_G (the geometric mean), σ_Y^2 (the logconductivity variance) and I (the horizontal integral scale of ρ_Y) have to be identified from field data. The results suggest that narrowing the gap between the BTCs in applications can be achieved by obtaining K_ef (the effective conductivity) or U independently (e.g., by pumping tests), rather than attempting to characterize the permeability structure beyond f(Y) and ρ_Y.
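When K_ef cannot be measured independently (e.g. by the pumping tests mentioned above), a common fallback for 3D isotropic multi-Gaussian fields is the Landau-Matheron conjecture, K_ef = K_G exp(σ_Y² (1/2 − 1/d)) with d = 3. A minimal sketch of that estimate; note this closed form is tied to the multi-Gaussian assumption, whereas the paper's point is precisely that other structures share f(Y) and ρ_Y but may differ:

```python
import math

def k_effective_3d(KG, sigma2_Y):
    """Landau-Matheron conjecture for the effective conductivity of a
    3D isotropic multi-Gaussian logconductivity field:
        K_ef = K_G * exp(sigma_Y^2 * (1/2 - 1/3))."""
    return KG * math.exp(sigma2_Y * (0.5 - 1.0 / 3.0))

# Illustrative: unit geometric mean, moderate heterogeneity sigma_Y^2 = 1.
Kef = k_effective_3d(1.0, 1.0)  # exp(1/6), about 1.18 * K_G
```

For weak heterogeneity the formula reduces to the first-order result K_ef ≈ K_G (1 + σ_Y²/6), which is why it serves as a convenient anchor when only K_G and σ_Y² are known from field data.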
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandra, Poonam; Kanekar, Nissim
We report results from a Giant Metrewave Radio Telescope (GMRT) monitoring campaign of the black hole X-ray binary V404 Cygni during its 2015 June outburst. The GMRT observations were carried out at observing frequencies of 1280, 610, 325, and 235 MHz, and extended from June 26.89 UT (a day after the strongest radio/X-ray outburst) to July 12.93 UT. We find the low-frequency radio emission of V404 Cygni to be extremely bright and fast-decaying in the outburst phase, with an inverted spectrum below 1.5 GHz and an intermediate X-ray state. The radio emission settles to a weak, quiescent state ≈11 days after the outburst, with a flat radio spectrum and a soft X-ray state. Combining the GMRT measurements with flux density estimates from the literature, we identify a spectral turnover in the radio spectrum at ≈1.5 GHz on ≈ June 26.9 UT, indicating the presence of a synchrotron self-absorbed emitting region. We use the measured flux density at the turnover frequency, with the assumption of equipartition of energy between the particles and the magnetic field, to infer the jet radius (≈4.0 × 10^13 cm), magnetic field (≈0.5 G), minimum total energy (≈7 × 10^39 erg), and transient jet power (≈8 × 10^34 erg s^−1). The relatively low value of the jet power, despite V404 Cygni's high black hole spin parameter, suggests that the radio jet power does not correlate with the spin parameter.
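The quoted equipartition quantities can be cross-checked with a back-of-envelope calculation (a sketch, not from the paper): for equipartition between particles and field, the minimum total energy is roughly twice the magnetic energy density B²/8π times the emitting volume. Using the quoted jet radius and field strength:

```python
import math

def equipartition_energy(B_gauss, R_cm):
    """Rough minimum total energy for equipartition between particles and
    field: E_tot ~ 2 * u_B * V, with u_B = B^2/(8 pi) and V = (4/3) pi R^3."""
    u_B = B_gauss**2 / (8.0 * math.pi)   # magnetic energy density [erg/cm^3]
    V = (4.0 / 3.0) * math.pi * R_cm**3  # spherical emitting volume [cm^3]
    return 2.0 * u_B * V                 # particles + field [erg]

E = equipartition_energy(B_gauss=0.5, R_cm=4.0e13)
print(f"E_tot ~ {E:.1e} erg")  # same order as the quoted ~7e39 erg
```

The result agrees with the quoted ≈7 × 10^39 erg to within a factor of order unity; the residual difference reflects the precise equipartition weighting used in the analysis.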
Observing Stellar Clusters in the Computer
NASA Astrophysics Data System (ADS)
Borch, A.; Spurzem, R.; Hurley, J.
2006-08-01
We present a new approach that combines direct N-body simulations with stellar population synthesis modeling, in order to model the dynamical evolution and color evolution of globular clusters at the same time. This allows us to model the spectrum, colors and luminosity of each star in the simulated cluster. For this purpose the NBODY6++ code (Spurzem 1999) is used, which is a parallel version of the NBODY code. J. Hurley implemented simple recipes into the NBODY6++ code to follow the changes of stellar masses, radii, and luminosities due to stellar evolution (Hurley et al. 2001), in the sense that each simulation particle represents one star. These prescriptions cover all evolutionary phases and metallicities from solar down to globular cluster values. We took the stellar parameters obtained by this stellar evolution routine and coupled them to the stellar library BaSeL 2.0 (Lejeune et al. 1997). As a first application we investigated the integrated broad-band colors of simulated clusters. We modeled tidally disrupted globular clusters and compared the results with isolated globular clusters. Due to energy equipartition we expected a relative blueing of tidally disrupted clusters, because of the higher escape probability of red, low-mass stars. We indeed observe this behaviour for concentrated globular clusters. The mass-to-light ratio of isolated clusters follows exactly a color-M/L correlation, similar to that described in Bell and de Jong (2001) for spiral galaxies. At variance with this correlation, in tidally disrupted clusters the M/L ratio becomes significantly lower at the time of cluster dissolution. Hence, for isolated clusters the behavior of the stellar population is not influenced by dynamical evolution, whereas the stellar population of tidally disrupted clusters is strongly influenced by dynamical effects.
Low-frequency radio constraints on the synchrotron cosmic web
NASA Astrophysics Data System (ADS)
Vernstrom, T.; Gaensler, B. M.; Brown, S.; Lenc, E.; Norris, R. P.
2017-06-01
We present a search for diffuse synchrotron emission from the cosmic web by cross-correlating 180-MHz radio images from the Murchison Widefield Array with tracers of large-scale structure (LSS). We use two versions of the radio image covering 21.76° × 21.76°, with point sources brighter than 0.05 Jy subtracted, with and without filtering of Galactic emission. As tracers of the LSS, we use the Two Micron All-Sky Survey and the Wide-field Infrared Survey Explorer redshift catalogues to produce galaxy number density maps. The cross-correlation functions all show peak amplitudes at 0°, decreasing with varying slopes towards zero correlation over a range of 1°. The cross-correlation signals include components from point-source, Galactic, and extragalactic diffuse emission. We use models of the diffuse emission, obtained by smoothing the density maps with Gaussians of sizes 1-4 Mpc, to find limits on the cosmic web components. From these models, we find 99.7 per cent surface brightness upper limits in the range of 0.09-2.20 mJy beam^−1 (average beam size of 2.6 arcmin), corresponding to 0.01-0.30 mJy arcmin^−2. Assuming equipartition between the energy densities of cosmic rays and the magnetic field, the flux density limits translate to magnetic field strength limits of 0.03-1.98 μG, depending heavily on the spectral index. We conclude that for a 3σ detection of 0.1 μG magnetic field strengths via cross-correlations, image depths of sub-mJy to sub-μJy are necessary. We include discussion of the treatment and effect of extragalactic point sources and Galactic emission, and of next steps for building on this work.
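The quoted unit conversion from mJy beam^−1 to mJy arcmin^−2 can be reproduced with a short sketch, assuming a Gaussian beam, for which the beam solid angle is π FWHM²/(4 ln 2):

```python
import math

def mjy_per_beam_to_mjy_per_arcmin2(s_beam, fwhm_arcmin):
    """Convert surface brightness from mJy/beam to mJy/arcmin^2 for a
    Gaussian beam; beam area = pi * FWHM^2 / (4 ln 2) in arcmin^2."""
    beam_area = math.pi * fwhm_arcmin**2 / (4.0 * math.log(2.0))
    return s_beam / beam_area

# 2.6 arcmin average beam, with the quoted limits of 0.09 and 2.20 mJy/beam
lo = mjy_per_beam_to_mjy_per_arcmin2(0.09, 2.6)
hi = mjy_per_beam_to_mjy_per_arcmin2(2.20, 2.6)
print(f"{lo:.2f}-{hi:.2f} mJy arcmin^-2")  # ~0.01-0.29, matching the quoted range
```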
NASA Technical Reports Server (NTRS)
Vinas, Adolfo F.; Moya, Pablo S.; Navarro, Roberto; Araneda, Jamie A.
2014-01-01
Two fundamental and challenging problems of laboratory and astrophysical plasmas are understanding the relaxation of a collisionless plasma with nearly isotropic velocity distribution functions and the resultant state of near-equipartition of energy density with electromagnetic plasma turbulence. Here, we present the results of a study which shows the role that higher-order modes play in limiting the electromagnetic whistler-like fluctuations in thermal and non-thermal plasmas. Our main results show that for a thermal plasma the magnetic fluctuations are confined to regions that are bounded by the least-damped higher-order modes. We further show that the zone where the whistler-cyclotron normal modes merge with the electromagnetic fluctuations shifts to longer wavelengths as the electron beta β_e increases. This merging zone has been interpreted as the beginning of the region where the whistler-cyclotron waves lose their identity and become heavily damped while merging with the fluctuations. Our results further indicate that in the case of non-thermal plasmas, the higher-order modes do not confine the fluctuations, due to the effective higher-temperature effects and the excess of suprathermal plasma particles. The analysis presented here considers the second-order theory of fluctuations and the dispersion relation of weakly transverse fluctuations, with wave vectors parallel to the uniform background magnetic field, in finite-temperature isotropic bi-Maxwellian and Tsallis-kappa-like magnetized electron-proton plasmas. Our results indicate that the spontaneously emitted electromagnetic fluctuations are in fact enhanced over these quasi-modes, suggesting that such modes play an important role in the emission and absorption of electromagnetic fluctuations in thermal or quasi-thermal plasmas.
AGN coronal emission models - I. The predicted radio emission
NASA Astrophysics Data System (ADS)
Raginski, I.; Laor, Ari
2016-06-01
Accretion discs in active galactic nuclei (AGN) may be associated with coronal gas, as suggested by their X-ray emission. Stellar coronal emission includes radio emission, and the AGN corona may also be a significant source of radio emission in radio-quiet (RQ) AGN. We calculate the coronal properties required to produce the observed radio emission in RQ AGN, either from synchrotron emission of power-law (PL) electrons, or from cyclosynchrotron emission of hot, mildly relativistic thermal electrons. We find that a flat spectrum, as observed in about half of RQ AGN, can be produced by a corona with a disc or a spherical configuration, which extends from the innermost regions out to a pc scale. A spectral break to optically thin power-law emission is expected around 300-1000 GHz, as the innermost corona becomes optically thin. In the case of thermal electrons, a sharp spectral cut-off is expected above the break. The position of the break can be measured with very long baseline interferometry observations, which exclude the cold dust emission, and it can be used to probe the properties of the innermost corona. Assuming equipartition of the coronal thermal energy density, the PL electron energy density, and the magnetic field, we find that the energy density in a disc corona should scale as ∼R^−1.3 to produce a flat spectrum. In the spherical case the energy density scales as ∼R^−2, and is ∼4 × 10^−4 of the AGN radiation energy density. In Paper II we derive additional constraints on the coronal parameters from the Güdel-Benz relation, L_radio/L_X-ray ∼ 10^−5, which RQ AGN follow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hotta, H.; Yokoyama, T.; Rempel, M., E-mail: hotta.h@eps.s.u-tokyo.ac.jp
2014-05-01
We carry out non-rotating high-resolution calculations of solar global convection, which resolve convective scales of less than 10 Mm. To cope with the low Mach number conditions in the lower convection zone, we use the reduced speed of sound technique (RSST), which is simple to implement and requires only local communication in the parallel computation. In addition, the RSST allows us to expand the computational domain upward to about 0.99 R_☉, as it can also handle compressible flows. Using this approach, we study the solar convection zone on the global scale, including small-scale near-surface convection. In particular, we investigate the influence of the top boundary condition on the convective structure throughout the convection zone as well as on small-scale dynamo action. Our main conclusions are as follows. (1) The small-scale downflows generated in the near-surface layer penetrate into deeper layers to some extent and excite small-scale turbulence in the region >0.9 R_☉, where R_☉ is the solar radius. (2) In the deeper convection zone (<0.9 R_☉), the convection is not influenced by the location of the upper boundary. (3) Using a large eddy simulation approach, we can achieve small-scale dynamo action and maintain a field of about 0.15-0.25 B_eq throughout the convection zone, where B_eq is the magnetic field in equipartition with the kinetic energy. (4) The overall dynamo efficiency varies significantly in the convection zone as a consequence of the downward-directed Poynting flux and the depth variation of the intrinsic convective scales.
The Jet Heated X-Ray Filament in the Centaurus A Northern Middle Radio Lobe
NASA Astrophysics Data System (ADS)
Kraft, R. P.; Forman, W. R.; Hardcastle, M. J.; Birkinshaw, M.; Croston, J. H.; Jones, C.; Nulsen, P. E. J.; Worrall, D. M.; Murray, S. S.
2009-06-01
We present results from a 40 ks XMM-Newton observation of the X-ray filament coincident with the southeast edge of the Centaurus A Northern Middle Radio Lobe (NML). We find that the X-ray filament consists of five spatially resolved X-ray knots embedded in a continuous diffuse bridge. The spectrum of each knot is well fitted by a thermal model with temperatures ranging from 0.3 to 0.7 keV and subsolar elemental abundances. In four of the five knots, nonthermal models are a poor fit to the spectra, conclusively ruling out synchrotron or IC/CMB mechanisms for their emission. The internal pressures of the knots exceed that of the ambient interstellar medium or the equipartition pressure of the NML by more than an order of magnitude, demonstrating that they must be short lived (~3 × 10^6 yr). Based on energetic arguments, it is implausible that these knots have been ionized by the beamed flux from the active galactic nucleus of Cen A or that they have been shock heated by supersonic inflation of the NML. In our view, the most viable scenario for the origin of the X-ray knots is that they are the result of cold gas shock heated by a direct interaction with the jet. The most plausible model of the NML is that it is a bubble from a previous nuclear outburst that is being re-energized by the current outburst. The northeast inner lobe and the large-scale jet are lossless channels through which the jet material rapidly travels to the NML in this scenario. We also report the discovery of a large-scale (at least 35 kpc radius) gas halo around Cen A.
Detection of an Optical/UV Jet/Counterjet and Multiple Spectral Components in M84
NASA Astrophysics Data System (ADS)
Meyer, Eileen T.; Petropoulou, Maria; Georganopoulos, Markos; Chiaberge, Marco; Breiding, Peter; Sparks, William B.
2018-06-01
We report an optical/UV jet and counterjet in M84, previously unreported in archival Hubble Space Telescope imaging. With archival VLA, ALMA, and Chandra imaging, we examine the first well-sampled spectral energy distribution of the inner jet of M84, where we find that multiple co-spatial spectral components are required. In particular, the ALMA data reveal that the radio spectrum of all four knots in the jet turns over at approximately 100 GHz, which requires a second component for the bright optical/UV emission. Further, the optical/UV has a soft spectrum and is inconsistent with the relatively flat X-ray spectrum, which indicates a third component at higher energies. Using archival VLA imaging, we have measured the proper motion of the innermost knots at 0.9 ± 0.6 and 1.1 ± 0.4 c, which when combined with the low jet-to-counterjet flux ratio yields an orientation angle for the system of 74° (+9°, −18°). In the radio, we find high fractional polarization of the inner jet of up to 30%, while in the optical no polarization is detected (<8%). We investigate different scenarios for explaining the particular multicomponent spectral energy distribution (SED) of the knots. Inverse Compton models are ruled out due to the extreme departure from equipartition and the unrealistically high total jet power required. The multicomponent SED can be naturally explained within a leptohadronic scenario, but at the cost of very high power in relativistic protons. A two-component synchrotron model remains a viable explanation, but more theoretical work is needed to explain the origin and properties of the electron populations.
ALMA and VLA observations of emission from the environment of Sgr A*
NASA Astrophysics Data System (ADS)
Yusef-Zadeh, F.; Schödel, R.; Wardle, M.; Bushouse, H.; Cotton, W.; Royster, M. J.; Kunneriath, D.; Roberts, D. A.; Gallego-Cano, E.
2017-10-01
We present 44 and 226 GHz observations of the Galactic Centre within 20 arcsec of Sgr A*. Millimetre continuum emission at 226 GHz is detected from eight stars that have previously been identified at near-IR and radio wavelengths. We also detect a 5.8 mJy source at 226 GHz coincident with the magnetar SGR J1745-29, located 2.39 arcsec SE of Sgr A*, and identify a new 2.5 arcsec × 1.5 arcsec halo of mm emission centred on Sgr A*. The X-ray emission from this halo has been detected previously and is interpreted in terms of a radiatively inefficient accretion flow. The mm halo surrounds an EW linear feature that appears to arise from Sgr A* and coincides with the diffuse X-ray emission and a minimum in the near-IR extinction. We argue that the millimetre emission is produced by synchrotron emission from relativistic electrons in equipartition with a ∼1.5 mG magnetic field. The origin of this halo is unclear, but its coexistence with hot gas supports scenarios in which the gas is produced by winds from the fast-moving S-stars, by the photoevaporation of low-mass YSO discs, or by a jet-driven outflow from Sgr A*. The spatial anti-correlation of the X-ray, radio and mm emission from the halo, together with the low near-IR extinction, provides compelling evidence of an outflow sweeping up the interstellar material, creating a dust cavity within 2 arcsec of Sgr A*. Finally, the radio and mm counterparts to eight near-IR-identified stars within ∼10 arcsec of Sgr A* provide accurate astrometry to determine the positional shift between the peak emission at 44 and 226 GHz.
The HST Large Programme on ω Centauri. II. Internal Kinematics
NASA Astrophysics Data System (ADS)
Bellini, Andrea; Libralato, Mattia; Bedin, Luigi R.; Milone, Antonino P.; van der Marel, Roeland P.; Anderson, Jay; Apai, Dániel; Burgasser, Adam J.; Marino, Anna F.; Rees, Jon M.
2018-01-01
In this second installment of the series, we look at the internal kinematics of the multiple stellar populations of the globular cluster ω Centauri in one of the parallel Hubble Space Telescope (HST) fields, located at about 3.5 half-light radii from the center of the cluster. Thanks to the over 15 yr long baseline and the exquisite astrometric precision of the HST cameras, well-measured stars in our proper-motion catalog have errors as low as ∼10 μas yr^−1, and the catalog itself extends to near the hydrogen-burning limit of the cluster. We show that second-generation (2G) stars are significantly more radially anisotropic than first-generation (1G) stars. The latter are instead consistent with an isotropic velocity distribution. In addition, 1G stars have excess systemic rotation in the plane of the sky with respect to 2G stars. We show that the six populations below the main-sequence (MS) knee identified in our first paper are associated with the five main population groups recently isolated on the upper MS in the core of the cluster. Furthermore, we find both 1G and 2G stars in the field to be far from energy equipartition, with η_1G = −0.007 ± 0.026 for the former and η_2G = 0.074 ± 0.029 for the latter, where η is defined so that the velocity dispersion σ_μ scales with stellar mass as σ_μ ∝ m^−η. The kinematical differences reported here can help constrain the formation mechanisms for the multiple stellar populations in ω Centauri and other globular clusters. We make our astro-photometric catalog publicly available.
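The equipartition parameter η defined here can be estimated from any proper-motion catalog by a log-log fit of velocity dispersion against stellar mass. A minimal illustration on mock data (all numbers below are illustrative, not from the catalog):

```python
import math
import random

def fit_equipartition_eta(masses, sigmas):
    """Least-squares slope of log(sigma) vs log(m); since sigma ∝ m^-eta,
    eta is minus the fitted slope."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope

# mock population with an input eta of 0.074 (2G-like) and 1% noise
random.seed(1)
masses = [0.2 + 0.6 * random.random() for _ in range(500)]       # solar masses
sigmas = [0.5 * m ** -0.074 * (1 + 0.01 * random.gauss(0, 1))    # mas/yr
          for m in masses]
eta = fit_equipartition_eta(masses, sigmas)
print(f"recovered eta = {eta:.3f}")  # close to the input 0.074
```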
The extreme blazar AO 0235+164 as seen by extensive ground and space radio observations
NASA Astrophysics Data System (ADS)
Kutkin, A. M.; Pashchenko, I. N.; Lisakov, M. M.; Voytsik, P. A.; Sokolovsky, K. V.; Kovalev, Y. Y.; Lobanov, A. P.; Ipatov, A. V.; Aller, M. F.; Aller, H. D.; Lahteenmaki, A.; Tornikoski, M.; Gurvits, L. I.
2018-04-01
Clues to the physical conditions in radio cores of blazars come from measurements of brightness temperatures as well as from effects produced by intrinsic opacity. We study the properties of the ultra-compact blazar AO 0235+164 with the RadioAstron ground-space radio interferometer, multifrequency VLBA, EVN, and single-dish radio observations. We employ visibility modelling and image stacking to derive the structure and kinematics of the source, and use Gaussian process regression to find the relative multiband time delays of the flares. The multifrequency core size and time lags support prevailing synchrotron self-absorption. The intrinsic brightness temperature of the core derived from ground-based very long baseline interferometry (VLBI) is close to the equipartition-regime value. At the same time, there is evidence for ultra-compact features of size less than 10 μas in the source, which might be responsible for the extreme apparent brightness temperatures of up to 10^14 K measured by RadioAstron. In 2007-2016 the VLBI components in the source at 43 GHz are found predominantly in two directions, suggesting a bend of the outflow from southern to northern direction. The apparent opening angle of the jet seen in the stacked image at 43 GHz is two times wider than that at 15 GHz, indicating a collimation of the flow within the central 1.5 mas. We estimate the Lorentz factor Γ = 14, the Doppler factor δ = 21, and the viewing angle θ = 1.7° of the apparent jet base, derive the gradients of magnetic field strength and electron density in the outflow, and the distance between the jet apex and the core at each frequency.
Heat capacity of xenon adsorbed on nanobundle grooves
NASA Astrophysics Data System (ADS)
Chishko, K. A.; Sokolova, E. S.
2016-02-01
A model of a one-dimensional nonideal gas in an external transverse force field is used to interpret the experimentally observed thermodynamic properties of xenon deposited in grooves on the surface of carbon nanobundles. A nonideal gas model with pairwise interactions is not entirely adequate for describing dense adsorbates (at low temperatures), but it makes it easy to account for the exchange of particles between the 1D adsorbate and the 3D atmosphere, which is an important factor at intermediate (on the order of 35 K for xenon) and, especially, high (˜100 K) temperatures. In this paper, we examine a 1D real gas, taking only the one-dimensional Lennard-Jones interaction into account, but under exact equilibrium with respect to the number of particles between the 1D adsorbate and the 3D atmosphere of the measurement cell. The low-temperature branch of the specific heat is fitted independently with an elastic chain model so as to obtain the best agreement between theory and experiment over the widest possible region, beginning at zero temperature. The gas approximation sets in above the temperatures at which the phonon specific heat of the chain essentially reaches the one-dimensional equipartition law. The basic parameters of both models can then be chosen so that the heat capacity C(T) of the chain transforms essentially continuously into the corresponding curve of the gas approximation. Thus, it can be expected that an adequate interpretation of the real temperature dependences of the specific heat of low-dimensional atomic adsorbates can be obtained through a reasonable combination of the phonon and gas approximations. The main parameters of the gas approximation (such as the desorption energy) obtained by fitting the theory to experiments on the specific heat of xenon correlate well with published data.
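The approach of the phonon specific heat to the one-dimensional equipartition value (C → 1 k_B per particle) can be illustrated with a 1D Debye-model sketch; the Debye temperature below is an arbitrary illustrative value, not a fitted parameter from this work:

```python
import math

def c_debye_1d(T, theta_D, n_steps=2000):
    """Heat capacity per particle (in units of k_B) of a 1D harmonic chain in
    the Debye approximation:
    C = (T/theta_D) * Int_0^{theta_D/T} x^2 e^x / (e^x - 1)^2 dx."""
    x_max = theta_D / T
    h = x_max / n_steps
    total = 0.0
    for i in range(n_steps):
        x = (i + 0.5) * h                     # midpoint rule
        ex = math.exp(x)
        total += x * x * ex / (ex - 1.0) ** 2
    return (T / theta_D) * total * h

theta_D = 55.0  # illustrative Debye temperature in K
for T in (5.0, 20.0, 100.0):
    # approaches the classical equipartition value of 1 k_B at high T
    print(T, round(c_debye_1d(T, theta_D), 3))
```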
Scaling laws for mixing and dissipation in unforced rotating stratified turbulence
NASA Astrophysics Data System (ADS)
Pouquet, A.; Rosenberg, D.; Marino, R.; Herbert, C.
2018-06-01
We present a model for the scaling of mixing in weakly rotating stratified flows characterized by their Rossby, Froude and Reynolds numbers Ro, Fr, Re. It is based on quasi-equipartition between kinetic and potential modes, sub-dominant vertical velocity, and a lessening of the energy transfer to small scales as measured by the ratio r_E of the kinetic energy dissipation to its dimensional expression. We determine the domains of validity in a numerical study of the unforced Boussinesq equations, mostly on grids of 1024^3 points, with Ro/Fr > 2.5 and 1600 < Re < 1.9×10^4; the Prandtl number is one, and initial conditions are either isotropic and at large scale for the velocity and zero for the temperature θ, or in geostrophic balance. Three regimes in Fr are observed: dominant waves, eddy-wave interactions, and strong turbulence. A wave-turbulence balance for the transfer time leads to r_E growing linearly with Fr in the intermediate regime, with a saturation at ~0.3 or more, depending on initial conditions, for larger Froude numbers. The Ellison scale is also found to scale linearly with Fr, and the flux Richardson number Rf transitions at roughly the same parameter values. Putting together the three relationships of the model allows for the prediction of mixing efficiency scaling as Fr^−2 ~ R_B^−1 in the low and intermediate regimes, whereas for higher Fr it scales as R_B^−1/2, as already observed: as turbulence strengthens, r_E ~ 1, the velocity is isotropic, and smaller buoyancy fluxes altogether correspond to a decoupling of velocity and temperature fluctuations, the latter becoming passive.
NASA Astrophysics Data System (ADS)
Weber, Maria Ann; Browning, Matthew; Nelson, Nicholas
2018-01-01
Starspots are windows into a star's internal dynamo mechanism. However, the manner by which the dynamo-generated magnetic field traverses the stellar interior to emerge at the surface is not especially well understood. Establishing the details of magnetic flux emergence plays a key role in deciphering stellar dynamos and observed starspot properties. In the solar context, insight into this process has been obtained by assuming that the magnetism giving rise to sunspots consists partly of idealized thin flux tubes (TFTs). Here, we present three sets of TFT simulations in rotating spherical shells of convection: one representative of the Sun, the second of a solar-like rapid rotator, and the third of a fully convective M dwarf. Our solar simulations reproduce sunspot observables such as low-latitude emergence, tilting toward the equator following the Joy's law trend, and a phenomenon akin to active longitudes. Further, we compare the evolution of rising flux tubes in our (computationally inexpensive) TFT simulations to buoyant magnetic structures that arise naturally in a unique global simulation of a rapidly rotating Sun. We comment on the role of rapid rotation, the Coriolis force, and external torques imparted by the surrounding convection in establishing the trajectories of the flux tubes across the convection zone. In our fully convective M dwarf simulations, the expected starspot latitudes deviate from the solar trend, favoring significantly poleward latitudes unless the differential rotation is sufficiently prograde or the magnetic field is strongly super-equipartition. Together, our work provides a link between dynamo-generated magnetic fields, turbulent convection, and observations of starspots along the lower main sequence.
Cygnus OB2 DANCe: A high-precision proper motion study of the Cygnus OB2 association
NASA Astrophysics Data System (ADS)
Wright, Nicholas J.; Bouy, Herve; Drew, Janet E.; Sarro, Luis Manuel; Bertin, Emmanuel; Cuillandre, Jean-Charles; Barrado, David
2016-08-01
We present a high-precision proper motion study of 873 X-ray and spectroscopically selected stars in the massive OB association Cygnus OB2 as part of the DANCe project. These were calculated from images spanning a 15 yr baseline and have typical precisions <1 mas yr^−1. We calculate the velocity dispersions in the two axes to be σ_α(c) = 13.0 (+0.8, −0.7) km s^−1 and σ_δ(c) = 9.1 (+0.5, −0.5) km s^−1, using a two-component, two-dimensional model that takes into account the uncertainties on the measurements. This gives a three-dimensional velocity dispersion of σ3D = 17.8 ± 0.6 km s^−1, implying a virial mass significantly larger than the observed stellar mass and confirming that the association is gravitationally unbound. The association appears to be dynamically unevolved, as evidenced by considerable kinematic substructure, non-isotropic velocity dispersions and a lack of energy equipartition. The proper motions show no evidence for a global expansion pattern, with approximately the same amount of kinetic energy in expansion as in contraction, which argues against the association being an expanded star cluster disrupted by processes such as residual gas expulsion or tidal heating. The kinematic substructures, which appear to be close to virial equilibrium and have typical masses of 40-400 M⊙, also do not appear to have been affected by the expulsion of the residual gas. We conclude that Cyg OB2 was most likely born highly substructured and globally unbound, with the individual subgroups born in (or close to) virial equilibrium, and that the OB association has not experienced significant dynamical evolution since then.
NASA Astrophysics Data System (ADS)
Kundu, E.; Lundqvist, P.; Pérez-Torres, M. A.; Herrero-Illana, R.; Alberdi, A.
2017-06-01
We modeled the radio non-detections of two Type Ia supernovae (SNe), SN 2011fe and SN 2014J, considering synchrotron emission from the interaction between the SN ejecta and the circumstellar medium. For ejecta whose outer parts have a power-law density structure, we compare the synchrotron emission with the radio observations. Assuming that 20% of the bulk shock energy is shared equally between electrons and magnetic fields, we find a very low-density medium around both SNe. A less tenuous medium, with particle density ∼1 cm^−3 as could be expected around both SNe, is allowed when the magnetic field amplification is less than that presumed for energy equipartition. This conclusion also holds if the progenitor of SN 2014J was a rigidly rotating white dwarf (WD) with a main-sequence (MS) or red giant companion. For a He star companion, or an MS companion for SN 2014J, with 10% and 1% of the bulk kinetic energy in magnetic fields, we obtain mass-loss rates of <10^−9 and ≲4 × 10^−9 M_⊙ yr^−1, respectively, for a wind velocity of 100 km s^−1. The former requires a mass accretion efficiency of >99% onto the WD, while the latter case is less restrictive. However, if the tenuous medium is due to a recurrent nova, it is difficult for our model to predict synchrotron luminosities. Although the formation channels of SNe 2011fe and 2014J are not clear, the null detection at radio wavelengths could point toward a low amplification efficiency for magnetic fields in SN shocks.
Exploring the making of a galactic wind in the starbursting dwarf irregular galaxy IC 10 with LOFAR
NASA Astrophysics Data System (ADS)
Heesen, V.; Rafferty, D. A.; Horneffer, A.; Beck, R.; Basu, A.; Westcott, J.; Hindson, L.; Brinks, E.; ChyŻy, K. T.; Scaife, A. M. M.; Brüggen, M.; Heald, G.; Fletcher, A.; Horellou, C.; Tabatabaei, F. S.; Paladino, R.; Nikiel-Wroczyński, B.; Hoeft, M.; Dettmar, R.-J.
2018-05-01
Low-mass galaxies are subject to strong galactic outflows, in which cosmic rays may play an important role; they can be best traced with low-frequency radio continuum observations, which are less affected by spectral ageing. We present a study of the nearby starburst dwarf irregular galaxy IC 10 using observations at 140 MHz with the Low-Frequency Array (LOFAR), at 1580 MHz with the Very Large Array (VLA), and at 6200 MHz with the VLA and the 100-m Effelsberg telescope. We find that IC 10 has a low-frequency radio halo, which manifests itself as a second component (thick disc) in the minor-axis profiles of the non-thermal radio continuum emission at 140 and 1580 MHz. These profiles are then fitted with 1D cosmic ray transport models for pure diffusion and advection. We find that a diffusion model fits best, with a diffusion coefficient of D = (0.4-0.8) × 10^26 (E/GeV)^0.5 cm^2 s^−1, which is at least an order of magnitude smaller than estimates from both anisotropic diffusion and the diffusion length. In contrast, advection models, which cannot be ruled out owing to the mild inclination, provide poorer fits but result in advection speeds close to the escape velocity of ≈50 km s^−1, as expected for a cosmic ray-driven wind. Our favoured model with an accelerating wind provides a self-consistent solution, in which the magnetic field is in energy equipartition with both the warm neutral and warm ionized medium, with an important contribution from cosmic rays. Consequently, cosmic rays can play a vital role in the launching of galactic winds at the disc-halo interface.
NASA Astrophysics Data System (ADS)
Franci, Luca; Landi, Simone; Matteini, Lorenzo; Verdini, Andrea; Hellinger, Petr
2016-04-01
We investigate the properties of the ion-scale spectral break of solar wind turbulence by means of two-dimensional, large-scale, high-resolution hybrid particle-in-cell simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we add a spectrum of in-plane large-scale magnetic and kinetic fluctuations, with energy equipartition and vanishing correlation. We perform a set of ten simulations with different values of the ion plasma beta, β_i. In all cases, we observe the power spectrum of the total magnetic fluctuations following a power law with a spectral index of −5/3 in the inertial range, with a smooth break around ion scales and a steeper power law in the sub-ion range. This spectral break always occurs at spatial scales of the order of the proton gyroradius, ρ_i, and the proton inertial length, d_i = ρ_i/√β_i. When the plasma beta is of the order of 1, the two scales are very close to each other, and determining which of them is directly related to the steepening of the spectrum is not straightforward. In order to overcome this limitation, we extended the range of values of β_i over three orders of magnitude, from 0.01 to 10, so that the two ion scales were well separated. This allowed us to observe that the break always seems to occur at the larger of the two scales, i.e., at d_i for β_i < 1 and at ρ_i for β_i > 1. The effect of β_i on the spectra of the parallel and perpendicular magnetic components separately, and on the density fluctuations, is also investigated. We compare all our numerical results with solar wind observations and suggest possible explanations for our findings.
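The reported break-scale behaviour follows directly from d_i = ρ_i/√β_i: the larger of the two ion scales switches from d_i to ρ_i at β_i = 1. A minimal sketch (in arbitrary units):

```python
def predicted_break_scale(rho_i, beta_i):
    """Ion-scale spectral break location: the larger of the proton gyroradius
    rho_i and the proton inertial length d_i = rho_i / sqrt(beta_i)."""
    d_i = rho_i / beta_i ** 0.5
    return max(d_i, rho_i)  # d_i for beta_i < 1, rho_i for beta_i > 1

rho_i = 1.0  # arbitrary units
for beta_i in (0.01, 1.0, 10.0):
    print(beta_i, predicted_break_scale(rho_i, beta_i))
```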
Cooling Timescales and Temporal Structure of Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Sari, Re'em; Narayan, Ramesh; Piran, Tsvi
1996-12-01
A leading mechanism for producing cosmological gamma-ray bursts (GRBs) is via ultrarelativistic particles in an expanding fireball. The kinetic energy of the particles is converted into thermal energy in two shocks, a forward shock and a reverse shock, when the outward flowing particles encounter the interstellar medium. The thermal energy is then radiated via synchrotron emission and Comptonization. We estimate the synchrotron cooling timescale of the shocked material in the forward and reverse shocks for electrons of various Lorentz factors, focusing in particular on those electrons whose radiation falls within the energy detection range of the BATSE detectors. We find that in order to produce the rapid variability observed in most bursts, the energy density of the magnetic field in the shocked material must be greater than about 1% of the thermal energy density. In addition, the electrons must be nearly in equipartition with the protons, since otherwise the radiative efficiency of GRBs would be unreasonably low. Inverse Compton scattering can increase the cooling rate of the relevant electrons, but the Comptonized emission itself is never within the BATSE range. These arguments allow us to pinpoint the conditions within the radiating regions in GRBs and to determine the important radiation processes. In addition, they provide a plausible explanation for several observations. The model predicts that the duty cycle of intensity variations in GRB light curves should be nearly independent of burst duration and should scale inversely as the square root of the observed photon energy. Both correlations are in agreement with observations. The model also provides a plausible explanation for the bimodal distribution of burst durations. There is no explanation, however, for the presence of a characteristic break energy in GRB spectra.
Discovery of high-energy and very high energy γ-ray emission from the blazar RBS 0413
Aliu, E.; Archambault, S.; Arlen, T.; ...
2012-04-18
Here, we report on the discovery of high-energy (HE; E > 0.1 GeV) and very high energy (VHE; E > 100 GeV) γ-ray emission from the high-frequency-peaked BL Lac object RBS 0413. VERITAS, a ground-based γ-ray observatory, detected VHE γ rays from RBS 0413 with a statistical significance of 5.5 standard deviations (σ) and a γ-ray flux of (1.5 ± 0.6 stat ± 0.7 syst) × 10^-8 photons m^-2 s^-1 (~1% of the Crab Nebula flux) above 250 GeV. The observed spectrum can be described by a power law with a photon index of 3.18 ± 0.68 stat ± 0.30 syst. Contemporaneous observations with the Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope detected HE γ rays from RBS 0413 with a statistical significance of more than 9σ, a power-law photon index of 1.57 ± 0.12 stat +0.11/-0.12 syst, and a γ-ray flux between 300 MeV and 300 GeV of (1.64 ± 0.43 stat +0.31/-0.22 syst) × 10^-5 photons m^-2 s^-1. We also present the results from Fermi-LAT and VERITAS, including spectral energy distribution modeling of the γ-ray, quasi-simultaneous X-ray (Swift-XRT), ultraviolet (Swift-UVOT), and R-band optical (MDM) data. Finally, we find that, if conditions close to equipartition are required, both the combined synchrotron self-Compton/external-Compton and the lepto-hadronic models are preferred over a pure synchrotron self-Compton model.
NASA Astrophysics Data System (ADS)
Dieckmann, M. E.
2008-11-01
Recent particle-in-cell (PIC) simulation studies have addressed particle acceleration and magnetic field generation in relativistic astrophysical flows by plasma phase space structures. We discuss the relevant astrophysical environments, such as the jets of compact objects, and give an overview of global PIC simulations of shocks. These reveal several types of phase space structures that are relevant for energy dissipation. These structures are typically coupled in shocks, but we choose to consider them here in isolated form. Three structures are reviewed. (1) Simulations of interpenetrating or colliding plasma clouds can trigger filamentation instabilities, while simulations of thermally anisotropic plasmas exhibit the Weibel instability. Both transform a spatially uniform plasma into current filaments, and these filament structures cause the growth of the magnetic fields. (2) The development of a modified two-stream instability is discussed. It saturates first through the formation of electron phase space holes. The relativistic electron clouds modulate the ion beam, and a secondary, spatially localized electrostatic instability grows, which saturates by forming a relativistic ion phase space hole that accelerates electrons to ultra-relativistic speeds. (3) A simulation is also revisited in which two clouds of an electron-ion plasma collide at a speed of 0.9c. The unequal densities of the two clouds and a magnetic field oblique to the collision velocity vector result in waves with a mixed electrostatic and electromagnetic polarity. The waves give rise to growing corkscrew distributions in the electrons and ions that establish an equipartition between the electron, ion and magnetic energy. The filament, phase space hole and corkscrew structures are discussed with respect to electron acceleration and magnetic field generation.
Abdo, A. A.; Ackermann, M.; Ajello, M.; ...
2011-03-10
Here, we report on observations of BL Lacertae during the first 18 months of Fermi LAT science operations and present results from a 48-day multifrequency coordinated campaign from 2008 August 19 to 2008 October 7. The radio to gamma-ray behavior of BL Lac is unveiled during a low-activity state thanks to the coordinated observations of radio-band (Metsähovi and VLBA), near-IR/optical (Tuorla, Steward, OAGH, and MDM), and X-ray (RXTE and Swift) observatories. No variability was resolved in gamma rays during the campaign, and the brightness level was 15 times lower than the level of the 1997 EGRET outburst. Moderate and uncorrelated variability was detected in UV and X-rays. The X-ray spectrum is found to be concave, indicating the transition region between the low- and high-energy components of the spectral energy distribution (SED). VLBA observations detected a synchrotron spectrum self-absorption turnover in the innermost part of the radio jet, which appears elongated and inhomogeneous, and constrained the average magnetic field there to be less than 3 G. Over the following months, BL Lac appeared variable in gamma rays, showing flares (in 2009 April and 2010 January). There is no evidence for a correlation of the gamma rays with the optical flux monitored from the ground over the 18 months. The SED may be described by a single-zone or a two-zone synchrotron self-Compton (SSC) model, but a hybrid SSC plus external-radiation Compton model seems to be preferred based on the observed variability and the fact that it provides a fit closest to equipartition.
Experimental design methods for bioengineering applications.
Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri
2016-01-01
Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used to determine the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess in question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and their application to different bioengineering processes is then analyzed.
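As a minimal illustration of the first method in that list, a two-level full factorial design simply enumerates every combination of factor levels, so k factors at 2 levels give 2^k runs (the factors and levels below are hypothetical, not from the review):

```python
from itertools import product

# Minimal sketch of a full factorial design: enumerate every combination
# of factor levels. The factors (temperature, pH, agitation) and their
# low/high levels are illustrative placeholders, not from the review.
def full_factorial(levels_per_factor):
    """levels_per_factor: one list of levels per factor."""
    return list(product(*levels_per_factor))

# Example: three factors, each at two levels -> 2**3 = 8 runs.
design = full_factorial([[30, 37], [6.5, 7.5], [100, 200]])
print(len(design))
```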
An overview of very high level software design methods
NASA Technical Reports Server (NTRS)
Asdjodi, Maryam; Hooper, James W.
1988-01-01
Very high level design methods emphasize the automatic transfer of requirements to formal design specifications, and/or may concentrate on the automatic transformation of formal design specifications, which include some semantic information about the system, into machine-executable form. Very high level design methods range from general, domain-independent methods to approaches implementable for specific applications or domains. By applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming and other methods, different approaches for higher-level software design are being developed. Though a given approach does not always fall exactly into any specific class, this paper provides a classification of very high level design methods, with examples for each class. These methods are analyzed and compared based on their basic approaches, strengths and feasibility for future expansion toward automatic development of software systems.
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
[Review of research design and statistical methods in Chinese Journal of Cardiology].
Zhang, Li-jun; Yu, Jin-ming
2009-07-01
To evaluate the research design and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods in all of the original papers published in the journal from December 2007 to November 2008. The most frequently used research designs were cross-sectional design (34%), prospective design (21%) and experimental design (25%). Of all the articles, 49 (25%) used wrong statistical methods, 29 (15%) lacked some form of statistical analysis, and 23 (12%) had inconsistencies in the description of methods. There were significant differences between different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated-measures data were not analyzed with repeated-measures methods. Many problems exist in the Chinese Journal of Cardiology. Better research design and correct use of statistical methods are still needed, and stricter review by statisticians and epidemiologists is also required to improve the quality of the literature.
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches offer a large design space with few variables. Parametric methods in common use are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their capabilities for airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is more limited; the most popular is the Free-form deformation method. Methods extended from two-dimensional parametric approaches have promising prospects in aircraft modeling. Since parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
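The Hicks-Henne method mentioned above perturbs a baseline airfoil with smooth bump functions; a minimal sketch of a single bump, b(x) = a·sin(π·x^(ln 0.5/ln t))^w, with illustrative parameter values:

```python
import math

# Sketch of one Hicks-Henne bump function: it adds a smooth perturbation
# to a baseline airfoil surface, peaking at chordwise station x = t and
# vanishing at the leading edge. Amplitude a, peak location t and width
# exponent w are the design variables (values here are illustrative).
def hicks_henne_bump(x, a=0.01, t=0.3, w=2.0):
    e = math.log(0.5) / math.log(t)
    return a * math.sin(math.pi * x**e) ** w

peak = hicks_henne_bump(0.3)   # the bump peaks at x = t, where b = a
edge = hicks_henne_bump(0.0)   # and vanishes at the leading edge
print(peak, edge)
```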
The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter
NASA Technical Reports Server (NTRS)
Townsend, Barbara K.
1986-01-01
A control-system design method, Quadratic Optimal Cooperative Control Synthesis (CCS), is applied to the design of a Stability and Control Augmentation System (SCAS). The CCS design method differs from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot model to create the desired performance. The design model, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing Vertol CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and Linear Quadratic Regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs which compare favorably with the frequency-domain approach.
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
Applications of mixed-methods methodology in clinical pharmacy research.
Hadi, Muhammad Abdul; Closs, S José
2016-06-01
Introduction: Mixed-methods methodology, as the name suggests, refers to the mixing of elements of both qualitative and quantitative methodologies in a single study. In the past decade, mixed-methods methodology has gained popularity among healthcare researchers as it promises to bring together the strengths of both qualitative and quantitative approaches. Methodology: A number of mixed-methods designs are available in the literature, and the four most commonly used designs in healthcare research are the convergent parallel design, the embedded design, the exploratory design, and the explanatory design. Each has its own unique advantages, challenges and procedures, and selection of a particular design should be guided by the research question. Guidance on designing, conducting and reporting mixed-methods research is available in the literature, so it is advisable to adhere to this to ensure methodological rigour. When to use: It is best suited when the research questions require triangulating findings from different methodologies to explain a single phenomenon; clarifying the results of one method using another method; informing the design of one method based on the findings of another method; developing a scale/questionnaire; or answering different research questions within a single study. Two case studies are presented to illustrate possible applications of mixed-methods methodology. Limitations: Possessing the necessary knowledge and skills to undertake qualitative and quantitative data collection, analysis, interpretation and integration remains the biggest challenge for researchers conducting mixed-methods studies. Sequential study designs are often time-consuming, being in two (or more) phases, whereas concurrent study designs may require more than one data collector to collect both qualitative and quantitative data at the same time.
NASA Astrophysics Data System (ADS)
Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong
2018-05-01
This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by the Shifted Hammersley Method (SHM) and evaluated by the finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is adjusted by changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using a multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated on a classic three-phase power TDO problem.
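A minimal sketch of the second-order polynomial response surface at the core of such methods, fitted by least squares to synthetic data standing in for FEM evaluations (all numbers are illustrative, not from the paper):

```python
import numpy as np

# Sketch of a second-order response surface in two design variables:
# y ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2, fitted by least
# squares. The "design points" here are synthetic stand-ins for FEM runs.
def fit_quadratic_surface(X, y):
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
y = 1 + 2*X[:, 0] - X[:, 1] + 0.5*X[:, 0]**2      # a known quadratic
c = fit_quadratic_surface(X, y)
print(np.round(c, 3))   # recovers [1, 2, -1, 0.5, 0, 0]
```

In the paper's scheme this fit would be repeated, adding SHM-selected design points until the surface meets the error tolerance.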
Reusable design: A proposed approach to Public Health Informatics system design
2011-01-01
Background Since it was first defined in 1995, Public Health Informatics (PHI) has become a recognized discipline, with a research agenda, defined domain-specific competencies and a specialized corpus of technical knowledge. Information systems form a cornerstone of PHI research and implementation, representing significant progress for the nascent field. However, PHI does not advocate or incorporate standard, domain-appropriate design methods for implementing public health information systems. Reusable design is generalized design advice that can be reused in a range of similar contexts. We propose that PHI create and reuse information design knowledge by taking a systems approach that incorporates design methods from the disciplines of Human-Computer Interaction, Interaction Design and other related disciplines. Discussion Although PHI operates in a domain with unique characteristics, many design problems in public health correspond to classic design problems, suggesting that existing design methods and solution approaches are applicable to the design of public health information systems. Among the numerous methodological frameworks used in other disciplines, we identify scenario-based design and participatory design as two widely-employed methodologies that are appropriate for adoption as PHI standards. We make the case that these methods show promise to create reusable design knowledge in PHI. Summary We propose the formalization of a set of standard design methods within PHI that can be used to pursue a strategy of design knowledge creation and reuse for cost-effective, interoperable public health information systems. We suggest that all public health informaticians should be able to use these design methods and the methods should be incorporated into PHI training. PMID:21333000
A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits
NASA Astrophysics Data System (ADS)
Moradi, Behzad; Mirzaei, Abdolreza
2016-11-01
A new simulation-based automated CMOS analog circuit design method, which applies a multi-objective non-Darwinian-type evolutionary algorithm based on the Learnable Evolution Model (LEM), is proposed in this article. The multi-objective property of this automated design of CMOS analog circuits is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated in the LEM algorithm presented here. LEM includes a machine-learning method, such as decision trees, that distinguishes between high- and low-fitness areas in the design space. The learning process can detect the right directions of the evolution and lead to large steps in the evolution of the individuals. The learning phase shortens the evolution process and markedly reduces the number of individual evaluations. The expert designer's knowledge of the circuit is applied in the design process in order to reduce the design space as well as the design time. Circuit evaluation is performed by the HSPICE simulator. In order to improve the design accuracy, the bsim3v3 CMOS transistor model is adopted in the proposed design method, which is tested on three different operational amplifier circuits. Its performance is verified by comparing it with the evolutionary strategy algorithm and other similar methods.
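The Pareto-dominance test underlying SPEA-style multi-objective selection can be sketched in a few lines (minimization is assumed; the objective vectors are illustrative):

```python
# Sketch of the Pareto-dominance test at the core of SPEA-style
# multi-objective selection (minimization; objective vectors illustrative).
def dominates(a, b):
    """True if a is at least as good as b in every objective and
    strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the points no other point dominates (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = nondominated([(1, 5), (2, 2), (5, 1), (4, 4)])
print(front)   # [(1, 5), (2, 2), (5, 1)]; (4, 4) is dominated by (2, 2)
```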
Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints
NASA Technical Reports Server (NTRS)
Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale
1997-01-01
The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
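The fully stressed design idea that MFUD builds on is a stress-ratio resizing rule, A_new = A_old·|σ|/σ_allow; a sketch on a hypothetical statically determinate two-bar example (member forces and allowable stress are illustrative):

```python
# Sketch of the stress-ratio resizing rule behind fully stressed design:
# each member area is scaled by |actual stress| / allowable stress. For a
# statically determinate truss the member forces are fixed, so the
# iteration converges immediately. All numbers are illustrative.
def fsd_resize(areas, forces, sigma_allow):
    stresses = [F / A for F, A in zip(forces, areas)]
    return [A * abs(s) / sigma_allow for A, s in zip(areas, stresses)]

areas = [10.0, 10.0]        # initial member areas, cm^2
forces = [1200.0, -800.0]   # member forces, kN (tension / compression)
sigma_allow = 16.0          # allowable stress, kN/cm^2 (160 MPa)
areas = fsd_resize(areas, forces, sigma_allow)
print(areas)   # [75.0, 50.0]
```

Displacement constraints, the case MFUD addresses, break this one-step picture because member sizes then interact through the stiffness of the whole structure.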
Enhanced learning through design problems - teaching a components-based course through design
NASA Astrophysics Data System (ADS)
Jensen, Bogi Bech; Högberg, Stig; Fløtum Jensen, Frida av; Mijatovic, Nenad
2012-08-01
This paper describes a teaching method used in an electrical machines course, where the students learn about electrical machines by designing them. The aim of the course is not to teach design, although that is a side product, but rather to teach the fundamentals and function of electrical machines through design. The teaching method is evaluated by a student questionnaire designed to measure the quality and effectiveness of the teaching method. The results of the questionnaire conclusively show that this method, labelled 'learning through design', is a very effective way of teaching a components-based course. The teaching method can easily be generalised and used in other courses.
Tradeoff studies in multiobjective insensitive design of airplane control systems
NASA Technical Reports Server (NTRS)
Schy, A. A.; Giesy, D. P.
1983-01-01
A computer aided design method for multiobjective parameter-insensitive design of airplane control systems is described. Methods are presented for trading off nominal values of design objectives against sensitivities of the design objectives to parameter uncertainties, together with guidelines for designer utilization of the methods. The methods are illustrated by application to the design of a lateral stability augmentation system for two supersonic flight conditions of the Shuttle Orbiter. Objective functions are conventional handling quality measures and peak magnitudes of control deflections and rates. The uncertain parameters are assumed Gaussian, and numerical approximations of the stochastic behavior of the objectives are described. Results of applying the tradeoff methods to this example show that stochastic-insensitive designs are distinctly different from deterministic multiobjective designs. The main penalty for achieving significant decrease in sensitivity is decreased speed of response for the nominal system.
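The stochastic treatment of Gaussian uncertain parameters described above can be approximated by Monte Carlo sampling; a sketch with a made-up stand-in objective (not one of the paper's handling-quality measures):

```python
import random

# Sketch of a Monte Carlo approximation of the stochastic behavior of a
# design objective under a Gaussian uncertain parameter p. The objective
# below is a hypothetical stand-in, not a handling-quality measure from
# the paper.
def objective(p):
    return 1.0 + 0.5 * p + 0.2 * p * p

def mc_stats(mu, sigma, n=200_000, seed=1):
    """Sample mean and standard deviation of objective(p), p ~ N(mu, sigma)."""
    rng = random.Random(seed)
    vals = [objective(rng.gauss(mu, sigma)) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    return mean, var ** 0.5

mean, std = mc_stats(0.0, 1.0)
print(round(mean, 2), round(std, 2))
```

Trading the nominal objective against this sampled spread is exactly the kind of sensitivity tradeoff the paper formalizes.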
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of the system performance as a function of the foil parameters. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
Methodical Design of Software Architecture Using an Architecture Design Assistant (ArchE)
2005-04-01
Bachmann, Felix; Klein, Mark. Software Engineering Institute, Pittsburgh, PA 15213-3890. For architecture design, quality requirements and constraints are the most important drivers.
Project Lifespan-based Nonstationary Hydrologic Design Methods for Changing Environment
NASA Astrophysics Data System (ADS)
Xiong, L.
2017-12-01
Under a changing environment, design floods must be associated with the design life period of a project to ensure that the hydrologic design is truly relevant to the operation of the project, because the design value for a given exceedance probability over the project life period can differ significantly from that over other time periods of the same length, due to the nonstationarity of the probability distributions. Several hydrologic design methods that take the design life period of projects into account have been proposed in recent years: the expected number of exceedances (ENE), design life level (DLL), equivalent reliability (ER), and average design life level (ADLL). Among the four methods to be compared, the ENE and ER methods are return period-based, while DLL and ADLL are risk/reliability-based methods which estimate design values for given probability values of risk or reliability. However, the four methods can be unified under a general framework through a relationship transforming the so-called representative reliability (RRE) into the return period, m = 1/(1 - RRE). The nonstationary design quantiles and associated confidence intervals calculated by ENE, ER and ADLL were very similar, since ENE and ER are special cases of, or have expression forms similar to, ADLL. In particular, the design quantiles calculated by ENE and ADLL are identical when the return period equals the length of the design life. In addition, DLL can yield similar design values if the relationship between DLL and ER/ADLL return periods is considered. Furthermore, ENE, ER and ADLL adapt well to both increasing and decreasing trends, yielding design quantiles that are neither too large nor too small.
This is important for the application of nonstationary hydrologic design methods in practice, given the concern over choosing the emerging nonstationary methods versus the traditional stationary methods. There is still a long way to go in the conceptual transition from stationarity to nonstationarity in hydrologic design.
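The relationship m = 1/(1 - RRE) that unifies the four methods lends itself to a small worked example (the stationary design-life reliability formula (1 - p)^n used below is standard, not specific to this paper):

```python
# Worked example of the return period / reliability relationship:
# m = 1 / (1 - RRE), with RRE the representative reliability. Under
# stationarity, the reliability of no exceedance over an n-year design
# life with annual exceedance probability p is (1 - p)**n.
def return_period(rre):
    return 1.0 / (1.0 - rre)

def design_life_reliability(p_annual, n_years):
    return (1.0 - p_annual) ** n_years

m = return_period(0.99)                 # RRE = 0.99 -> a 100-year return period
r = design_life_reliability(0.01, 50)   # 100-year flood over a 50-year life
print(round(m, 6), round(r, 3))         # only about 60% chance of no exceedance
```

The nonstationary methods generalize this by letting the annual exceedance probability vary over the design life.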
Merits and limitations of optimality criteria method for structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo
1993-01-01
The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
NASA Technical Reports Server (NTRS)
Olds, John Robert; Walberg, Gerald D.
1993-01-01
Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low Earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design.
Near-optimum minimum dry weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included. Recommendations for additional research are provided.
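The parametric approach above (a designed experiment plus a response surface searched for a minimum) can be sketched in a few lines. This is an illustrative toy, not the study's vehicle model: the two-variable face-centered central composite design, the stand-in `dry_weight` function, and the coded variable ranges are all assumptions.

```python
import numpy as np

def quad_features(X):
    """Features for a full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Face-centered central composite design in 2 variables (coded units)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)

def dry_weight(x):                      # hypothetical stand-in for the vehicle analysis
    return 10.0 + (x[0] - 0.3)**2 + 2.0 * (x[1] + 0.2)**2

# fit the response surface by least squares, then minimize it on a grid
y = np.array([dry_weight(x) for x in X])
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

g = np.linspace(-1, 1, 201)
G = np.array([[a, b] for a in g for b in g])
best = G[np.argmin(quad_features(G) @ beta)]
print(best)   # near (0.3, -0.2)
```

Each design point stands in for one expensive vehicle analysis; the cheap fitted surface is then searched instead of the analysis itself.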
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM2.5, one for measuring concentrations of PM10...
ERIC Educational Resources Information Center
Honebein, Peter C.
2017-01-01
An instructional designer's values about instructional methods can be a curse or a cure. On one hand, a designer's love affair for a method may cause them to use that method in situations that are not appropriate. On the other hand, that same love affair may inspire a designer to fight for a method when those in power are willing to settle for a…
ERIC Educational Resources Information Center
Sinharay, Sandip; Holland, Paul W.
2008-01-01
The nonequivalent groups with anchor test (NEAT) design involves missing data that are missing by design. Three popular equating methods that can be used with a NEAT design are the poststratification equating method, the chain equipercentile equating method, and the item-response-theory observed-score-equating method. These three methods each…
Study of Fuze Structure and Reliability Design Based on the Direct Search Method
NASA Astrophysics Data System (ADS)
Lin, Zhang; Ning, Wang
2017-03-01
Redundant design is one of the important methods for improving the reliability of a system, but the mutual coupling of multiple factors is often involved in the design. In this study, the Direct Search Method is introduced into optimum redundancy configuration for design optimization, in which the reliability, cost, structural weight, and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of critical aircraft systems are computed. The results show that this method is convenient and workable, and applicable to the redundancy configuration and optimization of various designs upon appropriate modifications. The method thus has good practical value.
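For a small system, the redundancy-allocation problem described above can be solved by direct enumeration of the integer search space. The component reliabilities, unit costs, unit weights, and budgets below are invented for illustration, not the study's data: the goal is to maximize series-system reliability R = prod(1 - (1 - r_i)^n_i) subject to cost and weight limits.

```python
from itertools import product

r = [0.90, 0.85, 0.95]        # component reliabilities (assumed)
cost = [2.0, 3.0, 1.5]        # cost per unit of each subsystem (assumed)
weight = [1.0, 2.0, 0.5]      # weight per unit (assumed)
COST_MAX, WEIGHT_MAX = 15.0, 9.0

def reliability(n):
    """Series system with n_i parallel units in subsystem i."""
    out = 1.0
    for ri, ni in zip(r, n):
        out *= 1.0 - (1.0 - ri) ** ni
    return out

# direct search: enumerate redundancy levels 1..4 for each subsystem
best = None
for n in product(range(1, 5), repeat=3):
    c = sum(ci * ni for ci, ni in zip(cost, n))
    w = sum(wi * ni for wi, ni in zip(weight, n))
    if c <= COST_MAX and w <= WEIGHT_MAX:
        if best is None or reliability(n) > reliability(best):
            best = n
print(best, reliability(best))
```

Here the search happens to spend its budget on the cheap-but-unreliable subsystems first, which is the qualitative behavior one expects from redundancy allocation.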
Demystifying Mixed Methods Research Design: A Review of the Literature
ERIC Educational Resources Information Center
Caruth, Gail D.
2013-01-01
Mixed methods research evolved in response to the observed limitations of both quantitative and qualitative designs and is a more complex method. The purpose of this paper was to examine mixed methods research in an attempt to demystify the design thereby allowing those less familiar with its design an opportunity to utilize it in future research.…
Yamada, Akira; Terakawa, Mitsuhiro
2015-04-10
We present a design method for a bull's eye structure with asymmetric grooves for focusing obliquely incident light. The method can place transmission peaks at a desired oblique angle while collecting light from a wider range of angles. The bull's eye groove geometry for oblique incidence is designed based on the electric field intensity pattern around an isolated subwavelength aperture on a thin gold film at oblique incidence, calculated by the finite-difference time-domain method. Wide angular transmission efficiency is successfully achieved by overlapping two different bull's eye groove patterns designed for different peak angles. Our design method would overcome the angular limitations of the conventional methods.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-11
... Methods: Designation of a New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method for monitoring ambient air quality. SUMMARY: Notice is... part 53, a new equivalent method for measuring concentrations of PM2.5 in the ambient air. FOR FURTHER...
Applications of numerical optimization methods to helicopter design problems: A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
A survey is presented of applications of mathematical programming methods to improve the design of helicopters and their components. Applications of multivariable search techniques in finite-dimensional space are considered. Five categories of helicopter design problems are covered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
Radiofrequency pulse design using nonlinear gradient magnetic fields.
Kopanoglu, Emre; Constable, R Todd
2015-09-01
An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. © 2014 Wiley Periodicals, Inc.
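The greedy selection step named in the abstract can be illustrated in isolation: matching pursuit repeatedly picks the dictionary column most correlated with the current residual of the target profile. This is a hedged sketch of the generic algorithm only; the toy orthonormal Fourier dictionary below stands in for actual spatial encoding functions, and the target profile is invented.

```python
import numpy as np

n = 64
t = np.arange(n) / n
atoms = ([np.cos(2 * np.pi * k * t) for k in range(16)]
         + [np.sin(2 * np.pi * k * t) for k in range(1, 17)])
D = np.column_stack(atoms)
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary columns

target = 3.0 * D[:, 5] - 2.0 * D[:, 20]        # profile built from two atoms

# matching pursuit: greedily subtract the best-matching atom each pass
residual, chosen = target.copy(), []
for _ in range(2):
    j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
    coef = D[:, j] @ residual
    residual = residual - coef * D[:, j]
    chosen.append(j)
print(sorted(chosen))
```

With an orthonormal dictionary the algorithm recovers the two generating atoms exactly; with linearly dependent encoding functions (the situation the paper addresses) the greedy choice is what makes the selection tractable.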
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys
Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967
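The "standard binomial model" that the abstract contrasts with the cluster designs produces a simple decision rule. The sketch below, with example thresholds that are assumptions rather than values from the paper, searches for the smallest sample size n and threshold d such that a high-coverage lot is accepted with high probability and a low-coverage lot is rarely accepted.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_rule(p_lo=0.50, p_hi=0.80, alpha=0.10, beta=0.10, n_max=200):
    """Smallest (n, d): accept the lot if observed successes >= d."""
    for n in range(1, n_max + 1):
        for d in range(n + 1):
            accept_hi = 1.0 - binom_cdf(d - 1, n, p_hi)  # good lot accepted
            accept_lo = 1.0 - binom_cdf(d - 1, n, p_lo)  # bad lot accepted
            if accept_hi >= 1 - alpha and accept_lo <= beta:
                return n, d
    return None

print(lqas_rule())
```

The cluster variants discussed in the paper modify this calculation by inflating the variance for within-cluster correlation, which is where the clustering parameterization matters.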
POLLUTION PREVENTION IN THE EARLY STAGES OF HIERARCHICAL PROCESS DESIGN
Hierarchical methods are often used in the conceptual stages of process design to synthesize and evaluate process alternatives. In this work, the methods of hierarchical process design will be focused on environmental aspects. In particular, the design methods will be coupled to ...
Shape design sensitivity analysis and optimal design of structural systems
NASA Technical Reports Server (NTRS)
Choi, Kyung K.
1987-01-01
The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Kondo, Keiichiro
A hybrid railway traction system with fuel cells (FCs) and electric double-layer capacitors (EDLCs) is discussed in this paper. This system can reduce FC costs and absorb regenerative energy. A method for sizing the FCs and EDLCs on the basis of output power and capacitance, respectively, has not been reported, even though such sizing is one of the most important technical issues in the design of hybrid railway vehicles. Such a design method is presented here, together with a train load profile and an energy management strategy. The design results obtained using the proposed method are verified by numerical simulations of a running train. These results reveal that the proposed method for designing the EDLCs and FCs on the basis of capacitance and power, respectively, together with a method for controlling the EDLC voltage, is effective in designing efficient EDLCs and FCs for hybrid railway traction systems.
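A minimal sizing calculation in the spirit of the abstract can be written down directly; this is a sketch under stated assumptions, not the paper's procedure. The load profile, voltage window, and the rules "rate the FC near the mean positive demand" and "size the EDLC bank to absorb the largest regenerative pulse via E = C(Vmax² - Vmin²)/2" are all illustrative choices.

```python
# hypothetical load profile: power demand in kW sampled each second
profile_kw = [800, 1200, 1500, 900, 0, -600, -900, -400, 0, 300]

# fuel cell rating: mean of the demand, counting only traction (positive) power
fc_rating_kw = sum(p for p in profile_kw if p > 0) / len(profile_kw)

# largest contiguous regenerative (negative-power) energy pulse, in kJ
regen_kj, worst_kj = 0.0, 0.0
for p in profile_kw:
    regen_kj = regen_kj - p if p < 0 else 0.0
    worst_kj = max(worst_kj, regen_kj)

# EDLC capacitance from the energy stored between the voltage limits
V_MAX, V_MIN = 750.0, 375.0            # assumed EDLC working window, volts
cap_farads = 2.0 * worst_kj * 1e3 / (V_MAX**2 - V_MIN**2)
print(fc_rating_kw, worst_kj, cap_farads)
```

Halving the minimum voltage, as here, leaves 75% of the stored energy usable, which is a common rule of thumb for capacitor banks.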
Innovative design method of automobile profile based on Fourier descriptor
NASA Astrophysics Data System (ADS)
Gao, Shuyong; Fu, Chaoxing; Xia, Fan; Shen, Wei
2017-10-01
Aiming at innovation in the contours of the automobile side, this paper presents an innovative design method for the vehicle side profile based on the Fourier descriptor. The design flow of this method is: pre-processing, coordinate extraction, standardization, discrete Fourier transform, simplified Fourier descriptor, descriptor exchange for innovation, and inverse Fourier transform to obtain the outline of the innovative design. Innovative concepts of gene exchange within the same species and gene exchange between different species are presented, and the contours of the innovative designs are obtained separately. A three-dimensional model of a car is obtained by referring to the profile curve obtained by exchanging xenogeneic genes. The feasibility of the proposed method is verified in several respects.
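The pipeline listed in the abstract (DFT, standardization, descriptor exchange, inverse DFT) can be sketched on synthetic closed contours represented as complex points x + iy. The two "parent" shapes below are toys, not real car side profiles, and which harmonics to exchange is an assumption.

```python
import numpy as np

n = 256
s = np.arange(n) / n
parent_a = np.exp(2j * np.pi * s) + 0.2 * np.exp(2j * np.pi * 3 * s)   # circle + wiggle
parent_b = 1.05 * np.exp(2j * np.pi * s) + 0.45 * np.exp(-2j * np.pi * s)  # ellipse

# discrete Fourier transform of each contour: the Fourier descriptors
FA, FB = np.fft.fft(parent_a), np.fft.fft(parent_b)
FA[0] = FB[0] = 0.0                      # standardization: drop the position (DC) term

# "gene exchange": hand profile A the lowest harmonic pair of profile B
hybrid = FA.copy()
for k in (1, n - 1):
    hybrid[k] = FB[k]

new_contour = np.fft.ifft(hybrid)        # blended profile: B's bulk shape, A's detail
```

Exchanging only the low-order descriptors transfers the gross shape while the higher harmonics preserve the donor profile's characteristic detail, which is the essence of the "exchange descriptor innovation" step.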
NASA Astrophysics Data System (ADS)
Wang, Nianfeng; Guo, Hao; Chen, Bicheng; Cui, Chaoyu; Zhang, Xianmin
2018-05-01
Dielectric elastomers (DE), known as electromechanical transducers, have been widely used in the field of sensors, generators, actuators and energy harvesting for decades. A large number of DE actuators including bending actuators, linear actuators and rotational actuators have been designed utilizing an experience design method. This paper proposes a new method for the design of DE actuators by using a topology optimization method based on pairs of curves. First, theoretical modeling and optimization design are discussed, after which a rotary dielectric elastomer actuator has been designed using this optimization method. Finally, experiments and comparisons between several DE actuators have been made to verify the optimized result.
Launch Vehicle Design and Optimization Methods and Priority for the Advanced Engineering Environment
NASA Technical Reports Server (NTRS)
Rowell, Lawrence F.; Korte, John J.
2003-01-01
NASA's Advanced Engineering Environment (AEE) is a research and development program that will improve collaboration among design engineers for launch vehicle conceptual design and provide the infrastructure (methods and framework) necessary to enable that environment. In this paper, three major technical challenges facing the AEE program are identified, and three specific design problems are selected to demonstrate how advanced methods can improve current design activities. References are made to studies that demonstrate these design problems and methods, and these studies will provide the detailed information and check cases to support incorporation of these methods into the AEE. This paper provides background and terminology for discussing the launch vehicle conceptual design problem so that the diverse AEE user community can participate in prioritizing the AEE development effort.
Bishop, Felicity L
2015-02-01
To outline some of the challenges of mixed methods research and illustrate how they can be addressed in health psychology research. This study critically reflects on the author's previously published mixed methods research and discusses the philosophical and technical challenges of mixed methods, grounding the discussion in a brief review of methodological literature. Mixed methods research is characterized as having philosophical and technical challenges; the former can be addressed by drawing on pragmatism, the latter by considering formal mixed methods research designs proposed in a number of design typologies. There are important differences among the design typologies which provide diverse examples of designs that health psychologists can adapt for their own mixed methods research. There are also similarities; in particular, many typologies explicitly orient to the technical challenges of deciding on the respective timing of qualitative and quantitative methods and the relative emphasis placed on each method. Characteristics, strengths, and limitations of different sequential and concurrent designs are identified by reviewing five mixed methods projects each conducted for a different purpose. Adapting formal mixed methods designs can help health psychologists address the technical challenges of mixed methods research and identify the approach that best fits the research questions and purpose. This does not obfuscate the need to address philosophical challenges of mixing qualitative and quantitative methods. Statement of contribution What is already known on this subject? Mixed methods research poses philosophical and technical challenges. Pragmatism is a popular approach to the philosophical challenges, while diverse typologies of mixed methods designs can help address the technical challenges. Examples of mixed methods research can be hard to locate when component studies from mixed methods projects are published separately. What does this study add? 
Critical reflections on the author's previously published mixed methods research illustrate how a range of different mixed methods designs can be adapted and applied to address health psychology research questions. The philosophical and technical challenges of mixed methods research should be considered together and in relation to the broader purpose of the research. © 2014 The British Psychological Society.
The research progress on Hodograph Method of aerodynamic design at Tsinghua University
NASA Technical Reports Server (NTRS)
Chen, Zuoyi; Guo, Jingrong
1991-01-01
Progress in the use of the Hodograph method of aerodynamic design is discussed. It was found that there are some restrictive conditions in the application of Hodograph design to transonic turbine and compressor cascades. The Hodograph method is suitable not only for the transonic turbine cascade but also for the transonic compressor cascade. The three-dimensional Hodograph method will be developed after the basic equation for the three-dimensional Hodograph method is obtained. As an example, the use of the Hodograph method to design a transonic turbine and compressor cascade is discussed.
Prevalence of Mixed-Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1996-01-01
There are many aspects to consider when designing a Rosenbrock-Wanner-Wolfbrandt (ROW) method for the numerical integration of ordinary differential equations (ODE's) solving initial value problems (IVP's). The process can be simplified by constructing ROW methods around good Runge-Kutta (RK) methods. The formulation of a new, simple, embedded, third-order, ROW method demonstrates this design approach.
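The ROW idea can be illustrated with its simplest member rather than the paper's embedded third-order scheme: the one-stage Rosenbrock-Euler step replaces the nonlinear solve of implicit Euler with a single linear system built from the Jacobian, (I - h·γ·J)k = f(yₙ), yₙ₊₁ = yₙ + h·k, with γ = 1. The stiff test problem below is an assumption for demonstration.

```python
import numpy as np

def rosenbrock_euler(f, jac, y0, t0, t1, steps):
    """One-stage ROW (linearized implicit Euler) over [t0, t1]."""
    y, h = np.asarray(y0, float), (t1 - t0) / steps
    I = np.eye(len(y))
    for _ in range(steps):
        # one linear solve per step -- no Newton iteration needed
        k = np.linalg.solve(I - h * jac(y), f(y))
        y = y + h * k
    return y

# stiff linear test problem y' = -50 y, exact solution exp(-50 t)
f = lambda y: -50.0 * y
jac = lambda y: np.array([[-50.0]])
y = rosenbrock_euler(f, jac, [1.0], 0.0, 1.0, 200)
print(y[0])    # decays stably toward 0 (exact value: exp(-50))
```

An explicit Euler step with h = 0.1 would diverge on this problem (growth factor |1 - 5| = 4 per step), while the ROW step remains stable at any step size, which is the property that makes these methods attractive for stiff IVPs.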
Wu, Rengmao; Hua, Hong; Benítez, Pablo; Miñano, Juan C.; Liang, Rongguang
2016-01-01
The energy efficiency and compactness of an illumination system are two main concerns in illumination design for extended sources. In this paper, we present two methods to design compact, ultra-efficient aspherical lenses for extended Lambertian sources in two-dimensional geometry. The light rays are directed by using two aspherical surfaces in the first method, and one aspherical surface along with an optimized parabola in the second method. The principles and procedures of each design method are introduced in detail. Three examples are presented to demonstrate the effectiveness of these two methods in terms of performance and capacity for designing compact, ultra-efficient aspherical lenses. The comparisons made between the two proposed methods indicate that the second method is much simpler and easier to implement, and has excellent extensibility to three-dimensional designs. PMID:29092336
A new design approach to MMI-based (de)multiplexers
NASA Astrophysics Data System (ADS)
Yueyu, Xiao; Sailing, He
2004-09-01
A novel design method for wavelength (de)multiplexers is presented. The output spectral response of a (de)multiplexer is designed from the viewpoint of FIR filters. This explicit and simple method avoids laborious mathematical analysis in analyzing and designing the (de)multiplexer. A four-channel (de)multiplexer based on multimode interference (MMI) is designed as an example. The result obtained agrees with that of the commonly used method, and is verified by a finite-difference beam propagation method (FDBPM) simulation.
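The FIR-filter viewpoint can be sketched abstractly (details assumed, since the abstract does not give the mapping to the MMI geometry): treat each output port's desired spectral response as an FIR filter and obtain the tap weights by frequency sampling, i.e. an inverse DFT of the desired response. A 4-tap filter then places a passband on one of four uniformly spaced channels.

```python
import numpy as np

M = 4                                    # taps = number of channels (assumed)
channel = 2                              # which channel this port should select
desired = np.zeros(M, dtype=complex)
desired[channel] = 1.0                   # pass channel 2, block the rest
taps = np.fft.ifft(desired)              # frequency-sampling "design" of the taps

def response(taps, omega):
    """Filter response H(omega) = sum_k taps[k] * exp(-i k omega)."""
    k = np.arange(len(taps))
    return np.sum(taps * np.exp(-1j * k * omega))

# channel centers sit at omega_m = 2*pi*m/M; check pass/stop behavior
gains = [abs(response(taps, 2 * np.pi * m / M)) for m in range(M)]
print(gains)
```

The point of the viewpoint is that standard filter intuition (tap count vs. channel count, frequency sampling) replaces laborious modal analysis in the first design pass.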
Issues and Strategies in Solving Multidisciplinary Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya
2013-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adopted for their resolution are discussed. This is followed by a discussion of analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for aircraft and airbreathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues. Cited references provide detailed discussion on the topics. A structural design can also be generated by the traditional method and by the stochastic design concept. Merits and limitations of the three methods (traditional method, optimization method and stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. 
In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and a solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest. Some variation was noticed in the designs calculated by the methods; the variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.
Permanent Ground Anchors : Stump Design Criteria
DOT National Transportation Integrated Search
1982-09-01
This document summarizes the main design methods used by the principal investigators in the design of permanent ground anchors, including basic concepts, design criteria, and analytical techniques. The application of these design methods is illustra...
Software Design Methods for Real-Time Systems
1989-12-01
This module describes the concepts and methods used in the software design of real-time systems. It outlines the characteristics of real-time systems, describes the role of software design in real-time system development, surveys and compares some software design methods for real-time systems, and
Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2001-01-01
A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
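The multiobjective selection step at the heart of such an evolutionary approach can be illustrated in isolation: given candidate designs scored on (total head, input power), keep the Pareto-optimal set, i.e. the designs no other design beats on both maximizing head and minimizing power. The toy "pump model" and its numbers below are assumptions, not the paper's meanline model.

```python
import random

random.seed(7)

def pump_model(x):
    """Hypothetical meanline stand-in: returns (head, input_power)."""
    head = 60.0 + 40.0 * x + 5.0 * random.random()     # noisy increasing trend
    power = 50.0 + 80.0 * x * x + 5.0 * random.random()
    return head, power

scores = [pump_model(i / 49) for i in range(50)]       # 50 candidate designs

def dominates(a, b):
    """a is at least as good on both objectives and not identical to b."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

# nondominated filtering: the Pareto front of the candidate population
front = [s for s in scores if not any(dominates(t, s) for t in scores)]
front.sort()
print(len(front))
```

In a full MOEA this filtering drives selection pressure each generation; the surviving front is exactly the head-versus-power trade-off curve a designer would inspect.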
Green design assessment of electromechanical products based on group weighted-AHP
NASA Astrophysics Data System (ADS)
Guo, Jinwei; Zhou, MengChu; Li, Zhiwu; Xie, Huiguang
2015-11-01
Manufacturing industry is the backbone of a country's economy, while environmental pollution is a serious problem that human beings must face today. The green design of electromechanical products based on enterprise information systems is an important method for addressing this environmental problem. The question of how to design green products must be answered by excellent designers via both advanced design methods and effective assessment methods for electromechanical products. Making an objective and precise assessment of green design is one of the problems that must be solved when green design is conducted. An assessment method for the green design of electromechanical products based on Group Weighted-AHP (Analytic Hierarchy Process) is proposed in this paper, together with the characteristics of green products. The assessment steps of green design are also established. The results are illustrated via the assessment of a refrigerator design.
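A minimal group-weighted AHP step can be sketched as follows; the comparison matrices, the three criteria, and the expert weights are invented for illustration. Each expert supplies a pairwise comparison matrix over criteria; the matrices are aggregated by an element-wise weighted geometric mean (which preserves reciprocity), and criterion weights are then read from the principal eigenvector.

```python
import numpy as np

def ahp_weights(A):
    """Principal eigenvector of a pairwise comparison matrix, normalized."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# two experts comparing three criteria (e.g. energy, material, recyclability)
A1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
A2 = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
expert_weight = [0.6, 0.4]            # relative credibility of the experts

# group aggregation: element-wise weighted geometric mean
G = A1**expert_weight[0] * A2**expert_weight[1]
w = ahp_weights(G)
print(w)                              # weights sum to 1
```

A full application would also check each matrix's consistency ratio before aggregation; that step is omitted here for brevity.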
General method for designing wave shape transformers.
Ma, Hua; Qu, Shaobo; Xu, Zhuo; Wang, Jiafu
2008-12-22
An effective method for designing wave shape transformers (WSTs) is investigated by adopting coordinate transformation theory. Following this method, devices that transform electromagnetic (EM) wave fronts from one style, with arbitrary shape and size, to another can be designed. To verify this method, three examples in 2D spaces are also presented. Compared with the methods proposed in other literature, this method offers a general procedure for designing WSTs, and is thus of great importance for the potential practical applications of such devices.
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared by using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 lead to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without coding, but the reliability of the process evaluation indexes when operating in the design space is not indicated. The probability-based method is computationally more complex, but provides the reliability needed to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, with the probability-based method there is no abrupt change in probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
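The probability-based calculation can be sketched generically. The process model, quality limit, and error magnitude below are invented stand-ins, not the Codonopsis Radix data; only the mechanics match the abstract: grid the process parameters with the stated step length, simulate experimental error around each prediction, and keep the cells whose probability of meeting the standard exceeds the threshold.

```python
import numpy as np

rng = np.random.default_rng(42)
N_SIM, P_ACCEPT = 10_000, 0.90       # simulations per cell, probability threshold
SPEC_MIN = 0.80                      # required extraction yield (assumed)

def predicted_yield(time_h, ratio):  # hypothetical stand-in process model
    return 0.5 + 0.25 * time_h + 0.15 * ratio

grid = np.linspace(0.0, 1.0, 51)     # step length 0.02 in coded units
design_space = []
for t in grid:
    for r in grid:
        # simulate experimental error around the model prediction
        sims = predicted_yield(t, r) + rng.normal(0.0, 0.03, N_SIM)
        prob = np.mean(sims >= SPEC_MIN)
        if prob >= P_ACCEPT:
            design_space.append((round(t, 2), round(r, 2)))
print(len(design_space))
```

Because the acceptance criterion is a probability rather than a point prediction, the retained region shrinks smoothly as the threshold rises, which is why no abrupt change appears at the edge of the design space.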
NASA Astrophysics Data System (ADS)
Li, Leihong
A modular structural design methodology for composite blades is developed. It can be used to design composite rotor blades with sophisticated geometric cross-sections. The method hierarchically decomposes the highly coupled interdisciplinary rotor analysis into global and local levels. At the global level, aeroelastic response analysis and rotor trim are conducted based on multi-body dynamic models. At the local level, variational asymptotic beam sectional analysis methods are used to obtain the equivalent one-dimensional beam properties. Compared with traditional design methodology, the proposed method is more efficient and accurate. The proposed method is then used to study three design problems that have not been investigated before. The first is adding manufacturing constraints to the design optimization. Introducing manufacturing constraints complicates the optimization process, but the resulting design benefits the manufacturing process and reduces the risk of violating major performance constraints. Next, a new design procedure for structural design against fatigue failure is proposed. This procedure combines fatigue analysis with the optimization process; the durability or fatigue analysis employs a strength-based model, and the design is subject to stiffness, frequency, and durability constraints. Finally, the impact of manufacturing uncertainty on rotor blade aeroelastic behavior is investigated, and a probabilistic design method is proposed to control the impact of uncertainty on blade structural performance. The uncertainty factors include dimensions, shapes, material properties, and service loads.
Controlling lightwave in Riemann space by merging geometrical optics with transformation optics.
Liu, Yichao; Sun, Fei; He, Sailing
2018-01-11
In geometrical optical design, one only needs to choose a suitable combination of lenses, prisms, and mirrors to design an optical path. It is a simple and classic method for engineers. However, fantastical optical devices such as invisibility cloaks and optical wormholes cannot be designed by geometrical optics alone. Transformation optics has paved the way for these complicated designs; however, controlling the propagation of light by transformation optics is not as direct a design process as geometrical optics. In this study, a novel mixed method for optical design is proposed that has both the simplicity of classic geometrical optics and the flexibility of transformation optics. This mixed method overcomes the limitations of classic optical design and, at the same time, gives intuitive guidance for optical design by transformation optics. Three novel optical devices with exotic functions have been designed using this mixed method: an asymmetrical transmission device, a bidirectional focusing device, and a bidirectional cloak. These optical devices cannot be implemented by classic optics alone and are also too complicated to be designed by pure transformation optics. Numerical simulations based on both the ray tracing method and the full-wave simulation method are carried out to verify the performance of these three optical devices.
Bourgault, Patricia; Gallagher, Frances; Michaud, Cécile; Saint-Cyr-Tribble, Denise
2010-12-01
The use of a mixed method research design raises many questions, especially regarding its paradigmatic position. Within this paradigm, the mixed method design may be considered the best way of answering a research question, and the question itself orients the choice among the different subtypes of mixed method design. To illustrate the use of this kind of design, we present a study as conducted in nursing sciences. The challenges raised by the mixed method design and the place of this type of research in nursing sciences are discussed.
Paturzo, Marco; Colaceci, Sofia; Clari, Marco; Mottola, Antonella; Alvaro, Rosaria; Vellone, Ercole
2016-01-01
Mixed methods designs: an innovative methodological approach for nursing research. Mixed method (MM) research designs combine qualitative and quantitative approaches in the research process, in a single study or a series of studies. Their use can provide a wider understanding of multifaceted phenomena. This article presents a general overview of the structure and design of MM to spread this approach within the Italian nursing research community. The MM designs most commonly used in the nursing field are the convergent parallel design, the sequential explanatory design, the exploratory sequential design and the embedded design. For each method a research example is presented. The use of MM can add value in improving clinical practice since, through the integration of qualitative and quantitative methods, researchers can better assess the complex phenomena typical of nursing.
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.; Jordan, T. A.; Soltesz, R. G.; Woodsum, H. C.
1969-01-01
Eight computer programs make up a nine volume synthesis containing two design methods for nuclear rocket radiation shields. The first design method is appropriate for parametric and preliminary studies, while the second accomplishes the verification of a final nuclear rocket reactor design.
Controller design via structural reduced modeling by FETM
NASA Technical Reports Server (NTRS)
Yousuff, Ajmal
1987-01-01
The Finite Element-Transfer Matrix (FETM) method has been developed to reduce the computation involved in the analysis of structures. This widely accepted method, however, has certain limitations and does not address the issues of control design. To overcome these, a modification of the FETM method has been developed. The new method readily produces reduced models tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open-loop frequencies and mode shapes with fewer computations, (2) overcome the limitations of the original FETM method, and (3) simplify the design procedures for output feedback, constrained compensation, and decentralized control. This report presents the development of the new method, the generation of reduced models by this method, their properties, and the role of these reduced models in control design. Examples are included to illustrate the methodology.
Stiffness Parameter Design of Suspension Element of Under-Chassis-Equipment for A Rail Vehicle
NASA Astrophysics Data System (ADS)
Ma, Menglin; Wang, Chengqiang; Deng, Hai
2017-06-01
According to the frequency configuration requirements for the vibration of railway under-chassis equipment, the three-dimensional stiffness of the suspension elements of under-chassis equipment is designed based on both the static principle and the dynamic principle. The design results for a concrete engineering case show that, compared with the design method based on the static principle, the three-dimensional stiffness designed by the dynamic-principle method is more uniform. Frequency and decoupling-degree analysis shows that the calculated frequency of the under-chassis equipment under the two design methods is essentially the same as the predetermined frequency. Compared with the static-principle method, the dynamic-principle method keeps the decoupling degree high and effectively reduces the coupled vibration of the corresponding modes, which can effectively reduce fatigue damage to the key parts of the suspension element.
The equivalent magnetizing method applied to the design of gradient coils for MRI.
Lopez, Hector Sanchez; Liu, Feng; Crozier, Stuart
2008-01-01
This paper presents a new method for the design of gradient coils for Magnetic Resonance Imaging systems. The method is based on the equivalence between a magnetized volume surrounded by a conducting surface and its representation as a surface current/charge density. We demonstrate that the curl of the vertical magnetization induces a surface current density whose stream lines define the coil current pattern. The method can be applied to coils wound on arbitrarily shaped surfaces. A single-layer unshielded transverse gradient coil is designed and compared with designs obtained using two conventional methods. Through the presented example we demonstrate that the unconventional current patterns generated by the magnetizing current method yield gradient coil performance superior to that of coils designed by conventional methods.
Robust design of microchannel cooler
NASA Astrophysics Data System (ADS)
He, Ye; Yang, Tao; Hu, Li; Li, Leimin
2005-12-01
Microchannel coolers offer a new method for cooling high-power diode lasers, with the advantages of small volume, high thermal-dissipation efficiency, and low cost when mass-produced. In order to reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design method, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal mathematical model of the varying-section microchannel was solved with the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The resulting optimal design compromises between cooling performance and robustness, and the design method proves to be effective.
In silico methods for design of biological therapeutics.
Roy, Ankit; Nair, Sanjana; Sen, Neeladri; Soni, Neelesh; Madhusudhan, M S
2017-12-01
It has been twenty years since the first rationally designed small molecule drug was introduced into the market. Since then, we have progressed from designing small molecules to designing biotherapeutics. This class of therapeutics includes designed proteins, peptides and nucleic acids that could more effectively combat drug resistance and even act in cases where the disease is caused by a molecular deficiency. Computational methods are crucial in this design exercise, and this review discusses the various elements of designing biotherapeutic proteins and peptides. Many of the techniques discussed here, such as the deterministic and stochastic design methods, are generally used in protein design. We have devoted special attention to the design of antibodies and vaccines. In addition to the methods for designing these molecules, we have included a comprehensive list of all biotherapeutics approved for clinical use. Also included is an overview of methods that predict the binding affinity, cell penetration ability, half-life, solubility, immunogenicity and toxicity of the designed therapeutics. Biotherapeutics are only going to grow in clinical importance and are set to herald a new generation of disease management and cure. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Interactive design optimization of magnetorheological-brake actuators using the Taguchi method
NASA Astrophysics Data System (ADS)
Erol, Ozan; Gurocak, Hakan
2011-10-01
This research explored an optimization method that would automate the process of designing a magnetorheological (MR)-brake but still keep the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time consuming due to many parameters. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and make choices to investigate only their interactions with the design output. The new method was applied for re-designing MR-brakes. It reduced the design time from a week or two down to a few minutes. Also, usability experiments indicated significantly better brake designs by novice users.
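The core of the Taguchi approach the authors adapted, i.e. running a small orthogonal-array experiment and ranking design parameters by the range of their mean signal-to-noise ratios, can be sketched as follows. The torque model and parameter levels are hypothetical placeholders, not the paper's MR-brake design:

```python
import numpy as np

# Standard L9(3^4) orthogonal array, using the first three columns for
# three design parameters at three levels each (0, 1, 2).
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

# Hypothetical parameter levels (stand-ins for MR-brake geometry values).
levels = {
    "disk_radius_mm": [20.0, 30.0, 40.0],
    "gap_mm":         [0.5, 1.0, 1.5],
    "coil_turns":     [100, 200, 300],
}
names = list(levels)

# Hypothetical torque model used only to illustrate the procedure.
def braking_torque(radius, gap, turns):
    return 0.002 * turns * radius**2 / gap

# Run the nine array experiments and compute a larger-is-better
# signal-to-noise ratio for each (single replicate).
responses = np.array([
    braking_torque(*(levels[n][row[i]] for i, n in enumerate(names)))
    for row in L9
])
sn = 10 * np.log10(responses**2)

# Range analysis: the parameter whose mean S/N varies most across its
# levels dominates the design and is kept in the reduced search space.
effect_range = {}
for i, n in enumerate(names):
    means = [sn[L9[:, i] == lvl].mean() for lvl in range(3)]
    effect_range[n] = max(means) - min(means)

dominant = max(effect_range, key=effect_range.get)
print("S/N range per parameter:", effect_range)
print("dominant parameter:", dominant)
```

Because the array is balanced, nine runs suffice to separate the main effects of three three-level parameters, which is what lets the search space shrink before any detailed optimization.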
Mistry, Pankaj; Dunn, Janet A; Marshall, Andrea
2017-07-18
The application of adaptive design methodology in clinical trials is becoming increasingly popular. However, trials applying these methods often do not report them as adaptive designs, making it more difficult to capture the emerging use of these designs. In this review, we aim to understand how adaptive design methodology is being reported, whether these methods are explicitly stated as an 'adaptive design' or have to be inferred, and whether these methods are applied prospectively or concurrently. Three databases, Embase, Ovid and PubMed, were searched. The inclusion criteria were phase II, phase III and phase II/III randomised controlled trials in the field of oncology that published trial results in 2015. A variety of search terms related to adaptive designs were used. A total of 734 results were identified; after screening, 54 were eligible. Adaptive designs were more commonly applied in phase III confirmatory trials. The majority of the papers performed an interim analysis that included some form of stopping criteria. Only two papers explicitly stated the term 'adaptive design'; for most papers, the use of adaptive methods had to be inferred. Sixty-five applications of adaptive design methods were identified, of which the most common was adaptation using group sequential methods. This review indicates that the reporting of adaptive design methodology within clinical trials needs improving. The proposed extension to the current CONSORT 2010 guidelines could help capture adaptive design methods and provide an essential aid to those involved with clinical trials.
Comparing Methods for Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Zelinski, Shannon; Lai, Chok Fung
2011-01-01
This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuation. Airspace designs from seven algorithmic methods and a representation of current-day operations in Kansas City Center were simulated with twice today's traffic demand. A three-configuration scenario was used to represent current-day operations. The algorithms used projected unimpeded flight tracks to design initial 24-hour plans that switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, the algorithms used updated projected flight tracks to revise the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic pattern complexity results. Design updates enabled several methods to cut the delay of their original designs by as much as half. Freeform design methods reduced delay and increased reconfiguration complexity the most.
2016-11-01
Evaluation of Visualization Tools for Computer Network Defense Analysts: Display Design, Methods, and Results, by Christopher J Garneau and Robert F Erbacher, US Army Research Laboratory, November 2016; covering January 2013–September 2015. Approved for public release.
Designs of Empirical Evaluations of Nonexperimental Methods in Field Settings.
Wong, Vivian C; Steiner, Peter M
2018-01-01
Over the last three decades, a research design has emerged to evaluate the performance of nonexperimental (NE) designs and design features in field settings. It is called the within-study comparison (WSC) approach or the design replication study. In the traditional WSC design, treatment effects from a randomized experiment are compared to those produced by an NE approach that shares the same target population. The nonexperiment may be a quasi-experimental design, such as a regression-discontinuity or an interrupted time-series design, or an observational study approach that includes matching methods, standard regression adjustments, and difference-in-differences methods. The goals of the WSC are to determine whether the nonexperiment can replicate results from a randomized experiment (which provides the causal benchmark estimate), and the contexts and conditions under which these methods work in practice. This article presents a coherent theory of the design and implementation of WSCs for evaluating NE methods. It introduces and identifies the multiple purposes of WSCs, required design components, common threats to validity, design variants, and causal estimands of interest in WSCs. It highlights two general approaches for empirical evaluations of methods in field settings: WSC designs with independent, and with dependent, benchmark and NE arms. It then discusses the advantages and disadvantages of each approach, and the conditions and contexts under which each approach is optimal for addressing methodological questions.
NASA Astrophysics Data System (ADS)
Hanan, Lu; Qiushi, Li; Shaobin, Li
2016-12-01
This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to some constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables: one qualitative variable and two quantitative variables. The modeling and optimization design method performs well in improving the duct's aerodynamic performance; it can also be applied to wider fields of mechanical design and serve as a useful tool for engineering designers by reducing design time and computational cost.
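The three-stage pipeline described above (space-filling sampling, response-surface fit, genetic-algorithm search on the surrogate) can be sketched in a few lines. The objective function and all numerical settings below are illustrative assumptions, not the paper's diverging-duct case:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "expensive" performance evaluation standing in for the
# paper's CFD runs (two quantitative design variables in [0, 1]).
def performance(x1, x2):
    return (x1 - 0.3)**2 + (x2 - 0.7)**2 + 0.1 * x1 * x2

# 1) Space-filling sampling of the experimental domain (a simple grid
#    stands in for a uniform design table here).
u = np.linspace(0.05, 0.95, 7)
X = np.array([(a, b) for a in u for b in u])
y = performance(X[:, 0], X[:, 1])

# 2) Response surface methodology: fit a full quadratic surrogate
#    model by least squares.
def features(P):
    x1, x2 = P[:, 0], P[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda P: features(P) @ beta

# 3) Genetic algorithm on the cheap surrogate (elitist selection,
#    arithmetic crossover, Gaussian mutation).
pop = rng.random((40, 2))
for _ in range(60):
    order = np.argsort(surrogate(pop))                 # minimization
    parents = pop[order[:20]]
    i = rng.integers(0, 20, 20)
    j = rng.integers(0, 20, 20)
    children = (parents[i] + parents[j]) / 2           # crossover
    children += rng.normal(0.0, 0.02, children.shape)  # mutation
    pop = np.clip(np.vstack([parents, children]), 0.0, 1.0)

best = pop[np.argmin(surrogate(pop))]
print("optimum found near:", np.round(best, 3))
```

The expensive evaluation is called only for the 49 sampling points; the genetic algorithm then runs entirely on the cheap surrogate, which is the computational saving the abstract claims.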
Freeform object design and simultaneous manufacturing
NASA Astrophysics Data System (ADS)
Zhang, Wei; Zhang, Weihan; Lin, Heng; Leu, Ming C.
2003-04-01
Today's product design, especially consumer product design, focuses more and more on individuation, originality, and time to market. One way to meet these challenges is to use interactive, creative product design methods together with rapid prototyping/rapid tooling. This paper presents a novel Freeform Object Design and Simultaneous Manufacturing (FODSM) method that combines natural interaction in the design phase with simultaneous manufacturing in the prototyping phase. The natural, interactive three-dimensional design environment is achieved by adopting virtual reality technology. The geometry of the designed object is defined through a process of "virtual sculpting", during which the designer can touch and visualize the designed object and hear the virtual manufacturing environment noise. During the design process, the computer records the sculpting trajectories and automatically translates them into NC code so as to simultaneously machine the designed part. The paper introduces the principle, implementation process, and key techniques of the new method, and compares it with other popular rapid prototyping methods.
Monks, K; Molnár, I; Rieger, H-J; Bogáti, B; Szabó, E
2012-04-06
Robust HPLC separations lead to fewer analysis failures and better method transfer as well as providing an assurance of quality. This work presents the systematic development of an optimal, robust, fast UHPLC method for the simultaneous assay of two APIs of an eye drop sample and their impurities, in accordance with Quality by Design principles. Chromatography software is employed to effectively generate design spaces (Method Operable Design Regions), which are subsequently employed to determine the final method conditions and to evaluate robustness prior to validation. Copyright © 2011 Elsevier B.V. All rights reserved.
Optimal cure cycle design of a resin-fiber composite laminate
NASA Technical Reports Server (NTRS)
Hou, Jean W.; Sheen, Jeenson
1987-01-01
A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of the composite cure process. Preliminary results of using the proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first-order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived using the direct differentiation method and are also solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized, and various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.
A knowledge-based design framework for airplane conceptual and preliminary design
NASA Astrophysics Data System (ADS)
Anemaat, Wilhelmus A. J.
The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure that can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e., the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This leads to the following benefits: (1) Reduced design time: computer-aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: less training and fewer calculation errors can yield substantial savings in design time and related cost. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, no such integrated knowledge-based conceptual and preliminary airplane design system currently exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs and demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller airplanes, business jets, airliners, and UAVs to fighters. Data from the various sizing methods will be compared with AAA results to validate these methods.
One new design, a Light Sport Aircraft (LSA), will be developed as an exercise in using the tool to design a new airplane. Using these tools shows an improvement in efficiency over using separate programs, owing to the automatic recalculation upon any change of input data. The direct visual feedback of 3D geometry in AAA-AML leads to quicker resolution of problems than conventional methods.
NASA Astrophysics Data System (ADS)
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current existing simplified methods and can be applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not been fully exploited yet to achieve enhanced performance under different levels of seismic hazard. 
A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis, which permits optimum seismic performance to be achieved with combined isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach both for the upgrade of existing seismically deficient bridges and for the design of new isolated bridges.
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for quality by design (QbD)-based analytical approaches. The concept of QbD applied to analytical method development is now known as analytical quality by design (AQbD). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and process analytical technology (PAT).
Investigating the Use of Design Methods by Capstone Design Students at Clemson University
ERIC Educational Resources Information Center
Miller, W. Stuart; Summers, Joshua D.
2013-01-01
The authors describe a preliminary study to understand the attitudes of engineering students regarding the use of design methods in projects, in order to identify the factors affecting or influencing the use of these methods by novice engineers. A senior undergraduate capstone design course at Clemson University, consisting of approximately fifty…
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
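The iterative-reduction idea above, i.e. starting from a dense sampling scheme and greedily removing the point whose loss least degrades parameter-estimation precision, can be illustrated on a generic two-parameter model using a D-optimality (Fisher-information determinant) criterion. The exponential model here is a hypothetical stand-in for the QMTI signal model, not the paper's method:

```python
import numpy as np

# Sensitivities of the two-parameter model y = a * exp(-b * x) with
# respect to (a, b), evaluated at the design points x. This model is a
# hypothetical stand-in for the QMTI signal model.
def jacobian(x, a=1.0, b=0.5):
    e = np.exp(-b * x)
    return np.column_stack([e, -a * x * e])

def reduce_design(x, n_keep):
    """Greedily drop points whose removal least degrades D-optimality
    (the determinant of the Fisher information J'J)."""
    x = np.asarray(x, dtype=float)
    while len(x) > n_keep:
        dets = [np.linalg.det(jacobian(np.delete(x, i)).T
                              @ jacobian(np.delete(x, i)))
                for i in range(len(x))]
        # Keep the reduced design with the largest remaining information.
        x = np.delete(x, int(np.argmax(dets)))
    return x

dense = np.linspace(0.1, 5.0, 25)   # dense initial sampling scheme
reduced = reduce_design(dense, n_keep=4)
print("retained design points:", np.round(reduced, 2))
```

Because the reduced design is always a subset of the original discrete sampling, pragmatic constraints (allowed measurement settings, no repeated measures) are enforced by construction, which is the advantage over free-form optimal design the abstract points out.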
A direct approach to the design of linear multivariable systems
NASA Technical Reports Server (NTRS)
Agrawal, B. L.
1974-01-01
The design of multivariable systems is considered, and design procedures are formulated in light of the most recent work on model matching. The term model matching is used exclusively to mean matching the input-output behavior of two systems; it is used in the frequency domain to indicate the comparison of two transfer matrices containing transfer functions as elements. Design methods in which non-interaction is not used as a criterion were studied. Two design methods are considered. The first is based solely upon the specification of generalized error coefficients for each individual transfer function of the overall system transfer matrix. The second is called the pole-fixing method because all the system poles are fixed at preassigned positions; the zeros of terms either above or below the diagonal are partially fixed via steady-state error coefficients. The advantages and disadvantages of each method are discussed, and an example is worked to demonstrate their use. The special cases of triangular decoupling and minimum constraints are discussed.
Stochastic Methods for Aircraft Design
NASA Technical Reports Server (NTRS)
Pelz, Richard B.; Ogot, Madara
1998-01-01
The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
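As a rough illustration of the simulated-annealing mechanics described above (a minimal generic sketch, not the modified SA actually used in these aircraft-design studies), with an illustrative multi-minima objective in place of a CFD code:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.95,
                        n_iter=200, seed=0):
    """Minimize `objective` over a list of design variables.

    All control parameters (step size, initial temperature, geometric
    cooling rate) are illustrative defaults, not values from the studies.
    """
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        # Perturb one randomly chosen design variable.
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)
        fc = objective(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability,
        # which is what lets SA escape local minima.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# A toy objective with two minima (not an aerodynamic quantity).
f = lambda v: (v[0] ** 2 - 4) ** 2 + v[1] ** 2
x, fx = simulated_annealing(f, [0.5, 3.0])
```

Because the best-so-far design is tracked separately from the current state, the returned value can never be worse than the starting design, even when many uphill moves are accepted early at high temperature.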
Designs and methods used in published Australian health promotion evaluations 1992-2011.
Chambers, Alana Hulme; Murphy, Kylie; Kolbe, Anthony
2015-06-01
To describe the designs and methods used in published Australian health promotion evaluation articles between 1992 and 2011. Using a content analysis approach, we reviewed 157 articles to analyse patterns and trends in designs and methods in Australian health promotion evaluation articles. The purpose was to provide empirical evidence about the types of designs and methods used. The most common type of evaluation conducted was impact evaluation. Quantitative designs were used exclusively in more than half of the articles analysed. Almost half the evaluations utilised only one data collection method. Surveys were the most common data collection method used. Few articles referred explicitly to an intended evaluation outcome or benefit and references to published evaluation models or frameworks were rare. This is the first time Australian-published health promotion evaluation articles have been empirically investigated in relation to designs and methods. There appears to be little change in the purposes, overall designs and methods of published evaluations since 1992. More methodologically transparent and sophisticated published evaluation articles might be instructional, and even motivational, for improving evaluation practice and result in better public health interventions and outcomes. © 2015 Public Health Association of Australia.
Hudson, Parisa; Hudson, Stephen D.; Handler, William B.; Scholl, Timothy J.; Chronik, Blaine A.
2010-01-01
High-performance shim coils are required for high-field magnetic resonance imaging and spectroscopy. Complete sets of high-power and high-performance shim coils were designed using two different methods: the minimum inductance and the minimum power target field methods. A quantitative comparison of shim performance in terms of merit of inductance (ML) and merit of resistance (MR) was made for shim coils designed using the minimum inductance and the minimum power design algorithms. In each design case, the difference in ML and the difference in MR given by the two design methods were <15%. Comparison of wire patterns obtained using the two design algorithms shows that minimum inductance designs tend to feature oscillations within the current density, while minimum power designs tend to feature less rapidly varying current densities and lower power dissipation. Overall, the differences in coil performance obtained by the two methods are relatively small. For the specific case of shim systems customized for small animal imaging, the reduced power dissipation obtained when using the minimum power method is judged to be more significant than the improvements in switching speed obtained from the minimum inductance method. PMID:20411157
A modified Finite Element-Transfer Matrix for control design of space structures
NASA Technical Reports Server (NTRS)
Tan, T.-M.; Yousuff, A.; Bahar, L. Y.; Konstandinidis, M.
1990-01-01
The Finite Element-Transfer Matrix (FETM) method was developed for reducing the computational efforts involved in structural analysis. While being widely used by structural analysts, this method does, however, have certain limitations, particularly when used for the control design of large flexible structures. In this paper, a new formulation based on the FETM method is presented. The new method effectively overcomes the limitations in the original FETM method, and also allows an easy construction of reduced models that are tailored for the control design. Other advantages of this new method include the ability to extract open loop frequencies and mode shapes with less computation, and simplification of the design procedures for output feedback, constrained compensation, and decentralized control. The development of this new method and the procedures for generating reduced models using this method are described in detail and the role of the reduced models in control design is discussed through an illustrative example.
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
ERIC Educational Resources Information Center
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS).
Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M; Khan, Ajmal
2016-01-01
This article compares the study design and statistical methods used in 2005, 2010 and 2015 of Pakistan Journal of Medical Sciences (PJMS). Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods were found in the analysis. The most frequent methods include: descriptive statistics (n=315, 73.4%), chi-square/Fisher's exact tests (n=205, 47.8%) and Student's t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher's exact test, logistic regression, epidemiological statistics, and non-parametric tests. This study shows that a diverse variety of statistical methods have been used in the research articles of PJMS and their frequency increased from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis in the published articles, while the cross-sectional study design was the most common study design.
Research on Visualization Design Method in the Field of New Media Software Engineering
NASA Astrophysics Data System (ADS)
Deqiang, Hu
2018-03-01
As science and technology develop and market competition and user demand intensify, a new design approach has emerged in the field of new media software engineering: the visualization design method. Applying the visualization design method to new media software engineering not only improves the operational efficiency of engineering projects but, more importantly, enhances the quality of the developed software through appropriate media of communication and transformation, and on this basis also promotes the continued progress of new media software engineering in China. This article therefore analyses the application of visualization design methods in new media software engineering, beginning with an overview of visualization design methods and a systematic analysis of their underlying technology.
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
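The first two propagation methods named above can be sketched as follows; the `weight` model, its inputs, and all numbers are hypothetical stand-ins for the aircraft analysis program, and only normally distributed, independent input uncertainties are assumed:

```python
import random
import statistics

def propagate_mc(model, means, sigmas, n=20000, seed=1):
    """Monte Carlo propagation: sample normally distributed inputs and
    summarize the resulting output distribution."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        x = [rng.gauss(m, s) for m, s in zip(means, sigmas)]
        samples.append(model(x))
    return statistics.mean(samples), statistics.stdev(samples)

def propagate_moments(model, means, sigmas, h=1e-6):
    """First-order method of moments: sigma_f^2 ≈ sum_i (df/dx_i * s_i)^2,
    with gradients estimated by forward finite differences."""
    f0 = model(means)
    var = 0.0
    for i, s in enumerate(sigmas):
        xp = list(means)
        xp[i] += h
        var += (((model(xp) - f0) / h) * s) ** 2
    return f0, var ** 0.5

# Hypothetical smooth "aircraft weight" model of two configuration variables.
weight = lambda v: 1000 + 20 * v[0] + 5 * v[0] * v[1]
mu_mc, sd_mc = propagate_mc(weight, [30.0, 10.0], [1.0, 0.5])
mu_mm, sd_mm = propagate_moments(weight, [30.0, 10.0], [1.0, 0.5])
```

For a smooth model like this the two estimates agree closely; the abstract's point is that a discontinuous design space breaks the gradient-based moments method, while sampling-based methods still apply.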
Trends in study design and the statistical methods employed in a leading general medicine journal.
Gosho, M; Sato, Y; Nagashima, K; Takahashi, S
2018-02-01
Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. A study of the comprehensive details and current trends in study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Under the CONSORT statement, prespecification and justification of sample size are obligatory when planning intervention studies. Although standard survival methods (eg the Kaplan-Meier estimator and Cox regression model) were most frequently applied, the Gray test and Fine-Gray proportional hazards model for considering competing risks were sometimes used for a more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. Single imputation methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential designs with interim analyses were standard, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in light of the information found in some publications. The use of adaptive designs with interim analyses has been increasing since the publication of the FDA guidance on adaptive design. © 2017 John Wiley & Sons Ltd.
Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill
2016-08-08
Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and a target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source, so there are limitations to the design accuracy for small, high-power applications with a short distance between the light source and the optical system. A design method that compensates for the light distribution of an extended source after an initial optics design based on a point source was proposed to overcome such limits, but its time-consuming process and limited design accuracy after multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes the optical loss with an extended source, and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, discrete planar optics were designed for LED flash illumination. In addition, discrete planar optics for LED illumination were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.
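A minimal sketch of the energy-mapping idea underlying such designs, under a point-source Lambertian assumption (the paper's extended-source correction is not modelled, and all numbers are illustrative): divide the source output into angular zones of equal flux and assign each zone an equal-energy annulus of the target, which yields uniform illuminance.

```python
import math

def energy_map_lambertian(theta_max_deg, target_radius, n_zones=10):
    """Map equal-flux angular zones of a Lambertian source to equal-area
    annuli of a target disk.

    Cumulative Lambertian flux from 0 to theta is proportional to sin²θ,
    so the zone boundary for energy fraction `frac` inverts that relation;
    equal-area annuli on the target give uniform illuminance.
    """
    theta_max = math.radians(theta_max_deg)
    total = math.sin(theta_max) ** 2          # cumulative flux ∝ sin²θ
    pairs = []
    for k in range(1, n_zones + 1):
        frac = k / n_zones                    # equal-energy fraction
        theta_k = math.asin(math.sqrt(frac * total))  # invert sin²θ = frac·total
        r_k = target_radius * math.sqrt(frac)         # equal-area annulus boundary
        pairs.append((math.degrees(theta_k), r_k))
    return pairs

# Hypothetical example: 60° half-angle source mapped onto a 25 mm disk.
zones = energy_map_lambertian(60.0, 25.0, n_zones=5)
```

Each `(θ_k, r_k)` pair says "light emitted below angle θ_k must land inside radius r_k"; the optical surface (here, the groove angles of the discrete planar optics) is then shaped to realize that mapping.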
How to Construct a Mixed Methods Research Design.
Schoonenboom, Judith; Johnson, R Burke
2017-01-01
This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.
Review of design optimization methods for turbomachinery aerodynamics
NASA Astrophysics Data System (ADS)
Li, Zhihui; Zheng, Xinqian
2017-08-01
In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and 'greener' but also need to be developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and the design of experiment methods, (3) gradient-based optimization methods for compressors and turbines, and (4) data mining techniques for Pareto fronts. We also present our own insights regarding the current research trends and the future optimization of turbomachinery designs.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods
Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.
2012-01-01
Rationale and Objectives Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power and confidence interval coverage of the three test statistics. Results The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs falls close to the nominal coverage for small and large sample sizes. Conclusions The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570
NASA Astrophysics Data System (ADS)
Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary
2015-10-01
Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.
Design and Analysis Tools for Supersonic Inlets
NASA Technical Reports Server (NTRS)
Slater, John W.; Folk, Thomas C.
2009-01-01
Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes aspects of creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code to implement the model and methods. Other computational platforms, such as Java, will also be explored.
Space Radiation Transport Methods Development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2002-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice, enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds, which severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing, which could be utilized in the final design as verification of the design optimized by the deterministic method.
A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design
ERIC Educational Resources Information Center
Palladino, John M.
2009-01-01
Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…
Integrating Software-Architecture-Centric Methods into the Rational Unified Process
2004-07-01
Places the Quality Attribute Workshop (QAW) in a life-cycle context. One issue that needs to be addressed is how scenarios produced in a QAW can be used by a software architecture design method, such as the Attribute-Driven Design (ADD) method.
Using mixed methods effectively in prevention science: designs, procedures, and examples.
Zhang, Wanqing; Watanabe-Galloway, Shinobu
2014-10-01
There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been an increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focused on two commonly used mixed methods designs: concurrent and sequential mixed methods designs. We discuss the types of mixed methods designs, the reasons for, and advantages of using a particular type of design, and the procedures of qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results and qualitative data can provide extensive insights. However, the emphasis of methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires an open-mindedness and reflection from the involved researchers.
Computer Graphics-aided systems analysis: application to well completion design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detamore, J.E.; Sarma, M.P.
1985-03-01
The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The development of the method is based on integrating the concepts of "Systems Analysis" with the techniques of "Computer Graphics". The concepts behind the method are very general in nature. This paper, however, illustrates the application of the method in solving gas well completion design problems. The use of the method will save time and improve the efficiency of such design and analysis work. The method can be extended to other design and analysis aspects of oil and gas wells.
[Optimum design of imaging spectrometer based on toroidal uniform-line-spaced (TULS) spectrometer].
Xue, Qing-Sheng; Wang, Shu-Rong
2013-05-01
Based on geometrical aberration theory, an optimum-design method for an imaging spectrometer based on a toroidal uniform-line-spaced grating spectrometer is proposed. To obtain the best optical parameters, a two-stage optimization is carried out using a genetic algorithm (GA) and the optical design software ZEMAX. A far-ultraviolet (FUV) imaging spectrometer is designed using this method. The working waveband is 110-180 nm, the slit size is 50 μm × 5 mm, and the numerical aperture is 0.1. Using ZEMAX, the design result is analyzed and evaluated. The results indicate that the MTF for different wavelengths is higher than 0.7 at the Nyquist frequency of 10 lp·mm⁻¹, and the RMS spot radius is less than 14 μm. Good imaging quality is achieved over the whole working waveband, and the design requirements of 0.5 mrad spatial resolution and 0.6 nm spectral resolution are satisfied. This confirms that the proposed optimum-design method is feasible; it can be applied to other wavebands and serves as a guide for designing grating-dispersion imaging spectrometers.
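The genetic-algorithm stage of such an optimization can be sketched generically; the merit function, variable bounds, and all GA parameters below are illustrative stand-ins, not the spectrometer's actual merit function or settings:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, n_gen=40, mut=0.1, seed=2):
    """Minimal real-coded GA: keep the better half of the population
    (elitism), breed children by uniform crossover, and occasionally apply
    clipped Gaussian mutation.  Minimizes `fitness`."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, i):
        lo, hi = bounds[i]
        return min(max(v, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(n_gen):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # Uniform crossover: each gene copied from either parent.
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(dim)]
            if rng.random() < mut:
                i = rng.randrange(dim)
                child[i] = clip(child[i] + rng.gauss(0, 0.1), i)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy merit function standing in for an RMS-spot-radius evaluation,
# minimized at the hypothetical parameter vector (0.3, 0.7).
merit = lambda v: (v[0] - 0.3) ** 2 + (v[1] - 0.7) ** 2
best = genetic_optimize(merit, [(0.0, 1.0), (0.0, 1.0)])
```

In the two-stage scheme the abstract describes, a global search like this would supply a starting point that a local optimizer inside the ray-tracing software then refines.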
Using Aerospace Technology To Design Orthopedic Implants
NASA Technical Reports Server (NTRS)
Saravanos, D. A.; Mraz, P. J.; Davy, D. T.
1996-01-01
Technology originally developed to optimize designs of composite-material aerospace structural components used to develop method for optimizing designs of orthopedic implants. Development effort focused on designing knee implants, long-term goal to develop method for optimizing designs of orthopedic implants in general.
Mechanistic flexible pavement overlay design program : tech summary.
DOT National Transportation Integrated Search
2009-07-01
The Louisiana Department of Transportation and Development (LADOTD) currently follows the 1993 AASHTO pavement design guide's component analysis method in its flexible pavement overlay thickness design. Such an overlay design method, how...
Method of transition from 3D model to its ontological representation in aircraft design process
NASA Astrophysics Data System (ADS)
Govorkov, A. S.; Zhilyaev, A. S.; Fokin, I. V.
2018-05-01
This paper proposes a method of transition from a 3D model to its ontological representation and describes its use in the aircraft design process. The problems of design for manufacturability and design automation are also discussed. The introduced method aims to ease the exchange of data between important aircraft design phases, namely engineering and design control, and is also intended to increase design speed and 3D model customizability. This requires careful selection of the complex systems (CAD/CAM/CAE/PDM) that provide the basis for integrating design with the technological preparation of production and that more fully take into account the characteristics of products and the processes for their manufacture. Solving this problem is important, as investment in such automation defines a company's competitiveness in the years ahead.
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
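The "simplest approach" the review describes, an individually randomized sample size inflated by a design effect, can be sketched for a continuous outcome with equal cluster sizes; the formula is the standard normal-approximation one, and the example numbers are hypothetical:

```python
import math
from statistics import NormalDist

def crt_sample_size(delta, sd, cluster_size, icc, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-arm parallel cluster randomized trial.

    Computes the individually randomized sample size for detecting a mean
    difference `delta` (outcome SD `sd`), then inflates it by the design
    effect 1 + (m - 1)·ICC to account for randomization by cluster.
    """
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)
    z_b = z(power)
    n_ind = 2 * (sd * (z_a + z_b) / delta) ** 2   # per arm, individual randomization
    deff = 1 + (cluster_size - 1) * icc           # design effect
    clusters = math.ceil(n_ind * deff / cluster_size)  # clusters needed per arm
    return math.ceil(n_ind), deff, clusters

# Hypothetical trial: detect a 0.5-SD difference, 20 subjects per cluster, ICC 0.05.
n_ind, deff, k = crt_sample_size(delta=0.5, sd=1.0, cluster_size=20, icc=0.05)
```

Even a modest ICC of 0.05 nearly doubles the required sample here (design effect 1.95), which is the review's motivation for the more refined methods covering unequal cluster sizes, attrition, and non-compliance.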
Using Mathematical Modeling and Set-Based Design Principles to Recommend an Existing CVL Design
2017-09-01
This thesis examines the trade space in major design areas such as tonnage, aircraft launch method, propulsion, and performance in order to illustrate... future conflict. Given these designs, it would be worth researching the feasibility of varying the launch method on some of the larger light aircraft carriers, such as the Liaoning.
CometBoards Users Manual Release 1.0
NASA Technical Reports Server (NTRS)
Guptill, James D.; Coroneos, Rula M.; Patnaik, Surya N.; Hopkins, Dale A.; Berke, Lazlo
1996-01-01
Several nonlinear mathematical programming algorithms for structural design applications are available at present. These include the sequence of unconstrained minimizations technique, the method of feasible directions, and the sequential quadratic programming technique. The optimality criteria technique and the fully utilized design concept are two other structural design methods. A project was undertaken to bring all these design methods under a common computer environment so that a designer can select any one of these tools that may be suitable for his/her application. To facilitate selection of a design algorithm, to validate and check out the computer code, and to ascertain the relative merits of the design tools, modest finite element structural analysis programs based on the concepts of stiffness and integrated force methods have been coupled to each design method. The code, which contains both these design and analysis tools, reads input information from analysis and design data files and casts the design of a structure as a minimum-weight optimization problem. The code can then solve it with a user-specified optimization technique and a user-specified analysis method. This design code is called CometBoards, an acronym for Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures. This manual describes for the user a step-by-step procedure for setting up the input data files and executing CometBoards to solve a structural design problem. The manual includes the organization of CometBoards; instructions for preparing input data files; the procedure for submitting a problem; illustrative examples; and several demonstration problems. A set of 29 structural design problems has been solved using all the optimization methods available in CometBoards. A summary of the optimum results obtained for these problems is appended to this users manual.
CometBoards, at present, is available for Posix-based Cray and Convex computers, Iris and Sun workstations, and the VM/CMS system.
Engineering Design Education Program for Graduate School
NASA Astrophysics Data System (ADS)
Ohbuchi, Yoshifumi; Iida, Haruhiko
This new engineering design education program attempts to improve mechanical engineering education for graduate students through collaboration between engineers and designers in teaching. The program is based on lectures and practical exercises concerning product design, and covers both engineering themes and design-process themes, i.e. project management, QFD, TRIZ, robust design (Taguchi method), ergonomics, usability, marketing, concept generation, etc. In the final exercise, all students were able to design a new product related to their own research theme by applying the learned knowledge and techniques. Through this method of engineering design education, we have confirmed that graduate students are able to experience technological and creative interest.
Design component method for sensitivity analysis of built-up structures
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Seong, Hwai G.
1986-01-01
A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
Light Trapping for Silicon Solar Cells: Theory and Experiment
NASA Astrophysics Data System (ADS)
Zhao, Hui
Crystalline silicon solar cells have been the mainstream technology for photovoltaic energy conversion since their invention in 1954. Since silicon is an indirect band gap material, its absorption coefficient is low for much of the solar spectrum, and the highest conversion efficiencies are achieved only in cells that are thicker than about 0.1 mm. Light trapping by total internal reflection is important to increase the optical absorption in silicon layers, and becomes increasingly important as the layers are thinned. Light trapping is typically characterized by the enhancement of the absorptance of a solar cell beyond the value for a single pass of the incident beam through an absorbing semiconductor layer. Using an equipartition argument, in 1982 Yablonovitch calculated an enhancement of 4n², where n is the refractive index. We have extracted effective light-trapping enhancements from published external quantum efficiency spectra in several dozen silicon solar cells. These results show that this "thermodynamic" enhancement has never been achieved experimentally. The reasons for incomplete light trapping could be a poor anti-reflection coating, inefficient light scattering, and parasitic absorption. We report the light-trapping properties of nanocrystalline silicon nip solar cells deposited onto two types of Ag/ZnO backreflectors at United Solar Ovonic, LLC. We prepared the first type by first making silver nanoparticles on a stainless steel substrate, and then overcoating the nanoparticles with a second silver layer. The second type was prepared at United Solar using a continuous silver film. Both types were then overcoated with a ZnO film. The root mean square roughness varied from 27 to 61 nm, and the diffuse reflectance at 1000 nm wavelength varied from 0.4 to 0.8. The finished cells have a thin indium-tin oxide layer on the top that acts as an antireflection coating.
For both backreflector types, the short-circuit photocurrent densities J_SC for solar illumination were about 25 mA/cm² for 1.5 micron cells. We also measured external quantum efficiency spectra and optical reflectance spectra, which were only slightly affected by the back reflector morphology. We performed a thermodynamic calculation for the optical absorptance in the silicon layer and the top oxide layer to explain the experimental results; the calculation is an extension of previous work by Stuart and Hall that incorporates the antireflection properties and absorption in the top oxide film. From our calculations and experimental measurements, we concluded that parasitic absorption in this film is the prominent reason for incomplete light trapping in these cells. To reduce the optical parasitic loss in the top oxide layer, we propose a bilayer design, and show the possible benefits to the photocurrent density.
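The Yablonovitch (equipartition) limit cited in this abstract is easy to evaluate. The sketch below computes the 4n² path-length enhancement and a standard textbook form of the weak-absorption absorptance of an ideally light-trapping slab; the value n = 3.6 for silicon near its band edge is an assumed representative number, not from the paper.

```python
def yablonovitch_limit(n):
    """Ergodic (equipartition) bound on the absorption path-length
    enhancement in a weakly absorbing textured slab: 4 * n**2."""
    return 4.0 * n ** 2

def weak_absorption_absorptance(alpha, d, n):
    """Absorptance of an ideally light-trapping slab of thickness d and
    absorption coefficient alpha in the weakly absorbing limit:
    A = 4 n^2 alpha d / (4 n^2 alpha d + 1)  (a standard textbook form)."""
    x = yablonovitch_limit(n) * alpha * d
    return x / (x + 1.0)

# Silicon near its band edge (assumed n ~ 3.6):
enhancement = yablonovitch_limit(3.6)  # ~51.8
```

The paper's point is that extracted experimental enhancements fall well below this bound, which motivates the parasitic-absorption analysis.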
Intelligent design of permanent magnet synchronous motor based on CBR
NASA Astrophysics Data System (ADS)
Li, Cong; Fan, Beibei
2018-05-01
The design process of the permanent magnet synchronous motor (PMSM) suffers from several problems, such as its complexity, over-reliance on designers' experience, and the lack of accumulation and inheritance of design knowledge. To address these problems, a PMSM design method based on CBR is proposed. In this paper, a case-based reasoning (CBR) case-similarity calculation is used to retrieve a suitable initial scheme. This method helps designers produce a conceptual PMSM solution quickly by referencing previous design cases. The case-retention process gives the system a self-enriching function that improves its design ability as the system continues to be used.
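The case-similarity calculation at the heart of CBR retrieval is commonly a weighted nearest-neighbour measure over normalized attributes. The sketch below is that common form, not the paper's exact measure (which the abstract does not specify); the PMSM attribute names, weights, and ranges are hypothetical.

```python
def local_similarity(a, b, lo, hi):
    """Range-normalized closeness of two numeric attribute values."""
    return 1.0 - abs(a - b) / (hi - lo)

def case_similarity(query, case, weights, ranges):
    """Weighted nearest-neighbour global similarity between a design
    query and a stored case (a common CBR retrieval form)."""
    total = sum(w * local_similarity(query[k], case[k], *ranges[k])
                for k, w in weights.items())
    return total / sum(weights.values())

# Hypothetical PMSM attributes, for illustration only:
query = {"rated_power_kW": 10.0, "rated_speed_rpm": 3000.0}
case = {"rated_power_kW": 8.0, "rated_speed_rpm": 3000.0}
sim = case_similarity(query, case,
                      weights={"rated_power_kW": 2.0, "rated_speed_rpm": 1.0},
                      ranges={"rated_power_kW": (0.0, 20.0),
                              "rated_speed_rpm": (0.0, 6000.0)})
```

Retrieval then returns the stored case with the highest similarity as the initial design scheme.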
New knowledge network evaluation method for design rationale management
NASA Astrophysics Data System (ADS)
Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao
2015-01-01
Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is a measure of DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives: the structural scale of the design rationale, association knowledge and reasoning ability, the degree of design justification support, and the degree of conciseness of the knowledge representation. A comprehensive value of the DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge network and the performance of the proposed measure are discussed. The evaluation method has been applied to two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that can provide an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to offer more effective guidance and support for the application and management of DR knowledge.
Structural analysis at aircraft conceptual design stage
NASA Astrophysics Data System (ADS)
Mansouri, Reza
In the past 50 years, computers have augmented human efforts at a tremendous pace, and the aircraft industry is no exception. The industry is more than ever dependent on computing because of its high level of complexity and the increasing need for excellence to survive a highly competitive marketplace. Designers choose computers to perform almost every analysis task. But while doing so, existing effective, accurate, and easy-to-use classical analytical methods are often forgotten, even though they can be very useful, especially in the early phases of aircraft design, where concept generation and evaluation demand physical visibility of design parameters to make decisions [39, 2004]. Structural analysis methods have been used since the earliest civilizations, centuries before computers were invented: the pyramids were designed and constructed by the Egyptians around 2000 B.C., the Parthenon was built by the Greeks in the fifth century B.C., and Dujiangyan was built by the Chinese around 256 B.C. Persepolis, Hagia Sophia, the Taj Mahal, and the Eiffel Tower are only a few more examples of historical buildings, bridges, and monuments that were constructed before any advances were made in computer-aided engineering. The aircraft industry is no exception either: in the first half of the 20th century, engineers used classical methods to design civil transport aircraft such as the Ford Tri-Motor (1926), Lockheed Vega (1927), Lockheed 9 Orion (1931), Douglas DC-3 (1935), Douglas DC-4/C-54 Skymaster (1938), Boeing 307 (1938), and Boeing 314 Clipper (1939), all of which became airborne without difficulty. Thus, while advanced numerical methods such as finite element analysis are among the most effective structural analysis tools, classical structural analysis methods can be just as useful, especially during the early phase of a fixed-wing aircraft design, where major decisions are made and concept generation and evaluation demand physical visibility of design parameters.
Considering the strengths and limitations of both methodologies, the questions to be answered in this thesis are: How valuable and compatible are the classical analytical methods in today's conceptual design environment? And can these methods complement each other? To answer these questions, this thesis investigates the pros and cons of classical analytical structural analysis methods during the conceptual design stage through the following objectives: illustrate the structural design methodology of these methods within the framework of the Aerospace Vehicle Design (AVD) lab's design lifecycle, and demonstrate the effectiveness of the moment distribution method through four case studies, considering and evaluating the strengths and limitations of these methods. In order to objectively quantify the limitations and capabilities of the analytical method at the conceptual design stage, each case study becomes more complex than the one before.
Torrens, George Edward
2018-01-01
Summative content analysis was used to define methods and heuristics from each case study. The review process was in two parts: (1) a literature review to identify conventional research methods and (2) a summative content analysis of published case studies, based on the identified methods and heuristics, to suggest an order and priority of where and when they were used. Over 200 research and design methods and design heuristics were identified. From the review of the 20 case studies, 42 were identified as being applied. The majority of methods and heuristics were applied in phase two, market choice. There appeared to be a disparity between the limited number of methods frequently used (under 10 within the 20 case studies) and the hundreds available. Implications for Rehabilitation The communication highlights a number of issues that have implications for those involved in assistive technology new product development: •The study defined over 200 well-established research and design methods and design heuristics that are available for use by those who specify and design assistive technology products, providing a comprehensive reference list for practitioners in the field; •The review within the study suggests only a limited number of research and design methods are regularly used by industrial-design-focused assistive technology new product developers; and, •Debate is required among practitioners working in this field to reflect on how a wider range of potentially more effective methods and heuristics may be incorporated into daily working practice.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
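The extreme-value idea in this abstract can be made concrete: the maximum of many independent loads tends toward a Gumbel (Type I extreme-value) distribution, and the design limit load is a chosen percentile of it. The sketch below uses invented location/scale values for illustration; the paper's actual distributions come from dynamic load simulations.

```python
import math

def gumbel_ppf(p, mu, beta):
    """Inverse CDF (percentile function) of the Gumbel distribution,
    F(x) = exp(-exp(-(x - mu) / beta)), the limiting law for the
    maximum of many independent loads."""
    return mu - beta * math.log(-math.log(p))

# Hypothetical fitted mission-maximum load distribution (values invented
# for illustration): location 100 kN, scale 8 kN. Take the 99th
# percentile as the design limit load.
design_limit_load = gumbel_ppf(0.99, mu=100.0, beta=8.0)  # ~136.8 kN
```

Raising the percentile (e.g. 0.999) trades structural weight for a lower probability of exceeding the design limit load in any mission.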
Mixing Qualitative and Quantitative Methods: Insights into Design and Analysis Issues
ERIC Educational Resources Information Center
Lieber, Eli
2009-01-01
This article describes and discusses issues related to research design and data analysis in the mixing of qualitative and quantitative methods. It is increasingly desirable to use multiple methods in research, but questions arise as to how best to design and analyze the data generated by mixed methods projects. I offer a conceptualization for such…
NASA Astrophysics Data System (ADS)
Essameldin, Mahmoud; Fleischmann, Friedrich; Henning, Thomas; Lang, Walter
2017-02-01
Freeform optical systems are playing an important role in the field of illumination engineering for redistributing light intensity, because of their capability of achieving accurate and efficient results. The authors presented the basic idea of the freeform lens design method at the 117th annual meeting of the German Society of Applied Optics (DGAO Proceedings). Now, we demonstrate the feasibility of the design method by designing and evaluating a freeform lens. The concepts of luminous intensity mapping, energy conservation, and a differential equation are combined in designing a lens for non-imaging applications. The procedures required to design a lens, including the simulations, are explained in detail. The optical performance is investigated by using a numerical simulation of optical ray tracing. For evaluation, the results are compared with another recently published design method, showing the accurate performance of the proposed method using a reduced number of mapping angles. As a part of the tolerance analysis of the fabrication processes, the influence of light source misalignments (translation and orientation) on the beam-shaping performance is presented. Finally, the importance of considering the extended light source while designing a freeform lens using the proposed method is discussed.
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety, at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.
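The α-percentile performance used in the performance measure approach (PMA) has a closed form when the limit state is linear in independent normal variables; the paper's inverse-reliability search computes the same quantity for general limit states. The sketch below is that linear special case only, with invented numbers.

```python
import math

def percentile_performance_linear(a, b, mu, sigma, beta_t):
    """Percentile performance of a LINEAR limit state
    g(x) = b + sum(a_i * x_i), with independent x_i ~ N(mu_i, sigma_i),
    at target reliability index beta_t:
        g_p = mean(g) - beta_t * std(g).
    The probabilistic constraint is feasible when g_p >= 0. For
    nonlinear g, PMA finds g_p by a search on the beta_t-sphere."""
    mean_g = b + sum(ai * mi for ai, mi in zip(a, mu))
    std_g = math.sqrt(sum((ai * si) ** 2 for ai, si in zip(a, sigma)))
    return mean_g - beta_t * std_g

# Hypothetical one-variable margin: g(x) = x, x ~ N(10, 2), beta_t = 3.
g_p = percentile_performance_linear([1.0], 0.0, [10.0], [2.0], 3.0)  # 4.0
```

A positive `g_p` means the design point satisfies the probabilistic constraint with reliability index at least `beta_t`.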
RobOKoD: microbial strain design for (over)production of target compounds.
Stanford, Natalie J; Millard, Pierre; Swainston, Neil
2015-01-01
Sustainable production of target compounds such as biofuels and high-value chemicals for the pharmaceutical, agrochemical, and chemical industries is becoming an increasing priority given their current dependency upon diminishing petrochemical resources. Designing these strains is difficult, with current methods focusing primarily on knocking out genes, dismissing other vital steps of strain design including the overexpression and dampening of genes. The design predictions from current methods also do not translate well into successful strains in the laboratory. Here, we introduce RobOKoD (Robust, Overexpression, Knockout and Dampening), a method for predicting strain designs for overproduction of targets. The method uses flux variability analysis to profile each reaction within the system under differing production percentages of target compound and biomass. Using these profiles, reactions are identified as potential knockout, overexpression, or dampening targets. The identified reactions are ranked according to their suitability, providing flexibility in strain design for users. The software was tested by designing a butanol-producing Escherichia coli strain, and was compared against the popular OptKnock and RobustKnock methods. RobOKoD shows favorable design predictions when predictions from these methods are compared to a successful, experimentally validated butanol-producing strain. Overall, RobOKoD provides users with rankings of predicted beneficial genetic interventions with which to support optimized strain design.
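The knockout/overexpression/dampening classification described in the abstract can be sketched schematically. This is an illustration only: it classifies a reaction from two hypothetical precomputed flux values (flux under a max-growth state and flux under a max-production state), whereas the published method derives full flux ranges from flux variability analysis on a genome-scale model.

```python
def classify_reaction(growth_flux, production_flux, tol=1e-6):
    """Schematic RobOKoD-style target classification from two
    hypothetical precomputed fluxes for one reaction."""
    if abs(production_flux) < tol and abs(growth_flux) > tol:
        return "knockout"        # reaction must be inactive for production
    if production_flux > growth_flux + tol:
        return "overexpression"  # production needs more flux than growth gives
    if tol < production_flux < growth_flux - tol:
        return "dampening"       # flux must be reduced, but not removed
    return "unchanged"
```

Ranking the classified reactions (e.g. by the size of the required flux change) then yields the prioritized intervention list the abstract describes.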
System and method of designing models in a feedback loop
Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.
2017-02-14
A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
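The aggregate-and-compare step of this patent abstract can be sketched minimally. This is a guess at one plausible reading, assuming numeric predictions and a mean aggregate; the patent itself does not fix the aggregation rule here.

```python
from statistics import mean

def compare_to_aggregate(predictions):
    """Aggregate several models' predictions of a common event and report
    each model's deviation from the aggregate -- the comparative
    information that is fed back to refine the models."""
    aggregate = mean(predictions.values())
    return aggregate, {name: p - aggregate for name, p in predictions.items()}

# Three hypothetical models predicting the same quantity:
agg, deviations = compare_to_aggregate({"m1": 0.9, "m2": 1.1, "m3": 1.6})
```

Each model's deviation would then drive the next design iteration in the feedback loop.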
The synthesis method for design of electron flow sources
NASA Astrophysics Data System (ADS)
Alexahin, Yu I.; Molodozhenzev, A. Yu
1997-01-01
The synthesis method for designing a relativistic magnetically-focused beam source is described in this paper. It makes it possible to find the shape of electrodes necessary to produce laminar space-charge flows. Electron guns with shielded cathodes designed with this method were analyzed using the EGUN code. The results obtained show the agreement of the synthesis and analysis calculations [1]. This method of electron gun calculation may be applied to immersed electron flows, which are of interest for EBIS electron gun design.
User-Centred Design Using Gamestorming.
Currie, Leanne
2016-01-01
User-centred design is becoming a standard in software engineering and has tremendous potential in healthcare. The purpose of this tutorial will be to demonstrate and provide participants with practice in user-centred design methods that involve 'Gamestorming', a form of brainstorming where 'the rules of life are temporarily suspended'. Participants will learn and apply gamestorming methods, including persona development via empathy mapping, and methods to translate artefacts derived from participatory design sessions into functional and design requirements.
A systematic composite service design modeling method using graph-based theory.
Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh
2015-01-01
The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy, as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems.
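The graph representation underlying this kind of method treats services as nodes and dependencies as directed edges, from which size and coupling figures can be read off. The sketch below is an illustration only; ComSDM's actual graph-based measures are richer, and the service names are hypothetical.

```python
def service_graph_metrics(dependencies):
    """Composite-service design as a directed graph: keys are services,
    values are the services they depend on. Returns simple size and
    coupling (fan-in) figures."""
    nodes = set(dependencies)
    for targets in dependencies.values():
        nodes.update(targets)
    fan_in = {n: 0 for n in nodes}
    for targets in dependencies.values():
        for t in targets:
            fan_in[t] += 1
    return {"services": len(nodes),
            "dependencies": sum(len(t) for t in dependencies.values()),
            "fan_in": fan_in}

# Hypothetical composite service: billing uses auth and db; auth uses db.
metrics = service_graph_metrics({"billing": ["auth", "db"], "auth": ["db"]})
```

High fan-in flags a heavily shared service, one simple proxy for the coupling/complexity that design quality measures try to capture.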
Learner Centred Design for a Hybrid Interaction Application
ERIC Educational Resources Information Center
Wood, Simon; Romero, Pablo
2010-01-01
Learner centred design methods highlight the importance of involving the stakeholders of the learning process (learners, teachers, educational researchers) at all stages of the design of educational applications and of refining the design through an iterative prototyping process. These methods have been used successfully when designing systems…
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2011-09-01
A new multi-level analysis method that introduces super-element modeling, derived from the multi-level analysis method first proposed by O. F. Hughes, is put forward in this paper to address the high time cost of adopting a rational-based optimal design method in ship structural design. The method was verified through its effective application to the optimization of the mid-ship section of a container ship. A full 3-D FEM model of a ship under static and quasi-static loads was used as the analysis object for evaluating the structural performance of the mid-ship module, including static strength and buckling performance. The results reveal that the new method can substantially reduce the computational cost of the rational-based optimization problem without decreasing its accuracy, which increases the feasibility and economic efficiency of using a rational-based optimal design method in ship structural design.
Colquhoun, Heather L; Squires, Janet E; Kolehmainen, Niina; Fraser, Cynthia; Grimshaw, Jeremy M
2017-03-04
Systematic reviews consistently indicate that interventions to change healthcare professional (HCP) behaviour are haphazardly designed and poorly specified. Clarity about methods for designing and specifying interventions is needed. The objective of this review was to identify published methods for designing interventions to change HCP behaviour. A search of MEDLINE, Embase, and PsycINFO was conducted from 1996 to April 2015. Using inclusion/exclusion criteria, a broad screen of abstracts by one rater was followed by a strict screen of full text for all potentially relevant papers by three raters. An inductive approach was first applied to the included studies to identify commonalities and differences between the descriptions of methods across the papers. Based on this process and knowledge of related literatures, we developed a data extraction framework that included, e.g. level of change (e.g. individual versus organization); context of development; a brief description of the method; tasks included in the method (e.g. barrier identification, component selection, use of theory). 3966 titles and abstracts and 64 full-text papers were screened to yield 15 papers included in the review, each outlining one design method. All of the papers reported methods developed within a specific context. Thirteen papers included barrier identification and 13 included linking barriers to intervention components; although not the same 13 papers. Thirteen papers targeted individual HCPs with only one paper targeting change across individual, organization, and system levels. The use of theory and user engagement were included in 13/15 and 13/15 papers, respectively. There is an agreement across methods of four tasks that need to be completed when designing individual-level interventions: identifying barriers, selecting intervention components, using theory, and engaging end-users. Methods also consist of further additional tasks. 
Examples of methods for designing organisation- and system-level interventions were limited. Further analysis of design tasks could facilitate the development of detailed guidelines for designing interventions.
NASA Astrophysics Data System (ADS)
Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa
2018-03-01
Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate a product design in order to make it simpler, easier, and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid product designers in extracting data, evaluating the assembly process, and providing recommendations for design improvement. Ideally, these three tasks are performed without interactive processing or user intervention, so that the product design evaluation can be done automatically. Input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.
An Approach to the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.
1997-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
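The iterative cycle described above can be caricatured as a scalar fixed-point loop: a design parameter p (standing in for the target pressure distribution) is updated until the predicted transition location reaches the desired value. The transition model below is invented; a real implementation would call the boundary-layer and N-factor stability codes at that point.

```python
import math

def design_loop(transition_of, target_xtr, relax=2.0, tol=1e-3, max_iter=100):
    """Update design parameter p until the transition location hits the target."""
    p = 0.0
    xtr = transition_of(p)
    for _ in range(max_iter):
        error = target_xtr - xtr
        if abs(error) < tol:
            break
        p += relax * error          # redesign toward more laminar flow
        xtr = transition_of(p)      # re-run the "stability analysis"
    return p, xtr

# Hypothetical model: a stronger favorable gradient delays transition,
# saturating at x/c = 0.7.
model = lambda p: 0.3 + 0.4 * (1.0 - math.exp(-max(p, 0.0)))
p, xtr = design_loop(model, target_xtr=0.6)
```

The relaxation factor plays the role of the redesign step; too small and the loop crawls, too large and it overshoots.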
An approach to the constrained design of natural laminar flow airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford Earl
1995-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method
NASA Astrophysics Data System (ADS)
Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui
2018-06-01
In order to reduce resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted mesh and structured grids. The parameters of the hull modification function are the design variables. A three-dimensional modeling method is used to alter the geometry. The Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship subject to constraints, including a displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and a theoretical basis for designing green ships.
Aerodynamic shape optimization using control theory
NASA Technical Reports Server (NTRS)
Reuther, James
1996-01-01
Aerodynamic shape design has long persisted as a difficult scientific challenge due to its highly nonlinear flow physics and daunting geometric complexity. However, with the emergence of Computational Fluid Dynamics (CFD) it has become possible to make accurate predictions of flows which are not dominated by viscous effects. It is thus worthwhile to explore the extension of CFD methods for flow analysis to the treatment of aerodynamic shape design. Two new aerodynamic shape design methods are developed which combine existing CFD technology, optimal control theory, and numerical optimization techniques. Flow analysis methods for the potential flow equation and the Euler equations form the basis of the two respective design methods. In each case, optimal control theory is used to derive the adjoint differential equations, the solution of which provides the necessary gradient information to a numerical optimization method much more efficiently than by conventional finite differencing. Each technique uses a quasi-Newton numerical optimization algorithm to drive an aerodynamic objective function toward a minimum. An analytic grid perturbation method is developed to modify body fitted meshes to accommodate shape changes during the design process. Both Hicks-Henne perturbation functions and B-spline control points are explored as suitable design variables. The new methods prove to be computationally efficient and robust, and can be used for practical airfoil design including geometric and aerodynamic constraints. Objective functions are chosen to allow both inverse design to a target pressure distribution and wave drag minimization. Several design cases are presented for each method illustrating its practicality and efficiency. These include non-lifting and lifting airfoils operating at both subsonic and transonic conditions.
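The efficiency argument above, that one adjoint solve yields the gradient with respect to all design variables, can be sketched on a toy linear "flow" model. All matrices here are random stand-ins, not a real flow discretization, and plain gradient descent stands in for the quasi-Newton driver.

```python
import numpy as np

# Toy "flow solver": the state u solves K u = B x, where x is the design vector.
# Objective J = 0.5 * ||u - u_target||^2. One adjoint solve, K^T lam = u - u_target,
# yields the gradient dJ/dx = B^T lam for ALL design variables at once -- versus
# one extra flow solve per variable with finite differencing.
rng = np.random.default_rng(0)
K = np.eye(4) + 0.1 * rng.standard_normal((4, 4))   # "flow" operator
B = rng.standard_normal((4, 3))                     # how the design enters
u_target = rng.standard_normal(4)                   # target pressure distribution

def J_and_grad(x):
    u = np.linalg.solve(K, B @ x)        # flow analysis
    r = u - u_target
    lam = np.linalg.solve(K.T, r)        # adjoint analysis
    return 0.5 * r @ r, B.T @ lam        # objective and full gradient

x = np.zeros(3)                          # steepest descent for brevity
for _ in range(2000):
    J, g = J_and_grad(x)
    x -= 0.02 * g
```

The adjoint gradient can be verified against finite differences, which is a standard sanity check when implementing such methods.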
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design.
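The quantity underlying all three analyses is the empirical AUC, a two-sample U-statistic: the probability that a diseased case scores higher than a non-diseased one, with ties counting one half. A minimal sketch (not any author's package):

```python
import numpy as np

def empirical_auc(scores_diseased, scores_nondiseased):
    """Two-sample U-statistic estimate of the AUC, with ties scored 0.5."""
    sd = np.asarray(scores_diseased, float)[:, None]
    sn = np.asarray(scores_nondiseased, float)[None, :]
    return float(((sd > sn) + 0.5 * (sd == sn)).mean())

# In a split-plot design each reader contributes one AUC per modality on his or
# her own case sample; the modality effect is then estimated from the
# per-reader differences.
auc_r1 = empirical_auc([0.9, 0.8, 0.7], [0.6, 0.5])   # perfect separation
```

The three methods in the paper differ in how they model the variances and covariances of these reader-level AUCs, not in the AUC estimate itself.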
Calibration of resistance factors for drilled shafts for the new FHWA design method.
DOT National Transportation Integrated Search
2013-01-01
The Load and Resistance Factor Design (LRFD) calibration of deep foundation in Louisiana was first completed for driven piles (LTRC Final Report 449) in May 2009 and then for drilled shafts using 1999 FHWA design method (O'Neill and Reese method) (...
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting the model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
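A one-variable sketch of the polynomial (response-surface) branch of this methodology: fit a cheap quadratic surrogate to noisy samples of an expensive objective, then optimize the surrogate in closed form. The objective and noise level are invented for illustration; the noise-filtering effect of the least-squares fit is the point being made in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, 40)                        # sampled design points
y = (x - 0.7) ** 2 + 0.05 * rng.standard_normal(40)   # noisy "CFD/experiment" data

c = np.polyfit(x, y, 2)         # least-squares quadratic filters the noise
x_opt = -c[1] / (2.0 * c[0])    # vertex of the fitted parabola, near 0.7
```

With more design variables the surrogate becomes a multivariate polynomial or a neural network, and the data requirement grows accordingly, as the abstract notes.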
Participatory design of healthcare technology with children.
Sims, Tara
2018-02-12
Purpose: There are many frameworks and methods for involving children in design research. Human-Computer Interaction provides rich methods for involving children when designing technologies. The paper aims to discuss these issues. Design/methodology/approach: This paper examines various approaches to involving children in design, considering whether each views children as study objects or as active participants. Findings: The BRIDGE method is a sociocultural approach to product design that views children as active participants, enabling them to contribute to the design process as competent and resourceful partners. An example is provided, in which BRIDGE was successfully applied to developing upper limb prostheses with children. Originality/value: Approaching design in this way can provide children with opportunities to develop social, academic and design skills and to develop autonomy.
Hybrid PV/diesel solar power system design using multi-level factor analysis optimization
NASA Astrophysics Data System (ADS)
Drake, Joshua P.
Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state-of-the-art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in-depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as it applied to solar power system design. The solar power design algorithms, software workflow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations, was generated. It was determined that there were several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.
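A minimal full-factorial sweep in the spirit of the multi-level factor analysis described above: every combination of discrete factor levels is costed and the cheapest feasible system kept. All component models, yields, and prices below are invented placeholders, not data from the Mainpat study.

```python
from itertools import product

pv_kw = [2, 4, 6, 8]          # factor 1: PV array size
battery_kwh = [5, 10, 20]     # factor 2: battery bank
diesel_kw = [0, 1, 2]         # factor 3: backup generator

def annual_cost(pv, batt, gen, load_kwh=4000.0):
    """Toy cost model: capital cost plus a penalty on generator-served load."""
    solar_kwh = pv * 1500.0                                    # toy annual yield
    served = min(solar_kwh, load_kwh) * min(batt / 10.0, 1.0)  # storage-limited
    unmet = max(load_kwh - served, 0.0)
    if gen * 8760.0 * 0.3 < unmet:      # generator cannot cover the deficit
        return float("inf")             # infeasible design
    return 900.0 * pv + 200.0 * batt + 500.0 * gen + 0.4 * unmet

best = min(product(pv_kw, battery_kwh, diesel_kw), key=lambda c: annual_cost(*c))
```

Factor analysis proper would fit main effects from a fraction of these runs rather than sweeping them all; the full sweep is the brute-force baseline it improves on.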
Problem Solving Techniques for the Design of Algorithms.
ERIC Educational Resources Information Center
Kant, Elaine; Newell, Allen
1984-01-01
Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.
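The counting argument can be illustrated numerically: with n free design variables, a linear correction basis can cancel a (non-polynomial) baseline drag at n sampled Mach numbers exactly, while the drag between the samples stays uncontrolled. The baseline drag curve and correction basis below are invented for illustration.

```python
import numpy as np

n_vars = 4
d0 = lambda M: 0.02 + 0.01 * np.sin(40.0 * M)   # hypothetical baseline drag vs Mach
Phi = lambda M: np.vander(M, n_vars)            # hypothetical correction basis

M_s = np.linspace(0.70, 0.78, n_vars)           # as many sample points as variables
w = np.linalg.solve(Phi(M_s), d0(M_s))          # cancels drag exactly at the samples

resid = lambda M: d0(M) - Phi(np.atleast_1d(M)) @ w   # corrected drag curve
```

The corrected drag is (numerically) zero at the four sampled Mach numbers but not in between, which is precisely the point-optimization behavior the profile method is designed to avoid.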
Feng, Shuo
2014-01-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses with an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desired excitation patterns. PMID:24834420
Feng, Shuo; Ji, Jim
2014-04-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses with an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desired excitation patterns.
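The linear-algebra core of such pulse designs can be sketched as follows: in the small-tip-angle regime the target pattern m and RF waveform b are related by a linear system m = A b, solved in the least-squares sense by conjugate gradient on the normal equations. Here A is a random complex stand-in for the Fourier/gridding system matrix, not the operator from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
m_target = rng.standard_normal(64) + 1j * rng.standard_normal(64)

def cg_normal(A, m, iters=200):
    """Conjugate gradient on A^H A x = A^H m (the least-squares normal equations)."""
    AhA, Ahm = A.conj().T @ A, A.conj().T @ m
    x = np.zeros(A.shape[1], dtype=complex)
    r = Ahm - AhA @ x                    # residual of the normal equations
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = AhA @ p
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-20:               # converged
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b_rf = cg_normal(A, m_target)
```

The speedup reported in the abstract comes from applying A and A^H fast via gridding rather than forming them densely as done here.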
A new method named as Segment-Compound method of baffle design
NASA Astrophysics Data System (ADS)
Qin, Xing; Yang, Xiaoxu; Gao, Xin; Liu, Xishuang
2017-02-01
As observation demands have increased, so have the requirements on lens imaging quality. This paper proposes the Segment-Compound method of baffle design. Three traditional baffle design methods can be characterized as Inside-to-Outside, Outside-to-Inside, and Mirror Symmetry. All four methods were used to design stray-light suppression structures for the same transmission-type optical system. The structures were then modeled and simulated with SolidWorks, CAXA, and TracePro, and point source transmittance (PST) curves were obtained to describe their performance. The results show that the Segment-Compound method suppresses stray light more effectively; moreover, it is easy to apply and requires no special materials.
Constrained Aerothermodynamic Design of Hypersonic Vehicles
NASA Technical Reports Server (NTRS)
Gally, Tom; Campbell, Dick
2002-01-01
An investigation was conducted into possible methods of incorporating a hypersonic design capability with aerothermodynamic constraints into the CDISC aerodynamic design tool. The work was divided into two distinct phases: develop relations between surface curvature and hypersonic pressure coefficient which are compatible with CDISC's direct-iterative design method; and explore and implement possible methods of constraining the heat transfer rate over all or portions of the design surface. The main problem in implementing this method has been the weak relationship between surface shape and pressure coefficient at the stagnation point and the need to design around the blunt leading edge, where there is a slope singularity. The final results show that some success has been achieved, but further improvements are needed.
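The kind of direct surface-slope/pressure relation such a method needs is exemplified by the modified Newtonian model, Cp = Cp_max * sin^2(theta), with theta the local surface inclination to the freestream. This is a standard hypersonic approximation, not necessarily the relation adopted in CDISC; note how Cp flattens near the stagnation point (theta near 90 degrees), which is the weak shape-pressure coupling the abstract mentions.

```python
import math

def newtonian_cp(theta_rad, cp_max=2.0):
    """Modified Newtonian pressure coefficient for surface inclination theta."""
    if theta_rad <= 0.0:          # shadowed surface, facing away from the flow
        return 0.0
    s = math.sin(theta_rad)
    return cp_max * s * s         # flat near theta = pi/2: weak shape sensitivity
```

cp_max = 2.0 corresponds to the classical Newtonian limit; the "modified" variant sets it from the stagnation pressure behind a normal shock.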
Computational predictive methods for fracture and fatigue
NASA Technical Reports Server (NTRS)
Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.
1994-01-01
The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.
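For contrast with the parameter-free methods described above, the conventional damage-tolerant calculation integrates a Paris crack-growth law, da/dN = C * (dK)^m with dK = Y * dS * sqrt(pi * a), from the initial flaw size a0 to the critical size ac. C, m, Y, and the stress range dS below are illustrative values, not data for any particular aircraft component.

```python
import math

def cycles_to_failure(a0, ac, dS, C=1e-12, m=3.0, Y=1.12, steps=100000):
    """Integrate dN = da / (C * dK^m) from a0 to ac with the midpoint rule."""
    n, a = 0.0, a0
    da = (ac - a0) / steps
    for _ in range(steps):
        dK = Y * dS * math.sqrt(math.pi * (a + 0.5 * da))   # midpoint of the step
        n += da / (C * dK ** m)
        a += da
    return n
```

For m = 3 the integral has a closed form, which makes the sketch easy to check against; it is exactly the dependence on constants like C and m that the paper's approach avoids.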
Computational predictive methods for fracture and fatigue
NASA Astrophysics Data System (ADS)
Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.
1994-09-01
The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specifications MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.
Field Guide for Designing Human Interaction with Intelligent Systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Thronesbery, Carroll G.
1998-01-01
The characteristics of this Field Guide approach address the problems of designing innovative software to support user tasks. The requirements for novel software are difficult to specify a priori, because there is not sufficient understanding of how the users' tasks should be supported, and there are not obvious pre-existing design solutions. When the design team is in unfamiliar territory, care must be taken to avoid rushing into detailed design, requirements specification, or implementation of the wrong product. The challenge is to get the right design and requirements in an efficient, cost-effective manner. This document's purpose is to describe the methods we are using to design human interactions with intelligent systems which support Space Shuttle flight controllers in the Mission Control Center at NASA/Johnson Space Center. Although these software systems usually have some intelligent features, the design challenges arise primarily from the innovation needed in the software design. While these methods are tailored to our specific context, they should be extensible, and helpful to designers of human interaction with other types of automated systems. We review the unique features of this context so that you can determine how to apply these methods to your project. Throughout this Field Guide, goals of the design methods are discussed. This should help designers understand how a specific method might need to be adapted to the project at hand.
A space radiation transport method development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2004-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministic method optimized design. Published by Elsevier Ltd on behalf of COSPAR.
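The lowest-order (straight-ahead) term a deterministic transport code evaluates along each traced ray is exponential attenuation through the shield layers the ray crosses. A minimal sketch of that per-ray kernel; the layer thicknesses and cross-sections below are made-up numbers, not ISS data.

```python
import math

def transmitted_fraction(layers):
    """layers: list of (areal density in g/cm^2, mass attenuation in cm^2/g)."""
    optical_depth = sum(t * sigma for t, sigma in layers)
    return math.exp(-optical_depth)   # uncollided fraction along this ray

shield = [(2.5, 0.10), (1.0, 0.35)]   # e.g. hull wall plus an equipment rack
```

A deterministic field map repeats this (plus the higher-order collision terms) over thousands of rays per target point, which is why the 14 ms per ray trace quoted above dominates the cost.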
Sanchez-Lite, Alberto; Garcia, Manuel; Domingo, Rosario; Angel Sebastian, Miguel
2013-01-01
Musculoskeletal disorders (MSDs) that result from poor ergonomic design are one of the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on a method of assessment that detects and evaluates the potential risks experienced by the operative when faced with these types of physical injuries. The method of assessment will improve the process design identifying potential ergonomic improvements from various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the differing phases of the product life cycle. This paper presents a novel postural assessment method (NERPA) fit for product-process design, which was developed with the help of a digital human model together with a 3D CAD tool, which is widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment can allow the functional performance of the parts to be addressed. Such tools can also provide us with an ergonomic workstation design, together with a competitive advantage in the assembly process. The method developed was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them. This study demonstrated the proposed method's usefulness and found statistically significant differences in the evaluations of the proposed method and the widely used Rapid Upper Limb Assessment (RULA) method.
Research on the Bionics Design of Automobile Styling Based on the Form Gene
NASA Astrophysics Data System (ADS)
Aili, Zhao; Long, Jiang
2017-09-01
From the point of view of form-gene heritage, this paper analyzes the genetic make-up, cultural inheritance, and aesthetic features in the evolution and development of brand automobile forms, and proposes a bionic design concept and method for automobile styling. This innovative method must be based on the form gene, and the consistency and combination of form elements must be maintained during the design. Taking the design of Maserati as an example, the paper illustrates the design method and philosophy in terms of form-gene expression and bionic design innovation for future automobile styling.
Design of Aspirated Compressor Blades Using Three-dimensional Inverse Method
NASA Technical Reports Server (NTRS)
Dang, T. Q.; Rooij, M. Van; Larosiliere, L. M.
2003-01-01
A three-dimensional viscous inverse method is extended to allow blading design with full interaction between the prescribed pressure-loading distribution and a specified transpiration scheme. Transpiration on blade surfaces and endwalls is implemented as inflow/outflow boundary conditions, and the basic modifications to the method are outlined. This paper focuses on a discussion concerning an application of the method to the design and analysis of a supersonic rotor with aspiration. Results show that an optimum combination of pressure-loading tailoring with surface aspiration can lead to a minimization of the amount of sucked flow required for a net performance improvement at design and off-design operations.
Active controls: A look at analytical methods and associated tools
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Adams, W. M., Jr.; Mukhopadhyay, V.; Tiffany, S. H.; Abel, I.
1984-01-01
A review of analytical methods and associated tools for active controls analysis and design problems is presented. Approaches employed to develop mathematical models suitable for control system analysis and/or design are discussed. Significant efforts have been expended to develop tools to generate the models from the standpoint of control system designers' needs and develop the tools necessary to analyze and design active control systems. Representative examples of these tools are discussed. Examples where results from the methods and tools have been compared with experimental data are also presented. Finally, a perspective on future trends in analysis and design methods is presented.
Category's analysis and operational project capacity method of transformation in design
NASA Astrophysics Data System (ADS)
Obednina, S. V.; Bystrova, T. Y.
2015-10-01
The method of transformation is attracting widespread interest in fields such as contemporary design. In design theory, however, little attention has been paid to the categorical status of the term "transformation". This paper presents a conceptual analysis of transformation based on the theory of form employed in the influential essays of Aristotle and Thomas Aquinas. Transformation as a method of shaping in design is explored, and the potential application of this term in design is demonstrated.
The role of the optimization process in illumination design
NASA Astrophysics Data System (ADS)
Gauvin, Michael A.; Jacobsen, David; Byrne, David J.
2015-07-01
This paper examines the role of the optimization process in illumination design. We will discuss why the starting point of the optimization process is crucial to a better design and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute force method and the Downhill Simplex method will be used to demonstrate optimization methods with focus on using interactive design tools to create better starting points to streamline the optimization process.
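The paper's point that the starting point matters can be sketched in one dimension: a local search on a multimodal merit function converges to whichever optimum it starts near, so a coarse brute-force scan is used to seed it. The merit function and the crude pattern search below are invented stand-ins for an illumination merit function and the Downhill Simplex method.

```python
import math

def merit(x):
    """Two-well merit function: global optimum near x = 3, local one near x = -2."""
    return -math.exp(-(x - 3.0) ** 2) - 0.6 * math.exp(-(x + 2.0) ** 2)

def local_search(x, step=0.5, tol=1e-6):
    """Crude pattern search: try both neighbors, shrink the step when stuck."""
    while step > tol:
        for cand in (x - step, x + step):
            if merit(cand) < merit(x):
                x = cand
                break
        else:
            step /= 2.0          # no improvement at this step size: refine
    return x

start = min((i * 0.1 for i in range(-60, 61)), key=merit)   # brute-force seed
x_best = local_search(start)     # refines the best scanned point
# Started from x = -2.5 instead, the same search stalls at the weaker optimum.
```

Seeding the local optimizer from the brute-force winner is exactly the "better starting point" workflow the paper advocates with interactive design tools.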