Observational Requirements for High-Fidelity Reverberation Mapping
NASA Technical Reports Server (NTRS)
Horne, Keith; Peterson, Bradley M.; Collier, Stefan J.; Netzer, Hagai
2004-01-01
We present a series of simulations to demonstrate that high-fidelity velocity-delay maps of the emission-line regions in active galactic nuclei can be obtained from time-resolved spectrophotometric data sets like those that will arise from the proposed Kronos satellite. While previous reverberation-mapping experiments have established the size scale R of the broad emission-line regions from the mean time delay τ = R/c between the line and continuum variations and have provided strong evidence for supermassive black holes, the detailed structure and kinematics of the broad-line region remain ambiguous and poorly constrained. Here we outline the technical improvements that will be required to successfully map broad-line regions by reverberation techniques. For typical AGN continuum light curves, characterized by power-law power spectra P(f) ∝ f^α with α = -1.5 ± 0.5, our simulations show that a small UV/optical spectrometer like Kronos will clearly distinguish between currently viable alternative kinematic models. From spectra sampled at time intervals Δt and sustained for a total duration T_dur, we can reconstruct high-fidelity velocity-delay maps with velocity resolution comparable to that of the spectra, and delay resolution Δτ ≈ 2Δt, provided T_dur exceeds the broad-line region light crossing time by at least a factor of three. Even very complicated kinematical models, such as a Keplerian flow with a superimposed spiral wave pattern, are resolved in maps from our simulated Kronos datasets. Reverberation mapping with Kronos data is therefore likely to deliver the first clear maps of the geometry and kinematics in the broad emission-line regions 1-100 microarcseconds from supermassive black holes.
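The continuum model assumed in simulations of this kind can be reproduced with a standard Fourier recipe. Below is a minimal sketch, not taken from the paper, of the Timmer & König (1995) method for generating a random light curve with a power-law power spectrum P(f) ∝ f^α; all parameter values (α, n, dt) are illustrative assumptions.

```python
import numpy as np

def power_law_lightcurve(n=1024, dt=1.0, alpha=-1.5, seed=0):
    """Return times and a zero-mean light curve with P(f) proportional to f**alpha."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dt)[1:]          # skip the zero frequency
    amp = freqs ** (alpha / 2.0)                  # |FT| ~ sqrt(P(f))
    re = rng.normal(size=freqs.size) * amp
    im = rng.normal(size=freqs.size) * amp
    spec = np.concatenate(([0.0], re + 1j * im))  # zero power at f=0 -> zero mean
    lc = np.fft.irfft(spec, n=n)
    return np.arange(n) * dt, lc

t, flux = power_law_lightcurve()  # red-noise continuum with alpha = -1.5
```

A simulated emission-line light curve would then follow by convolving such a continuum with a trial velocity-delay map.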
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobuta, K.; Akiyama, M.; Ueda, Y.
2012-12-20
In order to investigate the growth of supermassive black holes (SMBHs), we construct the black hole mass function (BHMF) and Eddington ratio distribution function (ERDF) of X-ray-selected broad-line active galactic nuclei (AGNs) at z ≈ 1.4 in the Subaru XMM-Newton Deep Survey (SXDS) field. A significant part of the accretion growth of SMBHs is thought to take place in this redshift range. Black hole masses of X-ray-selected broad-line AGNs are estimated using the width of the broad Mg II line and the 3000 Å monochromatic luminosity. We supplement the Mg II FWHM values with the Hα FWHM obtained from our NIR spectroscopic survey. Using the black hole masses of broad-line AGNs at redshifts between 1.18 and 1.68, the binned broad-line AGN BHMFs and ERDFs are calculated using the V_max method. To properly account for selection effects that impact the binned estimates, we derive the corrected broad-line AGN BHMFs and ERDFs by applying the maximum likelihood method, assuming that the ERDF is constant regardless of the black hole mass. We do not correct for the non-negligible uncertainties in virial BH mass estimates. If we compare the corrected broad-line AGN BHMF with that in the local universe, then the corrected BHMF at z = 1.4 has a higher number density above 10^8 M_⊙ but a lower number density below that mass range. The evolution may be indicative of a downsizing trend of accretion activity among the SMBH population. The evolution of broad-line AGN ERDFs from z = 1.4 to 0 indicates that the fraction of broad-line AGNs with accretion rates close to the Eddington limit is higher at higher redshifts.
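A minimal sketch of the single-epoch ("virial") mass estimate from the Mg II FWHM and 3000 Å luminosity follows. The abstract does not give the exact recipe used, so the zero point and exponents below, which follow the Vestergaard & Osmer (2009) calibration, should be treated as an assumption.

```python
import numpy as np

def mbh_virial_mgii(fwhm_kms, lam_L3000_erg_s):
    """Black-hole mass in solar masses from the Mg II FWHM (km/s)
    and the 3000 A monochromatic luminosity lambda*L_3000 (erg/s)."""
    return 10 ** (6.86
                  + 2.0 * np.log10(fwhm_kms / 1e3)
                  + 0.5 * np.log10(lam_L3000_erg_s / 1e44))

# Example: FWHM = 4000 km/s and lambda*L_3000 = 1e45 erg/s give ~3.7e8 M_sun.
print(f"{mbh_virial_mgii(4000.0, 1e45):.2e}")
```

Masses of this kind, together with bolometric luminosities, are what populate the BHMF and ERDF bins via the V_max weighting.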
NASA Astrophysics Data System (ADS)
Trump, Jonathan R.; Hsu, Alexander D.; Fang, Jerome J.; Faber, S. M.; Koo, David C.; Kocevski, Dale D.
2013-02-01
We present the first quantified, statistical map of broad-line active galactic nucleus (AGN) frequency with host galaxy color and stellar mass in nearby (0.01 < z < 0.11) galaxies. Aperture photometry and z-band concentration measurements from the Sloan Digital Sky Survey are used to disentangle AGN and galaxy emission, resulting in estimates of uncontaminated galaxy rest-frame color, luminosity, and stellar mass. Broad-line AGNs are distributed throughout the blue cloud and green valley at a given stellar mass, and are much rarer in quiescent (red sequence) galaxies. This is in contrast to the published host galaxy properties of weaker narrow-line AGNs, indicating that broad-line AGNs occur during a different phase in galaxy evolution. More luminous broad-line AGNs have bluer host galaxies, even at fixed mass, suggesting that the same processes that fuel nuclear activity also efficiently form stars. The data favor processes that simultaneously fuel both star formation activity and rapid supermassive black hole accretion. If AGNs cause feedback on their host galaxies in the nearby universe, the evidence of galaxy-wide quenching must be delayed until after the broad-line AGN phase.
Hints of correlation between broad-line and radio variations for 3C 120
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H. T.; Bai, J. M.; Li, S. K.
2014-01-01
In this paper, we investigate the correlation between broad-line and radio variations for the broad-line radio galaxy 3C 120. Using the z-transformed discrete correlation function method and the model-independent flux randomization/random subset selection (FR/RSS) Monte Carlo method, we find that broad Hβ line variations lead the 15 GHz variations. The FR/RSS method shows that the Hβ line variations lead the radio variations by τ_ob = 0.34 ± 0.01 yr. This time lag can be used to locate the position of the emitting region of radio outbursts in the jet, on the order of ∼5 lt-yr from the central engine. This distance is much larger than the size of the broad-line region. The large separation of the radio outburst emitting region from the broad-line region will observably influence the gamma-ray emission in 3C 120.
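The FR/RSS procedure is easy to sketch. The following is an illustration, not the authors' code: each realization perturbs the fluxes by their errors (FR), resamples the epochs with replacement and keeps the unique ones (RSS), and records the lag that maximizes a simple interpolated cross-correlation; the spread of the peak lags gives the lag uncertainty.

```python
import numpy as np

def frrss_lag(t1, f1, e1, t2, f2, e2, lags, n_trials=500, seed=0):
    """Median and scatter of the lag by which series 1 leads series 2.
    t2 is assumed sorted; lags is an array of trial lags (same time units)."""
    rng = np.random.default_rng(seed)
    peaks = np.empty(n_trials)
    for k in range(n_trials):
        i = np.unique(rng.integers(0, t1.size, t1.size))   # RSS on series 1
        j = np.unique(rng.integers(0, t2.size, t2.size))   # RSS on series 2
        a = f1[i] + rng.normal(0.0, e1[i])                 # FR on series 1
        b = f2[j] + rng.normal(0.0, e2[j])                 # FR on series 2
        r = [np.corrcoef(a, np.interp(t1[i] + lag, t2[j], b))[0, 1] for lag in lags]
        peaks[k] = lags[int(np.argmax(r))]
    return np.median(peaks), np.std(peaks)
```

The quoted 0.34 ± 0.01 yr is the median and scatter of exactly such a peak-lag distribution.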
Observational Definition of Future AGN Echo-Mapping Experiments
NASA Technical Reports Server (NTRS)
Collier, Stefan; Peterson, Bradley M.; Horne, Keith
2001-01-01
We describe numerical simulations we have begun in order to determine the observational requirements for future echo-mapping experiments. We focus on two particular problems: (1) determination of the structure and kinematics of the broad-line region through emission-line reverberation mapping, and (2) detection of interband continuum lags that may be used as a probe of the continuum source, presumably a temperature-stratified accretion disk. Our preliminary results suggest the broad-line region can be reverberation-mapped to good precision with spectra of signal-to-noise ratio per pixel S/N ≈ 30, time resolution Δt ≈ 0.1 day, and duration of about 60 days (a factor of three larger than the longest time scale in the input models); data that meet these requirements do not yet exist. We also find that interband continuum lags of ≳0.5 days can be detected at ≳95% confidence with at least daily observations for about 6 weeks, or rather more easily and definitively with shorter programs undertaken with satellite-based observatories. The results of these simulations show that significant steps forward in multiwavelength monitoring will almost certainly require dedicated facilities.
NASA Astrophysics Data System (ADS)
Ferraro, A.; Zografopoulos, D. C.; Caputo, R.; Beccherelli, R.
2017-04-01
The spectral response of a terahertz (THz) filter is investigated in detail for different angles of incidence and polarization of the incoming THz wave. The filter is fabricated by patterning an aluminum frequency-selective surface of cross-shaped apertures on a thin foil of the low-loss cyclo-olefin polymer Zeonor. Two different types of resonances are observed, namely, a broadline resonance stemming from the transmittance of the slot apertures and a series of narrowline guided-mode resonances, with the latter being investigated by employing the grating theory. Numerical simulations of the filter transmittance based on the finite-element method agree with experimental measurements by means of THz time domain spectroscopy (THz-TDS). The results reveal extensive possibilities for tuning the guided-mode resonances by mechanically adjusting the incidence or polarization angle, while the fundamental broadline resonance is not significantly affected. Such filters are envisaged as functional elements in emerging THz systems for filtering or sensing applications.
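The angle tuning of the guided-mode resonances follows from first-order phase matching ("grating theory"). The sketch below illustrates that condition; the grating period and effective mode index are illustrative assumptions, not values from the paper.

```python
import numpy as np

C = 299.792458  # speed of light in um*THz (lambda[um] * f[THz] = C)

def gmr_frequency_THz(period_um, n_eff, theta_deg, m=1):
    """Resonance frequencies where the m-th diffracted order phase-matches a
    guided mode: n_eff = m*lambda/period +/- sin(theta). Returns both branches."""
    s = np.sin(np.radians(theta_deg))
    lam_co = period_um * (n_eff - s) / m       # co-propagating branch
    lam_counter = period_um * (n_eff + s) / m  # counter-propagating branch
    return C / lam_co, C / lam_counter

# Assumed 200 um period and n_eff = 1.5: the two branches split and shift with
# incidence angle, whereas a slot-aperture (broadline) resonance set purely by
# the cross geometry stays essentially fixed.
print(gmr_frequency_THz(200.0, 1.5, 20.0))
```

This is the qualitative behaviour the measurements show: narrowline resonances that tune mechanically with angle on top of an angle-insensitive broadline transmission band.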
Truncated Cross Effect Dynamic Nuclear Polarization: An Overhauser Effect Doppelgänger.
Equbal, Asif; Li, Yuanxin; Leavesley, Alisa; Huang, Shengdian; Rajca, Suchada; Rajca, Andrzej; Han, Songi
2018-05-03
The discovery of a truncated cross-effect (CE) in dynamic nuclear polarization (DNP) NMR that has the features of an Overhauser-effect DNP (OE-DNP) is reported here. The apparent OE-DNP, where minimal μw power achieved optimum enhancement, was observed when doping Trityl-OX063 with a pyrroline nitroxide radical that possesses electron-withdrawing tetracarboxylate substituents (tetracarboxylate-ester-pyrroline or TCP) in vitrified water/glycerol at 6.9 T and at 3.3 to 85 K, in apparent contradiction to expectations. While the observations are fully consistent with OE-DNP, we discover that a truncated cross-effect (tCE) is the underlying mechanism, owing to TCP's shortened T_1e. We take this observation as a guideline and demonstrate that a crossover from CE to tCE can be replicated by simulating the CE of a narrow-line (Trityl-OX063) and a broad-line (TCP) radical pair, with a significantly shortened T_1e of the broad-line radical.
On the origin of gamma-rays in Fermi blazars: beyond the broad-line region
NASA Astrophysics Data System (ADS)
Costamante, L.; Cutini, S.; Tosti, G.; Antolini, E.; Tramacere, A.
2018-07-01
The gamma-ray emission in broad-line blazars is generally explained as inverse Compton (IC) radiation of relativistic electrons in the jet scattering optical-UV photons from the broad-line region (BLR), the so-called BLR external Compton (EC) scenario. We test this scenario on the Fermi gamma-ray spectra of 106 broad-line blazars detected with the highest significance or largest BLR, by looking for cut-off signatures at high energies compatible with γ-γ interactions with BLR photons. We do not find evidence for the expected BLR absorption. For 2/3 of the sources, we can exclude any significant absorption (τ_max < 1), while for the remaining 1/3 the possible absorption is constrained to be 1.5-2 orders of magnitude lower than expected. This result also holds when dividing the spectra into high- and low-flux states, and for powerful blazars with large BLR. Only 1 object out of 10 seems compatible with substantial attenuation (τ_max > 5). We conclude that for 9 out of 10 objects, the jet does not interact with BLR photons. Gamma-rays seem either produced outside the BLR most of the time, or the BLR is ˜100 × larger than given by reverberation mapping. This means that (i) EC on BLR photons is disfavoured as the main gamma-ray mechanism, versus IC on IR photons from the torus or synchrotron self-Compton; (ii) the Fermi gamma-ray spectrum is mostly intrinsic, determined by the interaction of the particle distribution with the seed-photon spectrum; and (iii) without suppression by the BLR, broad-line blazars can become copious emitters above 100 GeV, as demonstrated by 3C 454.3. We expect the CTA sky to be much richer in broad-line blazars than previously thought.
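A minimal worked example, not from the paper, of the pair-production threshold behind the expected cut-offs: a gamma-ray of energy E interacting head-on with a target photon of energy ε can pair-produce only when E·ε > (m_e c²)². BLR radiation is dominated by Lyα photons at ε ≈ 10.2 eV.

```python
ME_C2_EV = 0.511e6  # electron rest energy in eV

def threshold_gamma_energy_GeV(eps_target_eV):
    """Minimum gamma-ray energy (GeV) for head-on pair production on a target photon."""
    return ME_C2_EV**2 / eps_target_eV / 1e9

# For Ly-alpha targets the absorption onset is ~25 GeV in the source frame, which
# is why the absence of such breaks argues for emission outside the BLR.
print(f"{threshold_gamma_energy_GeV(10.2):.1f} GeV")
```

Spectra that stay smooth and unabsorbed well above this energy are the cut-off signatures the analysis searched for and did not find.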
Corsi, Alessandra; Gal-Yam, A.; Kulkarni, S. R.; ...
2016-10-10
Long duration γ-ray bursts are a rare subclass of stripped-envelope core-collapse supernovae (SNe) that launch collimated relativistic outflows (jets). All γ-ray-burst-associated SNe are spectroscopically Type Ic with broad lines, but the fraction of broad-lined SNe Ic harboring low-luminosity γ-ray bursts remains largely unconstrained. Some SNe should be accompanied by off-axis γ-ray burst jets that initially remain invisible, but then emerge as strong radio sources (as the jets decelerate). However, this critical prediction of the jet model for γ-ray bursts has yet to be verified observationally. Here, we present Karl G. Jansky Very Large Array observations of 15 broad-lined SNe of Type Ic discovered by the Palomar Transient Factory in an untargeted manner. Most of the SNe in our sample exclude radio emission observationally similar to that of the radio-loud, relativistic SN 1998bw. We constrain the fraction of 1998bw-like broad-lined SNe Ic to be ≲41% (99.865% confidence). Most of the events in our sample also exclude off-axis jets similar to GRB 031203 and GRB 030329, but we cannot rule out off-axis γ-ray bursts expanding in a low-density wind environment. Three SNe in our sample are detected in the radio. PTF11qcj and PTF14dby show late-time radio emission with average ejecta speeds of ≈0.3-0.4c, on the dividing line between relativistic and "ordinary" SNe. The speed of the PTF11cmh radio ejecta is poorly constrained. We estimate that ≲85% (99.865% confidence) of the broad-lined SNe Ic in our sample may harbor off-axis γ-ray bursts expanding in media with densities in the range probed by this study.
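A minimal sketch of the statistics behind limits quoted "at 99.865% confidence" (a one-sided 3σ level): with N events observed and k consistent with a given radio counterpart, the exact upper limit on the true fraction comes from the binomial (Clopper-Pearson) construction. The N and k per sub-test are in the paper; the inputs below are assumptions for illustration.

```python
from scipy.stats import beta

def binomial_upper_limit(k, n, cl=0.99865):
    """Exact (Clopper-Pearson) one-sided upper limit on a binomial fraction."""
    return beta.ppf(cl, k + 1, n - k)

# Example: zero 1998bw-like radio events among 15 broad-lined SNe Ic gives an
# upper limit of ~0.36 with these assumed inputs; the paper's 41% additionally
# folds in the survey's detection sensitivity per event.
print(f"{binomial_upper_limit(0, 15):.2f}")
```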
NASA Astrophysics Data System (ADS)
Zapartas, E.; de Mink, S. E.; Van Dyk, S. D.; Fox, O. D.; Smith, N.; Bostroem, K. A.; de Koter, A.; Filippenko, A. V.; Izzard, R. G.; Kelly, P. L.; Neijssel, C. J.; Renzo, M.; Ryder, S.
2017-06-01
Many young, massive stars are found in close binaries. Using population synthesis simulations we predict the likelihood of a companion star being present when these massive stars end their lives as core-collapse supernovae (SNe). We focus on stripped-envelope SNe, whose progenitors have lost their outer hydrogen and possibly helium layers before explosion. We use these results to interpret new Hubble Space Telescope observations of the site of the broad-lined Type Ic SN 2002ap, 14 years post-explosion. For a subsolar metallicity consistent with SN 2002ap, we expect a main-sequence (MS) companion present in about two-thirds of all stripped-envelope SNe and a compact companion (likely a stripped helium star or a white dwarf/neutron star/black hole) in about 5% of cases. About a quarter of progenitors are single at explosion (originating from initially single stars, mergers, or disrupted systems). All of the latter scenarios require a massive progenitor, inconsistent with earlier studies of SN 2002ap. Our new, deeper upper limits exclude the presence of an MS companion star >8-10 M_⊙, ruling out about 40% of all stripped-envelope SN channels. The most likely scenario for SN 2002ap includes nonconservative binary interaction of a primary star initially ≲23 M_⊙. Although unlikely (<1% of the scenarios), we also discuss the possibility of an exotic reverse merger channel for broad-lined Type Ic events. Finally, we explore how our results depend on the metallicity and the model assumptions and discuss how additional searches for companions can constrain the physics that govern the evolution of SN progenitors.
Temperature dependence of broadline NMR spectra of water-soaked, epoxy-graphite composites
NASA Astrophysics Data System (ADS)
Lawing, David; Fornes, R. E.; Gilbert, R. D.; Memory, J. D.
1981-10-01
Water-soaked, epoxy resin-graphite fiber composites show a waterline in their broadline proton NMR spectrum which indicates a state of intermediate mobility between the solid and free water liquid states. The line is still present at -42 °C, but shows a reversible decrease in amplitude with decreasing temperature. The line is isotropic upon rotation of the fiber axis with respect to the external magnetic field.
Schuster, Rolf K; Mustafa, Murad Basheer; Baskar, Jagadeesan Vijay; Rosentel, Joseph; Chester, S Theodore; Knaus, Martin
2016-07-01
Cats are host to dipylidiid cestodes of the genera Diplopylidium, Dipylidium and Joyeuxiella. Broadline(®), a topical broad-spectrum combination parasiticide containing fipronil (8.3 % w/v), (S)-methoprene (10 % w/v), eprinomectin (0.4 % w/v) and the cestocide praziquantel (8.3 % w/v), has previously been shown to be efficacious against Dipylidium caninum and Diplopylidium spp. in cats. To evaluate its efficacy against Joyeuxiella species, a blinded clinical efficacy study was conducted according to GCP. All cats had evidence for naturally acquired dipylidiid cestode infection as confirmed by pre-treatment examination. Cats were allocated randomly to two groups of 13 cats each based on bodyweight: Control (untreated) and Broadline(®) at 0.12 mL/kg bodyweight administered once topically. Based on the comparison of helminth counts in the treated and untreated cats seven days post treatment, Broadline(®) demonstrated >99 % efficacy (p < 0.01) against mature J. fuhrmanni and J. pasqualei, with 11 and 13 of the untreated cats harbouring 1 to 102 or 2 to 95 cestodes, respectively. In addition, parasite counts indicated 95.9 % efficacy (p = 0.006) against the rictularoid nematode Pterygodermatites cahirensis.
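A minimal sketch of the percent-efficacy calculation behind figures like ">99%": efficacy = 100 × (1 − mean count in treated cats / mean count in controls). Whether an arithmetic or geometric mean is used follows the study protocol; the counts below are made-up illustrations, not data from this trial.

```python
import numpy as np

def percent_efficacy(control_counts, treated_counts, geometric=True):
    c = np.asarray(control_counts, float)
    t = np.asarray(treated_counts, float)
    if geometric:  # geometric mean of (count + 1), minus 1, handles zero counts
        mc = np.exp(np.mean(np.log(c + 1))) - 1
        mt = np.exp(np.mean(np.log(t + 1))) - 1
    else:
        mc, mt = c.mean(), t.mean()
    return 100.0 * (1.0 - mt / mc)

print(f"{percent_efficacy([40, 95, 12, 60], [0, 0, 1, 0]):.1f} %")  # ~99.5 %
```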
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Yue; Liu, Xin; Loeb, Abraham
We perform a systematic search for sub-parsec binary supermassive black holes (BHs) in normal broad-line quasars at z < 0.8, using multi-epoch Sloan Digital Sky Survey (SDSS) spectroscopy of the broad Hβ line. Our working model is that (1) one and only one of the two BHs in the binary is active; (2) the active BH dynamically dominates its own broad-line region (BLR) in the binary system, so that the mean velocity of the BLR reflects the mean velocity of its host BH; (3) the inactive companion BH is orbiting at a distance of a few R_BLR, where R_BLR ∼ 0.01-0.1 pc is the BLR size. We search for the expected line-of-sight acceleration of the broad-line velocity from binary orbital motion by cross-correlating SDSS spectra from two epochs separated by up to several years in the quasar rest frame. Out of ∼700 pairs of spectra for which we have good measurements of the velocity shift between two epochs (1σ error ∼40 km s^-1), we detect 28 systems with significant velocity shifts in broad Hβ, among which 7 are the best candidates for the hypothesized binaries, 4 are most likely due to broad-line variability in single BHs, and the rest are ambiguous. Continued spectroscopic observations of these candidates will easily strengthen or disprove these claims. We use the distribution of the observed accelerations (mostly non-detections) to place constraints on the abundance of such binary systems among the general quasar population. Excess variance in the velocity shift is inferred for observations separated by longer than 0.4 yr (quasar rest frame). Attributing all the excess to binary motion would imply that most of the quasars in this sample must be in binaries, that the inactive BH must be on average more massive than the active one, and that the binary separation is at most a few times the size of the BLR. However, if this excess variance is partly or largely due to long-term broad-line variability, the requirement of a large population of close binaries is much weakened or even disfavored for massive companions. Future time-domain spectroscopic surveys of normal quasars can provide vital prior information on the structure function of stochastic velocity shifts induced by broad-line variability in single BHs. Such surveys with improved spectral quality, increased time baseline, and more epochs can greatly improve the statistical constraints of this method on the general binary population in broad-line quasars, further shrink the allowed binary parameter space, and detect true sub-parsec binaries.
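A minimal illustration, not the authors' pipeline, of measuring an epoch-to-epoch broad-line velocity shift by cross-correlating two spectra on a common log-wavelength grid, where a uniform shift in ln(λ) corresponds to a velocity shift.

```python
import numpy as np

C_KMS = 299792.458

def velocity_shift(loglam, flux1, flux2, max_pix=20):
    """Velocity of spectrum 2 relative to spectrum 1 (km/s, positive = redshifted).
    Assumes a uniform grid in ln(lambda)."""
    dlnl = np.mean(np.diff(loglam))
    f1 = flux1 - flux1.mean()
    f2 = flux2 - flux2.mean()
    shifts = np.arange(-max_pix, max_pix + 1)
    cc = [np.sum(f1 * np.roll(f2, -s)) for s in shifts]  # align f2 back by s pixels
    return shifts[int(np.argmax(cc))] * dlnl * C_KMS

# Example: a mock Gaussian "broad H-beta" displaced by two pixels between epochs.
loglam = np.linspace(np.log(4800.0), np.log(4920.0), 400)
line = np.exp(-0.5 * ((loglam - np.log(4861.0)) / 2e-3) ** 2)
print(f"{velocity_shift(loglam, line, np.roll(line, 2)):.0f} km/s")  # ~37 km/s here
```

Repeating such a measurement over many pairs of epochs, with errors propagated, yields the acceleration distribution the abstract describes.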
Accretion Rate: An Axis of AGN Unification
NASA Astrophysics Data System (ADS)
Trump, Jonathan R.; Impey, C. D.; Kelly, B. C.
2011-01-01
We show how accretion rate governs the physical properties of broad-line, narrow-line, and lineless active galactic nuclei (AGNs). We avoid the systematic errors plaguing previous studies of AGN accretion rate by using accurate accretion luminosities from well-sampled multiwavelength SEDs from the Cosmic Evolution Survey (COSMOS), and accurate black hole masses derived from virial scaling relations (for broad-line AGNs) or host-AGN relations (for narrow-line and lineless AGNs). In general, broad emission lines are present only at the highest accretion rates (L/L_Edd>0.01), and these rapidly accreting AGNs are observed as broad-line AGNs or possibly as obscured narrow-line AGNs. Narrow-line and lineless AGNs at lower specific accretion rates (L/L_Edd<0.01) are unobscured and yet lack a broad line region. The disappearance of the broad emission lines is caused by an expanding radiatively inefficient accretion flow (RIAF) at the inner radius of the accretion disk. The presence of the RIAF also drives L/L_Edd<0.01 narrow-line and lineless AGNs to be 10-100 times more radio-luminous than broad-line AGNs, since the unbound nature of the RIAF means it is easier to form a radio outflow. The IR torus signature also tends to become weaker or disappear from L/L_Edd<0.01 AGNs, although there may be additional mid-IR synchrotron emission associated with the RIAF. Together these results suggest that specific accretion rate is an important physical "axis" of AGN unification, described by a simple model.
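A minimal sketch of the specific accretion rate ("Eddington ratio") that sets the L/L_Edd = 0.01 dividing line discussed here; the inputs are illustrative assumptions.

```python
def eddington_luminosity_erg_s(mbh_msun):
    """Eddington luminosity for ionized hydrogen, L_Edd ~= 1.26e38 (M/M_sun) erg/s."""
    return 1.26e38 * mbh_msun

def eddington_ratio(L_erg_s, mbh_msun):
    return L_erg_s / eddington_luminosity_erg_s(mbh_msun)

# Example: a 1e8 M_sun black hole accreting at L = 1e44 erg/s sits at
# L/L_Edd ~ 0.008, i.e. just below the proposed broad-line threshold.
print(f"{eddington_ratio(1e44, 1e8):.3f}")
```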
NASA Astrophysics Data System (ADS)
Hirabayashi, Atsumu; Nambu, Yoshihiro; Fujimoto, Takashi
1986-10-01
The problem of excitation anisotropy in laser-induced-fluorescence spectroscopy (LIFS) was investigated for the intense-excitation case under the broad-line condition. The depolarization coefficient for the fluorescence light was derived in the intense-excitation limit (linearly polarized or unpolarized light excitation) and the results are presented in tables. In the region of intermediate intensity, between the weak- and intense-excitation limits, the master equation was solved for a specific example of atomic transitions, and the result is compared with experimental results.
Broad-line Type Ic supernova SN 2014ad
NASA Astrophysics Data System (ADS)
Sahu, D. K.; Anupama, G. C.; Chakradhari, N. K.; Srivastav, S.; Tanaka, Masaomi; Maeda, Keiichi; Nomoto, Ken'ichi
2018-04-01
We present optical and ultraviolet photometry and low-resolution optical spectroscopy of the broad-line Type Ic supernova SN 2014ad in the galaxy PGC 37625 (Mrk 1309), covering the evolution of the supernova during -5 to +87 d with respect to the date of maximum in the B band. A late-phase spectrum obtained at +340 d is also presented. With an absolute V-band magnitude at peak of M_V = -18.86 ± 0.23 mag, SN 2014ad is fainter than supernovae associated with gamma ray bursts (GRBs), and brighter than most of the normal and broad-line Type Ic supernovae without an associated GRB. The spectral evolution indicates that the expansion velocity of the ejecta, as measured using the Si II line, is as high as ∼33,500 km s^-1 around maximum, while during the post-maximum phase it settles at ∼15,000 km s^-1. The expansion velocity of SN 2014ad is higher than that of all other well-observed broad-line Type Ic supernovae except for the GRB-associated SN 2010bh. The explosion parameters, determined by applying Arnett's analytical light-curve model to the observed bolometric light curve, indicate that it was an energetic explosion with a kinetic energy of ∼(1 ± 0.3) × 10^52 erg and a total ejected mass of ∼(3.3 ± 0.8) M_⊙, and that ∼0.24 M_⊙ of ^56Ni was synthesized in the explosion. The metallicity of the host galaxy near the supernova region is estimated to be ∼0.5 Z_⊙.
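A minimal sketch of the Arnett-style scalings that turn a bolometric light curve into explosion parameters: the light-curve timescale τ_m and a photospheric velocity v_ph give the ejecta mass and kinetic energy via τ_m = sqrt(2κM_ej / (13.8 c v_ph)) and E_k = (3/10) M_ej v_ph². The opacity κ and the example inputs are assumptions typical of stripped-envelope SNe, not the paper's fit values.

```python
import numpy as np

C = 2.998e10     # speed of light, cm/s
KAPPA = 0.07     # cm^2/g, optical opacity often adopted for SNe Ic
MSUN = 1.989e33  # g

def ejecta_mass_and_energy(tau_m_days, v_ph_kms):
    """Ejecta mass (M_sun) and kinetic energy (erg) from Arnett scalings."""
    tau = tau_m_days * 86400.0
    v = v_ph_kms * 1e5
    m_ej = tau**2 * 13.8 * C * v / (2.0 * KAPPA)  # grams
    e_k = 0.3 * m_ej * v**2                       # erg
    return m_ej / MSUN, e_k

# Example: tau_m ~ 11 d and v_ph ~ 15,000 km/s give values of the same order
# as those quoted above (the paper's full model fit differs in detail).
m, e = ejecta_mass_and_energy(11.0, 15000.0)
print(f"M_ej ~ {m:.1f} Msun, E_k ~ {e:.1e} erg")
```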
FLARE-LIKE VARIABILITY OF THE Mg II λ2800 EMISSION LINE IN THE γ-RAY BLAZAR 3C 454.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leon-Tavares, J.; Chavushyan, V.; Patino-Alvarez, V.
2013-02-01
We report the detection of a statistically significant flare-like event in the Mg II λ2800 emission line of 3C 454.3 during the outburst of autumn 2010. The highest levels of emission line flux recorded over the monitoring period (2008-2011) coincide with a superluminal jet component traversing through the radio core. This finding crucially links the broad emission line fluctuations to the non-thermal continuum emission produced by relativistically moving material in the jet, and hence to the presence of broad-line region clouds surrounding the radio core. If the radio core were located at several parsecs from the central black hole, then our results would suggest the presence of broad-line region material outside the inner parsec where the canonical broad-line region is envisaged to be located. We briefly discuss the implications of broad emission line material ionized by non-thermal continuum in the context of virial black hole mass estimates and gamma-ray production mechanisms.
The Γ_X-L/L_Edd relation in the BAT AGN Spectroscopic Survey (BASS)
NASA Astrophysics Data System (ADS)
Trakhtenbrot, Benny; Ricci, Claudio; Koss, Michael; Schawinski, Kevin; Mushotzky, Richard; Ueda, Yoshihiro; Veilleux, Sylvain; Lamperti, Isabella; Oh, Kyuseok; Treister, Ezequiel; Stern, Daniel; Harrison, Fiona; Balokovic, Mislav
2018-01-01
We present a study of the relation between accretion rate (in terms of L/L_Edd) and the shape of the hard X-ray spectral energy distribution (namely the photon index Γ_X) for a large sample of over 200 hard X-ray-selected, low-redshift active galactic nuclei (AGNs), drawn from the Swift/BAT AGN Spectroscopic Survey (BASS). This includes 30 AGNs for which the black hole mass (and therefore L/L_Edd) is measured directly through masers, spatially resolved gas or stellar dynamics, or reverberation mapping. The high quality and broad energy coverage of the data provided through BASS allow us to examine several alternative determinations of both Γ_X and L/L_Edd. We find a very weak correlation between Γ_X and L/L_Edd for the BASS sample as a whole, with best-fitting relations that are considerably shallower than those reported in previous studies. Moreover, we find no corresponding correlations among the subsets of AGN with different M_BH determination methodologies, in particular those AGN with direct or single-epoch M_BH estimates. This latter finding is in contrast to several previous studies which focused on z > 0.5 broad-line AGN. We conclude that this tension can be partially accounted for if one adopts a simplified, power-law X-ray spectral model, combined with L/L_Edd estimates that are based on the continuum emission and on single-epoch broad-line spectroscopy in the optical regime. Given these findings, we highlight the limitations of using Γ_X as a probe of supermassive black hole evolution in deep extragalactic X-ray surveys.
WISE J233237.05-505643.5: A Double-Peaked Broad-Lined AGN with Spiral-Shaped Radio Morphology
NASA Technical Reports Server (NTRS)
Tsai, Chao Wei; Jarrett, Thomas H.; Stern, Daniel; Emonts, Bjorn; Barrows, R. Scott; Assef, Roberto J.; Norris, Ray P.; Eisenhardt, Peter R. M.; Lonsdale, Carol; Blain, Andrew W.;
2013-01-01
We present radio continuum mapping, optical imaging, and spectroscopy of the newly discovered double-peaked broad-lined AGN WISE J233237.05-505643.5 at redshift z = 0.3447. This source exhibits a hybrid FR-I/FR-II morphology, characterized by bright core, jet, and Doppler-boosted lobe structures in ATCA continuum maps at 1.5, 5.6, and 9 GHz. Unlike most FR-II objects, W2332-5056 is hosted by a disk-like galaxy. The core has a projected 5" linear radio feature that is perpendicular to the curved primary jet, hinting at unusual and complex activity within the inner 25 kpc. The multi-epoch optical-near-IR photometric measurements indicate significant variability over a 3-20 year baseline from the AGN component. Gemini-South optical data show unusual double-peaked emission-line features: the centroids of the broad-lined components of Hα and Hβ are blueshifted with respect to the narrow lines and host galaxy by approximately 3800 km/s. We examine possible cases involving single or double supermassive black holes in the system, and discuss the future investigations required to disentangle the mysterious nature of this system.
THE ABSOLUTE RATE OF LGRB FORMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, J. F.; Schady, P.
2016-06-01
We estimate the long-duration gamma-ray burst (LGRB) progenitor rate using our recent work on the effects of environmental metallicity on LGRB formation in concert with supernova (SNe) statistics, via an approach patterned loosely off the Drake equation. Beginning with the cosmic star formation history, we consider the expected number of broad-line Type Ic events (the SN type associated with LGRBs) that are in low-metallicity host environments, adjusted by the contribution of high-metallicity host environments at a much reduced rate. We then compare this estimate to the observed LGRB rate corrected for instrumental selection effects to provide a combined estimate of the efficiency fraction of these progenitors to produce LGRBs and the fraction of which are beamed in our direction. From this we estimate that an aligned LGRB occurs for approximately every 4000 ± 2000 low-metallicity broad-lined SNe Ic. Therefore, if one assumes a semi-nominal beaming factor of 100, then only about one such supernova out of 40 produces an LGRB. Finally, we propose an off-axis LGRB search strategy of targeting only broad-line Type Ic events that occur in low-metallicity hosts for radio observation.
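A minimal sketch of the Drake-equation-style bookkeeping in this estimate: if one aligned LGRB occurs per ~4000 low-metallicity broad-lined SNe Ic, and jets point toward us for roughly 1/f_beam of events, the fraction of such SNe harboring a jet aligned in any direction follows by simple division. The numbers are those quoted above.

```python
def sne_per_jet(sne_per_aligned_lgrb=4000.0, beaming_factor=100.0):
    """Low-metallicity SNe Ic-BL per LGRB jet pointing in *any* direction."""
    return sne_per_aligned_lgrb / beaming_factor

print(sne_per_jet())  # ~40: about 1 in 40 low-metallicity SNe Ic-BL makes an LGRB
```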
Accretion Rate and the Physical Nature of Unobscured Active Galaxies
NASA Astrophysics Data System (ADS)
Trump, Jonathan R.; Impey, Christopher D.; Kelly, Brandon C.; Civano, Francesca; Gabor, Jared M.; Diamond-Stanic, Aleksandar M.; Merloni, Andrea; Urry, C. Megan; Hao, Heng; Jahnke, Knud; Nagao, Tohru; Taniguchi, Yoshi; Koekemoer, Anton M.; Lanzuisi, Giorgio; Liu, Charles; Mainieri, Vincenzo; Salvato, Mara; Scoville, Nick Z.
2011-05-01
We show how accretion rate governs the physical properties of a sample of unobscured broad-line, narrow-line, and lineless active galactic nuclei (AGNs). We avoid the systematic errors plaguing previous studies of AGN accretion rates by using accurate intrinsic accretion luminosities (L_int) from well-sampled multiwavelength spectral energy distributions from the Cosmic Evolution Survey, and accurate black hole masses derived from virial scaling relations (for broad-line AGNs) or host-AGN relations (for narrow-line and lineless AGNs). In general, broad emission lines are present only at the highest accretion rates (L_int/L_Edd > 10^-2), and these rapidly accreting AGNs are observed as broad-line AGNs or possibly as obscured narrow-line AGNs. Narrow-line and lineless AGNs at lower specific accretion rates (L_int/L_Edd < 10^-2) are unobscured and yet lack a broad-line region. The disappearance of the broad emission lines is caused by an expanding radiatively inefficient accretion flow (RIAF) at the inner radius of the accretion disk. The presence of the RIAF also drives L_int/L_Edd < 10^-2 narrow-line and lineless AGNs to have ratios of radio-to-optical/UV emission that are 10 times higher than L_int/L_Edd > 10^-2 broad-line AGNs, since the unbound nature of the RIAF means it is easier to form a radio outflow. The IR torus signature also tends to become weaker or disappear from L_int/L_Edd < 10^-2 AGNs, although there may be additional mid-IR synchrotron emission associated with the RIAF. Together, these results suggest that specific accretion rate is an important physical "axis" of AGN unification, as described by a simple model. Based on observations with the XMM-Newton satellite, an ESA science mission with instruments and contributions directly funded by ESA member states and NASA; the Magellan telescope, operated by the Carnegie Observatories; the ESO Very Large Telescope; the MMT Observatory, a joint facility of the University of Arizona and the Smithsonian Institution; the Subaru Telescope, operated by the National Astronomical Observatory of Japan; and the NASA/ESA Hubble Space Telescope, operated at the Space Telescope Science Institute, which is operated by AURA Inc., under NASA contract NAS 5-26555.
BAT AGN Spectroscopic Survey (BASS) - VI. The Γ_X-L/L_Edd relation
NASA Astrophysics Data System (ADS)
Trakhtenbrot, Benny; Ricci, Claudio; Koss, Michael J.; Schawinski, Kevin; Mushotzky, Richard; Ueda, Yoshihiro; Veilleux, Sylvain; Lamperti, Isabella; Oh, Kyuseok; Treister, Ezequiel; Stern, Daniel; Harrison, Fiona; Baloković, Mislav; Gehrels, Neil
2017-09-01
We study the relation between accretion rate (in terms of L/L_Edd) and the shape of the hard X-ray spectral energy distribution (namely the photon index Γ_X) for a large sample of 228 hard X-ray-selected, low-redshift active galactic nuclei (AGNs), drawn from the Swift/BAT AGN Spectroscopic Survey (BASS). This includes 30 AGNs for which the black hole mass (and therefore L/L_Edd) is measured directly through masers, spatially resolved gas or stellar dynamics, or reverberation mapping. The high quality and broad energy coverage of the data provided through BASS allow us to examine several alternative determinations of both Γ_X and L/L_Edd. For the BASS sample as a whole, we find a statistically significant, albeit very weak correlation between Γ_X and L/L_Edd. The best-fitting relations we find, Γ_X ≃ 0.15 log(L/L_Edd) + const., are considerably shallower than those reported in previous studies. Moreover, we find no corresponding correlations among the subsets of AGN with different M_BH determination methodology. In particular, we find no robust evidence for a correlation when considering only those AGN with direct or single-epoch M_BH estimates. This latter finding is in contrast to several previous studies which focused on z > 0.5 broad-line AGN. We discuss this tension and conclude that it can be partially accounted for if one adopts a simplified, power-law X-ray spectral model, combined with L/L_Edd estimates that are based on the continuum emission and on single-epoch broad-line spectroscopy in the optical regime. We finally highlight the limitations on using Γ_X as a probe of supermassive black hole evolution in deep extragalactic X-ray surveys.
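A minimal sketch of the kind of fit behind "Γ_X ≃ 0.15 log(L/L_Edd) + const.": an ordinary least-squares line through (log Eddington ratio, photon index) pairs. The mock data are assumptions for illustration; the paper fits BASS measurements with more careful error treatment.

```python
import numpy as np

rng = np.random.default_rng(1)
log_edd = rng.uniform(-3.0, 0.0, 200)                       # log L/L_Edd
gamma_x = 1.8 + 0.15 * log_edd + rng.normal(0, 0.25, 200)   # weak, noisy trend

slope, intercept = np.polyfit(log_edd, gamma_x, 1)
print(f"Gamma_X = {slope:.2f} * log(L/L_Edd) + {intercept:.2f}")
```

With scatter this large relative to the slope, the recovered correlation is statistically significant but weak, which is the point the abstract makes.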
Hidden Broad-Line Seyfert 2 Galaxies in the CfA and 12 μm Samples
NASA Astrophysics Data System (ADS)
Tran, Hien D.
2001-06-01
We report the results of a spectropolarimetric survey of the CfA and 12 μm samples of Seyfert 2 (S2) galaxies. Polarized (hidden) broad-line regions (HBLRs) are confirmed in a number of galaxies, and several new cases (F02581-1136, MCG -3-58-7, NGC 5995, NGC 6552, NGC 7682) are reported. The 12 μm S2 galaxy sample shows a significantly higher incidence of HBLRs (50%) than its CfA counterpart (30%), suggesting that the latter may be incomplete in hidden active galactic nuclei. Compared to the non-HBLR S2 galaxies, the HBLR S2 galaxies display distinctly higher radio power relative to their far-infrared output and hotter dust temperature as indicated by the f25/f60 color. However, the level of obscuration is indistinguishable between the two types of S2 galaxies. These results strongly support the existence of two intrinsically different populations of S2 galaxies: one harboring an energetic, hidden S1 nucleus with a broad-line region and the other a ``pure'' S2 galaxy, with a weak or absent S1 nucleus and a strong, perhaps dominating starburst component. Thus, the simple purely orientation-based unification model is not applicable to all Seyfert galaxies.
NASA Astrophysics Data System (ADS)
Janiak, M.; Sikora, M.; Moderski, R.
2016-05-01
We present a detailed Fermi/LAT data analysis for the broad-line radio galaxy 3C 120. This source has recently entered a state of increased γ-ray activity, which manifested itself in two major flares detected by Fermi/LAT in 2014 September and 2015 April with no significant flux changes reported at other wavelengths. We analyse the available data, focusing our attention on the aforementioned outbursts. We find a very fast variability time-scale during flares (of the order of hours) together with a significant γ-ray flux increase. We show that the ˜6.8 yr averaged γ-ray emission of 3C 120 is likely a sum of the external radiation Compton and the synchrotron self-Compton radiative components. To address the problem of violent γ-ray flares and fast variability we model the jet radiation dividing the jet structure into two components: a wide and relatively slow outer layer and a fast, narrow spine. We show that with the addition of the fast spine, occasionally bent towards the observer, we are able to explain the observed spectral energy distribution of 3C 120 during flares, with Compton upscattering of broad-line region and dusty torus photons as the main γ-ray emission mechanism.
Steps Toward Unveiling the True Population of AGN: Photometric Selection of Broad-Line AGN
NASA Astrophysics Data System (ADS)
Schneider, Evan; Impey, C.
2012-01-01
We present an AGN selection technique that enables identification of broad-line AGN using only photometric data. An extension of infrared selection techniques, our method involves fitting a given spectral energy distribution with a model consisting of three physically motivated components: infrared power law emission, optical accretion disk emission, and host galaxy emission. Each component can be varied in intensity, and a reduced chi-square minimization routine is used to determine the optimum parameters for each object. Using this model, both broad- and narrow-line AGN are seen to fall within discrete ranges of parameter space that have plausible bounds, allowing physical trends with luminosity and redshift to be determined. Based on a fiducial sample of AGN from the catalog of Trump et al. (2009), we find the region occupied by broad-line AGN to be distinct from that of quiescent or star-bursting galaxies. Because this technique relies only on photometry, it will allow us to find AGN at fainter magnitudes than are accessible in spectroscopic surveys, and thus probe a population of less luminous and/or higher redshift objects. With the vast availability of photometric data in large surveys, this technique should have broad applicability and result in large samples that will complement X-ray AGN catalogs.
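A minimal illustration, not the authors' pipeline, of the three-component SED model described here: an IR power law, an accretion-disk component, and a host-galaxy component, with per-component amplitudes found by chi-square minimization. The "templates" below are toy placeholders; real fits would use observed template libraries.

```python
import numpy as np
from scipy.optimize import nnls

def fit_sed(lam_um, flux, err):
    """Fit flux(lam) with non-negative amplitudes on three toy components."""
    ir_powerlaw = lam_um ** 1.5                                    # rising IR power law
    disk = lam_um ** (-1.0 / 3.0) * np.exp(-0.1 / lam_um)          # blue disk-like piece
    galaxy = np.exp(-0.5 * ((np.log(lam_um) - np.log(1.6)) / 0.5) ** 2)  # 1.6 um stellar bump
    A = np.vstack([ir_powerlaw, disk, galaxy]).T / err[:, None]    # error-weighted design matrix
    amps, _ = nnls(A, flux / err)                                  # non-negative least squares
    model = (A @ amps) * err
    chi2_red = np.sum(((flux - model) / err) ** 2) / (flux.size - 3)
    return amps, chi2_red

# Example on a mock photometric SED dominated by the host-galaxy component.
lam = np.array([0.36, 0.55, 0.9, 1.6, 2.2, 3.6, 4.5, 8.0])  # microns
truth = 0.5 * lam**1.5 + 2.0 * np.exp(-0.5 * ((np.log(lam) - np.log(1.6)) / 0.5) ** 2)
amps, chi2 = fit_sed(lam, truth, 0.05 * np.ones_like(lam))
print(amps, chi2)
```

In the selection scheme described above, it is the location of the best-fit amplitudes in this parameter space, rather than any single color cut, that separates broad-line AGN from quiescent and star-forming galaxies.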
Structure and kinematics of the broad-line regions in active galaxies from IUE variability data
NASA Technical Reports Server (NTRS)
Koratkar, Anuradha P.; Gaskell, C. Martin
1991-01-01
IUE archival data are used here to investigate the structure and kinematics of the broad-line regions (BLRs) in nine AGN. It is found that the centroid of the line-continuum cross-correlation functions (CCFs) can be determined with reasonable reliability. The errors in BLR size estimates from CCFs for irregularly sampled light curves are fairly well understood. BLRs are found to have small luminosity-weighted radii, and lines of high ionization tend to be emitted closer to the central source than lines of low ionization, especially for low-luminosity objects. The motion of the gas is gravity-dominated, with both pure inflow and pure outflow of high-velocity gas being excluded at a high confidence level for certain geometries.
THE LICK AGN MONITORING PROJECT 2011: SPECTROSCOPIC CAMPAIGN AND EMISSION-LINE LIGHT CURVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barth, Aaron J.; Bennert, Vardha N.; Canalizo, Gabriela
2015-04-15
In the Spring of 2011 we carried out a 2.5 month reverberation mapping campaign using the 3 m Shane telescope at Lick Observatory, monitoring 15 low-redshift Seyfert 1 galaxies. This paper describes the observations, reductions and measurements, and data products from the spectroscopic campaign. The reduced spectra were fitted with a multicomponent model in order to isolate the contributions of various continuum and emission-line components. We present light curves of broad emission lines and the active galactic nucleus (AGN) continuum, and measurements of the broad Hβ line widths in mean and rms spectra. For the most highly variable AGNs we also measured broad Hβ line widths and velocity centroids from the nightly spectra. In four AGNs exhibiting the highest variability amplitudes, we detect anticorrelations between broad Hβ width and luminosity, demonstrating that the broad-line region "breathes" on short timescales of days to weeks in response to continuum variations. We also find that broad Hβ velocity centroids can undergo substantial changes in response to continuum variations; in NGC 4593, the broad Hβ velocity shifted by ∼250 km s^-1 over a 1 month period. This reverberation-induced velocity shift effect is likely to contribute a significant source of confusion noise to binary black hole searches that use multi-epoch quasar spectroscopy to detect binary orbital motion. We also present results from simulations that examine biases that can occur in measurement of broad-line widths from rms spectra due to the contributions of continuum variations and photon-counting noise.
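A minimal sketch of the mean and rms spectral data products described here: for a stack of epoch spectra F_i(λ), the rms spectrum isolates the variable part of the line profile, from which the variability line widths are measured. The mock epochs below are illustrative assumptions.

```python
import numpy as np

def mean_and_rms_spectrum(spectra):
    """spectra: array of shape (n_epochs, n_pixels). Returns (mean, rms) spectra."""
    mean = spectra.mean(axis=0)
    rms = np.sqrt(np.mean((spectra - mean) ** 2, axis=0))
    return mean, rms

# Mock campaign: constant continuum plus a broad line whose flux varies per epoch.
lam = np.linspace(4700.0, 5000.0, 300)
profile = np.exp(-0.5 * ((lam - 4861.0) / 25.0) ** 2)
amps = np.random.default_rng(2).uniform(0.5, 1.5, 20)
epochs = np.array([1.0 + a * profile for a in amps])
mean_spec, rms_spec = mean_and_rms_spectrum(epochs)
```

Note that the constant continuum cancels in the rms spectrum, which is why line widths measured there trace only the reverberating gas.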
Optical Variability of Narrow-line and Broad-line Seyfert 1 Galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakshit, Suvendu; Stalin, C. S.
We studied the optical variability (OV) of a large sample of narrow-line Seyfert 1 (NLSy1) and broad-line Seyfert 1 (BLSy1) galaxies with z < 0.8 to investigate any differences in their OV properties. Using archival optical V-band light curves from the Catalina Real Time Transient Survey that span 5-9 years and modeling them using a damped random walk, we estimated the amplitude of variability. We found that NLSy1 galaxies as a class show a lower amplitude of variability than their broad-line counterparts. In the sample of both NLSy1 and BLSy1 galaxies, radio-loud sources are found to have higher variability amplitude than radio-quiet sources. Considering only sources that are detected in the X-ray band, NLSy1 galaxies are less optically variable than BLSy1 galaxies. The amplitude of variability in the sample of both NLSy1 and BLSy1 galaxies is found to be anti-correlated with Fe II strength but correlated with the width of the Hβ line. The well-known anti-correlation of variability-luminosity and the variability-Eddington ratio is present in our data. Among the radio-loud sample, variability amplitude is found to be correlated with radio-loudness and radio power, suggesting that jets also play an important role in the OV in radio-loud objects, in addition to the Eddington ratio, which is the main driving factor of OV in radio-quiet sources.
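A minimal sketch of the damped-random-walk (DRW) model used here to quantify variability amplitude: an Ornstein-Uhlenbeck process with damping timescale τ and asymptotic amplitude SF_inf, simulated with the exact AR(1) update for irregular sampling. The τ, SF_inf, and sampling values below are illustrative assumptions.

```python
import numpy as np

def simulate_drw(t, tau=200.0, sf_inf=0.2, mean_mag=19.0, seed=0):
    """DRW light curve at (sorted) times t in days; sf_inf in magnitudes."""
    rng = np.random.default_rng(seed)
    sigma2 = sf_inf**2 / 2.0  # stationary variance of the process
    mag = np.empty(t.size)
    mag[0] = mean_mag + rng.normal(0.0, np.sqrt(sigma2))
    for i in range(1, t.size):
        a = np.exp(-(t[i] - t[i - 1]) / tau)          # decay over the gap
        var = sigma2 * (1.0 - a**2)                   # innovation variance
        mag[i] = mean_mag + a * (mag[i - 1] - mean_mag) + rng.normal(0.0, np.sqrt(var))
    return mag

t = np.sort(np.random.default_rng(1).uniform(0, 9 * 365, 250))  # ~9 yr, sparse epochs
lc = simulate_drw(t)
```

Fitting this model to each observed light curve yields the per-object amplitude on which the NLSy1 versus BLSy1 comparison is based.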
Discovery of Ultra-fast Outflows in a Sample of Broad-line Radio Galaxies Observed with Suzaku
NASA Astrophysics Data System (ADS)
Tombesi, F.; Sambruna, R. M.; Reeves, J. N.; Braito, V.; Ballo, L.; Gofford, J.; Cappi, M.; Mushotzky, R. F.
2010-08-01
We present the results of a uniform and systematic search for blueshifted Fe K absorption lines in the X-ray spectra of five bright broad-line radio galaxies observed with Suzaku. We detect, for the first time in radio-loud active galactic nuclei (AGNs) at X-rays, several absorption lines at energies greater than 7 keV in three out of five sources, namely, 3C 111, 3C 120, and 3C 390.3. The lines are detected with high significance according to both the F-test and extensive Monte Carlo simulations. Their likely interpretation as blueshifted Fe XXV and Fe XXVI K-shell resonance lines implies an origin from highly ionized gas outflowing with mildly relativistic velocities, in the range v ≈ 0.04-0.15c. A fit with specific photoionization models gives ionization parameters in the range log ξ ≈ 4-5.6 erg s^-1 cm and column densities of N_H ≈ 10^22-10^23 cm^-2. These characteristics are very similar to those of the ultra-fast outflows (UFOs) previously observed in radio-quiet AGNs. Their estimated location within ∼0.01-0.3 pc of the central supermassive black hole suggests an origin related to accretion disk winds/outflows. Depending on the absorber covering fraction, the mass outflow rate of these UFOs can be comparable to the accretion rate, and their kinetic power can correspond to a significant fraction of the bolometric luminosity, comparable to their typical jet power. Therefore, these UFOs can play a significant role in the expected feedback from the AGN to the surrounding environment and can give us further clues on the relation between the accretion disk and the formation of winds/jets in both radio-quiet and radio-loud AGNs.
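A minimal sketch of how a blueshifted Fe K line energy maps to an outflow velocity: for motion along the line of sight, E_obs = E_rest sqrt((1+β)/(1-β)) with β = v/c. The rest energy below is the Fe XXVI Lyα value; the observed energy is an illustrative assumption.

```python
def outflow_beta(e_obs_keV, e_rest_keV=6.97):
    """v/c from a relativistic line-of-sight Doppler blueshift of an absorption line."""
    r = (e_obs_keV / e_rest_keV) ** 2
    return (r - 1.0) / (r + 1.0)

# Example: a line detected at 7.8 keV implies v ~ 0.11c, inside the quoted
# 0.04-0.15c range for these sources.
print(f"{outflow_beta(7.8):.2f} c")
```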
Neutrino-heated stars and broad-line emission from active galactic nuclei
NASA Technical Reports Server (NTRS)
Macdonald, James; Stanev, Todor; Biermann, Peter L.
1991-01-01
Nonthermal radiation from active galactic nuclei indicates the presence of highly relativistic particles. The interaction of these high-energy particles with matter and photons gives rise to a flux of high-energy neutrinos. In this paper, the influence of the expected high neutrino fluxes on the structure and evolution of single, main-sequence stars is investigated. Sequences of models of neutrino-heated stars in thermal equilibrium are presented for masses 0.25, 0.5, 0.8, and 1.0 solar mass. In addition, a set of evolutionary sequences for mass 0.5 solar mass have been computed for different assumed values for the incident neutrino energy flux. It is found that winds driven by the heating due to high-energy particles and hard electromagnetic radiation of the outer layers of neutrino-bloated stars may satisfy the requirements of the model of Kazanas (1989) for the broad-line emission clouds in active galactic nuclei.
OUTFLOW AND METALLICITY IN THE BROAD-LINE REGION OF LOW-REDSHIFT ACTIVE GALACTIC NUCLEI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Jaejin; Woo, Jong-Hak; Nagao, Tohru
2017-01-20
Outflows in active galactic nuclei (AGNs) are crucial to understand in investigating the co-evolution of supermassive black holes (SMBHs) and their host galaxies, since outflows may play an important role as an AGN feedback mechanism. Based on archival UV spectra obtained with the Hubble Space Telescope and IUE, we investigate outflows in the broad-line region (BLR) of low-redshift AGNs (z < 0.4) through detailed analysis of the velocity profile of the C IV emission line. We find a dependence of the outflow strength on the Eddington ratio and the BLR metallicity in our low-redshift AGN sample, which is consistent with earlier results obtained for high-redshift quasars. These results suggest that BLR outflows, gas accretion onto SMBHs, and past star formation activity in host galaxies are physically related in low-redshift AGNs, as in powerful high-redshift quasars.
Far-ultraviolet and optical spectrophotometry of X-ray selected Seyfert galaxies
NASA Technical Reports Server (NTRS)
Clarke, J. T.; Bowyer, S.; Grewing, M.
1986-01-01
Five X-ray selected Seyfert galaxies were examined via near-simultaneous far-ultraviolet and optical spectrophotometry in an effort to test models for excitation of emission lines by X-ray and ultraviolet continuum photoionization. The observed Ly-alpha/H-beta ratio in the present sample averages 22, with an increase found toward the high-velocity wings of the H lines in the spectrum of at least one of the Seyfert I nuclei. It is suggested that Seyfert galaxies with the most high-velocity gas exhibit the highest Ly-alpha/H-beta ratios at all velocities in the line profiles, and that sometimes this ratio may be highest for the highest velocity material in the broad-line clouds. Since broad-lined objects are least affected by Ly-alpha trapping effects, they have Ly-alpha/H-beta ratios much closer to those predicted by early photoionization calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tombesi, F.; Kallman, T.; Leutenegger, M. A.
2016-10-20
We present the first high spectral resolution X-ray observation of the broad-line radio galaxy 3C 390.3, obtained with the high-energy transmission grating spectrometer on board the Chandra X-ray Observatory. The spectrum shows complex emission and absorption features in both the soft X-rays and the Fe K band. We detect emission and absorption lines in the energy range E = 700-1000 eV associated with ionized Fe L transitions (Fe XVII-XX). An emission line at E ≃ 6.4 keV consistent with Fe Kα is also observed. Our best-fit model requires at least three different components: (i) a hot emission component likely associated with the hot interstellar medium in this elliptical galaxy, with temperature kT = 0.5 ± 0.1 keV; (ii) a warm absorber with ionization parameter log ξ = 2.3 ± 0.5 erg s^-1 cm, column density log N_H = 20.7 ± 0.1 cm^-2, and outflow velocity v_out < 150 km s^-1; and (iii) a lowly ionized reflection component in the Fe K band likely associated with the optical broad-line region or the outer accretion disk. This evidence suggests the possibility that we are looking directly down the ionization cone of this active galaxy and that the central X-ray source photoionizes only along the unobscured cone. This is overall consistent with the angle-dependent unified picture of active galactic nuclei.
NASA Technical Reports Server (NTRS)
Tombesi, F.; Reeves, J. N.; Kallman, Timothy R.; Reynolds, C. S.; Mushotzky, R. F.; Braito, V.; Behar, E.; Leutenegger, Maurice A.; Cappi, M.
2016-01-01
We present the first high spectral resolution X-ray observation of the broad-line radio galaxy 3C 390.3 obtained with the high-energy transmission grating spectrometer on board the Chandra X-ray Observatory. The spectrum shows complex emission and absorption features in both the soft X-rays and the Fe K band. We detect emission and absorption lines in the energy range E = 700-1000 eV associated with ionized Fe L transitions (Fe XVII-XX). An emission line at the energy of E approximately equal to 6.4 keV consistent with Fe K alpha is also observed. Our best-fit model requires at least three different components: (i) a hot emission component likely associated with the hot interstellar medium in this elliptical galaxy with temperature kT = 0.5 +/- 0.1 keV; (ii) a warm absorber with ionization parameter log xi = 2.3 +/- 0.5 erg s(exp -1) cm, column density log N(sub H) = 20.7 +/- 0.1 cm(exp -2), and outflow velocity v(sub out) less than 150 km s(exp -1); and (iii) a low-ionization reflection component in the Fe K band likely associated with the optical broad-line region or the outer accretion disk. This evidence suggests the possibility that we are looking directly down the ionization cone of this active galaxy and that the central X-ray source photoionizes only along the unobscured cone. This is overall consistent with the angle-dependent unified picture of active galactic nuclei.
NASA Astrophysics Data System (ADS)
Sun, Mouyuan; Trump, Jonathan R.; Shen, Yue; Brandt, W. N.; Dawson, Kyle; Denney, Kelly D.; Hall, Patrick B.; Ho, Luis C.; Horne, Keith; Jiang, Linhua; Richards, Gordon T.; Schneider, Donald P.; Bizyaev, Dmitry; Kinemuchi, Karen; Oravetz, Daniel; Pan, Kaike; Simmons, Audrey
2015-09-01
We explore the variability of quasars in the Mg ii and Hβ broad emission lines and ultraviolet/optical continuum emission using the Sloan Digital Sky Survey Reverberation Mapping project (SDSS-RM). This is the largest spectroscopic study of quasar variability to date: our study includes 29 spectroscopic epochs from SDSS-RM over 6 months, containing 357 quasars with Mg ii and 41 quasars with Hβ. On longer timescales, the study is also supplemented with two-epoch data from SDSS-I/II. The SDSS-I/II data include an additional 2854 quasars with Mg ii and 572 quasars with Hβ. The Mg ii emission line is significantly variable (Δf/f ∼ 10% on ∼100-day timescales), a necessary prerequisite for its use for reverberation mapping studies. The data also confirm that continuum variability increases with timescale and decreases with luminosity, and the continuum light curves are consistent with a damped random-walk model on rest-frame timescales of ≳5 days. We compare the emission-line and continuum variability to investigate the structure of the broad-line region. Broad-line variability shows a shallower increase with timescale compared to the continuum emission, demonstrating that the broad-line transfer function is not a δ-function. Hβ is more variable than Mg ii (roughly by a factor of ∼1.5), suggesting different excitation mechanisms, optical depths and/or geometrical configuration for each emission line. The ensemble spectroscopic variability measurements enabled by the SDSS-RM project have important consequences for future studies of reverberation mapping and black hole mass estimation of 1 < z < 2 quasars.
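For context on the damped random-walk description referenced above, the sketch below simulates such a process with the standard exponential covariance; the timescale, amplitude, and cadence are arbitrary stand-ins, not SDSS-RM fits.

    import numpy as np

    # Sketch: a damped random walk (Ornstein-Uhlenbeck process), the continuum
    # model referenced above. Covariance: S(dt) = sigma^2 * exp(-|dt|/tau).
    rng = np.random.default_rng(0)
    tau, sigma, dt, n = 100.0, 0.1, 1.0, 500   # days, mag; illustrative only

    a = np.exp(-dt / tau)                      # one-step autocorrelation
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + sigma * np.sqrt(1.0 - a**2) * rng.standard_normal()

    # The structure function of x grows with time separation and saturates near
    # sqrt(2)*sigma, matching the qualitative behaviour described in the abstract.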
NASA Astrophysics Data System (ADS)
Gaskell, C. Martin
2017-05-01
Low-redshift active galactic nuclei (AGNs) with extremely blue optical spectral indices are shown to have a mean, velocity-averaged, broad-line Hα/Hβ ratio of ≈2.72 ± 0.04, consistent with a Baker-Menzel Case B value. Comparison of a wide range of properties of the very bluest AGNs with those of a luminosity-matched subset of the Dong et al. blue AGN sample indicates that the only difference is the internal reddening. Ultraviolet fluxes are brighter for the bluest AGNs by an amount consistent with the flat AGN reddening curve of Gaskell et al. The lack of a significant difference in the GALEX (far-ultraviolet minus near-ultraviolet) colour index strongly rules out a steep Small Magellanic Cloud-like reddening curve and also argues against an intrinsically harder spectrum for the bluest AGNs. For very blue AGNs, the Lyα/Hβ ratio is also consistent with the Case B value. The Case B ratios provide strong support for the self-shielded broad-line model of Gaskell, Klimek & Nazarova. It is proposed that the greatly enhanced Lyα/Hβ ratio at very high velocities is a consequence of continuum fluorescence in the Lyman lines (Case C). Reddenings of AGNs mean that the far-UV luminosity is often underestimated by up to an order of magnitude. This is a major factor causing the discrepancies between measured accretion disc sizes and the predictions of simple accretion disc theory. Dust covering fractions for most AGNs are lower than has been estimated. The total mass in lower mass supermassive black holes must be greater than hitherto estimated.
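To make the reddening logic concrete: given the Case B intrinsic Hα/Hβ ≈ 2.72 quoted above, an observed decrement yields E(B-V) once a reddening curve is adopted. A minimal sketch follows; the extinction coefficients are generic Milky Way-like values chosen for illustration, not the flat AGN curve the paper advocates.

    import math

    def ebv_from_balmer(obs_ratio, intrinsic=2.72, k_ha=2.53, k_hb=3.61):
        """E(B-V) from an observed broad-line Halpha/Hbeta ratio.

        k_ha and k_hb are extinction-curve values at Halpha and Hbeta
        (roughly Milky Way-like, for illustration only)."""
        return 2.5 / (k_hb - k_ha) * math.log10(obs_ratio / intrinsic)

    print(ebv_from_balmer(3.5))   # ~0.25 mag for these assumed coefficients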
Constraints on the broad-line region properties and extinction in local Seyferts
NASA Astrophysics Data System (ADS)
Schnorr-Müller, Allan; Davies, R. I.; Korista, K. T.; Burtscher, L.; Rosario, D.; Storchi-Bergmann, T.; Contursi, A.; Genzel, R.; Graciá-Carpio, J.; Hicks, E. K. S.; Janssen, A.; Koss, M.; Lin, M.-Y.; Lutz, D.; Maciejewski, W.; Müller-Sánchez, F.; Orban de Xivry, G.; Riffel, R.; Riffel, Rogemar A.; Schartmann, M.; Sternberg, A.; Sturm, E.; Tacconi, L.; Veilleux, S.; Ulrich, O. A.
2016-11-01
We use high-spectral resolution (R > 8000) data covering 3800-13000 Å to study the physical conditions of the broad-line region (BLR) of nine nearby Seyfert 1 galaxies. Up to six broad H I lines are present in each spectrum. A comparison - for the first time using simultaneous optical to near-infrared observations - to photoionization calculations with our devised simple scheme yields the extinction to the BLR at the same time as determining the density and photon flux, and hence distance from the nucleus, of the emitting gas. This points to a typical density for the H I emitting gas of 10^11 cm^-3 and shows that a significant amount of this gas lies at regions near the dust sublimation radius, consistent with theoretical predictions. We also confirm that in many objects, the line ratios are far from case B, the best-fitting intrinsic broad-line Hα/Hβ ratios being in the range 2.5-6.6 as derived with our photoionization modelling scheme. The extinction to the BLR, based on independent estimates from H I and He II lines, is AV ≤ 3 for Seyfert 1-1.5s, while Seyfert 1.8-1.9s have AV in the range 4-8. A comparison of the extinction towards the BLR and narrow-line region (NLR) indicates that the structure obscuring the BLR exists on scales smaller than the NLR. This could be the dusty torus, but dusty nuclear spirals or filaments could also be responsible. The ratios between the X-ray absorbing column NH and the extinction to the BLR are consistent with the Galactic gas-to-dust ratio if NH variations are considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roig, Benjamin; Blanton, Michael R.; Ross, Nicholas P.
2014-02-01
Many classes of active galactic nuclei (AGNs) have been observed and recorded since the discovery of Seyfert galaxies. In this paper, we examine the sample of luminous galaxies in the Baryon Oscillation Spectroscopic Survey. We find a potentially new observational class of AGNs, one with strong and broad Mg II λ2799 line emission, but very weak emission in other normal indicators of AGN activity, such as the broad-line Hα, Hβ, and the near-ultraviolet AGN continuum, leading to an extreme ratio of broad Hα/Mg II flux relative to normal quasars. Meanwhile, these objects' narrow-line flux ratios reveal AGN narrow-line regions with levels of activity consistent with the Mg II fluxes and in agreement with that of normal quasars. These AGN may represent an extreme case of the Baldwin effect, with very low continuum and high equivalent width relative to typical quasars, but their ratio of broad Mg II to broad Balmer emission remains very unusual. They may also be representative of a class of AGN where the central engine is observed indirectly with scattered light. These galaxies represent a small fraction of the total population of luminous galaxies (≅0.1%), but are more likely (about 3.5 times) to have AGN-like nuclear line emission properties than other luminous galaxies. Because Mg II is usually inaccessible for the population of nearby galaxies, there may exist a related population of broad-line Mg II emitters in the local universe which is currently classified as narrow-line emitters (Seyfert 2 galaxies) or low ionization nuclear emission-line regions.
EDDINGTON RATIO DISTRIBUTION OF X-RAY-SELECTED BROAD-LINE AGNs AT 1.0 < z < 2.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suh, Hyewon; Hasinger, Günther; Steinhardt, Charles
2015-12-20
We investigate the Eddington ratio distribution of X-ray-selected broad-line active galactic nuclei (AGNs) in the redshift range 1.0 < z < 2.2, where the number density of AGNs peaks. Combining optical spectroscopy with Subaru/Fiber Multi Object Spectrograph near-infrared spectroscopy, we estimate black hole masses for broad-line AGNs in the Chandra Deep Field South (CDF-S), Extended Chandra Deep Field South (E-CDF-S), and XMM-Newton Lockman Hole (XMM-LH) surveys. AGNs with similar black hole masses show a broad range of AGN bolometric luminosities, which are calculated from X-ray luminosities, indicating that the accretion rates of black holes are widely distributed. We find a substantial fraction of massive black holes accreting significantly below the Eddington limit at z ≲ 2, in contrast to what is generally found for luminous AGNs at high redshift. Our analysis of observational selection biases indicates that the "AGN cosmic downsizing" phenomenon can be simply explained by the strong evolution of the comoving number density at the bright end of the AGN luminosity function, together with the corresponding selection effects. However, one might need to consider a correlation between the AGN luminosity and the accretion rate of black holes, in which luminous AGNs have higher Eddington ratios than low-luminosity AGNs, in order to understand the relatively small fraction of low-luminosity AGNs with high accretion rates in this epoch. Therefore, the observed downsizing trend could be interpreted as massive black holes with low accretion rates being relatively fainter than less-massive black holes with efficient accretion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sameshima, H.; Yoshii, Y.; Kawara, K., E-mail: sameshima@cc.kyoto-su.ac.jp
2017-01-10
We present an analysis of Mg ii λ2798 and Fe ii UV emission lines for archival Sloan Digital Sky Survey (SDSS) quasars to explore the diagnostics of the magnesium-to-iron abundance ratio in a broad-line region cloud. Our sample consists of 17,432 quasars selected from the SDSS Data Release 7 with a redshift range of 0.72 < z < 1.63. A strong anticorrelation between the Mg ii equivalent width (EW) and the Eddington ratio is found, while only a weak positive correlation is found between the Fe ii EW and the Eddington ratio. To investigate the origin of these differing behaviors of Mg ii and Fe ii emission lines, we perform photoionization calculations using the Cloudy code, where constraints from recent reverberation mapping studies are considered. We find from calculations that (1) Mg ii and Fe ii emission lines are created at different regions in a photoionized cloud, and (2) their EW correlations with the Eddington ratio can be explained by just changing the cloud gas density. These results indicate that the Mg ii/Fe ii flux ratio, which has been used as a first-order proxy for the Mg/Fe abundance ratio in chemical evolution studies with quasar emission lines, depends largely on the cloud gas density. By correcting for this density dependence, we propose new diagnostics of the Mg/Fe abundance ratio for a broad-line region cloud. In comparing the derived Mg/Fe abundance ratios with chemical evolution models, we suggest that α-enrichment by mass loss from metal-poor intermediate-mass stars occurred at z ∼ 2 or earlier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Constantin, Anca; Castillo, Christopher A.; Shields, Joseph C.
Using a sample of ∼100 nearby line-emitting galaxy nuclei, we have built the currently definitive atlas of spectroscopic measurements of Hα and neighboring emission lines at subarcsecond scales. We employ these data in a quantitative comparison of the nebular emission in Hubble Space Telescope (HST) and ground-based apertures, which offer an order-of-magnitude difference in contrast, and provide new statistical constraints on the degree to which transition objects and low-ionization nuclear emission-line regions (LINERs) are powered by an accreting black hole at ≲10 pc. We show that while the small-aperture observations clearly resolve the nebular emission, the aperture dependence in the line ratios is generally weak, and this can be explained by gradients in the density of the line-emitting gas: the higher densities in the more nuclear regions potentially flatten the excitation gradients, suppressing the forbidden emission. The transition objects show a threefold increase in the incidence of broad Hα emission in the high-resolution data, as well as the strongest density gradients, supporting the composite model for these systems as accreting sources surrounded by star-forming activity. The narrow-line LINERs appear to be the weaker counterparts of the Type 1 LINERs, where the low accretion rates cause the disappearance of the broad-line component. The enhanced sensitivity of the HST observations reveals a 30% increase in the incidence of accretion-powered systems at z ≈ 0. A comparison of the strength of the broad-line emission detected at different epochs implies potential broad-line variability on a decade-long timescale, with at least a factor of three in amplitude.
Giannelli, Alessio; Brianti, Emanuele; Varcasia, Antonio; Colella, Vito; Tamponi, Claudia; Di Paola, Giancarlo; Knaus, Martin; Halos, Lénaïg; Beugnet, Frederic; Otranto, Domenico
2015-04-30
The increasing reports of Aelurostrongylus abstrusus infection and new information on Troglostrongylus brevior have spurred the interest of the scientific community in pharmaceutical compounds effective against both pathogens. A novel topical combination of fipronil, (S)-methoprene, eprinomectin and praziquantel (Broadline®, Merial) has been released for the treatment of a variety of feline parasitic infections. The present study reports the efficacy of this spot-on in treating cats naturally infected by feline lungworms. Client-owned cats (n=191) were enrolled from three geographical areas of Italy and faecal samples were examined by flotation and Baermann techniques. Twenty-three individuals were positive for L1 of A. abstrusus (n=18), T. brevior (n=3) or both species (n=2), and they were topically treated with Broadline®. Seventeen of them were also concomitantly infected by other parasites. Four weeks after treatment, faecal samples were collected and examined to assess the efficacy of a single administration of the product. Based on lungworm larvae counts, the efficacy of the treatment was 90.5% for A. abstrusus and 100% for T. brevior. Cats released significantly lower amounts of lungworm larvae after treatment compared to pre-treatment (p<0.0001). All but three cats were negative for other nematodes after treatment and all cats recovered from respiratory signs. Results of this study indicate that a single administration of the topical combination of fipronil, (S)-methoprene, eprinomectin and praziquantel is effective and safe for the treatment of A. abstrusus and/or T. brevior infections in cats living under field conditions. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khajenabi, Fazeleh, E-mail: f.khajenabi@gu.ac.ir
We investigate the orbital motion of cold clouds in the broad-line region of active galactic nuclei subject to the gravity of a black hole, a force due to a non-isotropic central source, and a drag force proportional to the square of the velocity. The intercloud medium is described using the standard solutions for advection-dominated accretion flows. The orbit of a cloud decays because of the drag force, but the typical timescale for clouds to fall onto the central black hole is shorter than in the linear drag case. This timescale is calculated for a cloud moving through either a static or a rotating intercloud medium. We show that when the drag force is a quadratic function of the velocity, irrespective of the initial conditions and other input parameters, clouds will generally fall onto the central region much faster than the age of the whole system; since cold clouds are present in most broad-line regions, we suggest that mechanisms for the continuous creation of clouds must operate in these systems.
GTC and Swift observations of SN 2017dcc: Revised redshift & X-ray upper limit
NASA Astrophysics Data System (ADS)
Kann, D. A.; Izzo, L.; Cano, Z.; Postigo, A. de Ugarte; Thoene, C. C.; Schulze, S.
2017-04-01
We obtained observations of the broad-lined Type Ic SN 2017dcc (Gutierrez et al., ATel #10313) with the 10.4m GTC on La Palma, Canary Islands, Spain, as well as with XRT and UVOT on the Swift space telescope.
SN 2010ay is a Luminous and Broad-lined Type Ic Supernova within a Low-metallicity Host Galaxy
NASA Technical Reports Server (NTRS)
Sanders, N. E.; Soderberg, A. M.; Valenti, S.; Chomiuk, L.; Berger, E.; Smartt, S.; Hurley, K.; Barthelmy, S. D.; Chornock, R.; Foley, R. J.;
2011-01-01
We report on our serendipitous pre-discovery detection and detailed follow-up of the broad-lined Type Ic supernova SN 2010ay at z approx 0.067 imaged by the Pan-STARRS1 3pi survey just approx 4 days after explosion. Combining our photometric observations with those available in the literature, we estimate the explosion date and the peak luminosity of the SN, M(sub R) approximately equals -20.2 mag, significantly brighter than known GRB-SNe and one of the most luminous SNe Ibc ever discovered. We measure the photospheric expansion velocity of the explosion from our spectroscopic follow-up observations, v(sub ph) approximately equals 19.2 X 10(exp 3) km/s at approx 40 days after explosion. In comparison with other broad-lined SNe, the characteristic velocity of SN 2010ay is 2-5 times higher and similar to the measurements for GRB-SNe at comparable epochs. Moreover, the velocity declines two times slower than in other SNe Ic-BL and GRB-SNe. Assuming that the optical emission is powered by radioactive decay, the peak magnitude implies the synthesis of an unusually large mass of Ni-56, M(sub Ni) = 0.9(+0.1/-0.1) solar mass. Our modeling of the light curve points to a total ejecta mass, M(sub ej) approx 4.7 solar mass, and total kinetic energy, E(sub K,51) approximately equals 11. Thus the ratio of M(sub Ni) to M(sub ej) is at least twice as large for SN 2010ay as in GRB-SNe and may indicate an additional energy reservoir. We also measure the metallicity (log(O/H) + 12 = 8.19) of the explosion site within the host galaxy using a high S/N optical spectrum. Our abundance measurement places this SN in the low-metallicity regime populated by GRB-SNe, and approx 0.2(0.5) dex lower than that typically measured for the host environments of normal (broad-lined) Ic supernovae. Despite striking similarities to the recent GRB-SN 100316D/2010bh, we show that gamma-ray observations rule out an associated GRB with E(sub gamma) approx < 6 X 10(exp 48) erg (25-150 keV). Similarly, our deep radio follow-up observations with the Expanded Very Large Array rule out relativistic ejecta with energy, E approx > 10(exp 48) erg. These observations challenge the importance of progenitor metallicity for the production of a GRB, and suggest that other parameters also play a key role.
SN 2010ay Is a Luminous and Broad-Lined Type Ic Supernova Within a Low-Metallicity Host Galaxy
NASA Technical Reports Server (NTRS)
Sanders, N. E.; Soderberg, A. M.; Valenti, S.; Foley, R. J.; Chornock, R.; Chomiuk, L.; Berger, E.; Smartt, S.; Hurley, K.; Barthelmy, S. D.;
2012-01-01
We report on our serendipitous pre-discovery detection and follow-up observations of the broad-lined Type Ic supernova (SN Ic-BL) 2010ay at z = 0.067 imaged by the Pan-STARRS1 3pi survey just approximately 4 days after explosion. The supernova (SN) had a peak luminosity, M(sub R) approx. -20.2 mag, significantly more luminous than known GRB-SNe and one of the most luminous SNe Ib/c ever discovered. The absorption velocity of SN 2010ay is v(sub Si) approx. 19 × 10(exp 3) km/s at approximately 40 days after explosion, 2-5 times higher than other broad-lined SNe and similar to the GRB-SN 2010bh at comparable epochs. Moreover, the velocity declines approximately 2 times slower than other SNe Ic-BL and GRB-SNe. Assuming that the optical emission is powered by radioactive decay, the peak magnitude implies the synthesis of an unusually large mass of Ni-56, M(sub Ni) = 0.9 solar mass. Applying scaling relations to the light curve, we estimate a total ejecta mass, M(sub ej) approx. 4.7 solar mass, and total kinetic energy, E(sub K) approx. 11 × 10(exp 51) erg. The ratio of M(sub Ni) to M(sub ej) is approximately 2 times as large for SN 2010ay as typical GRB-SNe and may suggest an additional energy reservoir. The metallicity (log(O/H)(sub PP04) + 12 = 8.19) of the explosion site within the host galaxy places SN 2010ay in the low-metallicity regime populated by GRB-SNe, and approximately 0.5(0.2) dex lower than that typically measured for the host environments of normal (broad-lined) SNe Ic. We constrain any gamma-ray emission with E(sub gamma) approximately less than 6 × 10(exp 48) erg (25-150 keV), and our deep radio follow-up observations with the Expanded Very Large Array rule out relativistic ejecta with energy E approximately greater than 10(exp 48) erg. We therefore rule out the association of a relativistic outflow like those that accompanied SN 1998bw and traditional long-duration gamma-ray bursts (GRBs), but we place less-stringent constraints on a weak afterglow like that seen from XRF 060218. If this SN did not harbor a GRB, these observations challenge the importance of progenitor metallicity for the production of relativistic ejecta and suggest that other parameters also play a key role.
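The step from peak magnitude to nickel mass in both records above is conventionally done with Arnett's rule; a minimal sketch under standard decay constants, with a placeholder peak luminosity and rise time rather than the measured values:

    import math

    EPS_NI = 3.9e10          # erg/g/s, energy generation from 56Ni decay
    EPS_CO = 6.8e9           # erg/g/s, from 56Co decay
    T_NI, T_CO = 8.8, 111.3  # e-folding decay times in days

    def nickel_mass_msun(l_peak_erg_s, t_rise_d):
        """Arnett's rule: peak luminosity ~ instantaneous radioactive power."""
        q = (EPS_NI * math.exp(-t_rise_d / T_NI)
             + EPS_CO * (math.exp(-t_rise_d / T_CO) - math.exp(-t_rise_d / T_NI)))
        return l_peak_erg_s / q / 1.989e33

    print(nickel_mass_msun(2.0e43, 17.0))   # ~1 Msun for these placeholder inputs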
Payload training methodology study
NASA Technical Reports Server (NTRS)
1990-01-01
The results of the Payload Training Methodology Study (PTMS) are documented. Methods and procedures are defined for the development of payload training programs to be conducted at the Marshall Space Flight Center Payload Training Complex (PTC) for the Space Station Freedom program. The study outlines the overall training program concept as well as the six methodologies associated with the program implementation. The program concept outlines the entire payload training program from initial identification of training requirements to the development of detailed design specifications for simulators and instructional material. The following six methodologies are defined: (1) The Training and Simulation Needs Assessment Methodology; (2) The Simulation Approach Methodology; (3) The Simulation Definition Analysis Methodology; (4) The Simulator Requirements Standardization Methodology; (5) The Simulator Development Verification Methodology; and (6) The Simulator Validation Methodology.
The near-infrared radius-luminosity relationship for active galactic nuclei
NASA Astrophysics Data System (ADS)
Landt, Hermine; Bentz, Misty C.; Peterson, Bradley M.; Elvis, Martin; Ward, Martin J.; Korista, Kirk T.; Karovska, Margarita
2011-05-01
Black hole masses for samples of active galactic nuclei (AGNs) are currently estimated from single-epoch optical spectra. In particular, the size of the broad-line emitting region needed to compute the black hole mass is derived from the optical or ultraviolet continuum luminosity. Here we consider the relationship between the broad-line region size, R, and the near-infrared (near-IR) AGN continuum luminosity, L, as the near-IR continuum suffers less dust extinction than at shorter wavelengths and the prospects for separating the AGN continuum from host-galaxy starlight are better in the near-IR than in the optical. For a relationship of the form R ∝ L^α, we obtain for a sample of 14 reverberation-mapped AGN a best-fitting slope of α = 0.5 ± 0.1, which is consistent with the slope of the relationship in the optical band and with the value of 0.5 naïvely expected from photoionization theory. Black hole masses can then be estimated from the near-IR virial product, which is calculated using the strong and unblended Paschen broad emission lines (Paα or Paβ).
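A minimal sketch of how a radius-luminosity relation of this form feeds a virial black hole mass estimate; the normalization of the R-L relation and the virial factor below are illustrative placeholders, since the abstract quotes only the slope.

    G = 6.674e-8          # cgs gravitational constant
    MSUN = 1.989e33       # g
    LT_DAY = 2.59e15      # cm per light-day

    def blr_radius_ld(lum, l0=1e44, r0=30.0, alpha=0.5):
        """BLR radius in light-days from R = r0*(L/l0)^alpha; r0, l0 illustrative."""
        return r0 * (lum / l0) ** alpha

    def virial_mass_msun(radius_ld, fwhm_kms, f=1.0):
        """M = f * R * FWHM^2 / G, with an assumed virial factor f."""
        return f * (radius_ld * LT_DAY) * (fwhm_kms * 1e5) ** 2 / G / MSUN

    print(virial_mass_msun(blr_radius_ld(1e44), 4000.0))   # ~1e8 Msun scale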
NASA Astrophysics Data System (ADS)
Lipka, Michał; Parniak, Michał; Wasilewski, Wojciech
2017-09-01
We present an experimental realization of an optical frequency-locked loop applied to long-term stabilization of the frequency difference between broad-line DFB lasers, along with a new independent method to characterize the relative phase fluctuations of two lasers. The presented design is based on a fast photodiode matched with an integrated phase-frequency detector chip. The locking setup is digitally tunable in real time, insensitive to environmental perturbations and compatible with commercially available laser current control modules. We present a simple model and a quick method to optimize the loop for given hardware, relying exclusively on simple measurements in the time domain. The step response of the system as well as the phase characteristics closely agree with the theoretical model. Finally, frequency stabilization for offsets within the 4-15 GHz working range is demonstrated, achieving <0.1 Hz long-term stability of the beat-note frequency for a 500 s averaging time. For these measurements we employ an I/Q mixer that allows us to precisely and independently measure the full phase trace of the beat-note signal.
Stability of the Broad-line Region Geometry and Dynamics in Arp 151 Over Seven Years
NASA Astrophysics Data System (ADS)
Pancoast, A.; Barth, A. J.; Horne, K.; Treu, T.; Brewer, B. J.; Bennert, V. N.; Canalizo, G.; Gates, E. L.; Li, W.; Malkan, M. A.; Sand, D.; Schmidt, T.; Valenti, S.; Woo, J.-H.; Clubb, K. I.; Cooper, M. C.; Crawford, S. M.; Hönig, S. F.; Joner, M. D.; Kandrashoff, M. T.; Lazarova, M.; Nierenberg, A. M.; Romero-Colmenero, E.; Son, D.; Tollerud, E.; Walsh, J. L.; Winkler, H.
2018-04-01
The Seyfert 1 galaxy Arp 151 was monitored as part of three reverberation mapping campaigns spanning 2008–2015. We present modeling of these velocity-resolved reverberation mapping data sets using a geometric and dynamical model for the broad-line region (BLR). By modeling each of the three data sets independently, we infer the evolution of the BLR structure in Arp 151 over a total of 7 yr and constrain the systematic uncertainties in nonvarying parameters such as the black hole mass. We find that the BLR geometry of a thick disk viewed close to face-on is stable over this time, although the size of the BLR grows by a factor of ∼2. The dynamics of the BLR are dominated by inflow, and the inferred black hole mass is consistent for the three data sets, despite the increase in BLR size. Combining the inference for the three data sets yields a black hole mass and statistical uncertainty of log10(M_BH/M_⊙) = 6.82 (+0.09/−0.09), with a standard deviation in individual measurements of 0.13 dex.
NASA Technical Reports Server (NTRS)
Peterson, B. M.; Berlind, P.; Bertram, R.; Bischoff, K.; Bochkarev, N. G.; Burenkov, A. N.; Calkins, M.; Carrasco, L.; Chavushyan, V. H.
2002-01-01
We present the final installment of an intensive 13 year study of variations of the optical continuum and broad H beta emission line in the Seyfert 1 galaxy NGC 5548. The database consists of 1530 optical continuum measurements and 1248 H beta measurements. The H beta variations follow the continuum variations closely, with a typical time delay of about 20 days. However, a year-by-year analysis shows that the emission-line time delay is correlated with the mean continuum flux. We argue that the data are consistent with the simple model prediction relating the size of the broad-line region to the ionizing luminosity, r is proportional to L(sup 1/2)(sub ion). Moreover, the apparently linear nature of the correlation between the H beta response time and the nonstellar optical continuum F(sub opt) arises as a consequence of the changing shape of the continuum as it varies, specifically F(sub opt) is proportional to F(sup 0.56)(sub UV).
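The closing statement can be made explicit with a two-line scaling argument (a sketch using only the exponents quoted in the abstract):

$$\tau \propto L_{\rm ion}^{1/2} \propto F_{\rm UV}^{1/2}, \qquad F_{\rm opt} \propto F_{\rm UV}^{0.56} \;\Rightarrow\; \tau \propto F_{\rm opt}^{0.5/0.56} \approx F_{\rm opt}^{0.89},$$

so the H beta lag tracks the optical continuum nearly linearly even though it scales as the square root of the ionizing flux.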
DISSECTING THE QUASAR MAIN SEQUENCE: INSIGHT FROM HOST GALAXY PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Jiayi; Shen, Yue
2015-05-01
The diverse properties of broad-line quasars appear to follow a well-defined main sequence along which the optical Fe ii strength increases. It has been suggested that this sequence is mainly driven by the Eddington ratio (L/L{sub Edd}) of the black hole (BH) accretion. Shen and Ho demonstrated with quasar clustering analysis that the average BH mass decreases with increasing Fe ii strength when quasar luminosity is fixed, consistent with this suggestion. Here we perform an independent test by measuring the stellar velocity dispersion σ{sub *} (hence, the BH mass via the M–σ{sub *} relation) from decomposed host spectra in low-redshift Sloan Digital Sky Survey quasars. We found that at fixed quasar luminosity, σ{sub *} systematically decreases with increasing Fe ii strength, confirming that the Eddington ratio increases with Fe ii strength. We also found that at fixed luminosity and Fe ii strength, there is little dependence of σ{sub *} on the broad Hβ FWHM. These new results reinforce the framework that the Eddington ratio and orientation govern most of the diversity seen in broad-line quasar properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Hua; Li, Hong; Shen, Yue
2014-10-10
Based on an updated Hβ reverberation mapping (RM) sample of 44 nearby active galactic nuclei (AGNs), we propose a novel approach for black hole (BH) mass estimation using two filtered luminosities computed from single-epoch (SE) AGN spectra around the Hβ region. We found that the two optimal-filter luminosities extract virial information (size and virial velocity of the broad-line region, BLR) from the spectra, justifying their usage in this empirical BH mass estimator. The major advantages of this new recipe over traditional SE BH mass estimators utilizing continuum luminosity and broad-line width are (1) it has a smaller intrinsic scatter of 0.28 dex calibrated against RM masses, (2) it is extremely simple to use in practice, without any need to decompose the spectrum, and (3) it produces unambiguous and highly repeatable results even with low signal-to-noise spectra. The combination of the two luminosities can also cancel out, to some extent, systematic luminosity errors potentially introduced by uncertainties in distance or flux calibration. In addition, we recalibrated the traditional SE mass estimators using broad Hβ FWHM and monochromatic continuum luminosity at 5100 Å (L {sub 5100}). We found that using the best-fit slopes on FWHM and L {sub 5100} (derived from fitting the BLR radius-luminosity relation and the correlation between rms line dispersion and SE FWHM, respectively) rather than simple assumptions (e.g., 0.5 for L {sub 5100} and 2 for FWHM) leads to more precise SE mass estimates, improving the intrinsic scatter from 0.41 dex to 0.36 dex with respect to the RM masses. We compared different estimators and discussed their applications to the Sloan Digital Sky Survey quasar sample. Due to the limitations of the current RM sample, application of any SE recipe calibrated against RM masses to distant quasars should be treated with caution.
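For context, the traditional single-epoch recipe being recalibrated has the generic form sketched below; the exponents 0.5 and 2 are the conventional assumptions the paper revisits, and the zero point here is an illustrative placeholder rather than the paper's calibration.

    import math

    def log_mbh_single_epoch(l5100_erg_s, fwhm_kms, a=6.9, b=0.5, c=2.0):
        """Generic single-epoch virial estimator:
        log(M/Msun) = a + b*log(L5100/1e44 erg/s) + c*log(FWHM/1000 km/s).
        The zero point a is illustrative; b and c are the conventional slopes."""
        return (a + b * math.log10(l5100_erg_s / 1e44)
                  + c * math.log10(fwhm_kms / 1000.0))

    print(log_mbh_single_epoch(1e44, 4000.0))   # ~8.1 with these placeholders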
NASA Technical Reports Server (NTRS)
Zheng, W.; Kriss, G. A.; Wang, J. X.; Brotherton, M.; Oegerle, W. R.; Blair, W. P.; Davidsen, A. F.; Green, R. F.; Hutchings, J. B.; Kaiser, M. E.;
2001-01-01
We present a moderate-resolution (approximately 20 km s(exp -1)) spectrum of the mini broad absorption line QSO PG 1351+64 between 915-1180 A, obtained with the Far Ultraviolet Spectroscopic Explorer (FUSE). Additional low-resolution spectra at longer wavelengths were also obtained with the Hubble Space Telescope (HST) and ground-based telescopes. Broad absorption is present on the blue wings of C III (lambda)977, Ly(beta), O VI (lambda)(lambda)1032,1038, Ly(alpha), N V (lambda)(lambda)1238,1242, Si IV (lambda)(lambda)1393,1402, and C IV (lambda)(lambda)1548,1550. The absorption profile can be fitted with five components at velocities of approximately -780, -1049, -1629, -1833, and -3054 km s(exp -1) with respect to the emission-line redshift of z = 0.088. All the absorption components cover a large fraction of the continuum source as well as the broad-line region. The O VI emission feature is very weak, and the O VI/Ly(alpha) flux ratio is 0.08, one of the lowest among low-redshift active galaxies and QSOs. The UV (ultraviolet) continuum shows a significant change in slope near 1050 A in the rest frame. The steeper continuum shortward of the Lyman limit extrapolates well to the observed weak X-ray flux level. The absorbers' properties are similar to those of high-redshift broad absorption-line QSOs. The derived total column density of the UV absorbers is on the order of 10(exp 21) cm(exp -2), unlikely to produce significant opacity above 1 keV in the X-ray. Unless there is a separate, high-ionization X-ray absorber, the QSO's weak X-ray flux may be intrinsic. The ionization level of the absorbing components is comparable to that anticipated in the broad-line region; therefore the absorbers may be related to broad-line clouds along the line of sight.
NASA Technical Reports Server (NTRS)
Zheng, W.; Kriss, G. A.; Wang, J. X.; Brotherton, M.; Oegerle, W. R.; Blair, W. P.; Davidsen, A. F.; Green, R. F.; Hutchings, J. B.; Kaiser, M. E.;
2001-01-01
We present a moderate-resolution (approximately 20 km/s) spectrum of the broad-absorption line QSO PG 1351+64 between 915-1180 angstroms, obtained with the Far Ultraviolet Spectroscopic Explorer (FUSE). Additional low-resolution spectra at longer wavelengths were also obtained with the Hubble Space Telescope (HST) and ground-based telescopes. Broad absorption is present on the blue wings of C III lambda977, Ly-beta, O VI lambda-lambda-1032,1038, Ly-alpha, N V lambda-lambda-1238,1242, Si IV lambda-lambda-1393,1402, and C IV lambda-lambda-1548,1550. The absorption profile can be fitted with five components at velocities of approximately -780, -1049, -1629, -1833, and -3054 km/s with respect to the emission-line redshift of z = 0.088. All the absorption components cover a large fraction of the continuum source as well as the broad-line region. The O VI emission feature is very weak, and the O VI/Ly-alpha flux ratio is 0.08, one of the lowest among low-redshift active galaxies and QSOs. The ultraviolet continuum shows a significant change in slope near 1050 angstroms in the rest frame. The steeper continuum shortward of the Lyman limit extrapolates well to the observed weak X-ray flux level. The absorbers' properties are similar to those of high-redshift broad absorption-line QSOs. The derived total column density of the UV absorbers is on the order of 10(exp 21) cm(exp -2), unlikely to produce significant opacity above 1 keV in the X-ray. Unless there is a separate, high-ionization X-ray absorber, the QSO's weak X-ray flux may be intrinsic. The ionization level of the absorbing components is comparable to that anticipated in the broad-line region; therefore the absorbers may be related to broad-line clouds along the line of sight.
Correlation between the line width and the line flux of the double-peaked broad Hα of 3C390.3
NASA Astrophysics Data System (ADS)
Zhang, Xue-Guang
2013-03-01
In this paper, we carefully check the correlation between the line width (second moment) and the line flux of the double-peaked broad Hα of the well-known mapped active galactic nucleus (AGN) 3C390.3, in order to show some further distinctions between double-peaked emitters and normal broad-line AGN. Based on the virialization assumption M_BH ∝ R_BLR × V²_BLR and the empirical relation R_BLR ∝ L^0.5, a strong negative correlation between the line width and the line flux of the double-peaked broad lines should be expected for 3C390.3, like the negative correlation confirmed for the mapped broad-line object NGC 5548: R_BLR × V²_BLR ∝ L^0.5 × σ² = constant. However, based on the public spectra around 1995 from the AGN WATCH project for 3C390.3, one reliable positive correlation is found between the line width and the line flux of the double-peaked broad Hα. In the context of the proposed theoretical accretion disc model for double-peaked emitters, the unexpected positive correlation can be naturally explained by the different time delays of the inner and outer parts of the disc-like broad-line region (BLR) of 3C390.3. Moreover, the virialization assumption is checked and found to still hold for 3C390.3. However, the time-varying size of the BLR of 3C390.3 is not reproduced by the empirical relation R_BLR ∝ L^0.5. In other words, the mean size of the BLR of 3C390.3 can be estimated from the continuum luminosity (line luminosity), but at different epochs a strengthening continuum corresponds to a decreasing (not increasing) BLR size in 3C390.3. We then compared our results for 3C390.3 with previous results reported in the literature for other double-peaked emitters, and found that until the effects of varying disc physical parameters (such as disc precession) are clearly corrected for in long-term observed line spectra, it is not meaningful to discuss correlations among the line parameters of double-peaked broad lines. Furthermore, owing to the probable 'external' ionizing source with as yet unclear structure, it is hard to conclude that a positive correlation between line width and line flux will be found for all double-peaked emitters, even after accounting for varying disc physical parameters. However, once a positive correlation of broad-line parameters is found, an accretion disc origin for the broad line should be considered first.
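The expectation being tested can be written compactly (a sketch under the virialization assumption stated in the abstract):

$$M_{\rm BH} \propto R_{\rm BLR} V_{\rm BLR}^2 = \text{const}, \qquad R_{\rm BLR} \propto L^{0.5} \;\Rightarrow\; V_{\rm BLR} \propto L^{-0.25},$$

i.e. the line width should fall as the flux rises, which is why the positive width-flux correlation measured for 3C390.3 is unexpected.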
WEAPON SIMULATOR TEST METHODOLOGY INVESTIGATION: COMPARISON OF LIVE FIRE AND WEAPON SIMULATOR TEST METHODOLOGIES AND THE EFFECTS OF CLOTHING AND INDIVIDUAL EQUIPMENT ON MARKSMANSHIP
2016-09-15
Hidden Broad-line Regions in Seyfert 2 Galaxies: From the Spectropolarimetric Perspective
NASA Astrophysics Data System (ADS)
Du, Pu; Wang, Jian-Min; Zhang, Zhi-Xiang
2017-05-01
The hidden broad-line regions (BLRs) in Seyfert 2 galaxies, which display broad emission lines (BELs) in their polarized spectra, are a key piece of evidence in support of the unified model for active galactic nuclei (AGNs). However, the detailed kinematics and geometry of hidden BLRs are still not fully understood. The virial factor obtained from reverberation mapping of type 1 AGNs may be a useful diagnostic of the nature of hidden BLRs in type 2 objects. In order to understand the hidden BLRs, we compile six type 2 objects from the literature with polarized BELs and dynamical measurements of black hole masses. All of them contain pseudobulges. We estimate their virial factors, and find the average value is 0.60 and the standard deviation is 0.69, which agree well with the value of type 1 AGNs with pseudobulges. This study demonstrates that (1) the geometry and kinematics of BLR are similar in type 1 and type 2 AGNs of the same bulge type (pseudobulges), and (2) the small values of virial factors in Seyfert 2 galaxies suggest that, similar to type 1 AGNs, BLRs tend to be very thick disks in type 2 objects.
NASA Technical Reports Server (NTRS)
2007-01-01
The number of AGN and their luminosity distribution are crucial parameters for our understanding of the AGN phenomenon. Recent work strongly suggests every massive galaxy has a central black hole. However, most of these objects either are not radiating or have been very difficult to detect. We are now in the era of large surveys, and the luminosity function (LF) of AGN has been estimated in various ways. In the X-ray band, Chandra and XMM surveys have revealed that the LF of hard X-ray selected AGN shows a strong luminosity-dependent evolution with a dramatic break towards low L(sub x) (at all z). This is seen for all types of AGN, but is stronger for the broad-line objects. In sharp contrast, the local LF of optically-selected samples shows no such break and no differences between narrow and broad-line objects. If, as has been suggested, hard X-ray and optical emission lines can both be fair indicators of AGN activity, it is important to first understand how reliable these characteristics are if we hope to understand the apparent discrepancy in the LFs.
NASA Technical Reports Server (NTRS)
Walker, E. S.; Mazzali, P. A.; Pian, E.; Hurley, K.; Arcavi, I.; Cenko, S. B.; Gal-Yam, A.; Horesh, A.; Kasliwal, M.; Poznanski, D.;
2014-01-01
We present optical photometry and spectroscopy of the broad-lined Type Ic supernova (SN Ic-BL) PTF10qts, which was discovered as part of the Palomar Transient Factory. The supernova was located in a dwarf galaxy of magnitude r = 21.1 at a redshift z = 0.0907. We find that the R-band light curve is a poor proxy for bolometric data and use photometric and spectroscopic data to construct and constrain the bolometric light curve. The derived bolometric magnitude at maximum light is Mbol = -18.51 +/- 0.2 mag, comparable to that of SN 1998bw (Mbol = -18.7 mag), which was associated with a gamma-ray burst (GRB). PTF10qts is one of the most luminous SNe Ic-BL observed without an accompanying GRB. We estimate the physical parameters of the explosion using data from our programme of follow-up observations, finding that it produced a larger mass of radioactive nickel compared to other SNe Ic-BL with similar inferred ejecta masses and kinetic energies. The progenitor of the event was likely an approximately 20 solar-mass star.
Microlensing of an extended source by a power-law mass distribution
NASA Astrophysics Data System (ADS)
Congdon, Arthur B.; Keeton, Charles R.; Osmer, S. J.
2007-03-01
Microlensing promises to be a powerful tool for studying distant galaxies and quasars. As the data and models improve, there are systematic effects that need to be explored. Quasar continuum and broad-line regions may respond differently to microlensing due to their different sizes; to understand this effect, we study microlensing of finite sources by a mass function of stars. We find that microlensing is insensitive to the slope of the mass function but does depend on the mass range. For negative-parity images, diluting the stellar population with dark matter increases the magnification dispersion for small sources and decreases it for large sources. This implies that the quasar continuum and broad-line regions may experience very different microlensing in negative-parity lensed images. We confirm earlier conclusions that the surface brightness profile and geometry of the source have little effect on microlensing. Finally, we consider non-circular sources. We show that elliptical sources that are aligned with the direction of shear have larger magnification dispersions than sources with perpendicular alignment, an effect that becomes more prominent as the ellipticity increases. Elongated sources can lead to more rapid variability than circular sources, which raises the prospect of using microlensing to probe source shape.
A New Look at Ionized Disk Winds in Seyfert-1 AGN
NASA Astrophysics Data System (ADS)
Bostrom, Allison; Miller, Jon M.
2016-04-01
We present an analysis of deep, high signal-to-noise Chandra/HETG observations of four Seyfert-1 galaxies with known warm absorbers (outflowing winds), including NGC 4151, MCG-6-30-15, NGC 3783, and NGC 3516. Focusing on the 4-10 keV Fe K-band, we fit the spectra using grids of models characterized by photoionized absorption. Even in this limited band, the sensitive, time-averaged spectra all require 2-3 zones within the outflow. In an improvement over most previous studies, re-emission from the winds was self-consistently included in our models. The broadening of these emission components, when attributed to Keplerian rotation, yields new launching radius estimations that are largely consistent with the broad-line region. If this is correct, the hot outflow may supply the pressure needed to confine clumps within the broad-line region. NGC 4151 and NGC 3516 each appear to have a high-velocity component with speeds comparable to 0.01c. The winds in each of the four objects have kinetic luminosities greater than 0.5% of the host galaxy bolometric luminosity for a filling factor of unity, indicating that they may be significant agents of AGN feedback.
Mini-Survey of SDSS OIII AGN with Swift
NASA Technical Reports Server (NTRS)
Angelini, Lorella; George, Ian
2007-01-01
There is a common wisdom that every massive galaxy has a massive black hole. However, most of these objects either are not radiating or until recently have been very difficult to detect. The Sloan Digital Sky Survey (SDSS) data, based on the [OIII] line, indicate that perhaps up to 20% of all galaxies may be classified as AGN, a surprising result that must be checked with independent data. X-ray surveys have revealed that hard X-ray selected AGN show a strong luminosity-dependent evolution and their luminosity function (LF) shows a dramatic break towards low Lx (at all z). This is seen for all types of AGN, but is stronger for the broad-line objects. In sharp contrast, the local LF of optically-selected samples shows no such break and no differences between narrow and broad-line objects. Assuming both hard X-ray and [OIII] emission are fair indicators of AGN activity, it is important to understand this discrepancy. We present here the results of a mini-survey done with Swift on a selected sample of SDSS-selected AGN. The objects have been sampled at different L([OIII]) to check the relation with the Lx observed with Swift.
Hidden Broad-line Regions in Seyfert 2 Galaxies: From the Spectropolarimetric Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Pu; Wang, Jian-Min; Zhang, Zhi-Xiang, E-mail: dupu@ihep.ac.cn
2017-05-01
The hidden broad-line regions (BLRs) in Seyfert 2 galaxies, which display broad emission lines (BELs) in their polarized spectra, are a key piece of evidence in support of the unified model for active galactic nuclei (AGNs). However, the detailed kinematics and geometry of hidden BLRs are still not fully understood. The virial factor obtained from reverberation mapping of type 1 AGNs may be a useful diagnostic of the nature of hidden BLRs in type 2 objects. In order to understand the hidden BLRs, we compile six type 2 objects from the literature with polarized BELs and dynamical measurements of black hole masses. All of them contain pseudobulges. We estimate their virial factors, and find the average value is 0.60 and the standard deviation is 0.69, which agree well with the value of type 1 AGNs with pseudobulges. This study demonstrates that (1) the geometry and kinematics of BLR are similar in type 1 and type 2 AGNs of the same bulge type (pseudobulges), and (2) the small values of virial factors in Seyfert 2 galaxies suggest that, similar to type 1 AGNs, BLRs tend to be very thick disks in type 2 objects.
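For readers unfamiliar with the quantity, the virial factor in studies like the two records above is the ratio of an independently determined (here dynamical) black hole mass to the virial product; a minimal sketch with made-up inputs, not values for the six objects:

    G = 6.674e-8      # cgs gravitational constant
    MSUN = 1.989e33   # g
    LT_DAY = 2.59e15  # cm per light-day

    def virial_factor(m_dyn_msun, r_blr_ld, fwhm_kms):
        """f = M_dyn / (R_BLR * FWHM^2 / G); all inputs are illustrative."""
        vp_msun = (r_blr_ld * LT_DAY) * (fwhm_kms * 1e5) ** 2 / G / MSUN
        return m_dyn_msun / vp_msun

    print(virial_factor(5e7, 20.0, 4000.0))   # ~0.8 for these made-up inputs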
A methodology for the assessment of manned flight simulator fidelity
NASA Technical Reports Server (NTRS)
Hess, Ronald A.; Malsbury, Terry N.
1989-01-01
A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.
NASA Astrophysics Data System (ADS)
Ursini, F.; Petrucci, P.-O.; Matt, G.; Bianchi, S.; Cappi, M.; Dadina, M.; Grandi, P.; Torresi, E.; Ballantyne, D. R.; De Marco, B.; De Rosa, A.; Giroletti, M.; Malzac, J.; Marinucci, A.; Middei, R.; Ponti, G.; Tortosa, A.
2018-05-01
We present the analysis of five joint XMM-Newton/NuSTAR observations, 20 ks each and separated by 12 days, of the broad-line radio galaxy 3C 382. The data were obtained as part of a campaign performed in September-October 2016 simultaneously with VLBA. The radio data and their relation with the X-ray ones will be discussed in a following paper. The source exhibits a moderate flux variability in the UV/X-ray bands, and a limited spectral variability especially in the soft X-ray band. In agreement with past observations, we find the presence of a warm absorber, an iron Kα line with no associated Compton reflection hump, and a variable soft excess well described by a thermal Comptonization component. The data are consistent with a "two-corona" scenario, in which the UV emission and soft excess are produced by a warm (kT ≃ 0.6 keV), optically thick (τ ≃ 20) corona consistent with being a slab fully covering a nearly passive accretion disc, while the hard X-ray emission is due to a hot corona intercepting roughly 10% of the soft emission. These results are remarkably similar to those generally found in radio-quiet Seyferts, thus suggesting a common accretion mechanism.
NASA Astrophysics Data System (ADS)
Wang, L. J.; Cano, Z.; Wang, S. Q.; Zheng, W. K.; Liu, L. D.; Deng, J. S.; Yu, H.; Dai, Z. G.; Han, Y. H.; Xu, D.; Qiu, Y. L.; Wei, J. Y.; Li, B.; Song, L. M.
2017-12-01
Broad-lined type Ic supernovae (SNe Ic-BL) are a subclass of rare core-collapse SNe whose energy source is debated in the literature. Recently, a series of investigations of SNe Ic-BL with the magnetar (plus 56Ni) model were carried out. Evidence for magnetar formation was found for the well-observed SNe Ic-BL 1998bw and 2002ap. In this paper, we systematically study a large sample of SNe Ic-BL not associated with gamma-ray bursts (GRBs). We use photospheric velocity data determined in a homogeneous way. We find that the magnetar+56Ni model provides a good description of the light curves and velocity evolution of our sample of SNe Ic-BL, although some SNe (not all) can also be described by the pure-magnetar model or by the two-component pure-56Ni model (three out of 12 are unlikely to be explained by the two-component model). In the magnetar+56Ni model, the amount of 56Ni required to explain their luminosity is significantly reduced, and the derived initial explosion energy is, in general, in accordance with neutrino heating. Some correlations between different physical parameters are evaluated, and their implications regarding magnetic field amplification and the total energy reservoir are discussed.
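As background to the magnetar fits described above, the engine's two standard inputs can be sketched as follows; the spin period, spin-down time, and evaluation epoch below are arbitrary examples, not the paper's fitted parameters.

    # Sketch: magnetic-dipole spin-down, the central engine in the
    # magnetar(+56Ni) model discussed above.
    def magnetar_energy_erg(p_ms):
        """Initial rotational energy, E = I*Omega^2/2 ~ 2e52 erg * (P/1 ms)^-2,
        assuming a neutron-star moment of inertia I ~ 1e45 g cm^2."""
        return 2.0e52 * p_ms ** -2

    def spindown_luminosity(t_s, e_erg, t_sd_s):
        """L(t) = (E/t_sd) / (1 + t/t_sd)^2, which integrates to E."""
        return (e_erg / t_sd_s) / (1.0 + t_s / t_sd_s) ** 2

    # Illustrative: P = 5 ms, t_sd = 1e6 s, evaluated one day after explosion
    print(spindown_luminosity(86400.0, magnetar_energy_erg(5.0), 1.0e6))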
NASA Astrophysics Data System (ADS)
Barth, Aaron
2017-08-01
The nucleus of M81 is an object of singular importance as a template for low-luminosity accretion flows onto supermassive black holes. We propose to obtain a complete, small-aperture, high S/N STIS UV/optical spectrum of the M81 nucleus and multi-filter WFC3 imaging covering the UV through near-IR. Such data have never previously been obtained with HST; the only prior archival UV/optical spectra of M81 have low S/N, incomplete wavelength coverage, and are strongly contaminated by starlight. Combined with new Chandra X-ray data, our proposed observations will comprise the definitive reference dataset on the spectral energy distribution of this benchmark low-luminosity AGN. These data will provide unique new constraints on the possible contribution of a truncated thin accretion disk to the AGN emission spectrum, clarifying a fundamental property of low-luminosity accretion flows. The data will additionally provide new insights into broad-line region structure and black hole mass scaling relationships at the lowest AGN luminosities, and spatially resolved diagnostics of narrow-line region excitation conditions at unprecedented spatial resolution to assess the impact of the AGN on the ionization state of the gas in the host galaxy bulge.
Mini-Survey of SDSS [OIII] AGN with Swift
NASA Technical Reports Server (NTRS)
Angelini, L.; George, I. M.; Hill, J.; Padgett, C. A.; Mushotzky, R. F.
2008-01-01
The number of AGN and their luminosity distribution are crucial parameters for our understanding of the AGN phenomenon. Recent work (e.g. Ferrarese and Merritt 2000) strongly suggests every massive galaxy has a central black hole. However, most of these objects either are not radiating or have been very difficult to detect. We are now in the era of large surveys, and the luminosity function (LF) of AGN has been estimated in various ways. In the X-ray band, Chandra and XMM surveys (e.g., Barger et al. 2005; Hasinger et al. 2005) have revealed that the LF of hard X-ray selected AGN shows a strong luminosity-dependent evolution with a dramatic break towards low L(x) (at all z). This is seen for all types of AGN, but is stronger for the broad-line objects (e.g., Steffen et al. 2004). In sharp contrast, the local LF of optically-selected samples shows no such break and no differences between narrow and broad-line objects (Hao et al. 2005). If, as has been suggested, hard X-ray and optical emission lines can both be fair indicators of AGN activity, it is important to first understand how reliable these characteristics are if we hope to understand the apparent discrepancy in the LFs.
A characteristic scale for cold gas
NASA Astrophysics Data System (ADS)
McCourt, Michael; Oh, S. Peng; O'Leary, Ryan; Madigan, Ann-Marie
2018-02-01
We find that clouds of optically thin, pressure-confined gas are prone to fragmentation as they cool below ~10^6 K. This fragmentation follows the lengthscale ~c_s t_cool, ultimately reaching very small scales (~0.1 pc/n) as the clouds reach the temperature ~10^4 K at which hydrogen recombines. While this lengthscale depends on the ambient pressure confining the clouds, we find that the column density through an individual fragment, N_cloudlet ~ 10^17 cm^-2, is essentially independent of environment; this column density represents a characteristic scale for atomic gas at 10^4 K. We therefore suggest that 'clouds' of cold, atomic gas may, in fact, have the structure of a mist or a fog, composed of tiny fragments dispersed throughout the ambient medium. We show that this scale emerges in hydrodynamic simulations, and that the corresponding increase in the surface area may imply rapid entrainment of cold gas. We also apply it to a number of observational puzzles, including the large covering fraction of diffuse gas in galaxy haloes, the broad-line widths seen in quasar and AGN spectra and the entrainment of cold gas in galactic winds. While our simulations make a number of assumptions and thus have associated uncertainties, we show that this characteristic scale is consistent with a number of observations, across a wide range of astrophysical environments. We discuss future steps for testing, improving and extending our model.
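A back-of-the-envelope version of the scale quoted above: the fragment length is ~c_s t_cool evaluated near 10^4 K, and the column is N ~ n l, which is why the column comes out roughly independent of the confining pressure. The cooling rate below is an illustrative order-of-magnitude value, not the paper's cooling function.

    import math

    K_B = 1.38e-16   # erg/K
    M_P = 1.67e-24   # g
    PC = 3.086e18    # cm

    def fragment_scale_cm(n_cm3, t_k=1.0e4, lam=1.0e-23):
        """l ~ c_s * t_cool, with t_cool ~ 1.5*k*T/(n*Lambda).
        Lambda (erg cm^3/s) is an assumed order-of-magnitude cooling rate."""
        c_s = math.sqrt(5.0 / 3.0 * K_B * t_k / M_P)
        t_cool = 1.5 * K_B * t_k / (n_cm3 * lam)
        return c_s * t_cool

    n = 1.0
    l = fragment_scale_cm(n)
    print(l / PC, n * l)   # ~0.1 pc and N ~ 2e17 cm^-2 for n = 1 cm^-3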
NASA Astrophysics Data System (ADS)
Suzuki, Akihiro; Maeda, Keiichi
2017-04-01
The hydrodynamical interaction between freely expanding supernova ejecta and a relativistic wind injected from the central region is studied analytically and numerically. As a result of the collision between the ejecta and the wind, a geometrically thin shell surrounding a hot bubble forms and expands in the ejecta. We use a self-similar solution to describe the early dynamical evolution of the shell and carry out a two-dimensional special relativistic hydrodynamic simulation to follow further evolution. The Rayleigh-Taylor instability inevitably develops at the contact surface separating the shocked wind and ejecta, leading to the complete destruction of the shell and the leakage of hot gas from the hot bubble. The leaking hot material immediately catches up with the outermost layer of the supernova ejecta, and thus different layers of the ejecta are mixed. We present the spatial profiles of hydrodynamical variables and the kinetic energy distributions of the ejecta. We stop the energy injection when a total energy of 10^52 erg, 10 times larger than the initial kinetic energy of the supernova ejecta, has been deposited into the ejecta, and follow the subsequent evolution. From the results of our simulations, we consider the expected emission from supernova ejecta powered by energy injection at the centre and discuss the possibility that superluminous supernovae and broad-lined Ic supernovae could be produced by similar mechanisms.
Excitation anisotropy in laser-induced-fluorescence spectroscopy: Broad-line excitation case
NASA Astrophysics Data System (ADS)
Hirabayashi, A.; Nambu, Y.; Fujimoto, T.
1986-01-01
The treatment of excitation anisotropy in laser-induced-fluorescence spectroscopy (LIFS) is extended to the intense-excitation case. The depolarization coefficient is derived for the intense-excitation limit (linearly polarized or unpolarized light excitation), and the result is presented in tables. For the region of intermediate intensity between the weak- and intense-excitation limits, the master equation is solved for a specific example of transitions, and the result is compared with experiment.
Constraints on the Location of γ-Ray Sample of Blazars with Radio Core-shift Measurements
NASA Astrophysics Data System (ADS)
Wu, Linhui; Wu, Qingwen; Yan, Dahai; Chen, Liang; Fan, Xuliang
2018-01-01
We model simultaneous or quasi-simultaneous multi-band spectral energy distributions (SEDs) for a sample of 25 blazars that have radio core-shift measurements, where a one-zone leptonic model and a Markov chain Monte Carlo technique are adopted. In the SED fitting for 23 low-synchrotron-peaked (LSP) blazars, the seed photons from the broad-line region (BLR) and the molecular torus are considered separately in the external Compton process. We find that the SED fits with seed photons from the torus are better than those utilizing BLR photons, which suggests that the γ-ray emitting region may be located outside the BLR. Assuming the magnetic field strength in the γ-ray emitting region as constrained from the SED fitting follows the magnetic field distribution derived from the radio core-shift measurements (i.e., B(R) ≃ B_1pc (R/1 pc)^-1, where R is the distance from the central engine and B_1pc is the magnetic field strength at 1 pc), we further calculate the location of the γ-ray emitting region, R_γ, for these blazars. We find that R_γ ∼ 2 × 10^4 R_S ≃ 10 R_BLR (R_S is the Schwarzschild radius and R_BLR is the BLR size), where R_BLR is estimated from the broad-line luminosities using the empirical correlations obtained from reverberation mapping methods.
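Since B ∝ R^-1 under the core-shift scaling, the emission-site distance follows directly from the ratio of the two field estimates. A minimal Python sketch, with illustrative (assumed) field values rather than numbers from the paper:

```python
def gamma_ray_site_pc(B_1pc, B_emit):
    """Distance of the gamma-ray emitting region from the central engine,
    in parsec, assuming B(R) = B_1pc * (R / 1 pc)^-1 from core-shift data
    and B_emit constrained independently by the SED fit (both in Gauss)."""
    return B_1pc / B_emit

# Illustrative (assumed) values: B_1pc ~ 0.4 G, SED-fitted B ~ 0.2 G
print(gamma_ray_site_pc(0.4, 0.2), "pc")  # -> 2.0 pc from the engine
```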
Tielemans, Eric; Manavella, Coralie; Visser, Martin; Theodore Chester, S; Rosentel, Joseph
2014-04-28
Although foxes are the main reservoir of Echinococcus multilocularis, it is recognized that dogs and cats also may become infected. In cats the infection and egg production rates are usually low. Nevertheless, cats are a potential source of transmission of E. multilocularis. Due to the high human medical significance of E. multilocularis infection, it is important in endemic areas that owned cats are dewormed regularly. This paper presents the efficacy results of a new topical formulation, Broadline® (Merial), tested against E. multilocularis infection in cats. Two blinded laboratory studies were conducted to evaluate this novel topical combination of fipronil, (S)-methoprene, eprinomectin, and praziquantel against E. multilocularis. In each study, purpose-bred cats were assigned randomly to two treatment groups of 10 cats each: one untreated control group and one group treated at the minimum therapeutic dose of 0.12 mL/kg bodyweight to deliver 10 mg fipronil, 12 mg (S)-methoprene, 0.5 mg eprinomectin and 10 mg praziquantel per kg bodyweight. The cats were inoculated orally with E. multilocularis protoscolices 22 or 23 days before treatment. Based on necropsy and intestinal worm counts 8 or 11 days after treatment, the two studies confirmed 100% efficacy of Broadline® against adult E. multilocularis. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
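For readers unfamiliar with how such figures are derived, percent efficacy in studies of this kind is conventionally computed from group mean worm counts as 100 × (C − T)/C. A small Python sketch under that assumption; the counts below are hypothetical, and whether arithmetic or geometric means are prescribed depends on the study protocol:

```python
import numpy as np

def percent_efficacy(control_counts, treated_counts, geometric=True):
    """Percent efficacy from necropsy worm counts, 100 * (C - T) / C,
    using geometric (count + 1) or arithmetic group means (assumed
    convention here, following common anthelmintic-study practice)."""
    control = np.asarray(control_counts, dtype=float)
    treated = np.asarray(treated_counts, dtype=float)
    if geometric:
        c = np.exp(np.mean(np.log(control + 1))) - 1
        t = np.exp(np.mean(np.log(treated + 1))) - 1
    else:
        c, t = control.mean(), treated.mean()
    return 100.0 * (c - t) / c

# Hypothetical counts for 10 control and 10 treated cats
print(percent_efficacy([84, 120, 35, 60, 95, 40, 150, 77, 66, 101],
                       [0] * 10))   # -> 100.0
```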
Mini-Survey on SDSS OIII AGN with Swift
NASA Technical Reports Server (NTRS)
Angelini, Lorella
2008-01-01
The number of AGN and their luminosity distribution are crucial parameters for our understanding of the AGN phenomenon. There is a common wisdom that every massive galaxy has a massive black hole. However, most of these objects either are not radiating or until recently have been very difficult to detect. The Sloan Digital Sky Survey (SDSS) data, based on the [O III] line, indicate that perhaps up to 20% of all galaxies may be classified as AGN, a surprising result that must be checked with independent data. X-ray surveys have revealed that hard-X-ray-selected AGN show a strong luminosity-dependent evolution, and their luminosity function (LF) shows a dramatic break towards low L_X (at all z). This is seen for all types of AGN, but is stronger for the broad-line objects. In sharp contrast, the local LF of optically selected samples shows no such break and no differences between narrow- and broad-line objects. Assuming both hard X-ray and [O III] emission are fair indicators of AGN activity, it is important to understand this discrepancy. We present here the results of a mini-survey done with Swift on a selected sample of SDSS-selected AGN. The objects have been sampled at different L([O III]) to check the relation with the L_X observed with Swift.
NASA Astrophysics Data System (ADS)
Krumpe, M.; Husemann, B.; Tremblay, G. R.; Urrutia, T.; Powell, M.; Davis, T. A.; Scharwächter, J.; Dexter, J.; Busch, G.; Combes, F.; Croom, S. M.; Eckart, A.; McElroy, R. E.; Perez-Torres, M.; Leung, G.
2017-11-01
After changing optical AGN type from 1.9 to 1 in 1984, the AGN Mrk 1018 recently reverted back to its type 1.9 state. Our ongoing monitoring now reveals that the AGN has halted its dramatic dimming, reaching a minimum around October 2016. The minimum was followed by an outburst rising at 0.25 U-band mag per month. The rebrightening lasted at least until February 2017, as confirmed by joint Chandra and Hubble observations. Monitoring was resumed in July 2017, after the source emerged from solar conjunction, at which point the AGN was found only 0.4 mag brighter than its minimum. The intermittent outburst was accompanied by the appearance of a red wing asymmetry in the broad-line shape, indicative of an inhomogeneous broad-line region. The current flickering brightness of Mrk 1018 following its rapid fading suggests that the source has either reignited, remains variable at a low level, or may continue dimming over the next few years. Distinguishing between these possibilities requires continuous multiwavelength monitoring. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 098.B-0672 and 099.B-0159. The scientific results reported in this article are based on observations made by the Chandra X-ray Observatory and the NASA/ESA Hubble Space Telescope.
Expanding Simulations as a Means of Tactical Training with Multinational Partners
2017-06-09
This study addresses the identified gap through a DOTMLPF analysis in combination with an assessment of two case studies involving higher-echelon use of simulations.
NASA Astrophysics Data System (ADS)
Du, Pu; Lu, Kai-Xing; Hu, Chen; Qiu, Jie; Li, Yan-Rong; Huang, Ying-Ke; Wang, Fang; Bai, Jin-Ming; Bian, Wei-Hao; Yuan, Ye-Fei; Ho, Luis C.; Wang, Jian-Min; SEAMBH Collaboration
2016-03-01
In the sixth of a series of papers reporting on a large reverberation mapping (RM) campaign of active galactic nuclei (AGNs) with high accretion rates, we present velocity-resolved time lags of Hβ emission lines for nine objects observed in the campaign during 2012-2013. In order to correct for the line broadening caused by seeing and instrumentation before analyzing the velocity-resolved RM, we adopt Richardson-Lucy deconvolution to reconstruct their Hβ profiles. The validity and effectiveness of the deconvolution are checked using Monte Carlo simulations. Five of the nine objects show a clear dependence of the time delay on velocity. Mrk 335 and Mrk 486 show signatures of gas inflow, whereas the clouds in the broad-line regions (BLRs) of Mrk 142 and MCG +06-26-012 tend to be radially outflowing. Mrk 1044 is consistent with having virialized motions. The lags of the remaining four are not velocity-resolvable. The velocity-resolved RM of super-Eddington accreting massive black holes (SEAMBHs) shows that they have diverse kinematics in their BLRs. Comparing with the AGNs with sub-Eddington accretion rates, we do not find significant differences in the BLR kinematics of SEAMBHs.
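As a concrete illustration of the deconvolution step, the following Python sketch implements the basic Richardson-Lucy iteration in 1D; it is a toy version, not the authors' pipeline, and the Gaussian line and PSF in the demo are invented for the test.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Minimal 1D Richardson-Lucy deconvolution of a line profile.
    `observed` is the seeing/instrument-broadened spectrum and `psf` the
    broadening kernel (e.g. measured from calibration lines)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy check: a Gaussian line blurred by a Gaussian PSF
x = np.linspace(-50, 50, 401)
true_line = np.exp(-0.5 * (x / 5.0) ** 2)
psf = np.exp(-0.5 * (np.linspace(-10, 10, 81) / 3.0) ** 2)
blurred = np.convolve(true_line, psf / psf.sum(), mode="same")
recovered = richardson_lucy(blurred, psf)  # approaches true_line
```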
Research in Modeling and Simulation for Airspace Systems Innovation
NASA Technical Reports Server (NTRS)
Ballin, Mark G.; Kimmel, William M.; Welch, Sharon S.
2007-01-01
This viewgraph presentation provides an overview of some of the applied research and simulation methodologies at the NASA Langley Research Center that support aerospace systems innovation. Risk assessment methodologies, complex systems design and analysis methodologies, and aerospace operations simulations are described. Potential areas for future research and collaboration using interactive and distributed simulations are also proposed.
NASA Astrophysics Data System (ADS)
Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin
2012-08-01
Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise an HPS platform. This research is driven by issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. The article then investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping, and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.
Simulation Methodology in Nursing Education and Adult Learning Theory
ERIC Educational Resources Information Center
Rutherford-Hemming, Tonya
2012-01-01
Simulation is often used in nursing education as a teaching methodology. Simulation is rooted in adult learning theory. Three learning theories, cognitive, social, and constructivist, explain how learners gain knowledge with simulation experiences. This article takes an in-depth look at each of these three theories as each relates to simulation.…
Gamma-Ray Emission from the Broad-Line Radio Galaxy 3C 111
NASA Technical Reports Server (NTRS)
Hartman, Robert C.; Kadler, M.; Tueller, Jack
2008-01-01
The broad-line radio galaxy 3C 111 has been suggested as the counterpart of the γ-ray source 3EG J0416+3650. While 3C 111 meets most of the criteria for a high-probability identification, like a bright flat-spectrum radio core and a blazar-like broadband SED, in the Third EGRET Catalog the large positional offset of about 1.5° put 3C 111 outside the 99% probability region for 3EG J0416+3650, making this association questionable. We present a re-analysis of all available archival data for 3C 111 from the EGRET archives, resulting in detection of variable hard-spectrum high-energy gamma-ray emission above 1000 MeV from a position close to the nominal position of 3C 111, in three separate viewing periods (VPs), at a 3σ level in each. A second variable hard-spectrum source is present nearby. At >100 MeV, one variable soft-spectrum source seems to account for most of the EGRET-detected emission of 3EG J0416+3650. A follow-up Swift UVOT/XRT observation reveals one moderately bright X-ray source in the error box of 3EG J0416+3650, but because of the large EGRET position uncertainty, it is not certain that the X-ray and gamma-ray sources are associated. Another Swift observation near the second (unidentified) hard gamma-ray source detected no X-ray source nearby.
Photoionization modelling of the giant broad-line region in NGC 3998
NASA Astrophysics Data System (ADS)
Devereux, Nick
2018-01-01
Prior high angular resolution spectroscopic observations of the low-ionization nuclear emission-line region (LINER) in NGC 3998 obtained with the Space Telescope Imaging Spectrograph (STIS) aboard the Hubble Space Telescope (HST) revealed a rich UV-visible spectrum consisting of broad permitted and broad forbidden emission lines. The photoionization code XSTAR is employed together with reddening-insensitive emission line diagnostics to constrain a dynamical model for the broad-line region (BLR) in NGC 3998. The BLR is modelled as a large H+ region ∼7 pc in radius consisting of dust-free, low-density (∼10^4 cm^-3), low-metallicity (∼0.01 Z/Z⊙) gas. Modelling the shape of the broad Hα emission line significantly discriminates between two independent measures of the black hole (BH) mass, favouring the estimate of de Francesco, Capetti & Marconi (2006). Interpreting the broad Hα emission line in terms of a steady-state spherically symmetric inflow leads to a mass inflow rate of 1.4 × 10^-2 M⊙ yr^-1, well within the present uncertainty of calculations that attempt to explain the observed X-ray emission in terms of an advection-dominated accretion flow (ADAF). Collectively, the model provides an explanation for the shape of the Hα emission line, the relative intensities and luminosities of the H Balmer, [O III], and potentially several of the broad UV emission lines, as well as refining the initial conditions needed for future modelling of the ADAF.
ASASSN-16fp (SN 2016coi): a transitional supernova between Type Ic and broad-lined Ic
NASA Astrophysics Data System (ADS)
Kumar, Brajesh; Singh, A.; Srivastav, S.; Sahu, D. K.; Anupama, G. C.
2018-01-01
We present results based on well-sampled optical (UBVRI) and ultraviolet (Swift/UVOT) imaging and low-resolution optical spectroscopic follow-up observations of the nearby Type Ic supernova (SN) ASASSN-16fp (SN 2016coi). The SN was monitored during the photospheric phase (-10 to +33 d with respect to the B-band maximum light). The rise to maximum light and the early post-maximum decline of the light curves are slow. The peak absolute magnitude (M_V = -17.7 ± 0.2 mag) of ASASSN-16fp is comparable with the broad-lined Ic SN 2002ap and SN 2012ap and the transitional Ic SN 2004aw, but considerably fainter than the gamma-ray burst/X-ray flash associated SNe (e.g. SN 1998bw, SN 2006aj). Similar to the light curve, the spectral evolution is also slow. ASASSN-16fp shows distinct photospheric-phase spectral lines along with C II features. The expansion velocity of the ejecta near maximum light reached ∼16,000 km s^-1 and settled to ∼8000 km s^-1 about one month post-maximum. Analytical modelling of the quasi-bolometric light curve of ASASSN-16fp suggests that ∼0.1 M⊙ of 56Ni was synthesized in the explosion, with a kinetic energy of 6.9 (+1.5/-1.3) × 10^51 erg and a total ejected mass of ∼4.5 ± 0.3 M⊙.
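The analytic step from peak luminosity to 56Ni mass is usually Arnett's rule: at maximum light the radiated luminosity equals the instantaneous radioactive heating. A Python sketch with standard decay constants; the example peak luminosity and rise time are illustrative, not the measured values for ASASSN-16fp.

```python
import numpy as np

# Standard radioactive-decay constants for the Ni -> Co -> Fe chain
TAU_NI, TAU_CO = 8.8, 111.3     # e-folding times [days]
EPS_NI, EPS_CO = 3.9e10, 6.8e9  # specific heating rates [erg / s / g]
M_SUN = 1.989e33                # g

def ni_mass_from_peak(L_peak, t_peak):
    """Arnett's rule: L_peak = M_Ni * eps(t_peak), where eps(t) is the
    instantaneous heating per gram from 56Ni and 56Co decay."""
    eps = (EPS_NI * np.exp(-t_peak / TAU_NI)
           + EPS_CO * (np.exp(-t_peak / TAU_CO) - np.exp(-t_peak / TAU_NI)))
    return L_peak / eps / M_SUN   # in solar masses

# Illustrative numbers of the right order for an SN Ic rising in ~15 d
print(f"M_Ni ~ {ni_mass_from_peak(2.5e42, 15.0):.2f} Msun")  # -> ~0.1
```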
NASA Astrophysics Data System (ADS)
Cracco, V.; Ciroi, S.; Berton, M.; Di Mille, F.; Foschini, L.; La Mura, G.; Rafanelli, P.
2016-10-01
We revisited the spectroscopic characteristics of narrow-line Seyfert 1 galaxies (NLS1s) by analysing a homogeneous sample of 296 NLS1s at redshifts between 0.028 and 0.345, extracted from the Sloan Digital Sky Survey (SDSS-DR7) public archive. We confirm that NLS1s are mostly characterized by Balmer lines with Lorentzian profiles, lower black hole masses and higher Eddington ratios than classic broad-line Seyfert 1 galaxies (BLS1s), but they also appear to be active galactic nuclei (AGNs) contiguous with BLS1s and sharing common properties with them. Strong Fe II emission does not seem to be a distinctive property of NLS1s, as low values of Fe II/Hβ are equally observed in these AGNs. Our data indicate that the Fe II and Ca II kinematics are consistent with those of Hβ. On the contrary, O I λ8446 seems to be systematically narrower; it is likely emitted by broad-line region gas more distant from the ionizing source and showing different physical properties. Finally, almost all NLS1s in our sample show radial motions of the highly ionized gas in the narrow-line region. The mechanism responsible for this effect is not yet clear, but there are hints that very fast outflows require high continuum luminosities (>10^44 erg s^-1) or high Eddington ratios (log(Lbol/LEdd) > -0.1).
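The Eddington-ratio threshold quoted above is straightforward to evaluate. A minimal Python sketch using the standard electron-scattering Eddington luminosity; the NLS1-like input numbers are illustrative, not values from the paper:

```python
import numpy as np

def eddington_ratio(L_bol, M_bh_msun):
    """Eddington ratio L_bol / L_Edd, with L_Edd = 1.26e38 (M / Msun) erg/s
    (electron-scattering opacity for solar-composition gas)."""
    return L_bol / (1.26e38 * M_bh_msun)

# Illustrative values: M_BH = 1e7 Msun, L_bol = 1e45 erg/s
print(np.log10(eddington_ratio(1e45, 1e7)))   # -> ~ -0.1
```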
A methodological, task-based approach to Procedure-Specific Simulations training.
Setty, Yaki; Salzman, Oren
2016-12-01
Procedure-Specific Simulations (PSS) are realistic 3D simulations that provide a platform to practice complete surgical procedures in a virtual-reality environment. While PSS have the potential to improve surgeons' proficiency, there are no existing standards or guidelines for developing PSS in a structured manner. We employ a unique platform inspired by game design to develop three-dimensional virtual-reality simulations of urethrovesical anastomosis during radical prostatectomy. 3D visualization is supported by stereo vision, providing a fully realistic view of the simulation. The software can be executed on any robotic surgery platform; specifically, we tested the simulation under a Windows environment on the RobotiX Mentor. Using the urethrovesical anastomosis during radical prostatectomy simulation as a representative example, we present a task-based methodological approach to PSS training. The methodology provides tasks at increasing levels of difficulty, from a novice level of basic anatomy identification to an expert level that permits testing new surgical approaches. The modular methodology presented here can be easily extended to support more complex tasks. We foresee this methodology as a tool for integrating PSS as a complementary training process for surgical procedures.
Rehbein, Steffen; Capári, Balazs; Duscher, Georg; Keidane, Dace; Kirkova, Zvezdelina; Petkevičius, Saulius; Rapti, Dhimiter; Wagner, Annegret; Wagner, Thomas; Chester, S Theodore; Rosentel, Joseph; Tielemans, Eric; Visser, Martin; Winter, Renate; Kley, Katrin; Knaus, Martin
2014-04-28
A novel topical combination product (BROADLINE®, Merial) composed of fipronil, (S)-methoprene, eprinomectin and praziquantel was evaluated for safety and efficacy against nematode and cestode infections in domestic cats. The study comprised a multi-centre, positive-control, blinded field study, using a randomized block design based on order of presentation for allocation. In total, 196 client-owned cats, confirmed as positive for naturally acquired infections of nematodes and/or cestodes by pre-treatment faecal examination, were studied in seven countries in Europe. Pre-treatment faecal examination revealed the presence of Toxocara, hookworm, Capillaria and/or spirurid nematode infections in 129, 73, 33 and 1 cat(s), respectively; infections with taeniid and Dipylidium cestodes were demonstrated in 39 and 17 cats, respectively. Cats were allocated randomly to one of two treatments in a 2:1 ratio: topical fipronil (8.3%, w/v), (S)-methoprene (10%, w/v), eprinomectin (0.4%, w/v) and praziquantel (8.3%, w/v) (BROADLINE®, Merial; 130 cats), or topical PROFENDER® Spot-On (Bayer; 66 cats), treated once on Day 0. For evaluation of efficacy, two faecal samples were collected, one prior to treatment (Day -4 ± 4 days) and one at the end of the study (Day 14 ± 5 days). These were examined for faecal forms of nematode and cestode parasites. For evaluation of safety, cats were examined by a veterinarian before treatment and at the end of the study, and cat owners recorded the health status of their cats daily until the end of the study. For cats treated with Broadline®, the efficacy was >99.9%, 100%, and 99.6% for Toxocara, hookworms, and Capillaria, respectively; the corresponding efficacies were >99.9%, >99.9%, and 98.5% for the cats treated with Profender® (p<0.001 for all nematodes and both treatments). Efficacy was 100% against both cestodes for both treatments (p<0.001). No treatment-related adverse experiences were observed throughout the study. For both treatments, every cat that completed the study was given a safety score of 'excellent' for both local and systemic evaluations. The topical combination product of fipronil, (S)-methoprene, eprinomectin and praziquantel was shown to have an excellent safety profile and demonstrated high levels of efficacy when administered once as a topical solution to cats infected with nematodes and cestodes under field conditions. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
The Effects of Time Advance Mechanism on Simple Agent Behaviors in Combat Simulations
2011-12-01
This thesis examines modeling packages that illustrate the differences between discrete-time simulation (DTS) and discrete-event simulation (DES) methodologies. DES models are often referred to as "next-event" (Law and Kelton 2000), while DTS is commonly referred to as "time-step." Many combat models use DTS as their simulation time advance mechanism.
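The distinction can be made concrete in a few lines. The Python sketch below contrasts the two time-advance mechanisms; the names and structure are illustrative, not drawn from any particular combat model.

```python
import heapq
import itertools

def dts_run(state, step, dt, t_end):
    """Discrete-time ('time-step') advance: update the whole state every dt,
    whether or not anything happened in the interval."""
    t = 0.0
    while t < t_end:
        step(state, t, dt)
        t += dt
    return state

def des_run(initial_events, t_end):
    """Discrete-event ('next-event') advance: jump the clock straight to the
    next scheduled event. Handlers return (time, handler) pairs to schedule
    follow-on events."""
    seq = itertools.count()   # tie-breaker so equal times compare cleanly
    queue = [(t, next(seq), h) for t, h in initial_events]
    heapq.heapify(queue)
    while queue:
        t, _, handler = heapq.heappop(queue)
        if t > t_end:
            break
        for t_new, h_new in handler(t):
            heapq.heappush(queue, (t_new, next(seq), h_new))
```

In the DTS loop every entity is touched at every tick, whereas the DES queue spends work only where events actually occur, which is the essential trade-off the thesis examines.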
Swift observations of SN2011it
NASA Astrophysics Data System (ADS)
Margutti, R.; Soderberg, A. M.; Milisavljevic, D.
2011-12-01
SN2011it has been recently classified as a broad-line Type Ic supernova (Tomasella, CBET 2938). A Swift ToO was executed to observe the field of SN2011it starting from 2011-12-08T00:22:48 UT, with the primary aim of constraining the off-axis X-ray emission from the SN. No X-ray source is detected at the optical position of the transient, with a 3σ upper limit of 2.4 × 10^-3 counts s^-1 in the 0.3-10 keV energy band (total exposure = 8.3 ks).
Using soft systems methodology to develop a simulation of out-patient services.
Lehaney, B; Paul, R J
1994-10-01
Discrete event simulation is an approach to modelling a system in the form of a set of mathematical equations and logical relationships, usually used for complex problems that are difficult to address using analytical or numerical methods. Managing out-patient services is such a problem. However, simulation is not in itself a systemic approach, in that it provides no methodology by which system boundaries and system activities may be identified. This investigation considers the use of soft systems methodology as an aid to drawing system boundaries and identifying system activities, for the purpose of simulating the out-patients' department at a local hospital. The long-term aims are to examine the effects that the participative nature of soft systems methodology has on the acceptability of the simulation model, and to provide analysts and managers with a process that may assist in planning strategies for health care.
Expert systems and simulation models; Proceedings of the Seminar, Tucson, AZ, November 18, 19, 1985
NASA Technical Reports Server (NTRS)
1986-01-01
The seminar presents papers on modeling and simulation methodology, artificial intelligence and expert systems, environments for simulation/expert system development, and methodology for simulation/expert system development. Particular attention is given to simulation modeling concepts and their representation, modular hierarchical model specification, knowledge representation, and rule-based diagnostic expert system development. Other topics include the combination of symbolic and discrete event simulation, real time inferencing, and the management of large knowledge-based simulation projects.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-05
... Numerical Simulations Risk Management Methodology November 1, 2010. I. Introduction On August 25, 2010, The... Analysis and Numerical Simulations (``STANS'') risk management methodology. The rule change alters... collateral within the STANS Monte Carlo simulations. OCC believes the approach currently used to...
Assessment methodology for computer-based instructional simulations.
Koenig, Alan; Iseli, Markus; Wainess, Richard; Lee, John J
2013-10-01
Computer-based instructional simulations are becoming more and more ubiquitous, particularly in military and medical domains. As the technology that drives these simulations grows ever more sophisticated, the underlying pedagogical models for how instruction, assessment, and feedback are implemented within these systems must evolve accordingly. In this article, we review some of the existing educational approaches to medical simulations, and present pedagogical methodologies that have been used in the design and development of games and simulations at the University of California, Los Angeles, Center for Research on Evaluation, Standards, and Student Testing. In particular, we present a methodology for how automated assessments of computer-based simulations can be implemented using ontologies and Bayesian networks, and discuss their advantages and design considerations for pedagogical use. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
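As a toy illustration of the Bayesian-network assessment idea described above, the sketch below infers a latent skill from one observed simulation action. It assumes the open-source pgmpy library (class names as in recent releases); the network structure, node names, and probabilities are hypothetical.

```python
# Sketch assuming the pgmpy library; all numbers are hypothetical.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Latent trainee skill -> observed simulation action (0 = wrong, 1 = right)
model = BayesianNetwork([("Skill", "Action")])
cpd_skill = TabularCPD("Skill", 2, [[0.5], [0.5]])   # uniform prior
cpd_action = TabularCPD("Action", 2,
                        [[0.8, 0.3],    # P(Action=0 | Skill=0, 1)
                         [0.2, 0.7]],   # P(Action=1 | Skill=0, 1)
                        evidence=["Skill"], evidence_card=[2])
model.add_cpds(cpd_skill, cpd_action)
assert model.check_model()

# Update belief in the trainee's skill after observing a correct action
posterior = VariableElimination(model).query(["Skill"],
                                             evidence={"Action": 1})
print(posterior)
```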
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction is given to plasma simulation using computers and to the difficulties encountered on currently available computers. Through the use of an analysis and measurement methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.
Baldwin Effect and Additional BLR Component in AGN with Superluminal Jets
NASA Astrophysics Data System (ADS)
Patiño Álvarez, Víctor; Torrealba, Janet; Chavushyan, Vahram; Cruz González, Irene; Arshakian, Tigran; León Tavares, Jonathan; Popovic, Luka
2016-06-01
We study the Baldwin Effect (BE) in 96 core-jet blazars with optical and ultraviolet spectroscopic data from a radio-loud AGN sample obtained from the MOJAVE 2 cm survey. A statistical analysis is presented of the equivalent widths W_λ of the emission lines Hβ λ4861, Mg II λ2798, and C IV λ1549, and of the continuum luminosities at 5100, 3000, and 1350 Å. The BE is found to be statistically significant (with confidence level c.l. > 95%) in the Hβ and C IV emission lines, while for Mg II the trend is slightly less significant (c.l. = 94.5%). The slopes of the BE in the studied samples for Hβ and Mg II are found to be steeper, with a statistically significant difference, than those of a comparison radio-quiet sample. We present simulations of the expected BE slopes produced by the contribution to the total continuum of the non-thermal boosted emission from the relativistic jet, and by variability of the continuum components. We find that the slopes of the BE between radio-quiet and radio-loud AGN should not be different, under the assumption that the broad line is emitted only by the canonical broad-line region around the black hole. We argue that the BE slope steepening in radio-loud AGN is due to a jet-associated broad-line region.
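Operationally, a BE slope is the regression coefficient of log W_λ on log L. A minimal Python sketch on synthetic data; the sample size of 96 echoes the text, but the luminosities, scatter, and input slope here are invented:

```python
import numpy as np
from scipy.stats import linregress

def baldwin_slope(L_cont, W_line):
    """Baldwin-effect slope beta from W_lambda ~ L^beta, fitted as a
    linear regression in log-log space."""
    fit = linregress(np.log10(L_cont), np.log10(W_line))
    return fit.slope, fit.rvalue

# Synthetic demo: beta = -0.2 with 0.1 dex of scatter
rng = np.random.default_rng(1)
L = 10 ** rng.uniform(44, 47, 96)                       # erg/s
W = 100 * (L / 1e45) ** -0.2 * 10 ** rng.normal(0, 0.1, L.size)  # Angstrom
print(baldwin_slope(L, W))   # slope recovered near -0.2
```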
Magnetar-powered Supernovae in Two Dimensions. II. Broad-line Supernovae Ic
NASA Astrophysics Data System (ADS)
Chen, Ke-Jung; Moriya, Takashi J.; Woosley, Stan; Sukhbold, Tuguldur; Whalen, Daniel J.; Suwa, Yudai; Bromm, Volker
2017-04-01
Nascent neutron stars (NSs) with millisecond periods and magnetic fields in excess of 10^16 G can drive highly energetic and asymmetric explosions known as magnetar-powered supernovae. These exotic explosions are one theoretical interpretation for supernovae Ic-BL, which are sometimes associated with long gamma-ray bursts. Twisted magnetic field lines extract the rotational energy of the NS and release it as a disk wind or a jet with energies greater than 10^52 erg over ∼20 s. What fraction of the central engine's energy goes into the wind and into the jet remains unclear. We have performed two-dimensional hydrodynamical simulations of magnetar-powered supernovae (SNe) driven by disk winds and jets with the CASTRO code to investigate the effect of the central engine on nucleosynthetic yields, mixing, and light curves. We find that these explosions synthesize less than 0.05 M⊙ of 56Ni and that this mass is not very sensitive to the central engine type. The morphology of the explosion can provide a powerful diagnostic of the properties of the central engine. In the absence of a circumstellar medium, these events are not very luminous, with peak bolometric magnitudes of M_b ∼ -16.5 due to low 56Ni production.
Solving the 56Ni Puzzle of Magnetar-powered Broad-lined Type Ic Supernovae
NASA Astrophysics Data System (ADS)
Wang, Ling-Jun; Han, Yan-Hui; Xu, Dong; Wang, Shan-Qin; Dai, Zi-Gao; Wu, Xue-Feng; Wei, Jian-Yan
2016-11-01
Broad-lined Type Ic supernovae (SNe Ic-BL) are of great importance because their association with long-duration gamma-ray bursts (LGRBs) holds the key to deciphering the central engine of LGRBs, which has eluded identification despite decades of investigation. Among the two popularly hypothesized types of central engine, i.e., black holes and strongly magnetized neutron stars (magnetars), there is mounting evidence that the central engine of GRB-associated SNe (GRB-SNe) is rapidly rotating magnetars. Theoretical analysis also suggests that magnetars could be the central engine of SNe Ic-BL. The puzzle is that light-curve modeling indicates that as much as 0.2-0.5 M⊙ of 56Ni was synthesized during the explosion of these SNe Ic-BL, which is in direct conflict with the current state-of-the-art understanding of magnetar-powered 56Ni synthesis. Here we propose a dynamic model of magnetar-powered SNe that takes into account the acceleration of the ejecta by the magnetar, as well as the thermalization of the injected energy. Assuming that the SN kinetic energy comes exclusively from the magnetar acceleration, we find that although a major fraction of the rotational energy of the magnetar goes into accelerating the SN ejecta, the tiny fraction of this energy deposited as thermal energy of the ejecta is enough to reduce the needed 56Ni to 0.06 M⊙ for both SN 1997ef and SN 2007ru. We therefore suggest that magnetars could power SNe Ic-BL both in terms of energetics and of 56Ni synthesis.
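For orientation, the energy budget invoked here follows from standard magnetar formulas. A Python sketch with a fiducial NS moment of inertia; the example spin period is illustrative:

```python
import numpy as np

I_NS = 1e45   # NS moment of inertia [g cm^2], fiducial value

def magnetar_rotational_energy(P_ms):
    """E_rot = (1/2) I Omega^2 for an initial spin period P in ms."""
    omega = 2 * np.pi / (P_ms * 1e-3)
    return 0.5 * I_NS * omega ** 2   # erg

def spin_down_luminosity(t, L0, t_sd):
    """Magnetic-dipole spin-down: L(t) = L0 / (1 + t / t_sd)^2,
    with t and the spin-down time t_sd in the same units."""
    return L0 / (1 + t / t_sd) ** 2

print(f"{magnetar_rotational_energy(1.0):.1e} erg")  # ~2e52 for P = 1 ms
```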
NASA Technical Reports Server (NTRS)
Maoz, Dan; Smith, Paul S.; Jannuzi, Buell T.; Kaspi, Shai; Netzer, Hagai
1994-01-01
We have monitored spectrophotometrically a subsample (28 objects) of the Palomar-Green Bright Quasar Sample for 2 years in order to test for correlations between continuum and emission-line variations and to determine the timescales relevant to mapping the broad-line regions of high-luminosity active galactic nuclei (AGNs). Half of the quasars showed optical continuum variations with amplitudes in the range 20-75%. The rise and fall time for the continuum variations is typically 0.5-2 years. In most of the objects with continuum variations, we detect correlated variations in the broad Hα and Hβ emission lines. The amplitude of the line variations is usually 2-4 times smaller than the optical continuum fluctuations. We present light curves and analyze spectra for six of the variable quasars with 1000-10,000 Å luminosities in the range 0.3-4 × 10^45 erg s^-1. In four of these objects the lines respond to the continuum variations with a lag that is smaller than or comparable to our typical sampling interval (a few months). Although continued monitoring is required to confirm these results and increase their accuracy, the present evidence indicates that quasars with the above luminosities have broad-line regions smaller than about 1 lt-yr. Two of the quasars monitored show no detectable line variations despite relatively large-amplitude continuum changes. This could be a stronger manifestation of the low-amplitude line-response phenomenon we observe in the other quasars.
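Lags like these are conventionally estimated by interpolated cross-correlation of the continuum and line light curves. A simplified one-sided Python sketch (not the authors' code); the trial-lag grid is illustrative:

```python
import numpy as np

def iccf_lag(t_cont, f_cont, t_line, f_line, lags):
    """Simplified interpolated cross-correlation: shift the line light
    curve by each trial lag, interpolate the continuum onto the shifted
    times (np.interp clamps at the ends, a known simplification), and
    return the lag maximizing the correlation coefficient."""
    r = []
    for lag in lags:
        cont_interp = np.interp(t_line - lag, t_cont, f_cont)
        r.append(np.corrcoef(cont_interp, f_line)[0, 1])
    r = np.asarray(r)
    return lags[np.argmax(r)], r

lags = np.linspace(0.0, 300.0, 301)   # trial lags in days (illustrative)
# best_lag, ccf = iccf_lag(t_c, f_c, t_l, f_l, lags)
```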
Gamma-Ray Emission from the Broad-Line Radio Galaxy 3C 111
NASA Technical Reports Server (NTRS)
Hartman, Robert C.; Kadler, Matthias; Tueller, Jack
2008-01-01
The broad-line radio galaxy 3C 111 has been suggested as the counterpart of the gamma-ray source 3EG J0416+3650. While 3C 111 meets most of the criteria for a high-probability identification, like a bright flat-spectrum radio core and a blazar-like broadband SED, in the Third EGRET Catalog the large positional offset of about 1.5 degrees put 3C 111 outside the 99% probability region for 3EG J0416+3650, making this association questionable. We present a re-analysis of all available data for 3C 111 from the EGRET archives, resulting in probable detection of high-energy gamma-ray emission above 1000 MeV from a position close to the nominal position of 3C 111, in two separate viewing periods (VPs), at a 3σ level in each. A new source, GRO J0426+3747, appears to be present nearby, seen only in the >1000 MeV data. For >100 MeV, the data are in agreement with only one source (at the original catalog position) accounting for most of the EGRET-detected emission of 3EG J0416+3650. A follow-up Swift UVOT/XRT observation reveals one moderately bright X-ray source in the error box of 3EG J0416+3650, but because of the large EGRET position uncertainty, it is not certain that the X-ray and gamma-ray sources are associated. A Swift observation of GRO J0426+3747 detected no X-ray source nearby.
The Hunt for Red Quasars: Luminous Obscured Black Hole Growth Unveiled in the Stripe 82 X-Ray Survey
NASA Astrophysics Data System (ADS)
LaMassa, Stephanie M.; Glikman, Eilat; Brusa, Marcella; Rigby, Jane R.; Tasnim Ananna, Tonima; Stern, Daniel; Lira, Paulina; Urry, C. Megan; Salvato, Mara; Alexandroff, Rachael; Allevato, Viola; Cardamone, Carolin; Civano, Francesca; Coppi, Paolo; Farrah, Duncan; Komossa, S.; Lanzuisi, Giorgio; Marchesi, Stefano; Richards, Gordon; Trakhtenbrot, Benny; Treister, Ezequiel
2017-10-01
We present results of a ground-based near-infrared campaign with Palomar TripleSpec, Keck NIRSPEC, and Gemini GNIRS to target two samples of reddened active galactic nucleus (AGN) candidates from the 31 deg^2 Stripe 82 X-ray survey. One sample, which is ∼89% complete to K < 16 (Vega), consists of eight confirmed AGNs, four of which were identified with our follow-up program, and is selected to have red R - K colors (> 4, Vega). The fainter sample (K > 17, Vega) represents a pilot program to follow up four sources from a parent sample of 34 that are not detected in the single-epoch SDSS catalog and have WISE quasar colors. All 12 sources are broad-line AGNs (at least one permitted emission line has an FWHM exceeding 1300 km s^-1) and span a redshift range 0.59 < z < 2.5. Half of the (R - K)-selected AGNs have features in their spectra suggestive of outflows. When comparing these sources to a matched sample of blue Type 1 AGNs, we find that the reddened AGNs are more distant (z > 0.5), and a greater percentage have high X-ray luminosities (L_X,full > 10^44 erg s^-1). Such outflows and high luminosities may be consistent with the paradigm that reddened broad-line AGNs represent a transitory phase in AGN evolution as described by the major merger model for black hole growth. Results from our pilot program demonstrate proof of concept that our selection technique is successful in discovering reddened quasars at z > 1 missed by optical surveys.
The characterization of the distant blazar GB6 J1239+0443 from flaring and low activity periods
Pacciani, L.; Donnarumma, I.; Denney, K. D.; ...
2012-08-27
In 2008, AGILE and Fermi detected gamma-ray flaring activity from the unidentified EGRET source 3EG J1236+0457, recently associated with a flat spectrum radio quasar (GB6 J1239+0443) at z = 1.762. The optical counterpart of the gamma-ray source underwent a flux enhancement of a factor of 15-30 in six years, and of ~10 in six months. Here, we interpret this flare-up in terms of a transition from an accretion-disc-dominated emission to a synchrotron-jet-dominated one. We analysed a Sloan Digital Sky Survey (SDSS) archival optical spectrum taken during a period of low radio and optical activity of the source. We estimated the mass of the central black hole using the width of the C IV emission line. In our work, we have also investigated SDSS archival optical photometric data and ultraviolet GALEX observations to estimate the thermal disc emission contribution of GB6 J1239+0443. This analysis of the gamma-ray data taken during the flaring episodes indicates a flat gamma-ray spectrum, with an extension of up to 15 GeV, with no statistically relevant sign of absorption from the broad-line region, suggesting that the blazar zone is located beyond the broad-line region. Our result is confirmed by the modelling of the broad-band spectral energy distribution (well constrained by the available multiwavelength data) of the flaring activity periods and by the accretion disc luminosity and black hole mass estimated by us using archival data.
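A common recipe for the C IV-based mass step is the Vestergaard & Peterson (2006) single-epoch calibration. A Python sketch under that assumption; the FWHM and luminosity below are illustrative, not the measured values for GB6 J1239+0443:

```python
import numpy as np

def mbh_civ_vp06(fwhm_kms, lamL1350):
    """Single-epoch virial BH mass from C IV (Vestergaard & Peterson 2006):
    log(M/Msun) = 6.66 + log[(FWHM / 1e3 km/s)^2
                             * (lambda L_1350 / 1e44 erg/s)^0.53]."""
    return 10 ** (6.66 + np.log10((fwhm_kms / 1e3) ** 2
                                  * (lamL1350 / 1e44) ** 0.53))

# Illustrative inputs: FWHM = 5000 km/s, lambda*L_1350 = 1e46 erg/s
print(f"{mbh_civ_vp06(5000.0, 1e46):.1e} Msun")   # -> ~1e9
```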
Broad-line radio galaxies observed with Fermi-LAT: The origin of the GeV γ-ray emission
Kataoka, J.; Stawarz, Ł.; Takahashi, Y.; ...
2011-09-22
Here, we report on a detailed investigation of the γ-ray emission from 18 broad-line radio galaxies (BLRGs) based on two years of Fermi Large Area Telescope data. We confirm the previously reported detections of 3C 120 and 3C 111 in the GeV photon energy range; a detailed look at the temporal characteristics of the observed γ-ray emission reveals in addition possible flux variability in both sources. No statistically significant γ-ray detection of the other BLRGs was found, however, in the considered data set. Though the sample size studied is small, what appears to differentiate 3C 111 and 3C 120 from the BLRGs not yet detected in γ-rays is the particularly strong nuclear radio flux. This finding, together with the indications of the γ-ray flux variability and a number of other arguments presented, indicates that the GeV emission of BLRGs is most likely dominated by the beamed radiation of relativistic jets observed at intermediate viewing angles. In this paper we also analyzed a comparison sample of high-accretion-rate Seyfert 1 galaxies, which can be considered radio-quiet counterparts of BLRGs, and found that none were detected in γ-rays. A simple phenomenological hybrid model applied to the broadband emission of the discussed radio-loud and radio-quiet type 1 active galaxies suggests that the relative contribution of the nuclear jets to the accreting matter is ≥ 1% on average for BLRGs, whereas it is ≤ 0.1% for Seyfert 1 galaxies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jia; Halpern, Jules P.; Eracleous, Michael
2016-01-20
One of the proposed explanations for the broad, double-peaked Balmer emission lines observed in the spectra of some active galactic nuclei (AGNs) is that they are associated with sub-parsec supermassive black hole (SMBH) binaries. Here, we test the binary broad-line region hypothesis through several decades of monitoring of the velocity structure of double-peaked Hα emission lines in 13 low-redshift, mostly radio-loud AGNs. This is a much larger set of objects compared to an earlier test by Eracleous et al., and we use much longer time series for the three objects studied in that paper. Although systematic changes in radial velocity can be traced in many of their lines, they are demonstrably not like those of a spectroscopic binary in a circular orbit. Any spectroscopic binary period must therefore be much longer than the span of the monitoring (assuming a circular orbit), which in turn would require black hole masses that exceed by 1-2 orders of magnitude the values obtained for these objects using techniques such as reverberation mapping and stellar velocity dispersion. Moreover, the response of the double-peaked Balmer line profiles to fluctuations of the ionizing continuum and the shape of the Lyα profiles are incompatible with an SMBH binary. The binary broad-line region hypothesis is therefore disfavored. Other processes evidently shape these line profiles and cause the long-term velocity variations of the double peaks.
NASA Astrophysics Data System (ADS)
Kokubo, Mitsuru
2017-05-01
We examine the optical photometric and polarimetric variability of the luminous type 1 non-blazar quasar 3C 323.1 (PG 1545+210). Two optical spectropolarimetric measurements taken during the periods 1996-1998 and 2003, combined with a V-band imaging-polarimetric measurement taken in 2002, reveal that (1) as noted in the literature, the polarization of 3C 323.1 is confined only to the continuum emission, i.e., the emission from the broad-line region is unpolarized; (2) the polarized flux spectra show evidence of a time-variable broad absorption feature in the wavelength range of the Balmer continuum and other recombination lines; (3) weak variability in the polarization position angle (PA) of ∼4° over a time-scale of 4-6 yr is observed; and (4) the V-band total flux and the polarized flux show highly correlated variability over a time-scale of 1 yr. Taking the above-mentioned photometric and polarimetric variability properties and the results from previous studies into consideration, we propose a geometrical model for the polarization source in 3C 323.1, in which an equatorial absorbing region and an axi-asymmetric equatorial electron-scattering region are assumed to be located between the accretion disc and the broad-line region. The scattering/absorbing regions can perhaps be attributed to the accretion disc wind or a flared disc surface, but further polarimetric monitoring observations of 3C 323.1 and other quasars with continuum-confined polarization are needed to probe the true physical origins of these regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Ami, Sagi; Gal-Yam, Avishay; Yaron, Ofer
We present the discovery and extensive early-time observations of the Type Ic supernova (SN) PTF12gzk. Our light curves show a rise of 0.8 mag within 2.5 hr. Power-law fits (f(t) ∝ (t - t_0)^n) to these data constrain the explosion date to within one day. We cannot rule out a quadratic fireball model, but higher values of n are possible as well for larger areas in the fit parameter space. Our bolometric light curve and a dense spectral sequence are used to estimate the physical parameters of the exploding star and of the explosion. We show that the photometric evolution of PTF12gzk is slower than that of most SNe Ic. The high ejecta expansion velocities we measure (≈30,000 km s^-1, derived from line minima four days after explosion) are similar to the observed velocities of broad-lined SNe Ic associated with gamma-ray bursts (GRBs) rather than to normal SN Ic velocities. Yet, this SN does not show the persistent broad lines that are typical of broad-lined SNe Ic. The host-galaxy characteristics are also consistent with GRB-SN hosts, and not with normal SN Ic hosts. By comparison with the spectroscopically similar SN 2004aw, we suggest that the observed properties of PTF12gzk indicate an initial progenitor mass of 25-35 M⊙ and a large ((5-10) × 10^51 erg) kinetic energy, the latter being close to the regime of GRB-SN properties.
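The explosion-date constraint comes from fitting the early rise with f(t) ∝ (t - t_0)^n. A minimal Python sketch with scipy; the synthetic photometry below is invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def rise(t, A, t0, n):
    """Early-time flux model f(t) = A * (t - t0)^n, zero before t0."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt ** n

# Synthetic early photometry (days, arbitrary flux) for illustration
rng = np.random.default_rng(0)
t_obs = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
f_obs = rise(t_obs, 1.0, 0.2, 2.0) * (1 + 0.03 * rng.standard_normal(7))

popt, pcov = curve_fit(rise, t_obs, f_obs, p0=(1.0, 0.0, 2.0))
t0_err = np.sqrt(pcov[1, 1])
print(f"explosion epoch t0 = {popt[1]:.2f} +/- {t0_err:.2f} d")
```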
NASA Astrophysics Data System (ADS)
Gaskell, C. Martin; Harrington, Peter Z.
2018-04-01
The profiles of the broad emission lines of active galactic nuclei (AGNs) and the time delays in their response to changes in the ionizing continuum ("lags") give information about the structure and kinematics of the inner regions of AGNs. Line profiles are also our main way of estimating the masses of the supermassive black holes (SMBHs). However, the profiles often show ill-understood, asymmetric structure and velocity-dependent lags vary with time. Here we show that partial obscuration of the broad-line region (BLR) by outflowing, compact, dusty clumps produces asymmetries and velocity-dependent lags similar to those observed. Our model explains previously inexplicable changes in the ratios of the hydrogen lines with time and velocity, the lack of correlation of changes in line profiles with variability of the central engine, the velocity dependence of lags, and the change of lags with time. We propose that changes on timescales longer than the light-crossing time do not come from dynamical changes in the BLR, but are a natural result of the effect of outflowing dusty clumps driven by radiation pressure acting on the dust. The motion of these clumps offers an explanation of long-term changes in polarization. The effects of the dust complicate the study of the structure and kinematics of the BLR and the search for sub-parsec SMBH binaries. Partial obscuration of the accretion disc can also provide the local fluctuations in luminosity that can explain sizes deduced from microlensing.
NASA Astrophysics Data System (ADS)
Moriya, Takashi J.; Tanaka, Masaomi; Morokuma, Tomoki; Ohsuga, Ken
2017-07-01
We propose that superluminous transients that appear at central regions of active galactic nuclei (AGNs), such as CSS100217:102913+404220 (CSS100217) and PS16dtm, which reach near- or super-Eddington luminosities of the central black holes, are powered by the interaction between accretion-disk winds and clouds in the broad-line regions (BLRs) surrounding them. If the disk luminosity temporarily increases by, e.g., limit-cycle oscillations, leading to a powerful radiatively driven wind, strong shock waves propagate in the BLR. Because the dense clouds in AGN BLRs typically have densities similar to those found in SNe IIn, strong radiative shocks emerge and efficiently convert the ejecta kinetic energy to radiation. As a result, transients similar to SNe IIn can be observed at AGN central regions. Since a typical black hole disk-wind velocity is ≃0.1c, where c is the speed of light, the ejecta kinetic energy is expected to be ≃10^52 erg when ≃1 M⊙ is ejected. This kinetic energy is transformed into radiation energy on the timescale for the wind to sweep up a mass similar to its own in the BLR, which is a few hundred days. Therefore, both the luminosities (∼10^44 erg s^-1) and the timescales (∼100 days) of the superluminous transients from AGN central regions match those expected in our interaction model. If CSS100217 and PS16dtm are related to AGN activities triggered by limit-cycle oscillations, they will become bright again in the coming years or decades.
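The quoted energetics follow from simple arithmetic, as the short check below shows (the ~200 d conversion timescale is taken from the text as an assumption):

```python
M_SUN, C = 1.989e33, 2.998e10      # g, cm/s

m_ej, v = 1.0 * M_SUN, 0.1 * C     # ~1 Msun of wind at ~0.1c
E_kin = 0.5 * m_ej * v ** 2        # -> ~9e51 erg, i.e. ~1e52
t_conv = 200 * 86400.0             # a few hundred days, in seconds
print(f"E_kin ~ {E_kin:.1e} erg, mean L ~ {E_kin / t_conv:.1e} erg/s")
```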
Probabilistic Simulation of Multi-Scale Composite Behavior
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2012-01-01
A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to the uncertainties in the constituent (fiber and matrix) properties, in the fabrication process, and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties, laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented in the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data on composite laminate behavior at all scales fall within the scatter predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and the probabilistic design of a composite radome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the radome reliability by several orders of magnitude without increasing the laminate thickness, a unique feature of structural composites. An earlier reference indicates that nothing fundamental has changed since that time.
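As a toy version of the probabilistic propagation being described, the Python sketch below pushes assumed constituent-property scatter through the longitudinal rule of mixtures by Monte Carlo; the distributions and values are illustrative, not PICAN/IPACS inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Assumed scatter in constituent properties (illustrative values)
E_f = rng.normal(230e9, 15e9, N)    # fiber modulus [Pa]
E_m = rng.normal(3.5e9, 0.3e9, N)   # matrix modulus [Pa]
V_f = rng.normal(0.60, 0.03, N)     # fiber volume fraction

# Rule of mixtures for the longitudinal laminate modulus
E_1 = V_f * E_f + (1 - V_f) * E_m

print(f"E_1 = {E_1.mean() / 1e9:.1f} +/- {E_1.std() / 1e9:.1f} GPa")
```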
An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing
2002-08-01
KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems. The report develops the methodology for a stand-alone real-time system and then scales it up to distributed real-time systems; for both, step-wise simulation bridges model development, simulation, and actual execution. Intelligent real-time systems monitor, respond to, or control an external environment.
2016-06-01
The report describes the characteristics, experimental design techniques, and analysis methodologies that distinguish each phase of the MBSE MEASA. Experimental design selection, simulation analysis, and trade space analysis support the final two stages. Rounding has the potential to increase the correlation between columns of the experimental design matrix.
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
Weighted Ensemble Simulation: Review of Methodology, Applications, and Software
Zuckerman, Daniel M.; Chong, Lillian T.
2018-01-01
The weighted ensemble (WE) methodology orchestrates quasi-independent parallel simulations run with intermittent communication that can enhance sampling of rare events such as protein conformational changes, folding, and binding. The WE strategy can achieve superlinear scaling—the unbiased estimation of key observables such as rate constants and equilibrium state populations to greater precision than would be possible with ordinary parallel simulation. WE software can be used to control any dynamics engine, such as standard molecular dynamics and cell-modeling packages. This article reviews the theoretical basis of WE and goes on to describe successful applications to a number of complex biological processes—protein conformational transitions, (un)binding, and assembly processes, as well as cell-scale processes in systems biology. We furthermore discuss the challenges that need to be overcome in the next phase of WE methodological development. Overall, the combined advances in WE methodology and software have enabled the simulation of long-timescale processes that would otherwise not be practical on typical computing resources using standard simulation. PMID:28301772
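The heart of the WE strategy is the split/merge resampling step, sketched below in Python for a single bin; this is a minimal illustration of the idea, not code from WESTPA or any other WE package:

```python
import numpy as np

def we_resample_bin(walkers, target=4, rng=np.random.default_rng()):
    """One weighted-ensemble split/merge step inside a single bin.
    `walkers` is a list of (weight, state) pairs; total weight is
    conserved while the walker count is driven toward `target`."""
    walkers = list(walkers)
    # Merge: repeatedly combine the two lightest walkers, keeping one
    # state with probability proportional to its weight
    while len(walkers) > target:
        walkers.sort(key=lambda w: w[0])
        (w1, s1), (w2, s2) = walkers[0], walkers[1]
        keep = s1 if rng.random() < w1 / (w1 + w2) else s2
        walkers = [(w1 + w2, keep)] + walkers[2:]
    # Split: repeatedly divide the heaviest walker into two half-weight copies
    while len(walkers) < target:
        walkers.sort(key=lambda w: w[0])
        w, s = walkers.pop()
        walkers += [(w / 2, s), (w / 2, s)]
    return walkers

print(we_resample_bin([(0.5, 'a'), (0.3, 'b'), (0.1, 'c'),
                       (0.05, 'd'), (0.05, 'e')], target=4))
```

Total weight is conserved exactly, which is what preserves the unbiased estimation property mentioned above.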
Montgomery, Kymberlee; Morse, Catherine; Smith-Glasgow, Mary Ellen; Posmontier, Bobbie; Follen, Michele
2012-02-01
This manuscript presents the methodology used to assess the impact of a clinical simulation module for training providers specializing in women's health. The methodology presented here will be used for a quantitative study in the future.
Training effectiveness assessment: Methodological problems and issues
NASA Technical Reports Server (NTRS)
Cross, Kenneth D.
1992-01-01
The U.S. military uses a large number of simulators to train and sustain the flying skills of helicopter pilots. Despite the enormous resources required to purchase, maintain, and use those simulators, little effort has been expended in assessing their training effectiveness. One reason for this is the lack of an evaluation methodology that yields comprehensive and valid data at a practical cost. Some of the methodological problems and issues that arise in assessing simulator training effectiveness are discussed, as well as problems with the classical transfer-of-learning paradigm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xin; Shen, Yue; Bian, Fuyan
2014-07-10
A small fraction of quasars have long been known to show bulk velocity offsets (of a few hundred to thousands of km s⁻¹) in the broad Balmer lines with respect to the systemic redshift of the host galaxy. Models to explain these offsets usually invoke broad-line region gas kinematics/asymmetry around single black holes (BHs), orbital motion of massive (∼sub-parsec (sub-pc)) binary black holes (BBHs), or recoil BHs, but single-epoch spectra are unable to distinguish between these scenarios. The line-of-sight (LOS) radial velocity (RV) shifts from long-term spectroscopic monitoring can be used to test the BBH hypothesis. We have selected a sample of 399 quasars with kinematically offset broad Hβ lines from the Sloan Digital Sky Survey (SDSS) Seventh Data Release quasar catalog, and have conducted second-epoch optical spectroscopy for 50 of them. Combined with the existing SDSS spectra, the new observations enable us to constrain the LOS RV shifts of broad Hβ lines with a rest-frame baseline of a few years to nearly a decade. While previous work focused on objects with extreme velocity offset (>10³ km s⁻¹), we explore the parameter space with smaller (a few hundred km s⁻¹) yet significant offsets (99.7% confidence). Using cross-correlation analysis, we detect significant (99% confidence) radial accelerations in the broad Hβ lines in 24 of the 50 objects, of ∼10-200 km s⁻¹ yr⁻¹ with a median measurement uncertainty of ∼10 km s⁻¹ yr⁻¹, implying a high fraction of variability of the broad-line velocity on multi-year timescales. We suggest that 9 of the 24 detections are sub-pc BBH candidates, which show consistent velocity shifts independently measured from a second broad line (either Hα or Mg II) without significant changes in the broad-line profiles. Combining the results on the general quasar population studied in Paper I, we find a tentative anti-correlation between the velocity offset in the first-epoch spectrum and the average acceleration between two epochs, which could be explained by orbital phase modulation when the time separation between two epochs is a non-negligible fraction of the orbital period of the motion causing the line displacement. We discuss the implications of our results for the identification of sub-pc BBH candidates in offset-line quasars and for the constraints on their frequency and orbital parameters.
Probabilistic simulation of multi-scale composite behavior
NASA Technical Reports Server (NTRS)
Liaw, D. G.; Shiao, M. C.; Singhal, S. N.; Chamis, Christos C.
1993-01-01
A methodology is developed to computationally assess the probabilistic composite material properties at all composite scale levels due to the uncertainties in the constituent (fiber and matrix) properties and in the fabrication process variables. The methodology is computationally efficient for simulating the probability distributions of material properties. The sensitivity of the probabilistic composite material property to each random variable is determined. This information can be used to reduce undesirable uncertainties in material properties at the macro scale of the composite by reducing the uncertainties in the most influential random variables at the micro scale. This methodology was implemented into the computer code PICAN (Probabilistic Integrated Composite ANalyzer). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in the material properties of a typical laminate and comparing the results with the Monte Carlo simulation method. The experimental data of composite material properties at all scales fall within the scatters predicted by PICAN.
NASA Astrophysics Data System (ADS)
Chen, Zhiming; Feng, Yuncheng
1988-08-01
This paper describes an algorithmic structure for combining simulation and optimization techniques in both theory and practice. Response surface methodology is used to optimize the decision variables in the simulation environment. Simulation-optimization software has been developed and successfully implemented, and its application to an aggregate production planning simulation-optimization model is reported. The model's objective is to minimize production cost and to generate an optimal production plan and inventory-control strategy for an aircraft factory.
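As a sketch of the response-surface step described above, the snippet below fits a quadratic surface to noisy evaluations of a stand-in "simulation" at a small factorial design and then minimizes the fitted surface; the cost function and design points are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_cost(x1, x2):
    """Hypothetical expensive simulation: production cost vs. two decision variables."""
    return (x1 - 3.0)**2 + 2.0 * (x2 - 1.5)**2 + 0.1 * x1 * x2 + np.random.normal(0, 0.05)

# Factorial design points around a current operating point.
pts = [(x1, x2) for x1 in (2.0, 3.0, 4.0) for x2 in (1.0, 1.5, 2.0)]
y = np.array([simulate_cost(*p) for p in pts])

# Quadratic response surface: basis [1, x1, x2, x1^2, x2^2, x1*x2].
A = np.array([[1, p[0], p[1], p[0]**2, p[1]**2, p[0]*p[1]] for p in pts])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

surface = lambda x: coef @ np.array([1, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])
opt = minimize(surface, x0=np.array([3.0, 1.5]))
print("estimated optimum:", opt.x)
```

In practice the cheap fitted surface, not the simulation itself, is what the optimizer interrogates, which is the point of the methodology.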
American Society of Composites, 32nd Technical Conference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aitharaju, Venkat; Wollschlager, Jeffrey; Plakomytis, Dimitrios
This paper will present a general methodology by which weave-draping manufacturing simulation results can be utilized to include the effects of weave draping and scissor angle in a structural multiscale simulation. While the methodology developed is general in nature, this paper will specifically demonstrate it applied to a truncated pyramid, utilizing manufacturing simulation weave-draping results from ESI PAM-FORM, and multiscale simulation using Altair Multiscale Designer (MDS) and OptiStruct. From a multiscale simulation perspective, the weave-draping manufacturing simulation results will be used to develop a series of woven unit cells which cover the range of weave scissor angles existing within the part. For each unit cell, a multiscale material model will be developed and applied to the corresponding spatial locations within the structural simulation mesh. In addition, the principal material orientation will be mapped from the weave-draping manufacturing simulation mesh to the structural simulation mesh using Altair HyperMesh mapping technology. Results of the coupled simulation will be compared and verified against experimental data available from the General Motors (GM)/Department of Energy (DOE) project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, Paolo; Theiler, C.; Fasoli, A.
A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and, finally, how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
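A minimal numerical sketch of per-observable and global agreement measures of the kind discussed above (hedged: this is a generic normalized-discrepancy form, not necessarily the exact metric defined by Ricci et al.):

```python
import numpy as np

def observable_agreement(sim, exp, d_sim, d_exp):
    """Normalized discrepancy for one observable; values <~1 indicate
    agreement within the combined simulation and experimental uncertainties."""
    return abs(sim - exp) / np.sqrt(d_sim**2 + d_exp**2)

def composite_metric(discrepancies, weights):
    """Weighted global metric; weights could encode, e.g., how directly
    each observable is measured or how strongly it constrains the model."""
    d, w = np.asarray(discrepancies), np.asarray(weights)
    return (w * d).sum() / w.sum()
```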
Methodology for testing infrared focal plane arrays in simulated nuclear radiation environments
NASA Astrophysics Data System (ADS)
Divita, E. L.; Mills, R. E.; Koch, T. L.; Gordon, M. J.; Wilcox, R. A.; Williams, R. E.
1992-07-01
This paper summarizes test methodology for focal plane array (FPA) testing that can be used for benign (clear) and radiation environments, and describes the use of custom dewars and integrated test equipment in an example environment. The test methodology, consistent with American Society for Testing and Materials (ASTM) standards, is presented for the total accumulated gamma dose, transient dose rate, gamma flux, and neutron fluence environments. The merits and limitations of using Cobalt-60 for gamma environment simulations and of using various fast-neutron reactors and neutron sources for neutron simulations are presented. Test result examples are presented to demonstrate test data acquisition and FPA parameter performance under different measurement conditions and environmental simulations.
Motion Cues in Flight Simulation and Simulator Induced Sickness
1988-06-01
[Fragmentary OCR text. The recoverable content indicates that simulator-induced after-effects were assessed in a driving simulator by means of a response surface methodology (RSM) central-composite design, and that a session addressed etiological factors in simulator-induced after-effects, including the use of vestibular models for the design and evaluation of flight simulators.]
Crash Simulation and Animation: 'A New Approach for Traffic Safety Analysis'
DOT National Transportation Integrated Search
2001-02-01
This research's objective is to present a methodology to supplement conventional traffic safety analysis techniques. This methodology aims at using computer simulation to animate and visualize crash occurrence at high-risk locations. This methodol...
INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING
Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong
2017-01-01
Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected are beyond simulation analysis alone. Simulation models are run with well-prepared data. Novel approaches, combining different methods, are needed to use this data for making guided decisions. This paper proposes a methodology whereby parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363
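The pipeline above can be caricatured in a few lines: rank streamed parameters by a data-driven importance measure, then carry only the influential ones into simulation scenarios. The sketch below uses a random-forest importance ranking on synthetic data; the data, the model choice, and the top-3 cut are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 8))  # streamed shop-floor parameters (synthetic)
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 5] + rng.normal(0, 0.1, 500)  # throughput

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:3]
print("parameters driving performance:", top)
# Next step: sweep only these parameters as scenario inputs to the
# discrete-event simulation, then optimize over the simulated outputs.
```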
IFC BIM-Based Methodology for Semi-Automated Building Energy Performance Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazjanac, Vladimir
2008-07-01
Building energy performance (BEP) simulation is still rarely used in building design, commissioning and operations. The process is too costly and too labor intensive, and it takes too long to deliver results. Its quantitative results are not reproducible due to arbitrary decisions and assumptions made in simulation model definition, and can be trusted only under special circumstances. A methodology to semi-automate BEP simulation preparation and execution makes this process much more effective. It incorporates principles of information science and aims to eliminate inappropriate human intervention that results in subjective and arbitrary decisions. This is achieved by automating every part of the BEP modeling and simulation process that can be automated, by relying on data from original sources, and by making any necessary data transformation rule-based and automated. This paper describes the new methodology and its relationship to IFC-based BIM and software interoperability. It identifies five steps that are critical to its implementation, and shows what part of the methodology can be applied today. The paper concludes with a discussion of application to simulation with EnergyPlus, and describes data transformation rules embedded in the new Geometry Simplification Tool (GST).
NASA Technical Reports Server (NTRS)
Chidester, Thomas R.; Kanki, Barbara G.; Helmreich, Robert L.
1989-01-01
The crew-factors research program at NASA Ames has developed a methodology for studying the impact of a variety of variables on the effectiveness of crews flying realistic but high workload simulated trips. The validity of investigations using the methodology is enhanced by careful design of full-mission scenarios, performance assessment using converging sources of data, and recruitment of representative subjects. Recently, portions of this methodology have been adapted for use in assessing the effectiveness of crew coordination among participants in line-oriented flight training.
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
Pei, L.; Fausnaugh, M. M.; Barth, A. J.; ...
2017-03-10
Here, we present the results of an optical spectroscopic monitoring program targeting NGC 5548 as part of a larger multiwavelength reverberation mapping campaign. The campaign spanned 6 months and achieved an almost daily cadence with observations from five ground-based telescopes. The Hβ and He II λ4686 broad emission-line light curves lag that of the 5100 Å optical continuum by $4.17_{-0.36}^{+0.36}$ days and $0.79_{-0.34}^{+0.35}$ days, respectively. The Hβ lag relative to the 1158 Å ultraviolet continuum light curve measured by the Hubble Space Telescope is ~50% longer than that measured against the optical continuum, and the lag difference is consistent with the observed lag between the optical and ultraviolet continua. This suggests that the characteristic radius of the broad-line region is ~50% larger than the value inferred from optical data alone. We also measured velocity-resolved emission-line lags for Hβ and found a complex velocity-lag structure with shorter lags in the line wings, indicative of a broad-line region dominated by Keplerian motion. The responses of both the Hβ and He II emission lines to the driving continuum changed significantly halfway through the campaign, a phenomenon also observed for C IV, Lyα, He II(+O III]), and Si IV(+O IV]) during the same monitoring period. Finally, given the optical luminosity of NGC 5548 during our campaign, the measured Hβ lag is a factor of five shorter than the expected value implied by the $R_{\rm BLR}$–$L_{\rm AGN}$ relation based on the past behavior of NGC 5548.
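Lag measurements like the Hβ value quoted above are conventionally derived with an interpolated cross-correlation function. A bare-bones sketch follows, with assumed input arrays and no error analysis; real campaigns add detrending plus flux-randomization/subset-resampling to estimate lag uncertainties.

```python
import numpy as np

def ccf_lag(t_cont, f_cont, t_line, f_line, lags):
    """Interpolated cross-correlation: for each trial lag tau, interpolate the
    continuum at t_line - tau and correlate with the line fluxes; return the
    lag that maximizes the correlation coefficient, plus the full CCF."""
    r = []
    for tau in lags:
        f_interp = np.interp(t_line - tau, t_cont, f_cont)
        r.append(np.corrcoef(f_interp, f_line)[0, 1])
    r = np.asarray(r)
    return lags[np.argmax(r)], r

lags = np.linspace(-10.0, 30.0, 401)  # trial lags in days
# tau_peak, ccf = ccf_lag(t_cont, f_cont, t_hbeta, f_hbeta, lags)
```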
NASA Astrophysics Data System (ADS)
Coatman, Liam; Hewett, Paul C.; Banerji, Manda; Richards, Gordon T.; Hennawi, Joseph F.; Prochaska, Jason X.
2017-01-01
Accurate black-hole (BH) mass estimates for high-redshift (z > 2) quasars are essential for better understanding the relationship between supermassive BH accretion and star formation. Progress is currently limited by the large systematic errors in virial BH masses derived from the C IV broad emission line, which is often significantly blueshifted relative to systemic, most likely due to outflowing gas in the quasar broad-line region. We have assembled Balmer-line based BH masses for a large sample of 230 high-luminosity (10^45.5-10^48 erg s^-1), redshift 1.5
NASA Astrophysics Data System (ADS)
Moloney, Joshua; Shull, J. Michael
2014-10-01
Understanding the composition and structure of the broad-line region (BLR) of active galactic nuclei (AGNs) is important for answering many outstanding questions in supermassive black hole evolution, galaxy evolution, and ionization of the intergalactic medium. We used single-epoch UV spectra from the Cosmic Origins Spectrograph (COS) on the Hubble Space Telescope to measure EUV emission-line fluxes from four individual AGNs with 0.49 ≤ z ≤ 0.64, two AGNs with 0.32 ≤ z ≤ 0.40, and a composite of 159 AGNs. With the CLOUDY photoionization code, we calculated emission-line fluxes from BLR clouds with a range of density, hydrogen ionizing flux, and incident continuum spectral indices. The photoionization grids were fit to the observations using single-component and locally optimally emitting cloud (LOC) models. The LOC models provide good fits to the measured fluxes, while the single-component models do not. The UV spectral indices preferred by our LOC models are consistent with those measured from COS spectra. EUV emission lines such as N IV λ765, O II λ833, and O III λ834 originate primarily from gas with electron temperatures between 37,000 K and 55,000 K. This gas is found in BLR clouds with high hydrogen densities (n_H ≥ 10^12 cm^-3) and hydrogen ionizing photon fluxes (Φ_H ≥ 10^22 cm^-2 s^-1). Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS5-26555.
X-ray and Ultraviolet Properties of AGNs in Nearby Dwarf Galaxies
NASA Astrophysics Data System (ADS)
Baldassare, Vivienne F.; Reines, Amy E.; Gallo, Elena; Greene, Jenny E.
2017-02-01
We present new Chandra X-ray Observatory and Hubble Space Telescope observations of eight optically selected broad-line active galactic nucleus (AGN) candidates in nearby dwarf galaxies (z < 0.055). Including archival Chandra observations of three additional sources, our sample contains all 10 galaxies from Reines et al. (2013) with both broad Hα emission and narrow-line AGN ratios (six AGNs, four composites), as well as one low-metallicity dwarf galaxy with broad Hα and narrow-line ratios characteristic of star formation. All 11 galaxies are detected in X-rays. Nuclear X-ray luminosities range from L_0.5-7 keV ≈ 5 × 10^39 to 1 × 10^42 erg s^-1. In all cases except for the star-forming galaxy, the nuclear X-ray luminosities are significantly higher than would be expected from X-ray binaries, providing strong confirmation that AGNs and composite dwarf galaxies do indeed host actively accreting black holes (BHs). Using our estimated BH masses (which range from ~7 × 10^4 to 1 × 10^6 M_⊙), we find inferred Eddington fractions ranging from ~0.1% to 50%, i.e., comparable to massive broad-line quasars at higher redshift. We use the HST imaging to determine the ratio of UV to X-ray emission for these AGNs, finding that they appear to be less X-ray luminous with respect to their UV emission than more massive quasars (i.e., α_OX values an average of 0.36 lower than expected based on the relation between α_OX and 2500 Å luminosity). Finally, we discuss our results in the context of different accretion models onto nuclear BHs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jian-Min; Qiu, Jie; Du, Pu
2014-12-10
Supermassive black holes in active galactic nuclei (AGNs) undergo a wide range of accretion rates, which lead to diversity of appearance. We consider the effects of anisotropic radiation from accretion disks on the broad-line region (BLR) from the Shakura-Sunyaev regime to slim disks with super-Eddington accretion rates. The geometrically thick funnel of the inner region of slim disks produces strong self-shadowing effects that lead to very strong anisotropy of the radiation field. We demonstrate that the degree of anisotropy of the radiation fields grows with increasing accretion rate. As a result of this anisotropy, BLR clouds receive different spectral energy distributions depending on their location relative to the disk, resulting in the diverse observational appearance of the BLR. We show that the self-shadowing of the inner parts of the disk naturally produces two dynamically distinct regions of the BLR, depending on accretion rate. These two regions manifest themselves as kinematically distinct components of the broad Hβ line profile with different line widths and fluxes, which jointly account for the Lorentzian profile generally observed in narrow-line Seyfert 1 galaxies. In the time domain, these two components are expected to reverberate with different time lags with respect to the varying ionizing continuum, depending on the accretion rate and the viewing angle of the observer. The diverse appearance of the BLR due to the anisotropic ionizing energy source can be tested by reverberation mapping of Hβ and other broad emission lines (e.g., Fe II), providing a new tool to diagnose the structure and dynamics of the BLR. Other observational consequences of our model are also explored.
The case for inflow of the broad-line region of active galactic nuclei
NASA Astrophysics Data System (ADS)
Gaskell, C. Martin; Goosmann, René W.
2016-02-01
The high-ionization lines of the broad-line region (BLR) of thermal active galactic nuclei (AGNs) show blueshifts of a few hundred km/s to several thousand km/s with respect to the low-ionization lines. This has long been thought to be due to the high-ionization lines of the BLR arising in a wind of which the far side of the outflow is blocked from our view by the accretion disc. Evidence for and against the disc-wind model is discussed. The biggest problem for the model is that velocity-resolved reverberation mapping repeatedly fails to show the expected kinematic signature of outflow of the BLR. The disc-wind model also cannot readily reproduce the red side of the line profiles of high-ionization lines. The rapidly falling density in an outflow makes it difficult to obtain high equivalent widths. We point out a number of major problems with associating the BLR with the outflows producing broad absorption lines. An explanation which avoids all these problems and satisfies the constraints of both the line profiles and velocity-resolved reverberation mapping is a model in which the blueshifting is due to scattering off material spiraling inwards with an inflow velocity equal to half the blueshift velocity. We discuss how recent reverberation mapping results are consistent with the scattering-plus-inflow model but do not support a disc-wind model. We propose that the anti-correlation of the apparent redshifting of Hβ with the blueshifting of C IV is a consequence of contamination of the red wings of Hβ by the broad wings of [O III].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriya, Takashi J.; Tanaka, Masaomi; Ohsuga, Ken
We propose that superluminous transients that appear at central regions of active galactic nuclei (AGNs) such as CSS100217:102913+404220 (CSS100217) and PS16dtm, which reach near- or super-Eddington luminosities of the central black holes, are powered by the interaction between accretion-disk winds and clouds in broad-line regions (BLRs) surrounding them. If the disk luminosity temporarily increases by, e.g., limit-cycle oscillations, leading to a powerful radiatively driven wind, strong shock waves propagate in the BLR. Because the dense clouds in the AGN BLRs typically have densities similar to those found in SNe IIn, strong radiative shocks emerge and efficiently convert the ejecta kinetic energy to radiation. As a result, transients similar to SNe IIn can be observed at AGN central regions. Since a typical black hole disk-wind velocity is ≃0.1c, where c is the speed of light, the ejecta kinetic energy is expected to be ≃10^52 erg when ≃1 M_⊙ is ejected. This kinetic energy is transformed to radiation energy in the timescale for the wind to sweep up a mass similar to its own in the BLR, which is a few hundred days. Therefore, both luminosities (~10^44 erg s^-1) and timescales (~100 days) of the superluminous transients from AGN central regions match those expected in our interaction model. If CSS100217 and PS16dtm are related to AGN activities triggered by limit-cycle oscillations, they will become bright again in coming years or decades.
Kobayashi, Leo; Gosbee, John W; Merck, Derek L
2017-07-01
(1) To develop a clinical microsystem simulation methodology for alarm fatigue research with a human factors engineering (HFE) assessment framework and (2) to explore its application to the comparative examination of different approaches to patient monitoring and provider notification. Problems with the design, implementation, and real-world use of patient monitoring systems result in alarm fatigue. A multidisciplinary team is developing an open-source tool kit to promote bedside informatics research and mitigate alarm fatigue. Simulation, HFE, and computer science experts created a novel simulation methodology to study alarm fatigue. Featuring multiple interconnected simulated patient scenarios with scripted timeline, "distractor" patient care tasks, and triggered true and false alarms, the methodology incorporated objective metrics to assess provider and system performance. Developed materials were implemented during institutional review board-approved study sessions that assessed and compared an experimental multiparametric alerting system with a standard monitor telemetry system for subject response, use characteristics, and end-user feedback. A four-patient simulation setup featuring objective metrics for participant task-related performance and response to alarms was developed along with accompanying structured HFE assessment (questionnaire and interview) for monitor systems use testing. Two pilot and four study sessions with individual nurse subjects elicited true alarm and false alarm responses (including diversion from assigned tasks) as well as nonresponses to true alarms. In-simulation observation and subject questionnaires were used to test the experimental system's approach to suppressing false alarms and alerting providers. A novel investigative methodology applied simulation and HFE techniques to replicate and study alarm fatigue in controlled settings for systems assessment and experimental research purposes.
A Systematic Determination of Skill and Simulator Requirements for Airplane Pilot Certification
DOT National Transportation Integrated Search
1985-03-01
This research report describes: (1) the FAA's ATP airman certification system; (2) needs of the system regarding simulator use; (3) a systematic methodology for meeting these needs; (4) application of the methodology; (5) results of the study; and (6...
Simulation of Attacks for Security in Wireless Sensor Network.
Diaz, Alvaro; Sanchez, Pablo
2016-11-18
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.
Local deformation for soft tissue simulation
Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2016-01-01
This paper presents a new methodology to localize the deformation range to improve the computational efficiency for soft tissue simulation. This methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method based on elastic theory is used to estimate the stress in soft tissues according to a depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology can improve the computational efficiency while maintaining the modeling realism. PMID:27286482
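As one concrete instance of estimating stress versus depth from elastic theory, the sketch below uses the classical Boussinesq point-load solution for an elastic half-space and finds the depth at which the axial stress drops below a threshold; this is an illustrative stand-in for the paper's estimator, not its exact formulation.

```python
import numpy as np

def boussinesq_sigma_z(F, r, z):
    """Vertical stress under a point load F on an elastic half-space
    (Boussinesq): sigma_z = 3 F z^3 / (2 pi R^5), R = sqrt(r^2 + z^2)."""
    R = np.hypot(r, z)
    return 3.0 * F * z**3 / (2.0 * np.pi * R**5)

def local_range_depth(F, threshold, z_max=0.1, n=1000):
    """Depth along the load axis (r = 0) beyond which the stress falls
    below `threshold`; nodes deeper than this can be excluded from the
    deformation update."""
    z = np.linspace(1e-4, z_max, n)
    below = np.nonzero(boussinesq_sigma_z(F, 0.0, z) < threshold)[0]
    return z[below[0]] if below.size else z_max
```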
A POLLUTION REDUCTION METHODOLOGY FOR CHEMICAL PROCESS SIMULATORS
A pollution minimization methodology was developed for chemical process design using computer simulation. It is based on a pollution balance that at steady state is used to define a pollution index with units of mass of pollution per mass of products. The pollution balance has be...
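A plausible rendering of the steady-state pollution index described above (hedged: the abstract is truncated before the balance is fully stated, so the grouping over streams is assumed):

```latex
I \;=\; \frac{\sum_{j} \dot{m}_{\mathrm{pollutant},\,j}}{\sum_{k} \dot{m}_{\mathrm{product},\,k}}
\qquad \left[\frac{\text{mass of pollutants}}{\text{mass of products}}\right]
```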
A computer simulator for development of engineering system design methodologies
NASA Technical Reports Server (NTRS)
Padula, S. L.; Sobieszczanski-Sobieski, J.
1987-01-01
A computer program designed to simulate and improve engineering system design methodology is described. The simulator mimics the qualitative behavior and data couplings occurring among the subsystems of a complex engineering system. It eliminates the engineering analyses in the subsystems by replacing them with judiciously chosen analytical functions. With the cost of analysis eliminated, the simulator is used for experimentation with a large variety of candidate algorithms for multilevel design optimization to choose the best ones for the actual application. Thus, the simulator serves as a development tool for multilevel design optimization strategy. The simulator concept, implementation, and status are described and illustrated with examples.
A Methodology for the Design of Application-Specific Cyber-Physical Social Sensing Co-Simulators.
Sánchez, Borja Bordel; Alcarria, Ramón; Sánchez-Picot, Álvaro; Sánchez-de-Rivera, Diego
2017-09-22
Cyber-Physical Social Sensing (CPSS) is a new trend in the context of pervasive sensing. In these new systems, various domains coexist in time, evolve together and influence each other. Thus, application-specific tools are necessary for specifying and validating designs and simulating systems. However, nowadays, different tools are employed to simulate each domain independently. Mainly, the cause of the lack of co-simulation instruments to simulate all domains together is the extreme difficulty of combining and synchronizing various tools. In order to reduce that difficulty, an adequate architecture for the final co-simulator must be selected. Therefore, in this paper the authors investigate and propose a methodology for the design of CPSS co-simulation tools. The paper describes the four steps that software architects should follow in order to design the most adequate co-simulator for a certain application, considering the final users' needs and requirements and various additional factors such as the development team's experience. Moreover, the first practical use case of the proposed methodology is provided. An experimental validation is also included in order to evaluate the performance of the proposed co-simulator and to determine the correctness of the proposal.
The Disk-Jet Connection in Radio-Loud AGN: The X-Ray Perspective
NASA Technical Reports Server (NTRS)
Sambruna, Rita
2008-01-01
Unification schemes assume that radio-loud active galactic nuclei (AGN) contain an accretion disk and a relativistic jet perpendicular to the disk, and an obscuring molecular torus. The jet dominance decreases with larger viewing angles from blazars to Broad-Line and Narrow-Line Radio Galaxies. A fundamental question is how accretion and ejecta are related. The X-rays provide a convenient window to study these issues, as they originate in the innermost nuclear regions and penetrate large obscuring columns. I review the data, using observations by Chandra but also from other currently operating high-energy experiments. Synergy with the upcoming GLAST mission will also be highlighted.
Echo Mapping of Active Galactic Nuclei
NASA Technical Reports Server (NTRS)
Peterson, B. M.; Horne, K.
2004-01-01
Echo mapping makes use of the intrinsic variability of the continuum source in active galactic nuclei to map out the distribution and kinematics of line-emitting gas from its light travel time-delayed response to continuum changes. Echo mapping experiments have yielded sizes for the broad line-emitting region in about three dozen AGNs. The dynamics of the line-emitting gas seem to be dominated by the gravity of the central black hole, enabling measurement of the black-hole masses in AGNs. We discuss requirements for future echo-mapping experiments that will yield the high quality velocity-delay maps of the broad-line region that are needed to determine its physical nature.
Methodology development for evaluation of selective-fidelity rotorcraft simulation
NASA Technical Reports Server (NTRS)
Lewis, William D.; Schrage, D. P.; Prasad, J. V. R.; Wolfe, Daniel
1992-01-01
This paper addresses the initial step toward the goal of establishing performance and handling-qualities acceptance criteria for real-time rotorcraft simulators through a planned research effort to quantify the system capabilities of 'selective fidelity' simulators. Within this framework the simulator is classified based on the required task. The simulator is evaluated by separating the various subsystems (visual, motion, etc.) and applying corresponding fidelity constants based on the specific task. This methodology not only provides an assessment technique, but also a technique to determine the required levels of subsystem fidelity for a specific task.
Incorporating scenario-based simulation into a hospital nursing education program.
Nagle, Beth M; McHale, Jeanne M; Alexander, Gail A; French, Brian M
2009-01-01
Nurse educators are challenged to provide meaningful and effective learning opportunities for both new and experienced nurses. Simulation as a teaching and learning methodology is being embraced by nursing in academic and practice settings to provide innovative educational experiences to assess and develop clinical competency, promote teamwork, and improve care processes. This article provides an overview of the historical basis for using simulation in education, simulation methodologies, and perceived advantages and disadvantages. It also provides a description of the integration of scenario-based programs using a full-scale patient simulator into nursing education programming at a large academic medical center.
Intermediate-line Emission in AGNs: The Effect of Prescription of the Gas Density
NASA Astrophysics Data System (ADS)
Adhikari, T. P.; Hryniewicz, K.; Różańska, A.; Czerny, B.; Ferland, G. J.
2018-03-01
The requirement of an intermediate-line component in the recently observed spectra of several active galactic nuclei (AGNs) points to the possible existence of a physically separate region between the broad-line region (BLR) and narrow-line region (NLR). In this paper we explore the emission from the intermediate-line region (ILR) by using photoionization simulations of the gas clouds distributed radially from the center of the AGN. The gas clouds span distances typical for the BLR, ILR, and NLR, and the appearance of dust at the sublimation radius is fully taken into account in our model. The structure of a single cloud is calculated under the assumption of constant pressure. We show that the slope of the power-law radial profile of the cloud density does not affect the existence of the ILR in major types of AGNs. We found that the low-ionization iron line, Fe II, appears to be highly sensitive to the presence of dust and therefore becomes a potential tracer of dust content in line-emitting regions. We show that the use of a disk-like cloud density profile computed for the upper part of the atmosphere of the accretion disk reproduces the observed properties of the line emissivities. In particular, the distance of the Hβ-emitting region inferred from our model agrees with that obtained from reverberation mapping studies of the Sy1 galaxy NGC 5548.
OBSERVATIONAL LIMITS ON TYPE 1 ACTIVE GALACTIC NUCLEUS ACCRETION RATE IN COSMOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trump, Jonathan R.; Impey, Chris D.; Gabor, Jared
2009-07-20
We present black hole masses and accretion rates for 182 Type 1 active galactic nuclei (AGNs) in COSMOS. We estimate masses using the scaling relations for the broad Hβ, Mg II, and C IV emission lines in the redshift ranges 0.16 < z < 0.88, 1 < z < 2.4, and 2.7 < z < 4.9. We estimate the accretion rate using an Eddington ratio L_I/L_Edd estimated from optical and X-ray data. We find that very few Type 1 AGNs accrete below L_I/L_Edd ~ 0.01, despite simulations of synthetic spectra which show that the survey is sensitive to such Type 1 AGNs. At lower accretion rates the broad-line region may become obscured, diluted, or nonexistent. We find evidence that Type 1 AGNs at higher accretion rates have higher optical luminosities, as more of their emission comes from the cool (optical) accretion disk with respect to shorter wavelengths. We measure a larger range in accretion rate than previous works, suggesting that COSMOS is more efficient at finding low accretion rate Type 1 AGNs. However, the measured range in accretion rate is still comparable to the intrinsic scatter from the scaling relations, suggesting that Type 1 AGNs accrete at a narrow range of Eddington ratio, with L_I/L_Edd ~ 0.1.
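The two quantities at the heart of this measurement can be sketched in a few lines. The zero point and slopes below are illustrative of common single-epoch Hβ calibrations, not the exact coefficients used in this paper.

```python
def virial_mass_hbeta(fwhm_kms, l5100_cgs):
    """Single-epoch virial BH mass in solar masses from the Hbeta FWHM (km/s)
    and lambda*L_lambda(5100 A) (erg/s); calibration constants are illustrative."""
    return 10**6.91 * (fwhm_kms / 1e3)**2 * (l5100_cgs / 1e44)**0.5

def eddington_ratio(l_bol_cgs, m_bh_msun):
    """L/L_Edd with L_Edd = 1.26e38 (M_BH / M_sun) erg/s."""
    return l_bol_cgs / (1.26e38 * m_bh_msun)

m_bh = virial_mass_hbeta(4000.0, 1e44)    # ~1.3e8 solar masses
print(m_bh, eddington_ratio(1e45, m_bh))  # Eddington ratio ~0.06
```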
VERA Core Simulator Methodology for PWR Cycle Depletion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin S; Jabaay, Daniel
2015-01-01
This paper describes the methodology developed and implemented in MPACT for performing high-fidelity pressurized water reactor (PWR) multi-cycle core physics calculations. MPACT is being developed primarily for application within the Consortium for the Advanced Simulation of Light Water Reactors (CASL) as one of the main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other supporting methods. These methods represent a minimal set needed to simulate high-fidelity models of a realistic nuclear reactor. Results demonstrating this are presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the PWR cycle depletion capability in MPACT is the focus of two companion papers.
Methodologies for extracting kinetic constants for multiphase reacting flow simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S.L.; Lottes, S.A.; Golchert, B.
1997-03-01
Flows in industrial reactors often involve complex reactions of many species. A computational fluid dynamics (CFD) computer code, ICRKFLO, was developed to simulate multiphase, multi-species reacting flows. ICRKFLO uses a hybrid technique to calculate species concentration and reaction for a large number of species in a reacting flow. This technique includes a hydrodynamic and reacting flow simulation with a small but sufficient number of lumped reactions to compute flow field properties, followed by a calculation of local reaction kinetics and transport of many subspecies (order of 10 to 100). Kinetic rate constants of the numerous subspecies chemical reactions are difficult to determine. A methodology has been developed to extract kinetic constants from experimental data efficiently. A flow simulation of a fluid catalytic cracking (FCC) riser was successfully used to demonstrate this methodology.
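Stripped of the CFD coupling, the extraction step amounts to fitting rate-law parameters to measurements. A minimal example with an assumed Arrhenius form and invented data (the paper's actual procedure iterates against full riser flow simulations):

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol K)

def arrhenius(T, A, Ea):
    """Rate constant k(T) = A * exp(-Ea / (R T))."""
    return A * np.exp(-Ea / (R * T))

# Hypothetical measured rate constants at several riser temperatures.
T_data = np.array([700.0, 750.0, 800.0, 850.0])  # K
k_data = np.array([0.8, 2.1, 4.9, 10.3])         # 1/s

(A_fit, Ea_fit), _ = curve_fit(arrhenius, T_data, k_data, p0=(1e7, 1e5))
print(f"A = {A_fit:.3g} 1/s, Ea = {Ea_fit/1e3:.1f} kJ/mol")
```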
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation, by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
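One standard ingredient of such numerical-error estimation is Richardson extrapolation over systematically refined grids; a minimal sketch, assuming a scalar output computed on grids of spacing h, h/2, and h/4:

```python
import numpy as np

def observed_order(f_h, f_h2, f_h4):
    """Observed order of accuracy p from solutions on grids h, h/2, h/4:
    (f_h - f_h2)/(f_h2 - f_h4) -> 2^p as h -> 0."""
    return np.log(abs(f_h - f_h2) / abs(f_h2 - f_h4)) / np.log(2.0)

def richardson_error(f_h2, f_h4, p):
    """Estimated discretization error of the fine-grid solution f_h4,
    from f_h = f_exact + C h^p: error(f_h4) ~ (f_h2 - f_h4) / (2^p - 1)."""
    return (f_h2 - f_h4) / (2.0**p - 1.0)
```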
NASA Astrophysics Data System (ADS)
Pawar, Sumedh; Sharma, Atul
2018-01-01
This work presents a mathematical model and solution methodology for a multiphysics engineering problem on arc formation during welding and inside a nozzle. A general-purpose commercial CFD solver, ANSYS FLUENT 13.0.0, is used in this work. Arc formation involves strongly coupled gas dynamics and electrodynamics, simulated by solution of the coupled Navier-Stokes equations, Maxwell's equations, and the radiation heat-transfer equation. Validation of the present numerical methodology is demonstrated with an excellent agreement with the published results. The developed mathematical model and the user-defined functions (UDFs) are independent of the geometry and are applicable to any system that involves arc formation, in a 2D axisymmetric coordinate system. The high-pressure flow of SF6 gas in the nozzle-arc system resembles the arc chamber of an SF6 gas circuit breaker; thus, this methodology can be extended to simulate the arcing phenomenon during current interruption.
Predicting Failure Progression and Failure Loads in Composite Open-Hole Tension Coupons
NASA Technical Reports Server (NTRS)
Arunkumar, Satyanarayana; Przekop, Adam
2010-01-01
Failure types and failure loads in carbon-epoxy [45n/90n/-45n/0n]ms laminate coupons with central circular holes subjected to tensile load are simulated using a progressive failure analysis (PFA) methodology. The progressive failure methodology is implemented using a VUMAT subroutine within the ABAQUS™/Explicit nonlinear finite element code. The degradation model adopted in the present PFA methodology uses an instantaneous complete stress reduction (COSTR) approach to simulate damage at a material point when failure occurs. In-plane modeling parameters such as element size and shape are held constant in the finite element models, irrespective of laminate thickness and hole size, to predict failure loads and failure progression. Comparison to published test data indicates that this methodology accurately simulates brittle, pull-out, and delamination failure types. The sensitivity of the failure progression and the failure load to analytical loading rates and solver precision is demonstrated.
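The COSTR rule can be paraphrased in a few lines: once a failure criterion is met at a material point, its stiffness is knocked down at once rather than gradually. The sketch below uses a simple max-stress check and a hypothetical knockdown factor; the actual model lives inside an ABAQUS/Explicit VUMAT and uses composite failure criteria.

```python
import numpy as np

def costr_update(stress, strength, stiffness, knockdown=1e-6):
    """Instantaneous complete stress reduction (illustrative): wherever the
    max-stress criterion is met, drop the stiffness at once to a residual
    value instead of degrading it progressively."""
    failed = np.abs(stress) >= strength
    return np.where(failed, stiffness * knockdown, stiffness), failed
```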
Optimization of lamp arrangement in a closed-conduit UV reactor based on a genetic algorithm.
Sultan, Tipu; Ahmad, Zeshan; Cho, Jinsoo
2016-01-01
The choice of arrangement of the UV lamps in a closed-conduit ultraviolet (CCUV) reactor significantly affects its performance; however, a systematic methodology for the optimal lamp arrangement within the chamber of the CCUV reactor is not well established in the literature. In this work, we propose a viable systematic methodology for the lamp arrangement based on a genetic algorithm (GA). In addition, we analyze the impacts of the diameter, angle, and symmetry of the lamp arrangement on the reduction equivalent dose (RED). The results are compared based on the simulated RED values, evaluated using the computational fluid dynamics software ANSYS FLUENT; the fluence rate was calculated using the commercial software UVCalc3D, and the GA-based lamp arrangement optimization was performed in MATLAB. The simulation results provide detailed information about the GA-based methodology for the lamp arrangement, the pathogen transport, and the simulated RED values. A significant increase in the RED values was achieved by using the GA-based lamp arrangement methodology, and this increase was highest for the asymmetric lamp arrangement within the chamber of the CCUV reactor. These results demonstrate that the proposed GA-based methodology for symmetric and asymmetric lamp arrangements provides a viable technical solution to the design and optimization of the CCUV reactor.
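The GA step can be sketched in a few lines; the surrogate objective below merely stands in for the CFD and fluence-rate evaluation (FLUENT plus UVCalc3D) used in the paper, and all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate for the objective: an invented function that rewards
# angular coverage and penalizes uneven lamp radii. In the paper each
# candidate arrangement would be scored by a CFD/fluence-rate simulation.
def red_surrogate(radii, angles):
    return np.ptp(angles) - 5.0 * np.var(radii)

def fitness(pop):
    return np.array([red_surrogate(p[:4], p[4:]) for p in pop])

pop = rng.uniform(0.0, 1.0, (40, 8))       # 4 lamps: 4 radii + 4 angles
for _ in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]     # truncation selection
    children = parents[rng.integers(0, 20, 40)].copy()
    children += rng.normal(0.0, 0.05, children.shape)   # Gaussian mutation
    pop = np.clip(children, 0.0, 1.0)
print("best surrogate RED score:", fitness(pop).max())
```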
A top-down design methodology and its implementation for VCSEL-based optical links design
NASA Astrophysics Data System (ADS)
Li, Jiguang; Cao, Mingcui; Cai, Zilong
2005-01-01
In order to find the optimal design for a given specification of an optical communication link, an integrated simulation of the electronic, optoelectronic, and optical components of the complete system is required. It is very important to be able to simulate at both the system level and the detailed model level. This kind of model is feasible due to the high potential of the Verilog-AMS language. In this paper, we propose an effective top-down design methodology and employ it in the development of a complete VCSEL-based optical link simulation. The principle of the top-down methodology is that development proceeds from the system level down to the device level. To build a hierarchical model of VCSEL-based optical links, the design framework is organized in three levels of hierarchy. The models are developed and implemented in Verilog-AMS, and the model parameters are then fitted to measured data. A sample transient simulation demonstrates the functioning of our implementation. Suggestions for future directions in top-down methodology for optoelectronic systems technology are also presented.
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V comprises two separate tasks: verification, a mathematical exercise that assesses whether the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus on verification, which in turn comprises code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
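For the solution-verification step, Richardson extrapolation can be illustrated in a few lines; the second-order model function below is an assumed example, not GBS output:

```python
import numpy as np

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order p and Richardson-extrapolated value from solutions
    on three grids with constant refinement ratio r."""
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, f_exact

# Assumed example: a second-order scheme, f(h) = f_exact + C * h^2.
f = lambda h: 1.0 + 0.3 * h**2
p, f_ext = observed_order(f(0.4), f(0.2), f(0.1))
print(f"observed order p = {p:.2f}, extrapolated value = {f_ext:.4f}")
```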
A Monte Carlo analysis of breast screening randomized trials.
Zamora, Luis I; Forastero, Cristina; Guirado, Damián; Lallena, Antonio M
2016-12-01
To analyze breast screening randomized trials with a Monte Carlo simulation tool, a simulation tool previously developed to simulate breast screening programmes was adapted for this purpose. The histories of women participating in the trials were simulated, including a model for survival after local treatment of invasive cancers. Distributions of the time gained by screening detection over symptomatic detection and the overall screening sensitivity were used as inputs. Several randomized controlled trials were simulated. Except for the age range of the women involved, all simulations used the same population characteristics, which permitted analysis of their external validity. The relative risks obtained were compared to those quoted for the trials, whose internal validity was addressed by further investigating the reasons for the disagreements observed. The Monte Carlo simulations produce results that are in good agreement with most of the randomized trials analyzed, thus indicating their methodological quality and external validity. A reduction of breast cancer mortality of around 20% appears to be a reasonable value according to the results of the trials that are methodologically correct. Discrepancies observed with the Canada I and II trials may be attributed to low mammography quality and some methodological problems; the Kopparberg trial appears to show low methodological quality. Monte Carlo simulations are a powerful tool for investigating breast screening randomized controlled trials, helping to establish those whose results are reliable enough to be extrapolated to other populations, and to design trial strategies and, eventually, adapt them during their development.
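A stripped-down version of such a trial simulation, far simpler than the tool described above and with invented incidence, sensitivity, and case-fatality values, looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-arm toy trial: screening advances detection, which lowers case
# fatality. All probabilities below are assumed for illustration.
n = 100_000                        # women per arm
p_cancer = 0.04                    # cumulative incidence over follow-up
sensitivity = 0.85                 # overall screening sensitivity
p_death_symptomatic = 0.30         # case fatality, symptomatic detection
p_death_screened = 0.22            # case fatality, screen-detected

cases = rng.random((2, n)) < p_cancer
detected = rng.random(n) < sensitivity                  # screening arm
deaths_ctrl = (cases[0] & (rng.random(n) < p_death_symptomatic)).sum()
p_death = np.where(detected, p_death_screened, p_death_symptomatic)
deaths_scr = (cases[1] & (rng.random(n) < p_death)).sum()
print("relative risk:", deaths_scr / deaths_ctrl)       # ~0.77 here
```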
On designing multicore-aware simulators for systems biology endowed with OnLine statistics.
Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo
2014-01-01
This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aimed at supporting bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed with statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for the effectiveness of the online analysis in capturing biological system behavior, on a multicore platform and on representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems, exhibiting multistable and oscillatory behavior, are used as a testbed.
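The pipelined simulation-to-analysis pattern with online statistics can be sketched as below; FastFlow provides this natively in C++, so Python threads and Welford's online variance update are used here purely for illustration:

```python
import threading, queue, random

# Producer streams trajectories; consumer updates statistics on the fly,
# so no trajectory needs to be kept in memory after it is analysed.
trajectories = queue.Queue(maxsize=64)

def simulate(n_traj=1000, n_steps=100):
    for _ in range(n_traj):
        x, traj = 0.0, []
        for _ in range(n_steps):
            x += random.gauss(0, 1)
            traj.append(x)
        trajectories.put(traj)          # stream partial results downstream
    trajectories.put(None)              # end-of-stream marker

def analyse():
    count, mean, m2 = 0, 0.0, 0.0       # Welford online mean/variance
    while (traj := trajectories.get()) is not None:
        x = traj[-1]                    # statistic: trajectory endpoint
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)
    print("endpoint mean:", mean, "variance:", m2 / (count - 1))

producer = threading.Thread(target=simulate)
consumer = threading.Thread(target=analyse)
producer.start(); consumer.start()
producer.join(); consumer.join()
```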
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, Paul R.; Boyce, Donald E.; Park, Jun-Sang
A robust methodology is presented to extract slip system strengths from lattice strain distributions for polycrystalline samples obtained from high-energy x-ray diffraction (HEXD) experiments with in situ loading. The methodology consists of matching the evolution of coefficients of a harmonic expansion of the distributions from simulation to the coefficients derived from measurements. Simulation results are generated via finite element simulations of virtual polycrystals that are subjected to the loading history applied in the HEXD experiments. Advantages of the methodology include: (1) its ability to utilize extensive data sets generated by HEXD experiments; (2) its ability to capture trends in distributions that may be noisy (both measured and simulated); and (3) its sensitivity to the ratios of the family strengths. The approach is used to evaluate the slip system strengths of Ti-6Al-4V using samples having relatively equiaxed grains. These strength estimates are compared to values in the literature.
Soft tissue modelling through autowaves for surgery simulation.
Zhong, Yongmin; Shirinzadeh, Bijan; Alici, Gursel; Smith, Julian
2006-09-01
Modelling of soft tissue deformation is of great importance to virtual-reality-based surgery simulation. This paper presents a new methodology for simulating soft tissue deformation by drawing an analogy between autowaves and soft tissue deformation. The potential energy stored in a soft tissue as a result of a deformation caused by an external force is propagated among the mass points of the soft tissue by non-linear autowaves. The novelty of the methodology is that (i) autowave techniques are established to describe the potential energy distribution of a deformation for extrapolating internal forces, and (ii) non-linear materials are modelled with non-linear autowaves rather than geometric non-linearity. Integration with a haptic device has been achieved to simulate soft tissue deformation with force feedback. The proposed methodology not only deals with large-range deformations, but also accommodates isotropic, anisotropic, and inhomogeneous materials by simply changing the diffusion coefficients.
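A one-dimensional excitable-medium toy, with assumed kinetics and coefficients far simpler than the paper's soft-tissue model, illustrates how an autowave propagates a locally injected disturbance:

```python
import numpy as np

# 1-D excitable medium: a disturbance injected at the centre propagates
# outward as an autowave. Anisotropy/inhomogeneity would be encoded by
# varying the diffusion coefficient D across the mesh.
n, dt, dx, D = 200, 0.05, 1.0, 1.0
u = np.zeros(n)                    # excitation (potential-energy proxy)
v = np.zeros(n)                    # recovery variable
u[95:105] = 1.0                    # external force applied at the centre

for _ in range(400):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    du = D * lap + u * (1.0 - u) * (u - 0.1) - v    # excitable kinetics
    dv = 0.01 * (0.5 * u - v)
    u, v = u + dt * du, v + dt * dv

front = np.flatnonzero(u > 0.5)
print("excited region spans cells", front[0], "to", front[-1])
```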
Current target acquisition methodology in force on force simulations
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Miller, Brian; Mazz, John P.
2017-05-01
The U.S. Army RDECOM CERDEC NVESD MSD target acquisition models have been used for many years by the military community in force-on-force simulations for training, testing, and analysis. There have been significant improvements to these models over the past few years: the transition to the ACQUIRE TTP-TAS (ACQUIRE Targeting Task Performance Target Angular Size) methodology for all imaging sensors and the development of new discrimination criteria for urban environments and humans. This paper provides an overview of the current target acquisition modeling approach and provides data for the new discrimination tasks. It discusses advances and changes to the models and methodologies used to: (1) design and compare sensor performance, (2) predict expected target acquisition performance in the field, (3) predict target acquisition performance for combat simulations, and (4) conduct model data validation for combat simulations.
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
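For comparison purposes, the MC baseline for such a problem can be sketched with an assumed roughness prior and Manning's uniform-flow formula; the FPE methodology would recover the same moments, and the full pdf, from a single deterministic solve:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo ensemble of flow velocity under an uncertain Manning
# roughness n (toy uniform-flow setting; prior and channel data assumed).
n_samples = 10_000
manning_n = rng.normal(0.03, 0.005, n_samples).clip(0.01)
slope, depth = 1.0e-3, 2.0                     # bed slope, hydraulic depth
velocity = (1.0 / manning_n) * depth**(2.0 / 3.0) * np.sqrt(slope)

print("ensemble mean velocity:", velocity.mean())
print("ensemble variance     :", velocity.var())
```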
Passenger rail vehicle safety assessment methodology. Volume I, Summary of safe performance limits.
DOT National Transportation Integrated Search
2000-04-01
This report presents a methodology based on computer simulation that assesses the safe dynamic performance limits of commuter passenger vehicles. The methodology consists of determining the critical design parameters and characteristic properties of bo...
Design Of Combined Stochastic Feedforward/Feedback Control
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1989-01-01
The methodology accommodates a variety of control structures and design techniques. In this methodology for combined stochastic feedforward/feedback control, the main objectives of the feedforward and feedback control laws are seen clearly. Inclusion of error-integral feedback, dynamic compensation, a rate-command control structure, and the like is an integral element of the methodology. Another advantage of the methodology is the flexibility to develop a variety of techniques for designing feedback control with arbitrary structures to obtain the feedback controller: these include stochastic output feedback, multiconfiguration control, decentralized control, and frequency-domain and classical control methods. Control modes of the system include capture and tracking of the localizer and glideslope, crab, decrab, and flare. By use of the recommended incremental implementation, the control laws were simulated on a digital computer and connected with a nonlinear digital simulation of the aircraft and its systems.
Kinematics and spectra of planetary nebulae with O VI-sequence nuclei
NASA Technical Reports Server (NTRS)
Johnson, H. M.
1976-01-01
Spectral features of NGC 5189 and NGC 6905 are tabulated. Fabry-Perot profiles around H alpha and O III lambda 5007 of NGC 5189, NGC 6905, NGC 246, and NGC 1535 are illustrated. The last of these planetary nebulae is a non-O VI-sequence comparison object of high excitation. The kinematics of the four planetary nebulae are analyzed simply. Discussion of these data is motivated by the possibility of collisional excitation by high-speed ejecta from broad-lined O VI-sequence nuclei, and by the opportunity to compare with conditions in the supernova remnant or ring nebula G2.4 + 1.4, which contains an O VI-sequence nucleus of Population I.
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Toniolo, Matthew D.; Tartabini, Paul V.; Roithmayr, Carlos M.; Albertson, Cindy W.; Karlgaard, Christopher D.
2016-01-01
The objective of this report is to develop and implement a physics-based method for the analysis and simulation of multi-body dynamics, including launch vehicle stage separation. The constraint force equation (CFE) methodology discussed in this report provides such a framework for modeling the constraint forces and moments acting at joints while the vehicles are still connected. Several stand-alone test cases involving various types of joints were developed to validate the CFE methodology. The results were compared with ADAMS(Registered Trademark) and Autolev, two industry-standard benchmark codes for multi-body dynamic analysis and simulation; these two codes, however, are not designed for aerospace flight trajectory simulations. After this validation exercise, the CFE algorithm was implemented in the Program to Optimize Simulated Trajectories II (POST2) to provide a capability to simulate end-to-end trajectories of launch vehicles, including stage separation. The POST2/CFE methodology was applied to the STS-1 Space Shuttle solid rocket booster (SRB) separation and to the Hyper-X Research Vehicle (HXRV) separation from the Pegasus booster as further tests and validation of its application to launch vehicle stage separation problems. Finally, to demonstrate end-to-end simulation capability, POST2/CFE was applied to the ascent, orbit insertion, and booster return of a reusable two-stage-to-orbit (TSTO) vehicle concept. With these validation exercises, the POST2/CFE software can be used for conceptual-level end-to-end simulations, including launch vehicle stage separation, for problems similar to those discussed in this report.
The Contribution of Human Factors in Military System Development: Methodological Considerations
1980-07-01
Solution to the indexing problem of frequency domain simulation experiments
NASA Technical Reports Server (NTRS)
Mitra, Mousumi; Park, Stephen K.
1991-01-01
A frequency domain simulation experiment is one in which selected system parameters are oscillated sinusoidally to induce oscillations in one or more system statistics of interest. A spectral (Fourier) analysis of these induced oscillations is then performed. To perform this spectral analysis, all oscillation frequencies must be referenced to a common, independent variable: an oscillation index. In a discrete-event simulation, the global simulation clock is the most natural choice for the oscillation index. However, past efforts to reference all frequencies to the simulation clock generally yielded unsatisfactory results. This paper explains the reason for those unsatisfactory results and presents a new methodology that uses the simulation clock as the oscillation index. Techniques for implementing the new methodology are demonstrated by performing a frequency domain simulation experiment on a network of queues.
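A minimal sketch of such an experiment (a crude fluid queue model with assumed rates, not the paper's network of queues) shows the clock serving as the common oscillation index:

```python
import numpy as np

rng = np.random.default_rng(3)

# The service rate is oscillated sinusoidally against the global simulation
# clock; the induced oscillation in the queue statistic is then recovered
# by an FFT indexed by that same clock.
T, dt, f0 = 2000.0, 1.0, 0.01          # horizon, sampling step, drive freq
t = np.arange(0.0, T, dt)              # the common oscillation index
mu = 1.0 + 0.2 * np.sin(2 * np.pi * f0 * t)   # oscillated service rate
lam = 1.0                              # arrival rate

q = np.empty_like(t)
q[0] = 5.0
for i in range(1, len(t)):
    q[i] = max(q[i - 1] + (lam - mu[i]) * dt + rng.normal(0, 0.02), 0.0)

spec = np.abs(np.fft.rfft(q - q.mean()))
freqs = np.fft.rfftfreq(len(t), dt)
print("peak response at f =", freqs[1:][spec[1:].argmax()])  # ~0.01
```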
System Dynamics Modeling for Proactive Intelligence
2010-01-01
Radioactive waste disposal fees-Methodology for calculation
NASA Astrophysics Data System (ADS)
Bemš, Július; Králík, Tomáš; Kubančák, Ján; Vašíček, Jiří; Starý, Oldřich
2014-11-01
This paper summarizes the methodological approach used to calculate fees for low- and intermediate-level radioactive waste disposal and for spent fuel disposal. The methodology itself is based on simulating the cash flows related to the operation of the waste disposal system. The paper includes a demonstration of the methodology's application under the conditions of the Czech Republic.
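The break-even character of such a fee calculation can be sketched as a discounted cash-flow balance; all cost, volume, and rate figures below are invented:

```python
import numpy as np

# Pick the unit fee so that discounted fee income covers the discounted
# lifetime costs of the disposal system (all figures hypothetical).
years = np.arange(60)
costs = np.where(years < 10, 8.0e6, 1.5e6)     # build, then operate (per yr)
costs[-5:] = 20.0e6                            # closure and monitoring
waste = np.where(years < 40, 500.0, 0.0)       # m3 of waste accepted per yr
r = 0.03                                       # real discount rate
disc = (1.0 + r) ** -years

fee = (costs * disc).sum() / (waste * disc).sum()
print(f"break-even disposal fee: {fee:,.0f} per m3")
```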
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Krosel, S. M.; Bruton, W. M.
1982-01-01
A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations needed to transform user-supplied engine design information into a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between the hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.
Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David
1995-01-01
Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.
SIMSAT: An object oriented architecture for real-time satellite simulation
NASA Technical Reports Server (NTRS)
Williams, Adam P.
1993-01-01
Real-time satellite simulators are vital tools in the support of satellite missions. They are used in the testing of ground control systems, the training of operators, the validation of operational procedures, and the development of contingency plans. The simulators must provide high-fidelity modeling of the satellite, which requires detailed system information, much of which is not available until relatively near launch. The short time-scales and resulting high productivity required of such simulator developments culminate in the need for a reusable infrastructure that can be used as a basis for each simulator. This paper describes a major new simulation infrastructure package, the Software Infrastructure for Modelling Satellites (SIMSAT). It outlines the object-oriented design methodology used, describes the resulting design, and discusses the advantages and disadvantages experienced in applying the methodology.
NASA Astrophysics Data System (ADS)
Rao, Dhananjai M.; Chernyakhovsky, Alexander; Rao, Victoria
2008-05-01
Humanity is facing an increasing number of highly virulent and communicable diseases such as avian influenza. Researchers believe that avian influenza has potential to evolve into one of the deadliest pandemics. Combating these diseases requires in-depth knowledge of their epidemiology. An effective methodology for discovering epidemiological knowledge is to utilize a descriptive, evolutionary, ecological model and use bio-simulations to study and analyze it. These types of bio-simulations fall under the category of computational evolutionary methods because the individual entities participating in the simulation are permitted to evolve in a natural manner by reacting to changes in the simulated ecosystem. This work describes the application of the aforementioned methodology to discover epidemiological knowledge about avian influenza using a novel eco-modeling and bio-simulation environment called SEARUMS. The mathematical principles underlying SEARUMS, its design, and the procedure for using SEARUMS are discussed. The bio-simulations and multi-faceted case studies conducted using SEARUMS elucidate its ability to pinpoint timelines, epicenters, and socio-economic impacts of avian influenza. This knowledge is invaluable for proactive deployment of countermeasures in order to minimize negative socioeconomic impacts, combat the disease, and avert a pandemic.
Design for dependability: A simulation-based approach. Ph.D. Thesis, 1993
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.
1994-01-01
This research addresses issues in the simulation-based system-level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system-level analysis are discussed, and a methodology that addresses them is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, and routing algorithms, as well as other fault-tolerance mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general-purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage, when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms, and hybrid simulation to reduce simulation time is introduced.
Some Dimensions of Simulation.
ERIC Educational Resources Information Center
Beck, Isabel; Monroe, Bruce
Beginning with definitions of "simulation" (a methodology for testing alternative decisions under hypothetical conditions), this paper focuses on the use of simulation as an instructional method, pointing out the relationships and differences between role playing, games, and simulation. The term "simulation games" is explored with an analysis of…
Theory of mind and Verstehen (understanding) methodology.
Kumazaki, Tsutomu
2016-09-01
Theory of mind is a prominent, but highly controversial, field in psychology, psychiatry, and philosophy of mind. Simulation theory, theory-theory, and other views have been presented in recent decades, none of which are monolithic. In this article, various views on theory of mind are reviewed, and the methodological problems within each view are investigated. The relationship between simulation theory and the Verstehen (understanding) methodology of the traditional human sciences is an intriguing issue, although the latter is not a direct ancestor of the former. From that perspective, lessons for current clinical psychiatry are drawn.
Application of CFE/POST2 for Simulation of Launch Vehicle Stage Separation
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Tartabini, Paul V.; Toniolo, Matthew D.; Roithmayr, Carlos M.; Karlgaard, Christopher D.; Samareh, Jamshid A.
2009-01-01
The constraint force equation (CFE) methodology provides a framework for modeling constraint forces and moments acting at joints that connect multiple vehicles. With implementation in Program to Optimize Simulated Trajectories II (POST 2), the CFE provides a capability to simulate end-to-end trajectories of launch vehicles, including stage separation. In this paper, the CFE/POST2 methodology is applied to the Shuttle-SRB separation problem as a test and validation case. The CFE/POST2 results are compared with STS-1 flight test data.
Using scan statistics for congenital anomalies surveillance: the EUROCAT methodology.
Teljeur, Conor; Kelly, Alan; Loane, Maria; Densem, James; Dolk, Helen
2015-11-01
Scan statistics have been used extensively to identify temporal clusters of health events. We describe the temporal cluster detection methodology adopted by the EUROCAT (European Surveillance of Congenital Anomalies) monitoring system. Since 2001, EUROCAT has implemented a variable-window-width scan statistic for detecting unusual temporal aggregations of congenital anomaly cases, with scan windows based on numbers of cases rather than defined by time. The methodology is embedded in the EUROCAT Central Database for annual application to centrally held registry data, and it has been incrementally adapted to improve its utility and to address statistical issues. Simulation exercises were used to determine the power of the methodology to identify periods of raised risk (of 1-18 months). In order to operationalize the scan methodology, a number of adaptations were needed, including: estimating date of conception as the unit of time; deciding the maximum length (in time) and recency of clusters of interest; reporting multiple and overlapping significant clusters; replacing the Monte Carlo simulation with a lookup table to reduce computation time; and placing a threshold on underlying population change, with the false positive rate estimated by simulation. Exploration of power found that raised-risk periods lasting one month are unlikely to be detected except when the relative risk and case counts are high. The variable-window-width scan statistic is a useful tool for the surveillance of congenital anomalies. Numerous adaptations have improved the utility of the original methodology in the context of temporal cluster detection in congenital anomalies.
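A bare-bones variable-window scan of this kind, with window widths defined by case counts and significance assessed by simulation (which the EUROCAT adaptation replaces with a lookup table), might look like:

```python
import numpy as np

rng = np.random.default_rng(4)

def min_spans(dates, window_cases):
    """Shortest time span containing each fixed number of consecutive cases."""
    d = np.sort(dates)
    return {w: (d[w - 1:] - d[:len(d) - w + 1]).min()
            for w in window_cases if w <= len(d)}

windows = [5, 8, 12]                       # windows defined by case counts
dates = rng.uniform(0, 365, 60)            # estimated conception dates (days)
obs = min_spans(dates, windows)

# Null distribution by Monte Carlo; an unusually short span for a given
# case count signals a temporal cluster.
sims = [min_spans(rng.uniform(0, 365, 60), windows) for _ in range(999)]
for w, span in obs.items():
    p = (1 + sum(s[w] <= span for s in sims)) / 1000.0
    print(f"{w}-case window: shortest span {span:6.1f} d, p = {p:.3f}")
```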
NuSTAR reveals the Comptonizing corona of the broad-line radio galaxy 3C 382
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballantyne, D. R.; Bollenbacher, J. M.; Brenneman, L. W.
Broad-line radio galaxies (BLRGs) are active galactic nuclei that produce powerful, large-scale radio jets, but appear as Seyfert 1 galaxies in their optical spectra. In the X-ray band, BLRGs also appear like Seyfert galaxies, but with flatter spectra and weaker reflection features. One explanation for these properties is that the X-ray continuum is diluted by emission from the jet. Here, we present two NuSTAR observations of the BLRG 3C 382 that show clear evidence that the continuum of this source is dominated by thermal Comptonization, as in Seyfert 1 galaxies. The two observations were separated by over a year and found 3C 382 in different states separated by a factor of 1.7 in flux. The lower flux spectrum has a photon-index of Γ=1.68{sub −0.02}{sup +0.03}, while the photon-index of the higher flux spectrum is Γ=1.78{sub −0.03}{sup +0.02}. Thermal and anisotropic Comptonization models provide an excellent fit to both spectra and show that the coronal plasma cooled from kT{sub e} = 330 ± 30 keV in the low flux data to 231{sub −88}{sup +50} keV in the high flux observation. This cooling behavior is typical of Comptonizing corona in Seyfert galaxies and is distinct from the variations observed in jet-dominated sources. In the high flux observation, simultaneous Swift data are leveraged to obtain a broadband spectral energy distribution and indicates that the corona intercepts ∼10% of the optical and ultraviolet emitting accretion disk. 3C 382 exhibits very weak reflection features, with no detectable relativistic Fe Kα line, that may be best explained by an outflowing corona combined with an ionized inner accretion disk.
REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Negrete, C. Alenka; Dultzin, Deborah; Marziani, Paola
2013-07-01
Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR) this yields the black hole mass M{sub BH}. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r{sub BLR} measures (the so-called Kaspi relation involving about 60 low-z sources). We are exploring a different method for estimating r{sub BLR} directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r{sub BLR} estimates that come from our method and from reverberation mapping. Our "photoionization" method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 A (Al III {lambda}1860/Si III] {lambda}1892, C IV {lambda}1549/Al III {lambda}1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r{sub BLR} and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r{sub BLR} values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M{sub BH} for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the width of the UV intermediate emission lines are consistent with the width of H{beta}, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Linhua; Shen, Yue; McGreer, Ian D.
2016-02-20
We present a reverberation mapping (RM) experiment that combines broad- and intermediate-band photometry; it is the first such attempt targeting 13 quasars at 0.2 < z < 0.9. The quasars were selected to have strong Hα or Hβ emission lines that are located in one of three intermediate bands (with FWHM around 200 Å) centered at 8045, 8505, and 9171 Å. The imaging observations were carried out in the intermediate bands and the broad i and z bands using the prime-focus imager 90Prime on the 2.3 m Bok telescope. Because of the large (∼1 deg{sup 2}) field of view (FOV) of 90Prime, we included the 13 quasars within only five telescope pointings or fields. The five fields were repeatedly observed over 20–30 epochs that were unevenly distributed over a duration of 5–6 months. The combination of the broad- and intermediate-band photometry allows us to derive accurate light curves for both optical continuum emission (from the accretion disk) and line emission (from the broad-line region, or BLR). We detect Hα time lags between the continuum and line emission in six quasars. These quasars are at relatively low redshifts 0.2 < z < 0.4. The measured lags are consistent with the current BLR size–luminosity relation for Hβ at z < 0.3. While this experiment appears successful in detecting lags of the bright Hα line, further investigation is required to see if it can also be applied to the fainter Hβ line for quasars at higher redshifts. Finally we demonstrate that, by using a small telescope with a large FOV, intermediate-band photometric RM can be efficiently executed for a large sample of quasars at z > 0.2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, Y. T.; Doi, A.; Inoue, Y.
2015-01-30
We present multi-wavelength monitoring results for the broad-line radio galaxy 3C 120 in the MeV/GeV, sub-millimeter, and 43 GHz bands over 6 yr. Over the past 2 yr, the Fermi-Large Area Telescope sporadically detected 3C 120 with high significance and the 230 GHz data also suggest an enhanced activity of the source. After the MeV/GeV detection from 3C 120 in MJD 56240–56300, 43 GHz Very Long Baseline Array (VLBA) monitoring revealed a brightening of the radio core, followed by the ejection of a superluminal knot. Since we observed the γ-ray and VLBA phenomena in temporal proximity to each other, it is naturally assumed that they are physically connected. This assumption was further supported by the subsequent observation that the 43 GHz core brightened again after a γ-ray flare occurred around MJD 56560. We can then infer that the MeV/GeV emission took place inside an unresolved 43 GHz core of 3C 120 and that the jet dissipation occurred at sub-parsec distances from the central black hole (BH), if we take the distance of the 43 GHz core from the central BH as ∼0.5 pc, as previously estimated from the time lag between X-ray dips and knot ejections. Based on our constraints on the relative locations of the emission regions and energetic arguments, we conclude that the γ rays are more favorably produced via the synchrotron self-Compton process, rather than inverse Compton scattering of external photons coming from the broad line region or hot dusty torus. We also derived the electron distribution and magnetic field by modeling the simultaneous broadband spectrum.
NASA Technical Reports Server (NTRS)
Milisavljevic, D.; Margutti, R.; Parrent, J. T.; Soderberg, A. M.; Fesen, R. A.; Mazzali, P.; Maeda, K.; Sanders, N. E.; Cenko, S. B.; Silverman, J. M.
2014-01-01
We present ultraviolet, optical, and near-infrared observations of SN2012ap, a broad-lined Type Ic supernova in the galaxy NGC 1729 that produced a relativistic and rapidly decelerating outflow without a gamma-ray burst signature. Photometry and spectroscopy follow the flux evolution from -13 to +272 days past the B-band maximum of -17.4 +/- 0.5 mag. The spectra are dominated by Fe II, O I, and Ca II absorption lines at ejecta velocities of v approx. 20,000 km s(exp. -1) that change slowly over time. Other spectral absorption lines are consistent with contributions from photospheric He I, and hydrogen may also be present at higher velocities (v approx. greater than 27,000 km s(exp. -1)). We use these observations to estimate explosion properties and derive a total ejecta mass of 2.7 Solar mass, a kinetic energy of 1.0 x 10(exp 52) erg, and a (56)Ni mass of 0.1-0.2 Solar mass. Nebular spectra (t > 200 d) exhibit an asymmetric double-peaked [O I] lambda lambda 6300, 6364 emission profile that we associate with absorption in the supernova interior, although toroidal ejecta geometry is an alternative explanation. SN2012ap joins SN2009bb as another exceptional supernova that shows evidence for a central engine (e.g., black-hole accretion or magnetar) capable of launching a non-negligible portion of ejecta to relativistic velocities without a coincident gamma-ray burst detection. Defining attributes of their progenitor systems may be related to notable properties including above-average environmental metallicities of Z approx. greater than Solar Z, moderate to high levels of host-galaxy extinction (E(B -V ) > 0.4 mag), detection of high-velocity helium at early epochs, and a high relative flux ratio of [Ca II]/[O I] > 1 at nebular epochs. These events support the notion that jet activity at various energy scales may be present in a wide range of supernovae.
NASA Astrophysics Data System (ADS)
Bespalov, Vadim; Udina, Natalya; Samarskaya, Natalya
2017-10-01
Use of wind energy is one of the prospective directions among renewable energy sources. The article reviews a methodological approach to the simulation and selection of ecologically efficient and energetically economical wind turbines at the design stage, taking into account the characteristics of the natural-territorial complex and the peculiarities of the anthropogenic load in the territory of WT location.
Explosion/Blast Dynamics for Constellation Launch Vehicles Assessment
NASA Technical Reports Server (NTRS)
Baer, Mel; Crawford, Dave; Hickox, Charles; Kipp, Marlin; Hertel, Gene; Morgan, Hal; Ratzel, Arthur; Cragg, Clinton H.
2009-01-01
An assessment methodology is developed to guide quantitative predictions of adverse physical environments and the subsequent effects on the Ares-1 crew launch vehicle associated with the loss of containment of cryogenic liquid propellants from the upper stage during ascent. Development of the methodology is led by a team at Sandia National Laboratories (SNL) with guidance and support from a number of National Aeronautics and Space Administration (NASA) personnel. The methodology is based on the current Ares-1 design and feasible accident scenarios. These scenarios address containment failure from debris impact or structural response to pressure or blast loading from an external source. Once containment is breached, the envisioned assessment methodology includes predictions for the sequence of physical processes stemming from cryogenic tank failure. The investigative techniques, analysis paths, and numerical simulations that comprise the proposed methodology are summarized and appropriate simulation software is identified in this report.
NASA Astrophysics Data System (ADS)
Suzuki, Akihiro; Maeda, Keiichi
2018-04-01
We investigate broad-band emission from supernova ejecta powered by a relativistic wind from a central compact object. A recent two-dimensional hydrodynamic simulation studying the dynamical evolution of supernova ejecta with a central energy source has revealed that outermost layers of the ejecta are accelerated to mildly relativistic velocities because of the breakout of a hot bubble driven by the energy injection. The outermost layers decelerate as they sweep a circumstellar medium surrounding the ejecta, leading to the formation of the forward and reverse shocks propagating in the circumstellar medium and the ejecta. While the ejecta continue to release the internal energy as thermal emission from the photosphere, the energy dissipation at the forward and reverse shock fronts gives rise to non-thermal emission. We calculate light curves and spectral energy distributions of thermal and non-thermal emission from central engine powered supernova ejecta embedded in a steady stellar wind with typical mass loss rates for massive stars. The light curves are compared with currently available radio and X-ray observations of hydrogen-poor superluminous supernovae, as well as the two well-studied broad-lined Ic supernovae, 1998bw and 2009bb, which exhibit bright radio emission indicating central engine activities. We point out that upper limits on radio luminosities of nearby superluminous supernovae may indicate the injected energy is mainly converted to thermal radiation rather than creating mildly relativistic flows owing to photon diffusion time scales comparable to the injection time scale.
Recent advances in computational methodology for simulation of mechanical circulatory assist devices
Marsden, Alison L.; Bazilevs, Yuri; Long, Christopher C.; Behr, Marek
2014-01-01
Ventricular assist devices (VADs) provide mechanical circulatory support to offload the work of one or both ventricles during heart failure. They are used in the clinical setting as destination therapy, as bridge to transplant, or more recently as bridge to recovery to allow for myocardial remodeling. Recent developments in computational simulation allow for detailed assessment of VAD hemodynamics for device design and optimization for both children and adults. Here, we provide a focused review of the recent literature on finite element methods and optimization for VAD simulations. As VAD designs typically fall into two categories, pulsatile and continuous flow devices, we separately address computational challenges of both types of designs, and the interaction with the circulatory system with three representative case studies. In particular, we focus on recent advancements in finite element methodology that has increased the fidelity of VAD simulations. We outline key challenges, which extend to the incorporation of biological response such as thrombosis and hemolysis, as well as shape optimization methods and challenges in computational methodology. PMID:24449607
A methodology for thermodynamic simulation of high temperature, internal reforming fuel cell systems
NASA Astrophysics Data System (ADS)
Matelli, José Alexandre; Bazzo, Edson
This work presents a methodology for the simulation of fuel cells used for power production in small on-site power/cogeneration plants fueled by natural gas. The methodology contemplates thermodynamic and electrochemical aspects of molten carbonate and solid oxide fuel cells (MCFC and SOFC, respectively). Internal steam reforming of the natural gas hydrocarbons is considered for hydrogen production. From inputs such as cell potential, cell power, number of cells in the stack, ancillary system power consumption, reformed natural gas composition, and hydrogen utilization factor, the simulation gives the natural gas consumption, the anode and cathode gas stream temperatures and compositions, and the thermodynamic, electrochemical, and practical efficiencies. Both energetic and exergetic methods are considered for performance analysis. The results obtained from the natural gas reforming thermodynamics simulation show that hydrogen production is maximum around 700 °C for a steam/carbon ratio equal to 3. Consistent with the literature, the results indicate that the SOFC is more efficient than the MCFC.
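The efficiency bookkeeping for such a simulation can be illustrated with assumed operating numbers; the paper's methodology additionally covers the reforming equilibrium and exergy analysis:

```python
# Efficiency bookkeeping for a high-temperature stack (all operating
# numbers below are assumed for illustration).
F = 96485.0                      # C per mol of electrons
LHV_H2 = 241.8e3                 # J/mol, lower heating value of hydrogen
E_TN = 1.25                      # V, approx. thermoneutral voltage (LHV)

cell_potential = 0.75            # V at the operating point
n_cells = 120
current = 150.0                  # A per cell
u_fuel = 0.80                    # hydrogen utilization factor
aux_power = 2.0e3                # W consumed by ancillary systems

stack_power = cell_potential * current * n_cells        # W electric
h2_reacted = n_cells * current / (2.0 * F)              # mol/s
h2_supplied = h2_reacted / u_fuel                       # mol/s fed
eta_cell = cell_potential / E_TN                        # electrochemical
eta_net = (stack_power - aux_power) / (h2_supplied * LHV_H2)
print(f"stack {stack_power/1e3:.1f} kW, cell eff. {eta_cell:.2f}, "
      f"net practical eff. {eta_net:.2f}")
```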
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kong, Bo; Fox, Rodney O.; Feng, Heng
An Euler–Euler anisotropic Gaussian approach (EE-AG) for simulating gas–particle flows, in which particle velocities are assumed to follow a multivariate anisotropic Gaussian distribution, is used to perform mesoscale simulations of homogeneous cluster-induced turbulence (CIT). A three-dimensional Gauss–Hermite quadrature formulation is used to calculate the kinetic flux for 10 velocity moments in a finite-volume framework. The particle-phase volume-fraction and momentum equations are coupled with the Eulerian solver for the gas phase. This approach is implemented in an open-source CFD package, OpenFOAM, and detailed simulation results are compared with previous Euler–Lagrange simulations in a domain size study of CIT. Here, these results demonstrate that the proposed EE-AG methodology is able to produce comparable results to EL simulations, and this moment-based methodology can be used to perform accurate mesoscale simulations of dilute gas–particle flows.
pysimm: A Python Package for Simulation of Molecular Systems
NASA Astrophysics Data System (ADS)
Fortunato, Michael; Colina, Coray
pysimm, short for python simulation interface for molecular modeling, is a python package designed to facilitate structure generation and simulation of molecular systems through convenient, programmatic access to object-oriented representations of molecular system data. This poster presents the core features of pysimm and the design philosophies that highlight a generalized methodology for incorporating third-party software packages through API interfaces. The integration with the LAMMPS simulation package is described to demonstrate this methodology. pysimm began as a back-end python library that powered a cloud-based application on nanohub.org for amorphous polymer simulation; its extension from a specific application library to a general-purpose simulation interface is explained. Additionally, the poster highlights the rapid development of new applications to construct polymer chains with control over chain morphology, such as molecular weight distribution and monomer composition.
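The API-interface philosophy can be sketched with a deliberately hypothetical object model; the class and method names below are invented for illustration and are not pysimm's actual interface:

```python
# An object-oriented system representation plus a thin layer that
# serializes it for a third-party engine (here, LAMMPS-style input).
class Particle:
    def __init__(self, ptype, x, y, z):
        self.ptype, self.pos = ptype, (x, y, z)

class MolecularSystem:
    def __init__(self):
        self.particles = []

    def add(self, particle):
        self.particles.append(particle)

    def to_lammps_data(self):
        """Serialize the object model into a LAMMPS-style data block."""
        lines = [f"{len(self.particles)} atoms", "", "Atoms", ""]
        for i, p in enumerate(self.particles, 1):
            lines.append(f"{i} {p.ptype} {' '.join(map(str, p.pos))}")
        return "\n".join(lines)

s = MolecularSystem()
s.add(Particle(1, 0.0, 0.0, 0.0))
s.add(Particle(1, 1.5, 0.0, 0.0))
print(s.to_lammps_data())
```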
Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher
2013-01-01
Internationally, health information system (HIS) safety has emerged as a significant concern for governments. Research has recently documented the ability of HIS to be implicated in patient harm and death. Researchers have attempted to develop methods that can be used to prevent or reduce technology-induced errors, and some are developing methods that can be employed prior to a system's release, including safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of a HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validating these heuristics using clinical simulations. PMID:23606902
An automated methodology development. [software design for combat simulation
NASA Technical Reports Server (NTRS)
Hawley, L. R.
1985-01-01
The design methodology employed in testing the applicability of Ada in large-scale combat simulations is described. Ada was considered as a substitute for FORTRAN to lower life-cycle costs and ease program development efforts. An object-oriented approach was taken, which featured definitions of military targets, the capability of manipulating their condition in real time, and one-to-one correspondence between object states and real-world states. The simulation design process was automated by the problem statement language (PSL)/problem statement analyzer (PSA). The PSL/PSA system accessed the problem database directly to enhance code efficiency by, e.g., eliminating unused subroutines, and provided for automated report generation, besides allowing for functional and interface descriptions. The ways in which the methodology satisfied the responsiveness, reliability, transportability, modifiability, timeliness, and efficiency goals are discussed.
ESP v1.0: Methodology for Exploring Emission Impacts of Future Scenarios in the United States
This article presents a methodology for creating anthropogenic emission inventories that can be used to simulate future regional air quality. The Emission Scenario Projection (ESP) methodology focuses on energy production and use, the principal sources of many air pollutants. Emi...
NASA Technical Reports Server (NTRS)
Dec, John A.; Braun, Robert D.
2011-01-01
A finite element ablation and thermal response program is presented for the simulation of three-dimensional transient thermostructural analysis. The three-dimensional governing differential equations and finite element formulation are summarized. A novel probabilistic design methodology for thermal protection systems is presented. The design methodology is an eight-step process beginning with a parameter sensitivity study, followed by a deterministic analysis whereby an optimum design can be determined. The design process concludes with a Monte Carlo simulation in which the probabilities of exceeding design specifications are estimated. The design methodology is demonstrated by applying it to the carbon phenolic compression pads of the Crew Exploration Vehicle. The maximum allowed values of bondline temperature and tensile stress are used as the design specifications in this study.
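To illustrate the final Monte Carlo step of such a design process, here is a minimal sketch with a made-up response surface and notional input distributions, standing in for the paper's finite element model and CEV compression-pad data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# uncertain inputs (illustrative distributions, normalized to their nominals)
heat_flux = rng.normal(1.0, 0.08, N)       # aeroheating multiplier
thickness = rng.normal(1.0, 0.05, N)       # ablator thickness multiplier
conductivity = rng.normal(1.0, 0.10, N)    # char conductivity multiplier

# stand-in response surfaces for bondline temperature and tensile stress
T_bond = 500.0 * heat_flux * conductivity / thickness     # K, notional
sigma = 40.0 * heat_flux / thickness**2                   # MPa, notional

# probability of exceeding either design specification
p_fail = np.mean((T_bond > 560.0) | (sigma > 50.0))
print(f"P(exceed a design specification) ~ {p_fail:.4f}")
```

In the real workflow the closed-form response surfaces would be replaced by calls to (or surrogates of) the thermostructural solver; the exceedance-counting logic is the same.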
Validating agent oriented methodology (AOM) for netlogo modelling and simulation
NASA Astrophysics Data System (ADS)
WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan
2017-10-01
AOM (Agent Oriented Modeling) is a comprehensive and unified agent methodology for agent oriented software development. The AOM methodology was proposed to aid developers by introducing techniques, terminology, notation, and guidelines during agent system development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and the adoption of AOM is still in its infancy. Among the reasons is that there are not many case studies or success stories for AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation. It demonstrates how AOM is useful for epidemiology and ecology studies, and hence further validates AOM in a qualitative manner.
Grau, P; Vanrolleghem, P; Ayesa, E
2007-01-01
In this paper, a new methodology for integrated modelling of the WWTP has been used for the construction of the Benchmark Simulation Model No. 2 (BSM2). The transformations approach proposed in this methodology does not require the development of specific transformers to interface unit process models, and it allows the construction of tailored models for a particular WWTP while guaranteeing mass and charge continuity for the whole model. The BSM2 PWM, constructed as a case study, is evaluated by means of simulations under different scenarios, and its validity in reproducing the water and sludge lines of a WWTP is demonstrated. Furthermore, the advantages that this methodology presents compared with other approaches to integrated modelling are verified in terms of flexibility and coherence.
CAGE IIIA Distributed Simulation Design Methodology
2014-05-01
... Implementing Defence Experimentation (GUIDEx). The key challenges for this methodology are with understanding how to design it, define the ... operation, and be available in the other nation's simulations. The challenge for the CAGE campaign of experiments is to continue to build upon this ...
A new method for qualitative simulation of water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Camara, A. S.; Pinheiro, M.; Antunes, M. P.; Seixas, M. J.
1987-11-01
A new dynamic modeling methodology, SLIN (Simulação Linguistica), allowing for the analysis of systems defined by linguistic variables, is presented. SLIN applies a set of logical rules, avoiding fuzzy-theoretic concepts. Logical rules are likewise used to make the transition from qualitative to quantitative modes. Extensions of the methodology to simulation-optimization applications and multiexpert system modeling are also discussed.
Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions
NASA Astrophysics Data System (ADS)
De Risi, Raffaele; Goda, Katsuichiro
2017-08-01
Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is, the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
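A minimal sketch of the empirical hazard-curve step that the abstract contrasts with its Bayesian fit: simulated maximum inundation depths, each tied to an annual source rate, are converted into a mean annual rate of exceedance curve. The toy catalogue below is invented.

```python
import numpy as np

def hazard_curve(depths, event_rates, im_levels):
    """Mean annual rate at which inundation depth exceeds each intensity level."""
    depths, event_rates = np.asarray(depths), np.asarray(event_rates)
    return np.array([event_rates[depths > im].sum() for im in im_levels])

# toy catalogue: simulated depths (m) at one site and annual rates per source
depths = [0.2, 0.5, 1.1, 2.4, 3.0, 4.8]
rates = [1/50, 1/100, 1/200, 1/500, 1/1000, 1/5000]
levels = np.linspace(0.0, 5.0, 11)
for im, lam in zip(levels, hazard_curve(depths, rates, levels)):
    print(f"depth > {im:3.1f} m : lambda = {lam:.5f} /yr")
```

The Bayesian "robust" variant described in the abstract would replace the raw exceedance counts with a fitted parametric hazard curve plus a credible interval, which is what allows far fewer simulated events.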
Verma, Nishant; Beretvas, S Natasha; Pascual, Belen; Masdeu, Joseph C; Markey, Mia K
2015-11-12
As currently used, the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog) has low sensitivity for measuring Alzheimer's disease progression in clinical trials. A major reason behind the low sensitivity is its sub-optimal scoring methodology, which can be improved to obtain better sensitivity. Using item response theory, we developed a new scoring methodology (ADAS-CogIRT) for the ADAS-Cog, which addresses several major limitations of the current scoring methodology. The sensitivity of the ADAS-CogIRT methodology was evaluated using clinical trial simulations as well as a negative clinical trial, which had shown evidence of a treatment effect. The ADAS-Cog was found to measure impairment in three cognitive domains: memory, language, and praxis. The ADAS-CogIRT methodology required significantly fewer patients and shorter trial durations than the current scoring methodology when both were evaluated in simulated clinical trials. When validated on data from a real clinical trial, the ADAS-CogIRT methodology had higher sensitivity than the current scoring methodology in detecting the treatment effect. The proposed scoring methodology significantly improves the sensitivity of the ADAS-Cog in measuring progression of cognitive impairment in clinical trials focused on the mild-to-moderate Alzheimer's disease stage. This boosts the efficiency of clinical trials, requiring fewer patients and shorter durations for investigating disease-modifying treatments.
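To show the flavor of IRT-based scoring, here is a minimal sketch: a unidimensional two-parameter logistic (2PL) maximum-likelihood ability estimate for dichotomous items. The real instrument is multidimensional with polytomous items, and the item parameters below are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])   # item discriminations (assumed)
b = np.array([-0.5, 0.0, 0.7, 1.2])  # item difficulties (assumed)
resp = np.array([1, 1, 0, 0])        # one patient's correct/incorrect pattern

def neg_log_lik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probabilities
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

theta_hat = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x
print(f"estimated ability/impairment score theta = {theta_hat:.3f}")
```

Unlike a raw sum score, the latent score weights items by how discriminating they are, which is the mechanism behind the sensitivity gain the abstract reports.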
Simulation of flashing signal operations.
DOT National Transportation Integrated Search
1982-01-01
Various guidelines that have been proposed for the operation of traffic signals in the flashing mode were reviewed. The use of existing traffic simulation procedures to evaluate flashing signals was examined and a study methodology for simulating and...
Simulation modeling of route guidance concept
DOT National Transportation Integrated Search
1997-01-01
The methodology of a simulation model developed at the University of New South Wales, Australia, for the evaluation of performance of Dynamic Route Guidance Systems (DRGS) is described. The microscopic simulation model adopts the event update simulat...
Monitoring AGNs with Hbeta Asymmetry with the Wyoming Infra-Red Observatory
NASA Astrophysics Data System (ADS)
Brotherton, Michael S.; Du, Pu; Wang, Jian-Min; Wang, Kai; Huang, Zhengpeng; Hu, Chen; Li, Yan-rong; Kasper, David H.; Chick, William T.; Nguyen, My L.; Maithil, Jaya; Hand, Derek; Bai, Jin-Ming; Ho, Luis
2018-06-01
We present preliminary results from two seasons of reverberation mapping of AGNs using the optical long-slit spectrograph on the 2.3 meter WIRO telescope. The majority of the sample is part of our "Monitoring AGNs with Hbeta Asymmetry" project, also known as MAHA, which targets rarer AGNs with extremely asymmetric profiles that may provide new insights into the full diversity of size and structure of the broad-line region (BLR). Our hundreds of nights of telescope time provide dozens of epochs of spectra for approximately two dozen objects. Notably, we find that many AGNs with broader asymmetric Hbeta emission lines possess time lags significantly shorter than expected for their luminosity, in comparison to the majority of reverberation-mapped AGNs.
The Future Impact of Wind on BPA Power System Load Following and Regulation Requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Lu, Shuai; McManus, Bart
Wind power is growing at a very fast pace as an alternative generating resource. As the ratio of wind power to total system capacity increases, the impact of wind on various system aspects becomes significant. This paper presents a methodology to study the future impact of wind on BPA power system load following and regulation requirements. Existing methodologies for similar analyses include dispatch model simulation and standard deviation evaluation on load and wind data. The methodology proposed in this paper uses historical data and stochastic processes to simulate the load balancing processes in the BPA power system. It mimics actual power system operations, so the results are close to reality, yet a study based on this methodology is convenient to perform. The capacity, ramp rate, and ramp duration characteristics are extracted from the simulation results. System load following and regulation capacity requirements are calculated accordingly. The ramp rate and ramp duration data obtained from the analysis can be used to evaluate generator response or maneuverability requirements and regulating units' energy requirements, respectively.
NASA Astrophysics Data System (ADS)
Fulkerson, David E.
2010-02-01
This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.
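The abstract does not spell out the circuit-level strike model, but a common surrogate in SPICE-based single-event analysis is a double-exponential current pulse injected at the struck node, with deposited charge proportional to LET. The sketch below illustrates that standard approach; the time constants and collection depth are assumptions, not the paper's values.

```python
import numpy as np

def strike_current(t, let_mev_cm2_mg, depth_um=2.0, tau_r=5e-12, tau_f=2e-10):
    """Double-exponential ion-strike current (A); t in seconds."""
    # ~10.8 fC deposited per um of silicon per MeV cm^2/mg of LET
    q_coll = 10.8e-15 * let_mev_cm2_mg * depth_um        # collected charge, C
    i0 = q_coll / (tau_f - tau_r)                        # normalizing amplitude
    return i0 * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

t = np.linspace(0.0, 1e-9, 2001)
i = strike_current(t, let_mev_cm2_mg=10.0)
charge = np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t))     # trapezoid rule
print(f"peak current {i.max()*1e3:.2f} mA, collected charge {charge*1e15:.0f} fC")
```

Sweeping the LET of the injected pulse until the latch or flip-flop upsets is how a threshold LET like the one the paper reports is extracted from simple circuit simulations.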
A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used to identify parts of the code where cache utilization improvements may significantly impact overall performance. Cache simulation based on trace-driven techniques can then be carried out without gathering detailed address traces. The minimal runtime information needed for modeling the cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for exploring various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply it to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
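To illustrate the trace-free idea, the sketch below regenerates the address stream of a selected code block (a naive matrix multiply, matching the paper's validation example) from base addresses and loop bounds alone, exactly the runtime information the abstract lists, and feeds it to a small direct-mapped cache model. The cache geometry and addresses are invented.

```python
N = 32                      # loop bound (matrix dimension)
WORD = 8                    # bytes per array element
LINE, NLINES = 64, 256      # 16 KB direct-mapped cache (illustrative)

A_BASE, B_BASE, C_BASE = 0x10000, 0x20000, 0x30000   # base virtual addresses
tags = [None] * NLINES
hits = misses = 0

def access(addr):
    """Look up one address in the direct-mapped cache model."""
    global hits, misses
    line = addr // LINE
    idx, tag = line % NLINES, line // NLINES
    if tags[idx] == tag:
        hits += 1
    else:
        tags[idx] = tag
        misses += 1

# regenerate the address stream of the i-j-k matrix multiply, no trace needed
for i in range(N):
    for j in range(N):
        for k in range(N):
            access(A_BASE + (i * N + k) * WORD)   # A[i][k]
            access(B_BASE + (k * N + j) * WORD)   # B[k][j]
        access(C_BASE + (i * N + j) * WORD)       # C[i][j]

print(f"miss ratio = {misses / (hits + misses):.3f}")
```

Re-running the model with a transposed access pattern or a blocked loop nest is the "what-if" experiment the methodology is meant to make cheap.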
Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Sharpley, Robert C.
1999-01-01
This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.
Numerical characteristics of quantum computer simulation
NASA Astrophysics Data System (ADS)
Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.
2016-12-01
The simulation of quantum circuits is significantly important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality; thus the usage of modern high-performance parallel computations is relevant. As is well known, arbitrary quantum computation in the circuit model can be done using only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We investigate how the unique properties of quantum systems lead to the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while on the other hand quantum entanglement leads to a problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability, and more specific dynamic characteristics) parts. The experimental part was carried out using the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good basis for the research and testing of development methods for data-intensive parallel software, and the considered methodology of analysis can be successfully used for the improvement of algorithms in quantum information science.
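A minimal sketch of the computational kernel under analysis: applying single- and two-qubit gates to a 2^n state vector. The strided, non-local access pattern of the two-qubit case is exactly where the locality problem the abstract mentions arises.

```python
import numpy as np

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector (in place)."""
    psi = state.reshape([2] * n)
    psi_q = np.moveaxis(psi, q, 0)            # bring the target axis to the front
    psi_q[...] = np.tensordot(gate, psi_q, axes=([1], [0]))
    return state

def apply_2q(state, gate, q1, q2, n):
    """Apply a 4x4 gate to qubits q1, q2 (entangling, strided access)."""
    psi = state.reshape([2] * n)
    psi_q = np.moveaxis(psi, (q1, q2), (0, 1))
    shape = psi_q.shape
    psi_q[...] = (gate @ psi_q.reshape(4, -1)).reshape(shape)
    return state

n = 10
state = np.zeros(2**n, dtype=complex); state[0] = 1.0
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]
apply_1q(state, H, 0, n)
apply_2q(state, CNOT, 0, 1, n)        # Bell pair on qubits 0 and 1
print(np.round(state[state != 0], 3))
```

For a gate on qubit q the same 2x2 update is applied independently to 2^(n-1) amplitude pairs (the "quantum parallelism" of the simulation), but the pair stride is 2^(n-1-q), so distant qubits touch memory locations far apart.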
Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui
2017-12-01
Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions on the CFD simulation results. However, existing CFD validation approaches do not quantify the error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
NASA Astrophysics Data System (ADS)
Michalik, Peter; Mital, Dusan; Zajac, Jozef; Brezikova, Katarina; Duplak, Jan; Hatala, Michal; Radchenko, Svetlana
2016-10-01
This article addresses the use of intelligent relays and PLC systems in practice, their architecture, and the principles of their programming and simulation for the educational process at all types of schools, from secondary schools to universities. The aim of the article is to propose simple example applications that demonstrate a programming methodology on real, simple practical examples and show the use of selected instructions. The practical part describes the process of creating schematics and function blocks, along with methodologies for creating programs and simulating output reactions to changing inputs for intelligent relays.
An automated procedure for developing hybrid computer simulations of turbofan engines
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Krosel, S. M.
1980-01-01
A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all of the calculations and data manipulations needed to transform user-supplied engine design information into a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design point information. A test case is described, and comparisons between the hybrid simulation and specified engine performance data are presented.
NASA Astrophysics Data System (ADS)
Wang, C.; Winterfeld, P. H.; Wu, Y. S.; Wang, Y.; Chen, D.; Yin, C.; Pan, Z.
2014-12-01
Hydraulic fracturing combined with horizontal drilling has made it possible to economically produce natural gas from unconventional shale gas reservoirs. An efficient methodology for evaluating hydraulic fracturing operation parameters, such as fluid and proppant properties, injection rates, and wellhead pressure, is essential for the evaluation and efficient design of these processes. Traditional numerical evaluation and optimization approaches are usually based on simulated fracture properties such as the fracture area. In our opinion, a methodology based on simulated production data is better, because production is the goal of hydraulic fracturing and we can calibrate this approach with production data that is already known. This numerical methodology requires a fully-coupled hydraulic fracture propagation and multi-phase flow model. In this paper, we present a general fully-coupled numerical framework to simulate hydraulic fracturing and post-fracture gas well performance. This three-dimensional, multi-phase simulator focuses on: (1) fracture width increase and fracture propagation that occurs as slurry is injected into the fracture, (2) erosion caused by fracture fluids and leakoff, (3) proppant subsidence and flowback, and (4) multi-phase fluid flow through various-scaled anisotropic natural and man-made fractures. Mathematical and numerical details on how to fully couple the fracture propagation and fluid flow parts are discussed. Hydraulic fracturing and production operation parameters, and properties of the reservoir, fluids, and proppants, are taken into account. The well may be horizontal, vertical, or deviated, as well as open-hole or cemented. The simulator is verified based on benchmarks from the literature and we show its application by simulating fracture network (hydraulic and natural fractures) propagation and production data history matching of a field in China. We also conduct a series of real-data modeling studies with different combinations of hydraulic fracturing parameters and present the methodology to design these operations with feedback of simulated production data. The unified model aids in the optimization of hydraulic fracturing design, operations, and production.
Real time flight simulation methodology
NASA Technical Reports Server (NTRS)
Parrish, E. A.; Cook, G.; Mcvey, E. S.
1977-01-01
Substitutional methods for digitization, input signal-dependent integrator approximations, and digital autopilot design were developed. The software framework of a simulator design package is described. Included are subroutines for iterative designs of simulation models and a rudimentary graphics package.
Cognitive simulators for medical education and training.
Kahol, Kanav; Vankipuram, Mithra; Smith, Marshall L
2009-08-01
Simulators for honing procedural skills (such as surgical skills and central venous catheter placement) have proven to be valuable tools for medical educators and students. While such simulations represent an effective paradigm in surgical education, there is an opportunity to add a layer of cognitive exercises to these basic simulations that can facilitate robust skill learning in residents. This paper describes a controlled methodology, inspired by neuropsychological assessment tasks and embodied cognition, for developing cognitive simulators for laparoscopic surgery. These simulators provide psychomotor skill training and offer the additional challenge of accomplishing cognitive tasks in realistic environments. A generic framework for the design, development, and evaluation of such simulators is described. The presented framework is generalizable and can be applied to different task domains; it is independent of the types of sensors, simulation environment, and feedback mechanisms that the simulators use. A proof of concept of the framework is provided through the development of a simulator that adds cognitive variations to a basic psychomotor task. The results of two pilot studies are presented that show the validity of the methodology in providing effective evaluation and learning environments for surgeons.
Teaching and assessing procedural skills using simulation: metrics and methodology.
Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C
2008-11-01
Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.
Piloted Evaluation of an Integrated Methodology for Propulsion and Airframe Control Design
NASA Technical Reports Server (NTRS)
Bright, Michelle M.; Simon, Donald L.; Garg, Sanjay; Mattern, Duane L.; Ranaudo, Richard J.; Odonoghue, Dennis P.
1994-01-01
An integrated methodology for propulsion and airframe control has been developed and evaluated for a Short Take-Off Vertical Landing (STOVL) aircraft using a fixed base flight simulator at NASA Lewis Research Center. For this evaluation the flight simulator is configured for transition flight using a STOVL aircraft model, a full nonlinear turbofan engine model, simulated cockpit and displays, and pilot effectors. The paper provides a brief description of the simulation models, the flight simulation environment, the displays and symbology, the integrated control design, and the piloted tasks used for control design evaluation. In the simulation, the pilots successfully completed typical transition phase tasks such as combined constant deceleration with flight path tracking, and constant acceleration wave-off maneuvers. The pilot comments on the integrated system performance and the display symbology are discussed and analyzed to identify potential areas of improvement.
GPS system simulation methodology
NASA Technical Reports Server (NTRS)
Ewing, Thomas F.
1993-01-01
The following topics are presented: background; Global Positioning System (GPS) methodology overview; the graphical user interface (GUI); current models; application to space nuclear power/propulsion; and interfacing requirements. The discussion is presented in vugraph form.
Real time simulation of computer-assisted sequencing of terminal area operations
NASA Technical Reports Server (NTRS)
Dear, R. G.
1981-01-01
A simulation was developed to investigate the use of computer-assisted decision making for the task of sequencing and scheduling aircraft in a high-density terminal area. The simulation incorporates a decision methodology termed Constrained Position Shifting, which accounts for aircraft velocity profiles, routes, and weight classes in dynamically sequencing and scheduling arriving aircraft. A sample demonstration of Constrained Position Shifting is presented in which six aircraft types (including both light and heavy aircraft) are sequenced to land at Denver's Stapleton International Airport. A graphical display is utilized, and Constrained Position Shifting with a maximum shift of four positions (rearward or forward) is compared to first-come, first-served order with respect to arrival at the runway. The implementation of computer-assisted sequencing and scheduling methodologies is investigated. A time-based control concept will be required, and design considerations for such a system are discussed.
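A minimal sketch of the Constrained Position Shifting idea: enumerate landing orders in which no aircraft moves more than a fixed number of positions from its first-come-first-served order, and keep the order minimizing total delay. The separation times, ETAs, and weight classes below are invented, not Stapleton data.

```python
from itertools import permutations

# required time separations (s), keyed on (leader, follower) weight class
sep = {("H", "H"): 96, ("H", "L"): 181, ("L", "H"): 72, ("L", "L"): 72}

def schedule(order, eta, wclass):
    """Earliest landing times honoring ETAs and pairwise separations."""
    t, times = None, []
    for prev, cur in zip((None,) + order, order):
        t = eta[cur] if prev is None else max(eta[cur], t + sep[(wclass[prev], wclass[cur])])
        times.append(t)
    return times

def cps(eta, wclass, max_shift=4):
    """Best order with every aircraft within max_shift of its FCFS position."""
    fcfs = tuple(sorted(eta, key=eta.get))
    best, best_delay = None, float("inf")
    for order in permutations(fcfs):
        if any(abs(order.index(a) - fcfs.index(a)) > max_shift for a in order):
            continue
        delay = sum(t - eta[a] for a, t in zip(order, schedule(order, eta, wclass)))
        if delay < best_delay:
            best, best_delay = order, delay
    return best, best_delay

eta = {"A1": 0, "A2": 60, "A3": 90, "A4": 120, "A5": 150, "A6": 200}
wclass = {"A1": "H", "A2": "L", "A3": "L", "A4": "H", "A5": "L", "A6": "L"}
print(cps(eta, wclass))
```

The position-shift bound is what keeps the optimization fair to individual flights (no one is pushed arbitrarily far back) while still letting the scheduler break up costly heavy-behind-light pairings; a production implementation would use dynamic programming rather than brute-force enumeration.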
Wehbe-Janek, Hania; Hochhalter, Angela K; Castilla, Theresa; Jo, Chanhee
2015-02-01
Patient engagement in health care is increasingly recognized as essential for promoting the health of individuals and populations. This study pilot tested the standardized clinician (SC) methodology, a novel adaptation of standardized patient methodology, for teaching patient engagement skills for the complex health care situation of transitioning from a hospital back to home. Sixty-seven participants at heightened risk for hospitalization were randomly assigned to either a simulation exposure-only group or a full-intervention group. Both groups participated in simulation scenarios with "standardized clinicians" around tasks related to hospital discharge and follow-up. The full-intervention group was also debriefed after scenario sets and learned about tools for actively participating in hospital-to-home transitions. Measures included changes in observed behaviors at baseline and follow-up and an overall program evaluation. The full-intervention group showed increases in observed tool possession (P = 0.014) and expression of their preferences and values (P = 0.043). The simulation exposure-only group showed improvement in worksheet scores (P = 0.002) and fewer engagement skills (P = 0.021). Both groups showed a decrease in telling an SC about their hospital admission (P < 0.05). Open-ended comments from the program evaluation were largely positive. Both groups benefited from exposure to the SC intervention. Program evaluation data suggest that simulation training is feasible and may provide a useful methodology for teaching patient skills for active engagement in health care. Future studies are warranted to determine if this methodology can be used to assess overall patient engagement and whether new patient learning transfers to health care encounters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Lu, Shuai
2008-07-15
This report presents a methodology developed to study the future impact of wind on BPA power system load following and regulation requirements. The methodology uses historical data and stochastic processes to simulate the load balancing processes in the BPA power system, mimicking actual power system operations. Therefore, the results are close to reality, yet a study based on this methodology is convenient to conduct. Existing methodologies for similar analyses include dispatch model simulation and standard deviation evaluation on load and wind data. Dispatch model simulation is constrained by the design of the dispatch program, and standard deviation evaluation is artificial in separating the load following and regulation requirements, both of which usually do not reflect actual operational practice. The methodology used in this study provides not only capacity requirement information; it also analyzes the ramp rate requirements for system load following and regulation processes. The ramp rate data can be used to evaluate generator response/maneuverability requirements, which is another necessary capability of the generation fleet for the smooth integration of wind energy. The study results are presented in an innovative way such that the increased generation capacity or ramp requirements are compared for two different years, across 24 hours a day. Therefore, the impact of different levels of wind energy on generation requirements at different times can be easily visualized.
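A minimal sketch of the decomposition idea behind such studies: split a minute-resolution net load (load minus wind) series into an hourly schedule, a load-following component, and a regulation residual, then read off capacity and ramp requirements. The averaging windows and the synthetic series are illustrative, not BPA's exact practice or data.

```python
import numpy as np

rng = np.random.default_rng(1)
minutes = 24 * 60
t = np.arange(minutes)
net_load = (3000 + 800 * np.sin(2 * np.pi * t / minutes)       # daily shape, MW
            + rng.normal(0, 30, minutes).cumsum() * 0.05)      # wind-like drift

hourly = net_load.reshape(24, 60).mean(axis=1)
schedule = np.repeat(hourly, 60)                               # block hourly schedule
smooth = np.convolve(net_load, np.ones(10) / 10, mode="same")  # 10-min rolling mean

load_following = smooth - schedule      # slow deviations the dispatch must follow
regulation = net_load - smooth          # fast residual handled by regulation (AGC)

for name, x in [("load following", load_following), ("regulation", regulation)]:
    ramp = np.diff(x)                   # MW per minute
    print(f"{name:15s} capacity +/-{np.percentile(np.abs(x), 99):6.1f} MW, "
          f"ramp +/-{np.percentile(np.abs(ramp), 99):5.2f} MW/min")
```

Running the same decomposition on historical data and on data with scaled-up wind is what produces the year-over-year, hour-by-hour requirement comparisons the report describes.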
NASA Astrophysics Data System (ADS)
Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya
2009-03-01
The continued transistor scaling efforts, toward smaller devices with similar (or larger) drive current per um and faster operation, increase the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators like SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy increases and leakage becomes more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. Our methodology to predict changes in device performance due to systematic lithography and etch effects was used in this paper. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour-Extraction (ECE) from transistors along the manufacturing flow, including any image distortions such as line-end shortening, corner rounding, and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide, ahead of time, physical and electrical statistical data, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14 um2 6T-SRAM, manufactured using the Tower Standard Logic for General Purposes Platform. Four out of the six transistors used a "U-Shape AA", known to have higher variability. The predicted electrical performance of the transistors, in terms of nominal values and variability of drive current and leakage current, is presented. We also used the methodology to analyze an entire SRAM block array. A study of isolation leakage and variability is also presented.
Geurtzen, Rosa; Hogeveen, Marije; Rajani, Anand K; Chitkara, Ritu; Antonius, Timothy; van Heijst, Arno; Draaisma, Jos; Halamek, Louis P
2014-06-01
Prenatal counseling at the threshold of viability is a challenging yet critically important activity, and care guidelines differ across cultures. Studying how this task is performed in the actual clinical environment is extremely difficult. In this pilot study, we used simulation as a methodology with 2 aims as follows: first, to explore the use of simulation incorporating a standardized pregnant patient as an investigative methodology and, second, to determine similarities and differences in content and style of prenatal counseling between American and Dutch neonatologists. We compared counseling practice between 11 American and 11 Dutch neonatologists, using a simulation-based investigative methodology. All subjects performed prenatal counseling with a simulated pregnant patient carrying a fetus at the limits of viability. The following elements of scenario design were standardized across all scenarios: layout of the physical environment, details of the maternal and fetal histories, questions and responses of the standardized pregnant patient, and the time allowed for consultation. American subjects typically presented several treatment options without bias, whereas Dutch subjects were more likely to explicitly advise a specific course of treatment (emphasis on partial life support). American subjects offered comfort care more frequently than the Dutch subjects and also discussed options for maximal life support more often than their Dutch colleagues. Simulation is a useful research methodology for studying activities difficult to assess in the actual clinical environment such as prenatal counseling at the limits of viability. Dutch subjects were more directive in their approach than their American counterparts, offering fewer options for care and advocating for less invasive interventions. American subjects were more likely to offer a wider range of therapeutic options without providing a recommendation for any specific option.
Probabilistic Simulation of Stress Concentration in Composite Laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, D. G.
1994-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors (SCF's) in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, whereas the finite element analysis is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate SCF's, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the SCF's in three different composite laminates. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the SCF's are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
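A minimal sketch of the coupling described: sample uncertain ply properties and load eccentricity, push each sample through a structural model for the SCF, and build its distribution and sensitivity factors. Here a closed-form orthotropic open-hole SCF stands in for the finite element analysis, and all distributions are notional.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000

E1 = rng.normal(140e9, 7e9, N)       # fiber-direction modulus, Pa (assumed)
E2 = rng.normal(10e9, 0.8e9, N)      # transverse modulus, Pa (assumed)
G12 = rng.normal(5e9, 0.5e9, N)      # in-plane shear modulus, Pa (assumed)
nu12 = rng.normal(0.30, 0.02, N)
ecc = rng.normal(0.0, 0.02, N)       # normalized load eccentricity (assumed)

# classical orthotropic open-hole SCF (infinite plate) + a notional
# eccentricity penalty standing in for the test-method uncertainties
scf = 1 + np.sqrt(2 * (np.sqrt(E1 / E2) - nu12) + E1 / G12) + 3.0 * np.abs(ecc)

print(f"mean SCF {scf.mean():.2f}, 99th percentile {np.percentile(scf, 99):.2f}")
for name, x in [("E1", E1), ("E2", E2), ("G12", G12), ("ecc", np.abs(ecc))]:
    print(f"sensitivity to {name}: corr = {np.corrcoef(x, scf)[0, 1]:+.2f}")
```

The correlation coefficients at the end play the role of the sensitivity factors the abstract mentions: they rank which uncertain inputs drive the SCF scatter.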
Mesoscopic Simulations of Crosslinked Polymer Networks
NASA Astrophysics Data System (ADS)
Megariotis, Grigorios; Vogiatzis, Georgios G.; Schneider, Ludwig; Müller, Marcus; Theodorou, Doros N.
2016-08-01
A new methodology and the corresponding C++ code for mesoscopic simulations of elastomers are presented. The test system, crosslinked cis-1,4-polyisoprene, is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn segments. From the thermodynamic point of view, the system is described by a Helmholtz free energy containing contributions from entropic springs between successive beads along a chain, slip-springs representing entanglements between beads on different chains, and non-bonded interactions. The methodology is employed for the calculation of the stress relaxation function from equilibrium simulations of several microseconds, as well as for the prediction of stress-strain curves of crosslinked polymer networks under deformation.
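A minimal sketch (in Python rather than the authors' C++) of the core Brownian Dynamics move: an overdamped Euler-Maruyama step for a chain of coarse-grained beads joined by entropic springs, on top of which slip-spring hops and non-bonded forces would be added. All parameters are in reduced units and invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_beads, k_spring, zeta, kT, dt = 50, 1.0, 1.0, 1.0, 1e-3

def bd_step(r):
    """Euler-Maruyama update: entropic spring forces + random kicks."""
    f = np.zeros_like(r)
    bond = r[1:] - r[:-1]
    f[:-1] += k_spring * bond            # pull each bead toward its successor
    f[1:] -= k_spring * bond             # and back toward its predecessor
    noise = np.sqrt(2 * kT * dt / zeta) * rng.standard_normal(r.shape)
    return r + dt * f / zeta + noise

r = rng.standard_normal((n_beads, 3))    # initial chain configuration
for _ in range(10_000):
    r = bd_step(r)
print(f"mean-square end-to-end distance: {np.sum((r[-1] - r[0])**2):.2f}")
```

In the full methodology the kinetic Monte Carlo part stochastically creates, destroys, and slides the slip-springs between such BD sweeps, which is what produces entangled-melt stress relaxation.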
Q-Sample Construction: A Critical Step for a Q-Methodological Study.
Paige, Jane B; Morin, Karen H
2016-01-01
Q-sample construction is a critical step in Q-methodological studies. Prior to conducting Q-studies, researchers start with a population of opinion statements (the concourse) on a particular topic of interest, from which a sample is drawn. These sampled statements are known as the Q-sample. Although literature exists on the methodological processes used to conduct Q-methodological studies, limited guidance exists on the practical steps needed to reduce the population of statements to a Q-sample. A case exemplar illustrates the steps to construct a Q-sample in preparation for a study that explored the perspectives nurse educators and nursing students hold about simulation design. Experts in simulation and Q-methodology evaluated the Q-sample for readability, clarity, and representativeness of the opinions contained within the concourse. The Q-sample was piloted, and feedback resulted in statement refinement. Researchers, especially those undertaking Q-method studies for the first time, may benefit from the practical considerations for constructing a Q-sample offered in this article.
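One mechanical piece of the reduction step can be made concrete: drawing a balanced Q-sample proportionally across the themes of the concourse after screening. The sketch below is a generic illustration with invented themes and counts, not the article's procedure.

```python
import random

random.seed(7)
# a screened concourse, grouped by theme (illustrative placeholders)
concourse = {
    "fidelity":   [f"fidelity statement {i}" for i in range(24)],
    "objectives": [f"objectives statement {i}" for i in range(18)],
    "debriefing": [f"debriefing statement {i}" for i in range(12)],
    "realism":    [f"realism statement {i}" for i in range(6)],
}

def q_sample(concourse, target=40):
    """Draw a proportional number of statements from each theme."""
    total = sum(len(v) for v in concourse.values())
    sample = []
    for theme, statements in concourse.items():
        k = round(target * len(statements) / total)   # proportional allocation
        sample.extend(random.sample(statements, k))
    return sample

qs = q_sample(concourse)
print(f"{len(qs)} statements drawn from a concourse of "
      f"{sum(map(len, concourse.values()))}")
```

Expert review and piloting, as the abstract emphasizes, then refine the sampled wording; the allocation step only guarantees that each theme keeps its share of the final Q-sample.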
Educational Validity of Business Gaming Simulation: A Research Methodology Framework
ERIC Educational Resources Information Center
Stainton, Andrew J.; Johnson, Johnnie E.; Borodzicz, Edward P.
2010-01-01
Many past educational validity studies of business gaming simulation, and more specifically total enterprise simulation, have been inconclusive. Studies have focused on the weaknesses of business gaming simulation; which is often regarded as an educational medium that has limitations regarding learning effectiveness. However, no attempts have been…
Integrated corridor management analysis, modeling and simulation (AMS) methodology.
DOT National Transportation Integrated Search
2008-03-01
This AMS Methodologies Document provides a discussion of potential ICM analytical approaches for the assessment of generic corridor operations. The AMS framework described in this report identifies strategies and procedures for tailoring AMS general ...
Calibration of CORSIM models under saturated traffic flow conditions.
DOT National Transportation Integrated Search
2013-09-01
This study proposes a methodology to calibrate microscopic traffic flow simulation models. : The proposed methodology has the capability to calibrate simultaneously all the calibration : parameters as well as demand patterns for any network topology....
Simulation validation and management
NASA Astrophysics Data System (ADS)
Illgen, John D.
1995-06-01
Illgen Simulation Technologies, Inc., has been working on interactive verification and validation programs for the past six years. As a result, they have evolved a methodology that has been adopted and successfully implemented by a number of different verification and validation programs. This methodology employs a unique use of computer-assisted software engineering (CASE) tools to reverse engineer source code and produce analytical outputs (flow charts and tables) that aid the engineer/analyst in the verification and validation process. We have found that the use of CASE tools saves time, which equates to improvements in both schedule and cost. This paper describes the ISTI-developed methodology and how CASE tools are used in its support. Case studies are also discussed.
Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation Dynamics
NASA Technical Reports Server (NTRS)
Toniolo, Matthew D.; Tartabini, Paul V.; Pamadi, Bandu N.; Hotchko, Nathaniel
2008-01-01
This paper discusses a generalized approach to the multi-body separation problems in a launch vehicle staging environment based on constraint force methodology and its implementation into the Program to Optimize Simulated Trajectories II (POST2), a widely used trajectory design and optimization tool. This development facilitates the inclusion of stage separation analysis into POST2 for seamless end-to-end simulations of launch vehicle trajectories, thus simplifying the overall implementation and providing a range of modeling and optimization capabilities that are standard features in POST2. Analysis and results are presented for two test cases that validate the constraint force equation methodology in a stand-alone mode and its implementation in POST2.
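A minimal sketch of the constraint-force idea behind such a methodology: while the stages are mated, the joint is enforced by solving for a Lagrange-multiplier constraint force alongside the accelerations; at separation the multiplier is simply dropped and each body flies free. Two 1D masses with a rigid joint are shown; the masses and forces are illustrative, and this is not the POST2 implementation.

```python
import numpy as np

m1, m2 = 1000.0, 400.0          # kg (notional booster / upper stage)
F1, F2 = 50e3, 5e3              # applied forces, N

# constraint: x1 - x2 = const  ->  a1 - a2 = 0  ->  J a = 0 with J = [1, -1]
M = np.diag([m1, m2])
J = np.array([[1.0, -1.0]])
F = np.array([F1, F2])

# KKT system for [a; lambda]:  [M J^T; J 0] [a; lam] = [F; 0]
K = np.block([[M, J.T], [J, np.zeros((1, 1))]])
a1, a2, lam = np.linalg.solve(K, np.append(F, 0.0))
print(f"mated acceleration {a1:.3f} m/s^2, constraint force {lam:.1f} N")

# after "separation" the multiplier is removed and the bodies decouple:
print(f"free accelerations: {F1/m1:.3f}, {F2/m2:.3f} m/s^2")
```

The appeal of the formulation is exactly what the abstract notes: the mated and separated phases use the same equations of motion, with staging reduced to switching constraint rows on and off inside one continuous trajectory simulation.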
Dynamic Decision Making under Uncertainty and Partial Information
2017-01-30
In order to address these problems, we investigated efficient computational methodologies for dynamic decision making under uncertainty and partial information. In the course of this research, we (i) developed and studied efficient simulation-based methodologies for dynamic decision making under uncertainty and partial information; and (ii) studied the application of these decision making models and methodologies to practical problems, such as those ...
Hafnium transistor process design for neural interfacing.
Parent, David W; Basham, Eric J
2009-01-01
A design methodology is presented that uses 1-D process simulations of Metal Insulator Semiconductor (MIS) structures to design the threshold voltage of hafnium oxide based transistors used for neural recording. The methodology comprises 1-D analytical equations for threshold voltage specification and doping profiles, together with 1-D MIS Technology Computer-Aided Design (TCAD) to design a process that implements a specific threshold voltage while minimizing simulation time. The process was then verified with a 2-D process/electrical TCAD simulation. Hafnium oxide films (HfO) were grown and characterized for dielectric constant and fixed oxide charge at various annealing temperatures, two important design variables in threshold voltage design.
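A minimal sketch of the kind of 1-D analytical relation such a methodology starts from: the classic long-channel MOS threshold-voltage equation, with a high-k gate stack and a fixed-charge term. The numerical values (dielectric constant, doping, fixed charge) are illustrative assumptions, not the paper's process data.

```python
import numpy as np

q, eps0 = 1.602e-19, 8.854e-12
eps_si, k_hfo = 11.7 * eps0, 20.0          # Si permittivity; HfO2 k (assumed)
ni, kT_q = 1.0e16, 0.0259                  # Si intrinsic density (m^-3); kT/q, V

def vth(Na, t_ox, Vfb=-0.9, Qf=3e-4):      # Qf: fixed oxide charge, C/m^2 (assumed)
    """Long-channel NMOS threshold voltage from the classic 1-D relation."""
    Cox = k_hfo * eps0 / t_ox                            # gate capacitance, F/m^2
    phi_f = kT_q * np.log(Na / ni)                       # Fermi potential, V
    Qdep = np.sqrt(4 * q * eps_si * Na * phi_f)          # depletion charge, C/m^2
    return Vfb - Qf / Cox + 2 * phi_f + Qdep / Cox

# sweep substrate doping toward a target Vth, the loop TCAD then refines
for Na in [1e23, 3e23, 1e24]:              # m^-3
    print(f"Na = {Na:.0e} m^-3 -> Vth = {vth(Na, t_ox=5e-9):+.3f} V")
```

Sweeping the analytic equation first narrows the doping window so that only a handful of slower TCAD runs are needed, which is the simulation-time saving the abstract claims.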
Bio-inspired algorithms applied to molecular docking simulations.
Heberlé, G; de Azevedo, W F
2011-01-01
Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.
Li, Jingrui; Kondov, Ivan; Wang, Haobin; Thoss, Michael
2015-04-10
A recently developed methodology to simulate photoinduced electron transfer processes at dye-semiconductor interfaces is outlined. The methodology employs a first-principles-based model Hamiltonian and accurate quantum dynamics simulations using the multilayer multiconfiguration time-dependent Hartree approach. This method is applied to study electron injection in the dye-semiconductor system coumarin 343-TiO2. Specifically, the influence of electronic-vibrational coupling is analyzed. Extending previous work, we consider the influence of Duschinsky rotation of the normal modes as well as anharmonicities of the potential energy surfaces on the electron transfer dynamics.
NASA Astrophysics Data System (ADS)
Fogarty, Aoife C.; Potestio, Raffaello; Kremer, Kurt
2015-05-01
A fully atomistic modelling of many biophysical and biochemical processes at biologically relevant length- and time scales is beyond our reach with current computational resources, and one approach to overcome this difficulty is the use of multiscale simulation techniques. In such simulations, when system properties necessitate a boundary between resolutions that falls within the solvent region, one can use an approach such as the Adaptive Resolution Scheme (AdResS), in which solvent particles change their resolution on the fly during the simulation. Here, we apply the existing AdResS methodology to biomolecular systems, simulating a fully atomistic protein with an atomistic hydration shell, solvated in a coarse-grained particle reservoir and heat bath. Using as a test case an aqueous solution of the regulatory protein ubiquitin, we first confirm the validity of the AdResS approach for such systems, via an examination of protein and solvent structural and dynamical properties. We then demonstrate how, in addition to providing a computational speedup, such a multiscale AdResS approach can yield otherwise inaccessible physical insights into biomolecular function. We use our methodology to show that protein structure and dynamics can still be correctly modelled using only a few shells of atomistic water molecules. We also discuss aspects of the AdResS methodology peculiar to biomolecular simulations.
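A minimal sketch of the switching ingredient of AdResS: a weight w that is 1 in the atomistic region, 0 in the coarse-grained reservoir, and smoothly interpolates pairwise forces in the hybrid shell. The spherical geometry, cos^2 ramp, and widths below are a common textbook form, used here only as an illustration.

```python
import numpy as np

def adress_weight(d, r_at=1.0, d_hy=0.5):
    """Resolution weight vs distance d from the atomistic-region center."""
    w = np.cos(np.pi * (d - r_at) / (2.0 * d_hy)) ** 2   # smooth cos^2 ramp
    return np.where(d < r_at, 1.0, np.where(d > r_at + d_hy, 0.0, w))

def pair_force(f_atomistic, f_cg, d_i, d_j):
    """Force interpolation: F_ij = w_i w_j F^AT + (1 - w_i w_j) F^CG."""
    ww = adress_weight(d_i) * adress_weight(d_j)
    return ww * f_atomistic + (1.0 - ww) * f_cg

d = np.linspace(0.0, 2.0, 9)
print(np.round(adress_weight(d), 3))                 # 1 ... smooth decay ... 0
print(np.round(pair_force(1.0, 0.4, 0.2, 1.2), 3))   # mixed-resolution pair
```

Because the weights depend on instantaneous positions, a solvent molecule diffusing outward is gradually handed from the atomistic force field to the coarse-grained one, which is what lets a protein keep only a few shells of atomistic water.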
Multibody simulation of vehicles equipped with an automatic transmission
NASA Astrophysics Data System (ADS)
Olivier, B.; Kouroussis, G.
2016-09-01
Nowadays, automotive vehicles remain one of the most used modes of transportation, and automatic transmissions are increasingly used to provide better driving comfort and a potential optimization of engine performance (by placing the gear shifts at specific engine and vehicle speeds). This paper presents an effective modeling of the vehicle using the multibody methodology (numerically computed under EasyDyn, an open-source, in-house library dedicated to multibody simulations). The transmission part of the vehicle, however, is described by the usual equations of motion computed using a systematic matrix approach: del Castillo's methodology for planetary gear trains. By coupling the analytic equations of the transmission with the equations computed by the multibody methodology, the performance of any vehicle can be obtained if the characteristics of each element in the vehicle are known. The multibody methodology offers the possibility of extending the vehicle model from 1D motion to 3D motion by taking rotations into account and implementing tire models. The modeling presented in this paper remains very efficient and provides an easy and quick vehicle simulation tool which could be used to calibrate the automatic transmission.
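A minimal sketch of the coupled idea in its simplest 1D form: a longitudinal vehicle equation of motion integrated numerically while an automatic-gearbox rule selects the ratio from the current engine speed. All parameters (mass, ratios, shift point, torque curve, drag) are invented, and this is far simpler than the paper's planetary-gear formulation.

```python
import numpy as np

mass, r_wheel, final_drive = 1400.0, 0.30, 3.7      # kg, m, -
ratios = [3.5, 2.1, 1.4, 1.0, 0.8]                  # gearbox ratios
shift_up = 4500.0                                   # upshift engine speed, rpm

def engine_torque(rpm):
    """Notional full-throttle torque curve, Nm."""
    return np.interp(rpm, [1000, 3000, 5000, 6500], [120, 180, 170, 120])

v, gear, dt = 0.1, 0, 0.01
for step in range(int(30 / dt)):                    # 30 s full-throttle run
    rpm = v / r_wheel * ratios[gear] * final_drive * 60 / (2 * np.pi)
    if rpm > shift_up and gear < len(ratios) - 1:
        gear += 1                                   # automatic upshift
        continue
    force = engine_torque(rpm) * ratios[gear] * final_drive / r_wheel
    drag = 0.4 * v**2                               # lumped resistances, N
    v += (force - drag) / mass * dt                 # equation of motion
print(f"after 30 s: {v * 3.6:.0f} km/h in gear {gear + 1}")
```

Moving the shift-point parameter and re-running the model is exactly the kind of cheap calibration loop for the automatic transmission that the abstract has in mind; the multibody formulation generalizes the single equation of motion to full 3D.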
Generic simulation of multi-element ladar scanner kinematics in USU LadarSIM
NASA Astrophysics Data System (ADS)
Omer, David; Call, Benjamin; Pack, Robert; Fullmer, Rees
2006-05-01
This paper presents a generic simulation model for a ladar scanner with up to three scan elements, each having a steering, stabilization, and/or pattern-scanning role. Of interest is the development of algorithms that automatically generate commands to the scan elements given beam-steering objectives out of the ladar aperture and the base motion of the sensor platform. First, a straightforward single-element body-fixed beam-steering methodology is presented. Then a unique multi-element redirective and reflective space-fixed beam-steering methodology is explained. It is shown that standard direction cosine matrix decomposition methods fail when using two orthogonal, space-fixed rotations, thus demanding the development of a new algorithm for beam steering. Finally, a related steering control methodology is presented that uses two separate optical elements mathematically combined to determine the necessary scan element commands. Limits, restrictions, and results of this methodology are presented.
Use of Computer Simulation for the Analysis of Railroad Operations in the St. Louis Terminal Area
DOT National Transportation Integrated Search
1977-11-01
This report discusses the computer simulation methodology, its uses and limitations, and its applicability to the analysis of alternative railroad terminal restructuring plans. Included is a detailed discussion of the AAR Simulation System, an overvi...
PERFORMANCE, RELIABILITY, AND IMPROVEMENT OF A TISSUE-SPECIFIC METABOLIC SIMULATOR
A methodology is described that has been used to build and enhance a simulator for rat liver metabolism providing reliable predictions within a large chemical domain. The tissue metabolism simulator (TIMES) utilizes a heuristic algorithm to generate plausible metabolic maps using...
Shin, Min-Ho; Kim, Hyo-Jun; Kim, Young-Joo
2017-02-20
We propose an optical simulation model for quantum dot (QD) nanophosphors based on the mean free path concept, to understand precisely the optical performance of optoelectronic devices. A measurement methodology was also developed to obtain the optical characteristics, such as the mean free path and absorption spectra, of the QD nanophosphors to be incorporated into the simulation. The simulation results for QD-based white LED and OLED displays show good agreement with experimental values from the fabricated devices in terms of spectral power distribution, chromaticity coordinates, CCT, and CRI. The proposed simulation model and measurement methodology can be applied easily to the design of many optoelectronic devices using QD nanophosphors to obtain high efficiency and the desired color characteristics.
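A minimal sketch of a mean-free-path photon model of the kind the abstract builds on: photons travel exponentially distributed distances between QD interaction events and are either converted (with some quantum yield) or escape a film of given thickness. Film thickness, mean free path, and quantum yield below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def qd_film(n_photons=200_000, thickness=50.0, mfp=20.0, qy=0.9):
    """Fractions (transmitted blue, converted red) for normal incidence."""
    z = rng.exponential(mfp, n_photons)        # depth of first interaction, um
    transmitted = z > thickness                # escaped without meeting a QD
    converted = (~transmitted) & (rng.random(n_photons) < qy)  # re-emitted red
    return transmitted.mean(), converted.mean()

blue, red = qd_film()
print(f"blue leak {blue:.3f} (Beer-Lambert exp(-t/mfp) = {np.exp(-50/20):.3f}), "
      f"converted {red:.3f}")
```

The measured mean free path and absorption spectra slot directly into the exponential step, which is why the abstract pairs the simulation model with a measurement methodology; a full device model would additionally trace re-emission directions and re-absorption.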
Probabilistic simulation of stress concentration in composite laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, L.
1993-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The probabilistic composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties while probabilistic finite element is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate stress concentration factors such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the stress concentration factors in composite laminates made from three different composite systems. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the stress concentration factors are influenced by local stiffness variables, by load eccentricities and by initial stress fields.
Computational Simulation of the Formation and Material Behavior of Ice
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Singhal, Surendra N.; Chamis, Christos C.
1994-01-01
Computational methods are described for simulating the formation and the material behavior of ice in prevailing transient environments. The methodology developed at the NASA Lewis Research Center was adopted. A three dimensional finite-element heat transfer analyzer was used to predict the thickness of ice formed under prevailing environmental conditions. A multi-factor interaction model for simulating the material behavior of time-variant ice layers is presented. The model, used in conjunction with laminated composite mechanics, updates the material properties of an ice block as its thickness increases with time. A sample case of ice formation in a body of water was used to demonstrate the methodology. The results showed that the formation and the material behavior of ice can be computationally simulated using the available composites technology.
Simulating Afterburn with LLNL Hydrocodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, L D
2004-06-11
Presented here is a working methodology for adapting a Lawrence Livermore National Laboratory (LLNL) developed hydrocode, ALE3D, to simulate weapon damage effects when afterburn is a consideration in the blast propagation. Experiments have shown that afterburn is of great consequence in enclosed environments (e.g., a bomb-in-tunnel scenario, a penetrating conventional munition in a bunker, or a satchel charge placed in a deep underground facility). This empirical energy deposition methodology simulates the anticipated addition of kinetic energy that has been demonstrated by experiment (Kuhl et al., 1998), without explicitly solving the chemistry or resolving the mesh to capture small-scale vorticity. This effort is intended to complement the existing capability of either coupling ALE3D blast simulations with DYNA3D or performing fully coupled ALE3D simulations to predict building or component failure, for applications in National Security offensive strike planning as well as Homeland Defense infrastructure protection.
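A minimal sketch of the empirical energy-deposition idea: after detonation, the afterburn energy is added to the detonation-product cells over a prescribed burn time as a simple source term, instead of resolving combustion chemistry. The energies, burn time, and linear release profile below are illustrative, not LLNL's calibrated values.

```python
E_det = 4.3e6      # detonation energy, J/kg of explosive (illustrative)
E_ab = 6.0e6       # additional afterburn energy, J/kg (illustrative)
t_burn = 2.0e-3    # empirical afterburn duration, s
dt = 1.0e-5

def afterburn_rate(t):
    """Energy deposition rate (W/kg): linear release until t_burn, then zero."""
    return E_ab / t_burn if t < t_burn else 0.0

e = E_det                                 # specific internal energy of products
for step in range(int(4e-3 / dt)):
    e += afterburn_rate(step * dt) * dt   # per-cell hydrocode source term
print(f"final specific energy {e/1e6:.2f} MJ/kg "
      f"(detonation {E_det/1e6:.1f} + afterburn {E_ab/1e6:.1f})")
```

In the hydrocode the same source term is applied only to cells flagged as detonation products, so the late-time pressure rise in an enclosed volume matches experiment without a combustion model.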
Tsunami hazard assessments with consideration of uncertain earthquakes characteristics
NASA Astrophysics Data System (ADS)
Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.
2017-12-01
The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, it must adopt an uncertainty propagation method that determines tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology which improves existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics: the slip distribution and the location. First, the methodology considers the generation of consistent earthquake slip samples by means of a Karhunen-Loeve (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by Le Veque et al. (2016). We have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike this reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a Stochastic Reduced Order Model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: we study tsunamis generated at the site of the 2014 Chilean earthquake, generating earthquake samples with expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates consistent earthquake samples with respect to the target probability laws. We also show that the results obtained from SROM are more accurate than classic Monte Carlo simulations. We finally validate the methodology by comparing the simulated tsunamis with the tsunami records for the 2014 Chilean earthquake. Results show that leading-wave measurements fall within the tsunami sample space. At later times, however, there are mismatches between the measured data and the simulated results, suggesting that other sources of uncertainty are as relevant as the uncertainty in the studied earthquake characteristics.
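A minimal sketch of the K-L-plus-translation step: build a correlation matrix over subfault locations, truncate its eigendecomposition by an energy criterion, draw Gaussian fields, and map their marginals to a non-Gaussian slip distribution. The grid, correlation length, and lognormal marginal are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(5)

# subfault centers on a 20 x 10 rupture grid (km)
x, y = np.meshgrid(np.arange(20) * 5.0, np.arange(10) * 5.0)
pts = np.column_stack([x.ravel(), y.ravel()])
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
C = np.exp(-d / 40.0)                      # exponential correlation, 40 km scale

lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]         # sort eigenpairs descending
m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95) + 1   # keep 95% energy

xi = rng.standard_normal(m)                # independent K-L coefficients
g = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)   # zero-mean correlated Gaussian field
# translation process: map Gaussian marginals to a lognormal slip marginal
slip = lognorm(s=0.5, scale=2.0).ppf(norm.cdf(g))           # slip in meters
print(f"{m} K-L modes retained; slip range {slip.min():.2f}-{slip.max():.2f} m")
```

Because the translation acts on marginals while the K-L modes carry the spatial correlation, the sampled fields keep both target properties without any post-hoc slip scaling, which is the consistency point the abstract stresses.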
Fidelity assessment of a UH-60A simulation on the NASA Ames vertical motion simulator
NASA Technical Reports Server (NTRS)
Atencio, Adolph, Jr.
1993-01-01
Helicopter handling qualities research requires that a ground-based simulation be a high-fidelity representation of the actual helicopter, especially over the frequency range of the investigation. This experiment was performed to assess the current capability to simulate the UH-60A Black Hawk helicopter on the Vertical Motion Simulator (VMS) at NASA Ames, to develop a methodology for assessing the fidelity of a simulation, and to find the causes for lack of fidelity. The approach used was to compare the simulation to the flight vehicle for a series of tasks performed in flight and in the simulator. The results show that subjective handling qualities ratings from flight to simulator overlap, and the mathematical model matches the UH-60A helicopter very well over the range of frequencies critical to handling qualities evaluation. Pilot comments, however, indicate a need for improvement in the perceptual fidelity of the simulation in the areas of motion and visual cuing. The methodology used to make the fidelity assessment proved useful in showing differences in pilot work load and strategy, but additional work is needed to refine objective methods for determining causes of lack of fidelity.
NASA Astrophysics Data System (ADS)
Nebot, Àngela; Mugica, Francisco
2012-10-01
Fuzzy inductive reasoning (FIR) is a modelling and simulation methodology derived from the General Systems Problem Solver. It compares favourably with other soft computing methodologies, such as neural networks, genetic or neuro-fuzzy systems, and with hard computing methodologies, such as AR, ARIMA, or NARMAX, when it is used to predict future behaviour of different kinds of systems. This paper contains an overview of the FIR methodology, its historical background, and its evolution.
Suzaku Observations of the Broad-Line Radio Galaxy 3C390.3
NASA Technical Reports Server (NTRS)
Sambruna, Rita
2007-01-01
We present the results of a 100 ks Suzaku observation of the BLRG 3C390.3. The observations were performed to attempt to disentangle the contributions to the X-ray emission of this galaxy from an AGN and a jet component, via variability and/or the spectrum. The source was detected at high energies up to 80 keV, with a complex 0.3-80 keV spectrum. Preliminary analysis of the data shows significant flux variability, with the largest amplitudes at higher energies. Deconvolution of the spectrum shows that, besides a standard Seyfert-like spectrum dominating the 0.3-8 keV emission, an additional, hard power-law component is required, dominating the emission above 10 keV. We attribute this component to a variable jet.
Thomas Harless; Francis G. Wagner; Phillip Steele; Fred Taylor; Vikram Yadama; Charles W. McMillin
1991-01-01
A precise research methodology is described by which internal log-defect locations may help select hardwood log orientation and sawing procedure to improve lumber value. Procedures for data collection, data handling, simulated sawing, and data analysis are described. A single test log verified the methodology. Results from this log showed significant differences in...
ERIC Educational Resources Information Center
Echeverria, Alejandro; Barrios, Enrique; Nussbaum, Miguel; Amestica, Matias; Leclerc, Sandra
2012-01-01
Computer simulations combined with games have been successfully used to teach conceptual physics. However, there is no clear methodology for guiding the design of these types of games. To remedy this, we propose a structured methodology for the design of conceptual physics games that explicitly integrates the principles of the intrinsic…
The methodology for modeling queuing systems using Petri nets
NASA Astrophysics Data System (ADS)
Kotyrba, Martin; Gaj, Jakub; Tvarůžka, Matouš
2017-07-01
This paper deals with the use of Petri nets in the modeling and simulation of queuing systems. The first part is focused on the explanation of the basic concepts and properties of Petri nets and queuing systems. The proposed methodology for the modeling of queuing systems using Petri nets is described in the practical part. The proposed methodology will be tested on specific cases.
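To make the token-game idea concrete, here is a minimal stochastic Petri net for a single-server queue; it is a generic sketch of Petri-net-based queuing simulation, not the authors' own methodology or tool, and the rates are invented.

```python
import random

# Places: 'queue' (waiting customers), 'idle' (free server), 'busy' (in service).
places = {"queue": 0, "idle": 1, "busy": 0}

# Each transition: (inputs, outputs, rate). Enabled when every input place
# holds enough tokens; firing moves the tokens. rate=None means immediate.
transitions = {
    "arrive": ({}, {"queue": 1}, 1.0),                      # external arrival
    "start":  ({"queue": 1, "idle": 1}, {"busy": 1}, None), # immediate
    "finish": ({"busy": 1}, {"idle": 1}, 0.8),              # service completion
}

def enabled(name):
    inputs, _, _ = transitions[name]
    return all(places[p] >= n for p, n in inputs.items())

def fire(name):
    inputs, outputs, _ = transitions[name]
    for p, n in inputs.items():
        places[p] -= n
    for p, n in outputs.items():
        places[p] += n

random.seed(1)
t = 0.0
while t < 100.0:
    # Fire immediate transitions first (service start)
    while enabled("start"):
        fire("start")
    # Race the enabled timed transitions with exponential delays
    candidates = [(random.expovariate(rate), name)
                  for name, (_, _, rate) in transitions.items()
                  if rate is not None and enabled(name)]
    delay, winner = min(candidates)
    t += delay
    fire(winner)

print("final marking:", places)
```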
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Mattern, Duane
1994-01-01
An advanced methodology for integrated flight propulsion control (IFPC) design for future aircraft, which will use propulsion system generated forces and moments for enhanced maneuver capabilities, is briefly described. This methodology has the potential to address in a systematic manner the coupling between the airframe and the propulsion subsystems typical of such enhanced maneuverability aircraft. Application of the methodology to a short take-off and vertical landing (STOVL) aircraft in the landing-approach-to-hover transition flight phase is presented with a brief description of the various steps in the IFPC design methodology. The details of the individual steps have been described in previous publications, and the objective of this paper is to focus on how the components of the control system designed at each step integrate into the overall IFPC system. The full nonlinear IFPC system was evaluated extensively in nonreal-time simulations as well as piloted simulations. Results from the nonreal-time evaluations are presented in this paper. Lessons learned from this application study are summarized in terms of areas of potential improvements in the STOVL IFPC design as well as identification of technology development areas to enhance the applicability of the proposed design methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reines, Amy E.; Volonteri, Marta, E-mail: reines@umich.edu
Scaling relations between central black hole (BH) mass and host galaxy properties are of fundamental importance to studies of BH and galaxy evolution throughout cosmic time. Here we investigate the relationship between BH mass and host galaxy total stellar mass using a sample of 262 broad-line active galactic nuclei (AGNs) in the nearby universe (z < 0.055), as well as 79 galaxies with dynamical BH masses. The vast majority of our AGN sample is constructed using Sloan Digital Sky Survey spectroscopy and searching for Seyfert-like narrow-line ratios and broad Hα emission. BH masses are estimated using standard virial techniques. We also include a small number of dwarf galaxies with total stellar masses M{sub stellar} ≲ 10{sup 9.5} M{sub ⊙} and a subsample of the reverberation-mapped AGNs. Total stellar masses of all 341 galaxies are calculated in the most consistent manner feasible using color-dependent mass-to-light ratios. We find a clear correlation between BH mass and total stellar mass for the AGN host galaxies, with M{sub BH} ∝ M{sub stellar}, similar to that of early-type galaxies with dynamically detected BHs. However, the relation defined by the AGNs has a normalization that is lower by more than an order of magnitude, with a BH-to-total stellar mass fraction of M{sub BH}/M{sub stellar} ∼ 0.025% across the stellar mass range 10{sup 8} ≤ M{sub stellar}/M{sub ⊙} ≤ 10{sup 12}. This result has significant implications for studies at high redshift and cosmological simulations in which stellar bulges cannot be resolved.
USDA-ARS?s Scientific Manuscript database
Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...
Simulation/Gaming and the Acquisition of Communicative Competence in Another Language.
ERIC Educational Resources Information Center
Garcia-Carbonell, Amparo; Rising, Beverly; Montero, Begona; Watts, Frances
2001-01-01
Discussion of communicative competence in second language acquisition focuses on a theoretical and practical meshing of simulation and gaming methodology with theories of foreign language acquisition, including task-based learning, interaction, and comprehensible input. Describes experiments conducted with computer-assisted simulations in…
Diversity of nursing student views about simulation design: a q-methodological study.
Paige, Jane B; Morin, Karen H
2015-05-01
Education of future nurses benefits from well-designed simulation activities. Skillful teaching with simulation requires educators to be constantly aware of how students experience learning and perceive educators' actions. Because revision of simulation activities considers feedback elicited from students, it is crucial to understand the perspective from which students base their response. In a Q-methodological approach, 45 nursing students rank-ordered 60 opinion statements about simulation design into a distribution grid. Factor analysis revealed that nursing students hold five distinct and uniquely personal perspectives: Let Me Show You, Stand By Me, The Agony of Defeat, Let Me Think It Through, and I'm Engaging and So Should You. Results suggest that nurse educators need to reaffirm that students clearly understand the purpose of each simulation activity. Nurse educators should incorporate presimulation assignments to optimize learning and help allay anxiety. The five perspectives discovered in this study can serve as a tool to discern individual students' learning needs.
NASA Technical Reports Server (NTRS)
Prajous, R.; Mazankine, J.; Ippolito, J. C.
1978-01-01
Methods and algorithms used for the simulation of elementary power conditioning units (buck, boost, and buck-boost converters, as well as shunt PWM) are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with: local stability, corrective network, measurements of input-output impedance, and global stability. A simulation example is given.
Large-Eddy Simulation (LES) of a Compressible Mixing Layer and the Significance of Inflow Turbulence
NASA Technical Reports Server (NTRS)
Mankbadi, Mina Reda; Georgiadis, Nicholas J.; Debonis, James R.
2017-01-01
In the context of Large Eddy Simulation (LES), the effects of inflow turbulence are investigated through the Synthetic Eddy Method (SEM). The growth rate of a turbulent compressible mixing layer corresponding to the operating conditions of Goebel-Dutton Case 2 is investigated herein. The effects of spanwise width on the growth rate of the mixing layer are investigated such that spanwise-width independence is reached. The error in neglecting inflow turbulence effects is quantified by comparing two methodologies: (1) a hybrid RANS-LES methodology and (2) an SEM-LES methodology. Best practices learned from Case 2 are developed herein and then applied to a higher convective Mach number corresponding to the Case 4 experiments of Goebel-Dutton.
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Molin, S.
2012-02-01
We present a methodology that combines numerical simulations of groundwater flow and advective transport in heterogeneous porous media with analytical retention models for computing the infection risk probability from pathogens in aquifers. The methodology is based on the analytical results presented in [1,2] for utilising colloid filtration theory in a time-domain random walk (TDRW) framework. It is shown that in uniform flow, the numerical simulations of advection yield results comparable to those of the analytical TDRW model for generating advection segments. It is also shown that spatial variability of the attachment rate may be significant; however, it appears to affect risk in a different manner depending on whether the flow is uniform or radially converging. Although numerous issues remain open regarding pathogen transport in aquifers on the field scale, the methodology presented here may be useful for screening purposes, and may also serve as a basis for future studies that would include greater complexity.
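The colloid-filtration ingredient of such a framework reduces, for first-order attachment, to an exponential survival law accumulated along the advection segments. The sketch below illustrates just that reduction; the segment times, attachment rates, and function name are hypothetical, and the full TDRW machinery of [1,2] is not reproduced.

```python
import numpy as np

def survival_probability(seg_times, attach_rates):
    """Probability that a pathogen traverses all advection segments without
    attaching, assuming first-order attachment (colloid filtration):
    P = exp(-sum_i k_i * tau_i), with k_i the attachment rate and
    tau_i the residence time in segment i."""
    return np.exp(-np.sum(np.asarray(attach_rates) * np.asarray(seg_times)))

# Example: five segments with spatially variable attachment rates
tau = [2.0, 1.5, 3.0, 0.5, 2.5]        # residence times (days)
k = [0.10, 0.30, 0.05, 0.40, 0.15]     # attachment rates (1/day)
print(survival_probability(tau, k))
```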
NASA Technical Reports Server (NTRS)
Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.
1982-01-01
The derivation of the equations is presented, the rate control algorithm described, and simulation methodologies summarized. A set of dynamics equations that can be used recursively to calculate the forces and torques acting at the joints of an n-link manipulator, given the manipulator joint rates, is derived. The equations are valid for any n-link manipulator system with any kind of joints connected in any sequence. The equations of motion for the class of manipulators consisting of n rigid links interconnected by rotary joints are derived. A technique is outlined for reducing the system of equations to eliminate constraint torques. The linearized dynamics equations for an n-link manipulator system are derived. The general n-link linearized equations are then applied to a two-link configuration. The coordinated rate control algorithm used to compute individual joint rates when given end-effector rates is described. A short discussion of simulation methodologies is presented.
Introduction to SIMRAND: Simulation of research and development project
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1982-01-01
SIMRAND: SIMulation of Research ANd Development Projects is a methodology developed to aid the engineering and management decision process in the selection of the optimal set of systems or tasks to be funded on a research and development project. A project may have a set of systems or tasks under consideration for which the total cost exceeds the allocated budget. Other factors such as personnel and facilities may also enter as constraints. Thus the project's management must select, from among the complete set of systems or tasks under consideration, a partial set that satisfies all project constraints. The SIMRAND methodology uses the analytical techniques of probability theory, the decision analysis of management science, and computer simulation in the selection of this optimal partial set. The SIMRAND methodology is truly a management tool. It initially specifies the information that must be generated by the engineers, thus providing information for the management direction of the engineers, and it ranks the alternatives according to the preferences of the decision makers.
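A minimal sketch of the SIMRAND idea, under invented numbers: enumerate candidate task subsets, Monte Carlo sample their uncertain costs and performance, and rank the subsets by the probability of staying within budget. The real methodology also handles personnel and facility constraints and decision-maker preference functions, which are omitted here.

```python
import itertools
import random

random.seed(0)

# Candidate tasks: (name, (low, high) cost range, (low, high) performance range)
tasks = [("A", (2, 4), (3, 6)), ("B", (1, 3), (2, 5)),
         ("C", (3, 6), (4, 9)), ("D", (2, 5), (1, 4))]
budget = 8.0
n_trials = 5000

def simulate(subset):
    """Fraction of trials in which the subset fits the budget, and the
    mean total performance over those feasible trials."""
    feasible, perf_sum = 0, 0.0
    for _ in range(n_trials):
        cost = sum(random.uniform(*c) for _, c, _ in subset)
        if cost <= budget:
            feasible += 1
            perf_sum += sum(random.uniform(*p) for _, _, p in subset)
    return feasible / n_trials, (perf_sum / feasible if feasible else 0.0)

# Rank all non-empty subsets by feasibility probability, then performance
results = []
for r in range(1, len(tasks) + 1):
    for subset in itertools.combinations(tasks, r):
        p_ok, mean_perf = simulate(subset)
        results.append((p_ok, mean_perf, [t[0] for t in subset]))

results.sort(reverse=True)
for p_ok, mean_perf, names in results[:3]:
    print(names, f"P(within budget)={p_ok:.2f}", f"mean perf={mean_perf:.1f}")
```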
An Energy Storage Assessment: Using Optimal Control Strategies to Capture Multiple Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Jin, Chunlian; Balducci, Patrick J.
2015-09-01
This paper presents a methodology for evaluating the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. In the proposed method, at each hour a look-ahead optimization is first formulated and solved to determine the battery base operating point. A minute-by-minute simulation is then performed to simulate the actual battery operation. This methodology is used to assess energy storage alternatives in the Puget Sound Energy system. Different battery storage candidates are simulated for a period of one year to assess different value streams and overall benefits, as part of a financial feasibility evaluation of battery storage projects.
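The two-stage structure (an hourly look-ahead plan, then a fine-grained simulation of actual operation) can be sketched as below. The look-ahead step here is a greedy price-sorting heuristic standing in for the paper's optimization, and all capacities, efficiencies, and prices are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
horizon = 24                      # look-ahead window (hours)
price = 30 + 10 * np.sin(np.arange(horizon) * 2 * np.pi / 24) \
        + rng.normal(0, 2, horizon)

cap, power, eff = 10.0, 2.5, 0.9  # MWh, MW, charge efficiency
soc = 5.0                         # state of charge (MWh)

# Look-ahead step: charge in the cheapest forecast hours and discharge in
# the most expensive ones (a greedy stand-in for the paper's optimization).
order = np.argsort(price)
plan = np.zeros(horizon)
plan[order[:6]] = power           # charge in the 6 cheapest hours
plan[order[-6:]] = -power         # discharge in the 6 priciest hours

# Minute-by-minute simulation of actual operation against the plan
for hour in range(horizon):
    for minute in range(60):
        step = plan[hour] / 60.0                 # MWh moved this minute
        if step > 0:                             # charging
            soc = min(cap, soc + step * eff)
        else:                                    # discharging
            soc = max(0.0, soc + step)
print(f"end-of-day state of charge: {soc:.2f} MWh")
```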
Simulation: an evolving methodology for health administration education.
Taylor, J K; Moore, J A; Holland, M G
1985-01-01
Simulation provides a valuable addition to a university's teaching methods. Computer-assisted gaming is especially effective in teaching advanced business strategy and corporate policy when the nature and complexity of the simulation permit. The potential for using simulation techniques in postgraduate professional education and in managerial self-assessment appears to be significant over the next several years.
NASA Astrophysics Data System (ADS)
Riva, Fabio; Milanese, Lucio; Ricci, Paolo
2017-10-01
To reduce the computational cost of uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple-to-apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level is evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
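As a rough illustration of using Chebyshev polynomials to cheapen uncertainty propagation, the sketch below builds a Chebyshev surrogate of a toy decay model over the range of one uncertain parameter and then samples the surrogate. Note this is a collocation-style simplification with invented numbers, not the weighted-residual formulation the paper actually employs.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy model: dn/dt = -p * n, n(0) = 1, observed at t = 1. The "simulation"
# result is a function of the uncertain loss parameter p.
def model(p, t=1.0):
    return np.exp(-p * t)

# Uncertain parameter range (e.g., a parallel-loss coefficient)
p_lo, p_hi = 0.5, 2.0

# Run the model only at Chebyshev nodes of the parameter interval
deg = 8
nodes = C.chebpts1(deg + 1)                       # nodes on [-1, 1]
p_nodes = 0.5 * (p_hi - p_lo) * (nodes + 1) + p_lo
coeffs = C.chebfit(nodes, model(p_nodes), deg)    # Chebyshev surrogate

# Propagate uncertainty through the cheap surrogate instead of the model
rng = np.random.default_rng(0)
p_samples = rng.uniform(p_lo, p_hi, 100_000)
scaled = 2 * (p_samples - p_lo) / (p_hi - p_lo) - 1
out = C.chebval(scaled, coeffs)
print(f"mean = {out.mean():.4f}, std = {out.std():.4f}")
```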
DOT National Transportation Integrated Search
2000-04-01
This report presents detailed analytic tools and results on dynamic response which are used to develop the safe dynamic performance limits of commuter passenger vehicles. The methodology consists of determining the critical parameters and characteris...
NASA Astrophysics Data System (ADS)
Lapuebla-Ferri, Andrés; Cegoñino-Banzo, José; Jiménez-Mocholí, Antonio-José; Pérez del Palomar, Amaya
2017-11-01
In breast cancer screening or diagnosis, it is usual to combine different images in order to locate a lesion as accurately as possible. These images are generated using a single or several imaging techniques. As x-ray-based mammography is widely used, a breast lesion is located in the same plane of the image (mammogram), but tracking it across mammograms corresponding to different views is a challenging task for medical physicians. Accordingly, simulation tools and methodologies that use patient-specific numerical models can facilitate the task of fusing information from different images. Additionally, these tools need to be as straightforward as possible to facilitate their translation to the clinical area. This paper presents a patient-specific, finite-element-based and semi-automated simulation methodology to track breast lesions across mammograms. A realistic three-dimensional computer model of a patient's breast was generated from magnetic resonance imaging to simulate mammographic compressions in cranio-caudal (CC, head-to-toe) and medio-lateral oblique (MLO, shoulder-to-opposite hip) directions. For each compression being simulated, a virtual mammogram was obtained and subsequently superimposed on the corresponding real mammogram, by sharing the nipple as a common feature. Two-dimensional rigid-body transformations were applied, and the error distance measured between the centroids of the tumors previously located on each image was 3.84 mm and 2.41 mm for the CC and MLO compressions, respectively. Considering that the scope of this work is to conceive a methodology translatable to clinical practice, the results indicate that it could be helpful in supporting the tracking of breast lesions.
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to be simulated to the correct order of magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
Creating executable architectures using Visual Simulation Objects (VSO)
NASA Astrophysics Data System (ADS)
Woodring, John W.; Comiskey, John B.; Petrov, Orlin M.; Woodring, Brian L.
2005-05-01
Investigations have been performed to identify a methodology for creating executable models of architectures and simulations of architecture that lead to an understanding of their dynamic properties. Colored Petri Nets (CPNs) are used to describe architecture because of their strong mathematical foundations, the existence of techniques for their verification, and graph theory's well-established history of success in modern science. CPNs have been extended to interoperate with legacy simulations via a High Level Architecture (HLA) compliant interface. It has also been demonstrated that an architecture created as a CPN can be integrated with Department of Defense Architecture Framework products to ensure consistency between static and dynamic descriptions. A computer-aided tool, Visual Simulation Objects (VSO), which aids analysts in specifying, composing and executing architectures, has been developed to verify the methodology and as a prototype commercial product.
NASA Astrophysics Data System (ADS)
Kreiss, Gunilla; Holmgren, Hanna; Kronbichler, Martin; Ge, Anthony; Brant, Luca
2017-11-01
The conventional no-slip boundary condition leads to a non-integrable stress singularity at a moving contact line. This makes numerical simulations of two-phase flow challenging, especially when capillarity of the contact point is essential for the dynamics of the flow. We will describe a modeling methodology, which is suitable for numerical simulations, and present results from numerical computations. The methodology is based on combining a relation between the apparent contact angle and the contact line velocity, with the similarity solution for Stokes flow at a planar interface. The relation between angle and velocity can be determined by theoretical arguments, or from simulations using a more detailed model. In our approach we have used results from phase field simulations in a small domain, but using a molecular dynamics model should also be possible. In both cases more physics is included and the stress singularity is removed.
Simulation as a surgical teaching model.
Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos
2018-01-01
Teaching of surgery has been affected by many factors over the last years, such as the reduction of working hours, the optimization of the use of the operating room or patient safety. Traditional teaching methodology fails to reduce the impact of these factors on surgeons' training. Simulation as a teaching model minimizes such impact, and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected, and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching individualization, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive or behavioural skills.
The high-energy view of the broad-line radio galaxy 3C 111
NASA Astrophysics Data System (ADS)
Ballo, L.; Braito, V.; Reeves, J. N.; Sambruna, R. M.; Tombesi, F.
2011-12-01
We present the analysis of Suzaku and XMM-Newton observations of the broad-line radio galaxy (BLRG) 3C 111. Its high-energy emission shows variability, a harder continuum with respect to the radio-quiet active galactic nucleus population, and weak reflection features. Suzaku found the source in a minimum flux level; a comparison with the XMM-Newton data implies an increase of a factor of 2.5 in the 0.5-10 keV flux, in the 6 months separating the two observations. The iron K complex is detected in both data sets, with rather low equivalent width(s). The intensity of the iron K complex does not respond to the change in continuum flux. An ultrafast, high-ionization outflowing gas is clearly detected in the Suzaku/X-ray Imaging Spectrometer data; the absorber is most likely unstable. Indeed, during the XMM-Newton observation, which was 6 months after, the absorber was not detected. No clear rollover in the hard X-ray emission is detected, probably due to the emergence of the jet as a dominant component in the hard X-ray band, as suggested by the detection above ~100 keV with the GSO onboard Suzaku, although the present data do not allow us to firmly constrain the relative contribution of the different components. The fluxes observed by the γ-ray satellites CGRO and Fermi would be compatible with the putative jet component if peaking at energies E ~ 100 MeV. In the X-ray band, the jet contribution to the continuum starts to be significant only above 10 keV. If the detection of the jet component in 3C 111 is confirmed, then its relative importance in the X-ray energy band could explain the different observed properties in the high-energy emission of BLRGs, which are otherwise similar in their other multiwavelength properties. Comparison between X-ray and γ-ray data taken at different epochs suggests that the strong variability observed for 3C 111 is probably driven by a change in the primary continuum.
NASA Astrophysics Data System (ADS)
Park, Daeseong; Barth, Aaron J.; Woo, Jong-Hak; Malkan, Matthew A.; Treu, Tommaso; Bennert, Vardha N.; Assef, Roberto J.; Pancoast, Anna
2017-04-01
We provide an updated calibration of C IV λ1549 broad emission line-based single-epoch (SE) black hole (BH) mass estimators for active galactic nuclei (AGNs) using new data for six reverberation-mapped AGNs at redshift z = 0.005-0.028 with BH masses (bolometric luminosities) in the range 10^6.5-10^7.5 M_⊙ (10^41.7-10^43.8 erg s^-1). New rest-frame UV-to-optical spectra covering 1150-5700 Å for the six AGNs were obtained with the Hubble Space Telescope (HST). Multicomponent spectral decompositions of the HST spectra were used to measure SE emission-line widths for the C IV, Mg II, and Hβ lines, as well as continuum luminosities in the spectral region around each line. We combine the new data with similar measurements for a previous archival sample of 25 AGNs to derive the most consistent and accurate calibrations of the C IV-based SE BH mass estimators against the Hβ reverberation-based masses, using three different measures of broad-line width: full width at half maximum (FWHM), line dispersion (σ_line), and mean absolute deviation (MAD). The newly expanded sample at redshift z = 0.005-0.234 covers a dynamic range in BH mass (bolometric luminosity) of log(M_BH/M_⊙) = 6.5-9.1 (log(L_bol/erg s^-1) = 41.7-46.9), and we derive the new C IV-based mass estimators using a Bayesian linear regression analysis over this range. We generally recommend the use of σ_line or MAD rather than FWHM to obtain a less biased velocity measurement of the C IV emission line, because its narrow-line component contribution is difficult to decompose from the broad-line profile. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program GO-12922.
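For orientation, single-epoch calibrations of this kind generally take the virial form below, with the zero point a and luminosity slope b as the regression outputs and the velocity slope fixed at its virial value of 2; the specific coefficients derived in this work are not reproduced here.

```latex
% Generic single-epoch virial BH mass estimator: the calibration determines
% a and b; \Delta V is the FWHM, \sigma_{\rm line}, or MAD line width.
\log\!\left(\frac{M_\mathrm{BH}}{M_\odot}\right)
  = a + b\,\log\!\left(\frac{\lambda L_{1350}}{10^{44}\,\mathrm{erg\,s^{-1}}}\right)
  + 2\,\log\!\left(\frac{\Delta V(\mathrm{C\,IV})}{10^{3}\,\mathrm{km\,s^{-1}}}\right)
```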
NASA Astrophysics Data System (ADS)
Bisogni, S.; di Serego Alighieri, S.; Goldoni, P.; Ho, L. C.; Marconi, A.; Ponti, G.; Risaliti, G.
2017-06-01
We studied the spectra of six z ~ 2.2 quasars obtained with the X-shooter spectrograph at the Very Large Telescope. The redshift of these sources and X-shooter's spectral coverage allow us to cover the rest-frame spectral range 1200-7000 Å for the simultaneous detection of optical and ultraviolet lines emitted by the broad-line region. Simultaneous measurements, avoiding issues related to quasar variability, help us understand the connection between the different broad-line region line profiles generally used as virial estimators of black hole masses in quasars. The goal of this work is to compare the different emission lines for each object to check on the reliability of Hα, Mg II, and C IV with respect to Hβ. Hα and Mg II linewidths correlate well with Hβ, while C IV shows a poorer correlation, due to the presence of strong blueshifts and asymmetries in the profile. We compared our sample with the only other two whose spectra were taken with the same instrument, and for all examined lines our results are in agreement with the ones obtained with X-shooter at z ~ 1.5-1.7. We finally evaluate C III] as a possible substitute for C IV in the same spectral range and find that its behaviour is more coherent with those of the other lines: we believe that, when a high-quality spectrum such as the ones we present is available and a proper modelling of the Fe II and Fe III emission is performed, it is more appropriate to use this line rather than C IV if the latter is not corrected for contamination by non-virialized components. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, under programme 086.B-0320(A). The reduced spectra (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/603/A1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bentz, Misty C.; Walsh, Jonelle L.; Barth, Aaron J.
2009-11-01
We have recently completed a 64-night spectroscopic monitoring campaign at the Lick Observatory 3-m Shane telescope with the aim of measuring the masses of the black holes in 12 nearby (z < 0.05) Seyfert 1 galaxies with expected masses in the range approximately 10{sup 6}-10{sup 7} M{sub sun}, and also the well-studied nearby active galactic nucleus (AGN) NGC 5548. Nine of the objects in the sample (including NGC 5548) showed optical variability of sufficient strength during the monitoring campaign to allow for a time lag to be measured between the continuum fluctuations and the response to these fluctuations in the broad Hbeta emission. We present here the light curves for all the objects in this sample and the subsequent Hbeta time lags for the nine objects where these measurements were possible. The Hbeta lag time is directly related to the size of the broad-line region (BLR) in AGNs, and by combining the Hbeta lag time with the measured width of the Hbeta emission line in the variable part of the spectrum, we determine the virial mass of the central supermassive black hole in these nine AGNs. The absolute calibration of the black hole masses is based on the normalization derived by Onken et al., which brings the masses determined by reverberation mapping into agreement with the local M{sub BH}-sigma{sub *} relationship for quiescent galaxies. We also examine the time lag response as a function of velocity across the Hbeta line profile for six of the AGNs. The analysis of four leads to rather ambiguous results with relatively flat time lags as a function of velocity. However, SBS 1116+583A exhibits a symmetric time lag response around the line center reminiscent of simple models for circularly orbiting BLR clouds, and Arp 151 shows an asymmetric profile that is most easily explained by a simple gravitational infall model. Further investigation will be necessary to fully understand the constraints placed on the physical models of the BLR by the velocity-resolved response in these objects.
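In standard notation, the virial mass referred to here combines the Hbeta lag (giving the BLR radius R = cτ) with the line width measured from the variable part of the spectrum:

```latex
% Reverberation-mapping virial mass: tau is the Hbeta lag, Delta V the
% line width in the variable (rms) spectrum, and f the dimensionless
% virial factor whose normalization (Onken et al.) sets the mass scale.
M_\mathrm{BH} = f\,\frac{c\,\tau\,(\Delta V)^{2}}{G}
```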
Fermi Large Area Telescope detection of bright γ-ray outbursts from the peculiar quasar 4C +21.35
Tanaka, Y. T.; Stawarz, Ł.; Thompson, D. J.; ...
2011-04-29
In this study, we report on the two-year-long Fermi-Large Area Telescope observation of the peculiar blazar 4C +21.35 (PKS 1222+216). This source was in a quiescent state from the start of the science operations of the Fermi Gamma-ray Space Telescope in 2008 August until 2009 September, and then became more active, with gradually increasing flux and some moderately bright flares. In 2010 April and June, 4C +21.35 underwent a very strong GeV outburst composed of several major flares characterized by rise and decay timescales of the order of a day. During the outburst, the GeV spectra of 4C +21.35 displayed a broken power-law form with spectral breaks observed near 1-3 GeV photon energies. We demonstrate that, at least during the major flares, the jet in 4C +21.35 carried a total kinetic luminosity comparable to the total accretion power available to feed the outflow. We also discuss the origin of the break observed in the flaring spectra of 4C +21.35. We show that, in principle, a model involving annihilation of the GeV photons on the He II Lyman recombination continuum and line emission of "broad-line region" clouds may account for such a break. However, we also discuss the additional constraint provided by the detection of 4C +21.35 at 0.07-0.4 TeV energies by the MAGIC telescope, which coincided with one of the GeV flares of the source. We argue that there are reasons to believe that the ≲TeV emission of 4C +21.35 (as well as the GeV emission of the source, if co-spatial) is not likely to be produced inside the broad-line region zone of highest ionization (~10{sup 17} cm from the nucleus), but instead originates further away from the active center, namely, around the characteristic scale of the hot dusty torus surrounding the 4C +21.35 nucleus (~10{sup 19} cm).
THE SLOAN DIGITAL SKY SURVEY REVERBERATION MAPPING PROJECT: TECHNICAL OVERVIEW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Yue; Brandt, W. N.; Dawson, Kyle S.
2015-01-01
The Sloan Digital Sky Survey Reverberation Mapping (SDSS-RM) project is a dedicated multi-object RM experiment that has spectroscopically monitored a sample of 849 broad-line quasars in a single 7 deg{sup 2} field with the SDSS-III Baryon Oscillation Spectroscopic Survey spectrograph. The RM quasar sample is flux-limited to i{sub psf} = 21.7 mag, and covers a redshift range of 0.1 < z < 4.5 without any other cuts on quasar properties. Optical spectroscopy was performed during 2014 January-July dark/gray time, with an average cadence of ∼4 days, totaling more than 30 epochs. Supporting photometric monitoring in the g and i bands was conducted at multiple facilities including the Canada-France-Hawaii Telescope (CFHT) and the Steward Observatory Bok telescope in 2014, with a cadence of ∼2 days and covering all lunar phases. The RM field (R.A., decl. = 14:14:49.00, +53:05:00.0) lies within the CFHT-LS W3 field, and coincides with the Pan-STARRS 1 (PS1) Medium Deep Field MD07, with three prior years of multi-band PS1 light curves. The SDSS-RM six-month baseline program aims to detect time lags between the quasar continuum and broad-line region (BLR) variability on timescales of up to several months (in the observed frame) for ∼10% of the sample, and to anchor the time baseline for continued monitoring in the future to detect lags on longer timescales and at higher redshift. SDSS-RM is the first major program to systematically explore the potential of RM for broad-line quasars at z > 0.3, and will investigate the prospects of RM with all major broad lines covered in optical spectroscopy. SDSS-RM will provide guidance on future multi-object RM campaigns on larger scales, and is aiming to deliver more than tens of BLR lag detections for a homogeneous sample of quasars. We describe the motivation, design, and implementation of this program, and outline the science impact expected from the resulting data for RM and general quasar science.
C IV λ1549 as an Eigenvector 1 Parameter for Active Galactic Nuclei
NASA Astrophysics Data System (ADS)
Sulentic, Jack W.; Bachev, Rumen; Marziani, Paola; Negrete, C. Alenka; Dultzin, Deborah
2007-09-01
We are exploring a spectroscopic unification for all types of broad-line emitting AGNs. The four-dimensional Eigenvector 1 (4DE1) parameter space organizes quasar diversity in a sequence primarily governed by Eddington ratio. This paper considers the role of C IV λ1549 measures as 4DE1 diagnostics. We use HST archival spectra for 130 sources with S/N high enough to permit reliable C IV λ1549 broad-component measures. We find a C IV λ1549BC profile blueshift that is strongly concentrated among (largely radio-quiet [RQ]) sources with FWHM(HβBC) ≲ 4000 km s^-1 (which we call Population A). Narrow-line Seyfert 1 (NLSy1; with FWHM(Hβ) ≤ 2000 km s^-1) sources belong to this population but do not emerge as a distinct class. The systematic blueshift, widely interpreted as arising in a disk wind/outflow, is not observed in broader-line AGNs (including most radio-loud [RL] sources), which we call Population B. We find new correlations involving FWHM(C IV λ1549BC), C IV λ1549 line shift, and equivalent width only among Population A sources. Sulentic et al. suggested that C IV λ1549 measures enhance an apparent dichotomy between sources with FWHM(HβBC) less and greater than 4000 km s^-1, suggesting that it has more significance in the context of broad-line region structure than the more commonly discussed RL versus RQ dichotomy. Black hole masses computed from FWHM(C IV λ1549BC) for about 80 AGNs indicate that the C IV λ1549 width is a poor virial estimator. Comparison of mass estimates derived from HβBC and C IV λ1549 reveals that the latter show different and nonlinear offsets for Population A and B sources. A significant number of sources also show narrow-line C IV λ1549 emission that must be removed before C IV λ1549BC measures can be made and interpreted effectively. We present a recipe for C IV λ1549 narrow-component extraction.
Lago, M. A.; Rúperez, M. J.; Martínez-Martínez, F.; Martínez-Sanchis, S.; Bakic, P. R.; Monserrat, C.
2015-01-01
This paper presents a novel methodology to estimate in-vivo the elastic constants of a constitutive model proposed to characterize the mechanical behavior of the breast tissues. An iterative search algorithm based on genetic heuristics was constructed to estimate these parameters using only medical images, thus avoiding invasive measurements of the mechanical response of the breast tissues. For the first time, a combination of overlap and distance coefficients was used for the evaluation of the similarity between a deformed MRI of the breast and a simulation of that deformation. The methodology was validated using breast software phantoms for virtual clinical trials, compressed to mimic MRI-guided biopsies. The biomechanical model chosen to characterize the breast tissues was an anisotropic neo-Hookean hyperelastic model. Results from this analysis showed that the algorithm is able to find the elastic constants of the constitutive equations of the proposed model with a mean relative error of about 10%. Furthermore, the overlap between the reference deformation and the simulated deformation was around 95%, showing the good performance of the proposed methodology. This methodology can be easily extended to characterize the real biomechanical behavior of the breast tissues, which represents a great novelty in the field of the simulation of breast behavior for applications such as surgical planning, surgical guidance or cancer diagnosis. This reveals the impact and relevance of the presented work.
Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses
ERIC Educational Resources Information Center
Abramovitz, A.
2011-01-01
This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…
Introducing Simulation via the Theory of Records
ERIC Educational Resources Information Center
Johnson, Arvid C.
2011-01-01
While spreadsheet simulation can be a useful method by which to help students to understand some of the more advanced concepts in an introductory statistics course, introducing the simulation methodology at the same time as these concepts can result in student cognitive overload. This article describes a spreadsheet model that has been…
Modeling ground-based timber harvesting systems using computer simulation
Jingxin Wang; Chris B. LeDoux
2001-01-01
Modeling ground-based timber harvesting systems with an object-oriented methodology was investigated. Object-oriented modeling and design promote a better understanding of requirements, cleaner designs, and better maintainability of the harvesting simulation system. The model developed simulates chainsaw felling, drive-to-tree feller-buncher, swing-to-tree single-grip...
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
Enhancing Students' Employability through Business Simulation
ERIC Educational Resources Information Center
Avramenko, Alex
2012-01-01
Purpose: The purpose of this paper is to introduce an approach to business simulation with less dependence on business simulation software to provide innovative work experience within a programme of study, to boost students' confidence and employability. Design/methodology/approach: The paper is based on analysis of existing business simulation…
In Search of Effective Methodology for Organizational Learning: A Japanese Experience
ERIC Educational Resources Information Center
Tsuchiya, Shigehisa
2011-01-01
The author's personal journey regarding simulation and gaming started about 25 years ago when he happened to realize how powerful computerized simulation could be for organizational change. The metaphors created by computerized simulation enabled him to transform a stagnant university into a high-performance organization. Through extensive…
Terminological Ambiguity: Game and Simulation
ERIC Educational Resources Information Center
Klabbers, Jan H. G.
2009-01-01
Since its introduction in academia and professional practice during the 1950s, gaming has been linked to simulation. Although both fields have a few important characteristics in common, they are distinct in their form and underlying theories of knowledge and methodology. Nevertheless, in the literature, hybrid terms such as "gaming/simulation" and…
Teaching Camera Calibration by a Constructivist Methodology
ERIC Educational Resources Information Center
Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.
2010-01-01
This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…
3CE Methodology for Conducting a Modeling, Simulation, and Instrumentation Tool Capability Analysis
2010-05-01
...a modeling, simulation, and instrumentation (MS&I) environment. This methodology uses the DoDAF product set to document operational and systems... engineering process were identified and resolved, such as duplication of data elements derived from DoDAF operational and system views used to...
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturer's datasheet, for performing MPPT simulations is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
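One common way to build such a datasheet-based panel model is the single-diode equation; the sketch below solves it by fixed-point iteration and locates the maximum power point on the resulting I-V curve. The parameter values are illustrative placeholders, and the paper's actual parameter-extraction procedure is not reproduced.

```python
import numpy as np

# Single-diode model: I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
# The parameter values below are illustrative, not from a real datasheet.
Iph, I0 = 5.0, 1e-9        # photocurrent, diode saturation current (A)
Rs, Rsh = 0.2, 300.0       # series / shunt resistance (ohm)
n, Ns = 1.3, 60            # ideality factor, cells in series
Vt = 0.02585               # thermal voltage at 25 C (V)

def current(V, iters=200):
    """Solve the implicit single-diode equation by fixed-point iteration."""
    I = np.full_like(V, Iph, dtype=float)
    for _ in range(iters):
        I = Iph - I0 * (np.exp((V + I * Rs) / (n * Ns * Vt)) - 1) \
            - (V + I * Rs) / Rsh
    return I

# Sweep the I-V curve and locate the maximum power point
V = np.linspace(0, 40, 2000)
I = np.clip(current(V), 0, None)
P = V * I
k = np.argmax(P)
print(f"MPP: V = {V[k]:.2f} V, I = {I[k]:.2f} A, P = {P[k]:.1f} W")
```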
Educational Game Development Approach to a particular case: the donor's evaluation.
Borro Escribano, B; del Blanco, A; Torrente, J; Borro Mate, J M; Fernandez Manjon, B
2015-01-01
Serious games are a current trend, and almost every sector has used them in recent years for different educational purposes. The main research focus of the eLearning research team of the Complutense University of Madrid is the development of low-cost serious games. During the past 10 years, we have been working with and developing serious games, paying special attention to those related to healthcare. From all these studies, a methodology was defined, the Educational Game Development Approach (EGDA), to design, develop, and evaluate game-like simulations or serious games in healthcare. We present the application of the EGDA to a particular case: the development of a serious game representing the donor's evaluation in an intensive care unit from the point of view of a hospital coordinator. In this simulation, we changed the strategy for selecting teaching cases by exponentially increasing their number. This kind of educational content provides several benefits to students as they learn while playing: they receive immediate feedback on mistakes and correct moves, and an objective assessment. These simulations allow the students to practice in a risk-free environment. Moreover, the addition of game elements increases engagement and promotes the retention of important information. A game-like simulation representing a complex medical procedure has been developed through the use of this methodology.
Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests
NASA Astrophysics Data System (ADS)
Izzuddin, Nur; Sunarsih; Priyanto, Agoes
2015-05-01
For a vessel operating in the open sea, a marine diesel engine simulator whose engine rotation is controlled and transmitted through the propeller shaft offers a new methodology for self-propulsion tests that tracks fuel savings in real time. Accordingly, this paper presents a real-time marine diesel engine simulator system for tracking the performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate the fuel rate, engine rotating speed, and the thrust and torque of the propeller, and thus achieve the target vessel speed. The input and output of the real-time control system are the fuel saving rate and the propeller rotating speed, representing the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate fuel savings by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will help users analyze different vessel-speed conditions to obtain better characteristics and hence optimize the fuel saving rate.
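The propeller side of such a simulator loop is commonly closed with open-water coefficients, as in the sketch below; the linear KT(J) and KQ(J) fits and all numbers are illustrative, not taken from the paper's model.

```python
# Propeller open-water model often used to close an engine-propeller loop:
# thrust T = KT*rho*n^2*D^4, torque Q = KQ*rho*n^2*D^5, with KT and KQ
# fitted as functions of the advance ratio J = Va/(n*D).
# Coefficients below are illustrative, not from a real open-water test.
rho = 1025.0          # sea water density (kg/m^3)
D = 4.0               # propeller diameter (m)
wake = 0.25           # wake fraction

def propeller(n, ship_speed):
    Va = ship_speed * (1 - wake)          # advance speed (m/s)
    J = Va / (n * D)                      # advance ratio
    KT = 0.45 - 0.40 * J                  # illustrative linear fits
    KQ = 0.060 - 0.050 * J
    T = KT * rho * n**2 * D**4            # thrust (N)
    Q = KQ * rho * n**2 * D**5            # torque (N*m)
    return T, Q

T, Q = propeller(n=2.0, ship_speed=7.0)   # n in rev/s, speed in m/s
print(f"thrust = {T/1e3:.0f} kN, torque = {Q/1e3:.0f} kN*m")
```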
Simulation of investment returns of toll projects.
DOT National Transportation Integrated Search
2013-08-01
This research develops a methodological framework to illustrate key stages in applying the simulation of investment returns of toll projects, acting as an example process of helping agencies conduct numerical risk analysis by taking certain uncertain...
Combination and selection of traffic safety expert judgments for the prevention of driving risks.
Cabello, Enrique; Conde, Cristina; de Diego, Isaac Martín; Moguerza, Javier M; Redchuk, Andrés
2012-11-02
In this paper, we describe a new framework for combining experts' judgments for the prevention of driving risks in a truck cab. In addition, the methodology shows how to choose, among the experts, the one whose predictions best fit the environmental conditions. The methodology is applied to data sets obtained from a highly immersive truck cabin simulator under natural driving conditions. A nonparametric model based on Nearest Neighbors combined with Restricted Least Squares methods is developed. Three experts were asked to evaluate the driving risk using a Visual Analog Scale (VAS) in a truck simulator where the vehicle dynamics factors were stored. Numerical results show that the methodology is suitable for embedding in real-time systems.
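A hedged sketch of the general scheme, on synthetic data: find the k historical situations nearest to the current driving conditions, fit weights over the experts' VAS ratings by least squares on those neighbors, and constrain the weights to be nonnegative and sum to one. The crude clip-and-normalize projection used here only approximates a proper restricted least squares fit, and all data and names are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic history: driving-condition features, three experts' VAS risk
# ratings per situation, and a reference risk score.
X = rng.normal(size=(200, 4))                          # speed, headway, ...
experts = rng.normal(size=(200, 3)) * 0.5 + X[:, :1]   # correlated with risk
reference = X[:, 0] + 0.1 * rng.normal(size=200)

def combine(x_new, expert_new, k=15):
    """Weight the experts locally: least squares on the k nearest
    historical situations, projected onto nonnegative weights
    that sum to one."""
    d = np.linalg.norm(X - x_new, axis=1)
    nn = np.argsort(d)[:k]
    w, *_ = np.linalg.lstsq(experts[nn], reference[nn], rcond=None)
    w = np.clip(w, 0, None)
    w = w / w.sum() if w.sum() > 0 else np.full(3, 1 / 3)
    return float(expert_new @ w), w

risk, weights = combine(rng.normal(size=4), np.array([0.6, 0.4, 0.7]))
print(f"combined risk = {risk:.2f}, expert weights = {np.round(weights, 2)}")
```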
Harvesting model uncertainty for the simulation of interannual variability
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu
2009-08-01
An innovative modeling strategy is introduced to account for uncertainty in the convective parameterization (CP) scheme of a coupled ocean-atmosphere model. The methodology involves calling the CP scheme several times at every given time step of the model integration to pick the most probable convective state. Each call of the CP scheme is unique in that one of its critical parameter values (which is unobserved but required by the scheme) is chosen randomly over a given range. This methodology is tested with the relaxed Arakawa-Schubert CP scheme in the Center for Ocean-Land-Atmosphere Studies (COLA) coupled general circulation model (CGCM). Relative to the control COLA CGCM, this methodology shows improvement in the El Niño-Southern Oscillation simulation and the Indian summer monsoon precipitation variability.
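The core loop can be sketched as follows; the stand-in scheme, the uniform parameter range, and the median-based selection rule are all assumptions for illustration, since the abstract does not specify how the most probable state is chosen.

```python
import random

random.seed(4)

def convection_scheme(state, crit_param):
    """Stand-in for a convective parameterization: returns a heating
    tendency that depends on the unobserved critical parameter."""
    return state["instability"] * crit_param

def most_probable_tendency(state, n_calls=10, lo=0.1, hi=1.0):
    """Call the CP scheme n_calls times, each time drawing the critical
    parameter uniformly over its plausible range, and keep the candidate
    closest to the ensemble median (one possible selection rule)."""
    tendencies = sorted(convection_scheme(state, random.uniform(lo, hi))
                        for _ in range(n_calls))
    median = tendencies[len(tendencies) // 2]
    return min(tendencies, key=lambda h: abs(h - median))

state = {"instability": 2.3}
print(f"selected heating tendency: {most_probable_tendency(state):.3f}")
```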
USGS Methodology for Assessing Continuous Petroleum Resources
Charpentier, Ronald R.; Cook, Troy A.
2011-01-01
The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.
Combining users' activity survey and simulators to evaluate human activity recognition systems.
Azkune, Gorka; Almeida, Aitor; López-de-Ipiña, Diego; Chen, Liming
2015-04-08
Evaluating human activity recognition systems usually implies following expensive and time-consuming methodologies, where experiments with humans are run with the consequent ethical and legal issues. We propose a novel evaluation methodology to overcome these problems, based on surveys for users and a synthetic dataset generator tool. Surveys allow capturing how different users perform activities of daily living, while the synthetic dataset generator is used to create properly labelled activity datasets modelled with the information extracted from the surveys. Important aspects, such as sensor noise, varying time lapses, and erratic user behaviour, can also be simulated using the tool. The proposed methodology is shown to have very important advantages that allow researchers to carry out their work more efficiently. To evaluate the approach, a synthetic dataset generated following the proposed methodology is compared to a real dataset by computing the similarity between sensor occurrence frequencies. It is concluded that the similarity between the two datasets is more than significant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldemir, Tunc; Denning, Richard; Catalyurek, Umit
Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
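A minimal version of the proposed error-vector-magnitude (EVM) bookkeeping might look as follows; the decaying complex transient injected into a burst of QPSK symbols is a toy stand-in for the circuit-level SET distortion simulated in the paper:

import numpy as np

def evm_percent(ideal, received):
    """Error vector magnitude: RMS error relative to RMS ideal power."""
    err = received - ideal
    return 100 * np.sqrt(np.mean(np.abs(err)**2) / np.mean(np.abs(ideal)**2))

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
symbols = rng.choice(qpsk, size=2000)        # ideal QPSK constellation points

# Toy SET distortion: a decaying complex transient hitting a burst of symbols.
distorted = symbols.copy()
t = np.arange(100)
distorted[500:600] += 0.4 * np.exp(-t / 30) * np.exp(1j * 0.8)

print(f"EVM = {evm_percent(symbols, distorted):.2f} %")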
Design of feedback control systems for stable plants with saturating actuators
NASA Technical Reports Server (NTRS)
Kapasouris, Petros; Athans, Michael; Stein, Gunter
1988-01-01
A systematic control design methodology is introduced for multi-input/multi-output stable open loop plants with multiple saturations. This new methodology is a substantial improvement over previous heuristic single-input/single-output approaches. The idea is to introduce a supervisor loop so that when the references and/or disturbances are sufficiently small, the control system operates linearly as designed. For signals large enough to cause saturations, the control law is modified in such a way as to ensure stability and to preserve, to the extent possible, the behavior of the linear control design. Key benefits of the methodology are: the modified compensator never produces saturating control signals, integrators and/or slow dynamics in the compensator never windup, the directional properties of the controls are maintained, and the closed loop system has certain guaranteed stability properties. The advantages of the new design methodology are illustrated in the simulation of an academic example and the simulation of the multivariable longitudinal control of a modified model of the F-8 aircraft.
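One common way to realize such a supervisor, sketched below under the assumption of a static gain matrix and symmetric actuator limits, is to scale the commanded error by a single factor so that no channel saturates, which preserves the direction of the control vector; the paper's actual control law is dynamic and considerably more sophisticated:

import numpy as np

def supervised_control(K, error, u_limits):
    """Scale the error by a single factor lam in (0, 1] so the linear
    control u = K @ (lam * error) stays inside the actuator limits,
    preserving the direction of the control vector."""
    u = K @ error
    ratios = np.abs(u) / u_limits
    lam = min(1.0, 1.0 / ratios.max())   # lam = 1 when nothing would saturate
    return K @ (lam * error), lam

K = np.array([[2.0, 0.5],
              [0.3, 1.5]])               # toy MIMO gain matrix
u_max = np.array([1.0, 1.0])             # symmetric saturation levels

for e in ([0.1, 0.05], [2.0, 1.0]):      # small vs. large reference error
    u, lam = supervised_control(K, np.array(e), u_max)
    print(f"error={e}, lam={lam:.3f}, u={np.round(u, 3)}")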
Force on Force Modeling with Formal Task Structures and Dynamic Geometry
2017-03-24
task framework, derived using the MMF methodology to structure a complex mission. It further demonstrated the integration of effects from a range of ... application methodology was intended to support a combined developmental testing (DT) and operational testing (OT) strategy for selected systems under test ... methodology to develop new or modify existing Models and Simulations (M&S) to: • Apply data from multiple, distributed sources (including test ...
The SIMRAND methodology - Simulation of Research and Development Projects
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1984-01-01
In research and development projects, a commonly occurring management decision is concerned with the optimum allocation of resources to achieve the project goals. Because of resource constraints, management has to make a decision regarding the set of proposed systems or tasks which should be undertaken. SIMRAND (Simulation of Research and Development Projects) is a methodology which was developed for aiding management in this decision. Attention is given to a problem description, aspects of model formulation, the reduction phase of the model solution, the simulation phase, and the evaluation phase. The implementation of the considered approach is illustrated with the aid of an example which involves a simplified network of the type used to determine the price of silicon solar cells.
Chen, Yen-Ju; Lee, Yen-I; Chang, Wen-Cheng; Hsiao, Po-Jen; You, Jr-Shian; Wang, Chun-Chieh; Wei, Chia-Min
2017-01-01
Hot deformation of Nd-Fe-B magnets has been studied for more than three decades. With a good combination of forming process parameters, the remanence and (BH)max values of Nd-Fe-B magnets can be greatly increased due to the formation of anisotropic microstructures during hot deformation. In this work, a methodology is proposed for visualizing the material flow in hot-deformed Nd-Fe-B magnets via finite element simulation. Material flow in hot-deformed Nd-Fe-B magnets could be predicted by simulation, and the predictions agreed with experimental results. By utilizing this methodology, the correlation between strain distribution and the enhancement of magnetic properties can be better understood. PMID:28970869
An Event-Based Approach to Design a Teamwork Training Scenario and Assessment Tool in Surgery.
Nguyen, Ngan; Watson, William D; Dominguez, Edward
2016-01-01
Simulation is a technique recommended for teaching and measuring teamwork, but few published methodologies are available on how best to design simulation for teamwork training in surgery and health care in general. The purpose of this article is to describe a general methodology, called the event-based approach to training (EBAT), to guide the design of simulation for teamwork training and to discuss its application to surgery. The EBAT methodology draws on the science of training by systematically introducing training exercise events that are linked to training requirements (i.e., competencies being trained and learning objectives) and performance assessment. The EBAT process is illustrated here through its application to surgical team training. Of the 4 teamwork competencies endorsed by the Agency for Healthcare Research and Quality and the Department of Defense, "communication" was chosen to be the focus of our training efforts. A total of 5 learning objectives were defined based on 5 validated teamwork and communication techniques. Diagnostic laparoscopy was chosen as the clinical context to frame the training scenario, and 29 targeted knowledge, skills, and attitudes (KSAs) were defined based on a review of the published literature on patient safety and input from subject matter experts. Critical events included those that correspond to a specific phase in the normal flow of a surgical procedure as well as clinical events that may occur when performing the operation. Similar to the targeted KSAs, targeted responses to the critical events were developed based on existing literature and input from content experts. Finally, a 29-item EBAT-derived checklist was created to assess communication performance. Like any instructional tool, simulation is only effective if it is designed and implemented appropriately. It is recognized that the effectiveness of simulation depends on whether (1) it is built upon a theoretical framework, (2) it uses preplanned structured exercises or events to allow learners the opportunity to exhibit the targeted KSAs, (3) it assesses performance, and (4) it provides formative and constructive feedback to bridge the gap between the learners' KSAs and the targeted KSAs. The EBAT methodology guides the design of simulation that incorporates these 4 features and, thus, enhances training effectiveness with simulation. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
A lava flow simulation model for the development of volcanic hazard maps for Mount Etna (Italy)
NASA Astrophysics Data System (ADS)
Damiani, M. L.; Groppelli, G.; Norini, G.; Bertino, E.; Gigliuto, A.; Nucita, A.
2006-05-01
Volcanic hazard assessment is of paramount importance for safeguarding the resources exposed to volcanic hazards. In this paper we present ELFM, a lava flow simulation model for the evaluation of the lava flow hazard on Mount Etna (Sicily, Italy), the most important active volcano in Europe. The major contributions of the paper are: (a) a detailed specification of the lava flow simulation model and of an algorithm implementing it; (b) the definition of a methodological framework for applying the model to the specific volcano. Regarding the former, we propose an extended version of an existing stochastic model that has so far been applied only to the assessment of volcanic hazard on Lanzarote and Tenerife (Canary Islands). Concerning the methodological framework, we argue that model validation is essential for assessing the effectiveness of the lava flow simulation model. To that end, a strategy has been devised for the generation of simulation experiments and the evaluation of their outcomes.
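Stochastic lava flow models of the Lanzarote/Tenerife family propagate many random walks over a digital elevation model, stepping preferentially toward lower neighbors and accumulating a visit-frequency hazard map. The following toy Python sketch (cone-shaped synthetic DEM, invented step probabilities) illustrates the principle only and is not the ELFM specification:

import numpy as np

rng = np.random.default_rng(1)

def lava_path(dem, start, max_steps=500):
    """One stochastic lava path: at each cell, step to a random lower
    neighbour with probability proportional to the height drop."""
    r, c = start
    path = [start]
    for _ in range(max_steps):
        nbrs, drops = [], []
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1),
                       (-1, -1), (-1, 1), (1, -1), (1, 1)]:
            rr, cc = r + dr, c + dc
            if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
                drop = dem[r, c] - dem[rr, cc]
                if drop > 0:
                    nbrs.append((rr, cc))
                    drops.append(drop)
        if not nbrs:                      # local pit: the flow stops
            break
        p = np.array(drops) / sum(drops)
        r, c = nbrs[rng.choice(len(nbrs), p=p)]
        path.append((r, c))
    return path

# Toy DEM: a noisy cone; hazard = fraction of paths visiting each cell.
y, x = np.mgrid[0:60, 0:60]
dem = 1000 - np.hypot(x - 30, y - 30) * 10 + rng.normal(0, 3, (60, 60))
hazard = np.zeros_like(dem)
for _ in range(2000):
    for cell in lava_path(dem, (30, 30)):   # vent at the summit
        hazard[cell] += 1
hazard /= 2000
print("max cell hazard:", hazard.max())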
An ultrasonic methodology for muscle cross-section measurement in support of space flight
NASA Astrophysics Data System (ADS)
Hatfield, Thomas R.; Klaus, David M.; Simske, Steven J.
2004-09-01
The number one priority for any manned space mission is the health and safety of its crew. The study of the short and long term physiological effects on humans is paramount to ensuring crew health and mission success. One of the challenges associated in studying the physiological effects of space flight on humans, such as loss of bone and muscle mass, has been that of readily attaining the data needed to characterize the changes. The small sampling size of astronauts, together with the fact that most physiological data collection tends to be rather tedious, continues to hinder elucidation of the underlying mechanisms responsible for the observed changes that occur in space. Better characterization of the muscle loss experienced by astronauts requires that new technologies be implemented. To this end, we have begun to validate a 360° ultrasonic scanning methodology for muscle measurements and have performed empirical sampling of a limb surrogate for comparison. Ultrasonic wave propagation was simulated using 144 stations of rotated arm and calf MRI images. These simulations were intended to provide a preliminary check of the scanning methodology and data analysis before its implementation with hardware. Pulse-echo waveforms were processed for each rotation station to characterize fat, muscle, bone, and limb boundary interfaces. The percentage error between MRI reference values and calculated muscle areas, as determined from reflection points for calf and arm cross sections, was -2.179% and +2.129%, respectively. These successful simulations suggest that ultrasound pulse scanning can be used to effectively determine limb cross-sectional areas. Cross-sectional images of a limb surrogate were then used to simulate signal measurements at several rotation angles, with ultrasonic pulse-echo sampling performed experimentally at the same stations on the actual limb surrogate to corroborate the results. The objective of the surrogate sampling was to compare the signal output of the simulation tool used as a methodology validation for actual tissue signals. The disturbance patterns of the simulated and sampled waveforms were consistent. Although only discussed as a small part of the work presented, the sampling portion also helped identify important considerations such as tissue compression and transducer positioning for future work involving tissue scanning with this methodology.
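Given radial boundary distances recovered from the pulse-echo reflection points at equally spaced rotation stations, the enclosed cross-sectional area follows from the polygon formula A = (1/2) Σ r_i r_{i+1} sin(Δθ). A short check against an elliptical boundary, with invented semi-axes:

import numpy as np

def cross_section_area(radii):
    """Area enclosed by a boundary sampled as radial distances r_i at
    equally spaced rotation angles: A = 0.5 * sum(r_i * r_{i+1}) * sin(dtheta)."""
    n = len(radii)
    dtheta = 2 * np.pi / n
    r = np.asarray(radii)
    return 0.5 * np.sin(dtheta) * np.sum(r * np.roll(r, -1))

# 144 stations on a slightly elliptical 'calf' boundary (semi-axes 6 and 5 cm)
theta = np.linspace(0, 2 * np.pi, 144, endpoint=False)
a, b = 6.0, 5.0
radii = a * b / np.sqrt((b * np.cos(theta))**2 + (a * np.sin(theta))**2)

print(f"polygon area = {cross_section_area(radii):.2f} cm^2")
print(f"exact ellipse area = {np.pi * a * b:.2f} cm^2")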
Management of health care expenditure by soft computing methodology
NASA Astrophysics Data System (ADS)
Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad
2017-01-01
In this study, health care expenditure was analyzed using soft computing methodology. The main goal was to predict gross domestic product (GDP) from several health care expenditure factors. Soft computing methodologies were applied because GDP prediction is a very complex task. The performance of the proposed predictors was confirmed by the simulation results. According to the results, support vector regression (SVR) has better prediction accuracy than the other soft computing methodologies. The soft computing methods benefit from global optimization capabilities that avoid local-minimum issues.
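A minimal sketch of the SVR-based prediction step, using scikit-learn on synthetic stand-in data (the expenditure factors and their relation to GDP are invented, and the study's actual feature set and tuning are not reproduced here):

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Synthetic stand-in data: rows = country-years, columns = hypothetical
# health-expenditure factors (total, public, out-of-pocket, per-capita).
X = rng.normal(size=(120, 4))
gdp = 2.0 * X[:, 0] + 0.5 * X[:, 1]**2 - X[:, 2] + rng.normal(0, 0.3, 120)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, X, gdp, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f} +/- {scores.std():.3f}")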
User's Guide for a Computerized Track Maintenance Simulation Cost Methodology
DOT National Transportation Integrated Search
1982-02-01
This User's Guide describes the simulation cost modeling technique developed for costing of maintenance operations of track and its component structures. The procedure discussed provides for separate maintenance cost entries to be associated with def...
Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions
NASA Technical Reports Server (NTRS)
Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.
2011-01-01
A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.
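The surrogate step described above can be illustrated compactly: train a small neural network on (damage parameters, residual strength) pairs and query it in real time. In the sketch below, the damage parameters, the closed-form stand-in for the fracture code, and the network size are all invented for illustration:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Design of experiment over hypothetical damage parameters:
# crack length (in), crack angle (deg), damage location along a stiffener.
X = rng.uniform([1.0, 0.0, 0.0], [10.0, 90.0, 1.0], size=(200, 3))

# Stand-in for the expensive fracture code: residual strength (ksi) falls
# with crack length and is modulated by angle and location.
strength = 50 - 3.0 * X[:, 0] + 5.0 * np.cos(np.radians(X[:, 1])) + 4.0 * X[:, 2]

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=5000, random_state=0))
surrogate.fit(X, strength)

# Real-time query for a new discrete-source damage case
print(surrogate.predict([[4.5, 30.0, 0.6]]))   # predicted residual strength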
Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions
NASA Technical Reports Server (NTRS)
Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.
2011-01-01
A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data does, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high fidelity fracture simulation framework provide useful tools for adaptive flight technology.
VizieR Online Data Catalog: SDSS-RM project: peak velocities of QSOs (Shen+, 2016)
NASA Astrophysics Data System (ADS)
Shen, Y.; Brandt, W. N.; Richards, G. T.; Denney, K. D.; Greene, J. E.; Grier, C. J.; Ho, L. C.; Peterson, B. M.; Petitjean, P.; Schneider, D. P.; Tao, C.; Trump, J. R.
2017-01-01
The SDSS-RM quasar sample includes 849 broad-line quasars at 0.1 < z < 4.5.
A Multiwavelength Study of POX 52, a Dwarf Seyfert Galaxy with an Intermediate Mass Black Hole
NASA Astrophysics Data System (ADS)
Barth, Aaron
2004-09-01
POX 52 is a Seyfert 1 galaxy with unprecedented properties: its host galaxy is a dwarf elliptical, and its stellar velocity dispersion is only 36 km/s. The stellar velocity dispersion and the broad emission-line widths both suggest a black hole mass of order 10^5 solar masses. We request HST ACS/HRC imaging to perform a definitive measurement of the host galaxy structure; STIS UV and optical spectroscopy to study the nonstellar continuum and the structure of the broad-line region; and Chandra ACIS imaging to investigate the spectral and variability properties of the X-ray emission. The results of this program will give a detailed understanding of the host galaxy and accretion properties of one of the very few known black holes in the mass range around 10^5 solar masses.
Magnetization of AGN jets as imposed by leptonic models of luminous blazars
NASA Astrophysics Data System (ADS)
Janiak, Mateusz; Sikora, Marek; Moderski, Rafal
2015-03-01
Recent measurements of the frequency-dependent shift of radio-core locations indicate that the ratio of the magnetic to kinetic energy flux (the σ parameter) is of the order of unity. These results are consistent with predictions of magnetically-arrested-disk (MAD) models of jet formation, but contradict the predictions of leptonic models of γ-ray production in luminous blazars. We demonstrate this discrepancy by computing the γ-ray-to-synchrotron luminosity ratio (the q parameter) as a function of distance from the black hole for different values of σ, using both spherical and planar models for the broad-line region and dusty torus. We find that it is impossible to reproduce the observed q >> 1 for jets with σ >= 1. This may indicate that blazar radiation is produced in reconnection layers or in the spines of magnetically stratified jets.
Ultraviolet and optical spectrophotometry of the Seyfert 1.8 galaxy Markarian 609
NASA Technical Reports Server (NTRS)
Rudy, Richard J.; Cohen, Ross D.; Ake, T. B.
1988-01-01
Ultraviolet and optical observations of the Seyfert 1.8 galaxy Mrk 609 were collected simultaneously. The observations reveal strong line and continuum emission in the UV, an increase in the flux of H-beta and He I 5876, and a decrease in the H-alpha/H-beta ratio since the measurements by Osterbrock (1978, 1981), as well as an extended population of early-type stars, which is considered to be the source powering the larger part of the far-IR emission. Special attention is given to the origin of the steep broad-line Balmer decrement measured by Osterbrock, since the observed strong UV continuum and emission lines of Mrk 609 rule out reddening as the cause of the decrement. It is suggested that smaller-than-normal optical depths are the likely cause of the decrement.
ERIC Educational Resources Information Center
Weiss, Charles J.
2017-01-01
An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach, obscuring the methodology being…
ERIC Educational Resources Information Center
Dotger, Benjamin H.
2011-01-01
The induction years of teaching challenge novice educators to quickly transition from what they learned as teacher candidates into what they can do as emerging professionals. This article outlines a simulated interaction methodology to help bridge teacher preparation and practice. Building from examples of simulated interactions between teacher…
Where Are We? An Analysis of the Methods and Focus of the Research on Simulation Gaming.
ERIC Educational Resources Information Center
Butler, Richard J.; And Others
1988-01-01
Designed to determine whether research in simulation and gaming follows research design methodology and the degree to which research is directed toward learning outcomes measured by Bloom's taxonomy of educational objectives, this article examines studies reported in proceedings from the Association for Business Simulation and Experiential…
Abdominal surgery process modeling framework for simulation using spreadsheets.
Boshkoska, Biljana Mileva; Damij, Talib; Jelenc, Franc; Damij, Nadja
2015-08-01
We provide a continuation of the existing Activity Table Modeling methodology with a modular spreadsheets simulation. The simulation model developed is comprised of 28 modeling elements for the abdominal surgery cycle process. The simulation of a two-week patient flow in an abdominal clinic with 75 beds demonstrates the applicability of the methodology. The simulation does not include macros, thus programming experience is not essential for replication or upgrading the model. Unlike the existing methods, the proposed solution employs a modular approach for modeling the activities that ensures better readability, the possibility of easily upgrading the model with other activities, and its easy extension and connectives with other similar models. We propose a first-in-first-served approach for simulation of servicing multiple patients. The uncertain time duration of the activities is modeled using the function "rand()". The patients movements from one activity to the next one is tracked with nested "if()" functions, thus allowing easy re-creation of the process without the need of complex programming. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
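The rand()/nested-if() spreadsheet logic translates directly into a few lines of Python. The sketch below walks each patient through an invented activity chain with uniformly random durations; unlike the authors' model, it omits bed capacity and resource contention for brevity:

import heapq
import random

random.seed(2)
ACTIVITIES = ["admission", "pre-op", "surgery", "recovery", "discharge"]
DURATION = {"admission": (0.5, 1), "pre-op": (1, 3), "surgery": (2, 5),
            "recovery": (24, 72), "discharge": (0.5, 1)}   # hours, min-max

# Each patient arrives at a random time over a two-week horizon and walks
# the activity chain in order; events are processed first-in-first-served.
events = [(random.uniform(0, 336), pid, 0) for pid in range(40)]
heapq.heapify(events)
while events:
    t, pid, stage = heapq.heappop(events)
    if stage == len(ACTIVITIES):             # patient has been discharged
        continue
    act = ACTIVITIES[stage]
    dur = random.uniform(*DURATION[act])     # spreadsheet rand() equivalent
    heapq.heappush(events, (t + dur, pid, stage + 1))
    if act == "surgery":
        print(f"patient {pid:2d} enters surgery at t={t:6.1f} h")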
Lewis, F.M.; Voss, C.I.; Rubin, J.
1987-01-01
Methodologies that account for specific types of chemical reactions in the simulation of solute transport can be developed so they are compatible with solution algorithms employed in existing transport codes. This enables the simulation of reactive transport in complex multidimensional flow regimes, and provides a means for existing codes to account for some of the fundamental chemical processes that occur among transported solutes. Two equilibrium-controlled reaction systems demonstrate a methodology for accommodating chemical interaction into models of solute transport. One system involves the sorption of a given chemical species, as well as two aqueous complexations in which the sorbing species is a participant. The other reaction set involves binary ion exchange coupled with aqueous complexation involving one of the exchanging species. The methodology accommodates these reaction systems through the addition of nonlinear terms to the transport equations for the sorbing species. Example simulation results show (1) the effect equilibrium chemical parameters have on the spatial distributions of concentration for complexing solutes; (2) that an interrelationship exists between mechanical dispersion and the various reaction processes; (3) that dispersive parameters of the porous media cannot be determined from reactive concentration distributions unless the reaction is accounted for or the influence of the reaction is negligible; (4) how the concentration of a chemical species may be significantly affected by its participation in an aqueous complex with a second species which also sorbs; and (5) that these coupled chemical processes influencing reactive transport can be demonstrated in two-dimensional flow regimes. © 1987.
A Framework for Determining the Return on Investment of Simulation-Based Training in Health Care
Bukhari, Hatim; Andreatta, Pamela; Goldiez, Brian; Rabelo, Luis
2017-01-01
This article describes a framework that has been developed to monetize the real value of simulation-based training in health care. Significant consideration has been given to incorporating the intangible and qualitative benefits, not only the tangible and quantitative benefits, of simulation-based training in health care. The framework builds from three works: the value measurement methodology (VMM) used by several departments of the US Government, a methodology documented in several books by Dr Jack Phillips to monetize various training approaches, and a traditional return on investment methodology put forth by Frost and Sullivan and Immersion Medical. All 3 source materials were adapted to create an integrated methodology that can be readily implemented. This article presents details on each of these methods, shows how they can be integrated, and presents a framework that integrates them. In addition, it describes the concept and application of the developed framework. As a test of applicability, a real case study is used to demonstrate the application of the framework. This case study provides real data on the correlation between pediatric patient cardiopulmonary arrest (CPA) survival rates and simulation-based mock codes at the University of Michigan tertiary care academic medical center. It is important to point out that the proposed framework offers the capability to consider a wide range of benefits and values; on the other hand, there are several limitations that have been discussed and need to be taken into consideration. PMID:28133988
Chauvin, Anthony; Truchot, Jennifer; Bafeta, Aida; Pateron, Dominique; Plaisance, Patrick; Yordanov, Youri
2018-04-01
The number of trials assessing simulation-based medical education (SBME) interventions has rapidly expanded. Many studies show that potential flaws in the design, conduct and reporting of randomized controlled trials (RCTs) can bias their results. We conducted a methodological review of RCTs assessing SBME in emergency medicine (EM) and examined their methodological characteristics. We searched MEDLINE via PubMed for RCTs that assessed a simulation intervention in EM, published in 6 general and internal medicine journals and in the top 10 EM journals. The Cochrane Collaboration risk of bias tool was used to assess risk of bias, intervention reporting was evaluated based on the "template for intervention description and replication" checklist, and methodological quality was evaluated with the Medical Education Research Study Quality Instrument (MERSQI). Report selection and data extraction were done by 2 independent researchers. Of the 1394 RCTs screened, 68 trials assessed an SBME intervention; they represent one quarter of our sample. Cardiopulmonary resuscitation (CPR) was the most frequent topic (81%). Random sequence generation and allocation concealment were performed correctly in 66% and 49% of trials, respectively. Blinding of participants and assessors was performed correctly in 19% and 68%, respectively. Risk of attrition bias was low in three-quarters of the studies (n = 51). Risk of selective reporting bias was unclear in nearly all studies. The mean MERSQI score was 13.4/18, and 4% of the reports provided a description allowing replication of the intervention. Trials assessing simulation represent one quarter of RCTs in EM. Their quality remains unclear, and reproducing the interventions appears challenging due to reporting issues.
DOT National Transportation Integrated Search
1995-01-01
Prepared ca. 1995. This paper illustrates the use of the simulation-optimization technique of response surface methodology (RSM) in traffic signal optimization of urban networks. It also quantifies the gains of using the common random number (CRN) va...
Maljovec, D.; Liu, S.; Wang, B.; ...
2015-07-14
Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
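For a single fault section, the renewal-model probability in question is the conditional probability that the next rupture occurs within a forecast horizon, given the time elapsed since the last event. A sketch using a Brownian Passage Time (inverse Gaussian) renewal model, one common choice in this literature, with invented parameter values:

from scipy.stats import invgauss

def conditional_prob(mean_ri, aperiodicity, elapsed, horizon):
    """P(next rupture within `horizon` yr | `elapsed` yr since the last),
    for a Brownian Passage Time (inverse Gaussian) renewal model."""
    # scipy's invgauss(mu, scale) has mean mu*scale; matching a BPT model
    # with mean recurrence interval m and aperiodicity alpha gives
    # mu = alpha**2 and scale = m / alpha**2.
    dist = invgauss(mu=aperiodicity**2, scale=mean_ri / aperiodicity**2)
    survival = dist.sf(elapsed)
    return (dist.cdf(elapsed + horizon) - dist.cdf(elapsed)) / survival

# Example: 200-yr mean recurrence, aperiodicity 0.5, 150 yr since last event
print(f"30-yr probability: {conditional_prob(200, 0.5, 150, 30):.3f}")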
Novel thermal management system design methodology for power lithium-ion battery
NASA Astrophysics Data System (ADS)
Nieto, Nerea; Díaz, Luis; Gastelurrutia, Jon; Blanco, Francisco; Ramos, Juan Carlos; Rivas, Alejandro
2014-12-01
Battery packs composed of large-format lithium-ion cells are increasingly being adopted in hybrid and pure electric vehicles in order to use the energy more efficiently and for better environmental performance. Safety and cycle life are two of the main concerns regarding this technology, which are closely related to the cell's operating behavior and temperature asymmetries in the system. Therefore, the temperature of the cells in battery packs needs to be controlled by thermal management systems (TMSs). In the present paper an improved design methodology for developing TMSs is proposed. This methodology involves the development of different mathematical models for heat generation, transmission, and dissipation and their coupling and integration into the battery pack product design methodology in order to improve overall safety and performance. The methodology is validated by comparing simulation results with laboratory measurements on a single module of the battery pack designed at IK4-IKERLAN for a traction application. The maximum difference between model predictions and experimental temperature data is 2 °C. The models developed have shown potential for use in battery thermal management studies for EV/HEV applications since they allow for scalability with accuracy and reasonable simulation time.
Adaptive System Modeling for Spacecraft Simulation
NASA Technical Reports Server (NTRS)
Thomas, Justin
2011-01-01
This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology: it provides analysis tools to design the adaptive models, as well as the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: it creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior; it automatically updates/calibrates system models using the latest streaming sensor data; it creates device-specific models that capture the exact behavior of devices of the same type; it adapts to evolving systems; and it can reduce computational complexity (faster simulations).
Improta, Giovanni; Russo, Mario Alessandro; Triassi, Maria; Converso, Giuseppe; Murino, Teresa; Santillo, Liberatina Carmela
2018-05-01
Health technology assessments (HTAs) are often difficult to conduct because the decision procedures of the HTA algorithm are often complex and not easy to apply; thus, their use is not always convenient or possible for the assessment of technical requests requiring a multidisciplinary approach. This paper aims to address this issue through a multi-criteria analysis focusing on the analytic hierarchy process (AHP). This methodology allows the decision maker to analyse and evaluate different alternatives and monitor their impact on different actors during the decision-making process. However, the multi-criteria analysis is implemented through a simulation model to overcome the limitations of the AHP methodology. Simulations help decision-makers make an appropriate decision and avoid unnecessary and costly attempts. Finally, a decision problem regarding the evaluation of two health technologies, namely two biological prostheses for infected incisional hernias, is analysed to assess the effectiveness of the model. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, Mario, E-mail: mgsantoss@gmail.com; Freitas, Raul, E-mail: raulfreitas@portugalmail.com; Crespi, Antonio L., E-mail: aluis.crespi@gmail.com
2011-10-15
This study assesses the potential of an integrated methodology for predicting local trends in invasive exotic plant species (invasive richness) using indirect, regional information on human disturbance. The distribution of invasive plants was assessed in North Portugal using herbarium collections and local environmental, geophysical and socio-economic characteristics. Invasive richness response to anthropogenic disturbance was predicted using a dynamic model based on a sequential modeling process (stochastic dynamic methodology, StDM). Derived scenarios showed that invasive richness trends were clearly associated with ongoing socio-economic change. Simulations including scenarios of growing urbanization showed an increase in invasive richness, while simulations in municipalities with decreasing populations showed stable or decreasing levels of invasive richness. The model simulations demonstrate the interest and feasibility of using this methodology in disturbance ecology. Highlights: socio-economic data indicate human-induced disturbances; socio-economic development increases disturbance in ecosystems; disturbance promotes opportunities for invasive plants; increased opportunities promote the richness of invasive plants; an increase in the richness of invasive plants changes natural ecosystems.
Numerical simulation of the SAGD process coupled with geomechanical behavior
NASA Astrophysics Data System (ADS)
Li, Pingke
Canada has vast oil sand resources. While a large portion of this resource can be recovered by surface mining techniques, a majority is located at depths requiring the application of in situ recovery technologies. Although a number of in situ recovery technologies exist, the steam assisted gravity drainage (SAGD) process has emerged as one of the most promising technologies to develop the in situ oil sands resources. During SAGD operations, saturated steam is continuously injected into the oil sands reservoir, which induces pore pressure and stress variations. As a result, reservoir parameters and processes may also vary, particularly when tensile and shear failure occur. This geomechanical effect is pronounced for oil sands material because oil sands have an interlocked in situ fabric. Conventional reservoir simulation generally does not take this coupled mechanism into consideration. Therefore, this research aims to improve the reservoir simulation techniques of the SAGD process applied in the development of oil sands and heavy oil reservoirs. The analyses of the decoupled reservoir geomechanical simulation results show that the geomechanical behavior in SAGD has an obvious impact on reservoir parameters, such as absolute permeability. The issues with the coupled reservoir geomechanical simulations of the SAGD process have been clarified and the permeability variations due to geomechanical behaviors in the SAGD process investigated. A methodology of sequentially coupled reservoir geomechanical simulation was developed based on the reservoir simulator, EXOTHERM, and the geomechanical simulator, FLAC. In addition, a representative geomechanical model of oil sands material was summarized in this research. Finally, this reservoir geomechanical simulation methodology was verified with the UTF Phase A SAGD project and applied in a SAGD operation with gas-over-bitumen geometry. Based on this methodology, the geomechanical effect on SAGD production performance can be quantified. This research program involves the analyses of laboratory testing results obtained from the literature; no new laboratory testing was conducted in the course of this research.
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
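The underlying simulation-based power principle can be shown with a deliberately simplified stand-in: simulate many studies, fit full and reduced models, apply the likelihood ratio test, and report the detection fraction as power. The Gaussian toy model below is not the Monte Carlo Mapped Power method or a NONMEM analysis; the effect size, residual noise, and design are invented:

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(11)

def lrt_power(n, effect=0.25, sd=0.4, n_sim=1000, alpha=0.05):
    """Fraction of simulated studies in which a likelihood-ratio test
    (full vs. reduced model) detects a binary covariate effect."""
    crit = chi2.ppf(1 - alpha, df=1)
    hits = 0
    for _ in range(n_sim):
        cov = rng.permutation(np.arange(n) % 2)       # balanced 0/1 covariate
        y = effect * cov + rng.normal(0, sd, n)       # e.g. log-clearance
        rss_full = sum(((y[cov == g] - y[cov == g].mean())**2).sum()
                       for g in (0, 1))
        rss_red = ((y - y.mean())**2).sum()
        lrt = n * np.log(rss_red / rss_full)          # Gaussian -2*dlogL
        hits += lrt > crit
    return hits / n_sim

for n in (20, 40, 60, 80, 100):
    print(f"n={n:3d}: power = {lrt_power(n):.2f}")    # pick smallest n > 0.80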
Multibody dynamic simulation of knee contact mechanics
Bei, Yanhong; Fregly, Benjamin J.
2006-01-01
Multibody dynamic musculoskeletal models capable of predicting muscle forces and joint contact pressures simultaneously would be valuable for studying clinical issues related to knee joint degeneration and restoration. Current three-dimensional multi-body knee models are either quasi-static with deformable contact or dynamic with rigid contact. This study proposes a computationally efficient methodology for combining multibody dynamic simulation methods with a deformable contact knee model. The methodology requires preparation of the articular surface geometry, development of efficient methods to calculate distances between contact surfaces, implementation of an efficient contact solver that accounts for the unique characteristics of human joints, and specification of an application programming interface for integration with any multibody dynamic simulation environment. The current implementation accommodates natural or artificial tibiofemoral joint models, small or large strain contact models, and linear or nonlinear material models. Applications are presented for static analysis (via dynamic simulation) of a natural knee model created from MRI and CT data and dynamic simulation of an artificial knee model produced from manufacturer’s CAD data. Small and large strain natural knee static analyses required 1 min of CPU time and predicted similar contact conditions except for peak pressure, which was higher for the large strain model. Linear and nonlinear artificial knee dynamic simulations required 10 min of CPU time and predicted similar contact force and torque but different contact pressures, which were lower for the nonlinear model due to increased contact area. This methodology provides an important step toward the realization of dynamic musculoskeletal models that can predict in vivo knee joint motion and loading simultaneously. PMID:15564115
Nonlinear maneuver autopilot for the F-15 aircraft
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Badgett, M. E.; Walker, R. A.
1989-01-01
A methodology is described for the development of flight test trajectory control laws based on singular perturbation methodology and nonlinear dynamic modeling. The control design methodology is applied to a detailed nonlinear six degree-of-freedom simulation of the F-15, and results for level accelerations, a pushover/pullup maneuver, a zoom and pushover maneuver, an excess thrust windup turn, a constant thrust windup turn, and a constant dynamic pressure/constant load factor trajectory are presented.
Bates, Nathaniel A.; Nesbitt, Rebecca J.; Shearn, Jason T.; Myer, Gregory D.; Hewett, Timothy E.
2015-01-01
Six degree of freedom (6-DOF) robotic manipulators have simulated clinical tests and gait on cadaveric knees to examine knee biomechanics. However, these activities do not necessarily emulate the kinematics and kinetics that lead to anterior cruciate ligament (ACL) rupture. The purpose of this study was to determine the techniques needed to derive reproducible, in vitro simulations from in vivo skin-marker kinematics recorded during simulated athletic tasks. Input of raw, in vivo, skin-marker-derived motion capture kinematics consistently resulted in specimen failure. The protocol described in this study developed an in-depth methodology to adapt in vivo kinematic recordings into 6-DOF knee motion simulations for drop vertical jumps and sidestep cutting. Our simulation method repeatably produced kinetics consistent with vertical ground reaction patterns while preserving specimen integrity. Athletic task simulation represents an advancement that allows investigators to examine ACL-intact and graft biomechanics during motions that generate greater kinetics, and the athletic tasks are more representative of documented cases of ligament rupture. Establishment of baseline functional mechanics within the knee joint during athletic tasks will serve to advance the prevention, repair and rehabilitation of ACL injuries. PMID:25869454
End-To-End Simulation of Launch Vehicle Trajectories Including Stage Separation Dynamics
NASA Technical Reports Server (NTRS)
Albertson, Cindy W.; Tartabini, Paul V.; Pamadi, Bandu N.
2012-01-01
The development of methodologies, techniques, and tools for analysis and simulation of stage separation dynamics is critically needed for successful design and operation of multistage reusable launch vehicles. As a part of this activity, the Constraint Force Equation (CFE) methodology was developed and implemented in the Program to Optimize Simulated Trajectories II (POST2). The objective of this paper is to demonstrate the capability of POST2/CFE to simulate a complete end-to-end mission. The vehicle configuration selected was the Two-Stage-To-Orbit (TSTO) Langley Glide Back Booster (LGBB) bimese configuration, an in-house concept consisting of a reusable booster and an orbiter having identical outer mold lines. The proximity and isolated aerodynamic databases used for the simulation were assembled using wind-tunnel test data for this vehicle. POST2/CFE simulation results are presented for the entire mission, from lift-off, through stage separation, orbiter ascent to orbit, and booster glide back to the launch site. Additionally, POST2/CFE stage separation simulation results are compared with results from industry standard commercial software used for solving dynamics problems involving multiple bodies connected by joints.
Beillas, Philippe; Berthet, Fabien
2017-05-29
Human body models have the potential to better describe the human anatomy and variability than dummies. However, the data sets available to verify the human response to impact are typically limited in number, and they are not size- or gender-specific. The objective of this study was to investigate the use of model morphing methodologies within that context. In this study, a simple human model scaling methodology was developed to morph two detailed human models (Global Human Body Model Consortium models 50th male, M50, and 5th female, F05) to the dimensions of post mortem human surrogates (PMHS) used in the published literature. The methodology was then successfully applied to 52 PMHS tested in 14 impact conditions loading the abdomen. The corresponding 104 simulations were compared to the responses of the PMHS and to the responses of the baseline models without scaling (28 simulations). The responses were analysed using the CORA method and peak values. The results suggest that model scaling leads to an improvement of the predicted force and deflection but has more marginal effects on the predicted abdominal compressions. M50 and F05 models scaled to the same PMHS were also found to have similar external responses, but large differences were found between the two sets of models for the strain energy densities in the liver and the spleen for mid-abdomen impact simulations. These differences, which were attributed to the anatomical differences in the abdomen of the baseline models, highlight the importance of the selection of the impact condition for simulation studies, especially if the organ location is not known in the test. While the methodology could be further improved, it shows the feasibility of using model scaling methodologies to compare human models of different sizes and to evaluate scaling approaches within the context of human model validation.
DOT National Transportation Integrated Search
2017-02-01
As part of the Federal Highway Administration (FHWA) Traffic Analysis Toolbox (Volume XIII), this guide was designed to help corridor stakeholders implement the Integrated Corridor Management (ICM) Analysis, Modeling, and Simulation (AMS) methodology...
High-fidelity large eddy simulation for supersonic jet noise prediction
NASA Astrophysics Data System (ADS)
Aikens, Kurt M.
The problem of intense sound radiation from supersonic jets is a concern for both civil and military applications. As a result, many experimental and computational efforts are focused at evaluating possible noise suppression techniques. Large-eddy simulation (LES) is utilized in many computational studies to simulate the turbulent jet flowfield. Integral methods such as the Ffowcs Williams-Hawkings (FWH) method are then used for propagation of the sound waves to the farfield. Improving the accuracy of this two-step methodology and evaluating beveled converging-diverging nozzles for noise suppression are the main tasks of this work. First, a series of numerical experiments are undertaken to ensure adequate numerical accuracy of the FWH methodology. This includes an analysis of different treatments for the downstream integration surface: with or without including an end-cap, averaging over multiple end-caps, and including an approximate surface integral correction term. Secondly, shock-capturing methods based on characteristic filtering and adaptive spatial filtering are used to extend a highly-parallelizable multiblock subsonic LES code to enable simulations of supersonic jets. The code is based on high-order numerical methods for accurate prediction of the acoustic sources and propagation of the sound waves. Furthermore, this new code is more efficient than the legacy version, allows cylindrical multiblock topologies, and is capable of simulating nozzles with resolved turbulent boundary layers when coupled with an approximate turbulent inflow boundary condition. Even though such wall-resolved simulations are more physically accurate, their expense is often prohibitive. To make simulations more economical, a wall model is developed and implemented. The wall modeling methodology is validated for turbulent quasi-incompressible and compressible zero pressure gradient flat plate boundary layers, and for subsonic and supersonic jets. The supersonic code additions and the wall model treatment are then utilized to simulate military-style nozzles with and without beveling of the nozzle exit plane. Experiments of beveled converging-diverging nozzles have found reduced noise levels for some observer locations. Predicting the noise for these geometries provides a good initial test of the overall methodology for a more complex nozzle. The jet flowfield and acoustic data are analyzed and compared to similar experiments and excellent agreement is found. Potential areas of improvement are discussed for future research.
SIMCA T 1.0: A SAS Computer Program for Simulating Computer Adaptive Testing
ERIC Educational Resources Information Center
Raiche, Gilles; Blais, Jean-Guy
2006-01-01
Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…
Full-Envelope Launch Abort System Performance Analysis Methodology
NASA Technical Reports Server (NTRS)
Aubuchon, Vanessa V.
2014-01-01
The implementation of a new dispersion methodology is described, which disperses abort initiation altitude or time along with all other Launch Abort System (LAS) parameters during Monte Carlo simulations. In contrast, the standard methodology assumes that an abort initiation condition is held constant (e.g., aborts initiated at the altitude for Mach 1, the altitude for maximum dynamic pressure, etc.) while dispersing other LAS parameters. The standard method results in large gaps in performance information due to the discrete nature of the initiation conditions, while the full-envelope dispersion method provides a significantly more comprehensive assessment of LAS abort performance for the full launch vehicle ascent flight envelope and identifies performance "pinch-points" that may occur at flight conditions outside of those contained in the discrete set. The new method has significantly increased the fidelity of LAS abort simulations and confidence in the results.
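The contrast between the standard and full-envelope approaches can be caricatured in a few lines. In the sketch below, abort_margin is an invented stand-in for a LAS performance metric; the point is that holding the abort time at discrete conditions can miss failure regions that uniform dispersion over the whole ascent exposes:

import numpy as np

rng = np.random.default_rng(5)
N = 10_000

def abort_margin(t_abort, thrust_disp, wind):
    """Toy stand-in for a LAS performance metric (e.g., a separation
    margin) as a function of abort time and dispersed parameters."""
    dyn_pressure = np.exp(-((t_abort - 60) / 20)**2)   # peaks near max-q
    return 1.0 + thrust_disp - 0.8 * dyn_pressure - 0.3 * wind

thrust = rng.normal(0.0, 0.05, N)          # dispersed LAS parameters
wind = rng.uniform(0.0, 1.0, N)

# Standard method: abort time held at discrete conditions (e.g., max-q)
for t_fixed in (60.0, 45.0):
    m = abort_margin(t_fixed, thrust, wind)
    print(f"fixed t={t_fixed:5.1f}s: P(margin<0) = {(m < 0).mean():.3f}")

# Full-envelope method: abort initiation time dispersed over the ascent
t_abort = rng.uniform(0.0, 120.0, N)
m = abort_margin(t_abort, thrust, wind)
worst = t_abort[m < 0]
print(f"dispersed   : P(margin<0) = {(m < 0).mean():.3f}, "
      f"pinch-point times ~ {np.percentile(worst, [5, 95]).round(1)} s")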
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Lam, P.; Fertis, D.; Zeid, I.
1982-01-01
Second-year efforts within a three-year study to develop and extend finite element (FE) methodology to efficiently handle the transient/steady state response of rotor-bearing-stator structure associated with gas turbine engines are outlined. The two main areas aim at (1) implanting the squeeze film damper element into a general purpose FE code for testing and evaluation; and (2) determining the numerical characteristics of the FE-generated rotor-bearing-stator simulation scheme. The governing FE field equations are set out and the solution methodology is presented. The choice of ADINA as the general-purpose FE code is explained, and the numerical operational characteristics of the direct integration approach of FE-generated rotor-bearing-stator simulations is determined, including benchmarking, comparison of explicit vs. implicit methodologies of direct integration, and demonstration problems.
Cushing, Christopher C; Walters, Ryan W; Hoffman, Lesa
2014-03-01
Aggregated N-of-1 randomized controlled trials (RCTs) combined with multilevel modeling represent a methodological advancement that may help bridge science and practice in pediatric psychology. The purpose of this article is to offer a primer for pediatric psychologists interested in conducting aggregated N-of-1 RCTs. An overview of N-of-1 RCT methodology is provided, and 2 simulated data sets are analyzed to demonstrate the clinical and research potential of the methodology. The simulated data examples demonstrate the utility of aggregated N-of-1 RCTs for understanding the clinical impact of an intervention for a given individual, and the modeling of covariates to explain why an intervention worked for one patient and not another. Aggregated N-of-1 RCTs hold potential for improving the science and practice of pediatric psychology.
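A compact sketch of the aggregated N-of-1 workflow: simulate several patients on an alternating (ABAB) randomized design and fit a multilevel model with a random treatment effect per patient plus a covariate that partially explains the effect heterogeneity. The patient counts, the age covariate, and all effect sizes below are invented; statsmodels' mixedlm is one possible estimation route:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)

# Simulate an aggregated series of N-of-1 RCTs: each patient alternates
# baseline (0) and treatment (1) phases; treatment effects vary by
# patient, and a hypothetical covariate (age) explains part of that.
rows = []
for pid in range(12):
    age = rng.uniform(8, 17)
    true_effect = 1.0 + 0.15 * (age - 12) + rng.normal(0, 0.3)
    for day in range(28):
        phase = (day // 7) % 2                    # ABAB design
        y = 5 + true_effect * phase + rng.normal(0, 1)
        rows.append(dict(patient=pid, age=age, phase=phase, y=y))
df = pd.DataFrame(rows)

# Multilevel model: fixed phase and phase-by-age effects, random
# intercept and random treatment effect per patient.
model = smf.mixedlm("y ~ phase * age", df, groups=df["patient"],
                    re_formula="~phase")
print(model.fit().summary())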
NASA Astrophysics Data System (ADS)
Claeys, M.; Sinou, J.-J.; Lambelin, J.-P.; Todeschini, R.
2016-08-01
The nonlinear vibration response of an assembly with friction joints - named "Harmony" - is studied both experimentally and numerically. The experimental results exhibit a softening effect and an increase of dissipation with excitation level. Modal interactions due to friction are also evidenced. The numerical methodology proposed groups together well-known structural dynamics methods, including finite elements, substructuring, harmonic balance and continuation methods. On the one hand, the application of this methodology proves its capacity to treat a complex system in which several friction movements occur at the same time. On the other hand, the main contribution of this paper is the experimental and numerical demonstration of modal interactions due to friction. The simulation methodology succeeds in reproducing complex forms of dynamic behavior such as these modal interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reckinger, Scott James; Livescu, Daniel; Vasilyev, Oleg V.
A comprehensive numerical methodology has been developed that handles the challenges introduced by the compressible nature of Rayleigh-Taylor instability (RTI) systems, which include sharp interfacial density gradients on strongly stratified background states, acoustic wave generation and removal at computational boundaries, and stratification-dependent vorticity production. The computational framework is used to simulate two-dimensional single-mode RTI to extremely late times for a wide range of flow compressibility and variable-density effects. The results show that flow compressibility acts to reduce the growth of RTI for low Atwood numbers, as predicted by linear stability analysis.
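For reference, the incompressible linear-stability growth rate against which such compressible results are typically compared (a standard textbook relation, not a result of this abstract) is

    \sigma = \sqrt{A g k}, \qquad A = \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1},

where \sigma is the exponential growth rate of a single mode of wavenumber k, g is the acceleration, and A is the Atwood number; compressibility and background stratification modify this rate.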
Physical Simulation of a Prolonged Plasma-Plume Exposure of a Space Debris Object
NASA Astrophysics Data System (ADS)
Shuvalov, V. A.; Gorev, N. B.; Tokmak, N. A.; Kochubei, G. S.
2018-05-01
A methodology has been developed for the physical (laboratory) simulation of the prolonged exposure of a space debris object to the high-energy ions of a plasma plume, as used for removing the object from low-Earth orbit with its subsequent burn-up in the Earth's atmosphere. The methodology is based on criteria of equivalence between the two modes of exposure (in the Earth's ionosphere and in the test setup) and on a procedure for accelerated life tests in terms of the sputtering of the space debris material and its deceleration by a plasma jet in the Earth's ionosphere.
Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver
NASA Technical Reports Server (NTRS)
Hess, R. A.; Malsbury, T.; Atencio, A., Jr.
1992-01-01
A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling-qualities sensitivity function to a previously developed model-based assessment criterion allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.
Health Worker Focused Distributed Simulation for Improving Capability of Health Systems in Liberia.
Gale, Thomas C E; Chatterjee, Arunangsu; Mellor, Nicholas E; Allan, Richard J
2016-04-01
The main goal of this study was to produce an adaptable learning platform using virtual learning and distributed simulation that can be used to train health care workers across a wide geographical area in key safety messages regarding infection prevention and control (IPC). A situationally responsive agile methodology, Scrum, was used to develop a distributed simulation module using short 1-week iterations and continuous synchronous and asynchronous communication involving end users and IPC experts. The module contained content related to standard IPC precautions (including handwashing techniques) and was structured into 3 distinct sections related to donning, doffing, and hazard perception training. Using the Scrum methodology, we were able to link concepts applied to best practices in simulation-based medical education (deliberate practice, continuous feedback, self-assessment, and exposure to uncommon events), pedagogic principles related to adult learning (clear goals, contextual awareness, motivational features), and key learning outcomes regarding IPC, as a rapid-response initiative to the Ebola outbreak in West Africa. A gamification approach was used to map learning mechanics to enhance user engagement. The developed IPC module demonstrates how high-frequency, low-fidelity simulations can be rapidly designed using Scrum-based agile methodology. Analytics incorporated into the tool can help demonstrate improved confidence and competence of health care workers who are treating patients within an Ebola virus disease outbreak region. These concepts could be used in a range of evolving disasters where rapid development and communication of key learning messages are required.
NASA Astrophysics Data System (ADS)
Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi
2015-11-01
Large-scale regional evacuation is an important part of national security emergency response planning, and the emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on cellular automata with a dynamic floor field and an event-driven model is proposed and examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian movement is simulated with the cellular automaton, while the event-driven model governs behavior patterns; the simulation distinguishes between normal operation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. In simulating the movement routes of pedestrians, the model takes into account the purchase intentions of customers and the density of pedestrians. The combined model reflects the behavioral characteristics of customers and clerks both in normal conditions and during emergency evacuation. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that an evacuation model combining cellular automata with a dynamic floor field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
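As a rough illustration of the floor-field mechanics (a minimal sketch under assumed rules, not the authors' four-layer model: a single exit, a purely static field, and naive sequential conflict resolution):

    import numpy as np

    def static_field(shape, exit_cell):
        # static floor field: Euclidean distance from every cell to the exit
        ii, jj = np.indices(shape)
        return np.hypot(ii - exit_cell[0], jj - exit_cell[1])

    def step(occupied, field):
        # move each pedestrian to the free 4-neighbor with the lowest field
        # value; conflicts are resolved in scan order here (real floor-field
        # models use probabilistic rules and a dynamic field as well)
        new = occupied.copy()
        for i, j in zip(*np.nonzero(occupied)):
            best = (i, j)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < field.shape[0] and 0 <= nj < field.shape[1]
                        and not new[ni, nj] and field[ni, nj] < field[best]):
                    best = (ni, nj)
            new[i, j] = False
            new[best] = True
        return new

    # usage: up to 20 pedestrians on a 15 x 15 floor, exit at one corner
    rng = np.random.default_rng(0)
    occ = np.zeros((15, 15), dtype=bool)
    occ[rng.integers(0, 15, 20), rng.integers(0, 15, 20)] = True
    occ = step(occ, static_field(occ.shape, (0, 0)))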
The Use of Computer Simulation Gaming in Teaching Broadcast Economics.
ERIC Educational Resources Information Center
Mancuso, Louis C.
The purpose of this study was to develop a broadcast economics computer simulation and to ascertain how a lecture plus computer-simulation game compared, as a teaching method, with more traditional lecture and case-study instructional methods. In each of three sections of a broadcast economics course, a different teaching methodology was employed: (1)…
ERIC Educational Resources Information Center
Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion
2016-01-01
This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks was iteratively developed to assess student understanding of an array of physical science concepts, including net force,…
Application of control theory to dynamic systems simulation
NASA Technical Reports Server (NTRS)
Auslander, D. M.; Spear, R. C.; Young, G. E.
1982-01-01
Control theory is applied to dynamic systems simulation. Theory and methodology applicable to controlled ecological life support systems are considered. Spatial effects on system stability, design of control systems with uncertain parameters, and an interactive computing language (PARASOL-II) designed for dynamic system simulation, report-quality graphics, data acquisition, and simple real-time control are discussed.
Identifying and Quantifying Emergent Behavior Through System of Systems Modeling and Simulation
2015-09-01
Ptolemy, a simulation and rapid-prototyping environment developed at the University of California, Berkeley, was used in this research. This chapter describes the many works used as a basis for this research, which used the principles of Selberg's 2008…
T-H-A-T-S: timber-harvesting-and-transport-simulator: with subroutines for Appalachian logging
A. Jeff Martin
1975-01-01
A computer program for simulating harvesting operations is presented. Written in FORTRAN IV, the program contains subroutines that were developed for Appalachian logging conditions. However, with appropriate modifications, the simulator would be applicable for most logging operations and locations. The details of model development and its methodology are presented,...
Rating of Dynamic Coefficient for Simple Beam Bridge Design on High-Speed Railways
NASA Astrophysics Data System (ADS)
Diachenko, Leonid; Benin, Andrey; Smirnov, Vladimir; Diachenko, Anastasia
2018-06-01
The aim of the work is to improve the methodology for the dynamic computation of simple beam spans under the impact of high-speed trains. Mathematical simulation utilizing numerical and analytical methods of structural mechanics is used in the research. The article analyses the parameters of the effect of high-speed trains on simple beam bridge structures and suggests a technique for determining the dynamic index for the live load. The reliability of the proposed methodology is confirmed by the results of numerical simulation of high-speed train passage over spans at different speeds. The proposed algorithm of dynamic computation is based on a connection between the maximum acceleration of the span in the resonance mode of vibration and the main factors of the stress-strain state. The methodology allows determining maximum as well as minimum values of the main internal forces in the structure, which makes it possible to perform endurance checks. It is noted that the dynamic additions for the components of the stress-strain state (bending moments, transverse force and vertical deflections) differ from one another. This condition necessitates a differentiated approach to the evaluation of dynamic coefficients when performing design verification for limit states of groups I and II. The practical importance: the methodology for determining the dynamic coefficients allows performing the dynamic calculation and determining the main internal forces in simple beam spans without numerical simulation and direct dynamic analysis, which significantly reduces the labour costs of design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.
In the present work, a Verification and Validation procedure is presented and applied, showing through a practical example how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition due to the competition between the resistive and ideal interchange instabilities.
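To make the manufactured-solutions step concrete, a minimal sketch of our own (not the GBS verification itself): choose u_m = sin(pi*x)*exp(-t), derive the source term that makes u_m an exact solution of u_t = D*u_xx + s, and confirm that the discrete error falls at the scheme's theoretical order as the mesh is refined:

    import numpy as np

    # manufactured solution u_m = sin(pi*x)*exp(-t) for u_t = D*u_xx + s
    # requires the source s = (D*pi**2 - 1) * sin(pi*x) * exp(-t)

    def solve(nx, D=1.0, t_end=0.1):
        dx = 1.0 / nx
        dt = 0.25 * dx**2 / D                 # stable explicit time step
        x = np.linspace(0.0, 1.0, nx + 1)
        u = np.sin(np.pi * x)                 # u_m at t = 0
        t = 0.0
        while t < t_end:
            h = min(dt, t_end - t)
            s = (D * np.pi**2 - 1.0) * np.sin(np.pi * x) * np.exp(-t)
            u[1:-1] += h * (D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
                            + s[1:-1])
            u[0] = u[-1] = 0.0                # exact boundary values
            t += h
        return x, u

    # error should shrink ~4x per mesh doubling (second order in space)
    for nx in (16, 32, 64):
        x, u = solve(nx)
        print(nx, np.abs(u - np.sin(np.pi * x) * np.exp(-0.1)).max())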
Entropy Filtered Density Function for Large Eddy Simulation of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Safari, Mehdi
Analysis of local entropy generation is an effective means to optimize the performance of energy and combustion systems by minimizing the irreversibilities in transport processes. Large eddy simulation (LES) is employed to describe entropy transport and generation in turbulent reacting flows. The entropy transport equation in LES contains several unclosed terms. These are the subgrid scale (SGS) entropy flux and the entropy generation caused by irreversible processes: heat conduction, mass diffusion, chemical reaction and viscous dissipation. The SGS effects are taken into account using a novel methodology based on the filtered density function (FDF). This methodology, termed the entropy FDF (En-FDF), is developed and utilized in the form of the joint entropy-velocity-scalar-turbulent frequency FDF and the marginal scalar-entropy FDF, both of which contain the chemical reaction effects in closed form. The former constitutes the most comprehensive form of the En-FDF and provides closure for all of the unclosed filtered moments. This methodology is applied to LES of a turbulent shear layer involving transport of passive scalars. Predictions show favorable agreement with data generated by direct numerical simulation (DNS) of the same layer. The marginal En-FDF accounts for entropy generation effects as well as scalar and entropy statistics. This methodology is applied to a turbulent nonpremixed jet flame (Sandia Flame D) and predictions are validated against experimental data. In both flows, the sources of irreversibility are predicted and analyzed.
Coarse-grained molecular dynamics simulations for giant protein-DNA complexes
NASA Astrophysics Data System (ADS)
Takada, Shoji
Biomolecules are highly hierarchic and intrinsically flexible; computational modeling therefore calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model in which, on average, 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time scales and much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA replication initiation complex, model chromatins, and transcription factor dynamics in a chromatin-like environment.
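A toy illustration of the mapping step (our sketch; the mass-weighted center-of-mass rule is generic coarse-graining practice, not the authors' specific code):

    import numpy as np

    def coarse_grain(coords, masses, groups):
        # place each CG bead at the mass-weighted center of its atom group
        beads = []
        for idx in groups:
            m = masses[idx]
            beads.append((m[:, None] * coords[idx]).sum(axis=0) / m.sum())
        return np.array(beads)

    # usage with made-up data: 40 atoms mapped onto two 20-atom beads
    coords = np.random.rand(40, 3)           # atomistic coordinates, nm
    masses = np.full(40, 12.0)               # atomic masses, amu
    print(coarse_grain(coords, masses, [np.arange(20), np.arange(20, 40)]))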
Aviation Human-in-the-Loop Simulation Studies: Experimental Planning, Design, and Data Management
2014-01-01
Williams, Kevin W.; Christopher, Bonny; Gena…
This report describes the process by which we designed our human-in-the-loop (HITL) simulation study and the methodology used to collect and analyze the results.
MoSeS: Modelling and Simulation for e-Social Science.
Townend, Paul; Xu, Jie; Birkin, Mark; Turner, Andy; Wu, Belinda
2009-07-13
MoSeS (Modelling and Simulation for e-Social Science) is a research node of the National Centre for e-Social Science. MoSeS uses e-Science techniques to execute an event-driven model that simulates discrete demographic processes; this allows us to project the UK population 25 years into the future. This paper describes the architecture, simulation methodology and latest results obtained by MoSeS.
Membrane Insertion Profiles of Peptides Probed by Molecular Dynamics Simulations
2008-07-17
Yeh, In-Chul; Olson, Mark A.; Lee, Michael S.; Anders…
We present a methodology based on molecular dynamics simulation techniques to probe the insertion profiles of small peptides across the membrane interface. The…
Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izzuddin, Nur; Sunarsih,; Priyanto, Agoes
A marine diesel engine simulator whose engine rotation is controlled and transmitted through the propeller shaft constitutes a new methodology for self-propulsion tests that track fuel savings in real time as a vessel operates in the open seas. Against this background, this paper presents a real-time marine diesel engine simulator system that tracks the real performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate the fuel rate, engine rotating speed, and the thrust and torque of the propeller, and thus achieve the target vessel speed. The inputs and outputs form a real-time control system for the fuel saving rate and propeller rotating speed representing the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate the fuel saving achieved by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will be beneficial for users in analyzing different vessel-speed conditions to obtain better characteristics and hence optimize the fuel saving rate.
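For readers unfamiliar with the propeller side of such a model, the standard open-water relations it would likely build on are sketched below; all numerical values are made-up placeholders, not the paper's data:

    # standard open-water propeller relations:
    #   thrust T = K_T * rho * n^2 * D^4,  torque Q = K_Q * rho * n^2 * D^5,
    #   delivered power P_D = 2 * pi * n * Q
    import math

    rho = 1025.0            # sea-water density, kg/m^3
    D = 4.0                 # propeller diameter, m (placeholder)
    n = 2.0                 # shaft speed, rev/s (placeholder)
    K_T, K_Q = 0.18, 0.025  # coefficients at some advance ratio (placeholder)

    thrust = K_T * rho * n**2 * D**4          # N
    torque = K_Q * rho * n**2 * D**5          # N*m
    power = 2.0 * math.pi * n * torque        # delivered power, W
    print(thrust, torque, power)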
A Methodology for Evaluating the Fidelity of Ground-Based Flight Simulators
NASA Technical Reports Server (NTRS)
Zeyada, Y.; Hess, R. A.
1999-01-01
An analytical and experimental investigation was undertaken to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator. The study was part of a larger research effort which has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system which included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle and the motion system. With the exception of time delays which accrued in visual scene production in the simulator, visual scene effects were not included in this study. The NASA Ames Vertical Motion Simulator was used in a simple, single-degree of freedom rotorcraft bob-up/down maneuver. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity which occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots that participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to reflect changes in simulator fidelity for the task examined.
Simulating household travel study data in metropolitan areas : technical summary.
DOT National Transportation Integrated Search
2001-05-01
The objectives of this study are: 1. To develop and validate a methodology for MPOs to synthesize household travel survey data using local sociodemographic characteristics in conjunction with a national source of simulated travel data. 2. To evalu...
DOT National Transportation Integrated Search
1996-04-01
This report also describes the procedures for direct estimation of intersection capacity with simulation, including a set of rigorous statistical tests for simulation parameter calibration from field data.
Numerical Simulation of High-Speed Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Givi, P.; Taulbee, D. B.; Madnia, C. K.; Jaberi, F. A.; Colucci, P. J.; Gicquel, L. Y. M.; Adumitroaie, V.; James, S.
1999-01-01
The objectives of this research are: (1) to develop and implement a new methodology for large eddy simulation (LES) of high-speed reacting turbulent flows; and (2) to develop algebraic turbulence closures for the statistical description of chemically reacting turbulent flows.
Simulation analysis of route diversion strategies for freeway incident management : final report.
DOT National Transportation Integrated Search
1995-02-01
The purpose of this project was to investigate whether simulation models could be used as decision aids for defining traffic diversion strategies for effective incident management. A methodology was developed for using such a model to determine…
2017-06-01
This thesis examines the methodology used to build the class IX block embarked on ship prior to deployment; the class IX block is defined as a repository… Simulation outputs are compared to historical data to evaluate the model, and the thesis provides recommendations on improving the methodology implemented in… and on improving the level of organic support available to deployed units.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Yuan; Li Xiaobo, E-mail: liuyuan@ihep.ac.cn, E-mail: lixb@ihep.ac.cn
The properties of the dusty tori in active galactic nuclei (AGNs) have been investigated in detail, mainly focusing on their geometry and components; however, the kinematics of the torus are still not clear. The narrow iron K α line at 6.4 keV is thought to be produced by X-ray reflection from the torus. Thus, velocity-resolved reverberation mapping of this line is able to constrain the kinematics of the torus. Such an effort is limited by the spectral resolution of current charge-coupled device (CCD) detectors and should become possible with the microcalorimeter on the next generation of X-ray satellites. In this paper, we first construct the response functions of the torus under a uniform inflow, a Keplerian rotation, and a uniform outflow. Then the energy-dependent light curve of the narrow iron K α line is simulated according to the performance of the X-ray Integral Field Unit on Athena. Finally, the energy-dependent cross-correlation function is calculated to reveal the kinematic signal. According to our results, 100 observations with 5 ks exposure each are sufficient to distinguish the above three velocity fields. Although the real geometry and velocity field of the torus could be more complex than we assumed, the present result demonstrates the feasibility of velocity-resolved reverberation mapping of the narrow iron K α line. Combining the dynamics of the torus with those of the broad-line region and the host galaxy is instructive for understanding the feeding and feedback processes of AGNs.
Suzaku Discovery of Ultra-fast Outflows in Radio-loud AGN
NASA Astrophysics Data System (ADS)
Sambruna, Rita M.; Tombesi, F.; Reeves, J.; Braito, V.; Gofford, J.; Cappi, M.
2010-03-01
We present the results of an analysis of the 3.5-10.5 keV spectra of five bright Broad-Line Radio Galaxies (BLRGs) using proprietary and archival Suzaku observations. In three sources -- 3C 111, 3C 120, and 3C 390.3 -- we find evidence, for the first time in a radio-loud AGN, for absorption features at observed energies of ~7 keV and 8-9 keV, with high significance (99% or larger) according to both the F-test and extensive Monte Carlo simulations. In the remaining two BLRGs, 3C 382 and 3C 445, there is no evidence for such absorption features in the XIS spectra. If interpreted as due to Fe XXV and/or Fe XXVI K-shell resonance lines, the absorption features in 3C 111, 3C 120, and 3C 390.3 imply an origin in ionized gas outflowing with velocities in the range v ~ 0.04-0.15c, reminiscent of the Ultra-Fast Outflows (UFOs) previously observed in radio-quiet Seyfert galaxies. A fit with specific photoionization models gives ionization parameters log ξ ~ 4-5.6 erg s^-1 cm and column densities of N_H ~ 10^22-10^23 cm^-2, similar to the values observed in Seyferts. Based on light-travel-time arguments, we estimate that the UFOs in the three BLRGs are located within 20-500 gravitational radii of the central black hole, and thus are most likely connected to disk winds/outflows. Our estimates show that the UFO mass outflow rate is comparable to the accretion rate and their kinetic energy is a significant fraction of the AGN bolometric luminosity, making these outflows significant for the global energetics of these systems, and in particular for mechanisms of jet formation.
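For scale (a worked example of ours, not a number from the abstract): the gravitational radius of a black hole of mass M is

    r_g = \frac{GM}{c^2} \approx 1.48 \times 10^{13}\,\mathrm{cm} \left(\frac{M}{10^8\,M_\odot}\right),

so 20-500 r_g around a 10^8 solar-mass black hole spans roughly 3 x 10^14 to 7 x 10^15 cm, corresponding to light-travel times of a few hours to a few days.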
NASA Astrophysics Data System (ADS)
Neumann, Martin; Dostál, Tomáš; Krása, Josef; Kavka, Petr; Davidová, Tereza; Brant, Václav; Kroulík, Milan; Mistr, Martin; Novotný, Ivan
2016-04-01
The presentation will introduce a methodology for determining the crop and cover management factor (C-factor) for the universal soil loss equation (USLE) using a field rainfall simulator. The aim of the project is to determine the C-factor value for the different phenophases of the main crops of the central European region, while also taking into account different agrotechnical methods. By using the field rainfall simulator, it is possible to perform the measurements in specific phenophases, which is otherwise difficult to execute due to the variability and fortuity of natural rainfall. Due to the number of measurements needed, two identical simulators will be used, operated by two independent teams with a coordinated methodology. The methodology will mainly specify the length of simulation, the rainfall intensity, and the sampling technique. The presentation includes a more detailed account of the methods selected. Due to the wide range of variable crops and soils, it is not possible to execute the measurements for all possible combinations. We therefore decided to perform the measurements for previously selected combinations of soils, crops and agrotechnologies that are the most common in the Czech Republic. During the experiments, the volume of the surface runoff and the amount of sediment will be measured in their temporal distribution, along with several other important parameters. The key values of the 3D matrix of combinations of crop, agrotechnique and soil will be determined experimentally; the remaining values will be determined by interpolation or by model analogy. Several methods exist for C-factor calculation from measured experimental data, some of which are not suitable for the type of data gathered here. The presentation will discuss the benefits and drawbacks of these methods, the final design of the method used, and, in more detail, the problems concerning the selection of a relevant measurement method and the final method of simulation and C-factor determination for the gathered data. The presentation was supported by research projects QJ1530181 and SGS14/180/OHK1/3T/11.
Educational Strategies for Teaching Basic Family Dynamics to Non-Family Therapists.
ERIC Educational Resources Information Center
Merkel, William T.; Rudisill, John R.
1985-01-01
Presents six-part methodology for teaching basic concepts of family systems to non-family therapists and describes application of methodology to teach primary care physicians. Explains use of simulated encounters in which a physically symptomatic adolescent is interviewed alone, then with his mother, then with his whole family. (Author/NRB)
Level-Set Methodology on Adaptive Octree Grids
NASA Astrophysics Data System (ADS)
Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime
2017-11-01
Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.
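For context, the core equation of any level-set framework (a textbook relation, not specific to this talk) advects the level-set function \phi with the fluid velocity:

    \frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0, \qquad \Gamma(t) = \{\mathbf{x} : \phi(\mathbf{x}, t) = 0\}.

Because the interface \Gamma is carried implicitly as the zero contour of \phi, changes in topology (merging, pinch-off) require no special handling.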
The Research and Evaluation of Serious Games: Toward a Comprehensive Methodology
ERIC Educational Resources Information Center
Mayer, Igor; Bekebrede, Geertje; Harteveld, Casper; Warmelink, Harald; Zhou, Qiqi; van Ruijven, Theo; Lo, Julia; Kortmann, Rens; Wenzler, Ivo
2014-01-01
The authors present the methodological background to and underlying research design of an ongoing research project on the scientific evaluation of serious games and/or computer-based simulation games (SGs) for advanced learning. The main research questions are: (1) what are the requirements and design principles for a comprehensive social…
Interfacing Network Simulations and Empirical Data
2009-05-01
…contraceptive innovations in Cameroon. He found that real-world adoption rates did not follow simulation models when the network relationships were…
Modeling and Simulation in Support of Testing and Evaluation
1997-03-01
…contains standardized automated test methodology, synthetic stimuli, and environments based on TECOM Ground Truth data and physics. The VPG is a distributed…
NASA Technical Reports Server (NTRS)
Boyce, Lola; Bast, Callie C.
1992-01-01
The research included ongoing development of methodology that provides the probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for the strength degradation of structural components of aerospace propulsion systems subjected to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation has been randomized and is included in the computer program PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with linear regression of that data, thereby predicting values of the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program PROMISC. Actual experimental materials data were obtained from the open literature for materials typically of interest to those studying aerospace propulsion system components. Material data for Inconel 718 were analyzed using the developed methodology.
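Multifactor interaction models of this family are usually written as a product over the primitive variables; a commonly cited form (our sketch of the general shape, not necessarily the exact PROMISS equation) is

    \frac{S}{S_0} = \prod_{i=1}^{n}\left[\frac{A_{i,F} - A_i}{A_{i,F} - A_{i,0}}\right]^{a_i},

where S is the degraded strength, S_0 a reference strength, A_i the current value of the i-th primitive variable (temperature, fatigue cycles, etc.), A_{i,0} and A_{i,F} its reference and ultimate values, and a_i an empirical exponent; randomizing the A_i and a_i yields the probabilistic version.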
An omnibus test for family-based association studies with multiple SNPs and multiple phenotypes.
Lasky-Su, Jessica; Murphy, Amy; McQueen, Matthew B; Weiss, Scott; Lange, Christoph
2010-06-01
We propose an omnibus family-based association test (MFBAT) that can be applied to multiple markers and multiple phenotypes and that has only one degree of freedom. The proposed test statistic extends current FBAT methodology to incorporate multiple markers as well as multiple phenotypes. Using simulation studies, power estimates for the proposed methodology are compared with those of standard methodologies. On the basis of these simulations, we find that MFBAT substantially outperforms other methods, including haplotypic approaches and conducting multiple tests with single single-nucleotide polymorphisms (SNPs) and single phenotypes. The practical relevance of the approach is illustrated by an application to asthma in which SNP/phenotype combinations are identified that reach overall significance and that would not have been identified using other approaches. This methodology is directly applicable to cases in which there are multiple SNPs, such as candidate gene studies; cases in which there are multiple phenotypes, such as expression data; and cases in which there are multiple phenotypes and genotypes, such as genome-wide association studies that incorporate expression profiles as phenotypes. This program is available in the PBAT analysis package.
NASA Astrophysics Data System (ADS)
Kempka, Thomas; De Lucia, Marco; Kühn, Michael
2015-04-01
The integrated assessment of long-term site behaviour, taking into account a high spatial resolution at reservoir scale, requires a sophisticated methodology to represent the relevant coupled thermal, hydraulic, mechanical and chemical processes. Our coupling methodology considers the time-dependent occurrence and significance of multi-phase flow processes, mechanical effects and geochemical reactions (Kempka et al., 2014). To this end, a simplified hydro-chemical coupling procedure was developed (Klein et al., 2013) and validated against fully coupled hydro-chemical simulations (De Lucia et al., 2015). The numerical simulation results elaborated for the Ketzin pilot site demonstrate that mechanical reservoir, caprock and fault integrity are maintained during the time of operation, and that after 10,000 years CO2 dissolution is the dominating trapping mechanism, with mineralization on the order of 10 % to 25 % and negligible changes to porosity and permeability. De Lucia, M., Kempka, T., Kühn, M. A coupling alternative to reactive transport simulations for long-term prediction of chemical reactions in heterogeneous CO2 storage systems (2014) Geosci Model Dev Discuss 7:6217-6261, doi:10.5194/gmdd-7-6217-2014. Kempka, T., De Lucia, M., Kühn, M. Geomechanical integrity verification and mineral trapping quantification for the Ketzin CO2 storage pilot site by coupled numerical simulations (2014) Energy Procedia 63:3330-3338, doi:10.1016/j.egypro.2014.11.361. Klein, E., De Lucia, M., Kempka, T., Kühn, M. Evaluation of long-term mineral trapping at the Ketzin pilot site for CO2 storage: an integrative approach using geochemical modelling and reservoir simulation (2013) Int J Greenh Gas Con 19:720-730, doi:10.1016/j.ijggc.2013.05.014.
A Hybrid Coarse-graining Approach for Lipid Bilayers at Large Length and Time Scales
Ayton, Gary S.; Voth, Gregory A.
2009-01-01
A hybrid analytic-systematic (HAS) coarse-grained (CG) lipid model is developed and employed in a large-scale simulation of a liposome. The methodology is termed hybrid analytic-systematic because one component of the interaction between CG sites is variationally determined from the multiscale coarse-graining (MS-CG) methodology, while the remaining component utilizes an analytic potential. The systematic component models the in-plane center-of-mass interaction of the lipids as determined from an atomistic-level MD simulation of a bilayer. The analytic component is based on the well-known Gay-Berne ellipsoid-of-revolution liquid crystal model, and is designed to model the highly anisotropic interactions at a highly coarse-grained level. The HAS CG approach is the first step in an "aggressive" CG methodology designed to model multi-component biological membranes at very large length and time scales. PMID:19281167
C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component
NASA Astrophysics Data System (ADS)
Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.
2018-06-01
The failure of tractor components and their replacement has now become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure we propose a design methodology for topological change, co-simulated with software. In the proposed design methodology, the designer checks Paxial, Pcr, Pfailure, and τ by hand calculations, from which refined topological changes of the R.S. Arm are formed. We explain several techniques employed in the component for material reduction and removal of rib material to shift the center of gravity and centroid, using SystemC for mixed-level simulation and faster topological changes. The design process in SystemC can be compiled and executed with the TURBO C7 software. The modified component is developed in Pro/E and analyzed in ANSYS. The topologically changed component, with a 120 × 4.75 × 32.5 mm slot at the center, showed greater effectiveness than the original component.
Evaluation of a methodology for model identification in the time domain
NASA Technical Reports Server (NTRS)
Beck, R. T.; Beck, J. L.
1988-01-01
A model identification methodology for structural dynamics has been applied to simulated vibrational data as a first step in evaluating its accuracy. The evaluation has taken into account a wide variety of factors which affect the accuracy of the procedure. The effects of each of these factors were observed in both the response time histories and the estimates of the parameters of the model by comparing them with the exact values of the system. Each factor was varied independently but combinations of these have also been considered in an effort to simulate real situations. The results of the tests have shown that for the chain model, the procedure yields robust estimates of the stiffness parameters under the conditions studied whenever uniqueness is ensured. When inaccuracies occur in the results, they are intimately related to non-uniqueness conditions inherent in the inverse problem and not to shortcomings in the methodology.
Bayesian Inference on Proportional Elections
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the chamber of deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
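To give the flavor of such a computation, here is an illustrative Monte Carlo sketch in Python (the paper used R; the poll counts, seat total, Dirichlet prior, and D'Hondt-style largest-averages rule below are placeholder assumptions rather than the paper's exact Brazilian seat-distribution rule):

    import numpy as np

    rng = np.random.default_rng(1)
    poll = np.array([420, 310, 180, 90])   # hypothetical poll counts per party
    seats_total = 10                       # hypothetical district magnitude

    def dhondt(votes, seats):
        # largest-averages allocation: repeatedly award a seat to the party
        # with the highest quotient votes / (seats_won + 1)
        alloc = np.zeros(len(votes), dtype=int)
        for _ in range(seats):
            alloc[np.argmax(votes / (alloc + 1))] += 1
        return alloc

    wins = np.zeros(len(poll))
    for _ in range(10_000):
        shares = rng.dirichlet(poll + 1)         # posterior draw of vote shares
        wins += dhondt(shares, seats_total) > 0  # did each party win a seat?
    print(wins / 10_000)  # estimated probability of representation per party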
An immersed boundary method for modeling a dirty geometry data
NASA Astrophysics Data System (ADS)
Onishi, Keiji; Tsubokura, Makoto
2017-11-01
We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum via an axial linear projection and by an approximate-domain assumption that satisfies mass conservation around the wall-containing cells. The methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate flow around a rotating object and demonstrate the applicability of the methodology to moving-geometry problems. The methodology offers a path to quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as "Priority Issue on Post-K computer" (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.
NASA Astrophysics Data System (ADS)
Pucinotti, Raffaele; Ferrario, Fabio; Bursi, Oreste S.
2008-07-01
A multi-objective advanced design methodology for steel-concrete composite full-strength joints with concrete-filled tubes subjected to seismic actions followed by fire is proposed in this paper. The specimens were designed in detail to exhibit suitable fire behaviour after a severe earthquake. The major aspects of the cyclic behaviour of composite joints are presented and commented upon. The data obtained from monotonic and cyclic experimental tests have been used to calibrate a model of the joint in order to perform seismic simulations on several moment-resisting frames. A hysteretic law was used to take into account the seismic degradation of the joints. Finally, fire tests were conducted with the objective of evaluating the fire resistance of a connection already damaged by an earthquake. The experimental activity together with FE simulation demonstrated the adequacy of the advanced design methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, José A. M., E-mail: joadiazme@unal.edu.co; Torres, D. A., E-mail: datorresg@unal.edu.co
2016-07-07
The deposited energy and dose distributions of proton and carbon beams in a head are simulated using the free Geant4 toolkit and the ROOT C++ data-analysis package. The present work shows a methodology for understanding the microscopic processes occurring in a hadron-therapy session using advanced simulation tools.
A Methodology to Assess UrbanSim Scenarios
2012-09-01
…augmented reality simulations, increased automation and artificial intelligence simulation, and massively multiplayer online games (MMOG), among… Turn-based strategy games and simulations are vital tools for military…
Parallel methodology to capture cyclic variability in motored engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei
2016-07-28
Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales, so the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well; adding perturbations to the initial pressure field and the boundary pressure improves the predictions. This new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
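A schematic of the parallel-perturbation idea (our sketch; field sizes, the perturbation rule, and all names are placeholders, not the authors' implementation):

    import numpy as np

    def perturbed_initial_fields(u_mean, u_rms, n_cycles, seed=0):
        # one perturbed initial velocity field per parallel single-cycle run,
        # with random fluctuations scaled by the turbulence intensity u_rms
        rng = np.random.default_rng(seed)
        for _ in range(n_cycles):
            yield u_mean + u_rms * rng.standard_normal(u_mean.shape)

    u_mean = np.zeros((16, 16, 16, 3))   # placeholder ensemble-mean field, m/s
    u_rms = np.full_like(u_mean, 2.0)    # placeholder rms fluctuation, m/s
    for k, u0 in enumerate(perturbed_initial_fields(u_mean, u_rms, 20)):
        pass  # each u0 would initialize an independent single-cycle LES run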
A negotiation methodology and its application to cogeneration planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S.M.; Liu, C.C.; Luu, S.
Power system planning has become a complex process in utilities today. This paper presents a methodology for integrated planning with multiple objectives. The methodology uses a graphical representation (Goal-Decision Network) to capture the planning knowledge. The planning process is viewed as a negotiation process that applies three negotiation operators to search for beneficial decisions in a GDN. Also, the negotiation framework is applied to the problem of planning for cogeneration interconnection. The simulation results are presented to illustrate the cogeneration planning process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dionne, B.; Tzanos, C. P.
To support the safety analyses required for the conversion of the Belgian Reactor 2 (BR2) from highly-enriched uranium (HEU) to low-enriched uranium (LEU) fuel, the simulation of a number of loss-of-flow tests, with or without loss of pressure, has been undertaken. These tests were performed at BR2 in 1963 and used instrumented fuel assemblies (FAs) with thermocouples (TCs) embedded in the cladding as well as probes to measure the FA powers on the basis of their coolant temperature rise. The availability of experimental data for these tests offers an opportunity to better establish the credibility of the RELAP5-3D model and methodology used in the conversion analysis. Preliminary analyses showed that the conservative power distributions used historically in the BR2 RELAP model resulted in a significant overestimation of the peak cladding temperature during the transient. It was therefore concluded that better estimates of the steady-state and decay power distributions were needed to accurately predict the cladding temperatures measured during the tests and to establish the credibility of the RELAP model and methodology. The new approach (a 'best estimate' methodology) uses the MCNP5, ORIGEN-2 and BERYL codes to obtain steady-state and decay power distributions for the BR2 core during the tests A/400/1, C/600/3 and F/400/1, and can easily be extended to simulate any BR2 core configuration. Comparisons with measured peak cladding temperatures showed much better agreement when power distributions obtained with the new methodology are used.
Model methodology for estimating pesticide concentration extremes based on sparse monitoring data
Vecchia, Aldo V.
2018-03-22
This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report; and, R functions, described in this report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
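A minimal sketch of the model structure described above, with made-up parameter values rather than fitted ones (the flow-related term is omitted for brevity):

    import numpy as np

    rng = np.random.default_rng(42)
    days = np.arange(3 * 365)                  # three years of daily values

    seasonal = 1.2 * np.sin(2.0 * np.pi * (days % 365) / 365.0 - 0.8)
    trend = -0.0003 * days                     # slow long-term decline
    phi, sigma = 0.95, 0.25                    # AR(1) coefficient, innovation sd
    errors = np.zeros(days.size)
    for k in range(1, days.size):              # serially correlated errors
        errors[k] = phi * errors[k - 1] + sigma * rng.standard_normal()

    log_conc = -2.0 + seasonal + trend + errors
    conc = 10.0 ** log_conc                    # daily concentration, ug/L

    # annual maximum daily concentration, used for acute exposure assessment
    print(conc.reshape(3, 365).max(axis=1))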
A methodology for identification and control of electro-mechanical actuators
Tutunji, Tarek A.; Saleem, Ashraf
2015-01-01
Mechatronic systems are fully integrated engineering systems composed of mechanical, electronic, and computer control sub-systems. These integrated systems use electro-mechanical actuators to cause the required motion, so the design of appropriate controllers for these actuators is an essential step in mechatronic system design. In this paper, a three-stage methodology for real-time identification and control of electro-mechanical actuator plants is presented, tested, and validated. First, identification models are constructed from experimental data to approximate the plants' response. Second, the identified model is used in a simulation environment to design a suitable controller. Finally, the designed controller is applied and tested on the real plant in a Hardware-in-the-Loop (HIL) environment. The described three-stage methodology provides the following practical contributions: • Establishes an easy-to-follow methodology for controller design of electro-mechanical actuators. • Combines off-line and on-line controller design for practical performance. • Modifies the HIL concept by using physical plants with computer control (rather than virtual plants with physical controllers). Simulated and experimental results for two case studies, an induction motor and a vehicle drive system, are presented in order to validate the proposed methodology. These results showed that electro-mechanical actuators can be identified and controlled using an easy-to-duplicate and flexible procedure. PMID:26150992
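An illustrative sketch of the first (identification) stage under simple assumptions: a second-order ARX structure fitted by least squares to synthetic input-output data standing in for experiments:

    import numpy as np

    # identify y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] from recorded
    # actuator input u and response y via linear least squares
    rng = np.random.default_rng(0)
    u = rng.standard_normal(500)                     # excitation signal
    y = np.zeros(500)
    for k in range(2, 500):                          # "true" plant (assumed)
        y[k] = 1.5*y[k-1] - 0.6*y[k-2] + 0.4*u[k-1] + 0.01*rng.standard_normal()

    # regression matrix built from past outputs and inputs
    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    print(theta)   # estimates of (a1, a2, b1), used next for controller design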
The Case for Optically Thick High-Velocity Broad-Line Region Gas in Active Galactic Nuclei
NASA Astrophysics Data System (ADS)
Snedden, Stephanie A.; Gaskell, C. Martin
2007-11-01
A combined analysis of the profiles of the main broad quasar emission lines in both Hubble Space Telescope and optical spectra shows that while the profiles of the strong UV lines are quite similar, there is frequently a strong increase in the Lyα/Hα ratio in the high-velocity gas. We show that the suggestion that the high-velocity gas is optically thin presents many problems. We show that the relative strengths of the high-velocity wings arise naturally in an optically thick BLR component. An optically thick model successfully explains the equivalent widths of the lines, the Lyα/Hα ratios and flatter Balmer decrements in the line wings, the strengths of C III] and the λ1400 blend, and the strong variability in flux of high-velocity, high-ionization lines (especially He II and He I).
Reverberation Mapping of Optical Emission Lines in Five Active Galaxies
Fausnaugh, M. M.; Grier, C. J.; Bentz, M. C.; ...
2017-05-10
We present the first results from an optical reverberation mapping campaign executed in 2014 targeting the active galactic nuclei (AGNs) MCG+08-11-011, NGC 2617, NGC 4051, 3C 382, and Mrk 374. Our targets have diverse and interesting observational properties, including a "changing look" AGN and a broad-line radio galaxy. Based on continuum-Hβ lags, we measure black hole masses for all five targets. We also obtain Hγ and He II λ4686 lags for all objects except 3C 382. The He II λ4686 lags indicate radial stratification of the BLR, and the masses derived from different emission lines are in general agreement. The relative responsivities of these lines are also in qualitative agreement with photoionization models. Finally, these spectra have extremely high signal-to-noise ratios (100–300 per pixel) and there are excellent prospects for obtaining velocity-resolved reverberation signatures.
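As an illustration of how such lags are typically measured, here is a minimal sketch of interpolated cross-correlation lag estimation (the standard ICCF idea used in reverberation mapping; this is our toy example, not the campaign's pipeline):

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(0.0, 120.0, 1.0)                   # observation times, days
    # smoothed noise as a stand-in continuum light curve
    cont = np.convolve(rng.standard_normal(t.size + 19),
                       np.ones(20) / 20.0, mode='valid')
    true_lag = 8.0                                   # days (assumed)
    line = np.interp(t - true_lag, t, cont)          # delayed echo of continuum

    lags = np.arange(0.0, 20.5, 0.5)
    ccf = [np.corrcoef(cont, np.interp(t + tau, t, line))[0, 1] for tau in lags]
    print("peak lag ~", lags[int(np.argmax(ccf))], "days")   # should be near 8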
Radio-Loud AGN: The Suzaku View
NASA Technical Reports Server (NTRS)
Sambruna, Rita
2009-01-01
We review our Suzaku observations of Broad-Line Radio Galaxies (BLRGs). The continuum above approximately 2 keV in BLRGs is dominated by emission from an accretion flow, with little or no trace of a jet, which is instead expected to emerge at GeV energies and be detected by Fermi. Concerning the physical conditions of the accretion disk, BLRGs are a mixed bag. In some sources the data suggest relatively high disk ionization, in others obscuration of the innermost regions, perhaps by the jet base. While at hard X-rays the distinction between BLRGs and Seyferts appears blurry, one of the cleanest observational differences between the two classes is at soft X-rays, where Seyferts exhibit warm absorbers related to disk winds while BLRGs do not. We discuss the possibility that jet formation inhibits disk winds, and thus is related to the remarkable dearth of absorption features at soft X-rays in BLRGs and other radio-loud AGN.
Broad-band properties of the CfA Seyfert galaxies. III - Ultraviolet variability
NASA Technical Reports Server (NTRS)
Edelson, R. A.; Pike, G. F.; Krolik, J. H.
1990-01-01
A total of 657 archived IUE spectra are used to study the UV variability properties of six members of the CfA Seyfert I galaxy sample. All show strong evidence for continuum and line variations and a tendency for less luminous objects to be more strongly variable. Most objects show a clear correlation at zero lag between UV spectral index and luminosity, evidence that the variable component is an accretion disk around a black hole which is systematically smaller in less luminous sources. No correlation is seen between the continuum luminosity and equivalent width of the C IV, Mg II, and semiforbidden C III emission lines when the entire sample is examined, but a clear anticorrelation is present when only repeated observations of individual objects are considered. This is due to a combination of light-travel time effects in the broad-line region and the nonlinear responses of lines to continuum fluctuations.
Physical Orbit for Lam Vir and Testing of Stellar Evolution Models
NASA Astrophysics Data System (ADS)
Zhao, M.; Monnier, J. D.; Torres, G.; Pedretti, E.; Millan-Gabet, R.; Berger, J.-P.; Traub, W. A.; Schloerb, F. P.
2005-12-01
Lambda Virginis is a well-known double-lined spectroscopic Am binary with the interesting property that both stars are very similar in abundance but one is sharp-lined and the other is broad-lined. The differing rotation rates and the unusual metallic-lined nature of this system present a unique opportunity to test stellar evolution models. In this poster, we present high resolution observations of Lam Vir, taken with the Infrared-Optical Telescope Array (IOTA) between 2003 and 2005. By combining our interferometric data with double-lined radial velocity data, we determined for the first time the physical orbit of Lam Vir, as well as the orbital parallax of the system. In addition, the masses of the two components are determined with 1% and 1.5% errors, respectively. Our preliminary result from comparison with stellar evolution models suggests a discrepancy between Lam Vir and standard models.
A novel simulation methodology merging source-sink dynamics and landscape connectivity
Source-sink dynamics are an emergent property of complex species-landscape interactions. This study explores the patterns of source and sink behavior that become established across a large landscape, using a simulation model for the northern spotted owl (Strix occidentalis cauri...
DOT National Transportation Integrated Search
2002-07-01
The purpose of the work is to validate the safety assessment methodology previously developed for passenger rail vehicle dynamics, which requires the application of simulation tools as well as testing of vehicles under different track scenarios. This...
ERIC Educational Resources Information Center
Peisachovich, Eva Hava; Nelles, L. J.; Johnson, Samantha; Nicholson, Laura; Gal, Raya; Kerr, Barbara; Celia, Popovic; Epstein, Iris; Da Silva, Celina
2017-01-01
Numerous forecasts suggest that professional-competence development depends on human encounters. Interaction between organizations, tasks, and individual providers influence human behaviour, affect organizations' or systems' performance, and are a key component of professional-competence development. Further, insufficient or ineffective…
NASA Astrophysics Data System (ADS)
Feng, Wei; Watanabe, Naoya; Shimamoto, Haruo; Aoyagi, Masahiro; Kikuchi, Katsuya
2018-07-01
The residual stresses induced around through-silicon vias (TSVs) by the fabrication process are one of the major reliability concerns. We propose a methodology to investigate the residual stress in a via-last TSV. First, radial and axial thermal stresses were measured by polarized Raman spectroscopy. The agreement between the simulated stress level and the measured results validated the detailed simulation model. Furthermore, the validated simulation model was applied to the study of residual stress by element death/birth methods. The residual stress at room temperature concentrates at the passivation layers owing to the high fabrication-process temperatures of 420 °C for the SiN film and 350 °C for the SiO2 films. For the Si substrate, high-level stress was observed near potential device locations, which requires attention to address reliability concerns in stress-sensitive devices. This methodology of residual stress analysis can be adopted to investigate the residual stress in other devices.
Modeling Negotiation by a Participatory Approach
NASA Astrophysics Data System (ADS)
Torii, Daisuke; Ishida, Toru; Bousquet, François
In the participatory approach used by social scientists, role-playing games (RPGs) are effective for understanding the real thinking and behavior of stakeholders, but an RPG alone is not sufficient to handle a dynamic process like negotiation. In this study, a participatory simulation in which user-controlled avatars and autonomous agents coexist is introduced into the participatory approach for modeling negotiation. To establish a modeling methodology for negotiation, we have tackled the following two issues. First, to enable domain experts to concentrate on interaction design for participatory simulation, we have adopted an architecture in which an interaction layer controls agents, and have defined three types of interaction descriptions (interaction protocol, interaction scenario, and avatar control scenario). Second, to enable domain experts and stakeholders to capitalize on participatory simulation, we have established a four-step process for acquiring a negotiation model: 1) surveys of and interviews with stakeholders, 2) RPG, 3) interaction design, and 4) participatory simulation. Finally, we discuss our methodology through a case study of agricultural economics in northeast Thailand.
Simulation of Ejecta Production and Mixing Process of Sn Sample under shock loading
NASA Astrophysics Data System (ADS)
Wang, Pei; Chen, Dawei; Sun, Haiquan; Ma, Dongjun
2017-06-01
Ejection may occur when a strong shock wave releases at the free surface of a metal, forming ejecta of high-speed particulate matter that further mixes with the surrounding gas. Ejecta production and its mixing process remain among the most difficult unresolved problems in shock physics and have many important engineering applications in implosion compression science. The present paper introduces a methodology for the theoretical modeling and numerical simulation of the complex ejection and mixing process. The ejecta production is decoupled from the particle mixing process, and the ejecta state can be obtained by direct numerical simulation of the evolution of initial defects on the metal surface. The particle mixing process can then be simulated and resolved by a two-phase gas-particle model that uses the aforementioned ejecta state as the initial condition. A preliminary ejecta experiment on a planar Sn metal sample has validated the feasibility of the proposed methodology.
Simulation of thermomechanical fatigue in solder joints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, H.E.; Porter, V.L.; Fye, R.M.
1997-12-31
Thermomechanical fatigue (TMF) is a very complex phenomenon in electronic component systems and has been identified as one prominent degradation mechanism for surface mount solder joints in the stockpile. In order to precisely predict the TMF-related effects on the reliability of electronic components in weapons, a multi-level simulation methodology is being developed at Sandia National Laboratories. This methodology links simulation codes of continuum mechanics (JAS3D), microstructural mechanics (GLAD), and microstructural evolution (PARGRAIN) to treat the disparate length scales that exist between the macroscopic response of the component and the microstructural changes occurring in its constituent materials. JAS3D is used to predict strain/temperature distributions in the component due to environmental variable fluctuations. GLAD identifies damage initiation and accumulation in detail based on the spatial information provided by JAS3D. PARGRAIN simulates the changes of material microstructure, such as the heterogeneous coarsening in Sn-Pb solder, when the component's service environment varies.
Döntgen, Malte; Schmalz, Felix; Kopp, Wassja A; Kröger, Leif C; Leonhard, Kai
2018-06-13
An automated scheme for obtaining chemical kinetic models from scratch using reactive molecular dynamics and quantum chemistry simulations is presented. This methodology combines the phase space sampling of reactive molecular dynamics with the thermochemistry and kinetics prediction capabilities of quantum mechanics. This scheme provides the NASA polynomial and modified Arrhenius equation parameters for all species and reactions that are observed during the simulation and supplies them in the ChemKin format. The ab initio level of theory for predictions is easily exchangeable and the presently used G3MP2 level of theory is found to reliably reproduce hydrogen and methane oxidation thermochemistry and kinetics data. Chemical kinetic models obtained with this approach are ready-to-use for, e.g., ignition delay time simulations, as shown for hydrogen combustion. The presented extension of the ChemTraYzer approach can be used as a basis for methodologically advancing chemical kinetic modeling schemes and as a black-box approach to generate chemical kinetic models.
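As a hedged sketch of what the generated parameters let a user do, the snippet below evaluates a modified Arrhenius rate coefficient k(T) = A·T^n·exp(−Ea/RT), the per-reaction form the scheme reports; the numerical parameters are placeholders, not values from the paper.

```python
# Illustrative evaluation of a modified Arrhenius rate coefficient,
# k(T) = A * T**n * exp(-Ea / (R*T)); the A, n, Ea values below are
# hypothetical placeholders, not output of the ChemTraYzer scheme.
import math

R = 8.314  # J/(mol*K), universal gas constant

def modified_arrhenius(T, A, n, Ea):
    """Rate coefficient in the units implied by A."""
    return A * T**n * math.exp(-Ea / (R * T))

# Hypothetical hydrogen-combustion-style parameters:
k_1500 = modified_arrhenius(1500.0, A=3.5e15, n=-0.41, Ea=69.5e3)
print(f"k(1500 K) = {k_1500:.3e}")
```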
NASA Technical Reports Server (NTRS)
Phatak, A. V.
1980-01-01
A systematic analytical approach to the determination of helicopter IFR precision approach requirements is formulated. The approach is based upon the hypothesis that pilot acceptance level or opinion rating of a given system is inversely related to the degree of pilot involvement in the control task. A nonlinear simulation of the helicopter approach to landing task incorporating appropriate models for UH-1H aircraft, the environmental disturbances and the human pilot was developed as a tool for evaluating the pilot acceptance hypothesis. The simulated pilot model is generic in nature and includes analytical representation of the human information acquisition, processing, and control strategies. Simulation analyses in the flight director mode indicate that the pilot model used is reasonable. Results of the simulation are used to identify candidate pilot workload metrics and to test the well known performance-work-load relationship. A pilot acceptance analytical methodology is formulated as a basis for further investigation, development and validation.
NASA Astrophysics Data System (ADS)
Acri, Antonio; Offner, Guenter; Nijman, Eugene; Rejlek, Jan
2016-10-01
Noise legislation and increasing customer demands drive the Noise, Vibration and Harshness (NVH) development of modern commercial vehicles. In order to meet the stringent legislative requirements for vehicle noise emission, exact knowledge of all vehicle noise sources and their acoustic behavior is required. Transfer path analysis (TPA) is a fairly well established technique for estimating and ranking individual low-frequency noise or vibration contributions via the different transmission paths. Transmission paths from different sources to target points of interest and their contributions can be analyzed by applying TPA. This technique is applied to test measurements, which only become available on prototypes at the end of the design process. In order to overcome the limits of TPA, a numerical transfer path analysis methodology based on the substructuring of a multibody system is proposed in this paper. Being based on numerical simulation, this methodology can be applied from the first steps of the design process. The main target of the proposed methodology is to obtain information about the noise-source contributions of a dynamic system when multiple forces act on the system simultaneously. The contributions of these forces are investigated with particular focus on distributed or moving forces. In this paper, the mathematical basics of the proposed methodology and its advantages in comparison with TPA are discussed. A dynamic system is then investigated with a combination of two methods. Being based on dynamic substructuring (DS) of the investigated model, the proposed methodology requires the evaluation of the contact forces at interfaces, which are computed with a flexible multi-body dynamics (FMBD) simulation. The structure-borne noise paths are then computed with the wave based method (WBM). As an example application, a 4-cylinder engine is investigated and the proposed methodology is applied to the engine block. The aim is to obtain accurate and clear relationships between excitations and responses of the simulated dynamic system, analyzing the noise and vibration sources inside a car engine and showing the main advantages of a numerical methodology.
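A minimal sketch of the bookkeeping behind any transfer path analysis, numerical or experimental: the spectrum at a target point is the sum over paths of a path FRF times an interface force spectrum, and paths are ranked by their contribution. The FRFs and forces below are random placeholders; in the proposed methodology they would come from the WBM and FMBD computations.

```python
# Transfer-path bookkeeping sketch with made-up FRFs and forces.
import numpy as np

freqs = np.linspace(20.0, 500.0, 240)           # Hz, analysis band
n_paths = 3

rng = np.random.default_rng(1)
frf = rng.standard_normal((n_paths, freqs.size)) \
      + 1j * rng.standard_normal((n_paths, freqs.size))   # [Pa/N]
force = rng.standard_normal((n_paths, freqs.size))        # [N]

path_contrib = frf * force             # per-path pressure contribution
p_total = path_contrib.sum(axis=0)     # coherent sum at the target point

# Rank paths by band-averaged magnitude to mimic contribution ranking.
ranking = np.argsort(-np.abs(path_contrib).mean(axis=1))
print("paths ranked by mean contribution:", ranking)
```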
Bond, William F; Hui, Joshua; Fernandez, Rosemarie
2018-02-01
Over the past decade, emergency medicine (EM) took a lead role in healthcare simulation in part due to its demands for successful interprofessional and multidisciplinary collaboration, along with educational needs in a diverse array of cognitive and procedural skills. Simulation-based methodologies have the capacity to support training and research platforms that model micro-, meso-, and macrosystems of healthcare. To fully capitalize on the potential of simulation-based research to improve emergency healthcare delivery will require the application of rigorous methods from engineering, social science, and basic science disciplines. The Academic Emergency Medicine (AEM) Consensus Conference "Catalyzing System Change Through Healthcare Simulation: Systems, Competency, and Outcome" was conceived to foster discussion among experts in EM, engineering, and social sciences, focusing on key barriers and opportunities in simulation-based research. This executive summary describes the overall rationale for the conference, conference planning, and consensus-building approaches and outlines the focus of the eight breakout sessions. The consensus outcomes from each breakout session are summarized in proceedings papers published in this issue of Academic Emergency Medicine. Each paper provides an overview of methodologic and knowledge gaps in simulation research and identifies future research targets aimed at improving the safety and quality of healthcare. © 2017 by the Society for Academic Emergency Medicine.
Technology CAD for integrated circuit fabrication technology development and technology transfer
NASA Astrophysics Data System (ADS)
Saha, Samar
2003-07-01
In this paper, systematic simulation-based methodologies for integrated circuit (IC) manufacturing technology development and technology transfer are presented. In technology development, technology computer-aided design (TCAD) tools are used to optimize device and process parameters to develop a new generation of IC manufacturing technology by reverse engineering from the target product specifications. In technology transfer to a manufacturing co-location, TCAD is used for process centering with respect to the high-volume manufacturing equipment of the target manufacturing facility. A quantitative model is developed to demonstrate the potential benefits of the simulation-based methodology in reducing the cycle time and cost of typical technology development and technology transfer projects relative to traditional practices. The strategy of predictive simulation to improve the effectiveness of a TCAD-based project is also discussed.
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
Low-Order Modeling of Internal Heat Transfer in Biomass Particle Pyrolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiggins, Gavin M.; Ciesielski, Peter N.; Daw, C. Stuart
2016-06-16
We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. We conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.
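A hedged, minimal version of the one-dimensional equivalent-sphere idea: explicit finite differences for radial conduction in a sphere with the surface held at the reactor temperature. The property values, particle size, and boundary treatment are illustrative assumptions, not the paper's calibrated low-order model.

```python
# Explicit finite differences for dT/dt = alpha*(T_rr + (2/r)*T_r) in an
# equivalent sphere; all numbers are placeholder assumptions.
import numpy as np

alpha = 1.5e-7          # m^2/s, assumed bulk thermal diffusivity
d_eq = 1.0e-3           # m, assumed equivalent spherical diameter
n = 50
r = np.linspace(0.0, d_eq / 2, n)
dr = r[1] - r[0]
dt = 0.2 * dr**2 / alpha            # stable explicit time step

T = np.full(n, 300.0)               # K, initial particle temperature
T_surf = 773.0                      # K, fast-pyrolysis reactor temperature

for _ in range(int(5.0 / dt)):      # march 5 s of heating
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt * (
        (T[2:] - 2*T[1:-1] + T[:-2]) / dr**2
        + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2*dr))
    Tn[0] = Tn[1]                   # symmetry condition at the center
    Tn[-1] = T_surf                 # fixed surface temperature
    T = Tn

print(f"center temperature after 5 s: {T[0]:.1f} K")
```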
NASA Astrophysics Data System (ADS)
Leskiw, Donald M.; Zhau, Junmei
2000-06-01
This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
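A toy scalar version of the Kalman update used here as the interoperability mechanism: a coarse-resolution state estimate is refined by a finer-resolution "measurement" through the standard update equations. All numbers are illustrative.

```python
# Scalar Kalman update fusing a coarse estimate with a finer observation.
x, P = 10.0, 4.0        # coarse-model state estimate and its variance
z, R = 11.2, 1.0        # finer-resolution observation and its variance
H = 1.0                 # measurement model mapping state to observation

K = P * H / (H * P * H + R)         # Kalman gain
x = x + K * (z - H * x)             # updated state estimate
P = (1.0 - K * H) * P               # updated variance

print(f"fused estimate: x={x:.2f}, P={P:.2f}")
```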
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Datta, Bithin
2011-07-01
Overexploitation of coastal aquifers results in saltwater intrusion. Once saltwater intrusion occurs, remediating the contaminated aquifer involves huge costs and long-term measures. Hence, it is important to have strategies for the sustainable use of coastal aquifers. This study develops a methodology for the optimal management of saltwater-intrusion-prone aquifers. A linked simulation-optimization-based management strategy is developed. The methodology uses genetic-programming-based models for simulating the aquifer processes, which are then linked to a multi-objective genetic algorithm to obtain optimal management strategies in terms of groundwater extraction from potential well locations in the aquifer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radojcic, Riko; Nowak, Matt; Nakamoto, Mark
The status of the development of a Design-for-Stress simulation flow that captures the stress effects in packaged 3D-stacked Si products, i.e., integrated circuits (ICs) using advanced via-middle through-Si-via technology, is outlined. The next set of challenges required to proliferate the methodology and to deploy it for making and dispositioning real Si product decisions is described here. These include the adoption and support of a Process Design Kit (PDK) that includes the relevant material properties, the development of stress simulation methodologies that operate at higher levels of abstraction in a design flow, and the development and adoption of suitable models required to make real product reliability decisions.
An almost-parameter-free harmony search algorithm for groundwater pollution source identification.
Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui
2013-01-01
The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem. The results indicate that the proposed optimization model based on the almost-parameter-free harmony search algorithm gives satisfactory estimates, even when irregular geometry, erroneous monitoring data, and a shortage of prior information on potential source locations are considered.
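For readers unfamiliar with the heuristic, a bare-bones harmony search is sketched below, minimizing a stand-in objective; in the paper the objective would be the misfit between simulated and observed concentrations, and the "almost-parameter-free" variant would adapt the settings that are fixed here.

```python
# Bare-bones harmony search on a placeholder objective function.
import numpy as np

rng = np.random.default_rng(42)

def misfit(x):
    return float(np.sum(x**2))      # stand-in objective (sphere function)

dim, lo, hi = 4, -10.0, 10.0
hms, hmcr, par, bw, iters = 20, 0.9, 0.3, 0.5, 2000

hm = rng.uniform(lo, hi, (hms, dim))            # harmony memory
fit = np.array([misfit(x) for x in hm])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                 # memory consideration
            new[j] = hm[rng.integers(hms), j]
            if rng.random() < par:              # pitch adjustment
                new[j] += bw * rng.uniform(-1.0, 1.0)
        else:                                   # random selection
            new[j] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    f_new = misfit(new)
    worst = int(np.argmax(fit))
    if f_new < fit[worst]:                      # replace worst harmony
        hm[worst], fit[worst] = new, f_new

print("best misfit found:", fit.min())
```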
Scalable tuning of building models to hourly data
Garrett, Aaron; New, Joshua Ryan
2015-03-31
Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.
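A small sketch of the kind of accuracy metrics referred to at the end, CV(RMSE) and NMBE as commonly defined for building-model calibration (e.g., ASHRAE Guideline 14); whether Autotune uses exactly these definitions is an assumption, and the hourly series here are synthetic.

```python
# Calibration accuracy metrics on synthetic hourly electrical usage.
import numpy as np

def cvrmse(measured, simulated):
    """Coefficient of variation of RMSE, in percent."""
    resid = measured - simulated
    return 100.0 * np.sqrt(np.mean(resid**2)) / np.mean(measured)

def nmbe(measured, simulated):
    """Normalized mean bias error, in percent."""
    return 100.0 * np.sum(measured - simulated) / (
        measured.size * np.mean(measured))

rng = np.random.default_rng(7)
measured = 5.0 + rng.random(8760)               # hourly kWh, synthetic
simulated = measured + rng.normal(0, 0.3, 8760) # imperfect model output

print(f"CV(RMSE) = {cvrmse(measured, simulated):.2f} %")
print(f"NMBE     = {nmbe(measured, simulated):.2f} %")
```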
NASA Astrophysics Data System (ADS)
Michelon, M. F.; Antonelli, A.
2010-03-01
We have developed a methodology to study the thermodynamics of order-disorder transformations in n-component substitutional alloys that combines nonequilibrium methods, which can efficiently compute free energies, with Monte Carlo simulations, in which configurational and vibrational degrees of freedom are simultaneously considered on an equal footing. Furthermore, with this methodology one can easily perform simulations in the canonical and isobaric-isothermal ensembles, which allows investigation of the bulk volume effect. We have applied this methodology to calculate the configurational and vibrational contributions to the entropy of the Ni3Al alloy as functions of temperature. The simulations show that when the volume of the system is kept constant, the vibrational entropy does not change upon the transition, while constant-pressure calculations indicate that the volume increase at the order-disorder transition causes a vibrational entropy increase of 0.08 kB/atom. This is significant when compared to the configurational entropy increase of 0.27 kB/atom. Our calculations also indicate that the inclusion of vibrations reduces the order-disorder transition temperature by about 30% relative to that determined solely from the configurational degrees of freedom.
Discrete crack growth analysis methodology for through cracks in pressurized fuselage structures
NASA Technical Reports Server (NTRS)
Potyondy, David O.; Wawrzynek, Paul A.; Ingraffea, Anthony R.
1994-01-01
A methodology for simulating the growth of long through cracks in the skin of pressurized aircraft fuselage structures is described. Crack trajectories are allowed to be arbitrary and are computed as part of the simulation. The interaction between the mechanical loads acting on the superstructure and the local structural response near the crack tips is accounted for by employing a hierarchical modeling strategy. The structural response for each cracked configuration is obtained using a geometrically nonlinear shell finite element analysis procedure. Four stress intensity factors, two for membrane behavior and two for bending using Kirchhoff plate theory, are computed using an extension of the modified crack closure integral method. Crack trajectories are determined by applying the maximum tangential stress criterion. Crack growth results in localized mesh deletion, and the deletion regions are remeshed automatically using a newly developed all-quadrilateral meshing algorithm. The effectiveness of the methodology and its applicability to performing practical analyses of realistic structures is demonstrated by simulating curvilinear crack growth in a fuselage panel that is representative of a typical narrow-body aircraft. The predicted crack trajectory and fatigue life compare well with measurements of these same quantities from a full-scale pressurized panel test.
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
A research and development (R&D) project often involves a number of decisions that must be made concerning which subset of systems or tasks are to be undertaken to achieve the goal of the R&D project. To help in this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for the selection of the optimal subset of systems or tasks to be undertaken on an R&D project. Using alternative networks, the SIMRAND methodology models the alternative subsets of systems or tasks under consideration. Each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of preference. Uncertainty is incorporated by treating the variables of the equations probabilistically as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND I Computer Program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
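A hedged miniature of the SIMRAND idea: each path through the alternative network yields a Monte Carlo distribution of the preference measure, and paths are compared by expected utility. The triangular task-cost distributions and the exponential utility below are invented for illustration.

```python
# Monte Carlo comparison of two alternative-network paths by expected
# utility; distributions and utility form are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_draws = 100_000

def utility(cost, risk_aversion=0.02):
    """Cardinal utility, decreasing in cost (assumed exponential form)."""
    return -np.exp(risk_aversion * cost)

# Two alternative paths, each the sum of uncertain task costs:
path_a = rng.triangular(80, 100, 140, n_draws) + rng.triangular(20, 30, 60, n_draws)
path_b = rng.triangular(70, 120, 150, n_draws) + rng.triangular(10, 25, 80, n_draws)

for name, draws in (("A", path_a), ("B", path_b)):
    print(f"path {name}: mean cost {draws.mean():.1f}, "
          f"expected utility {utility(draws).mean():.4f}")
```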
Rejeb, Olfa; Pilet, Claire; Hamana, Sabri; Xie, Xiaolan; Durand, Thierry; Aloui, Saber; Doly, Anne; Biron, Pierre; Perrier, Lionel; Augusto, Vincent
2018-06-01
Innovation and health-care funding reforms have contributed to the deployment of Information and Communication Technology (ICT) to improve patient care. Many health-care organizations consider the application of ICT a crucial key to enhancing health-care management. The purpose of this paper is to provide a methodology to assess the organizational impact of a high-level Health Information System (HIS) on the patient pathway. We propose an integrated performance evaluation of the HIS through the combination of formal modeling using Architecture of Integrated Information Systems (ARIS) models, a micro-costing approach for cost evaluation, and a Discrete-Event Simulation (DES) approach. The methodology is applied to the consultation process for cancer treatment. Simulation scenarios are established to draw conclusions about the impact of the HIS on the patient pathway. We demonstrate that although a high-level HIS lengthens the consultation, the occupation rate of oncologists is lower and the quality of service is higher (through the amount of information available during the consultation to formulate the diagnosis). The method also allows determination of the most cost-effective ICT elements for improving care-process quality while minimizing costs. The methodology is flexible enough to be applied to other health-care systems.
Simulation of a complete X-ray digital radiographic system for industrial applications.
Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H
2018-05-19
Simulating X-ray images is of great importance in industry and medicine. Such simulation permits optimization of the parameters that affect image quality without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system composed of an X-ray tube and a computed radiography (CR) image plate using the Monte Carlo N-Particle eXtended (MCNPX) code. An industrial X-ray tube with a maximum voltage of 300 kV and a current of 5 mA was simulated. A 3-layer uniform plate comprising a polymer overcoat layer, a phosphor layer, and a polycarbonate backing layer was defined and simulated as the CR imaging plate. To model image formation in the image plate, the absorbed dose was first calculated in each pixel inside the phosphor layer of the CR imaging plate using the mesh tally in the MCNPX code and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed, and images of two step wedges made of aluminum and steel were captured experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system. Copyright © 2018 Elsevier Ltd. All rights reserved.
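A minimal sketch of the final image-formation step, converting a per-pixel absorbed-dose map into gray values through an assumed logarithmic plate response; the functional form and coefficients are placeholders, since the paper determines the actual relationship in a separate calibration procedure.

```python
# Convert an absorbed-dose map (e.g., from a mesh tally) to gray values
# via an assumed logarithmic CR-plate response; coefficients are made up.
import numpy as np

def dose_to_gray(dose, a=12000.0, b=400.0, dose_ref=1e-6):
    """Assumed CR plate response: gray value grows with log of dose."""
    return a + b * np.log10(np.maximum(dose, 1e-12) / dose_ref)

dose_map = np.random.default_rng(5).uniform(1e-7, 1e-5, (64, 64))  # Gy
image = dose_to_gray(dose_map)
print("gray-value range:", image.min().round(1), "-", image.max().round(1))
```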
Realistic micromechanical modeling and simulation of two-phase heterogeneous materials
NASA Astrophysics Data System (ADS)
Sreeranganathan, Arun
This dissertation research focuses on micromechanical modeling and simulations of two-phase heterogeneous materials exhibiting anisotropic and non-uniform microstructures with long-range spatial correlations. Completed work involves development of methodologies for realistic micromechanical analyses of materials using a combination of stereological techniques, two- and three-dimensional digital image processing, and finite element based modeling tools. The methodologies are developed via its applications to two technologically important material systems, namely, discontinuously reinforced aluminum composites containing silicon carbide particles as reinforcement, and boron modified titanium alloys containing in situ formed titanium boride whiskers. Microstructural attributes such as the shape, size, volume fraction, and spatial distribution of the reinforcement phase in these materials were incorporated in the models without any simplifying assumptions. Instrumented indentation was used to determine the constitutive properties of individual microstructural phases. Micromechanical analyses were performed using realistic 2D and 3D models and the results were compared with experimental data. Results indicated that 2D models fail to capture the deformation behavior of these materials and 3D analyses are required for realistic simulations. The effect of clustering of silicon carbide particles and associated porosity on the mechanical response of discontinuously reinforced aluminum composites was investigated using 3D models. Parametric studies were carried out using computer simulated microstructures incorporating realistic microstructural attributes. The intrinsic merit of this research is the development and integration of the required enabling techniques and methodologies for representation, modeling, and simulations of complex geometry of microstructures in two- and three-dimensional space facilitating better understanding of the effects of microstructural geometry on the mechanical behavior of materials.
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
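The Monte Carlo baseline the paper compares against can be summarized in a few lines: sample the random inputs, evaluate a limit-state function g, and estimate the failure probability as the fraction of samples with g < 0. The normal distributions and the strength-minus-stress limit state below are hypothetical.

```python
# Crude Monte Carlo estimate of a failure probability P(g(X) < 0).
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000

# Hypothetical limit state: material strength minus applied stress.
strength = rng.normal(600.0, 40.0, n)   # MPa
stress = rng.normal(450.0, 50.0, n)     # MPa
g = strength - stress

pf = np.mean(g < 0.0)
print(f"Monte Carlo failure probability: {pf:.2e}")
```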
Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates
ERIC Educational Resources Information Center
Im, Piljae
2009-01-01
A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge easily to evaluate accurately energy efficient measures for K-5 schools, which would contribute to the…
A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis
2012-01-01
probability distributions for the input variables (for example, Hermite polynomials for normally distributed parameters, or Legendre polynomials for uniformly distributed ones). Sampled parameters and windfields will drive our simulations. We will use an uncertainty quantification methodology, polynomial chaos quadrature in combination with data integration, to complete the DDDAS loop.
USDA-ARS?s Scientific Manuscript database
The objective of this research was to develop a new one-step methodology that uses a dynamic approach to directly construct a tertiary model for prediction of the growth of C. perfringens in cooked beef. This methodology was based on numerical analysis and optimization of both primary and secondary...
Lithographic process window optimization for mask aligner proximity lithography
NASA Astrophysics Data System (ADS)
Voelkel, Reinhard; Vogler, Uwe; Bramati, Arianna; Erdmann, Andreas; Ünal, Nezih; Hofmann, Ulrich; Hennemeyer, Marc; Zoberbier, Ralph; Nguyen, David; Brugger, Juergen
2014-03-01
We introduce a complete methodology for process window optimization in proximity mask aligner lithography. The commercially available lithography simulation software LAB from GenISys GmbH was used for simulation of light propagation and 3D resist development. The methodology was tested for the practical example of lines and spaces, 5 micron half-pitch, printed in a 1 micron thick layer of AZ® 1512HS positive photoresist on a silicon wafer. A SUSS MicroTec MA8 mask aligner, equipped with MO Exposure Optics®, was used in simulation and experiment. MO Exposure Optics® is the latest generation of illumination systems for mask aligners. It provides telecentric illumination and excellent light uniformity over the full mask field, and allows the lithography engineer to freely shape the angular spectrum of the illumination light (customized illumination), which is a mandatory requirement for process window optimization. Three different illumination settings have been tested for 0 to 100 micron proximity gap. The results obtained prove that the introduced process window methodology is a major step forward toward more robust processes in mask aligner lithography. The most remarkable outcome of the presented study is that a smaller exposure gap does not automatically lead to better print results in proximity lithography, contrary to what a lithographer's instinct would expect. With more than 5,000 mask aligners installed in research and industry worldwide, the proposed process window methodology might have significant impact on yield improvement and cost saving in industry.
Numerical aerodynamic simulation facility preliminary study: Executive study
NASA Technical Reports Server (NTRS)
1977-01-01
A computing system was designed with the capability of providing an effective throughput of one billion floating point operations per second for three dimensional Navier-Stokes codes. The methodology used in defining the baseline design, and the major elements of the numerical aerodynamic simulation facility are described.
Effect of Accessory Power Take-off Variation on a Turbofan Engine Performance
2012-09-26
amount of energy from the low-pressure spool shaft. A high-bypass turbofan engine was modeled using the Numerical Propulsion System Simulation (NPSS).
Understanding Contamination; Twenty Years of Simulating Radiological Contamination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emily Snyder; John Drake; Ryan James
A wide variety of simulated contamination methods have been developed by researchers to reproducibly test radiological decontamination methods. Some twenty years ago a method of non-radioactive contamination simulation was proposed at the Idaho National Laboratory (INL) that mimicked the character of radioactive cesium and zirconium contamination on stainless steel. It involved baking the contamination into the surface of the stainless steel in order to 'fix' it into a tenacious, tightly bound oxide layer. This type of contamination was particularly applicable to nuclear processing facilities (and nuclear reactors) where oxide growth and exchange of radioactive materials within the oxide layer became the predominant model for material/contaminant interaction. Additional simulation methods and their empirically derived basis (from a nuclear fuel reprocessing facility) are discussed. In the last ten years the INL, working with the Defense Advanced Research Projects Agency (DARPA) and the National Homeland Security Research Center (NHSRC), has continued to develop contamination simulation methodologies. The most notable of these newer methodologies was developed to compare the efficacy of different decontamination technologies against radiological dispersal device (RDD, 'dirty bomb') type of contamination. There are many different scenarios for how RDD contamination may be spread, but the most commonly used one at the INL involves the dispersal of an aqueous solution containing radioactive Cs-137. This method was chosen during the DARPA projects and has continued through the NHSRC series of decontamination trials and also gives a tenacious 'fixed' contamination. Much has been learned about the interaction of cesium contamination with building materials, particularly concrete, throughout these tests. The effects of porosity, cation-exchange capacity of the material and the amount of dirt and debris on the surface are very important factors. The interaction of the contaminant/substrate with the particular decontamination technology is also very important. Results of decontamination testing from hundreds of contaminated coupons have led to certain conclusions about the contamination and the type of decontamination methods being deployed. A recent addition to the DARPA initiated methodology simulates the deposition of nuclear fallout. This contamination differs from previous tests in that it has been developed and validated purely to simulate a 'loose' type of contamination. This may represent the first time that a radiologically contaminated 'fallout' simulant has been developed to reproducibly test decontamination methods. While no contaminant/methodology may serve as a complete example of all aspects that could be seen in the field, the study of this family of simulation methods provides insight into the nature of radiological contamination.
Establish an Agent-Simulant Technology Relationship (ASTR)
2017-04-14
for quantitative measures that characterize simulant performance in testing, such as the ability to be removed from surfaces. Component-level ASTRs ... overall Test and Agent-Simulant Technology Relationship (ASTR) process. Historically, many tests did not develop quantitative ... methodology report. The report provides a VX-TPP ASTR for post-decon contact hazard and off-gassing. In the Stryker production verification test (PVT
Combat Simulation Using Breach Computer Language
1979-09-01
simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The duel model validates the BREACH approach by comparing results with mathematical solutions. The dynamic model shows the capability of the BREACH...
Free-Energy Profiles of Membrane Insertion of the M2 Transmembrane Peptide from Influenza A Virus
2008-12-01
The insertion of the M2 transmembrane peptide from influenza A virus into a membrane has been studied with molecular-dynamics simulations. ... We performed replica-exchange molecular-dynamics simulations with umbrella-sampling techniques to characterize the probability distribution and conformation... Atomic-detail molecular dynamics (MD) simulation techniques represent a valuable complementary methodology to investigate membrane insertion of
Simulators IV; Proceedings of the SCS Conference, Orlando, FL, Apr. 6-9, 1987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fairchild, B.T.
1987-01-01
The conference presents papers on the applicability of AI techniques to simulation models, the simulation of a reentry vehicle on Simstar, Simstar missile simulation, measurement issues associated with simulator sickness, and tracing the etiology of simulator sickness. Consideration is given to a simulator of a steam generator tube bundle response to a blowdown transient, the census of simulators for fossil-fueled boiler and gas turbine plant operation training, and a new approach for flight simulator visual systems. Other topics include past and present simulated aircraft maintenance trainers, an AI-simulation-based approach for aircraft maintenance training, simulator qualification using EPRI methodology, and the role of instinct in organizational dysfunction.
Hanna, John V; Pike, Kevin J; Charpentier, Thibault; Kemp, Thomas F; Smith, Mark E; Lucier, Bryan E G; Schurko, Robert W; Cahill, Lindsay S
2010-03-08
A variable-B(0)-field static (broadline) NMR study of a large suite of niobate materials has enabled high-precision measurement of (93)Nb NMR interaction parameters such as the isotropic chemical shift (delta(iso)), quadrupole coupling constant and asymmetry parameter (C(Q) and eta(Q)), chemical shift span/anisotropy and skew/asymmetry (Omega/Deltadelta and kappa/eta(delta)), and Euler angles (alpha, beta, gamma) describing the relative orientation of the quadrupolar and chemical shift tensor frames. These measurements have been augmented with ab initio DFT calculations using the WIEN2k and NMR-CASTEP codes, which corroborate the reported values. Unlike previous assertions about the inability to detect CSA (chemical shift anisotropy) contributions from Nb(V) in most oxo environments, this study emphasises that a thorough variable-B(0) approach, coupled with the VOCS (variable offset cumulative spectroscopy) technique for the acquisition of undistorted broad (-1/2<-->+1/2) central-transition resonances, facilitates the unambiguous observation of both quadrupolar and CSA contributions within these (93)Nb broadline data. These measurements reveal that the (93)Nb electric field gradient tensor is a particularly sensitive measure of the immediate and extended environments of the Nb(V) positions, with C(Q) values in the 0 to >80 MHz range being measured; similarly, the delta(iso) values (covering an approximately 250 ppm range) and Omega values (covering a 0 to approximately 800 ppm range) characteristic of these niobate systems are also sensitive to structural disposition. However, their systematic rationalisation in terms of the Nb-O bond angles and distances defining the immediate Nb(V) oxo environment is complicated by longer-range influences that usually involve other heavy elements in the structure. It has also been established in this study that the best computational methods of analysis for the (93)Nb NMR interaction parameters generated here are the all-electron WIEN2k and the gauge-including projector augmented wave (GIPAW) NMR-CASTEP DFT approaches, which account for the short- and long-range symmetries, periodicities, and interaction-potential characteristics of all elements (particularly the heavy elements), in comparison with Gaussian 03 methods, which focus on terminated portions of the total structure.
NASA Astrophysics Data System (ADS)
Motta, V.; Mediavilla, E.; Rojas, K.; Falco, E. E.; Jiménez-Vicente, J.; Muñoz, J. A.
2017-02-01
We use single-epoch spectroscopy of three gravitationally lensed quasars, HE 0435-1223, WFI 2033-4723, and HE 2149-2745, to study their inner structure (broad-line region [BLR] and continuum source). We detect microlensing-induced magnification in the wings of the broad emission lines of two of the systems (HE 0435-1223 and WFI 2033-4723). In the case of WFI 2033-4723, microlensing affects two “bumps” in the spectra that are almost symmetrically arranged on the blue (coincident with an Al III emission line) and red wings of C III]. These match the typical double-peaked profile that follows from disk kinematics. The presence of microlensing in the wings of the emission lines indicates the existence of two different regions in the BLR: a relatively small one with kinematics possibly related to an accretion disk, and another one that is substantially more extended and insensitive to microlensing. There is good agreement between the estimated size of the region affected by microlensing in the emission lines, $r_s = 10^{+15}_{-7}\sqrt{M/M_\odot}$ lt-day (red wing of C IV in HE 0435-1223) and $r_s = 11^{+28}_{-7}\sqrt{M/M_\odot}$ lt-day (C III] bumps in WFI 2033-4723), and the sizes inferred from the continuum emission, $r_s = 13^{+5}_{-4}\sqrt{M/M_\odot}$ lt-day (HE 0435-1223) and $r_s = 10^{+3}_{-2}\sqrt{M/M_\odot}$ lt-day (WFI 2033-4723). For HE 2149-2745 we measure an accretion disk size $r_s = 8^{+11}_{-5}\sqrt{M/M_\odot}$ lt-day. The estimates of p, the exponent of the size versus wavelength relation ($r_s \propto \lambda^p$), are 1.2 ± 0.6, 0.8 ± 0.2, and 0.4 ± 0.3 for HE 0435-1223, WFI 2033-4723, and HE 2149-2745, respectively. In conclusion, the continuum microlensing amplitude in the three quasars and the chromaticity in WFI 2033-4723 and HE 2149-2745 are below expectations for the thin-disk model. The disks are larger and their temperature gradients are flatter than predicted by this model.
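For reference, the exponent p quoted above follows from sizes measured at different wavelengths; written out (with the standard thin-disk prediction for comparison):

$$ r_s \propto \lambda^{p} \quad\Longrightarrow\quad p = \frac{\ln\left(r_{s,2}/r_{s,1}\right)}{\ln\left(\lambda_{2}/\lambda_{1}\right)}, \qquad p_{\rm thin\ disk} = \tfrac{4}{3}. $$

Measured values below 4/3 are one way of stating that the temperature profiles are flatter than the thin-disk model predicts.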
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Brandon C.; Hernquist, Lars; Siemiginowska, Aneta
2010-08-20
We present an estimate of the black hole mass function of broad-line quasars (BLQSOs) that self-consistently corrects for incompleteness and the statistical uncertainty in the mass estimates, based on a sample of 9886 quasars at 1 < z < 4.5 drawn from the Sloan Digital Sky Survey (SDSS). We find evidence for 'cosmic downsizing' of black holes in BLQSOs, where the peak in their number density shifts to higher redshift with increasing black hole mass. The cosmic mass density for black holes seen as BLQSOs peaks at z {approx} 2. We estimate the completeness of the SDSS as a function of the black hole mass and Eddington ratio, and find that at z > 1 it is highly incomplete at M {sub BH} {approx}< 10{sup 9} M {sub sun} and L/L{sub Edd} {approx}< 0.5. We estimate a lower limit on the lifetime of a single BLQSO phase to be t {sub BL} > 150 {+-} 15 Myr for black holes at z = 1 with a mass of M {sub BH} = 10{sup 9} M{sub sun}, and we constrain the maximum mass of a black hole in a BLQSO to be {approx}3 x 10{sup 10} M{sub sun}. Our estimated distribution of BLQSO Eddington ratios peaks at L/L {sub Edd} {approx} 0.05 and has a dispersion of {approx}0.4 dex, implying that most BLQSOs are not radiating at or near the Eddington limit; however, the location of the peak is subject to considerable uncertainty. The steep increase in number density of BLQSOs toward lower Eddington ratios is expected if the BLQSO accretion rate monotonically decays with time. Furthermore, our estimated lifetime and Eddington ratio distributions imply that the majority of the most massive black holes spend a significant amount of time growing in an earlier obscured phase, a conclusion which is independent of the unknown obscured fraction. These results are consistent with models for self-regulated black hole growth, at least for massive systems at z > 1, where the BLQSO phase occurs at the end of a fueling event when black hole feedback unbinds the accreting gas, halting the accretion flow.
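As a reference for the Eddington-ratio numbers quoted above, the standard definition and a worked value (for hydrogen-dominated gas; the example mass and ratio are taken from the abstract):

$$ L_{\rm Edd} = \frac{4\pi G M m_p c}{\sigma_T} \approx 1.26\times10^{38}\left(\frac{M}{M_\odot}\right)\ {\rm erg\,s^{-1}}, $$

so a BLQSO with M_BH = 10^9 M_sun radiating at L/L_Edd ≈ 0.05 has L ≈ 6.3 x 10^45 erg/s.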
Luigi Ingrassia, Pier; Ragazzoni, Luca; Carenzo, Luca; Colombo, Davide; Ripoll Gallardo, Alba; Della Corte, Francesco
2015-04-01
This study tested the hypothesis that virtual reality simulation is equivalent to live simulation for testing naive medical students' abilities to perform mass casualty triage using the Simple Triage and Rapid Treatment (START) algorithm in a simulated disaster scenario and to detect the improvement in these skills after a teaching session. Fifty-six students in their last year of medical school were randomized into two groups (A and B). The same scenario, a car accident, was developed identically on the two simulation methodologies: virtual reality and live simulation. On day 1, group A was exposed to the live scenario and group B was exposed to the virtual reality scenario, aiming to triage 10 victims. On day 2, all students attended a 2-h lecture on mass casualty triage, specifically the START triage method. On day 3, groups A and B were crossed over. The groups' abilities to perform mass casualty triage in terms of triage accuracy, intervention correctness, and speed in the scenarios were assessed. Triage and lifesaving treatment scores were assessed equally by virtual reality and live simulation on day 1 and on day 3. Both simulation methodologies detected an improvement in triage accuracy and treatment correctness from day 1 to day 3 (P<0.001). The time to complete each scenario and its decrease from day 1 to day 3 were detected equally in the two groups (P<0.05). Virtual reality simulation proved to be a valuable tool, equivalent to live simulation, to test medical students' abilities to perform mass casualty triage and to detect improvement in such skills.
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Tsujimoto, Takuji; Suganuma, Masahiro; Niwa, Yoshito; Sako, Nobutada; Hatsutori, Yoichi; Tanaka, Takashi
2006-06-01
We describe the simulation tools of the JASMINE project (the JASMINE simulator). The JASMINE project is at the stage where its basic design will be determined within a few years. It is therefore very important to simulate the data stream generated by the astrometric fields of JASMINE in order to support investigations into error budgets, sampling strategy, data compression, data analysis, scientific performance, etc. Component simulations are needed, but total simulations that include all components from the observation target to the satellite system are also very important. We find that new software technologies, such as Object-Oriented (OO) methodologies, are ideal tools for the simulation system of JASMINE (the JASMINE simulator). In this article, we explain the framework of the JASMINE simulator.
ERIC Educational Resources Information Center
Pustejovsky, James E.; Runyon, Christopher
2014-01-01
Direct observation recording procedures produce reductive summary measurements of an underlying stream of behavior. Previous methodological studies of these recording procedures have employed simulation methods for generating random behavior streams, many of which amount to special cases of a statistical model known as the alternating renewal…
Evaluation of methodology for detecting/predicting migration of forest species
Dale S. Solomon; William B. Leak
1996-01-01
Available methods for analyzing migration of forest species are evaluated, including simulation models, remeasured plots, resurveys, pollen/vegetation analysis, and age/distance trends. Simulation models have provided some of the most drastic estimates of species changes due to predicted changes in global climate. However, these models require additional testing...
Bootstrapping Methods Applied for Simulating Laboratory Works
ERIC Educational Resources Information Center
Prodan, Augustin; Campean, Remus
2005-01-01
Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…
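A minimal example of the kind of bootstrapping such e-tools implement: resampling a small laboratory sample with replacement to obtain a confidence interval without distributional assumptions. The data values below are illustrative only.

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for a sample statistic."""
    boot_stats = sorted(
        stat(random.choices(sample, k=len(sample)))  # resample with replacement
        for _ in range(n_boot)
    )
    lo = boot_stats[int(alpha / 2 * n_boot)]
    hi = boot_stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

measurements = [4.9, 5.1, 5.0, 4.8, 5.3, 5.2, 4.7, 5.0]  # illustrative lab data
print(bootstrap_ci(measurements))
```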
Robotics, Artificial Intelligence, Computer Simulation: Future Applications in Special Education.
ERIC Educational Resources Information Center
Moore, Gwendolyn B.; And Others
The report describes three advanced technologies--robotics, artificial intelligence, and computer simulation--and identifies the ways in which they might contribute to special education. A hybrid methodology was employed to identify existing technology and forecast future needs. Following this framework, each of the technologies is defined,…
Enhancing Student Engagement through Simulation in Programming Sessions
ERIC Educational Resources Information Center
Isiaq, Sakirulai Olufemi; Jamil, Md Golam
2018-01-01
Purpose: The purpose of this paper is to explore the use of a simulator for teaching programming to foster student engagement and meaningful learning. Design/methodology/approach: An exploratory mixed-method research approach was adopted in a classroom-based environment at a UK university. A rich account of student engagement dimensions…
Engaging Workers in Simulation-Based E-Learning
ERIC Educational Resources Information Center
Slotte, Virpi; Herbert, Anne
2008-01-01
Purpose: The purpose of this paper is to evaluate learners' attitudes to the use of simulation-based e-learning as part of workplace learning when socially situated interaction and blended learning are specifically included in the instructional design. Design/methodology/approach: Responses to a survey questionnaire of 298 sales personnel were…
Costing Educational Wastage: A Pilot Simulation Study. Current Surveys and Research in Statistics.
ERIC Educational Resources Information Center
Berstecher, D.
This pilot simulation study examines the important methodological problems involved in costing educational wastage, focusing specifically on the cost implications of educational wastage in primary education. Purpose of the study is to provide a clearer picture of the underlying rationale and interrelated consequences of reducing educational…
Automatic mathematical modeling for real time simulation program (AI application)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Purinton, Steve
1989-01-01
A methodology is described for automatic mathematical modeling and the generation of simulation models. The major objective was to create a user-friendly environment for engineers to design, maintain, and verify their models; to automatically convert the mathematical models into conventional code for computation; and, finally, to document the models automatically.
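As one illustration of the idea, converting a symbolic mathematical model into executable and conventional (C) code can be sketched with the SymPy library; the damped-pendulum model below is a hypothetical stand-in, not the system described in the paper.

```python
import sympy as sp

# Symbolic model: a damped pendulum's angular acceleration.
theta, omega, g, L, c = sp.symbols("theta omega g L c")
alpha = -(g / L) * sp.sin(theta) - c * omega   # the mathematical model

# Automatic conversion of the symbolic model into executable code.
alpha_fn = sp.lambdify((theta, omega, g, L, c), alpha, "math")
print(alpha_fn(0.1, 0.0, 9.81, 1.0, 0.05))

# C source for the same expression, e.g. for a real-time simulator.
print(sp.ccode(alpha, assign_to="alpha"))
```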
Reliability and maintainability assessment factors for reliable fault-tolerant systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1984-01-01
A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. The analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. The simulative component is the Gate Logic Software Simulator capability, or GLOSS. Discussed are the numerous factors that potentially degrade system reliability, particularly those peculiar to highly reliable fault-tolerant systems, and the ways in which they are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.
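A toy example of the combined analytic/simulative idea, cross-checking a closed-form reliability estimate against a Monte Carlo simulation for a triple-modular-redundant (TMR) unit with a perfect voter. This is illustrative only and far simpler than CARE III or GLOSS.

```python
import random

def tmr_reliability_analytic(r):
    """Probability that at least 2 of 3 modules survive (perfect voter)."""
    return 3 * r**2 - 2 * r**3

def tmr_reliability_simulated(r, trials=200_000):
    """Monte Carlo cross-check of the analytic result."""
    ok = sum(
        sum(random.random() < r for _ in range(3)) >= 2
        for _ in range(trials)
    )
    return ok / trials

r = 0.95  # illustrative single-module reliability at mission time t
print(tmr_reliability_analytic(r), tmr_reliability_simulated(r))
```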
Ha, Eun-Ho
2018-04-23
Standardized patients (SPs) boost self-confidence, improve problem solving, enhance critical thinking, and advance clinical judgment of nursing students. The aim of this study was to examine nursing students' experience with SPs in simulation-based learning. Q-methodology was used in a department of nursing in Seoul, South Korea. A total of 47 fourth-year undergraduate nursing students ranked 42 Q statements about their experiences with SPs into a normal distribution grid. The following three viewpoints were obtained: 1) SPs are helpful for patient care (patient-centered view), 2) SP roles are important for nursing student learning (SP-role-centered view), and 3) SPs can promote the competency of nursing students (student-centered view). These results indicate that SPs may improve nursing students' confidence and nursing competency. Professors should reflect these three viewpoints in simulation-based learning to engage SPs effectively. Copyright © 2018 Elsevier Ltd. All rights reserved.
Entangled trajectories Hamiltonian dynamics for treating quantum nuclear effects
NASA Astrophysics Data System (ADS)
Smith, Brendan; Akimov, Alexey V.
2018-04-01
A simple and robust methodology, dubbed Entangled Trajectories Hamiltonian Dynamics (ETHD), is developed to capture quantum nuclear effects such as tunneling and zero-point energy through the coupling of multiple classical trajectories. The approach reformulates the classically mapped second-order Quantized Hamiltonian Dynamics (QHD-2) in terms of coupled classical trajectories. The method partially enforces the uncertainty principle and facilitates tunneling. The applicability of the method is demonstrated by studying the dynamics in symmetric double well and cubic metastable state potentials. The methodology is validated using exact quantum simulations and is compared to QHD-2. We illustrate its relationship to the rigorous Bohmian quantum potential approach, from which ETHD can be derived. Our simulations show a remarkable agreement of the ETHD calculation with the quantum results, suggesting that ETHD may be a simple and inexpensive way of including quantum nuclear effects in molecular dynamics simulations.
Verification of a Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation
NASA Technical Reports Server (NTRS)
Tartabini, Paul V.; Roithmayr, Carlos; Toniolo, Matthew D.; Karlgaard, Christopher; Pamadi, Bandu N.
2008-01-01
This paper discusses the verification of the Constraint Force Equation (CFE) methodology and its implementation in the Program to Optimize Simulated Trajectories II (POST2) for multibody separation problems using three specially designed test cases. The first test case involves two rigid bodies connected by a fixed joint; the second case involves two rigid bodies connected with a universal joint; and the third test case is that of Mach 7 separation of the Hyper-X vehicle. For the first two cases, the POST2/CFE solutions compared well with those obtained using industry standard benchmark codes, namely AUTOLEV and ADAMS. For the Hyper-X case, the POST2/CFE solutions were in reasonable agreement with the flight test data. The CFE implementation in POST2 facilitates the analysis and simulation of stage separation as an integral part of POST2 for seamless end-to-end simulations of launch vehicle trajectories.
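For context, constraint-force formulations of this kind are typically built on the standard Lagrange-multiplier form of the constrained multibody equations of motion, in which the joint constraint forces appear explicitly. The following is a generic sketch of that form, not reproduced from the paper:

```latex
% q: generalized coordinates, M: mass matrix, F: applied forces,
% \Phi = 0: joint constraints, \Phi_q: constraint Jacobian.
% The constraint (separation joint) forces are \Phi_q^T \lambda.
\begin{aligned}
  \mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}}
    &= \mathbf{F}(\mathbf{q},\dot{\mathbf{q}},t)
     + \boldsymbol{\Phi}_{\mathbf{q}}^{\mathsf{T}}\boldsymbol{\lambda}, \\
  \boldsymbol{\Phi}(\mathbf{q},t) &= \mathbf{0}
  \;\;\Longrightarrow\;\;
  \boldsymbol{\Phi}_{\mathbf{q}}\,\ddot{\mathbf{q}}
    = \boldsymbol{\gamma}(\mathbf{q},\dot{\mathbf{q}},t).
\end{aligned}
```

Solving the augmented system for the accelerations and multipliers together yields both the separation dynamics and the loads transmitted through each joint.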
Schmidt, Irma; Minceva, Mirjana; Arlt, Wolfgang
2012-02-17
X-ray computed tomography (CT) is used to determine local parameters related to column packing homogeneity and hydrodynamics in columns packed with spherically and irregularly shaped particles of the same size. The results showed that the variation of the porosity and axial dispersion coefficient along the column axis is insignificant compared to their radial distribution. The methodology of using the data obtained by CT measurements to perform a CFD simulation of a batch separation of model binary mixtures with different concentrations and separation factors is demonstrated. The results of the CFD simulation study show that columns packed with spherically shaped particles provide higher yield than columns packed with irregularly shaped particles only below a certain value of the separation factor. The presented methodology can be used for selecting a suitable packing material for a particular separation task. Copyright © 2012 Elsevier B.V. All rights reserved.
Fogolari, Federico; Moroni, Elisabetta; Wojciechowski, Marcin; Baginski, Maciej; Ragona, Laura; Molinari, Henriette
2005-04-01
The pH-driven opening and closure of the beta-lactoglobulin EF loop, acting as a lid closing the internal cavity of the protein, has been studied by molecular dynamics (MD) simulations and free energy calculations based on the molecular mechanics/Poisson-Boltzmann (PB) solvent-accessible surface area (MM/PBSA) methodology. The forms above and below the transition pH presumably differ only in the protonation state of residue Glu89. MM/PBSA calculations are able to reproduce qualitatively the thermodynamics of the transition. The analysis of MD simulations using a combination of the MM/PBSA methodology and the colony energy approach is able to highlight the driving forces implied in the transition. The analysis suggests that global rearrangements take place before the equilibrium local conformation is reached. This conclusion may bear general relevance to conformational transitions in all lipocalins and in proteins in general. (c) 2005 Wiley-Liss, Inc.
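The MM/PBSA free-energy estimate referred to here has a standard decomposition, averaged over MD snapshots (gamma and b are the empirical coefficients of the nonpolar, surface-area term):

```latex
% Standard MM/PBSA estimate of a conformer's free energy:
% E_MM: gas-phase molecular mechanics energy, G_PB: polar solvation
% from the Poisson-Boltzmann equation, \gamma A + b: nonpolar solvation
% from the solvent-accessible surface area A, S_MM: solute entropy.
G \;\approx\;
  \underbrace{E_{\mathrm{int}} + E_{\mathrm{vdW}} + E_{\mathrm{elec}}}_{E_{\mathrm{MM}}}
  \;+\; G_{\mathrm{PB}} \;+\; \gamma A + b \;-\; T\,S_{\mathrm{MM}}
```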
Additional confirmation of the validity of laboratory simulation of cloud radiances
NASA Technical Reports Server (NTRS)
Davis, J. M.; Cox, S. K.
1986-01-01
The results of a laboratory experiment are presented that provide additional verification of the methodology adopted for simulation of the radiances reflected from fields of optically thick clouds using the Cloud Field Optical Simulator (CFOS) at Colorado State University. The comparison of these data with their theoretically derived counterparts indicates that the crucial mechanism of cloud-to-cloud radiance field interaction is accurately simulated in the CFOS experiments and adds confidence to the manner in which the optical depth is scaled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David; Agarwal, Deborah A.; Sun, Xin
2011-09-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
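A minimal sketch of the ROM-plus-UQ pattern described: fit a cheap polynomial reduced order model to a handful of expensive device simulations, then propagate input uncertainty through the ROM by Monte Carlo. The "device model" and all numbers below are hypothetical stand-ins, not CCSI Toolset components.

```python
import numpy as np

rng = np.random.default_rng(0)

def device_model(x):
    """Stand-in for an expensive device simulation (hypothetical physics)."""
    t, v = x[..., 0], x[..., 1]              # e.g. temperature, gas velocity
    return 0.9 - 0.002 * (t - 320.0) ** 2 + 0.05 * v

# 1) Build a reduced order model from a handful of full simulations.
X = rng.uniform([300, 0.5], [340, 2.0], size=(50, 2))
y = device_model(X)
A = np.column_stack([np.ones(len(X)), X, X[:, :1] * X[:, 1:], X**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # quadratic ROM coefficients

def rom(x):
    a = np.column_stack([np.ones(len(x)), x, x[:, :1] * x[:, 1:], x**2])
    return a @ coef

# 2) UQ: propagate input uncertainty through the cheap ROM.
samples = rng.normal([320.0, 1.2], [5.0, 0.2], size=(100_000, 2))
capture = rom(samples)
print(capture.mean(), capture.std())
```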
Modeling Amorphous Microporous Polymers for CO2 Capture and Separations.
Kupgan, Grit; Abbott, Lauren J; Hart, Kyle E; Colina, Coray M
2018-06-13
This review concentrates on the advances of atomistic molecular simulations to design and evaluate amorphous microporous polymeric materials for CO2 capture and separations. A description of atomistic molecular simulations is provided, including simulation techniques, structural generation approaches, relaxation and equilibration methodologies, and considerations needed for validation of simulated samples. The review provides general guidelines and a comprehensive update of the recent literature (since 2007) to promote the acceleration of the discovery and screening of amorphous microporous polymers for CO2 capture and separation processes.
Probabilistic Based Modeling and Simulation Assessment
2010-06-01
different crash and blast scenarios. With the integration of the high fidelity neck and head model, a methodology to calculate the probability of injury... variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment impact, and... the first area of focus is to develop a methodology to integrate probabilistic analysis into finite element analysis of vehicle collisions and blast.
Coordinated crew performance in commercial aircraft operations
NASA Technical Reports Server (NTRS)
Murphy, M. R.
1977-01-01
A specific methodology is proposed for an improved system of coding and analyzing crew member interaction. The complexity and lack of precision of many crew and task variables suggest the usefulness of fuzzy linguistic techniques for modeling and computer simulation of the crew performance process. Other research methodologies and concepts that have promise for increasing the effectiveness of research on crew performance are identified.
RT 24 - Architecture, Modeling & Simulation, and Software Design
2010-11-01
Briefing excerpt: focus on tool extensions (UPDM, SysML, SoaML, BPMN); leverage "best of breed" architecture methodologies; provide tooling to support the methodology. Examples include BPMN, the DoDAF 2.0 MetaModel, the BPMN MetaModel, and mappings from SysML diagrams (e.g., OV-2) to DoDAF 2.0 models.
NASA Technical Reports Server (NTRS)
Nakajima, Yukio; Padovan, Joe
1987-01-01
In a three-part series of papers, a generalized finite element methodology is formulated to handle traveling load problems involving large deformation fields in structures composed of viscoelastic media. The main thrust of this paper is to develop an overall finite element methodology and associated solution algorithms to handle the transient aspects of moving load problems involving contact-impact type loading fields. Based on the methodology and algorithms formulated, several numerical experiments are considered. These include the rolling/sliding impact of tires with road obstructions.
An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits
NASA Astrophysics Data System (ADS)
Corliss, Walter F., II
1989-03-01
The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, that was previously designed was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented, and the chip was fabricated by MOSIS. This fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full-custom VLSI chip designed at NPS to be tested with the NPS digital analysis system, a Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipment, is included as an appendix.
NASA Astrophysics Data System (ADS)
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computational cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is demonstrated on a large potential rock wedge in the Sumela Monastery, Turkey. The accuracy of the developed performance function in representing the true limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of a 24% error in accuracy.
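A sketch of the response-surface idea under stated assumptions: replace the expensive numerical slope model with a fitted quadratic performance function g(x) = FS(x) - 1, estimate the failure probability on the cheap surrogate, and report the equivalent reliability index beta = -Phi^{-1}(p_f). The toy factor-of-safety function and parameter distributions below are hypothetical, not the paper's Sumela model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def factor_of_safety(x):
    """Stand-in for one numerical slope-stability run (hypothetical)."""
    c, phi = x[..., 0], x[..., 1]            # cohesion (kPa), friction angle (deg)
    return 0.02 * c + 0.025 * phi - 0.1

# Fit a quadratic response surface to g = FS - 1 from a few "runs".
X = rng.normal([30.0, 35.0], [5.0, 3.0], size=(60, 2))
g = factor_of_safety(X) - 1.0
A = np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

# Failure probability on the surrogate, then the equivalent FORM index.
S = rng.normal([30.0, 35.0], [5.0, 3.0], size=(500_000, 2))
As = np.column_stack([np.ones(len(S)), S, S**2, S[:, :1] * S[:, 1:]])
pf = np.mean(As @ coef < 0.0)                # g < 0 means failure
print(pf, -norm.ppf(pf))                     # beta = -Phi^{-1}(pf)
```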
A novel methodology for building robust design rules by using design based metrology (DBM)
NASA Astrophysics Data System (ADS)
Lee, Myeongdong; Choi, Seiryung; Choi, Jinwoo; Kim, Jeahyun; Sung, Hyunju; Yeo, Hyunyoung; Shim, Myoungseob; Jin, Gyoyoung; Chung, Eunseung; Roh, Yonghan
2013-03-01
This paper addresses a methodology for building robust design rules by using design based metrology (DBM). The conventional method for building design rules has used a simulation tool and a simple-pattern spider mask. At the early stage of a device, the accuracy of the simulation tool is poor, and the evaluation of the simple-pattern spider mask is rather subjective because it depends on the experiential judgment of an engineer. In this work, we designed a large number of pattern situations including various 1D and 2D design structures. To overcome the difficulty of inspecting many types of patterns, we introduced the Design Based Metrology (DBM) of Nano Geometry Research, Inc., with which these mass patterns could be inspected at high speed. We also carried out quantitative analysis on PWQ silicon data to estimate process variability. Our methodology demonstrates high speed and accuracy for building design rules. All of the test patterns were inspected within a few hours. Mass silicon data were handled by statistical processing rather than personal judgment. From the results, robust design rules were successfully verified and extracted. Finally, we found that our methodology is appropriate for building robust design rules.
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The Saint-Venant equations are commonly used as the governing equations for modeling spatially varied unsteady flow in open channels. The presence of uncertainties in the channel or flow parameters renders these equations stochastic, thus requiring their solution in a stochastic framework in order to quantify the ensemble behavior and the variability of the process. While the Monte Carlo approach can be used for such a solution, its computational expense and the large number of simulations it requires act to its disadvantage. This study proposes, explains, and derives a new methodology for solving the stochastic Saint-Venant equations in only one shot, without the need for a large number of simulations. The proposed methodology is derived by developing the nonlocal Lagrangian-Eulerian Fokker-Planck equation of the characteristic form of the stochastic Saint-Venant equations for an open-channel flow process with an uncertain roughness coefficient. A numerical method for its solution is subsequently devised. The application and validation of this methodology are provided in a companion paper, in which the statistical results computed by the proposed methodology are compared against the results obtained by the Monte Carlo approach.
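For contrast, the Monte Carlo baseline that the one-shot Fokker-Planck methodology is designed to avoid looks like this in miniature: sample the uncertain Manning roughness coefficient many times and push each sample through a flow relation. A steady, wide-channel Manning equation stands in for the full Saint-Venant solver here; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Wide rectangular channel under Manning's equation:
#   q = (1/n) h^{5/3} sqrt(S)   =>   h = (q n / sqrt(S))^{3/5}
q, S = 2.0, 0.001                  # unit discharge (m^2/s), bed slope
n = rng.lognormal(mean=np.log(0.03), sigma=0.2, size=100_000)  # uncertain roughness

h = (q * n / np.sqrt(S)) ** 0.6    # flow depth for each realization
print("mean depth:", h.mean(), "std:", h.std())  # ensemble statistics
```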
High Resolution Imaging Using Phase Retrieval. Volume 2
1991-10-01
aberrations of the telescope. It will also correct aberrations due to atmospheric turbulence for a ground-based telescope, and can be used with several other... retrieval algorithm, based on the Ayers/Dainty blind deconvolution algorithm, was also developed. A new methodology for exploring the uniqueness of phase... [Report contents include simulation experiments with noisy modulus data and simulations of a space-based amplitude aperture.]
An Overview of the Greyscales Lethality Assessment Methodology
2011-01-01
The code is capable of being incorporated into a variety of simulations and has already been integrated into the Weapon Systems Division MECA and DUEL missile engagement simulations.
An in vitro simulation method for the tribological assessment of complete natural hip joints
Fisher, John; Williams, Sophie
2017-01-01
The use of hip joint simulators to evaluate the tribological performance of total hip replacements is widely reported in the literature; however, in vitro simulation studies investigating the tribology of the natural hip joint are limited, with heterogeneous methodologies reported. An in vitro simulation system for the complete natural hip joint, enabling the acetabulum and femoral head to be positioned with different orientations whilst maintaining the correct joint centre of rotation, was successfully developed for this study. The efficacy of the simulation system was assessed by testing complete, matched natural porcine hip joints and porcine hip hemiarthroplasty joints in a pendulum friction simulator. The results showed evidence of biphasic lubrication, with a non-linear increase in friction being observed in both groups. Lower overall mean friction factor values in the complete natural joint group, which increased at a lower rate over time, suggest that the exudation of fluid and transition to solid phase lubrication occurred more slowly in the complete natural hip joint compared to the hip hemiarthroplasty joint. It is envisaged that this methodology will be used to investigate morphological risk factors for developing hip osteoarthritis, as well as the effectiveness of early interventional treatments for degenerative hip disease. PMID:28886084
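In pendulum friction simulator studies of this kind, the friction factor is commonly defined from the measured frictional torque T, the femoral head radius r, and the applied axial load W (a standard convention in the simulator literature, not quoted from this paper):

```latex
% Friction factor in a pendulum friction simulator:
% T: measured frictional torque, r: femoral head radius, W: axial load.
f \;=\; \frac{T}{r\,W}
```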
How to assess the impact of a physical parameterization in simulations of moist convection?
NASA Astrophysics Data System (ADS)
Grabowski, Wojciech
2017-04-01
A numerical model capable in simulating moist convection (e.g., cloud-resolving model or large-eddy simulation model) consists of a fluid flow solver combined with required representations (i.e., parameterizations) of physical processes. The later typically include cloud microphysics, radiative transfer, and unresolved turbulent transport. Traditional approaches to investigate impacts of such parameterizations on convective dynamics involve parallel simulations with different parameterization schemes or with different scheme parameters. Such methodologies are not reliable because of the natural variability of a cloud field that is affected by the feedback between the physics and dynamics. For instance, changing the cloud microphysics typically leads to a different realization of the cloud-scale flow, and separating dynamical and microphysical impacts is difficult. This presentation will present a novel modeling methodology, the piggybacking, that allows studying the impact of a physical parameterization on cloud dynamics with confidence. The focus will be on the impact of cloud microphysics parameterization. Specific examples of the piggybacking approach will include simulations concerning the hypothesized deep convection invigoration in polluted environments, the validity of the saturation adjustment in modeling condensation in moist convection, and separation of physical impacts from statistical uncertainty in simulations applying particle-based Lagrangian microphysics, the super-droplet method.
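The piggybacking idea reduces to a simple pattern: two schemes see the identical model state every step, but only the driver feeds back on the dynamics, so the driver-piggybacker difference is free of flow-realization noise. A toy Python sketch with made-up tendency functions:

```python
def micro_A(q):   # driving microphysics: toy condensation rate
    return -0.10 * q

def micro_B(q):   # piggybacking microphysics: a slower toy scheme
    return -0.07 * q

q = 1.0                       # toy moisture-like state variable
dt, nsteps = 0.1, 100
tend_A = tend_B = 0.0
for _ in range(nsteps):
    # Both schemes see the SAME model state...
    dA, dB = micro_A(q), micro_B(q)
    tend_A += dA * dt
    tend_B += dB * dt
    # ...but only scheme A feeds back on the (toy) dynamics.
    q += dA * dt

# The A-B difference isolates the microphysics impact with no
# flow-realization noise, which parallel simulations cannot achieve.
print(tend_A, tend_B)
```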
Accounting for Uncertainties in Strengths of SiC MEMS Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Evans, Laura; Beheim, Glen; Trapp, Mark; Jadaan, Osama; Sharpe, William N., Jr.
2007-01-01
A methodology has been devised for accounting for uncertainties in the strengths of silicon carbide structural components of microelectromechanical systems (MEMS). The methodology enables prediction of the probabilistic strengths of complexly shaped MEMS parts using data from tests of simple specimens. This methodology is intended to serve as a part of a rational basis for designing SiC MEMS, supplementing methodologies that have been borrowed from the art of designing macroscopic brittle material structures. The need for this or a similar methodology arises as a consequence of the fundamental nature of MEMS and the brittle silicon-based materials from which they are typically fabricated. When tested to fracture, MEMS and structural components thereof show wide part-to-part scatter in strength. The methodology involves the use of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) software in conjunction with the ANSYS Probabilistic Design System (PDS) software to simulate or predict the strength responses of brittle material components while simultaneously accounting for the effects of variability of geometrical features on the strength responses. As such, the methodology involves the use of an extended version of the ANSYS/CARES/PDS software system described in Probabilistic Prediction of Lifetimes of Ceramic Parts (LEW-17682-1/4-1), Software Tech Briefs supplement to NASA Tech Briefs, Vol. 30, No. 9 (September 2006), page 10. The ANSYS PDS software enables the ANSYS finite-element-analysis program to account for uncertainty in the design-and-analysis process. The ANSYS PDS software accounts for uncertainty in material properties, dimensions, and loading by assigning probabilistic distributions to user-specified model parameters and performing simulations using various sampling techniques.
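A minimal sketch of the underlying pattern: a Weibull failure-probability model with size scaling, with geometric variability sampled Monte Carlo style, in the spirit of (though far simpler than) the CARES/Life-plus-ANSYS PDS combination. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def weibull_pf(sigma, m, sigma0, area_ratio):
    """Weibull failure probability with size (stressed-area) scaling."""
    return 1.0 - np.exp(-area_ratio * (sigma / sigma0) ** m)

# Propagate geometric uncertainty: the stressed area of an etched SiC
# part varies part to part (values below are illustrative only).
m, sigma0 = 10.0, 450.0          # Weibull modulus, characteristic strength (MPa)
area = rng.normal(1.0, 0.1, 50_000).clip(0.5)  # area relative to test specimen

pf = weibull_pf(350.0, m, sigma0, area)        # Pf at a 350 MPa service stress
print("mean Pf:", pf.mean(), "95th pct:", np.quantile(pf, 0.95))
```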
Mining data from hemodynamic simulations for generating prediction and explanation models.
Bosnić, Zoran; Vračar, Petar; Radović, Milos D; Devedžić, Goran; Filipović, Nenad D; Kononenko, Igor
2012-03-01
One of the most common causes of human death is stroke, which can be caused by carotid bifurcation stenosis. In our work, we aim at proposing a prototype of a medical expert system that could significantly aid medical experts to detect hemodynamic abnormalities (increased artery wall shear stress). Based on the acquired simulated data, we apply several methodologies for 1) predicting magnitudes and locations of maximum wall shear stress in the artery, 2) estimating the reliability of computed predictions, and 3) providing user-friendly explanation of the model's decision. The obtained results indicate that the evaluated methodologies can provide a useful tool for the given problem domain. © 2012 IEEE
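One plausible realization of prediction plus reliability estimation, sketched with scikit-learn: a random forest predicts maximum wall shear stress from simulated geometry/flow features, and the spread across trees serves as a per-case reliability proxy. The features and data below are synthetic stand-ins; the paper's actual methods may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

# Hypothetical features from simulated geometries (stenosis degree,
# bifurcation angle, flow rate) and the simulated max wall shear stress.
X = rng.uniform([0.2, 30.0, 3.0], [0.9, 70.0, 8.0], size=(400, 3))
y = 20.0 * X[:, 0] ** 2 + 0.05 * X[:, 1] + 2.0 * X[:, 2] \
    + rng.normal(0, 0.5, 400)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

x_new = np.array([[0.7, 45.0, 5.0]])
per_tree = np.array([t.predict(x_new)[0] for t in model.estimators_])
# Spread across trees serves as a simple per-case reliability estimate.
print("prediction:", per_tree.mean(), "+/-", per_tree.std())
```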
NASA Astrophysics Data System (ADS)
Marconi, S.; Orfanelli, S.; Karagounis, M.; Hemperek, T.; Christiansen, J.; Placidi, P.
2017-02-01
A dedicated power analysis methodology, based on modern digital design tools and integrated with the VEPIX53 simulation framework developed within the RD53 collaboration, is being used to guide vital choices for the design and optimization of the next generation ATLAS and CMS pixel chips and their critical serial powering circuit (shunt-LDO). Power consumption is studied at different stages of the design flow under different operating conditions. Significant effort is put into extensive investigations of dynamic power variations in relation to the decoupling seen by the powering network. Shunt-LDO simulations are also reported to demonstrate reliability at the system level.
LES, DNS, and RANS for the Analysis of High-Speed Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Colucci, P. J.; Jaberi, F. A.; Givi, P.
1996-01-01
A filtered density function (FDF) method suitable for chemically reactive flows is developed in the context of large eddy simulation. The advantage of the FDF methodology is its inherent ability to resolve subgrid-scale (SGS) scalar correlations that otherwise have to be modeled. Because of the lack of robust models to accurately predict these correlations in turbulent reactive flows, simulations involving turbulent combustion are often met with a degree of skepticism. The FDF methodology avoids the closure problem associated with these terms and treats the reaction in an exact manner. The scalar FDF approach is particularly attractive since it can be coupled with existing hydrodynamic computational fluid dynamics (CFD) codes.
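The scalar FDF referred to here has a standard definition in the LES-FDF literature (G is the filter kernel and zeta the fine-grained density); its key property is that the filtered chemical source term appears in closed form:

```latex
% Scalar filtered density function and the resulting closed-form
% filtered reaction rate (no SGS model needed for the chemistry):
P_L(\boldsymbol{\psi};\mathbf{x},t)
  = \int \zeta\big(\boldsymbol{\psi},\boldsymbol{\phi}(\mathbf{x}',t)\big)\,
         G(\mathbf{x}'-\mathbf{x})\,d\mathbf{x}',
\qquad
\zeta = \prod_{\alpha}\delta\big(\psi_\alpha-\phi_\alpha(\mathbf{x}',t)\big),
\qquad
\overline{S_\alpha(\boldsymbol{\phi})}
  = \int S_\alpha(\boldsymbol{\psi})\,
        P_L(\boldsymbol{\psi};\mathbf{x},t)\,d\boldsymbol{\psi}.
```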
Parameterizing the Spatial Markov Model From Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, Thomas; Fakhari, Abbas; Miller, Savannah; Singha, Kamini; Bolster, Diogo
2017-12-01
The spatial Markov model (SMM) is an upscaled Lagrangian model that effectively captures anomalous transport across a diverse range of hydrologic systems. The distinct feature of the SMM relative to other random walk models is that successive steps are correlated. To date, with some notable exceptions, the model has primarily been applied to data from high-resolution numerical simulations and correlation effects have been measured from simulated particle trajectories. In real systems such knowledge is practically unattainable and the best one might hope for is breakthrough curves (BTCs) at successive downstream locations. We introduce a novel methodology to quantify velocity correlation from BTC data alone. By discretizing two measured BTCs into a set of arrival times and developing an inverse model, we estimate velocity correlation, thereby enabling parameterization of the SMM in studies where detailed Lagrangian velocity statistics are unavailable. The proposed methodology is applied to two synthetic numerical problems, where we measure all details and thus test the veracity of the approach by comparison of estimated parameters with known simulated values. Our results suggest that our estimated transition probabilities agree with simulated values and using the SMM with this estimated parameterization accurately predicts BTCs downstream. Our methodology naturally allows for estimates of uncertainty by calculating lower and upper bounds of velocity correlation, enabling prediction of a range of BTCs. The measured BTCs fall within the range of predicted BTCs. This novel method to parameterize the SMM from BTC data alone is quite parsimonious, thereby widening the SMM's practical applicability.
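The object being estimated is the correlated transition matrix of the SMM. The sketch below builds it the forward way, from synthetic paired travel times over two successive reaches, to show what the BTC-only inverse method must recover; the data are synthetic and the construction is not the authors' inverse algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic Lagrangian travel times over two successive reaches
# (correlated: fast particles tend to stay fast).
n = 20_000
t1 = rng.lognormal(0.0, 0.5, n)
t2 = np.exp(0.7 * np.log(t1) + 0.3 * rng.normal(0.0, 0.5, n))

# Bin each reach's travel times into equiprobable velocity classes
# and count class-to-class transitions.
nbins = 5
edges1 = np.quantile(t1, np.linspace(0, 1, nbins + 1)[1:-1])
edges2 = np.quantile(t2, np.linspace(0, 1, nbins + 1)[1:-1])
c1, c2 = np.digitize(t1, edges1), np.digitize(t2, edges2)
T = np.zeros((nbins, nbins))
np.add.at(T, (c1, c2), 1.0)
T /= T.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
print(T.round(2))                    # diagonal dominance = velocity correlation
```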
Convergence studies of deterministic methods for LWR explicit reflector methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canepa, S.; Hursin, M.; Ferroukhi, H.
2013-07-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified to potentially constitute one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is to first recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
NASA Astrophysics Data System (ADS)
Liu, Yushi; Poh, Hee Joo
2014-11-01
Computational Fluid Dynamics (CFD) analysis has become increasingly important in modern urban planning for creating highly livable cities. This paper presents a multi-scale modeling methodology which couples the Weather Research and Forecasting (WRF) model with the open source CFD simulation tool OpenFOAM. This coupling enables the simulation of wind flow and pollutant dispersion in urban built-up areas with a high-resolution mesh. In this methodology, the meso-scale WRF model provides the boundary conditions for the micro-scale CFD model in OpenFOAM. The advantage is that realistic weather conditions are taken into account in the CFD simulation, and the complexity of the building layout can be handled with ease by the meshing utilities of OpenFOAM. The result is validated against the Joint Urban 2003 Tracer Field Tests in Oklahoma City, and there is reasonably good agreement between the CFD simulation and the field observations. The WRF-OpenFOAM coupling provides urban planners with a reliable environmental modeling tool for actual urban built-up areas, and it can be further extended with consideration of future weather conditions for scenario studies on climate change impact.
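A minimal sketch of the coupling direction described, meso-scale to micro-scale: take a near-surface wind speed from a WRF output, extend it to a log-law profile, and emit an OpenFOAM-style inlet list. The variable values and format details below are illustrative assumptions, not the authors' actual interface.

```python
import numpy as np

# Hypothetical meso-scale values extracted from a WRF output
# (the numbers are illustrative, not from the paper).
u10, z0 = 4.2, 0.8            # 10 m wind speed (m/s), urban roughness length (m)
kappa = 0.41                  # von Karman constant
u_star = kappa * u10 / np.log(10.0 / z0)   # friction velocity from the log law

# Log-law inlet profile for the micro-scale CFD domain.
z = np.linspace(2.0, 200.0, 25)
u = u_star / kappa * np.log(z / z0)

# Emit an OpenFOAM-style nonuniform list for a fixed-value inlet patch.
print("(")
for zi, ui in zip(z, u):
    print(f"    ({ui:.3f} 0 0)  // z = {zi:.1f} m")
print(")")
```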