Sample records for modelling background intensity

  1. Bayesian modelling of the emission spectrum of the Joint European Torus Lithium Beam Emission Spectroscopy system.

    PubMed

    Kwak, Sehyun; Svensson, J; Brix, M; Ghim, Y-C

    2016-02-01

    A Bayesian model of the emission spectrum of the JET lithium beam has been developed to infer the intensity of the Li I (2p-2s) line radiation and associated uncertainties. The detected spectrum for each channel of the lithium beam emission spectroscopy system is here modelled by a single Li line modified by an instrumental function, Bremsstrahlung background, instrumental offset, and interference filter curve. Both the instrumental function and the interference filter curve are modelled with non-parametric Gaussian processes. All free parameters of the model, the intensities of the Li line, Bremsstrahlung background, and instrumental offset, are inferred using Bayesian probability theory with a Gaussian likelihood for photon statistics and electronic background noise. The prior distributions of the free parameters are chosen as Gaussians. Given these assumptions, the intensity of the Li line and corresponding uncertainties are analytically available using a Bayesian linear inversion technique. The proposed approach makes it possible to extract the intensity of the Li line without a separate background subtraction through modulation of the Li beam.
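
    A minimal Python sketch of the Bayesian linear inversion step described above, assuming a synthetic wavelength grid, a Gaussian-shaped Li line, a broad filter-shaped Bremsstrahlung term and a constant offset; all values and variable names are illustrative, not the authors' implementation:

      import numpy as np

      # Hypothetical wavelength grid and design matrix: columns are the (instrumentally broadened)
      # Li line shape, a Bremsstrahlung term seen through the filter, and a constant offset.
      wl = np.linspace(670.0, 671.6, 200)                              # nm, illustrative
      line_shape = np.exp(-0.5 * ((wl - 670.8) / 0.05) ** 2)
      filter_curve = 1.0 / (1.0 + ((wl - 670.8) / 0.5) ** 4)
      A = np.column_stack([line_shape, filter_curve, np.ones_like(wl)])

      # Simulated detected spectrum: true amplitudes plus Gaussian noise (photon statistics + electronics)
      rng = np.random.default_rng(0)
      sigma = 2.0
      y = A @ np.array([120.0, 15.0, 4.0]) + rng.normal(0.0, sigma, wl.size)

      # Gaussian priors on the free parameters (line intensity, Bremsstrahlung, offset)
      prior_mean = np.zeros(3)
      prior_cov = np.diag([1e4, 1e3, 1e3])

      # Analytic posterior of the linear-Gaussian model (Bayesian linear inversion)
      post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + A.T @ A / sigma**2)
      post_mean = post_cov @ (np.linalg.inv(prior_cov) @ prior_mean + A.T @ y / sigma**2)

      print("Li line intensity:", post_mean[0], "+/-", np.sqrt(post_cov[0, 0]))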

  2. Non-stationary background intensity and Caribbean seismic events

    NASA Astrophysics Data System (ADS)

    Valmy, Larissa; Vaillant, Jean

    2014-05-01

    We consider seismic risk calculation based on models with non-stationary background intensity. The aim is to improve predictive strategies for seismic risk assessment using models that best describe the seismic activity of the Caribbean arc. Appropriate statistical methods are required for analyzing the volumes of data collected. The focus is on calculating earthquake occurrence probabilities and analyzing the spatiotemporal evolution of these probabilities. The main modeling tool is point process theory, which takes into account the past history prior to a given date. Thus, the seismic event conditional intensity is expressed by means of the background intensity and the self-exciting component. This intensity can be interpreted as the expected event rate per unit time and/or unit surface area. The most popular intensity model in seismology is the ETAS (Epidemic Type Aftershock Sequence) model introduced and then generalized by Ogata [2, 3]. We extended this model and performed a comparison of different probability density functions for the triggered event times [4]. We illustrate our model by considering the CDSA (Centre de Données Sismiques des Antilles) catalog [1], which contains more than 7000 seismic events that occurred in the Lesser Antilles arc. Statistical tools for testing the stationarity of the background intensity and for dynamical segmentation are presented. [1] Bengoubou-Valérius M., Bazin S., Bertil D., Beauducel F. and Bosson A. (2008). CDSA: a new seismological data center for the French Lesser Antilles, Seismol. Res. Lett., 79 (1), 90-102. [2] Ogata Y. (1998). Space-time point-process models for earthquake occurrences, Annals of the Institute of Statistical Mathematics, 50 (2), 379-402. [3] Ogata Y. (2011). Significant improvements of the space-time ETAS model for forecasting of accurate baseline seismicity, Earth, Planets and Space, 63 (3), 217-229. [4] Valmy L. and Vaillant J. (2013). Statistical models in seismology: Lesser Antilles arc case, Bull. Soc. géol. France, 184 (1), 61-67.
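
    A minimal Python sketch of an ETAS-type conditional intensity (a background term plus a self-exciting Omori-type sum over past events); the parameter values are illustrative and not fitted to the CDSA catalog:

      import numpy as np

      def conditional_intensity(t, history, mu0=0.05, K=0.8, c=0.01, p=1.1, alpha=1.0, m0=3.0):
          """lambda(t) = background mu(t) + sum over past events of an Omori-type kernel.
          The background is taken constant here; a non-stationary mu(t) would replace mu0."""
          times, mags = history
          past = times < t
          trig = K * np.exp(alpha * (mags[past] - m0)) / (t - times[past] + c) ** p
          return mu0 + trig.sum()

      # Toy catalog: (occurrence times in days, magnitudes)
      history = (np.array([1.0, 3.5, 3.6, 10.0]), np.array([4.2, 5.1, 3.4, 4.0]))
      print(conditional_intensity(12.0, history))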

  3. A contrast-sensitive channelized-Hotelling observer to predict human performance in a detection task using lumpy backgrounds and Gaussian signals

    NASA Astrophysics Data System (ADS)

    Park, Subok; Badano, Aldo; Gallas, Brandon D.; Myers, Kyle J.

    2007-03-01

    Previously, Badano et al. introduced a non-prewhitening matched filter (NPWMF) incorporating a model of the contrast sensitivity of the human visual system to model human performance in detection tasks with different viewing angles and white-noise backgrounds. However, NPWMF observers do not perform well in detection tasks involving complex backgrounds, since they do not account for background randomness. A channelized-Hotelling observer (CHO) using difference-of-Gaussians (DOG) channels has been shown to track human performance well in detection tasks using lumpy backgrounds. In this work, a CHO with DOG channels incorporating the same model of human contrast sensitivity was developed. We call this new observer a contrast-sensitive CHO (CS-CHO). The Barten model was the basis of our human contrast sensitivity model; it was multiplied by a scalar, which was varied to control the thresholding effect of the contrast sensitivity on luminance-valued images and hence the performance-prediction ability of the CS-CHO. The performance of the CS-CHO was compared to the average human performance from the psychophysical study by Park et al., where the task was to detect a known Gaussian signal in non-Gaussian distributed lumpy backgrounds. Six different signal-intensity values were used in this study. We chose the free parameter of our model to match the mean human performance in the detection experiment at the strongest signal intensity. We then compared the model to human performance at the five remaining signal-intensity values in order to see whether the performance of the CS-CHO matched human performance. Our results indicate that the CS-CHO with the chosen scalar for the contrast sensitivity predicts human performance closely as a function of signal intensity.
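
    A minimal Python sketch of a channelized-Hotelling observer with difference-of-Gaussians channels, using smoothed white noise as a stand-in for a lumpy-background model and omitting the contrast-sensitivity weighting; channel widths and all other values are illustrative:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 64

      # Difference-of-Gaussians (DOG) channel profiles (illustrative widths), flattened to row vectors
      x = np.arange(N) - N / 2
      X, Y = np.meshgrid(x, x)
      R2 = X**2 + Y**2

      def dog(s1, s2):
          return np.exp(-R2 / (2 * s1**2)) - np.exp(-R2 / (2 * s2**2))

      channels = np.stack([dog(s, 1.66 * s).ravel() for s in (2.0, 4.0, 8.0)])

      # Known Gaussian signal and surrogate "lumpy" backgrounds (smoothed white noise stands in
      # for a true lumpy-background model); a contrast-sensitivity weighting of the images would
      # be applied at this point to obtain a CS-CHO-like variant.
      signal = 5.0 * np.exp(-R2 / (2 * 3.0**2))
      kernel = np.exp(-R2 / (2 * 6.0**2))
      kernel /= kernel.sum()

      def lumpy():
          noise = rng.normal(size=(N, N))
          return np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(np.fft.ifftshift(kernel))))

      n_img = 200
      v0 = np.array([channels @ lumpy().ravel() for _ in range(n_img)])              # background only
      v1 = np.array([channels @ (lumpy() + signal).ravel() for _ in range(n_img)])   # signal present

      S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))             # average channel covariance
      w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))     # Hotelling template in channel space
      snr = (w @ (v1.mean(0) - v0.mean(0))) / np.sqrt(w @ S @ w)
      print("channelized-Hotelling detectability (SNR):", snr)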

  4. Mean and turbulent flow downstream of a low-intensity fire: influence of canopy and background atmospheric conditions

    Treesearch

    Michael T. Kiefer; Warren E. Heilman; Shiyuan Zhong; Joseph J. Charney; Xindi Bian

    2015-01-01

    This study examines the sensitivity of mean and turbulent flow in the planetary boundary layer and roughness sublayer to a low-intensity fire and evaluates whether the sensitivity is dependent on canopy and background atmospheric properties. The ARPS-CANOPY model, a modified version of the Advanced Regional Prediction System (ARPS) model with a canopy parameterization...

  5. Background radiation in inelastic X-ray scattering and X-ray emission spectroscopy. A study for Johann-type spectrometers

    NASA Astrophysics Data System (ADS)

    Paredes Mellone, O. A.; Bianco, L. M.; Ceppi, S. A.; Goncalves Honnicke, M.; Stutz, G. E.

    2018-06-01

    A study of the background radiation in inelastic X-ray scattering (IXS) and X-ray emission spectroscopy (XES) based on an analytical model is presented. The calculation model considers spurious radiation originating from elastic and inelastic scattering processes along the beam paths of a Johann-type spectrometer. The dependence of the background radiation intensity on the medium of the beam paths (air and helium), analysed energy and radius of the Rowland circle was studied. The present study shows that both for IXS and XES experiments the background radiation is dominated by spurious radiation owing to scattering processes along the sample-analyser beam path. For IXS experiments the spectral distribution of the main component of the background radiation shows a weak linear dependence on the energy in most cases. In the case of XES, a strong non-linear behaviour of the background radiation intensity was predicted for energy analysis very close to the backdiffraction condition, with a rapid increase in intensity as the analyser Bragg angle approaches π/2. The contribution of the analyser-detector beam path is significantly weaker and resembles the spectral distribution of the measured spectra. The present results show that for usual experimental conditions no appreciable structures are introduced by the background radiation into the measured spectra, both in IXS and XES experiments. The usefulness of properly calculating the background profile is demonstrated in a background subtraction procedure for a real experimental situation. The calculation model was able to simulate with high accuracy the energy dependence of the background radiation intensity measured in a particular XES experiment with air beam paths.

  6. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    PubMed

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity estimation simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the use of the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of the aforementioned properties, it allows segmentation to be undertaken for a wide range of spot shapes, including doughnut and sickle shapes and artifacts. We apply our method for gridding, segmentation and estimation to cDNA microarray real images and artificial data. Our method provides better segmentation results for spot shapes, as well as better intensity estimation, than the Spot and spotSegmentation R-language software packages. It detected blank spots as well as bright artifacts in the real data, and estimated spot intensities with high accuracy for the synthetic data. The algorithms were implemented in Matlab. The Matlab codes implementing both the gridding and segmentation/estimation are available upon request. Supplementary material is available at Bioinformatics online.

  7. Fine-Scale Fluctuations in the Corona Observed with Hi-C

    NASA Technical Reports Server (NTRS)

    Winebarger, Amy; Schuler, Timothy

    2013-01-01

    The High Resolution Coronal Imager (Hi-C) flew aboard a NASA sounding rocket on 2012 July 11 and captured roughly 345 s of high spatial and temporal resolution images of the solar corona in a narrowband 193 Angstrom channel. We have analyzed the fluctuations in intensity of Active Region 11520. We selected events based on a lifetime greater than 11 s (two Hi-C frames) and intensities greater than a threshold determined from the average background intensity in a pixel and the photon and electronic noise. We find fluctuations occurring down to the smallest timescale (11 s). Typical intensity fluctuations are 20% of the background intensity, while some events peak at 100% of the background intensity. Generally the fluctuations are clustered in solar structures, particularly the moss. We interpret the fluctuations in the moss as indicative of heating events. We use the observed events to model the active region core.
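
    A minimal Python sketch of the kind of per-pixel event threshold described above (background intensity plus a multiple of the combined photon and electronic noise); the gain, read-noise and sigma values are assumptions, not the Hi-C numbers:

      import numpy as np

      def detection_threshold(background_dn, gain=2.0, read_noise_dn=1.2, nsigma=3.0):
          """Background intensity plus nsigma times the quadrature sum of photon (shot) noise
          and electronic (read) noise, all in detector units (illustrative model)."""
          shot_noise_dn = np.sqrt(background_dn / gain)          # photon noise propagated to DN
          noise = np.sqrt(shot_noise_dn**2 + read_noise_dn**2)
          return background_dn + nsigma * noise

      pixel_background = np.array([50.0, 120.0, 400.0])
      print(detection_threshold(pixel_background))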

  8. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  9. COSMIC INFRARED BACKGROUND FLUCTUATIONS AND ZODIACAL LIGHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.

    We performed a specific observational test to measure the effect that the zodiacal light can have on measurements of the spatial fluctuations of the near-IR background. Previous estimates of possible fluctuations caused by zodiacal light have often been extrapolated from observations of the thermal emission at longer wavelengths and low angular resolution or from IRAC observations of high-latitude fields where zodiacal light is faint and not strongly varying with time. The new observations analyzed here target the COSMOS field at low ecliptic latitude where the zodiacal light intensity varies by factors of ∼2 over the range of solar elongations at which the field can be observed. We find that the white-noise component of the spatial power spectrum of the background is correlated with the modeled zodiacal light intensity. Roughly half of the measured white noise is correlated with the zodiacal light, but a more detailed interpretation of the white noise is hampered by systematic uncertainties that are evident in the zodiacal light model. At large angular scales (≳100″) where excess power above the white noise is observed, we find no correlation of the power with the modeled intensity of the zodiacal light. This test clearly indicates that the large-scale power in the infrared background is not being caused by the zodiacal light.
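
    A minimal Python sketch of the test described above, assuming synthetic images and an invented zodiacal-intensity series: estimate the white-noise floor of each image's spatial power spectrum and correlate it with the modeled zodiacal light intensity:

      import numpy as np

      rng = np.random.default_rng(2)

      def white_noise_level(image):
          """Estimate the white-noise floor as the mean 2D power at the highest spatial frequencies."""
          p = np.abs(np.fft.fftshift(np.fft.fft2(image)))**2 / image.size
          n = image.shape[0]
          yy, xx = np.indices(image.shape)
          k = np.hypot(xx - n / 2, yy - n / 2)
          return p[k > 0.4 * n].mean()

      # Synthetic stand-ins: noise amplitude made to scale with a modeled zodiacal intensity
      zodi_model = np.linspace(1.0, 2.0, 12)     # relative zodiacal intensity per epoch (invented)
      noise_levels = [white_noise_level(rng.normal(scale=np.sqrt(z), size=(128, 128))) for z in zodi_model]

      print("correlation(white noise, zodiacal model):", np.corrcoef(zodi_model, noise_levels)[0, 1])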

  10. Cosmic Infrared Background Fluctuations and Zodiacal Light

    NASA Astrophysics Data System (ADS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2016-06-01

    We performed a specific observational test to measure the effect that the zodiacal light can have on measurements of the spatial fluctuations of the near-IR background. Previous estimates of possible fluctuations caused by zodiacal light have often been extrapolated from observations of the thermal emission at longer wavelengths and low angular resolution or from IRAC observations of high-latitude fields where zodiacal light is faint and not strongly varying with time. The new observations analyzed here target the COSMOS field at low ecliptic latitude where the zodiacal light intensity varies by factors of ˜2 over the range of solar elongations at which the field can be observed. We find that the white-noise component of the spatial power spectrum of the background is correlated with the modeled zodiacal light intensity. Roughly half of the measured white noise is correlated with the zodiacal light, but a more detailed interpretation of the white noise is hampered by systematic uncertainties that are evident in the zodiacal light model. At large angular scales (≳100″) where excess power above the white noise is observed, we find no correlation of the power with the modeled intensity of the zodiacal light. This test clearly indicates that the large-scale power in the infrared background is not being caused by the zodiacal light.

  11. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
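
    A minimal Python sketch of a background fit whose cost function is quadratic in the fitting parameters, so the minimum follows from linear algebra; a simple neighbor-smoothness penalty stands in for the full neighbor-correlation and sequence-affinity terms of the model described above:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 50
      observed = 10.0 + np.cumsum(rng.normal(0, 0.3, n)) + rng.normal(0, 1.0, n)  # toy probe intensities

      # Cost = ||b - observed||^2 + lam * ||D b||^2, with D the first-difference operator
      # encoding correlated intensities between neighboring features. Both terms are quadratic
      # in b, so the minimizer solves (I + lam * D^T D) b = observed.
      lam = 5.0
      D = np.diff(np.eye(n), axis=0)
      background = np.linalg.solve(np.eye(n) + lam * D.T @ D, observed)

      corrected = observed - background
      print("mean background-corrected intensity:", corrected.mean())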

  12. An analog retina model for detecting dim moving objects against a bright moving background

    NASA Technical Reports Server (NTRS)

    Searfus, R. M.; Colvin, M. E.; Eeckman, F. H.; Teeters, J. L.; Axelrod, T. S.

    1991-01-01

    We are interested in applications that require the ability to track a dim target against a bright, moving background. Since the target signal will be less than or comparable to the variations in the background signal intensity, sophisticated techniques must be employed to detect the target. We present an analog retina model that adapts to the motion of the background in order to enhance targets that have a velocity difference with respect to the background. Computer simulation results and our preliminary concept of an analog 'Z' focal plane implementation are also presented.

  13. Cosmic Infrared Background Fluctuations and Zodiacal Light

    NASA Technical Reports Server (NTRS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2017-01-01

    We performed a specific observational test to measure the effect that the zodiacal light can have on measurements of the spatial fluctuations of the near-IR (near-infrared)background. Previous estimates of possible fluctuations caused by zodiacal light have often been extrapolated from observations of the thermal emission at longer wavelengths and low angular resolution or from IRAC (Infrared Array Camera) observations of high-latitude fields where zodiacal light is faint and not strongly varying with time. The new observations analyzed here target the COSMOS (Cosmic Evolution Survey) field at low ecliptic latitude where the zodiacal light intensity varies by factors of approximately 2 over the range of solar elongations at which the field can be observed. We find that the white-noise component of the spatial power spectrum of the background is correlated with the modeled zodiacal light intensity. Roughly half of the measured white noise is correlated with the zodiacal light, but a more detailed interpretation of the white noise is hampered by systematic uncertainties that are evident in the zodiacal light model. At large angular scales (greater than or approximately equal to 100 arcseconds) where excess power above the white noise is observed, we find no correlation of the power with the modeled intensity of the zodiacal light. This test clearly indicates that the large-scale power in the infrared background is not being caused by the zodiacal light.

  14. Probabilistic segmentation and intensity estimation for microarray images.

    PubMed

    Gottardo, Raphael; Besag, Julian; Stephens, Matthew; Murua, Alejandro

    2006-01-01

    We describe a probabilistic approach to simultaneous image segmentation and intensity estimation for complementary DNA microarray experiments. The approach overcomes several limitations of existing methods. In particular, it (a) uses a flexible Markov random field approach to segmentation that allows for a wider range of spot shapes than existing methods, including relatively common 'doughnut-shaped' spots; (b) models the image directly as background plus hybridization intensity, and estimates the two quantities simultaneously, avoiding the common logical error that estimates of foreground may be less than those of the corresponding background if the two are estimated separately; and (c) uses a probabilistic modeling approach to simultaneously perform segmentation and intensity estimation, and to compute spot quality measures. We describe two approaches to parameter estimation: a fast algorithm, based on the expectation-maximization and the iterated conditional modes algorithms, and a fully Bayesian framework. These approaches produce comparable results, and both appear to offer some advantages over other methods. We use an HIV experiment to compare our approach to two commercial software products: Spot and Arrayvision.

  15. Strong field QED in lepton colliders and electron/laser interactions

    NASA Astrophysics Data System (ADS)

    Hartin, Anthony

    2018-05-01

    The studies of strong field particle physics processes in electron/laser interactions and lepton collider interaction points (IPs) are reviewed. These processes are defined by the high intensity of the electromagnetic fields involved and the need to take them into account as fully as possible. Thus, the main theoretical framework considered is the Furry interaction picture within intense field quantum field theory. In this framework, the influence of a background electromagnetic field in the Lagrangian is calculated nonperturbatively, involving exact solutions for quantized charged particles in the background field. These “dressed” particles go on to interact perturbatively with other particles, enabling the background field to play both macroscopic and microscopic roles. Macroscopically, the background field starts to polarize the vacuum, in effect rendering it a dispersive medium. Particles encountering this dispersive vacuum obtain a lifetime, either radiating or decaying into pair particles at a rate dependent on the intensity of the background field. In fact, the intensity of the background field enters into the coupling constant of the strong field quantum electrodynamic Lagrangian, influencing all particle processes. A number of new phenomena occur. Particles gain an intensity-dependent rest mass shift that accounts for their presence in the dispersive vacuum. Multi-photon events involving more than one external field photon occur at each vertex. Higher order processes which exchange a virtual strong field particle resonate via the lifetimes of the unstable strong field states. Two main arenas of strong field physics are reviewed; those occurring in relativistic electron interactions with intense laser beams, and those occurring in the beam-beam physics at the interaction point of colliders. This review outlines the theory, describes its significant novel phenomenology and details the experimental schema required to detect strong field effects and the simulation programs required to model them.

  16. Characterization and Prediction of the SPI Background

    NASA Technical Reports Server (NTRS)

    Teegarden, B. J.; Jean, P.; Knodlseder, J.; Skinner, G. K.; Weidenspointer, G.

    2003-01-01

    The INTEGRAL Spectrometer, like most gamma-ray instruments, is background dominated. Signal-to-background ratios of a few percent are typical. The background is primarily due to interactions of cosmic rays in the instrument and spacecraft. It characteristically varies by +/- 5% on time scales of days. This variation is caused mainly by fluctuations in the interplanetary magnetic field that modulates the cosmic ray intensity. To achieve the maximum performance from SPI it is essential to have a high quality model of this background that can predict its value to a fraction of a percent. In this poster we characterize the background and its variability, explore various models, and evaluate the accuracy of their predictions.

  17. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of the exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data and background corrections derived from this model may lead to wrong estimation. We propose a more flexible modeling based on a gamma distributed signal and a normal distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate to model Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models are compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.

  18. A New Determination of the Extragalactic Diffuse Gamma-Ray Background from EGRET Data

    NASA Technical Reports Server (NTRS)

    Strong, Andrew W.; Moskalenko, Igor V.; Reimer, Olaf

    2004-01-01

    We use the GALPROP model for cosmic-ray propagation to obtain a new estimate of the Galactic component of gamma rays, and show that away from the Galactic plane it gives an accurate prediction of the observed EGRET intensities in the energy range 30 MeV - 50 GeV. On this basis we re-evaluate the extragalactic gamma-ray background. We find that for some energies previous work underestimated the Galactic contribution at high latitudes and hence overestimated the background. Our new background spectrum shows a positive curvature similar to that expected for models of the extragalactic emission based on the blazar population.

  19. Stress Intensity Factors for Cracking Metal Structures under Rapid Thermal Loading. Volume 2. Theoretical Background

    DTIC Science & Technology

    1989-08-01

    thermal pulse loadings. The work couples a Green’s function integration technique for transient thermal stresses with the well-known influence ... function approach for calculating stress intensity factors. A total of seven most commonly used crack models were investigated in this study. A computer

  20. Applied Behaviour Analysis: Does Intervention Intensity Relate to Family Stressors and Maternal Well-Being?

    ERIC Educational Resources Information Center

    Schwichtenberg, A.; Poehlmann, J.

    2007-01-01

    Background: Interventions based on applied behaviour analysis (ABA) are commonly recommended for children with an autism spectrum disorder (ASD); however, few studies address how this intervention model impacts families. The intense requirements that ABA programmes place on children and families are often cited as a critique of the programme,…

  1. On the impact of atmospheric thermal stability on the characteristics of nocturnal downslope flows

    NASA Astrophysics Data System (ADS)

    Ye, Z. J.; Garratt, J. R.; Segal, M.; Pielke, R. A.

    1990-04-01

    The impacts of background (or ambient) and local atmospheric thermal stabilities, and slope steepness, on nighttime thermally induced downslope flow in meso-β domains (i.e., 20-200 km horizontal extent) have been investigated using analytical and numerical model approaches. Good agreement between the analytical and numerical evaluations was found. It was concluded that: (i) as anticipated, the intensity of the downslope flow increases with increased slope steepness, although the depth of the downslope flow was found to be insensitive to slope steepness in the studied situations; (ii) the intensity of the downslope flow is generally independent of background atmospheric thermal stability; (iii) for given integrated nighttime cooling across the nocturnal boundary layer (NBL), Q_s, the local atmospheric thermal stability exerts a strong influence on downslope flow behavior: the downslope flow intensity increases when local atmospheric thermal stability increases; and (iv) the downslope flow intensity is proportional to Q_s^(1/2).

  2. Background Noise Analysis in a Few-Photon-Level Qubit Memory

    NASA Astrophysics Data System (ADS)

    Mittiga, Thomas; Kupchak, Connor; Jordaan, Bertus; Namazi, Mehdi; Nolleke, Christian; Figeroa, Eden

    2014-05-01

    We have developed an Electromagnetically Induced Transparency based polarization qubit memory. The device is composed of a dual-rail probe field polarization setup collinear with an intense control field to store and retrieve any arbitrary polarization state by addressing a Λ-type energy level scheme in a 87Rb vapor cell. To achieve a signal-to-background ratio at the few-photon level sufficient for polarization tomography of the retrieved state, the intense control field is filtered out through an etalon filtering system. We have developed an analytical model predicting the influence of the signal-to-background ratio on the fidelities and compared it to experimental data. Experimentally measured global fidelities have been found to closely follow the theoretical prediction as the signal-to-background ratio decreases. These results suggest the plausibility of employing room temperature memories to store photonic qubits at the single photon level and for future applications in long distance quantum communication schemes.

  3. The albedo and scattering phase function of interstellar dust and the diffuse background at far-ultraviolet wavelengths.

    PubMed

    Hurwitz, M; Bowyer, S; Martin, C

    1991-05-01

    We have determined the scattering parameters of dust in the interstellar medium at far-ultraviolet (FUV) wavelengths (1415-1835 angstroms). Our results are based on spectra of the diffuse background taken with the Berkeley UVX spectrometer. The unique design of this instrument makes possible for the first time accurate determination of the background both at high Galactic latitude, where the signal is intrinsically faint, and at low Galactic latitude, where direct starlight has heretofore compromised measurements of the diffuse emission. Because the data are spectroscopic, the continuum can be distinguished from the atomic and molecular transition features which also contribute to the background. We find the continuum intensity to be well correlated with the Galactic neutral hydrogen column density until saturation at about 1200 photons cm⁻² s⁻¹ sr⁻¹ Å⁻¹ is reached, where τ_FUV ≈ 1. Our measurement of the intensity where τ_FUV ≥ 1 is crucial to the determination of the scattering properties of the grains. We interpret the data with a detailed radiative transfer model and conclude that the FUV albedo of the grains is low (<25%) and that the grains scatter fairly isotropically. We evaluate models of dust composition and grain-size distribution and compare their predictions with these new results. We present evidence that, as the Galactic neutral hydrogen column density approaches zero, the FUV continuum background arises primarily from scattering by dust, which implies that dust may be present in virtually all view directions. A non-dust-scattering continuum component has also been identified, with an intensity (external to the foreground Galactic dust) of about 115 photons cm⁻² s⁻¹ Å⁻¹. With about half this intensity accounted for by two-photon emission from Galactic ionized gas, we identify roughly 50 photons cm⁻² s⁻¹ sr⁻¹ Å⁻¹ as a true extragalactic component.

  4. Distal and Proximal Factors in Domestic Violence: A Test of an Integrated Model.

    ERIC Educational Resources Information Center

    DeMaris, Alfred; Benson, Michael L.; Fox, Greer L.; Hill, Terrence; Van Wyk, Judy

    2003-01-01

    Tests a model of couple violence drawn from several theoretical perspectives. The outcome distinguishes among nonviolent couples and those experiencing either physical aggression or intense male violence. According to the model, background characteristics of couples are related to relationship stressors, which affect the risk of violence via their…

  5. Simultaneous brightness contrast of foraging Papilio butterflies

    PubMed Central

    Kinoshita, Michiyo; Takahashi, Yuki; Arikawa, Kentaro

    2012-01-01

    This study focuses on the sense of brightness in the foraging Japanese yellow swallowtail butterfly, Papilio xuthus. We presented two red discs of different intensity on a grey background to butterflies, and trained them to select one of the discs. They were successfully trained to select either a high intensity or a low intensity disc. The trained butterflies were tested on their ability to perceive brightness in two different protocols: (i) two orange discs of different intensity presented on the same intensity grey background and (ii) two orange discs of the same intensity separately presented on a grey background that was either higher or lower in intensity than the training background. The butterflies trained to high intensity red selected the orange disc of high intensity in protocol 1, and the disc on the background of low intensity grey in protocol 2. We obtained similar results in another set of experiments with purple discs instead of orange discs. The choices of the butterflies trained to low intensity red were opposite to those just described. Taken together, we conclude that Papilio has the ability to learn brightness and darkness of targets independent of colour, and that they have the so-called simultaneous brightness contrast. PMID:22179808

  6. A biological hierarchical model based underwater moving object detection.

    PubMed

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is the key for many underwater computer vision tasks, such as object recognizing, locating, and tracking. Given the superior visual sensing abilities of underwater animals, their visual mechanisms are generally regarded as the cue for establishing bionic models that are better adapted to underwater environments. However, the low accuracy rate and the absence of prior knowledge learning limit their adaptation in underwater applications. To solve the problems originating from inhomogeneous illumination and an unstable background, the visual information sensing and processing mechanism of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. Firstly, the image is segmented into several subblocks. The intensity information is extracted to establish a background model that roughly identifies the object and background regions. The texture feature of each pixel in the rough object region is further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives a better performance. Compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results.
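
    A minimal Python sketch of the block-wise intensity stage of such a hierarchical background model, with a synthetic frame and per-block Gaussian statistics; the texture-based refinement stage is only indicated by a comment:

      import numpy as np

      rng = np.random.default_rng(4)
      H = W = 64
      block = 8

      background_frames = rng.normal(100.0, 5.0, size=(20, H, W))    # training frames (no object)
      frame = rng.normal(100.0, 5.0, size=(H, W))
      frame[24:40, 24:40] += 30.0                                    # synthetic moving object

      # Stage 1: per-block mean/std background model built from the training frames
      mean_b = background_frames.mean(0).reshape(H // block, block, W // block, block).mean((1, 3))
      std_b = background_frames.std(0).reshape(H // block, block, W // block, block).mean((1, 3))

      blocks = frame.reshape(H // block, block, W // block, block).mean((1, 3))
      rough_object = np.abs(blocks - mean_b) > 3.0 * std_b           # rough object/background split

      # Stage 2 (not implemented here): analyse the texture of pixels inside the rough object
      # blocks to refine the contour, as in the hierarchical model described above.
      print("blocks flagged as object:", int(rough_object.sum()))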

  7. A Biological Hierarchical Model Based Underwater Moving Object Detection

    PubMed Central

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is the key for many underwater computer vision tasks, such as object recognizing, locating, and tracking. Given the superior visual sensing abilities of underwater animals, their visual mechanisms are generally regarded as the cue for establishing bionic models that are better adapted to underwater environments. However, the low accuracy rate and the absence of prior knowledge learning limit their adaptation in underwater applications. To solve the problems originating from inhomogeneous illumination and an unstable background, the visual information sensing and processing mechanism of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. Firstly, the image is segmented into several subblocks. The intensity information is extracted to establish a background model that roughly identifies the object and background regions. The texture feature of each pixel in the rough object region is further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives a better performance. Compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results. PMID:25140194

  8. Apollo-Soyuz survey of the extreme-ultraviolet/soft X-ray background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stern, R.; Bowyer, S.

    1979-06-15

    The results of an extensive sky survey of the extreme-ultraviolet (EUV)/soft X-ray background are reported. The data were obtained with a telescope, designed and calibrated at the University of California at Berkeley, which observed EUV sources and the diffuse background as part of the Apollo-Soyuz mission in 1975 July. With a primary field of view of 2.3° ± 0.1° FWHM and four EUV bandpass filters (16-25, 20-73, 80-108, and 80-250 eV), the EUV telescope obtained useful background data for 21 sky points, 11 large angle scans, and an additional group of short observations of both types. Analysis of the data reveals an intense 80-108 eV diffuse flux of 4.0 ± 1.3 photons cm⁻² sr⁻¹ eV⁻¹ (broad-band weighted average). This is roughly a factor of 10 higher than the corresponding 150-280 eV average intensity and confirms the earlier results of Cash, Malina, and Stern. Galactic contributions to the background intensity at still lower energies are most likely masked by large fluxes of geocoronal or interplanetary solar-scattered resonance radiation; however, we derive upper limits to the local galactic background of 2 × 10⁴ and 6 × 10² photons cm⁻² sr⁻¹ eV⁻¹ averaged over the 16-25 eV and 20-73 eV bands, respectively. The uniformity of the background flux is uncertain due to limitations in the statistical accuracy of the data; we discuss probable upper limits to any spatial anisotropy. No evidence is found for a correlation between the telescope count rate and Earth-based parameters (zenith angle, Sun angle, etc.) for E ≳ 80 eV. Unlike some previous claims for the soft X-ray background, no simple dependence upon galactic latitude is seen. Fitting models of thermal emission to the Apollo-Soyuz data yields constraints on model parameters that are consistent, for a limited range of temperatures, with the EUV results of Cash, Malina, and Stern and the soft X-ray data of Burstein et al.

  9. A Heuristic Approach to Remove the Background Intensity on White-light Solar Images. I. STEREO /HI-1 Heliospheric Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenborg, Guillermo; Howard, Russell A.

    White-light coronal and heliospheric imagers observe scattering of photospheric light from both dust particles (the F-corona) and free electrons in the corona (the K-corona). The separation of the two coronae is thus vitally important to reveal the faint K-coronal structures (e.g., streamers, co-rotating interaction regions, coronal mass ejections, etc.). However, the separation of the two coronae is very difficult, so we are content with defining a background corona that contains the F- and as little K- as possible. For both the LASCO-C2 and LASCO-C3 coronagraphs aboard the Solar and Heliospheric Observatory (SOHO) and the white-light imagers of the SECCHI suite aboard the Solar Terrestrial Relations Observatory (STEREO), a time-dependent model of the background corona is generated from about a month of similar images. The creation of such models is possible because the missions carrying these instruments are orbiting the Sun at about 1 au. However, the orbit profiles for the upcoming Solar Orbiter and Solar Probe Plus missions are very different. These missions will have elliptic orbits with a rapidly changing radial distance, hence invalidating the techniques in use for the SOHO/LASCO and STEREO/SECCHI instruments. We have been investigating techniques to generate background models out of just single images that could be used for the Solar Orbiter Heliospheric Imager and the Wide-field Imager for the Solar Probe Plus packages on board the respective spacecraft. In this paper, we introduce a state-of-the-art, heuristic technique to create the background intensity models of STEREO/HI-1 data based solely on individual images, report on new results derived from its application, and discuss its relevance to instrumental and operational issues.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, X.; Florinski, V.

    We present a new model that couples galactic cosmic-ray (GCR) propagation with magnetic turbulence transport and the MHD background evolution in the heliosphere. The model is applied to the problem of the formation of corotating interaction regions (CIRs) during the last solar minimum, in the period between 2007 and 2009. The numerical model simultaneously calculates the large-scale supersonic solar wind properties and its small-scale turbulent content from 0.3 au to the termination shock. Cosmic rays are then transported through the background thus computed, with diffusion coefficients derived from the solar wind turbulent properties, using a stochastic Parker approach. Our results demonstrate that GCR variations depend on the ratio of diffusion coefficients in the fast and slow solar winds. Stream interfaces inside the CIRs always lead to depressions of the GCR intensity. On the other hand, heliospheric current sheet (HCS) crossings do not appreciably affect GCR intensities in the model, which is consistent with observations under quiet solar wind conditions. Therefore, variations in diffusion coefficients associated with CIR stream interfaces are more important for GCR propagation than the drift effects of the HCS during a negative solar minimum.

  11. INTEGRAL/SPI γ-ray line spectroscopy. Response and background characteristics

    NASA Astrophysics Data System (ADS)

    Diehl, Roland; Siegert, Thomas; Greiner, Jochen; Krause, Martin; Kretschmer, Karsten; Lang, Michael; Pleintinger, Moritz; Strong, Andrew W.; Weinberger, Christoph; Zhang, Xiaoling

    2018-03-01

    Context. The space based γ-ray observatory INTEGRAL of the European Space Agency (ESA) includes the spectrometer instrument "SPI". This is a coded mask telescope featuring a 19-element Germanium detector array for high-resolution γ-ray spectroscopy, encapsulated in a scintillation detector assembly that provides a veto for background from charged particles. In space, cosmic rays irradiate spacecraft and instruments, which, in spite of the vetoing detectors, results in a large instrumental background from activation of those materials, and leads to deterioration of the charge collection properties of the Ge detectors. Aim. We aim to determine the measurement characteristics of our detectors and their evolution with time, that is, their spectral response and instrumental background. These incur systematic variations in the SPI signal from celestial photons, hence their determination from a broad empirical database enables a reduction of underlying systematics in data analysis. For this, we explore compromises balancing temporal and spectral resolution within statistical limitations. Our goal is to enable modelling of background applicable to spectroscopic studies of the sky, accounting separately for changes of the spectral response and of instrumental background. Methods: We use 13.5 years of INTEGRAL/SPI data, which consist of spectra for each detector and for each pointing of the satellite. Spectral fits to each such spectrum, with independent but coherent treatment of continuum and line backgrounds, provides us with details about separated background components. From the strongest background lines, we first determine how the spectral response changes with time. Applying symmetry and long-term stability tests, we eliminate degeneracies and reduce statistical fluctuations of background parameters, with the aim of providing a self-consistent description of the spectral response for each individual detector. Accounting for this, we then determine how the instrumental background components change in intensities and other characteristics, most-importantly their relative distribution among detectors. Results: Spectral resolution of Ge detectors in space degrades with time, up to 15% within half a year, consistently for all detectors, and across the SPI energy range. Semi-annual annealing operations recover these losses, yet there is a small long-term degradation. The intensity of instrumental background varies anti-correlated to solar activity, in general. There are significant differences among different lines and with respect to continuum. Background lines are found to have a characteristic, well-defined and long-term consistent intensity ratio among detectors. We use this to categorise lines in groups of similar behaviour. The dataset of spectral-response and background parameters as fitted across the INTEGRAL mission allows studies of SPI spectral response and background behaviour in a broad perspective, and efficiently supports precision modelling of instrumental background.

  12. Thresholding of auditory cortical representation by background noise

    PubMed Central

    Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
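
    A minimal Python sketch of the level-dependent threshold shift described above: above a critical noise level the tone-response threshold rises linearly with noise intensity, translating the receptive field along the intensity axis (all constants are illustrative):

      def tone_threshold(noise_db, base_threshold_db=20.0, critical_db=35.0, slope=1.0):
          """Intensity threshold for tone-evoked responses as a function of continuous
          wideband noise level (dB SPL). Purely illustrative parameter values."""
          shift = max(0.0, noise_db - critical_db) * slope
          return base_threshold_db + shift

      for noise in (20, 35, 50, 65):
          print(noise, "dB noise ->", tone_threshold(noise), "dB threshold")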

  13. Raman background photobleaching as a possible method of cancer diagnostics

    NASA Astrophysics Data System (ADS)

    Brandt, Nikolai N.; Brandt, Nikolai B.; Chikishev, Andrey Y.; Gangardt, Mihail G.; Karyakina, Nina F.

    2001-06-01

    The kinetics of photobleaching of the background in Raman spectra of aqueous solutions of the plant toxins ricin and ricin agglutinin, the ricin binding subunit, and normal and malignant human blood serum were measured. CW and pulsed laser radiation were used to excite the spectra. The Raman background spectra change upon laser irradiation. The background intensity is lower for samples with small molecular weight. The cyclization of amino acid residues in the toxin molecules, as well as in human blood serum, may be a cause of the Raman background. A model of the background photobleaching is proposed. The differences in photobleaching kinetics between CW and pulsed laser radiation are discussed. It is shown that Raman background photobleaching can be very informative for cancer diagnostics.

  14. The kinetics of lactate production and removal during whole-body exercise

    PubMed Central

    2012-01-01

    Background Based on a literature review, the current study aimed to construct mathematical models of lactate production and removal in both muscles and blood during steady state and at varying intensities during whole-body exercise. In order to experimentally test the models in dynamic situations, a cross-country skier performed laboratory tests while treadmill roller skiing, from where work rate, aerobic power and blood lactate concentration were measured. A two-compartment simulation model for blood lactate production and removal was constructed. Results The simulated and experimental data differed less than 0.5 mmol/L both during steady state and varying sub-maximal intensities. However, the simulation model for lactate removal after high exercise intensities seems to require further examination. Conclusions Overall, the simulation models of lactate production and removal provide useful insight into the parameters that affect blood lactate response, and specifically how blood lactate concentration during practical training and testing in dynamical situations should be interpreted. PMID:22413898
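
    A minimal Python sketch of a two-compartment (muscle and blood) lactate model with intensity-driven production, first-order exchange and removal, and Euler integration; the rate constants are invented for illustration and are not the fitted values of the study:

      import numpy as np

      def simulate_lactate(intensity, dt=1.0, t_end=1800.0,
                           k_prod=0.0005, k_mb=0.02, k_bm=0.005, k_rem=0.01):
          """Two compartments: muscle (M) and blood (B) lactate concentration (mmol/L).
          Production in muscle rises with exercise intensity (% of max work rate); exchange
          and removal are first order. All constants are illustrative."""
          t = np.arange(0.0, t_end, dt)
          M = np.zeros_like(t)
          B = np.zeros_like(t)
          M[0] = B[0] = 1.0
          for i in range(1, t.size):
              prod = k_prod * intensity(t[i])
              dM = prod - k_mb * M[i - 1] + k_bm * B[i - 1]
              dB = k_mb * M[i - 1] - k_bm * B[i - 1] - k_rem * B[i - 1]
              M[i] = M[i - 1] + dt * dM
              B[i] = B[i - 1] + dt * dB
          return t, B

      # Step protocol of varying sub-maximal intensity (% of maximal work rate)
      t, blood = simulate_lactate(lambda s: 60.0 if s < 900 else 85.0)
      print("blood lactate at 15 and 30 min:", round(blood[900], 2), round(blood[-1], 2))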

  15. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    NASA Astrophysics Data System (ADS)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
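
    A minimal Python sketch of the volumetric fracture intensity measure P32 used above (cumulative fracture area per unit volume), with random polygon areas standing in for the Poisson-Voronoi fracture polygons:

      import numpy as np

      rng = np.random.default_rng(6)

      # Areas (m^2) of the polygonal fractures generated inside a model region (illustrative values)
      fracture_areas = rng.lognormal(mean=2.0, sigma=0.5, size=500)
      region_volume = 50.0 * 50.0 * 50.0                  # m^3

      P32 = fracture_areas.sum() / region_volume          # cumulative fracture area per unit volume
      print("P32 =", P32, "m^2 per m^3")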

  16. Modelling stock order flows with non-homogeneous intensities from high-frequency data

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.

    2013-10-01

    A micro-scale model is proposed for the evolution of an information system such as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes taking account of the stochastic character of the intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model gives the opportunity to link the micro-scale (high-frequency) dynamics of the limit order book with the macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and hence to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
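
    A minimal Python sketch of buy and sell order flows as doubly stochastic (Cox) Poisson processes with a shared stochastic activity level, and of the resulting imbalance process; the intensity model is illustrative, not the multiplicative model of the paper:

      import numpy as np

      rng = np.random.default_rng(5)

      def cox_process(intensity, t_end, dt=0.01):
          """Doubly stochastic Poisson process: 'intensity' is an array of lambda values on a dt grid."""
          counts = rng.poisson(intensity * dt)
          return np.repeat(np.arange(0.0, t_end, dt), counts)

      t_end, dt = 60.0, 0.01
      grid = np.arange(0.0, t_end, dt)

      # Stochastic intensities: a common activity factor times separate buy/sell multipliers
      activity = np.exp(np.cumsum(rng.normal(0, 0.02, grid.size)))   # log-random-walk activity level
      lam_buy = 5.0 * activity * np.exp(rng.normal(0, 0.1, grid.size))
      lam_sell = 5.0 * activity * np.exp(rng.normal(0, 0.1, grid.size))

      buys, sells = cox_process(lam_buy, t_end, dt), cox_process(lam_sell, t_end, dt)

      # Imbalance process: difference of cumulative buy/sell counts on a 1-second grid
      sec = np.arange(0.0, t_end + 1.0, 1.0)
      imbalance = np.histogram(buys, bins=sec)[0].cumsum() - np.histogram(sells, bins=sec)[0].cumsum()
      print("final buy/sell counts and imbalance:", buys.size, sells.size, imbalance[-1])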

  17. Adaptation, saturation, and physiological masking in single auditory-nerve fibers.

    PubMed

    Smith, R L

    1979-01-01

    Results are reviewed concerning some effects, at a unit's characteristic frequency, of a short-term conditioning stimulus on the responses to perstimulatory and poststimulatory test tones. A phenomenological equation is developed from the poststimulatory results and shown to be consistent with the perstimulatory results. According to the results and equation, the response to a test tone equals the unconditioned or unadapted response minus the decrement produced by adaptation to the conditioning tone. Furthermore, the decrement is proportional to the driven response to the conditioning tone and does not depend on sound intensity per se. The equation has a simple interpretation in terms of two processes in cascade: a static saturating nonlinearity followed by additive adaptation. Results are presented to show that this functional model is sufficient to account for the "physiological masking" produced by wide-band backgrounds. According to this interpretation, a sufficiently intense background produces saturation. Consequently, a superimposed test tone causes no change in response. In addition, when the onset of the background precedes the onset of the test tone, the total firing rate is reduced by adaptation. Evidence is reviewed concerning the possible correspondence between the variables in the model and intracellular events in the auditory periphery.
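
    A minimal Python sketch of the two-process cascade described above: a static saturating nonlinearity followed by additive adaptation, so the response to a test tone equals the unadapted response minus a decrement proportional to the driven response to the conditioner (all constants are illustrative):

      import math

      def saturating_rate(intensity_db, r_max=200.0, half_db=40.0, slope=0.15):
          """Static saturating nonlinearity: driven firing rate (spikes/s) versus sound intensity."""
          return r_max / (1.0 + math.exp(-slope * (intensity_db - half_db)))

      def adapted_response(test_db, conditioner_db, spont=10.0, k=0.4):
          """Response to a test tone = unadapted response minus a decrement proportional to the
          driven (above-spontaneous) response to the conditioning stimulus, not to its intensity."""
          unadapted = saturating_rate(test_db)
          decrement = k * max(0.0, saturating_rate(conditioner_db) - spont)
          return max(spont, unadapted - decrement)

      # An intense wideband background both saturates the nonlinearity and, through adaptation,
      # lowers the rate available to a superimposed test tone ("physiological masking").
      print(adapted_response(test_db=50.0, conditioner_db=0.0))    # quiet background
      print(adapted_response(test_db=50.0, conditioner_db=80.0))   # intense background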

  18. On the origin of the soft X-ray background. [in cosmological observations

    NASA Technical Reports Server (NTRS)

    Wang, Q. D.; Mccray, Richard

    1993-01-01

    The angular autocorrelation function and spectrum of the soft X-ray background is studied below a discrete source detection limit, using two deep images from the Rosat X-ray satellite. The average spectral shape of pointlike sources, which account for 40 to 60 percent of the background intensity, is determined by using the autocorrelation function. The background spectrum, in the 0.5-0.9 keV band (M band), is decomposed into a pointlike source component characterized by a power law and a diffuse component represented by a two-temperature plasma. These pointlike sources cannot contribute more than 60 percent of the X-ray background intensity in the M band without exceeding the total observed flux in the R7 band. Spectral analysis has shown that the local soft diffuse component, although dominating the background intensity at energies not greater than 0.3 keV, contributes only a small fraction of the M band background intensity. The diffuse component may represent an important constituent of the interstellar or intergalactic medium.

  19. Cross-cultural differences in item and background memory: examining the influence of emotional intensity and scene congruency.

    PubMed

    Mickley Steinmetz, Katherine R; Sturkie, Charlee M; Rochester, Nina M; Liu, Xiaodong; Gutchess, Angela H

    2018-07-01

    After viewing a scene, individuals differ in what they prioritise and remember. Culture may be one factor that influences scene memory, as Westerners have been shown to be more item-focused than Easterners (see Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically: Comparing the context sensitivity of Japanese and Americans. Journal of Personality and Social Psychology, 81, 922-934). However, cultures may differ in their sensitivity to scene incongruences and emotion processing, which may account for cross-cultural differences in scene memory. The current study uses hierarchical linear modeling (HLM) to examine scene memory while controlling for scene congruency and the perceived emotional intensity of the images. American and East Asian participants encoded pictures that included a positive, negative, or neutral item placed on a neutral background. After a 20-min delay, participants were shown the item and background separately along with similar and new items and backgrounds to assess memory specificity. Results indicated that even when congruency and emotional intensity were controlled, there was evidence that Americans had better item memory than East Asians. Incongruent scenes were better remembered than congruent scenes. However, this effect did not differ by culture. This suggests that Americans' item focus may result in memory changes that are robust despite variations in scene congruency and perceived emotion.

  20. A novel content-based active contour model for brain tumor segmentation.

    PubMed

    Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal

    2012-06-01

    Brain tumor segmentation is a crucial step in surgical and treatment planning. Intensity-based active contour models such as gradient vector flow (GVF), magnetostatic active contour (MAC) and fluid vector flow (FVF) have been proposed to segment homogeneous objects/tumors in medical images. In this study, extensive experiments are done to analyze the performance of intensity-based techniques for homogeneous tumors on brain magnetic resonance (MR) images. The analysis shows that the state-of-the-art methods fail to segment homogeneous tumors against a similar background or when these tumors show partial diversity toward the background. They also have a preconvergence problem in the case of false edges/saddle points. In addition, the presence of weak and diffused edges (due to edema around the tumor) leads to oversegmentation by intensity-based techniques. Therefore, the proposed content-based active contour (CBAC) method uses both intensity and texture information present within the active contour to overcome the above-stated problems and capture a large intensity range in an image. It also proposes a novel use of the Gray-Level Co-occurrence Matrix to define a texture space for tumor segmentation. The effectiveness of this method is tested on two different real data sets (55 patients - more than 600 images) containing five different types of homogeneous, heterogeneous and diffused tumors, and on synthetic images (non-MR benchmark images). Remarkable results are obtained in segmenting homogeneous tumors of uniform intensity, complex-content heterogeneous and diffused tumors on MR images (T1-weighted, postcontrast T1-weighted and T2-weighted) and synthetic images (non-MR benchmark images of varying intensity, texture, noise content and false edges). Further, tumor volume is efficiently extracted from 2-dimensional slices and the procedure is termed 2.5-dimensional segmentation. Copyright © 2012 Elsevier Inc. All rights reserved.
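
    The use of a Gray-Level Co-occurrence Matrix (GLCM) to define a texture space, as mentioned above, can be illustrated with a minimal GLCM computation in Python; the pixel offset, quantisation, and derived features below are generic choices, not necessarily those used by the authors.

      import numpy as np

      def glcm(image, levels=8, dx=1, dy=0):
          """Gray-Level Co-occurrence Matrix for a single pixel offset (dx, dy)."""
          q = np.floor(image.astype(float) / image.max() * (levels - 1e-9)).astype(int)
          mat = np.zeros((levels, levels))
          rows, cols = q.shape
          for r in range(rows - dy):
              for c in range(cols - dx):
                  mat[q[r, c], q[r + dy, c + dx]] += 1
          return mat / mat.sum()

      def texture_features(p):
          """Contrast and energy, two common GLCM-derived texture measures."""
          i, j = np.indices(p.shape)
          contrast = np.sum(p * (i - j) ** 2)
          energy = np.sum(p ** 2)
          return contrast, energy

      image = np.random.default_rng(2).integers(0, 256, size=(64, 64))
      print(texture_features(glcm(image)))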

  1. Was the Big Bang hot?

    NASA Technical Reports Server (NTRS)

    Wright, E. L.

    1983-01-01

    Techniques for verifying the spectrum defined by Woody and Richards (WR, 1981), which serves as a base for dust-distorted models of the 3 K background, are discussed. WR detected a sharp deviation from the Planck curve in the 3 K background. The absolute intensity of the background may be determined by the frequency dependence of the dipole anisotropy of the background or the frequency dependence effect in galactic clusters. Both methods involve the Doppler shift; analytical formulae are defined for characterization of the dipole anisotropy. The measurement of the 30-300 GHz spectra of cold galactic dust may reveal the presence of significant amounts of needle-shaped grains, which would in turn support a theory of a cold Big Bang.

  2. "Hook"-calibration of GeneChip-microarrays: theory and algorithm.

    PubMed

    Binder, Hans; Preibisch, Stephan

    2008-08-29

    The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This issue requires the formulation of an appropriate model describing the basic relationship between the probe intensity and the specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip, and, finally, the development of practicable algorithms which judge the quality of a particular hybridization and estimate the expression degree from the intensity values. We present the so-called hook-calibration method, which co-processes the log-difference (delta) and -sum (sigma) of the perfect match (PM) and mismatch (MM) probe intensities. The MM probes are utilized as an internal reference which is subjected to the same hybridization law as the PM, however with modified characteristics. After sequence-specific affinity correction, the method fits the Langmuir adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook curve characterize the particular hybridization in terms of simple geometric parameters which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM sensitivity gain and the fraction of absent probes. This graphical summary defines a metric for expression estimates in natural units such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e. it separately uses the intensities of each selected chip. The hook method corrects the raw intensities for the non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe spots with bound transcripts and for the sequence-specific binding of specific transcripts. The obtained chip characteristics, in combination with the sensitivity-corrected probe-intensity values, provide expression estimates scaled in natural units which are given by the binding constants of the particular hybridization.
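
    The core coordinate transform of the hook method, co-processing the log-difference and log-sum of PM and MM intensities, can be sketched as follows. The smoothing and Langmuir fit are omitted, and the exact normalisation conventions (the factors of 1/2) are assumptions made here for illustration.

      import numpy as np

      def hook_coordinates(pm, mm):
          """Delta (log-difference) and sigma (log-sum) of PM/MM probe intensities."""
          pm = np.asarray(pm, dtype=float)
          mm = np.asarray(mm, dtype=float)
          delta = 0.5 * (np.log10(pm) - np.log10(mm))
          sigma = 0.5 * (np.log10(pm) + np.log10(mm))
          return delta, sigma

      # Plotting smoothed delta versus sigma produces the "hook" curve whose
      # geometry encodes background level, saturation and PM/MM sensitivity gain.
      pm = np.array([120.0, 300.0, 2500.0, 40000.0])
      mm = np.array([100.0, 180.0, 600.0, 15000.0])
      print(hook_coordinates(pm, mm))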

  3. Background photobleaching in raman spectra of aqueous solutions of plant toxins

    NASA Astrophysics Data System (ADS)

    Brandt, Nikolai N.; Chikishev, Andrey Y.; Tonevitsky, Alexander G.

    2002-05-01

    Kinetics of background photobleaching in Raman spectra of aqueous solutions of ricin, ricin agglutinin and the ricin binding subunit were measured. It was found that the spectrum of the Raman background changes upon laser irradiation. The background intensity is lower for the samples with lower molecular weight. Photobleaching is characterized by oscillations in the multi-exponentially decaying intensity.

  4. Synthetic spectral analysis of a kinetic model for slow-magnetosonic waves in solar corona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruan, Wenzhi; He, Jiansen; Tu, Chuanyi

    We propose a kinetic model of slow-magnetosonic waves to explain various observational features associated with the propagating intensity disturbances (PIDs) occurring in the solar corona. The characteristics of slow-mode waves, e.g., in-phase oscillations of density, velocity, and thermal speed, are reproduced in this kinetic model. Moreover, the red-blue (R-B) asymmetry of the velocity distribution, as self-consistently generated in the model, is found to be contributed by the beam component, as a result of the competition between Landau resonance and Coulomb collisions. Furthermore, we synthesize the spectral lines and perform the spectral analysis, based on the kinetic simulation data of the flux-tube plasmas and assumptions about the surrounding background plasmas. It is found that the fluctuations of the parameters of the synthetic spectral lines are basically consistent with the observations: (1) the line intensity, Doppler shift, and line width fluctuate in phase; (2) the R-B asymmetry usually oscillates out of phase with the former three parameters; (3) the blueward asymmetry is more evident than the redward asymmetry in the R-B fluctuations. The oscillations of the line parameters are weakened for the case with denser surrounding background plasmas. Similar to the observations, there is no doubled-frequency oscillation of the line width for the case with flux-tube plasmas flowing in bulk upward through static background plasmas. Therefore, we suggest that the "wave + beam flow" kinetic model may be a viable interpretation for the PIDs observed in the solar corona.

  5. Hydrodynamic Simulation of the Cosmological X-Ray Background

    NASA Astrophysics Data System (ADS)

    Croft, Rupert A. C.; Di Matteo, Tiziana; Davé, Romeel; Hernquist, Lars; Katz, Neal; Fardal, Mark A.; Weinberg, David H.

    2001-08-01

    We use a hydrodynamic simulation of an inflationary cold dark matter model with a cosmological constant to predict properties of the extragalactic X-ray background (XRB). We focus on emission from the intergalactic medium (IGM), with particular attention to diffuse emission from warm-hot gas that lies in relatively smooth filamentary structures between galaxies and galaxy clusters. We also include X-rays from point sources associated with galaxies in the simulation, and we make maps of the angular distribution of the emission. Although much of the X-ray luminous gas has a filamentary structure, the filaments are not evident in the simulated maps because of projection effects. In the soft (0.5-2 keV) band, our calculated mean intensity of radiation from intergalactic and cluster gas is 2.3×10^-12 erg s^-1 cm^-2 deg^-2, 35% of the total soft-band emission. This intensity is compatible at the ~1 σ level with estimates of the unresolved soft background intensity from deep ROSAT and Chandra measurements. Only 4% of the hard (2-10 keV) emission is associated with intergalactic gas. Relative to active galactic nuclei flux, the IGM component of the XRB peaks at a lower redshift (median z~0.45) and spans a narrower redshift range, so its clustering makes an important contribution to the angular correlation function of the total emission. The clustering on the scales accessible to our simulation (0.1'-10') is significant, with an amplitude roughly consistent with an extrapolation of recent ROSAT results to small scales. A cross-correlation analysis of the XRB against nearby galaxies taken from a simulated redshift survey also yields a strong signal from the IGM. Our conclusions about the soft background intensity differ from those of some recent papers that have argued that the expected emission from gas in galaxy, group, and cluster halos would exceed the observed background unless much of the gas is expelled by supernova feedback. We obtain reasonable compatibility with current observations in a simulation that incorporates cooling, star formation, and only modest feedback. A clear prediction of our model is that the unresolved portion of the soft XRB will remain mostly unresolved even as observations reach deeper point-source sensitivity.

  6. High-throughput screening in two dimensions: binding intensity and off-rate on a peptide microarray.

    PubMed

    Greving, Matthew P; Belcher, Paul E; Cox, Conor D; Daniel, Douglas; Diehnelt, Chris W; Woodbury, Neal W

    2010-07-01

    We report a high-throughput two-dimensional microarray-based screen, incorporating both target binding intensity and off-rate, which can be used to analyze thousands of compounds in a single binding assay. Relative binding intensities and time-resolved dissociation are measured for labeled tumor necrosis factor alpha (TNF-alpha) bound to a peptide microarray. The time-resolved dissociation is fitted to a one-component exponential decay model, from which relative dissociation rates are determined for all peptides with binding intensities above background. We show that most peptides with the slowest off-rates on the microarray also have the slowest off-rates when measured by surface plasmon resonance (SPR). 2010 Elsevier Inc. All rights reserved.
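
    A minimal sketch of fitting a one-component exponential decay to time-resolved dissociation data with SciPy, as in the off-rate measurement described above; the synthetic data, parameter names, and starting values are illustrative assumptions, not the authors' analysis code.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, amplitude, k_off, baseline):
          """One-component exponential dissociation model."""
          return amplitude * np.exp(-k_off * t) + baseline

      rng = np.random.default_rng(3)
      t = np.linspace(0, 300, 60)                      # seconds after wash
      signal = decay(t, amplitude=1000.0, k_off=0.02, baseline=50.0)
      signal += rng.normal(0, 20, t.size)              # measurement noise

      popt, pcov = curve_fit(decay, t, signal, p0=(signal.max(), 0.01, signal.min()))
      print("fitted off-rate k_off = %.4f 1/s" % popt[1])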

  7. Improved electron probe microanalysis of trace elements in quartz

    USGS Publications Warehouse

    Donovan, John J.; Lowers, Heather; Rusk, Brian G.

    2011-01-01

    Quartz occurs in a wide range of geologic environments throughout the Earth's crust. The concentration and distribution of trace elements in quartz provide information such as temperature and other physical conditions of formation. Trace element analyses with modern electron-probe microanalysis (EPMA) instruments can achieve 99% confidence detection of ~100 ppm with fairly minimal effort for many elements in samples of low to moderate average atomic number such as many common oxides and silicates. However, trace element measurements below 100 ppm in many materials are limited, not only by the precision of the background measurement, but also by the accuracy with which background levels are determined. A new "blank" correction algorithm has been developed and tested on both Cameca and JEOL instruments, which applies a quantitative correction to the emitted X-ray intensities during the iteration of the sample matrix correction based on a zero level (or known trace) abundance calibration standard. This iterated blank correction, when combined with improved background fit models, and an "aggregate" intensity calculation utilizing multiple spectrometer intensities in software for greater geometric efficiency, yields a detection limit of 2 to 3 ppm for Ti and 6 to 7 ppm for Al in quartz at 99% t-test confidence with similar levels for absolute accuracy.

  8. Chemiluminescence-based multivariate sensing of local equivalence ratios in premixed atmospheric methane-air flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.

    Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross-validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratios > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
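
    A sketch of the PLS-R idea, regressing equivalence ratio directly on raw spectral intensities with scikit-learn and leave-one-out cross-validation; the synthetic spectra are placeholders for real chemiluminescence data, and the number of latent components is an assumption.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(4)

      # Synthetic stand-in: 45 spectra (5 replicates x 9 equivalence ratios),
      # 500 wavelength channels, with intensity loosely tied to phi.
      phi = np.repeat(np.linspace(0.73, 1.48, 9), 5)
      spectra = rng.normal(size=(phi.size, 500)) + np.outer(phi, np.linspace(0, 1, 500))

      pls = PLSRegression(n_components=5)
      phi_cv = cross_val_predict(pls, spectra, phi, cv=phi.size)   # leave-one-out
      print("RMSE:", np.sqrt(np.mean((phi_cv.ravel() - phi) ** 2)))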

  9. Mimicking Natural Laminar to Turbulent Flow Transition: A Systematic CFD Study Using PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul; Abdol-Hamid, Khaled S.

    2005-01-01

    For applied aerodynamic computations using a general-purpose Navier-Stokes code, the common practice for treating laminar to turbulent flow transition over a no-slip surface is somewhat arbitrary: either the entire flow is treated as turbulent, or the flow is forced to undergo transition at given trip locations in the computational domain. In this study, the possibility of using the PAB3D code, the standard k-epsilon turbulence model, and the Girimaji explicit algebraic stress model to mimic natural laminar to turbulent flow transition was explored. The sensitivity of flow transition with respect to two limiters in the standard k-epsilon turbulence model was examined using a flat plate and a 6:1 aspect ratio prolate spheroid for our computations. For the flat plate, a systematic dependence of transition Reynolds number on background turbulence intensity was found. For the prolate spheroid, the transition patterns in the three-dimensional boundary layer at different flow conditions were sensitive to the free-stream turbulence viscosity limit, the reference Reynolds number and the angle of attack, but not to background turbulence intensity below a certain threshold value. The computed results showed encouraging agreement with the experimental measurements at the corresponding geometry and flow conditions.

  10. Characterizing the Background Corona with SDO/AIA

    NASA Technical Reports Server (NTRS)

    Napier, Kate; Alexander, Caroline; Winebarger, Amy

    2014-01-01

    Characterizing the nature of the solar coronal background would enable scientists to more accurately determine plasma parameters, and may lead to a better understanding of the coronal heating problem. Because scientists study the 3D structure of the Sun in 2D, any line-of-sight includes both foreground and background material, and thus, the issue of background subtraction arises. By investigating the intensity values in and around an active region, using multiple wavelengths collected from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO) over an eight-hour period, this project aims to characterize the background as smooth or structured. Different methods were employed to measure the true coronal background and create minimum intensity images. These were then investigated for the presence of structure. The background images created were found to contain long-lived structures, including coronal loops, that were still present in all of the wavelengths, 131, 171, 193, 211, and 335 A. The intensity profiles across the active region indicate that the background is much more structured than previously thought.
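
    The minimum-intensity background images described above amount to a per-pixel minimum over a time series of co-aligned images. A hedged sketch using a synthetic stack follows; loading real AIA cutouts (e.g., with SunPy) is not shown, and the scene parameters are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic stand-in for an 8-hour stack of co-aligned AIA cutouts:
      # a smooth background plus transient bright loops.
      n_frames, ny, nx = 96, 128, 128
      stack = rng.normal(loc=100.0, scale=5.0, size=(n_frames, ny, nx))
      stack[:, 40:60, :] += rng.exponential(50.0, size=(n_frames, 20, nx))  # variable loops

      # Per-pixel minimum over time: transient emission is suppressed, while
      # long-lived structures survive and appear in the "background" image.
      background = stack.min(axis=0)
      print(background[50, 64], background[10, 64])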

  11. Fourier-space combination of Planck and Herschel images

    NASA Astrophysics Data System (ADS)

    Abreu-Vicente, J.; Stutz, A.; Henning, Th.; Keto, E.; Ballesteros-Paredes, J.; Robitaille, T.

    2017-08-01

    Context. Herschel has revolutionized our ability to measure column densities (NH) and temperatures (T) of molecular clouds thanks to its far-infrared multiwavelength coverage. However, the lack of a well-defined background intensity level in the Herschel data limits the accuracy of the NH and T maps. Aims: We aim to provide a method that corrects the missing Herschel background intensity levels using the Planck model for foreground Galactic thermal dust emission. For the Herschel/PACS data, both the constant offset as well as the spatial dependence of the missing background must be addressed. For the Herschel/SPIRE data, the constant-offset correction has already been applied to the archival data, so we are primarily concerned with the spatial dependence, which is most important at 250 μm. Methods: We present a Fourier method that combines the publicly available Planck model on large angular scales with the Herschel images on smaller angular scales. Results: We have applied our method to two regions spanning a range of Galactic environments: Perseus and the Galactic plane region around l = 11 deg (HiGal-11). We post-processed the combined dust continuum emission images to generate column density and temperature maps. We compared these to previously adopted constant-offset corrections. We find significant differences (≳20%) over significant (≳15%) areas of the maps, at low column densities (NH ≲ 10^22 cm^-2) and relatively high temperatures (T ≳ 20 K). We have also applied our method to synthetic observations of a simulated molecular cloud to validate it. Conclusions: Our method successfully corrects the Herschel images, including both the constant-offset intensity level and the scale-dependent background variations measured by Planck. It improves on the previous constant-offset corrections, which did not account for variations in the background emission levels. The image FITS files used in this paper are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/604/A65
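
    A toy version of the Fourier-space combination: low spatial frequencies are taken from an absolutely calibrated Planck-like map and high frequencies from a Herschel-like map, blended with a Gaussian taper. This is not the authors' pipeline; real data would require matched units, beams and reprojection, all omitted here, and the crossover scale is an arbitrary assumption.

      import numpy as np

      def combine_maps(low_res_map, high_res_map, crossover_pix=20.0):
          """Blend two co-registered maps in Fourier space.

          low_res_map supplies the absolute large-scale (low-frequency) levels,
          high_res_map supplies the small-scale (high-frequency) structure.
          """
          ky = np.fft.fftfreq(low_res_map.shape[0])[:, None]
          kx = np.fft.fftfreq(low_res_map.shape[1])[None, :]
          k = np.hypot(kx, ky)
          low_pass = np.exp(-0.5 * (k * crossover_pix) ** 2)   # Gaussian taper

          combined_ft = (np.fft.fft2(low_res_map) * low_pass
                         + np.fft.fft2(high_res_map) * (1.0 - low_pass))
          return np.fft.ifft2(combined_ft).real

      rng = np.random.default_rng(6)
      planck_like = rng.normal(10.0, 0.5, (256, 256))    # carries the background offset
      herschel_like = planck_like - planck_like.mean() + rng.normal(0, 2.0, (256, 256))
      print(combine_maps(planck_like, herschel_like).mean())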

  12. Registering upper atmosphere parameters in East Siberia with Fabry-Perot Interferometer KEO Scientific "Arinae"

    NASA Astrophysics Data System (ADS)

    Vasilyev, Roman; Artamonov, Maksim; Beletsky, Aleksandr; Zherebtsov, Geliy; Medvedeva, Irina; Mikhalev, Aleksandr; Syrenova, Tatyana

    2017-09-01

    We describe the Fabry-Perot interferometer designed to study Earth's upper atmosphere. We propose a modification of the existing data processing method for determining the Doppler shift and Doppler broadening, and also for separating the observed line intensity from the background intensity. The temperature and wind velocity derived from these parameters are compared with physical characteristics obtained from the NRLMSISE-00 and HWM14 models. We demonstrate that the temperature can be determined from the oxygen 630 nm line irrespective of the hydroxyl signal present in the interference patterns. We show that the interferometer can obtain temperature from the oxygen 557.7 nm line given additional calibration of the device. The observed wind velocity mainly agrees with the model data. Night-time variations in the red and green oxygen lines coincide quite well with the intensity variations obtained by instruments installed near the interferometer.

  13. Cryptotomography: reconstructing 3D Fourier intensities from randomly oriented single-shot diffraction patterns (CXIDB ID 9)

    DOE Data Explorer

    Loh, Ne-Te Duane

    2011-08-01

    These 2000 single-shot diffraction patterns were either background scattering only or hits (background scattering plus diffraction signal from sub-micron ellipsoidal particles at random, undetermined orientations). Candidate hits were identified by eye, and the remainder were presumed to be background. The 54 usable, background-subtracted hits in this set (procedure in the referenced article) were used to reconstruct the 3D diffraction intensities of the average ellipsoidal particle.

  14. Stability of conditioned pain modulation in two musculoskeletal pain models: investigating the influence of shoulder pain intensity and gender

    PubMed Central

    2013-01-01

    Background Several chronic pain populations have demonstrated decreased conditioned pain modulation (CPM). However there is still a need to investigate the stability of CPM paradigms before the measure can be recommended for implementation. The purpose of the present study was to assess whether shoulder pain intensity and gender influence CPM stability within and between sessions. Methods This study examined two different musculoskeletal pain models, clinical shoulder pain and an experimental model of shoulder pain induced with eccentric exercise in healthy participants. Patients in the clinical cohort (N = 134) were tested before surgery and reassessed 3 months post-surgery. The healthy cohort (N = 190) was examined before inducing pain at the shoulder, and 48 and 96 hours later. Results Our results provide evidence that 1) stability of inhibition is not related to changes in pain intensity, and 2) there are sex differences for CPM stability within and between days. Conclusions Fluctuation of pain intensity did not significantly influence CPM stability. Overall, the more stable situations for CPM were females from the clinical cohort and males from the healthy cohort. PMID:23758907

  15. Verbal learning in the context of background music: no influence of vocals and instrumentals on verbal learning

    PubMed Central

    2014-01-01

    Background Whether listening to background music enhances verbal learning performance is still a matter of dispute. In this study we investigated the influence of vocal and instrumental background music on verbal learning. Methods 226 subjects were randomly assigned to one of five groups (one control group and four experimental groups), and all participants were exposed to a verbal learning task. The control group learned without background music, while the four experimental groups were exposed to vocal or instrumental musical pieces of different subjective intensity and valence during learning. Thus, we employed four music listening conditions (vocal music with high intensity: VOC_HIGH, vocal music with low intensity: VOC_LOW, instrumental music with high intensity: INST_HIGH, instrumental music with low intensity: INST_LOW) and one control condition (CONT) during which the subjects learned the word lists. Since the high- and low-intensity groups did not differ in rated intensity during the main experiment, these groups were pooled. Thus, we worked with three groups: one control group and two groups which were exposed to background music (vocal and instrumental) during verbal learning. As the dependent variable, the number of learned words was used. We measured immediate recall during five learning sessions (recall 1 - recall 5) and delayed recall 15 minutes (recall 6) and 14 days (recall 7) after the last learning session. Results Verbal learning improved during the first five recall sessions without any strong difference between the control and experimental groups. The delayed recalls were also similar for the three groups. There was only a trend for attenuated verbal learning in the group that passively listened to vocal music. This learning attenuation diminished during the following learning sessions. Conclusions The exposure to vocal or instrumental background music during encoding did not influence verbal learning. We suggest that the participants are easily able to cope with this background stimulation by ignoring this information channel in order to focus on the verbal learning task. PMID:24670048

  16. A Kp-based model of auroral boundaries

    NASA Astrophysics Data System (ADS)

    Carbary, James F.

    2005-10-01

    The auroral oval can serve as both a representation and a prediction of space weather on a global scale, so a competent model of the oval as a function of a geomagnetic index could conveniently appraise space weather itself. A simple model of the auroral boundaries is constructed by binning several months of images from the Polar Ultraviolet Imager by Kp index. The pixel intensities are first averaged into magnetic latitude-magnetic local time (MLAT-MLT) bins, and intensity profiles are then derived for each Kp level at 1 hour intervals of MLT. After background correction, the boundary latitudes of each profile are determined at a threshold of 4 photons cm^-2 s^-1. The peak locations and peak intensities are also found. The boundary and peak locations vary linearly with Kp index, and the coefficients of the linear fits are tabulated for each MLT. As a general rule of thumb, the UV intensity peak shifts 1° in magnetic latitude for each increment in Kp. The fits are surprisingly good for Kp < 6 but begin to deteriorate at high Kp because of auroral boundary irregularities and poor statistics. The statistical model allows calculation of the auroral boundaries at most MLTs as a function of Kp and can serve as an approximation to the shape and extent of the statistical oval.
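
    The linear dependence of boundary location on Kp can be captured with a simple least-squares fit per MLT bin, as sketched below; the sample latitudes are made-up numbers for illustration, not the UVI results.

      import numpy as np

      # Hypothetical boundary latitudes (degrees MLAT) at one MLT,
      # one value per Kp level 0..6 (illustrative numbers only).
      kp = np.arange(0, 7)
      boundary_mlat = np.array([68.0, 67.2, 66.1, 65.3, 64.0, 63.2, 62.1])

      slope, intercept = np.polyfit(kp, boundary_mlat, deg=1)
      print(f"boundary MLAT ~ {intercept:.1f} {slope:+.2f} * Kp")
      # Consistent with the rule of thumb of roughly a 1 degree shift in
      # magnetic latitude per Kp increment.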

  17. Envelope and phase distribution of a resonance transmission through a complex environment

    NASA Astrophysics Data System (ADS)

    Savin, Dmitry V.

    2018-06-01

    A transmission amplitude is considered for quantum or wave transport mediated by a single resonance coupled to the background of many chaotic states. Such a model provides a useful approach to quantify fluctuations in an established signal induced by a complex environment. Applying random matrix theory to the problem, we derive an exact result for the joint distribution of the transmission intensity (envelope) and the transmission phase at arbitrary coupling to the background with finite absorption. The intensity and phase are distributed within a certain region, revealing essential correlations even at strong absorption. In the latter limit, we obtain a simple asymptotic expression that provides a uniformly good approximation of the exact distribution within its whole support, thus going beyond the Rician distribution often used for such purposes. Exact results are also derived for the marginal distribution of the phase, including its limiting forms at weak and strong absorption.

  18. Characterizing the True Background Corona with SDO/AIA

    NASA Technical Reports Server (NTRS)

    Napier, Kate; Winebarger, Amy; Alexander, Caroline

    2014-01-01

    Characterizing the nature of the solar coronal background would enable scientists to more accurately determine plasma parameters, and may lead to a better understanding of the coronal heating problem. Because scientists study the 3D structure of the Sun in 2D, any line of sight includes both foreground and background material, and thus, the issue of background subtraction arises. By investigating the intensity values in and around an active region, using multiple wavelengths collected from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO) over an eight-hour period, this project aims to characterize the background as smooth or structured. Different methods were employed to measure the true coronal background and create minimum intensity images. These were then investigated for the presence of structure. The background images created were found to contain long-lived structures, including coronal loops, that were still present in all of the wavelengths: 193, 171, 131, and 211 Å. The intensity profiles across the active region indicate that the background is much more structured than previously thought.

  19. Photon-photon scattering at the high-intensity frontier

    NASA Astrophysics Data System (ADS)

    Gies, Holger; Karbstein, Felix; Kohlfürst, Christian; Seegert, Nico

    2018-04-01

    The tremendous progress in high-intensity laser technology and the establishment of dedicated high-field laboratories in recent years have paved the way towards a first observation of quantum vacuum nonlinearities at the high-intensity frontier. We advocate a particularly prospective scenario, where three synchronized high-intensity laser pulses are brought into collision, giving rise to signal photons, whose frequency and propagation direction differ from the driving laser pulses, thus providing various means to achieve an excellent signal to background separation. Based on the theoretical concept of vacuum emission, we employ an efficient numerical algorithm which allows us to model the collision of focused high-intensity laser pulses in unprecedented detail. We provide accurate predictions for the numbers of signal photons accessible in experiment. Our study is the first to predict the precise angular spread of the signal photons, and paves the way for a first verification of quantum vacuum nonlinearity in a well-controlled laboratory experiment at one of the many high-intensity laser facilities currently coming online.

  20. Intensivist physician staffing and the process of care in academic medical centres

    PubMed Central

    Kahn, Jeremy M; Brake, Helga; Steinberg, Kenneth P

    2007-01-01

    Background Although intensivist physician staffing is associated with improved outcomes in critical care, little is known about the mechanism leading to this observation. Objective To determine the relationship between intensivist staffing and select process-based quality indicators in the intensive care unit. Research design Retrospective cohort study in 29 academic hospitals participating in the University HealthSystem Consortium Mechanically Ventilated Patient Bundle Benchmarking Project. Patients 861 adult patients receiving prolonged mechanical ventilation in an intensive care unit. Results Patient-level information on physician staffing and process-of-care quality indicators was collected on day 4 of mechanical ventilation. By day 4, 668 patients received care under a high intensity staffing model (primary intensivist care or mandatory consult) and 193 patients received care under a low intensity staffing model (optional consultation or no intensivist). Among eligible patients, those receiving care under a high intensity staffing model were more likely to receive prophylaxis for deep vein thrombosis (risk ratio 1.08, 95% CI 1.00 to 1.17), stress ulcer prophylaxis (risk ratio 1.10, 95% CI 1.03 to 1.18), a spontaneous breathing trial (risk ratio 1.37, 95% CI 0.97 to 1.94), interruption of sedation (risk ratio 1.64, 95% CI 1.13 to 2.38) and intensive insulin treatment (risk ratio 1.40, 95% CI 1.18 to 1.79) on day 4 of mechanical ventilation. Models accounting for clustering by hospital produced similar estimates of the staffing effect, except for prophylaxis against thrombosis and stress ulcers. Conclusions High intensity physician staffing is associated with increased use of evidence-based quality indicators in patients receiving mechanical ventilation. PMID:17913772

  1. Alternative Approaches for Educating Future Global Marketing Professionals: A Comparison of Foreign Study and Research-Intensive Marketing Programs

    ERIC Educational Resources Information Center

    Kaufman, Peter A.; Melton, Horace L.; Varner, Iris I.; Hoelscher, Mark; Schmidt, Klaus; Spaulding, Aslihan D.

    2011-01-01

    Using an experiential learning model as a conceptual background, this article discusses characteristics and learning objectives for well-known foreign study programs such as study tours, study abroad, and internships and compares them with a less common overseas program called the "Global Marketing Program" (GMP). GMP involves…

  2. The Influence of Multicultural Educational Practices on Student Outcomes and Intergroup Relations

    ERIC Educational Resources Information Center

    Zirkel, Sabrina

    2008-01-01

    Background: How best to serve a racially and ethnically diverse student body has been a topic of intensive theory development for the past 30 or 40 years. We have strong theoretical models regarding the need for and practice of multicultural education, the goals of which include both increased educational achievement for students of color and…

  3. A facial expression image database and norm for Asian population: a preliminary report

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min

    2009-01-01

    We collected 6604 images of 30 models in eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, the 406 most representative images from 12 models were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression recognition research both in psychology and in computer science. All the models and raters are of Asian background. Hence, this database can also be used when cultural background is a concern. In addition, 43 landmarks were identified and recorded for each of the 291 rated frontal-view images. This information should facilitate feature-based research on facial expression. Overall, the diversity in images and richness in information should make our database and norm useful for a wide range of research.

  4. Geographic divergence and colour change in response to visual backgrounds and illumination intensity in bearded dragons.

    PubMed

    Cadena, Viviana; Smith, Kathleen R; Endler, John A; Stuart-Fox, Devi

    2017-03-15

    Animals may improve camouflage by both dynamic colour change and local evolutionary adaptation of colour, but we have little understanding of their relative importance in colour-changing species. We tested for differences in colour change in response to background colour and light intensity in two populations of central bearded dragon lizards (Pogona vitticeps) representing the extremes in body coloration and geographical range. We found that bearded dragons change colour in response to various backgrounds and that colour change is affected by illumination intensity. Within-individual colour change was similar in magnitude in the two populations but varied between backgrounds. However, at the endpoints of colour change, each population showed greater similarity to backgrounds that were representative of the local habitat compared with the other population, indicating local adaptation to visual backgrounds. Our results suggest that even in species that change colour, both phenotypic plasticity and geographic divergence of coloration may contribute to improved camouflage. © 2017. Published by The Company of Biologists Ltd.

  5. Cueing properties of the decrease of white noise intensity for avoidance conditioning in cats.

    PubMed

    Zieliński, K

    1979-01-01

    In the main experiment two groups of 6 cats each were trained in active bar-pressing avoidance to a CS consisting of either a 10 dB or a 20 dB decrease of the background white noise of 70 dB intensity. The two groups did not differ in rapidity of learning; however, cats trained to the greater change in background noise performed avoidance responses with shorter latencies than did cats trained to the smaller change. Within-group comparisons of cumulative distributions of response latencies for consecutive Vincentized fifths of avoidance acquisition showed the greatest changes in the region of latencies longer than the median latency of instrumental responses. On the other hand, the effects of CS intensity found in between-group comparisons were located in the region of latencies shorter than the median latency of either group. Comparisons with data obtained in a complementary experiment employing an additional 17 cats showed that subjects trained to stimuli less intense than the background noise level were marked by an exceptionally low level of avoidance responding with latencies shorter than 1.1 s, which was lower than expected from the probability of intertrial responses for this period of time. Due to this property of stimuli less intense than the background, the distributions of response latencies were shifted to the right; in effect, prefrontal lesions influenced a greater part of the latency distributions than in cats trained to stimuli more intense than the background.

  6. Optimal Background Estimators in Single-Molecule FRET Microscopy.

    PubMed

    Preus, Søren; Hildebrandt, Lasse L; Birkedal, Victoria

    2016-09-20

    Single-molecule total internal reflection fluorescence (TIRF) microscopy constitutes an umbrella of powerful tools that facilitate direct observation of the biophysical properties, population heterogeneities, and interactions of single biomolecules without the need for ensemble synchronization. Due to the low signal/noise ratio in single-molecule TIRF microscopy experiments, it is important to determine the local background intensity, especially when the fluorescence intensity of the molecule is used quantitatively. Here we compare and evaluate the performance of different aperture-based background estimators used particularly in single-molecule Förster resonance energy transfer. We introduce the general concept of multiaperture signatures and use this technique to demonstrate how the choice of background can affect the measured fluorescence signal considerably. A new (to our knowledge) and simple background estimator is proposed, called the local statistical percentile (LSP). We show that the LSP background estimator performs as well as current background estimators at low molecular densities and significantly better in regions of high molecular density. The LSP background estimator is thus suited for single-particle TIRF microscopy of dense biological samples in which the intensity itself is an observable of the technique. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
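
    The general idea of a percentile-based local background estimate can be sketched as follows. This is a generic illustration, not the authors' LSP implementation; the annulus geometry, percentile, and synthetic frame are assumptions.

      import numpy as np

      def local_percentile_background(image, center, inner=5, outer=10, percentile=30):
          """Estimate local background from a low percentile of the pixels in an
          annulus around `center` (row, col); a crude stand-in for aperture-based
          estimators."""
          rows, cols = np.indices(image.shape)
          r = np.hypot(rows - center[0], cols - center[1])
          annulus = image[(r >= inner) & (r < outer)]
          return np.percentile(annulus, percentile)

      rng = np.random.default_rng(7)
      frame = rng.poisson(20.0, size=(64, 64)).astype(float)     # camera background
      frame[30:34, 30:34] += 200.0                                # a bright molecule
      print(local_percentile_background(frame, center=(32, 32)))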

  7. On the kinetics of anaerobic power

    PubMed Central

    2012-01-01

    Background This study investigated two different mathematical models for the kinetics of anaerobic power. Model 1 assumes that the work power is linear with the work rate, while Model 2 assumes a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power. In order to test these models, a cross-country skier ran with poles on a treadmill at different exercise intensities. The aerobic power, based on the measured oxygen uptake, was used as input to the models, whereas the simulated blood lactate concentration was compared with experimental results. Thereafter, the metabolic rate from phosphocreatine breakdown was calculated theoretically. Finally, the models were used to compare phosphocreatine breakdown during continuous and interval exercises. Results Good similarity was found between experimental and simulated blood lactate concentration during steady-state exercise intensities. The measured blood lactate concentrations were lower than simulated for intensities above the lactate threshold, but higher than simulated during recovery after high-intensity exercise when the simulated lactate concentration was averaged over the whole lactate space. This fit was improved when the simulated lactate concentration was separated into two compartments: muscles + internal organs, and blood. Model 2 gave a better behavior of alactic energy than Model 1 when compared against invasive measurements presented in the literature. During continuous exercise, Model 2 showed that the alactic energy storage decreased with time, whereas Model 1 showed a minimum value when steady-state aerobic conditions were achieved. During interval exercise the two models showed similar patterns of alactic energy. Conclusions The current study provides useful insight on the kinetics of anaerobic power. Overall, our data indicate that blood lactate levels can be accurately modeled during steady state, and suggest a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power. PMID:22830586

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imara, Nia; Loeb, Abraham, E-mail: nimara@cfa.harvard.edu

    Infrared emission from intergalactic dust might compromise the ability of future experiments to detect subtle spectral distortions in the Cosmic Microwave Background (CMB) from the early universe. We provide the first estimate of foreground contamination of the CMB signal due to diffuse dust emission in the intergalactic medium. We use models of the extragalactic background light to calculate the intensity of intergalactic dust emission and find that emission by intergalactic dust at z ≲ 0.5 exceeds the sensitivity of the planned Primordial Inflation Explorer to CMB spectral distortions by 1-3 orders of magnitude. In the frequency range ν = 150-2400 GHz, we place an upper limit of 0.06% on the contribution to the far-infrared background from intergalactic dust emission.

  9. The soft X-ray diffuse background observed with the HEAO 1 low-energy detectors

    NASA Technical Reports Server (NTRS)

    Garmire, G. P.; Nousek, J. A.; Apparao, K. M. V.; Burrows, D. N.; Fink, R. L.; Kraft, R. P.

    1992-01-01

    Results of a study of the diffuse soft X-ray background as observed by the low-energy detectors of the A-2 experiment aboard the HEAO 1 satellite are reported. The observed sky intensities are presented as maps of the diffuse X-ray background sky in several energy bands covering the energy range 0.15-2.8 keV. It is found that the soft X-ray diffuse background (SXDB) between 1.5 and 2.8 keV, assuming a power-law form with photon number index 1.4, has a normalization constant of 10.5 +/- 1.0 photons cm^-2 s^-1 sr^-1 keV^-1. Below 1.5 keV the spectrum of the SXDB exceeds the extrapolation of this power law. The low-energy excess for the NEP can be fitted with emission from a two-temperature equilibrium plasma model with the temperatures given by log T1 = 6.16 and log T2 = 6.33. It is found that this model is able to account for the spectrum below 1 keV, but fails to yield the observed Galactic latitude variation.

  10. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2014-10-01

    The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such a limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function reproduces the reference prior extremely well for any background prior. Thus, it can be useful in applications requiring a very large number of evaluations of the reference prior.
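
    To make the limiting-form idea concrete: for a Poisson observation n with a known expected background b, the Jeffreys-type prior π(s) ∝ 1/√(s + b) (assumed here to correspond to the limiting form mentioned in the abstract) yields a posterior that can be normalised numerically. The sketch below is our own illustration, not the paper's code, and the grid bounds and example counts are arbitrary.

      import numpy as np
      from scipy.stats import poisson

      def limiting_posterior(n_obs, background, s_grid):
          """Posterior for the signal intensity s under the prior 1/sqrt(s + b)."""
          prior = 1.0 / np.sqrt(s_grid + background)
          likelihood = poisson.pmf(n_obs, s_grid + background)
          post = prior * likelihood
          ds = s_grid[1] - s_grid[0]
          return post / (post.sum() * ds)          # normalise on the grid

      s = np.linspace(0.0, 30.0, 3001)
      post = limiting_posterior(n_obs=8, background=3.0, s_grid=s)
      print("posterior mean signal:", (s * post).sum() * (s[1] - s[0]))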

  11. Continuous diffraction of molecules and disordered molecular crystals

    PubMed Central

    Yefanov, Oleksandr M.; Ayyer, Kartik; White, Thomas A.; Barty, Anton; Morgan, Andrew; Mariani, Valerio; Oberthuer, Dominik; Pande, Kanupriya

    2017-01-01

    The intensities of far-field diffraction patterns of orientationally aligned molecules obey Wilson statistics, whether those molecules are in isolation (giving rise to a continuous diffraction pattern) or arranged in a crystal (giving rise to Bragg peaks). Ensembles of molecules in several orientations, but uncorrelated in position, give rise to the incoherent sum of the diffraction from those objects, modifying the statistics in a similar way as crystal twinning modifies the distribution of Bragg intensities. This situation arises in the continuous diffraction of laser-aligned molecules or translationally disordered molecular crystals. This paper develops the analysis of the intensity statistics of such continuous diffraction to obtain parameters such as scaling, beam coherence and the number of contributing independent object orientations. When measured, continuous molecular diffraction is generally weak and accompanied by a background that far exceeds the strength of the signal. Instead of just relying upon the smallest measured intensities or their mean value to guide the subtraction of the background, it is shown how all measured values can be utilized to estimate the background, noise and signal, by employing a modified 'noisy Wilson' distribution that explicitly includes the background. Parameters relating to the background and signal quantities can be estimated from the moments of the measured intensities. The analysis method is demonstrated on previously published continuous diffraction data measured from crystals of photosystem II [Ayyer et al. (2016), Nature, 530, 202-206]. PMID:28808434

  12. Comparison of Urine Output among Patients Treated with More Intensive Versus Less Intensive RRT: Results from the Acute Renal Failure Trial Network Study

    PubMed Central

    Asafu-Adjei, Josephine; Betensky, Rebecca A.; Palevsky, Paul M.; Waikar, Sushrut S.

    2016-01-01

    Background and objectives Intensive RRT may have adverse effects that account for the absence of benefit observed in randomized trials of more intensive versus less intensive RRT. We wished to determine the association of more intensive RRT with changes in urine output as a marker of worsening residual renal function in critically ill patients with severe AKI. Design, setting, participants, & measurements The Acute Renal Failure Trial Network Study (n=1124) was a multicenter trial that randomized critically ill patients requiring initiation of RRT to more intensive (hemodialysis or sustained low–efficiency dialysis six times per week or continuous venovenous hemodiafiltration at 35 ml/kg per hour) versus less intensive (hemodialysis or sustained low–efficiency dialysis three times per week or continuous venovenous hemodiafiltration at 20 ml/kg per hour) RRT. Mixed linear regression models were fit to estimate the association of RRT intensity with change in daily urine output in survivors through day 7 (n=871); Cox regression models were fit to determine the association of RRT intensity with time to ≥50% decline in urine output in all patients through day 28. Results Mean age of participants was 60±15 years old, 72% were men, and 30% were diabetic. In unadjusted models, among patients who survived ≥7 days, mean urine output was, on average, 31.7 ml/d higher (95% confidence interval, 8.2 to 55.2 ml/d) for the less intensive group compared with the more intensive group (P=0.01). More intensive RRT was associated with 29% greater unadjusted risk of decline in urine output of ≥50% (hazard ratio, 1.29; 95% confidence interval, 1.10 to 1.51). Conclusions More intensive versus less intensive RRT is associated with a greater reduction in urine output during the first 7 days of therapy and a greater risk of developing a decline in urine output of ≥50% in critically ill patients with severe AKI. PMID:27449661

  13. Determination of Atmospheric Aerosol Characteristics from the Polarization of Scattered Radiation

    NASA Technical Reports Server (NTRS)

    Harris, F. S., Jr.; McCormick, M. P.

    1973-01-01

    Aerosols affect the polarization of radiation in scattering; hence measured polarization can be used to infer the nature of the particles. Size distribution, particle shape, and the real and absorption parts of the complex refractive index affect the scattering. From Lorenz-Mie calculations of the four Stokes parameters as a function of scattering angle for various wavelengths, the following polarization parameters were plotted: total intensity, intensity of polarization in the plane of observation, intensity perpendicular to the plane of observation, polarization ratio, polarization (using all four Stokes parameters), plane of the polarization ellipse and its ellipticity. A six-component log-Gaussian size distribution model was used to study the effects on the nature of the polarization of variations in the size distribution and complex refractive index. Though a rigorous inversion from measurements of scattering to a detailed specification of aerosol characteristics is not possible, considerable information about the nature of the aerosols can be obtained. Only single scattering from aerosols was used in this paper. Also, the background due to Rayleigh gas scattering, the reduction of effects as a result of multiple scattering, and polarization effects of possible ground background (airborne platforms) were not included.
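
    The polarization quantities listed above follow directly from the four Stokes parameters. The short sketch below uses the standard textbook definitions (degree of polarization, orientation of the polarization ellipse, and ellipticity angle) rather than anything specific to this paper; the example numbers are arbitrary.

      import numpy as np

      def polarization_parameters(I, Q, U, V):
          """Standard quantities derived from the Stokes parameters (I, Q, U, V)."""
          dop = np.sqrt(Q**2 + U**2 + V**2) / I        # total degree of polarization
          dolp = np.sqrt(Q**2 + U**2) / I              # degree of linear polarization
          psi = 0.5 * np.arctan2(U, Q)                 # orientation of polarization ellipse
          chi = 0.5 * np.arcsin(V / np.sqrt(Q**2 + U**2 + V**2))  # ellipticity angle
          return dop, dolp, psi, chi

      print(polarization_parameters(I=1.0, Q=0.3, U=0.1, V=0.05))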

  14. Physical Education Resources, Class Management, and Student Physical Activity Levels: A Structure-Process-Outcome Approach to Evaluating Physical Education Effectiveness

    ERIC Educational Resources Information Center

    Bevans, Katherine B.; Fitzpatrick, Leslie-Anne; Sanchez, Betty M.; Riley, Anne W.; Forrest, Christopher

    2010-01-01

    Background: This study was conducted to empirically evaluate specific human, curricular, and material resources that maximize student opportunities for physical activity during physical education (PE) class time. A structure-process-outcome model was proposed to identify the resources that influence the frequency of PE and intensity of physical…

  15. HerMES: Redshift Evolution of the Cosmic Infrared Background from Herschel/SPIRE

    NASA Astrophysics Data System (ADS)

    Vieira, Joaquin; HerMES

    2013-01-01

    We report on the redshift evolution of the cosmic infrared background (CIB) at wavelengths of 70-1100 microns. Using data from the Herschel Multi-tiered Extragalactic Survey (HerMES) of the GOODS-N field, we statistically correlate fluctuations in the CIB with external catalogs. We use a deep Spitzer-MIPS 24 micron flux-limited catalog complete with redshifts and stack on MIPS 70 and 160 micron, Herschel-SPIRE 250, 350, and 500 micron, and JCMT-AzTEC 1100 micron maps. We measure the co-moving infrared luminosity density at 0.14 and provide important constraints for models of galaxy formation and evolution.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemcov, M.; Cooray, A.; Bock, J.

    We have observed four massive galaxy clusters with the SPIRE instrument on the Herschel Space Observatory and measure a deficit of surface brightness within their central region after removing detected sources. We simulate the effects of instrumental sensitivity and resolution, the source population, and the lensing effect of the clusters to estimate the shape and amplitude of the deficit. The amplitude of the central deficit is a strong function of the surface density and flux distribution of the background sources. We find that for the current best-fitting faint-end number counts, and excellent lensing models, the most likely amplitude of the central deficit is the full intensity of the cosmic infrared background (CIB). Our measurement leads to a lower limit to the integrated total intensity of the CIB of I_250μm > 0.69 +0.03/-0.03 (stat.) +0.11/-0.06 (sys.) MJy sr^-1, with more CIB possible from both low-redshift sources and from sources within the target clusters. It should be possible to observe this effect in existing high angular resolution data at other wavelengths where the CIB is bright, which would allow tests of models of the faint source component of the CIB.

  17. NIR small arms muzzle flash

    NASA Astrophysics Data System (ADS)

    Montoya, Joseph; Kennerly, Stephen; Rede, Edward

    2010-04-01

    Utilization of Near-Infrared (NIR) spectral features in a muzzle flash will allow for small arms detection using low cost silicon (Si)-based imagers. Detection of a small arms muzzle flash in a particular wavelength region is dependent on the intensity of that emission, the efficiency of source emission transmission through the atmosphere, and the relative intensity of the background scene. The NIR muzzle flash signature exists in the relatively large Si spectral response wavelength region of 300 nm-1100 nm, which allows for use of commercial-off-the-shelf (COTS) Si-based detectors. The alkali metal origin of the NIR spectral features in the 7.62 × 39-mm round muzzle flash is discussed, and the basis for the spectral bandwidth is examined, using a calculated Voigt profile. This report will introduce a model of the 7.62 × 39-mm NIR muzzle flash signature based on predicted source characteristics. Atmospheric limitations based on NIR spectral regions are investigated in relation to the NIR muzzle flash signature. A simple signal-to-clutter ratio (SCR) metric is used to predict sensor performance based on a model of radiance for the source and solar background and pixel registered image subtraction.
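    As a rough sketch of a signal-to-clutter ratio metric of the kind mentioned above (the report's exact definition is not reproduced here; one common form normalises the target excess by the clutter standard deviation), assuming hypothetical pixel values:

        import numpy as np

        def signal_to_clutter_ratio(target_pixels, background_pixels):
            """Mean target excess over the background, divided by the clutter standard deviation."""
            signal = np.mean(target_pixels) - np.mean(background_pixels)
            return signal / np.std(background_pixels)

        rng = np.random.default_rng(0)
        background = rng.normal(100.0, 5.0, size=10_000)   # solar-background clutter (arbitrary units)
        flash = background[:25] + 40.0                      # pixels containing the muzzle flash
        print(f"SCR = {signal_to_clutter_ratio(flash, background):.1f}")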

  18. A 3D model of polarized dust emission in the Milky Way

    NASA Astrophysics Data System (ADS)

    Martínez-Solaeche, Ginés; Karakci, Ata; Delabrouille, Jacques

    2018-05-01

    We present a three-dimensional model of polarized galactic dust emission that takes into account the variation of the dust density, spectral index and temperature along the line of sight, and contains randomly generated small-scale polarization fluctuations. The model is constrained to match observed dust emission on large scales, and match on smaller scales extrapolations of observed intensity and polarization power spectra. This model can be used to investigate the impact of plausible complexity of the polarized dust foreground emission on the analysis and interpretation of future cosmic microwave background polarization observations.

  19. Evaluating the relative effectiveness of high-intensity and low-intensity models of behaviour change communication interventions for abortion care-seeking in Bihar and Jharkhand, India: a cross-sectional study

    PubMed Central

    Banerjee, Sushanta K; Andersen, Kathryn; Pearson, Erin; Warvadekar, Janardan; Khan, Danish U; Batra, Sangeeta

    2017-01-01

    Background This study aimed to compare the effectiveness of a high-intensity model (HIM) and a low-intensity model (LIM) of behaviour change communication interventions in Bihar and Jharkhand states of India designed to improve women's knowledge and usage of safe abortion services, as well as the dose effect of intervention exposure. Methods We conducted two cross-sectional household surveys among married women aged 15–49 years in intervention and comparison districts. Difference-in-difference models were used to assess the efficacy of the intervention, adjusting for sociodemographic characteristics. Results Although both intervention types improved abortion knowledge, the HIM intervention was more effective in improving comprehensive knowledge about abortion. In particular, there were improvements in knowledge on legality of abortion (AOR=2.2; 95% CI 1.6 to 2.9) and nearby sources of safe abortion care (AOR=1.7; 95% CI 1.2 to 1.3). Conclusions Higher level of exposure to abortion-related messages was related to more accurate knowledge about abortion within both intervention groups. Evidence was mixed on changes in abortion care-seeking behaviour. More work is needed to ensure that women seek safe abortion services in lieu of informal services that may be more likely to lead to postabortion complications. PMID:28237953

  20. Fast and Loud Background Music Disrupts Reading Comprehension

    ERIC Educational Resources Information Center

    Thompson, William Forde; Schellenberg, E. Glenn; Letnic, Adriana Katharine

    2012-01-01

    We examined the effect of background music on reading comprehension. Because the emotional consequences of music listening are affected by changes in tempo and intensity, we manipulated these variables to create four repeated-measures conditions: slow/low, slow/high, fast/low, fast/high. Tempo and intensity manipulations were selected to be…

  1. FPGA implementation for real-time background subtraction based on Horprasert model.

    PubMed

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining the objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded FPGA architecture that is able to extract the background in resource-limited environments, with low degradation (caused by the hardware-friendly modification of the model). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan3 Xilinx FPGAs and compared it with other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. With less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W.
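    For reference, a minimal software sketch of the Horprasert-style pixel classification that underlies such architectures (brightness and chromaticity distortion against a per-pixel statistical background model); the thresholds and array layout here are illustrative assumptions, not those of the FPGA implementation:

        import numpy as np

        def horprasert_classify(frame, mean, std, tau_cd=10.0, alpha_lo=0.6, alpha_hi=1.2):
            """Label pixels as background, shadow, highlight or foreground.

            frame, mean, std: float arrays of shape (H, W, 3) holding the current RGB
            frame and the per-pixel background mean and standard deviation."""
            std = np.maximum(std, 1e-6)
            # Brightness distortion: the scalar that best scales the expected colour onto the observation
            alpha = np.sum(frame * mean / std**2, axis=2) / np.sum((mean / std)**2, axis=2)
            # Chromaticity distortion: residual colour distance after brightness compensation
            cd = np.sqrt(np.sum(((frame - alpha[..., None] * mean) / std)**2, axis=2))

            labels = np.full(frame.shape[:2], 'foreground', dtype=object)
            chroma_ok = cd < tau_cd
            labels[chroma_ok & (alpha >= alpha_lo) & (alpha <= alpha_hi)] = 'background'
            labels[chroma_ok & (alpha < alpha_lo)] = 'shadow'
            labels[chroma_ok & (alpha > alpha_hi)] = 'highlight'
            return labels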

  2. FPGA Implementation for Real-Time Background Subtraction Based on Horprasert Model

    PubMed Central

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J.; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining the objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded FPGA architecture that is able to extract the background in resource-limited environments, with low degradation (caused by the hardware-friendly modification of the model). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan3 Xilinx FPGAs and compared it with other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. With less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W. PMID:22368487

  3. Slow light in saturable absorbers: Progress in the resolution of a controversy

    NASA Astrophysics Data System (ADS)

    Macke, Bruno; Razdobreev, Igor; Ségard, Bernard

    2017-06-01

    There are two opposing models in the analysis of the slow transmission of light pulses through saturable absorbers. The canonical incoherent bleaching model simply explains the slow transmission by the combined effects of saturation and the noninstantaneous response of the medium, which result in the absorption of the front part of the incident pulse being larger than that of its rear. The second model, referred to as the coherent-population-oscillations (CPO) model, considers light beams whose intensity is slightly pulse modulated and attributes the time delay of the transmitted pulse to a reduction of the group velocity. We point out some inconsistencies in the CPO model and show that the two models in reality rest on the same hypotheses, the equations derived in the duly rectified CPO model being local expressions of the integral equations obtained in the incoherent bleaching model. When intense pulses without background are used, the CPO model, based on linearized equations, breaks down. The incoherent bleaching model then predicts that the transmitted light should vanish when the intensity of the incident light is strictly zero. This point is confirmed by the experiments that we have performed on ruby with square-wave incident pulses, and we show that the whole shape of the observed pulses agrees with that derived analytically by means of the incoherent bleaching model. We also determine in this model the corresponding evolution of the fluorescence light, which seems to have been evidenced in other experiments.

  4. Determining X-ray source intensity and confidence bounds in crowded fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu

    We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
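    A toy numerical version of this kind of Poisson-likelihood inference (a sketch, not the Chandra Source Catalog implementation): with flat priors and a single non-overlapping background aperture, the posterior for the source intensity follows by marginalising the background over a grid. The aperture counts and scaling ratio below are invented.

        import numpy as np

        def source_posterior(n_src, n_bkg, r, s_grid, b_grid):
            """Posterior density over expected source counts s, marginalised over background b.

            n_src: counts in the source aperture (source plus scaled background)
            n_bkg: counts in the background aperture
            r:     expected background in the source aperture per unit background
                   in the background aperture (area/exposure scaling)"""
            s = s_grid[:, None]
            b = b_grid[None, :]
            loglike = n_src * np.log(s + r * b) - (s + r * b) + n_bkg * np.log(b) - b
            like = np.exp(loglike - loglike.max())
            post = like.sum(axis=1)                  # marginalise over the background
            return post / np.trapz(post, s_grid)

        s_grid = np.linspace(0.01, 40.0, 800)
        b_grid = np.linspace(0.01, 60.0, 800)
        post = source_posterior(n_src=18, n_bkg=30, r=0.2, s_grid=s_grid, b_grid=b_grid)
        print("posterior mean source counts:", np.trapz(s_grid * post, s_grid))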

  5. The amount of decrease of the background white noise intensity as a cue for differentiation training.

    PubMed

    Zieliński, K

    1981-01-01

    The course of differentiation learning, using the conditioned emotional response (CER) method, was investigated in two groups of 16 rats. The discriminative stimuli consisted of decreases in the 80 dB background white noise to either 70 dB or 60 dB. Differentiation learning was more efficient with the larger decrease of background noise intensity as the CS(+) and the smaller decrease as the CS(-) than vice versa.

  6. Stimulus Intensity and the Perception of Duration

    ERIC Educational Resources Information Center

    Matthews, William J.; Stewart, Neil; Wearden, John H.

    2011-01-01

    This article explores the widely reported finding that the subjective duration of a stimulus is positively related to its magnitude. In Experiments 1 and 2 we show that, for both auditory and visual stimuli, the effect of stimulus magnitude on the perception of duration depends upon the background: Against a high intensity background, weak stimuli…

  7. Autonomous rock detection on mars through region contrast

    NASA Astrophysics Data System (ADS)

    Xiao, Xueming; Cui, Hutao; Yao, Meibao; Tian, Yang

    2017-08-01

    In this paper, we present a new autonomous rock detection approach based on region contrast. Unlike current state-of-the-art pixel-level rock segmentation methods, the new method addresses the problem at the region level, which significantly reduces the computational cost. The image is first split into homogeneous regions based on intensity information and spatial layout. Considering the memory constraints of the onboard flight processor, only low-level features, the average intensity and variation of each superpixel, are measured. The region contrast is derived as the integration of an intensity contrast and a smoothness measurement. Rocks are then segmented from the resulting contrast map by an adaptive threshold. Since a purely intensity-based method may cause false detections in background areas whose illumination differs from the surroundings, a more reliable method is further proposed by introducing a spatial factor and a background similarity into the region contrast. The spatial factor expresses the locality of the contrast, while the background similarity calculates the probability of each subregion belonging to the background. Our method is efficient in dealing with large images and only a few parameters are needed. Preliminary experimental results show that our algorithm outperforms edge-based methods on various grayscale rover images.

  8. Pain sensitivity mediates the relationship between stress and headache intensity in chronic tension-type headache

    PubMed Central

    Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John

    2012-01-01

    BACKGROUND: A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. OBJECTIVE: To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. METHOD: Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. RESULTS: Pain sensitivity significantly mediated the relationship between stress and headache intensity. CONCLUSIONS: The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed. PMID:23248808

  9. Investigation of shock-induced chemical decomposition of sensitized nitromethane through time-resolved Raman spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pangilinan, G.I.; Constantinou, C.P.; Gruzdkov, Y.A.

    1996-07-01

    Molecular processes associated with shock induced chemical decomposition of a mixture of nitromethane with ethylenediamine (0.1 wt%) are examined using time-resolved Raman scattering. When shocked by stepwise loading to 14.2 GPa pressure, changes in the nitromethane vibrational modes and the spectral background characterize the onset of reaction. The CN stretch mode softens and disappears even as the NO₂ and CH₃ stretch modes, though modified, retain their identities. The shape and intensity of the spectral background also shows changes characteristic of reaction. Changes in the background, which are observed even at lower peak pressures of 11.4 GPa, are assigned to luminescence from reaction intermediates. The implications of these results to various molecular models of sensitization are discussed.

  10. Investigation of detection limits for solutes in water measured by laser raman spectrometry

    USGS Publications Warehouse

    Goldberg, M.C.

    1977-01-01

    The influence of experimental parameters on detection sensitivity was determined for laser Raman analysis of dissolved solutes in water. Individual solutions of nitrate, sulfate, carbonate, bicarbonate, monohydrogen phosphate, dihydrogen phosphate, acetate ion, and acetic acid were measured. An equation is derived which expresses the signal-to-noise ratio in terms of solute concentration, measurement time, spectral slit width, laser power fluctuations, and solvent background intensity. Laser beam intensity fluctuations at the sample and solvent background intensity are the most important limiting factors.
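    The derived equation itself is not reproduced in this record; purely as a generic illustration of how such terms combine, a shot-noise-limited form in which the Raman signal grows with concentration and integration time while the noise includes solvent-background counts and laser-power fluctuations might look like the following (all symbols and values are hypothetical, not the paper's equation):

        import math

        def snr(conc, t, k_signal, b_background, sigma_laser_frac):
            """Generic shot-noise SNR sketch, not the equation derived in the paper.

            conc: solute concentration; t: measurement time (s);
            k_signal: signal counts per second per unit concentration;
            b_background: solvent background counts per second;
            sigma_laser_frac: fractional laser-power fluctuation."""
            signal = k_signal * conc * t
            noise = math.sqrt(signal + b_background * t + (sigma_laser_frac * b_background * t) ** 2)
            return signal / noise

        print(snr(conc=0.05, t=60.0, k_signal=2.0e3, b_background=5.0e3, sigma_laser_frac=0.01))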

  11. A Clinical Model in Action in Intensive Residential Treatment: Meeting the Needs of Adolescent Boys Who Have Experienced Domestic Violence

    ERIC Educational Resources Information Center

    Stewart, Janet; Todd, Nick; Kopeck, Cameron

    2010-01-01

    The Habitat Program at Wood's Homes, Calgary, Alberta, is an eight bed residential treatment program for conduct-disordered youth who have been exposed or subjected to high levels of violence in their family home. The program was based on the assumption that working effectively with such youth requires consideration of the background experiences…

  12. A VVWBO-BVO-based GM (1,1) and its parameter optimization by GRA-IGSA integration algorithm for annual power load forecasting

    PubMed Central

    Wang, Hongguang

    2018-01-01

    Annual power load forecasting is not only the premise of formulating reasonable macro power planning, but also an important guarantee for the safe and economic operation of a power system. In view of the characteristics of annual power load forecasting, the grey model GM (1,1) is widely applied. Introducing a buffer operator into GM (1,1) to pre-process the historical annual power load data is one approach to improve the forecasting accuracy. To solve the problem of the non-adjustable action intensity of the traditional weakening buffer operator, a variable-weight weakening buffer operator (VWWBO) and background value optimization (BVO) are used to dynamically pre-process the historical annual power load data, and a VWWBO-BVO-based GM (1,1) is proposed. To find the optimal values of the variable-weight buffer coefficient and the background value weight generating coefficient of the proposed model, grey relational analysis (GRA) and an improved gravitational search algorithm (IGSA) are integrated and a GRA-IGSA integration algorithm is constructed, aiming to maximize the grey relational degree between the simulated value sequence and the actual value sequence. Owing to the adjustable action intensity of the buffer operator, the proposed model optimized by the GRA-IGSA integration algorithm can obtain better forecasting accuracy, as demonstrated by the case studies, and can provide an optimized solution for annual power load forecasting. PMID:29768450
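    For context, a plain GM (1,1) without the VWWBO pre-processing or background value optimisation described above, as a baseline sketch of the underlying grey model (the load series is made up):

        import numpy as np

        def gm11_forecast(x0, n_ahead=3):
            """Basic GM (1,1): fit the raw series x0 and forecast n_ahead further steps.

            Uses the standard background value z(k) = 0.5*(x1(k) + x1(k-1)); the
            buffer-operator and background-value refinements of the paper are omitted."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                                  # accumulated (AGO) series
            z = 0.5 * (x1[1:] + x1[:-1])                        # background values
            B = np.column_stack([-z, np.ones_like(z)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # developing coefficient and grey input
            k = np.arange(len(x0) + n_ahead)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response in the AGO domain
            return np.diff(x1_hat, prepend=0.0)                 # inverse AGO gives the fitted/forecast series

        load = [1820, 1903, 1998, 2101, 2190, 2304]             # hypothetical annual loads (GWh)
        print(np.round(gm11_forecast(load, n_ahead=2), 1))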

  13. Interplanetary Lyman α background and the heliospheric interface

    NASA Astrophysics Data System (ADS)

    Quémerais, Eric; Sander, Bill R.; Clarke, John T.

    2006-09-01

    We present some recent measurements of the interplanetary Lyman α background which show a clear signature of the heliospheric interface. The Voyager 1 Ultraviolet Spectrometer has measured the variation of the upwind intensity from 1993 to 2006. The derived radial variation of the intensity is clearly slower than what is expected from a hot model computation. This shows that the hydrogen number density increases ahead of the spacecraft, toward the upwind direction. The data also show an abrupt change of slope in 1998 when the Voyager 1 spacecraft was at 65 AU from the sun. This may be linked to temporal variations induced at the heliospheric interface by the variations of solar activity. Interplanetary Lyman α line profiles measured at one AU from the sun also show a clear signature of the heliospheric interface. The SWAN instrument on-board the SOHO spacecraft has studied the line profiles between 1996 and 2002. It was found that the variations seen in line of sight velocities from solar minimum to solar maximum have a larger amplitude than what is derived from hot model computations. The observed features can be better understood when considering that some of the hydrogen atoms crossing the interface region are slowed down and heated. These results are in good agreement with the present models of the interface. Independent spectral observations made by the Hubble Space Telescope in 1995-2001 confirm the SWAN/SOHO measurements.

  14. Effect of Increased Intensity of Physiotherapy on Patient Outcomes After Stroke: An Economic Literature Review and Cost-Effectiveness Analysis

    PubMed Central

    Chan, B

    2015-01-01

    Background Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-term Care. Data Sources The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime timeframe with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). Conclusions Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
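    Purely as an illustrative sketch (all inputs invented, not the report's values), a 2-state alive/dead Markov cohort model with annual cycles and 5% discounting of both costs and QALYs, comparing usual with higher-intensity physiotherapy, can be written as:

        def markov_ce(p_death, utility, annual_cost, upfront_cost, horizon=40, disc=0.05):
            """Lifetime discounted cost and QALYs for a 2-state (alive/dead) cohort."""
            alive, cost, qaly = 1.0, float(upfront_cost), 0.0
            for year in range(1, horizon + 1):
                alive *= 1.0 - p_death
                d = 1.0 / (1.0 + disc) ** year
                qaly += alive * utility * d
                cost += alive * annual_cost * d
            return cost, qaly

        # Invented inputs: higher intensity costs more upfront but yields a small
        # utility gain and slightly lower ongoing care costs.
        usual = markov_ce(p_death=0.06, utility=0.60, annual_cost=9000, upfront_cost=4000)
        higher = markov_ce(p_death=0.06, utility=0.64, annual_cost=8500, upfront_cost=5500)
        d_cost, d_qaly = higher[0] - usual[0], higher[1] - usual[1]
        # A negative incremental cost with positive incremental QALYs means the
        # higher-intensity strategy dominates (cheaper and more effective).
        print(f"incremental cost: {d_cost:,.0f}, incremental QALYs: {d_qaly:.2f}")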

  15. Investigation of an Optimum Detection Scheme for a Star-Field Mapping System

    NASA Technical Reports Server (NTRS)

    Aldridge, M. D.; Credeur, L.

    1970-01-01

    An investigation was made to determine the optimum detection scheme for a star-field mapping system that uses coded detection resulting from starlight shining through specially arranged multiple slits of a reticle. The computer solution of equations derived from a theoretical model showed that the greatest probability of detection for a given star and background intensity occurred with the use of a single transparent slit. However, use of multiple slits improved the system's ability to reject the detection of undesirable lower intensity stars, but only by decreasing the probability of detection for lower intensity stars to be mapped. Also, it was found that the coding arrangement affected the root-mean-square star-position error and that detection is possible with error in the system's detected spin rate, though at a reduced probability.

  16. Meteor localization via statistical analysis of spatially temporal fluctuations in image sequences

    NASA Astrophysics Data System (ADS)

    Kukal, Jaromír.; Klimt, Martin; Šihlík, Jan; Fliegel, Karel

    2015-09-01

    Meteor detection is one of the most important procedures in astronomical imaging. The meteor path in Earth's atmosphere is traditionally reconstructed from a double-station video observation system generating 2D image sequences. However, atmospheric turbulence and other factors cause spatio-temporal fluctuations of the image background, which makes the localization of the meteor path more difficult. Our approach is based on nonlinear preprocessing of the image intensity using the Box-Cox transform, with the logarithmic transform as a particular case. The transformed image sequences are then differentiated along the discrete coordinates to obtain a statistical description of the sky background fluctuations, which can be modeled by a multivariate normal distribution. After verification and hypothesis testing, we use the statistical model for outlier detection. Whereas isolated outlier points are ignored, a compact cluster of outliers indicates the presence of a meteoroid after ignition.
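    A simplified sketch of the statistical outlier step described above (logarithmic transform as the Box-Cox special case, differencing along discrete coordinates, a multivariate normal background model, and a chi-square cut); the feature choice and threshold are illustrative assumptions:

        import numpy as np
        from scipy import stats

        def outlier_mask(frames, alpha=1e-4):
            """Flag pixels whose spatio-temporal intensity fluctuations are outliers.

            frames: array (T, H, W) of non-negative image intensities."""
            x = np.log1p(frames.astype(float))          # logarithmic transform (Box-Cox with lambda -> 0)
            dt = np.diff(x, axis=0)[:, :, :-1]          # temporal differences
            dx = np.diff(x, axis=2)[1:, :, :]           # horizontal spatial differences
            feats = np.stack([dt.ravel(), dx.ravel()], axis=1)
            mu = feats.mean(axis=0)
            cov = np.cov(feats, rowvar=False)
            d2 = np.einsum('ij,jk,ik->i', feats - mu, np.linalg.inv(cov), feats - mu)
            thresh = stats.chi2.ppf(1.0 - alpha, df=2)  # squared-Mahalanobis threshold
            return (d2 > thresh).reshape(dt.shape)      # True where the fluctuation is anomalous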

  17. Guidelines for the Institutional Implementation of Developmental Neuroprotective Care in the Neonatal Intensive Care Unit. Part A: Background and Rationale. A Joint Position Statement From the CANN, CAPWHN, NANN, and COINN.

    PubMed

    Milette, Isabelle; Martel, Marie-Josée; Ribeiro da Silva, Margarida; Coughlin McNeil, Mary

    2017-06-01

    The use of age-appropriate care as an organized framework for care delivery in the neonatal intensive care unit is founded on the work of Heidelise Als, PhD, and her synactive theory of development. This theoretical construct has recently been advanced by the work of Gibbins and colleagues with the "universe of developmental care" conceptual model and developmental care core measures which were endorsed by the National Association of Neonatal Nurses in their age-appropriate care of premature infant guidelines as best-practice standards for the provision of high-quality care in the neonatal intensive care unit. These guidelines were recently revised and expanded. In alignment with the Joint Commission's requirement for health-care professionals to provide age-specific care across the lifespan, the core measures for developmental care suggest the necessary competencies for those caring for the premature and critically ill hospitalized infant. Further supported by the Primer Standards of Accreditation and Health Canada, the institutional implementation of these core measures requires a strong framework for institutional operationalization, presented in these guidelines. Part A of this article will present the background and rationale behind the present guidelines and their condensed table of recommendations.

  18. A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zhang, Xubin; Tan, Zhe-Min

    2017-04-01

    The construction of background error covariances is a key component of three-dimensional variational data assimilation. There are background errors at different scales, and interactions among them, in numerical weather prediction. However, the influence of these errors and their interactions cannot be represented in the background error covariance statistics when they are estimated by the leading methods. It is therefore necessary to construct background error covariances influenced by multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. The information of errors whose scales are larger and smaller than the given ones is then introduced, respectively, using different nesting techniques, to estimate the corresponding covariances. The comparison of the three background error covariance statistics influenced by information of errors at different scales reveals that the background error variances increase, particularly at large scales and higher levels, when the information of larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances reduce at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when the information of smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information of larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) with the information of larger- (smaller-) scale errors included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained in the above work are each used in a data assimilation and model forecast system, and analysis-forecast cycles for a period of 1 month are conducted. Through the comparison of both analyses and forecasts from this system, it is found that the trends for variation in analysis increments as information of different scale errors is introduced are consistent with those for variation in the variances and correlations of background errors. In particular, the introduction of smaller-scale errors leads to a larger amplitude of analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts for winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity increase significantly at large scales at lower levels, moistening the southern part of the analyses. This humidification contributes to correcting the dry bias there and eventually improves the forecast skill for humidity. Moreover, the inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales, owing to the amplification (diminution) of intensity and area in the precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.

  19. Cyber-Physical Trade-Offs in Distributed Detection Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.

    2010-01-01

    We consider a network of sensors that measure the scalar intensity due to the background, or a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background, due to sensor errors, or both. The detection problem is to infer the presence of a source of unknown intensity and location based on sensor measurements. In the conventional approach, detection decisions are made at the individual sensors and then combined at the fusion center, for example using the majority rule. With increased communication and computation costs, we show that a more complex fusion algorithm based on measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios and a minimum packing number for the state-space. We show that these conditions for trade-offs between the cyber costs and physical detection performance are applicable for two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.
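    A small Monte Carlo sketch of the trade-off discussed above, assuming a hypothetical Gaussian sensor model: local hard decisions combined by a majority rule versus a fusion centre that works on the raw measurements (here, their sum), both operated at roughly the same system false-alarm rate:

        import numpy as np

        rng = np.random.default_rng(1)
        n_sensors, n_trials = 9, 50_000
        bkg, boost, sigma = 10.0, 1.0, 2.0            # hypothetical intensities

        h0 = rng.normal(bkg, sigma, (n_trials, n_sensors))          # background only
        h1 = rng.normal(bkg + boost, sigma, (n_trials, n_sensors))  # source plus background

        # (i) Local hard decisions fused by a majority rule; the per-sensor threshold
        #     is chosen so that the fused false-alarm rate is roughly 1%.
        t_local = np.quantile(h0, 0.82)
        majority = lambda x: (x > t_local).sum(axis=1) > n_sensors // 2

        # (ii) Centralised fusion of the raw measurements (sum statistic) at the
        #      same empirical false-alarm rate.
        t_fused = np.quantile(h0.sum(axis=1), 1.0 - majority(h0).mean())

        print("majority rule     : Pfa =", majority(h0).mean(), " Pd =", majority(h1).mean())
        print("measurement fusion: Pfa = matched,", "Pd =", (h1.sum(axis=1) > t_fused).mean())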

  20. Studies of the extreme ultraviolet/soft x-ray background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stern, R.A.

    1978-01-01

    The results of an extensive sky survey of the extreme ultraviolet (EUV)/soft x-ray background are reported. The data were obtained with a focusing telescope designed and calibrated at U.C. Berkeley, which observed EUV sources and the diffuse background as part of the Apollo-Soyuz mission in July 1975. With a primary field-of-view of 2.3 ± 0.1° FWHM and four EUV bandpass filters (16 to 25, 20 to 73, 80 to 108, and 80 to 250 eV), the EUV telescope obtained background data included in the final observational sample for 21 discrete sky locations and 11 large angular scans, as well as for a number of shorter observations. Analysis of the data reveals an intense flux above 80 eV energy, with upper limits to the background intensity given for the lower energy filters (ca. 2 × 10⁴ and 6 × 10² ph cm⁻² sec⁻¹ ster⁻¹ eV⁻¹ at 21 and 45 eV, respectively). The 80 to 108 eV flux agrees within statistical errors with the earlier results of Cash, Malina and Stern (1976): the Apollo-Soyuz average reported intensity is 4.0 ± 1.3 ph cm⁻² sec⁻¹ ster⁻¹ eV⁻¹ at ca. 100 eV, or roughly a factor of ten higher than the corresponding 250 eV intensity. The uniformity of the background flux is uncertain due to limitations in the statistical accuracy of the data; upper limits to the point-to-point standard deviation of the background intensity are ΔI/I ≲ 0.8 ± 0.4 (80 to 108 eV) and ≲ 0.4 ± 0.2 (80 to 250 eV). No evidence is found for a correlation between the telescope count rate and earth-based parameters (zenith angle, sun angle, etc.) for E ≳ 80 eV (the lower energy bandpasses are significantly affected by scattered solar radiation). Unlike some previous claims for the soft x-ray background, no simple dependence upon galactic latitude is seen.

  1. A Comparison of Alternating Current and Direct Current Electrospray Ionization for Mass Spectrometry

    PubMed Central

    Sarver, Scott A.; Gartner, Carlos A.; Chetwani, Nishant; Go, David B.; Dovichi, Norman J.

    2014-01-01

    A series of studies comparing the performance of alternating current electrospray ionization (AC ESI) mass spectrometry (MS) and direct current electrospray ionization (DC ESI) MS has been conducted, exploring the absolute signal intensity and signal-to-background ratios produced by both methods using caffeine and a model peptide as targets. Because the high-voltage AC signal was more susceptible to generating gas discharges, the operating voltage range of AC ESI was significantly smaller than that for DC ESI, such that the absolute signal intensities produced by DC ESI at peak voltages were 1 - 2 orders of magnitude greater than those for AC ESI. Using an electronegative nebulizing gas, sulfur hexafluoride (SF6), instead of nitrogen (N2) increased the operating range of AC ESI by ~50%, but did not appreciably improve signal intensities. While DC ESI generated far greater signal intensities, both ionization methods produced comparable signal-to-background noise, with AC ESI spectra appearing qualitatively cleaner. A quantitative calibration analysis was performed for two analytes, caffeine and the peptide MRFA. AC ESI utilizing SF6 outperforms all other techniques for the detection of MRFA, producing chromatographic limits of detection nearly one order of magnitude lower than that of DC ESI utilizing N2, and one half that of DC ESI utilizing SF6. However, DC ESI outperforms AC ESI for the analysis of caffeine, indicating improvements in spectral quality may benefit certain compounds, or classes of compounds, on an individual basis. PMID:24464359

  2. Treatment Intensity and Childhood Apraxia of Speech

    ERIC Educational Resources Information Center

    Namasivayam, Aravind K.; Pukonen, Margit; Goshulak, Debra; Hard, Jennifer; Rudzicz, Frank; Rietveld, Toni; Maassen, Ben; Kroll, Robert; van Lieshout, Pascal

    2015-01-01

    Background: Intensive treatment has been repeatedly recommended for the treatment of speech deficits in childhood apraxia of speech (CAS). However, differences in treatment outcomes as a function of treatment intensity have not been systematically studied in this population. Aim: To investigate the effects of treatment intensity on outcome…

  3. Intensity of Language Treatment: Contribution to Children's Language Outcomes

    ERIC Educational Resources Information Center

    Schmitt, Mary Beth; Justice, Laura M.; Logan, Jessica A. R.

    2017-01-01

    Background: Treatment intensity is an important factor in designing and delivering treatments to children with language impairment (LI). However, to date very little is understood about cumulative intervention intensity for children with LI in the US public school system. Aims: To examine treatment intensity (dose: time spent on language;…

  4. The Substructure of the Solar Corona Observed in the Hi-C Telescope

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Cirtain, J.; Golub, L.; DeLuca, E.; Savage, S.; Alexander, C.; Schuler, T.

    2014-01-01

    In the summer of 2012, the High-resolution Coronal Imager (Hi-C) flew aboard a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore calculate how the intensity scales from low-resolution (AIA) pixels to high-resolution (Hi-C) pixels for both dynamic events and "background" emission (meaning the steady emission over the 5 minutes of data acquisition time). We find there is no evidence of substructure in the background corona; the intensity scales smoothly from low-resolution AIA to high-resolution Hi-C pixels. In transient events, however, the intensity observed with Hi-C is, on average, 2.6 times larger than observed with AIA. This increase in intensity suggests that AIA is not resolving these events. This result suggests a finely structured dynamic corona embedded in a smoothly varying background.

  5. Survival condition for low-frequency quasi-one-dimensional breathers in a two-dimensional strongly anisotropic crystal

    NASA Astrophysics Data System (ADS)

    Savin, A. V.; Zubova, E. A.; Manevitch, L. I.

    2005-06-01

    We investigate a two-dimensional (2D) strongly anisotropic crystal (2D SAC) on a substrate: a 2D system of coupled linear chains of particles with strong intrachain and weak interchain interactions, each chain being subjected to a sinusoidal background potential. The nonlinear dynamics of one of these chains, when the rest are fixed, reduces to the well-known Frenkel-Kontorova (FK) model. Depending on the strength of the substrate potential, the 2D SAC models a variety of physical systems: polymer crystals with identical chains having light side groups, an array of inductively coupled long Josephson junctions, and anisotropic crystals having light and heavy sublattices. The continuum limit of the FK model, the sine-Gordon (sG) equation, allows two types of soliton solutions: topological solitons and breathers. It is known that quasi-one-dimensional topological solitons can also propagate in a chain of the 2D system of coupled chains, and even in a helix chain in a three-dimensional model of a polymer crystal. In contrast to this, numerical simulation shows that the long-living breathers inherent to the FK model do not exist in the 2D SAC with a weak background potential. The effect changes the scenario of kink-antikink collisions at small relative velocity: at weak background potential the collision always results only in intensive phonon radiation, whereas kink-antikink recombination in the FK model results in the creation of a long-living low-frequency sG breather. We find the survival condition for breathers in the 2D SAC on a substrate as a function of the breather frequency and the strength of the background potential. The survival condition bears no relation to resonances between the breather frequency and the frequencies of the phonon band, contrary to the case of the FK model.

  6. Strong-field control and enhancement of chiral response in bi-elliptical high-order harmonic generation: an analytical model

    NASA Astrophysics Data System (ADS)

    Ayuso, David; Decleva, Piero; Patchkovskii, Serguei; Smirnova, Olga

    2018-06-01

    The generation of high-order harmonics in a medium of chiral molecules driven by intense bi-elliptical laser fields can lead to strong chiroptical response in a broad range of harmonic numbers and ellipticities (Ayuso et al 2018 J. Phys. B: At. Mol. Opt. Phys. 51 06LT01). Here we present a comprehensive analytical model that can describe the most relevant features arising in the high-order harmonic spectra of chiral molecules driven by strong bi-elliptical fields. Our model recovers the physical picture underlying chiral high-order harmonic generation (HHG) based on ultrafast chiral hole motion and identifies the rotationally invariant molecular pseudoscalars responsible for chiral dynamics. Using the chiral molecule propylene oxide as an example, we show that one can control and enhance the chiral response in bi-elliptical HHG by tailoring the driving field, in particular by tuning its frequency, intensity and ellipticity, exploiting a suppression mechanism of achiral background based on the linear Stark effect.

  7. Intensity level for exercise training in fibromyalgia by using mathematical models

    PubMed Central

    2010-01-01

    Background It has not been assessed before whether mathematical models described in the literature for prescriptions of exercise can be used for fibromyalgia syndrome patients. The objective of this paper was to determine how age-predicted heart rate formulas can be used with fibromyalgia syndrome populations, as well as to find out which mathematical models are more accurate for controlling exercise intensity. Methods A total of 60 women aged 18-65 years with fibromyalgia syndrome were included; 32 were randomized to walking training at anaerobic threshold. Age-predicted formulas for maximum heart rate ("220 minus age" and "208 minus 0.7 × age") were correlated with the achieved maximum heart rate (HRMax) obtained by spiroergometry. Subsequently, six mathematical models using heart rate reserve (HRR) and age-predicted HRMax formulas were studied to estimate the intensity level of exercise training corresponding to the heart rate at anaerobic threshold (HRAT) obtained by spiroergometry. Linear and nonlinear regression models were used for the correlations, and residual analysis was used to assess the adequacy of the models. Results Age-predicted HRMax and HRAT formulas had a good correlation with the achieved heart rate obtained by spiroergometry (r = 0.642; p < 0.05). For exercise prescription at the anaerobic threshold intensity, the percentages were 52.2-60.6% HRR and 75.5-80.9% HRMax. Formulas using HRR and the achieved HRMax showed better correlation. Furthermore, the percentages of HRMax and HRR were significantly higher for the trained individuals (p < 0.05). Conclusion Age-predicted formulas can be used for estimating HRMax and for exercise prescriptions in women with fibromyalgia syndrome. Karvonen's formula using the heart rate achieved in the ergometric test showed a better correlation. For the prescription of exercise at the threshold intensity, 52% to 60% HRR or 75% to 80% HRMax should be used in sedentary women with fibromyalgia syndrome; these values are higher for trained patients and must be corrected accordingly. PMID:20307323
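    For illustration, the two age-predicted HRMax formulas and the Karvonen heart-rate-reserve prescription mentioned above can be written down directly; the age, resting heart rate and target percentages below are example values, not data from the study:

        def hr_max_220(age):            # "220 minus age"
            return 220 - age

        def hr_max_208(age):            # "208 minus 0.7 x age"
            return 208 - 0.7 * age

        def karvonen_target(hr_max, hr_rest, pct_hrr):
            """Karvonen formula: target HR = rest + %HRR * (max - rest)."""
            return hr_rest + pct_hrr * (hr_max - hr_rest)

        age, hr_rest = 45, 72                       # example patient
        for pct in (0.52, 0.60):                    # anaerobic-threshold range reported above
            print(pct, round(karvonen_target(hr_max_220(age), hr_rest, pct)))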

  8. Quantification of effectiveness of bilateral and unilateral neuromodulation in the rat bladder rhythmic contraction model

    PubMed Central

    2013-01-01

    Background Using the isovolumetric bladder rhythmic contraction (BRC) model in anesthetized rats, we have quantified the responsiveness to unilateral and bilateral stimulation of the L6 spinal nerve (SN) and characterized the relationship between stimulus intensity and inhibition of the bladder micturition reflex. Methods A wire electrode was placed under either one or both of the L6 SN roots. A cannula was placed into the bladder via the urethra and the urethra was ligated. Saline infusion induced BRC. Results At motor threshold (Tmot) intensity, SN stimulation of both roots (10 Hz) for 10 min reduced bladder contraction frequency from 0.63 ± 0.04 to 0.17 ± 0.09 contractions per min (26 ± 14% of baseline control; n = 10, p < 0.05). However, the same intensity of unilateral stimulation (n = 15) or sequential stimulation of both SNs (e.g. 5 min per side alternatively for a total of 10 min or 20 min) was less efficacious. The greater sensitivity to bilateral stimulation is not dependent upon precise bilateral timing of the stimulation pulses. Bilateral stimulation also produced both acute and prolonged- inhibition on bladder contractions in a stimulation intensity dependent fashion. Conclusions Using the bladder rhythmic contraction model, bilateral stimulation was more effective than unilateral stimulation of the SN. Clinical testing should be conducted to further compare efficacies of unilateral and bilateral stimulation. Bilateral stimulation may allow the use of lower stimulation intensities to achieve higher efficacy for neurostimulation therapies on urinary tract control. PMID:23866931

  9. Clinical Benefits, Costs, and Cost-Effectiveness of Neonatal Intensive Care in Mexico

    PubMed Central

    Profit, Jochen; Lee, Diana; Zupancic, John A.; Papile, LuAnn; Gutierrez, Cristina; Goldie, Sue J.; Gonzalez-Pier, Eduardo; Salomon, Joshua A.

    2010-01-01

    Background Neonatal intensive care improves survival, but is associated with high costs and disability amongst survivors. Recent health reform in Mexico launched a new subsidized insurance program, necessitating informed choices on the different interventions that might be covered by the program, including neonatal intensive care. The purpose of this study was to estimate the clinical outcomes, costs, and cost-effectiveness of neonatal intensive care in Mexico. Methods and Findings A cost-effectiveness analysis was conducted using a decision analytic model of health and economic outcomes following preterm birth. Model parameters governing health outcomes were estimated from Mexican vital registration and hospital discharge databases, supplemented with meta-analyses and systematic reviews from the published literature. Costs were estimated on the basis of data provided by the Ministry of Health in Mexico and World Health Organization price lists, supplemented with published studies from other countries as needed. The model estimated changes in clinical outcomes, life expectancy, disability-free life expectancy, lifetime costs, disability-adjusted life years (DALYs), and incremental cost-effectiveness ratios (ICERs) for neonatal intensive care compared to no intensive care. Uncertainty around the results was characterized using one-way sensitivity analyses and a multivariate probabilistic sensitivity analysis. In the base-case analysis, neonatal intensive care for infants born at 24–26, 27–29, and 30–33 weeks gestational age prolonged life expectancy by 28, 43, and 34 years and averted 9, 15, and 12 DALYs, at incremental costs per infant of US$11,400, US$9,500, and US$3,000, respectively, compared to an alternative of no intensive care. The ICERs of neonatal intensive care at 24–26, 27–29, and 30–33 weeks were US$1,200, US$650, and US$240, per DALY averted, respectively. The findings were robust to variation in parameter values over wide ranges in sensitivity analyses. Conclusions Incremental cost-effectiveness ratios for neonatal intensive care imply very high value for money on the basis of conventional benchmarks for cost-effectiveness analysis. Please see later in the article for the Editors' Summary PMID:21179496
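    The incremental cost-effectiveness ratios quoted above follow directly from the incremental costs and DALYs averted; a short check against the rounded figures reported in the abstract:

        # Incremental cost (US$) per infant and DALYs averted, by gestational age group,
        # taken from the abstract above; the ICER is simply their ratio.
        groups = {"24-26 wk": (11_400, 9), "27-29 wk": (9_500, 15), "30-33 wk": (3_000, 12)}
        for name, (cost, dalys) in groups.items():
            print(f"{name}: ICER ~ US${cost / dalys:,.0f} per DALY averted")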

  10. SOM-based nonlinear least squares twin SVM via active contours for noisy image segmentation

    NASA Astrophysics Data System (ADS)

    Xie, Xiaomin; Wang, Tingting

    2017-02-01

    In this paper, a nonlinear least squares twin support vector machine (NLSTSVM) integrated with an active contour model (ACM) is proposed for noisy image segmentation. Kernel-generated surfaces, instead of hyper-planes, are sought for the pixels belonging to the foreground and background, respectively, using the kernel trick to enhance performance. Concurrent self-organizing maps (SOMs) are applied to approximate the intensity distributions in a supervised way, so as to establish the original training sets for the NLSTSVM. Further, the two sets are updated by adding the global region average intensities at each iteration. Moreover, a local variable regional term, rather than an edge stop function, is adopted in the energy function to improve the noise robustness. Experimental results demonstrate that our model achieves higher segmentation accuracy and greater noise robustness.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Ke; Linden, Tim, E-mail: kefang@umd.edu, E-mail: linden.70@osu.edu

    Radio observations at multiple frequencies have detected a significant isotropic emission component between 22 MHz and 10 GHz, commonly termed the ARCADE-2 Excess. The origin of this radio emission is unknown, as the intensity, spectrum and isotropy of the signal are difficult to model with either traditional astrophysical mechanisms or novel physics such as dark matter annihilation. We posit a new model capable of explaining the key components of the excess radio emission. Specifically, we show that the re-acceleration of non-thermal electrons via turbulence in merging galaxy clusters is capable of explaining the intensity, spectrum, and isotropy of the ARCADE-2 data. We examine the parameter spaces of cluster re-acceleration, magnetic field, and merger rate, finding that the radio excess can be reproduced under reasonable assumptions for each. Finally, we point out that future observations will definitively confirm or rule out the contribution of cluster mergers to the isotropic radio background.

  12. The cosmic X-ray background. [heao observations

    NASA Technical Reports Server (NTRS)

    Boldt, E. A.

    1980-01-01

    The cosmic X-ray experiment carried out with the A2 Instrument on HEAO-1 made systematics-free measurements of the extra-galactic X-ray sky and yielded the broadband spectral characteristics for two extreme aspects of this radiation. For the apparently isotropic radiation of cosmological origin that dominates the extragalactic X-ray flux (> 3 keV), the spectrum over the energy band of maximum intensity is remarkably well described by a thermal model with a temperature of a half-billion degrees. At the other extreme, broadband observations of individual extragalactic X-ray sources with HEAO-1 are restricted to objects within the present epoch. While the non-thermal hard spectral components associated with unevolved X-ray emitting active galaxies could account for most of the gamma-ray background, the contribution of such sources to the X-ray background must be relatively small. In contrast, the 'deep-space' sources detected in soft X-rays with the HEAO-2 telescope probably represent a major portion of the extragalactic soft X-ray (< 3 keV) background.

  13. A systematic analysis of the XMM-Newton background: III. Impact of the magnetospheric environment

    NASA Astrophysics Data System (ADS)

    Ghizzardi, Simona; Marelli, Martino; Salvetti, David; Gastaldello, Fabio; Molendi, Silvano; De Luca, Andrea; Moretti, Alberto; Rossetti, Mariachiara; Tiengo, Andrea

    2017-12-01

    A detailed characterization of the particle induced background is fundamental for many of the scientific objectives of the Athena X-ray telescope, thus an adequate knowledge of the background that will be encountered by Athena is desirable. Current X-ray telescopes have shown that the intensity of the particle induced background can be highly variable. Different regions of the magnetosphere can have very different environmental conditions, which can, in principle, differently affect the particle induced background detected by the instruments. We present results concerning the influence of the magnetospheric environment on the background detected by EPIC instrument onboard XMM-Newton through the estimate of the variation of the in-Field-of-View background excess along the XMM-Newton orbit. An important contribution to the XMM background, which may affect the Athena background as well, comes from soft proton flares. Along with the flaring component a low-intensity component is also present. We find that both show modest variations in the different magnetozones and that the soft proton component shows a strong trend with the distance from Earth.

  14. Development and evaluation of a connective tissue phantom model for subsurface visualization of cancers requiring wide local excision

    NASA Astrophysics Data System (ADS)

    Samkoe, Kimberley S.; Bates, Brent D.; Tselepidakis, Niki N.; DSouza, Alisha V.; Gunn, Jason R.; Ramkumar, Dipak B.; Paulsen, Keith D.; Pogue, Brian W.; Henderson, Eric R.

    2017-12-01

    Wide local excision (WLE) of tumors with negative margins remains a challenge because surgeons cannot directly visualize the mass. Fluorescence-guided surgery (FGS) may improve surgical accuracy; however, conventional methods with direct surface tumor visualization are not immediately applicable, and properties of tissues surrounding the cancer must be considered. We developed a phantom model for sarcoma resection with the near-infrared fluorophore IRDye 800CW and used it to iteratively define the properties of connective tissues that typically surround sarcoma tumors. We then tested the ability of a blinded surgeon to resect fluorescent tumor-simulating inclusions with ˜1-cm margins using predetermined target fluorescence intensities and a Solaris open-air fluorescence imaging system. In connective tissue-simulating phantoms, fluorescence intensity decreased with increasing blood concentration and increased with increasing intralipid concentrations. Fluorescent inclusions could be resolved at ≥1-cm depth in all inclusion concentrations and sizes tested. When inclusion depth was held constant, fluorescence intensity decreased with decreasing volume. Using targeted fluorescence intensities, a blinded surgeon was able to successfully excise inclusions with ˜1-cm margins from fat- and muscle-simulating phantoms with inclusion-to-background contrast ratios as low as 2∶1. Indirect, subsurface FGS is a promising tool for surgical resection of cancers requiring WLE.

  15. State Variability and Psychopathological Attractors: the Behavioural Complexity as Discriminating Factor Between the Pathology and Normality Profiles

    NASA Astrophysics Data System (ADS)

    Marconi, Pier Luigi

    369 patients, selected within a set of 1215 outpatients, were studied. The data were clustered into two sets: the baseline set and the endpoint set. The clinical parameters had a higher variability at the baseline than at the endpoint. Four to five factors were extracted in the total group and in 3 subgroups (190 "affective", 34 type-B personality, 166 without either of these disorders). In all subgroups there was a background pattern of 6 components: 3 components confirming the trifactorial temperamental model of Cloninger; 1 component related to the quality of social relationships; and 2 components (the main components of the factorial model in nearly all groups) relating to the quality of life and adjustment self-perceived by patients, and to the pattern of dysfunctional behaviour, inner feelings, and thought processes evaluated externally. These background components seem to aggregate differently in the subgroups in accordance with the clinical diagnosis. These patterns may be interpreted as an expression of increased "coherence" among parameters due to a lack of flexibility caused by the illness. The different classes of illness can be further distinguished by the intensity of maladjustment, which is related to the intensity of clinical signs only at the baseline. These data suggest that the main interfering factors are clinical psychopathology at baseline and stable personality traits at endpoint. This persistent, personality-driven chronic maladjustment is evidenced after the clinical disorder has been cured by treatment. An interpretative model is presented by the author.

  16. Determining the Intensity of a Point-Like Source Observed on the Background of AN Extended Source

    NASA Astrophysics Data System (ADS)

    Kornienko, Y. V.; Skuratovskiy, S. I.

    2014-12-01

    The problem of determining the time dependence of the intensity of a point-like source in the case of atmospheric blur is formulated and solved using the Bayesian statistical approach. The point-like source is assumed to be observed against the background of an extended source whose brightness is constant in time though unknown. The equation system for the optimal statistical estimation of the sequence of intensity values at the observation moments is obtained. The problem is particularly relevant for studying gravitational mirages, which appear when observing a quasar through the gravitational field of a distant galaxy.

  17. Probabilistic multi-resolution human classification

    NASA Astrophysics Data System (ADS)

    Tu, Jun; Ran, H.

    2006-02-01

    Recently there has been some interest in using infrared cameras for human detection because of the sharply decreasing prices of infrared cameras. The training data used in our work for developing the probabilistic template consist of images known to contain humans in different poses and orientations but having the same height. Multiresolution templates are constructed, based on contours and edges. This is done so that the model does not learn the intensity variations among the background pixels or the intensity variations among the foreground pixels. Each template at every level is then translated so that the centroid of the non-zero pixels matches the geometrical center of the image. After this normalization step, for each pixel of the template, the probability of it belonging to a pedestrian is calculated based on how frequently it appears as 1 in the training data. We also use gait periodicity to verify the pedestrian in a Bayesian manner for the whole blob in a probabilistic way. The videos had quite a lot of variation in the scenes, sizes of people, amount of occlusion and clutter in the backgrounds, as is clearly evident. Preliminary experiments show the robustness of the method.

  18. Reducing respiratory motion artifacts in positron emission tomography through retrospective stacking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorndyke, Brian; Schreibmann, Eduard; Koong, Albert

    Respiratory motion artifacts in positron emission tomography (PET) imaging can alter lesion intensity profiles, and result in substantially reduced activity and contrast-to-noise ratios (CNRs). We propose a corrective algorithm, coined 'retrospective stacking' (RS), to restore image quality without requiring additional scan time. Retrospective stacking uses b-spline deformable image registration to combine amplitude-binned PET data along the entire respiratory cycle into a single respiratory end point. We applied the method to a phantom model consisting of a small, hot vial oscillating within a warm background, as well as to ¹⁸FDG-PET images of a pancreatic and a liver patient. Comparisons were made using cross-section visualizations, activity profiles, and CNRs within the region of interest. Retrospective stacking was found to properly restore the lesion location and intensity profile in all cases. In addition, RS provided CNR improvements up to three-fold over gated images, and up to five-fold over ungated data. These phantom and patient studies demonstrate that RS can correct for lesion motion and deformation, while substantially improving tumor visibility and background noise.

  19. Subject design and factors affecting achievement in mathematics for biomedical science

    NASA Astrophysics Data System (ADS)

    Carnie, Steven; Morphett, Anthony

    2017-01-01

    Reports such as Bio2010 emphasize the importance of integrating mathematical modelling skills into undergraduate biology and life science programmes, to ensure students have the skills and knowledge needed for biological research in the twenty-first century. One way to do this is by developing a dedicated mathematics subject to teach modelling and mathematical concepts in biological contexts. We describe such a subject at a research-intensive Australian university, and discuss the considerations informing its design. We also present an investigation into the effect of mathematical and biological background, prior mathematical achievement, and gender, on student achievement in the subject. The investigation shows that several factors known to predict performance in standard calculus subjects apply also to specialized discipline-specific mathematics subjects, and give some insight into the relative importance of mathematical versus biological background for a biology-focused mathematics subject.

  20. Generation of gene-targeted mice using embryonic stem cells derived from a transgenic mouse model of Alzheimer's disease.

    PubMed

    Yamamoto, Satoshi; Ooshima, Yuki; Nakata, Mitsugu; Yano, Takashi; Matsuoka, Kunio; Watanabe, Sayuri; Maeda, Ryouta; Takahashi, Hideki; Takeyama, Michiyasu; Matsumoto, Yoshio; Hashimoto, Tadatoshi

    2013-06-01

    Gene-targeting technology using mouse embryonic stem (ES) cells has become the "gold standard" for analyzing gene functions and producing disease models. Recently, genetically modified mice with multiple mutations have increasingly been produced to study the interaction between proteins and polygenic diseases. However, introduction of an additional mutation into mice already harboring several mutations by conventional natural crossbreeding is an extremely time- and labor-intensive process. Moreover, to do so in mice with a complex genetic background, several years may be required if the genetic background is to be retained. Establishing ES cells from multiple-mutant mice, or disease-model mice with a complex genetic background, would offer a possible solution. Here, we report the establishment and characterization of novel ES cell lines from a mouse model of Alzheimer's disease (3xTg-AD mouse, Oddo et al. in Neuron 39:409-421, 2003) harboring 3 mutated genes (APPswe, TauP301L, and PS1M146V) and a complex genetic background. Thirty blastocysts were cultured and 15 stable ES cell lines (male: 11; female: 4) obtained. By injecting these ES cells into diploid or tetraploid blastocysts, we generated germline-competent chimeras. Subsequently, we confirmed that F1 mice derived from these animals showed similar biochemical and behavioral characteristics to the original 3xTg-AD mice. Furthermore, we introduced a gene-targeting vector into the ES cells and successfully obtained gene-targeted ES cells, which were then used to generate knockout mice for the targeted gene. These results suggest that the present methodology is effective for introducing an additional mutation into mice already harboring multiple mutated genes and/or a complex genetic background.

  1. Comparisons of Air Radiation Model with Shock Tube Measurements

    NASA Technical Reports Server (NTRS)

    Bose, Deepak; McCorkle, Evan; Bogdanoff, David W.; Allen, Gary A., Jr.

    2009-01-01

    This paper presents an assessment of the predictive capability of a shock layer radiation model appropriate for NASA's Orion Crew Exploration Vehicle lunar return entry. A detailed set of spectrally resolved radiation intensity comparisons is made with recently conducted tests in the Electric Arc Shock Tube (EAST) facility at NASA Ames Research Center. The spectral range spans from the vacuum ultraviolet wavelength of 115 nm to the infrared wavelength of 1400 nm. The analysis is done for 9.5-10.5 km/s shocks passing through room-temperature synthetic air at 0.2, 0.3, and 0.7 Torr. The comparisons between model and measurements show discrepancies in the level of background continuum radiation and in the intensities of atomic lines. Impurities in the EAST facility in the form of carbon-bearing species are also modeled to estimate the level of contaminants and their impact on the comparisons. The discrepancies, although large in some cases, exhibit order and consistency. A set of test and analysis improvements is proposed as a forward work plan in order to confirm or reject various proposed reasons for the observed discrepancies.

  2. Microarray image analysis: background estimation using quantile and morphological filters.

    PubMed

    Bengtsson, Anders; Bengtsson, Henrik

    2006-02-28

    In a microarray experiment the difference in expression between genes on the same slide is up to 10³-fold or more. At low expression, even a small error in the estimate will have great influence on the final test and reference ratios. In addition to the true spot intensity, the scanned signal contains different kinds of noise referred to as background. In order to assess the true spot intensity, the background must be subtracted. The standard approach to estimating background intensities is to assume they are equal to the intensity levels between spots. In the literature, morphological opening is suggested to be one of the best methods for estimating background this way. This paper examines fundamental properties of rank and quantile filters, which include morphological filters at the extremes, with focus on their ability to estimate between-spot intensity levels. The bias and variance of these filter estimates are driven by the number of background pixels used and their distributions. A new rank-filter algorithm is implemented and compared to methods available in Spot by CSIRO and GenePix Pro by Axon Instruments. Spot's morphological opening has a mean bias between -47 and -248, compared to a bias between 2 and -2 for the rank filter, and the variability of the morphological opening estimate is 3 times higher than for the rank filter. The mean bias of Spot's second method, morph.close.open, is between -5 and -16 and the variability is approximately the same as for morphological opening. The variability of GenePix Pro's region-based estimate is more than ten times higher than the variability of the rank-filter estimate, with slightly more bias. The large variability arises because the size of the background window changes with spot size. To overcome this, a non-adaptive region-based method is implemented. Its bias and variability are comparable to those of the rank filter. The performance of more advanced rank filters is equal to that of the best region-based methods. However, in order to obtain unbiased estimates these filters must be implemented with great care. The performance of morphological opening is in general poor, with a substantial spatially dependent bias.
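
    For readers who want to experiment with the filters compared above, the sketch below contrasts morphological opening with a local quantile (rank) filter on a synthetic slide-like image using SciPy. The image, window size, and percentile are illustrative assumptions and do not reproduce the paper's algorithms or its bias figures.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

# Synthetic scanned image: smooth background plus bright spots on a regular grid
yy, xx = np.mgrid[0:256, 0:256]
background_true = 100 + 0.2 * xx + 0.1 * yy
image = background_true + rng.normal(0, 5, (256, 256))
for cy in range(16, 256, 32):
    for cx in range(16, 256, 32):
        image[cy - 4:cy + 4, cx - 4:cx + 4] += 500      # spot foreground

# Morphological opening: erosion then dilation with a window larger than a spot
bg_opening = ndimage.grey_opening(image, size=(25, 25))

# Quantile (rank) filter: a low percentile of the local neighbourhood is another way
# to track between-spot intensity levels
bg_quantile = ndimage.percentile_filter(image, percentile=10, size=25)

print("opening bias:        ", np.mean(bg_opening - background_true))
print("quantile-filter bias:", np.mean(bg_quantile - background_true))
```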

  3. Extragalactic background light measurements and applications.

    PubMed

    Cooray, Asantha

    2016-03-01

    This review covers the measurements related to the extragalactic background light intensity from γ-rays to radio in the electromagnetic spectrum over 20 decades in wavelength. The cosmic microwave background (CMB) remains the best measured spectrum with an accuracy better than 1%. The measurements related to the cosmic optical background (COB), centred at 1 μm, are impacted by the large zodiacal light associated with interplanetary dust in the inner Solar System. The best measurements of COB come from an indirect technique involving γ-ray spectra of bright blazars with an absorption feature resulting from pair-production off of COB photons. The cosmic infrared background (CIB) peaking at around 100 μm established an energetically important background with an intensity comparable to the optical background. This discovery paved the way for large aperture far-infrared and sub-millimetre observations resulting in the discovery of dusty, starbursting galaxies. Their role in galaxy formation and evolution remains an active area of research in modern-day astrophysics. The extreme UV (EUV) background remains mostly unexplored and will be a challenge to measure due to the high Galactic background and absorption of extragalactic photons by the intergalactic medium at these EUV/soft X-ray energies. We also summarize our understanding of the spatial anisotropies and angular power spectra of intensity fluctuations. We motivate a precise direct measurement of the COB between 0.1 and 5 μm using a small aperture telescope observing either from the outer Solar System, at distances of 5 AU or more, or out of the ecliptic plane. Other future applications include improving our understanding of the background at TeV energies and spectral distortions of CMB and CIB.

  4. Extragalactic background light measurements and applications

    PubMed Central

    Cooray, Asantha

    2016-01-01

    This review covers the measurements related to the extragalactic background light intensity from γ-rays to radio in the electromagnetic spectrum over 20 decades in wavelength. The cosmic microwave background (CMB) remains the best measured spectrum with an accuracy better than 1%. The measurements related to the cosmic optical background (COB), centred at 1 μm, are impacted by the large zodiacal light associated with interplanetary dust in the inner Solar System. The best measurements of COB come from an indirect technique involving γ-ray spectra of bright blazars with an absorption feature resulting from pair-production off of COB photons. The cosmic infrared background (CIB) peaking at around 100 μm established an energetically important background with an intensity comparable to the optical background. This discovery paved the way for large aperture far-infrared and sub-millimetre observations resulting in the discovery of dusty, starbursting galaxies. Their role in galaxy formation and evolution remains an active area of research in modern-day astrophysics. The extreme UV (EUV) background remains mostly unexplored and will be a challenge to measure due to the high Galactic background and absorption of extragalactic photons by the intergalactic medium at these EUV/soft X-ray energies. We also summarize our understanding of the spatial anisotropies and angular power spectra of intensity fluctuations. We motivate a precise direct measurement of the COB between 0.1 and 5 μm using a small aperture telescope observing either from the outer Solar System, at distances of 5 AU or more, or out of the ecliptic plane. Other future applications include improving our understanding of the background at TeV energies and spectral distortions of CMB and CIB. PMID:27069645

  5. Maximum Likelihood Detection of Electro-Optic Moving Targets

    DTIC Science & Technology

    1992-01-16

    indicates intensity. The Infrared Measurements Sensor (IRMS) is a scanning sensor that collects both long-wavelength infrared (LWIR, 8 to 12 μm) ... moving clutter. Nonstationary spatial statistics correspond to the nonuniform intensity of the background scene. An equivalent viewpoint is to ... Figure 6 compares theory and experiment for 10 frames of the Longjump LWIR data obtained from the IRMS scanning sensor, which is looking at a background

  6. MGGPOD: a Monte Carlo Suite for Modeling Instrumental Line and Continuum Backgrounds in Gamma-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Weidenspointner, G.; Harris, M. J.; Sturner, S.; Teegarden, B. J.; Ferguson, C.

    2004-01-01

    Intense and complex instrumental backgrounds, against which the much smaller signals from celestial sources have to be discerned, are a notorious problem for low and intermediate energy gamma-ray astronomy (approximately 50 keV - 10 MeV). Therefore a detailed qualitative and quantitative understanding of instrumental line and continuum backgrounds is crucial for most stages of gamma-ray astronomy missions, ranging from the design and development of new instrumentation through performance prediction to data reduction. We have developed MGGPOD, a user-friendly suite of Monte Carlo codes built around the widely used GEANT (Version 3.21) package, to simulate ab initio the physical processes relevant for the production of instrumental backgrounds. These include the build-up and delayed decay of radioactive isotopes as well as the prompt de-excitation of excited nuclei, both of which give rise to a plethora of instrumental gamma-ray background lines in addition to continuum backgrounds. The MGGPOD package and documentation are publicly available for download. We demonstrate the capabilities of the MGGPOD suite by modeling high resolution gamma-ray spectra recorded by the Transient Gamma-Ray Spectrometer (TGRS) on board Wind during 1995. The TGRS is a Ge spectrometer operating in the 40 keV to 8 MeV range. Due to its fine energy resolution, these spectra reveal the complex instrumental background in formidable detail, particularly the many prompt and delayed gamma-ray lines. We evaluate the successes and failures of the MGGPOD package in reproducing TGRS data, and provide identifications for the numerous instrumental lines.

  7. Spectroscopic limits to an extragalactic far-ultraviolet background.

    PubMed

    Martin, C; Hurwitz, M; Bowyer, S

    1991-10-01

    We use a spectrum of the lowest intensity diffuse far-ultraviolet background, obtained from a series of observations in a number of celestial view directions, to constrain the properties of the extragalactic FUV background. The mean continuum level, I_EG = 280 ± 35 photons cm⁻² s⁻¹ Å⁻¹ sr⁻¹, was obtained in a direction with very low H I column density, and this represents a firm upper limit to any extragalactic background in the 1400-1900 Å band. Previous work has demonstrated that the far-ultraviolet background includes (depending on the view direction) contributions from dust-scattered Galactic light, high-ionization emission lines, two-photon emission from H II, H2 fluorescence, and the integrated light of spiral galaxies. We find no evidence in the spectrum of line or continuum features that would signify additional extragalactic components. Motivated by the observation of steep BJ and U number count distributions, we have made a detailed comparison of galaxy evolution models to optical and UV data. We find that the observations are difficult to reconcile with a dominant contribution from unclustered, starburst galaxies at low redshifts. Our measurement rules out large ionizing fluxes at z = 0, but cannot strongly constrain the QSO background light, which is expected to be 0.5%-4% of I_EG. We present improved limits on radiative lifetimes of massive neutrinos. We demonstrate with a simple model that IGM radiation is unlikely to make a significant contribution to I_EG. Since dust scattering could produce a significant part of the continuum in this lowest intensity spectrum, we carried out a series of tests to evaluate this possibility. We find that the spectrum of a nearby target with higher N_H I, when corrected for H2 fluorescence, is very similar to the spectrum obtained in the low H I view direction. This is evidence that the majority of the continuum observed at low N_H I is also dust reflection, indicating either the existence of a hitherto unidentified dust component, or a large enhancement in dust scattering efficiency in low-density gas. We also review the effects of an additional dust component on the far-infrared background and on extragalactic FUV observations. We conclude that dust reflection, combined with modest contributions from H II two-photon emission and from the integrated light of late-type galaxies, may account for virtually all of the FUV background in low H I column density directions.

  8. Spatial temperature distribution in human hairy and glabrous skin after infrared CO2 laser radiation

    PubMed Central

    2010-01-01

    Background CO2 lasers have been used for several decades as an experimental non-touching pain stimulator. The laser energy is absorbed by the water content in the most superficial layers of the skin. The deeper located nociceptors are activated by passive conduction of heat from superficial to deeper skin layers. Methods In the current study, a 2D axial finite element model was developed and validated to describe the spatial temperature distribution in the skin after infrared CO2 laser stimulation. The geometry of the model was based on high resolution ultrasound scans. The simulations were compared to the subjective pain intensity ratings from 16 subjects and to the surface skin temperature distributions measured by an infrared camera. Results The stimulations were sensed significantly slower and less intense in glabrous skin than in hairy skin (MANOVA, p < 0.001). The model simulations of superficial temperature correlated with the measured skin surface temperature (r > 0.90, p < 0.001). Of the 16 subjects tested, eight reported pricking pain in the hairy skin following a stimulus of 0.6 J/cm² (5 W, 0.12 s, d_1/e² = 11.4 mm), whereas only two reported pain to glabrous skin stimulation using the same stimulus intensity. The temperature at the epidermal-dermal junction (depth 50 μm in hairy and 133 μm in glabrous skin) was estimated at 46°C for hairy skin stimulation and 39°C for glabrous skin stimulation. Conclusions Compared to previous one-dimensional heat distribution models, the current two-dimensional model provides new possibilities for detailed studies of CO2 laser stimulation intensity, temperature levels and nociceptor activation. PMID:21059226
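
    To make the passive-conduction argument concrete, the sketch below runs a crude one-dimensional explicit finite-difference heat diffusion from a superficially heated surface into the skin. The diffusivity, grid, pulse deposition, and boundary conditions are rough assumptions for illustration only; they are not the paper's validated 2D axial finite element model.

```python
import numpy as np

alpha = 1.1e-7             # thermal diffusivity of skin, m^2/s (assumed)
dz = 10e-6                 # 10 um grid spacing
dt = 0.2 * dz**2 / alpha   # explicit-scheme time step, well inside the stability limit
depth = np.arange(0, 1e-3, dz)

T = np.full(depth.size, 33.0)   # baseline skin temperature, deg C (assumed)
T[:5] += 20.0                   # idealized superficial deposition of the CO2 laser pulse

t_end = 0.5                     # simulate 0.5 s of passive conduction
for _ in range(int(t_end / dt)):
    # interior update; the RHS is evaluated on the old field before assignment
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]               # insulated deep boundary (simplification)
    # surface node T[0] is held at its post-pulse value for simplicity

print("temperature at ~50 um depth: ", T[5])
print("temperature at ~130 um depth:", T[13])
```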

  9. Felsenkeller 5 MV underground accelerator: Towards the Holy Grail of Nuclear Astrophysics 12C(α, γ)16O

    NASA Astrophysics Data System (ADS)

    Bemmerer, Daniel; Cowan, Thomas E.; Grieger, Marcel; Hammer, Sebastian; Hensel, Thomas; Junghans, Arnd R.; Koppitz, Martina; Ludwig, Felix; Müller, Stefan E.; Rimarzig, Bernd; Reinicke, Stefan; Schwengner, Ronald; Stöckel, Klaus; Szücs, Tamás; Takács, Marcell P.; Turkat, Steffen; Wagner, Andreas; Wagner, Louis; Zuber, Kai

    2018-05-01

    Low-background experiments with stable ion beams are an important tool for putting the model of stellar hydrogen, helium, and carbon burning on a solid experimental foundation. The pioneering work in this regard has been done by the LUNA collaboration at Gran Sasso, using a 0.4 MV accelerator. The present contribution reviews the status of the project for a higher-energy underground accelerator in Felsenkeller, Germany. Results from γ-ray, neutron, and muon background measurements in the Felsenkeller underground site in Dresden, Germany, show that the background conditions are satisfactory. Two tunnels of the Felsenkeller site have recently been refurbished for the installation of a 5 MV high-current Pelletron accelerator. Civil construction work was completed in March 2018. The accelerator will provide intense, 50 μA beams of 1H+, 4He+, and 12C+ ions, enabling research on astrophysically relevant nuclear reactions with unprecedented sensitivity.

  10. Visual and Vestibular Determinants of Perceived Eye-Level

    NASA Technical Reports Server (NTRS)

    Cohen, Malcolm Martin

    2003-01-01

    Both gravitational and optical sources of stimulation combine to determine the perceived elevations of visual targets. The ways in which these sources of stimulation combine with one another in operational aeronautical environments are critical for pilots to make accurate judgments of the relative altitudes of other aircraft and of their own altitude relative to the terrain. In a recent study, my colleagues and I required eighteen observers to set visual targets at their apparent horizon while they experienced various levels of G(sub z) in the human centrifuge at NASA-Ames Research Center. The targets were viewed in darkness and also against specific background optical arrays that were oriented at various angles with respect to the vertical; target settings were lowered as Gz was increased; this effect was reduced when the background optical array was visible. Also, target settings were displaced in the direction that the background optical array was pitched. Our results were attributed to the combined influences of otolith-oculomotor mechanisms that underlie the elevator illusion and visual-oculomotor mechanisms (optostatic responses) that underlie the perceptual effects of viewing pitched optical arrays that comprise the background. In this paper, I present a mathematical model that describes the independent and combined effects of G(sub z) intensity and the orientation and structure of background optical arrays; the model predicts quantitative deviations from normal accurate perceptions of target localization under a variety of conditions. Our earlier experimental results and the mathematical model are described in some detail, and the effects of viewing specific optical arrays under various gravitational-inertial conditions encountered in aeronautical environments are discussed.

  11. Physical activity is independently associated with reduced mortality: 15-years follow-up of the Hordaland Health Study (HUSK)

    PubMed Central

    Kopperstad, Øyvind; Skogen, Jens Christoffer; Sivertsen, Børge; Tell, Grethe S.

    2017-01-01

    Background Physical activity (PA) is associated with lower risk for non-communicable diseases and mortality. We aimed to investigate the prospective association between PA and all-cause and cause-specific mortality, and the impact of other potentially contributing factors. Method Data from the community-based Hordaland Health Study (HUSK, 1997–99) were linked to the Norwegian Cause of Death Registry. The study included 20,506 individuals born 1950–1957 and 2,225 born in 1925–1927 (baseline age 40–49 and 70–74). Based on self-report, individuals were grouped as habitually performing low-intensity/short-duration, low-intensity/longer-duration, or high-intensity PA. The hazard ratios (HR) for all-cause and cause-specific mortality during follow-up were calculated. Measures of socioeconomic status, physical health, mental health, smoking and alcohol consumption were added separately and cumulatively to the model. Results PA was associated with lower all-cause mortality in both older (HR 0.75 (95% CI 0.67–0.84)) and younger individuals (HR 0.82 (95% CI 0.72–0.92)) (crude models; HR denotes the risk associated with moving from low-intensity/short-duration to low-intensity/longer-duration PA, and from low-intensity/longer-duration to high-intensity PA). Smoking, education, somatic diagnoses and mental health accounted for some of the association between physical activity and mortality, but a separate protective effect of PA remained in fully adjusted models for cardiovascular (HR 0.78 (95% CI 0.66–0.92)) and respiratory (HR 0.45 (95% CI 0.32–0.63)) mortality (both age groups together), as well as all-cause mortality in the older age group (HR 0.74, 95% CI 0.66–0.83). Conclusion Low-intensity/longer-duration and high-intensity physical activity were associated with reduced all-cause, respiratory and cardiovascular mortality, indicating that physical activity is beneficial also among older individuals, and that a moderate increase in PA can be beneficial. PMID:28328994

  12. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM).

    PubMed

    Gao, Hao; Yu, Hengyong; Osher, Stanley; Wang, Ge

    2011-11-01

    We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image for allowing the reconstruction with fewer CT data and less radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy that has a low matrix rank, and the sparse matrix represents the rest of distinct spectral features that are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. Besides, the energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations.
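
    The low-rank-plus-sparse decomposition at the heart of the PRISM can be illustrated with a generic robust-PCA-style ADMM on a toy (pixels × energy-bins) matrix. The sketch below uses singular value thresholding and entrywise soft thresholding; it is not the paper's split Bregman solver, and it omits the tight-frame generalized rank, the CT data-fidelity term, and the spectral-curve material model.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_plus_sparse(Y, lam=0.05, mu=1.0, n_iter=100):
    """Split a (pixels x energy-bins) matrix into low-rank background + sparse spectral features
    via scaled-dual ADMM for min ||L||_* + lam*||S||_1 subject to L + S = Y."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    U = np.zeros_like(Y)
    for _ in range(n_iter):
        L = svt(Y - S + U, 1.0 / mu)
        S = soft(Y - L + U, lam / mu)
        U += Y - L - S
    return L, S

# Toy multi-energy "image": 500 pixels x 8 energy bins
rng = np.random.default_rng(3)
base = np.outer(rng.random(500), np.linspace(1.0, 0.5, 8))   # rank-1 background varying smoothly across energy
features = np.zeros((500, 8))
features[::50, 3] = 2.0                                      # sparse, localized spectral features
Y = base + features + 0.01 * rng.normal(size=(500, 8))

L, S = lowrank_plus_sparse(Y)
print("recovered rank:", np.linalg.matrix_rank(L, tol=1e-2),
      "| sparse entries above 0.5:", int((np.abs(S) > 0.5).sum()))
```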

  13. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    PubMed Central

    2011-01-01

    Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778
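
    Steps 4-6 of the proposed process amount to turning raw clinical time series into window-level latent features. The snippet below is a minimal sketch of that idea on hypothetical minute-by-minute vitals; the variables, window length, and feature definitions are illustrative assumptions, not the study's actual candidate variables.

```python
import numpy as np
import pandas as pd

# Hypothetical minute-by-minute vitals for one ICU stay (candidate variables, steps 1-4)
rng = np.random.default_rng(4)
t = pd.date_range("2011-01-01 00:00", periods=240, freq="min")
vitals = pd.DataFrame({"heart_rate": 120 + np.cumsum(rng.normal(0, 0.5, 240)),
                       "spo2": 98 + np.cumsum(rng.normal(0, 0.05, 240))}, index=t)

def window_features(df, window="60min"):
    """Summarize each rolling window into latent time-series features (steps 5-6):
    level, variability, and a crude short-term trend for every candidate variable."""
    feats = {}
    for col in df.columns:
        roll = df[col].rolling(window)
        feats[f"{col}_mean"] = roll.mean()
        feats[f"{col}_std"] = roll.std()
        feats[f"{col}_slope"] = df[col].diff().rolling(window).mean()
    return pd.DataFrame(feats).dropna()

X = window_features(vitals)
print(X.tail(3))   # feature rows that a classifier could then be trained on (steps 7-10)
```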

  14. Laser system refinements to reduce variability in infarct size in the rat photothrombotic stroke model

    PubMed Central

    Alaverdashvili, Mariam; Paterson, Phyllis G.; Bradley, Michael P.

    2015-01-01

    Background The rat photothrombotic stroke model can induce brain infarcts with reasonable biological variability. Nevertheless, we observed unexplained high inter-individual variability despite using a rigorous protocol. Of the three major determinants of infarct volume, photosensitive dye concentration and illumination period were strictly controlled, whereas undetected fluctuation in laser power output was suspected to account for the variability. New method The frequently utilized Diode Pumped Solid State (DPSS) lasers emitting 532 nm (green) light can exhibit fluctuations in output power due to temperature and input power alterations. The polarization properties of the Nd:YAG and Nd:YVO4 crystals commonly used in these lasers are another potential source of fluctuation, since one means of controlling output power uses a polarizer with a variable transmission axis. Thus, the properties of DPSS lasers and the relationship between power output and infarct size were explored. Results DPSS laser beam intensity showed considerable variation. Either a polarizer or a variable neutral density filter allowed adjustment of a polarized laser beam to the desired intensity. When the beam was unpolarized, the experimenter was restricted to using a variable neutral density filter. Comparison with existing method(s) Our refined approach includes continuous monitoring of DPSS laser intensity via beam sampling using a pellicle beamsplitter and photodiode sensor. This guarantees the desired beam intensity at the targeted brain area during stroke induction, with the intensity controlled either through a polarizer or variable neutral density filter. Conclusions Continuous monitoring and control of laser beam intensity is critical for ensuring consistent infarct size. PMID:25840363

  15. Change of ENSO characteristics in response to global warming

    NASA Astrophysics Data System (ADS)

    Sun, X.; Xia, Y.; Yan, Y.; Feng, W.; Huang, F.; Yang, X. Q.

    2017-12-01

    By using datasets of HadISST monthly SST from 1895 to 2014 and 600-year simulations of two CESM model experiments with/without doubling of CO2 concentration, ENSO characteristics are compared pre- and post-global warming. The main results are as follows. Due to global warming, the maximum climatological SST warming occurs in the tropical western Pacific (La Niña-like background warming) and the tropical eastern Pacific (El Niño-like background warming) for observations and model, respectively, resulting in opposite zonal SST gradient anomalies in the tropical Pacific. The La Niña-like background warming induces intense surface divergence in the tropical central Pacific, which enhances the easterly trade winds in the tropical central-western Pacific and shifts the strongest ocean-atmosphere coupling westward, correspondingly. On the contrary, the El Niño-like background warming causes westerly winds in the whole tropical Pacific and moves the strongest ocean-atmosphere coupling eastward. Under the La Niña-like background warming, ENSO tends to develop and mature in the tropical central Pacific, because the background easterly wind anomaly weakens the ENSO-induced westerly wind anomaly in the tropical western Pacific, leading to the so-called "Central Pacific ENSO (CP ENSO)". However, the so-called "Eastern Pacific ENSO (EP ENSO)" is likely formed due to increased westerly wind anomaly by the El Niño-like background warming. ENSO lifetime is significantly extended under both the El Niño-like and the La Niña-like background warmings, and especially, it can be prolonged by up to 3 months in the situation of El Niño-like background warming. The prolonged El Niño lifetime mainly applies to extreme El Niño events, which is caused by earlier outbreak of the westerly wind bursts, shallower climatological thermocline depth and weaker "discharge" rate of the ENSO warm signal in response to global warming. Results from both observations and the model also show that the frequency of ENSO events greatly increases due to global warming, and many more extreme El Niño and La Niña events appear under the El Niño-like and the La Niña-like background warmings, respectively. This study reconciles the phenomena and mechanisms of different characteristics of ENSO changes in observations and models.

  16. Upper limit on the inner radiation belt MeV electron intensity.

    PubMed

    Li, X; Selesnick, R S; Baker, D N; Jaynes, A N; Kanekal, S G; Schiller, Q; Blum, L; Fennell, J; Blake, J B

    2015-02-01

    No instruments in the inner radiation belt are immune from the unforgiving penetration of the highly energetic protons (tens of MeV to GeV). The inner belt proton flux level, however, is relatively stable; thus, for any given instrument, the proton contamination often leads to a certain background noise. Measurements from the Relativistic Electron and Proton Telescope integrated little experiment on board Colorado Student Space Weather Experiment CubeSat, in a low Earth orbit, clearly demonstrate that there exist sub-MeV electrons in the inner belt because their flux level is orders of magnitude higher than the background, while higher-energy electron (>1.6 MeV) measurements cannot be distinguished from the background. Detailed analysis of high-quality measurements from the Relativistic Electron and Proton Telescope on board Van Allen Probes, in a geo-transfer-like orbit, provides, for the first time, quantified upper limits on MeV electron fluxes in various energy ranges in the inner belt. These upper limits are rather different from flux levels in the AE8 and AE9 models, which were developed based on older data sources. For 1.7, 2.5, and 3.3 MeV electrons, the upper limits are about 1 order of magnitude lower than predicted model fluxes. The implication of this difference is profound in that unless there are extreme solar wind conditions, which have not happened yet since the launch of Van Allen Probes, significant enhancements of MeV electrons do not occur in the inner belt even though such enhancements are commonly seen in the outer belt. Key points: quantified upper limit of MeV electrons in the inner belt; actual MeV electron intensity likely much lower than the upper limit; more detailed understanding of relativistic electrons in the magnetosphere.

  17. Upper limit on the inner radiation belt MeV electron intensity

    PubMed Central

    Li, X; Selesnick, RS; Baker, DN; Jaynes, AN; Kanekal, SG; Schiller, Q; Blum, L; Fennell, J; Blake, JB

    2015-01-01

    No instruments in the inner radiation belt are immune from the unforgiving penetration of the highly energetic protons (tens of MeV to GeV). The inner belt proton flux level, however, is relatively stable; thus, for any given instrument, the proton contamination often leads to a certain background noise. Measurements from the Relativistic Electron and Proton Telescope integrated little experiment on board Colorado Student Space Weather Experiment CubeSat, in a low Earth orbit, clearly demonstrate that there exist sub-MeV electrons in the inner belt because their flux level is orders of magnitude higher than the background, while higher-energy electron (>1.6 MeV) measurements cannot be distinguished from the background. Detailed analysis of high-quality measurements from the Relativistic Electron and Proton Telescope on board Van Allen Probes, in a geo-transfer-like orbit, provides, for the first time, quantified upper limits on MeV electron fluxes in various energy ranges in the inner belt. These upper limits are rather different from flux levels in the AE8 and AE9 models, which were developed based on older data sources. For 1.7, 2.5, and 3.3 MeV electrons, the upper limits are about 1 order of magnitude lower than predicted model fluxes. The implication of this difference is profound in that unless there are extreme solar wind conditions, which have not happened yet since the launch of Van Allen Probes, significant enhancements of MeV electrons do not occur in the inner belt even though such enhancements are commonly seen in the outer belt. Key Points: Quantified upper limit of MeV electrons in the inner belt. Actual MeV electron intensity likely much lower than the upper limit. More detailed understanding of relativistic electrons in the magnetosphere. PMID:26167446

  18. [Metroplasty for obstetric peritonitis arising against the background of uterine suture failure].

    PubMed

    Tussupkaliyev, A; Daribay, Zh; Saduov, M; Dossimbetova, M; Rakhmetullina, G

    2016-12-01

    The aim was to improve treatment outcomes of obstetric peritonitis after cesarean section on the basis of organ-preserving surgery and reasonable intensive care in the postpartum period. Fifteen clinical cases are described in which organ-preserving surgery was performed against the background of peritonitis, including excision of necrotic areas of the uterus, uterine cavity curettage, metroplasty, nasointestinal bowel intubation, and drainage of the abdominal cavity. The management of postpartum women with obstetric peritonitis arising against the background of uterine suture failure is discussed, along with currently existing criteria for the evaluation and treatment of these patients. The survey algorithm for postpartum women with obstetric peritonitis should include the SIRS diagnostic criteria, the leukocyte index of intoxication, and integrated organ dysfunction scales. Modern approaches to surgical treatment and starting antibiotic therapy with ultra-broad-spectrum antibiotics, combined with early intensive treatment in an intensive care unit, make it possible to avoid removal of the uterus as the primary focus.

  19. Segregation of acid plume pixels from background water pixels, signatures of background water and dispersed acid plumes, and implications for calculation of iron concentration in dense plumes

    NASA Technical Reports Server (NTRS)

    Bahn, G. S.

    1978-01-01

    Two files of data, obtained with a modular multiband scanner for an acid waste dump into ocean water, were analyzed intensively. Signatures were derived for background water at different levels of effective sunlight intensity, and for different iron concentrations in the dispersed plume from the dump. The effect of increased sunlight intensity on the calculated iron concentration was found to be relatively important at low iron concentrations and relatively unimportant at high iron concentrations in dispersed plumes. It was concluded that the basic equation for iron concentration is not applicable to dense plumes, particularly because lower values are indicated at the very core of the plume than in the surrounding sheath, whereas radiances increase consistently from background water to dispersed plume to inner sheath to innermost core. It was likewise concluded that in the dense plume the iron concentration would probably best be measured using the longer-wavelength radiances, although the suitable relationship remains unknown.

  20. Cross-correlation cosmography with intensity mapping of the neutral hydrogen 21 cm emission

    NASA Astrophysics Data System (ADS)

    Pourtsidou, A.; Bacon, D.; Crittenden, R.

    2015-11-01

    The cross-correlation of a foreground density field with two different background convergence fields can be used to measure cosmographic distance ratios and constrain dark energy parameters. We investigate the possibility of performing such measurements using a combination of optical galaxy surveys and neutral hydrogen (HI) intensity mapping surveys, with emphasis on the performance of the planned Square Kilometre Array (SKA). Using HI intensity mapping to probe the foreground density tracer field and/or the background source fields has the advantage of excellent redshift resolution and a longer lever arm achieved by using the lensing signal from high redshift background sources. Our results show that, for our best SKA-optical configuration of surveys, a constant equation of state for dark energy can be constrained to ≃8% for a sky coverage f_sky = 0.5 and assuming a σ(Ω_DE) = 0.03 prior for the dark energy density parameter. We also show that using the cosmic microwave background as the second source plane is not competitive, even when considering a COrE-like satellite.

  1. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
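
    For readers trying to reproduce the background-correction step itself, the sketch below implements the commonly cited closed-form conditional expectation for the exponential-normal convolution model and applies it to simulated probe intensities. The parameter values are assumed known here; estimating them well is precisely the issue the paper addresses, so treat this as an illustrative sketch rather than a reimplementation of RMA.

```python
import numpy as np
from scipy.stats import norm

def rma_background_correct(pm, alpha, mu, sigma):
    """Conditional expectation E[signal | observed] under the exponential(alpha) signal plus
    Normal(mu, sigma^2) background convolution model used by RMA-style preprocessing."""
    pm = np.asarray(pm, dtype=float)
    a = pm - mu - sigma**2 * alpha
    b = sigma
    num = norm.pdf(a / b) - norm.pdf((pm - a) / b)
    den = norm.cdf(a / b) + norm.cdf((pm - a) / b) - 1.0
    return a + b * num / den

# Toy check on simulated probe intensities with known parameters
rng = np.random.default_rng(5)
signal = rng.exponential(scale=1.0 / 0.02, size=10000)   # alpha = 0.02
noise = rng.normal(100.0, 15.0, size=10000)              # mu = 100, sigma = 15
observed = signal + noise
corrected = rma_background_correct(observed, alpha=0.02, mu=100.0, sigma=15.0)
print("mean true signal:", signal.mean(), "| mean corrected:", corrected.mean())
```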

  2. Design of the radiation shielding for the time of flight enhanced diagnostics neutron spectrometer at Experimental Advanced Superconducting Tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, T. F.; Chen, Z. J.; Peng, X. Y.

    A radiation shielding has been designed to reduce scattered neutrons and background gamma-rays for the new double-ring Time Of Flight Enhanced Diagnostics (TOFED) spectrometer. The shielding was designed based on simulations with the Monte Carlo code MCNP5. A dedicated model of the EAST tokamak has been developed together with the emission neutron source profile and spectrum; the latter were simulated with the Nubeam and GENESIS codes. A significant reduction of the background radiation at the detector can be achieved, which satisfies the requirements of TOFED. The intensities of the scattered and direct neutrons in the line of sight of the TOFED neutron spectrometer at EAST are studied for future data interpretation.

  3. Improving LHC searches for dark photons using lepton-jet substructure

    NASA Astrophysics Data System (ADS)

    Barello, G.; Chang, Spencer; Newby, Christopher A.; Ostdiek, Bryan

    2017-03-01

    Collider signals of dark photons are an exciting probe for new gauge forces and are characterized by events with boosted lepton jets. Existing techniques are efficient in searching for muonic lepton jets but due to substantial backgrounds have difficulty constraining lepton jets containing only electrons. This is unfortunate since upcoming intensity frontier experiments are sensitive to dark photon masses which only allow electron decays. Analyzing a recently proposed model of kinetic mixing, with new scalar particles decaying into dark photons, we find that existing techniques for electron jets can be substantially improved. We show that using lepton-jet-substructure variables, in association with a boosted decision tree, improves background rejection, significantly increasing the LHC's reach for dark photons in this region of parameter space.
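
    As a schematic of the analysis strategy, the snippet below trains a gradient-boosted decision tree on a few made-up lepton-jet-substructure variables to separate signal-like from background-like events. The variable names, distributions, and classifier settings are illustrative assumptions and are not the paper's actual observables or training setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical substructure variables: constituent multiplicity, leading-track energy
# fraction, and angular spread of the lepton jet (all invented for this sketch).
rng = np.random.default_rng(7)
n = 5000
signal = np.column_stack([rng.poisson(4, n), rng.beta(5, 2, n), rng.exponential(0.02, n)])
backgr = np.column_stack([rng.poisson(8, n), rng.beta(2, 2, n), rng.exponential(0.08, n)])
X = np.vstack([signal, backgr])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_train, y_train)

# Background rejection at fixed signal efficiency would be read off the score distribution;
# here we simply report overall accuracy on the held-out set.
print("test accuracy:", bdt.score(X_test, y_test))
```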

  4. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for comparisons of protein expression values.
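
    The coupling of segmentation and correction can be sketched as a simple alternating loop: threshold the current image into background and cells, estimate a smooth multiplicative field from the background class, divide it out, and repeat. The code below is a generic illustration of that idea with an assumed Gaussian-smoothed field model and a crude mean threshold; it is not the paper's fast minimization algorithm.

```python
import numpy as np
from scipy import ndimage

def shading_correct(image, n_iter=5, smooth_sigma=50):
    """Iterative segmentation-driven correction of a multiplicative shading field (sketch)."""
    corrected = image.astype(float)
    for _ in range(n_iter):
        threshold = corrected.mean()                 # crude two-class split (assumption)
        cells = corrected > threshold
        # Within each class the intensity should be nearly flat, so smooth residual
        # variation in the background class is attributed to multiplicative shading.
        bg = np.where(cells, np.nan, corrected)
        filled = np.where(np.isnan(bg), np.nanmean(bg), bg)
        field = ndimage.gaussian_filter(filled, smooth_sigma)
        corrected = corrected / (field / field.mean())   # divide out shading, keep mean level
    return corrected

# Toy usage: flat background with one bright "cell" patch, modulated by a smooth shading field
rng = np.random.default_rng(6)
yy, xx = np.mgrid[0:256, 0:256]
shading = 1.0 + 0.5 * np.exp(-((xx - 60) ** 2 + (yy - 60) ** 2) / (2 * 80.0 ** 2))
truth = np.full((256, 256), 50.0)
truth[100:120, 100:140] = 200.0
observed = truth * shading + rng.normal(0, 2, (256, 256))
print("background std after correction:", np.std(shading_correct(observed)[0:50, 0:50]))
```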

  5. Laryngeal Aerodynamics in Healthy Older Adults and Adults With Parkinson's Disease.

    PubMed

    Matheron, Deborah; Stathopoulos, Elaine T; Huber, Jessica E; Sussman, Joan E

    2017-03-01

    The present study compared laryngeal aerodynamic function of healthy older adults (HOA) to adults with Parkinson's disease (PD) while speaking at a comfortable and increased vocal intensity. Laryngeal aerodynamic measures (subglottal pressure, peak-to-peak flow, minimum flow, and open quotient [OQ]) were compared between HOAs and individuals with PD who had a diagnosis of hypophonia. Increased vocal intensity was elicited via monaurally presented multitalker background noise. At a comfortable speaking intensity, HOAs and individuals with PD produced comparable vocal intensity, rates of vocal fold closure, and minimum flow. HOAs used smaller OQs, higher subglottal pressure, and lower peak-to-peak flow than individuals with PD. Both groups increased speaking intensity when speaking in noise to the same degree. However, HOAs produced increased intensity with greater driving pressure, faster vocal fold closure rates, and smaller OQs than individuals with PD. Monaural background noise elicited equivalent vocal intensity increases in HOAs and individuals with PD. Although both groups used laryngeal mechanisms as expected to increase sound pressure level, they used these mechanisms to different degrees. The HOAs appeared to have better control of the laryngeal mechanism to make changes to their vocal intensity.

  6. Orbital Noise in the Earth System is a Common Cause of Climate and Greenhouse-Gas Fluctuation

    NASA Technical Reports Server (NTRS)

    Liu, H. S.; Kolenkiewicz, R.; Wade, C., Jr.; Smith, David E. (Technical Monitor)

    2002-01-01

    The mismatch between fossil isotopic data and climate models, known as the cool-tropic paradox, implies that either the data are flawed or we understand very little about the climate models of greenhouse warming. Here we question the validity of the climate models against the scientific background of orbital noise in the Earth system. Our study shows that the insolation pulsation induced by orbital noise is the common cause of climate change and of atmospheric concentrations of carbon dioxide and methane. In addition, we find that the intensity of the insolation pulses is dependent on the latitude of the Earth. Thus, orbital noise is the key to understanding the troubling paradox in climate models.

  7. Measurements of Long-range Electronic Correlations During Femtosecond Diffraction Experiments Performed on Nanocrystals of Buckminsterfullerene

    PubMed Central

    Ryan, Rebecca A.; Williams, Sophie; Martin, Andrew V.; Dilanian, Ruben A.; Darmanin, Connie; Putkunz, Corey T.; Wood, David; Streltsov, Victor A.; Jones, Michael W.M.; Gaffney, Naylyn; Hofmann, Felix; Williams, Garth J.; Boutet, Sebastien; Messerschmidt, Marc; Seibert, M. Marvin; Curwood, Evan K.; Balaur, Eugeniu; Peele, Andrew G.; Nugent, Keith A.; Quiney, Harry M.; Abbey, Brian

    2017-01-01

    The precise details of the interaction of intense X-ray pulses with matter are a topic of intense interest to researchers attempting to interpret the results of femtosecond X-ray free electron laser (XFEL) experiments. An increasing number of experimental observations have shown that although nuclear motion can be negligible, given a short enough incident pulse duration, electronic motion cannot be ignored. The current and widely accepted models assume that although electrons undergo dynamics driven by interaction with the pulse, their motion could largely be considered 'random'. This would then allow the supposedly incoherent contribution from the electronic motion to be treated as a continuous background signal and thus ignored. The original aim of our experiment was to precisely measure the change in intensity of individual Bragg peaks, due to X-ray induced electronic damage in a model system, crystalline C60. Contrary to this expectation, we observed that at the highest X-ray intensities, the electron dynamics in C60 were in fact highly correlated, and over sufficiently long distances that the positions of the Bragg reflections are significantly altered. This paper describes in detail the methods and protocols used for these experiments, which were conducted both at the Linac Coherent Light Source (LCLS) and the Australian Synchrotron (AS) as well as the crystallographic approaches used to analyse the data. PMID:28872125

  8. Measurements of Long-range Electronic Correlations During Femtosecond Diffraction Experiments Performed on Nanocrystals of Buckminsterfullerene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Rebecca A.; Williams, Sophie; Martin, Andrew V.

    The precise details of the interaction of intense X-ray pulses with matter are a topic of intense interest to researchers attempting to interpret the results of femtosecond X-ray free electron laser (XFEL) experiments. An increasing number of experimental observations have shown that although nuclear motion can be negligible, given a short enough incident pulse duration, electronic motion cannot be ignored. The current and widely accepted models assume that although electrons undergo dynamics driven by interaction with the pulse, their motion could largely be considered 'random'. This would then allow the supposedly incoherent contribution from the electronic motion to be treated as a continuous background signal and thus ignored. The original aim of our experiment was to precisely measure the change in intensity of individual Bragg peaks, due to X-ray induced electronic damage in a model system, crystalline C60. Contrary to this expectation, we observed that at the highest X-ray intensities, the electron dynamics in C60 were in fact highly correlated, and over sufficiently long distances that the positions of the Bragg reflections are significantly altered. Our paper describes in detail the methods and protocols used for these experiments, which were conducted both at the Linac Coherent Light Source (LCLS) and the Australian Synchrotron (AS) as well as the crystallographic approaches used to analyse the data.

  9. Development and evaluation of a connective tissue phantom model for subsurface visualization of cancers requiring wide local excision.

    PubMed

    Samkoe, Kimberley S; Bates, Brent D; Tselepidakis, Niki N; DSouza, Alisha V; Gunn, Jason R; Ramkumar, Dipak B; Paulsen, Keith D; Pogue, Brian W; Henderson, Eric R

    2017-12-01

    Wide local excision (WLE) of tumors with negative margins remains a challenge because surgeons cannot directly visualize the mass. Fluorescence-guided surgery (FGS) may improve surgical accuracy; however, conventional methods with direct surface tumor visualization are not immediately applicable, and properties of tissues surrounding the cancer must be considered. We developed a phantom model for sarcoma resection with the near-infrared fluorophore IRDye 800CW and used it to iteratively define the properties of connective tissues that typically surround sarcoma tumors. We then tested the ability of a blinded surgeon to resect fluorescent tumor-simulating inclusions with ∼1-cm margins using predetermined target fluorescence intensities and a Solaris open-air fluorescence imaging system. In connective tissue-simulating phantoms, fluorescence intensity decreased with increasing blood concentration and increased with increasing intralipid concentrations. Fluorescent inclusions could be resolved at ≥1-cm depth in all inclusion concentrations and sizes tested. When inclusion depth was held constant, fluorescence intensity decreased with decreasing volume. Using targeted fluorescence intensities, a blinded surgeon was able to successfully excise inclusions with ∼1-cm margins from fat- and muscle-simulating phantoms with inclusion-to-background contrast ratios as low as 2∶1. Indirect, subsurface FGS is a promising tool for surgical resection of cancers requiring WLE.

  10. Measurements of Long-range Electronic Correlations During Femtosecond Diffraction Experiments Performed on Nanocrystals of Buckminsterfullerene

    DOE PAGES

    Ryan, Rebecca A.; Williams, Sophie; Martin, Andrew V.; ...

    2017-08-22

    The precise details of the interaction of intense X-ray pulses with matter are a topic of intense interest to researchers attempting to interpret the results of femtosecond X-ray free electron laser (XFEL) experiments. An increasing number of experimental observations have shown that although nuclear motion can be negligible, given a short enough incident pulse duration, electronic motion cannot be ignored. The current and widely accepted models assume that although electrons undergo dynamics driven by interaction with the pulse, their motion could largely be considered 'random'. This would then allow the supposedly incoherent contribution from the electronic motion to be treated as a continuous background signal and thus ignored. The original aim of our experiment was to precisely measure the change in intensity of individual Bragg peaks, due to X-ray induced electronic damage in a model system, crystalline C60. Contrary to this expectation, we observed that at the highest X-ray intensities, the electron dynamics in C60 were in fact highly correlated, and over sufficiently long distances that the positions of the Bragg reflections are significantly altered. Our paper describes in detail the methods and protocols used for these experiments, which were conducted both at the Linac Coherent Light Source (LCLS) and the Australian Synchrotron (AS) as well as the crystallographic approaches used to analyse the data.

  11. Search for C II Emission on Cosmological Scales at Redshift Z ˜ 2.6

    NASA Astrophysics Data System (ADS)

    Pullen, Anthony R.; Serra, Paolo; Chang, Tzu-Ching; Doré, Olivier; Ho, Shirley

    2018-05-01

We present a search for C II emission over cosmological scales at high redshifts. The C II line is a prime candidate to be a tracer of star formation over large-scale structure since it is one of the brightest emission lines from galaxies. Redshifted C II emission appears in the submillimeter regime, meaning it could potentially be present in the higher frequency intensity data from the Planck satellite used to measure the cosmic infrared background (CIB). We search for C II emission over redshifts z = 2-3.2 in the Planck 545 GHz intensity map by cross-correlating the 3 highest frequency Planck maps with spectroscopic quasars and CMASS galaxies from the Sloan Digital Sky Survey III (SDSS-III), which we then use to jointly fit for C II intensity, CIB parameters, and thermal Sunyaev-Zeldovich (SZ) emission. We report a measurement of an anomalous emission I_ν = 6.6^{+5.0}_{-4.8} × 10^4 Jy/sr at 95% confidence, which could be explained by C II emission, favoring collisional excitation models of C II emission that tend to be more optimistic than models based on C II luminosity scaling relations from local measurements; however, a comparison of Bayesian information criteria reveals that this model and the CIB & SZ only model are equally plausible. Thus, more sensitive measurements will be needed to confirm the existence of large-scale C II emission at high redshifts. Finally, we forecast that intensity maps from Planck cross-correlated with quasars from the Dark Energy Spectroscopic Instrument (DESI) would increase our sensitivity to C II emission by a factor of 5, while the proposed Primordial Inflation Explorer (PIXIE) could increase the sensitivity further.

  12. [RELATIONSHIP BETWEEN THE PREVALENCE OF CHRONIC NONINFECTIOUS DISEASES AND ELECTROPHYSICAL STATE OF THE ENVIRONMENT].

    PubMed

    Rakhmanin, Yu A; Stekhin, A A; Yakovleva, G V; Karasev, A K; Marasanov, A V; Iksanova, T I; Ryabikov, V V

    2015-01-01

This paper evaluates the relationship between features of the electronic state of the environment and the level of chronic noninfectious diseases (CNID) in the regions of Russia, based on monitoring measurements of the intensity of the natural background electronic Bose condensate (BEBC) of natural ecosystems in a number of Russian regions and seas of the Arctic Ocean. The BEBC was assessed from measurements of the redox state of distilled water in contact with natural water. The equilibrium redox state of distilled water, determined by the influx of electrons (quantum reduction) from outside, is proportional to the intensity of the BEBC. The data obtained indicate an increase in the intensity of the background EBC in the Siberian regions and, especially, within the waters of Lake Baikal (redox potential of the lake's surface water ~ -70 mV). A strong latitudinal dependence of the background EBC was also observed. Low levels of background EBC were noted in the Arkhangelsk region and north-eastern Chukotka. The operation of international ionospheric plasma-sounding systems (such as HAARP) was found to have a detrimental effect on the background EBC in these regions. From measurements of the relative intensities of the natural background electron Bose condensate, a relationship was constructed between the prevalence of noninfectious diseases in the regions of Russia and the redox state of distilled water, which can be characterized as significant (regression coefficient R² = 0.78). The relationship between noninfectious diseases (NID, %) and the intensity of the background EBC (Ib, rel. units) is estimated by the equation NID [%] = 0.24 Eh [mV] - 25, where Eh ~ I/Ib. Numerical estimates show that an increase in the biospheric redox potential of water by 90 mV leads to an increase in primary incidence of 20% (relative to the average values for Russia). The analysis points to a relationship between CNID and the electrophysical state of the environment, which allows the causes of their emergence to be approached from a different perspective, namely changes in the electrophysical conditions of habitation and human activity that lead to cellular metabolic disturbances.

  13. Identification of Geologic and Anthropogenic Sources of Phosphorus to Streams in California and Portions of Adjacent States, U.S.A., Using SPARROW Modeling

    NASA Astrophysics Data System (ADS)

    Domagalski, J. L.

    2013-12-01

    The SPARROW (Spatially Referenced Regressions On Watershed Attributes) model allows for the simulation of nutrient transport at un-gauged catchments on a regional scale. The model was used to understand natural and anthropogenic factors affecting phosphorus transport in developed, undeveloped, and mixed watersheds. The SPARROW model is a statistical tool that allows for mass balance calculation of constituent sources, transport, and aquatic decay based upon a calibration of a subset of stream networks, where concentrations and discharge have been measured. Calibration is accomplished using potential sources for a given year and may include fertilizer, geological background (based on bed-sediment samples and aggregated with geochemical map units), point source discharge, and land use categories. NHD Plus version 2 was used to model the hydrologic system. Land to water transport variables tested were precipitation, permeability, soil type, tile drains, and irrigation. For this study area, point sources, cultivated land, and geological background are significant phosphorus sources to streams. Precipitation and clay content of soil are significant land to water transport variables and various stream sizes show significance with respect to aquatic decay. Specific rock types result in different levels of phosphorus loading and watershed yield. Some important geological sources are volcanic rocks (andesite and basalt), granodiorite, glacial deposits, and Mesozoic to Cenozoic marine deposits. Marine sediments vary in their phosphorus content, but are responsible for some of the highest natural phosphorus yields, especially along the Central and Southern California coast. The Miocene Monterey Formation was found to be an especially important local source in southern California. In contrast, mixed metamorphic and igneous assemblages such as argillites, peridotite, and shales of the Trinity Mountains of northern California result in some of the lowest phosphorus yields. The agriculturally productive Central Valley of California has a low amount of background phosphorus in spite of inputs from streams draining upland areas. Many years of intensive agriculture may be responsible for the decrease of soil phosphorus in that area. Watersheds with significant background sources of phosphorus and large amounts of cultivated land had some of the highest per hectare yields. Seven different stream systems important for water management, or to describe transport processes, were investigated in detail for downstream changes in sources and loads. For example, the Klamath River (Oregon and California) has intensive agriculture and andesite-derived phosphorus in the upper reach. The proportion of agricultural-derived phosphorus decreases as the river flows into California before discharge to the ocean. The river flows through at least three different types of geological background sources from high to intermediate to very low. Knowledge of the role of natural sources in developed watersheds is critical for developing nutrient management strategies and these model results will have applicability for the establishment of realistic nutrient criteria.
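    The mass-balance structure summarized above, source terms scaled by a land-to-water delivery factor and attenuated by in-stream decay, calibrated against monitored loads, can be sketched in a few lines. The single exponential delivery factor, the first-order decay term, and all data below are illustrative assumptions, not the USGS SPARROW implementation or the California calibration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical catchment data: per-catchment source inputs (kg/yr) and
    # land-to-water / in-stream predictors. Invented, not USGS data.
    rng = np.random.default_rng(0)
    n = 200
    fertilizer = rng.uniform(0, 5e4, n)        # fertilizer P applied
    geologic   = rng.uniform(0, 1e4, n)        # P from background geology
    point_src  = rng.uniform(0, 2e4, n)        # point-source discharge
    precip     = rng.uniform(300, 2000, n)     # mm/yr, land-to-water driver
    travel_t   = rng.uniform(0.1, 5.0, n)      # days of in-stream travel time

    def sparrow_like(X, b_fert, b_geo, b_pt, a_precip, k_decay):
        """Simplified SPARROW-style load model:
        load = (sum of source terms) * exp(a * precip_norm) * exp(-k * travel_time)."""
        fert, geo, pt, pr, tt = X
        delivery = np.exp(a_precip * (pr - pr.mean()) / pr.std())   # land-to-water factor
        instream = np.exp(-k_decay * tt)                            # first-order decay
        return (b_fert * fert + b_geo * geo + b_pt * pt) * delivery * instream

    # Synthetic "monitored" loads generated from known coefficients plus noise.
    true = (0.02, 0.08, 0.9, 0.3, 0.1)
    X = (fertilizer, geologic, point_src, precip, travel_t)
    load_obs = sparrow_like(X, *true) * rng.lognormal(0, 0.2, n)

    popt, _ = curve_fit(sparrow_like, X, load_obs,
                        p0=(0.01, 0.01, 0.5, 0.1, 0.05), maxfev=20000)
    print("fitted source/delivery/decay coefficients:", np.round(popt, 3))
    ```

    In the published model, the analogous calibration is performed over the NHD Plus stream network with many more source and transport variables; the toy fit only illustrates the idea of attributing monitored loads to sources.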

  14. A Randomized, Rater-Blinded, Parallel Trial of Intensive Speech Therapy in Sub-Acute Post-Stroke Aphasia: The SP-I-R-IT Study

    ERIC Educational Resources Information Center

    Martins, Isabel Pavao; Leal, Gabriela; Fonseca, Isabel; Farrajota, Luisa; Aguiar, Marta; Fonseca, Jose; Lauterbach, Martin; Goncalves, Luis; Cary, M. Carmo; Ferreira, Joaquim J.; Ferro, Jose M.

    2013-01-01

    Background: There is conflicting evidence regarding the benefits of intensive speech and language therapy (SLT), particularly because intensity is often confounded with total SLT provided. Aims: A two-centre, randomized, rater-blinded, parallel study was conducted to compare the efficacy of 100 h of SLT in a regular (RT) versus intensive (IT)…

  15. Achieving Superior Tropical Cyclone Intensity Forecasts by Improving the Assimilation of High-Resolution Satellite Data into Mesoscale Prediction Models

    DTIC Science & Technology

    2013-09-30

    using polar orbit microwave and infrared sounder measurements from the Global Telecommunication System (GTS). The SDAT system was developed as a...WRF/GSI initial conditions and WRF boundary conditions. • WRF system to do short-range forecasts (6 hours) to provide the background fields for GSI...UCAR is related to a NASA GNSS proposal: “Improving Tropical Prediction and Analysis using COSMIC Radio Occultation Observations and an Ensemble Data

  16. Infrared small target detection in heavy sky scene clutter based on sparse representation

    NASA Astrophysics Data System (ADS)

    Liu, Depeng; Li, Zhengzhou; Liu, Bing; Chen, Wenhao; Liu, Tianmei; Cao, Lei

    2017-09-01

A novel infrared small target detection method based on sparse representation of sky clutter and targets is proposed in this paper to cope with the uncertainty in representing clutter and targets. The sky scene background clutter is described by a fractal random field, and it is perceived and eliminated via sparse representation on a fractal background over-complete dictionary (FBOD). The infrared small target signal is simulated by a generalized Gaussian intensity model, and it is expressed by a generalized Gaussian target over-complete dictionary (GGTOD), which can describe small targets more efficiently than traditional structured dictionaries. The infrared image is decomposed on the union of the FBOD and the GGTOD, and the sparse representation energies of the target signal and the background clutter on the GGTOD differ so distinctly that this difference is adopted to distinguish targets from clutter. Experiments were conducted, and the results show that the proposed approach improves small target detection performance, especially under heavy clutter, because the background clutter can be efficiently perceived and suppressed by the FBOD while the varying target can be represented accurately by the GGTOD.
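    The detection principle described above, decomposing a signal on the union of a background dictionary and a target dictionary and comparing the representation energies, can be illustrated with a small sketch. The smooth random "clutter atoms" and Gaussian "target atoms" below are simplified stand-ins for the paper's FBOD and GGTOD, so treat this as an assumption-laden illustration rather than the authors' method.

    ```python
    import numpy as np

    def omp(D, y, k):
        """Greedy orthogonal matching pursuit: k-sparse code of y over dictionary D (columns = atoms)."""
        residual, idx = y.copy(), []
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(D.T @ residual))))
            coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
            residual = y - D[:, idx] @ coef
        x = np.zeros(D.shape[1])
        x[idx] = coef
        return x

    rng = np.random.default_rng(1)
    n = 64                                       # a flattened 1-D "patch"
    # Hypothetical background dictionary: smooth random-walk clutter atoms (stand-in for FBOD).
    D_bg = rng.normal(size=(n, 40)).cumsum(axis=0)
    # Hypothetical target dictionary: shifted narrow Gaussian blobs (stand-in for GGTOD).
    grid = np.arange(n)
    D_tg = np.stack([np.exp(-0.5 * ((grid - c) / 2.0) ** 2) for c in range(4, 60, 4)], axis=1)
    D = np.hstack([D_bg, D_tg])
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

    # A patch = clutter plus a weak Gaussian target.
    patch = 0.5 * D_bg[:, 3] / np.linalg.norm(D_bg[:, 3]) + 0.8 * D[:, D_bg.shape[1] + 5]

    x = omp(D, patch, k=5)
    e_bg = np.sum(x[:D_bg.shape[1]] ** 2)        # representation energy on background atoms
    e_tg = np.sum(x[D_bg.shape[1]:] ** 2)        # representation energy on target atoms
    print("target present" if e_tg > e_bg else "clutter only", e_bg, e_tg)
    ```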

  17. Pain Intensity Moderates the Relationship Between Age and Pain Interference in Chronic Orofacial Pain Patients.

    PubMed

    Boggero, Ian A; Geiger, Paul J; Segerstrom, Suzanne C; Carlson, Charles R

    2015-01-01

BACKGROUND/STUDY CONTEXT: Chronic pain is associated with increased interference in daily functioning that becomes more pronounced as pain intensity increases. Based on previous research showing that older adults maintain well-being in the face of pain as well as or better than their younger counterparts, the current study examined the interaction of age and pain intensity on interference in a sample of chronic orofacial pain patients. Data were obtained from the records of 508 chronic orofacial pain patients being seen for an initial evaluation from 2008 to 2012. Collected data included age (range: 18-78) and self-reported measures of pain intensity and pain interference. Bivariate correlations and regression models were used to assess for statistical interactions. Regression analyses revealed that pain intensity positively predicted pain interference (R² = .35, B = 10.40, SE = 0.62, t(507) = 16.70, p < .001). A significant interaction supported the primary hypothesis that aging was associated with reduced interference at high levels of pain intensity (ΔR² = .01, B = -1.31, SE = 0.63, t(505) = -2.90, p = .04). At high levels of pain intensity, interference decreased with age, although the age by pain intensity interaction effect was small. This evidence converges with aging theories, including socioemotional selectivity theory, which posits that as people age, they become more motivated to maximize positive emotions and minimize negative ones. The results highlight the importance of studying the mechanisms older adults use to successfully cope with pain.
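    The moderation analysis reported here is, in effect, an ordinary least-squares regression with an age × pain-intensity interaction term. A generic sketch of that model on simulated data (hypothetical effect sizes and variable names, not the clinic records) is shown below.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data standing in for the patient records (n and effect sizes invented).
    rng = np.random.default_rng(2)
    n = 508
    age = rng.uniform(18, 78, n)
    intensity = rng.uniform(0, 10, n)
    # Simulate interference so that the age slope is negative only at high intensity.
    interference = (10 * intensity - 0.15 * (age - 18) * (intensity - 5)
                    + rng.normal(0, 8, n))

    df = pd.DataFrame({"age": age, "intensity": intensity, "interference": interference})
    # Center predictors so the main effects are interpretable at the sample means,
    # then fit interference ~ intensity + age + intensity:age (the moderation model).
    df["age_c"] = df.age - df.age.mean()
    df["int_c"] = df.intensity - df.intensity.mean()
    fit = smf.ols("interference ~ int_c * age_c", data=df).fit()
    print(fit.params)   # a negative int_c:age_c coefficient indicates the moderation effect
    ```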

  18. SiPM timing characteristics under conditions of a large background for lidars

    NASA Astrophysics Data System (ADS)

    Antonova, A. M.; Kaplin, V. A.

    2018-01-01

Silicon photomultipliers (SiPMs) have found use in various fields of industry and in scientific experiments. This paper studies the ability of a SiPM to detect low-intensity light pulses (down to single photons) under high-intensity background illumination. This may be useful for the development of laser rangefinders operating in natural light that use a SiPM as the key photosensor. Moreover, the presented data describe some physical properties of a lidar with a SiPM under radiation exposure, which always affects its intrinsic noise.

  19. Uremic Pruritus, Dialysis Adequacy, and Metabolic Profiles in Hemodialysis Patients: A Prospective 5-Year Cohort Study

    PubMed Central

    Chen, Hung-Yuan; Chiu, Yen-Ling; Hsu, Shih-Ping; Pai, Mei-Fen; Ju-YehYang; Lai, Chun-Fu; Lu, Hui-Min; Huang, Shu-Chen; Yang, Shao-Yu; Wen, Su-Yin; Chiu, Hsien-Ching; Hu, Fu-Chang; Peng, Yu-Sen; Jee, Shiou-Hwa

    2013-01-01

    Background Uremic pruritus is a common and intractable symptom in patients on chronic hemodialysis, but factors associated with the severity of pruritus remain unclear. This study aimed to explore the associations of metabolic factors and dialysis adequacy with the aggravation of pruritus. Methods We conducted a 5-year prospective cohort study on patients with maintenance hemodialysis. A visual analogue scale (VAS) was used to assess the intensity of pruritus. Patient demographic and clinical characteristics, laboratory parameters, dialysis adequacy (assessed by Kt/V), and pruritus intensity were recorded at baseline and follow-up. Change score analysis of the difference score of VAS between baseline and follow-up was performed using multiple linear regression models. The optimal threshold of Kt/V, which is associated with the aggravation of uremic pruritus, was determined by generalized additive models and receiver operating characteristic analysis. Results A total of 111 patients completed the study. Linear regression analysis showed that lower Kt/V and use of low-flux dialyzer were significantly associated with the aggravation of pruritus after adjusting for the baseline pruritus intensity and a variety of confounding factors. The optimal threshold value of Kt/V for pruritus was 1.5 suggested by both generalized additive models and receiver operating characteristic analysis. Conclusions Hemodialysis with the target of Kt/V ≥1.5 and use of high-flux dialyzer may reduce the intensity of pruritus in patients on chronic hemodialysis. Further clinical trials are required to determine the optimal dialysis dose and regimen for uremic pruritus. PMID:23940749

  20. A multi-resolution ensemble study of a tropical urban environment and its interactions with the background regional atmosphere

    NASA Astrophysics Data System (ADS)

    Li, Xian-Xiang; Koh, Tieh-Yong; Entekhabi, Dara; Roth, Matthias; Panda, Jagabandhu; Norford, Leslie K.

    2013-09-01

    This study employed the Weather Research and Forecasting model with a single-layer urban canopy model to investigate the urban environment of a tropical city, Singapore. The coupled model was evaluated against available observational data from a sensor network and flux tower. The effects of land use type and anthropogenic heat (AH) on the thermal and wind environment were investigated with a series of sensitivity tests using an ensemble approach for low advection, high convective available potential energy, intermonsoon season cases. The diurnal cycle and spatial pattern of urban heat island (UHI) intensity and planetary boundary layer height were investigated. The mean UHI intensity peaked in the early morning at 2.2°C, reaching 2.4°C in industrial areas. Sea and land breezes developed during daytime and nighttime, respectively, with the former much stronger than the latter. The model predicted that sea breezes from different coastlines of the Malay Peninsula meet and converge, inducing strong updrafts. AH was found to play roles in all the processes studied, while the effect of different land use types was most pronounced during nighttime, and least visible near noon.

  1. Quantitative image analysis of immunohistochemical stains using a CMYK color model

    PubMed Central

    Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W

    2007-01-01

Background Computer image analysis techniques have decreased effects of observer biases, and increased the sensitivity and the throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file), both in absolute terms and relative to the hematoxylin counterstain, was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis, was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of the DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed a strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases in threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity, and amenable to simple automation procedures. These characteristics are advantageous for both basic as well as clinical research in an unbiased, reproducible and high throughput evaluation of IHC intensity. PMID:17326824
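    The key step, converting an RGB micrograph to a CMYK-like representation and reading off the Yellow channel, can be sketched with the standard naive RGB-to-CMYK conversion. The conversion formula, example colors and 0-255 scaling below are generic assumptions, not the exact pipeline used in the paper.

    ```python
    import numpy as np

    def yellow_channel(rgb):
        """Naive RGB -> CMYK conversion; return the Yellow channel on a 0-255 scale.
        rgb: float array in [0, 1], shape (H, W, 3)."""
        k = 1.0 - rgb.max(axis=-1)                        # Key (black)
        denom = np.clip(1.0 - k, 1e-6, None)
        y = (1.0 - rgb[..., 2] - k) / denom               # Yellow = (1 - B - K) / (1 - K)
        return np.clip(y, 0, 1) * 255.0

    # Hypothetical example: brown DAB-like pixels next to bluish hematoxylin background.
    dab  = np.full((10, 10, 3), [0.55, 0.35, 0.15])       # brownish stain
    hema = np.full((10, 10, 3), [0.45, 0.50, 0.70])       # bluish counterstain
    img = np.concatenate([dab, hema], axis=1)

    y = yellow_channel(img)
    print("mean Yellow, DAB region :", y[:, :10].mean())
    print("mean Yellow, hematoxylin:", y[:, 10:].mean())  # much lower for the counterstain
    ```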

  2. Initiation-promotion model of tumor prevalence in mice from space radiation exposures

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Wilson, J. W.

    1995-01-01

Exposures in space consist of low-level background components from galactic cosmic rays (GCR), occasional intense-energetic solar-particle events, periodic passes through geomagnetic-trapped radiation, and exposure from possible onboard nuclear-propulsion engines. Risk models for astronaut exposure from such diverse components and modalities must be developed to assure adequate protection in future NASA missions. The low-level background exposures (GCR), including relativistic heavy ions (HZE), will be the ultimate limiting factor for astronaut career exposure. We consider herein a two-mutation, initiation-promotion, radiation-carcinogenesis model in mice in which the initiation stage is represented by a linear kinetics model of cellular repair/misrepair, including the track-structure model for heavy ion action cross-sections. The model is validated by comparison with the Harderian gland tumor experiments of Alpen et al. for various ion beams. We apply the initiation-promotion model to exposures from galactic cosmic rays, using models of the cosmic-ray environment and heavy ion transport, and consider the effects of the age of the mice prior to and after the exposure and of the length of time in space on predictions of relative risk. Our results indicate that biophysical models of age-dependent radiation hazard will provide a better understanding of GCR risk than models that rely strictly on estimates of the initial slopes of these radiations.

  3. Cosmic-muon intensity measurement and overburden estimation in a building at surface level and in an underground facility using two BC408 scintillation detectors coincidence counting system.

    PubMed

    Zhang, Weihua; Ungar, Kurt; Liu, Chuanlei; Mailhot, Maverick

    2016-10-01

    A series of measurements have been recently conducted to determine the cosmic-muon intensities and attenuation factors at various indoor and underground locations for a gamma spectrometer. For this purpose, a digital coincidence spectrometer was developed by using two BC408 plastic scintillation detectors and an XIA LLC Digital Gamma Finder (DGF)/Pixie-4 software and card package. The results indicate that the overburden in the building at surface level absorbs a large part of cosmic ray protons while attenuating the cosmic-muon intensity by 20-50%. The underground facility has the largest overburden of 39 m water equivalent, where the cosmic-muon intensity is reduced by a factor of 6. The study provides a cosmic-muon intensity measurement and overburden assessment, which are important parameters for analysing the background of an HPGe counting system, or for comparing the background of similar systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin Shoufu, E-mail: linshf2003@126.co; Zhao Dingtao, E-mail: box@ustc.edu.c; Marinova, Dora, E-mail: D.Marinova@curtin.edu.a

Assuming that energy consumption is the main source of GHG emissions in China, this paper analyses the effect of population, urbanisation level, GDP per capita, industrialisation level and energy intensity on the country's environmental impact using the STIRPAT model with data for 1978-2006. The analysis shows that population has the largest potential effect on environmental impact, followed by urbanisation level, industrialisation level, GDP per capita and energy intensity. Hence, China's One Child Policy, which restrains rapid population growth, has been an effective way of reducing the country's environmental impact. However, due to the difference in growth rates, GDP per capita had a higher effect on the environmental impact, contributing to 38% of its increase (while population's contribution was at 32%). The rapid decrease in energy intensity was the main factor restraining the increase in China's environmental impact but recently it has also been rising. Against this background, the future of the country looks bleak unless a change in human behaviour towards more ecologically sensitive economic choices occurs.
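    STIRPAT analyses of this kind are usually estimated in log-linear form, ln I = a + b ln P + c ln A + ... + e, so that the fitted coefficients are elasticities. A small sketch of that regression on invented series (not the 1978-2006 Chinese data used in the paper) is given below; as in the real data, the predictors are strongly collinear, so individual elasticities carry large uncertainty.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1978, 2007)
    t = len(years)

    # Hypothetical annual series standing in for population (P), GDP per capita (A),
    # urbanisation (U), industrialisation (S) and energy intensity (T) -- invented values.
    P = 9.6e8 * np.cumprod(1 + rng.normal(0.012, 0.003, t))
    A = 200.0 * np.cumprod(1 + rng.normal(0.090, 0.020, t))
    U = 18.0 + 0.9 * np.arange(t)
    S = 40.0 + 0.3 * np.arange(t)
    T = 12.0 * np.cumprod(1 - rng.normal(0.040, 0.010, t))

    # Environmental impact generated with known elasticities plus noise.
    I = 1e-9 * P**1.1 * A**0.4 * U**0.5 * S**0.3 * T**0.2 * rng.lognormal(0.0, 0.02, t)

    # Log-linear STIRPAT: ln I = a + b*lnP + c*lnA + d*lnU + e*lnS + f*lnT
    X = np.column_stack([np.ones(t), np.log(P), np.log(A), np.log(U), np.log(S), np.log(T)])
    coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    print("fitted elasticities (P, A, U, S, T):", np.round(coef[1:], 2))
    ```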

  5. Introduction of the identification, situation, background, assessment, recommendations tool to improve the quality of information transfer during medical handover in intensive care.

    PubMed

    Ramasubbu, Benjamin; Stewart, Emma; Spiritoso, Rosalba

    2017-02-01

To audit the quality and safety of the current doctor-to-doctor handover of patient information in our Cardiothoracic Intensive Care Unit. If deficient, to implement a validated handover tool to improve the quality of the handover process. In Cycle 1 we observed the verbal handover and reviewed the written handover information transferred for 50 consecutive patients in St George's Hospital Cardiothoracic Intensive Care Unit. For each patient's handover, we assessed whether each section of the Identification, Situation, Background, Assessment, Recommendations tool was used on a scale of 0-2: zero if no information in that category was transferred, one if the information was partially transferred, and two if all relevant information was transferred. Each patient's handover received a score from 0 to 10 and thus each cycle a total score of 0-500. Following the implementation of the Identification, Situation, Background, Assessment, Recommendations handover tool in our Intensive Care Unit in Cycle 2, we re-observed the handover process for another 50 consecutive patients, hence completing the audit cycle. There was a significant difference between the total scores from Cycle 1 and 2 (263/500 versus 457/500, p < 0.001). The median handover score for Cycle 1 was 5/10 (interquartile range 4-6). The median handover score for Cycle 2 was 9/10 (interquartile range 9-10). Patient handover scores increased significantly between Cycle 1 and 2, U = 13.5, p < 0.001. The introduction of a standardised handover template (Identification, Situation, Background, Assessment, Recommendations tool) has improved the quality and safety of the doctor-to-doctor handover of patient information in our Intensive Care Unit.

  6. Improving Precision, Maintaining Accuracy, and Reducing Acquisition Time for Trace Elements in EPMA

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Singer, J.; Armstrong, J. T.

    2016-12-01

Trace element precision in electron probe microanalysis (EPMA) is limited by intrinsic random variation in the x-ray continuum. Traditionally we characterize background intensity by measuring on either side of the emission line and interpolating the intensity underneath the peak to obtain the net intensity. Alternatively, we can measure the background intensity at the on-peak spectrometer position using a number of standard materials that do not contain the element of interest. This so-called mean atomic number (MAN) background calibration (Donovan et al., 2016) uses a set of standard measurements, covering an appropriate range of average atomic number, to iteratively estimate the continuum intensity for the unknown composition (and hence average atomic number). We will demonstrate that, at least for materials with a relatively simple matrix such as SiO2, TiO2, ZrSiO4, etc., where one may obtain a matrix matched standard for use in the so-called "blank correction", we can obtain trace element accuracy comparable to traditional off-peak methods, and with improved precision, in about half the time. Donovan, Singer and Armstrong, "A New EPMA Method for Fast Trace Element Analysis in Simple Matrices", American Mineralogist, v101, p1839-1853, 2016. Figure 1: Uranium concentration line profiles from quantitative x-ray maps (20 keV, 100 nA, 5 µm beam size and 4000 ms per pixel), for both off-peak and MAN background methods without (a) and with (b) the blank correction applied. Precision is significantly improved compared with traditional off-peak measurements while, in this case, the blank correction provides a small but discernible improvement in accuracy.
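    The two background strategies contrasted here, interpolating off-peak measurements versus predicting the continuum intensity from mean atomic number (MAN), can be sketched numerically. The linear off-peak interpolation, the linear-in-Zbar MAN fit, and every count rate below are simplified, invented stand-ins, not the published calibration.

    ```python
    import numpy as np

    # --- Off-peak method: interpolate the continuum under the peak from two side points.
    bg_lo, bg_hi = 12.4, 10.8                       # counts/s/nA measured below/above the peak
    pos_lo, pos_hi, pos_pk = 0.195, 0.215, 0.205    # hypothetical spectrometer positions
    bg_onpeak = bg_lo + (bg_hi - bg_lo) * (pos_pk - pos_lo) / (pos_hi - pos_lo)
    net_offpeak = 13.1 - bg_onpeak                  # on-peak counts minus interpolated background

    # --- MAN method: calibrate continuum vs. mean atomic number on standards that do not
    #     contain the trace element, then predict the background for the unknown matrix.
    zbar_std = np.array([10.8, 12.5, 14.1, 20.2, 26.0])   # mean atomic numbers of standards
    bg_std   = np.array([11.1, 12.8, 14.4, 20.5, 26.3])   # measured on-peak continuum (counts/s/nA)
    a, b = np.polyfit(zbar_std, bg_std, 1)                 # simple linear MAN calibration
    zbar_unknown = 11.3                                    # e.g. a SiO2-dominated matrix
    net_man = 13.1 - (a * zbar_unknown + b)

    print(f"net intensity, off-peak: {net_offpeak:.2f}   MAN: {net_man:.2f} counts/s/nA")
    ```

    The advantage claimed in the abstract comes from not spending counting time on the two off-peak positions; the MAN prediction substitutes for them once the standard-based calibration exists.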

  7. A simple and complete model for wind turbine wakes over complex terrain

    NASA Astrophysics Data System (ADS)

    Rommelfanger, Nick; Rajborirug, Mai; Luzzatto-Fegiz, Paolo

    2017-11-01

    Simple models for turbine wakes have been used extensively in the wind energy community, both as independent tools, as well as to complement more refined and computationally-intensive techniques. These models typically prescribe empirical relations for how the wake radius grows with downstream distance x and obtain the wake velocity at each x through the application of either mass conservation, or of both mass and momentum conservation (e.g. Katić et al. 1986; Frandsen et al. 2006; Bastankhah & Porté-Agel 2014). Since these models assume a global behavior of the wake (for example, linear spreading with x) they cannot respond to local changes in background flow, as may occur over complex terrain. Instead of assuming a global wake shape, we develop a model by relying on a local assumption for the growth of the turbulent interface. To this end, we introduce to wind turbine wakes the use of the entrainment hypothesis, which has been used extensively in other areas of geophysical fluid dynamics. We obtain two coupled ordinary differential equations for mass and momentum conservation, which can be readily solved with a prescribed background pressure gradient. Our model is in good agreement with published data for the development of wakes over complex terrain.
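    A minimal version of the entrainment-hypothesis wake model described here, mass and momentum budgets for a top-hat wake whose interface entrains fluid at a rate proportional to the local velocity deficit, can be written as two coupled ODEs and integrated with an off-the-shelf solver. The entrainment coefficient, thrust coefficient, uniform inflow and initial conditions below are invented for illustration, not the authors' calibrated values.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha = 0.08            # entrainment coefficient (assumed, order of typical values)
    U_b   = 8.0             # uniform background wind speed, m/s
    D0    = 100.0           # rotor diameter, m
    Ct    = 0.8             # thrust coefficient (assumed)

    # Initial top-hat wake just behind the rotor (simple actuator-disc estimate).
    u0 = U_b * (1.0 - 0.5 * Ct)          # initial wake velocity
    r0 = 0.5 * D0
    Q0 = r0**2 * u0                      # volume flux / pi
    M0 = r0**2 * u0 * (U_b - u0)         # momentum-deficit flux / (pi * rho)

    def rhs(x, y):
        """Entrainment-hypothesis wake: Q grows by entrainment, the momentum deficit M is
        conserved for a uniform background (a terrain-induced pressure gradient would add
        a source term to dM/dx)."""
        Q, M = y
        u = U_b - M / Q                  # wake velocity recovered from the two fluxes
        r = np.sqrt(Q / u)               # top-hat wake radius
        return [2.0 * alpha * r * (U_b - u), 0.0]

    sol = solve_ivp(rhs, [0.0, 20 * D0], [Q0, M0], dense_output=True, max_step=10.0)
    x = np.linspace(0, 20 * D0, 5)
    Q, M = sol.sol(x)
    u = U_b - M / Q
    print("x/D :", x / D0)
    print("u_w :", np.round(u, 2))       # wake velocity recovering toward U_b downstream
    ```

    Because the growth rate is set locally by the deficit rather than by a prescribed global spreading law, a spatially varying background velocity or pressure gradient can simply be substituted into the right-hand side.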

  8. Sensor performance and weather effects modeling for intelligent transportation systems (ITS) applications

    NASA Astrophysics Data System (ADS)

    Everson, Jeffrey H.; Kopala, Edward W.; Lazofson, Laurence E.; Choe, Howard C.; Pomerleau, Dean A.

    1995-01-01

Optical sensors are used for several ITS applications, including lateral control of vehicles, traffic sign recognition, car following, autonomous vehicle navigation, and obstacle detection. This paper treats the performance assessment of a sensor/image processor used as part of an on-board countermeasure system to prevent single vehicle roadway departure crashes. Sufficient image contrast between objects of interest and backgrounds is an essential factor influencing overall system performance. Contrast is determined by material properties affecting reflected/radiated intensities, as well as weather and visibility conditions. This paper discusses the modeling of these parameters and characterizes the contrast performance effects due to reduced visibility. The analysis process first involves generation of inherent road/off-road contrasts, followed by weather effects as a contrast modification. The sensor is modeled as a charge coupled device (CCD), with variable parameters. The results of the sensor/weather modeling are used to predict the performance of an in-vehicle warning system under various levels of adverse weather. Software employed in this effort was previously developed for the U.S. Air Force Wright Laboratory to determine target/background detection and recognition ranges for different sensor systems operating under various mission scenarios.

  9. Absolute intensity measurements of impurity emissions in a shock tunnel and their consequences for laser-induced fluorescence experiments

    NASA Technical Reports Server (NTRS)

    Palma, P. C.; Houwing, A. F. P.; Sandeman, R. J.

    1993-01-01

    Absolute intensity measurements of impurity emissions in a shock tunnel nozzle flow are presented. The impurity emission intensities were measured with a photomultiplier and optical multichannel analyzer and calibrated against an intensity standard. The various metallic contaminants were identified and their intensities measured in the spectral regions 290 to 330 nm and 375 to 385 nm. A comparison with calculated fluorescence intensities for predissociated laser-induced fluorescence signals is made. It is found that the emission background is negligible for most fluorescence experiments.

  10. The Extragalactic Background Light and the Gamma-ray Opacity of the Universe

    NASA Technical Reports Server (NTRS)

    Dwek, Eli; Krennrich, Frank

    2012-01-01

    The extragalactic background light (EBL) is one of the fundamental observational quantities in cosmology. All energy releases from resolved and unresolved extragalactic sources, and the light from any truly diffuse background, excluding the cosmic microwave background (CMB), contribute to its intensity and spectral energy distribution. It therefore plays a crucial role in cosmological tests for the formation and evolution of stellar objects and galaxies, and for setting limits on exotic energy releases in the universe. The EBL also plays an important role in the propagation of very high energy gamma-rays which are attenuated en route to Earth by pair producing gamma-gamma interactions with the EBL and CMB. The EBL affects the spectrum of the sources, predominantly blazars, in the approx 10 GeV to 10 TeV energy regime. Knowledge of the EBL intensity and spectrum will allow the determination of the intrinsic blazar spectrum in a crucial energy regime that can be used to test particle acceleration mechanisms and VHE gamma-ray production models. Conversely, knowledge of the intrinsic gamma-ray spectrum and the detection of blazars at increasingly higher redshifts will set strong limits on the EBL and its evolution. This paper reviews the latest developments in the determination of the EBL and its impact on the current understanding of the origin and production mechanisms of gamma-rays in blazars, and on energy releases in the universe. The review concludes with a summary and future directions in Cherenkov Telescope Array techniques and in infrared ground-based and space observatories that will greatly improve our knowledge of the EBL and the origin and production of very high energy gamma-rays.

  11. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
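    The correction loop described above, an iterative data-fit update followed by 3-D DCT filtering of the intermediate image and an L1 sparsity step, can be sketched generically. The forward matrix, the gradient-descent stand-in for the Tikhonov update, the choice of damping the lowest-frequency DCT coefficients, and all thresholds below are assumptions made for illustration, not the authors' reconstruction code.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(4)
    shape = (8, 8, 8)                          # tiny toy volume
    nvox, nmeas = int(np.prod(shape)), 128
    A = rng.normal(size=(nmeas, nvox))         # stand-in forward (weight) matrix
    x_true = np.zeros(nvox)
    x_true[100], x_true[300] = 1.0, 0.7        # two sparse fluorescent targets
    bg = 0.05 * np.ones(nvox)                  # smooth background fluorescence
    b = A @ (x_true + bg) + 0.02 * rng.normal(size=nmeas)

    def dct_background_filter(x, cut=2, atten=0.2):
        """3-D DCT filtering of the intermediate image: damp the lowest-frequency
        (smooth, background-like) coefficients. Cut-off and attenuation are guesses."""
        c = dctn(x.reshape(shape), norm="ortho")
        c[:cut, :cut, :cut] *= atten
        return idctn(c, norm="ortho").ravel()

    def soft_threshold(x, t):
        """L1 sparsity step (proximal operator of t * ||x||_1)."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    x = np.zeros(nvox)
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # gradient step for the data-fit term
    for _ in range(300):
        x = x + step * A.T @ (b - A @ x)       # data-fit update (Tikhonov/Landweber-style)
        x = dct_background_filter(x)           # filter the intermediate result
        x = soft_threshold(x, 1e-3)            # restrain energy via L1 regularization

    print("strongest reconstructed voxels:", np.argsort(np.abs(x))[-2:], "(targets seeded at 100, 300)")
    ```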

  12. Effect of gravitational stratification on the propagation of a CME

    NASA Astrophysics Data System (ADS)

    Pagano, P.; Mackay, D. H.; Poedts, S.

    2013-12-01

Context. Coronal mass ejections (CMEs) are the most violent phenomenon found on the Sun. One model that explains their occurrence is the flux rope ejection model. A magnetic flux rope is ejected from the solar corona and reaches interplanetary space, where it interacts with the pre-existing magnetic fields and plasma. Both gravity and the stratification of the corona affect the early evolution of the flux rope. Aims: Our aim is to study the role of gravitational stratification on the propagation of CMEs. In particular, we assess how it influences the speed and shape of CMEs and under what conditions the flux rope ejection becomes a CME or when it is quenched. Methods: We ran a set of MHD simulations that adopt an eruptive initial magnetic configuration that has already been shown to be suitable for a flux rope ejection. We varied the temperature of the background corona and the intensity of the initial magnetic field to tune the gravitational stratification and the amount of ejected magnetic flux. We used an automatic technique to track the expansion and the propagation of the magnetic flux rope in the MHD simulations. From the analysis of the parameter space, we evaluate the role of gravitational stratification on the CME speed and expansion. Results: Our study shows that gravitational stratification plays a significant role in determining whether the flux rope ejection will turn into a full CME or whether the magnetic flux rope will stop in the corona. The CME speed is affected by the background corona: the CME travels faster when the corona is colder and when the initial magnetic field is more intense. The fastest CME we reproduce in our parameter space travels at ~850 km s-1. Moreover, the background gravitational stratification plays a role in the side expansion of the CME, and we find that when the background temperature is higher, the resulting shape of the CME is more flattened. Conclusions: Our study shows that although the initiation mechanisms of the CME are purely magnetic, the background coronal plasma plays a key role in the CME propagation, and full MHD models should be applied when one focuses especially on the production of a CME from a flux rope ejection. Movies are available in electronic form at http://www.aanda.org

  13. Inviscid Limit for Damped and Driven Incompressible Navier-Stokes Equations in ℝ²

    NASA Astrophysics Data System (ADS)

    Ramanah, D.; Raghunath, S.; Mee, D. J.; Rösgen, T.; Jacobs, P. A.

    2007-08-01

    Experiments to demonstrate the use of the background-oriented schlieren (BOS) technique in hypersonic impulse facilities are reported. BOS uses a simple optical set-up consisting of a structured background pattern, an electronic camera with a high shutter speed and a high intensity light source. The visualization technique is demonstrated in a small reflected shock tunnel with a Mach 4 conical nozzle, nozzle supply pressure of 2.2 MPa and nozzle supply enthalpy of 1.8 MJ/kg. A 20° sharp circular cone and a model of the MUSES-C re-entry body were tested. Images captured were processed using PIV-style image analysis to visualize variations in the density field. The shock angle on the cone measured from the BOS images agreed with theoretical calculations to within 0.5°. Shock standoff distances could be measured from the BOS image for the re-entry body. Preliminary experiments are also reported in higher enthalpy facilities where flow luminosity can interfere with imaging of the background pattern.

  14. Source detection in astronomical images by Bayesian model comparison

    NASA Astrophysics Data System (ADS)

    Frean, Marcus; Friedlander, Anna; Johnston-Hollitt, Melanie; Hollitt, Christopher

    2014-12-01

    The next generation of radio telescopes will generate exabytes of data on hundreds of millions of objects, making automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are faint, diffuse objects embedded in noise. There is a pressing need for source finding software that identifies these sources, involves little manual tuning, yet is tractable to calculate. We first give a novel image discretisation method that incorporates uncertainty about how an image should be discretised. We then propose a hierarchical prior for astronomical images, which leads to a Bayes factor indicating how well a given region conforms to a model of source that is exceptionally unconstrained, compared to a model of background. This enables the efficient localisation of regions that are "suspiciously different" from the background distribution, so our method looks not for brightness but for anomalous distributions of intensity, which is much more general. The model of background can be iteratively improved by removing the influence on it of sources as they are discovered. The approach is evaluated by identifying sources in real and simulated data, and performs well on these measures: the Bayes factor is maximized at most real objects, while returning only a moderate number of false positives. In comparison to a catalogue constructed by widely-used source detection software with manual post-processing by an astronomer, our method found a number of dim sources that were missing from the "ground truth" catalogue.
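    The central quantity in this approach is a per-region Bayes factor comparing a background-only model against a model in which the region's intensity distribution is left almost unconstrained. A toy version, a Gaussian background with known parameters versus a broad-prior alternative evaluated on image patches, is sketched below; the priors, patch grid and thresholds are simplifications of the paper's hierarchical model, with invented data.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    mu0, sigma0 = 0.0, 1.0                 # known background level and noise
    tau = 2.0                              # broad prior width on a source region's mean
    img = rng.normal(mu0, sigma0, (64, 64))
    img[16:24, 32:40] += 0.8               # faint extended source, aligned with the patch grid

    def log_bayes_factor(patch):
        """log BF of the 'almost unconstrained mean' source model vs. pure background.
        Both models share the noise sigma0; the source model places a broad N(mu0, tau^2)
        prior on the patch mean, so only the sample mean enters the ratio analytically."""
        n = patch.size
        ybar = patch.mean()
        s_bg = sigma0 / np.sqrt(n)                     # sampling std of the mean, background
        s_src = np.sqrt(sigma0**2 / n + tau**2)        # same, with the source prior variance added
        return norm.logpdf(ybar, mu0, s_src) - norm.logpdf(ybar, mu0, s_bg)

    # Scan the image in 8x8 patches and flag regions "suspiciously different" from background.
    for i in range(0, 64, 8):
        for j in range(0, 64, 8):
            lbf = log_bayes_factor(img[i:i + 8, j:j + 8])
            if lbf > 3.0:                              # rough "strong evidence" threshold
                print(f"patch at ({i},{j}): log Bayes factor = {lbf:.1f}")
    ```

    Because the test looks at the whole distribution of intensities in a region rather than at a brightness threshold, faint but extended departures from the background are what get flagged, which is the behaviour the abstract emphasizes.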

  15. Comparison of global storm activity rate calculated from Schumann resonance background components to electric field intensity E0Z

    NASA Astrophysics Data System (ADS)

    Nieckarz, Zenon; Kułak, Andrzej; Zięba, Stanisław; Kubicki, Marek; Michnowski, Stanisław; Barański, Piotr

    2009-02-01

This work presents the results of a comparison between the global storm activity rate IRS and the electric field intensity E0Z. Continuous analysis of the IRS may become an important tool for testing Global Electric Circuit models. IRS is determined by a new method that uses the background component of the first 7 Schumann resonances (SR). The rate calculations are based on ELF observations carried out in 2005 and 2006 in the observatory station "Hylaty" of the Jagiellonian University in the Eastern Carpathians (Kułak, A., Zięba, S., Micek, S., Nieckarz, Z., 2003. Solar variations in extremely low frequency propagation parameters: I. A two-dimensional telegraph equation (TDTE) model of ELF propagation and fundamental parameters of Schumann resonances, J. Geophys. Res., 108, 1270, doi:10.1029/2002JA009304). Diurnal runs of the IRS rate were compared with diurnal runs of E0Z amplitudes registered at the Earth's surface in the Geophysical Observatory of the Polish Academy of Sciences in Świder (Kubicki, M., 2005. Results of Atmospheric Electricity and Meteorological Observations, S. Kalinowski Geophysical Observatory at Świder 2004, Pub. Inst. Geophysics Polish Academy of Sciences, D-68 (383), Warszawa.). The days with the highest values of the correlation coefficient (R) between the amplitudes of both observed parameters characterizing atmospheric electric activity are shown. The seasonal changes of R, IRS and E0Z are also presented.

  16. Moisture Influence Reducing Method for Heavy Metals Detection in Plant Materials Using Laser-Induced Breakdown Spectroscopy: A Case Study for Chromium Content Detection in Rice Leaves.

    PubMed

    Peng, Jiyu; He, Yong; Ye, Lanhan; Shen, Tingting; Liu, Fei; Kong, Wenwen; Liu, Xiaodan; Zhao, Yun

    2017-07-18

Fast detection of heavy metals in plant materials is crucial for environmental remediation and for ensuring food safety. However, most plant materials contain a high moisture content, the influence of which cannot simply be ignored. Hence, we propose a moisture-influence-reducing method for the fast detection of heavy metals using laser-induced breakdown spectroscopy (LIBS). First, we investigated the effect of moisture content on signal intensity, stability, and plasma parameters (temperature and electron density) and determined the main factors influencing the signal variations (the experimental parameter F and the change in analyte concentration). For chromium content detection, the rice leaves underwent a quick drying procedure, and two strategies were further used to reduce the effect of moisture content and shot-to-shot fluctuation. An exponential model based on the background intensity was used to correct the actual element concentration in the analyte. Also, the signal-to-background ratio (for univariate calibration) and partial least squares regression (PLSR, for multivariate calibration) were used to compensate for prediction deviations. The PLSR calibration model gave the best result, with a correlation coefficient of 0.9669 and a root-mean-square error of 4.75 mg/kg in the prediction set. The preliminary results indicate that the proposed method allows for the detection of heavy metals in plant materials using LIBS, and it could possibly be used for element mapping in future work.
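    One of the correction strategies mentioned, univariate calibration on the signal-to-background ratio so that fluctuations affecting line and continuum alike cancel out, can be sketched as follows. The intensities, concentrations and linear calibration form are invented for illustration and are not the paper's data, its PLSR model, or its exponential background model.

    ```python
    import numpy as np

    # Hypothetical LIBS calibration samples: known Cr concentrations (mg/kg), the raw
    # Cr line intensity, and the nearby continuum background intensity for each spectrum.
    conc   = np.array([  5.,  10.,   20.,   40.,   80.,  160.])
    i_line = np.array([380., 600., 1000., 1800., 3400., 6600.])   # invented peak intensities
    i_bg   = np.array([180., 190.,  200.,  195.,  210.,  205.])   # invented background levels

    # Univariate calibration on the signal-to-background ratio (SBR) reduces the effect
    # of shot-to-shot and moisture-driven fluctuations that scale line and background alike.
    sbr = (i_line - i_bg) / i_bg
    slope, intercept = np.polyfit(conc, sbr, 1)

    # Predict the concentration of an unknown spectrum from its SBR.
    i_line_unk, i_bg_unk = 2400.0, 200.0
    sbr_unk = (i_line_unk - i_bg_unk) / i_bg_unk
    print("predicted Cr concentration:", (sbr_unk - intercept) / slope, "mg/kg")
    ```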

  17. Application of the bridged crack model for evaluation of materials repairing and self-healing

    NASA Astrophysics Data System (ADS)

    Perelmuter, M.

    2017-12-01

The bridged crack model is used for the analysis of repairing and self-healing of cracked structures. Material repairing is treated as the insertion of external ligaments into cracks or the placement of reinforcing patches over cracks. Bond destruction and regeneration in the crack bridged zone are evaluated by the thermo-fluctuation kinetic theory. The healing time depends on the chemical reaction rate of the healing agent, the crack size and the external loads. The decrease of the stress intensity factors is used as the measure of the repairing and healing effects. The mathematical background of the problem solution is based on methods for singular integro-differential equations. The model can be used for the evaluation of the durability of composite materials.

  18. New concepts in palliative care in the intensive care unit

    PubMed Central

    Coelho, Cristina Bueno Terzi; Yankaskas, James R.

    2017-01-01

    Some patients admitted to an intensive care unit may face a terminal illness situation, which usually leads to death. Knowledge of palliative care is strongly recommended for the health care providers who are taking care of these patients. In many situations, the patients should be evaluated daily as the introduction of further treatments may not be beneficial to them. The discussions among health team members that are related to prognosis and the goals of care should be carefully evaluated in collaboration with the patients and their families. The adoption of protocols related to end-of-life patients in the intensive care unit is fundamental. A multidisciplinary team is important for determining whether the withdrawal or withholding of advanced care is required. In addition, patients and families should be informed that palliative care involves the best possible care for that specific situation, as well as respect for their wishes and the consideration of social and spiritual backgrounds. Thus, the aim of this review is to present palliative care as a reasonable option to support the intensive care unit team in assisting terminally ill patients. Updates regarding diet, mechanical ventilation, and dialysis in these patients will be presented. Additionally, the hospice-model philosophy as an alternative to the intensive care unit/hospital environment will be discussed. PMID:28977262

  19. Refinement of a limit cycle oscillator model of the effects of light on the human circadian pacemaker

    NASA Technical Reports Server (NTRS)

    Jewett, M. E.; Kronauer, R. E.; Brown, E. N. (Principal Investigator)

    1998-01-01

In 1990, Kronauer proposed a mathematical model of the effects of light on the human circadian pacemaker. Although this model predicted many general features of the response of the human circadian pacemaker to light exposure, additional data now available enable us to refine the original model. We first refined the original model by incorporating the results of a dose response curve to light into the model's predicted relationship between light intensity and the strength of the drive onto the pacemaker. Data from three bright light phase resetting experiments were then used to refine the amplitude recovery characteristics of the model. Finally, the model was tested and further refined using data from an extensive phase resetting experiment in which a 3-cycle bright light stimulus was presented against a background of dim light. In order to describe the results of the four resetting experiments, the following major refinements to the original model were necessary: (i) the relationship between light intensity (I) and drive onto the pacemaker was reduced from I^(1/3) to I^(0.23) for light levels between 150 and 10,000 lux; (ii) the van der Pol oscillator from the original model was replaced with a higher-order limit cycle oscillator so that amplitude recovery is slower near the singularity and faster near the limit cycle; (iii) a direct effect of light on circadian period (τx) was incorporated into the model such that as I increases, τx decreases, which is in accordance with "Aschoff's rule". This refined model generates the following testable predictions: it should be difficult to enhance normal circadian amplitude via bright light; near the critical point of a type 0 phase response curve (PRC) the slope should be steeper than it is in a type 1 PRC; and circadian period measured during forced desynchrony should be directly affected by ambient light intensity.
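    The three refinements listed above can be caricatured in a toy simulation: a light drive proportional to I^0.23, an amplitude relaxation that is slow near the singularity and fast near the limit cycle, and a period that shortens slightly as illuminance rises. The specific oscillator form, the light schedule and all constants below are illustrative stand-ins, not Kronauer's published parameter set.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def light_schedule(t_hours, lux=9500.0):
        """16 h of light, 8 h dark, repeating daily (purely illustrative schedule)."""
        return lux if (t_hours % 24.0) < 16.0 else 0.0

    def pacemaker(t, state, gamma=0.1, q=2, tau0=24.2, k_drive=0.07, k_period=0.01):
        """Toy limit-cycle pacemaker echoing the abstract's refinements:
        - drive proportional to I**0.23,
        - amplitude relaxation ~ r**(2q): slow near the singularity, fast near the cycle,
        - instantaneous period shortened in proportion to the light drive ('Aschoff's rule')."""
        x, y = state
        I = light_schedule(t)
        B = k_drive * (I / 9500.0) ** 0.23 if I > 0 else 0.0
        tau = tau0 - k_period * B * tau0          # light slightly shortens the period
        omega = 2.0 * np.pi / tau
        r2 = x * x + y * y
        relax = gamma * (1.0 - r2) * r2 ** q      # higher-order amplitude recovery
        return [relax * x - omega * y + B,
                relax * y + omega * x]

    sol = solve_ivp(pacemaker, [0.0, 240.0], [0.9, 0.0], max_step=0.1, dense_output=True)
    t = np.linspace(200, 240, 9)
    x, y = sol.sol(t)
    print("oscillator amplitude over the last ~1.5 days:", np.round(np.hypot(x, y), 3))
    ```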

  20. Using an agent-based model to simulate children’s active travel to school

    PubMed Central

    2013-01-01

    Background Despite the multiple advantages of active travel to school, only a small percentage of US children and adolescents walk or bicycle to school. Intervention studies are in a relatively early stage and evidence of their effectiveness over long periods is limited. The purpose of this study was to illustrate the utility of agent-based models in exploring how various policies may influence children’s active travel to school. Methods An agent-based model was developed to simulate children’s school travel behavior within a hypothetical city. The model was used to explore the plausible implications of policies targeting two established barriers to active school travel: long distance to school and traffic safety. The percent of children who walk to school was compared for various scenarios. Results To maximize the percent of children who walk to school the school locations should be evenly distributed over space and children should be assigned to the closest school. In the case of interventions to improve traffic safety, targeting a smaller area around the school with greater intensity may be more effective than targeting a larger area with less intensity. Conclusions Despite the challenges they present, agent based models are a useful complement to other analytical strategies in studying the plausible impact of various policies on active travel to school. PMID:23705953
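    A stripped-down version of such an agent-based model, children who walk when their distance to the assigned school and their perceived traffic risk fall below thresholds, with safety interventions applied either over a small area with high intensity or a large area with low intensity, is sketched below. The decision rule, thresholds, intervention "budget" and city layout are all invented for illustration and are not the published model.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_children = 2000
    homes = rng.uniform(0, 10, (n_children, 2))           # home locations in a 10x10 km city
    schools = np.array([[2.5, 2.5], [7.5, 2.5], [2.5, 7.5], [7.5, 7.5]])  # evenly spread

    # Assign each child to the closest school (the distance-minimising policy in the paper).
    d = np.linalg.norm(homes[:, None, :] - schools[None, :, :], axis=2)
    school_id = d.argmin(axis=1)
    dist = d[np.arange(n_children), school_id]

    base_risk = rng.uniform(0, 1, n_children)             # perceived traffic risk per child

    def share_walking(dist_km, risk, max_dist=1.6, max_risk=0.5):
        """Agent decision rule: a child walks iff distance and perceived risk are acceptable."""
        return np.mean((dist_km <= max_dist) & (risk <= max_risk))

    def with_safety_intervention(radius_km, risk_reduction):
        """Reduce perceived risk for children living within radius_km of their school."""
        risk = base_risk.copy()
        near = dist <= radius_km
        risk[near] = np.clip(risk[near] - risk_reduction, 0.0, 1.0)
        return share_walking(dist, risk)

    print("baseline walking share          :", share_walking(dist, base_risk))
    # Two schemes with a roughly equal 'budget' (risk reduction x treated area):
    print("intense, small-area intervention:", with_safety_intervention(0.8, 0.5))
    print("mild, large-area intervention   :", with_safety_intervention(1.6, 0.1))
    ```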

  1. A malaria transmission-directed model of mosquito life cycle and ecology

    PubMed Central

    2011-01-01

    Background Malaria is a major public health issue in much of the world, and the mosquito vectors which drive transmission are key targets for interventions. Mathematical models for planning malaria eradication benefit from detailed representations of local mosquito populations, their natural dynamics and their response to campaign pressures. Methods A new model is presented for mosquito population dynamics, effects of weather, and impacts of multiple simultaneous interventions. This model is then embedded in a large-scale individual-based simulation and results for local elimination of malaria are discussed. Mosquito population behaviours, such as anthropophily and indoor feeding, are included to study their effect upon the efficacy of vector control-based elimination campaigns. Results Results for vector control tools, such as bed nets, indoor spraying, larval control and space spraying, both alone and in combination, are displayed for a single-location simulation with vector species and seasonality characteristic of central Tanzania, varying baseline transmission intensity and vector bionomics. The sensitivities to habitat type, anthropophily, indoor feeding, and baseline transmission intensity are explored. Conclusions The ability to model a spectrum of local vector species with different ecologies and behaviours allows local customization of packages of interventions and exploration of the effect of proposed new tools. PMID:21999664

  2. Means and method of detection in chemical separation procedures

    DOEpatents

    Yeung, Edward S.; Koutny, Lance B.; Hogan, Barry L.; Cheung, Chan K.; Ma, Yinfa

    1993-03-09

A means and method for indirect detection of constituent components of a mixture separated in a chemical separation process. Fluorescing ions are distributed across the area in which separation of the mixture will occur to provide a generally uniform background fluorescence intensity. For example, the mixture comprises one or more charged analytes which displace fluorescing ions at the locations to which its constituent components separate. Fluorescing ions of the same charge as the charged analyte components cause a displacement. The displacement results in the location of the separated components having a reduced fluorescence intensity relative to the remainder of the background. Detection of the lower fluorescence intensity areas can be performed visually, by photographic means, or by automated laser scanning.

  3. Means and method of detection in chemical separation procedures

    DOEpatents

    Yeung, E.S.; Koutny, L.B.; Hogan, B.L.; Cheung, C.K.; Yinfa Ma.

    1993-03-09

A means and method are described for indirect detection of constituent components of a mixture separated in a chemical separation process. Fluorescing ions are distributed across the area in which separation of the mixture will occur to provide a generally uniform background fluorescence intensity. For example, the mixture comprises one or more charged analytes which displace fluorescing ions at the locations to which its constituent components separate. Fluorescing ions of the same charge as the charged analyte components cause a displacement. The displacement results in the location of the separated components having a reduced fluorescence intensity relative to the remainder of the background. Detection of the lower fluorescence intensity areas can be performed visually, by photographic means, or by automated laser scanning.

  4. Stellar granulation as seen in disk-integrated intensity. II. Theoretical scaling relations compared with observations

    NASA Astrophysics Data System (ADS)

    Samadi, R.; Belkacem, K.; Ludwig, H.-G.; Caffau, E.; Campante, T. L.; Davies, G. R.; Kallinger, T.; Lund, M. N.; Mosser, B.; Baglin, A.; Mathur, S.; Garcia, R. A.

    2013-11-01

    Context. A large set of stars observed by CoRoT and Kepler shows clear evidence for the presence of a stellar background, which is interpreted to arise from surface convection, i.e., granulation. These observations show that the characteristic time-scale (τeff) and the root-mean-square (rms) brightness fluctuations (σ) associated with the granulation scale as a function of the peak frequency (νmax) of the solar-like oscillations. Aims: We aim at providing a theoretical background to the observed scaling relations based on a model developed in Paper I. Methods: We computed for each 3D model the theoretical power density spectrum (PDS) associated with the granulation as seen in disk-integrated intensity on the basis of the theoretical model published in Paper I. For each PDS we derived the associated characteristic time (τeff) and the rms brightness fluctuations (σ) and compared these theoretical values with the theoretical scaling relations derived from the theoretical model and the measurements made on a large set of Kepler targets. Results: We derive theoretical scaling relations for τeff and σ, which show the same dependence on νmax as the observed scaling relations. In addition, we show that these quantities also scale as a function of the turbulent Mach number (ℳa) estimated at the photosphere. The theoretical scaling relations for τeff and σ match the observations well on a global scale. Quantitatively, the remaining discrepancies with the observations are found to be much smaller than previous theoretical calculations made for red giants. Conclusions: Our modelling provides additional theoretical support for the observed variations of σ and τeff with νmax. It also highlights the important role of ℳa in controlling the properties of the stellar granulation. However, the observations made with Kepler on a wide variety of stars cannot confirm the dependence of our scaling relations on ℳa. Measurements of the granulation background and detections of solar-like oscillations in a statistically sufficient number of cool dwarf stars will be required for confirming the dependence of the theoretical scaling relations with ℳa. Appendices are available in electronic form at http://www.aanda.org

  5. Alfven waves in spiral interplanetary field

    NASA Technical Reports Server (NTRS)

    Whang, Y. C.

    1973-01-01

A theoretical study is presented of the Alfven waves in the spiral interplanetary magnetic field. The Alfven waves under consideration are arbitrary, large amplitude, non-monochromatic, microscale waves of any polarization. They superpose on a mesoscale background flow of thermally anisotropic plasma. Using the WKB approximation, an analytical solution for the amplitude vectors is obtained as a function of the background flow properties: density, velocity, Alfven speed, thermal anisotropy, and the spiral angle. The necessary condition for the validity of the WKB solution is discussed. The intensity of fluctuations is calculated as a function of heliocentric distance. The relative intensity of fluctuations, as compared with the magnitude of the background field, has its maximum in the region near 1 AU. Thus outside of this region, the solar wind is less turbulent.

  6. Electromagnetic backscattering from freak waves in (1 + 1)-dimensional deep-water

    NASA Astrophysics Data System (ADS)

    Xie, Tao; Shen, Tao; William, Perrie; Chen, Wei; Kuang, Hai-Lan

    2010-05-01

    To study the electromagnetic (EM) backscatter characteristics of freak waves at moderate incidence angles, we establish an EM backscattering model for freak waves in (1 + 1)-dimensional deep water. The nonlinear interaction between freak waves and Bragg short waves is considered to be the basic hydrodynamic spectral modulation mechanism in the model. Numerical results suggest that the EM backscattering intensities of freak waves are less than those from the background sea surface at moderate incidence angles. The normalised radar cross sections (NRCSs) from freak waves are highly polarisation dependent, even at low incidence angles, which is different from the situation for normal sea waves; moreover, the NRCS of freak waves is more polarisation dependent than that of the background sea surface. NRCS discrepancies between freak waves and the background sea surface are larger using horizontal transmitting, horizontal receiving (HH) polarisation than using vertical transmitting, vertical receiving (VV) polarisation at moderate incidence angles. NRCS discrepancies between freak waves and the background sea surface decrease with increasing incidence angle for both HH and VV polarisation radars. As an application to synthetic-aperture radar (SAR) imaging of freak waves, we suggest that the freak wave facet with the steepest slope should have an extremely low backscatter NRCS. Compared with the background sea surface, freak waves should therefore appear darker in HH polarisation SAR images than in VV polarisation images, and can be more easily detected from the background sea surface in HH polarisation images than in VV polarisation images. The possibility of detecting freak waves at low incidence angles is much higher than at high incidence angles.

  7. Mortality among High Risk Patients with Acute Myocardial Infarction Admitted to U.S. Teaching-Intensive Hospitals in July: A Retrospective Observational Study

    PubMed Central

    Jena, Anupam B.; Sun, Eric C.; Romley, John A.

    2014-01-01

    Background Studies of whether inpatient mortality in U.S. teaching hospitals rises in July as a result of organizational disruption and relative inexperience of new physicians (‘July effect’) find small and mixed results, perhaps because study populations primarily include low-risk inpatients whose mortality outcomes are unlikely to exhibit a July effect. Methods and Results Using the U.S. Nationwide Inpatient Sample, we estimated difference-in-differences models of mortality, percutaneous coronary intervention (PCI) rates, and bleeding complication rates, for high- and low-risk patients with acute myocardial infarction (AMI) admitted to 98 teaching-intensive and 1353 non-teaching-intensive hospitals during May and July 2002 to 2008. Among patients in the top quartile of predicted AMI mortality (high risk), adjusted mortality was lower in May than July in teaching-intensive hospitals (18.8% in May, 22.7% in July, p<0.01), but similar in non-teaching-intensive hospitals (22.5% in May, 22.8% in July, p=0.70). Among patients in the lowest three quartiles of predicted AMI mortality (low risk), adjusted mortality was similar in May and July in both teaching-intensive hospitals (2.1% in May, 1.9% in July, p=0.45) and non-teaching-intensive hospitals (2.7% in May, 2.8% in July, p=0.21). Differences in PCI and bleeding complication rates could not explain the observed July mortality effect among high risk patients. Conclusions High risk AMI patients experience similar mortality in teaching- and non-teaching-intensive hospitals in July, but lower mortality in teaching-intensive hospitals in May. Low risk patients experience no such “July effect” in teaching-intensive hospitals. PMID:24152859
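
    The core difference-in-differences contrast can be illustrated directly with the adjusted high-risk mortality rates quoted above. The sketch below is only an arithmetic illustration of the estimator, not a re-analysis of the study data, which used patient-level regression models.

      # Difference-in-differences on the adjusted high-risk mortality rates quoted above:
      # (July - May change in teaching-intensive hospitals) minus the same change in
      # non-teaching-intensive hospitals.
      teaching = {"May": 0.188, "July": 0.227}
      non_teaching = {"May": 0.225, "July": 0.228}

      did = (teaching["July"] - teaching["May"]) - (non_teaching["July"] - non_teaching["May"])
      print(f"difference-in-differences estimate: {did:+.3f}")   # about +0.036 (3.6 percentage points)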

  8. Modeling strong‐motion recordings of the 2010 Mw 8.8 Maule, Chile, earthquake with high stress‐drop subevents and background slip

    USGS Publications Warehouse

    Frankel, Arthur

    2017-01-01

    Strong‐motion recordings of the Mw 8.8 Maule earthquake were modeled using a compound rupture model consisting of (1) a background slip distribution with large correlation lengths, relatively low slip velocity, and long peak rise time of slip of about 10 s and (2) high stress‐drop subevents (asperities) on the deeper portion of the rupture with moment magnitudes 7.9–8.2, high slip velocity, and rise times of slip of about 2 s. In this model, the high‐frequency energy is not produced in the same location as the peak coseismic slip, but is generated in the deeper part of the rupture zone. Using synthetic seismograms generated for a plane‐layered velocity model, I find that the high stress‐drop subevents explain the observed Fourier spectral amplitudes from about 0.1 to 1.0 Hz. Broadband synthetics (0–10 Hz) were calculated by combining deterministic synthetics derived from the background slip and asperities (≤1 Hz) with stochastic synthetics generated only at the asperities (≥1 Hz). The broadband synthetics produced response spectral accelerations with low bias compared to the data, for periods of 0.1–10 s. A subevent stress drop of 200–350 bars for the high‐frequency stochastic synthetics was found to bracket the observed spectral accelerations at frequencies greater than 1 Hz. For most of the stations, the synthetics had Arias‐intensity durations similar to those of the observed records.
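
    A common way to assemble such hybrid broadband synthetics is to low-pass the deterministic trace and high-pass the stochastic trace at the crossover frequency before summing. The Python sketch below illustrates that generic merging step with random placeholder traces and an assumed fourth-order Butterworth filter pair at 1 Hz; it is not the specific combination scheme of this study.

      import numpy as np
      from scipy.signal import butter, filtfilt

      # Merge a low-frequency deterministic synthetic with a high-frequency stochastic
      # synthetic at a 1 Hz crossover. Both traces here are random placeholders.
      fs = 50.0                                  # sampling rate [Hz]
      t = np.arange(0, 60, 1.0 / fs)
      deterministic = np.random.randn(t.size)    # stands in for the <=1 Hz synthetic
      stochastic = np.random.randn(t.size)       # stands in for the >=1 Hz synthetic

      b_lo, a_lo = butter(4, 1.0 / (fs / 2), btype="low")    # 1 Hz low-pass
      b_hi, a_hi = butter(4, 1.0 / (fs / 2), btype="high")   # 1 Hz high-pass
      broadband = filtfilt(b_lo, a_lo, deterministic) + filtfilt(b_hi, a_hi, stochastic)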

  9. Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data

    NASA Technical Reports Server (NTRS)

    Kanekal, S. G.; Li, X.; Baker, D. N.; Selesnick, R. S.; Hoxie, V. C.

    2018-01-01

    An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 megaelectronvolts, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.

  10. Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data

    NASA Astrophysics Data System (ADS)

    Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.; Hoxie, V. C.; Li, X.

    2018-01-01

    An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 MeV, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.
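
    The fitting step described in the two records above, least-squares minimization with a steepest-descent update, can be sketched generically as below. The response matrix, model parameterization, and step size are invented placeholders; the actual REPT analysis fits a three-variable model to counts with a customized algorithm not reproduced here.

      import numpy as np

      # Minimal steepest-descent least squares: fit parameters p of a linear forward
      # model R @ p (response matrix times model intensities) to observed counts y.
      rng = np.random.default_rng(0)
      R = rng.random((50, 5))          # placeholder response matrix: 50 channels x 5 model bins
      p_true = rng.random(5)
      y = R @ p_true                   # noiseless synthetic observations

      p = np.zeros(5)
      step = 1e-3
      for _ in range(5000):
          grad = 2.0 * R.T @ (R @ p - y)   # gradient of ||R p - y||^2
          p -= step * grad
      print("recovered parameters:", np.round(p, 3))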

  11. Pediatric stroke and transcranial direct current stimulation: methods for rational individualized dose optimization

    PubMed Central

    Gillick, Bernadette T.; Kirton, Adam; Carmel, Jason B.; Minhas, Preet; Bikson, Marom

    2014-01-01

    Background: Transcranial direct current stimulation (tDCS) has been investigated mainly in adults, and doses may not be appropriate in pediatric applications. In perinatal stroke, where potential applications are promising, rational adaptation of dosage for children remains under investigation. Objective: Construct child-specific tDCS dosing parameters through a case study within a perinatal stroke tDCS safety and feasibility trial. Methods: A 10-year-old subject with a diagnosis of presumed perinatal ischemic stroke and hemiparesis was identified. T1 magnetic resonance imaging (MRI) scans were used to derive a computerized model of current flow and electrode positions. A workflow using the modeling results and consideration of dosage in previous clinical trials was incorporated. Prior ad hoc adult montages vs. de novo optimized montages provided distinct risk-benefit analyses. Approximating the adult dose required consideration of changes in both peak brain current flow and its distribution, which involved a further tradeoff between maximizing efficacy and adding safety factors. Electrode size, position, current intensity, compliance voltage, and duration were controlled independently in this process. Results: Brain electric fields were modeled and compared to values predicted by previous models (Datta et al., 2011; Minhas et al., 2012). Approximating the conservative brain current flow patterns and intensities used in previous adult trials for comparable indications, the optimal current intensity established was 0.7 mA for 10 min with a tDCS C3/C4 montage. Specifically, 0.7 mA produced peak brain current intensity comparable to that of an average adult receiving 1.0 mA. An electrode size of 5 × 7 cm2 with 1.0 mA and low-voltage tDCS was employed to maximize tolerability. Safety and feasibility were confirmed, with the subject tolerating the session well and no serious adverse events. Conclusion: Rational approaches to dose customization, with steps informed by computational modeling, may improve guidance for pediatric stroke tDCS trials. PMID:25285077

  12. Model and reconstruction of a K-edge contrast agent distribution with an X-ray photon-counting detector

    PubMed Central

    Meng, Bo; Cong, Wenxiang; Xi, Yan; De Man, Bruno; Yang, Jian; Wang, Ge

    2017-01-01

    Contrast-enhanced computed tomography (CECT) helps enhance visibility in tumor imaging. When a high-Z contrast agent interacts with X-rays across its K-edge, X-ray photoelectric absorption undergoes a sudden increase, resulting in a significant difference in X-ray transmission intensity between the energy windows just below and just above the K-edge. Using photon-counting detectors, the X-ray intensity data in these two windows can be measured simultaneously. The difference between the two kinds of intensity data reflects the contrast-agent concentration distribution. K-edge differences between various materials allow opportunities for the identification of contrast agents in biomedical applications. In this paper, a general Radon transform is established to link the contrast-agent concentration to the X-ray intensity measurement data. An iterative algorithm is proposed to reconstruct a contrast-agent distribution and the tissue attenuation background simultaneously. Comprehensive numerical simulations are performed to demonstrate the merits of the proposed method over existing K-edge imaging methods. Our results show that the proposed method accurately quantifies the distribution of a contrast agent, optimizing the contrast-to-noise ratio at a high dose efficiency. PMID:28437900
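
    The underlying K-edge signal can be illustrated with a single-ray toy calculation: because the agent's attenuation jumps across its K-edge while tissue attenuation is nearly flat, the difference of log-transmissions in the two windows is dominated by the agent. All attenuation coefficients and path lengths below are illustrative placeholders, and the single-ray inversion is far simpler than the paper's iterative reconstruction.

      import numpy as np

      # Toy K-edge measurement along one ray with two energy windows (below/above the edge).
      mu_tissue_lo, mu_tissue_hi = 0.20, 0.19   # tissue attenuation [1/cm], below / above K-edge
      mu_agent_lo, mu_agent_hi = 1.0, 4.5       # agent attenuation [1/cm], below / above K-edge
      t_tissue, t_agent = 10.0, 0.2             # path lengths [cm]
      I0 = 1.0e6                                # incident photons per window

      I_lo = I0 * np.exp(-(mu_tissue_lo * t_tissue + mu_agent_lo * t_agent))
      I_hi = I0 * np.exp(-(mu_tissue_hi * t_tissue + mu_agent_hi * t_agent))

      # log(I_lo/I_hi) = (mu_tissue_hi - mu_tissue_lo)*t_tissue + (mu_agent_hi - mu_agent_lo)*t_agent
      delta = np.log(I_lo / I_hi)
      t_agent_est = (delta - (mu_tissue_hi - mu_tissue_lo) * t_tissue) / (mu_agent_hi - mu_agent_lo)
      print(f"estimated agent path length: {t_agent_est:.3f} cm (true value 0.2 cm)")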

  13. Sociospatial distribution of access to facilities for moderate and vigorous intensity physical activity in Scotland by different modes of transport

    PubMed Central

    2012-01-01

    Background People living in neighbourhoods of lower socioeconomic status have been shown to have higher rates of obesity and a lower likelihood of meeting physical activity recommendations than their more affluent counterparts. This study examines the sociospatial distribution of access to facilities for moderate or vigorous intensity physical activity in Scotland and whether such access differs by the mode of transport available and by Urban Rural Classification. Methods A database of all fixed physical activity facilities was obtained from the national agency for sport in Scotland. Facilities were categorised into light, moderate and vigorous intensity activity groupings before being mapped. Transport networks were created to assess the number of each type of facility accessible from the population weighted centroid of each small area in Scotland on foot, by bicycle, by car and by bus. Multilevel modelling was used to investigate the distribution of the number of accessible facilities by small area deprivation within urban, small town and rural areas separately, adjusting for population size and local authority. Results Prior to adjustment for Urban Rural Classification and local authority, the median number of accessible facilities for moderate or vigorous intensity activity increased with increasing deprivation from the most affluent or second most affluent quintile to the most deprived for all modes of transport. However, after adjustment, the modelling results suggest that those in more affluent areas have significantly higher access to moderate and vigorous intensity facilities by car than those living in more deprived areas. Conclusions The sociospatial distributions of access to facilities for both moderate intensity and vigorous intensity physical activity were similar. However, the results suggest that those living in the most affluent neighbourhoods have poorer access to facilities of either type that can be reached on foot, by bicycle or by bus than those living in less affluent areas. This poorer access from the most affluent areas appears to be reversed for those with access to a car. PMID:22568969

  14. Label-Free Delineation of Brain Tumors by Coherent Anti-Stokes Raman Scattering Microscopy in an Orthotopic Mouse Model and Human Glioblastoma

    PubMed Central

    Tamosaityte, Sandra; Leipnitz, Elke; Geiger, Kathrin D.; Schackert, Gabriele; Koch, Edmund; Steiner, Gerald; Kirsch, Matthias

    2014-01-01

    Background Coherent anti-Stokes Raman scattering (CARS) microscopy provides fine resolution imaging and displays morphochemical properties of unstained tissue. Here, we evaluated this technique to delineate and identify brain tumors. Methods Different human tumors (glioblastoma, brain metastases of melanoma and breast cancer) were induced in an orthotopic mouse model. Cryosections were investigated by CARS imaging tuned to probe C-H molecular vibrations, thereby addressing the lipid content of the sample. Raman microspectroscopy was used as reference. Histopathology provided information about the tumor's localization, cell proliferation and vascularization. Results The morphochemical contrast of CARS images enabled identifying brain tumors irrespective of the tumor type and properties: All tumors were characterized by a lower CARS signal intensity than the normal parenchyma. On this basis, tumor borders and infiltrations could be identified with cellular resolution. Quantitative analysis revealed that the tumor-related reduction of CARS signal intensity was more pronounced in glioblastoma than in metastases. Raman spectroscopy enabled relating the CARS intensity variation to the decline of total lipid content in the tumors. The analysis of the immunohistochemical stainings revealed no correlation between tumor-induced cytological changes and the extent of CARS signal intensity reductions. The results were confirmed on samples of human glioblastoma. Conclusions CARS imaging enables label-free, rapid and objective identification of primary and secondary brain tumors. Therefore, it is a potential tool for diagnostic neuropathology as well as for intraoperative tumor delineation. PMID:25198698

  15. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    PubMed

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
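
    The general idea of an SVD-based background correction can be sketched as follows: estimate the dominant background shapes from blank runs and remove their projection from a sample chromatogram. The arrays, the number of retained components, and the projection step below are illustrative assumptions, not the SVD-BC implementation evaluated in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      drift = np.linspace(0, 1, 500)                                      # slow baseline drift
      blanks = rng.normal(scale=0.05, size=(6, 500)) + drift              # 6 blank chromatograms
      sample = drift + np.exp(-0.5 * ((np.arange(500) - 250) / 10) ** 2)  # baseline + one peak

      # Keep the first k right singular vectors of the blank matrix as a background basis
      U, s, Vt = np.linalg.svd(blanks, full_matrices=False)
      k = 2
      B = Vt[:k]                                   # background basis, shape (k, 500)
      corrected = sample - B.T @ (B @ sample)      # remove projection onto the background subspace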

  16. PAIN INTENSITY MODERATES THE RELATIONSHIP BETWEEN AGE AND PAIN INTERFERENCE IN CHRONIC OROFACIAL PAIN PATIENTS

    PubMed Central

    Boggero, Ian A.; Geiger, Paul J.; Segerstrom, Suzanne C.; Carlson, Charles R.

    2015-01-01

    Background/Study Context Chronic pain is associated with increased interference in daily functioning that becomes more pronounced as pain intensity increases. Based on previous research showing that older adults maintain well-being in the face of pain as well as or better than their younger counterparts, the current study examined the interaction of age and pain intensity on interference in a sample of chronic orofacial pain patients. Methods Data were obtained from the records of 508 chronic orofacial pain patients being seen for an initial evaluation from 2008 to 2012. Collected data included age (range: 18–78) and self-reported measures of pain intensity and pain interference. Bivariate correlations and regression models were used to assess for statistical interactions. Results Regression analyses revealed that pain intensity positively predicted pain interference (R2 = .35, B = 10.40, SE = 0.62, t(507) = 16.70, p < .001). A significant interaction supported the primary hypothesis that aging was associated with reduced interference at high levels of pain intensity (ΔR2 = .01, B = −1.31, SE = 0.63, t(505) = −2.90, p = .04). Conclusion At high levels of pain intensity, interference decreased with age, although the age by pain intensity interaction effect was small. This evidence converges with aging theories, including socioemotional selectivity theory, which posits that as people age, they become more motivated to maximize positive emotions and minimize negative ones. The results highlight the importance of studying the mechanisms older adults use to successfully cope with pain. PMID:26214102
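
    A moderation analysis of this kind, testing whether age moderates the effect of pain intensity on interference, is commonly run as a regression containing the product of centered predictors. The sketch below uses simulated data and statsmodels; the variable names, effect sizes, and centering choice are assumptions for illustration, not the study's analysis code.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n = 500
      df = pd.DataFrame({
          "age": rng.uniform(18, 78, n),
          "intensity": rng.uniform(0, 10, n),
      })
      # Simulated outcome: interference rises with intensity, less steeply at older ages
      df["interference"] = (5 + 1.0 * df["intensity"]
                            - 0.02 * df["age"] * df["intensity"]
                            + rng.normal(0, 2, n))

      # '*' expands to main effects plus the age-by-intensity interaction term
      model = smf.ols("interference ~ center(age) * center(intensity)", data=df).fit()
      print(model.params)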

  17. Geocoronal imaging with Dynamics Explorer

    NASA Technical Reports Server (NTRS)

    Rairden, R. L.; Frank, L. A.; Craven, J. D.

    1986-01-01

    The ultraviolet photometer of the University of Iowa spin-scan auroral imaging instrumentation on board Dynamics Explorer-1 has returned numerous hydrogen Lyman alpha images of the geocorona from altitudes of 570 km to 23,300 km (1.09 RE to 4.66 RE geocentric radial distance). The hydrogen density gradient is shown by a plot of the zenith intensities throughout this range, which decrease to near celestial background values as the spacecraft approaches apogee. Characterizing the upper geocorona as optically thin (single-scattering), the zenith intensity is converted directly to vertical column density. This approximation loses its validity deeper in the geocorona, where the hydrogen is demonstrated to be optically thick in that there is no Lyman alpha limb brightening. Further study of the geocoronal hydrogen distribution will require computer modeling of the radiative transfer.

  18. The effective application of a discrete transition model to explore cell-cycle regulation in yeast

    PubMed Central

    2013-01-01

    Background Bench biologists often do not take part in the development of computational models for their systems, and therefore, they frequently employ them as “black-boxes”. Our aim was to construct and test a model that does not depend on the availability of quantitative data, and can be directly used without a need for intensive computational background. Results We present a discrete transition model. We used cell-cycle in budding yeast as a paradigm for a complex network, demonstrating phenomena such as sequential protein expression and activity, and cell-cycle oscillation. The structure of the network was validated by its response to computational perturbations such as mutations, and its response to mating-pheromone or nitrogen depletion. The model has a strong predictive capability, demonstrating how the activity of a specific transcription factor, Hcm1, is regulated, and what determines commitment of cells to enter and complete the cell-cycle. Conclusion The model presented herein is intuitive, yet is expressive enough to elucidate the intrinsic structure and qualitative behavior of large and complex regulatory networks. Moreover our model allowed us to examine multiple hypotheses in a simple and intuitive manner, giving rise to testable predictions. This methodology can be easily integrated as a useful approach for the study of networks, enriching experimental biology with computational insights. PMID:23915717

  19. Modeling Personnel Turnover in the Parametric Organization

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1991-01-01

    A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required in order to account for the obvious dynamics of the forming organization. The challenge was to create a simple model that would remain credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.

  20. Neonatal Intensive Care Nursing Curriculum Challenges based on Context, Input, Process, and Product Evaluation Model: A Qualitative Study

    PubMed Central

    Ashghali-Farahani, Mansoureh; Ghaffari, Fatemeh; Hoseini-Esfidarjani, Sara-Sadat; Hadian, Zahra; Qomi, Robabeh; Dargahi, Helen

    2018-01-01

    Background: Weakness of curriculum development in nursing education results in a lack of professional skills in graduates. This study was done on master's students in nursing to evaluate the challenges of the neonatal intensive care nursing curriculum based on the context, input, process, and product (CIPP) evaluation model. Materials and Methods: This study was conducted with a qualitative approach, which was completed according to the CIPP evaluation model. The study was conducted from May 2014 to April 2015. The research community included neonatal intensive care nursing master's students, the graduates, faculty members, neonatologists, nurses working in the neonatal intensive care unit (NICU), and mothers of infants who were hospitalized in such wards. Purposeful sampling was applied. Results: The data analysis showed that there were two main categories: “inappropriate infrastructure” and “unknown duties,” which influenced the context formation of the NICU master's curriculum. The input was formed by five categories, including “biomedical approach,” “incomprehensive curriculum,” “lack of professional NICU nursing mentors,” “inappropriate admission process of NICU students,” and “lack of NICU skill labs.” Three categories were extracted in the process, including “more emphasis on theoretical education,” “the overlap of credits with each other and the inconsistency among the mentors,” and “ineffective assessment.” Finally, five categories were extracted in the product, including “preferring routine work instead of professional job,” “tendency to leave the job,” “clinical incompetency of graduates,” “the conflict between graduates and nursing staff expectations,” and “dissatisfaction of graduates.” Conclusions: Some changes are needed in the NICU master's curriculum by considering the nursing experts' comments and evaluating the consequences of such a program by them. PMID:29628958

  1. Binary recursive partitioning: background, methods, and application to psychology.

    PubMed

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
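
    The paper provides R code for these methods; the short Python sketch below shows the same idea with scikit-learn's CART-style decision tree, where the tree's 'goodness' is judged by held-out predictive accuracy rather than significance tests. The simulated predictors, tree depth, and library choice are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier, export_text

      # Simulated data: the response depends on two of three predictors
      rng = np.random.default_rng(3)
      X = rng.normal(size=(400, 3))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

      # Judge the tree by predictive accuracy and inspect the recursive splits
      print("held-out accuracy:", tree.score(X_te, y_te))
      print(export_text(tree, feature_names=["x1", "x2", "x3"]))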

  2. Object tracking algorithm based on the color histogram probability distribution

    NASA Astrophysics Data System (ADS)

    Li, Ning; Lu, Tongwei; Zhang, Yanduo

    2018-04-01

    To address tracking failures caused by occlusion of the target and by distractor objects in the background that resemble the target, and to reduce the influence of illumination intensity, this paper uses the HSV and YCbCr color channels to correct the updated target center and continuously adapts the image threshold for self-adaptive target detection; clustering the initial obstacles into a rough range shortens the threshold range so that the target is detected as fully as possible. To improve the accuracy of the detector, a Kalman filter is added to estimate the target state region. A direction predictor based on a Markov model is added to estimate the target state under background color interference and to enhance the detector's ability to distinguish similar objects. The experimental results show that the improved algorithm is more accurate and processes frames faster.
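
    Two of the ingredients mentioned above, a color-histogram localisation step and a Kalman filter for the target state, can be sketched with OpenCV as below. The hue-only histogram, mean-shift localisation, constant-velocity state model, and noise settings are generic assumptions for illustration, not the authors' algorithm.

      import cv2
      import numpy as np

      def make_kalman():
          kf = cv2.KalmanFilter(4, 2)            # state: x, y, vx, vy; measurement: x, y
          kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                          [0, 1, 0, 1],
                                          [0, 0, 1, 0],
                                          [0, 0, 0, 1]], np.float32)
          kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
          kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
          return kf

      def track(frames, init_box):
          x, y, w, h = init_box
          hsv0 = cv2.cvtColor(frames[0], cv2.COLOR_BGR2HSV)
          hist = cv2.calcHist([hsv0[y:y + h, x:x + w]], [0], None, [32], [0, 180])
          cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
          kf, box = make_kalman(), (x, y, w, h)
          crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
          for frame in frames[1:]:
              hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
              backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
              _, box = cv2.meanShift(backproj, box, crit)      # histogram-based localisation
              cx, cy = box[0] + box[2] / 2.0, box[1] + box[3] / 2.0
              kf.predict()                                     # constant-velocity prediction
              kf.correct(np.array([[cx], [cy]], np.float32))   # update with the measured centre
              yield box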

  3. Cosmic sculpture: a new way to visualise the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Clements, D. L.; Sato, S.; Portela Fonseca, A.

    2017-01-01

    3D printing presents an attractive alternative means of representing physical datasets such as astronomical images that can be used for research, outreach or teaching purposes, and is especially relevant to people with a visual disability. We here report the use of 3D printing technology to produce a representation of the all-sky cosmic microwave background (CMB) intensity anisotropy maps produced by the Planck mission. The success of this work in representing key features of the CMB is discussed as is the potential of this approach for representing other astrophysical data sets. 3D printing such datasets represents a highly complementary approach to the usual 2D projections used in teaching and outreach work, and can also form the basis of undergraduate projects. The CAD files used to produce the models discussed in this paper are made available.

  4. SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, B; Gao, H

    Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work is to investigate sub-Nyquist dynamic MRI via a previously developed CS model, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed with improved image quality over the L1-sparsity method.
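
    The low-rank-plus-sparse split that underlies this kind of model can be illustrated with a toy alternating scheme: singular-value thresholding estimates the low-rank background and soft thresholding the sparse residual. This is a generic robust-PCA-style sketch on a fully sampled synthetic matrix, with invented thresholds; it is not the PRISM reconstruction, which operates on undersampled k-t data.

      import numpy as np

      def low_rank_plus_sparse(M, lam=0.1, mu=1.0, n_iter=50):
          """Toy alternating split of a space-time matrix M into a low-rank part L
          (singular-value thresholding) and a sparse part S (soft thresholding).
          Illustrative only."""
          L = np.zeros_like(M)
          S = np.zeros_like(M)
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
              L = U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt
              R = M - L
              S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
          return L, S

      # Synthetic dynamic "image": a static rank-1 background plus a few bright dynamic pixels
      rng = np.random.default_rng(4)
      background = np.outer(rng.random(64), np.ones(20))    # 64 pixels x 20 frames
      dynamic = np.zeros((64, 20))
      dynamic[10, ::3] = 2.0
      L, S = low_rank_plus_sparse(background + dynamic)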

  5. Resonant Raman scattering background in XRF spectra of binary samples

    NASA Astrophysics Data System (ADS)

    Sánchez, Héctor Jorge; Leani, Juan José

    2015-02-01

    In x-ray fluorescence analysis, spectra present singular characteristics produced by the different scattering processes. When atoms are irradiated with incident energy lower than and close to an absorption edge, scattering peaks appear due to an inelastic process known as resonant Raman scattering. In this work we present theoretical calculations of the resonant Raman scattering contributions to the background of x-ray fluorescence spectra of binary samples of current technological or biological interest. On the one hand, a binary alloy of Fe with traces of Mn (Mn: 0.01%, Fe: 99.99%) was studied because of its importance in the stainless steel industry. On the other hand, a pure sample of Ti with V traces (Ti: 99%, V: 1%) was analyzed due to its current relevance in medical applications. To perform the calculations, the Shiraiwa and Fujino model was used to calculate characteristic intensities and scattering interactions. This model makes certain assumptions and approximations, especially regarding the geometrical conditions and the incident and take-off beams. For the binary samples studied in this work and the experimental conditions considered, the calculations show that the resonant Raman scattering background is significant under the fluorescent peak, affects the symmetry of the peaks and, depending on the concentrations, overcomes the enhancement contributions (secondary fluorescence).

  6. Classification of normal and malignant human gastric mucosa tissue with confocal Raman microspectroscopy and wavelet analysis

    NASA Astrophysics Data System (ADS)

    Hu, Yaogai; Shen, Aiguo; Jiang, Tao; Ai, Yong; Hu, Jiming

    2008-02-01

    Thirty-two human gastric mucosa tissue samples, including 13 normal and 19 malignant samples, were measured by confocal Raman microspectroscopy. Low signal-to-background ratio spectra from human gastric mucosa tissues were obtained by this technique without any sample preparation. Raman spectral interferences include a broad, featureless, sloping background due to fluorescence and noise. They mask most Raman spectral features and lead to problems with precision and quantitation of the original spectral information. A preprocessing algorithm based on wavelet analysis was used to reduce noise and eliminate the background/baseline of the Raman spectra. Comparing the preprocessed spectra of malignant gastric mucosa tissues with those of their normal counterparts, there were obvious spectral changes, including an intensity increase at ~1156 cm-1 and an intensity decrease at ~1587 cm-1. A quantitative criterion based upon the intensity ratio of the ~1156 and ~1587 cm-1 bands was extracted for classification of the normal and malignant gastric mucosa tissue samples. This could result in a new diagnostic method, which would assist the early diagnosis of gastric cancer.
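
    A wavelet preprocessing step of this general kind can be sketched with PyWavelets: the coarsest approximation coefficients stand in for the broad fluorescence baseline, and soft thresholding of the fine-scale detail coefficients suppresses noise. The synthetic spectrum, wavelet family, decomposition level, and threshold below are illustrative assumptions, not the algorithm used in the paper.

      import numpy as np
      import pywt

      # Synthetic Raman-like spectrum: two peaks on a slowly varying baseline plus noise
      x = np.arange(1024)
      peaks = 50 * np.exp(-0.5 * ((x - 300) / 8) ** 2) + 30 * np.exp(-0.5 * ((x - 700) / 8) ** 2)
      baseline_true = 0.00005 * (x - 200) ** 2
      spectrum = peaks + baseline_true + np.random.normal(0, 3, x.size)

      coeffs = pywt.wavedec(spectrum, "sym8", level=6)
      # Coarsest approximation ~ broad baseline; reconstruct it with all details zeroed
      baseline = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "sym8")[: x.size]
      # Soft-threshold the two finest detail levels to suppress noise
      coeffs[-1] = pywt.threshold(coeffs[-1], 5.0, "soft")
      coeffs[-2] = pywt.threshold(coeffs[-2], 5.0, "soft")
      corrected = pywt.waverec(coeffs, "sym8")[: x.size] - baseline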

  7. Robust Data Detection for the Photon-Counting Free-Space Optical System With Implicit CSI Acquisition and Background Radiation Compensation

    NASA Astrophysics Data System (ADS)

    Song, Tianyu; Kam, Pooi-Yuen

    2016-02-01

    Since atmospheric turbulence and pointing errors cause signal intensity fluctuations and the background radiation surrounding the free-space optical (FSO) receiver contributes an undesired noisy component, the receiver requires accurate channel state information (CSI) and background information to adjust the detection threshold. In most previous studies, pilot symbols were employed for CSI acquisition, which reduces spectral and energy efficiency, and the background radiation component was impractically assumed to be perfectly known. In this paper, we develop an efficient and robust sequence receiver, which acquires the CSI and the background information implicitly and requires no knowledge about the channel model information. It is robust since it can automatically estimate the CSI and background component and detect the data sequence accordingly. Its decision metric has a simple form and involves no integrals, and thus can be easily evaluated. A Viterbi-type trellis-search algorithm is adopted to improve the search efficiency, and a selective-store strategy is adopted to overcome a potential error floor problem as well as to increase the memory efficiency. To further simplify the receiver, a decision-feedback symbol-by-symbol receiver is proposed as an approximation of the sequence receiver. By simulations and theoretical analysis, we show that the performance of both the sequence receiver and the symbol-by-symbol receiver approaches that of detection with perfect knowledge of the CSI and background radiation, as the length of the window for forming the decision metric increases.

  8. Hospice Use and High-Intensity Care in Men Dying of Prostate Cancer

    PubMed Central

    Bergman, Jonathan; Saigal, Christopher S.; Lorenz, Karl A.; Hanley, Janet; Miller, David C.; Gore, John L.; Litwin, Mark S.

    2012-01-01

    Background Hospice programs improve the quality of life and quality of death for men dying of cancer. We sought to characterize hospice use by men dying of prostate cancer and to compare the use of high-intensity care between those who did or did not enroll in hospice. Methods We used linked Surveillance, Epidemiology, and End Results–Medicare data to identify a cohort of Medicare beneficiaries who died of prostate cancer between 1992 and 2005. We created 2 multivariable logistic regression models, one to identify factors associated with hospice use and one to determine the association of hospice use with the receipt of diagnostic and interventional procedures and physician visits at the end of life. Results Of 14 521 men dying of prostate cancer, 7646 (53%) used hospice for a median of 24 days. Multivariable modeling demonstrated that African American ethnicity (odds ratio [OR], 0.78; 95% confidence interval [CI], 0.68–0.88) and higher Charlson comorbidity index (OR, 0.49; 95% CI, 0.44–0.55) were associated with lower odds of hospice use, while having a partner (OR, 1.23; 95% CI, 1.14–1.32) and more recent year of death (OR, 1.12; 95% CI, 1.11–1.14) were associated with higher odds of hospice use. Men dying of prostate cancer who enrolled in hospice were less likely (OR, 0.82; 95% CI, 0.74–0.91) to receive high-intensity care, including intensive care unit admissions, inpatient stays, and multiple emergency department visits. Conclusions The proportion of individuals using hospice is increasing, but the timing of hospice referral remains poor. Those who enroll in hospice are less likely to receive high-intensity end-of-life care. PMID:20937914

  9. Automatic liver segmentation from abdominal CT volumes using graph cuts and border marching.

    PubMed

    Liao, Miao; Zhao, Yu-Qian; Liu, Xi-Yao; Zeng, Ye-Zhan; Zou, Bei-Ji; Wang, Xiao-Fang; Shih, Frank Y

    2017-05-01

    Identifying liver regions from abdominal computed tomography (CT) volumes is an important task for computer-aided liver disease diagnosis and surgical planning. This paper presents a fully automatic method for liver segmentation from CT volumes based on graph cuts and border marching. An initial slice is segmented by density peak clustering. Based on pixel- and patch-wise features, an intensity model and a PCA-based regional appearance model are developed to enhance the contrast between liver and background. Then, these models as well as the location constraint estimated iteratively are integrated into graph cuts in order to segment the liver in each slice automatically. Finally, a vessel compensation method based on border marching is used to increase the segmentation accuracy. Experiments are conducted on a clinical data set we created and also on the MICCAI2007 Grand Challenge liver data. The results show that the proposed intensity and appearance models and the location constraint are significantly effective for liver recognition, and the undersegmented vessels can be compensated for by the border-marching-based method. The segmentation performances in terms of VOE, RVD, ASD, RMSD, and MSD as well as the average running time achieved by our method on the SLIVER07 public database are 5.8 ± 3.2%, -0.1 ± 4.1%, 1.0 ± 0.5mm, 2.0 ± 1.2mm, 21.2 ± 9.3mm, and 4.7 minutes, respectively, which are superior to those of existing methods. The proposed method does not require a time-consuming training process or statistical model construction, and is capable of dealing with complicated shapes and intensity variations successfully. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Challenges in Identifying Sites Climatically Matched to the Native Ranges of Animal Invaders

    PubMed Central

    Rodda, Gordon H.; Jarnevich, Catherine S.; Reed, Robert N.

    2011-01-01

    Background Species distribution models are often used to characterize a species' native range climate, so as to identify sites elsewhere in the world that may be climatically similar and therefore at risk of invasion by the species. This endeavor provoked intense public controversy over recent attempts to model areas at risk of invasion by the Indian Python (Python molurus). We evaluated a number of MaxEnt models on this species to assess MaxEnt's utility for vertebrate climate matching. Methodology/Principal Findings Overall, we found MaxEnt models to be very sensitive to modeling choices and selection of input localities and background regions. As used, MaxEnt invoked minimal protections against data dredging, multi-collinearity of explanatory axes, and overfitting. As used, MaxEnt endeavored to identify a single ideal climate, whereas different climatic considerations may determine range boundaries in different parts of the native range. MaxEnt was extremely sensitive to both the choice of background locations for the python, and to selection of presence points: inclusion of just four erroneous localities was responsible for Pyron et al.'s conclusion that no additional portions of the U.S. mainland were at risk of python invasion. When used with default settings, MaxEnt overfit the realized climate space, identifying models with about 60 parameters, about five times the number of parameters justifiable when optimized on the basis of Akaike's Information Criterion. Conclusions/Significance When used with default settings, MaxEnt may not be an appropriate vehicle for identifying all sites at risk of colonization. Model instability and dearth of protections against overfitting, multi-collinearity, and data dredging may combine with a failure to distinguish fundamental from realized climate envelopes to produce models of limited utility. A priori identification of biologically realistic model structure, combined with computational protections against these statistical problems, may produce more robust models of invasion risk. PMID:21347411

  11. Challenges in identifying sites climatically matched to the native ranges of animal invaders

    USGS Publications Warehouse

    Rodda, G.H.; Jarnevich, C.S.; Reed, R.N.

    2011-01-01

    Background: Species distribution models are often used to characterize a species' native range climate, so as to identify sites elsewhere in the world that may be climatically similar and therefore at risk of invasion by the species. This endeavor provoked intense public controversy over recent attempts to model areas at risk of invasion by the Indian Python (Python molurus). We evaluated a number of MaxEnt models on this species to assess MaxEnt's utility for vertebrate climate matching. Methodology/Principal Findings: Overall, we found MaxEnt models to be very sensitive to modeling choices and selection of input localities and background regions. As used, MaxEnt invoked minimal protections against data dredging, multi-collinearity of explanatory axes, and overfitting. As used, MaxEnt endeavored to identify a single ideal climate, whereas different climatic considerations may determine range boundaries in different parts of the native range. MaxEnt was extremely sensitive to both the choice of background locations for the python, and to selection of presence points: inclusion of just four erroneous localities was responsible for Pyron et al.'s conclusion that no additional portions of the U.S. mainland were at risk of python invasion. When used with default settings, MaxEnt overfit the realized climate space, identifying models with about 60 parameters, about five times the number of parameters justifiable when optimized on the basis of Akaike's Information Criterion. Conclusions/Significance: When used with default settings, MaxEnt may not be an appropriate vehicle for identifying all sites at risk of colonization. Model instability and dearth of protections against overfitting, multi-collinearity, and data dredging may combine with a failure to distinguish fundamental from realized climate envelopes to produce models of limited utility. A priori identification of biologically realistic model structure, combined with computational protections against these statistical problems, may produce more robust models of invasion risk.

  12. Effect of a chameleon scalar field on the cosmic microwave background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Anne-Christine; Schelpe, Camilla A. O.; Shaw, Douglas J.

    2009-09-15

    We show that a direct coupling between a chameleonlike scalar field and photons can give rise to a modified Sunyaev-Zel'dovich (SZ) effect in the cosmic microwave background (CMB). The coupling induces a mixing between chameleon particles and the CMB photons when they pass through the magnetic field of a galaxy cluster. Both the intensity and the polarization of the radiation are modified. The degree of modification depends strongly on the properties of the galaxy cluster such as magnetic field strength and electron number density. Existing SZ measurements of the Coma cluster enable us to place constraints on the photon-chameleon coupling. The constrained conversion probability in the cluster is P_Coma(204 GHz) < 6.2x10^-5 at 95% confidence, corresponding to an upper bound on the coupling strength of g_eff^(cell) < 2.2x10^-8 GeV^-1 or g_eff^(Kolmo) < (7.2-32.5)x10^-10 GeV^-1, depending on the model that is assumed for the cluster magnetic field structure. We predict the radial profile of the chameleonic CMB intensity decrement. We find that the chameleon effect extends farther toward the edges of the cluster than the thermal SZ effect. Thus we might see a discrepancy between the x-ray emission data and the observed SZ intensity decrement. We further predict the expected change to the CMB polarization arising from the existence of a chameleonlike scalar field. These predictions could be verified or constrained by future CMB experiments.

  13. Implosion and heating experiments of fast ignition targets by Gekko-XII and LFEX lasers

    NASA Astrophysics Data System (ADS)

    Shiraga, H.; Fujioka, S.; Nakai, M.; Watari, T.; Nakamura, H.; Arikawa, Y.; Hosoda, H.; Nagai, T.; Koga, M.; Kikuchi, H.; Ishii, Y.; Sogo, T.; Shigemori, K.; Nishimura, H.; Zhang, Z.; Tanabe, M.; Ohira, S.; Fujii, Y.; Namimoto, T.; Sakawa, Y.; Maegawa, O.; Ozaki, T.; Tanaka, K. A.; Habara, H.; Iwawaki, T.; Shimada, K.; Key, M.; Norreys, P.; Pasley, J.; Nagatomo, H.; Johzaki, T.; Sunahara, A.; Murakami, M.; Sakagami, H.; Taguchi, T.; Norimatsu, T.; Homma, H.; Fujimoto, Y.; Iwamoto, A.; Miyanaga, N.; Kawanaka, J.; Kanabe, T.; Jitsuno, T.; Nakata, Y.; Tsubakimoto, K.; Sueda, K.; Kodama, R.; Kondo, K.; Morio, N.; Matsuo, S.; Kawasaki, T.; Sawai, K.; Tsuji, K.; Murakami, H.; Sarukura, N.; Shimizu, T.; Mima, K.; Azechi, H.

    2013-11-01

    The FIREX-1 project, the goal of which is to demonstrate fuel heating up to 5 keV by the fast ignition scheme, has been carried out since 2003, including construction and tuning of the LFEX laser and integrated experiments. Implosion and heating experiments of fast ignition targets have been performed since 2009 with the Gekko-XII and LFEX lasers. A deuterated polystyrene shell target was imploded with the 0.53-μm Gekko-XII, and the 1.053-μm beam of the LFEX laser was injected through a gold cone attached to the shell to generate hot electrons to heat the imploded fuel plasma. The pulse contrast ratio of the LFEX beam was significantly improved. Also, a variety of plasma diagnostic instruments were developed to be compatible with the harsh environment of intense hard x-rays (γ rays) and electromagnetic pulses caused by the intense LFEX beam on the target. Large background signals around the DD neutron signal in the time-of-flight record of the neutron detector were found to consist of neutrons via (γ,n) reactions and scattered gamma rays. Enhanced neutron yield was confirmed by carefully eliminating such backgrounds. Neutron enhancement up to 3.5 × 10^7 was observed. The heating efficiency was estimated to be 10-20% assuming a uniform temperature rise model.

  14. Effect of Upstream ULF Waves on the Energetic Ion Diffusion at the Earth's Foreshock. I. Theory and Simulation

    NASA Astrophysics Data System (ADS)

    Otsuka, Fumiko; Matsukiyo, Shuichi; Kis, Arpad; Nakanishi, Kento; Hada, Tohru

    2018-02-01

    Field-aligned diffusion of energetic ions in the Earth’s foreshock is investigated by using quasi-linear theory (QLT) and test particle simulation. Non-propagating MHD turbulence in the solar wind rest frame is assumed to be purely transverse with respect to the background field. We use a turbulence model based on a multi-power-law spectrum including an intense peak that corresponds to upstream ULF waves resonantly generated by the field-aligned beam (FAB). The presence of the ULF peak produces a concave shape of the diffusion coefficient when it is plotted versus the ion energy. The QLT including the effect of the ULF wave explains the simulation result well when the energy density of the turbulent magnetic field is 1% of that of the background magnetic field and the power-law index of the wave spectrum is less than 2. The numerically obtained e-folding distances for 10 to 32 keV ions match the observational values in the event discussed in the companion paper, which contains an intense ULF peak in the spectra generated by the FAB. Evolution of the power spectrum of the ULF waves when approaching the shock significantly affects the energy dependence of the e-folding distance.

  15. Seismic hazard analysis for Jayapura city, Papua

    NASA Astrophysics Data System (ADS)

    Robiana, R.; Cipta, A.

    2015-04-01

    Jayapura city experienced a destructive earthquake on June 25, 1976, with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source models are used: a subduction model, from the New Guinea Trench subduction zone (North Papuan Thrust); fault models, derived from the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and 7 background models to accommodate unknown earthquakes. Amplification factors obtained using geomorphological approaches are corrected with the measurement data, which relate to rock type and depth of soft soil. Site classes in Jayapura city can be grouped into classes B, C, D and E, with amplification between 0.5 and 6. Hazard maps are presented for a 10% probability of earthquake occurrence within a period of 500 years, for the dominant periods of 0.0, 0.2, and 1.0 seconds.

  16. Daytime Sky Brightness Characterization for Persistent GEO SSA

    NASA Astrophysics Data System (ADS)

    Thomas, G.; Cobb, R. G.

    Space Situational Awareness (SSA) is fundamental to operating in space. SSA for collision avoidance ensures safety of flight for both government and commercial spacecraft through persistent monitoring. A worldwide network of optical and radar sensors gathers satellite ephemeris data from the nighttime sky. Current practice for daytime satellite tracking is limited exclusively to radar as the brightening daytime sky prevents the use of visible-band optical sensors. Radar coverage is not pervasive and results in significant daytime coverage gaps in SSA. To mitigate these gaps, optical telescopes equipped with sensors in the near-infrared band (0.75-0.9 μm) may be used. The diminished intensity of the background sky radiance in the near-infrared band may allow for daylight tracking further into the twilight hours. To determine the performance of a near-infrared sensor for daylight custody, the sky background radiance must first be characterized spectrally as a function of wavelength. Using a physics-based atmospheric model with access to near-real time weather, we developed a generalized model for the apparent sky brightness of the Geostationary satellite belt. The model results are then compared to measured data collected from Dayton, OH through various look and Sun angles for model validation and spectral sky radiance quantification in the visible and near-infrared bands.

  17. THE EFFECT OF BACKGROUND SIGNAL AND ITS REPRESENTATION IN DECONVOLUTION OF EPR SPECTRA ON ACCURACY OF EPR DOSIMETRY IN BONE.

    PubMed

    Ciesielski, Bartlomiej; Marciniak, Agnieszka; Zientek, Agnieszka; Krefft, Karolina; Cieszyński, Mateusz; Boguś, Piotr; Prawdzik-Dampc, Anita

    2016-12-01

    This study concerns the accuracy of EPR dosimetry in bones based on deconvolution of the experimental spectra into background (BG) and radiation-induced signal (RIS) components. The model RIS's were represented by EPR spectra from irradiated enamel or bone powder; the model BG signals by EPR spectra of unirradiated bone samples or by simulated spectra. Samples of compact and trabecular bones were irradiated in the 30-270 Gy range and the intensities of their RIS's were calculated using various combinations of those benchmark spectra. The relationships between the dose and the RIS were linear (R^2 > 0.995), with practically no difference between results obtained when using signals from irradiated enamel or bone as the model RIS. Use of different experimental spectra for the model BG resulted in variations in the intercepts of the dose-RIS calibration lines, leading to systematic errors in reconstructed doses, in particular for high-BG samples of trabecular bone. These errors were reduced when simulated spectra instead of the experimental ones were used as the benchmark BG signal in the applied deconvolution procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
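
    The deconvolution step, expressing a measured spectrum as a linear combination of a benchmark RIS and a benchmark BG spectrum, can be sketched as a linear least-squares fit. The synthetic line shapes and amplitudes below are invented stand-ins, not EPR data, and a simple two-component linear model is assumed.

      import numpy as np

      # Simulated benchmark spectra: a derivative-like RIS and a broad BG component
      field = np.linspace(-1, 1, 400)
      ris_model = -field * np.exp(-field ** 2 / 0.02)
      bg_model = np.exp(-field ** 2 / 0.5)

      # "Measured" spectrum = 3.0 * RIS + 1.5 * BG + noise
      measured = 3.0 * ris_model + 1.5 * bg_model + np.random.normal(0, 0.02, field.size)

      # Solve measured ~ a * RIS + b * BG by linear least squares; 'a' is the
      # dose-dependent RIS intensity entering the calibration line
      A = np.column_stack([ris_model, bg_model])
      (amp_ris, amp_bg), *_ = np.linalg.lstsq(A, measured, rcond=None)
      print(f"RIS amplitude: {amp_ris:.2f}, BG amplitude: {amp_bg:.2f}")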

  18. Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.

    PubMed

    Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué

    2014-06-12

    Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images remains however a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentations of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated from the imaging process and from the aggregation of information in the MIP images. These noise models are then used within a probabilistic, or Bayesian, framework to provide a coarse 2D dendritic tree segmentation. Finally, some post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross Validation (LOOCV) method for an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentations over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operator Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples including the extraction of skeletonized structures, and compare our method to a state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentations in noisy MIPs, outperforming traditional intensity-based methods. Such an approach provides a usable segmentation framework, ultimately delivering a speed-up for dendritic tree identification on the user end and a reliable first step towards further morphological characterizations of tree arborization.

  19. Estimated long-term outdoor air pollution concentrations in a cohort study

    NASA Astrophysics Data System (ADS)

    Beelen, Rob; Hoek, Gerard; Fischer, Paul; Brandt, Piet A. van den; Brunekreef, Bert

    Several recent studies associated long-term exposure to air pollution with increased mortality. An ongoing cohort study, the Netherlands Cohort Study on Diet and Cancer (NLCS), was used to study the association between long-term exposure to traffic-related air pollution and mortality. Following on a previous exposure assessment study in the NLCS, we improved the exposure assessment methods. Long-term exposure to nitrogen dioxide (NO2), nitrogen oxide (NO), black smoke (BS), and sulphur dioxide (SO2) was estimated. Exposure at each home address (N = 21,868) was considered as a function of a regional, an urban and a local component. The regional component was estimated using inverse distance weighted interpolation of measurement data from regional background sites in a national monitoring network. Regression models with urban concentrations as dependent variables, and number of inhabitants in different buffers and land use variables, derived with a Geographic Information System (GIS), as predictor variables were used to estimate the urban component. The local component was assessed using a GIS and a digital road network with linked traffic intensities. Traffic intensity on the nearest road and on the nearest major road, and the sum of traffic intensity in a buffer of 100 m around each home address were assessed. Further, a quantitative estimate of the local component was derived. The regression models to estimate the urban component explained 67%, 46%, 49% and 35% of the variances of NO2, NO, BS, and SO2 concentrations, respectively. Overall regression models which incorporated the regional, urban and local component explained 84%, 44%, 59% and 56% of the variability in concentrations for NO2, NO, BS and SO2, respectively. We were able to develop an exposure assessment model using GIS methods and traffic intensities that explained a large part of the variations in outdoor air pollution concentrations.
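
    As a rough illustration of the regression step described above, the sketch below fits urban and local traffic predictors to concentrations with an ordinary least-squares model; all variable names, buffer sizes and synthetic values are assumptions for illustration, not the NLCS predictor set.

```python
# Hedged sketch: combining regional, urban and local components in a linear
# exposure model. Predictor names and values are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500  # hypothetical number of home addresses

X = np.column_stack([
    rng.normal(25, 5, n),        # regional background estimate (interpolated)
    rng.poisson(3000, n),        # inhabitants within a buffer (urban proxy)
    rng.uniform(0, 1, n),        # land-use fraction from a GIS layer
    rng.poisson(8000, n),        # traffic intensity within 100 m (local proxy)
])
# Synthetic "measured" concentrations, for the sketch only
y = 0.8 * X[:, 0] + 0.002 * X[:, 1] + 5 * X[:, 2] + 0.0008 * X[:, 3] + rng.normal(0, 3, n)

model = LinearRegression().fit(X, y)
print("Explained variance (R^2):", round(model.score(X, y), 2))
```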

  20. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    PubMed

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex. Copyright © 2017 Millman et al.

  1. Effect of a Brief Seated Massage on Nursing Student Attitudes Toward Touch for Comfort Care

    PubMed Central

    Yearwood, Edilma L.; Friedmann, Erika

    2014-01-01

    Background: While massage has been removed from nursing curricula, studies have reported massage as safe and effective for stress reduction, relaxation, pain relief, fatigue, and quality of life. Objective: To compare the efficacy of two intensities of touch administered during two seated massages on the attitudes of nursing students toward touch for their self-care and patient care. Participants: Nursing students who volunteered gave institutional review board–approved written informed consent to undergo massage by a licensed massage therapist. Settings/location: A private room adjacent to the nursing lab in a school of nursing. Intervention: Brief seated massages of differing intensities. Each participant received low-intensity and high-intensity touch in a two-block, randomized order, within-subjects design. Linear mixed models nested within subject and random intercept analyses were used to test hypotheses in this two-treatment, two-sequence, two-period crossover design. Outcome measures: Health questionnaires/visual analogue scales pertaining to physical/affective/and attitudinal status were completed before and after each massage. Results: Twenty-nine participants (93% female, 83% single) completed the study. Before massage, the optimal intensity of touch anticipated for self-comfort was 6.6 (0=no pressure;10=most intense pressure imaginable). The mean touch intensities were 6.7 for high-intensity massage and 0.5 for low-intensity (p<0.001). The overall percentage differences (feeling better or worse) following massage were as follows: low intensity, 37.5% better; high intensity, 62.7% better (p<0.001). Significantly more improvement was reported for energy, pain, stress, and feeling physically uptight after high-intensity compared with low-intensity (p<0.03). Participants were more likely to both receive touch for self-care and provide touch for patient care after experiencing high- versus low-intensity massage (p<0.01). Conclusions: High-intensity seated massage was more efficacious than low-intensity massage and positively influenced nursing student attitudes toward the inclusion of massage in self-care/patient care. The role of touch for self-care/patient care in the nursing curricula merits reconsideration. PMID:25140587

  2. Astronomy Laboratory Exercise on Olbers’ Paradox and the Age of the Universe

    NASA Astrophysics Data System (ADS)

    Glazer, Kelsey Samantha; Edwards, Charlotte; Overduin, James; Storrs, Alex

    2018-01-01

    We describe the development of a new laboratory exercise for undergraduate introductory astronomy courses. Students begin by estimating the intensity of the extragalactic background light using a simple Newtonian cosmological model that agrees with recent measurements to within a factor of two. They then use the 0.5m Towson University telescope to image a dark patch of sky such as the Hubble Deep Field near or during new Moon, and compare the intensity actually observed with that predicted. This comparison leads to a new appreciation of foreground contributions such as light pollution, airglow, zodiacal light, starlight and others. Students pick up important skills in uncertainty analysis and astronomical unit conversion. But the most valuable aspect of the exercise in our view is that it enables students to draw a direct connection between the evidence of their own eyes and the age of the Universe.

  3. Geocoronal imaging with Dynamics Explorer - A first look

    NASA Technical Reports Server (NTRS)

    Rairden, R. L.; Frank, L. A.; Craven, J. D.

    1983-01-01

    The ultraviolet photometer of the University of Iowa spin-scan auroral imaging instrumentation on board Dynamics Explorer-1 has returned numerous hydrogen Lyman alpha images of the geocorona from altitudes of 570 km to 23,300 km (1.09 R_E to 4.66 R_E geocentric radial distance). The hydrogen density gradient is shown by a plot of the zenith intensities throughout this range, which decrease to near celestial background values as the spacecraft approaches apogee. Characterizing the upper geocorona as optically thin (single-scattering), the zenith intensity is converted directly to vertical column density. This approximation loses its validity deeper in the geocorona, where the hydrogen is demonstrated to be optically thick in that there is no Lyman alpha limb brightening. Further study of the geocoronal hydrogen distribution will require computer modeling of the radiative transfer. Previously announced in STAR as N83-20889
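
    For the optically thin (single-scattering) case mentioned above, the measured zenith brightness maps directly onto a hydrogen column density through the solar Lyman alpha excitation (g) factor; the sketch below illustrates the conversion, with the g-factor value an assumed placeholder rather than the one used in the paper.

```python
# Minimal sketch (assumed values): optically thin conversion of brightness to
# column density. One Rayleigh corresponds to a column emission rate of
# 1e6 photons/cm^2/s, and for single scattering that rate equals g * N.
def column_density_cm2(brightness_rayleigh, g_factor=1.9e-3):
    """Column density N (atoms/cm^2); g_factor is an illustrative placeholder."""
    return 1.0e6 * brightness_rayleigh / g_factor

print(f"{column_density_cm2(5000):.2e} atoms/cm^2")  # e.g. a 5 kR zenith intensity
```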

  4. Hot Electrons from Two-Plasmon Decay

    NASA Astrophysics Data System (ADS)

    Russell, D. A.; Dubois, D. F.

    2000-10-01

    We solve, self-consistently, the relativistic quasilinear diffusion equation and Zakharov's model equations of Langmuir wave (LW) and ion acoustic wave (IAW) turbulence, in two dimensions, for saturated states of the Two-Plasmon Decay instability. Parameters are those of the shorter gradient scale-length (50 microns) high temperature (4 keV) inhomogeneous plasmas anticipated at LLE’s Omega laser facility. We calculate the fraction of incident laser power absorbed in hot electron production as a function of laser intensity for a plane-wave laser field propagating parallel to the background density gradient. Two distinct regimes are identified: In the strong-turbulent regime, hot electron bursts occur intermittently in time, well correlated with collapse in the LW and IAW fields. A significant fraction of the incident laser power ( ~10%) is absorbed by hot electrons during a single burst. In the weak or convective regime, relatively constant rates of hot electron production are observed at much reduced intensities.

  5. Micro-Raman spectroscopy of algae: composition analysis and fluorescence background behavior.

    PubMed

    Huang, Y Y; Beal, C M; Cai, W W; Ruoff, R S; Terentjev, E M

    2010-04-01

    Preliminary feasibility studies were performed using Stokes Raman scattering for compositional analysis of algae. Two algal species, Chlorella sorokiniana (UTEX #1230) and Neochloris oleoabundans (UTEX #1185), were chosen for this study. Both species were considered to be candidates for biofuel production. Raman signals due to storage lipids (specifically triglycerides) were clearly identified in the nitrogen-starved C. sorokiniana and N. oleoabundans, but not in their healthy counterparts. On the other hand, signals resulting from the carotenoids were found to be present in all of the samples. Composition mapping was conducted in which Raman spectra were acquired from a dense sequence of locations over a small region of interest. The spectra obtained for the mapping images were filtered for the wavelengths of characteristic peaks that correspond to components of interest (i.e., triglyceride or carotenoid). The locations of the components of interest could be identified by the high intensity areas in the composition maps. Finally, the time evolution of fluorescence background was observed while acquiring Raman signals from the algae. The time dependence of fluorescence background is characterized by a general power law decay interrupted by sudden high intensity fluorescence events. The decreasing trend is likely a result of photo-bleaching of cell pigments due to prolonged intense laser exposure, while the sudden high intensity fluorescence events are not understood. (c) 2009 Wiley Periodicals, Inc.
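
    A minimal sketch of fitting the described power-law decay of the fluorescence background; the synthetic time series and starting parameters are assumptions for illustration only.

```python
# Hedged sketch: fitting a slowly decaying fluorescence background with a
# power law plus a constant offset. Data and values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, k, c):
    """Background intensity model: a * t**(-k) + constant offset c."""
    return a * np.power(t, -k) + c

t = np.arange(1, 301, dtype=float)  # exposure time (s), synthetic
background = power_law(t, 5e4, 0.6, 1e3) + np.random.default_rng(1).normal(0, 50, t.size)

params, _ = curve_fit(power_law, t, background, p0=(1e4, 0.5, 500))
print("fitted a, k, c:", params)
```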

  6. Quantitative x-ray photoelectron spectroscopy: Quadrupole effects, shake-up, Shirley background, and relative sensitivity factors from a database of true x-ray photoelectron spectra

    NASA Astrophysics Data System (ADS)

    Seah, M. P.; Gilmore, I. S.

    2006-05-01

    An analysis is provided of the x-ray photoelectron spectroscopy (XPS) intensities measured in the National Physical Laboratory (NPL) XPS database for 46 solid elements. This present analysis does not change our previous conclusions concerning the excellent correlation between experimental intensities, after deconvolving the spectra with angle-averaged reflection electron energy loss data, and the theoretical intensities involving the dipole approximation using Scofield’s cross sections. Here, more recent calculations for cross sections by Trzhaskovskaya involving quadrupole terms are evaluated and it is shown that their cross sections diverge from the experimental database results by up to a factor of 5. The quadrupole angular terms lead to small corrections that are close to our measurement limit but do appear to be supported in the present analysis. Measurements of the extent of shake-up for the 46 elements broadly agree with the calculations of Yarzhemsky but not in detail. The predicted constancy in the shake-up contribution by Yarzhemsky implies that the use of the Shirley background will lead to a peak area that is a constant fraction of the true peak area including the shake-up intensities. However, the measured variability of the shake-up contribution makes the Shirley background invalid for quantification except for situations where the sensitivity factors are from reference samples similar to those being analyzed.
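
    For reference, the sketch below implements a generic iterative Shirley background of the kind discussed above (a textbook formulation, not the NPL analysis code); the number of iterations and the endpoint choice are assumptions.

```python
# Hedged sketch of the iterative Shirley background commonly used in XPS
# peak fitting: the background at each point is proportional to the
# integrated peak area (signal minus background) on one side of that point.
import numpy as np

def shirley_background(y, n_iter=20):
    """Iterative Shirley background for a spectrum y ordered in energy."""
    y = np.asarray(y, dtype=float)
    y_left, y_right = y[0], y[-1]
    bg = np.full_like(y, y_right)
    for _ in range(n_iter):
        signal = y - bg
        cum = np.cumsum(signal[::-1])[::-1]      # area from each point to the end
        total = cum[0] if cum[0] != 0 else 1.0
        bg = y_right + (y_left - y_right) * cum / total
    return bg
```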

  7. Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event

    NASA Astrophysics Data System (ADS)

    Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.

    2010-12-01

    The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes modeling a background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input into the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originating at the flare site and particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.

  8. Environmental, psychological, and social influences on physical activity among Japanese adults: structural equation modeling analysis

    PubMed Central

    2010-01-01

    Background An understanding of the contributing factors to be considered when examining how individuals engage in physical activity is important for promoting population-based physical activity. The environment influences long-term effects on population-based health behaviors. Personal variables, such as self-efficacy and social support, can act as mediators of the predictive relationship between the environment and physical activity. The present study examines the direct and indirect effects of environmental, psychological, and social factors on walking, moderate-intensity activity excluding walking, and vigorous-intensity activity among Japanese adults. Methods The participants included 1,928 Japanese adults aged 20-79 years. Seven sociodemographic attributes (e.g., gender, age, education level, employment status), psychological variables (self-efficacy, pros, and cons), social variables (social support), environmental variables (home fitness equipment, access to facilities, neighborhood safety, aesthetic sensibilities, and frequency of observing others exercising), and the International Physical Activity Questionnaire were assessed via an Internet-based survey. Structural equation modeling was conducted to determine associations between environmental, psychological, and social factors with physical activity. Results Environmental factors could be seen to have indirect effects on physical activity through their influence on psychological and social variables such as self-efficacy, pros and cons, and social support. The strongest indirect effects could be observed by examining the consequences of environmental factors on physical activity through cons to self-efficacy. The total effects of environmental factors on physical activity were 0.02 on walking, 0.02 on moderate-intensity activity excluding walking, and 0.05 on vigorous-intensity activity. Conclusions The present study indicates that environmental factors had indirect effects on walking, moderate-intensity activity excluding walking and vigorous-intensity activity among Japanese adults, especially through the effects on these factors of self-efficacy, social support, and pros and cons. The findings of the present study imply that intervention strategies to promote more engagement in physical activity for population-based health promotion may be necessary. PMID:20684794

  9. Comparison of the efficacy and safety of intensive-dose and standard-dose statin treatment for stroke prevention

    PubMed Central

    Wang, Juan; Chen, Dan; Li, Da-Bing; Yu, Xin; Shi, Guo-Bing

    2016-01-01

    Abstract Background: Previous studies indicated that high-dose statin treatment might increase the risk of hemorrhagic stroke and adverse reactions. We aim to compare the efficacy and safety of intensive-dose and standard-dose statin treatment for preventing stroke in high-risk patients. Methods: A thorough search was performed of multiple databases for publications from 1990 to June 2015. We selected randomized clinical trials comparing standard-dose statin with placebo and intensive-dose statin with standard-dose statin or placebo for the prevention of stroke events in patients. Duplicate independent data extraction and bias assessments were performed. Data were pooled using a fixed-effects model or a random-effects model if significant heterogeneity was present. Results: For all stroke incidences, intensive-dose statin treatment compared with placebo showed a significant 21% reduction in relative risk (RR) (RR 0.79, 95% confidence interval (CI) [0.71, 0.87], P < 0.00001), and standard-dose statin treatment compared with placebo showed an 18% reduction in RR (RR 0.82, 95% CI [0.73, 0.93], P = 0.002), in the subgroup excluding renal transplant recipients and patients undergoing regular hemodialysis. For fatal stroke incidences, intensive-dose statin treatment compared with standard dose or placebo was effective in reducing fatal stroke (RR 0.61, 95% CI [0.39, 0.96], P = 0.03), whereas the RR was 1.01 (95% CI [0.85, 1.20], P = 0.90) for standard-dose statin treatment compared with placebo. Conclusion: The results of this meta-analysis suggest that intensive-dose statin treatment might be more favorable for reducing the incidence of all strokes than standard-dose statin treatment, especially for patients older than 65 years. PMID:27684837
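
    A minimal sketch of the inverse-variance fixed-effect pooling that underlies pooled relative-risk estimates such as those above; the per-trial numbers are invented for illustration and are not the trial data from this meta-analysis.

```python
# Hedged sketch: fixed-effect pooling of relative risks on the log scale.
import numpy as np

def pooled_rr(rrs, ci_lows, ci_highs):
    """Fixed-effect pooled RR and 95% CI from per-trial RRs and 95% CIs."""
    log_rr = np.log(rrs)
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)   # SE from CI width
    w = 1.0 / se**2                                          # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
    return np.exp(pooled), ci

# Illustrative trial-level inputs only
print(pooled_rr([0.75, 0.85, 0.80], [0.60, 0.70, 0.65], [0.94, 1.03, 0.98]))
```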

  10. Management practices impacts soil nutrients and bacterial populations in backgrounding beef feedlot

    USDA-ARS?s Scientific Manuscript database

    Intensive beef backgrounding often accumulates manure-borne soil nutrients, microbes, and pharmaceuticals at different site locations. Unless properly managed, such waste materials can pollute surrounding soil and water sources. Soil sampling from these sites helps determine waste material levels bu...

  11. The Effectiveness of Intensive Interaction, A Systematic Literature Review

    ERIC Educational Resources Information Center

    Hutchinson, Nick; Bodicoat, Anna

    2015-01-01

    Background: Intensive Interaction is an approach used for communicating with people with profound and multiple intellectual disabilities [PMID] or autism. It has gained increased recognition as a helpful technique, but the evidence has not been systematically reviewed. Method: Computerized and hand searches of the literature were conducted using…

  12. Effectiveness of Intensive, Group Therapy for Teenagers Who Stutter

    ERIC Educational Resources Information Center

    Fry, Jane; Millard, Sharon; Botterill, Willie

    2014-01-01

    Background: Treatment of adolescents who stutter is an under-researched area that would benefit from greater attention. Aims: To investigate whether an intensive treatment programme for older teenagers who stutter, aged over 16 years of age, is effective in reducing overt and covert aspects of stuttering. Methods & Procedures: A…

  13. Urban Heat Islands in China Enhanced by Haze Pollution

    NASA Astrophysics Data System (ADS)

    Cao, C.; Lee, X.; Liu, S.; Oleson, K. W.; Schultz, N. M.; Xiao, W.; Zhang, M.; Zhao, L.

    2015-12-01

    Land conversion from natural surfaces to artificial urban structures has led to the phenomenon of urban heat island (UHI). The intensity of UHI is thought to be controlled primarily by biophysical factors such as changes in albedo, aerodynamic resistance and evapotranspiration, while influences of biogeochemical factors such as aerosol pollution have long been ignored. We hypothesize that increased downward longwave radiation associated with anthropogenic aerosols in urban air will exacerbate nighttime UHI intensity. Here we tested this hypothesis by using the MODIS satellite land surface temperature product and the Community Land Model (CLM) for 39 cities in China. Our results showed that, in contrast to observations in North America and elsewhere, the nighttime surface UHI of these Chinese cities (3.34 K) was greater than the daytime UHI (2.06 K). Variations in the nighttime UHI among the cities were positively correlated with the difference in aerosol optical depth between urban and adjacent rural areas (confidence level p < 0.01). The CLM was able to reproduce the MODIS UHI intensity in the daytime but underestimated the observed UHI intensity at night. The model performance was improved by including aerosol-enhanced downward longwave radiation over urban land and a more realistic anthropogenic heat flux. Our study illustrates that although the climate background largely determines spatial differences in the daytime UHI, in countries like China with serious air quality problems, aerosol pollution plays an important role in nighttime UHI formation. Mitigation of particulate pollution therefore has the added co-benefit of reducing UHI-related heat stress on urban residents.

  14. CAN A NANOFLARE MODEL OF EXTREME-ULTRAVIOLET IRRADIANCES DESCRIBE THE HEATING OF THE SOLAR CORONA?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajfirouze, E.; Safari, H.

    2012-01-10

    Nanoflares, the basic units of impulsive energy release, may produce much of the solar background emission. Extrapolation of the energy frequency distribution of observed microflares, which follows a power law to lower energies, can give an estimation of the importance of nanoflares for heating the solar corona. If the power-law index is greater than 2, then the nanoflare contribution is dominant. We model a time series of extreme-ultraviolet emission radiance as random flares with a power-law exponent of the flare event distribution. The model is based on three key parameters: the flare rate, the flare duration, and the power-law exponent of the flare intensity frequency distribution. We use this model to simulate emission line radiance detected in 171 Å, observed by Solar Terrestrial Relation Observatory/Extreme-Ultraviolet Imager and Solar Dynamics Observatory/Atmospheric Imaging Assembly. The observed light curves are matched with simulated light curves using an Artificial Neural Network, and the parameter values are determined across the active region, quiet Sun, and coronal hole. The damping rate of nanoflares is compared with the radiative losses cooling time. The effect of background emission, data cadence, and network sensitivity on the key parameters of the model is studied. Most of the observed light curves have a power-law exponent, α, greater than the critical value 2. At these sites, nanoflare heating could be significant.
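
    A hedged sketch of the three-parameter flare model described above (flare rate, flare duration, and power-law exponent of the intensity distribution): flares occur at random times, peak intensities follow a power law, and each flare decays over the flare duration. All numerical values are illustrative assumptions.

```python
# Hedged sketch: simulated EUV light curve from random flares with
# power-law distributed peak intensities. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
rate, duration, alpha = 0.05, 30.0, 2.3   # flares/s, decay time (s), power-law exponent
t = np.arange(0, 3600.0, 12.0)            # one-hour light curve at 12 s cadence

n_flares = rng.poisson(rate * t[-1])
onsets = rng.uniform(0, t[-1], n_flares)
# Inverse-transform sampling of a power law p(I) ~ I**(-alpha) for I >= 1
peaks = (1 - rng.uniform(size=n_flares)) ** (-1.0 / (alpha - 1.0))

light_curve = np.zeros_like(t)
for t0, peak in zip(onsets, peaks):
    mask = t >= t0
    light_curve[mask] += peak * np.exp(-(t[mask] - t0) / duration)

print("simulated radiance, first samples:", light_curve[:5])
```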

  15. Poverty, health and satellite-derived vegetation indices: their inter-spatial relationship in West Africa

    PubMed Central

    Sedda, Luigi; Tatem, Andrew J.; Morley, David W.; Atkinson, Peter M.; Wardrop, Nicola A.; Pezzulo, Carla; Sorichetta, Alessandro; Kuleszo, Joanna; Rogers, David J.

    2015-01-01

    Background Previous analyses have shown the individual correlations between poverty, health and satellite-derived vegetation indices such as the normalized difference vegetation index (NDVI). However, generally these analyses did not explore the statistical interconnections between poverty, health outcomes and NDVI. Methods In this research aspatial methods (principal component analysis) and spatial models (variography, factorial kriging and cokriging) were applied to investigate the correlations and spatial relationships between intensity of poverty, health (expressed as child mortality and undernutrition), and NDVI for a large area of West Africa. Results This research showed that the intensity of poverty (and hence child mortality and nutrition) varies inversely with NDVI. From the spatial point-of-view, similarities in the spatial variation of intensity of poverty and NDVI were found. Conclusions These results highlight the utility of satellite-based metrics for poverty models including health and ecological components and, in general for large scale analysis, estimation and optimisation of multidimensional poverty metrics. However, it also stresses the need for further studies on the causes of the association between NDVI, health and poverty. Once these relationships are confirmed and better understood, the presence of this ecological component in poverty metrics has the potential to facilitate the analysis of the impacts of climate change on the rural populations afflicted by poverty and child mortality. PMID:25733559

  16. In situ hybridization protocol for enhanced detection of gene expression in the planarian Schmidtea mediterranea

    PubMed Central

    2013-01-01

    Background The freshwater planarian Schmidtea mediterranea has emerged as a powerful model for studies of regenerative, stem cell, and germ cell biology. Whole-mount in situ hybridization (WISH) and whole-mount fluorescent in situ hybridization (FISH) are critical methods for determining gene expression patterns in planarians. While expression patterns for a number of genes have been elucidated using established protocols, determining the expression patterns for particularly low-abundance transcripts remains a challenge. Results We show here that a short bleaching step in formamide dramatically enhances signal intensity of WISH and FISH. To further improve signal sensitivity we optimized blocking conditions for multiple anti-hapten antibodies, developed a copper sulfate quenching step that virtually eliminates autofluorescence, and enhanced signal intensity through iterative rounds of tyramide signal amplification. For FISH on regenerating planarians, we employed a heat-induced antigen retrieval step that provides a better balance between permeabilization of mature tissues and preservation of regenerating tissues. We also show that azide most effectively quenches peroxidase activity between rounds of development for multicolor FISH experiments. Finally, we apply these modifications to elucidate the expression patterns of a few low-abundance transcripts. Conclusion The modifications we present here provide significant improvements in signal intensity and signal sensitivity for WISH and FISH in planarians. Additionally, these modifications might be of widespread utility for whole-mount FISH in other model organisms. PMID:23497040

  17. Convergence of Health Level Seven Version 2 Messages to Semantic Web Technologies for Software-Intensive Systems in Telemedicine Trauma Care.

    PubMed

    Menezes, Pedro Monteiro; Cook, Timothy Wayne; Cavalini, Luciana Tricai

    2016-01-01

    To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated, and the RDF-based semantics were recovered by the receiving side of the prototype from the shared XML schema. This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures compliance of the HL7v2 standard with Semantic Web technologies.
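
    A minimal sketch, under assumed file names, of validating an XML extract against a shared XML Schema as in the prototype exchange described above; lxml is used here purely as an illustration and is not necessarily the tooling of the study.

```python
# Hedged sketch: schema validation of an XML data instance. The file names
# are placeholders, not artifacts from the study.
from lxml import etree

schema = etree.XMLSchema(etree.parse("reference_model.xsd"))   # placeholder schema path
doc = etree.parse("phtls_abcde_extract.xml")                   # placeholder instance path
print("valid:", schema.validate(doc))
```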

  18. Disaster Emergency Rapid Assessment Based on Remote Sensing and Background Data

    NASA Astrophysics Data System (ADS)

    Han, X.; Wu, J.

    2018-04-01

    The period from the onset of a disaster to stabilized conditions is an important stage of disaster development. In addition to collecting and reporting information on disaster situations, remote sensing images from satellites and drones and monitoring results from disaster-stricken areas should be obtained. Fusing multi-source background data, such as population, geography and topography, with remote sensing monitoring information in geographic information system analysis makes it possible to assess disaster information quickly and objectively. Models and methods driven by rapid-assessment mission requirements are tested and screened according to the characteristics of different hazards. Based on remote sensing images, the features of exposures are used to quickly delineate disaster-affected areas and intensity levels, to extract key disaster information about affected hospitals and schools as well as cultivated land and crops, and to support decision-making after the emergency response with visual assessment results.

  19. To be seen or to hide: visual characteristics of body patterns for camouflage and communication in the Australian giant cuttlefish Sepia apama.

    PubMed

    Zylinski, S; How, M J; Osorio, D; Hanlon, R T; Marshall, N J

    2011-05-01

    It might seem obvious that a camouflaged animal must generally match its background whereas to be conspicuous an organism must differ from the background. However, the image parameters (or statistics) that evaluate the conspicuousness of patterns and textures are seldom well defined, and animal coloration patterns are rarely compared quantitatively with their respective backgrounds. Here we examine this issue in the Australian giant cuttlefish Sepia apama. We confine our analysis to the best-known and simplest image statistic, the correlation in intensity between neighboring pixels. Sepia apama can rapidly change their body patterns from assumed conspicuous signaling to assumed camouflage, thus providing an excellent and unique opportunity to investigate how such patterns differ in a single visual habitat. We describe the intensity variance and spatial frequency power spectra of these differing body patterns and compare these patterns with the backgrounds against which they are viewed. The measured image statistics of camouflaged animals closely resemble their backgrounds, while signaling animals differ significantly from their backgrounds. Our findings may provide the basis for a set of general rules for crypsis and signals. Furthermore, our methods may be widely applicable to the quantitative study of animal coloration.
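
    The image statistic referred to above, the correlation in intensity between neighbouring pixels, together with a radially averaged spatial-frequency power spectrum, can be computed as in the sketch below; the random test image is a stand-in for a calibrated photograph of the animal or its background.

```python
# Hedged sketch: simple image statistics for comparing body patterns with
# backgrounds. The input array is an assumed grayscale image.
import numpy as np

def neighbour_correlation(img):
    """Pearson correlation between each pixel and its right-hand neighbour."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

def radial_power_spectrum(img):
    """Azimuthally averaged spatial-frequency power spectrum."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    y, x = np.indices(f.shape)
    r = np.hypot(y - f.shape[0] / 2, x - f.shape[1] / 2).astype(int)
    counts = np.bincount(r.ravel())
    power = np.bincount(r.ravel(), f.ravel())
    return power / np.maximum(counts, 1)

img = np.random.default_rng(0).random((128, 128))
print(neighbour_correlation(img))
```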

  20. Range image segmentation using Zernike moment-based generalized edge detector

    NASA Technical Reports Server (NTRS)

    Ghosal, S.; Mehrotra, R.

    1992-01-01

    The authors proposed a novel Zernike moment-based generalized step edge detection method which can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, one step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all these parameters of the edge model. Theoretical noise analysis is performed to show that these operators are quite noise tolerant. Experimental results are included to demonstrate edge-based segmentation technique.

  1. Neurophysiological model of the normal and abnormal human pupil

    NASA Technical Reports Server (NTRS)

    Krenz, W.; Robin, M.; Barez, S.; Stark, L.

    1985-01-01

    Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select pupil condition (normal or an abnormality), specific site along the neurological pathway (retina, hypothalamus, etc.) drug class input (barbiturate, narcotic, etc.), stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.

  2. The range of attraction for light traps catching Culicoides biting midges (Diptera: Ceratopogonidae)

    PubMed Central

    2013-01-01

    Background Culicoides are vectors of e.g. bluetongue virus and Schmallenberg virus in northern Europe. Light trapping is an important tool for detecting the presence and quantifying the abundance of vectors in the field. Until now, few studies have investigated the range of attraction of light traps. Methods Here we test a previously described mathematical model (Model I) and two novel models for the attraction of vectors to light traps (Model II and III). In Model I, Culicoides fly to the nearest trap from within a fixed range of attraction. In Model II Culicoides fly towards areas with greater light intensity, and in Model III Culicoides evaluate light sources in the field of view and fly towards the strongest. Model II and III incorporated the directionally dependent light field created around light traps with fluorescent light tubes. All three models were fitted to light trap collections obtained from two novel experimental setups in the field where traps were placed in different configurations. Results Results showed that overlapping ranges of attraction of neighboring traps extended the shared range of attraction. Model I did not fit data from any of the experimental setups. Model II could only fit data from one of the setups, while Model III fitted data from both experimental setups. Conclusions The model with the best fit, Model III, indicates that Culicoides continuously evaluate the light source direction and intensity. The maximum range of attraction of a single 4W CDC light trap was estimated to be approximately 15.25 meters. The attraction towards light traps is different from the attraction to host animals and thus light trap catches may not represent the vector species and numbers attracted to hosts. PMID:23497628
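
    A hedged sketch of the decision rule behind the best-fitting model (Model III) as described above: a midge compares the light sources in view and flies toward the one with the greatest perceived intensity, here crudely taken as source power over squared distance; the perception rule and the numbers are assumptions, not the fitted model.

```python
# Hedged sketch: choosing the strongest perceived light source among traps.
import numpy as np

def choose_trap(position, traps, powers):
    """Return the index of the trap with the highest perceived intensity."""
    d2 = np.sum((traps - position) ** 2, axis=1)   # squared distances to each trap
    return int(np.argmax(powers / d2))

traps = np.array([[0.0, 0.0], [30.0, 0.0]])   # two traps 30 m apart (assumed layout)
powers = np.array([4.0, 4.0])                 # e.g. two identical 4 W traps
print(choose_trap(np.array([10.0, 5.0]), traps, powers))
```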

  3. MO-F-CAMPUS-J-05: Toward MRI-Only Radiotherapy: Novel Tissue Segmentation and Pseudo-CT Generation Techniques Based On T1 MRI Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aouadi, S; McGarry, M; Hammoud, R

    Purpose: To develop and validate a four-class tissue segmentation approach (air cavities, background, bone and soft tissue) on T1-weighted brain MRI and to create a pseudo-CT for MRI-only radiation therapy verification. Methods: Contrast-enhanced T1-weighted fast-spin-echo sequences (TR = 756 ms, TE = 7.152 ms), acquired on a 1.5T GE MRI-Simulator, are used. MRIs are first pre-processed to correct for non-uniformity using the non-parametric non-uniform intensity normalization algorithm. Subsequently, a logarithmic inverse scaling log(1/image) is applied, prior to segmentation, to better differentiate bone and air from soft tissue. Finally, the following method is employed to classify intensities into air cavities, background, bone and soft tissue: Thresholded region growing with seed points in image corners is applied to get a mask of Air+Bone+Background. The background is afterwards separated by the scan-line filling algorithm. The air mask is extracted by morphological opening followed by post-processing based on knowledge about air region geometry. The remaining rough bone pre-segmentation is refined by applying 3D geodesic active contours; the bone segmentation evolves by the sum of internal forces from contour geometry and an external force derived from the image gradient magnitude. The pseudo-CT is obtained by assigning −1000 HU to air and background voxels, performing linear mapping of soft-tissue MR intensities into [−400 HU, 200 HU] and inverse linear mapping of bone MR intensities into [200 HU, 1000 HU]. Results: Three brain patients having registered MRI and CT are used for validation. CT intensities are classified into the four classes by thresholding, and Dice indices and misclassification errors are quantified. Correct classifications for soft tissue, bone, and air are 89.67%, 77.8%, and 64.5%, respectively. Dice indices are acceptable for bone (0.74) and soft tissue (0.91) but low for air regions (0.48). The pseudo-CT produces DRRs with acceptable clinical visual agreement to CT-based DRRs. Conclusion: The proposed approach makes it possible to use T1-weighted MRI to generate an accurate pseudo-CT from four-class segmentation.
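
    A minimal sketch of the pseudo-CT intensity assignment summarized above: air and background voxels receive −1000 HU, soft-tissue MR intensities are mapped linearly into [−400, 200] HU, and bone MR intensities are mapped inversely into [200, 1000] HU; the label coding and the use of per-class min/max scaling are assumptions, not the study's exact mapping.

```python
# Hedged sketch: class-wise MR-intensity-to-HU mapping for a pseudo-CT.
import numpy as np

def pseudo_ct(mr, labels):
    """labels: 0 = air/background, 1 = soft tissue, 2 = bone (assumed coding)."""
    hu = np.full(mr.shape, -1000.0)           # air and background

    soft = labels == 1
    if soft.any():
        lo, hi = mr[soft].min(), mr[soft].max()
        hu[soft] = -400 + (mr[soft] - lo) / (hi - lo) * 600     # -400 .. 200 HU

    bone = labels == 2
    if bone.any():
        lo, hi = mr[bone].min(), mr[bone].max()
        hu[bone] = 1000 - (mr[bone] - lo) / (hi - lo) * 800     # 1000 .. 200 HU (inverse)
    return hu
```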

  4. Diurnal variations of ELF transients and background noise in the Schumann resonance band

    NASA Astrophysics Data System (ADS)

    Greenberg, Eran; Price, Colin

    2007-02-01

    Schumann resonances (SR) are resonant electromagnetic waves in the Earth-ionosphere cavity, induced primarily by lightning discharges, with a fundamental frequency of about 8 Hz and higher-order modes separated by approximately 6 Hz. The SR are made up of the background signal resulting from global lightning activity and extremely low frequency (ELF) transients resulting from particularly intense lightning discharges somewhere on the planet. Since transients within the Earth-ionosphere cavity due to lightning propagate globally in the ELF range, we can monitor and study global ELF transients from a single station. Data from our Negev Desert (Israel) ELF site are collected using two horizontal magnetic induction coils and a vertical electric field ball antenna, monitored in the 5-40 Hz range with a sampling frequency of 250 Hz. In this paper we present statistics related to the probability distribution of ELF transients and background noise in the time domain and its temporal variations during the day. Our results show that the ELF signal in the time domain follows the normal distribution very well. The σ parameter exhibits three peaks at 0800, 1400, and 2000 UT, which are related to the three main global lightning activity centers in Asia, Africa, and America, respectively. Furthermore, the occurrence of intense ELF events obeys the Poisson distribution, with such intense events occurring every ~10 s, depending on the time of the day. We found that the diurnal changes of the σ parameter are several percent of the mean, while for the number of intense events per minute, the diurnal changes are tens of percent about the mean. We also present the diurnal changes of the SR intensities in the frequency domain as observed at our station. To better understand the diurnal variability of the observations, we simulated the measured ELF background noise using space observations as input, as detected by the Optical Transient Detector (OTD). The most active center which is reflected from both ELF measurements and OTD observations is in Africa. However, the second most active center on the basis of ELF measurements appears to be Asia, while OTD observations show that the American center is more active than the Asian center. These differences are discussed. This paper contributes to our understanding of the origin of the SR by comparing different lightning data sets: background electromagnetic radiation and optical emission observed from space.

  5. A model of binding on DNA microarrays: understanding the combined effect of probe synthesis failure, cross-hybridization, DNA fragmentation and other experimental details of affymetrix arrays

    PubMed Central

    2012-01-01

    Background DNA microarrays are used both for research and for diagnostics. In research, Affymetrix arrays are commonly used for genome wide association studies, resequencing, and for gene expression analysis. These arrays provide large amounts of data. This data is analyzed using statistical methods that quite often discard a large portion of the information. Most of the information that is lost comes from probes that systematically fail across chips and from batch effects. The aim of this study was to develop a comprehensive model for hybridization that predicts probe intensities for Affymetrix arrays and that could provide a basis for improved microarray analysis and probe development. The first part of the model calculates probe binding affinities to all the possible targets in the hybridization solution using the Langmuir isotherm. In the second part of the model we integrate details that are specific to each experiment and contribute to the differences between hybridization in solution and on the microarray. These details include fragmentation, wash stringency, temperature, salt concentration, and scanner settings. Furthermore, the model fits probe synthesis efficiency and target concentration parameters directly to the data. All the parameters used in the model have a well-established physical origin. Results For the 302 chips that were analyzed the mean correlation between expected and observed probe intensities was 0.701 with a range of 0.88 to 0.55. All available chips were included in the analysis regardless of the data quality. Our results show that batch effects arise from differences in probe synthesis, scanner settings, wash strength, and target fragmentation. We also show that probe synthesis efficiencies for different nucleotides are not uniform. Conclusions To date this is the most complete model for binding on microarrays. This is the first model that includes both probe synthesis efficiency and hybridization kinetics/cross-hybridization. These two factors are sequence dependent and have a large impact on probe intensity. The results presented here provide novel insight into the effect of probe synthesis errors on Affymetrix microarrays; furthermore, the algorithms developed in this work provide useful tools for the analysis of cross-hybridization, probe synthesis efficiency, fragmentation, wash stringency, temperature, and salt concentration on microarray intensities. PMID:23270536
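
    A hedged sketch of the first part of the model as described above, where probe intensity follows a Langmuir isotherm in the combined target concentrations and binding affinities; the saturation intensity, affinities, concentrations and additive background are illustrative assumptions, and the sum over targets is only a crude stand-in for the cross-hybridization term of the full model.

```python
# Hedged sketch: Langmuir-isotherm probe intensity with a simple
# cross-hybridization term. All numerical values are illustrative.
import numpy as np

def probe_intensity(concentrations, affinities, i_max=1.0e4, background=50.0):
    """Predicted intensity = saturation * Langmuir occupancy + background."""
    x = np.asarray(concentrations, dtype=float) * np.asarray(affinities, dtype=float)
    occupancy = x.sum() / (1.0 + x.sum())      # fractional probe occupancy
    return i_max * occupancy + background

# One intended target plus a weakly cross-hybridizing target
print(probe_intensity([1e-9, 5e-9], [1e9, 1e6]))
```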

  6. The effect of convection on infrared detection by antennal warm cells in the bloodsucking bug Rhodnius prolixus

    PubMed Central

    Tichy, Harald

    2015-01-01

    Previous work revealed that bloodsucking bugs can discriminate between oscillating changes in infrared (IR) radiation and air temperature (T) using two types of warm cells located in peg-in-pit sensilla and tapered hairs (Zopf LM, Lazzari CR, Tichy H. J Neurophysiol 111: 1341–1349, 2014). These two stimuli are encoded and discriminated by the response quotient of the two warm cell types. IR radiation stimulates the warm cell in the peg-in-pit sensillum more strongly than that in the tapered hair. T stimuli evoke the reverse responses; they stimulate the latter more strongly than the former. In nature, IR and T cues are always present with certain radiation intensities and air temperatures, here referred to as background IR radiation and background T. In this article, we found that the response quotient permits the discrimination of IR and T oscillations even in the presence of different backgrounds. We show that the two warm cells respond well to IR oscillations if the background T operates by natural convection but poorly at forced convection, even if the background T is higher than at natural convection. Background IR radiation strongly affects the responses to T oscillations: the discharge rates of both warm cells are higher the higher the power of the IR background. We compared the warm cell responses with the T measured inside small model objects shaped like a cylinder, a cone, or a disc. The experiments indicate that passive thermal effects of the sense organs rather than intrinsic properties of the sensory cells are responsible for the observed results. PMID:25609113

  7. Respiratory allergy to Blomia tropicalis: Immune response in four syngeneic mouse strains and assessment of a low allergen-dose, short-term experimental model

    PubMed Central

    2010-01-01

    Background The dust mite Blomia tropicalis is an important source of aeroallergens in tropical areas. Although a mouse model for B. tropicalis extract (BtE)-induced asthma has been described, no study comparing different mouse strains in this asthma model has been reported. The relevance and reproducibility of experimental animal models of allergy depends on the genetic background of the animal, the molecular composition of the allergen and the experimental protocol. Objectives This work had two objectives. The first was to study the anti-B. tropicalis allergic responses in different mouse strains using a short-term model of respiratory allergy to BtE. This study included the comparison of the allergic responses elicited by BtE with those elicited by ovalbumin in mice of the strain that responded better to BtE sensitization. The second objective was to investigate whether the best responder mouse strain could be used in an experimental model of allergy employing relatively low BtE doses. Methods Groups of mice of four different syngeneic strains were sensitized subcutaneously with 100 μg of BtE on days 0 and 7 and challenged four times intranasally, at days 8, 10, 12, and 14, with 10 μg of BtE. A/J mice, that were the best responders to BtE sensitization, were used to compare the B. tropicalis-specific asthma experimental model with the conventional experimental model of ovalbumin (OVA)-specific asthma. A/J mice were also sensitized with a lower dose of BtE. Results Mice of all strains had lung inflammatory-cell infiltration and increased levels of anti-BtE IgE antibodies, but these responses were significantly more intense in A/J mice than in CBA/J, BALB/c or C57BL/6J mice. Immunization of A/J mice with BtE induced a more intense airway eosinophil influx, higher levels of total IgE, similar airway hyperreactivity to methacholine but less intense mucous production, and lower levels of specific IgE, IgG1 and IgG2 antibodies than sensitization with OVA. Finally, immunization with a relatively low BtE dose (10 μg per subcutaneous injection per mouse) was able to sensitize A/J mice, which were the best responders to high-dose BtE immunization, for the development of allergy-associated immune and lung inflammatory responses. Conclusions The described short-term model of BtE-induced allergic lung disease is reproducible in different syngeneic mouse strains, and mice of the A/J strain was the most responsive to it. In addition, it was shown that OVA and BtE induce quantitatively different immune responses in A/J mice and that the experimental model can be set up with low amounts of BtE. PMID:20433763

  8. Modelling of flow processes of the structured two-phase disperse systems (solid phase-liquid medium).

    PubMed

    Uriev, N B; Kuchin, I V

    2007-10-31

    A review of the basic theories and models of shear flow of suspensions is presented, together with results of modeling of structured suspensions under flow conditions. The physical backgrounds and conditions of macroscopic discontinuity in the behaviour of highly concentrated systems are analyzed. The use of surfactants and imposed vibration for regulation of the rheological properties of suspensions is considered. A review of recent approaches and methods of computer simulation of concentrated suspensions is undertaken and results of computer simulation of suspensions are presented. Formation and destruction of the structure of suspensions under static and dynamic conditions (including imposed combined shear and orthogonal oscillations) are discussed. The influence of the interaction of particles, as well as of some parameters characterizing the type and intensity of external perturbations, on suspension behaviour is demonstrated.

  9. Using an intense laser beam in interaction with muon/electron beam to probe the noncommutative QED

    NASA Astrophysics Data System (ADS)

    Tizchang, S.; Batebi, S.; Haghighat, M.; Mohammadi, R.

    2017-02-01

    It is known that linearly polarized photons can partly transform to circularly polarized ones via forward Compton scattering in a background such as an external magnetic field or noncommutative space-time. Based on this fact we explore the effects of the NC background on the scattering of a linearly polarized laser beam from an intense beam of charged leptons. We show that for a muon/electron beam flux ε̄_{μ,e} ~ 10^12/10^10 TeV cm^-2 s^-1 and a linearly polarized laser beam with energy k_0 ~ 1 eV and average power P̄_laser ≃ 10^3 kW, the generation rate of circularly polarized photons is about R_V ~ 10^4 /s for a noncommutative energy scale Λ_NC ~ 10 TeV. This is fairly large and can grow for more intense beams in the near future.

  10. Exploring the dusty star-formation in the early Universe using intensity mapping

    NASA Astrophysics Data System (ADS)

    Lagache, Guilaine

    2018-05-01

    In the last decade, it has become clear that the dust-enshrouded star formation contributes significantly to early galaxy evolution. Detection of dust is therefore essential in determining the properties of galaxies in the high-redshift universe. This requires observations at the (sub-)millimeter wavelengths. Unfortunately, sensitivity and background confusion of single dish observations on the one hand, and mapping efficiency of interferometers on the other hand, pose unique challenges to observers. One promising route to overcome these difficulties is intensity mapping of fluctuations which exploits the confusion-limited regime and measures the collective light emission from all sources, including unresolved faint galaxies. We discuss in this contribution how 2D and 3D intensity mapping can measure the dusty star formation at high redshift, through the Cosmic Infrared Background (2D) and [CII] fine structure transition (3D) anisotropies.

  11. Physical fitness predicts technical-tactical and time-motion profile in simulated Judo and Brazilian Jiu-Jitsu matches

    PubMed Central

    Gentil, Paulo; Bueno, João C.A.; Follmer, Bruno; Marques, Vitor A.; Del Vecchio, Fabrício B.

    2018-01-01

    Background Among combat sports, Judo and Brazilian Jiu-Jitsu (BJJ) present elevated physical fitness demands from the high-intensity intermittent efforts. However, information regarding how metabolic and neuromuscular physical fitness is associated with technical-tactical performance in Judo and BJJ fights is not available. This study aimed to relate indicators of physical fitness with combat performance variables in Judo and BJJ. Methods The sample consisted of Judo (n = 16) and BJJ (n = 24) male athletes. At the first meeting, the physical tests were applied and, in the second, simulated fights were performed for later notational analysis. Results The main findings indicate: (i) high reproducibility of the proposed instrument and protocol used for notational analysis in a mobile device; (ii) differences in the technical-tactical and time-motion patterns between modalities; (iii) performance-related variables are different in Judo and BJJ; and (iv) regression models based on metabolic fitness variables may account for up to 53% of the variances in technical-tactical and/or time-motion variables in Judo and up to 31% in BJJ, whereas neuromuscular fitness models can reach values up to 44 and 73% of prediction in Judo and BJJ, respectively. When all components are combined, they can explain up to 90% of high intensity actions in Judo. Discussion In conclusion, performance prediction models in simulated combat indicate that anaerobic, aerobic and neuromuscular fitness variables contribute to explain time-motion variables associated with high intensity and technical-tactical variables in Judo and BJJ fights. PMID:29844991

  12. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient and high-performance circuits able to process high definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for the segmentation of the background that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that overcome previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
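
    For comparison with the hardware cores, the sketch below runs the same OpenCV Gaussian Mixture Model background segmentation (the MOG2 implementation) in software on a video file; the file name and parameter values are placeholders.

```python
# Hedged sketch: software reference for GMM-based background segmentation.
import cv2

cap = cv2.VideoCapture("video.mp4")   # placeholder input path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)   # 0 = background, 255 = foreground
    # Downstream steps (e.g. contour extraction of moving objects) would go here.
cap.release()
```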

  13. Relationship between Frequency and Intensity of Physical Activity and Health Behaviors of Adolescents

    ERIC Educational Resources Information Center

    Delisle, Tony T.; Werch, Chudley E.; Wong, Alvin H.; Bian, Hui; Weiler, Robert

    2010-01-01

    Background: While studies have determined the importance of physical activity in advancing health outcomes, relatively few have explored the relationship between exercise and various health behaviors of adolescents. The purpose of this study is to examine the relationship between frequency and intensity of physical activity and both health risk…

  14. Lecturers' Attitudes on Electronically Supported Pre-Lecturing Material for Intensive Programs: A Case Study

    ERIC Educational Resources Information Center

    Kozaris, Ioannis; Varella, Evangelia A.

    2010-01-01

    In 2006 and 2008, two large trans-national residential summer schools on conservation science were organized as intensive programs. Learners were not only second/third cycle students in both exact sciences and humanities, but further practicing restorers; consequently their educational background, and even their way of approaching scientific…

  15. Reduced modeling of signal transduction – a modular approach

    PubMed Central

    Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter

    2007-01-01

    Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches, further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which makes it possible to dissect the model into smaller modules that are called layers and can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer-based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good approximations, especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. PMID:17854494

  16. Efficacy of Intensive Control of Glucose in Stroke Prevention: A Meta-Analysis of Data from 59197 Participants in 9 Randomized Controlled Trials

    PubMed Central

    Zhang, Chi; Zhou, Yu-Hao; Xu, Chun-Li; Chi, Feng-Ling; Ju, Hai-Ning

    2013-01-01

    Background The efficacy of treatments that lower glucose in reducing the risk of incident stroke remains unclear. We therefore did a systematic review and meta-analysis to evaluate the efficacy of intensive control of glucose in the prevention of stroke. Methodology/Principal Findings We systematically searched Medline, EmBase, and the Cochrane Library for trials published between 1950 and June 2012. We included randomized controlled trials that reported on the effects of intensive control of glucose on incident stroke compared with standard care. Summary estimates of relative risk (RR) reductions were calculated with a random effects model, and the analysis was further stratified by factors that could affect the treatment effects. Of 649 identified studies, we included nine relevant trials, which provided data for 59197 patients and 2037 events of stroke. Overall, intensive control of glucose as compared to standard care had no effect on incident stroke (RR, 0.96; 95%CI 0.88–1.06; P = 0.445). In the stratified analyses, a beneficial effect was seen in trials in which body mass index (BMI) was more than 30 (RR, 0.86; 95%CI: 0.75–0.99; P = 0.041). No other significant differences were detected between the effect of intensive control of glucose and standard care in analyses based on other subset factors. Conclusions/Significance Our study indicated that intensive control of glucose can effectively reduce the risk of incident stroke in patients with a BMI of more than 30. PMID:23372729
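
    For readers unfamiliar with the pooling step, the sketch below illustrates a DerSimonian-Laird random-effects combination of relative risks of the kind reported above; the trial values are invented, not taken from the nine included trials.

      # Illustrative random-effects pooling of log relative risks (DerSimonian-Laird).
      import numpy as np

      log_rr = np.log([0.92, 1.05, 0.88, 0.99])        # hypothetical trial RRs
      se     = np.array([0.06, 0.08, 0.10, 0.05])      # hypothetical standard errors

      w = 1.0 / se**2                                   # fixed-effect weights
      q = np.sum(w * (log_rr - np.average(log_rr, weights=w))**2)
      c = np.sum(w) - np.sum(w**2) / np.sum(w)
      tau2 = max(0.0, (q - (len(se) - 1)) / c)          # between-trial variance

      w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
      pooled = np.average(log_rr, weights=w_re)
      se_p = np.sqrt(1.0 / np.sum(w_re))
      rr, lo, hi = np.exp([pooled, pooled - 1.96*se_p, pooled + 1.96*se_p])
      print(f"pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")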

  17. [The role of some psychological, psychosocial and obstetrical factors in the intensity of postpartum blues].

    PubMed

    Séjourné, N; Denis, A; Theux, G; Chabrol, H

    2008-04-01

    Within days following birth, most women show signs of mood changes, commonly named baby blues. Baby blues can result in postpartum depression. Hence it appears important to explore in more details the clinical background related to the intensity of postpartum blues. The aim of this study is to investigate the contribution of psychological, psychosocial and obstetrical factors to the intensity of postpartum blues. One hundred and forty-eight women participated in the study and completed questionnaires three days after delivery. A questionnaire was built to collect information on psychosocial and obstetrical factors. The Maternity Blues (Kennerley and Gath, 1989) was used to assess postpartum blues. Psychological factors were measured with the Maternal Self-Report Inventory (Shea et Tronick, 1988), the Perceived Stress Scale (Cohen, Kamarch et Mermelstein, 1983) and the Sarason's Social Support Questionnaire (1983). Four multiple regression analyses were conducted to predict the intensity of postpartum blues by entering psychosocial factors, history of depression, obstetrical factors and psychological and relational factors. Significant predictors (maternal self-esteem, marital status, previous psychotherapeutic treatment, previous antidepressant treatment) were entered in a multiple regression analysis predicting the intensity of postpartum blues. This model accounted for 31% of the variance in the intensity of postpartum blues (F(4, 143)=17.9; P<0.001). Maternal self-esteem (beta=-0.37; P<0.001), marital situation (beta=-0.16; P=0.02) were significant predictors. Previous antidepressant treatment (beta=0.13; P=0.05) was almost a significant predictor. The preventive implication of this study is important. Some psychological and psychosocial variables predicted the intensity of postpartum blues and may be used in order to detect women who exhibit risk factors.

  18. Prevention and treatment of deep vein thrombosis and pulmonary embolism in critically ill patients.

    PubMed

    Yang, Jack C

    2005-01-01

    Deep vein thrombosis and pulmonary embolism remain common problems in the intensive care unit, with limb- and life-threatening complications that are potentially preventable. The intensive care unit clinician is called on to be vigilant with diagnosis and facile with prevention and treatment of thromboembolic disease (venous thromboembolism). This article reviews background, current options, and recommendations regarding the occurrence of deep vein thrombosis and pulmonary embolism in the intensive care unit population.

  19. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit

    PubMed Central

    Hoover, Stephen; Jackson, Eric V.; Paul, David; Locke, Robert

    2016-01-01

    Summary Background Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Objective Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. Methods We used five years of retrospective daily NICU census data for model development (January 2008 – December 2012, N=1827 observations) and one year of data for validation (January – December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. Results The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0), seasonal ARIMA models, ARIMA(1,0,0)×(1,1,2)_7 and ARIMA(2,1,4)×(1,1,2)_14, as well as a seasonal linear regression model. The proposed forecasting models resulted, on average, in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Conclusions Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning. PMID:27437040
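
    A hedged sketch of the seasonal ARIMA fitting described (one of the best-fitting specifications, with a weekly seasonal period) using statsmodels; the census series here is simulated rather than the hospital's data.

      # Fit a seasonal ARIMA(1,0,0)x(1,1,2)_7 to a synthetic daily census series
      # and produce a 7-day forecast.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(0)
      days = pd.date_range("2008-01-01", periods=1827, freq="D")
      census = pd.Series(40 + 0.002*np.arange(1827)
                         + 2*np.sin(2*np.pi*np.arange(1827)/7)
                         + rng.normal(0, 2, 1827), index=days)

      fit = SARIMAX(census, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7)).fit(disp=False)
      print(fit.forecast(steps=7).round(1))            # next week's predicted census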

  20. Solar energetic particle events in different types of solar wind

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahler, S. W.; Vourlidas, A., E-mail: stephen.kahler@kirtland.af.mil

    2014-08-10

    We examine statistically some properties of 96 20 MeV gradual solar energetic proton (SEP) events as a function of three different types of solar wind (SW) as classified by Richardson and Cane. Gradual SEP (E > 10 MeV) events are produced in shocks driven by fast (V ≳ 900 km s^-1) and wide (W > 60°) coronal mass ejections (CMEs). We find no differences among the transient, fast, and slow SW streams for SEP 20 MeV proton event timescales. It has recently been found that the peak intensities Ip of these SEP events scale with the ∼2 MeV proton background intensities, which may be a proxy for the near-Sun shock seed particles. Both the intensities Ip and their 2 MeV backgrounds are significantly enhanced in transient SW compared to those of fast and slow SW streams, and the values of Ip normalized to the 2 MeV backgrounds only weakly correlate with CME V for all SW types. This result implies that forecasts of SEP events could be improved by monitoring both the Sun and the local SW stream properties and that the well known power-law size distributions of Ip may differ between transient and long-lived SW streams. We interpret an observed correlation between CME V and the 2 MeV background for SEP events in transient SW as a manifestation of enhanced solar activity.

  1. Equity in health services use and intensity of use in Canada

    PubMed Central

    Asada, Yukiko; Kephart, George

    2007-01-01

    Background The Canadian health care system has striven to remove financial or other barriers to access to medically necessary health care services since the establishment of the Canada Health Act 20 years ago. Evidence has been conflicting as to what extent the Canadian health care system has met this goal of equitable access. The objective of this study was to examine whether and where socioeconomic inequities in health care utilization occur in Canada. Methods We used a nationally representative cross-sectional survey, the 2000/01 Canadian Community Health Survey, which provides a large sample size (about 110,000) and permits more comprehensive adjustment for need indicators than previous studies. We separately examined general practitioner, specialist, and hospital services using two-part hurdle models: use versus non-use by logistic regression, and the intensity of use among users by zero-truncated negative binomial regression. Results We found that lower income was associated with less contact with general practitioners, but among those who had contact, lower income and education were associated with greater intensity of use of general practitioners. Both lower income and education were associated with less contact with specialists, but there was no statistically significant relationship between these socioeconomic variables and intensity of specialist use among the users. Neither income nor education was statistically significantly associated with use or intensity of use of hospitals. Conclusion Our study unveiled possible socioeconomic inequities in the use of health care services in Canada. PMID:17349059
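
    The two-part hurdle structure can be sketched as follows: a logistic model for contact versus no contact, then a count model for intensity among users only. For simplicity an ordinary (untruncated) negative binomial stands in for the zero-truncated model of the paper, and the data frame and its columns are simulated stand-ins for the survey variables.

      # Two-part hurdle sketch: logit for any use, count model for intensity among users.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 500
      df = pd.DataFrame({"income_q": rng.integers(1, 6, n),
                         "education": rng.integers(1, 5, n),
                         "need": rng.integers(0, 4, n)})
      mu = np.exp(0.2 + 0.15*df["need"] - 0.05*df["income_q"])
      df["gp_visits"] = rng.negative_binomial(2, 2/(2 + mu))   # overdispersed toy counts
      df["any_use"] = (df["gp_visits"] > 0).astype(int)

      # Part 1: probability of any contact
      part1 = smf.logit("any_use ~ income_q + education + need", data=df).fit(disp=False)
      # Part 2: intensity of use among users (untruncated NB as an approximation)
      users = df[df["gp_visits"] > 0]
      part2 = smf.negativebinomial("gp_visits ~ income_q + education + need",
                                   data=users).fit(disp=False)
      print(part1.params, part2.params, sep="\n")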

  2. Impairment of Vision in a Mouse Model of Usher Syndrome Type III.

    PubMed

    Tian, Guilian; Lee, Richard; Ropelewski, Philip; Imanishi, Yoshikazu

    2016-03-01

    The purpose of this study was to obtain an Usher syndrome type III mouse model with a retinal phenotype. A speed congenic method was used to obtain Clrn1 exon 1 knockout (Clrn1-/-) and Clrn1N48K knockin (Clrn1N48K/N48K) mice on the A/J background. To study the retinal functions of these mice, we measured scotopic and photopic ERG responses. To observe if there are any structural abnormalities, we conducted light and transmission electron microscopy of fixed retinal specimens. In 3-month-old Clrn1-/- mice, scotopic b-wave amplitude was reduced by more than 25% at light intensities from -2.2 to 0.38 log cd·s/m^2, but scotopic a-wave amplitudes were comparable to those of age-matched wild type mice at all the light intensities tested. In 9-month-old Clrn1-/- mice, scotopic b-wave amplitudes were further reduced by more than 35%, and scotopic a-wave amplitude also showed a small decline as compared with wild type mice. Photopic ERG responses were comparable between Clrn1-/- and wild type mice. Those electrophysiological defects were not associated with a loss of rods. In Clrn1N48K/N48K mice, both a- and b-wave amplitudes were not discernible from those of wild type mice aged up to 10 months. Biallelic Clrn1-/- mutations cause visual defects when placed on the A/J background. The absence of apparent rod degeneration suggests that the observed phenotype is due to functional defects, and not due to loss of rods. Biallelic Clrn1N48K/N48K mutations did not cause discernible visual defects, suggesting that the Clrn1- allele is more severely dysfunctional than the Clrn1N48K allele.

  3. Contrast-enhanced spectral mammography (CESM) versus breast magnetic resonance imaging (MRI): A retrospective comparison in 66 breast lesions.

    PubMed

    Li, L; Roth, R; Germaine, P; Ren, S; Lee, M; Hunter, K; Tinney, E; Liao, L

    2017-02-01

    The purpose of this study was to retrospectively compare the diagnostic performance of contrast-enhanced spectral mammography (CESM) with that of breast magnetic resonance imaging (BMRI) in breast cancer detection using parameters including sensitivity, positive predictive value (PPV), lesion size, morphology, lesion and background enhancement, and examination time. A total of 48 women (mean age, 56 years ± 10.6 [SD]) with breast lesions detected between October 2012 and March 2014 were included. Both CESM and BMRI were performed for each patient within 30 days. The enhancement intensity of lesions and breast background parenchyma was subjectively assessed for both modalities and was quantified for comparison. Statistical significance was analyzed using a paired t-test for the mean size of index lesions in all malignant breasts (an index lesion defined as the largest lesion in each breast), and a mean score of enhancement intensity for index lesions and breast background. PPV, sensitivity, and accuracy were calculated for both CESM and BMRI. The average duration of CESM and MRI examinations was also compared. A total of 66 lesions were identified, including 62 malignant and 4 benign lesions. Both CESM and BMRI demonstrated a sensitivity of 100% for detection of breast cancer. There was no statistically significant difference between the mean size of index lesions (P=0.108). The enhancement intensity of breast background was significantly lower for CESM than for BMRI (P<0.01). The mean score of enhancement intensity of index lesions on CESM was significantly less than that for BMRI (P<0.01). The smallest lesion that was detected by both modalities measured 4 mm. CESM had a higher PPV than BMRI (P>0.05). The average examination time for CESM was significantly shorter than that of BMRI (P<0.01). CESM has similar sensitivity to BMRI in breast cancer detection, with higher PPV and less background enhancement. CESM is associated with a significantly shorter examination time and is thus a more accessible alternative to BMRI, with the potential to become an important tool in breast cancer detection and staging. Copyright © 2016 Éditions françaises de radiologie. Published by Elsevier Masson SAS. All rights reserved.
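
    As a small illustration of the paired comparison reported (e.g., enhancement-intensity scores of the same index lesions on the two modalities), with invented values:

      # Paired t-test on hypothetical per-lesion enhancement scores (CESM vs. BMRI).
      import numpy as np
      from scipy import stats

      cesm = np.array([2.1, 1.8, 2.5, 1.9, 2.2, 2.0])
      bmri = np.array([2.9, 2.4, 3.1, 2.6, 3.0, 2.7])
      t, p = stats.ttest_rel(cesm, bmri)
      print(f"t = {t:.2f}, p = {p:.4f}")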

  4. Sexual selection on cuticular hydrocarbons in the Australian field cricket, Teleogryllus oceanicus

    PubMed Central

    Thomas, Melissa L; Simmons, Leigh W

    2009-01-01

    Background Females in a wide range of taxa have been shown to base their choice of mates on pheromone signals. However, little research has focussed specifically on the form and intensity of selection that mate choice imposes on the pheromone signal. Using multivariate selection analysis, we characterise directly the form and intensity of sexual selection acting on cuticular hydrocarbons, chemical compounds widely used in the selection of mates in insects. Using the Australian field cricket Teleogryllus oceanicus as a model organism, we use three measures of male attractiveness to estimate fitness: mating success, the duration of courtship required to elicit copulation, and subsequent spermatophore attachment duration. Results We found that all three measures of male attractiveness generated sexual selection on male cuticular hydrocarbons; however, there were differences in the form and intensity of selection among these three measures. Mating success was the only measure of attractiveness that imposed both univariate linear and quadratic selection on cuticular hydrocarbons. Although we found that all three attractiveness measures generated nonlinear selection, again only mating success was found to exert statistically significant stabilizing selection. Conclusion This study shows that sexual selection plays an important role in the evolution of male cuticular hydrocarbon signals. PMID:19594896
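
    A minimal sketch of a Lande-Arnold-style selection-gradient regression of the kind used in such multivariate selection analyses, with synthetic trait and fitness values (not the cricket data):

      # Relative fitness regressed on standardized traits plus quadratic terms.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 200
      chc = rng.normal(0, 1, (n, 3))                    # standardized CHC trait scores
      w = 1 + 0.3*chc[:, 0] - 0.2*chc[:, 1]**2 + rng.normal(0, 0.5, n)
      w = w / w.mean()                                  # relative fitness

      X = sm.add_constant(np.column_stack([chc, chc**2]))
      fit = sm.OLS(w, X).fit()
      print(fit.params)   # linear (directional) and quadratic selection coefficients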

  5. A Markov Environment-dependent Hurricane Intensity Model and Its Comparison with Multiple Dynamic Models

    NASA Astrophysics Data System (ADS)

    Jing, R.; Lin, N.; Emanuel, K.; Vecchi, G. A.; Knutson, T. R.

    2017-12-01

    A Markov environment-dependent hurricane intensity model (MeHiM) is developed to simulate the climatology of hurricane intensity given the surrounding large-scale environment. The model considers three unobserved discrete states representing respectively storm's slow, moderate, and rapid intensification (and deintensification). Each state is associated with a probability distribution of intensity change. The storm's movement from one state to another, regarded as a Markov chain, is described by a transition probability matrix. The initial state is estimated with a Bayesian approach. All three model components (initial intensity, state transition, and intensity change) are dependent on environmental variables including potential intensity, vertical wind shear, midlevel relative humidity, and ocean mixing characteristics. This dependent Markov model of hurricane intensity shows a significant improvement over previous statistical models (e.g., linear, nonlinear, and finite mixture models) in estimating the distributions of 6-h and 24-h intensity change, lifetime maximum intensity, and landfall intensity, etc. Here we compare MeHiM with various dynamical models, including a global climate model [High-Resolution Forecast-Oriented Low Ocean Resolution model (HiFLOR)], a regional hurricane model (Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model), and a simplified hurricane dynamic model [Coupled Hurricane Intensity Prediction System (CHIPS)] and its newly developed fast simulator. The MeHiM developed based on the reanalysis data is applied to estimate the intensity of simulated storms to compare with the dynamical-model predictions under the current climate. The dependences of hurricanes on the environment under current and future projected climates in the various models will also be compared statistically.
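
    The Markov-chain backbone of such a model can be sketched as follows; the transition matrix and state-dependent intensity-change distributions below are illustrative placeholders, not the fitted, environment-dependent MeHiM parameters.

      # Toy three-state Markov simulation of 6-h hurricane intensity changes.
      import numpy as np

      rng = np.random.default_rng(42)
      P = np.array([[0.80, 0.15, 0.05],    # rows/cols: slow, moderate, rapid
                    [0.20, 0.60, 0.20],
                    [0.10, 0.30, 0.60]])
      dV = [(0.0, 2.0), (5.0, 3.0), (15.0, 5.0)]   # (mean, sd) intensity change in kt

      state, v = 0, 35.0                   # initial state and intensity (kt)
      track = [v]
      for _ in range(20):                  # twenty 6-h steps
          state = rng.choice(3, p=P[state])
          mu, sd = dV[state]
          v = max(15.0, v + rng.normal(mu, sd))
          track.append(v)
      print(np.round(track, 1))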

  6. Language Learning Strategy Use of ESL Students in an Intensive English Learning Context

    ERIC Educational Resources Information Center

    Hong-Nam, Kyungsim; Leavell, Alexandra G.

    2006-01-01

    This study investigated the language learning strategy use of 55 ESL students with differing cultural and linguistic backgrounds enrolled in a college Intensive English Program (IEP). The IEP is a language learning institute for pre-admissions university ESL students, and is an important step in developing not only students' basic Interpersonal…

  7. Differences in Vigorous and Moderate Physical Activity by Gender, Race/Ethnicity, Age, Education, and Income among U.S. Adults

    ERIC Educational Resources Information Center

    Seo, Dong-Chul; Torabi, Mohammad

    2007-01-01

    Background: Inconsistent findings exist regarding correlates of physical activity (PA) in the literature. Leisure-time physical activity among U.S. adults has declined for the last decade. Purpose: This article examines differences in vigorous-intensity and moderate-intensity physical activity by gender, race/ethnicity, age, education, and income…

  8. The Development of Emotion-Processing in Children: Effects of Age, Emotion, and Intensity

    ERIC Educational Resources Information Center

    Herba, Catherine M.; Landau, Sabine; Russell, Tamara; Ecker, Christine; Phillips, Mary L.

    2006-01-01

    Background: This study examined the effects of age and two novel factors (intensity and emotion category) on healthy children's developing emotion-processing from 4 to 15 years using two matching paradigms. Methods: An explicit emotion-matching task was employed in which children matched the emotion of a target individual, and an implicit task…

  9. Integrated systems to improve care for very high intensity users of hospital emergency department and for long-term conditions in the community.

    PubMed

    Rea, Harry; Kenealy, Tim; Horwood, Fiona; Sheridan, Nicolette; Parsons, Matthew; Wemekamp, Beverly; Winter, Fionna; Maingay, Gray; Degeling, Pieter

    2010-08-13

    Adult patients who are very high intensity users of hospital emergency departments (VHIU) have complex medical and psychosocial needs. Their care is often poorly coordinated and expensive. Substantial health and social resources may be available to these patients, but their use is often ineffective for a variety of reasons. In 2009 Counties Manukau District Health Board approved a business case for a programme designed to improve the care of VHIU patients identified at Middlemore Hospital. The model of care includes medical and social review, a multidisciplinary planning approach with a designated 'navigator' and assertive follow-up, self and family management, and involvement of community-based organisations, primary care and secondary care. The model has been organised around geographic localities and alongside other initiatives. An intermediate care team has been established to attend to the current presenting problems; however, the main emphasis is on optimising ongoing care and reducing subsequent admissions, especially by connecting patients with primary health care. This whole process could be driven by the primary care sector in due course. The background and initial experience with implementation are described.

  10. Seismic hazard analysis for Jayapura city, Papua

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robiana, R., E-mail: robiana-geo104@yahoo.com; Cipta, A.

    Jayapura city experienced a destructive earthquake on June 25, 1976 with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source models are used: a subduction model, representing the New Guinea Trench subduction zone (North Papuan Thrust); fault models, derived from the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and 7 background models to accommodate unknown earthquakes. Amplification factors derived from geomorphological approaches are corrected with measurement data, which relate to rock type and the depth of soft soil. Site classes in Jayapura city can be grouped into classes B, C, D and E, with amplification between 0.5 and 6. Hazard maps are presented for a 10% probability of earthquake occurrence within a period of 500 years for the dominant periods of 0.0, 0.2, and 1.0 seconds.
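
    As a quick check of the hazard-map convention quoted above, under a Poisson occurrence model an exceedance probability p over exposure time t corresponds to a return period T = -t / ln(1 - p):

      import math
      p, t = 0.10, 500.0                       # 10% in 500 years, as stated
      print(round(-t / math.log(1.0 - p)))     # return period of roughly 4746 years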

  11. Atmospheric consequences of cosmic ray variability in the extragalactic shock model: 2. Revised ionization levels and their consequences

    NASA Astrophysics Data System (ADS)

    Melott, Adrian L.; Atri, Dimitra; Thomas, Brian C.; Medvedev, Mikhail V.; Wilson, Graham W.; Murray, Michael J.

    2010-08-01

    It has been suggested that galactic shock asymmetry induced by our galaxy's infall toward the Virgo Cluster may be a source of periodicity in cosmic ray exposure as the solar system oscillates perpendicular to the galactic plane, thereby inducing an observed terrestrial periodicity in biodiversity. There are a number of plausible mechanisms by which cosmic rays might affect terrestrial biodiversity. Here we investigate one of these mechanisms, the consequent ionization and dissociation in the atmosphere, resulting in changes in atmospheric chemistry that culminate in the depletion of ozone and a resulting increase in the dangerous solar UVB flux on the ground. We use a heuristic model of the cosmic ray intensity enhancement originally suggested by Medvedev and Melott (2007) to compute steady-state atmospheric effects. This paper is a reexamination of an issue we have studied before with a simplified approximation for the distribution of incidence angles. The new results are based on an improved ionization background computation averaged over a massive ensemble (about 7 × 10^5) of shower simulations at various energies and incidence angles. We adopt a range with a minimal model and a fit to full exposure to the postulated extragalactic background. The atmospheric effects are greater than they were with our earlier, simplified ionization model. At the lower end of the intensity range, we find that the effects are too small to be of serious consequence. At the upper end of this range, ~6% global average loss of ozone column density exceeds that currently experienced due to anthropogenic effects such as accumulated chlorofluorocarbons. We discuss some of the possible effects. The intensity of the atmospheric effects is less than those of a nearby supernova or galactic γ-ray burst, but the duration of the effects would be about 10^6 times longer. The present UVB enhancement from the current ~3% ozone depletion is a documented stress on the biosphere, but a depletion of the magnitude found at the upper end of our range would approximately double the global average UVB flux. We conclude that for estimates at the upper end of the reasonable range of the cosmic ray variability over geologic time, the mechanism of atmospheric ozone depletion may provide a major biological stress, which could easily bring about major loss of biodiversity. It is possible that future high-energy astrophysical observations will resolve the question of whether such depletion is likely.

  12. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a very challenging task. The main challenge for target tracking is to account for the appearance change of an object that is submerged in a cluttered background. An efficient appearance model that exploits both the global template and local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate the confidence value that assigns more weights to target templates than negative background templates. In the CNGM model, simple cell feature maps are obtained by calculating the convolution between target templates and fixed filters, which are extracted from the target region at the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, thereby encoding its local structural information. Then, all the maps form a representation, preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model. The same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, collaborative confidence values of particles are utilized to generate particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real time and provides a higher accuracy than state-of-the-art algorithms.

  13. Model observer design for multi-signal detection in the presence of anatomical noise

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Park, Subok

    2017-02-01

    As psychophysical studies are resource-intensive to conduct, model observers are commonly used to assess and optimize medical imaging quality. Model observers are typically designed to detect at most one signal. However, in clinical practice, there may be multiple abnormalities in a single image set (e.g. multifocal multicentric (MFMC) breast cancer), which can impact treatment planning. Prevalence of signals can be different across anatomical regions, and human observers do not know the number or location of signals a priori. As new imaging techniques have the potential to improve multiple-signal detection (e.g. digital breast tomosynthesis may be more effective for diagnosis of MFMC than mammography), image quality assessment approaches addressing such tasks are needed. In this study, we present a model observer to detect multiple signals in an image dataset. A novel implementation of partial least squares (PLS) was developed to estimate different sets of efficient channels directly from the images. The PLS channels are adaptive to the characteristics of signals and the background, and they capture the interactions among signal locations. Corresponding linear decision templates are employed to generate both image-level and location-specific scores on the presence of signals. Our results show that: (1) the model observer can achieve high performance with a reasonably small number of channels; (2) the model observer with PLS channels outperforms that with benchmark modified Laguerre-Gauss channels, especially when realistic signal shapes and complex background statistics are involved; (3) the tasks of clinical interest, and other constraints such as sample size would alter the optimal design of the model observer.
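
    A minimal sketch of estimating data-driven channels with partial least squares, in the spirit of the approach described (the full channelized model observer and its location-specific decision templates are more involved); the images and labels below are synthetic.

      # Estimate PLS channels from flattened images and known signal labels.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      n, px = 400, 32*32
      images = rng.normal(0, 1, (n, px))              # toy backgrounds, flattened 32x32
      labels = rng.integers(0, 2, n)                  # 1 = signal present
      signal = np.zeros(px); signal[px//2-16:px//2+16] = 0.8
      images[labels == 1] += signal                   # crude additive "signal"

      pls = PLSRegression(n_components=5)             # five PLS channels
      pls.fit(images, labels)
      channels = pls.x_weights_                       # shape (px, 5): channel profiles
      scores = images @ channels                      # channelized data for the observer
      print(channels.shape, scores.shape)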

  14. Label fusion based brain MR image segmentation via a latent selective model

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirement for higher accuracy, faster segmentation, and robustness is always a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model to perform segmentation of specific anatomical structures for human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and we regard it as an isolated label to keep the same privilege between the background and the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and an expectation-maximization algorithm to estimate the labels of the target scan to produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  15. Comparing Alternative Voices in the Academy: Navigating the Complexity of Mentoring Relationships from Divergent Ethnic Backgrounds

    ERIC Educational Resources Information Center

    Mackey, Hollie; Shannon, Katheryn

    2014-01-01

    In this article, we explore the personal mentoring experiences of two female scholars of diverse ethnic backgrounds across research-intensive institutions. Female faculty of color face substantial barriers to success in academe including mental and emotional discomfort, being treated as symbolically representing their race and gender, and social…

  16. Effectiveness of a Short, Intense Bridging Course for Scaffolding Students Commencing University-level Study of Chemistry

    NASA Astrophysics Data System (ADS)

    Schmid, Siegbert; Youl, David J.; George, Adrian V.; Read, Justin R.

    2012-05-01

    Bridging courses designed to support students commencing tertiary education are widespread, and some evidence for the value of semester-length courses has been reported; however, little attention has been paid to short-but-intense bridging courses, and empirical evidence of their effectiveness is particularly sparse. The current study followed the academic performance of a cohort of students enrolled in a first-year chemistry unit of study designed for those with little or no background knowledge of chemistry. The aims of this study are two-fold: first to determine the strength of the linkage between prior knowledge in chemistry and performance on the end of semester exam, and secondly, to explore the reasons for any differences observed. In particular, the value of the week-long intensive-mode bridging course offered by the University of Sydney was analysed. Quantitative and qualitative data were collected. The research has shown that participation in the chemistry bridging course is associated with 'bridging the gap' in academic performance between students with No Prior Chemistry background and those with a Strong Background. While the content of the bridging course is a significant contributor to academic success, so too is the confidence in their own learning that the course engenders among participants.

  17. Computer aided weld defect delineation using statistical parametric active contours in radiographic inspection.

    PubMed

    Goumeidane, Aicha Baya; Nacereddine, Nafaa; Khamadja, Mohammed

    2015-01-01

    Accurate knowledge of a defect shape is critical for the analysis step in automatic radiographic inspection. Image segmentation is carried out on radiographic images to extract defect indications. This paper deals with weld defect delineation in radiographic images. The proposed method is based on a new statistics-based explicit active contour. A combination of local and global modeling of the image pixel intensities is used to push the model to the desired boundaries. Furthermore, other strategies, such as selecting a band around the active contour curve, are proposed to accelerate its evolution and make the convergence speed depend only on the defect size. The experimental results are very promising, since experiments on synthetic and radiographic images show the ability of the proposed model to extract a piecewise homogeneous object from a very inhomogeneous background, even in a poor-quality image.

  18. Clinical innovation for promoting family care in paediatric intensive care: demonstration, role modelling and reflective practice.

    PubMed

    Tomlinson, Patricia S; Thomlinson, Elizabeth; Peden-McAlpine, Cynthia; Kirschbaum, Mark

    2002-04-01

    To explore family caregiving problems in paediatric crisis care and methods that could be applied to move the abstraction of family care to development of specific family interventions. Family centred care has been accepted as the ideal philosophy for holistic health care of children, but methods for its implementation are not well established. In paediatric health crises, family care requires special sensitivity to family needs and a type of complex nursing care for which many practitioners are not sufficiently prepared. Developing family sensitive models of intervention and finding a strategy for transfer of this knowledge to clinical practice is an important challenge facing family nursing today. Social learning theory provides a rich background to explore these issues. Specific techniques of role modelling and reflective practice are suggested as effective approaches to teach family sensitive care in clinical settings where families are part of the care environment.

  19. Impact of Sea Level Rise on Storm Surge and Inundation in the Northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Veeramony, J.

    2016-12-01

    Assessing the impact of climate change on surge and inundation due to tropical cyclones is important for coastal adaptation as well as mitigation efforts. Changes in global climate increase vulnerability of coastal environments to the threat posed by severe storms in a number of ways. Both the intensity of future storms and the return periods of more severe storms are expected to increase significantly. Increasing mean sea levels lead to more areas being inundated due to storm surge and bring the threat of inundation further inland. Rainfall associated with severe storms is also expected to increase substantially, which will add to the intensity of inland flooding and coastal inundation. In this study, we will examine the effects of sea level rise and increasing rainfall intensity using Hurricane Ike as the baseline. The Delft3D modeling system will be set up in nested mode, with the outermost nest covering the Gulf of Mexico. The system will be run in a coupled mode, modeling both waves and the hydrodynamics. The baseline simulation will use atmospheric forcing consisting of the NOAA H*Wind analysis (Powell et al. 1998) for the core hurricane characteristics blended with reanalyzed background winds to create a smooth wind field. The rainfall estimates are obtained from TRMM. From this baseline, a set of simulations will be performed to show the impact of sea level rise and increased rainfall activity on flooding and inundation along the Texas-Louisiana coast.

  20. Convergence of Health Level Seven Version 2 Messages to Semantic Web Technologies for Software-Intensive Systems in Telemedicine Trauma Care

    PubMed Central

    Cook, Timothy Wayne; Cavalini, Luciana Tricai

    2016-01-01

    Objectives To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. Methods This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Results Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated and the RDF-based semantics recovered by the receiving side of the prototype from the shared XML schema. Conclusions This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures compliance of the HL7v2 standard with Semantic Web technologies. PMID:26893947
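
    A hedged sketch of the validation step: an XML data instance, as it might be carried inside (or extracted from) an HL7v2 message, checked against an XML Schema with lxml; the schema and instance below are minimal placeholders, not the PHTLS ABCDE reference model.

      # Validate a small XML payload against an XML Schema definition.
      from lxml import etree

      schema_doc = etree.XML(b"""<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="observation">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="code" type="xs:string"/>
              <xs:element name="value" type="xs:decimal"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:schema>""")
      schema = etree.XMLSchema(schema_doc)

      instance = etree.XML(b"<observation><code>HR</code><value>88</value></observation>")
      print("valid:", schema.validate(instance))      # True if the payload conforms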

  1. The effect of convection on infrared detection by antennal warm cells in the bloodsucking bug Rhodnius prolixus.

    PubMed

    Tichy, Harald; Zopf, Lydia M

    2015-04-01

    Previous work revealed that bloodsucking bugs can discriminate between oscillating changes in infrared (IR) radiation and air temperature (T) using two types of warm cells located in peg-in-pit sensilla and tapered hairs (Zopf LM, Lazzari CR, Tichy H. J Neurophysiol 111: 1341-1349, 2014). These two stimuli are encoded and discriminated by the response quotient of the two warm cell types. IR radiation stimulates the warm cell in the peg-in-pit sensillum more strongly than that in the tapered hair. T stimuli evoke the reverse responses; they stimulate the latter more strongly than the former. In nature, IR and T cues are always present with certain radiation intensities and air temperatures, here referred to as background IR radiation and background T. In this article, we found that the response quotient permits the discrimination of IR and T oscillations even in the presence of different backgrounds. We show that the two warm cells respond well to IR oscillations if the background T operates by natural convection but poorly at forced convection, even if the background T is higher than at natural convection. Background IR radiation strongly affects the responses to T oscillations: the discharge rates of both warm cells are higher the higher the power of the IR background. We compared the warm cell responses with the T measured inside small model objects shaped like a cylinder, a cone, or a disc. The experiments indicate that passive thermal effects of the sense organs rather than intrinsic properties of the sensory cells are responsible for the observed results. Copyright © 2015 the American Physiological Society.

  2. Reionization Simulations Powered by Graphics Processing Units. I. On the Structure of the Ultraviolet Radiation Field

    NASA Astrophysics Data System (ADS)

    Aubert, Dominique; Teyssier, Romain

    2010-11-01

    We present a set of cosmological simulations with radiative transfer in order to model the reionization history of the universe from z = 18 down to z = 6. Galaxy formation and the associated star formation are followed self-consistently with gas and dark matter dynamics using the RAMSES code, while radiative transfer is performed as a post-processing step using a moment-based method with the M1 closure relation in the ATON code. The latter has been ported to a multiple Graphics Processing Unit (GPU) architecture using the CUDA language together with the MPI library, resulting in an overall acceleration that allows us to tackle radiative transfer problems at a significantly higher resolution than previously reported: 1024^3 + 2 levels of refinement for the hydrodynamic adaptive grid and 1024^3 for the radiative transfer Cartesian grid. We reach a typical acceleration factor close to 100× when compared to the CPU version, allowing us to perform 1/4 million time steps in less than 3000 GPU hr. We observe good convergence properties between our different resolution runs for various volume- and mass-averaged quantities such as neutral fraction, UV background, and Thomson optical depth, as long as the effects of finite resolution on the star formation history are properly taken into account. We also show that the neutral fraction depends on the total mass density, in a way close to the predictions of photoionization equilibrium, as long as the effects of self-shielding are included in the background radiation model. Although our simulation suite has reached unprecedented mass and spatial resolution, we still fail to reproduce the z ~ 6 constraints on the neutral fraction of hydrogen and the intensity of the UV background. In order to account for unresolved density fluctuations, we have modified our chemistry solver with a simple clumping factor model. Using our most spatially resolved simulation (12.5 Mpc h^-1 with 1024^3 particles) to calibrate our subgrid model, we have resimulated our largest box (100 Mpc h^-1 with 1024^3 particles) with the modified chemistry, successfully reproducing the observed level of neutral hydrogen in the spectra of high-redshift quasars. We did not, however, reproduce the average photoionization rate inferred from the same observations. We argue that this discrepancy could be partly explained by the fact that the average radiation intensity and the average neutral fraction depend on different regions of the gas density distribution, so that one quantity cannot be simply deduced from the other.

  3. Migrant and minority family members in the intensive care unit. A review of the literature

    PubMed Central

    Quindemil, KettyElena; Anderson, Kathryn Hoehn; Mayer, Hanna

    2013-01-01

    Statistics show that patients with a migrant or minority background are present in significant numbers in the intensive care unit. This also brings their family members into the perspective of nursing, because family members are an inherent part of the intensive care unit. Family-centered care is perhaps most applicable to vulnerable populations like migrant families in the intensive care unit, in order to meet family members' needs. But very little is known about the situation of migrant and minority family members in the intensive care unit. The aim of the study was to explore the state of the science regarding family-centered care in the intensive care unit of patients with a migration background in general and with a possible focus on major migrant populations in Austria—those of former Yugoslavian and Turkish origin. A literature review investigated research articles that contained information on migrant and minority family members in the intensive care unit. Key points in the relevant articles were identified and categorized into themes with an explanation of findings at the end. Seventeen articles fulfilled the inclusion criteria. No article was found regarding the major migrant population groups in Austria. The included articles uncovered five predominant themes: importance of cultural norms, communication, family dynamics, universal caring, and nursing/provider deficit in culturally competent care. In order to provide adequate nursing care, a more cohesive body of information on specific geographic and cultural populations is recommended. Because of the complete lack of research regarding migrant families of former Yugoslavian and Turkish origin in Austria, an exploration of this population is recommended. PMID:24860716

  4. Effects of a 10-Day Intensive Health Promotion Program Combining Diet and Physical Activity on Body Composition, Physical Fitness, and Blood Factors of Young Adults: A Randomized Pilot Study

    PubMed Central

    Lee, Kyoung Soon; Lee, Jae Koo; Yeun, Young Ran

    2017-01-01

    Background A lifestyle characterized by poor eating habits and physical inactivity is a risk factor for multiple lifestyle diseases in young adults. This study assessed the effects of implementing an intensive 10-day health promotion program combining diet and physical activities on body composition, physical fitness, and biochemical parameters of young adults. Material/Methods In this randomized pilot study, 30 female undergraduate students were randomly allocated to an intervention and a control group. The health promotion program consisted of unlimited amounts of vegetarian food; aerobic, flexibility, and strength exercises (3 hours/day); lectures on health (3 hours/day); massage practice (2 hours/day); and healthy cooking practice (1 hour/day). The effects of the intervention were analyzed using the Mann-Whitney U test and the Wilcoxon signed-rank test. Results The intensive 10-day health promotion program significantly reduced body weight, body mass index, triglyceride, total cholesterol, low-density lipoprotein cholesterol, blood glucose, and the homeostasis model assessment of insulin resistance. At the same time, participants demonstrated increased back muscle, leg muscle, and grip strength; waist and shoulder flexibility; balance; and cardiorespiratory endurance. Conclusions The intensive 10-day health promotion program is a viable intervention for improving body composition, physical fitness, glycemic control, and blood lipid levels in young adults. PMID:28399076

  5. Die Fledermaus: Regarding Optokinetic Contrast Sensitivity and Light-Adaptation, Chicks Are Mice with Wings

    PubMed Central

    Shi, Qing; Stell, William K.

    2013-01-01

    Background Through adaptation, animals can function visually under an extremely broad range of light intensities. Light adaptation starts in the retina, through shifts in photoreceptor sensitivity and kinetics plus modulation of visual processing in retinal circuits. Although considerable research has been conducted on retinal adaptation in nocturnal species with rod-dominated retinas, such as the mouse, little is known about how cone-dominated avian retinas adapt to changes in mean light intensity. Methodology/Principal Findings We used the optokinetic response to characterize contrast sensitivity (CS) in the chick retina as a function of spatial frequency and temporal frequency at different mean light intensities. We found that: 1) daytime, cone-driven CS was tuned to spatial frequency; 2) nighttime, presumably rod-driven CS was tuned to temporal frequency and spatial frequency; 3) daytime, presumably cone-driven CS at threshold intensity was invariant with temporal and spatial frequency; and 4) daytime photopic CS was invariant with clock time. Conclusion/Significance Light- and dark-adaptational changes in CS were investigated comprehensively for the first time in the cone-dominated retina of an avian, diurnal species. The chick retina, like the mouse retina, adapts by using a “day/night” or “cone/rod” switch in tuning preference during changes in lighting conditions. The chick optokinetic response is an attractive model for noninvasive, behavioral studies of adaptation in retinal circuitry in health and disease. PMID:24098693

  6. Unified anomaly suppression and boundary extraction in laser radar range imagery based on a joint curve-evolution and expectation-maximization algorithm.

    PubMed

    Feng, Haihua; Karl, William Clem; Castañon, David A

    2008-05-01

    In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.

  7. First 3-D simulations of meteor plasma dynamics and turbulence

    NASA Astrophysics Data System (ADS)

    Oppenheim, Meers M.; Dimant, Yakov S.

    2015-02-01

    Millions of small but detectable meteors hit the Earth's atmosphere every second, creating trails of hot plasma that turbulently diffuse into the background atmosphere. For over 60 years, radars have detected meteor plasmas and used these signals to infer characteristics of the meteoroid population and upper atmosphere, but, despite the importance of meteor radar measurements, the complex processes by which these plasmas evolve have never been thoroughly explained or modeled. In this paper, we present the first fully 3-D simulations of meteor evolution, showing meteor plasmas developing instabilities, becoming turbulent, and inhomogeneously diffusing into the background ionosphere. These instabilities explain the characteristics and strength of many radar observations, in particular the high-resolution nonspecular echoes made by large radars. The simulations reveal how meteors create strong electric fields that dig out deep plasma channels along the Earth's magnetic fields. They also allow researchers to explore the impacts of the intense winds and wind shears, commonly found at these altitudes, on meteor plasma evolution. This study will allow the development of more sophisticated models of meteor radar signals, enabling the extraction of detailed information about the properties of meteoroid particles and the atmosphere.

  8. Water, Sanitation and Hygiene (WASH) and environmental risk factors for soil-transmitted helminth intensity of infection in Timor-Leste, using real time PCR

    PubMed Central

    Nery, Susana V.; Wardell, Rebecca; D’Este, Catherine A.; Gray, Darren J.; McCarthy, James S.; Traub, Rebecca J.; Andrews, Ross M.; Llewellyn, Stacey; Vallely, Andrew J.; Williams, Gail M.; Clements, Archie C. A.

    2017-01-01

    Background No investigations have been undertaken of risk factors for intensity of soil-transmitted helminth (STH) infection in Timor-Leste. This study provides the first analysis of risk factors for intensity of STH infection, as determined by quantitative PCR (qPCR), examining a broad range of water, sanitation and hygiene (WASH) and environmental factors, among communities in Manufahi District, Timor-Leste. Methods A baseline cross-sectional survey of 18 communities was undertaken as part of a cluster randomised controlled trial, with additional identically-collected data from six other communities. qPCR was used to assess STH infection from stool samples, and questionnaires administered to collect WASH, demographic, and socioeconomic data. Environmental information was obtained from open-access sources and linked to infection outcomes. Mixed-effects multinomial logistic regression was undertaken to assess risk factors for intensity of Necator americanus and Ascaris infection. Results 2152 participants provided stool and questionnaire information for this analysis. In adjusted models incorporating WASH, demographic and environmental variables, environmental variables were generally associated with infection intensity for both N. americanus and Ascaris spp. Precipitation (in centimetres) was associated with increased risk of moderate-intensity (adjusted relative risk [ARR] 6.1; 95% confidence interval [CI] 1.9–19.3) and heavy-intensity (ARR 6.6; 95% CI 3.1–14.1) N. americanus infection, as was sandy-loam soil around households (moderate-intensity ARR 2.1; 95% CI 1.0–4.3; heavy-intensity ARR 2.7; 95% CI 1.6–4.5; compared to no infection). For Ascaris, alkaline soil around the household was associated with reduced risk of moderate-intensity infection (ARR 0.21; 95% CI 0.09–0.51), and heavy-intensity infection (ARR 0.04; 95% CI 0.01–0.25). Few WASH risk factors were significant. Conclusion In this high-prevalence setting, strong risk associations with environmental factors indicate that anthelmintic treatment alone will be insufficient to interrupt STH transmission, as conditions are favourable for ongoing environmental transmission. Integrated STH control strategies should be explored as a priority. PMID:28346536
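
    For illustration, the sketch below fits a multinomial model for infection-intensity class (none / moderate / heavy) as a function of two environmental covariates, ignoring the cluster (mixed-effects) structure of the actual analysis; the data are simulated, not the Timor-Leste survey records.

      # Multinomial logistic regression on simulated intensity-class data.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 600
      X = pd.DataFrame({"precip_cm": rng.uniform(10, 40, n),
                        "sandy_loam": rng.integers(0, 2, n)})
      lin = 0.08*X["precip_cm"] + 0.6*X["sandy_loam"] - 2.5
      p_heavy = 1/(1 + np.exp(-lin))
      p_mod = np.minimum(0.3, 1 - p_heavy)
      u = rng.uniform(size=n)
      y = np.where(u < p_heavy, 2, np.where(u < p_heavy + p_mod, 1, 0))  # 0/1/2 classes

      fit = sm.MNLogit(y, sm.add_constant(X)).fit(disp=False)
      print(fit.params)      # one coefficient column per non-reference outcome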

  9. The Impact of Electromagnetic Cascades of Very-high Energy Gamma Rays on the Extragalactic Gamma-ray Background

    NASA Technical Reports Server (NTRS)

    Venters, Tonia

    2012-01-01

    As very high energy (VHE) photons propagate through the extragalactic background light (EBL), they interact with the soft photons of the EBL and initiate electromagnetic cascades of photons and electrons. The collective intensity of a cosmological population emitting at VHEs (such as blazars) will be attenuated at the highest energies through interactions with the EBL and enhanced at lower energies by the resulting cascade. As such, depending on the space density and spectra of the sources and the model of the EBL, cascade radiation can provide a significant contribution to the extragalactic gamma-ray background (EGB). Through deflections of the charged particles of the cascade, an intergalactic magnetic field (IGMF) may leave an imprint on the anisotropy properties of the EGB. The impact of a strong IGMF is to isotropize lower energy cascade photons, inducing a modulation in the anisotropy energy spectrum of the EGB. We discuss the implications of cascade radiation for the origins of the EGB and the nature of the IGMF, as well as insight that will be provided by data from the Fermi Large Area Telescope in the upcoming years.

  10. A Determination of the Intergalactic Redshift Dependent UV-Optical-NIR Photon Density Using Deep Galaxy Survey Data and the Gamma-Ray Opacity of the Universe

    NASA Technical Reports Server (NTRS)

    Stecker, Floyd W.

    2012-01-01

    We calculate the intensity and photon spectrum of the intergalactic background light (IBL) as a function of redshift using an approach based on observational data obtained in different wavelength bands from local to deep galaxy surveys. Our empirically based approach allows us, for the first time, to obtain a completely model-independent determination of the IBL and to quantify its uncertainties. Using our results on the IBL, we then place upper and lower limits on the opacity of the universe to gamma-rays, independent of previous constraints.

  11. Spectral distortion of the CMB by the cumulative CO emission from galaxies throughout cosmic history

    NASA Astrophysics Data System (ADS)

    Mashian, Natalie; Loeb, Abraham; Sternberg, Amiel

    2016-05-01

    We show that the cumulative CO emission from galaxies throughout cosmic history distorts the spectrum of the cosmic microwave background at a level that is well above the detection limit of future instruments, such as the Primordial Inflation Explorer. The modelled CO signal has a prominent bump in the frequency interval 100-200 GHz, with a characteristic peak intensity of ~2 × 10^-23 W m^-2 Hz^-1 sr^-1. Most of the CO foreground originates from modest redshifts, z ~ 2-5, and needs to be efficiently removed for more subtle distortions from the earlier Universe to be detected.

  12. Characterization of plasma wake excitation and particle trapping in the nonlinear bubble regime

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo; Schroeder, Carl; Esarey, Eric; Leemans, Wim

    2010-11-01

    We investigate the excitation of nonlinear wake (bubble) formation by an ultra-short (k_p L ~ 2), intense (e A_laser/mc^2 > 2) laser pulse interacting with an underdense plasma. A detailed analysis of particle orbits in the wakefield is performed by using reduced analytical models and numerical simulations performed with the 2D cylindrical, envelope, ponderomotive, hybrid PIC/fluid code INF&RNO, recently developed at LBNL. In particular we study the requirements for injection and/or trapping of background plasma electrons in the nonlinear wake. Characterization of the phase-space properties of the injected particle bunch will also be discussed.

  13. [Effect of aminothiol antihypoxants on hydration and peroxidation processes in traumatic brain injury].

    PubMed

    Novikov, V E; Ponamareva, N S

    2007-01-01

    Brain hydration (content of total, bound, and free water) and the activity of lipid peroxidation (LPO) processes were studied in rats over the course of traumatic brain injury (TBI). It is established that aminothiol-based antihypoxants such as bemithyl and amthizol, at a dose of 25 mg/kg, alleviate the changes induced by TBI. In particular, the drugs decrease the content of total and free water, increase the level of bound water, and inhibit LPO intensity in the brain. The effect of the drugs is more pronounced on the 4th and 7th day after TBI model induction.

  14. Evaluation of absorbed dose in irradiated sugar-containing plant material (peony roots) by an ESR method

    NASA Astrophysics Data System (ADS)

    Yamaoki, Rumi; Kimura, Shojiro; Ohta, Masatoshi

    2015-12-01

    The relationship between electron spin resonance (ESR) signal intensity of irradiated plant materials and sugar content was investigated by spectral analysis using peony roots. A weak background signal near g=2.005 was observed in the roots. After a 10 kGy irradiation, the ESR line broadened and the intensity increased, and the spectral characteristics were similar to a typical spectrum of irradiated food containing crystalline sugars. The free radical concentration was nearly stable 30 days after irradiation. The spectrum of peony root 30 days after irradiation was simulated using the summation of the intensities of six assumed components: radical signals derived from (a) sucrose, (b) glucose, (c) fructose, (d) cellulose, (e) the background signal near g=2.005 and (f) unidentified component. The simulated spectra using the six components were in agreement with the observed sample spectra. The intensity of sucrose radical signal in irradiated samples increased proportionally up to 20 kGy. In addition, the intensity of sucrose radical signals was strongly correlated with the sucrose contents of the samples. The results showed that the radiation sensitivity of sucrose in peony roots was influenced little by other plant constituents. There was also a good correlation between the total area of the spectra and the sucrose content, because the sucrose content was higher than that of other sugars in the samples. In peony roots, estimation of the absorbed dose from the ESR signal intensity may be possible by a calibration method based on the sucrose content.
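
    The calibration idea sketched in the conclusion, estimating absorbed dose from the sucrose radical ESR intensity after normalising by the sample's sucrose content, can be illustrated with a simple linear fit. The sketch below uses made-up numbers, not data from the study.

    ```python
    # Hypothetical sketch of a sucrose-based ESR dose calibration: fit a linear
    # dose response of the sucrose radical signal, normalised by sucrose content,
    # then invert it for an unknown sample. All numbers are illustrative.
    import numpy as np

    doses_kGy = np.array([0.0, 5.0, 10.0, 15.0, 20.0])      # calibration doses
    esr_intensity = np.array([0.1, 2.6, 5.1, 7.4, 10.2])    # arbitrary units
    sucrose_frac = 0.12                                      # sucrose mass fraction of calibrants

    # Normalise by sucrose content so the calibration transfers between samples
    norm_intensity = esr_intensity / sucrose_frac
    slope, intercept = np.polyfit(doses_kGy, norm_intensity, 1)

    def estimate_dose(intensity, sucrose_fraction):
        """Invert the linear calibration for a sample of known sucrose content."""
        return (intensity / sucrose_fraction - intercept) / slope

    print(estimate_dose(4.0, 0.10))   # estimated absorbed dose in kGy
    ```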

  15. On the Problem of Patient-Specific Endogenous Glucose Production in Neonates on Stochastic Targeted Glycemic Control

    PubMed Central

    Dickson, Jennifer L.; Hewett, James N.; Gunn, Cameron A.; Lynn, Adrienne; Shaw, Geoffrey M.; Chase, Geoffrey

    2013-01-01

    Background: Both stress and prematurity can induce hyperglycemia in the neonatal intensive care unit, which, in turn, is associated with worsened outcomes. Endogenous glucose production (EGP) is the formation of glucose by the body from substrates and contributes to blood glucose (BG) levels. Due to the inherent fragility of the extremely low birth weight (ELBW) neonates, true fasting EGP cannot be explicitly determined, introducing uncertainty into glycemic models that rely on quantifying glucose sources. Stochastic targeting, or STAR, is one such glycemic control framework. Methods: A literature review was carried out to gather metabolic and EGP values on preterm infants with a gestational age (GA) <32 weeks and a birth weight (BW) <2 kg. The data were analyzed for EGP trends with BW, GA, BG, plasma insulin, and glucose infusion (GI) rates. Trends were modeled and compared with a literature-derived range of population constant EGP models using clinically validated virtual trials on retrospective clinical data. Results: No clear relationship was found for EGP and BW, GA, or plasma insulin. Some evidence of suppression of EGP with increasing GI or BG was seen. Virtual trial results showed that population-constant EGP models fit clinical data best and gave tighter control performance to a target band in virtual trials. Conclusions: Variation in EGP cannot easily be quantified, and EGP is sufficiently modeled as a population constant in the neonatal intensive care insulin–nutrition–glucose model. Analysis of the clinical data and fitting error suggests that ELBW hyperglycemic preterm neonates have unsuppressed EGP at the higher end of the range reported in the literature. PMID:23911173

  16. Ganalyzer: A tool for automatic galaxy image analysis

    NASA Astrophysics Data System (ADS)

    Shamir, Lior

    2011-05-01

    Ganalyzer is a model-based tool that automatically analyzes and classifies galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large datasets of galaxy images collected by autonomous sky surveys such as SDSS, LSST or DES.
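
    The core of the pipeline described above — finding the galaxy centre, building radial intensity plots, and turning the drift of their peaks into a spirality measure — can be sketched as follows. This is an illustrative reimplementation of the idea, not the Ganalyzer code; the sampling radii and the way the centre is obtained are assumptions.

    ```python
    # Illustrative sketch (not the Ganalyzer implementation) of the idea above:
    # sample the intensity along circles of increasing radius around the galaxy
    # centre, locate the brightest angle at each radius, and use the drift of
    # that peak angle with radius as a crude spirality measure.
    import numpy as np

    def spirality_sketch(image, cy, cx, radii, n_angles=360):
        """cy, cx: galaxy centre (e.g. an intensity-weighted centroid)."""
        angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
        peak_angles = []
        for r in radii:
            ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
            xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
            ring = image[ys, xs]                  # radial intensity plot at this radius
            peak_angles.append(angles[np.argmax(ring)])
        # Slope of peak angle versus radius: ~0 for elliptical galaxies, nonzero for spirals
        slope = np.polyfit(radii, np.unwrap(peak_angles), 1)[0]
        return slope

    # Usage: spirality = spirality_sketch(img, cy, cx, radii=np.arange(5, 60, 2))
    ```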

  17. QED multi-dimensional vacuum polarization finite-difference solver

    NASA Astrophysics Data System (ADS)

    Carneiro, Pedro; Grismayer, Thomas; Silva, Luís; Fonseca, Ricardo

    2015-11-01

    The Extreme Light Infrastructure (ELI) is expected to deliver peak intensities of 10^23 - 10^24 W/cm^2 allowing to probe nonlinear Quantum Electrodynamics (QED) phenomena in an unprecedented regime. Within the framework of QED, the second order process of photon-photon scattering leads to a set of extended Maxwell's equations [W. Heisenberg and H. Euler, Z. Physik 98, 714] effectively creating nonlinear polarization and magnetization terms that account for the nonlinear response of the vacuum. To model this in a self-consistent way, we present a multi dimensional generalized Maxwell equation finite difference solver with significantly enhanced dispersive properties, which was implemented in the OSIRIS particle-in-cell code [R.A. Fonseca et al. LNCS 2331, pp. 342-351, 2002]. We present a detailed numerical analysis of this electromagnetic solver. As an illustration of the properties of the solver, we explore several examples in extreme conditions. We confirm the theoretical prediction of vacuum birefringence of a pulse propagating in the presence of an intense static background field [arXiv:1301.4918 [quant-ph]].

  18. Physical understanding of the tropical cyclone wind-pressure relationship.

    PubMed

    Chavas, Daniel R; Reed, Kevin A; Knaff, John A

    2017-11-08

    The relationship between the two common measures of tropical cyclone intensity, the central pressure deficit and the peak near-surface wind speed, is a long-standing problem in tropical meteorology that has been approximated empirically yet lacks physical understanding. Here we provide theoretical grounding for this relationship. We first demonstrate that the central pressure deficit is highly predictable from the low-level wind field via gradient wind balance. We then show that this relationship reduces to a dependence on two velocity scales: the maximum azimuthal-mean azimuthal wind speed and half the product of the Coriolis parameter and outer storm size. This simple theory is found to hold across a hierarchy of models spanning reduced-complexity and Earth-like global simulations and observations. Thus, the central pressure deficit is an intensity measure that combines maximum wind speed, storm size, and background rotation rate. This work has significant implications for both fundamental understanding and risk analysis, including why the central pressure better explains historical economic damages than does maximum wind speed.
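
    Since the abstract states that the central pressure deficit follows from the low-level wind field via gradient wind balance, a minimal numerical version of that step is sketched below: integrate dp/dr = ρ(v²/r + f·v) over an assumed wind profile out to the outer radius. The modified-Rankine profile and all parameter values are illustrative assumptions, not those used in the paper.

    ```python
    # Sketch of a gradient-wind-balance estimate of the central pressure deficit:
    # integrate dp/dr = rho * (v^2/r + f*v) from near the storm centre to an outer
    # radius. The wind profile and parameter values are illustrative assumptions.
    import numpy as np

    def pressure_deficit(v_max, r_max, r_outer, lat_deg, rho=1.15, decay=0.5):
        f = 2.0 * 7.292e-5 * np.sin(np.radians(lat_deg))   # Coriolis parameter
        r = np.linspace(1e3, r_outer, 20000)               # radius in metres

        # Assumed azimuthal wind profile: linear inside r_max, power-law decay outside
        v = np.where(r <= r_max,
                     v_max * r / r_max,
                     v_max * (r_max / r) ** decay)

        dpdr = rho * (v ** 2 / r + f * v)                  # gradient wind balance
        return np.trapz(dpdr, r)                           # central pressure deficit in Pa

    print(pressure_deficit(v_max=60.0, r_max=40e3, r_outer=600e3, lat_deg=20.0) / 100.0, "hPa")
    ```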

  19. Simulation and Analysis of Neutron Activation Risk for the IsoDAR High-Intensity Electron Antineutrino Source

    NASA Astrophysics Data System (ADS)

    Skuhersky, Michael

    2013-04-01

    IsoDAR (Isotope Decay-At-Rest) is a proposed high-intensity source of electron antineutrinos intended for use in searches for beyond standard model physics, the main analysis being a short baseline search for sterile neutrinos at a kiloton scale liquid scintillator detector. The source uses a compact cyclotron to deliver 600 kW of protons at 60 MeV/nucleon in the form of H2^+ onto a beryllium target, which produces a large intermediate energy neutron flux. These neutrons thermalize and capture on a 99.9% pure ^7Li sleeve, which produces ^8Li at rest, which subsequently beta decays, producing electron antineutrinos (ν̄e). Due to the high neutron fluxes, large duty factor, and low background environment surrounding the neutrino detector, we need to understand the activation risk and design a shield to minimize this risk, allowing for the safe operation of the source. I will report on my neutron activation studies and the benchmarking of Geant4 for these applications.

  20. Compassion Fatigue and the Healthy Work Environment.

    PubMed

    Kelly, Lesly; Todd, Michael

    2017-01-01

    Burnout is a concern for critical care nurses in high-intensity environments. Studies have highlighted the importance of a healthy work environment in promoting optimal nurse and patient outcomes, but research examining the relationship between a healthy work environment and burnout is limited. To examine how healthy work environment components relate to compassion fatigue (eg, burnout, secondary trauma) and compassion satisfaction. Nurses (n = 105) in 3 intensive care units at an academic medical center completed a survey including the Professional Quality of Life and the American Association of Critical-Care Nurses' Healthy Work Environment standards. Regression models using each Healthy Work Environment component to predict each outcome, adjusting for background variables, showed that the 5 Healthy Work Environment components predicted burnout and that meaningful recognition and authentic leadership predicted compassion satisfaction. Findings on associations between healthy work environment standards and burnout suggest the potential importance of implementing the American Association of Critical-Care Nurses' Healthy Work Environment standards as a mechanism for decreasing burnout. ©2017 American Association of Critical-Care Nurses.

  1. Leonid predictions for the period 2001-2100

    NASA Astrophysics Data System (ADS)

    Maslov, Mikhail

    2007-02-01

    This article provides a set of summaries of what to expect from the Leonid meteor shower for each year of the period 2001-2100. Each summary contains the moments of maximum/maxima, their expected intensity and some comments about average meteor brightness during them. Special attention was paid to background (traditional) maxima, which are characterized by their expected times and intensities.

  2. Early Intensive Behavioral Intervention (EIBI) for Young Children with Autism Spectrum Disorders (ASD): A Systematic Review. Campbell Systematic Reviews 2014:9

    ERIC Educational Resources Information Center

    Reichow, Brian; Barton, Erin E.; Boyd, Brian A.; Hume, Kara

    2014-01-01

    Background: The rising prevalence of autism spectrum disorders (ASD) increases the need for evidence-based behavioral treatments to lessen the impact of symptoms on children's functioning. At present, there are no curative or psychopharmacological therapies to effectively treat all symptoms of the disorder. Early intensive behavioral intervention…

  3. Intensive Interaction Training for Paid Carers: "Looking, Looking and Find out When They Want to Relate to You"

    ERIC Educational Resources Information Center

    Nagra, Maninder K.; White, Rose; Appiah, Afua; Rayner, Kelly

    2017-01-01

    Background: Intensive interaction (II) is a communication approach useful for working with people with severe intellectual disabilities. Health and social care providers offer II training courses to paid carers working in local services with the goal of improving social communication for their clients. Materials and methods: Eight paid carers who…

  4. Turbulence in planetary occultations. IV - Power spectra of phase and intensity fluctuations

    NASA Technical Reports Server (NTRS)

    Haugstad, B. S.

    1979-01-01

    Power spectra of phase and intensity scintillations during occultation by turbulent planetary atmospheres are significantly affected by the inhomogeneous background upon which the turbulence is superimposed. Such coupling is particularly pronounced in the intensity, where there is also a marked difference in spectral shape between a central and grazing occultation. While the former has its structural features smoothed by coupling to the inhomogeneous background, such features are enhanced in the latter. Indeed, the latter power spectrum peaks around the characteristic frequency that is determined by the size of the free-space Fresnel zone and the ray velocity in the atmosphere; at higher frequencies strong fringes develop in the power spectrum. A confrontation between the theoretical scintillation spectra computed here and those calculated from the Mariner 5 Venus mission by Woo et al. (1974) is inconclusive, mainly because of insufficient statistical resolution. Phase and/or intensity power spectra computed from occultation data may be used to deduce characteristics of the turbulence and to distinguish turbulence from other perturbations in the refractive index. Such determinations are facilitated if observations are made at two or more frequencies (radio occultation) or in two or more colors (stellar occultation).

  5. In situ, satellite measurement and model evidence on the dominant regional contribution to fine particulate matter levels in the Paris megacity

    NASA Astrophysics Data System (ADS)

    Beekmann, M.; Prévôt, A. S. H.; Drewnick, F.; Sciare, J.; Pandis, S. N.; Denier van der Gon, H. A. C.; Crippa, M.; Freutel, F.; Poulain, L.; Ghersi, V.; Rodriguez, E.; Beirle, S.; Zotter, P.; von der Weiden-Reinmüller, S.-L.; Bressi, M.; Fountoukis, C.; Petetin, H.; Szidat, S.; Schneider, J.; Rosso, A.; El Haddad, I.; Megaritis, A.; Zhang, Q. J.; Michoud, V.; Slowik, J. G.; Moukhtar, S.; Kolmonen, P.; Stohl, A.; Eckhardt, S.; Borbon, A.; Gros, V.; Marchand, N.; Jaffrezo, J. L.; Schwarzenboeck, A.; Colomb, A.; Wiedensohler, A.; Borrmann, S.; Lawrence, M.; Baklanov, A.; Baltensperger, U.

    2015-08-01

    A detailed characterization of air quality in the megacity of Paris (France) during two 1-month intensive campaigns and from additional 1-year observations revealed that about 70 % of the urban background fine particulate matter (PM) is transported on average into the megacity from upwind regions. This dominant influence of regional sources was confirmed by in situ measurements during short intensive and longer-term campaigns, aerosol optical depth (AOD) measurements from ENVISAT, and modeling results from PMCAMx and CHIMERE chemistry transport models. While advection of sulfate is well documented for other megacities, there was surprisingly high contribution from long-range transport for both nitrate and organic aerosol. The origin of organic PM was investigated by comprehensive analysis of aerosol mass spectrometer (AMS), radiocarbon and tracer measurements during two intensive campaigns. Primary fossil fuel combustion emissions constituted less than 20 % in winter and 40 % in summer of carbonaceous fine PM, unexpectedly small for a megacity. Cooking activities and, during winter, residential wood burning are the major primary organic PM sources. This analysis suggests that the major part of secondary organic aerosol is of modern origin, i.e., from biogenic precursors and from wood burning. Black carbon concentrations are on the lower end of values encountered in megacities worldwide, but still represent an issue for air quality. These comparatively low air pollution levels are due to a combination of low emissions per inhabitant, flat terrain, and a meteorology that is in general not conducive to local pollution build-up. This revised picture of a megacity only being partially responsible for its own average and peak PM levels has important implications for air pollution regulation policies.

  6. Particle sizing by weighted measurements of scattered light

    NASA Technical Reports Server (NTRS)

    Buchele, Donald R.

    1988-01-01

    A description is given of a measurement method, applicable to a poly-dispersion of particles, in which the intensity of scattered light at any angle is weighted by a factor proportional to that angle. Determination is then made of four angles at which the weighted intensity is four fractions of the maximum intensity. These yield four characteristic diameters: the volume/area mean diameter (D_32, the Sauter mean), the volume/diameter mean diameter (D_31), and the diameters at cumulative volume fractions of 0.5 (D_v0.5, the volume median) and 0.75 (D_v0.75). They also yield the volume dispersion of diameters. Mie scattering computations show that an average diameter less than three micrometers cannot be accurately measured. The results are relatively insensitive to extraneous background light and to the nature of the diameter distribution. Also described is an experimental method of verifying the conclusions by using two microscope slides coated with polystyrene microspheres to simulate the particles and the background.

  7. Hydrological Modeling of Storm Water Drainage System due to Frequent and Intense Precipitation of Dhaka city using Storm Water Management Model (SWMM)

    NASA Astrophysics Data System (ADS)

    Hossain, S., Jr.

    2015-12-01

    Rainfall-induced flooding during the rainy season is a regular phenomenon in Dhaka City. Almost every year a significant part of the city suffers badly from drainage congestion. Some highly dense areas with low ground elevation submerge even after a few hours of intense precipitation. The higher areas also suffer from drainage problems due to inadequate maintenance of the system and encroachment or illegal filling of the drainage canals and lakes. Most parts of the city suffered from long-term urban flooding during the historical extreme rainfall events of September 2004, 2007 and July 2009. The situation is likely to worsen in the future due to climate change, which may lead to more frequent and intense precipitation. The aim is to assess the major and minor drainage systems and elements of the urban basins using hydrodynamic modelling and, through this, to identify flooding events and areas, taking into account the current situation and future flood or drainage scenarios. Stormwater modeling has a major role in preventing issues such as flash floods and urban water-quality problems. Stormwater models of lower spatial resolution would be valuable if their ability to provide realistic results could be demonstrated. The present urban morphology of Dhaka city and the existing drainage system are complex to represent in hydrological and hydrodynamic models. Furthermore, limitations of background data and uncertain future urban scenarios may constrain the potential outputs of a model. Although several studies have been carried out, including modeling for drainage master planning, a detailed model for the whole DAP (Detailed Area Plan) area of Dhaka city is not available. The model developed under this study covers the existing drainage system in the study area as well as natural flows in the fringe area. A good number of models are available for hydrological and hydraulic analysis of urban areas, including MIKE 11, MOUSE, HEC-RAS, HEC-HMS and EPA SWMM. EPA SWMM is used for the study area, which is mostly developed and consists of pipe networks, open channels and water bodies. This study proposes a methodology for rapid catchment delineation and stormwater management model (SWMM) set-up in a large urban area, with model calibration and validation.
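
    For readers who want to script an EPA SWMM run of the kind described above, the open-source pyswmm wrapper is one option; the sketch below is a hypothetical minimal driver (the study does not state how its model was executed, and the input-file name is invented).

    ```python
    # Hypothetical sketch of driving an EPA SWMM model from Python with the
    # pyswmm wrapper; 'dhaka_dap.inp' is a made-up input-file name.
    from pyswmm import Simulation, Nodes

    with Simulation('dhaka_dap.inp') as sim:
        nodes = Nodes(sim)
        worst = {}
        for step in sim:                       # step through the simulation period
            for node in nodes:
                # track the peak flooding (overflow) rate seen at each junction
                worst[node.nodeid] = max(worst.get(node.nodeid, 0.0), node.flooding)

    congested = {name: rate for name, rate in worst.items() if rate > 0.0}
    print(congested)                           # junctions showing drainage congestion
    ```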

  8. Astrophysical interpretation of the anisotropies in the unresolved gamma-ray background

    NASA Astrophysics Data System (ADS)

    Ando, Shin'ichiro; Fornasa, Mattia; Fornengo, Nicolao; Regis, Marco; Zechlin, Hannes-S.

    2017-06-01

    Recently, a new measurement of the auto- and cross-correlation angular power spectrum (APS) of the isotropic gamma-ray background was performed, based on 81 months of data of the Fermi Large-Area Telescope (LAT). Here, we fit, for the first time, the new APS data with a model describing the emission of unresolved blazars. These sources are expected to dominate the anisotropy signal. The model we employ in our analysis reproduces well the blazars resolved by Fermi LAT. When considering the APS obtained by masking the sources listed in the 3FGL catalog, we find that unresolved blazars underproduce the measured APS below ˜1 GeV . Contrary to past results, this suggests the presence of a new contribution to the low-energy APS, with a significance of, at least, 5 σ . The excess can be ascribed to a new class of faint gamma-ray emitters. If we consider the APS obtained by masking the sources in the 2FGL catalog, there is no underproduction of the APS below 1 GeV, but the new source class is still preferred over the blazars-only scenario (with a significance larger than 10 σ ). The properties of the new source class and the level of anisotropies induced in the isotropic gamma-ray background are the same, independent of the APS data used. In particular, the new gamma-ray emitters must have a soft energy spectrum, with a spectral index ranging, approximately, from 2.7 to 3.2. This complicates their interpretation in terms of known sources, since, normally, star-forming and radio galaxies are observed with a harder spectrum. The new source class identified here is also expected to contribute significantly to the intensity of the isotropic gamma-ray background.

  9. Integration of neutron time-of-flight single-crystal Bragg peaks in reciprocal space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Arthur J; Joergensen, Mads; Wang, Xiaoping

    2014-01-01

    The intensities of single-crystal Bragg peaks obtained by mapping neutron time-of-flight event data into reciprocal space and integrating in various ways are compared. These include spherical integration with a fixed radius, ellipsoid fitting and integration of the peak intensity, and one-dimensional peak-profile fitting. In comparison to intensities obtained by integrating in real detector-histogram space, integration in reciprocal space results in better agreement factors and more accurate atomic parameters. Furthermore, structure refinement using integrated intensities from one-dimensional profile fitting is demonstrated to be more accurate than simple peak-minus-background integration.
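
    The simplest of the schemes compared above, spherical integration with a fixed radius and a local background shell, can be sketched directly on events mapped to reciprocal-space coordinates. This is an illustrative outline, not the integration code used in the study; the radii are arbitrary.

    ```python
    # Sketch of fixed-radius spherical peak integration in reciprocal space:
    # count events inside a sphere around the predicted peak centre and subtract
    # a background estimate from a surrounding shell. Illustrative only.
    import numpy as np

    def integrate_peak(q_events, q_peak, r_peak=0.05, r_bkg=0.08):
        d = np.linalg.norm(q_events - q_peak, axis=1)   # event distances from the peak
        n_peak = np.count_nonzero(d < r_peak)
        n_bkg = np.count_nonzero((d >= r_peak) & (d < r_bkg))

        # Scale the shell counts by the ratio of sphere to shell volumes
        v_peak = r_peak ** 3
        v_shell = r_bkg ** 3 - r_peak ** 3
        bkg = n_bkg * v_peak / v_shell
        intensity = n_peak - bkg
        sigma = np.sqrt(n_peak + bkg * v_peak / v_shell)  # Poisson error propagation
        return intensity, sigma

    # Usage: I, sig = integrate_peak(event_q_coords, predicted_q_of_reflection)
    ```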

  10. Spatial distribution and risk factors of Schistosoma haematobium and hookworm infections among schoolchildren in Kwale, Kenya

    PubMed Central

    Chadeka, Evans Asena; Nagi, Sachiyo; Sunahara, Toshihiko; Cheruiyot, Ngetich Benard; Bahati, Felix; Ozeki, Yuriko; Inoue, Manabu; Osada-Oka, Mayuko; Okabe, Mayuko; Hirayama, Yukio; Changoma, Mwatasa; Adachi, Keishi; Mwende, Faith; Kikuchi, Mihoko; Nakamura, Risa; Kalenda, Yombo Dan Justin; Kaneko, Satoshi; Hirayama, Kenji; Shimada, Masaaki; Ichinose, Yoshio; Njenga, Sammy M.; Matsumoto, Sohkichi

    2017-01-01

    Background Large-scale schistosomiasis control programs are implemented in regions with diverse social and economic environments. A key epidemiological feature of schistosomiasis is its small-scale heterogeneity. Locally profiling disease dynamics including risk factors associated with its transmission is essential for designing appropriate control programs. To determine spatial distribution of schistosomiasis and its drivers, we examined schoolchildren in Kwale, Kenya. Methodology/Principal findings We conducted a cross-sectional study of 368 schoolchildren from six primary schools. Soil-transmitted helminths and Schistosoma mansoni eggs in stool were evaluated by the Kato-Katz method. We measured the intensity of Schistosoma haematobium infection by urine filtration. The geometrical mean intensity of S. haematobium was 3.1 eggs/10 ml urine (school range, 1.4–9.2). The hookworm geometric mean intensity was 3.2 eggs/g feces (school range, 0–17.4). Heterogeneity in the intensity of S. haematobium and hookworm infections was evident in the study area. To identify factors associated with the intensity of helminth infections, we utilized negative binomial generalized linear mixed models. The intensity of S. haematobium infection was associated with religion and socioeconomic status (SES), while that of hookworm infection was related to SES, sex, distance to river and history of anthelmintic treatment. Conclusions/Significance Both S. haematobium and hookworm infections showed micro-geographical heterogeneities in this Kwale community. To confirm and explain our observation of high S. haematobium risk among Muslims, further extensive investigations are necessary. The observed small scale clustering of the S. haematobium and hookworm infections might imply less uniform strategies even at finer scale for efficient utilization of limited resources. PMID:28863133
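
    A much-simplified version of the count model used above, a negative binomial regression for egg counts without the school-level random effects, can be sketched with statsmodels on simulated data; the covariates and numbers here are invented, not the Kwale data.

    ```python
    # Simplified sketch of a negative binomial count model for egg counts with two
    # covariates. The random effects used in the paper are omitted; data are simulated.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 300
    ses = rng.normal(size=n)                    # socioeconomic status (standardised)
    dist_river = rng.uniform(0, 5, size=n)      # distance to river, km
    mu = np.exp(1.0 - 0.5 * ses - 0.3 * dist_river)
    eggs = rng.negative_binomial(n=1.0, p=1.0 / (1.0 + mu))   # overdispersed counts

    X = sm.add_constant(np.column_stack([ses, dist_river]))
    result = sm.GLM(eggs, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
    print(result.summary())
    ```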

  11. Aspirin Compared to Low Intensity Anticoagulation in Patients with Non-Valvular Atrial Fibrillation. A Systematic Review and Meta-Analysis

    PubMed Central

    Vazquez, Fernando J.; Gonzalez, Joaquín P.; Gándara, Esteban

    2015-01-01

    Background Despite its lack of efficacy, aspirin is commonly used for stroke prevention in atrial fibrillation. Since prior studies have suggested a benefit of low-intensity anticoagulation over aspirin in the prevention of vascular events, the aim of this systematic review was to compare the outcomes of patients with non-valvular atrial fibrillation treated with low-intensity anticoagulation with Vitamin K antagonists or aspirin. Methods We conducted a systematic review searching Ovid MEDLINE, Embase and the Cochrane Central Register of Controlled Trials, from 1946 to October 14th, 2015. Randomized controlled trials were included if they reported the outcomes of patients with non-valvular atrial fibrillation treated with low-intensity anticoagulation compared to patients treated with aspirin. The primary outcome was a combination of ischemic stroke or systemic embolism. The random-effects model odds ratio was used as the outcome measure. Results Our initial search identified 6309 relevant articles, of which three satisfied our inclusion criteria and were included. Compared to low-intensity anticoagulation, aspirin alone did not reduce the incidence of ischemic stroke or systemic embolism OR 0.94 (95% CI 0.57–1.56), major bleeding OR 1.06 (95% CI 0.42–2.62) or vascular death OR 1.04 (95% CI 0.61–1.75). The use of aspirin was associated with a significant increase in all-cause mortality OR 1.66 (95% CI 1.12–2.48). Conclusion In patients with non-valvular atrial fibrillation, aspirin provides no benefits over low-intensity anticoagulation. Furthermore, the use of aspirin appears to be associated with an increased risk of all-cause mortality. Our study provides more evidence against the use of aspirin in patients with non-valvular atrial fibrillation. PMID:26561858
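
    The pooling step behind the reported odds ratios, a random-effects meta-analysis, can be illustrated with a DerSimonian-Laird estimator on made-up 2×2 tables; this is a sketch of the general method, not a reproduction of the review's analysis.

    ```python
    # Sketch of a DerSimonian-Laird random-effects pooled odds ratio. The 2x2
    # counts are invented for illustration, not the trial data from the review.
    import numpy as np

    # events/non-events in the (aspirin, low-intensity anticoagulation) arms per trial
    trials = [
        (12, 488, 10, 490),
        (25, 975, 22, 978),
        (8, 292, 9, 291),
    ]

    log_or, var = [], []
    for a, b, c, d in trials:                   # a,b = aspirin; c,d = comparator
        log_or.append(np.log((a * d) / (b * c)))
        var.append(1/a + 1/b + 1/c + 1/d)       # within-trial variance of the log OR
    log_or, var = np.array(log_or), np.array(var)

    w = 1.0 / var                               # fixed-effect weights
    q = np.sum(w * (log_or - np.sum(w * log_or) / np.sum(w)) ** 2)
    tau2 = max(0.0, (q - (len(trials) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_star = 1.0 / (var + tau2)                 # random-effects weights
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    print("OR %.2f (95%% CI %.2f-%.2f)" % tuple(np.exp([pooled, pooled - 1.96*se, pooled + 1.96*se])))
    ```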

  12. Effectiveness of an intensive E-mail based intervention in smoking cessation (TABATIC study): study protocol for a randomized controlled trial

    PubMed Central

    2013-01-01

    Background Intensive interventions on smoking cessation increase abstinence rates. However, few electronic mail (E-mail) based intensive interventions have been tested in smokers and none in the primary care (PC) setting. The aim of the present study is to evaluate the effectiveness of an intensive E-mail based intervention in smokers attending PC services. Methods/design Randomized controlled multicentric trial. Study population: 1060 smokers aged between 18–70 years from Catalonia, Salamanca and Aragón (Spain) who have and regularly check an E-mail account. Patients will be randomly assigned to the control or intervention group. Intervention: a six-phase intensive intervention with two face-to-face interviews and four automatically generated, personalised E-mails for patient follow-up; additional E-mail contacts will be made if needed. The control group will receive brief advice on smoking cessation. Outcome measures, assessed at 6 and 12 months after the intervention: self-reported continuous abstinence (confirmed by co-oximetry), point prevalence abstinence, tobacco consumption, evolution of stage according to Prochaska and DiClemente's Stages of Change Model, length of visit, and costs for the patient to access the Primary Care Centre. Statistical analysis: descriptive, logistic and Poisson regression analyses on an intention-to-treat basis using SPSS v.17. Discussion The proposed intervention is an E-mail based intensive intervention in smokers attending primary care. Positive results could be useful to demonstrate a higher percentage of short- and long-term abstinence among smokers attended in PC in Spain who regularly use E-mail. Furthermore, this intervention could be helpful in all health services to help smokers quit. Trial Registration ClinicalTrials.gov Identifier: NCT01494246. PMID:23597262

  13. Development and characterisation of a brain tumour mimicking protoporphyrin IX fluorescence phantom (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Yijing; Tisca, Cristiana; Peveler, William; Noimark, Sacha; Desjardins, Adrien E.; Parkin, Ivan P.; Ourselin, Sebastien; Vercauteren, Tom

    2017-02-01

    5-ALA-PpIX fluorescence-guided brain tumour resection can increase the accuracy with which cancerous tissue is removed and thereby improve patient outcomes, as compared with standard white light imaging. Novel optical devices that aim to increase the specificity and sensitivity of PpIX detection are typically assessed by measurements in tissue-mimicking optical phantoms in which all optical properties are defined. Existing optical phantoms specified for PpIX lack consistency in their optical properties and stability with respect to photobleaching, thus yielding an unstable correspondence between PpIX concentration and the fluorescence intensity. In this study, we developed a set of aqueous-based phantoms with different compositions, using deionised water or PBS buffer as the background medium, intralipid as the scattering material, bovine haemoglobin as the background absorber, and either PpIX dissolved in DMSO or a novel nanoparticle with an absorption and emission spectrum similar to PpIX as the fluorophore. We investigated phantom stability in terms of aggregation and photobleaching by comparing different background media and fluorophores, respectively. We characterised the fluorescence intensity of the fluorescent nanoparticle in different concentrations of intralipid and haemoglobin and its time-dependent stability, as compared to the PpIX-induced fluorescence. We confirmed that the choice of background medium was essential for preparing a stable aqueous phantom. The novel fluorescent nanoparticle used as a surrogate fluorophore for PpIX presented improved temporal stability and a reliable correspondence between concentration and emission intensity. We propose an optimised phantom composition and recipe to produce reliable and repeatable phantoms for the validation of imaging devices.

  14. Increased visual sensitivity following periods of dim illumination.

    PubMed

    McKeown, Alex S; Kraft, Timothy W; Loop, Michael S

    2015-02-19

    We measured changes in the sensitivity of the human rod pathway by testing visual reaction times before and after light adaptation. We targeted a specific range of conditioning light intensities to see if a physiological adaptation recently discovered in mouse rods is observable at the perceptual level in humans. We also measured the noise spectrum of single mouse rods due to the importance of the signal-to-noise ratio in rod to rod bipolar cell signal transfer. Using the well-defined relationship between stimulus intensity and reaction time (Piéron's law), we measured the reaction times of eight human subjects (ages 24-66) to scotopic test flashes of a single intensity before and after the presentation of a 3-minute background. We also made recordings from single mouse rods and processed the cellular noise spectrum before and after similar conditioning exposures. Subject reaction times to a fixed-strength stimulus were fastest 5 seconds after conditioning background exposure (79% ± 1% of the preconditioning mean, in darkness) and were significantly faster for the first 12 seconds after background exposure (P < 0.01). During the period of increased rod sensitivity, the continuous noise spectrum of individual mouse rods was not significantly increased. A decrease in human reaction times to a dim flash after conditioning background exposure may originate in rod photoreceptors through a transient increase in the sensitivity of the phototransduction cascade. There is no accompanying increase in rod cellular noise, allowing for reliable transmission of larger rod signals after conditioning exposures and the observed increase in perceptual sensitivity. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
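
    The "well-defined relationship between stimulus intensity and reaction time" invoked above is Piéron's law, RT = R0 + k·I^(-β); a fit of that form to made-up reaction-time data is sketched below as an illustration of how such a relationship is typically quantified.

    ```python
    # Piéron's law fitted to made-up data: RT = R0 + k * I**(-beta).
    # Values are illustrative, not the study's measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    def pieron(intensity, r0, k, beta):
        return r0 + k * intensity ** (-beta)

    intensity = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])         # relative flash strength
    rt_ms = np.array([520.0, 455.0, 410.0, 380.0, 362.0, 350.0])  # mean reaction times

    params, _ = curve_fit(pieron, intensity, rt_ms, p0=(340.0, 110.0, 0.5))
    r0, k, beta = params
    print(f"asymptotic RT {r0:.0f} ms, gain {k:.0f}, exponent {beta:.2f}")
    ```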

  15. Statistical simulations of the dust foreground to cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Vansyngel, F.; Boulanger, F.; Ghosh, T.; Wandelt, B.; Aumont, J.; Bracco, A.; Levrier, F.; Martin, P. G.; Montier, L.

    2017-07-01

    The characterization of the dust polarization foreground to the cosmic microwave background (CMB) is a necessary step toward the detection of the B-mode signal associated with primordial gravitational waves. We present a method to simulate maps of polarized dust emission on the sphere that is similar to the approach used for CMB anisotropies. This method builds on the understanding of Galactic polarization stemming from the analysis of Planck data. It relates the dust polarization sky to the structure of the Galactic magnetic field and its coupling with interstellar matter and turbulence. The Galactic magnetic field is modeled as a superposition of a mean uniform field and a Gaussian random (turbulent) component with a power-law power spectrum of exponent αM. The integration along the line of sight carried out to compute Stokes maps is approximated by a sum over a small number of emitting layers with different realizations of the random component of the magnetic field. The model parameters are constrained to fit the power spectra of dust polarization EE, BB, and TE measured using Planck data. We find that the slopes of the E and B power spectra of dust polarization are matched for αM = -2.5, an exponent close to that measured for total dust intensity but larger than the Kolmogorov exponent -11/3. The model allows us to compute multiple realizations of the Stokes Q and U maps for different realizations of the random component of the magnetic field, and to quantify the variance of dust polarization spectra for any given sky area outside of the Galactic plane. The simulations reproduce the scaling relation between the dust polarization power and the mean total dust intensity including the observed dispersion around the mean relation. We also propose a method to carry out multifrequency simulations, including the decorrelation measured recently by Planck, using a given covariance matrix of the polarization maps. These simulations are well suited to optimize component separation methods and to quantify the confidence with which the dust and CMB B-modes can be separated in present and future experiments. We also provide an astrophysical perspective on our phenomenological modeling of the dust polarization spectra.
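
    The turbulent ingredient of the model, a Gaussian random component with a power-law power spectrum of exponent αM, can be illustrated in a simplified flat-sky (2D) version; the paper itself works on the sphere, so the sketch below only shows the construction.

    ```python
    # Simplified flat-sky sketch of a Gaussian random map with a power-law power
    # spectrum P(k) ~ k**alpha_M. The paper uses alpha_M = -2.5 on the sphere;
    # this 2D version is only meant to illustrate the construction.
    import numpy as np

    def gaussian_random_map(n=256, alpha_m=-2.5, seed=0):
        rng = np.random.default_rng(seed)
        kx = np.fft.fftfreq(n)[:, None]
        ky = np.fft.fftfreq(n)[None, :]
        k = np.hypot(kx, ky)
        k[0, 0] = 1.0                                  # avoid division by zero at k=0
        amplitude = k ** (alpha_m / 2.0)               # sqrt of the power spectrum
        amplitude[0, 0] = 0.0                          # remove the mean (k=0) mode
        phases = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        field = np.fft.ifft2(amplitude * phases).real
        return field / field.std()                     # unit-variance turbulent map

    turbulent_component = gaussian_random_map()
    ```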

  16. Piecewise Potential Vorticity Inversion for Intense Extratropical Cyclones

    NASA Astrophysics Data System (ADS)

    Seiler, C.; Zwiers, F. W.

    2017-12-01

    Global climate models (GCMs) tend to simulate too few intense extratropical cyclones (ETCs) in the Northern Hemisphere (NH) under historic climate conditions. This bias may arise from the interactions of multiple drivers, including surface temperature gradients, latent heating in the lower troposphere, and the upper-level jet stream. Previous attempts to quantify the importance of these drivers include idealized model experiments or statistical approaches. The first method, however, cannot easily be implemented for a multi-GCM ensemble, and the second approach does not disentangle the interactions among drivers, nor does it prove causality. An alternative method that overcomes these limitations is piecewise potential vorticity inversion (PPVI). PPVI derives the wind and geopotential height fields by inverting potential vorticity (PV) for discrete atmospheric levels. Despite being a powerful diagnostic tool, PPVI has primarily been used to study the dynamics of individual events only. This study presents the first PPVI climatology for the 5% most intense NH ETCs that occurred from 1980 to 2016. Applying PPVI to 3273 ETC tracks identified in ERA-Interim reanalysis, we quantified the contributions from 3 atmospheric layers to ETC intensity. The respective layers are the surface (1000 hPa), a lower atmospheric level (700-850 hPa) and an upper atmospheric level (100-500 hPa), which are associated with the contributions from surface temperature gradients, latent heating, and the jet stream, respectively. Results show that contributions are dominated by the lower level (40%), followed by the upper level (20%) and the surface (17%), while the remaining 23% are associated with the background flow. Contributions from the surface and the lower level are stronger in the western ocean basins owing to the presence of warm ocean currents, while contributions from the upper level are stronger in the eastern basins. Vertical cross sections of ETC-centered composites show an undulation of the dynamic tropopause and the formation of a PV tower with values exceeding 1 PV unit during maximum ETC intensity. The dominant contribution from the lower level underlines the importance of latent heating for intense ETCs. The ability of GCMs to reproduce this mechanism remains to be assessed.

  17. Effects of background, direction and intensity of ambient light, measuring position, and adjacent teeth, on anterior tooth colour measurement in vitro.

    PubMed

    Ma, Jian Feng; Du, Ruo Xi; Wang, Si Qian; Li, Yi Ming

    2010-01-01

    To investigate the effects of different background colours (black, white or pink), direction and intensity of ambient light, measuring position, and the adjacent teeth, on the in vitro colour measurement of maxillary anterior teeth, using the Minolta CR-321 colorimeter. Ten extracted human maxillary central incisors were selected. A fibre-optic light MI-150 was used as the ambient illuminant. Teeth were irradiated from a 3- or 12-o'clock direction. L*a*b* values of seven sites on the labial surfaces were obtained by means of the Minolta CR-321 colorimeter, using three background colours, with or without the adjacent teeth. The recorded data were analysed with two-tailed Student t tests and analysis of variance (α = 0.05). The ambient light did not affect the colour measurement of anterior teeth, regardless of the presence or absence of the adjacent teeth. There were no statistically significant differences in L*a*b* values at the same position under different background colours, except ΔE12 (colour difference between site 1 and site 2) between black and white backgrounds. ΔE12 (under black background), ΔE13 and ΔE15 were greater than 1.5, while the others were lower than 1.5. The background, ambient light and the presence of adjacent teeth did not affect the colour measurement of anterior teeth using the Minolta CR-321 colorimeter in vitro. The inherent disadvantages of using the naked eye during clinical visual shade assessment may be overcome by the colorimeter.

  18. Assessing Nursing Care Needs of Children with Complex Medical Conditions: The Nursing Kids Intensity of Care Survey (N-KICS)

    PubMed Central

    Navarra, Ann-Margaret; Schlau, Rona; Murray, Meghan; Mosiello, Linda; Schneider, Laura; Jackson, Olivia; Cohen, Bevin; Saiman, Lisa; Larson, Elaine L.

    2015-01-01

    Background Recent medical advances have resulted in increased survival of children with complex medical conditions (CMC), but there are no validated methods to measure their care needs. Objectives/Methods To design and test the Nursing-Kids Intensity of Care Survey (N-KICS) tool and describe the intensity of nursing care for children with CMC. Results The psychometric evaluation confirmed acceptable reliability, validity, and feasibility. Intensity scores were highest for nursing care related to infection control, medication administration, nutrition, diaper changes, hygiene, neurological and respiratory support, and standing program. Conclusions Development of a psychometrically sound measure of nursing intensity will help evaluate and plan nursing care for children with CMC. PMID:26777429

  19. Oxidation of laser-induced plasma species in different background conditions

    NASA Astrophysics Data System (ADS)

    Bator, Matthias; Schneider, Christof W.; Lippert, Thomas; Wokaun, Alexander

    2013-08-01

    The evolution of Lu and LuO species in a laser ablation plasma from different targets has been investigated by simultaneously performing mass spectrometry and plasma imaging. Ablation was achieved with a 248 nm KrF laser from a Lu, a Lu2O3 and a LuMnO3 target under different background gas conditions. Mass spectrometry measurements show very similar intensities and ratios for the respective species for all three targets under the same ablation conditions. This indicates only a small influence of the target on the final Lu and LuO contents in the plasma, with the major influence coming from collisions with the background gas. Furthermore, spatially, temporally and spectrally resolved plasma imaging was utilized to clearly identify the shockwave at the plasma front as the main region for Lu oxidation. A strong decrease of Lu intensities together with a directly correlated increase of LuO was observed toward the outer regions of the plasma.

  20. A mesoscale hybrid data assimilation system based on the JMA nonhydrostatic model

    NASA Astrophysics Data System (ADS)

    Ito, K.; Kunii, M.; Kawabata, T. T.; Saito, K. K.; Duc, L. L.

    2015-12-01

    This work evaluates the potential of a hybrid ensemble Kalman filter and four-dimensional variational (4D-Var) data assimilation system for predicting severe weather events from a deterministic point of view. This hybrid system is an adjoint-based 4D-Var system using a background error covariance matrix constructed from a mixture of the so-called NMC method and perturbations in a local ensemble transform Kalman filter data assimilation system, both of which are based on the Japan Meteorological Agency nonhydrostatic model. To construct the background error covariance matrix, we investigated two types of schemes. One is a spatial localization scheme and the other is a neighboring ensemble approach, which regards the result at a horizontally shifted point in each ensemble member as that obtained from a different realization of the ensemble simulation. Assimilation of a pseudo single observation located to the north of a tropical cyclone (TC) yielded, in both hybrid systems, an analysis increment of wind and temperature physically consistent with what is expected for a mature TC, whereas the analysis increment in a 4D-Var system using a static background error covariance distorted the structure of the mature TC. Real data assimilation experiments applied to 4 TCs and 3 local heavy rainfall events showed that the hybrid systems and EnKF provided better initial conditions than the NMC-based 4D-Var, both for the intensity and track forecasts of TCs and for the location and amount of local heavy rainfall events.

  1. Far-field detection of sub-wavelength Tetris without extra near-field metal parts based on phase prints of time-reversed fields with intensive background interference.

    PubMed

    Chen, Yingming; Wang, Bing-Zhong

    2014-07-14

    Time-reversal (TR) phase prints are used for the first time in far-field (FF) detection of sub-wavelength (SW) deformable scatterers, without any extra metal structure positioned in the vicinity of the target. The 2D prints derive from a discrete short-time Fourier transform of the 1D TR electromagnetic (EM) signals. Because the time-invariant intensive background interference is effectively concentrated by the TR technique, the time-varying weak signature of the FF SW scatterers can be highlighted. This method shows an unusual use of the TR technique, in which the focus peak of the TR EM waves is removed and the most useful information is conveyed by the remaining part of the signal.

  2. The Colour of Velvet. A Transdisciplinary Approach to Connecting Students from a Refugee Background to the Natural World

    ERIC Educational Resources Information Center

    Brown, Leni; O'Keefe, Lise; Paige, Kathryn

    2017-01-01

    What pedagogical strategies support students from a refugee background connecting to the natural world? What would these strategies look like for fifteen students participating in a language intensive New Arrivals Program (NAP)? These questions were the focus of a small collaborative project set up to investigate the impact of pedagogical…

  3. Proceedings of the Air Power Symposium on the Role of Airpower in Low Intensity Conflict (9th) Held at Maxwell AFB, Alabama on 11-13 March 1985. Appendix 2. Symposium Papers,

    DTIC Science & Technology

    1985-05-01

    Much has been written recently on the subject of non-nuclear, unconventional, limited war--or, low intensity conflict. These articles have raised... Low intensity conflict, for the purpose of this paper, is defined as conflict at the lower end of the warfare spectrum. (Global nuclear war is at...conflict and conventional and nuclear war--priorities and forces disposition. BACKGROUND Traditionally, the U.S. has thought in global or regional terms

  4. Electron microprobe analysis program for biological specimens: BIOMAP

    NASA Technical Reports Server (NTRS)

    Edwards, B. F.

    1972-01-01

    BIOMAP is a Univac 1108 compatible program which facilitates the electron probe microanalysis of biological specimens. Input data are X-ray intensity data from biological samples, the X-ray intensity and composition data from a standard sample and the electron probe operating parameters. Outputs are estimates of the weight percentages of the analyzed elements, the distribution of these estimates for sets of red blood cells and the probabilities for correlation between elemental concentrations. An optional feature statistically estimates the X-ray intensity and residual background of a principal standard relative to a series of standards.

  5. Infrared spectroscopic imaging for noninvasive detection of latent fingerprints.

    PubMed

    Crane, Nicole J; Bartick, Edward G; Perlman, Rebecca Schwartz; Huffman, Scott

    2007-01-01

    The capability of Fourier transform infrared (FTIR) spectroscopic imaging to provide detailed images of unprocessed latent fingerprints while also preserving important trace evidence is demonstrated. Unprocessed fingerprints were developed on various porous and nonporous substrates. Data-processing methods used to extract the latent fingerprint ridge pattern from the background material included basic infrared spectroscopic band intensities, addition and subtraction of band intensity measurements, principal components analysis (PCA) and calculation of second derivative band intensities, as well as combinations of these various techniques. Additionally, trace evidence within the fingerprints was recovered and identified.

  6. Neutron spectra from beam-target reactions in dense Z-pinches

    NASA Astrophysics Data System (ADS)

    Appelbe, B.; Chittenden, J.

    2015-10-01

    The energy spectrum of neutrons emitted by a range of deuterium and deuterium-tritium Z-pinch devices is investigated computationally using a hybrid kinetic-MHD model. 3D MHD simulations are used to model the implosion, stagnation, and break-up of dense plasma focus devices at currents of 70 kA, 500 kA, and 2 MA and also a 15 MA gas puff. Instabilities in the MHD simulations generate large electric and magnetic fields, which accelerate ions during the stagnation and break-up phases. A kinetic model is used to calculate the trajectories of these ions and the neutron spectra produced due to the interaction of these ions with the background plasma. It is found that these beam-target neutron spectra are sensitive to the electric and magnetic fields at stagnation resulting in significant differences in the spectra emitted by each device. Most notably, magnetization of the accelerated ions causes the beam-target spectra to be isotropic for the gas puff simulations. It is also shown that beam-target spectra can have a peak intensity located at a lower energy than the peak intensity of a thermonuclear spectrum. A number of other differences in the shapes of beam-target and thermonuclear spectra are also observed for each device. Finally, significant differences between the shapes of beam-target DD and DT neutron spectra, due to differences in the reaction cross-sections, are illustrated.

  7. An Application of Bayesian Approach in Modeling Risk of Death in an Intensive Care Unit

    PubMed Central

    Wong, Rowena Syn Yin; Ismail, Noor Azina

    2016-01-01

    Background and Objectives There are not many studies that attempt to model intensive care unit (ICU) risk of death in developing countries, especially in South East Asia. The aim of this study was to propose and describe the application of a Bayesian approach in modeling in-ICU deaths in a Malaysian ICU. Methods This was a prospective study in a mixed medical-surgery ICU in a multidisciplinary tertiary referral hospital in Malaysia. Data collection included variables that were defined in the Acute Physiology and Chronic Health Evaluation IV (APACHE IV) model. A Bayesian Markov Chain Monte Carlo (MCMC) simulation approach was applied in the development of four multivariate logistic regression predictive models for the ICU, where the main outcome measure was in-ICU mortality risk. The performance of the models was assessed through overall model fit, discrimination and calibration measures. Results from the Bayesian models were also compared against results obtained using the frequentist maximum likelihood method. Results The study involved 1,286 consecutive ICU admissions between January 1, 2009 and June 30, 2010, of which 1,111 met the inclusion criteria. Patients who were admitted to the ICU were generally younger, predominantly male, with low co-morbidity load and mostly under mechanical ventilation. The overall in-ICU mortality rate was 18.5% and the overall mean Acute Physiology Score (APS) was 68.5. All four models exhibited good discrimination, with area under receiver operating characteristic curve (AUC) values approximately 0.8. Calibration was acceptable (Hosmer-Lemeshow p-values > 0.05) for all models, except for model M3. Model M1 was identified as the model with the best overall performance in this study. Conclusion Four prediction models were proposed, where the best model was chosen based on its overall performance in this study. This study has also demonstrated the promising potential of the Bayesian MCMC approach as an alternative in the analysis and modeling of in-ICU mortality outcomes. PMID:27007413
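
    As a schematic illustration of the Bayesian MCMC logistic regression described above, the sketch below fits a binary mortality outcome with Gaussian priors using a plain random-walk Metropolis sampler on simulated data; the study itself used APACHE IV covariates and its own MCMC workflow, so this is only an outline of the approach.

    ```python
    # Minimal random-walk Metropolis sketch of Bayesian logistic regression for a
    # binary in-ICU mortality outcome, with Gaussian priors on the coefficients.
    # Data are simulated for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 500, 3
    X = np.c_[np.ones(n), rng.standard_normal((n, p))]          # intercept + covariates
    beta_true = np.array([-1.5, 0.8, -0.4, 0.3])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

    def log_post(beta, tau=10.0):
        eta = X @ beta
        loglik = np.sum(y * eta - np.log1p(np.exp(eta)))        # Bernoulli log-likelihood
        logprior = -0.5 * np.sum(beta**2) / tau**2               # N(0, tau^2) priors
        return loglik + logprior

    beta = np.zeros(p + 1)
    samples, lp = [], log_post(beta)
    for _ in range(20000):
        prop = beta + 0.05 * rng.standard_normal(p + 1)          # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                 # Metropolis accept step
            beta, lp = prop, lp_prop
        samples.append(beta)

    posterior_mean = np.mean(samples[5000:], axis=0)             # discard burn-in
    print(posterior_mean)
    ```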

  8. Background radiation measurements at high power research reactors

    NASA Astrophysics Data System (ADS)

    Ashenfelter, J.; Balantekin, B.; Baldenegro, C. X.; Band, H. R.; Barclay, G.; Bass, C. D.; Berish, D.; Bowden, N. S.; Bryan, C. D.; Cherwinka, J. J.; Chu, R.; Classen, T.; Davee, D.; Dean, D.; Deichert, G.; Dolinski, M. J.; Dolph, J.; Dwyer, D. A.; Fan, S.; Gaison, J. K.; Galindo-Uribarri, A.; Gilje, K.; Glenn, A.; Green, M.; Han, K.; Hans, S.; Heeger, K. M.; Heffron, B.; Jaffe, D. E.; Kettell, S.; Langford, T. J.; Littlejohn, B. R.; Martinez, D.; McKeown, R. D.; Morrell, S.; Mueller, P. E.; Mumm, H. P.; Napolitano, J.; Norcini, D.; Pushin, D.; Romero, E.; Rosero, R.; Saldana, L.; Seilhan, B. S.; Sharma, R.; Stemen, N. T.; Surukuchi, P. T.; Thompson, S. J.; Varner, R. L.; Wang, W.; Watson, S. M.; White, B.; White, C.; Wilhelmi, J.; Williams, C.; Wise, T.; Yao, H.; Yeh, M.; Yen, Y.-R.; Zhang, C.; Zhang, X.; Prospect Collaboration

    2016-01-01

    Research reactors host a wide range of activities that make use of the intense neutron fluxes generated at these facilities. Recent interest in performing measurements with relatively low event rates, e.g. reactor antineutrino detection, at these facilities necessitates a detailed understanding of background radiation fields. Both reactor-correlated and naturally occurring background sources are potentially important, even at levels well below those of importance for typical activities. Here we describe a comprehensive series of background assessments at three high-power research reactors, including γ-ray, neutron, and muon measurements. For each facility we describe the characteristics and identify the sources of the background fields encountered. The general understanding gained of background production mechanisms and their relationship to facility features will prove valuable for the planning of any sensitive measurement conducted therein.

  9. Aircraft vulnerability analysis by modeling and simulation

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Willers, Maria S.; de Waal, Alta

    2014-10-01

    Infrared missiles pose a significant threat to civilian and military aviation. ManPADS missiles are especially dangerous in the hands of rogue and undisciplined forces. Yet, not all the launched missiles hit their targets; the miss being either attributable to misuse of the weapon or to missile performance restrictions. This paper analyses some of the factors affecting aircraft vulnerability and demonstrates a structured analysis of the risk and aircraft vulnerability problem. The aircraft-missile engagement is a complex series of events, many of which are only partially understood. Aircraft and missile designers focus on the optimal design and performance of their respective systems, often testing only in a limited set of scenarios. Most missiles react to the contrast intensity, but the variability of the background is rarely considered. Finally, the vulnerability of the aircraft depends jointly on the missile's performance and the doctrine governing the missile's launch. These factors are considered in a holistic investigation. The view direction, altitude, time of day, sun position, latitude/longitude and terrain determine the background against which the aircraft is observed. Especially high gradients in sky radiance occur around the sun and on the horizon. This paper considers uncluttered background scenes (uniform terrain and clear sky) and presents examples of background radiance at all view angles across a sphere around the sensor. A detailed geometrical and spatially distributed radiometric model is used to model the aircraft. This model provides the signature at all possible view angles across the sphere around the aircraft. The signature is determined in absolute terms (no background) and in contrast terms (with background). It is shown that the background significantly affects the contrast signature as observed by the missile sensor. A simplified missile model is constructed by defining the thrust and mass profiles, maximum seeker tracking rate, maximum guidance acceleration and seeker sensitivity. For the purpose of this investigation the aircraft is equipped with conventional pyrotechnic decoy flares and the missile has no counter-countermeasure means (security restrictions on open publication). This complete simulation is used to calculate the missile miss distance, when the missile is launched from different locations around the aircraft. The miss distance data is then graphically presented showing miss distance (aircraft vulnerability) as a function of launch direction and range. The aircraft vulnerability graph accounts for aircraft and missile characteristics, but does not account for missile deployment doctrine. A Bayesian network is constructed to fuse the doctrinal rules with the aircraft vulnerability data. The Bayesian network now provides the capability to evaluate the combined risk of missile launch and aircraft vulnerability. It is shown in this paper that it is indeed possible to predict the aircraft vulnerability to missile attack in a comprehensive modelling and a holistic process. By using the appropriate real-world models, this approach is used to evaluate the effectiveness of specific countermeasure techniques against specific missile threats. The use of a Bayesian network provides the means to fuse simulated performance data with more abstract doctrinal rules to provide a realistic assessment of the aircraft vulnerability.
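
    The abstract above describes fusing simulated miss-distance (vulnerability) data with doctrinal launch rules to obtain a combined risk. The sketch below illustrates that fusion with a plain discrete probability table rather than a full Bayesian network; every direction, range, and probability value is a hypothetical stand-in.

      # Illustrative sketch only: combine a simulated vulnerability table, P(hit | launch
      # direction, range), with hypothetical doctrinal launch probabilities to obtain a
      # combined per-engagement risk, mimicking the role of the Bayesian network above.
      import numpy as np

      directions = ["front", "side", "rear"]
      ranges_km = [1.0, 2.0, 3.0]

      # P(hit | direction, range): stand-in for the miss-distance simulation output.
      p_hit = np.array([[0.10, 0.05, 0.02],
                        [0.35, 0.20, 0.08],
                        [0.60, 0.45, 0.25]])

      # P(launch from direction, range): stand-in for doctrinal rules (sums to 1).
      p_launch = np.array([[0.05, 0.05, 0.05],
                           [0.10, 0.15, 0.10],
                           [0.20, 0.20, 0.10]])

      combined_risk = float(np.sum(p_hit * p_launch))   # P(hit), marginalized over scenarios
      print(f"combined probability of a hit per engagement: {combined_risk:.3f}")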

  10. On the Decreasing Fraction of Strong Ly α Emitters around z ∼ 6-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadoun, Raphael; Zheng, Zheng; Miralda-Escudé, Jordi, E-mail: raphael.sadoun@utah.edu

    2017-04-10

    The fraction of galaxies with strong Ly α emission has been observed to decrease rapidly with redshift at z ≳ 6, after a gradual increase at z < 6. This has been interpreted as being a trace of the reionization of the intergalactic medium (IGM): the emitted Ly α photons would be scattered by an increasingly neutral IGM at z > 6. We study this effect by modeling the ionization and Ly α radiative transfer in the infall region and the IGM around a Ly α emitting galaxy (LAE), for a spherical halo model with the mean density and radial velocity profiles in the standard ΛCDM cosmological scenario. We find that the expected fast increase of the ionizing background intensity toward the end of the reionization epoch implies a rapid evolution of halo infall regions from being self-shielded against the external ionizing background to being mostly ionized. Whereas self-shielded infall regions can scatter the Ly α photons over a much larger area than the commonly used apertures for observing LAEs, the same infalling gas is no longer optically thick to the Ly α emission line after it is ionized by the external background, making the Ly α emission more compact and brighter within the observed apertures. Based on this simple model, we show that the observed drop in the abundance of LAEs at z > 6 does not imply a rapid increase with redshift of the fraction of the whole IGM volume that is atomic, but is accounted for by a rapid increase of the neutral fraction in the infall regions around galaxy host halos.

  11. Comparison of multiple enzyme activatable near infrared fluorescent molecular probes for detection and quantification of inflammation in murine colitis models

    PubMed Central

    Ding, Shengli; Blue, Randal E.; Morgan, Douglas R.; Lund, Pauline K.

    2015-01-01

    Background Activatable near-infrared fluorescent (NIRF) probes have been used for ex vivo and in vivo detection of intestinal tumors in animal models. We hypothesized that NIRF probes activatable by cathepsins or MMPs will detect and quantify dextran sulphate sodium (DSS) induced acute colonic inflammation in wild type (WT) mice or chronic colitis in IL-10 null mice ex vivo or in vivo. Methods WT mice given DSS, water controls and IL-10 null mice with chronic colitis were administered probes by retro-orbital injection. The FMT2500 LX system imaged fresh and fixed intestine ex vivo and mice in vivo. Inflammation detected by probes was verified by histology and colitis scoring. NIRF signal intensity was quantified using 2D region of interest (ROI) ex vivo or 3D ROI-analysis in vivo. Results Ex vivo, the seven probes tested yielded significantly higher NIRF signals in the colon of DSS-treated mice versus controls. A subset of probes was tested in IL-10 null mice and yielded strong ex vivo signals. Ex vivo fluorescence signal with 680 series probes was preserved after formalin fixation. In DSS and IL-10 null models, ex vivo NIRF signal strongly and significantly correlated with colitis scores. In vivo, ProSense680, CatK680FAST and MMPsense680 yielded significantly higher NIRF signals in DSS treated mice than controls but background was high in controls. Conclusion Both cathepsin- and MMP-activated NIRF probes can detect and quantify colonic inflammation ex vivo. ProSense680 yielded the strongest signals in DSS colitis ex vivo and in vivo, but background remains a problem for in vivo quantification of colitis. PMID:24374874

  12. Infrared images target detection based on background modeling in the discrete cosine domain

    NASA Astrophysics Data System (ADS)

    Ye, Han; Pei, Jihong

    2018-02-01

    Background modeling is a critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background model becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the background stability and the separability between target and background are analyzed in the discrete cosine transform (DCT) domain, and on this basis we propose a background modeling method. The proposed method models each frequency point as a single Gaussian to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform other background modeling methods operating in the spatial domain.
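
    A minimal sketch of the kind of per-coefficient single-Gaussian background model described above: each DCT coefficient keeps a running mean and variance, coefficients consistent with the background are updated and suppressed, and the residual is transformed back to the image domain. The learning rate, threshold, and synthetic frames are assumptions, not the paper's values.

      # Simplified sketch of per-coefficient single-Gaussian background modelling in the
      # DCT domain (learning rate and threshold are illustrative, not the paper's values).
      import numpy as np
      from scipy.fft import dctn, idctn

      def update_background(frame, mean, var, alpha=0.05, k=3.0):
          """Update a running Gaussian per DCT coefficient; return the foreground image."""
          coeff = dctn(frame, norm="ortho")
          if mean is None:                       # initialise from the first frame
              mean, var = coeff.copy(), np.full_like(coeff, 1e-3)
          z = np.abs(coeff - mean) / np.sqrt(var + 1e-12)
          background = z < k                     # coefficients consistent with background
          # Exponential running update of mean and variance (background coefficients only).
          mean = np.where(background, (1 - alpha) * mean + alpha * coeff, mean)
          var = np.where(background, (1 - alpha) * var + alpha * (coeff - mean) ** 2, var)
          fg_coeff = np.where(background, 0.0, coeff)   # suppress background coefficients
          return idctn(fg_coeff, norm="ortho"), mean, var

      # Usage with synthetic frames: a fluctuating "sea" plus a bright moving target.
      rng = np.random.default_rng(1)
      mean = var = None
      for t in range(50):
          frame = rng.normal(0.4, 0.05, (64, 64))
          frame[30:34, 10 + t % 40:14 + t % 40] += 0.5
          foreground, mean, var = update_background(frame, mean, var)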

  13. Light intensity physical activity and sedentary behavior in relation to body mass index and grip strength in older adults: cross-sectional findings from the Lifestyle Interventions and Independence for Elders (LIFE) study

    USDA-ARS?s Scientific Manuscript database

    Background: Identifying modifiable determinants of fat mass and muscle strength in older adults is important given their impact on physical functioning and health. Light intensity physical activity and sedentary behavior are potential determinants, but their relations to these outcomes are poorly un...

  14. Evaluating the Inter-Respondent (Consumer vs. Staff) Reliability and Construct Validity (SIS vs. Vineland) of the Supports Intensity Scale on a Dutch Sample

    ERIC Educational Resources Information Center

    Claes, C.; Van Hove, G.; van Loon, J.; Vandevelde, S.; Schalock, R. L.

    2009-01-01

    Background: Despite various reliability studies on the Supports Intensity Scale (SIS), to date there has not been an evaluation of the reliability of client vs. staff judgments. Such determination is important, given the increasing consumer-driven approach to services. Additionally, there has not been an evaluation of the instrument's construct…

  15. Benefit and harm of intensive blood pressure treatment: Derivation and validation of risk models using data from the SPRINT and ACCORD trials

    PubMed Central

    Denton, Brian T.; Hayward, Rodney A.

    2017-01-01

    Background Intensive blood pressure (BP) treatment can avert cardiovascular disease (CVD) events but can cause some serious adverse events. We sought to develop and validate risk models for predicting absolute risk difference (increased risk or decreased risk) for CVD events and serious adverse events from intensive BP therapy. A secondary aim was to test if the statistical method of elastic net regularization would improve the estimation of risk models for predicting absolute risk difference, as compared to a traditional backwards variable selection approach. Methods and findings Cox models were derived from SPRINT trial data and validated on ACCORD-BP trial data to estimate risk of CVD events and serious adverse events; the models included terms for intensive BP treatment and heterogeneous response to intensive treatment. The Cox models were then used to estimate the absolute reduction in probability of CVD events (benefit) and absolute increase in probability of serious adverse events (harm) for each individual from intensive treatment. We compared the method of elastic net regularization, which uses repeated internal cross-validation to select variables and estimate coefficients in the presence of collinearity, to a traditional backwards variable selection approach. Data from 9,069 SPRINT participants with complete data on covariates were utilized for model development, and data from 4,498 ACCORD-BP participants with complete data were utilized for model validation. Participants were exposed to intensive (goal systolic pressure < 120 mm Hg) versus standard (<140 mm Hg) treatment. Two composite primary outcome measures were evaluated: (i) CVD events/deaths (myocardial infarction, acute coronary syndrome, stroke, congestive heart failure, or CVD death), and (ii) serious adverse events (hypotension, syncope, electrolyte abnormalities, bradycardia, or acute kidney injury/failure). The model for CVD chosen through elastic net regularization included interaction terms suggesting that older age, black race, higher diastolic BP, and higher lipids were associated with greater CVD risk reduction benefits from intensive treatment, while current smoking was associated with fewer benefits. The model for serious adverse events chosen through elastic net regularization suggested that male sex, current smoking, statin use, elevated creatinine, and higher lipids were associated with greater risk of serious adverse events from intensive treatment. SPRINT participants in the highest predicted benefit subgroup had a number needed to treat (NNT) of 24 to prevent 1 CVD event/death over 5 years (absolute risk reduction [ARR] = 0.042, 95% CI: 0.018, 0.066; P = 0.001), those in the middle predicted benefit subgroup had a NNT of 76 (ARR = 0.013, 95% CI: −0.0001, 0.026; P = 0.053), and those in the lowest subgroup had no significant risk reduction (ARR = 0.006, 95% CI: −0.007, 0.018; P = 0.71). Those in the highest predicted harm subgroup had a number needed to harm (NNH) of 27 to induce 1 serious adverse event (absolute risk increase [ARI] = 0.038, 95% CI: 0.014, 0.061; P = 0.002), those in the middle predicted harm subgroup had a NNH of 41 (ARI = 0.025, 95% CI: 0.012, 0.038; P < 0.001), and those in the lowest subgroup had no significant risk increase (ARI = −0.007, 95% CI: −0.043, 0.030; P = 0.72). 
In ACCORD-BP, participants in the highest subgroup of predicted benefit had significant absolute CVD risk reduction, but the overall ACCORD-BP participant sample was skewed towards participants with less predicted benefit and more predicted risk than in SPRINT. The models chosen through traditional backwards selection had similar ability to identify absolute risk difference for CVD as the elastic net models, but poorer ability to correctly identify absolute risk difference for serious adverse events. A key limitation of the analysis is the limited sample size of the ACCORD-BP trial, which expanded confidence intervals for ARI among persons with type 2 diabetes. Additionally, it is not possible to mechanistically explain the physiological relationships explaining the heterogeneous treatment effects captured by the models, since the study was an observational secondary data analysis. Conclusions We found that predictive models could help identify subgroups of participants in both SPRINT and ACCORD-BP who had lower versus higher ARRs in CVD events/deaths with intensive BP treatment, and participants who had lower versus higher ARIs in serious adverse events. PMID:29040268
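
    The benefit and harm estimates above are reported as absolute risk differences and their reciprocals (NNT/NNH). The short sketch below shows only that arithmetic, applied to hypothetical per-patient predicted risks under the two treatment arms; it does not reproduce the trial models.

      # Illustrative arithmetic only: absolute risk reduction (ARR) and number needed to
      # treat (NNT) from predicted event probabilities under the two treatment arms.
      import numpy as np

      # Hypothetical predicted 5-year CVD risks for the same patients under each arm.
      risk_standard = np.array([0.18, 0.10, 0.25, 0.08, 0.30])
      risk_intensive = np.array([0.14, 0.09, 0.19, 0.075, 0.24])

      arr = risk_standard - risk_intensive        # per-patient predicted benefit
      mean_arr = arr.mean()
      nnt = 1.0 / mean_arr                        # NNT = 1 / ARR
      print(f"mean ARR = {mean_arr:.3f}, NNT = {nnt:.0f}")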

  16. A model for assessing water quality risk in catchments prone to wildfire

    NASA Astrophysics Data System (ADS)

    Langhans, Christoph; Smith, Hugh; Chong, Derek; Nyman, Petter; Lane, Patrick; Sheridan, Gary

    2017-04-01

    Post-fire debris flows can have erosion rates up to three orders of magnitude higher than background rates. They are major sources of fine suspended sediment, which is critical to the safety of water supply from forested catchments. Fire can cover parts or all of these large catchments and burn severity is often heterogeneous. The probability of spatial and temporal overlap of fire disturbance and rainfall events, and the susceptibility of hillslopes to severe erosion determine the risk to water quality. Here we present a model to calculate recurrence intervals of high magnitude sediment delivery from runoff-generated debris flows to a reservoir in a large catchment (>100 km2) accounting for heterogeneous burn conditions. Debris flow initiation was modelled with indicators of surface runoff and soil surface erodibility. Debris flow volume was calculated with an empirical model, and fine sediment delivery was calculated using simple, expert-based assumptions. In a Monte-Carlo simulation, wildfire was modelled with a fire spread model using historic data on weather and ignition probabilities for a forested catchment in central Victoria, Australia. Multiple high intensity storms covering the study catchment were simulated using Intensity-Frequency-Duration relationships, and the runoff indicator calculated with a runoff model for hillslopes. A sensitivity analysis showed that fine sediment is most sensitive to variables related to the texture of the source material, debris flow volume estimation, and the proportion of fine sediment transported to the reservoir. As a measure of indirect validation, denudation rates of 4.6 - 28.5 mm ka-1 were estimated and compared well to other studies in the region. From the results it was extrapolated that in the absence of fire management intervention the critical sediment concentrations in the studied reservoir could be exceeded in intervals of 18 - 124 years.
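
    A conceptual sketch of the Monte-Carlo logic described above, reduced to one dimension: sample, year by year, whether a severe fire and a triggering storm overlap, draw a fine-sediment delivery for the resulting debris flow, and estimate the recurrence interval of exceeding a critical load. All probabilities, distributions, and thresholds are hypothetical placeholders for the fire-spread, IFD, and erosion sub-models.

      # Conceptual Monte-Carlo sketch (all rates and magnitudes are hypothetical): estimate
      # the recurrence interval of fine-sediment delivery exceeding a critical threshold
      # when fire and an intense storm must overlap in time.
      import numpy as np

      rng = np.random.default_rng(2)
      n_years = 100_000
      p_fire = 0.04                  # probability a severe fire affects the catchment in a year
      p_storm_given_fire = 0.3       # probability of a triggering storm within the recovery window
      critical_load_t = 5_000.0      # critical fine-sediment delivery to the reservoir (tonnes)

      exceedances = 0
      for _ in range(n_years):
          if rng.uniform() < p_fire and rng.uniform() < p_storm_given_fire:
              # Lognormal stand-in for debris-flow fine-sediment delivery (tonnes).
              delivered = rng.lognormal(mean=np.log(2_000.0), sigma=1.0)
              if delivered > critical_load_t:
                  exceedances += 1

      recurrence_interval = n_years / max(exceedances, 1)
      print(f"estimated recurrence interval: {recurrence_interval:.0f} years")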

  17. [End-of-Life Care in Intensive Care Units: Nursing strategies of family involvement at the end of life].

    PubMed

    Cyrol, Katharina; Fröhlich, Martin R; Piatti, Francesca; Imhof, Lorenz

    2018-06-01

    Background: Family members of people dying in the intensive care unit (ICU) are exposed to many stress factors and they often do not experience involvement in End-of-Life (EoL) situations. For example, they criticize a low degree of participation in patient care, delayed or incomplete information and lack of privacy. Nursing staff themselves face various obstacles in EoL situations in ICUs. Aim: This study investigates strategies used by ICU nursing staff in German-speaking Switzerland to increase family members' participation in situations at the end of life. Method: Data were collected by conducting 12 semi-structured interviews using an approach based on Grounded Theory. A model was developed to explain nursing strategies for family involvement in EoL situations in the ICU. Conclusions: Nurses provide personal space and tranquillity for family members and allow them to be present at any time. Against this background, they support family members and enable them to say goodbye consciously to a loved one. Subsequent work should examine the effectiveness of the strategies described, particularly in terms of stress reactions displayed by family members in the aftermath of EoL situations. In practice, family members should be provided space for privacy. The entire healthcare team should identify and pursue common values and objectives. Moreover, intradisciplinary exchange and mentoring need to be encouraged. In order to prepare future nursing staff for EoL situations in the ICU, recognizing and promoting their educational skills is mandatory.

  18. Machine processing of remotely sensed data - quantifying global process: Models, sensor systems, and analytical methods; Proceedings of the Eleventh International Symposium, Purdue University, West Lafayette, IN, June 25-27, 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mengel, S.K.; Morrison, D.B.

    1985-01-01

    Consideration is given to global biogeochemical issues, image processing, remote sensing of tropical environments, global processes, geology, landcover hydrology, and ecosystems modeling. Topics discussed include multisensor remote sensing strategies, geographic information systems, radars, and agricultural remote sensing. Papers are presented on fast feature extraction; a computational approach for adjusting TM imagery terrain distortions; the segmentation of a textured image by a maximum likelihood classifier; analysis of MSS Landsat data; sun angle and background effects on spectral response of simulated forest canopies; an integrated approach for vegetation/landcover mapping with digital Landsat images; geological and geomorphological studies using an image processing technique; and wavelength intensity indices in relation to tree conditions and leaf-nutrient content.

  19. The internal/external issue what is an outer object? Another person as object and as separate other in object relations models.

    PubMed

    Zachrisson, Anders

    2013-01-01

    The question of what we mean by the term outer object has its roots in the epistemological foundation of psychoanalysis. From the very beginning, Freud's view was Kantian, and psychoanalysis has kept that stance, as it seems. The author reviews the internal/external issue in Freud's thinking and in the central object relations theories (Klein, Winnicott, and Bion). On this background he proposes a simple model to differentiate the concept of object along one central dimension: internal object, external object, and actual person. The main arguments are: (1) there is no direct, unmediated perception of the actual person--the experience of the other is always affected by the perceiver's subjectivity; (2) in intense transference reactions and projections, the perception of the person is dominated by the qualities of an inner object--and the other person "becomes" an external object for the perceiver; (3) when this distortion is less dominating, the other person to a higher degree remains a separate other--a person in his or her own right. Clinical material illustrates these phenomena, and a graphical picture of the model is presented. Finally with the model as background, the author comments on a selection of phenomena and concepts such as unobjectionable transference, "the third position," mourning and loneliness. The way that the internal colours and distorts the external is of course a central preoccupation of psychoanalysis generally. (Spillius et al., 2011, p. 326)

  20. Simple Elimination of Background Fluorescence in Formalin-Fixed Human Brain Tissue for Immunofluorescence Microscopy.

    PubMed

    Sun, Yulong; Ip, Philbert; Chakrabartty, Avijit

    2017-09-03

    Immunofluorescence is a common method used to visualize subcellular compartments and to determine the localization of specific proteins within a tissue sample. A great hindrance to the acquisition of high-quality immunofluorescence images is endogenous autofluorescence of the tissue caused by aging pigments such as lipofuscin or by common sample preparation processes such as aldehyde fixation. This protocol describes how background fluorescence can be greatly reduced through photobleaching using white phosphor light emitting diode (LED) arrays prior to treatment with fluorescent probes. The broad-spectrum emission of white phosphor LEDs allows for bleaching of fluorophores across a range of emission peaks. The photobleaching apparatus can be constructed from off-the-shelf components at very low cost and offers an accessible alternative to commercially available chemical quenchers. A photobleaching pre-treatment of the tissue followed by conventional immunofluorescence staining generates images free of background autofluorescence. Compared to established chemical quenchers, which reduced probe as well as background signals, photobleaching treatment had no effect on probe fluorescence intensity while it effectively reduced background and lipofuscin fluorescence. Although photobleaching requires more time for pre-treatment, higher intensity LED arrays may be used to reduce photobleaching time. This simple method can potentially be applied to a variety of tissues, particularly postmitotic tissues that accumulate lipofuscin such as the brain and cardiac or skeletal muscles.

  1. The Effect of Background Pressure on Electron Acceleration from Ultra-Intense Laser-Matter Interactions

    NASA Astrophysics Data System (ADS)

    Le, Manh; Ngirmang, Gregory; Orban, Chris; Morrison, John; Chowdhury, Enam; Roquemore, William

    2017-10-01

    We present two-dimensional particle-in-cell (PIC) simulations that investigate the role of background pressure on the acceleration of electrons from ultra-intense laser interaction at normal incidence with liquid-density ethylene glycol targets. The interaction was simulated at ten different pressures varying from 7.8 mTorr to 26 Torr. We calculated conversion efficiencies from the simulation results and plotted the efficiencies with respect to the background pressure. The results revealed that the laser to > 100 keV electron conversion efficiency remained flat around 0.35% from 7.8 mTorr to 1.2 Torr and increased exponentially from 1.2 Torr onward to about 1.47% at 26 Torr. Increasing the background pressure clearly has a dramatic effect on the acceleration of electrons from the target. We explain how electrostatic effects, in particular the neutralization of the target by the background plasma, allow electrons to escape more easily and how this effect is strengthened at higher densities. This work could facilitate the design of future experiments in increasing laser to electron conversion efficiency and generating substantial bursts of electrons with relativistic energies. This research is supported by the Air Force Office of Scientific Research under LRIR Project 17RQCOR504 under the management of Dr. Riq Parra and Dr. Jean-Luc Cambier. Support was also provided by the DOD HPCMP Internship Program.

  2. Depression and Anxiety Symptoms in Mothers of Newborns Hospitalized on the Neonatal Intensive Care Unit

    PubMed Central

    Segre, Lisa S.; McCabe, Jennifer E.; Chuffo-Siewert, Rebecca; O’Hara, Michael W.

    2014-01-01

    Background Mothers of infants hospitalized in the neonatal intensive care unit (NICU) are at risk for clinically significant levels of depression and anxiety symptoms; however, the maternal/infant characteristics that predict risk have been difficult to determine. Previous studies have conceptualized depression and anxiety symptoms separately, ignoring their comorbidity. Moreover, risk factors for these symptoms have not been assessed together in one study sample. Objectives The primary aim of this study was to determine whether a diagnostic classification approach or a common-factor model better explained the pattern of symptoms reported by NICU mothers, including depression, generalized anxiety, panic, and trauma. A secondary aim was to assess risk factors of aversive emotional states in NICU mothers based on the supported conceptual model. Method In this cross-sectional study, a nonprobability convenience sample of 200 NICU mothers completed questionnaires assessing maternal demographic and infant health characteristics, as well as maternal depression and anxiety symptoms. Structural equation modeling was used to test a diagnostic classification model, and a common-factor model of aversive emotional states and the risk factors of aversive emotional states in mothers in the NICU. Results Maximum likelihood estimates indicated that examining symptoms of depression and anxiety disorders as separate diagnostic classifications did not fit the data well, whereas examining the common factor of negative emotionality rendered an adequate fit to the data, and identified a history of depression, infant illness, and infant prematurity as significant risk factors. Discussion This study supports a multidimensional view of depression, and should guide both clinical practice and future research with NICU mothers. PMID:25171558

  3. β -delayed γ decay of P 26 : Possible evidence of a proton halo

    DOE PAGES

    Pérez-Loureiro, D.; Wrede, C.; Bennett, M. B.; ...

    2016-06-01

    Background: Measurements of β decay provide important nuclear structure information that can be used to probe isospin asymmetries and inform nuclear astrophysics studies. Purpose: To measure the β-delayed γ decay of 26P and compare the results with previous experimental results and shell-model calculations. Method: A 26P fast beam produced using nuclear fragmentation was implanted into a planar germanium detector. Its β-delayed γ-ray emission was measured with an array of 16 high-purity germanium detectors. Positrons emitted in the decay were detected in coincidence to reduce the background. Results: The absolute intensities of 26P β-delayed γ-rays were determined. A total of six new β-decay branches and 15 new γ-ray lines have been observed for the first time in 26P β-decay. A complete β-decay scheme was built for the allowed transitions to bound excited states of 26Si. ft values and Gamow-Teller strengths were also determined for these transitions and compared with shell model calculations and the mirror β-decay of 26Na, revealing significant mirror asymmetries. Conclusions: A very good agreement with theoretical predictions based on the USDB shell model is observed. The significant mirror asymmetry observed for the transition to the first excited state (δ=51(10)%) may be evidence for a proton halo in 26P.

  4. Impact of Neutral Boundary-Layer Turbulence on Wind-Turbine Wakes: A Numerical Modelling Study

    NASA Astrophysics Data System (ADS)

    Englberger, Antonia; Dörnbrack, Andreas

    2017-03-01

    The wake characteristics of a wind turbine in a turbulent boundary layer under neutral stratification are investigated systematically by means of large-eddy simulations. A methodology to maintain the turbulence of the background flow for simulations with open horizontal boundaries, without the necessity of the permanent import of turbulence data from a precursor simulation, was implemented in the geophysical flow solver EULAG. These requirements are fulfilled by applying the spectral energy distribution of a neutral boundary layer in the wind-turbine simulations. A detailed analysis of the wake response towards different turbulence levels of the background flow results in a more rapid recovery of the wake for a higher level of turbulence. A modified version of the Rankine-Froude actuator disc model and the blade element momentum method are tested as wind-turbine parametrizations resulting in a strong dependence of the near-wake wind field on the parametrization, whereas the far-wake flow is fairly insensitive to it. The wake characteristics are influenced by the two considered airfoils in the blade element momentum method up to a streamwise distance of 14 D ( D = rotor diameter). In addition, the swirl induced by the rotation has an impact on the velocity field of the wind turbine even in the far wake. Further, a wake response study reveals a considerable effect of different subgrid-scale closure models on the streamwise turbulent intensity.

  5. Modeling RNA polymerase interaction in mitochondria of chordates

    PubMed Central

    2012-01-01

    Background In previous work, we introduced a concept, a mathematical model and its computer realization that describe the interaction between bacterial and phage type RNA polymerases, protein factors, DNA and RNA secondary structures during transcription, including transcription initiation and termination. The model accurately reproduces changes of gene transcription level observed in polymerase sigma-subunit knockout and heat shock experiments in plant plastids. The corresponding computer program and a user guide are available at http://lab6.iitp.ru/en/rivals. Here we apply the model to the analysis of transcription and (partially) translation processes in the mitochondria of frog, rat and human. Notably, mitochondria possess only phage-type polymerases. We consider the entire mitochondrial genome so that our model allows RNA polymerases to complete more than one circle on the DNA strand. Results Our model of RNA polymerase interaction during transcription initiation and elongation accurately reproduces experimental data obtained for plastids. Moreover, it also reproduces evidence on bulk RNA concentrations and RNA half-lives in the mitochondria of frog, human with or without the MELAS mutation, and rat with normal (euthyroid) or hyposecretion of thyroid hormone (hypothyroid). The transcription characteristics predicted by the model include: (i) the fraction of polymerases terminating at a protein-dependent terminator in both directions (the terminator polarization), (ii) the binding intensities of the regulatory protein factor (mTERF) with the termination site and, (iii) the transcription initiation intensities (initiation frequencies) of all promoters in all five conditions (frog, healthy human, human with MELAS syndrome, healthy rat, and hypothyroid rat with aberrant mtDNA methylation). Using the model, absolute levels of all gene transcription can be inferred from an arbitrary array of the three transcription characteristics, whereas, for selected genes only relative RNA concentrations have been experimentally determined. Conversely, these characteristics and absolute transcription levels can be obtained using relative RNA concentrations and RNA half-lives known from various experimental studies. In this case, the “inverse problem” is solved with multi-objective optimization. Conclusions In this study, we demonstrate that our model accurately reproduces all relevant experimental data available for plant plastids, as well as the mitochondria of chordates. Using experimental data, the model is applied to estimate binding intensities of phage-type RNA polymerases to their promoters as well as predicting terminator characteristics, including polarization. In addition, one can predict characteristics of phage-type RNA polymerases and the transcription process that are difficult to measure directly, e.g., the association between the promoter’s nucleotide composition and the intensity of polymerase binding. To illustrate the application of our model in functional predictions, we propose a possible mechanism for MELAS syndrome development in human involving a decrease of Phe-tRNA, Val-tRNA and rRNA concentrations in the cell. In addition, we describe how changes in methylation patterns of the mTERF binding site and three promoters in hypothyroid rat correlate with changes in intensities of the mTERF binding and transcription initiations. Finally, we introduce an auxiliary model to describe the interaction between polysomal mRNA and ribonucleases. PMID:22873568

  6. Mainstream Teacher Candidates' Perspectives on ESL Writing: The Effects of Writer Identity and Rater Background

    ERIC Educational Resources Information Center

    Kang, Hyun-Sook; Veitch, Hillary

    2017-01-01

    This study explored the extent to which the ethnic identity of a writer and the background (gender and area of teaching) of a rater can influence mainstream teacher candidates' evaluation of English as a second language (ESL) writing, using a matched-guise method. A one-page essay was elicited from an ESL learner enrolled in an intensive English…

  7. Low Intensity Conflict as Practiced by John Singleton Mosby in the American Civil War

    DTIC Science & Technology

    1986-06-06

    perseverance, and (12) his leadership style was personalized, charismatic, and nonbureaucratic. Mosby acquired his leadership abilities in a number of... operation; equipment and logistics; communications, command and control; results; the element of chance; and leadership. In addition, Mosby's background and character are also...

  8. Diversity in Intensive English Language Centres in South Australia: Sociocultural Approaches to Education for Students with Migrant or Refugee Backgrounds

    ERIC Educational Resources Information Center

    Due, Clemence; Riggs, Damien W.; Augoustinos, Martha

    2016-01-01

    While there is a body of research concerning the education of students with migrant or refugee backgrounds, little of this research focuses on primary school-aged children. In order to address this gap, the current paper utilises data gained from an ethnographic study to consider the challenges and opportunities associated with diverse classrooms…

  9. Probing large-scale magnetism with the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Giovannini, Massimo

    2018-04-01

    Prior to photon decoupling magnetic random fields of comoving intensity in the nano-Gauss range distort the temperature and the polarization anisotropies of the microwave background, potentially induce a peculiar B-mode power spectrum and may even generate a frequency-dependent circularly polarized V-mode. We critically analyze the theoretical foundations and the recent achievements of an interesting trialogue involving plasma physics, general relativity and astrophysics.

  10. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
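
    The method above applies Lucy–Richardson deconvolution to the current image estimate at each reconstruction iteration. The sketch below shows only the Richardson-Lucy update itself on a synthetic 1-D profile; the list-mode OSEM coupling, PSF model, and wavelet denoising step are omitted, and the PSF and counts are assumed values.

      # Minimal 1-D sketch of the Lucy-Richardson update applied to a current image
      # estimate (OSEM coupling and wavelet denoising omitted; PSF and counts synthetic).
      import numpy as np

      def richardson_lucy(blurred, psf, n_iter=25):
          """Classic multiplicative Richardson-Lucy deconvolution."""
          estimate = np.full_like(blurred, blurred.mean())
          psf_flipped = psf[::-1]
          for _ in range(n_iter):
              reblurred = np.convolve(estimate, psf, mode="same")
              ratio = blurred / np.maximum(reblurred, 1e-12)
              estimate *= np.convolve(ratio, psf_flipped, mode="same")
          return estimate

      # Synthetic example: two small "hot" structures blurred by a Gaussian PSF.
      x = np.zeros(128)
      x[40:44] = 10.0
      x[80:82] = 20.0
      psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
      psf /= psf.sum()
      observed = np.convolve(x, psf, mode="same")
      recovered = richardson_lucy(observed, psf)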

  11. Relationship of boreal summer 10-20-day and 30-60-day intraseasonal oscillation intensity over the tropical western North Pacific to tropical Indo-Pacific SST

    NASA Astrophysics Data System (ADS)

    Wu, Renguang; Cao, Xi

    2017-06-01

    The present study contrasts interannual variations in the intensity of boreal summer 10-20-day and 30-60-day intraseasonal oscillations (ISOs) over the tropical western North Pacific and their factors. A pronounced difference is found in the relationship of the two ISOs to El Niño-Southern Oscillation. The 10-20-day ISO intensity is enhanced during El Niño developing summer, whereas the 30-60-day ISO intensity is enhanced during La Niña decaying summer. The above different relationship is interpreted as follows. The equatorial central and eastern Pacific SST anomalies modify vertical wind shear, lower-level moisture, and vertical motion in a southeast-northwest oriented band from the equatorial western Pacific to the tropical western North Pacific where the 10-20-day ISOs originate and propagate. These background field changes modulate the amplitude of 10-20-day ISOs. Preceding equatorial central and eastern Pacific SST anomalies induce SST anomalies in the North Indian Ocean in summer, which in turn modify vertical wind shear and vertical motion over the tropical western North Pacific. The modified background fields influence the amplitude of the 30-60-day ISOs when they reach the tropical western North Pacific from the equatorial region. A feedback of ISO intensity on local SST change is identified in the tropical western North Pacific likely due to a net effect of ISOs on surface heat flux anomalies. This feedback is more prominent from the 10-20-day than the 30-60-day ISO intensity change.

  12. Development of classification models to detect Salmonella Enteritidis and Salmonella Typhimurium found in poultry carcass rinses by visible-near infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Seo, Young Wook; Yoon, Seung Chul; Park, Bosoon; Hinton, Arthur; Windham, William R.; Lawrence, Kurt C.

    2013-05-01

    Salmonella is a major cause of foodborne disease outbreaks resulting from the consumption of contaminated food products in the United States. This paper reports the development of a hyperspectral imaging technique for detecting and differentiating two of the most common Salmonella serotypes, Salmonella Enteritidis (SE) and Salmonella Typhimurium (ST), from background microflora that are often found in poultry carcass rinse. Presumptive positive screening of colonies with a traditional direct plating method is a labor intensive and time consuming task. Thus, this paper is concerned with the detection of differences in spectral characteristics among the pure SE, ST, and background microflora grown on brilliant green sulfa (BGS) and xylose lysine tergitol 4 (XLT4) agar media with a spread plating technique. Visible near-infrared hyperspectral imaging, providing the spectral and spatial information unique to each microorganism, was utilized to differentiate SE and ST from the background microflora. A total of 10 classification models, including five machine learning algorithms, each without and with principal component analysis (PCA), were validated and compared to find the best model in classification accuracy. The five machine learning (classification) algorithms used in this study were Mahalanobis distance (MD), k-nearest neighbor (kNN), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM). The average classification accuracy of all 10 models on a calibration (or training) set of the pure cultures on BGS agar plates was 98% (Kappa coefficient = 0.95) in determining the presence of SE and/or ST although it was difficult to differentiate between SE and ST. The average classification accuracy of all 10 models on a training set for ST detection on XLT4 agar was over 99% (Kappa coefficient = 0.99) although SE colonies on XLT4 agar were difficult to differentiate from background microflora. The average classification accuracy of all 10 models on a validation set of chicken carcass rinses spiked with SE or ST and incubated on BGS agar plates was 94.45% and 83.73%, without and with PCA for classification, respectively. The best performing classification model on the validation set was QDA without PCA by achieving the classification accuracy of 98.65% (Kappa coefficient=0.98). The overall best performing classification model regardless of using PCA was MD with the classification accuracy of 94.84% (Kappa coefficient=0.88) on the validation set.
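
    A sketch of the model-comparison step described above using scikit-learn pipelines, evaluating several classifiers with and without PCA by cross-validation. The spectra are synthetic stand-ins for per-pixel hyperspectral measurements, and the Mahalanobis-distance classifier used in the paper is not included.

      # Sketch of the classifier comparison with and without PCA (synthetic spectra stand
      # in for hyperspectral pixels; the Mahalanobis-distance classifier is omitted).
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n_bands = 60
      # Three classes (SE, ST, background microflora) with slightly shifted mean spectra.
      X = np.vstack([rng.normal(loc=m, scale=0.05, size=(100, n_bands))
                     for m in (0.50, 0.55, 0.60)])
      y = np.repeat([0, 1, 2], 100)

      classifiers = {
          "LDA": LinearDiscriminantAnalysis(),
          "QDA": QuadraticDiscriminantAnalysis(),
          "kNN": KNeighborsClassifier(n_neighbors=5),
          "SVM": SVC(kernel="rbf"),
      }
      for name, clf in classifiers.items():
          for use_pca in (False, True):
              model = make_pipeline(PCA(n_components=10), clf) if use_pca else clf
              acc = cross_val_score(model, X, y, cv=5).mean()
              print(f"{name:>3} {'with PCA' if use_pca else 'no PCA  '}: {acc:.3f}")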

  13. Is Survival Better at Hospitals With Higher “End-of-Life” Treatment Intensity?

    PubMed Central

    Barnato, Amber E.; Chang, Chung-Chou H.; Farrell, Max H.; Lave, Judith R.; Roberts, Mark S.; Angus, Derek C.

    2013-01-01

    Background Concern regarding wide variations in spending and intensive care unit use for patients at the end of life hinges on the assumption that such treatment offers little or no survival benefit. Objective To explore the relationship between hospital “end-of-life” (EOL) treatment intensity and postadmission survival. Research Design Retrospective cohort analysis of Pennsylvania Health Care Cost Containment Council discharge data April 2001 to March 2005 linked to vital statistics data through September 2005 using hospital-level correlation, admission-level marginal structural logistic regression, and pooled logistic regression to approximate a Cox survival model. Subjects A total of 1,021,909 patients ≥65 years old, incurring 2,216,815 admissions in 169 Pennsylvania acute care hospitals. Measures EOL treatment intensity (a summed index of standardized intensive care unit and life-sustaining treatment use among patients with a high predicted probability of dying [PPD] at admission) and 30- and 180-day postadmission mortality. Results There was a nonlinear negative relationship between hospital EOL treatment intensity and 30-day mortality among all admissions, although patients with higher PPD derived the greatest benefit. Compared with admission at an average intensity hospital, admission to a hospital 1 standard deviation below versus 1 standard deviation above average intensity resulted in an adjusted odds ratio of mortality for admissions at low PPD of 1.06 (1.04–1.08) versus 0.97 (0.96–0.99); average PPD: 1.06 (1.04–1.09) versus 0.97 (0.96–0.99); and high PPD: 1.09 (1.07–1.11) versus 0.97 (0.95– 0.99), respectively. By 180 days, the benefits to intensity attenuated (low PPD: 1.03 [1.01–1.04] vs. 1.00 [0.98–1.01]; average PPD: 1.03 [1.02–1.05] vs. 1.00 [0.98–1.01]; and high PPD: 1.06 [1.04–1.09] vs. 1.00 [0.98–1.02]), respectively. Conclusions Admission to higher EOL treatment intensity hospitals is associated with small gains in postadmission survival. The marginal returns to intensity diminish for admission to hospitals above average EOL treatment intensity and wane with time. PMID:20057328

  14. Asymmetry in serial femtosecond crystallography data.

    PubMed

    Sharma, Amit; Johansson, Linda; Dunevall, Elin; Wahlgren, Weixiao Y; Neutze, Richard; Katona, Gergely

    2017-03-01

    Serial crystallography is an increasingly important approach to protein crystallography that exploits both X-ray free-electron laser (XFEL) and synchrotron radiation. Serial crystallography recovers complete X-ray diffraction data by processing and merging diffraction images from thousands of randomly oriented non-uniform microcrystals, of which all observations are partial Bragg reflections. Random fluctuations in the XFEL pulse energy spectrum, variations in the size and shape of microcrystals, integrating over millions of weak partial observations and instabilities in the XFEL beam position lead to new types of experimental errors. The quality of Bragg intensity estimates deriving from serial crystallography is therefore contingent upon assumptions made while modeling these data. Here it is observed that serial femtosecond crystallography (SFX) Bragg reflections do not follow a unimodal Gaussian distribution and it is recommended that an idealized assumption of single Gaussian peak profiles be relaxed to incorporate apparent asymmetries when processing SFX data. The phenomenon is illustrated by re-analyzing data collected from microcrystals of the Blastochloris viridis photosynthetic reaction center and comparing these intensity observations with conventional synchrotron data. The results show that skewness in the SFX observations captures the essence of the Wilson plot and an empirical treatment is suggested that can help to separate the diffraction Bragg intensity from the background.
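
    The paper recommends relaxing the single-Gaussian peak-profile assumption to capture asymmetry in SFX intensity observations. The sketch below fits a skew-normal distribution to repeated observations of one reflection and compares it with a plain Gaussian fit; the intensities are synthetic and the skewness value is an assumption.

      # Sketch: fit a skew-normal distribution to repeated partial-intensity observations
      # of a single Bragg reflection and compare with a plain Gaussian fit (synthetic data).
      import numpy as np
      from scipy import stats

      # Synthetic right-skewed intensity observations for one reflection.
      intensities = stats.skewnorm.rvs(a=4.0, loc=100.0, scale=30.0, size=2000,
                                       random_state=42)

      a_hat, loc_hat, scale_hat = stats.skewnorm.fit(intensities)
      mu_hat, sigma_hat = stats.norm.fit(intensities)

      print(f"skew-normal fit: a={a_hat:.2f}, loc={loc_hat:.1f}, scale={scale_hat:.1f}")
      print(f"Gaussian fit:    mu={mu_hat:.1f}, sigma={sigma_hat:.1f}")
      print(f"sample skewness: {stats.skew(intensities):.2f}")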

  15. SPECTRAL LINE DE-CONFUSION IN AN INTENSITY MAPPING SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Yun-Ting; Bock, James; Bradford, C. Matt

    2016-12-01

    Spectral line intensity mapping (LIM) has been proposed as a promising tool to efficiently probe the cosmic reionization and the large-scale structure. Without detecting individual sources, LIM makes use of all available photons and measures the integrated light in the source confusion limit to efficiently map the three-dimensional matter distribution on large scales as traced by a given emission line. One particular challenge is the separation of desired signals from astrophysical continuum foregrounds and line interlopers. Here we present a technique to extract large-scale structure information traced by emission lines from different redshifts, embedded in a three-dimensional intensity mapping data cube. The line redshifts are distinguished by the anisotropic shape of the power spectra when projected onto a common coordinate frame. We consider the case where high-redshift [C ii] lines are confused with multiple low-redshift CO rotational lines. We present a semi-analytic model for [C ii] and CO line estimates based on the cosmic infrared background measurements, and show that with a modest instrumental noise level and survey geometry, the large-scale [C ii] and CO power spectrum amplitudes can be successfully extracted from a confusion-limited data set, without external information. We discuss the implications and limits of this technique for possible LIM experiments.

  16. Association of mixed hematopoietic chimerism with elevated circulating autoantibodies and chronic graft-versus-host disease occurrence

    PubMed Central

    Perruche, Sylvain; Marandin, Aliette; Kleinclauss, François M.; Angonin, Régis; Fresnay, Stéphanie; Baron, Marie Hélène; Tiberghien, Pierre; Saas, Philippe

    2006-01-01

    Background Use of a reduced intensity conditioning regimen before an allogeneic hematopoietic cell transplantation is frequently associated with an early state of mixed hematopoietic chimerism. Such a co-existence of both host and donor hematopoietic cells may influence post-transplant alloreactivity and may affect the occurrence and severity of acute and chronic graft-versus-host disease (GVHD) as well as the intensity of the graft-versus-leukemia effect. Here we evaluated the relation between chimerism state after reduced intensity conditioning transplantation (RICT), auto-antibody production and chronic GVHD (cGVHD)-related pathology. Methods Chimerism state, circulating anti-cardiolipin and anti-double stranded DNA auto-antibody (Ab) titers as well as occurrence of cGVHD-like lesions were investigated in a murine RICT model. Results We observed a novel association between mixed chimerism state, high levels of pathogenic IgG auto-Abs and subsequent development of cGVHD-like lesions. Furthermore, we found that the persistence of host B cells, but not dendritic cell origin or subset, was a factor associated with the appearance of cGVHD-like lesions. The implication of host B cells was confirmed by a host origin of auto-Abs. Conclusions Recipient B cell persistence may therefore contribute to the frequency and/or severity of cGVHD after RICT. PMID:16495806

  17. An interdisciplinary approach for earthquake modelling and forecasting

    NASA Astrophysics Data System (ADS)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious natural disasters and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) becomes extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the event at present is controlled by the event itself (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
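
    A minimal sketch of the combined conditional intensity described above: a constant background rate plus a self-exciting sum over past earthquake times and an external-excitation sum over past non-seismic observation times, here with exponential kernels. All parameter values and event times are illustrative assumptions.

      # Minimal sketch of the combined conditional intensity described above,
      #   lambda(t) = mu + sum_i alpha*exp(-beta*(t - t_i)) + sum_j gamma*exp(-delta*(t - s_j)),
      # with background rate mu, past earthquake times t_i, and past non-seismic
      # observation times s_j. Parameter values are illustrative only.
      import numpy as np

      def conditional_intensity(t, event_times, external_times,
                                mu=0.2, alpha=0.5, beta=1.0, gamma=0.3, delta=0.5):
          past_events = event_times[event_times < t]
          past_external = external_times[external_times < t]
          self_exciting = np.sum(alpha * np.exp(-beta * (t - past_events)))
          external = np.sum(gamma * np.exp(-delta * (t - past_external)))
          return mu + self_exciting + external

      event_times = np.array([1.0, 2.5, 2.7, 6.0])      # past earthquake times (days)
      external_times = np.array([0.5, 5.5])             # past precursory observation times
      for t in (3.0, 7.0):
          print(f"lambda({t}) = {conditional_intensity(t, event_times, external_times):.3f}")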

  18. Resolving vorticity-driven lateral fire spread using the WRF-Fire coupled atmosphere-fire numerical model

    NASA Astrophysics Data System (ADS)

    Simpson, C. C.; Sharples, J. J.; Evans, J. P.

    2014-09-01

    Vorticity-driven lateral fire spread (VLS) is a form of dynamic fire behaviour, during which a wildland fire spreads rapidly across a steep leeward slope in a direction approximately transverse to the background winds. VLS is often accompanied by a downwind extension of the active flaming region and intense pyro-convection. In this study, the WRF-Fire (WRF stands for Weather Research and Forecasting) coupled atmosphere-fire model is used to examine the sensitivity of resolving VLS to both the horizontal and vertical grid spacing, and the fire-to-atmosphere coupling from within the model framework. The atmospheric horizontal and vertical grid spacing are varied between 25 and 90 m, and the fire-to-atmosphere coupling is either enabled or disabled. At high spatial resolutions, the inclusion of fire-to-atmosphere coupling increases the upslope and lateral rate of spread by factors of up to 2.7 and 9.5, respectively. This increase in the upslope and lateral rate of spread diminishes at coarser spatial resolutions, and VLS is not modelled for a horizontal and vertical grid spacing of 90 m. The lateral fire spread is driven by fire whirls formed due to an interaction between the background winds and the vertical circulation generated at the flank of the fire front as part of the pyro-convective updraft. The laterally advancing fire fronts become the dominant contributors to the extreme pyro-convection. The results presented in this study demonstrate that both high spatial resolution and two-way atmosphere-fire coupling are required to model VLS with WRF-Fire.

  19. Tinnitus

    MedlinePlus

    ... Sound therapies that involve simple things like background music or noise or specialized ear level maskers may ... exposure to high intensity sounds, specifically listening to music. In particular, virtually all teenagers use personal MP3 ...

  20. Multi-channel lock-in amplifier assisted femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy with efficient rejection of superfluorescence background.

    PubMed

    Mao, Pengcheng; Wang, Zhuan; Dang, Wei; Weng, Yuxiang

    2015-12-01

    Superfluorescence appears as an intense background in femtosecond time-resolved fluorescence noncollinear optical parametric amplification spectroscopy, which severely interferes with the reliable acquisition of time-resolved fluorescence spectra, especially for optically dilute samples. Superfluorescence originates from the optical amplification of the vacuum quantum noise, which would be inevitably concomitant with the amplified fluorescence photons during the optical parametric amplification process. Here, we report the development of a femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectrometer assisted with a 32-channel lock-in amplifier for efficient rejection of the superfluorescence background. With this spectrometer, the superfluorescence background signal can be significantly reduced to 1/300-1/100 when the seeding fluorescence is modulated. An integrated 32-bundle optical fiber is used as a linear array light receiver connected to 32 photodiodes in one-to-one mode, and the photodiodes are further coupled to a home-built 32-channel synchronous digital lock-in amplifier. As an implementation, time-resolved fluorescence spectra for rhodamine 6G dye in ethanol solution at an optically dilute concentration of 10(-5) M excited at 510 nm with an excitation intensity of 70 nJ/pulse have been successfully recorded, and the detection limit at a pump intensity of 60 μJ/pulse was determined as about 13 photons/pulse. A concentration-dependent redshift starting at 30 ps after the excitation in time-resolved fluorescence spectra of this dye has also been observed, which can be attributed to the formation of the excimer at a higher concentration, while the blueshift at earlier times (within 10 ps) is attributed to the solvation process.
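
    The spectrometer above rejects the unmodulated superfluorescence by modulating the seeding fluorescence and demodulating each photodiode channel with a lock-in. The sketch below shows the digital lock-in principle for a single channel (mix with in-phase and quadrature references at the modulation frequency, then average); all waveform parameters are synthetic.

      # Single-channel sketch of the digital lock-in principle: mix the detected signal
      # with references at the modulation frequency and low-pass average, so the
      # unmodulated (superfluorescence-like) background averages out. Waveforms are
      # synthetic stand-ins for one photodiode channel.
      import numpy as np

      fs = 10_000.0                      # sampling rate (Hz)
      f_mod = 137.0                      # fluorescence modulation frequency (Hz)
      t = np.arange(0, 2.0, 1 / fs)

      fluorescence = 0.02 * (1 + np.sign(np.sin(2 * np.pi * f_mod * t))) / 2   # chopped signal
      superfluorescence = 0.5                                                  # unmodulated background
      noise = np.random.default_rng(5).normal(0, 0.05, t.size)
      detected = fluorescence + superfluorescence + noise

      reference = np.sin(2 * np.pi * f_mod * t)
      quadrature = np.cos(2 * np.pi * f_mod * t)
      # Demodulate: the DC background averages to ~0 after mixing, the modulated signal survives.
      X = 2 * np.mean(detected * reference)
      Y = 2 * np.mean(detected * quadrature)
      amplitude = np.hypot(X, Y)
      print(f"recovered modulated amplitude ~ {amplitude:.4f}")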

  1. Multi-channel lock-in amplifier assisted femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy with efficient rejection of superfluorescence background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Pengcheng; Wang, Zhuan; Dang, Wei

    Superfluorescence appears as an intense background in femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectroscopy, which severely interferes with the reliable acquisition of the time-resolved fluorescence spectra, especially for an optically dilute sample. Superfluorescence originates from the optical amplification of the vacuum quantum noise, which is inevitably concomitant with the amplified fluorescence photons during the optical parametric amplification process. Here, we report the development of a femtosecond time-resolved fluorescence non-collinear optical parametric amplification spectrometer assisted by a 32-channel lock-in amplifier for efficient rejection of the superfluorescence background. With this spectrometer, the superfluorescence background signal can be significantly reduced to 1/300-1/100 when the seeding fluorescence is modulated. An integrated 32-bundle optical fiber is used as a linear-array light receiver connected to 32 photodiodes in one-to-one mode, and the photodiodes are further coupled to a home-built 32-channel synchronous digital lock-in amplifier. As an implementation, time-resolved fluorescence spectra of rhodamine 6G dye in ethanol solution at an optically dilute concentration of 10^-5 M, excited at 510 nm with an excitation intensity of 70 nJ/pulse, have been successfully recorded, and the detection limit at a pump intensity of 60 μJ/pulse was determined to be about 13 photons/pulse. A concentration-dependent redshift starting at 30 ps after excitation has also been observed in the time-resolved fluorescence spectra of this dye, which can be attributed to the formation of the excimer at higher concentration, while the blueshift at earlier times (within 10 ps) is attributed to the solvation process.
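
    The background-rejection principle described in the two records above lends itself to a compact numerical illustration. The sketch below is not the authors' instrument code; it assumes an on/off modulation of the seed fluorescence at an arbitrary frequency f_mod and synthetic amplitudes, and only shows how lock-in demodulation recovers the weak modulated fluorescence while the much stronger, unmodulated superfluorescence averages away.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0        # sampling (laser shot) rate in Hz -- assumed
f_mod = 25.0       # modulation frequency applied to the seed fluorescence -- assumed
n = 4000
t = np.arange(n) / fs

fluor_amp = 1.0    # modulated fluorescence amplitude (arbitrary units)
sfluor_amp = 50.0  # unmodulated superfluorescence background (much stronger)

# On/off (chopper-style) modulation of the seed fluorescence; the
# superfluorescence is generated on every shot and is therefore unmodulated.
chop = (np.sin(2 * np.pi * f_mod * t) > 0).astype(float)
signal = fluor_amp * chop + sfluor_amp + 0.5 * rng.standard_normal(n)

# Lock-in demodulation: correlate with the reference at f_mod and average.
in_phase = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_mod * t))
quadrature = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_mod * t))
recovered = np.hypot(in_phase, quadrature)

# Only the modulated fluorescence (square-wave fundamental, 2/pi of its
# amplitude) survives; the 50x stronger unmodulated background averages out.
print(f"demodulated amplitude: {recovered:.2f} (expected ~ {2 * fluor_amp / np.pi:.2f})")
```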

  2. The soft X-ray diffuse background

    NASA Technical Reports Server (NTRS)

    Mccammon, D.; Burrows, D. N.; Sanders, W. T.; Kraushaar, W. L.

    1982-01-01

    Maps of the diffuse X-ray background intensity covering essentially the entire sky with approximately 7 deg spatial resolution are presented for seven energy bands. The data were obtained on a series of ten sounding rocket flights conducted over a seven-year period. The different nature of the spatial distributions in different bands implies at least three distinct origins for the diffuse X-rays, none of which is well understood. At energies of approximately 2000 eV, the emission is isotropic and presumably extragalactic; at 500 and 1000 eV, an origin which is at least partially galactic seems called for. At 284 eV, the observed intensity is anticorrelated with neutral hydrogen column density, but we find it unlikely that this anticorrelation is simply due to absorption of an extragalactic or halo source.

  3. An Empirical Decomposition of Near-IR Emission into Galactic and Extragalactic Components

    NASA Technical Reports Server (NTRS)

    Dwek, Eli; Arendt, Richard G.

    2002-01-01

    We decompose the COBE/DIRBE observations of the near-IR sky brightness (minus zodiacal light) into Galactic stellar and interstellar medium (ISM) components and an extragalactic background. This empirical procedure allows us to estimate the 4.9 micron cosmic infrared background (CIB) as a function of the CIB intensity at shorter wavelengths. A weak indication of a rising CIB intensity at wavelengths greater than 3.5 microns hints at interesting astrophysics in the CIB spectrum, or warns that the foreground zodiacal emission may be incompletely subtracted. Subtraction of only the stellar component from the zodiacal-light-subtracted all-sky map reveals the clearest 3.5 micron ISM emission map, which is found to be tightly correlated with the ISM emission at far-IR wavelengths.

  4. Enhancement of vegetation-rainfall feedbacks on the Australian summer monsoon by the Madden-Julian Oscillation

    NASA Astrophysics Data System (ADS)

    Notaro, Michael

    2018-01-01

    A regional climate modeling analysis of the Australian monsoon system reveals a substantial modulation of vegetation-rainfall feedbacks by the Madden Julian Oscillation (MJO), both of which operate at similar sub-seasonal time scales, as evidence that the intensity of land-atmosphere interactions is sensitive to the background atmospheric state. Based on ensemble experiments with imposed modification of northern Australian leaf area index (LAI), the atmospheric responses to LAI anomalies are composited for negative and positive modes of the propagating MJO. In the regional climate model (RCM), northern Australian vegetation feedbacks are characterized by evapotranspiration (ET)-driven rainfall responses, with the moisture feedback mechanism dominating over albedo and roughness feedback mechanisms. During November-April, both Tropical Rainfall Measuring Mission and RCM data reveal MJO's pronounced influence on rainfall patterns across northern Australia, tropical Indian Ocean, Timor Sea, Arafura Sea, and Gulf of Carpentaria, with the MJO dominating over vegetation feedbacks in terms of regulating monsoon rainfall variability. Convectively-active MJO phases support an enhancement of positive vegetation feedbacks on monsoon rainfall. While the MJO imposes minimal regulation of ET responses to LAI anomalies, the vegetation feedback-induced responses in precipitable water, cloud water, and rainfall are greatly enhanced during convectively-active MJO phases over northern Australia, which are characterized by intense low-level convergence and efficient precipitable water conversion. The sub-seasonal response of vegetation-rainfall feedback intensity to the MJO is complex, with significant enhancement of rainfall responses to LAI anomalies in February during convectively-active MJO phases compared to minimal modulation by the MJO during prior and subsequent calendar months.

  5. Characterising an intense PM pollution episode in March 2015 in France from multi-site approach and near real time data: Climatology, variabilities, geographical origins and model evaluation

    NASA Astrophysics Data System (ADS)

    Petit, J.-E.; Amodeo, T.; Meleux, F.; Bessagnet, B.; Menut, L.; Grenier, D.; Pellan, Y.; Ockler, A.; Rocq, B.; Gros, V.; Sciare, J.; Favez, O.

    2017-04-01

    During March 2015, a severe and large-scale particulate matter (PM) pollution episode occurred in France. Near-real-time measurements of the major chemical composition at four different urban background sites across the country (Paris, Creil, Metz and Lyon) allowed the investigation of spatiotemporal variabilities during this episode. A climatology approach showed that all sites experienced a clear and unusual rain shortage, a pattern that is also found on a longer timescale, highlighting the role of synoptic conditions over Western Europe. This episode is characterized by a strong predominance of secondary pollution, and more particularly of ammonium nitrate, which accounted for more than 50% of submicron aerosols at all sites during the most intense period of the episode. Pollution advection is illustrated by similar variabilities in Paris and Creil (around 100 km apart), as well as by trajectory analyses applied to nitrate and sulphate. Local sources, especially wood burning, are however found to contribute to local/regional sub-episodes, notably in Metz. Finally, simulated concentrations from the chemistry-transport model CHIMERE were compared to the observations. The results highlighted different patterns depending on the chemical component and the measuring site, reinforcing the need for such exercises over other pollution episodes and sites.

  6. Effect of magnetic fields on green color formation in frog skin

    NASA Astrophysics Data System (ADS)

    Kashiwagi, H.; Kashiwagi, A.; Iwasaka, M.

    2017-05-01

    The present work is focused on a dynamic and efficient optical control system that is made possible by investigation of the body surfaces of various animals. Specifically, we expect Japanese tree frog (Hyla japonica) skin to provide a model for a flexible display device actuator mechanism. Tree frogs change body color from their original green to other colors in response to background colors. The color formation is controlled not only by chromatophores, but also by guanine microcrystals in iridophores. We collected sample microcrystals from the frog's dorsal skin and made a model display sheet using the green skin layers. The transparent chamber that contained the crystal suspension was layered to enhance light reflection. Sheet color was observed while the angle of light incidence was varied, with and without magnetic field exposure at 0.3 T. A slight increase in red and green intensity was detected. Additionally, reflected intensity increased with increasing angle of incidence. These results indicate that the guanine crystal platelets in frog skin can efficiently switch the reflected light direction under application of a magnetic field. This in turn suggests that a several-micron-sized microcrystal of this type is a candidate material for development of flexible optical chips for ambient light control.

  7. Increased airway reactivity in a neonatal mouse model of Continuous Positive Airway Pressure (CPAP)

    PubMed Central

    Mayer, Catherine A.; Martin, Richard J.; MacFarlane, Peter M.

    2015-01-01

    Background Continuous positive airway pressure (CPAP) is a primary form of respiratory support used in the intensive care of preterm infants, but its long-term effects on airway (AW) function are unknown. Methods We developed a neonatal mouse model of CPAP treatment to determine whether it modifies later AW reactivity. Un-anesthetized, spontaneously breathing mice were fitted with a mask to deliver CPAP (6 cmH2O, 3 hrs/day) for 7 consecutive days starting at postnatal day 1. Airway reactivity to methacholine was assessed using the in vitro living lung slice preparation. Results One week of CPAP increased AW responsiveness to methacholine in male, but not female, mice compared to untreated control animals. The AW hyper-reactivity of male mice persisted for 2 weeks (at P21) after CPAP treatment ended. Four days of CPAP, however, did not significantly increase AW reactivity. Females also exhibited AW hyper-reactivity at P21, suggesting a delayed response to early (7 days) CPAP treatment. The effects of 7 days of CPAP on hyper-reactivity to methacholine were unique to smaller AWs, whereas larger ones were relatively unaffected. Conclusion These data may be important to our understanding of the potential long-term consequences of neonatal CPAP therapy used in the intensive care of preterm infants. PMID:25950451

  8. Family Nurture Intervention in the Neonatal Intensive Care Unit Improves Social-Relatedness, Attention, and Neurodevelopment of Preterm Infants at 18 Months in a Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Welch, Martha G.; Firestein, Morgan R.; Austin, Judy; Hane, Amie A.; Stark, Raymond I.; Hofer, Myron A.; Garland, Marianne; Glickstein, Sara B.; Brunelli, Susan A.; Ludwig, Robert J.; Myers, Michael M.

    2015-01-01

    Background: Preterm infants are at high risk for adverse neurodevelopmental and behavioral outcomes. Family Nurture Intervention (FNI) in the Neonatal Intensive Care Unit (NICU) is designed to counteract adverse effects of separation of mothers and their preterm infants. Here, we evaluate effects of FNI on neurobehavioral outcomes. Methods: Data…

  9. Massed versus Spaced Practice in Vocology: Effect of a Short-Term Intensive Voice Training versus a Longer-Term Traditional Voice Training

    ERIC Educational Resources Information Center

    Meerschman, Iris; Van Lierde, Kristiane; Van Puyvelde, Caro; Bostyn, Astrid; Claeys, Sofie; D'haeseleer, Evelien

    2018-01-01

    Background: In contrast with most medical and pharmaceutical therapies, the optimal dosage for voice therapy or training is unknown. Aims: The aim of this study was to compare the effect of a short-term intensive voice training (IVT) with a longer-term traditional voice training (TVT) on the vocal quality and vocal capacities of vocally healthy…

  10. Emergence of neural encoding of auditory objects while listening to competing speakers

    PubMed Central

    Ding, Nai; Simon, Jonathan Z.

    2012-01-01

    A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation. PMID:22753470

  11. Reconstruction of pulse noisy images via stochastic resonance

    PubMed Central

    Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan

    2015-01-01

    We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911

  12. Origin of particulate matter and gaseous precursors in the Paris Megacity: Results from intensive campaigns, long term measurements and modelling

    NASA Astrophysics Data System (ADS)

    Beekmann, Matthias; Petetin, Hervé; Zhang, Qijie; Prevot, André S. H.; Sciare, Jean; Gros, Valérie; Ghersi, Véronique; Rosso, Amandine; Crippa, Monica; Zotter, Peter; Freutel, Fredericke; Poulain, Laurent; Freney, Evelyne; Sellegri, Karine; Drewnick, Frank; Borbon, Agnès; Wiedensohler, Aflred; Pandis, Spyros N.; Baltensperger, Urs

    2016-04-01

    Uncertainties in the origin of primary and secondary particulate matter and its gaseous precursors in megacities are still large and need to be reduced. A detailed characterization of air quality in Paris (France), a megacity of more than 10 million inhabitants, during two one-month intensive campaigns (MEGAPOLI) and from additional one-year observations (PARTICULATE and FRANCIPOL) revealed that, on average, about 70% of the fine particulate matter (PM) at urban background is transported into the megacity from upwind regions. While advection of sulfate is well documented for other megacities, there was a surprisingly high contribution from long-range transport for both nitrate and organic aerosol. The data set of urban local and advected PM concentrations in the Paris area was used for a thorough evaluation of the CHIMERE model and revealed error compensation between the local and advected components of organic matter and nitrate. During springtime, CHIMERE simulations overestimate the sensitivity of ammonium nitrate peaks to NH3, because (i) they underestimate the urban background NH3 levels, probably due to neglecting enhanced NH3 emissions at higher temperatures, and (ii) they overestimate HNO3. However, from an ensemble of mobile Max-DOAS NO2 column and airborne NOy measurements around Paris, no clear sign of a NOx emission bias in the TNO-Airparif data set was evident. The origin of organic PM was investigated by a comprehensive analysis of aerosol mass spectrometer (AMS), radiocarbon and tracer measurements during the two intensive campaigns. Primary fossil-fuel combustion emissions contributed less than 20% in winter and 40% in summer to carbonaceous fine PM, unexpectedly little for a megacity. Cooking activities and, during winter, residential wood burning are the major primary organic PM sources. This analysis suggests that the major part of secondary organic aerosol is of modern origin, i.e. from biogenic precursors and from wood burning. Implementation of different configurations of the volatility basis set into the CHIMERE model allowed summertime organic aerosol (OA) peaks within the agglomeration to be correctly represented and attributed to biogenic secondary OA as their major source. OA build-up from anthropogenic precursors within the plume was also correctly simulated, but it was not possible to attribute it specifically to the oxidation of aromatic or of semi- and intermediate-volatility organic compounds. Plume build-up of PM contributes significantly to regional air quality around the Paris region.

  13. Evaluation of the effectiveness of Gaussian filtering in distinguishing punctate synaptic signals from background noise during image analysis.

    PubMed

    Iwabuchi, Sadahiro; Kakazu, Yasuhiro; Koh, Jin-Young; Harata, N Charles

    2014-02-15

    Images in biomedical imaging research are often affected by non-specific background noise. This poses a serious problem when the noise overlaps with specific signals to be quantified, e.g. for their number and intensity. A simple and effective means of removing background noise is to prepare a filtered image that closely reflects background noise and to subtract it from the original unfiltered image. This approach is in common use, but its effectiveness in identifying and quantifying synaptic puncta has not been characterized in detail. We report on our assessment of the effectiveness of isolating punctate signals from diffusely distributed background noise using one variant of this approach, "Difference of Gaussian(s) (DoG)" which is based on a Gaussian filter. We evaluated immunocytochemically stained, cultured mouse hippocampal neurons as an example, and provided the rationale for choosing specific parameter values for individual steps in detecting glutamatergic nerve terminals. The intensity and width of the detected puncta were proportional to those obtained by manual fitting of two-dimensional Gaussian functions to the local information in the original image. DoG was compared with the rolling-ball method, using biological data and numerical simulations. Both methods removed background noise, but differed slightly with respect to their efficiency in discriminating neighboring peaks, as well as their susceptibility to high-frequency noise and variability in object size. DoG will be useful in detecting punctate signals, once its characteristics are examined quantitatively by experimenters. Copyright © 2013 Elsevier B.V. All rights reserved.
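
    As a rough illustration of the DoG idea discussed above (not the authors' exact pipeline), the following sketch subtracts a broadly smoothed copy of an image from a lightly smoothed one, so that narrow puncta survive while the diffuse background is suppressed. The sigma values and the synthetic test image are assumptions chosen only for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(image, sigma_small=1.0, sigma_large=5.0):
    """Difference of Gaussians: a lightly smoothed image minus a heavily
    smoothed copy that approximates the diffuse background."""
    img = image.astype(float)
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

# Toy example: one synthetic punctum on a smooth background gradient.
yy, xx = np.mgrid[0:64, 0:64]
background = 0.5 + 0.01 * xx                                   # diffuse, slowly varying
punctum = 2.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 2.0 ** 2))
noise = 0.05 * np.random.default_rng(1).standard_normal((64, 64))
image = background + punctum + noise

filtered = dog_filter(image)
print("max before / after filtering:", round(float(image.max()), 2),
      round(float(filtered.max()), 2))
```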

  14. Modeling evolution of spatially distributed bacterial communities: a simulation with the haploid evolutionary constructor

    PubMed Central

    2015-01-01

    Background Multiscale approaches for integrating submodels of various levels of biological organization into a single model became the major tool of systems biology. In this paper, we have constructed and simulated a set of multiscale models of spatially distributed microbial communities and study an influence of unevenly distributed environmental factors on the genetic diversity and evolution of the community members. Results Haploid Evolutionary Constructor software http://evol-constructor.bionet.nsc.ru/ was expanded by adding the tool for the spatial modeling of a microbial community (1D, 2D and 3D versions). A set of the models of spatially distributed communities was built to demonstrate that the spatial distribution of cells affects both intensity of selection and evolution rate. Conclusion In spatially heterogeneous communities, the change in the direction of the environmental flow might be reflected in local irregular population dynamics, while the genetic structure of populations (frequencies of the alleles) remains stable. Furthermore, in spatially heterogeneous communities, the chemotaxis might dramatically affect the evolution of community members. PMID:25708911

  15. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuations of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ~1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≲1 nW m^-2 sr^-1 at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs inhabited by the populations producing these source-subtracted CIB fluctuations, and to isolate the individual fluxes of these populations.
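
    The parameterization of the fluctuation power spectra mentioned above can be sketched as a simple least-squares fit. In the sketch below the instrument-noise and shot-noise terms are lumped into a single white component (in the study they are estimated separately), and the functional form, units and numerical values are illustrative assumptions rather than the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

def fluctuation_spectrum(q, white, amp, index):
    """White-noise floor (instrument + shot noise, lumped here) plus a
    power-law clustering component."""
    return white + amp * q ** (-index)

q = np.logspace(-2.0, 0.0, 30)          # angular frequency, arbitrary (assumed) units
rng = np.random.default_rng(5)
truth = fluctuation_spectrum(q, white=1.0, amp=0.05, index=1.8)
measured = truth * (1.0 + 0.05 * rng.standard_normal(q.size))   # 5% multiplicative noise

popt, _ = curve_fit(fluctuation_spectrum, q, measured, p0=[0.5, 0.1, 1.0])
print("fitted [white, amp, index]:", np.round(popt, 2))
```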

  16. TU-F-BRD-01: Biomedical Informatics for Medical Physicists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, M; Kalet, I; McNutt, T

    Biomedical informatics encompasses a very large domain of knowledge and applications. This broad and loosely defined field can make it difficult to navigate. Physicists often are called upon to provide informatics services and/or to take part in projects involving principles of the field. The purpose of the presentations in this symposium is to help medical physicists gain some knowledge about the breadth of the field and how, in the current clinical and research environment, they can participate and contribute. Three talks have been designed to give an overview from the perspective of physicists and to provide a more in-depth discussion in two areas. One of the primary purposes, and the main subject of the first talk, is to help physicists achieve a perspective about the range of the topics and concepts that fall under the heading of 'informatics'. The approach is to de-mystify topics and jargon and to help physicists find resources in the field should they need them. The other talks explore two areas of biomedical informatics in more depth. The goal is to highlight two domains of intense current interest, databases and models, in enough depth into current approaches so that an adequate background for independent inquiry is achieved. These two areas will serve as good examples of how physicists, using informatics principles, can contribute to oncology practice and research. Learning Objectives: To understand how the principles of biomedical informatics are used by medical physicists. To put the relevant informatics concepts in perspective with regard to biomedicine in general. To use clinical database design as an example of biomedical informatics. To provide a solid background into the problems and issues of the design and use of data and databases in radiation oncology. To use modeling in the service of decision support systems as an example of modeling methods and data use. To provide a background into how uncertainty in our data and knowledge can be incorporated into modeling methods.

  17. Improved tomographic reconstructions using adaptive time-dependent intensity normalization.

    PubMed

    Titarenko, Valeriy; Titarenko, Sofya; Withers, Philip J; De Carlo, Francesco; Xiao, Xianghui

    2010-09-01

    The first processing step in synchrotron-based micro-tomography is the normalization of the projection images against the background, also referred to as a white field. Owing to time-dependent variations in illumination and defects in detection sensitivity, the white field is different from the projection background. In this case standard normalization methods introduce ring and wave artefacts into the resulting three-dimensional reconstruction. In this paper the authors propose a new adaptive technique accounting for these variations and allowing one to obtain cleaner normalized data and to suppress ring and wave artefacts. The background is modelled by the product of two time-dependent terms representing the illumination and detection stages. These terms are written as unknown functions, one scaled and shifted along a fixed direction (describing the illumination term) and one translated by an unknown two-dimensional vector (describing the detection term). The proposed method is applied to two sets (a stem Salix variegata and a zebrafish Danio rerio) acquired at the parallel beam of the micro-tomography station 2-BM at the Advanced Photon Source showing significant reductions in both ring and wave artefacts. In principle the method could be used to correct for time-dependent phenomena that affect other tomographic imaging geometries such as cone beam laboratory X-ray computed tomography.
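
    A highly simplified sketch of the motivation for adaptive normalization follows. It is not the authors' two-term model (which also allows for shifts of the illumination and detection terms); it only illustrates the time-dependent intensity part by rescaling the white field per projection using pixels assumed to contain no sample. All names and values are assumptions.

```python
import numpy as np

def normalize_static(projection, white):
    """Conventional normalization: divide by a single measured white field."""
    return projection / white

def normalize_adaptive(projection, white, air_mask):
    """Rescale the white field to match the projection on air-only pixels
    (no sample), then normalize. This absorbs slow drifts of the source."""
    scale = np.median(projection[air_mask] / white[air_mask])
    return projection / (scale * white)

# Toy data: the source intensity drifted by 7% between the white-field exposure
# and the projection, which biases the static normalization.
rng = np.random.default_rng(2)
white = 1000.0 + 50.0 * rng.random((8, 8))
transmission = np.ones((8, 8))
transmission[2:6, 2:6] = 0.6                  # sample occupies the centre
projection = 1.07 * white * transmission

air_mask = transmission == 1.0                # pixels known to see only air
print("static   (air pixel):", round(float(normalize_static(projection, white)[0, 0]), 3))
print("adaptive (air pixel):", round(float(normalize_adaptive(projection, white, air_mask)[0, 0]), 3))
```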

  18. Development and evaluation of a data-adaptive alerting algorithm for univariate temporal biosurveillance data.

    PubMed

    Elbert, Yevgeniy; Burkom, Howard S

    2009-11-20

    This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
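
    A minimal sketch of a Holt-Winters-based control-chart alerting rule of the kind discussed above is given below. It is not the authors' parameterization; the training window, weekly season length, threshold k and the synthetic outbreak are assumptions, and statsmodels' additive Holt-Winters implementation stands in for the generalized smoothing used in the paper.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def holt_winters_alerts(counts, season_length=7, k=3.0, train_days=56):
    """Fit additive Holt-Winters to the training window, forecast the rest,
    and flag days exceeding forecast + k * (in-sample residual SD)."""
    counts = np.asarray(counts, dtype=float)
    train, test = counts[:train_days], counts[train_days:]
    fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                               seasonal_periods=season_length).fit()
    forecasts = fit.forecast(len(test))
    resid_sd = np.std(train - fit.fittedvalues)
    return forecasts, test > forecasts + k * resid_sd

# Toy series: weekly seasonality plus an injected outbreak on the last 3 days.
rng = np.random.default_rng(3)
days = np.arange(70)
series = 20 + 5 * np.sin(2 * np.pi * days / 7) + rng.poisson(3, size=70)
series[-3:] += 25                              # simulated outbreak signal

forecasts, alerts = holt_winters_alerts(series)
print("alerted days (index within the 14-day test window):", np.where(alerts)[0])
```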

  19. Design of a portable fluoroquinolone analyzer based on terbium-sensitized luminescence

    NASA Astrophysics Data System (ADS)

    Chen, Guoying

    2007-09-01

    A portable fluoroquinolone (FQ) analyzer is designed and prototyped based on terbium-sensitized luminescence (TSL). The excitation source is a 327-nm light-emitting diode (LED) operated in pulsed mode, and the luminescence signal is detected by a photomultiplier tube (PMT). In comparison to a conventional xenon flashlamp, an LED is small, light, robust, and energy efficient. More importantly, its narrow emission bandwidth and low residual radiation reduce the background signal. In pulsed mode, an LED operates at a current 1-2 orders of magnitude lower than that of a xenon flashlamp, thus minimizing electromagnetic interference (EMI) to the detector circuitry. The PMT is gated to minimize its response to the light source. These measures lead to reduced background noise in the time domain. To overcome pulse-to-pulse variation, signal normalization based on individual pulse energy is implemented. Instrument operation and data processing are controlled by a computer running a custom LabVIEW program. Enrofloxacin (ENRO) is used as a model analyte to evaluate instrument performance. The integrated TSL intensity reveals a linear dependence up to 2 ppm. A 1.1-ppb limit of detection (LOD) is achieved with a relative standard deviation (RSD) averaging 5.1%. The background noise corresponds to ~5 ppb. At 19 lbs, this portable analyzer is field deployable for agricultural, environmental and clinical analyses.
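
    The pulse-energy normalization step can be illustrated with a short sketch. The numbers below are assumptions, not the instrument's specifications; the point is only that dividing each gated reading by its monitored pulse energy removes most of the shot-to-shot excitation variation before the TSL intensity is integrated.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pulses = 500
pulse_energy = 1.0 + 0.10 * rng.standard_normal(n_pulses)   # monitored LED pulse energy
true_response = 0.8                                          # TSL counts per unit pulse energy

raw = true_response * pulse_energy + 0.02 * rng.standard_normal(n_pulses)
normalized = raw / pulse_energy                              # per-pulse normalization

print("raw RSD        : %.1f%%" % (100 * raw.std() / raw.mean()))
print("normalized RSD : %.1f%%" % (100 * normalized.std() / normalized.mean()))
```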

  20. Simulating secondary organic aerosol from missing diesel-related intermediate-volatility organic compound emissions during the Clean Air for London (ClearfLo) campaign

    NASA Astrophysics Data System (ADS)

    Ots, Riinu; Young, Dominique E.; Vieno, Massimo; Xu, Lu; Dunmore, Rachel E.; Allan, James D.; Coe, Hugh; Williams, Leah R.; Herndon, Scott C.; Ng, Nga L.; Hamilton, Jacqueline F.; Bergström, Robert; Di Marco, Chiara; Nemitz, Eiko; Mackenzie, Ian A.; Kuenen, Jeroen J. P.; Green, David C.; Reis, Stefan; Heal, Mathew R.

    2016-05-01

    We present high-resolution (5 km × 5 km) atmospheric chemical transport model (ACTM) simulations of the impact of newly estimated traffic-related emissions on secondary organic aerosol (SOA) formation over the UK for 2012. Our simulations include additional diesel-related intermediate-volatility organic compound (IVOC) emissions derived directly from comprehensive field measurements at an urban background site in London during the 2012 Clean Air for London (ClearfLo) campaign. Our IVOC emissions are added proportionally to VOC emissions, as opposed to proportionally to primary organic aerosol (POA) as has been done by previous ACTM studies seeking to simulate the effects of these missing emissions. Modelled concentrations are evaluated against hourly and daily measurements of organic aerosol (OA) components derived from aerosol mass spectrometer (AMS) measurements also made during the ClearfLo campaign at three sites in the London area. According to the model simulations, diesel-related IVOCs can explain on average ~ 30 % of the annual SOA in and around London. Furthermore, the 90th percentile of modelled daily SOA concentrations for the whole year is 3.8 µg m-3, constituting a notable addition to total particulate matter. More measurements of these precursors (currently not included in official emissions inventories) are recommended. During the period of concurrent measurements, SOA concentrations at the Detling rural background location east of London were greater than at the central London location. The model shows that this was caused by an intense pollution plume with a strong gradient of imported SOA passing over the rural location. This demonstrates the value of modelling for supporting the interpretation of measurements taken at different sites or for short durations.

  1. Soviet research on the transport of intense relativistic electron beams through high-pressure air

    NASA Astrophysics Data System (ADS)

    Wells, Nikita

    1987-05-01

    Soviet research on the transport of intense relativistic electron beams (IREB) through background air at pressures from 1/100 Torr to atmospheric is analyzed, as reflected in the Soviet open literature of the last 15 years. Important Soviet findings include: (1) the formation of a plasma channel created by an IREB propagating through background air, and the effect of the beam parameters upon the plasma channel parameters (and vice versa); (2) determination of the background air pressures for optimum transport of an IREB in two ranges, an ion-focused regime at 0.06 to 0.09 Torr and a low-pressure window at 1 Torr; (3) observation of current enhancement, whereby the IREB-induced current in the plasma is higher than the initial beam current; and (4) the effect of the resistive hose instability on IREB propagation. This research is characterized by an absence of high-energy experimentation. A conclusion of the research is that, for optimum beam transport through air, it is imperative to ensure conditions that allow full neutralization of the IREB's self-fields along the entire path of the beam's transport.

  2. Effect of Detector Dead Time on the Performance of Optical Direct-Detection Communication Links

    NASA Technical Reports Server (NTRS)

    Chen, C.-C.

    1988-01-01

    Avalanche photodiodes (APDs) operating in the Geiger mode can provide a significantly improved single-photon detection sensitivity over conventional photodiodes. However, the quenching circuit required to remove the excess charge carriers after each photon event can introduce an undesirable dead time into the detection process. The effect of this detector dead time on the performance of a binary pulse-position-modulated (PPM) channel is studied by analyzing the error probability. It is shown that, when background noise is negligible, the performance of the detector with dead time is similar to that of a quantum-limited receiver. For systems with increasing background intensities, the error rate of the receiver starts to degrade rapidly with increasing dead time. The power penalty due to detector dead time is also evaluated and shown to depend critically on background intensity as well as dead time. Given the expected background strength in an optical channel, therefore, a constraint must be placed on the bandwidth of the receiver to limit the amount of power penalty due to detector dead time.
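
    The quantum-limited baseline referred to above (negligible background, no dead time) has a simple closed form for binary PPM with direct detection: an error can occur only when no photons are detected in the signal slot, after which the receiver guesses, so P_err = 0.5*exp(-Ks). The Monte Carlo sketch below checks that formula; it does not model the dead-time penalty analyzed in the paper, and the mean photon number Ks is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(6)
Ks = 3.0                # mean detected signal photons per PPM pulse -- assumed
n_bits = 200_000

signal_counts = rng.poisson(Ks, size=n_bits)    # photons in the pulsed slot
noise_counts = np.zeros(n_bits, dtype=int)      # negligible background assumed

# Decision rule: pick the slot with more counts; break ties with a fair guess.
ties = signal_counts == noise_counts
errors = np.where(ties, rng.random(n_bits) < 0.5, signal_counts < noise_counts)

print("simulated P_err:", errors.mean())
print("0.5 * exp(-Ks) :", 0.5 * np.exp(-Ks))
```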

  3. DMR 'Map of the Early Universe.'

    NASA Technical Reports Server (NTRS)

    2002-01-01

    DMR 'Map of the Early Universe.' This false-color image shows tiny variations in the intensity of the cosmic microwave background measured in four years of observations by the Differential Microwave Radiometers on NASA's Cosmic Background Explorer (COBE). The cosmic microwave background is widely believed to be a remnant of the Big Bang; the blue and red spots correspond to regions of greater or lesser density in the early Universe. These 'fossilized' relics record the distribution of matter and energy in the early Universe before the matter became organized into stars and galaxies. While the initial discovery of variations in the intensity of the CMB (made by COBE in 1992) was based on a mathematical examination of the data, this picture of the sky from the full four-year mission gives an accurate visual impression of the data. The features traced in this map stretch across the visible Universe: the largest features seen by optical telescopes, such as the 'Great Wall' of galaxies, would fit neatly within the smallest feature in this map. (See Bennett et al. 1996, ApJ, 464, L1 and references therein for details.)

  4. Effect of detector dead time on the performance of optical direct-detection communication links

    NASA Astrophysics Data System (ADS)

    Chen, C.-C.

    1988-05-01

    Avalanche photodiodes (APDs) operating in the Geiger mode can provide a significantly improved single-photon detection sensitivity over conventional photodiodes. However, the quenching circuit required to remove the excess charge carriers after each photon event can introduce an undesirable dead time into the detection process. The effect of this detector dead time on the performance of a binary pulse-position-modulated (PPM) channel is studied by analyzing the error probability. It is shown that, when background noise is negligible, the performance of the detector with dead time is similar to that of a quantum-limited receiver. For systems with increasing background intensities, the error rate of the receiver starts to degrade rapidly with increasing dead time. The power penalty due to detector dead time is also evaluated and shown to depend critically on background intensity as well as dead time. Given the expected background strength in an optical channel, therefore, a constraint must be placed on the bandwidth of the receiver to limit the amount of power penalty due to detector dead time.

  5. Acute low back pain is marked by variability: An internet-based pilot study

    PubMed Central

    2011-01-01

    Background Pain variability in acute LBP has received limited study. The objectives of this pilot study were to characterize fluctuations in pain during acute LBP, to determine whether self-reported 'flares' of pain represent discrete periods of increased pain intensity, and to examine whether the frequency of flares was associated with back-related disability outcomes. Methods We conducted a cohort study of acute LBP patients utilizing frequent serial assessments and Internet-based data collection. Adults with acute LBP (lasting ≤3 months) completed questionnaires at the time of seeking care, and at both 3-day and 1-week intervals, for 6 weeks. Back pain was measured using a numerical pain rating scale (NPRS), and disability was measured using the Oswestry Disability Index (ODI). A pain flare was defined as 'a period of increased pain lasting at least 2 hours, when your pain intensity is distinctly worse than it has been recently'. We used mixed-effects linear regression to model longitudinal changes in pain intensity, and multivariate linear regression to model associations between flare frequency and disability outcomes. Results 42 of 47 participants (89%) reported pain flares, and the average number of discrete flare periods per patient was 3.5 over 6 weeks of follow-up. More than half of flares were less than 4 hours in duration, and about 75% of flares were less than one day in duration. A model with a quadratic trend for time best characterized improvements in pain. Pain decreased rapidly during the first 14 days after seeking care, and leveled off after about 28 days. Patients who reported a pain flare experienced an almost 3-point greater current NPRS than those not reporting a flare (mean difference [SD] 2.70 [0.11]; p < 0.0001). Higher flare frequency was independently associated with a higher final ODI score (β [SE] 0.28 [0.08]; p = 0.002). Conclusions Acute LBP is characterized by variability. Patients with acute LBP report multiple distinct flares of pain, which correspond to discrete increases in pain intensity. A higher flare frequency is associated with worse disability outcomes. PMID:21974962

  6. "Inner electron" radiation belt: problems of model creation

    NASA Astrophysics Data System (ADS)

    Temnyi, V.

    The content of the intense fluxes of trapped electrons J_e with energies E_e > 40 keV in the center of the inner terrestrial radiation belt remains uncertain in the Vette AE-8 (1991) model. This is explained by the methodical difficulties of discrete measurements of electrons with narrow-angle spectrometers against the background of omnidirectional penetrating protons with energies E_p > 40 MeV and of electrons with E_e > 1 MeV after the STARFISH burst. The results of integral measurements of trapped electrons by two groups, V. I. Krassovsky on the third Soviet satellite (May 1958) and J. Van Allen on EXPLORER-IV (July-August 1958) and on INJUN-1 (1961), gave estimates of the electron energy fluxes I_e(E_e > 20 keV) of about 20-100 erg cm^-2 s^-1 in the inner radiation belt. Improved integral measurements of electrons by the Krassovsky group on the satellites KOSMOS-3, -5 and ELECTRON-1, -3 (1962-1964) allow the distribution of their intensities over the whole inner belt to be determined. These data can supplement the central part of the inner belt in the AE-8 model (see the report by Bolunova et al., COSPAR-1965, published in SPACE RESEARCH VI, 1967, p. 649-661). From these data, a maximum of trapped electrons J_e(E_e > 40 keV) = 2×10^9 cm^-2 s^-1 is located at L = 1.6, B/B_0 = 1. Intensities down to 2×10^7 cm^-2 s^-1 are determined by the coordinates (L, B) alone; for smaller intensities, the dependence on longitude along a drift shell becomes essential. Thus, in the center of the inner radiation belt the energy fluxes I_e(E_e > 40 keV) reach 500 erg cm^-2 s^-1 and the electron density n_e = 0.2 cm^-3, while for trapped protons I_p(E_p > 40 MeV) is less than 3 erg cm^-2 s^-1 and n_p < 5×10^-6 cm^-3. This forces a search for a more powerful source of trapped electrons than the beta decay of cosmic-ray albedo neutrons.

  7. Upper limit on the inner radiation belt MeV electron intensity

    NASA Astrophysics Data System (ADS)

    Li, X.; Selesnick, R. S.; Baker, D. N.; Jaynes, A. N.; Kanekal, S. G.; Schiller, Q.; Blum, L.; Fennell, J.; Blake, J. B.

    2015-02-01

    No instruments in the inner radiation belt are immune from the unforgiving penetration of the highly energetic protons (tens of MeV to GeV). The inner belt proton flux level, however, is relatively stable; thus, for any given instrument, the proton contamination often leads to a certain background noise. Measurements from the Relativistic Electron and Proton Telescope integrated little experiment on board Colorado Student Space Weather Experiment CubeSat, in a low Earth orbit, clearly demonstrate that there exist sub-MeV electrons in the inner belt because their flux level is orders of magnitude higher than the background, while higher-energy electron (>1.6 MeV) measurements cannot be distinguished from the background. Detailed analysis of high-quality measurements from the Relativistic Electron and Proton Telescope on board Van Allen Probes, in a geo-transfer-like orbit, provides, for the first time, quantified upper limits on MeV electron fluxes in various energy ranges in the inner belt. These upper limits are rather different from flux levels in the AE8 and AE9 models, which were developed based on older data sources. For 1.7, 2.5, and 3.3 MeV electrons, the upper limits are about 1 order of magnitude lower than predicted model fluxes. The implication of this difference is profound in that unless there are extreme solar wind conditions, which have not happened yet since the launch of Van Allen Probes, significant enhancements of MeV electrons do not occur in the inner belt even though such enhancements are commonly seen in the outer belt.

  8. Society of Pediatric Psychology Diversity Award: Training Underrepresented Minority Students in Psychology

    PubMed Central

    Mitchell, Monica J.; Crosby, Lori E.

    2016-01-01

    Improving diversity, particularly among trainees and professionals from underrepresented ethnic minority backgrounds, has been a long-stated goal for the field of Psychology. Research has provided strategies and best practices, such as ensuring cultural sensitivity and relevance in coursework, clinical and research training, promoting a supportive and inclusive climate, providing access to cultural and community opportunities, and increasing insight and cultural competence among professionals (Rogers & Molina, 2006). Despite this, the rates of psychologists from ethnically diverse and underrepresented minority (URM) backgrounds remain low and few published studies have described programmatic efforts to increase diversity within the field. This paper describes the INNOVATIONS training model, which provides community and culturally related research experiences, graduate-school related advising, and mentoring to high school and college students. The paper also examines how the model may support enrollment of URM students in doctoral programs in psychology. Findings indicate that INNOVATIONS supported students’ transition from high school and college to graduate programs (with approximately 75% of students enrolling in Master’s and Doctoral programs). INNOVATIONS also supported students, including those from URM backgrounds, enrolling in doctoral programs (41.7%). Students who were trained in the research assistant track were most likely to enroll in psychology doctoral programs, perhaps as a result of the intensive time and training committed to research and clinical experiences. Data support the importance of research training for URM students pursuing psychology graduate study and the need to ensure cultural relevance of the training. Implications for clinical and pediatric psychology are discussed. PMID:28603680

  9. Relationship of Impaired Driving Enforcement Intensity to Drinking and Driving on the Roads

    PubMed Central

    Fell, James C.; Waehrer, Geetha; Voas, Robert B.; Auld-Owens, Amy; Carr, Katherine; Pell, Karen

    2014-01-01

    Background It is principally the area of enforcement that offers the greatest opportunity for reducing alcohol-impaired driving in the near future. How much of a reduction in drinking and driving would be achieved by how much improvement in enforcement intensity? Methods We developed logistic regression models to explore how enforcement intensity (six different measures) related to the prevalence of weekend, nighttime drivers in the 2007 National Roadside Survey (NRS) who had been drinking (blood alcohol concentration [BAC]>.00 g/dL), who had BACs>.05 g/dL, and who were driving with an illegal BAC>.08 g/dL. Results Drivers on the roads in our sample of 30 communities who were exposed to fewer than 228 traffic stops per 10,000 population aged 18 and older had 2.4 times the odds of being BAC positive, 3.6 times the odds of driving with a BAC>0.05, and 3.8 times the odds of driving with a BAC>0.08 compared to those drivers on the roads in communities with more than 1,275 traffic stops per 10,000 population. Drivers on the roads in communities with fewer than 3.7 driving-under-the-influence (DUI) arrests per 10,000 population had 2.7 times the odds of BAC-positive drivers on the roads compared to communities with the highest intensity of DUI arrest activity (>38 DUI arrests per 10,000 population). Conclusion The number of traffic stops and DUI arrests per capita were significantly associated with the odds of drinking and driving on the roads in these communities. This might reflect traffic enforcement visibility. The findings in this study may help law enforcement agencies around the country adjust their traffic enforcement intensity to reduce impaired driving in their community. PMID:25515820

  10. In vivo assessment of wound re-epithelialization by UV fluorescence excitation imaging

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Ortega-Martinez, Antonio; Padilla-Martinez, Juan Pablo; Williams, Maura; Farinelli, William; Anderson, R. R.; Franco, Walfre

    2017-02-01

    Background and Objectives: We have previously demonstrated the efficacy of a non-invasive, non-contact, fast and simple but robust fluorescence imaging (u-FEI) method to monitor the healing of skin wounds in vitro. This system can image highly-proliferating cellular processes (295/340 nm excitation/emission wavelengths) to study epithelialization in a cultured wound model. The objective of the current work is to evaluate the suitability of u-FEI for monitoring wound re-epithelialization in vivo. Study Design: Full-thickness wounds were created in the tail of rats and imaged weekly using u-FEI at 295/340nm excitation/emission wavelengths. Histology was used to investigate the correlation between the spatial distribution and intensity of fluorescence and the extent of wound epithelialization. In addition, the expression of the nuclear protein Ki67 was used to confirm the association between the proliferation of keratinocyte cells and the intensity of fluorescence. Results: Keratinocytes forming neo-epidermis exhibited higher fluorescence intensity than the keratinocytes not involved in re-epithelialization. In full-thickness wounds the fluorescence first appeared at the wound edge where keratinocytes initiated the epithelialization process. Fluorescence intensity increased towards the center as the keratinocytes partially covered the wound. As the wound healed, fluorescence decreased at the edges and was present only at the center as the keratinocytes completely covered the wound at day 21. Histology demonstrated that changes in fluorescence intensity from the 295/340nm band corresponded to newly formed epidermis. Conclusions: u-FEI at 295/340nm allows visualization of proliferating keratinocyte cells during re-epithelialization of wounds in vivo, potentially providing a quantitative, objective and simple method for evaluating wound closure in the clinic.

  11. The mean intensity of radiation at 2 microns in the solar neighborhood

    NASA Technical Reports Server (NTRS)

    Jura, M.

    1979-01-01

    Consideration is given to the value of the mean intensity at 2 microns in the solar neighborhood, and it is found that it is likely to be a factor of four greater than previously estimated on theoretical grounds. It is noted however, that the estimate does agree with a reasonable extrapolation of the results of the survey of the Galactic plane by the Japanese group. It is concluded that the mean intensity in the solar neighborhood therefore probably peaks somewhat longward of 1 micron, and that this result is important for understanding the temperature of interstellar dust and the intensity of the far infrared background. This means specifically that dark clouds probably emit significantly more far infrared radiation than previously predicted.

  12. X-ray emission reduction and photon dose lowering by energy loss of fast electrons induced by return current during the interaction of a short-pulse high-intensity laser on a metal solid target

    NASA Astrophysics Data System (ADS)

    Compant La Fontaine, A.

    2018-04-01

    During the interaction of a short-pulse high-intensity laser with the preplasma produced by the pulse's pedestal in front of a high-Z metal solid target, high-energy electrons are produced, which in turn create an X-ray source by interacting with the atoms of the converter target. The current carried by the hot electrons is almost completely neutralized by a return current j driven by the background electrons of the conductive target, and the force exerted on the hot electrons by the electric field E produced by the background electrons, which induces Ohmic heating j·E, reduces the energy of the hot electrons and thus lowers the X-ray emission and photon dose. This effect is analyzed here by means of a simple 1-D temperature model which contains the most significant terms of the relativistic Fokker-Planck equation with electron multiple scattering; the energy equations of the ions and of the hot and cold electrons are then solved numerically. The Ohmic heating energy loss fraction τOh is introduced as a corrective term in an improved photon dose model. For instance, for a ps laser pulse with a 10 μm spot size, the dose obtained with a tantalum target is reduced by less than about 10% up to 40% by the Ohmic heating, depending upon the plasma scale length, target thickness and laser parameters, in particular the spot size. The laser and plasma parameters may be optimized to limit the effect of Ohmic heating, for instance at a small plasma scale length or small laser spot size. Conversely, other regimes not suitable for dose production are identified. For instance, the resistive heating is enhanced in a foam target or at a long plasma scale length and high laser spot size and intensity, as the mean emission angle θ0 of the incident hot electron bunch given by the ponderomotive force is small; thus, the dose produced by a laser interacting with a gas jet may be inhibited under these circumstances. The resistive heating may also be maximized in order to reduce the X-ray emission and lower the radiation level, for instance for radiation safety purposes.

  13. Discrimination of Biological and Chemical Threat Simulants in Residue Mixtures on Multiple Substrates

    DTIC Science & Technology

    2011-02-18

    ... environmental interferents selected for this study included dolomitic limestone (Lime, NIST Standard Reference Materials, Catalog No. SRM 88b) and ovalbumin ... emission lines due solely to substrates or interferents can be ignored. As in previous studies by our group, the background-corrected peak ... calculated by adding the intensities of the emission lines at 486 and 656 nm); the summed intensities were normalized to the total peak intensity of the ...

  14. Single Crystal Diffractometry

    NASA Astrophysics Data System (ADS)

    Arndt, U. W.; Willis, B. T. M.

    2009-06-01

    Preface; Acknowledgements; Part I. Introduction; Part II. Diffraction Geometry; Part III. The Design of Diffractometers; Part IV. Detectors; Part V. Electronic Circuits; Part VI. The Production of the Primary Beam (X-rays); Part VII. The Production of the Primary Beam (Neutrons); Part VIII. The Background; Part IX. Systematic Errors in Measuring Relative Integrated Intensities; Part X. Procedure for Measuring Integrated Intensities; Part XI. Derivation and Accuracy of Structure Factors; Part XII. Computer Programs and On-line Control; Appendix; References; Index.

  15. Angular power spectrum of the diffuse gamma-ray emission as measured by the Fermi Large Area Telescope and constraints on its dark matter interpretation

    DOE PAGES

    Fornasa, Mattia; Cuoco, Alessandro; Zavala, Jesús; ...

    2016-12-09

    The isotropic gamma-ray background arises from the contribution of unresolved sources, including members of confirmed source classes and proposed gamma-ray emitters such as the radiation induced by dark matter annihilation and decay. Clues about the properties of the contributing sources are imprinted in the anisotropy characteristics of the gamma-ray background. We use 81 months of Pass 7 Reprocessed data from the Fermi Large Area Telescope to perform a measurement of the anisotropy angular power spectrum of the gamma-ray background. Here, we analyze energies between 0.5 and 500 GeV, extending the range considered in the previous measurement based on 22 months of data. We also compute, for the first time, the cross-correlation angular power spectrum between different energy bins. The derived angular spectra are compatible with being Poissonian, i.e. constant in multipole. Furthermore, the energy dependence of the anisotropy suggests that the signal is due to two populations of sources, contributing, respectively, below and above ~ 2 GeV. Finally, using data from state-of-the-art numerical simulations to model the dark matter distribution, we constrain the contribution from dark matter annihilation and decay in Galactic and extra-Galactic structures to the measured anisotropy. These constraints are competitive with those that can be derived from the average intensity of the isotropic gamma-ray background.

  16. Probing pre-inflationary anisotropy with directional variations in the gravitational wave background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furuya, Yu; Niiyama, Yuki; Sendouda, Yuuiti, E-mail: furuya@tap.st.hirosaki-u.ac.jp, E-mail: niiyama@tap.st.hirosaki-u.ac.jp, E-mail: sendouda@hirosaki-u.ac.jp

    We perform a detailed analysis on a primordial gravitational-wave background amplified during a Kasner-like pre-inflationary phase allowing for general triaxial anisotropies. It is found that the predicted angular distribution map of gravitational-wave intensity on large scales exhibits topologically distinctive patterns according to the degree of the pre-inflationary anisotropy, thereby serving as a potential probe for the pre-inflationary early universe with future all-sky observations of gravitational waves. We also derive an observational limit on the amplitude of such anisotropic gravitational waves from the B-mode polarisation of the cosmic microwave background.

  17. The clinical course over the first year of Whiplash Associated Disorders (WAD): pain-related disability predicts outcome in a mildly affected sample

    PubMed Central

    2013-01-01

    Background Different recovery patterns are reported for those who have sustained a whiplash injury, but little is known about the variability within subgroups. The aims were (1) to compare a self-selected mildly affected sample (MILD) with a self-selected moderately to severely affected sample (MOD/SEV) with regard to background characteristics and pain-related disability, pain intensity, functional self-efficacy, fear of movement/(re)injury, pain catastrophising, and post-traumatic stress symptoms in the acute stage (at baseline), (2) to study the development over the first year after the accident of the above listed clinical variables in the MILD sample, and (3) to study the validity of a prediction model including baseline levels of clinical variables on pain-related disability one year after baseline assessments. Methods The study had a prospective and correlative design. Ninety-eight participants were consecutively selected. Inclusion criteria: age 18 to 65 years, WAD grade I-II, Swedish language skills, and subjective report of not being in need of treatment due to mild symptoms. A multivariate linear regression model was applied for the prediction analysis. Results The MILD sample was less affected in all study variables compared to the MOD/SEV sample. Pain-related disability, pain catastrophising, and post-traumatic stress symptoms decreased over the first year after the accident, whereas functional self-efficacy and fear of movement/(re)injury increased. Pain intensity was stable. Pain-related disability at baseline emerged as the only statistically significant predictor of pain-related disability one year after the accident (Adj r2 = 0.67). Conclusion A good prognosis over the first year is expected for the majority of individuals with WAD grade I or II who decline treatment due to mild symptoms. The prediction model was not valid in the MILD sample except for the contribution of pain-related disability. An implication is that early observations of individuals with elevated levels of pain-related disability are warranted, although they may decline treatment. PMID:24359208

  18. Relativistically induced transparency acceleration of light ions by an ultrashort laser pulse interacting with a heavy-ion-plasma density gradient.

    PubMed

    Sahai, Aakash A; Tsung, Frank S; Tableman, Adam R; Mori, Warren B; Katsouleas, Thomas C

    2013-10-01

    The relativistically induced transparency acceleration (RITA) scheme of proton and ion acceleration using laser-plasma interactions is introduced, modeled, and compared to the existing schemes. Protons are accelerated with femtosecond relativistic pulses to produce quasimonoenergetic bunches with controllable peak energy. The RITA scheme works by a relativistic laser inducing transparency [Akhiezer and Polovin, Zh. Eksp. Teor. Fiz 30, 915 (1956); Kaw and Dawson, Phys. Fluids 13, 472 (1970); Max and Perkins, Phys. Rev. Lett. 27, 1342 (1971)] to densities higher than the cold-electron critical density, while the background heavy ions are stationary. The rising laser pulse creates a traveling acceleration structure at the relativistic critical density by ponderomotively [Lindl and Kaw, Phys. Fluids 14, 371 (1971); Silva et al., Phys. Rev. E 59, 2273 (1999)] driving a local electron density inflation, creating an electron snowplow and a co-propagating electrostatic potential. The snowplow advances with a velocity determined by the rate of the rise of the laser's intensity envelope and the heavy-ion-plasma density gradient scale length. The rising laser is incrementally rendered transparent to higher densities such that the relativistic-electron plasma frequency is resonant with the laser frequency. In the snowplow frame, trace density protons reflect off the electrostatic potential and get snowplowed, while the heavier background ions are relatively unperturbed. Quasimonoenergetic bunches of velocity equal to twice the snowplow velocity can be obtained and tuned by controlling the snowplow velocity using laser-plasma parameters. An analytical model for the proton energy as a function of laser intensity, rise time, and plasma density gradient is developed and compared to 1D and 2D PIC OSIRIS [Fonseca et al., Lect. Note Comput. Sci. 2331, 342 (2002)] simulations. We model the acceleration of protons to GeV energies with tens-of-femtoseconds laser pulses of a few petawatts. The scaling of proton energy with laser power compares favorably to other mechanisms for ultrashort pulses [Schreiber et al., Phys. Rev. Lett. 97, 045005 (2006); Esirkepov et al., Phys. Rev. Lett. 92, 175003 (2004); Silva et al., Phys. Rev. Lett. 92, 015002 (2004); Fiuza et al., Phys. Rev. Lett. 109, 215001 (2012)].

  19. The selective digital integrator: A new device for modulated polarization spectroscopy

    NASA Astrophysics Data System (ADS)

    Vrancic, Aljosa

    1998-12-01

    A new device, a selective digital integrator (SDI), for the acquisition of modulated polarization spectroscopy (MPS) signals is described. Special attention is given to the accurate measurement of very small (AC component of interest <10⁻³ × DC component), rapidly modulated (~50 kHz) signals at or below noise levels. Various data acquisition methods and problems associated with the collection of modulated signals are discussed. The SDI solves most of these problems and has the following advantages: it provides the average-time resolved profile of a modulated signal; it eliminates errors if the modulation is not sinusoidal; it enables separate measurements of the various phases of the signal modulation cycle; it permits simultaneous measurement of absorption, circular dichroism (CD) and linear dichroism (LD) spectra; it facilitates 3-D absorbance measurements; it has a wide gain-switching-free dynamic range (10 orders of magnitude or more); it offers a constant S/N ratio mode of operation; it eliminates the need for photomultiplier voltage feedback, and it has faster scanning speeds. The time-resolution, selectivity, wide dynamic range, and low-overhead on-the-fly data processing are useful for other modulated spectroscopy (MS) and non-MS experiments such as pulse height distribution and time-resolved pulse counting measurements. The advantages of the MPS-SDI method are tested on the first Rydberg electronic transitions of (+)-3-methylcyclopentanone. The experimental results validate the predicted SDI capabilities. However, they also point to two difficulties that had not been noted previously: the presence of LD in a gaseous sample and a pressure-dependence of the relative peak heights of the CD spectrum. Models for these anomalies are proposed. The presence of the oscillatory LD (but not an LD background) is explained with a sample cell model based on the observed polarization-dependent time-resolved profiles of transmitted light intensity. To obtain expressions for these intensities, a theoretical background, which provides a new approach to the treatment of light/matter interaction, is included as an Appendix. To explain the second anomaly, present only at high optical densities, a model based on the presence of scattered light is introduced and verified. The mode of correction for the scattering problem is outlined.

  20. Performance of the score systems Acute Physiology and Chronic Health Evaluation II and III at an interdisciplinary intensive care unit, after customization

    PubMed Central

    Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph

    2001-01-01

    Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
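
    The first-level customization described above amounts to refitting a logistic regression of observed hospital mortality on the logit of the original APACHE probability over a development sample. The sketch below illustrates that general technique under stated assumptions; the probabilities, outcomes, and 50/50 split are placeholders, not the study's data.

    ```python
    # Hedged sketch of first-level customization: regress observed mortality on the
    # logit of the original APACHE prediction, using a development/validation split.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    def logit(p, eps=1e-6):
        p = np.clip(p, eps, 1.0 - eps)
        return np.log(p / (1.0 - p))

    # p_apache: original APACHE predicted mortality, y: observed outcome (placeholders)
    rng = np.random.default_rng(0)
    p_apache = rng.uniform(0.01, 0.9, size=2795)
    y = rng.binomial(1, p_apache)

    X = logit(p_apache).reshape(-1, 1)
    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=1)

    custom = LogisticRegression().fit(X_dev, y_dev)   # first-level customization
    p_custom = custom.predict_proba(X_val)[:, 1]

    # Discrimination is unchanged by this monotone recalibration; calibration
    # (e.g. Hosmer-Lemeshow) is what customization is meant to improve.
    print("validation AUC:", roc_auc_score(y_val, p_custom))
    ```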

  1. Evolution of the axial electron cyclotron maser instability, with applications to solar microwave spikes

    NASA Technical Reports Server (NTRS)

    Vlahos, Loukas; Sprangle, Phillip

    1987-01-01

    The nonlinear evolution of cyclotron radiation from streaming and gyrating electrons in an external magnetic field is analyzed. The nonlinear dynamics of both the fields and the particles are treated fully relativistically and self-consistently. The model includes a background plasma and electrostatic effects. The analytical and numerical results show that a substantial portion of the beam particle energy can be converted to electromagnetic wave energy at frequencies far above the electron cyclotron frequency. In general, the excited radiation can propagate parallel to the magnetic field and, hence, escape gyrothermal absorption at higher cyclotron harmonics. The high-frequency Doppler-shifted cyclotron instability can have saturation efficiencies far higher than those associated with well-known instabilities of the electron cyclotron maser type. Although the analysis is general, the possibility of using this model to explain the intense radio emission observed from the sun is explored in detail.

  2. Three-dimensional modeling of lightning-induced electromagnetic pulses on Venus, Jupiter, and Saturn

    NASA Astrophysics Data System (ADS)

    Pérez-Invernón, F. J.; Luque, A.; Gordillo-Vázquez, F. J.

    2017-07-01

    While lightning activity on Venus is still controversial, its existence on Jupiter and Saturn was first detected by the Voyager missions and later confirmed by Cassini and New Horizons optical recordings in the case of Jupiter, and more recently by Cassini on Saturn in 2009. Based on a recently developed 3-D model, we investigate the influence of lightning-emitted electromagnetic pulses on the upper atmospheres of Venus, Saturn, and Jupiter. We explore how different lightning properties such as total energy released and orientation (vertical, horizontal, and oblique) can produce mesospheric transient optical emissions of different shapes, sizes, and intensities. Moreover, we show that the relatively strong background magnetic field of Saturn can enhance the lightning-induced quasi-electrostatic and inductive electric field components above 1000 km of altitude, producing stronger transient optical emissions that could be detected from orbital probes.

  3. Modeling multi-GeV class laser-plasma accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo; Schroeder, Carl; Bulanov, Stepan; Geddes, Cameron; Esarey, Eric; Leemans, Wim

    2016-10-01

    Laser plasma accelerators (LPAs) can produce accelerating gradients on the order of tens to hundreds of GV/m, making them attractive as compact particle accelerators for radiation production or as drivers for future high-energy colliders. Understanding and optimizing the performance of LPAs requires detailed numerical modeling of the nonlinear laser-plasma interaction. We present simulation results, obtained with the computationally efficient, PIC/fluid code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde), concerning present (multi-GeV stages) and future (10 GeV stages) LPA experiments performed with the BELLA PW laser system at LBNL. In particular, we will illustrate the issues related to the guiding of a high-intensity, short-pulse, laser when a realistic description for both the laser driver and the background plasma is adopted. Work Supported by the U.S. Department of Energy under contract No. DE-AC02-05CH11231.

  4. Spectroscopic Characteristic and Analytical Capability of Ar-N₂ Inductively Coupled Plasma in Axially Viewing Optical Emission Spectrometry.

    PubMed

    Ohata, Masaki

    2016-01-01

    The spectroscopic characteristics and analytical capability of argon-nitrogen (Ar-N2) inductively coupled plasma (ICP) in axially viewing optical emission spectrometry (OES) were examined and figures of merit were determined in the present study. To elucidate the enhancement of elemental emission intensity observed in our previous study, spectroscopic characteristics such as the emission intensity profile and the excitation temperature in the analytical zone of the Ar-N2 ICP in axially viewing ICPOES were evaluated and compared with those of the standard ICP. The background and elemental emission intensities, as well as the excitation behavior of both atom and ion lines, were also examined. As a result, a narrower emission intensity profile and an increased excitation temperature, as well as enhancements of both the background and elemental emission intensities, were observed for the Ar-N2 ICP in axially viewing OES; these effects could be due to shrinkage of the ICP and to an enhanced interaction between the central channel of the ICP and the introduced sample. In addition, elements with relatively high excitation and ionization energies, such as As, Bi, Cd, Ni, P, and Zn, showed larger enhancements of emission intensity as well as improved limits of detection (LODs), which were also attributed to the enhanced interaction between the Ar-N2 ICP and the sample. Since the Ar-N2 ICP can be obtained simply by adding a small amount of N2 gas to the Ar plasma gas of the standard ICP, with no re-optimization of the alignment between the Ar-N2 ICP and the spectrometer in commercially available ICPOES instruments, it can serve as a simple, optional excitation and ionization source in axially viewing ICPOES.

  5. Introducing an osteopathic approach into neonatology ward: the NE-O model

    PubMed Central

    2014-01-01

    Background Several studies have shown the effect of osteopathic manipulative treatment in neonatal care in reducing length of stay in hospital, gastrointestinal problems, and clubfoot complications, and in improving cranial asymmetry of infants affected by plagiocephaly. Despite these results, there is still a lack of standardized osteopathic evaluation and treatment procedures for newborns admitted to the neonatal intensive care unit (NICU). The aim of this paper is to suggest a protocol for an osteopathic approach (the NE-O model) to treating hospitalized newborns. Methods The NE-O model is composed of specific evaluation tests and treatments that tailor the osteopathic method to the needs of preterm and term infants, the NICU environment, and medical and paramedical assistance. This model was developed to maximize the effectiveness and the clinical use of osteopathy in the NICU. Results The NE-O model was adopted in 2006 to evaluate the efficacy of OMT in neonatology. Results from research showed the effectiveness of this osteopathic model in reducing preterm infants' length of stay and hospital costs. Additionally, the present model was demonstrated to be safe. Conclusion The present paper defines the key steps for a rigorous and effective osteopathic approach in the NICU setting, providing a scientific and methodological example of integrated medicine and complex intervention. PMID:24904746

  6. In vivo fluorescence imaging of hepatocellular carcinoma using a novel GPC3-specific aptamer probe

    PubMed Central

    Zhao, Menglong; Dong, Lili; Liu, Zhuang; Yang, Shuohui

    2018-01-01

    Background Glypican-3 (GPC3) is highly expressed in most hepatocellular carcinomas (HCCs), even in small HCCs. It may be used as a potential biomarker for early detection of HCC. The aptamer is a promising targeting agent with unique advantages over antibodies. This study aimed to introduce a novel GPC3-specific aptamer (AP613-1), to verify its specific binding property in vitro, and to evaluate its targeting efficiency in vivo by performing near-infrared (NIR) fluorescence imaging on an HCC xenograft model. Methods AP613-1 was generated from the systematic evolution of ligands by exponential enrichment. Flow cytometry and aptamer-based immunofluorescence imaging were performed to verify the binding affinity of AP613-1 to GPC3 in vitro. NIR fluorescence images of nude mice with unilateral (n=12) and bilateral (n=4) subcutaneous xenograft tumors were obtained. Correlation between the tumor fluorescence intensities in vivo and ex vivo was analyzed. Results AP613-1 could specifically bind to GPC3 in vitro. Tumor fluorescence intensities in vivo and ex vivo were in excellent correlation (P<0.001, r=0.968). The fluorescence intensity was significantly higher in tumors given Alexa Fluor 750 (AF750) labeled AP613-1 than in those given AF750 labeled initial ssDNA library, both in vivo (P<0.001) and ex vivo (P=0.022). In the mice with bilateral subcutaneous tumors injected with AF750 labeled AP613-1, Huh-7 tumors showed significantly higher fluorescence intensities than A549 tumors, both in vivo (P=0.016) and ex vivo (P=0.004). Conclusions AP613-1 displays a specific binding affinity to GPC3-positive HCC. Fluorescently labeled AP613-1 could be used as an imaging probe for subcutaneous HCC in xenograft models. PMID:29675356

  7. Pain in methadone patients: Time to address undertreatment and suicide risk (ANRS-Methaville trial)

    PubMed Central

    Nordmann, Sandra; Vilotitch, Antoine; Lions, Caroline; Michel, Laurent; Mora, Marion; Spire, Bruno; Maradan, Gwenaelle; Bendiane, Marc-Karim; Morel, Alain; Roux, Perrine; Carrieri, Patrizia

    2017-01-01

    Background Pain in opioid-dependent patients is common but data measuring the course of pain (and its correlates) using validated scales in patients initiating methadone treatment are sparse. We aimed to assess pain and its interference in daily life, associated correlates, and undertreatment before and during methadone treatment. Methods This is a secondary analysis using longitudinal data of a randomized trial comparing two methadone initiation models. We assessed the effect of methadone initiation and other correlates on pain intensity and interference (using the Brief Pain Inventory) at months 0, 6 and 12 using a mixed multinomial logistic regression model. Results The study group comprised 168 patients who had data for either pain intensity or interference for at least one visit. Moderate to severe pain was reported in 12.9% of patients at M0, 5.4% at M6 and 7.3% at M12. Substantial interference with daily functioning was reported in 36.0% at M0, 14.5% at M6 and 17.1% at M12. Of the 98 visits where patients reported moderate to severe pain or substantial interference, 55.1% reported no treatment for pain relief, non-opioid analgesics were reported by 34.7%, opioid analgesics by 3.1% and both opioid and non-opioid analgesics by 7.1%. Methadone was associated with decreased pain intensity at 6 months (OR = 0.29, p = 0.04) and 12 months (OR = 0.30, p = 0.05) of follow-up and tended to be associated with substantial pain interference. Suicide risk was associated with both pain intensity and pain interference. Conclusions Methadone in opioid-dependent patients can reduce pain. However, undertreatment of pain in methadone patients remains a major clinical concern. Patients with pain are at higher risk of suicide. Adequate screening and management of pain in this population is a priority and needs to be integrated into routine comprehensive care. PMID:28520735

  8. Fluorescent Applications to Crystallization

    NASA Technical Reports Server (NTRS)

    Pusey, Marc L.; Forsythe, Elizabeth; Achari, Aniruddha

    2006-01-01

    By covalently modifying a subpopulation, less than or equal to 1%, of a macromolecule with a fluorescent probe, the labeled material will add to a growing crystal as a microheterogeneous growth unit. Labeling procedures can be readily incorporated into the final stages of purification, and tests with model proteins have shown that labeling up to 5 percent of the protein molecules does not affect the X-ray data quality obtained. The presence of the trace fluorescent label gives a number of advantages. Since the label is covalently attached to the protein molecules, it "tracks" the protein's response to the crystallization conditions. The covalently attached probe will concentrate in the crystal relative to the solution, and under fluorescent illumination crystals show up as bright objects against a darker background. Non-protein structures, such as salt crystals, do not show up under fluorescent illumination. Crystals have the highest protein concentration and are readily observed against less bright precipitated phases, which under white light illumination may obscure the crystals. Automated image analysis to find crystals should be greatly facilitated, without having to first define crystallization drop boundaries, as only the protein or protein structures show up. Fluorescence intensity is a faster search parameter, whether visually or by automated methods, than looking for crystalline features. Preliminary tests, using model proteins, indicate that we can use high fluorescence intensity regions, in the absence of clear crystalline features or "hits", as a means for determining potential lead conditions. A working hypothesis is that more rapid amorphous precipitation kinetics may overwhelm and trap more slowly formed ordered assemblies, which subsequently show up as regions of brighter fluorescence intensity. Experiments are now being carried out to test this approach using a wider range of proteins. The trace fluorescently labeled crystals will also emit with sufficient intensity to aid in the automation of crystal alignment using relatively low cost optics, further increasing throughput at synchrotrons.

  9. Parameter Optimization Analysis of Prolonged Analgesia Effect of tDCS on Neuropathic Pain Rats

    PubMed Central

    Wen, Hui-Zhong; Gao, Shi-Hao; Zhao, Yan-Dong; He, Wen-Juan; Tian, Xue-Long; Ruan, Huai-Zhen

    2017-01-01

    Background: Transcranial direct current stimulation (tDCS) is widely used to treat human nerve disorders and neuropathic pain by modulating the excitability of the cortex. The effectiveness of tDCS is influenced by its stimulation parameters, but there have been no systematic studies to help guide the selection of different parameters. Objective: This study aims to assess the effects of tDCS of the primary motor cortex (M1) on chronic neuropathic pain in rats and to test for the optimal parameter combinations for analgesia. Methods: Using the chronic neuropathic pain model of chronic constriction injury (CCI), we measured pain thresholds before and after anodal-tDCS (A-tDCS) under different parameter conditions, including stimulation intensity, stimulation time, intervention time, and electrode location (ipsilateral or contralateral M1 of the ligated paw, in male/female CCI models). Results: Following the application of A-tDCS over M1, we observed that the antinociceptive effects depended on the different parameters. First, we found that repetitive A-tDCS had a longer analgesic effect than a single stimulus, and both ipsilateral-tDCS (ip-tDCS) and contralateral-tDCS (con-tDCS) produced a long-lasting analgesic effect on neuropathic pain. Second, the antinociceptive effects were intensity-dependent and time-dependent: high intensities worked better than low intensities, and long stimulus durations worked better than short stimulus durations. Third, the timing of the intervention after injury affected the stimulation outcome: early use of tDCS was an effective method to prevent the development of pain, and more frequent intervention induced more analgesia in CCI rats. Finally, similar antinociceptive effects of con- and ip-tDCS were observed in both sexes of CCI rats. Conclusion: Optimized tDCS protocols for producing antinociceptive effects were developed. These findings should be taken into consideration when using tDCS to produce analgesic effects in clinical applications. PMID:28659772

  10. Decibels Made Easy.

    ERIC Educational Resources Information Center

    Tindle, C. T.

    1996-01-01

    Describes a method to teach acoustics to students with minimal mathematical backgrounds. Discusses the uses of charts in teaching topics of sound intensity level and the decibel scale. Avoids the difficulties of working with logarithm functions. (JRH)

  11. Infrared small target enhancement: grey level mapping based on improved sigmoid transformation and saliency histogram

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian

    2018-06-01

    Infrared (IR) small target enhancement plays a significant role in modern infrared search and track (IRST) systems and is the basic technique of target detection and tracking. In this paper, a coarse-to-fine grey level mapping method using improved sigmoid transformation and saliency histogram is designed to enhance IR small targets under different backgrounds. For the stage of rough enhancement, the intensity histogram is modified via an improved sigmoid function so as to narrow the regular intensity range of background as much as possible. For the part of further enhancement, a linear transformation is accomplished based on a saliency histogram constructed by averaging the cumulative saliency values provided by a saliency map. Compared with other typical methods, the presented method can achieve both better visual performances and quantitative evaluations.
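
    A minimal sketch of the coarse enhancement step described above is given below, assuming a simple logistic-style transfer curve centred near the background grey level; the centre and steepness values are illustrative choices rather than the paper's fitted parameters.

    ```python
    # Remap grey levels of an IR frame with a sigmoid centred on the background
    # intensity so that the narrow background range is compressed (illustrative).
    import numpy as np

    def sigmoid_stretch(img, centre=None, steepness=10.0):
        """Map normalised intensities through a sigmoid-style transfer curve."""
        img = img.astype(np.float64)
        lo, hi = img.min(), img.max()
        x = (img - lo) / (hi - lo + 1e-12)          # normalise to [0, 1]
        if centre is None:
            centre = np.median(x)                   # crude background-level estimate
        y = 1.0 / (1.0 + np.exp(-steepness * (x - centre)))
        return (y - y.min()) / (y.max() - y.min() + 1e-12)

    # Example: a dim point target on a brighter, slowly varying background
    frame = np.full((64, 64), 0.4) + 0.02 * np.random.rand(64, 64)
    frame[32, 32] += 0.3                            # small target
    enhanced = sigmoid_stretch(frame)
    print(enhanced[32, 32], enhanced.mean())        # target pushed towards 1
    ```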

  12. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least square ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. Then the background intensity error and the remaining error can be compensated by the least square ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms affected by environmental disturbance, a low fringe number, or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
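
    The Gram-Schmidt step lends itself to a compact sketch: treat the two DC-suppressed interferograms as vectors, orthonormalize them, and take an arctangent. The code below follows that generic idea under stated assumptions (a crude mean-subtraction high-pass, synthetic fringes) and omits the paper's ellipse-fitting correction.

    ```python
    # Two-frame Gram-Schmidt phase demodulation (generic sketch, not the paper's
    # full algorithm): high-pass, orthonormalize, then arctan2 for the wrapped phase.
    import numpy as np

    def gs_two_frame_phase(I1, I2):
        d1 = I1 - I1.mean()                         # crude DC suppression
        d2 = I2 - I2.mean()
        u1 = d1 / np.linalg.norm(d1)                # Gram-Schmidt orthonormalization
        d2_orth = d2 - np.sum(d2 * u1) * u1
        u2 = d2_orth / np.linalg.norm(d2_orth)
        return np.arctan2(u2, u1)                   # wrapped phase (sign ambiguity remains)

    # Synthetic check: two fringe patterns with an unknown, non-pi/2 phase step
    x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
    phi = 8 * np.pi * (x**2 + y**2)
    I1 = 1.0 + 0.8 * np.cos(phi)
    I2 = 1.0 + 0.8 * np.cos(phi + 1.2)
    phase = gs_two_frame_phase(I1, I2)
    print(phase.shape, phase.min(), phase.max())
    ```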

  13. [Changes in the chromatin structure of lymphoid cells under the influence of low-intensity extremely high-frequency electromagnetic radiation against the background of inflammatory process].

    PubMed

    Gapeev, A B; Romanova, N A; Chemeris, N K

    2011-01-01

    Using the alkaline single cell gel electrophoresis technique (comet assay), changes in the chromatin structure of peripheral blood leukocytes and peritoneal neutrophils have been studied in mice exposed to low-intensity extremely high-frequency electromagnetic radiation (42.2 GHz, 0.1 mW/cm2, 20 min at 1 h after induction of inflammation) against the background of a systemic inflammatory process. It was revealed that exposure of mice with developing inflammation leads to a pronounced decrease in the level of DNA damage in peripheral blood leukocytes and peritoneal neutrophils. It is supposed that these changes in the chromatin structure of lymphoid cells have a genoprotective character during the inflammatory process and may underlie the mechanisms by which the anti-inflammatory effects of the electromagnetic radiation are realized.

  14. Probing reionization with the cross-power spectrum of 21 cm and near-infrared radiation backgrounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Xiao-Chun, E-mail: xcmao@bao.ac.cn

    2014-08-01

    The cross-correlation between the 21 cm emission from the high-redshift intergalactic medium and the near-infrared (NIR) background light from high-redshift galaxies promises to be a powerful probe of cosmic reionization. In this paper, we investigate the cross-power spectrum during the epoch of reionization. We employ an improved halo approach to derive the distribution of the density field and consider two stellar populations in the star formation model: metal-free stars and metal-poor stars. The reionization history is further generated to be consistent with the electron-scattering optical depth from cosmic microwave background measurements. Then, the intensity of the NIR background is estimated by collecting emission from stars in first-light galaxies. On large scales, we find that the 21 cm and NIR radiation backgrounds are positively correlated during the very early stages of reionization. However, these two radiation backgrounds quickly become anti-correlated as reionization proceeds. The maximum absolute value of the cross-power spectrum is |Δ²_{21,NIR}| ∼ 10⁻⁴ mK nW m⁻² sr⁻¹, reached at ℓ ∼ 1000 when the mean fraction of ionized hydrogen is x̄ᵢ ∼ 0.9. We find that the Square Kilometer Array can measure the 21 cm-NIR cross-power spectrum in conjunction with mild extensions to the existing CIBER survey, provided that the integration time independently adds up to 1000 and 1 hr for 21 cm and NIR observations, and that the sky coverage fraction of the CIBER survey is extended from 4 × 10⁻⁴ to 0.1. Measuring the cross-correlation signal as a function of redshift provides valuable information on reionization and helps confirm the origin of the 'missing' NIR background.
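
    For readers wanting to reproduce the basic quantity, the cross-power spectrum of two all-sky maps can be estimated with healpy's anafast routine, which accepts a second map for cross-correlation; the sketch below uses random placeholder maps rather than any physical model.

    ```python
    # Cross-power spectrum of two HEALPix maps (placeholder maps, illustrative only).
    import numpy as np
    import healpy as hp

    nside = 512
    npix = hp.nside2npix(nside)
    rng = np.random.default_rng(42)
    map_21cm = rng.standard_normal(npix)   # placeholder for delta T_21 [mK]
    map_nir = rng.standard_normal(npix)    # placeholder for NIR intensity [nW m^-2 sr^-1]

    cl_cross = hp.anafast(map_21cm, map2=map_nir, lmax=1024)
    ell = np.arange(cl_cross.size)
    delta2 = ell * (ell + 1) * cl_cross / (2 * np.pi)   # Delta^2_{21,NIR}(ell)
    print(delta2[1000])                                  # value near ell ~ 1000
    ```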

  15. Ionospheric Alfvén resonator and aurora: Modeling of MICA observations

    NASA Astrophysics Data System (ADS)

    Tulegenov, B.; Streltsov, A. V.

    2017-07-01

    We present results from a numerical study of small-scale, intense magnetic field-aligned currents observed in the vicinity of the discrete auroral arc by the Magnetosphere-Ionosphere Coupling in the Alfvén Resonator (MICA) sounding rocket launched from Poker Flat, Alaska, on 19 February 2012. The goal of the MICA project was to investigate the hypothesis that such currents can be produced inside the ionospheric Alfvén resonator by the ionospheric feedback instability (IFI) driven by the system of large-scale magnetic field-aligned currents interacting with the ionosphere. The trajectory of the MICA rocket crossed two discrete auroral arcs and detected packages of intense, small-scale currents at the edges of these arcs, in the most favorable location for the development of the ionospheric feedback instability, predicted by the IFI theory. Simulations of the reduced MHD model derived in the dipole magnetic field geometry with realistic background parameters confirm that IFI indeed generates small-scale ULF waves inside the ionospheric Alfvén resonator with frequency, scale size, and amplitude showing a good, quantitative agreement with the observations. The comparison between numerical results and observations was performed by "flying" a virtual MICA rocket through the computational domain, and this comparison shows that, for example, the waves generated in the numerical model have frequencies in the range from 0.30 to 0.45 Hz, and the waves detected by the MICA rocket have frequencies in the range from 0.18 to 0.50 Hz.

  16. In vitro investigation of heat transfer phenomenon in human immature teeth.

    PubMed

    Talebi, Maryam; Moghimi, Sahar; Shafagh, Mina; Kalani, Hadi; Mazhari, Fatemeh

    2014-01-01

    Background and aims. Heat generated within a tooth during clinical dentistry can cause thermally induced damage to the hard and soft components of the tooth (enamel, dentin and pulp). Geometrical characteristics of immature teeth are different from those of mature teeth. The purpose of this experimental and theoretical study was to investigate thermal changes in immature permanent teeth during the use of LED light-curing units (LCU). Materials and methods. This study was performed on second mandibular premolars. The experimental investigation recorded temperature variations at different sites of the tooth, and two-dimensional finite element models were used to model the heat transfer phenomenon in immature teeth. Sensitivity analysis and local tests were included in the model validation phase. Results. Overall, thermal stimulation for 30 seconds with a low-intensity LED LCU increased the temperature from 28°C to 38°C in the IIT (intact immature tooth) and PIT (cavity-prepared immature tooth). When a high-intensity LED LCU was used, tooth temperature increased from 28°C to 48°C. The results of the experimental tests and mathematical modeling illustrated that using an LED LCU on immature teeth did not have any detrimental effect on the pulp temperature. Conclusion. Using an LED LCU on immature teeth had no effect on pulp temperature in this study. Sensitivity analysis showed that variations in heat conductivity might affect heat transfer in immature teeth; therefore, further studies are required to determine the thermal conductivity of immature teeth.

  17. Selective dye-labeling of newly synthesized proteins in bacterial cells.

    PubMed

    Beatty, Kimberly E; Xie, Fang; Wang, Qian; Tirrell, David A

    2005-10-19

    We describe fluorescence labeling of newly synthesized proteins in Escherichia coli cells by means of Cu(I)-catalyzed cycloaddition between alkynyl amino acid side chains and the fluorogenic dye 3-azido-7-hydroxycoumarin. The method involves co-translational labeling of proteins by the non-natural amino acids homopropargylglycine (Hpg) or ethynylphenylalanine (Eth) followed by treatment with the dye. As a demonstration, the model protein barstar was expressed and treated overnight with Cu(I) and 3-azido-7-hydroxycoumarin. Examination of treated cells by confocal microscopy revealed that strong fluorescence enhancement was observed only for alkynyl-barstar treated with Cu(I) and the reactive dye. The cellular fluorescence was punctate, and gel electrophoresis confirmed that labeled barstar was localized in inclusion bodies. Other proteins showed little fluorescence. Examination of treated cells by fluorimetry demonstrated that cultures supplemented with Eth or Hpg showed an 8- to 14-fold enhancement in fluorescence intensity after labeling. Addition of a protein synthesis inhibitor reduced the emission intensity to levels slightly above background, confirming selective labeling of newly synthesized proteins in the bacterial cell.

  18. FORWARD MODELING OF STANDING KINK MODES IN CORONAL LOOPS. I. SYNTHETIC VIEWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Ding; Doorsselaere, Tom Van, E-mail: DYuan2@uclan.ac.uk

    2016-04-15

    Kink magnetohydrodynamic (MHD) waves are frequently observed in various magnetic structures of the solar atmosphere. They may contribute significantly to coronal heating and could be used as a tool to diagnose the solar plasma. In this study, we synthesize the Fe ix λ171.073 Å emission of a coronal loop supporting a standing kink MHD mode. The kink MHD wave solution of a plasma cylinder is mapped into a semi-torus structure to simulate a curved coronal loop. We decompose the solution into a quasi-rigid kink motion and a quadrupole term, which dominate the plasma inside and outside of the flux tube, respectively. At the loop edges, the line of sight integrates relatively more ambient plasma, and the background emission becomes significant. The plasma motion associated with the quadrupole term causes spectral line broadening and emission suppression. The periodic intensity suppression will modulate the integrated intensity and the effective loop width, which both exhibit oscillatory variations at half of the kink period. The quadrupole term can be directly observed as a pendular motion at the front view.

  19. Identifying unusual performance in Australian and New Zealand intensive care units from 2000 to 2010

    PubMed Central

    2014-01-01

    Background The Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database (APD) collects voluntary data on patient admissions to Australian and New Zealand intensive care units (ICUs). This paper presents an in-depth statistical analysis of risk-adjusted mortality of ICU admissions from 2000 to 2010 for the purpose of identifying ICUs with unusual performance. Methods A cohort of 523,462 patients from 144 ICUs was analysed. For each ICU, the natural logarithm of the standardised mortality ratio (log-SMR) was estimated from a risk-adjusted, three-level hierarchical model. This is the first time a three-level model has been fitted to such a large ICU database anywhere. The analysis was conducted in three stages which included the estimation of a null distribution to describe usual ICU performance. Log-SMRs with appropriate estimates of standard errors are presented in a funnel plot using 5% false discovery rate thresholds. False coverage-statement rate confidence intervals are also presented. The observed numbers of deaths for ICUs identified as unusual are compared to the predicted true worst numbers of deaths under the model for usual ICU performance. Results Seven ICUs were identified as performing unusually over the period 2000 to 2010, in particular, demonstrating high risk-adjusted mortality compared to the majority of ICUs. Four of the seven were ICUs in private hospitals. Our three-stage approach to the analysis detected outlying ICUs which were not identified in a conventional (single) risk-adjusted model for mortality using SMRs to compare ICUs. We also observed a significant linear decline in mortality over the decade. Distinct yearly and weekly respiratory seasonal effects were observed across regions of Australia and New Zealand for the first time. Conclusions The statistical approach proposed in this paper is intended to be used for the review of observed ICU and hospital mortality. Two important messages from our study are firstly, that comprehensive risk-adjustment is essential in modelling patient mortality for comparing performance, and secondly, that the appropriate statistical analysis is complicated. PMID:24755369
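
    The funnel-plot ingredients can be illustrated with a short sketch: per-unit standardised mortality ratios on the log scale compared against control limits that tighten as the expected count grows. The simple Poisson-style standard error used here stands in for the paper's three-level hierarchical model, and all counts are invented.

    ```python
    # Log-SMR funnel-plot ingredients with invented counts (illustrative only).
    import numpy as np
    from scipy import stats

    observed = np.array([55, 130, 42, 210, 95])              # deaths per ICU
    expected = np.array([60.2, 118.5, 47.9, 173.0, 99.4])    # model-expected deaths

    log_smr = np.log(observed / expected)
    se = 1.0 / np.sqrt(expected)            # approximate SE of log-SMR (Poisson)

    z = stats.norm.ppf(1 - 0.025)           # ~95% control limit (no FDR correction here)
    flagged = log_smr > z * se              # unusually high risk-adjusted mortality
    for i, (s, f) in enumerate(zip(np.exp(log_smr), flagged)):
        print(f"ICU {i}: SMR = {s:.2f}, flagged = {f}")
    ```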

  20. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-sheng R.; Allen, Christopher S.

    2009-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with an interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mode Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurements were performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating the measured sound intensities over the measured surfaces yields an estimate of the sound power of the source. The reverberation time T60 of the mockup interior had been modified to match the reverberation levels of the ISS US Lab interior for the speech frequency bands, i.e., 0.5k, 1k, 2k, and 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of the Thinsulate was modeled in three ways: the Sabine equation with measured mockup interior reverberation time T60, a layup model based on past impedance tube testing, and a layup model plus an air absorption correction. The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match well with measurements for our CM mockup, which has a more complicated shape. Additionally in FY09, background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) experiments were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
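
    Two of the calculations mentioned above are simple enough to show directly, with made-up numbers: estimating the source sound power by integrating measured intensity over the five measured surfaces, and the Sabine relation used to size the absorption needed for a target T60.

    ```python
    # (1) Sound power from intensity integration and (2) the Sabine equation,
    # both with illustrative placeholder values.
    import numpy as np

    # (1) W = sum(I_n * S_n) over the measurement surfaces
    intensities = np.array([3.2e-4, 2.8e-4, 2.5e-4, 3.0e-4, 2.9e-4])  # W/m^2
    areas = np.array([0.25, 0.20, 0.20, 0.15, 0.15])                  # m^2
    sound_power = np.sum(intensities * areas)                          # W
    Lw = 10 * np.log10(sound_power / 1e-12)                            # dB re 1 pW
    print(f"Estimated sound power level: {Lw:.1f} dB")

    # (2) Sabine: T60 = 0.161 * V / A, with A the total absorption (m^2 sabins)
    V = 10.0            # mockup interior volume in m^3 (illustrative)
    T60_target = 0.5    # target reverberation time in s (illustrative)
    A_required = 0.161 * V / T60_target
    print(f"Required absorption: {A_required:.2f} m^2 sabins")
    ```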

  1. Challenges in identifying sites climatically matched to the native ranges of animal invaders.

    PubMed

    Rodda, Gordon H; Jarnevich, Catherine S; Reed, Robert N

    2011-02-09

    Species distribution models are often used to characterize a species' native range climate, so as to identify sites elsewhere in the world that may be climatically similar and therefore at risk of invasion by the species. This endeavor provoked intense public controversy over recent attempts to model areas at risk of invasion by the Indian Python (Python molurus). We evaluated a number of MaxEnt models on this species to assess MaxEnt's utility for vertebrate climate matching. Overall, we found MaxEnt models to be very sensitive to modeling choices and selection of input localities and background regions. As used, MaxEnt invoked minimal protections against data dredging, multi-collinearity of explanatory axes, and overfitting. As used, MaxEnt endeavored to identify a single ideal climate, whereas different climatic considerations may determine range boundaries in different parts of the native range. MaxEnt was extremely sensitive to both the choice of background locations for the python, and to selection of presence points: inclusion of just four erroneous localities was responsible for Pyron et al.'s conclusion that no additional portions of the U.S. mainland were at risk of python invasion. When used with default settings, MaxEnt overfit the realized climate space, identifying models with about 60 parameters, about five times the number of parameters justifiable when optimized on the basis of Akaike's Information Criterion. When used with default settings, MaxEnt may not be an appropriate vehicle for identifying all sites at risk of colonization. Model instability and dearth of protections against overfitting, multi-collinearity, and data dredging may combine with a failure to distinguish fundamental from realized climate envelopes to produce models of limited utility. A priori identification of biologically realistic model structure, combined with computational protections against these statistical problems, may produce more robust models of invasion risk.
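
    The point about parameter count can be made concrete with Akaike's Information Criterion, which penalises each extra parameter; in the hedged sketch below the log-likelihood values are invented purely to illustrate how a ~60-parameter model can lose to a far smaller one.

    ```python
    # AIC = 2k - 2 ln L; the model with the lower AIC is preferred.
    def aic(log_likelihood, n_params):
        return 2 * n_params - 2 * log_likelihood

    overfit_model = aic(log_likelihood=-480.0, n_params=60)       # 1080
    parsimonious_model = aic(log_likelihood=-492.0, n_params=12)  # 1008
    print(overfit_model, parsimonious_model)  # here the smaller model wins
    ```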

  2. Five Years of Citizen Science: Macroseismic Data Collection with the USGS Community Internet Intensity Maps (``Did You Feel It?'')

    NASA Astrophysics Data System (ADS)

    Quitoriano, V.; Wald, D. J.; Dewey, J. W.; Hopper, M.; Tarr, A.

    2003-12-01

    The U.S. Geological Survey Community Internet Intensity Map (CIIM) is an automatic Web-based system for rapidly generating seismic intensity maps based on shaking and damage reports collected from Internet users immediately following felt earthquakes in the United States. The data collection procedure is fundamentally Citizen Science. The vast majority of data are contributed by non-specialists, describing their own experiences of earthquakes. Internet data contributed by the public have profoundly changed the approach, coverage and usefulness of intensity observation in the U.S. We now typically receive thousands of individual questionnaire responses for widely felt earthquakes. After five years, these total over 350,000 individual entries nationwide, including entries from all 50 States, the District of Columbia, as well as territories of Guam, the Virgin Islands and Puerto Rico. The widespread access and use of online felt reports have added unanticipated but welcome capacities to USGS earthquake reporting. We can more easily validate earthquake occurrence in poorly instrumented regions, identify and locate sonic booms, and readily gauge societal importance of earthquakes by the nature of the response. In some parts of the U.S., CIIM provides constraints on earthquake magnitudes and focal depths beyond those provided by instrumental data, and the data are robust enough to test regionalized models of ground-motion attenuation. CIIM invokes an enthusiastic response from members of the public who contribute to it; it clearly provides an important opportunity for public education and outreach. In this paper we provide background on advantages and limitations of on-line data collection and explore recent developments and improvements to the CIIM system, including improved quality assurance using a relational database and greater data availability for scientific and sociological studies. We also describe a number of post-processing tools and applications that make use of the extensive intensity data sets now gathered. These new applications include automatic location and magnitude determination, estimating ground motions from the intensity observations thereby augmenting ShakeMap, automatic geocoding to allow for more refined intensity localization, and recovering higher precision decimal intensities rather than limiting intensities to integer values. Because of differences in the data and procedure, CIIM intensities are not strictly comparable to intensities assigned with the Modified Mercalli scale. Hence, continued collection of traditional macroseismic data will be essential to calibrate our understanding of CIIM intensities, and, conversely, CIIM data will improve our understanding of conventional macroseismic intensities. CIIM can be found online at http://earthquake.usgs.gov under ``Did You Feel It?''.

  3. A study of primary care physicians rating their immigrant patients' pain intensity.

    PubMed

    André, M; Löfvander, M

    2013-01-01

    Few studies focus on how physicians evaluate pain in foreign-born patients with varying cultural backgrounds. This study aimed to compare pain ratings [visual analogue scale (VAS) 0-100] done by Swedish primary care physicians and their patients, and to analyse which factors predicted physicians' higher ratings of pain in patients aged 18-45 years with long-standing disabling back pain. The two physicians jointly carried out the somatic and psychiatric diagnostic evaluations and alternated as consulting doctor or observer. One-third of the consultations were interpreted. Towards the end of the consultations, the patients rated their pain intensity 'right now' (patients' VAS). After the patient had left, the two physicians independently rated how much pain they thought the patient had, without looking at the patient's VAS score. The mean of the two doctors' VAS values (physicians' VAS) for each patient was used in the logistic regression calculations of odds ratios (OR) in main effect models for physicians' VAS above median (md) with patient's sex, education, origin, depression, psychosocial stress and pain sites as explanatory variables. Physicians' VAS values were significantly lower (md 15) than patients' VAS (md 66; women md 73, men md 52). The ratings showed no significant association with whether the physician was acting as consultant or observer. The higher physician VAS was only predicted by findings of multiple pain sites. Physicians appear to overlook psychological and emotional aspects when rating the pain of patients from other cultural backgrounds. This finding highlights a potential problem in multicultural care settings. © 2012 European Federation of International Association for the Study of Pain Chapters.

  4. 3D Inversion of Natural Source Electromagnetics

    NASA Astrophysics Data System (ADS)

    Holtham, E. M.; Oldenburg, D. W.

    2010-12-01

    The superior depth of investigation of natural source electromagnetic techniques makes these methods excellent candidates for crustal studies as well as for mining and hydrocarbon exploration. The traditional natural source method, the magnetotelluric (MT) technique, has practical limitations because the surveys are costly and time consuming due to the labor intensive nature of ground based surveys. In an effort to continue to use the penetration advantage of natural sources, it has long been recognized that tipper data, the ratio of the local vertical magnetic field to the horizontal magnetic field, provide information about 3D electrical conductivity structure. It was this understanding that prompted the development of AFMAG (Audio Frequency Magnetics) and recently the new airborne Z-Axis Tipper Electromagnetic Technique (ZTEM). In ZTEM, the vertical component of the magnetic field is recorded above the entire survey area, while the horizontal fields are recorded at a ground-based reference station. MT processing techniques yield frequency domain transfer functions typically between 30-720 Hz that relate the vertical fields over the survey area to the horizontal fields at the reference station. The result is a cost effective procedure for collecting natural source EM data and for finding large scale targets at moderate depths. It is well known however that 1D layered structures produce zero vertical magnetic fields and thus ZTEM data cannot recover such background conductivities. This is in sharp contrast to the MT technique where electric fields are measured and a 1D background conductivity can be recovered from the off diagonal elements of the impedance tensor. While 1D models produce no vertical fields, two and three dimensional structures will produce anomalous currents and a ZTEM response. For such models the background conductivity structure does affect the data. In general however, the ZTEM data have weak sensitivity to the background conductivity and while we show that it is possible to obtain the background structure by inverting the ZTEM data alone, it is desirable to obtain robust background conductivity information from other sources. This information could come from a priori geologic and petrophysical information or from additional geophysical data such as MT. To counter the costly nature of large MT surveys and the limited sensitivity of the ZTEM technique to the background conductivity we show that an effective method is to collect and invert both MT and ZTEM data. A sparse MT survey grid can gather information about the background conductivity and deep structures while keeping the survey costs affordable. Higher spatial resolution at moderate depths can be obtained by flying multiple lines of ZTEM data.
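
    The tipper relation that underlies ZTEM data is compact enough to sketch: at each frequency the local vertical field is modelled as a linear combination of the reference horizontal fields, Hz = Tzx·Hx + Tzy·Hy, and the complex transfer functions are estimated by least squares over many segments. The spectra below are random placeholders.

    ```python
    # Least-squares estimate of tipper transfer functions from synthetic spectra.
    import numpy as np

    rng = np.random.default_rng(3)
    nseg = 200
    Hx = rng.standard_normal(nseg) + 1j * rng.standard_normal(nseg)
    Hy = rng.standard_normal(nseg) + 1j * rng.standard_normal(nseg)
    T_true = np.array([0.15 - 0.05j, -0.08 + 0.02j])
    noise = 0.01 * (rng.standard_normal(nseg) + 1j * rng.standard_normal(nseg))
    Hz = T_true[0] * Hx + T_true[1] * Hy + noise

    A = np.column_stack([Hx, Hy])
    T_est, *_ = np.linalg.lstsq(A, Hz, rcond=None)
    print("estimated tipper:", T_est)   # close to T_true for low noise
    ```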

  5. The Python Sky Model: software for simulating the Galactic microwave sky

    NASA Astrophysics Data System (ADS)

    Thorne, B.; Dunkley, J.; Alonso, D.; Næss, S.

    2017-08-01

    We present a numerical code to simulate maps of Galactic emission in intensity and polarization at microwave frequencies, aiding in the design of cosmic microwave background experiments. This python code builds on existing efforts to simulate the sky by providing an easy-to-use interface and is based on publicly available data from the WMAP (Wilkinson Microwave Anisotropy Probe) and Planck satellite missions. We simulate synchrotron, thermal dust, free-free and anomalous microwave emission over the whole sky, in addition to the cosmic microwave background, and include a set of alternative prescriptions for the frequency dependence of each component, for example, polarized dust with multiple temperatures and a decorrelation of the signals with frequency, which introduce complexity that is consistent with current data. We also present a new prescription for adding small-scale realizations of these components at resolutions greater than current all-sky measurements. The usefulness of the code is demonstrated by forecasting the impact of varying foreground complexity on the recovered tensor-to-scalar ratio for the LiteBIRD satellite. The code is available at: https://github.com/bthorne93/PySM_public.
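
    The code below is not the PySM interface; it is only a hedged, self-contained illustration of the kind of per-component frequency scaling such a simulator applies, here a synchrotron power law and a one-temperature modified-blackbody dust law with illustrative amplitudes, spectral indices, and dust temperature.

    ```python
    # Per-component frequency scaling of Galactic foregrounds (illustrative values).
    import numpy as np

    h_over_k = 4.799e-11  # h/k_B in K per Hz

    def synchrotron(nu_ghz, amp_ref, nu_ref=30.0, beta=-3.0):
        """Power-law scaling of a synchrotron amplitude from nu_ref to nu_ghz."""
        return amp_ref * (nu_ghz / nu_ref) ** beta

    def thermal_dust(nu_ghz, amp_ref, nu_ref=353.0, beta=1.54, T_dust=20.0):
        """Modified-blackbody scaling of a dust amplitude from nu_ref to nu_ghz."""
        def planck(nu):
            x = h_over_k * nu * 1e9 / T_dust
            return nu**3 / np.expm1(x)
        return amp_ref * (nu_ghz / nu_ref) ** beta * planck(nu_ghz) / planck(nu_ref)

    for nu in [30.0, 100.0, 353.0]:
        print(nu, synchrotron(nu, amp_ref=20.0), thermal_dust(nu, amp_ref=300.0))
    ```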

  6. Aligned metal absorbers and the ultraviolet background at the end of reionization

    NASA Astrophysics Data System (ADS)

    Doughty, Caitlin; Finlator, Kristian; Oppenheimer, Benjamin D.; Davé, Romeel; Zackrisson, Erik

    2018-04-01

    We use observations of spatially aligned C II, C IV, Si II, Si IV, and O I absorbers to probe the slope and intensity of the ultraviolet background (UVB) at z ˜ 6. We accomplish this by comparing observations with predictions from a cosmological hydrodynamic simulation using three trial UVBs applied in post-processing: a spectrally soft, fluctuating UVB calculated using multifrequency radiative transfer; a soft, spatially uniform UVB; and a hard, spatially uniform `quasars-only' model. When considering our paired high-ionization absorbers (C IV/Si IV), the observed statistics strongly prefer the hard, spatially uniform UVB. This echoes recent findings that cosmological simulations generically underproduce strong C IV absorbers at z > 5. A single low/high ionization pair (Si II/Si IV), by contrast, shows a preference for the HM12 UVB, whereas two more (C II/C IV and O I/C IV) show no preference for any of the three UVBs. Despite this, future observations of specific absorbers, particularly Si IV/C IV, with next-generation telescopes probing to lower column densities should yield tighter constraints on the UVB.

  7. Ionization Waves of Arbitrary Velocity

    NASA Astrophysics Data System (ADS)

    Turnbull, D.; Franke, P.; Katz, J.; Palastro, J. P.; Begishev, I. A.; Boni, R.; Bromage, J.; Milder, A. L.; Shaw, J. L.; Froula, D. H.

    2018-06-01

    Flying focus is a technique that uses a chirped laser beam focused by a highly chromatic lens to produce an extended focal region within which the peak laser intensity can propagate at any velocity. When that intensity is high enough to ionize a background gas, an ionization wave will track the intensity isosurface corresponding to the ionization threshold. We report on the demonstration of such ionization waves of arbitrary velocity. Subluminal and superluminal ionization fronts were produced that propagated both forward and backward relative to the ionizing laser. All backward and all superluminal cases mitigated the issue of ionization-induced refraction that typically inhibits the formation of long, contiguous plasma channels.

  8. Background radiation measurements at high power research reactors

    DOE PAGES

    Ashenfelter, J.; Yeh, M.; Balantekin, B.; ...

    2015-10-23

    Research reactors host a wide range of activities that make use of the intense neutron fluxes generated at these facilities. Recent interest in performing measurements with relatively low event rates, e.g. reactor antineutrino detection, at these facilities necessitates a detailed understanding of background radiation fields. Both reactor-correlated and naturally occurring background sources are potentially important, even at levels well below those of importance for typical activities. Here we describe a comprehensive series of background assessments at three high-power research reactors, including γ-ray, neutron, and muon measurements. For each facility we describe the characteristics and identify the sources of the background fields encountered. Furthermore, the general understanding gained of background production mechanisms and their relationship to facility features will prove valuable for the planning of any sensitive measurement conducted therein.

  9. Developing and Translating a New Model for Teaching Empowerment Into Routine Chronic Care Management

    PubMed Central

    Wallace, Carolyn A; Pontin, David; Dokova, Klara; Mikkonen, Irma; Savage, Eileen; Koskinen, Liisa

    2017-01-01

    Background: Health professional education has been criticized for not integrating patient expertise into professional curricula to develop professional skills in patient empowerment. Objective: To develop and translate a new expert patient-centered model for teaching empowerment into professional education about routine chronic care management. Methods: Eight Finnish patients (known as expert patients), 31 students, and 11 lecturers from 4 European countries participated in a new pilot intensive educational module. Thirteen focus groups, artefacts, and an online student evaluation were analyzed using a thematic analysis and triangulated using a meta-matrix. Results: A patient-centered pedagogical model is presented, which describes 3 phases of empowerment: (1) preliminary work, (2) the elements of empowerment, and (3) the expected outcomes. These 3 phases were bound by 2 cross-cutting themes “time” and “enabling resources.” Conclusion: Patient expertise was embedded into the new module curriculum. Using an example of care planning, and Pentland and Feldman’s theory of routine organization, the results are translated into a patient-centered educational model for teaching empowerment to health profession students. PMID:29582009

  10. Dependence of spectral characteristics on parameters describing CO2 exchange between crop species and the atmosphere

    NASA Astrophysics Data System (ADS)

    Uździcka, Bogna; Stróżecki, Marcin; Urbaniak, Marek; Juszczak, Radosław

    2017-07-01

    The aim of this paper is to demonstrate that spectral vegetation indices are good indicators of parameters describing the intensity of CO2 exchange between crops and the atmosphere. Measurements were conducted over 2011-2013 on plots of an experimental arable station on winter wheat, winter rye, spring barley, and potatoes. CO2 fluxes were measured using the dynamic closed chamber system, while spectral vegetation indices were determined using SKYE multispectral sensors. Based on spectral data collected in 2011 and 2013, various models to estimate net ecosystem productivity and gross ecosystem productivity were developed. These models were then verified based on data collected in 2012. The R2 for the best model based on spectral data ranged from 0.71 to 0.83 and from 0.78 to 0.92, for net ecosystem productivity and gross ecosystem productivity, respectively. Such high R2 values indicate the utility of spectral vegetation indices in estimating CO2 fluxes of crops. The effects of the soil background turned out to be an important factor decreasing the accuracy of the tested models.
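
    A minimal, hypothetical sketch of the kind of model verification described above: fit a linear relation between a spectral vegetation index and gross ecosystem productivity on one season and compute R² on a held-out season. The index, coefficients, and data below are invented; the study's actual models, indices, and sensors differ.

```python
import numpy as np

# Hypothetical illustration: fit GEP ~ a + b * NDVI on one season's data and
# verify it on a held-out season, reporting R^2 as in the study. All values
# are synthetic placeholders.

rng = np.random.default_rng(0)
ndvi_2011 = rng.uniform(0.2, 0.9, 40)
gep_2011 = 5.0 + 30.0 * ndvi_2011 + rng.normal(0, 2.0, 40)   # "training" season
ndvi_2012 = rng.uniform(0.2, 0.9, 20)
gep_2012 = 5.0 + 30.0 * ndvi_2012 + rng.normal(0, 2.0, 20)   # "verification" season

b, a = np.polyfit(ndvi_2011, gep_2011, 1)   # slope, intercept
pred = a + b * ndvi_2012
ss_res = np.sum((gep_2012 - pred) ** 2)
ss_tot = np.sum((gep_2012 - gep_2012.mean()) ** 2)
print(f"verification R^2 = {1 - ss_res / ss_tot:.2f}")
```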

  11. Near-IR Polarized Scattered Light Imagery of the DoAr 28 Transitional Disk

    NASA Technical Reports Server (NTRS)

    Rich, Evan A.; Wisiniewski, John P.; Mayama, Satoshi; Brandt, Timothy D.; Hashimoto, Jun; Kudo, Tomoyuki; Kusakabe, Nobuhiko; Espaillat, Catherine; Serabyn, Eugene; Grady, Carol A.

    2015-01-01

    We present the first spatially resolved polarized scattered light H-band detection of the DoAr 28 transitional disk. Our two epochs of imagery detect the scattered light disk from our effective inner working angle of 0″.10 (13 AU) out to 0″.50 (65 AU). This inner working angle is interior to the location of the system's gap inferred by previous studies using spectral energy distribution modeling (15 AU). We detected a candidate point source companion 1″.08 northwest of the system; however, our second epoch of imagery strongly suggests that this object is a background star. We constructed a grid of Monte Carlo Radiative Transfer models of the system, and our best fit models utilize a modestly inclined (50 degrees), 0.01 solar mass disk that has a partially depleted inner gap from the dust sublimation radius out to approximately 8 AU. Subtracting this best fit, axi-symmetric model from our polarized intensity data reveals evidence for two small asymmetries in the disk, which could be attributable to a variety of mechanisms.

  12. SRS 2010 Vegetation Inventory GeoStatistical Mapping Results for Custom Reaction Intensity and Total Dead Fuels.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Lloyd A.; Paresol, Bernard

    This report presents the geostatistical analysis results for the fire fuels response variables, custom reaction intensity and total dead fuels, and is part of the SRS 2010 vegetation inventory project. For a detailed description of the project, theory, and background, including sample design, methods, and results, please refer to the USDA Forest Service Savannah River Site internal report “SRS 2010 Vegetation Inventory GeoStatistical Mapping Report” (Edwards & Parresol 2013).

  13. The effects of ipsilateral, contralateral, and bilateral broadband noise on the mid-level hump in intensity discrimination

    PubMed Central

    Roverud, Elin; Strickland, Elizabeth A.

    2015-01-01

    Previous psychoacoustical and physiological studies indicate that the medial olivocochlear reflex (MOCR), a bilateral, sound-evoked reflex, may lead to improved sound intensity discrimination in background noise. The MOCR can decrease the range of basilar-membrane compression and can counteract effects of neural adaptation from background noise. However, the contribution of these processes to intensity discrimination is not well understood. This study examined the effect of ipsilateral, contralateral, and bilateral noise on the “mid-level hump.” The mid-level hump refers to intensity discrimination Weber fractions (WFs) measured for short-duration, high-frequency tones which are poorer at mid levels than at lower or higher levels. The mid-level hump WFs may reflect a limitation due to basilar-membrane compression, and thus may be decreased by the MOCR. The noise was either short (50 ms) or long (150 ms), with the long noise intended to elicit the sluggish MOCR. For a tone in quiet, mid-level hump WFs improved with ipsilateral noise for most listeners, but not with contralateral noise. For a tone in ipsilateral noise, WFs improved with contralateral noise for most listeners, but only when both noises were long. These results are consistent with MOCR-induced WF improvements, possibly via decreases in effects of compression and neural adaptation. PMID:26627798

  14. Three-dimensional laser radar modeling

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove K.; Carlsson, Tomas

    2001-09-01

    Laser radars have the unique capability to give intensity and full 3-D images of an object. Doppler lidars can give velocity and vibration characteristics of an object. These systems have many civilian and military applications such as terrain modelling, depth sounding, object detection and classification as well as object positioning. In order to derive the signal waveform from the object one has to account for the laser pulse time characteristics, media effects such as atmospheric attenuation and turbulence or scattering properties, the target shape and reflection (BRDF), and speckle noise, together with the receiver and background noise. Finally, the type of waveform processing (peak detection, leading edge etc.) is needed to model the sensor output to be compared with observations. We have developed a computer model of the performance of a 3-D laser radar. We give examples of signal waveforms generated for different model targets, calculated by integrating the laser beam profile in space and time over the target, including reflection characteristics, under different speckle and turbulence conditions. The results will be of help when designing and using new laser radar systems. The importance of different types of signal processing of the waveform for meeting performance goals will be shown.
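
    The sketch below is an illustrative toy version of such waveform modelling, not the authors' code: a received waveform is formed by convolving a Gaussian laser pulse with a simple two-surface target response, adding a noise floor, and applying peak detection for ranging. All parameters are placeholders.

```python
import numpy as np

# Illustrative sketch: received waveform = laser pulse shape convolved with the
# target's range response, plus a noise floor, followed by peak-detection ranging.

c = 3e8                               # speed of light, m/s
t = np.arange(0, 200e-9, 0.5e-9)      # time axis, 0.5 ns samples

pulse = np.exp(-0.5 * ((t - 10e-9) / 3e-9) ** 2)       # Gaussian laser pulse
target = np.zeros_like(t)
target[np.searchsorted(t, 2 * 12.0 / c)] = 1.0         # surface at 12 m
target[np.searchsorted(t, 2 * 13.5 / c)] = 0.4         # second surface at 13.5 m

waveform = np.convolve(target, pulse, mode="full")[: t.size]
waveform += np.random.default_rng(1).normal(0, 0.02, t.size)   # receiver/background noise

range_est = c * t[np.argmax(waveform)] / 2 - 10e-9 * c / 2      # subtract pulse delay
print(f"peak-detection range estimate: {range_est:.2f} m")
```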

  15. The two types of ENSO in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Kim, Seon Tae; Yu, Jin-Yi

    2012-06-01

    In this study, we evaluate the intensity of the Central-Pacific (CP) and Eastern-Pacific (EP) types of El Niño-Southern Oscillation (ENSO) simulated in the pre-industrial, historical, and the Representative Concentration Pathways (RCP) 4.5 experiments of the Coupled Model Intercomparison Project Phase 5 (CMIP5). Compared to the CMIP3 models, the pre-industrial simulations of the CMIP5 models are found to (1) better simulate the observed spatial patterns of the two types of ENSO and (2) have a significantly smaller inter-model diversity in ENSO intensities. The decrease in the CMIP5 model discrepancies is particularly obvious in the simulation of the EP ENSO intensity, although it is still more difficult for the models to reproduce the observed EP ENSO intensity than the observed CP ENSO intensity. Ensemble means of the CMIP5 models indicate that the intensity of the CP ENSO increases steadily from the pre-industrial to the historical and the RCP4.5 simulations, but the intensity of the EP ENSO increases from the pre-industrial to the historical simulations and then decreases in the RCP4.5 projections. The CP-to-EP ENSO intensity ratio, as a result, is almost the same in the pre-industrial and historical simulations but increases in the RCP4.5 simulation.

  16. Controls on the global distribution of contourite drifts: Insights from an eddy-resolving ocean model

    NASA Astrophysics Data System (ADS)

    Thran, Amanda C.; Dutkiewicz, Adriana; Spence, Paul; Müller, R. Dietmar

    2018-05-01

    Contourite drifts are anomalously high sediment accumulations that form due to reworking by bottom currents. Due to the lack of a comprehensive contourite database, the link between vigorous bottom water activity and drift occurrence has yet to be demonstrated on a global scale. Using an eddy-resolving ocean model and a new georeferenced database of 267 contourites, we show that the global distribution of modern contourite drifts strongly depends on the configuration of the world's most powerful bottom currents, many of which are associated with global meridional overturning circulation. Bathymetric obstacles frequently modify flow direction and intensity, imposing additional finer-scale control on drift occurrence. Mean bottom current speed over contourite-covered areas is only slightly higher (2.2 cm/s) than the rest of the global ocean (1.1 cm/s), falling below proposed thresholds deemed necessary to re-suspend and redistribute sediments (10-15 cm/s). However, currents fluctuate more frequently and intensely over areas with drifts, highlighting the role of intermittent, high-energy bottom current events in sediment erosion, transport, and subsequent drift accumulation. We identify eddies as a major driver of these bottom current fluctuations, and we find that simulated bottom eddy kinetic energy is over three times higher in contourite-covered areas in comparison to the rest of the ocean. Our work supports previous hypotheses which suggest that contourite deposition predominantly occurs due to repeated acute events as opposed to continuous reworking under average-intensity background flow conditions. This suggests that the contourite record should be interpreted in terms of a bottom current's susceptibility to experiencing periodic, high-speed current events. Our results also highlight the potential role of upper ocean dynamics in contourite sedimentation through its direct influence on deep eddy circulation.
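
    For reference, eddy kinetic energy is conventionally computed from velocity anomalies about the time mean, EKE = 0.5 (u'² + v'²). The snippet below is a generic illustration with synthetic bottom velocities, not output from the model used in the study.

```python
import numpy as np

# Sketch: mean bottom current speed and bottom eddy kinetic energy (EKE) from a
# velocity time series, with primes denoting anomalies from the time mean.
# Values are synthetic and purely illustrative.

rng = np.random.default_rng(7)
u = 1.5 + 3.0 * rng.standard_normal(1000)   # cm/s, mean flow plus fluctuations
v = 0.5 + 3.0 * rng.standard_normal(1000)

speed = np.hypot(u, v)
eke = 0.5 * ((u - u.mean()) ** 2 + (v - v.mean()) ** 2).mean()
print(f"mean bottom speed = {speed.mean():.1f} cm/s, mean EKE = {eke:.1f} cm^2/s^2")
```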

  17. In-situ, satellite measurement and model evidence for a dominant regional contribution to fine particulate matter levels in the Paris Megacity

    NASA Astrophysics Data System (ADS)

    Beekmann, M.; Prévôt, A. S. H.; Drewnick, F.; Sciare, J.; Pandis, S. N.; Denier van der Gon, H. A. C.; Crippa, M.; Freutel, F.; Poulain, L.; Ghersi, V.; Rodriguez, E.; Beirle, S.; Zotter, P.; von der Weiden-Reinmüller, S.-L.; Bressi, M.; Fountoukis, C.; Petetin, H.; Szidat, S.; Schneider, J.; Rosso, A.; El Haddad, I.; Megaritis, A.; Zhang, Q. J.; Michoud, V.; Slowik, J. G.; Moukhtar, S.; Kolmonen, P.; Stohl, A.; Eckhardt, S.; Borbon, A.; Gros, V.; Marchand, N.; Jaffrezo, J. L.; Schwarzenboeck, A.; Colomb, A.; Wiedensohler, A.; Borrmann, S.; Lawrence, M.; Baklanov, A.; Baltensperger, U.

    2015-03-01

    A detailed characterization of air quality in Paris (France), a megacity of more than 10 million inhabitants, during two one-month intensive campaigns and from additional one-year observations, revealed that about 70% of the fine particulate matter (PM) at urban background is transported on average into the megacity from upwind regions. This dominant influence of regional sources was confirmed by in-situ measurements during short intensive and longer term campaigns, aerosol optical depth (AOD) measurements from ENVISAT, and modeling results from PMCAMx and CHIMERE. While advection of sulfate is well documented for other megacities, there was a surprisingly high contribution from long-range transport for both nitrate and organic aerosol. The origin of organic PM was investigated by a comprehensive analysis of aerosol mass spectrometer (AMS), radiocarbon and tracer measurements during two intensive campaigns. Primary fossil fuel combustion emissions contributed less than 20% in winter and 40% in summer to carbonaceous fine PM, unexpectedly little for a megacity. Cooking activities and, during winter, residential wood burning are the major primary organic PM sources. This analysis suggests that the major part of secondary organic aerosol is of modern origin, i.e. from biogenic precursors and from wood burning. Black carbon concentrations are on the lower end of values encountered in megacities worldwide, but still represent an issue for air quality. These comparatively low air pollution levels are due to a combination of low emissions per inhabitant, flat terrain, and a meteorology that is in general not conducive to local pollution build-up. This revised picture of a megacity only controlling part of its own average and peak PM levels has important implications for air pollution regulation policies.

  18. Cost-effectiveness analysis of ultrasonography screening for nonalcoholic fatty liver disease in metabolic syndrome patients

    PubMed Central

    Phisalprapa, Pochamana; Supakankunti, Siripen; Charatcharoenwitthaya, Phunchai; Apisarnthanarak, Piyaporn; Charoensak, Aphinya; Washirasaksiri, Chaiwat; Srivanichakorn, Weerachai; Chaiyakunapruk, Nathorn

    2017-01-01

    Abstract Background: Nonalcoholic fatty liver disease (NAFLD) can be diagnosed early by noninvasive ultrasonography; however, the cost-effectiveness of ultrasonography screening with intensive weight reduction program in metabolic syndrome patients is not clear. This study aims to estimate economic and clinical outcomes of ultrasonography in Thailand. Methods: Cost-effectiveness analysis used decision tree and Markov models to estimate lifetime costs and health benefits from societal perspective, based on a cohort of 509 metabolic syndrome patients in Thailand. Data were obtained from published literatures and Thai database. Results were reported as incremental cost-effectiveness ratios (ICERs) in 2014 US dollars (USD) per quality-adjusted life year (QALY) gained with discount rate of 3%. Sensitivity analyses were performed to assess the influence of parameter uncertainty on the results. Results: The ICER of ultrasonography screening of 50-year-old metabolic syndrome patients with intensive weight reduction program was 958 USD/QALY gained when compared with no screening. The probability of being cost-effective was 67% using willingness-to-pay threshold in Thailand (4848 USD/QALY gained). Screening before 45 years was cost saving while screening at 45 to 64 years was cost-effective. Conclusions: For patients with metabolic syndromes, ultrasonography screening for NAFLD with intensive weight reduction program is a cost-effective program in Thailand. Study can be used as part of evidence-informed decision making. Translational Impacts: Findings could contribute to changes of NAFLD diagnosis practice in settings where economic evidence is used as part of decision-making process. Furthermore, study design, model structure, and input parameters could also be used for future research addressing similar questions. PMID:28445256
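
    A toy illustration of the incremental cost-effectiveness ratio (ICER) reported above, with 3% discounting; the costs and QALYs below are placeholders, not outputs of the study's decision tree or Markov model.

```python
# Illustrative only: ICER comparing a screening strategy with no screening.
# Numbers are placeholders, not the study's model outputs.

def discounted(values, rate=0.03):
    """Present value of a stream of yearly costs or QALYs."""
    return sum(v / (1 + rate) ** year for year, v in enumerate(values))

cost_screen, qaly_screen = discounted([1200, 300, 300]), discounted([0.85, 0.84, 0.83])
cost_none,   qaly_none   = discounted([900, 350, 400]),  discounted([0.83, 0.81, 0.79])

icer = (cost_screen - cost_none) / (qaly_screen - qaly_none)
print(f"ICER = {icer:.0f} USD per QALY gained")
```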

  19. Core measures for developmentally supportive care in neonatal intensive care units: theory, precedence and practice

    PubMed Central

    Coughlin, Mary; Gibbins, Sharyn; Hoath, Steven

    2009-01-01

    Title Core measures for developmentally supportive care in neonatal intensive care units: theory, precedence and practice. Aim This paper is a discussion of evidence-based core measures for developmental care in neonatal intensive care units. Background Inconsistent definition, application and evaluation of developmental care have resulted in criticism of its scientific merit. The key concept guiding data organization in this paper is the United States of America’s Joint Commission’s concept of ‘core measures’ for evaluating and accrediting healthcare organizations. This concept is applied to five disease- and procedure-independent measures based on the Universe of Developmental Care model. Data sources Electronically accessible, peer reviewed studies on developmental care published in English were culled for data supporting the selected objective core measures between 1978 and 2008. The quality of evidence was based on a structured predetermined format that included three independent reviewers. Systematic reviews and randomized control trials were considered the strongest level of evidence. When unavailable, cohort, case control, consensus statements and qualitative methods were considered the strongest level of evidence for a particular clinical issue. Discussion Five core measure sets for evidence-based developmental care were evaluated: (1) protected sleep, (2) pain and stress assessment and management, (3) developmental activities of daily living, (4) family-centred care, and (5) the healing environment. These five categories reflect recurring themes that emerged from the literature review regarding developmentally supportive care and quality caring practices in neonatal populations. This practice model provides clear metrics for nursing actions having an impact on the hospital experience of infant-family dyads. Conclusion Standardized disease-independent core measures for developmental care establish minimum evidence-based practice expectations and offer an objective basis for cross-institutional comparison of developmental care programmes. PMID:19686402

  20. Respiratory Syncytial Virus Genomic Load and Disease Severity Among Children Hospitalized With Bronchiolitis: Multicenter Cohort Studies in the United States and Finland

    PubMed Central

    Hasegawa, Kohei; Jartti, Tuomas; Mansbach, Jonathan M.; Laham, Federico R.; Jewell, Alan M.; Espinola, Janice A.; Piedra, Pedro A.; Camargo, Carlos A.

    2015-01-01

    Background. We investigated whether children with a higher respiratory syncytial virus (RSV) genomic load are at a higher risk of more-severe bronchiolitis. Methods. Two multicenter prospective cohort studies in the United States and Finland used the same protocol to enroll children aged <2 years hospitalized for bronchiolitis and collect nasopharyngeal aspirates. By using real-time polymerase chain reaction analysis, patients were classified into 3 genomic load status groups: low, intermediate, and high. Outcome measures were a length of hospital stay (LOS) of ≥3 days and intensive care use, defined as admission to the intensive care unit or use of mechanical ventilation. Results. Of 2615 enrolled children, 1764 (67%) had RSV bronchiolitis. Children with a low genomic load had a higher unadjusted risk of having a length of stay of ≥3 days (52%), compared with children with intermediate and those with high genomic loads (42% and 51%, respectively). In a multivariable model, the risk of having a length of stay of ≥3 days remained significantly higher in the groups with intermediate (odds ratio [OR], 1.43; 95% confidence interval [CI], 1.20–1.69) and high (OR, 1.58; 95% CI, 1.29–1.94) genomic loads. Similarly, children with a high genomic load had a higher risk of intensive care use (20%, compared with 15% and 16% in the groups with low and intermediate genomic loads, respectively). In a multivariable model, the risk remained significantly higher in the group with a high genomic load (OR, 1.43; 95% CI, 1.03–1.99). Conclusion. Children with a higher RSV genomic load had a higher risk for more-severe bronchiolitis. PMID:25425699

  1. Thermal Non-equilibrium Revealed by Periodic Pulses of Random Amplitudes in Solar Coronal Loops

    NASA Astrophysics Data System (ADS)

    Auchère, F.; Froment, C.; Bocchialini, K.; Buchlin, E.; Solomon, J.

    2016-08-01

    We recently detected variations in extreme ultraviolet intensity in coronal loops repeating with periods of several hours. Models of loops including stratified and quasi-steady heating predict the development of a state of thermal non-equilibrium (TNE): cycles of evaporative upflows at the footpoints followed by falling condensations at the apex. Based on Fourier and wavelet analysis, we demonstrate that the observed periodic signals are indeed not signatures of vibrational modes. Instead, superimposed on the power law expected from the stochastic background emission, the power spectra of the time series exhibit the discrete harmonics and continua expected from periodic trains of pulses of random amplitudes. These characteristics reinforce our earlier interpretation of these pulsations as being aborted TNE cycles.
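
    The spectral signature described above can be reproduced with a simple synthetic example: a strictly periodic train of pulses whose amplitudes vary randomly produces discrete harmonics of the pulse period plus a continuum in its Fourier power spectrum. The sketch below is illustrative only; the cadence, period, pulse shape, and background model are arbitrary.

```python
import numpy as np

# Sketch: power spectrum of a periodic train of pulses with random amplitudes,
# superimposed on a weak red-noise background, showing a strong peak at the
# pulse repetition frequency.

rng = np.random.default_rng(2)
dt, n = 60.0, 2 ** 14                     # 60 s cadence, ~11 days of samples
t = np.arange(n) * dt
period = n * dt / 32                      # ~8.5 h pulse period (integer cycles in record)

signal = np.zeros(n)
for t0 in np.arange(0.0, n * dt, period):
    amp = rng.uniform(0.5, 2.0)                              # random pulse amplitude
    signal += amp * np.exp(-0.5 * ((t - t0 - period / 2) / 2000.0) ** 2)
signal += np.cumsum(rng.normal(0.0, 0.001, n))               # weak stochastic background

power = np.abs(np.fft.rfft(signal)) ** 2
freq = np.fft.rfftfreq(n, dt)
k_peak = 1 + np.argmax(power[1:])
print(f"strongest non-zero peak at {freq[k_peak]:.3e} Hz, 1/period = {1 / period:.3e} Hz")
```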

  2. Thermal Non-Equilibrium Revealed by Periodic Pulses of Random Amplitudes in Solar Coronal Loops

    NASA Astrophysics Data System (ADS)

    Auchère, F.; Froment, C.; Bocchialini, K.; Buchlin, E.; Solomon, J.

    2016-10-01

    We recently detected variations in extreme ultraviolet intensity in coronal loops repeating with periods of several hours. Models of loops including stratified and quasi-steady heating predict the development of a state of thermal non-equilibrium (TNE): cycles of evaporative upflows at the footpoints followed by falling condensations at the apex. Based on Fourier and wavelet analysis, we demonstrate that the observed periodic signals are indeed not signatures of vibrational modes. Instead, superimposed on the power law expected from the stochastic background emission, the power spectra of the time series exhibit the discrete harmonics and continua expected from periodic trains of pulses of random amplitudes. These characteristics reinforce our earlier interpretation of these pulsations as being aborted TNE cycles.

  3. Propagation characteristics of a focused laser beam in a strontium barium niobate photorefractive crystal under reverse external electric field.

    PubMed

    Guo, Q L; Liang, B L; Wang, Y; Deng, G Y; Jiang, Y H; Zhang, S H; Fu, G S; Simmonds, P J

    2014-10-01

    The propagation characteristics of a focused laser beam in an SBN:75 photorefractive crystal strongly depend on the signal-to-background intensity ratio (R = Is/Ib) under a reverse external electric field. In the range 20 > R > 0.05, the laser beam shows enhanced self-defocusing behavior with increasing external electric field, while it shows self-focusing in the range 0.03 > R > 0.01. Spatial solitons are observed under a suitable reverse external electric field for R = 0.025. A theoretical model is proposed to explain the experimental observations; it suggests a new type of soliton formation due to "enhancement" rather than "screening" of the external electric field.

  4. A Near-Infrared Spectrometer to Measure Zodiacal Light Absorption Spectrum

    NASA Technical Reports Server (NTRS)

    Kutyrev, A. S.; Arendt, R.; Dwek, E.; Kimble, R.; Moseley, S. H.; Rapchun, D.; Silverberg, R. F.

    2010-01-01

    We have developed a high throughput infrared spectrometer for zodiacal light Fraunhofer line measurements. The instrument is based on a cryogenic dual silicon Fabry-Perot etalon designed to achieve high signal-to-noise Fraunhofer line profile measurements. Very large aperture silicon Fabry-Perot etalons and fast camera optics make these measurements possible. The absorption line profile measurements will provide a model-free measure of the zodiacal light intensity in the near infrared. Knowledge of the zodiacal light brightness is crucial for accurately subtracting the zodiacal light foreground and thus for an accurate measure of the extragalactic background light. We present the final design of the instrument and the first results of its performance.

  5. An Economic Analysis of Robot-Assisted Therapy for Long-Term Upper-Limb Impairment After Stroke

    PubMed Central

    Wagner, Todd H.; Lo, Albert C.; Peduzzi, Peter; Bravata, Dawn M.; Huang, Grant D.; Krebs, Hermano I.; Ringer, Robert J.; Federman, Daniel G.; Richards, Lorie G.; Haselkorn, Jodie K.; Wittenberg, George F.; Volpe, Bruce T.; Bever, Christopher T.; Duncan, Pamela W.; Siroka, Andrew; Guarino, Peter D.

    2015-01-01

    Background and Purpose Stroke is a leading cause of disability. Rehabilitation robotics have been developed to aid in recovery after a stroke. This study determined the additional cost of robot-assisted therapy and tested its cost-effectiveness. Methods We estimated the intervention costs and tracked participants' healthcare costs. We collected quality of life using the Stroke Impact Scale and the Health Utilities Index. We analyzed the cost data at 36 weeks postrandomization using multivariate regression models controlling for site, presence of a prior stroke, and Veterans Affairs costs in the year before randomization. Results A total of 127 participants were randomized to usual care plus robot therapy (n=49), usual care plus intensive comparison therapy (n=50), or usual care alone (n=28). The average cost of delivering robot therapy and intensive comparison therapy was $5152 and $7382, respectively (P<0.001), and both were significantly more expensive than usual care alone (no additional intervention costs). At 36 weeks postrandomization, the total costs were comparable for the 3 groups ($17 831 for robot therapy, $19 746 for intensive comparison therapy, and $19 098 for usual care). Changes in quality of life were modest and not statistically different. Conclusions The added cost of delivering robot or intensive comparison therapy was recuperated by lower healthcare use costs compared with those in the usual care group. However, uncertainty remains about the cost-effectiveness of robotic-assisted rehabilitation compared with traditional rehabilitation. Clinical Trial Registration URL: http://clinicaltrials.gov. Unique identifier: NCT00372411. PMID:21757677

  6. The cosmic X-ray background-IRAS galaxy correlation and the local X-ray volume emissivity

    NASA Technical Reports Server (NTRS)

    Miyaji, Takamitsu; Lahav, Ofer; Jahoda, Keith; Boldt, Elihu

    1994-01-01

    We have cross-correlated the galaxies from the IRAS 2 Jy redshift survey sample and the 0.7 Jy projected sample with the all-sky cosmic X-ray background (CXB) map obtained from the High Energy Astronomy Observatory (HEAO) 1 A-2 experiment. We have detected a significant correlation signal between surface density of IRAS galaxies and the X-ray background intensity, with W_xg = <δI δN>/(<I><N>) of several times 10^-3. While this correlation signal has a significant implication for the contribution of the local universe to the hard (E > 2 keV) X-ray background, its interpretation is model-dependent. We have developed a formulation to model the cross-correlation between CXB surface brightness and galaxy counts. This includes the effects of source clustering and the X-ray-far-infrared luminosity correlation. Using an X-ray flux-limited sample of active galactic nuclei (AGNs), which has IRAS 60 micrometer measurements, we have estimated the contribution of the AGN component to the observed CXB-IRAS galaxy count correlations in order to see whether there is an excess component, i.e., contribution from low X-ray luminosity sources. We have applied both the analytical approach and Monte Carlo simulations for the estimations. Our estimate of the local X-ray volume emissivity in the 2-10 keV band is ρ_x ≈ (4.3 ± 1.2) × 10^38 h_50 erg s^-1 Mpc^-3, consistent with the value expected from the luminosity function of AGNs alone. This sets a limit on the local volume emissivity from lower luminosity sources (e.g., star-forming galaxies, low-ionization nuclear emission-line regions (LINERs)) of ρ_x ≲ 2 × 10^38 h_50 erg s^-1 Mpc^-3.
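
    The zero-lag estimator quoted above, W_xg = <δI δN>/(<I><N>), can be illustrated with a short sketch on synthetic co-pixelized maps; the maps and numbers below are invented and only meant to show the form of the calculation.

```python
import numpy as np

# Sketch of the zero-lag cross-correlation estimator W_xg = <dI dN> / (<I><N>)
# applied to co-pixelized maps of X-ray background intensity I and galaxy
# counts N (synthetic values, purely illustrative).

rng = np.random.default_rng(3)
npix = 10000
common = rng.gamma(2.0, 1.0, npix)                       # shared large-scale structure
xray = 10.0 + 0.05 * common + rng.normal(0, 0.2, npix)   # CXB intensity map
galaxies = rng.poisson(1.0 + 0.5 * common)               # galaxy counts map

dI = xray - xray.mean()
dN = galaxies - galaxies.mean()
w_xg = np.mean(dI * dN) / (xray.mean() * galaxies.mean())
print(f"W_xg = {w_xg:.2e}")
```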

  7. Estimating the prevalence and intensity of Schistosoma mansoni infection among rural communities in Western Tanzania: The influence of sampling strategy and statistical approach

    PubMed Central

    Bakuza, Jared S.; Denwood, Matthew J.; Nkwengulila, Gamba

    2017-01-01

    Background Schistosoma mansoni is a parasite of major public health importance in developing countries, where it causes a neglected tropical disease known as intestinal schistosomiasis. However, the distribution of the parasite within many endemic regions is currently unknown, which hinders effective control. The purpose of this study was to characterize the prevalence and intensity of infection of S. mansoni in a remote area of western Tanzania. Methodology/Principal findings Stool samples were collected from 192 children and 147 adults residing in Gombe National Park and four nearby villages. Children were actively sampled in local schools, and adults were sampled passively by voluntary presentation at the local health clinics. The two datasets were therefore analysed separately. Faecal worm egg count (FWEC) data were analysed using negative binomial and zero-inflated negative binomial (ZINB) models with explanatory variables of site, sex, and age. The ZINB models indicated that a substantial proportion of the observed zero FWEC reflected a failure to detect eggs in truly infected individuals, meaning that the estimated true prevalence was much higher than the apparent prevalence as calculated based on the simple proportion of non-zero FWEC. For the passively sampled data from adults, the data were consistent with close to 100% true prevalence of infection. Both the prevalence and intensity of infection differed significantly between sites, but there were no significant associations with sex or age. Conclusions/Significance Overall, our data suggest a more widespread distribution of S. mansoni in this part of Tanzania than was previously thought. The apparent prevalence estimates substantially under-estimated the true prevalence as determined by the ZINB models, and the two types of sampling strategies also resulted in differing conclusions regarding prevalence of infection. We therefore recommend that future surveillance programmes designed to assess risk factors should use active sampling whenever possible, in order to avoid the self-selection bias associated with passive sampling. PMID:28934206
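
    A minimal sketch of the zero-inflated negative binomial idea used above, on synthetic egg-count data: the inflation component absorbs structural zeros (uninfected individuals), so the apparent prevalence understates the estimated true prevalence. This assumes the statsmodels ZeroInflatedNegativeBinomialP class; the data, intercept-only covariate structure, fitting options, and parameter ordering are assumptions of this sketch and may need tuning.

```python
import numpy as np
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Sketch (synthetic data, not the study's): fit a ZINB model to egg counts and
# recover an estimate of true prevalence from the zero-inflation component.

rng = np.random.default_rng(4)
n = 500
infected = rng.random(n) < 0.8                        # true prevalence: 80%
mu, alpha = 8.0, 1.5                                  # NB2 mean and overdispersion
counts = np.where(infected,
                  rng.negative_binomial(1.0 / alpha, 1.0 / (1.0 + alpha * mu), n), 0)

model = ZeroInflatedNegativeBinomialP(counts, np.ones((n, 1)),
                                      exog_infl=np.ones((n, 1)), p=2)
result = model.fit(method="bfgs", maxiter=500, disp=False)

# Assumes the logit inflation intercept is the first fitted parameter.
p_structural_zero = 1.0 / (1.0 + np.exp(-result.params[0]))
print(f"apparent prevalence:       {np.mean(counts > 0):.2f}")
print(f"estimated true prevalence: {1.0 - p_structural_zero:.2f}")
```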

  8. Dynamic intensity-weighted region of interest imaging for conebeam CT

    PubMed Central

    Pearson, Erik; Pan, Xiaochuan; Pelizzari, Charles

    2017-01-01

    BACKGROUND Patient dose from image guidance in radiotherapy is small compared to the treatment dose. However, the imaging beam is untargeted and deposits dose equally in tumor and healthy tissues. It is desirable to minimize imaging dose while maintaining efficacy. OBJECTIVE Image guidance typically does not require full image quality throughout the patient. Dynamic filtration of the kV beam allows local control of CT image noise for high quality around the target volume and lower quality elsewhere, with substantial dose sparing and reduced scatter fluence on the detector. METHODS The dynamic Intensity-Weighted Region of Interest (dIWROI) technique spatially varies beam intensity during acquisition with copper filter collimation. Fluence is reduced by 95% under the filters with the aperture conformed dynamically to the ROI during cone-beam CT scanning. Preprocessing to account for physical effects of the collimator before reconstruction is described. RESULTS Reconstructions show image quality comparable to a standard scan in the ROI, with higher noise and streak artifacts in the outer region but still adequate quality for patient localization. Monte Carlo modeling shows dose reduction by 10–15% in the ROI due to reduced scatter, and up to 75% outside. CONCLUSIONS The presented technique offers a method to reduce imaging dose by accepting increased image noise outside the ROI, while maintaining full image quality inside the ROI. PMID:27257875

  9. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    PubMed

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    Countermeasures against detection by infrared (IR) signals are more demanding than those against other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize its detection status. Recently, research on actively reducing the IR signal by adjusting the surface temperature of the object has been conducted. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method uses the repulsive particle swarm optimization statistical algorithm to estimate the IR stealth surface temperature that synchronizes the IR signals from the object and the surrounding background by setting the inverse-distance-weighted contrast radiant intensity (CRI) equal to zero. To verify the proposed method, we tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene. Our results show that the proposed inverse-distance-weighted active IR stealth technique reduces the contrast radiant intensity between the object and the background by up to 32% compared with the previous method, in which the CRI is determined as the simple signal difference between the object and the background.
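
    A generic particle swarm sketch (standard PSO, not the authors' repulsive variant) of the idea described above: search for the surface temperature that drives a contrast-radiant-intensity-like objective towards zero. Band-integrated radiances are crudely approximated here by Stefan-Boltzmann emission; all emissivities and temperatures are placeholders.

```python
import numpy as np

# Generic PSO sketch: find the plate temperature minimizing a contrast objective
# |L(T_plate) - L_background|, with Stefan-Boltzmann radiance as a stand-in for
# the band-integrated MWIR/LWIR signals.

SIGMA = 5.670e-8              # W m^-2 K^-4
EPS_PLATE, EPS_BG = 0.85, 0.95
T_BG = 288.0                  # effective background temperature, K

def contrast(t_plate):
    return abs(EPS_PLATE * SIGMA * t_plate ** 4 - EPS_BG * SIGMA * T_BG ** 4)

rng = np.random.default_rng(5)
pos = rng.uniform(250.0, 350.0, 20)            # particle positions (temperatures, K)
vel = np.zeros(20)
pbest, pbest_val = pos.copy(), np.array([contrast(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random(20), rng.random(20)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 250.0, 350.0)
    val = np.array([contrast(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(f"stealth surface temperature ~ {gbest:.1f} K (background {T_BG} K)")
```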

  10. Objectively assessed recess physical activity in girls and boys from high and low socioeconomic backgrounds.

    PubMed

    Baquet, Georges; Ridgers, Nicola D; Blaes, Aurélie; Aucouturier, Julien; Van Praagh, Emmanuel; Berthoin, Serge

    2014-02-21

    The school environment influences children's opportunities for physical activity participation. The aim of the present study was to assess objectively measured school recess physical activity in children from high and low socioeconomic backgrounds. Four hundred and seven children (6-11 years old) from 4 primary schools located in high socioeconomic status (high-SES) and low socioeconomic status (low-SES) areas participated in the study. Children's physical activity was measured using accelerometry during morning and afternoon recess during a 4-day school week. The percentage of time spent in light, moderate, vigorous, very high and in moderate- to very high-intensity physical activity were calculated using age-dependent cut-points. Sedentary time was defined as 100 counts per minute. Boys were significantly (p < 0.001) more active than girls. No difference in sedentary time between socioeconomic backgrounds was observed. The low-SES group spent significantly more time in light (p < 0.001) and very high (p < 0.05) intensity physical activity compared to the high-SES group. High-SES boys and girls spent significantly more time in moderate (p < 0.001 and p < 0.05, respectively) and vigorous (p < 0.001) physical activity than low-SES boys. Differences were observed in recess physical activity levels according to socioeconomic background and sex. These results indicate that recess interventions should target children in low-SES schools.
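
    For illustration only, the epoch classification used in such accelerometry studies can be sketched as below; the cut-points shown are placeholders, whereas the study used age-dependent cut-points.

```python
import numpy as np

# Illustration: classify accelerometer counts-per-minute epochs into intensity
# categories and report the share of recess time in each. Cut-points are
# placeholders, not the age-dependent values used in the study.

CUTS = {"sedentary": (0, 100), "light": (100, 2000),
        "moderate": (2000, 4000), "vigorous": (4000, np.inf)}

rng = np.random.default_rng(6)
cpm = rng.gamma(shape=1.2, scale=900.0, size=15)   # one synthetic 15-min recess

for label, (lo, hi) in CUTS.items():
    share = np.mean((cpm >= lo) & (cpm < hi)) * 100
    print(f"{label:>9}: {share:5.1f}% of recess time")
```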

  11. Fluctuations in radiation backgrounds at high redshift and the first stars

    NASA Astrophysics Data System (ADS)

    Holzbauer, Lauren Nicole

    The first stars to light up our universe are as yet unseen, but there have been many attempts to elucidate their properties. The characteristics of these stars (`Population/Pop III' stars) that we do know lie mostly within theory; they formed out of metal-free hydrogen and helium gas contained in dark matter minihalos at redshifts z ~ 20-30. The extent to which Pop III star formation reached into later times is unknown. Current and near future instruments are incapable of resolving individual Pop III stars. Consequently, astronomers must devise creative means with which to indirectly predict and measure their properties. In this thesis, we will investigate a few of those means. We use a new method to model fluctuations of the Lyman-Werner (LW) and Lyman-alpha radiation backgrounds at high redshift. At these early epochs the backgrounds are symptoms of a universe newly lit with its first stars. LW photons (11.5-13.6 eV) are of particular interest because they dissociate molecular hydrogen, the primary coolant in the first minihalos that is necessary for star formation. By using a variation of the `halo model', which describes the spatial distribution and clustering of halos, we can efficiently generate power spectra for these backgrounds. Spatial fluctuations in the LW and (indirectly) the Lyman-alpha backgrounds can tell us about the transition from primordial star formation to a more metal-enriched mode that marks the beginning of the second generation of stars in our Universe. The Near Infrared Background (NIRB) has for some time been considered a potential tool with which to indirectly observe the first stars. Ultraviolet (UV) emission from these stars is redshifted into the NIR band, making the NIRB amenable for hunting Pop III stellar signatures. There have been several measurements of the NIRB and subsequent theoretical studies attempting to explain them in recent years. Though controversial, residual levels of the mean NIRB intensity and anisotropies have been detected after subtracting all known foreground stars and galaxies. Pop III stars have been the leading candidates thought responsible for this observed NIRB excess. We model the Pop III stellar contribution to the NIRB mean intensity and fluctuations and generate observationally motivated values of the star formation (SF) efficiency using high redshift measurements of the UV luminosity density with UDF09, UDF12, and WMAP-9 data. This allows us to characterize the properties of a Pop III stellar population that are required to produce the measured excess. Finally, we propose a new method for detecting primordial metal-free and very metal-poor stellar populations by cross-correlating fluctuations in the intensity of Lyman-alpha and He II λ1640 Å emission sourced from high redshifts. Pop III stars are expected to be more massive and more compact than later generations of stars. This results in a much harder ionizing spectrum. A large portion of the ionizing photons have energies with hν > 54.4 eV that carve out substantial patches of doubly ionized helium, He III. These photoionized regions then begin to shine brightly in He II recombination emission. Due to the lack of heavy elements in these regions, Pop III stars must rely on hydrogen and helium for cooling, enhancing both the Lyman-alpha and He II emission lines. In this regard, Pop III stars can be characterized as `dual emitters,' i.e. producers of both Lyman-alpha and He II emission signatures. 
Though Lyman-alpha emission is characteristic of both metal-free and metal-enriched stars, He II emission appears to be unique to extremely metal poor stars and metal-free stars, making it a very strong signature of the first stars. Detecting Lyman-alpha + He II dual emission in individual galaxies at high redshift is difficult and so far rare. The astrophysical engines powering the few Lyman-alpha + He II dual emitters that have been discovered have still not been clearly identified. Alternatively, we may be able to map fluctuations in the total intensity of the Lyman-alpha and He II λ1640 lines, which will allow us to indirectly assess sources that are below typical luminosity thresholds of deep surveys. Cross-correlating these lines can provide us with a useful new tool for inferring properties of the first stars, since the two lines together allow us to better isolate the redshift of source emission and the presence of He II λ1640 emission is extremely sensitive to stellar metallicity.

  12. A conceptual model of physician work intensity: guidance for evaluating policies and practices to improve health care delivery.

    PubMed

    Horner, Ronnie D; Matthews, Gerald; Yi, Michael S

    2012-08-01

    Physician work intensity, although a major factor in determining the payment for medical services, may potentially affect patient health outcomes including quality of care and patient safety, and has implications for the redesign of medical practice to improve health care delivery. However, to date, there has been minimal research regarding the relationship between physician work intensity and either patient outcomes or the organization and management of medical practices. A theoretical model of physician work intensity will provide useful guidance to such inquiries. The objective of this paper is to describe an initial conceptual model to facilitate further investigations of physician work intensity. A conceptual model of physician work intensity is described using as its theoretical base human performance science relating to work intensity. For each of the theoretical components, we present relevant empirical evidence derived from a review of the current literature. The proposed model specifies that the level of work intensity experienced by a physician is a consequence of the physician performing the set of tasks (i.e., demands) relating to a medical service. It is conceptualized that each medical service has an inherent level of intensity that is experienced by a physician as a function of factors relating to the physician, patient, and medical practice environment. The proposed conceptual model provides guidance to researchers as to the factors to consider in studies of how physician work intensity impacts patient health outcomes and how work intensity may be affected by proposed policies and approaches to health care delivery.

  13. A Millimetre-Wave Cuboid Solid Immersion Lens with Intensity-Enhanced Amplitude Mask Apodization

    NASA Astrophysics Data System (ADS)

    Yue, Liyang; Yan, Bing; Monks, James N.; Dhama, Rakesh; Wang, Zengbo; Minin, Oleg V.; Minin, Igor V.

    2018-06-01

    A photonic jet is a narrow, high-intensity, weakly diverging beam propagating into a background medium and can be produced by a cuboid solid immersion lens (SIL) in both transmission and reflection modes. Amplitude mask apodization is an optical method to further improve the spatial resolution of a SIL imaging system by reducing the waist size of the photonic jet, but it always leads to intensity loss due to central masking of the incoming plane wave. In this letter, we report a particularly sized millimetre-wave cuboid SIL with intensity-enhanced amplitude mask apodization for the first time. It is able to simultaneously deliver extra intensity enhancement and waist narrowing to the produced photonic jet. Both numerical simulation and experimental verification of the intensity-enhanced apodization effect are demonstrated using a copper-masked Teflon cuboid SIL with 22-mm side length under radiation of a plane wave with 8-mm wavelength. Peak intensity enhancement and the lateral resolution of the optical system increase by about 36.0% and 36.4% in this approach, respectively.

  14. Fractal and topological sustainable methods of overcoming expected uncertainty in the radiolocation of low-contrast targets and in the processing of weak multi-dimensional signals on the background of high-intensity noise: A new direction in the statistical decision theory

    NASA Astrophysics Data System (ADS)

    Potapov, A. A.

    2017-11-01

    The main purpose of this work is to interpret the main directions of radio physics, radio engineering and radiolocation in a “fractal” language, which opens new approaches and generalizations for promising future radio systems. We introduce a new kind of and approach to modern radiolocation: fractal-scaling or scale-invariant radiolocation. New topological signatures and methods for detecting low-contrast objects against a high-intensity noise background are presented. This leads to fundamental changes in the theoretical structure of radiolocation itself and in its mathematical apparatus. The fractal radio systems conception, sampling topology, global fractal-scaling approach and the fractal paradigm underlie the scientific direction established by the author in Russia and worldwide for the first time.

  15. Optically stimulated luminescence dosimetry with gypsum wallboard (drywall).

    PubMed

    Thompson, Jeroen W; Burdette, Kevin E; Inrig, Elizabeth L; Dewitt, Regina; Mistry, Rajesh; Rink, W Jack; Boreham, Douglas R

    2010-09-01

    Gypsum wallboard (drywall) represents an attractive target for retrospective dosimetry by optically stimulated luminescence (OSL) in the event of a radiological accident or malicious use of nuclear material. In this study, wallboard is shown to display a radiation-induced luminescence signal (RIS) as well as a natural background signal (NS), which is comparable in intensity to the RIS. Excitation and emission spectra show that maximum luminescence intensity is obtained for stimulation with blue light-emitting diodes (470 nm) and for detection in the ultraviolet region (290-370 nm). It is necessary to decrease the optical stimulation power dramatically in order to adequately separate the RIS from the interfering background signal. The necessary protocols are developed for accurately measuring the absorbed dose as low as 500 mGy and demonstrate that the RIS decays logarithmically with storage time, with complete erasure expected within 1-4 d.

  16. Role of Cigarette Sensory Cues in Modifying Puffing Topography

    PubMed Central

    Rees, Vaughan W.; Kreslake, Jennifer M.; Wayne, Geoffrey Ferris; O Connor, Richard J.; Cummings, K. Michael; Connolly, Gregory N.

    2012-01-01

    Background Human puffing topography promotes tobacco dependence by ensuring nicotine delivery, but the factors that determine puffing behavior are not well explained by existing models. Chemosensory cues generated by variations in cigarette product design features may serve as conditioned cues to allow the smoker to optimize nicotine delivery by adjusting puffing topography. Internal tobacco industry research documents were reviewed to understand the influence of sensory cues on puffing topography, and to examine how the tobacco industry has designed cigarettes, including modified risk tobacco products (MRTPs), to enhance puffing behavior to optimize nicotine delivery and product acceptability. Methods Relevant internal tobacco industry documents were identified using systematic searching with key search terms and phrases, and then snowball sampling method was applied to establish further search terms. Results Modern cigarettes are designed by cigarette manufacturers to provide sensory characteristics that not only maintain appeal, but provide cues which inform puffing intensity. Alterations in the chemosensory cues provided in tobacco smoke play an important role in modifying smoking behavior independently of the central effects of nicotine. Conclusions An associative learning model is proposed to explain the influence of chemosensory cues on variation in puffing topography. These cues are delivered via tobacco smoke and are moderated by design features and additives used in cigarettes. The implications for regulation of design features of modified risk tobacco products, which may act to promote intensive puffing while lowering risk perceptions, are discussed. PMID:22365895

  17. Why do horseflies need polarization vision for host detection? Polarization helps tabanid flies to select sunlit dark host animals from the dark patches of the visual environment

    PubMed Central

    Szörényi, Tamás; Pereszlényi, Ádám; Gerics, Balázs; Hegedüs, Ramón; Barta, András

    2017-01-01

    Horseflies (Tabanidae) are polarotactic, being attracted to linearly polarized light when searching for water or host animals. Although it is well known that horseflies prefer sunlit dark and strongly polarizing hosts, the reason for this preference is unknown. According to our hypothesis, horseflies use their polarization sensitivity to look for targets with higher degrees of polarization in their optical environment, which as a result facilitates detection of sunlit dark host animals. In this work, we tested this hypothesis. Using imaging polarimetry, we measured the reflection–polarization patterns of a dark host model and a living black cow under various illumination conditions and with different vegetation backgrounds. We focused on the intensity and degree of polarization of light originating from dark patches of vegetation and the dark model/cow. We compared the chances of successful host selection based on either intensity or degree of polarization of the target and the combination of these two parameters. We show that the use of polarization information considerably increases the effectiveness of visual detection of dark host animals even in front of sunny–shady–patchy vegetation. Differentiation between a weakly polarizing, shady (dark) vegetation region and a sunlit, highly polarizing dark host animal increases the efficiency of host search by horseflies. PMID:29291065
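
    The polarimetric quantities compared above can be illustrated with the standard three-angle reduction used in imaging polarimetry: Stokes I, Q, U are recovered from frames taken through a linear polarizer at 0°, 60° and 120°, and the degree of linear polarization is sqrt(Q² + U²)/I. The pixel values below are synthetic and only meant to show that two equally dark patches can differ strongly in polarization.

```python
import numpy as np

# Sketch of the standard three-angle Stokes reduction used in imaging
# polarimetry: recover I, Q, U and the degree of linear polarization (DoLP)
# from frames taken through a linear polarizer at 0, 60 and 120 degrees.

def stokes_from_three_angles(m0, m60, m120):
    i = 2.0 / 3.0 * (m0 + m60 + m120)
    q = 2.0 / 3.0 * (2.0 * m0 - m60 - m120)
    u = 2.0 / np.sqrt(3.0) * (m60 - m120)
    dolp = np.sqrt(q ** 2 + u ** 2) / i
    return i, dolp

# A "sunlit dark host" pixel: low intensity, strongly polarized reflection.
host = stokes_from_three_angles(m0=42.0, m60=18.0, m120=20.0)
# A "shady vegetation" pixel: similarly dark but only weakly polarized.
shade = stokes_from_three_angles(m0=27.0, m60=25.0, m120=24.0)

print(f"host:  I = {host[0]:.1f}, DoLP = {host[1]:.2f}")
print(f"shade: I = {shade[0]:.1f}, DoLP = {shade[1]:.2f}")
```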

  18. Why do horseflies need polarization vision for host detection? Polarization helps tabanid flies to select sunlit dark host animals from the dark patches of the visual environment.

    PubMed

    Horváth, Gábor; Szörényi, Tamás; Pereszlényi, Ádám; Gerics, Balázs; Hegedüs, Ramón; Barta, András; Åkesson, Susanne

    2017-11-01

    Horseflies (Tabanidae) are polarotactic, being attracted to linearly polarized light when searching for water or host animals. Although it is well known that horseflies prefer sunlit dark and strongly polarizing hosts, the reason for this preference is unknown. According to our hypothesis, horseflies use their polarization sensitivity to look for targets with higher degrees of polarization in their optical environment, which as a result facilitates detection of sunlit dark host animals. In this work, we tested this hypothesis. Using imaging polarimetry, we measured the reflection-polarization patterns of a dark host model and a living black cow under various illumination conditions and with different vegetation backgrounds. We focused on the intensity and degree of polarization of light originating from dark patches of vegetation and the dark model/cow. We compared the chances of successful host selection based on either intensity or degree of polarization of the target and the combination of these two parameters. We show that the use of polarization information considerably increases the effectiveness of visual detection of dark host animals even in front of sunny-shady-patchy vegetation. Differentiation between a weakly polarizing, shady (dark) vegetation region and a sunlit, highly polarizing dark host animal increases the efficiency of host search by horseflies.

  19. A Bridge from Optical to Infrared Galaxies: Explaining Local Properties, Predicting Galaxy Counts and the Cosmic Background Radiation

    NASA Astrophysics Data System (ADS)

    Totani, T.; Takeuchi, T. T.

    2001-12-01

    A new model of infrared galaxy counts and the cosmic background radiation (CBR) is developed by extending a model for optical/near-infrared galaxies. Important new characteristics of this model are that mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies, and that the big grain dust temperature T_dust is calculated based on a physical consideration for energy balance, rather than using the empirical relation between T_dust and total infrared luminosity L_IR found in local galaxies, which has been employed in most of previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, the L_IR-T_dust correlation, and the infrared luminosity function are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR by this model. We found considerably different results from most of previous works based on the empirical L_IR-T_dust relation; especially, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K). This indicates that intense starbursts of forming elliptical galaxies should have occurred at z ~ 2-3, in contrast to the previous results that significant starbursts beyond z ~ 1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The authors thank the financial support by the Japan Society for Promotion of Science.

  20. Roles of hot electrons in generating upper-hybrid waves in the earth's radiation belt

    DOE PAGES

    Hwang, J.; Shin, D. K.; Yoon, P. H.; ...

    2017-05-01

    Electrostatic fluctuations near upper-hybrid frequency, which are sometimes accompanied by multiple-harmonic electron cyclotron frequency bands above and below the upper-hybrid frequency, are common occurrences in the Earth's radiation belt, as revealed through the twin Van Allen Probe spacecrafts. It is customary to use the upper-hybrid emissions for estimating the background electron density, which in turn can be used to determine the plasmapause locations, but the role of hot electrons in generating such fluctuations has not been discussed in detail. The present paper carries out detailed analyses of data from the Waves instrument, which is part of the Electric and Magnetic Field Instrument Suite and Integrated Science suite onboard the Van Allen Probes. Combined with the theoretical calculation, it is shown that the peak intensity associated with the upper-hybrid fluctuations might be predominantly determined by tenuous but hot electrons and that denser cold background electrons do not seem to contribute much to the peak intensity. This finding shows that upper-hybrid fluctuations detected during quiet time are not only useful for the determination of the background cold electron density but also contain information on the ambient hot electrons population as well.
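
    The routine density estimate mentioned above follows from f_uh² = f_pe² + f_ce², with f_pe [Hz] ≈ 8980 √(n_e [cm⁻³]) and f_ce [Hz] ≈ 28 B [nT]. The sketch below applies this relation to illustrative numbers, not to Van Allen Probes data.

```python
# Sketch of the standard use of the upper-hybrid line: infer the background
# electron density from f_uh^2 = f_pe^2 + f_ce^2. Example inputs are
# illustrative placeholders.

def electron_density(f_uh_hz, b_nt):
    f_ce = 28.0 * b_nt                   # electron cyclotron frequency, Hz
    f_pe_sq = f_uh_hz ** 2 - f_ce ** 2   # plasma frequency squared, Hz^2
    return f_pe_sq / 8980.0 ** 2         # n_e in cm^-3

f_uh = 60.0e3   # 60 kHz upper-hybrid band
b = 200.0       # 200 nT local magnetic field
print(f"inferred n_e ~ {electron_density(f_uh, b):.1f} cm^-3")
```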

  1. Roles of hot electrons in generating upper-hybrid waves in the earth's radiation belt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, J.; Shin, D. K.; Yoon, P. H.

    Electrostatic fluctuations near the upper-hybrid frequency, which are sometimes accompanied by multiple-harmonic electron cyclotron frequency bands above and below the upper-hybrid frequency, are common occurrences in the Earth's radiation belt, as revealed by the twin Van Allen Probes spacecraft. It is customary to use the upper-hybrid emissions for estimating the background electron density, which in turn can be used to determine plasmapause locations, but the role of hot electrons in generating such fluctuations has not been discussed in detail. The present paper carries out detailed analyses of data from the Waves instrument, which is part of the Electric and Magnetic Field Instrument Suite and Integrated Science suite onboard the Van Allen Probes. Combined with theoretical calculations, it is shown that the peak intensity associated with the upper-hybrid fluctuations might be predominantly determined by tenuous but hot electrons and that the denser cold background electrons do not seem to contribute much to the peak intensity. This finding shows that upper-hybrid fluctuations detected during quiet times are not only useful for determining the background cold electron density but also contain information on the ambient hot electron population.

  2. Picturing and modelling catchments by representative hillslopes

    NASA Astrophysics Data System (ADS)

    Loritz, Ralf; Hassler, Sibylle; Jackisch, Conrad; Zehe, Erwin

    2016-04-01

    Hydrological modelling studies often start with a qualitative sketch of the hydrological processes of a catchment. These so-called perceptual models are often pictured as hillslopes and are generalizations displaying only the dominant and relevant processes of a catchment or hillslope. The problem with these models is that they are prone to being overly predetermined by the designer's background and experience. Moreover, it is difficult to know whether such a picture is correct and contains enough complexity to represent the system under study. Nevertheless, because of their qualitative form, perceptual models are easy to understand and can be an excellent tool for multidisciplinary exchange between researchers with different backgrounds, helping to identify the dominant structures and processes in a catchment. In our study we explore whether a perceptual model built upon an intensive field campaign may serve as a blueprint for setting up representative hillslopes in a hydrological model to reproduce the functioning of two distinctly different catchments. We use a physically based 2D hillslope model which has proven capable of being driven by measured soil-hydrological parameters. A key asset of our approach is that the model structure itself remains a picture of the perceptual model, which is benchmarked against a) geophysical images of the subsurface and b) observed dynamics of discharge, distributed state variables and fluxes (soil moisture, matric potential and sap flow). With this approach we are able to set up two behavioural model structures which allow simulation of the most important hydrological fluxes and state variables in good accordance with the available observations in the 19.4 km2 Colpach catchment and the 4.5 km2 Wollefsbach catchment in Luxembourg, without the need for calibration. This corroborates, contrary to widespread opinion, that a) lower mesoscale catchments may be modelled by representative hillslopes and b) physically based models can be parametrized from comprehensive field data and a good perceptual model. Our results particularly indicate that the main challenge in understanding and modelling the seasonal water balance of a catchment is a proper representation of the phenological cycle of vegetation, not solely the structure of the subsurface and the spatial variability of soil hydraulic parameters.

  3. Implementation of a chemical background method (OH-CHEM) for measurements of OH using the Leeds FAGE instrument: Characterisation and observations from a coastal location

    NASA Astrophysics Data System (ADS)

    Woodward-Massey, R.; Cryer, D. R.; Whalley, L. K.; Ingham, T.; Seakins, P. W.; Heard, D. E.; Stimpson, L. M.

    2015-12-01

    The removal of pollutants and greenhouse gases in the troposphere is dominated by reactions with the hydroxyl radical (OH), which is closely coupled to the hydroperoxy radical (HO2). Comparisons of the levels of OH and HO2 observed during field campaigns with the results of detailed chemical box models serve as a vital tool to assess our understanding of the underlying chemical mechanisms involved in tropospheric oxidation. Recent measurements of OH and HO2 radicals by some instruments are significantly higher than model predictions in certain environments, especially those influenced by high emissions of biogenic compounds such as isoprene, prompting intense laboratory research to account for such discrepancies. While current chemical mechanisms are likely incomplete, it is also possible that, at least in part, these elevated radical observations have been influenced by instrumental biases from interfering species. Recent studies have suggested that fluorescence assay by gas expansion (FAGE) instruments may be susceptible to an unknown interference in the measurement of OH. This hypothesis can be tested through the implementation of an alternative method to determine the OH background signal, whereby OH is removed by the addition of a chemical scavenger prior to sampling by FAGE. The Leeds FAGE instrument was modified to facilitate this method by the construction of an inlet pre-injector (IPI), in which OH is removed through reaction with propane. The modified Leeds FAGE instrument was deployed at a coastal location in southeast England during summer 2015 as part of the ICOZA (Integrated Chemistry of OZone in the Atmosphere) project. Measurements of OH made using both background methods will be presented, alongside results from laboratory characterisation experiments and details of the IPI design.

  4. Cosmic Metal Production and the Contribution of QSO Absorption Systems to the Ionizing Background

    NASA Technical Reports Server (NTRS)

    Madau, Piero; Shull, J. Michael

    1996-01-01

    The recent discovery by Cowie et al. (1995) and Tytler et al. (1995) of metals in the Ly alpha clouds shows that the intergalactic medium (IGM) at high redshift is contaminated by the products of stars and suggests that ionizing photons from massive star formation may be a significant contributor to the UV background radiation at early epochs. We assess the validity of the stellar photoionization hypothesis. Based on recent computations of metal yields and O-star Lyman continuum (Lyc) fluxes, we find that 0.2 percent of the rest-mass energy of the metals produced is radiated as Lyc. By modeling the transfer of ionizing radiation through the IGM and the rate of chemical enrichment, we demonstrate that the background intensity of photons at 1 Ryd that accompanies the production of metals in the Ly alpha forest clouds may be significant, approaching 0.5 x 10(exp -21) ergs cm(-2) s(-1) Hz(-1) sr(-1) at z ~ 3 if the Lyc escape fraction is greater than or equal to 0.25. Together with quasars, massive stars could then, in principle, provide the hydrogen and helium Lyc photons required to ionize the universe at high redshifts. We propose that observations of the He II Gunn-Peterson effect and of the metal ionization states of the Ly alpha forest and Lyman-limit absorbers should show the signature of a stellar spectrum. We also note that the stellar photoionization model fails if a large fraction of the UV radiation emitted from stars cannot escape into the IGM, as suggested by the recent Hopkins Ultraviolet Telescope observations by Leitherer et al. (1995) of low-redshift starburst galaxies, or if most of the metals observed at z ~ 3 were produced at much earlier epochs.

  5. A general method for baseline-removal in ultrafast electron powder diffraction data using the dual-tree complex wavelet transform.

    PubMed

    René de Cotret, Laurent P; Siwick, Bradley J

    2017-07-01

    The general problem of background subtraction in ultrafast electron powder diffraction (UEPD) is presented with a focus on the diffraction patterns obtained from materials of moderately complex structure which contain many overlapping peaks and effectively no scattering vector regions that can be considered exclusively background. We compare the performance of background subtraction algorithms based on discrete and dual-tree complex (DTCWT) wavelet transforms when applied to simulated UEPD data on the M1-R phase transition in VO2 with a time-varying background. We find that the DTCWT approach is capable of extracting intensities that are accurate to better than 2% across the whole range of scattering vectors simulated, effectively independent of delay time. A Python package is available.
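
    A minimal sketch of the kind of iterative wavelet baseline estimation discussed above is shown below, using the ordinary discrete wavelet transform from PyWavelets as a stand-in for the dual-tree complex transform used in the paper; the wavelet choice, decomposition level and iteration count are illustrative assumptions.

    ```python
    import numpy as np
    import pywt

    def wavelet_baseline(signal, wavelet="sym6", level=6, n_iter=20):
        """Estimate a slowly varying baseline by iteratively keeping only the
        low-frequency (approximation) part of the wavelet decomposition and
        clipping the working signal to lie below the current estimate."""
        work = np.asarray(signal, dtype=float).copy()
        baseline = work.copy()
        for _ in range(n_iter):
            coeffs = pywt.wavedec(work, wavelet, level=level, mode="smooth")
            # zero the detail coefficients so only the smooth trend survives
            coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
            baseline = pywt.waverec(coeffs, wavelet, mode="smooth")[: len(work)]
            # peaks are pushed down so they do not inflate the next baseline estimate
            work = np.minimum(work, baseline)
        return baseline

    # Usage: subtract the estimated baseline from a 1D powder pattern.
    # pattern = ...  # measured diffraction intensities vs scattering vector
    # corrected = pattern - wavelet_baseline(pattern)
    ```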

  6. Formation of 2D bright spatial solitons in lithium niobate with photovoltaic response and incoherent background

    NASA Astrophysics Data System (ADS)

    Pustozerov, A.; Shandarov, V.

    2017-12-01

    The influence of incoherent background illumination, produced by light-emitting diodes (LEDs) of different average wavelengths and by a laser diode emitting in the blue region of the visible spectrum, on the diffraction characteristics of narrow coherent He-Ne laser beams due to refractive index changes in an Fe-doped lithium niobate sample is studied. It has been experimentally demonstrated that nonlinear diffraction of red beams with a wavelength of 633 nm and full width at half maximum (FWHM) diameters of about 15 μm may be totally compensated using background light with average wavelengths of 450-465 nm. To provide the necessary intensity of the incoherent background, combinations of spherical and cylindrical concave lenses with a blue LED, and a laser diode module without focusing of its beam, have been used.

  7. A preliminary measurement of the cosmic microwave background spectrum by the Cosmic Background Explorer (COBE) satellite

    NASA Technical Reports Server (NTRS)

    Mather, J. C.; Cheng, E. S.; Shafer, R. A.; Bennett, C. L.; Boggess, N. W.; Dwek, E.; Hauser, M. G.; Kelsall, T.; Moseley, S. H., Jr.; Silverberg, R. F.

    1990-01-01

    A preliminary spectrum is presented of the background radiation between 1 and 20 cm(-1) from regions near the north Galactic pole, as observed by the FIRAS instrument on the COBE satellite. The spectral resolution is 1 cm(-1). The spectrum is well fitted by a blackbody with a temperature of 2.735 ± 0.06 K, and the deviation from a blackbody is less than 1 percent of the peak intensity over the range 1-20 cm(-1). These new data show no evidence for the submillimeter excess previously reported by Matsumoto et al. (1988) in the cosmic microwave background. Further analysis and additional data are expected to improve the sensitivity to deviations from a blackbody spectrum by an order of magnitude.
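
    A minimal sketch of fitting a blackbody temperature to a spectrum measured over 1-20 cm(-1), in the spirit of the FIRAS analysis, is given below; the simulated data and noise level are purely illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    h, c, k = 6.62607e-34, 2.99792458e8, 1.380649e-23  # SI constants

    def planck_wavenumber(sigma_invcm, T):
        """Blackbody spectral radiance as a function of wavenumber sigma [cm^-1],
        returned in W m^-2 sr^-1 per cm^-1."""
        s = sigma_invcm * 100.0                                      # convert to m^-1
        b = 2.0 * h * c**2 * s**3 / np.expm1(h * c * s / (k * T))   # per m^-1
        return b * 100.0                                             # per cm^-1

    # Simulated "measurement": a 2.735 K blackbody with small multiplicative noise.
    sigma = np.linspace(1.0, 20.0, 40)
    rng = np.random.default_rng(0)
    spectrum = planck_wavenumber(sigma, 2.735) * (1.0 + 0.005 * rng.standard_normal(sigma.size))

    T_fit, T_cov = curve_fit(planck_wavenumber, sigma, spectrum, p0=[2.7])
    print(f"fitted temperature: {T_fit[0]:.3f} +/- {np.sqrt(T_cov[0, 0]):.3f} K")
    ```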

  8. Colour image segmentation using unsupervised clustering technique for acute leukemia images

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.

    2015-05-01

    Colour image segmentation has become more popular in computer vision due to its important role in most medical analysis tasks. This paper presents a comparison between different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models for segmenting acute leukemia images. First, partial contrast stretching is applied to the leukemia images to increase the visibility of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models in order to segment the blast cells from the red blood cells and background regions in the leukemia images. The different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is the best at segmenting the nuclei of the blast cells in acute leukemia images compared with the other colour components of the RGB and HSI colour models.
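
    A minimal sketch of clustering on the saturation component is given below, using standard k-means from scikit-learn as a stand-in for the moving k-means algorithm used in the paper; the three-cluster assumption (blast-cell nuclei, red blood cells, background) and the way the nucleus cluster is picked are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def saturation_channel(rgb):
        """Saturation of the HSI colour model: S = 1 - 3*min(R,G,B)/(R+G+B)."""
        rgb = rgb.astype(float)
        total = rgb.sum(axis=-1) + 1e-12
        return 1.0 - 3.0 * rgb.min(axis=-1) / total

    def segment_saturation(rgb_image, n_clusters=3):
        """Cluster pixels on the saturation component alone and return a label map.
        The cluster with the highest mean saturation is taken as the nucleus class."""
        s = saturation_channel(rgb_image)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0) \
            .fit_predict(s.reshape(-1, 1)).reshape(s.shape)
        nucleus_label = max(range(n_clusters), key=lambda c: s[labels == c].mean())
        return labels, labels == nucleus_label

    # Usage (illustrative): rgb = imageio.imread("blood_smear.png")
    # labels, nucleus_mask = segment_saturation(rgb)
    ```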

  9. Seasonal and diurnal variations of ozone at a high-altitude mountain baseline station in East Asia

    NASA Astrophysics Data System (ADS)

    Ou Yang, Chang-Feng; Lin, Neng-Huei; Sheu, Guey-Rong; Lee, Chung-Te; Wang, Jia-Lin

    2012-01-01

    Continuous measurements of tropospheric ozone were conducted at the Lulin Atmospheric Background Station (LABS) at an altitude of 2862 m from April 2006 to the end of 2009. Distinct seasonal variations in the ozone concentration were observed at the LABS, with a springtime maximum and a summertime minimum. Based on a backward trajectory analysis, CO data, and ozonesondes, the springtime maximum was most likely caused by the long-range transport of air masses from Southeast Asia, where biomass burning was intense in spring. In contrast, a greater Pacific influence contributed to the summertime minimum. In addition to seasonal variations, a distinct diurnal pattern was also observed at the LABS, with a daytime minimum and a nighttime maximum. The daytime ozone minimum was presumably caused by sinks of dry deposition and NO titration during the up-slope transport of surface air. The higher nighttime values, however, could be the result of air subsidence at night bringing ozone aloft to the LABS. After filtering out the daytime data to remove possible local surface contributions, the average background ozone value for the period of 2006-2009 was approximately 36.6 ppb, increased from 32.3 ppb prior to data filtering, without any changes in the seasonal pattern. By applying HYSPLIT4 model analysis, the origins of the air masses contributing to the background ozone observed at the LABS were investigated.
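
    A minimal sketch of the day/night filtering step described above, assuming an hourly time-indexed ozone series in a pandas DataFrame, is shown below; the column name and the hours treated as daytime are illustrative choices, not the criteria used at the LABS.

    ```python
    import pandas as pd

    def background_ozone(df, ozone_col="o3_ppb", day_start=8, day_end=17):
        """Remove daytime hours (when up-slope transport brings local influence)
        and return the mean of the remaining, presumably background, ozone."""
        night = df[(df.index.hour < day_start) | (df.index.hour > day_end)]
        return night[ozone_col].mean()

    # Illustrative usage with a dummy hourly series showing a daytime dip:
    idx = pd.date_range("2007-01-01", periods=48, freq="h")
    o3 = pd.Series(36.0, index=idx)
    o3[idx.hour.isin(range(8, 18))] -= 4.0        # daytime dip from deposition/NO titration
    print(background_ozone(o3.to_frame("o3_ppb")))  # -> 36.0
    ```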

  10. Evaluation of high intensity sheeting for overhead highway signs.

    DOT National Transportation Integrated Search

    1974-01-01

    The current practice in Virginia is to reflectorize and illuminate all overhead highway signs because of their important role in the safe and orderly flow of traffic. Reflectorization is obtained by using reflective sheeting as background and legend ...

  11. Potential indirect effects of aerosol on tropical cyclone intensity: convective fluxes and cold-pool activity

    NASA Astrophysics Data System (ADS)

    Krall, G. M.; Cottom, W. R.

    2012-01-01

    Observational and model evidence suggest that a 2008 Western Pacific typhoon (NURI) ingested elevated concentrations of aerosol as it neared the Chinese coast. This study uses a regional model with two-moment bin-emulating microphysics to simulate the typhoon as it enters the field of elevated aerosol concentrations. A clean maritime field of cloud condensation nuclei (CCN) was prescribed as the marine background CCN concentration and then, based on satellite and global aerosol model output, increased to pollution levels and further enhanced in sensitivity tests. The typhoon was simulated for 96 h beginning 17 August 2008. During the final 60 h, CCN concentrations were enhanced as the storm neared the Philippines and coastal China. The model was initialized with both global reanalysis model data and irregularly spaced dropsonde data from the 2008 T-PARC observational campaign using an objective analysis routine. At 36 h, the internal nudging of the model was switched off and the model was allowed to evolve freely. As the typhoon encountered the elevated CCN in the sensitivity tests, a significant perturbation of wind speed, convective fluxes, and hydrometeor species behavior was simulated. Early during the ingestion of enhanced CCN, precipitation was reduced due to suppressed collision and coalescence, and storm winds increased in strength. Subsequently, owing to the reduced fall speeds of the smaller drops, greater amounts of condensate were thrust into supercooled levels where the drops froze, releasing greater amounts of latent heat of freezing. Convection thereby intensified, which resulted in enhanced rainfall and more vigorous convectively produced downdrafts. As the convection intensified in the outer rainbands, the storm drifted over the developing cold pools. The enhanced cold pools blocked the inflow of warm, moist air into the core of the typhoon, which led to a weakening of the typhoon with significantly reduced low-level wind speeds. The very high amounts of pollution aerosol resulted in large amounts of condensate being thrust into the storm anvil, which weakened convective downdrafts and cold pools, yet the system still showed reductions in wind speed (although weaker) compared with the clean control run. This study suggests that the ingestion of elevated amounts of CCN into a tropical cyclone (TC) can appreciably alter the intensity of the storm. This implies that intensity prediction of TCs would be improved by including indirect aerosol effects. However, the pollution aerosols have very little impact on the storm track.

  12. Infrared monitoring of the Space Station environment

    NASA Technical Reports Server (NTRS)

    Kostiuk, Theodor; Jennings, Donald E.; Mumma, Michael J.

    1988-01-01

    The measurement and monitoring of infrared emission in the environment of the Space Station has a twofold importance - for the study of the phenomena themselves and as an aid in planning and interpreting Station-based infrared experiments. Spectral measurements of the infrared component of the spacecraft glow will, along with measurements in other spectral regions, provide data necessary to fully understand and model the physical and chemical processes producing these emissions. The monitoring of the intensity of these emissions will provide background limits for Space Station based infrared experiments and permit the determination of optimum instrument placement and pointing direction. Continuous monitoring of temporal changes in the background radiation (glow) will also permit better interpretation of Station-based infrared earth sensing and astronomical observations. The primary processes producing infrared emissions in the Space Station environment are: (1) gas-phase excitation of Station-generated molecules (e.g., CO2, H2O, organics ...) by collisions with the ambient flux of mainly O and N2; and (2) molecular excitation and the generation of new species by collisions of ambient molecules with Station surfaces. The authors provide a list of the resulting species, transition energies, excitation cross sections and relevant time constants. The modeled spectrum of the excited species occurs primarily at wavelengths shorter than 8 micrometers. Emissions at longer wavelengths may become important during rocket firings or in the presence of dust.

  13. Characterizing the spatial variability of local and background concentration signals for air pollution at the neighbourhood scale

    NASA Astrophysics Data System (ADS)

    Shairsingh, Kerolyn K.; Jeong, Cheol-Heon; Wang, Jonathan M.; Evans, Greg J.

    2018-06-01

    Vehicle emissions represent a major source of air pollution in urban districts, producing highly variable concentrations of some pollutants within cities. The main goal of this study was to identify a deconvolution method to characterize variability in local, neighbourhood and regional background concentration signals. This method was validated by examining how traffic-related and non-traffic-related sources influenced the different signals. Sampling with a mobile monitoring platform was conducted across the Greater Toronto Area over a seven-day period during summer 2015. This mobile monitoring platform was equipped with instruments for measuring a wide range of pollutants at time resolutions of 1 s (ultrafine particles, black carbon) to 20 s (nitric oxide, nitrogen oxides). The monitored neighbourhoods were selected based on their land use categories (e.g. industrial, commercial, parks and residential areas). The high time-resolution data allowed pollutant concentrations to be separated into signals representing background and local concentrations. The background signals were determined using a spline of minimums; local signals were derived by subtracting the background concentration from the total concentration. Our study showed that temporal scales of 500 s and 2400 s were associated with the neighbourhood and regional background signals, respectively. The percent contribution of the pollutant concentration attributed to local signals was highest for nitric oxide (NO) (37-95%) and lowest for ultrafine particles (9-58%); the ultrafine particles were predominantly regional (32-87%) in origin on these days. Local concentrations showed stronger associations than total concentrations with traffic intensity in a 100 m buffer (ρ: 0.21-0.44). The neighbourhood-scale signal also showed stronger associations with industrial facilities than the total concentrations. That the different signals associate more strongly with different land uses suggests that resolving the ambient concentrations differentiates which emission sources drive the variability in each signal. A benefit of this deconvolution method is that it may reduce exposure misclassification when coupled with predictive models.
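
    A minimal sketch of the background/local decomposition described above is given below, using a smoothed rolling minimum as a stand-in for the spline-of-minimums; the window length corresponds to the 2400 s regional scale only under an assumed 1 s sampling rate, and the synthetic series is illustrative.

    ```python
    import numpy as np
    import pandas as pd

    def split_local_background(series, window_s=2400, sample_s=1):
        """Estimate a slowly varying background as a smoothed rolling minimum of the
        pollutant time series and return (background, local = total - background)."""
        win = max(int(window_s / sample_s), 3)
        s = pd.Series(series, dtype=float)
        background = (s.rolling(win, center=True, min_periods=1).min()
                        .rolling(win, center=True, min_periods=1).mean())
        local = (s - background).clip(lower=0.0)
        return background.to_numpy(), local.to_numpy()

    # Illustrative usage: 1 Hz NO concentrations with transient local plumes on a drifting background.
    t = np.arange(6000)
    total = 5 + 0.001 * t + 30 * np.exp(-((t % 900) - 450) ** 2 / 2000.0)
    bg, local = split_local_background(total)
    print(local.max(), bg.mean())
    ```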

  14. Mapping species abundance by a spatial zero-inflated Poisson model: a case study in the Wadden Sea, the Netherlands.

    PubMed

    Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap

    2016-01-01

    The objective of the study was to provide a general procedure for mapping species abundance when the data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate the counts to environmental covariates. Two models were considered, one with fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli process (species prevalence) and a Poisson process (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but it is computer intensive.
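
    A minimal sketch of fitting a zero-inflated Poisson model with covariates, in the spirit of the Bernoulli-prevalence/Poisson-intensity structure described above, is given below using statsmodels; the simulated covariates are illustrative and the spatially correlated, geostatistical aspects of the paper's model are not reproduced here.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(1)
    n = 3451
    depth = rng.normal(size=n)                 # illustrative environmental covariates
    silt = rng.normal(size=n)

    # Simulate the two processes: Bernoulli presence, then Poisson counts when present.
    p_present = 1.0 / (1.0 + np.exp(-(-0.6 + 0.8 * depth)))
    lam = np.exp(0.5 + 0.6 * silt)
    counts = rng.binomial(1, p_present) * rng.poisson(lam)

    X = sm.add_constant(np.column_stack([depth, silt]))
    zip_model = ZeroInflatedPoisson(counts, X, exog_infl=X, inflation="logit")
    res = zip_model.fit(method="bfgs", maxiter=500, disp=False)
    print(res.summary())

    # Unconditional intensity (prevalence x intensity) for sites with covariates only:
    print(res.predict(X[:5], exog_infl=X[:5]))
    ```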

  15. Modeling midwave infrared muzzle flash spectra from unsuppressed and flash-suppressed large caliber munitions

    NASA Astrophysics Data System (ADS)

    Steward, Bryan J.; Perram, Glen P.; Gross, Kevin C.

    2012-07-01

    Time-resolved infrared spectra of firings from a 152 mm howitzer were acquired over an 1800-6000 cm-1 spectral range using a Fourier-transform spectrometer. The instrument collected data primarily at 32 cm-1 spectral resolution and 100 Hz temporal resolution. Munitions included unsuppressed and chemically flash-suppressed propellants. Secondary combustion occurred with the unsuppressed propellants, resulting in flash emissions lasting ˜100 ms and dominated by H2O and CO2 spectral structure. Non-combusting plume emissions were one-tenth as intense and approached background levels within 20-40 ms. A low-dimensional phenomenological model was used to reduce the data to temperatures, soot absorbances, and column densities of H2O, CO2, CH4, and CO. The combusting plumes exhibit peak temperatures of ˜1400 K, areas of greater than 32 m2, low soot emissivity of ˜0.04, with nearly all the CO converted to CO2. The non-combusting plumes exhibit lower temperatures of ˜1000 K, areas of ˜5 m2, soot emissivity of greater than 0.38, and CO as the primary product. Maximum fit residuals relative to peak intensity are 14% and 8.9% for combusting and non-combusting plumes, respectively. The model was generalized to account for turbulence-induced variations in the muzzle plumes. Distributions of temperature and concentration in 1-2 spatial regions reduce the maximum residuals by 40%. A two-region model of the combusting plumes provides a plausible interpretation as a ˜1550 K, optically thick plume core and a ˜2550 K, thin, surface-layer flame front. The temperature rate of change was used to characterize timescales and energy release for the plume emissions. The heat of combustion was estimated to be ˜5 MJ/kg.

  16. Ionization Waves of Arbitrary Velocity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turnbull, D.; Franke, P.; Katz, J.

    The flying focus is a technique in which a chirped laser beam is focused by a chromatic lens to produce an extended focal spot within which laser intensity can propagate at any velocity. If the intensity is above the ionization threshold of a background gas, an ionization wave will track the ionization threshold intensity isosurface as it propagates. We report on the demonstration of such ionization waves of arbitrary velocity. Subluminal and superluminal ionization fronts were produced, both forward- and backward-propagating relative to the ionizing laser. In conclusion, all backward and all superluminal cases mitigated the issue of ionization-induced refraction that typically challenges the formation of long, contiguous plasma channels.

  17. Rapid quantification and taxonomic classification of environmental DNA from both prokaryotic and eukaryotic origins using a microarray

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSantis, Todd Z.; Stone, Carol E.; Murray, Sonya R.

    2005-02-22

    A microarray has been designed using 62,358 probes matched to both prokaryotic and eukaryotic small-subunit ribosomal RNA genes. The array categorized environmental DNA into specific phylogenetic clusters in under 9 h. To a background of DNA generated from natural outdoor aerosols, known quantities of rRNA gene copies from distinct organisms were added, producing corresponding hybridization intensity scores that correlated well with their concentrations (r = 0.917). Reproducible differences in microbial community composition were observed by altering the genomic DNA extraction method. Notably, gentle extractions produced peak intensities for Mycoplasmatales and Burkholderiales, whereas vigorous disruption produced peak intensities for Vibrionales, Clostridiales, and Bacillales.

  18. Ionization Waves of Arbitrary Velocity

    DOE PAGES

    Turnbull, D.; Franke, P.; Katz, J.; ...

    2018-05-31

    The flying focus is a technique in which a chirped laser beam is focused by a chromatic lens to produce an extended focal spot within which laser intensity can propagate at any velocity. If the intensity is above the ionization threshold of a background gas, an ionization wave will track the ionization threshold intensity isosurface as it propagates. We report on the demonstration of such ionization waves of arbitrary velocity. Subluminal and superluminal ionization fronts were produced, both forward- and backward-propagating relative to the ionizing laser. In conclusion, all backward and all superluminal cases mitigated the issue of ionization-induced refraction that typically challenges the formation of long, contiguous plasma channels.

  19. Detecting background changes in environments with dynamic foreground by separating probability distribution function mixtures using Pearson's method of moments

    NASA Astrophysics Data System (ADS)

    Jenkins, Colleen; Jordan, Jay; Carlson, Jeff

    2007-02-01

    This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is the automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired over time from a fixed-mounted camera for a series of images will be a mixture of two Gaussian functions: the foreground and the background probability distribution functions. We use Pearson's method of moments to separate the two probability distribution functions. The background function can then be "remembered", and subsequent comparisons of background estimates are used to detect changes in the background. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential of robust parameter estimation techniques as applied to video surveillance.
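
    Pearson's method of moments estimates the two-Gaussian mixture parameters from the first five sample moments; the sketch below illustrates the same foreground/background separation task using an EM-based Gaussian mixture fit from scikit-learn as a stand-in for the moment-based estimator, on purely illustrative intensity samples.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Illustrative pixel-intensity samples: a bright, stable background and a darker,
    # more variable foreground (people moving through the scene).
    background = rng.normal(170.0, 8.0, size=7000)
    foreground = rng.normal(90.0, 25.0, size=3000)
    samples = np.concatenate([background, foreground]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_.ravel())
    weights = gmm.weights_

    # Take the component with the larger weight as the "remembered" background model;
    # a later histogram whose dominant component drifts away from it flags a change.
    bg = int(np.argmax(weights))
    print(f"background model: mean={means[bg]:.1f}, std={stds[bg]:.1f}, weight={weights[bg]:.2f}")
    ```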

  20. Threshold and Beyond: Modeling The Intensity Dependence of Auditory Responses

    PubMed Central

    2007-01-01

    In many studies of auditory-evoked responses to low-intensity sounds, the response amplitude appears to increase roughly linearly with the sound level in decibels (dB), corresponding to a logarithmic intensity dependence. But the auditory system is assumed to be linear in the low-intensity limit. The goal of this study was to resolve the seeming contradiction. Based on assumptions about the rate-intensity functions of single auditory-nerve fibers and the pattern of cochlear excitation caused by a tone, a model for the gross response of the population of auditory nerve fibers was developed. In accordance with signal detection theory, the model denies the existence of a threshold. This implies that regarding the detection of a significant stimulus-related effect, a reduction in sound intensity can always be compensated for by increasing the measurement time, at least in theory. The model suggests that the gross response is proportional to intensity when the latter is low (range I), and a linear function of sound level at higher intensities (range III). For intensities in between, it is concluded that noisy experimental data may provide seemingly irrefutable evidence of a linear dependence on sound pressure (range II). In view of the small response amplitudes that are to be expected for intensity range I, direct observation of the predicted proportionality with intensity will generally be a challenging task for an experimenter. Although the model was developed for the auditory nerve, the basic conclusions are probably valid for higher levels of the auditory system, too, and might help to improve models for loudness at threshold. PMID:18008105
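
    A minimal numerical sketch of the population idea behind the model is given below: single-fibre rate-intensity functions with thresholds spread over a wide level range are summed, and the gross response grows roughly in proportion to intensity at the lowest levels (range I) and roughly linearly with level in dB at high levels (range III). The saturating-rate form and the threshold distribution are illustrative assumptions, not the paper's exact parameterisation.

    ```python
    import numpy as np

    def gross_response(intensity, thresholds):
        """Sum of saturating single-fibre rate-intensity functions r_i(I) = I / (I + theta_i)."""
        I = np.atleast_1d(intensity)[:, None]
        return (I / (I + thresholds[None, :])).sum(axis=1)

    # Fibre thresholds spread log-uniformly over ~60 dB (illustrative population).
    thresholds = 10.0 ** np.random.default_rng(0).uniform(0.0, 6.0, size=2000)

    levels_db = np.arange(-40, 81, 10)
    intensity = 10.0 ** (levels_db / 10.0)
    resp = gross_response(intensity, thresholds)

    for L, r in zip(levels_db, resp):
        print(f"{L:4d} dB  response {r:10.3f}")
    # At the lowest levels the response grows ~10x per 10 dB step (proportional to intensity);
    # at high levels it grows by a roughly constant increment per 10 dB step (linear in dB).
    ```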

  1. Spectral embedding based active contour (SEAC) for lesion segmentation on breast dynamic contrast enhanced magnetic resonance imaging.

    PubMed

    Agner, Shannon C; Xu, Jun; Madabhushi, Anant

    2013-03-01

    Segmentation of breast lesions on dynamic contrast enhanced (DCE) magnetic resonance imaging (MRI) is the first step in lesion diagnosis in a computer-aided diagnosis framework. Because manual segmentation of such lesions is both time consuming and highly susceptible to human error and issues of reproducibility, an automated lesion segmentation method is highly desirable. Traditional automated image segmentation methods such as boundary-based active contour (AC) models require a strong gradient at the lesion boundary. Even when region-based terms are introduced to an AC model, grayscale image intensities often do not allow for clear definition of foreground and background region statistics. Thus, there is a need to find alternative image representations that might provide (1) strong gradients at the margin of the object of interest (OOI); and (2) larger separation between intensity distributions and region statistics for the foreground and background, which are necessary to halt evolution of the AC model upon reaching the border of the OOI. In this paper, the authors introduce a spectral embedding (SE) based AC (SEAC) for lesion segmentation on breast DCE-MRI. SE, a nonlinear dimensionality reduction scheme, is applied to the DCE time series in a voxelwise fashion to reduce several time point images to a single parametric image where every voxel is characterized by the three dominant eigenvectors. This parametric eigenvector image (PrEIm) representation allows for better capture of image region statistics and stronger gradients for use with a hybrid AC model, which is driven by both boundary and region information. They compare SEAC to ACs that employ fuzzy c-means (FCM) and principal component analysis (PCA) as alternative image representations. Segmentation performance was evaluated by boundary and region metrics as well as comparing lesion classification using morphological features from SEAC, PCA+AC, and FCM+AC. On a cohort of 50 breast DCE-MRI studies, PrEIm yielded overall better region and boundary-based statistics compared to the original DCE-MR image, FCM, and PCA based image representations. Additionally, SEAC outperformed a hybrid AC applied to both PCA and FCM image representations. Mean dice similarity coefficient (DSC) for SEAC was significantly better (DSC = 0.74 ± 0.21) than FCM+AC (DSC = 0.50 ± 0.32) and similar to PCA+AC (DSC = 0.73 ± 0.22). Boundary-based metrics of mean absolute difference and Hausdorff distance followed the same trends. Of the automated segmentation methods, breast lesion classification based on morphologic features derived from SEAC segmentation using a support vector machine classifier also performed better (AUC = 0.67 ± 0.05; p < 0.05) than FCM+AC (AUC = 0.50 ± 0.07), and PCA+AC (AUC = 0.49 ± 0.07). In this work, we presented SEAC, an accurate, general purpose AC segmentation tool that could be applied to any imaging domain that employs time series data. SE allows for projection of time series data into a PrEIm representation so that every voxel is characterized by the dominant eigenvectors, capturing the global and local time-intensity curve similarities in the data. This PrEIm allows for the calculation of strong tensor gradients and better region statistics than the original image intensities or alternative image representations such as PCA and FCM. The PrEIm also allows for building a more accurate hybrid AC scheme.
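
    A minimal sketch of the voxel-wise spectral-embedding step is shown below, using scikit-learn's SpectralEmbedding to reduce each voxel's DCE time-intensity curve to three eigenvector coordinates forming the parametric eigenvector image (PrEIm); the affinity construction and the synthetic volume are illustrative assumptions, not necessarily the authors' implementation.

    ```python
    import numpy as np
    from sklearn.manifold import SpectralEmbedding

    def preim_from_dce(dce):
        """dce: array of shape (ny, nx, n_timepoints) with the DCE-MRI time series per voxel.
        Returns a (ny, nx, 3) parametric eigenvector image."""
        ny, nx, nt = dce.shape
        curves = dce.reshape(-1, nt).astype(float)
        embedding = SpectralEmbedding(n_components=3, affinity="nearest_neighbors",
                                      n_neighbors=20, random_state=0)
        coords = embedding.fit_transform(curves)          # (n_voxels, 3)
        return coords.reshape(ny, nx, 3)

    # Illustrative usage on a small synthetic volume (lesion voxels enhance faster):
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 10)
    dce = 0.1 * rng.random((32, 32, 10))
    dce[10:20, 10:20, :] += 1.0 - np.exp(-5.0 * t)        # enhancing "lesion" block
    preim = preim_from_dce(dce)
    print(preim.shape)
    ```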

  2. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps

  3. [Electromagnetic field of the mobile phone base station: case study].

    PubMed

    Bieńkowski, Paweł; Zubrzak, Bartłomiej; Surma, Robert

    2011-01-01

    The paper presents changes in the electromagnetic field intensity in a school building and its surroundings after a mobile phone base station was installed on the roof of the school. A comparison of the EMF intensity measured before the base station was launched (electromagnetic background measurement) and after it started operating (two independent control measurements) is discussed. Analyses of the measurements are presented, and the authors also propose a method for adjusting the electromagnetic field distribution in the region of the antennas' side lobes to reduce the EMF level in the vicinity of the base station. The presented method involves adjusting the antenna inclination. On the basis of the measurements, it was found that the EMF intensity increased in the building and its surroundings, but the measured values meet the requirements of the Polish law on environmental protection by a wide margin.

  4. On the Edge of Life, II: House Officer Struggles Recorded in an Intensive Care Unit Journal

    PubMed Central

    Sekeres, Mikkael A.; Stern, Theodore A.

    2002-01-01

    Background: In a general hospital, few clinical settings match the intensity of the intensive care unit (ICU) experience. Clinical rotations in ICUs elicit and emphasize the struggles house officers face on a daily basis throughout their training. Method: These struggles were recorded by hundreds of residents in a journal maintained in one Medical ICU for the past 20 years. We systematically reviewed these unsolicited entries to develop categories that define and illustrate common stressors. Results: Stressors for house officers include isolation, insecurity, care for the terminally ill, sleep deprivation, and long work weeks. Conclusion: By placing the struggles of house staff in context, trainees and their residency training programs can be prepared for the intensity of the experience and for work in clinical practice settings that follows completion of training. PMID:15014706

  5. Application of principal component analysis for improvement of X-ray fluorescence images obtained by polycapillary-based micro-XRF technique

    NASA Astrophysics Data System (ADS)

    Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.

    2017-07-01

    Micro X-ray fluorescence (micro-XRF) analysis is performed repeatedly, point by point, as a means of producing elemental maps. In some cases, however, the XRF images of trace elements that are obtained are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to the XRF spectra, focusing on improving the quality of the XRF images. XRF images of the dried residue of a standard solution on a glass substrate were taken, and the XRF intensities for the dried residue were analyzed before and after PCA. The standard deviations of the XRF intensities in the PCA-filtered images were reduced, leading to clearer contrast in the images. This improvement of the XRF images was effective in cases where the XRF intensity was weak.
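
    A minimal sketch of the PCA filtering idea is shown below: the spectrum at every map pixel is projected onto the leading principal components and reconstructed, which suppresses uncorrelated background noise; the number of retained components and the synthetic data are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def pca_filter_xrf(maps, n_components=5):
        """maps: array of shape (ny, nx, n_channels), one XRF spectrum per pixel.
        Returns the stack reconstructed from the leading principal components."""
        ny, nx, nch = maps.shape
        X = maps.reshape(-1, nch).astype(float)
        pca = PCA(n_components=n_components)
        filtered = pca.inverse_transform(pca.fit_transform(X))
        return filtered.reshape(ny, nx, nch)

    # Illustrative usage: a weak trace-element line buried in noisy spectra.
    rng = np.random.default_rng(0)
    maps = rng.poisson(5.0, size=(64, 64, 256)).astype(float)   # background counts
    maps[20:30, 20:30, 100] += 15.0                              # trace-element line in a small region
    clean = pca_filter_xrf(maps)
    print(clean[:, :, 100].std(), maps[:, :, 100].std())
    ```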

  6. The scotopic threshold response of the dark-adapted electroretinogram of the mouse.

    PubMed

    Saszik, Shannon M; Robson, John G; Frishman, Laura J

    2002-09-15

    The most sensitive response in the dark-adapted electroretinogram (ERG), the scotopic threshold response (STR) which originates from the proximal retina, has been identified in several mammals including humans, but previously not in the mouse. The current study established the presence and assessed the nature of the mouse STR. ERGs were recorded from adult wild-type C57/BL6 mice anaesthetized with ketamine (70 mg kg(-1)) and xylazine (7 mg kg(-1)). Recordings were between DTL fibres placed under contact lenses on the two eyes. Monocular test stimuli were brief flashes (lambda(max) 462 nm; -6.1 to +1.8 log scotopic Troland seconds(sc td s)) under fully dark-adapted conditions and in the presence of steady adapting backgrounds (-3.2 to -1.7 log sc td). For the weakest test stimuli, ERGs consisted of a slow negative potential maximal approximately 200 ms after the flash, with a small positive potential preceding it. The negative wave resembled the STR of other species. As intensity was increased, the negative potential saturated but the positive potential (maximal approximately 110 ms) continued to grow as the b-wave. For stimuli that saturated the b-wave, the a-wave emerged. For stimulus strengths up to those at which the a-wave emerged, ERG amplitudes measured at fixed times after the flash (110 and 200 ms) were fitted with a model assuming an initially linear rise of response amplitude with intensity, followed by saturation of five components of declining sensitivity: a negative STR (nSTR), a positive STR (pSTR), a positive scotopic response (pSR), PII (the bipolar cell component) and PIII (the photoreceptor component). The nSTR and pSTR were approximately 3 times more sensitive than the pSR, which was approximately 7 times more sensitive than PII. The sensitive positive components dominated the b-wave up to > 5 % of its saturated amplitude. Pharmacological agents that suppress proximal retinal activity (e.g. GABA) minimized the pSTR, nSTR and pSR, essentially isolating PII which rose linearly with intensity before showing hyperbolic saturation. The nSTR, pSTR and pSR were desensitized by weaker backgrounds than those desensitizing PII. In conclusion, ERG components of proximal retinal origin that are more sensitive to test flashes and adapting backgrounds than PII provide the 'threshold' negative and positive (b-wave) responses of the mouse dark-adapted ERG. These results support the use of the mouse ERG in studies of proximal retinal function.

  7. Effect of intensity of treadmill running on learning, memory and expressions of cell cycle-related proteins in rats with cerebral ischemia.

    PubMed

    Zhao, Ya-Ning; Li, Jian-Min; Chen, Chang-Xiang; Li, Shu-Xing; Xue, Cheng-Jing

    2017-06-20

    We investigated the effect of treadmill running intensity on learning, memory and the expression of cell cycle-related proteins in rats with cerebral ischemia. Eighty healthy male SD rats were randomly divided into a normal group, a model group, an intensity I group and an intensity II group, with 20 rats in each group. The four-vessel occlusion (4-VO) method of Pulsinelli was used to induce global cerebral ischemia. Brain neuronal morphology was observed by hematoxylin-eosin (HE) staining at 3 h, 6 h, 24 h and 48 h after modeling. Hippocampal expression of cyclin A and cyclin E was detected by immunohistochemistry. At 48 h after modeling, the learning and memory performance of the rats was tested by a water maze experiment. Compared with the normal group, the other three groups had a significant reduction in surviving neurons, prolonged escape latency and a decreased number of passes over the former position of the platform (P<0.05). The number of surviving neurons and the number of passes over the former position of the platform were markedly lower in the model group than in the intensity I group (P<0.05), but significantly higher than in the intensity II group (P<0.05). Escape latency in the model group was markedly prolonged compared with the intensity I group (P<0.05), but much shorter than in the intensity II group (P<0.05). Compared with the normal group, the expression of cyclin A and cyclin E was significantly upregulated at the different time points after modeling (P<0.05). Expression in the model group was higher than in the intensity I group, but lower than in the intensity II group (P<0.05). Moderate-intensity treadmill running can help protect brain neurons and improve the learning and memory performance of rats with global cerebral ischemia, but high-intensity treadmill running has a negative impact, possibly through the regulation of cell cycle-related proteins in ischemia/reperfusion injury.

  8. Quantifying Urban Watershed Stressor Gradients and Evaluating How Different Land Cover Datasets Affect Stream Management.

    PubMed

    Smucker, Nathan J; Kuhn, Anne; Charpentier, Michael A; Cruz-Quinones, Carlos J; Elonen, Colleen M; Whorley, Sarah B; Jicha, Terri M; Serbst, Jonathan R; Hill, Brian H; Wehr, John D

    2016-03-01

    Watershed management and policies affecting downstream ecosystems benefit from identifying relationships between land cover and water quality. However, different data sources can create dissimilarities in land cover estimates and models that characterize ecosystem responses. We used a spatially balanced stream study (1) to effectively sample development and urban stressor gradients while representing the extent of a large coastal watershed (>4400 km2), (2) to document differences between estimates of watershed land cover using the 30-m resolution National Land Cover Database (NLCD) and <1-m resolution land cover data, and (3) to determine if predictive models and relationships between water quality and land cover differed when using these two land cover datasets. Increased concentrations of nutrients, anions, and cations had similarly significant correlations with increased watershed percent impervious cover (IC), regardless of data resolution. The NLCD underestimated percent forest for 71/76 sites by a mean of 11% and overestimated percent wetlands for 71/76 sites by a mean of 8%. The NLCD almost always underestimated IC at low development intensities and overestimated IC at high development intensities. As a result of underestimated IC, regression models using NLCD data predicted mean background concentrations of NO3- and Cl- that were 475% and 177%, respectively, of those predicted when using the finer resolution land cover data. Our sampling design could help states and other agencies seeking to create monitoring programs and indicators responsive to anthropogenic impacts. Differences between land cover datasets could affect resource protection due to misguided management targets, watershed development and conservation practices, or water quality criteria.

  9. Cost-effectiveness of population-based, community, workplace and individual policies for diabetes prevention in the UK.

    PubMed

    Breeze, P R; Thomas, C; Squires, H; Brennan, A; Greaves, C; Diggle, P; Brunner, E; Tabak, A; Preston, L; Chilcott, J

    2017-08-01

    To analyse the cost-effectiveness of different interventions for Type 2 diabetes prevention within a common framework. A micro-simulation model was developed to evaluate the cost-effectiveness of a range of diabetes prevention interventions including: (1) soft drinks taxation; (2) retail policy in socially deprived areas; (3) workplace intervention; (4) community-based intervention; and (5) screening and intensive lifestyle intervention in individuals at high diabetes risk. Within the model, individuals follow metabolic trajectories (for BMI, cholesterol, systolic blood pressure and glycaemia); individuals may develop diabetes, and some may exhibit complications of diabetes and related disorders, including cardiovascular disease, and eventually die. Lifetime healthcare costs, employment costs and quality-adjusted life-years are collected for each person. All interventions generate more life-years and lifetime quality-adjusted life-years and reduce healthcare spending compared with doing nothing. Screening and intensive lifestyle intervention generates the greatest lifetime net benefit (£37) but is costly to implement. In comparison, soft drinks taxation or retail policy generate a lower net benefit (£11 each) but are cost-saving over a shorter time period, preferentially benefit individuals from deprived backgrounds and reduce employer costs. The model enables a wide range of diabetes prevention interventions to be evaluated according to cost-effectiveness, employment and equity impacts over the short and long term, allowing decision-makers to prioritize policies that maximize the expected benefits, as well as fulfilling other policy targets, such as addressing social inequalities. © 2017 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.

  10. Generalized expectation-maximization segmentation of brain MR images

    NASA Astrophysics Data System (ADS)

    Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.

    2006-03-01

    Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm of Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and on real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.

  11. EVIDENCE FOR EVAPORATION-INCOMPLETE CONDENSATION CYCLES IN WARM SOLAR CORONAL LOOPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froment, C.; Auchère, F.; Bocchialini, K.

    2015-07-10

    Quasi-constant heating at the footpoints of loops leads to evaporation and condensation cycles of the plasma: thermal non-equilibrium (TNE). This phenomenon is believed to play a role in the formation of prominences and coronal rain. However, it is often discounted as being involved in the heating of warm loops because the models do not reproduce observations. Recent simulations have shown that these inconsistencies with observations may be due to oversimplifications of the geometries of the models. In addition, our recent observations reveal that long-period intensity pulsations (several hours) are common in solar coronal loops. These periods are consistent with those expected from TNE. The aim of this paper is to derive characteristic physical properties of the plasma for some of these events to test the potential role of TNE in loop heating. We analyzed three events in detail using the six EUV coronal channels of the Solar Dynamics Observatory/Atmospheric Imaging Assembly. We performed both a differential emission measure (DEM) and a time-lag analysis, including a new method to isolate the relevant signal from the foreground and background emission. For the three events, the DEM undergoes long-period pulsations, which is a signature of periodic heating even though the loops are captured in their cooling phase, as is the bulk of the active regions. We link long-period intensity pulsations to new signatures of loop heating with strong evidence for evaporation and condensation cycles. We thus simultaneously witness widespread cooling and TNE. Finally, we discuss the implications of our new observations for both static and impulsive heating models.
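
    A minimal sketch of the time-lag analysis step (finding the temporal offset at which two EUV-channel light curves correlate best) is given below; the synthetic light curves, cadence and lag are illustrative only, and the paper's foreground/background separation is not reproduced.

    ```python
    import numpy as np

    def peak_time_lag(curve_a, curve_b, cadence_s):
        """Return the lag (in seconds) of curve_b relative to curve_a at which their
        normalised cross-correlation peaks. Positive lag: curve_b follows curve_a."""
        a = (curve_a - curve_a.mean()) / curve_a.std()
        b = (curve_b - curve_b.mean()) / curve_b.std()
        cc = np.correlate(a, b, mode="full") / a.size
        lags = np.arange(-(a.size - 1), a.size)
        return -lags[np.argmax(cc)] * cadence_s, cc.max()

    # Illustrative: two pulsating light curves, the cooler channel lagging by 30 steps of 12 s.
    t = np.arange(2000)
    hot = np.sin(2 * np.pi * t / 450.0)
    cool = np.roll(hot, 30) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
    print(peak_time_lag(hot, cool, cadence_s=12.0))   # ~ (360 s, ...)
    ```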

  12. Study and optimization of key parameters of a laser ablation ion mobility spectrometer

    NASA Astrophysics Data System (ADS)

    Ni, Kai; Li, Jianan; Tang, Binchao; Shi, Yuan; Yu, Quan; Qian, Xiang; Wang, Xiaohao

    2016-11-01

    Ion Mobility Spectrometry (IMS) is an atmospheric-pressure detection technique with advantages for real-time and on-line detection. LA-IMS (Laser Ablation Ion Mobility Spectrometry) uses an Nd:YAG laser as the ionization source, whose energy is high enough to ionize metals. In this work, we measured the signal at different electric field intensities with a home-made ion mobility spectrometer, using silicon wafers as the sample. The transport of the metal ions matched the formula Td = d/K • 1/E when the electric field intensity was greater than 350 V/cm. The relationship between signal intensity and collection angle (the angle between the drift tube and the surface of the sample) was also studied. With increasing collection angle, the signal intensity increased significantly, while variation of the laser incidence angle had no significant influence. The signal intensity increased by 140% when the collection angle varied from 0 to 45 degrees, with the angle between the drift tube and the incident laser beam kept at 90 degrees. The position of the ion gate in LA-IMS differs from that in traditional instruments because the kinetic energy of the ions is large: if the distance between the ion gate and the sampling point is less than 2.5 cm, the ion gate does not work and ions can pass through it even when it is closed. The SNR was improved by defining the signal recorded with the ion gate closed as the background signal; the noise, including the shock wave and electric field perturbations produced during the interaction between the laser beam and the sample, is eliminated by subtracting this background from the signal recorded with the ion gate open.
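
    A minimal worked example of the drift-time relation quoted above, Td = d/(K·E), is given below; the drift length, mobility and field values are illustrative, not the instrument's actual parameters.

    ```python
    def drift_time_ms(d_cm, K_cm2_per_Vs, E_V_per_cm):
        """Drift time Td = d / (K * E), returned in milliseconds.
        d: drift length [cm]; K: ion mobility [cm^2 V^-1 s^-1]; E: field [V/cm]."""
        return 1e3 * d_cm / (K_cm2_per_Vs * E_V_per_cm)

    # Illustrative numbers: a 10 cm drift tube, K = 2.0 cm^2/(V s), E = 400 V/cm.
    print(drift_time_ms(10.0, 2.0, 400.0))   # -> 12.5 ms
    ```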

  13. Modelling and Predicting eHealth Usage in Europe: A Multidimensional Approach From an Online Survey of 13,000 European Union Internet Users

    PubMed Central

    Soler-Ramos, Ivan

    2016-01-01

    Background More advanced methods and models are needed to evaluate the participation of patients and citizens in the shared health care model that eHealth proposes. Objective The goal of our study was to design and evaluate a predictive multidimensional model of eHealth usage. Methods We used 2011 survey data from a sample of 13,000 European citizens aged 16–74 years who had used the Internet in the previous 3 months. We proposed and tested an eHealth usage composite indicator through 2-stage structural equation modelling with latent variables and measurement errors. Logistic regression (odds ratios, ORs) to model the predictors of eHealth usage was calculated using health status and sociodemographic independent variables. Results The dimensions with more explanatory power of eHealth usage were health Internet attitudes, information health Internet usage, empowerment of health Internet users, and the usefulness of health Internet usage. Some 52.39% (6811/13,000) of European Internet users’ eHealth usage was more intensive (greater than the mean). Users with long-term health problems or illnesses (OR 1.20, 95% CI 1.12–1.29) or receiving long-term treatment (OR 1.11, 95% CI 1.03–1.20), having family members with long-term health problems or illnesses (OR 1.44, 95% CI 1.34–1.55), or undertaking care activities for other people (OR 1.58, 95% CI 1.40–1.77) had a high propensity toward intensive eHealth usage. Sociodemographic predictors showed that Internet users who were female (OR 1.23, 95% CI 1.14–1.31), aged 25–54 years (OR 1.12, 95% CI 1.05–1.21), living in larger households (3 members: OR 1.25, 95% CI 1.15–1.36; 5 members: OR 1.13, 95% CI 0.97–1.28; ≥6 members: OR 1.31, 95% CI 1.10–1.57), had more children <16 years of age (1 child: OR 1.29, 95% CI 1.18–1.14; 2 children: OR 1.05, 95% CI 0.94–1.17; 4 children: OR 1.35, 95% CI 0.88–2.08), and had more family members >65 years of age (1 member: OR 1.33, 95% CI 1.18–1.50; ≥4 members: OR 1.82, 95% CI 0.54–6.03) had a greater propensity toward intensive eHealth usage. Likewise, users residing in densely populated areas, such as cities and large towns (OR 1.17, 95% CI 1.09–1.25), also had a greater propensity toward intensive eHealth usage. Educational levels presented an inverted U shape in relation to intensive eHealth usage, with greater propensities among those with a secondary education (OR 1.08, 95% CI 1.01–1.16). Finally, occupational categories and net monthly income data suggest a higher propensity among the employed or self-employed (OR 1.07, 95% CI 0.99–1.15) and among the minimum wage stratum, earning ≤€1000 per month (OR 1.66, 95% CI 1.48–1.87). Conclusions We provide new evidence of inequalities that explain intensive eHealth usage. The results highlight the need to develop more specific eHealth practices to address different realities. PMID:27450189

  14. Monitoring a boreal wildfire using multi-temporal Radarsat-1 intensity and coherence images

    USGS Publications Warehouse

    Rykhus, Russell P.; Lu, Zhong

    2011-01-01

    Twenty-five C-band Radarsat-1 synthetic aperture radar (SAR) images acquired from the summer of 2002 to the summer of 2005 are used to map a 2003 boreal wildfire (B346) in the Yukon Flats National Wildlife Refuge, Alaska under conditions of near-persistent cloud cover. Our analysis is primarily based on the 15 SAR scenes acquired during arctic growing seasons. The Radarsat-1 intensity data are used to map the onset and progression of the fire, and interferometric coherence images are used to qualify burn severity and monitor post-fire recovery. We base our analysis of the fire on three test sites, two from within the fire and one unburned site. The B346 fire increased backscattered intensity values for the two burn study sites by approximately 5–6 dB and substantially reduced coherence from background levels of approximately 0.8 in unburned background forested areas to approximately 0.2 in the burned area. Using ancillary vegetation information from the National Land Cover Database (NLCD) and information on burn severity from Normalized Burn Ratio (NBR) data, we conclude that burn site 2 was more severely burned than burn site 1 and that C-band interferometric coherence data are useful for mapping landscape changes due to fire. Differences in burn severity and topography are determined to be the likely reasons for the observed differences in post-fire intensity and coherence trends between burn sites.
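
    The interferometric coherence values reported above (around 0.8 in unburned forest versus 0.2 in the burn scar) can be estimated with the standard sample estimator; the sketch below is not the authors' processing chain, just the usual formula applied to two co-registered complex SAR patches.

    ```python
    import numpy as np

    def patch_coherence(s1: np.ndarray, s2: np.ndarray) -> float:
        """Sample coherence |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2) over a patch."""
        num = np.abs(np.sum(s1 * np.conj(s2)))
        den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
        return float(num / den) if den > 0 else 0.0

    # Synthetic check: identical patches give coherence ~1, independent noise gives low coherence.
    rng = np.random.default_rng(0)
    a = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    b = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    print(patch_coherence(a, a), patch_coherence(a, b))
    ```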

  15. Simulation of Streamflow and Selected Water-Quality Constituents through a Model of the Onondaga Lake Basin, Onondaga County, New York - A Guide to Model Application

    USGS Publications Warehouse

    Coon, William F.

    2008-01-01

    A computer model of hydrologic and water-quality processes of the Onondaga Lake basin in Onondaga County, N.Y., was developed during 2003-07 to assist water-resources managers in making basin-wide management decisions that could affect peak flows and the water quality of tributaries to Onondaga Lake. The model was developed with the Hydrological Simulation Program-Fortran (HSPF) and was designed to allow simulation of proposed or hypothetical land-use changes, best-management practices (BMPs), and instream stormwater-detention basins such that their effects on flows and loads of suspended sediment, orthophosphate, total phosphorus, ammonia, organic nitrogen, and nitrate could be analyzed. Extreme weather conditions, such as intense storms and prolonged droughts, can be simulated through manipulation of the precipitation record. Model results obtained from different scenarios can then be compared and analyzed through an interactive computer program known as Generation and Analysis of Model Simulation Scenarios for Watersheds (GenScn). Background information on HSPF and GenScn is presented to familiarize the user with these two programs. Step-by-step examples are provided on (1) the creation of land-use, BMP, and stormflow-detention scenarios for simulation by the HSPF model, and (2) the analysis of simulation results through GenScn.
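
    As a loose illustration of the precipitation-record manipulation the report describes (not the actual HSPF/WDM workflow), a stored hourly rainfall series could be scaled to create a more extreme storm scenario before rerunning the model; the series and scaling factor below are made up.

    ```python
    import pandas as pd

    # Hypothetical hourly precipitation record (inches); real HSPF input is held in a WDM file.
    precip = pd.Series([0.0, 0.1, 0.3, 0.6, 0.2, 0.0],
                       index=pd.date_range("2005-06-01", periods=6, freq="h"))

    # Scenario: intensify the storm by 50% to approximate a more extreme event.
    scenario = precip * 1.5
    print(scenario)
    ```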

  16. The organisation of critical care for burn patients in the UK: epidemiology and comparison of mortality prediction models.

    PubMed

    Toft-Petersen, A P; Ferrando-Vivas, P; Harrison, D A; Dunn, K; Rowan, K M

    2018-05-15

    In the UK, a network of specialist centres has been set up to provide critical care for burn patients. However, some burn patients are admitted to general intensive care units. Little is known about the casemix of these patients and how it compares with patients in specialist burn centres. It is not known whether burn-specific or generic risk prediction models perform better when applied to patients managed in intensive care units. We examined admissions for burns in the Case Mix Programme Database from April 2010 to March 2016. The casemix, activity and outcome in general and specialist burn intensive care units were compared, as was the fit of two burn-specific risk prediction models (revised Baux and Belgian Outcome in Burn Injury models) and one generic model (Intensive Care National Audit and Research Centre model). Patients in burn intensive care units had more extensive injuries than patients in general intensive care units (median (IQR [range]) burn surface area 16 (7-32 [0-98])% vs. 8 (1-18 [0-100])%, respectively), but in-hospital mortality was similar (22.8% vs. 19.0%, respectively). The discrimination and calibration of the generic Intensive Care National Audit and Research Centre model were superior to those of the revised Baux and Belgian Outcome in Burn Injury burn-specific models for patients managed on both specialist burn and general intensive care units. © 2018 The Association of Anaesthetists of Great Britain and Ireland.
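
    Of the models compared, the revised Baux score has the simplest widely cited form (age + %TBSA + 17 points when inhalation injury is present); the sketch below assumes that published form rather than reproducing the study's implementation, and omits the Belgian Outcome in Burn Injury and ICNARC models.

    ```python
    def revised_baux(age_years: float, tbsa_percent: float, inhalation_injury: bool) -> float:
        """Revised Baux score: age + percentage total burn surface area + 17 if inhalation injury."""
        return age_years + tbsa_percent + (17.0 if inhalation_injury else 0.0)

    # Example: a 60-year-old with 20% TBSA burns and inhalation injury scores 97.
    print(revised_baux(60, 20, True))
    ```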

  17. Radon-222 related influence on ambient gamma dose.

    PubMed

    Melintescu, A; Chambers, S D; Crawford, J; Williams, A G; Zorila, B; Galeriu, D

    2018-04-03

    Ambient gamma dose, radon, and rainfall have been monitored in southern Bucharest, Romania, from 2010 to 2016. The seasonal cycle of background ambient gamma dose peaked between July and October (100-105 nSv h⁻¹), with minimum values in February (75-80 nSv h⁻¹), the time of maximum snow cover. Based on 10 m a.g.l. radon concentrations, the ambient gamma dose increased by around 1 nSv h⁻¹ for every 5 Bq m⁻³ increase in radon. Radon variability attributable to diurnal changes in atmospheric mixing contributed less than 15 nSv h⁻¹ to the overall variability in ambient gamma dose, a factor of 4 more than synoptic timescale changes in air mass fetch. By contrast, precipitation-related enhancements of the ambient gamma dose were 15-80 nSv h⁻¹. To facilitate routine analysis, and account in part for occasional equipment failure, an automated method for identifying precipitation spikes in the ambient gamma dose was developed. Lastly, a simple model for predicting rainfall-related enhancement of the ambient gamma dose is tested against rainfall observations from events of contrasting duration and intensity. Results are also compared with those from previously published models of simple and complex formulation. Generally, the model performed very well. When simulations underestimated observations, the absolute difference was typically less than the natural variability in ambient gamma dose arising from atmospheric mixing influences. Consequently, combined use of the automated event detection method and the simple model of this study could enable the ambient gamma dose "attention limit" (which indicates a potential radiological emergency) to be reduced from 200-400% above background to 25-50%. Copyright © 2018 Elsevier Ltd. All rights reserved.
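
    The quoted sensitivity of roughly 1 nSv h⁻¹ per 5 Bq m⁻³ of radon can be turned into a trivial estimator of the radon-related component of the ambient gamma dose; this is only a restatement of the abstract's figure, not the authors' model.

    ```python
    def radon_gamma_increment(radon_bq_m3: float, sensitivity_nsv_per_bq: float = 1.0 / 5.0) -> float:
        """Approximate radon-related increment to ambient gamma dose (nSv/h)."""
        return sensitivity_nsv_per_bq * radon_bq_m3

    # Example: a 10 Bq/m3 rise in 10 m a.g.l. radon corresponds to ~2 nSv/h of extra gamma dose.
    print(radon_gamma_increment(10.0))
    ```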

  18. Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)

    NASA Astrophysics Data System (ADS)

    Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.

    2016-06-01

    We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used in place of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating subduction and background (crustal) earthquakes separately allows for optimal use of available information and for avoiding significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.
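
    Step 4, the ensemble modelling, can be sketched as weighted aggregation of hazard curves produced by alternative model implementations; the curves, intensity thresholds, and weights below are invented for illustration and do not come from the study.

    ```python
    import numpy as np

    # Hypothetical exceedance-probability curves at 4 tsunami-intensity thresholds,
    # one row per alternative implementation of the source model.
    curves = np.array([
        [0.12, 0.050, 0.010, 0.0010],
        [0.10, 0.040, 0.008, 0.0020],
        [0.15, 0.070, 0.020, 0.0040],
    ])
    weights = np.array([0.5, 0.3, 0.2])  # epistemic credibility weights, summing to 1

    mean_curve = weights @ curves                        # ensemble (weighted) mean hazard curve
    p16, p84 = np.percentile(curves, [16, 84], axis=0)   # crude epistemic spread (unweighted here)
    print(mean_curve)
    print(p16, p84)
    ```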

  19. A new background distribution-based active contour model for three-dimensional lesion segmentation in breast DCE-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Liu, Yiping; Qiu, Tianshuang

    2014-08-15

    Purpose: To develop and evaluate a computerized semiautomatic segmentation method for accurate extraction of three-dimensional lesions from dynamic contrast-enhanced magnetic resonance images (DCE-MRIs) of the breast. Methods: The authors propose a new background distribution-based active contour model using level set (BDACMLS) to segment lesions in breast DCE-MRIs. The method starts with manual selection of a region of interest (ROI) that contains the entire lesion in a single slice where the lesion is enhanced. The lesion volume is then separated from the automatically captured volume data of interest. The core idea of BDACMLS is a new signed pressure function which is based solely on the intensity distribution combined with a pathophysiological basis. To compare the algorithm results, two experienced radiologists delineated all lesions jointly to obtain the ground truth. In addition, results generated by other level set (LS)-based methods are also compared with the authors' method. Finally, the performance of the proposed method is evaluated by several region-based metrics such as the overlap ratio. Results: Forty-two studies with 46 lesions, comprising 29 benign and 17 malignant lesions, are evaluated. The dataset includes various typical pathologies of the breast such as invasive ductal carcinoma, ductal carcinoma in situ, scar carcinoma, phyllodes tumor, breast cysts, fibroadenoma, etc. The overlap ratio for BDACMLS with respect to manual segmentation is 79.55% ± 12.60% (mean ± s.d.). Conclusions: A new active contour model method has been developed and shown to successfully segment three-dimensional lesions in breast DCE-MRI. The results from this model correspond more closely to manual segmentation, alleviate the problem of the contour passing through weak edges, and improve the robustness in segmenting different lesions.
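
    The paper's contribution is a background-distribution-based signed pressure function, which is not reproduced here; the sketch below shows only a generic signed-pressure-function level-set evolution of the kind such methods build on, with Gaussian regularisation of the level-set function. The image and initial level set are assumed to be 2D float arrays supplied by the caller.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def spf_level_set(image: np.ndarray, phi: np.ndarray,
                      iters: int = 200, dt: float = 1.0, sigma: float = 1.5) -> np.ndarray:
        """Generic region-based level-set evolution driven by a signed pressure function."""
        for _ in range(iters):
            inside, outside = phi > 0, phi <= 0
            c1 = image[inside].mean() if inside.any() else 0.0    # mean intensity inside the contour
            c2 = image[outside].mean() if outside.any() else 0.0  # mean intensity outside
            spf = image - (c1 + c2) / 2.0
            spf = spf / (np.abs(spf).max() + 1e-12)               # normalise to [-1, 1]
            gy, gx = np.gradient(phi)
            phi = phi + dt * spf * np.sqrt(gx ** 2 + gy ** 2)     # evolve along the gradient magnitude
            phi = gaussian_filter(phi, sigma)                     # smooth to keep the interface regular
        return phi
    ```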

  20. 33 CFR 86.05 - Sound signal intensity and range of audibility.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... average background noise level at the listening posts (taken to be 68 dB in the octave band centered on... regarded as typical but under conditions of strong wind or high ambient noise level at the listening post...
