Sample records for ordered subsets expectation

  1. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop an artefact-reduction method using statistical reconstruction of multidetector row CT images. Adjacent CT images often depict similar anatomical structures; therefore, reconstruction of images with weak artefacts was attempted using the projection data of an artefact-free image from a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, in which the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. In addition, a small region of interest was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. Ordered subset-expectation maximization and the small region of interest reduced the processing duration without apparent detriment, and the general-purpose graphics processing unit provided high performance. In summary, a statistical reconstruction method was applied for streak artefact reduction, the alternative algorithms were effective, and both software and hardware tools (ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit) achieved fast artefact correction.
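    The core update behind both algorithms named above is an EM step applied to one subset of projections at a time. The sketch below shows a generic ordered-subset EM (OSEM) update on a toy system matrix; the matrix A, the measurement y and the subset split are invented for illustration and are not the authors' CT implementation.

      import numpy as np

      def osem(A, y, n_subsets=4, n_iters=10):
          """Generic OSEM: A is an (n_bins, n_voxels) system matrix, y the measured projections."""
          n_bins, n_voxels = A.shape
          x = np.ones(n_voxels)                                   # uniform initial image
          subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
          for _ in range(n_iters):
              for rows in subsets:                                # one EM-like update per subset
                  As = A[rows]
                  sens = As.sum(axis=0)                           # subset sensitivity image
                  ratio = y[rows] / np.maximum(As @ x, 1e-12)     # measured / estimated projections
                  x *= (As.T @ ratio) / np.maximum(sens, 1e-12)
          return x

      # Tiny synthetic example: 16-voxel image, 32 projection bins with Poisson noise.
      rng = np.random.default_rng(0)
      A = rng.random((32, 16))
      x_true = rng.random(16)
      y = rng.poisson(100 * (A @ x_true)) / 100.0
      print(osem(A, y).round(3))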

  2. Evaluation of reconstruction techniques in regional cerebral blood flow SPECT using trade-off plots: a Monte Carlo study.

    PubMed

    Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha

    2007-09-01

    The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare it systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different subset sizes, numbers of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP at all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
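    For readers who want to reproduce such trade-off plots, the two axes reduce to two simple image statistics. The sketch below computes contrast recovery (measured contrast divided by true contrast) and noise (coefficient of variation of pixel values) for a synthetic image; the images and ROI masks are stand-ins, not the study's Monte Carlo phantom data.

      import numpy as np

      def contrast(img, target, background):
          t, b = img[target].mean(), img[background].mean()
          return (t - b) / b

      def tradeoff_point(recon, truth, target, background, noise_roi):
          recovery = contrast(recon, target, background) / contrast(truth, target, background)
          noise = recon[noise_roi].std() / recon[noise_roi].mean()   # coefficient of variation
          return recovery, noise

      # Synthetic "phantom": hot square on a uniform background, with reduced contrast plus noise.
      truth = np.ones((64, 64)); truth[20:30, 20:30] = 4.0
      rng = np.random.default_rng(0)
      recon = 1.0 + 0.7 * (truth - 1.0) + rng.normal(0.0, 0.1, truth.shape)
      target = np.zeros_like(truth, dtype=bool); target[22:28, 22:28] = True
      background = np.zeros_like(truth, dtype=bool); background[45:60, 45:60] = True
      print(tradeoff_point(recon, truth, target, background, background))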

  3. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation.

    PubMed

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-02-07

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborate a practical and efficient implementation of TOF PET image reconstruction through pre-computation of the TOF weighting coefficients, exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that, for the same computation time, the proposed subsetization achieves further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
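    The interleaving idea described above can be made concrete with a few lines of indexing code. The sketch below partitions data indexed by (azimuthal angle, TOF bin) into subsets that interleave over both dimensions; the array sizes and subset counts are arbitrary assumptions, not scanner parameters.

      import numpy as np

      def interleaved_subsets(n_angles, n_tof_bins, n_angle_subsets, n_tof_subsets):
          """Return (angle indices, TOF-bin indices) pairs, interleaved over both dimensions."""
          subsets = []
          for a0 in range(n_angle_subsets):
              for t0 in range(n_tof_subsets):
                  angles = np.arange(a0, n_angles, n_angle_subsets)
                  tof_bins = np.arange(t0, n_tof_bins, n_tof_subsets)
                  subsets.append((angles, tof_bins))
          return subsets

      # 8 azimuthal angles x 6 TOF bins split into 2 x 3 = 6 interleaved subsets.
      for angles, tof_bins in interleaved_subsets(8, 6, 2, 3):
          print("angles", angles, "TOF bins", tof_bins)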

  4. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    NASA Astrophysics Data System (ADS)

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-02-01

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborate a practical and efficient implementation of TOF PET image reconstruction through pre-computation of the TOF weighting coefficients, exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that, for the same computation time, the proposed subsetization achieves further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.

  5. Characterization of a chromosome-specific chimpanzee alpha satellite subset: Evolutionary relationship to subsets on human chromosomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warburton, P.E.; Gosden, J.; Lawson, D.

    1996-04-15

    Alpha satellite DNA is a tandemly repeated DNA family found at the centromeres of all primate chromosomes examined. The fundamental repeat units of alpha satellite DNA are diverged 169- to 172-bp monomers, often found to be organized in chromosome-specific higher-order repeat units. The chromosomes of human (Homo sapiens (HSA)), chimpanzee (Pan troglodytes (PTR) and Pan paniscus), and gorilla (Gorilla gorilla) share a remarkable similarity and synteny. It is of interest to ask if alpha satellite arrays at centromeres of homologous chromosomes between these species are closely related (evolving in an orthologous manner) or if the evolutionary processes that homogenize and spread these arrays within and between chromosomes result in nonorthologous evolution of arrays. By using PCR primers specific for human chromosome 17-specific alpha satellite DNA, we have amplified, cloned, and characterized a chromosome-specific subset from the PTR chimpanzee genome. Hybridization both on Southern blots and in situ as well as sequence analysis show that this subset is most closely related, as expected, to sequences on HSA 17. However, in situ hybridization reveals that this subset is not found on the homologous chromosome in chimpanzee (PTR 19), but instead on PTR 12, which is homologous to HSA 2p. 40 refs., 3 figs.

  6. Three-dimensional ordered-subset expectation maximization iterative protocol for evaluation of left ventricular volumes and function by quantitative gated SPECT: a dynamic phantom study.

    PubMed

    Ceriani, Luca; Ruberto, Teresa; Delaloye, Angelika Bischof; Prior, John O; Giovanella, Luca

    2010-03-01

    The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) Gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I × S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the Gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
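    The "expectation maximization-equivalent iterations" figure used above is simply the product of iterations and subsets. A tiny loop over a parameter grid like the one quoted in the abstract (assumed values, for illustration only) makes the relationship explicit:

      # EM-equivalent iterations = iterations (I) x subsets (S)
      for iterations in (10, 15, 20):
          for subsets in (4, 8, 16):
              print(f"I={iterations:2d}, S={subsets:2d} -> {iterations * subsets:3d} EM-equivalent iterations")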

  7. Selecting predictors for discriminant analysis of species performance: an example from an amphibious softwater plant.

    PubMed

    Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M

    2012-03-01

    Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. Contrary to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
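    A hedged sketch of the best-performing strategy reported above, univariate chi-square screening followed by a step-wise-style sub-selection before fitting a linear discriminant function, is given below. It uses scikit-learn on synthetic data; it is not the authors' pipeline or dataset, and the feature counts are arbitrary assumptions.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, chi2
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      X, y = make_classification(n_samples=200, n_features=20, n_informative=4, random_state=0)
      X = X - X.min(axis=0)                       # chi-square screening requires non-negative features

      screen = SelectKBest(chi2, k=10).fit(X, y)  # univariate chi-square screening
      X_screened = screen.transform(X)

      lda = LinearDiscriminantAnalysis()
      stepwise = SequentialFeatureSelector(lda, n_features_to_select=4).fit(X_screened, y)
      X_final = stepwise.transform(X_screened)

      print("retained predictors:", X_final.shape[1],
            "LDA training accuracy:", round(lda.fit(X_final, y).score(X_final, y), 2))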

  8. Simultaneous 99mTc/111In SPECT reconstruction using accelerated convolution-based forced detection Monte Carlo

    NASA Astrophysics Data System (ADS)

    Karamat, Muhammad I.; Farncombe, Troy H.

    2015-10-01

    Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual-isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm, named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of object background compared with alternative reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of reconstructed images and activity estimated using Sim-OSEM. In order to quantitate the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.

  9. Automated Verification of Design Patterns with LePUS3

    NASA Technical Reports Server (NTRS)

    Nicholson, Jonathan; Gasparis, Epameinondas; Eden, Ammon H.; Kazman, Rick

    2009-01-01

    Specification and [visual] modelling languages are expected to combine strong abstraction mechanisms with rigour, scalability, and parsimony. LePUS3 is a visual, object-oriented design description language axiomatized in a decidable subset of first-order predicate logic. We demonstrate how LePUS3 is used to formally specify a structural design pattern and prove (verify) whether any Java 1.4 program satisfies that specification. We also show how LePUS3 specifications (charts) are composed and how they are verified fully automatically in the Two-Tier Programming Toolkit.

  10. Postinjection single photon transmission tomography with ordered-subset algorithms for whole-body PET imaging

    NASA Astrophysics Data System (ADS)

    Bai, Chuanyong; Kinahan, P. E.; Brasse, D.; Comtat, C.; Townsend, D. W.

    2002-02-01

    We have evaluated the penalized ordered-subset transmission reconstruction (OSTR) algorithm for postinjection single photon transmission scanning. The OSTR algorithm of Erdogan and Fessler (1999) uses a more accurate model for transmission tomography than ordered-subsets expectation-maximization (OSEM) when OSEM is applied to the logarithm of the transmission data. The OSTR algorithm is directly applicable to postinjection transmission scanning with a single photon source, as emission contamination from the patient mimics the effect, in the original derivation of OSTR, of random coincidence contamination in a positron source transmission scan. Multiple noise realizations of simulated postinjection transmission data were reconstructed using OSTR, filtered backprojection (FBP), and OSEM algorithms. Due to the nonspecific task performance, or multiple uses, of the transmission image, multiple figures of merit were evaluated, including image noise, contrast, uniformity, and root mean square (rms) error. We show that: 1) the use of a three-dimensional (3-D) regularizing image roughness penalty with OSTR improves the tradeoffs in noise, contrast, and rms error relative to the use of a two-dimensional penalty; 2) OSTR with a 3-D penalty has improved tradeoffs in noise, contrast, and rms error relative to FBP or OSEM; and 3) the use of image standard deviation from a single realization to estimate the true noise can be misleading in the case of OSEM. We conclude that using OSTR with a 3-D penalty potentially allows for shorter postinjection transmission scans in single photon transmission tomography in positron emission tomography (PET) relative to FBP or OSEM reconstructed images with the same noise properties. This combination of singles+OSTR is particularly suitable for whole-body PET oncology imaging.

  11. Identification of features in indexed data and equipment therefore

    DOEpatents

    Jarman, Kristin H [Richland, WA; Daly, Don Simone [Richland, WA; Anderson, Kevin K [Richland, WA; Wahl, Karen L [Richland, WA

    2002-04-02

    Embodiments of the present invention provide methods of identifying a feature in an indexed dataset. Such embodiments encompass selecting an initial subset of indices, the initial subset of indices being encompassed by an initial window-of-interest and comprising at least one beginning index and at least one ending index; computing an intensity weighted measure of dispersion for the subset of indices using a subset of responses corresponding to the subset of indices; and comparing the intensity weighted measure of dispersion to a dispersion critical value determined from an expected value of the intensity weighted measure of dispersion under a null hypothesis of no transient feature present. Embodiments of the present invention also encompass equipment configured to perform the methods of the present invention.
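    The detection rule sketched in this abstract can be illustrated directly: slide a window over the indexed responses, compute the intensity-weighted dispersion of the indices inside it, and flag a feature when that dispersion crosses a critical value derived from the no-feature null hypothesis (under the reading used here, a concentrated peak gives a small dispersion). The window size, critical value and synthetic peak below are illustrative assumptions, not values from the patent.

      import numpy as np

      def intensity_weighted_dispersion(indices, responses):
          """Standard deviation of the indices, weighted by their responses."""
          w = responses / responses.sum()
          mean = np.sum(w * indices)
          return np.sqrt(np.sum(w * (indices - mean) ** 2))

      def find_features(indices, responses, window=9, critical=1.5):
          hits = []
          for start in range(len(indices) - window + 1):
              sl = slice(start, start + window)
              if intensity_weighted_dispersion(indices[sl], responses[sl]) < critical:
                  hits.append((indices[start], indices[start + window - 1]))
          return hits

      idx = np.arange(100, dtype=float)
      resp = np.ones(100) + 50 * np.exp(-0.5 * (idx - 40) ** 2)   # flat baseline plus one sharp peak
      print(find_features(idx, resp))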

  12. FLASH_SSF_Aqua-FM3-MODIS_Version3C

    Atmospheric Science Data Center

    2018-04-04

    [Data product listing remnant. Order tool: CERES Order Tool (netCDF); subset tool: CERES Search and Subset Tool (HDF4 & netCDF); ordering also via Earthdata Search, with guide documents available. Listed parameters include Cloud Layer Area, Cloud Infrared Emissivity, Cloud Base Pressure, Surface (Radiative) Flux, TOA Flux, Surface Types, TOT and SW Filtered Radiance, and LW Flux.]

  13. FLASH_SSF_Terra-FM1-MODIS_Version3C

    Atmospheric Science Data Center

    2018-04-04

    [Data product listing remnant. Order tool: CERES Order Tool (netCDF); subset tool: CERES Search and Subset Tool (HDF4 & netCDF); ordering also via Earthdata Search, with guide documents available. Listed parameters include Cloud Layer Area, Cloud Infrared Emissivity, Cloud Base Pressure, Surface (Radiative) Flux, TOA Flux, Surface Types, TOT and SW Filtered Radiance, and LW Flux.]

  14. Methodological Options and Their Implications: An Example Using Secondary Data to Analyze Latino Educational Expectations

    ERIC Educational Resources Information Center

    Wells, Ryan S.; Lynch, Cassie M.; Seifert, Tricia A.

    2011-01-01

    A number of studies over decades have examined determinants of educational expectations. However, even among the subset of quantitative studies, there is considerable variation in the methods used to operationally define and analyze expectations. Using a systematic literature review and several regression methods to analyze Latino students'…

  15. Attenuation correction strategies for multi-energy photon emitters using SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pretorius, P.H.; King, M.A.; Pan, T.S.

    1996-12-31

    The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and (4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately.

  16. A Mathematical Modelling Approach to One-Day Cricket Batting Orders

    PubMed Central

    Bukiet, Bruce; Ovens, Matthews

    2006-01-01

    While scoring strategies and player performance in cricket have been studied, there has been little published work about the influence of batting order with respect to One-Day cricket. We apply a mathematical modelling approach to compute efficiently the expected performance (runs distribution) of a cricket batting order in an innings. Among other applications, our method enables one to solve for the probability of one team beating another or to find the optimal batting order for a set of 11 players. The influence of defence and bowling ability can be taken into account in a straightforward manner. In this presentation, we outline how we develop our Markov Chain approach to studying the progress of runs for a batting order of non-identical players along the lines of work in baseball modelling by Bukiet et al., 1997. We describe the issues that arise in applying such methods to cricket, discuss ideas for addressing these difficulties and note limitations on modelling batting order for One-Day cricket. By performing our analysis on a selected subset of the possible batting orders, we apply the model to quantify the influence of batting order in a game of One Day cricket using available real-world data for current players. Key Points: Batting order does affect the expected runs distribution in one-day cricket. One-day cricket has fewer data points than baseball, so extreme values have a greater effect on estimated probabilities. Dismissals are rare and probabilities very small in comparison to baseball. The probability distribution for lower-order batsmen is potentially skewed due to increased risk taking. Full enumeration of all possible line-ups is impractical using a single average computer. PMID:24357943

  17. A mathematical modelling approach to one-day cricket batting orders.

    PubMed

    Bukiet, Bruce; Ovens, Matthews

    2006-01-01

    While scoring strategies and player performance in cricket have been studied, there has been little published work about the influence of batting order with respect to One-Day cricket. We apply a mathematical modelling approach to compute efficiently the expected performance (runs distribution) of a cricket batting order in an innings. Among other applications, our method enables one to solve for the probability of one team beating another or to find the optimal batting order for a set of 11 players. The influence of defence and bowling ability can be taken into account in a straightforward manner. In this presentation, we outline how we develop our Markov Chain approach to studying the progress of runs for a batting order of non-identical players along the lines of work in baseball modelling by Bukiet et al., 1997. We describe the issues that arise in applying such methods to cricket, discuss ideas for addressing these difficulties and note limitations on modelling batting order for One-Day cricket. By performing our analysis on a selected subset of the possible batting orders, we apply the model to quantify the influence of batting order in a game of One Day cricket using available real-world data for current players. Key Points: Batting order does affect the expected runs distribution in one-day cricket. One-day cricket has fewer data points than baseball, so extreme values have a greater effect on estimated probabilities. Dismissals are rare and probabilities very small in comparison to baseball. The probability distribution for lower-order batsmen is potentially skewed due to increased risk taking. Full enumeration of all possible line-ups is impractical using a single average computer.
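    As a rough stand-in for the exact Markov-chain computation described in these two records, the sketch below estimates the expected runs of a given batting order by Monte Carlo simulation of per-ball outcomes. The player types, outcome probabilities, 300 balls and 10 wickets are invented assumptions, not fitted real-world data.

      import random

      OUTCOMES = [0, 1, 2, 4, 6, "out"]           # runs per ball, or a dismissal

      def simulate_innings(order, probs, balls=300, wickets=10):
          runs, outs, striker = 0, 0, 0
          for _ in range(balls):
              outcome = random.choices(OUTCOMES, weights=probs[order[striker]])[0]
              if outcome == "out":
                  outs += 1
                  striker += 1                    # simplification: the next batter takes strike
                  if outs == wickets or striker >= len(order):
                      break
              else:
                  runs += outcome
          return runs

      # Two hypothetical player types: aggressive (more boundaries, more dismissals) and steady.
      probs = {"aggressive": [30, 30, 10, 15, 5, 10], "steady": [40, 35, 10, 8, 2, 5]}
      order = ["steady", "aggressive"] * 5 + ["steady"]           # an 11-player line-up
      samples = [simulate_innings(order, probs) for _ in range(2000)]
      print("expected runs ~", round(sum(samples) / len(samples), 1))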

  18. Ordered mapping of 3 alphoid DNA subsets on human chromosome 22

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonacci, R.; Baldini, A.; Archidiacono, N.

    1994-09-01

    Alpha satellite DNA consists of tandemly repeated monomers of 171 bp clustered in the centromeric region of primate chromosomes. Sequence divergence between subsets located in different human chromosomes is usually high enough to ensure chromosome-specific hybridization. Alphoid probes specific for almost every human chromosome have been reported. A single chromosome can carry different subsets of alphoid DNA and some alphoid subsets can be shared by different chromosomes. We report the physical order of three alphoid DNA subsets on human chromosome 22 determined by a combination of low and high resolution cytological mapping methods. Results visually demonstrate the presence of three distinct alphoid DNA domains at the centromeric region of chromosome 22. We have measured the interphase distances between the three probes in three-color FISH experiments. Statistical analysis of the results indicated the order of the subsets. Two-color experiments on prometaphase chromosomes established the order of the three domains relative to the arms of chromosome 22 and confirmed the results obtained using interphase mapping. This demonstrates the applicability of interphase mapping for alpha satellite DNA ordering. However, in our experiments, interphase mapping did not provide any information about the relationship between extremities of the repeat arrays. This information was gained from extended chromatin hybridization. The extremities of two of the repeat arrays were seen to be almost overlapping whereas the third repeat array was clearly separated from the other two. Our data show the value of extended chromatin hybridization as a complement to other cytological techniques for high resolution mapping of repetitive DNA sequences.

  19. Choosing Objectives in Over-Subscription Planning

    NASA Technical Reports Server (NTRS)

    Smith, David E.

    2003-01-01

    Many NASA planning problems are over-subscription problems - that is, there are a large number of possible goals of differing value, and the planning system must choose a subset that can be accomplished within the limited time and resources available. Examples include planning for telescopes like Hubble, SIRTF, and SOFIA; scheduling for the Deep Space Network; and planning science experiments for a Mars rover. Unfortunately, existing planning systems are not designed to deal with problems like this - they expect a well-defined conjunctive goal and terminate in failure unless the entire goal is achieved. In this paper we develop techniques for over-subscription problems that assist a classical planner in choosing which goals to achieve, and the order in which to achieve them. These techniques use plan graph cost-estimation techniques to construct an orienteering problem, which is then used to provide heuristic advice on the goals and goal order that should be considered by a planner.

  20. Non-native Speech Perception Training Using Vowel Subsets: Effects of Vowels in Sets and Order of Training

    PubMed Central

    Nishi, Kanae; Kewley-Port, Diane

    2008-01-01

    Purpose Nishi and Kewley-Port (2007) trained Japanese listeners to perceive nine American English monophthongs and showed that a protocol using all nine vowels (fullset) produced better results than the one using only the three more difficult vowels (subset). The present study extended the target population to Koreans and examined whether protocols combining the two stimulus sets would provide more effective training. Method Three groups of five Korean listeners were trained on American English vowels for nine days using one of the three protocols: fullset only, first three days on subset then six days on fullset, or first six days on fullset then three days on subset. Participants' performance was assessed by pre- and post-training tests, as well as by a mid-training test. Results 1) Fullset training was also effective for Koreans; 2) no advantage was found for the two combined protocols over the fullset only protocol, and 3) sustained “non-improvement” was observed for training using one of the combined protocols. Conclusions In using subsets for training American English vowels, care should be taken not only in the selection of subset vowels, but also for the training orders of subsets. PMID:18664694

  1. Attenuation correction strategies for multi-energy photon emitters using SPECT

    NASA Astrophysics Data System (ADS)

    Pretorius, P. H.; King, M. A.; Pan, T.-S.; Hutton, B. F.

    1997-06-01

    The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation maximization (ML-OS) reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: 1) the 93 keV attenuation map for attenuation correction, 2) the 185 keV attenuation map for attenuation correction, 3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and 4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 (in proximity to the liver, spleen and backbone) were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately. They are recommended for multi-energy photon SPECT imaging quantitation when there is a need to combine the acquisitions of multiple windows.

  2. Total knee arthroplasty in motivated patients with knee osteoarthritis and athletic activity approach type goals: a conceptual decision-making model.

    PubMed

    Nyland, John; Kanouse, Zachary; Krupp, Ryan; Caborn, David; Jakob, Rolie

    2011-01-01

    Knee osteoarthritis is one of the most common disabling medical conditions. With longer life expectancy the number of total knee arthroplasty (TKA) procedures being performed worldwide is projected to increase dramatically. Patient education, physical activity, bodyweight levels, expectations and goals regarding the ability to continue athletic activity participation are also increasing. For the subset of motivated patients with knee osteoarthritis who have athletic activity approach type goals, early TKA may not be the best knee osteoarthritis treatment option to improve satisfaction, quality of life and outcomes. The purpose of this clinical commentary is to present a conceptual decision-making model designed to improve the knee osteoarthritis treatment intervention outcome for motivated patients with athletic activity approach type goals. The model focuses on improving knee surgeon, patient and rehabilitation clinician dialogue by rank ordering routine activities of daily living and quality of life evoking athletic activities based on knee symptom exacerbation or re-injury risk. This process should help establish realistic patient expectations and goals for a given knee osteoarthritis treatment intervention that will more likely improve self-efficacy, functional independence, satisfaction and outcomes while decreasing the failure risk associated with early TKA.

  3. Dissecting the genetic heterogeneity of myopia susceptibility in an Ashkenazi Jewish population using ordered subset analysis

    PubMed Central

    Simpson, Claire L.; Wojciechowski, Robert; Ibay, Grace; Stambolian, Dwight

    2011-01-01

    Purpose Despite many years of research, most of the genetic factors contributing to myopia development remain unknown. Genetic studies have pointed to a strong inherited component, but although many candidate regions have been implicated, few genes have been positively identified. Methods We have previously reported 2 genomewide linkage scans in a population of 63 highly aggregated Ashkenazi Jewish families that identified a locus on chromosome 22. Here we used ordered subset analysis (OSA), conditioned on non-parametric linkage to chromosome 22, to detect other chromosomal regions which had evidence of linkage to myopia in subsets of the families, but not in the overall sample. Results Strong evidence of linkage to a 19-cM linkage interval with a peak OSA nonparametric allele-sharing logarithm-of-odds (LOD) score of 3.14 on 20p12-q11.1 (ΔLOD=2.39, empirical p=0.029) was identified in a subset of 20 families that also exhibited strong evidence of linkage to chromosome 22. One other locus presented with suggestive LOD scores >2.0 on chromosome 11p14-q14, and one locus on chromosome 6q22-q24 had an OSA LOD score=1.76 (ΔLOD=1.65, empirical p=0.02). Conclusions The chromosome 6 and 20 loci are entirely novel and appear linked in a subset of families whose myopia is known to be linked to chromosome 22. The chromosome 11 locus overlaps with the known Myopia-7 (MYP7, OMIM 609256) locus. Using ordered subset analysis allows us to find additional loci linked to myopia in subsets of families, and underlines the complex genetic heterogeneity of myopia even in highly aggregated families and genetically isolated populations such as the Ashkenazi Jews. PMID:21738393

  4. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar

    2009-02-01

    Low-pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low-pass filters used with SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT-reconstructed cardiac azimuth and elevation angles. The low-pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM, for a cutoff < 0.4, it fails to generate cardiac orientation due to oversmoothing and gives an unstable response with FBP and MLEM. This study on evaluating the effect of low-pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
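    As a concrete reference for the filters being compared, the sketch below applies a frequency-domain Butterworth low-pass filter of a given order and cutoff (in cycles per pixel) to a 2-D image. The synthetic test image and the parameter values are illustrative assumptions, not patient data or the study's settings.

      import numpy as np

      def butterworth_lowpass(image, cutoff=0.25, order=5):
          """Apply a 2-D Butterworth low-pass filter in the frequency domain."""
          ny, nx = image.shape
          fy = np.fft.fftfreq(ny)[:, None]
          fx = np.fft.fftfreq(nx)[None, :]
          radius = np.sqrt(fx ** 2 + fy ** 2)                     # radial spatial frequency
          transfer = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))
          return np.real(np.fft.ifft2(np.fft.fft2(image) * transfer))

      # Synthetic square "myocardium" smoothed with two different cutoffs.
      img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
      for cutoff in (0.2, 0.4):
          print(cutoff, butterworth_lowpass(img, cutoff=cutoff, order=5).max().round(3))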

  5. Iron Low-ionization Broad Absorption Line quasars - the missing link in galaxy evolution?

    NASA Astrophysics Data System (ADS)

    Lawther, Daniel Peter; Vestergaard, Marianne; Fan, Xiaohui

    2015-08-01

    A peculiar and rare type of quasar with strong low-ionization iron absorption lines - known as FeLoBAL quasars - may be the missing link between star forming (or starbursting) galaxies and quasars. They are hypothesized to be quasars breaking out of their dense birth blanket of gas and dust. In that case they are expected to have high rates of star formation in their galaxies. With the aim of addressing and settling this issue we have studied deep Hubble Space Telescope restframe UV and optical imaging of a subset of such quasars in order to characterize the host galaxy properties of these quasars. We present the results of this study along with simulations to characterize the uncertainties and robustness of our results.

  6. The statistics of gravitational lenses. III - Astrophysical consequences of quasar lensing

    NASA Technical Reports Server (NTRS)

    Ostriker, J. P.; Vietri, M.

    1986-01-01

    The method of Schmidt and Green (1983) for calculating the luminosity function of quasars is combined with gravitational-lensing theory to compute expected properties of lensed systems. Multiple quasar images produced by galaxies are of order 0.001 of the observed quasars, with the numbers over the whole sky calculated to be (0.86, 120, 1600) to limiting B magnitudes of (16, 19, 22). The amount of 'false evolution' is small except for an interesting subset of apparently bright, large-redshift objects for which minilensing by starlike objects may be important. Some of the BL Lac objects may be in this category, with the galaxy identified as the parent object really a foreground object within which stars have lensed a background optically violent variable quasar.

  7. Abundance of live 244Pu in deep-sea reservoirs on Earth points to rarity of actinide nucleosynthesis

    PubMed Central

    Wallner, A.; Faestermann, T.; Feige, J.; Feldstein, C.; Knie, K.; Korschinek, G.; Kutschera, W.; Ofan, A.; Paul, M.; Quinto, F.; Rugel, G.; Steier, P.

    2015-01-01

    Half of the heavy elements including all actinides are produced in r-process nucleosynthesis, whose sites and history remain a mystery. If these nuclei are continuously produced, the interstellar medium is expected to build up quasi-steady-state abundances of short-lived nuclides (with half-lives ≤100 My), including actinides produced in r-process nucleosynthesis. Their existence in today’s interstellar medium would serve as a radioactive clock and would establish that their production was recent. In particular 244Pu, a radioactive actinide nuclide (half-life = 81 My), can place strong constraints on recent r-process frequency and production yield. Here we report the detection of live interstellar 244Pu, archived in Earth’s deep-sea floor during the last 25 My, at abundances lower than expected from continuous production in the Galaxy by about 2 orders of magnitude. This large discrepancy may signal a rarity of actinide r-process nucleosynthesis sites, compatible with neutron-star mergers or with a small subset of actinide-producing supernovae. PMID:25601158

  8. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. Therefore, images were first reconstructed using the projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, in which projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. In addition, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts, with only a slight decrease in gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Incorporating HYPR de-noising within iterative PET reconstruction (HYPR-OSEM)

    NASA Astrophysics Data System (ADS)

    Cheng, Ju-Chieh (Kevin); Matthews, Julian; Sossi, Vesna; Anton-Rodriguez, Jose; Salomon, André; Boellaard, Ronald

    2017-08-01

    HighlY constrained back-PRojection (HYPR) is a post-processing de-noising technique originally developed for time-resolved magnetic resonance imaging. It has been recently applied to dynamic imaging for positron emission tomography and shown promising results. In this work, we have developed an iterative reconstruction algorithm (HYPR-OSEM) which improves the signal-to-noise ratio (SNR) in static imaging (i.e. single frame reconstruction) by incorporating HYPR de-noising directly within the ordered subsets expectation maximization (OSEM) algorithm. The proposed HYPR operator in this work operates on the target image(s) from each subset of OSEM and uses the sum of the preceding subset images as the composite which is updated every iteration. Three strategies were used to apply the HYPR operator in OSEM: (i) within the image space modeling component of the system matrix in forward-projection only, (ii) within the image space modeling component in both forward-projection and back-projection, and (iii) on the image estimate after the OSEM update for each subset thus generating three forms: (i) HYPR-F-OSEM, (ii) HYPR-FB-OSEM, and (iii) HYPR-AU-OSEM. Resolution and contrast phantom simulations with various sizes of hot and cold regions as well as experimental phantom and patient data were used to evaluate the performance of the three forms of HYPR-OSEM, and the results were compared to OSEM with and without a post reconstruction filter. It was observed that the convergence in contrast recovery coefficients (CRC) obtained from all forms of HYPR-OSEM was slower than that obtained from OSEM. Nevertheless, HYPR-OSEM improved SNR without degrading accuracy in terms of resolution and contrast. It achieved better accuracy in CRC at equivalent noise level and better precision than OSEM and better accuracy than filtered OSEM in general. In addition, HYPR-AU-OSEM has been determined to be the more effective form of HYPR-OSEM in terms of accuracy and precision based on the studies conducted in this work.
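    A minimal sketch of a HYPR-style de-noising operator, in the spirit of the description above: the noisy target image is smoothed, divided by the smoothed composite, and the ratio is multiplied by the composite (taken here as the sum of the preceding subset images). The Gaussian kernel width and the toy images are assumptions, not the authors' implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def hypr_operator(target, composite, sigma=2.0):
          """HYPR-style de-noising: composite * smoothed(target) / smoothed(composite)."""
          smooth_target = gaussian_filter(target, sigma)
          smooth_composite = np.maximum(gaussian_filter(composite, sigma), 1e-12)
          return composite * smooth_target / smooth_composite

      # Toy usage: de-noise the latest subset image using the running composite.
      rng = np.random.default_rng(0)
      truth = np.zeros((64, 64)); truth[16:48, 16:48] = 1.0
      subset_images = [truth + rng.normal(0.0, 0.5, truth.shape) for _ in range(8)]
      composite = np.sum(subset_images[:-1], axis=0)              # sum of the preceding subsets
      print(hypr_operator(subset_images[-1], composite, sigma=2.0).std().round(3))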

  10. Muon tomography imaging improvement using optimized limited angle data

    NASA Astrophysics Data System (ADS)

    Bai, Chuanyong; Simon, Sean; Kindem, Joel; Luo, Weidong; Sossong, Michael J.; Steiger, Matthew

    2014-05-01

    Image resolution of muon tomography is limited by the range of zenith angles of cosmic ray muons and the flux rate at sea level. Low flux rate limits the use of advanced data rebinning and processing techniques to improve image quality. By optimizing the limited angle data, however, image resolution can be improved. To demonstrate the idea, physical data of tungsten blocks were acquired on a muon tomography system. The angular distribution and energy spectrum of muons measured on the system were also used to generate simulation data of tungsten blocks of different arrangement (geometry). The data were grouped into subsets using the zenith angle, and volume images were reconstructed from the data subsets using two algorithms. One was a distributed PoCA (point of closest approach) algorithm and the other was an accelerated iterative maximum likelihood/expectation maximization (MLEM) algorithm. Image resolution was compared for different subsets. Results showed that image resolution was better in the vertical direction for subsets with greater zenith angles and better in the horizontal plane for subsets with smaller zenith angles. The overall image resolution appeared to be a compromise of that of the different subsets. This work suggests that the acquired data can be grouped into different limited angle data subsets for optimized image resolution in desired directions. Use of multiple images with resolution optimized in different directions can improve overall imaging fidelity and the intended applications.
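    The PoCA step mentioned above reduces, per muon, to a closest-approach computation between the incoming and outgoing tracks. The sketch below finds the midpoint of the shortest segment between two rays given as point-plus-direction pairs; the input vectors are made up for illustration, not detector data.

      import numpy as np

      def poca(p1, d1, p2, d2):
          """Midpoint of the shortest segment between lines p1 + s*d1 and p2 + t*d2."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          w0 = p1 - p2
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = a * c - b * b
          if abs(denom) < 1e-12:                  # near-parallel tracks: no unique closest point
              s, t = 0.0, e / c
          else:
              s, t = (b * e - c * d) / denom, (a * e - b * d) / denom
          return (p1 + s * d1 + p2 + t * d2) / 2.0

      # Incoming track from above, outgoing track below, slightly deflected by scattering.
      print(poca(np.array([0.0, 0.0, 10.0]), np.array([0.1, 0.0, -1.0]),
                 np.array([1.0, 0.0, -10.0]), np.array([0.05, 0.0, 1.0])))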

  11. Application of distance-dependent resolution compensation and post-reconstruction filtering for myocardial SPECT

    NASA Astrophysics Data System (ADS)

    Hutton, Brian F.; Lau, Yiu H.

    1998-06-01

    Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset size 2 and 4, with/without 3D compensation for detector response (CDR). Also post-reconstruction filtering (PRF) was performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with cutoff frequency greater than improved noise with no reduction in recovery coefficient for myocardium but the effect was less when CDR was incorporated in the reconstruction. CDR alone provided better results than use of PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.

  12. Ordering Elements and Subsets: Examples for Student Understanding

    ERIC Educational Resources Information Center

    Mellinger, Keith E.

    2004-01-01

    Teaching the art of counting can be quite difficult. Many undergraduate students have difficulty separating the ideas of permutation, combination, repetition, etc. This article develops some examples to help explain some of the underlying theory while looking carefully at the selection of various subsets of objects from a larger collection. The…

  13. Changes in hematological indices and lymphocyte subsets in response to whole blood donation in healthy male donors.

    PubMed

    Borai, Anwar; Livingstone, Callum; Alsobhi, Enaam; Al Sofyani, Abeer; Balgoon, Dalal; Farzal, Anwar; Almohammadi, Mohammed; Al-Amri, Abdulafattah; Bahijri, Suhad; Alrowaili, Daad; Bassiuni, Wafaa; Saleh, Ayman; Alrowaili, Norah; Abdelaal, Mohamed

    2017-04-01

    Whole blood donation has immunomodulatory effects, and most of these have been observed at short intervals following blood donation. This study aimed to investigate the impact of whole blood donation on lymphocyte subsets over a typical inter-donation interval. Healthy male subjects were recruited to study changes in complete blood count (CBC) (n = 42) and lymphocyte subsets (n = 16) before and at four intervals up to 106 days following blood donation. Repeated-measures ANOVA was used to compare quantitative variables between different visits. Following blood donation, changes in CBC and erythropoietin were as expected. The neutrophil count increased by 11.3% at 8 days (p < .001). Novel changes were observed in lymphocyte subsets, as the CD4/CD8 ratio increased by 9.2% (p < .05) at 8 days and 13.7% (p < .05) at 22 days. CD16-56 cells decreased by 16.2% (p < .05) at 8 days. All the subsets had returned to baseline by 106 days. Regression analysis showed that the changes in CD16-56 cells and CD4/CD8 ratio were not significant (Wilks' lambda = 0.15 and 0.94, respectively) when adjusted for BMI. In conclusion, following whole blood donation, there are transient changes in lymphocyte subsets. The effect of BMI on lymphocyte subsets and the effect of this immunomodulation on the immune response merit further investigation.

  14. A systems biology approach to the analysis of subset-specific responses to lipopolysaccharide in dendritic cells.

    PubMed

    Hancock, David G; Shklovskaya, Elena; Guy, Thomas V; Falsafi, Reza; Fjell, Chris D; Ritchie, William; Hancock, Robert E W; Fazekas de St Groth, Barbara

    2014-01-01

    Dendritic cells (DCs) are critical for regulating CD4 and CD8 T cell immunity, controlling Th1, Th2, and Th17 commitment, generating inducible Tregs, and mediating tolerance. It is believed that distinct DC subsets have evolved to control these different immune outcomes. However, how DC subsets mount different responses to inflammatory and/or tolerogenic signals in order to accomplish their divergent functions remains unclear. Lipopolysaccharide (LPS) provides an excellent model for investigating responses in closely related splenic DC subsets, as all subsets express the LPS receptor TLR4 and respond to LPS in vitro. However, previous studies of the LPS-induced DC transcriptome have been performed only on mixed DC populations. Moreover, comparisons of the in vivo response of two closely related DC subsets to LPS stimulation have not been reported in the literature to date. We compared the transcriptomes of murine splenic CD8 and CD11b DC subsets after in vivo LPS stimulation, using RNA-Seq and systems biology approaches. We identified subset-specific gene signatures, which included multiple functional immune mediators unique to each subset. To explain the observed subset-specific differences, we used a network analysis approach. While both DC subsets used a conserved set of transcription factors and major signalling pathways, the subsets showed differential regulation of sets of genes that 'fine-tune' the network Hubs expressed in common. We propose a model in which signalling through common pathway components is 'fine-tuned' by transcriptional control of subset-specific modulators, thus allowing for distinct functional outcomes in closely related DC subsets. We extend this analysis to comparable datasets from the literature and confirm that our model can account for cell subset-specific responses to LPS stimulation in multiple subpopulations in mouse and man.

  15. Precollege Predictors of Incapacitated Rape Among Female Students in Their First Year of College

    PubMed Central

    Carey, Kate B.; Durney, Sarah E.; Shepardson, Robyn L.; Carey, Michael P.

    2015-01-01

    Objective: The first year of college is an important transitional period for young adults; it is also a period associated with elevated risk of incapacitated rape (IR) for female students. The goal of this study was to identify prospective risk factors associated with experiencing attempted or completed IR during the first year of college. Method: Using a prospective cohort design, we recruited 483 incoming first-year female students. Participants completed a baseline survey and three follow-up surveys over the next year. At baseline, we assessed precollege alcohol use, marijuana use, sexual behavior, and, for the subset of sexually experienced participants, sex-related alcohol expectancies. At the baseline and all follow-ups, we assessed sexual victimization. Results: Approximately 1 in 6 women (18%) reported IR before entering college, and 15% reported IR during their first year of college. In bivariate analyses, precollege IR history, precollege heavy episodic drinking, number of precollege sexual partners, and sex-related alcohol expectancies (enhancement and disinhibition) predicted first-year IR. In multivariate analyses with the entire sample, only precollege IR (odds ratio = 4.98, p < .001) remained a significant predictor. However, among the subset of sexually experienced participants, both enhancement expectancies and precollege IR predicted IR during the study year. Conclusions: IR during the first year of college is independently associated with a history of IR and with expectancies about alcohol’s enhancement of sexual experience. Alcohol expectancies are a modifiable risk factor that may be a promising target for prevention efforts. PMID:26562590

  16. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
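    To make the rectangle-completion idea above concrete, the following sketch simulates a challenge/response exchange. It is only an illustration of the scheme as described in this record, not the patented implementation; the matrix size, the three-character subsets, and the bookkeeping of used cells are assumptions made for the example.

        import random
        import string

        # Minimal sketch: a 2-D matrix of random character subsets shared by computer and user.
        ROWS, COLS = 5, 5
        random.seed(7)
        matrix = [[''.join(random.choices(string.ascii_uppercase + string.digits, k=3))
                   for _ in range(COLS)] for _ in range(ROWS)]
        used = set()  # subsets are never reused once transmitted

        def issue_challenge():
            """Pick two unused subsets that share neither a row nor a column."""
            while True:
                (r1, c1), (r2, c2) = random.sample(
                    [(r, c) for r in range(ROWS) for c in range(COLS)], 2)
                if r1 != r2 and c1 != c2 and (r1, c1) not in used and (r2, c2) not in used:
                    used.update({(r1, c1), (r2, c2)})
                    return (r1, c1), (r2, c2)

        def expected_response(corner_a, corner_b):
            """The valid reply completes the rectangle defined by the two challenge corners."""
            (r1, c1), (r2, c2) = corner_a, corner_b
            return {matrix[r1][c2], matrix[r2][c1]}

        a, b = issue_challenge()
        print("challenge:", matrix[a[0]][a[1]], matrix[b[0]][b[1]])
        print("valid response:", expected_response(a, b))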

  17. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
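    As a rough illustration of the image-quality metrics used in comparisons like this one, the sketch below computes per-voxel bias, variance and mean squared error from an ensemble of reconstructed images against a known phantom. The array shapes, noise model and toy phantom are assumptions made for the example, not the study's data.

        import numpy as np

        def image_quality_metrics(recons, truth):
            """recons: (n_replicates, ny, nx) reconstructed images; truth: (ny, nx) phantom.
            Returns mean absolute bias, mean variance and mean squared error over voxels."""
            recons = np.asarray(recons, dtype=float)
            mean_img = recons.mean(axis=0)
            bias = mean_img - truth                      # per-voxel bias
            var = recons.var(axis=0)                     # per-voxel variance across replicates
            mse = ((recons - truth) ** 2).mean(axis=0)   # per-voxel MSE (= bias^2 + variance)
            return np.abs(bias).mean(), var.mean(), mse.mean()

        # Toy example: a disc phantom and noisy "reconstructions" of it.
        ny = nx = 64
        yy, xx = np.mgrid[0:ny, 0:nx]
        truth = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
        rng = np.random.default_rng(0)
        recons = truth + rng.normal(0.0, 0.1, size=(50, ny, nx))
        print(image_quality_metrics(recons, truth))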

  18. A comparative assessment of projected meteorological and hydrological droughts: Elucidating the role of temperature

    NASA Astrophysics Data System (ADS)

    Ahmadalipour, Ali; Moradkhani, Hamid; Demirel, Mehmet C.

    2017-10-01

    The changing climate and the associated future increases in temperature are expected to have impacts on drought characteristics and the hydrologic cycle. This paper investigates the projected changes in spatiotemporal characteristics of droughts and their future attributes over the Willamette River Basin (WRB) in the Pacific Northwest U.S. The analysis is performed using two subsets of downscaled CMIP5 global climate models (GCMs), each consisting of 10 models, from two future scenarios (RCP4.5 and RCP8.5) for a 30-year historical period (1970-1999) and 90 years of future projections (2010-2099). Hydrologic modeling is conducted using the Precipitation Runoff Modeling System (PRMS) as a robust distributed hydrologic model with lower computational cost compared to other models. Meteorological and hydrological droughts are studied using three drought indices (i.e. Standardized Precipitation Index, Standardized Precipitation Evapotranspiration Index, Standardized Streamflow Index). Results reveal that the intensity and duration of hydrological droughts are expected to increase over the WRB, even though annual precipitation is expected to increase. On the other hand, the intensity of meteorological droughts does not indicate an aggravation in most cases. We explore the changes of hydrometeorological variables over the basin in order to understand the causes of these differences and to identify the controlling factors of drought. Furthermore, the uncertainty of the projections is quantified for model, scenario, and downscaling uncertainty.
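    For readers unfamiliar with the standardized indices named above, the sketch below shows one common way to compute a Standardized Precipitation Index: fit a gamma distribution to aggregated precipitation and map its cumulative probabilities to standard-normal quantiles. This is a generic illustration, not the study's exact procedure, which may handle zero-precipitation months and aggregation windows differently.

        import numpy as np
        from scipy import stats

        def spi(precip):
            """Standardized Precipitation Index for a 1-D array of aggregated precipitation.
            Fits a gamma distribution and transforms probabilities to z-scores."""
            precip = np.asarray(precip, dtype=float)
            a, loc, scale = stats.gamma.fit(precip, floc=0)       # location fixed at zero
            cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
            cdf = np.clip(cdf, 1e-6, 1 - 1e-6)                    # avoid infinities
            return stats.norm.ppf(cdf)                            # negative values indicate drought

        rng = np.random.default_rng(1)
        monthly_precip = rng.gamma(shape=2.0, scale=40.0, size=360)   # 30 years of monthly totals
        print(spi(monthly_precip)[:12])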

  19. Maintenance Downtime October 17 - 23, 2014

    Atmospheric Science Data Center

    2014-10-23

    ... Impact:  The ASDC will be conducting extended system maintenance Fri 10/17@4pm - Thu 10/23@4pm  EDT Please expect: ... and Customization Tool -  AMAPS, CALIPSO, CERES, MOPITT, TES and TAD Search and Subset Tools   All systems will be ...

  20. Clustering, Seriation, and Subset Extraction of Confusion Data

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Steinley, Douglas

    2006-01-01

    The study of confusion data is a well established practice in psychology. Although many types of analytical approaches for confusion data are available, among the most common methods are the extraction of 1 or more subsets of stimuli, the partitioning of the complete stimulus set into distinct groups, and the ordering of the stimulus set. Although…

  1. Effect of time-of-flight and point spread function modeling on detectability of myocardial defects in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefferkoetter, Joshua, E-mail: dnrjds@nus.edu.sg; Ouyang, Jinsong; Rakvongthai, Yothin

    2014-06-15

    Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as compared to its non-TOF counterpart for all the defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For a typical reconstruction protocol used in clinical practice, i.e., less than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For a large number of iterations, TOF+PSF yields the best observer performance.
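    The channelized Hotelling observer used as a surrogate for human performance in this record can be summarized briefly: images are reduced to a small channel-feature space and the Hotelling template is formed from the pooled channel covariance. The channel definitions, data shapes and defect model below are simplified assumptions for illustration, not the authors' implementation.

        import numpy as np

        def cho_snr(signal_imgs, noise_imgs, channels):
            """Channelized Hotelling observer SNR.
            signal_imgs, noise_imgs: (n_images, n_pixels); channels: (n_pixels, n_channels)."""
            vs = signal_imgs @ channels            # channel outputs, defect present
            vn = noise_imgs @ channels             # channel outputs, defect absent
            dmean = vs.mean(axis=0) - vn.mean(axis=0)
            # pooled intra-class covariance of the channel outputs
            cov = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
            w = np.linalg.solve(cov, dmean)        # Hotelling template in channel space
            return float(np.sqrt(dmean @ w))       # observer SNR (detectability index)

        # Toy example with random "channels" standing in for e.g. difference-of-Gaussian channels.
        rng = np.random.default_rng(2)
        n_pix, n_ch = 32 * 32, 4
        channels = rng.normal(size=(n_pix, n_ch))
        noise = rng.normal(size=(200, n_pix))
        signal = noise + 0.05 * rng.normal(size=n_pix)   # the same faint defect added to every image
        print(cho_snr(signal, noise, channels))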

  2. Structure, organization, and sequence of alpha satellite DNA from human chromosome 17: evidence for evolution by unequal crossing-over and an ancestral pentamer repeat shared with the human X chromosome.

    PubMed

    Waye, J S; Willard, H F

    1986-09-01

    The centromeric regions of all human chromosomes are characterized by distinct subsets of a diverse tandemly repeated DNA family, alpha satellite. On human chromosome 17, the predominant form of alpha satellite is a 2.7-kilobase-pair higher-order repeat unit consisting of 16 alphoid monomers. We present the complete nucleotide sequence of the 16-monomer repeat, which is present in 500 to 1,000 copies per chromosome 17, as well as that of a less abundant 15-monomer repeat, also from chromosome 17. These repeat units were approximately 98% identical in sequence, differing by the exclusion of precisely 1 monomer from the 15-monomer repeat. Homologous unequal crossing-over is suggested as a probable mechanism by which the different repeat lengths on chromosome 17 were generated, and the putative site of such a recombination event is identified. The monomer organization of the chromosome 17 higher-order repeat unit is based, in part, on tandemly repeated pentamers. A similar pentameric suborganization has been previously demonstrated for alpha satellite of the human X chromosome. Despite the organizational similarities, substantial sequence divergence distinguishes these subsets. Hybridization experiments indicate that the chromosome 17 and X subsets are more similar to each other than to the subsets found on several other human chromosomes. We suggest that the chromosome 17 and X alpha satellite subsets may be related components of a larger alphoid subfamily which have evolved from a common ancestral repeat into the contemporary chromosome-specific subsets.

  3. Impact of genetic features on treatment decisions in AML.

    PubMed

    Döhner, Hartmut; Gaidzik, Verena I

    2011-01-01

    In recent years, research in molecular genetics has been instrumental in deciphering the molecular pathogenesis of acute myeloid leukemia (AML). With the advent of novel genomics technologies such as next-generation sequencing, it is expected that virtually all genetic lesions in AML will soon be identified. Gene mutations or deregulated expression of genes or sets of genes now allow us to explore the enormous diversity among cytogenetically defined subsets of AML, in particular the large subset of cytogenetically normal AML. Nonetheless, there are several challenges, such as discriminating driver from passenger mutations, evaluating the prognostic and predictive value of a specific mutation in the concert of the various concurrent mutations, or translating findings from molecular disease pathogenesis into novel therapies. Progress in developing molecular targeted therapies is unlikely to be fast: contrary to initial assumptions, development has been slow, and the various reports of promising new compounds will need to be put into perspective because many of these drugs did not show the expected effects.

  4. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
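    A minimal sketch of the kernelized ML-EM idea described in this record is given below: the image is parameterized as x = K alpha, with K built from prior-image features, and the usual EM update is applied to the coefficients alpha. The tiny system matrix, the positional kernel, and the data are toy assumptions, not the authors' code or kernel construction.

        import numpy as np

        def kernel_mlem(y, A, K, n_iter=50):
            """Kernelized ML-EM: reconstruct x = K @ alpha from counts y with system matrix A.
            y: (n_bins,), A: (n_bins, n_voxels), K: (n_voxels, n_voxels)."""
            AK = A @ K
            alpha = np.ones(K.shape[1])
            sens = AK.T @ np.ones(A.shape[0]) + 1e-12      # sensitivity (normalization) term
            for _ in range(n_iter):
                ybar = AK @ alpha + 1e-12                  # expected counts
                alpha *= (AK.T @ (y / ybar)) / sens        # multiplicative EM update on coefficients
            return K @ alpha                               # final image

        # Toy 1-D problem: a Gaussian kernel built from voxel position standing in for anatomy.
        n_vox, n_bins = 30, 60
        rng = np.random.default_rng(3)
        A = rng.uniform(0, 1, size=(n_bins, n_vox))
        x_true = np.zeros(n_vox); x_true[10:20] = 5.0
        y = rng.poisson(A @ x_true)
        pos = np.arange(n_vox)
        K = np.exp(-(pos[:, None] - pos[None, :]) ** 2 / (2 * 2.0 ** 2))
        K /= K.sum(axis=1, keepdims=True)                  # row-normalized kernel matrix
        print(np.round(kernel_mlem(y, A, K), 2))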

  5. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  6. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  7. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  8. A quantitative reconstruction software suite for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Namías, Mauro; Jeraj, Robert

    2017-11-01

    Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
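    Since several of the records in this collection rely on OSEM, a compact sketch of one ordered-subsets EM pass may be useful: projections are split into subsets and the multiplicative EM update is applied subset by subset. The projector here is a plain matrix multiply; attenuation, scatter and collimator-response corrections would enter through the system model and an additive scatter term, which are omitted from this simplified example.

        import numpy as np

        def osem(y, A, n_subsets=4, n_iter=5):
            """Ordered-subset EM. y: measured projections (n_bins,); A: system matrix (n_bins, n_voxels).
            Bins are interleaved into subsets; each sub-iteration updates the image multiplicatively."""
            x = np.ones(A.shape[1])
            subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
            for _ in range(n_iter):
                for idx in subsets:
                    As = A[idx]                                  # rows of this subset
                    ybar = As @ x + 1e-12                        # forward projection
                    x *= (As.T @ (y[idx] / ybar)) / (As.T @ np.ones(len(idx)) + 1e-12)
            return x

        rng = np.random.default_rng(4)
        A = rng.uniform(0, 1, size=(80, 40))
        x_true = rng.uniform(0, 10, size=40)
        y = rng.poisson(A @ x_true)
        print(np.round(osem(y, A), 1)[:10])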

  9. Curvature and gravity actions for matrix models: II. The case of general Poisson structures

    NASA Astrophysics Data System (ADS)

    Blaschke, Daniel N.; Steinacker, Harold

    2010-12-01

    We study the geometrical meaning of higher order terms in matrix models of Yang-Mills type in the semi-classical limit, generalizing recent results (Blaschke and Steinacker 2010 Class. Quantum Grav. 27 165010 (arXiv:1003.4132)) to the case of four-dimensional spacetime geometries with general Poisson structure. Such terms are expected to arise e.g. upon quantization of the IKKT-type models. We identify terms which depend only on the intrinsic geometry and curvature, including modified versions of the Einstein-Hilbert action, as well as terms which depend on the extrinsic curvature. Furthermore, a mechanism is found which implies that the effective metric G on the spacetime brane M ⊂ R^D 'almost' coincides with the induced metric g. Deviations from G = g are suppressed, and characterized by the would-be U(1) gauge field.

  10. Archive Management of NASA Earth Observation Data to Support Cloud Analysis

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark A.

    2017-01-01

    NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify accommodating such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system, with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly.

  11. Effects of Number of Animals Monitored on Representations of Cattle Group Movement Characteristics and Spatial Occupancy

    PubMed Central

    Liu, Tong; Green, Angela R.; Rodríguez, Luis F.; Ramirez, Brett C.; Shike, Daniel W.

    2015-01-01

    The number of animals required to represent the collective characteristics of a group remains a concern in animal movement monitoring with GPS. Monitoring a subset of animals from a group instead of all animals can reduce costs and labor; however, incomplete data may cause information losses and inaccuracy in subsequent data analyses. In cattle studies, little work has been conducted to determine the number of cattle within a group needed to be instrumented considering subsequent analyses. Two different groups of cattle (a mixed group of 24 beef cows and heifers, and another group of 8 beef cows) were monitored with GPS collars at 4 min intervals on intensively managed pastures and corn residue fields in 2011. The effects of subset group size on cattle movement characterization and spatial occupancy analysis were evaluated by comparing the results between subset groups and the entire group for a variety of summarization parameters. As expected, more animals yield better results for all parameters. Results show the average group travel speed and daily travel distances are overestimated as subset group size decreases, while the average group radius is underestimated. Accuracy of group centroid locations and group radii are improved linearly as subset group size increases. A kernel density estimation was performed to quantify the spatial occupancy by cattle via GPS location data. Results show animals among the group had high similarity of spatial occupancy. Decisions regarding choosing an appropriate subset group size for monitoring depend on the specific use of data for subsequent analysis: a small subset group may be adequate for identifying areas visited by cattle; larger subset group size (e.g. subset group containing more than 75% of animals) is recommended to achieve better accuracy of group movement characteristics and spatial occupancy for the use of correlating cattle locations with other environmental factors. PMID:25647571
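    The group-movement summaries discussed above (group centroid, group radius, travel distance and speed) are straightforward to compute from GPS fixes. The sketch below illustrates them for a monitored subset of animals, using planar coordinates, random-walk tracks and the 4-minute sampling interval mentioned in the record as simplifying assumptions.

        import numpy as np

        def group_metrics(tracks, dt_minutes=4.0):
            """tracks: (n_animals, n_fixes, 2) planar x/y positions in metres.
            Returns group centroid path, mean group radius, and mean travel speed (m/min)."""
            tracks = np.asarray(tracks, dtype=float)
            centroid = tracks.mean(axis=0)                                  # (n_fixes, 2)
            radius = np.linalg.norm(tracks - centroid, axis=2).mean()       # mean distance to centroid
            step = np.linalg.norm(np.diff(tracks, axis=1), axis=2)          # per-animal step lengths
            speed = step.sum(axis=1).mean() / (dt_minutes * (tracks.shape[1] - 1))
            return centroid, radius, speed

        rng = np.random.default_rng(5)
        full_herd = np.cumsum(rng.normal(0, 5, size=(24, 360, 2)), axis=1)  # random-walk tracks
        subset = full_herd[:8]                                              # monitoring only 8 animals
        print("radius(all) %.1f m, radius(subset) %.1f m"
              % (group_metrics(full_herd)[1], group_metrics(subset)[1]))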

  12. [Varicocele and coincidental abacterial prostato-vesiculitis: negative role about the sperm output].

    PubMed

    Vicari, Enzo; La Vignera, Sandro; Tracia, Angelo; Cardì, Francesco; Donati, Angelo

    2003-03-01

    To evaluate the frequency and the role of a coincidentally expressed abacterial prostato-vesiculitis (PV) on sperm output in patients with left varicocele (Vr). We evaluated 143 selected infertile patients (mean age 27 years, range 21-43), with oligo- and/or astheno- and/or teratozoospermia (OAT) subdivided in two groups. Group A included 76 patients with previous varicocelectomy and persistent OAT. Group B included 67 infertile patients (mean age 26 years, range 21-37) with OAT and not varicocelectomized. Patients with Vr and coincidental didymo-epididymal ultrasound (US) abnormalities were excluded from the study. Following rectal prostato-vesicular ultrasonography, each group was subdivided in two subsets on the basis of the absence (group A: subset Vr-/PV-; and group B: subset Vr+/PV-) or the presence of an abacterial PV (group A: subset Vr-/PV+; group B: subset Vr+/PV+). Particularly, PV was present in 47.4% and 41.8% patients of groups A and B, respectively. This coincidental pathology was ipsilateral with Vr in the 61% of the cases. Semen analysis was performed in all patients. Patients of group A showed a total sperm number significantly higher than those found in group B. In presence of PV, sperm parameters were not significantly different between matched--subsets (Vr-/PV+ vs. Vr+/PV+). In absence of PV, the sperm density, the total sperm number and the percentage of forward motility from subset with previous varicocelectomy (Vr-/PV) exhibited values significantly higher than those found in the matched--subset (Vr+/PV-). Sperm analysis alone performed in patients with left Vr is not a useful prognostic post-varicocelectomy marker. Since following varicocelectomy a lack of sperm response could mask another coincidental pathology, the identification through US scans of a possible PV may be mandatory. On the other hand, an integrated uro-andrological approach, including US scans, allows to enucleate subsets of patients with Vr alone, who will have an expected better sperm response following Vr repair.

  13. Ordered-subsets linkage analysis detects novel Alzheimer disease loci on chromosomes 2q34 and 15q22.

    PubMed

    Scott, William K; Hauser, Elizabeth R; Schmechel, Donald E; Welsh-Bohmer, Kathleen A; Small, Gary W; Roses, Allen D; Saunders, Ann M; Gilbert, John R; Vance, Jeffery M; Haines, Jonathan L; Pericak-Vance, Margaret A

    2003-11-01

    Alzheimer disease (AD) is a complex disorder characterized by a wide range, within and between families, of ages at onset of symptoms. Consideration of age at onset as a covariate in genetic-linkage studies may reduce genetic heterogeneity and increase statistical power. Ordered-subsets analysis includes continuous covariates in linkage analysis by rank ordering families by a covariate and summing LOD scores to find a subset giving a significantly increased LOD score relative to the overall sample. We have analyzed data from 336 markers in 437 multiplex (≥2 sampled individuals with AD) families included in a recent genomic screen for AD loci. To identify genetic heterogeneity by age at onset, families were ordered by increasing and decreasing mean and minimum ages at onset. Chromosomewide significance of increases in the LOD score in subsets relative to the overall sample was assessed by permutation. A statistically significant increase in the nonparametric multipoint LOD score was observed on chromosome 2q34, with a peak LOD score of 3.2 at D2S2944 (P=.008) in 31 families with a minimum age at onset between 50 and 60 years. The LOD score in the chromosome 9p region previously linked to AD increased to 4.6 at D9S741 (P=.01) in 334 families with minimum age at onset between 60 and 75 years. LOD scores were also significantly increased on chromosome 15q22: a peak LOD score of 2.8 (P=.0004) was detected at D15S1507 (60 cM) in 38 families with minimum age at onset ≥79 years, and a peak LOD score of 3.1 (P=.0006) was obtained at D15S153 (62 cM) in 43 families with mean age at onset >80 years. Thirty-one families were contained in both 15q22 subsets, indicating that these results are likely detecting the same locus. There is little overlap in these subsets, underscoring the utility of age at onset as a marker of genetic heterogeneity. These results indicate that linkage to chromosome 9p is strongest in late-onset AD and that regions on chromosome 2q34 and 15q22 are linked to early-onset AD and very-late-onset AD, respectively.
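    The ordered-subsets idea itself is simple to sketch: rank families by a covariate, accumulate per-family LOD contributions along that ordering, find the subset whose summed LOD is largest, and assess significance by permuting the covariate. The sketch below is a generic illustration with simulated per-family LOD scores, not the authors' software or data.

        import numpy as np

        def ordered_subset_scan(family_lods, covariate):
            """Return the best cumulative LOD and subset size when families are ranked by covariate."""
            order = np.argsort(covariate)                  # e.g. increasing minimum age at onset
            cum = np.cumsum(family_lods[order])
            k = int(np.argmax(cum))
            return cum[k], k + 1

        def permutation_p(family_lods, covariate, n_perm=2000, seed=0):
            """Empirical p-value: how often does a random ordering do as well as the observed one?"""
            rng = np.random.default_rng(seed)
            observed, _ = ordered_subset_scan(family_lods, covariate)
            perms = [ordered_subset_scan(family_lods, rng.permutation(covariate))[0]
                     for _ in range(n_perm)]
            return float(np.mean(np.array(perms) >= observed))

        rng = np.random.default_rng(6)
        lods = rng.normal(0.0, 0.3, size=437)              # per-family LOD contributions
        onset = rng.uniform(50, 90, size=437)              # covariate: minimum age at onset
        lods[onset < 60] += 0.1                            # a weak subset-specific signal
        best, size = ordered_subset_scan(lods, onset)
        print(best, size, permutation_p(lods, onset))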

  14. Technical Proceedings fo the Symposium on Military Information Systems Engineering (Panel 11 on Information Processing Technology, Defence Research Group).

    DTIC Science & Technology

    1991-12-27

    session. The following gives the flavour of the comments made. 17. Prototyping captures requirements. The prototype exercises requirements and allows the...can modify the data in a given sub-set. These sub-sets can be used as granules of database distribution in order to simplify access control. (3

  15. Search for anomalous kinematics in tt dilepton events at CDF II.

    PubMed

    Acosta, D; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arisawa, T; Arguin, J-F; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Bacchetta, N; Bachacou, H; Badgett, W; Barbaro-Galtieri, A; Barker, G J; Barnes, V E; Barnett, B A; Baroiant, S; Barone, M; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Ben-Haim, E; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bishai, M; Blair, R E; Blocker, C; Bloom, K; Blumenfeld, B; Bocci, A; Bodek, A; Bolla, G; Bolshov, A; Booth, P S L; Bortoletto, D; Boudreau, J; Bourov, S; Brau, B; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canepa, A; Casarsa, M; Carlsmith, D; Carron, S; Carosi, R; Cavalli-Sforza, M; Castro, A; Catastini, P; Cauz, D; Cerri, A; Cerrito, L; Chapman, J; Chen, C; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Chu, M L; Chuang, S; Chung, J Y; Chung, W-H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A G; Clark, D; Coca, M; Connolly, A; Convery, M; Conway, J; Cooper, B; Cordelli, M; Cortiana, G; Cranshaw, J; Cuevas, J; Culbertson, R; Currat, C; Cyr, D; Dagenhart, D; Da Ronco, S; D'Auria, S; de Barbaro, P; De Cecco, S; De Lentdecker, G; Dell'Agnello, S; Dell'Orso, M; Demers, S; Demortier, L; Deninno, M; De Pedis, D; Derwent, P F; Dionisi, C; Dittmann, J R; Dörr, C; Doksus, P; Dominguez, A; Donati, S; Donega, M; Donini, J; D'Onofrio, M; Dorigo, T; Drollinger, V; Ebina, K; Eddy, N; Ehlers, J; Ely, R; Erbacher, R; Erdmann, M; Errede, D; Errede, S; Eusebi, R; Fang, H-C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferretti, C; Field, R D; Flanagan, G; Flaugher, B; Flores-Castillo, L R; Foland, A; Forrester, S; Foster, G W; Franklin, M; Freeman, J C; Fujii, Y; Furic, I; Gajjar, A; Gallas, A; Galyardt, J; Gallinaro, M; Garcia-Sciveres, M; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D W; Gerchtein, E; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Ginsburg, C; Giolo, K; Giordani, M; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, D; Goldstein, J; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Gresele, A; Griffiths, M; Grosso-Pilcher, C; Grundler, U; Guenther, M; Guimaraes da Costa, J; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harr, R F; Harris, R M; Hartmann, F; Hatakeyama, K; Hauser, J; Hays, C; Hayward, H; Heider, E; Heinemann, B; Heinrich, J; Hennecke, M; Herndon, M; Hill, C; Hirschhbuehl, D; Hocker, A; Hoffman, K D; Holloway, A; Hou, S; Houlden, M A; Huffman, B T; Huang, Y; Hughes, R E; Huston, J; Ikado, K; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Issever, C; Ivanov, A; Iwata, Y; Iyutin, B; James, E; Jang, D; Jarrell, J; Jeans, D; Jensen, H; Jeon, E J; Jones, M; Joo, K K; Jun, S Y; Junk, T; Kamon, T; Kang, J; Karagoz Unel, M; Karchin, P E; Kartal, S; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, M S; Kim, S B; Kim, S H; Kim, T H; Kim, Y K; King, B T; Kirby, M; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Kobayashi, H; Koehn, P; Kong, D J; Kondo, K; Konigsberg, J; Kordas, K; Korn, A; Korytov, A; Kotelnikov, K; Kotwal, A V; Kovalev, A; Kraus, J; 
Kravchenko, I; Kreymer, A; Kroll, J; Kruse, M; Krutelyov, V; Kuhlmann, S E; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, J; Lancaster, M; Lander, R; Lannon, K; Lath, A; Latino, G; Lauhakangas, R; Lazzizzera, I; Le, Y; Lecci, C; LeCompte, T; Lee, J; Lee, J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Li, K; Lin, C; Lin, C S; Lindgren, M; Liss, T M; Lister, A; Litvintsev, D O; Liu, T; Liu, Y; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; MacQueen, D; Madrak, R; Maeshima, K; Maksimovic, P; Malferrari, L; Manca, G; Marginean, R; Marino, C; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Matsunaga, H; Mattson, M; Mazzanti, P; McFarland, K S; McGivern, D; McIntyre, P M; McNamara, P; NcNulty, R; Mehta, A; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miller, L; Miller, R; Miller, J S; Miquel, R; Miscetti, S; Mitselmakher, G; Miyamoto, A; Miyazaki, Y; Moggi, N; Mohr, B; Moore, R; Morello, M; Movilla Fernandez, P A; Mukherjee, A; Mulhearn, M; Muller, T; Mumford, R; Munar, A; Murat, P; Nachtman, J; Nahn, S; Nakamura, I; Nakano, I; Napier, A; Napora, R; Naumov, D; Necula, V; Niell, F; Nielsen, J; Nelson, C; Nelson, T; Neu, C; Neubauer, M S; Newman-Holmes, C; Nigmanov, T; Nodulman, L; Norniella, O; Oesterberg, K; Ogawa, T; Oh, S H; Oh, Y D; Ohsugi, T; Okusawa, T; Oldeman, R; Orava, R; Orejudos, W; Pagliarone, C; Palencia, E; Paoletti, R; Papadimitriou, V; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Pauly, T; Paus, C; Pellett, D; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pitts, K T; Plager, C; Pompos, A; Pondrom, L; Pope, G; Portell, X; Poukhov, O; Prakoshyn, F; Pratt, T; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Rademachker, J; Rahaman, M A; Rakitine, A; Rappoccio, S; Ratnikov, F; Ray, H; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Rimondi, F; Rinnert, K; Ristori, L; Robertson, W J; Robson, A; Rodrigo, T; Rolli, S; Rosenson, L; Roser, R; Rossin, R; Rott, C; Russ, J; Rusu, V; Ruiz, A; Ryan, D; Saarikko, H; Sabik, S; Safonov, A; St Denis, R; Sakumoto, W K; Salamanna, G; Saltzberg, D; Sanchez, C; Sansoni, A; Santi, L; Sarkar, S; Sato, K; Savard, P; Savoy-Navarro, A; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semeria, F; Sexton-Kennedy, L; Sfiligoi, I; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Siegrist, J; Siket, M; Sill, A; Sinervo, P; Sisakyan, A; Skiba, A; Slaughter, A J; Sliwa, K; Smirnov, D; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S V; Spalding, J; Spezziga, M; Spiegel, L; Spinella, F; Spiropulu, M; Squillacioti, P; Stadie, H; Stelzer, B; Stelzer-Chilton, O; Strologas, J; Stuart, D; Sukhanov, A; Sumorok, K; Sun, H; Suzuki, T; Taffard, A; Tafirout, R; Takach, S F; Takano, H; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tanimoto, N; Tapprogge, S; Tecchio, M; Teng, P K; Terashi, K; Tesarek, R J; Tether, S; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Trkaczyk, S; Toback, D; Tollefson, K; Tomura, T; Tonelli, D; Tönnesmann, M; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tseng, J; Tsuchiya, R; Tsuno, S; Tsybychev, D; Turini, N; Turner, M; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vacavant, L; Vaiciulis, A; Varganov, A; Vataga, E; Vejcik, S; Velev, G; Veszpremi, V; Veramendi, G; Vickey, T; Vidal, R; Vila, I; 
Vilar, R; Vollrath, I; Volobouev, I; von der Mey, M; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wallny, R; Walter, T; Yamashita, T; Yamamoto, K; Wan, Z; Wang, M J; Wang, S M; Warburton, A; Ward, B; Waschke, S; Waters, D; Watts, T; Weber, M; Wester, W C; Whitehouse, B; Wicklund, A B; Wicklund, E; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolter, M; Worcester, M; Worm, S; Wright, T; Wu, X; Würthwein, F; Wyatt, A; Yagil, A; Yang, C; Yang, U K; Yao, W; Yeh, G P; Yi, K; Yoh, J; Yoon, P; Yorita, K; Yoshida, T; Yu, I; Yu, S; Yu, Z; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zetti, F; Zhou, J; Zsenei, A; Zucchelli, S

    2005-07-08

    We report on a search for anomalous kinematics of tt̄ dilepton events in pp̄ collisions at √s = 1.96 TeV using 193 pb⁻¹ of data collected with the CDF II detector. We developed a new a priori technique designed to isolate the subset in a data sample revealing the largest deviation from standard model (SM) expectations and to quantify the significance of this departure. In the four-variable space considered, no particular subset shows a significant discrepancy, and we find that the probability of obtaining a data sample less consistent with the SM than what is observed is 1.0%-4.5%.

  16. A data driven partial ambiguity resolution: Two step success rate criterion, and its simulation demonstration

    NASA Astrophysics Data System (ADS)

    Hou, Yanqing; Verhagen, Sandra; Wu, Jie

    2016-12-01

    Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. In the case of weak models (i.e., low precision of data), however, the success rate of AR may be low, which may consequently introduce large errors into the baseline solution in cases of wrong fixing. Partial Ambiguity Resolution (PAR) has therefore been proposed, so that the baseline precision can be improved by fixing only a subset of ambiguities with a high success rate. This contribution proposes a new PAR strategy in which the subset is selected such that the expected precision gain is maximized among a set of pre-selected subsets, while at the same time the failure rate is controlled. These pre-selected subsets are chosen to have the highest success rate among those of the same size. The strategy is called the Two-step Success Rate Criterion (TSRC) because it first tries to fix a relatively large subset, using the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset is fixed and validated by the ratio test so as to fulfil the overall failure rate criterion. It is shown how the method can be used in practice without introducing a large additional computational effort and, more importantly, how it can improve (or at least not deteriorate) availability in terms of baseline precision compared to the classical Success Rate Criterion (SRC) PAR strategy, based on a simulation validation. In the simulation validation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For medium to long baselines with dual-constellation GNSS, the average improvement is around 20-30%.
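    A simplified sketch of the success-rate-driven subset selection that underlies PAR is shown below: given (decorrelated) ambiguity standard deviations, the per-ambiguity integer-rounding success rates are multiplied together, and the largest subset whose joint success rate clears a threshold is retained. This reproduces only the classical success-rate criterion for context; the two-step FFRT validation and the expected-precision-gain ranking of the TSRC method are not modelled here, and the standard deviations are invented for the example.

        import math

        def rounding_success_rate(sigma):
            """P(correct integer rounding) for a single ambiguity with std dev sigma (cycles)."""
            return math.erf(0.5 / (sigma * math.sqrt(2.0)))

        def select_subset(sigmas, p_min=0.999):
            """Keep the most precise ambiguities whose joint success rate stays above p_min."""
            order = sorted(range(len(sigmas)), key=lambda i: sigmas[i])   # most precise first
            subset, joint = [], 1.0
            for i in order:
                p = joint * rounding_success_rate(sigmas[i])
                if p < p_min:
                    break
                subset, joint = subset + [i], p
            return subset, joint

        sigmas = [0.04, 0.06, 0.09, 0.15, 0.30, 0.45]   # ambiguity std devs in cycles (illustrative)
        idx, p = select_subset(sigmas)
        print("fix ambiguities", idx, "joint success rate %.4f" % p)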

  17. Studies of a Next-Generation Silicon-Photomultiplier-Based Time-of-Flight PET/CT System.

    PubMed

    Hsu, David F C; Ilan, Ezgi; Peterson, William T; Uribe, Jorge; Lubberink, Mark; Levin, Craig S

    2017-09-01

    This article presents system performance studies for the Discovery MI PET/CT system, a new time-of-flight system based on silicon photomultipliers. System performance and clinical imaging were compared between this next-generation system and other commercially available PET/CT and PET/MR systems, as well as between different reconstruction algorithms. Methods: Spatial resolution, sensitivity, noise-equivalent counting rate, scatter fraction, counting rate accuracy, and image quality were characterized with the National Electrical Manufacturers Association NU-2 2012 standards. Energy resolution and coincidence time resolution were measured. Tests were conducted independently on two Discovery MI scanners installed at Stanford University and Uppsala University, and the results were averaged. Back-to-back patient scans were also performed between the Discovery MI, Discovery 690 PET/CT, and SIGNA PET/MR systems. Clinical images were reconstructed using both ordered-subset expectation maximization and Q.Clear (block-sequential regularized expectation maximization with point-spread function modeling) and were examined qualitatively. Results: The averaged full widths at half maximum (FWHMs) of the radial/tangential/axial spatial resolution reconstructed with filtered backprojection at 1, 10, and 20 cm from the system center were, respectively, 4.10/4.19/4.48 mm, 5.47/4.49/6.01 mm, and 7.53/4.90/6.10 mm. The averaged sensitivity was 13.7 cps/kBq at the center of the field of view. The averaged peak noise-equivalent counting rate was 193.4 kcps at 21.9 kBq/mL, with a scatter fraction of 40.6%. The averaged contrast recovery coefficients for the image-quality phantom were 53.7, 64.0, 73.1, 82.7, 86.8, and 90.7 for the 10-, 13-, 17-, 22-, 28-, and 37-mm-diameter spheres, respectively. The average photopeak energy resolution was 9.40% FWHM, and the average coincidence time resolution was 375.4 ps FWHM. Clinical image comparisons between the PET/CT systems demonstrated the high quality of the Discovery MI. Comparisons between the Discovery MI and SIGNA showed a similar spatial resolution and overall imaging performance. Lastly, the results indicated significantly enhanced image quality and contrast-to-noise performance for Q.Clear, compared with ordered-subset expectation maximization. Conclusion: Excellent performance was achieved with the Discovery MI, including 375 ps FWHM coincidence time resolution and sensitivity of 14 cps/kBq. Comparisons between reconstruction algorithms and other multimodal silicon photomultiplier and non-silicon photomultiplier PET detector system designs indicated that performance can be substantially enhanced with this next-generation system. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  18. Redefining Myeloid Cell Subsets in Murine Spleen

    PubMed Central

    Hey, Ying-Ying; Tan, Jonathan K. H.; O’Neill, Helen C.

    2016-01-01

    Spleen is known to contain multiple dendritic and myeloid cell subsets, distinguishable on the basis of phenotype, function and anatomical location. As a result of recent intensive flow cytometric analyses, splenic dendritic cell (DC) subsets are now better characterized than other myeloid subsets. In order to identify and fully characterize a novel splenic subset termed “L-DC” in relation to other myeloid cells, it was necessary to investigate myeloid subsets in more detail. In terms of cell surface phenotype, L-DC were initially characterized as a CD11bhiCD11cloMHCII−Ly6C−Ly6G− subset in murine spleen. Their expression of CD43, lack of MHCII, and a low level of CD11c was shown to best differentiate L-DC by phenotype from conventional DC subsets. A complete analysis of all subsets in spleen led to the classification of CD11bhiCD11cloMHCII−Ly6CloLy6G− cells as monocytes expressing CX3CR1, CD43 and CD115. Siglec-F expression was used to identify a specific eosinophil population, distinguishable from both Ly6Clo and Ly6Chi monocytes, and other DC subsets. L-DC were characterized as a clear subset of CD11bhiCD11cloMHCII−Ly6C−Ly6G− cells, which are CD43+, Siglec-F− and CD115−. Changes in the prevalence of L-DC compared to other subsets in spleens of mutant mice confirmed the phenotypic distinction between L-DC, cDC and monocyte subsets. L-DC development in vivo was shown to occur independently of the BATF3 transcription factor that regulates cDC development, and also independently of the FLT3L and GM-CSF growth factors which drive cDC and monocyte development, so distinguishing L-DC from these commonly defined cell types. PMID:26793192

  19. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses

    PubMed Central

    Faith, Daniel P.

    2015-01-01

    The phylogenetic diversity measure (‘PD’) measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672
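    The notion of expected PD mentioned here has a simple closed form when taxon extinctions are treated as independent: each branch contributes its length weighted by the probability that at least one descendant taxon survives. The sketch below illustrates that calculation on a toy tree; the tree, branch lengths and survival probabilities are invented for the example.

        def expected_pd(branches, survival):
            """Expected phylogenetic diversity under independent extinctions.
            branches: list of (length, set_of_descendant_taxa); survival: {taxon: P(survive)}.
            A branch contributes its length times the probability that at least one
            descendant taxon survives."""
            total = 0.0
            for length, taxa in branches:
                p_all_lost = 1.0
                for t in taxa:
                    p_all_lost *= 1.0 - survival[t]
                total += length * (1.0 - p_all_lost)
            return total

        # Toy tree: ((A,B),C); branch lengths are illustrative.
        branches = [(1.0, {"A"}), (1.0, {"B"}), (2.0, {"C"}), (0.5, {"A", "B"})]
        survival = {"A": 0.9, "B": 0.5, "C": 0.2}
        print(expected_pd(branches, survival))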

  20. Serial killers: ordering caspase activation events in apoptosis.

    PubMed

    Slee, E A; Adrain, C; Martin, S J

    1999-11-01

    Caspases participate in the molecular control of apoptosis in several guises; as triggers of the death machinery, as regulatory elements within it, and ultimately as a subset of the effector elements of the machinery itself. The mammalian caspase family is steadily growing and currently contains 14 members. At present, it is unclear whether all of these proteases participate in apoptosis. Thus, current research in this area is focused upon establishing the repertoire and order of caspase activation events that occur during the signalling and demolition phases of cell death. Evidence is accumulating to suggest that proximal caspase activation events are typically initiated by molecules that promote caspase aggregation. As expected, distal caspase activation events are likely to be controlled by caspases activated earlier in the cascade. However, recent data has cast doubt upon the functional demarcation of caspases into signalling (upstream) and effector (downstream) roles based upon their prodomain lengths. In particular, caspase-3 may perform an important role in propagating the caspase cascade, in addition to its role as an effector caspase within the death programme. Here, we discuss the apoptosis-associated caspase cascade and the hierarchy of caspase activation events within it.

  1. Old-fashioned responses in an updating memory task.

    PubMed

    Ruiz, M; Elosúa, M R; Lechuga, M T

    2005-07-01

    Errors in a running memory task are analysed. Participants were presented with a variable-length list of items and were asked to report the last four items. It has been proposed (Morris & Jones, 1990) that this task requires two mechanisms: the temporal storage of the target set by the articulatory loop and its updating by the central executive. Two implicit assumptions in this proposal are (a) the preservation of serial order, and (b) participants' capacity to discard earlier items from the target subset as list presentation is running, and new items are appended. Order preservation within the updated target list and the inhibition of the outdated list items should imply a relatively higher rate of location errors for items from the medial positions of the target list and a lower rate of intrusion errors from the outdated and inhibited items from the pretarget positions. Contrary to these expectations, for both consonants (Experiment 1) and words (Experiment 2) we found recency effects and a relatively high rate of intrusions from the final pretarget positions, most of them from the very last. Similar effects were apparent with the embedded four-item lists for catch trials. These results are clearly at odds with the presumed updating by the central executive.

  2. A Comparison of Seyfert 1 and 2 Host Galaxies

    NASA Astrophysics Data System (ADS)

    De Robertis, M.; Virani, S.

    2000-12-01

    Wide-field, R-band CCD data of 15 Seyfert 1 and 15 Seyfert 2 galaxies taken from the CfA survey were analysed in order to compare the properties of their host galaxies. As well, B-band images for a subset of 12 Seyfert 1s and 7 Seyfert 2s were acquired and analysed in the same way. A robust technique for decomposing the three components (nucleus, bulge and disk) was developed in order to determine the structural parameters for each galaxy. In effect, the nuclear contribution was removed empirically by using a spatially nearby, high signal-to-noise ratio point source as a template. Profile fits to the bulge+disk ignored data within three seeing disks of the nucleus. Of the many parameters that were compared between Seyfert 1s and 2s, only two distributions differed at greater than the 95% confidence level for the K-S test: the magnitude of the nuclear component, and the radial color gradient outside the nucleus. The former is expected. The latter could be consistent with some proposed evolutionary models. There is some suggestion that other parameters may differ, but at a lower confidence level.

  3. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    PubMed

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With unprecedented high channel density (450 channels/cm^3) image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to exploit correctly the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.

  4. An overview of the NASA Langley Atmospheric Data Center: Online tools to effectively disseminate Earth science data products

    NASA Astrophysics Data System (ADS)

    Parker, L.; Dye, R. A.; Perez, J.; Rinsland, P.

    2012-12-01

    Over the past decade the Atmospheric Science Data Center (ASDC) at NASA Langley Research Center has archived and distributed a variety of satellite mission and aircraft campaign data sets. These datasets posed unique challenges to the user community at large due to the sheer volume and variety of the data and the lack of intuitive features in the ordering tools available to the investigator. Some of these data sets also lack sufficient metadata to provide rudimentary data discovery. To meet the needs of emerging users, the ASDC addressed issues in data discovery and delivery through the use of standards in data and access methods, and distribution through appropriate portals. The ASDC is currently undergoing a refresh of its webpages and Ordering Tools that will leverage updated collection-level metadata in an effort to enhance the user experience. The ASDC is now providing search and subset capability for key mission satellite data sets. The ASDC has collaborated with Science Teams to accommodate prospective science users in the climate and modeling communities. The ASDC is using a common framework that enables more rapid development and deployment of search and subset tools that provide enhanced access features for the user community. Features of the Search and Subset web application enable a more sophisticated approach to selecting and ordering data subsets by parameter, date, time, and geographic area. The ASDC has also applied key practices from satellite missions to the multi-campaign aircraft missions executed for Earth Venture-1 and MEaSUReS.

  5. How do I order MISR data?

    Atmospheric Science Data Center

    2017-10-12

    ... and archived at the NASA Langley Research Center Atmospheric Science Data Center (ASDC). A MISR Order and Customization Tool is ... Pool (an on-line, short-term data cache that provides a Web interface and FTP access). Specially subsetted and/or reformatted MISR data ...

  6. Building Capacity through Action Research Curricula Reviews

    ERIC Educational Resources Information Center

    Lee, Vanessa; Coombe, Leanne; Robinson, Priscilla

    2015-01-01

    In Australia, graduates of Master of Public Health (MPH) programmes are expected to achieve a set of core competencies, including a subset that is specifically related to Indigenous health. This paper reports on the methods utilised in a project which was designed using action research to strengthen Indigenous public health curricula within MPH…

  7. Minimizing Expected Maximum Risk from Cyber-Attacks with Probabilistic Attack Success

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhuiyan, Tanveer H.; Nandi, Apurba; Medal, Hugh

    The goal of our work is to enhance network security by generating partial cut-sets: subsets of edges of an attack graph whose removal cuts paths from initially vulnerable nodes (initial security conditions) to goal nodes (critical assets), given costs for cutting each edge and a limited overall budget.
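    The partial cut-set idea can be illustrated with a small sketch: on a toy attack graph, a budget-limited set of edges is chosen (greedily here) to remove as many vulnerable-to-goal paths as possible. The greedy heuristic, the edge costs, the toy graph, and the use of the networkx library are illustrative assumptions; the cited work formulates and solves this as an optimization problem rather than greedily.

        import networkx as nx

        def greedy_partial_cut(g, sources, goals, costs, budget):
            """Greedily cut edges (within budget) to remove the most source-to-goal attack paths."""
            cut, spent = set(), 0.0

            def live_paths():
                h = g.copy()
                h.remove_edges_from(cut)
                return [p for s in sources for t in goals
                        for p in nx.all_simple_paths(h, s, t)]

            while True:
                paths = live_paths()
                if not paths:
                    break
                best, best_gain = None, 0
                for e in set(g.edges) - cut:
                    if spent + costs[e] > budget:
                        continue
                    gain = sum(1 for p in paths if e in zip(p, p[1:]))  # paths using this edge
                    if gain > best_gain:
                        best, best_gain = e, gain
                if best is None:
                    break
                cut.add(best)
                spent += costs[best]
            return cut, len(live_paths())

        g = nx.DiGraph([("v1", "a"), ("v2", "a"), ("a", "goal"), ("v1", "b"), ("b", "goal")])
        costs = {e: 1.0 for e in g.edges}
        print(greedy_partial_cut(g, ["v1", "v2"], ["goal"], costs, budget=2.0))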

  8. Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network

    PubMed Central

    Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan

    2014-01-01

    Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods that solve the target coverage problem for traditional sensor networks, which use a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, the expected coverage of a target by a multimedia sensor is defined in terms of the deflection angle between the target and the sensor's current orientation and the distance between the target and the sensor. Target coverage optimization algorithms based on the expected coverage value are then presented for the single-sensor single-target, multisensor single-target, and single-sensor multitarget problems. For the multisensor multitarget problem, which is NP-complete, candidate orientations are selected so that a sensor rotated to a candidate orientation covers every target falling in its FoV disk, and a genetic algorithm is used to obtain an approximate minimum subset of sensors that covers all targets in the network. Simulation results show the algorithm's performance and the effect of the number of targets on the resulting subset. PMID:25136667
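    A toy version of the expected-coverage idea is sketched below: a directional (FoV) sensor's expected coverage of a target decreases with both the angular deflection from its current orientation and the distance to the target. The specific linear weighting function and the numbers are assumptions made for illustration; the paper defines its own measure and uses a genetic algorithm for the multisensor multitarget case.

        import math

        def expected_coverage(sensor_xy, orientation_deg, fov_deg, sensing_range, target_xy):
            """Expected coverage value in [0, 1] for a directional sensor and one target."""
            dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
            dist = math.hypot(dx, dy)
            if dist > sensing_range:
                return 0.0
            bearing = math.degrees(math.atan2(dy, dx))
            deflection = abs((bearing - orientation_deg + 180) % 360 - 180)
            if deflection > fov_deg / 2:
                return 0.0
            # illustrative weighting: linear falloff in both deflection and distance
            return (1 - deflection / (fov_deg / 2)) * (1 - dist / sensing_range)

        print(expected_coverage((0, 0), 45, 60, 10.0, (3, 3)))   # target straight ahead
        print(expected_coverage((0, 0), 45, 60, 10.0, (6, 2)))   # target near the FoV edge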

  9. Evaluation of Rigid-Body Motion Compensation in Cardiac Perfusion SPECT Employing Polar-Map Quantification

    PubMed Central

    Pretorius, P. Hendrik; Johnson, Karen L.; King, Michael A.

    2016-01-01

    We have recently been successful in the development and testing of rigid-body motion tracking, estimation and compensation for cardiac perfusion SPECT based on a visual tracking system (VTS). The goal of this study was to evaluate in patients the effectiveness of our rigid-body motion compensation strategy. Sixty-four patient volunteers were asked to remain motionless or execute some predefined body motion during an additional second stress perfusion acquisition. Acquisitions were performed using the standard clinical protocol with 64 projections acquired through 180 degrees. All data were reconstructed with an ordered-subsets expectation-maximization (OSEM) algorithm using 4 projections per subset and 5 iterations. All physical degradation factors were addressed (attenuation, scatter, and distance dependent resolution), while a 3-dimensional Gaussian rotator was used during reconstruction to correct for six-degree-of-freedom (6-DOF) rigid-body motion estimated by the VTS. Polar map quantification was employed to evaluate compensation techniques. In 54.7% of the uncorrected second stress studies there was a statistically significant difference in the polar maps, and in 45.3% this made a difference in the interpretation of segmental perfusion. Motion correction reduced the impact of motion such that with it 32.8 % of the polar maps were statistically significantly different, and in 14.1% this difference changed the interpretation of segmental perfusion. The improvement shown in polar map quantitation translated to visually improved uniformity of the SPECT slices. PMID:28042170

  10. [The European countries confronting cancer: a set of indicators assessing public health status].

    PubMed

    Borella, Laurent

    2008-11-01

    We now know that efficient public policies for cancer control need to be global and take into account all the factors involved: economics and level of development, lifestyle and risk factors, access to screening, and effectiveness of the care-providing system. A very simple scorecard is proposed, based on published public health indicators, which allows a comparison between European countries. We extracted 49 indicators from public databases and literature concerning 22 European countries. We made correlation calculations in order to identify relevant indicators from which a global score was extracted. Using a hierarchical clustering method we were then able to identify subsets of homogeneous countries. A 7-indicator scorecard was drawn up: gross national product, scientific production, smoking rate, breast screening participation rate, all-cancer mortality rate (male population), 5-year relative survival for colorectal cancer and life expectancy at birth. A global score shows: 1) the better positioned countries: Switzerland, Sweden, Finland and France; 2) the countries where cancer control is less effective: Estonia, Hungary, Poland and Slovakia. Three subsets of countries with a fairly similar profile were identified: a high level of means and results group; a high level of means but a medium level of results group; and a low level of means and results group. This work emphasizes dramatically heterogeneous situations between countries. A follow-up, using a reduced but regularly updated set of public health indicators, would help induce an active European policy for cancer control.

  11. Expected accuracy of proximal and distal temperature estimated by wireless sensors, in relation to their number and position on the skin.

    PubMed

    Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni; Montagnese, Sara

    2017-01-01

    A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists of calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh area (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to the number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an MAE of 0.13°C, while the loss/removal of the abdominal sensor, which had the greatest impact on the quality of the reconstruction, resulted in an MAE of 0.25°C. This information may help researchers/clinicians: i) evaluate the expected quality of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints.
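
    A minimal sketch of the kind of assessment described (the 24-h traces, sensor weights and the lost-sensor scenario below are all invented for illustration; the paper's actual weights and data are not reproduced):

      import numpy as np

      # toy 24-h time-courses for the five "proximal" sensors (rows = sensors)
      rng = np.random.default_rng(1)
      t = np.arange(0, 24, 0.1)
      temps = 35.5 + 0.5 * np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal((5, t.size))

      w_full = np.array([0.20, 0.20, 0.20, 0.20, 0.20])   # hypothetical full-set weights
      tprox_ref = w_full @ temps                          # reference TPROX from all sensors

      keep = [0, 2, 3]                                    # e.g. two sensors lost/removed
      w_sub = np.array([0.4, 0.3, 0.3])                   # hypothetical re-derived weights
      tprox_sub = w_sub @ temps[keep]

      mae = np.mean(np.abs(tprox_sub - tprox_ref))
      print(f"MAE of reduced-subset estimate: {mae:.3f} °C")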

  12. Expected accuracy of proximal and distal temperature estimated by wireless sensors, in relation to their number and position on the skin

    PubMed Central

    Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R.; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni

    2017-01-01

    A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists of calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh area (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to the number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an MAE of 0.13°C, while the loss/removal of the abdominal sensor, which had the greatest impact on the quality of the reconstruction, resulted in an MAE of 0.25°C. This information may help researchers/clinicians: i) evaluate the expected quality of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints. PMID:28666029

  13. Stable phenotype of B-cell subsets following cryopreservation and thawing of normal human lymphocytes stored in a tissue biobank.

    PubMed

    Rasmussen, Simon Mylius; Bilgrau, Anders Ellern; Schmitz, Alexander; Falgreen, Steffen; Bergkvist, Kim Steve; Tramm, Anette Mai; Baech, John; Jacobsen, Chris Ladefoged; Gaihede, Michael; Kjeldsen, Malene Krag; Bødker, Julie Støve; Dybkaer, Karen; Bøgsted, Martin; Johnsen, Hans Erik

    2015-01-01

    Cryopreservation is an acknowledged procedure to store vital cells for future biomarker analyses. Few studies, however, have analyzed the impact of cryopreservation on phenotyping. We have performed a controlled comparison of cryopreserved and fresh cellular aliquots prepared from individual healthy donors. We studied circulating B-cell subset membrane markers and global gene expression by multiparametric flow cytometry and microarray analysis, respectively. Extensive statistical analysis of the generated data tested the concept that "overall, there are no phenotypic differences between cryopreserved and fresh B-cell subsets." Subsequently, we performed an uncontrolled comparison of tonsil tissue samples. By multiparametric flow analysis, we documented no significant changes following cryopreservation of subset frequencies or membrane intensity for the differentiation markers CD19, CD20, CD22, CD27, CD38, CD45, and CD200. By gene expression profiling following cryopreservation, across all samples, only 16 out of 18708 genes were significantly up- or downregulated, including FOSB, KLF4, RBP7, ANXA1 or CLC, DEFA3, respectively. Implementation of cryopreserved tissue in our research program allowed us to present a performance analysis, by comparing cryopreserved and fresh tonsil tissue. As expected, phenotypic differences were identified, but to an extent that did not affect the performance of the cryopreserved tissue to generate specific B-cell subset-associated gene signatures and assign subset phenotypes to independent tissue samples. We have confirmed our working concept and illustrated the usefulness of vital cryopreserved cell suspensions for phenotypic studies of the normal B-cell hierarchy; however, storage procedures need to be delineated by tissue-specific comparative analysis. © 2014 Clinical Cytometry Society.

  14. Stable Phenotype Of B-Cell Subsets Following Cryopreservation and Thawing of Normal Human Lymphocytes Stored in a Tissue Biobank.

    PubMed

    Rasmussen, Simon Mylius; Bilgrau, Anders Ellern; Schmitz, Alexander; Falgreen, Steffen; Bergkvist, Kim Steve; Tramm, Anette Mai; Baech, John; Jacobsen, Chris Ladefoged; Gaihede, Michael; Kjeldsen, Malene Krag; Bødker, Julie Støve; Dybkaer, Karen; Bøgsted, Martin; Johnsen, Hans Erik

    2014-09-20

    Background Cryopreservation is an acknowledged procedure to store vital cells for future biomarker analyses. Few studies, however, have analyzed the impact of cryopreservation on phenotyping. Methods We have performed a controlled comparison of cryopreserved and fresh cellular aliquots prepared from individual healthy donors. We studied circulating B-cell subset membrane markers and global gene expression by multiparametric flow cytometry and microarray analysis, respectively. Extensive statistical analysis of the generated data tested the concept that "overall, there are no phenotypic differences between cryopreserved and fresh B-cell subsets". Subsequently, we performed a consecutive uncontrolled comparison of tonsil tissue samples. Results By multiparametric flow analysis, we documented no significant changes following cryopreservation of subset frequencies or membrane intensity for the differentiation markers CD19, CD20, CD22, CD27, CD38, CD45, and CD200. By gene expression profiling following cryopreservation, across all samples, only 16 out of 18708 genes were significantly up- or downregulated, including FOSB, KLF4, RBP7, ANXA1 or CLC, DEFA3, respectively. Implementation of cryopreserved tissue in our research program allowed us to present a performance analysis, by comparing cryopreserved and fresh tonsil tissue. As expected, phenotypic differences were identified, but to an extent that did not affect the performance of the cryopreserved tissue to generate specific B-cell subset-associated gene signatures and assign subset phenotypes to independent tissue samples. Conclusions We have confirmed our working concept and illustrated the usefulness of vital cryopreserved cell suspensions for phenotypic studies of the normal B-cell hierarchy; however, storage procedures need to be delineated by tissue-specific comparative analysis. © 2014 Clinical Cytometry Society.

  15. Inheritance of allozyme variants in bishop pine (Pinus muricata D.Don)

    Treesearch

    Constance I. Millar

    1985-01-01

    Isozyme phenotypes are described for 45 structural loci and 1 modifier locus in bishop pine (Pinus muricata D. Don), and segregation data are presented for a subset of 31 polymorphic loci from 19 enzyme systems. All polymorphic loci had alleles that segregated within single-locus Mendelian expectations, although one pair of alleles at each of three...

  16. Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.

    PubMed

    Chou, Cheng-Ying; Dong, Yun; Hung, Yukai; Kao, Yu-Jiun; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu

    2012-01-01

    Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. We have developed a compact high-sensitivity PET system that consists of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus impose substantial computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approach integrates the distinctive features of the symmetry properties of the imaging system and the GPU architecture, including block/warp/thread assignments and effective memory usage, to accelerate the computations for ordered subset expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithms were implemented employing both CPU-based and GPU-based codes, and their computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme can drastically reduce the reconstruction time and thus can largely expand the applicability of the dual-head PET system.

  17. NEMA NU 4-Optimized Reconstructions for Therapy Assessment in Cancer Research with the Inveon Small Animal PET/CT System.

    PubMed

    Lasnon, Charline; Dugue, Audrey Emmanuelle; Briand, Mélanie; Blanc-Fournier, Cécile; Dutoit, Soizic; Louis, Marie-Hélène; Aide, Nicolas

    2015-06-01

    We compared conventional filtered back-projection (FBP), two-dimensional ordered-subsets expectation maximization (OSEM) and maximum a posteriori (MAP) NEMA NU 4-optimized reconstructions for therapy assessment. Varying reconstruction settings were used to determine the parameters for optimal image quality with two NEMA NU 4 phantom acquisitions. Subsequently, data from two experiments in which nude rats bearing subcutaneous tumors had received a dual PI3K/mTOR inhibitor were reconstructed with the NEMA NU 4-optimized parameters. Mann-Whitney tests were used to compare mean standardized uptake value (SUV(mean)) variations among groups. All NEMA NU 4-optimized reconstructions showed the same 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) kinetic patterns and detected a significant difference in SUV(mean) relative to day 0 between controls and treated groups for all time points, with comparable p values. In the framework of therapy assessment in rats bearing subcutaneous tumors, all algorithms available on the Inveon system performed equally.
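
    The group comparison step can be illustrated with a standard Mann-Whitney U test; the per-animal SUVmean changes below are invented for illustration only:

      import numpy as np
      from scipy.stats import mannwhitneyu

      # hypothetical SUVmean changes relative to day 0 (%), one value per rat
      control = np.array([5.0, 12.0, 8.0, 15.0, 9.0, 11.0])
      treated = np.array([-20.0, -35.0, -15.0, -28.0, -22.0, -30.0])

      stat, p = mannwhitneyu(control, treated, alternative="two-sided")
      print(f"U = {stat}, p = {p:.4f}")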

  18. Effects of Regularisation Priors and Anatomical Partial Volume Correction on Dynamic PET Data

    NASA Astrophysics Data System (ADS)

    Caldeira, Liliana L.; Silva, Nuno da; Scheins, Jürgen J.; Gaens, Michaela E.; Shah, N. Jon

    2015-08-01

    Dynamic PET provides temporal information about the tracer uptake. However, each PET frame usually has low statistics, resulting in noisy images. Furthermore, PET images suffer from partial volume effects. The goal of this study is to understand the effects of prior regularisation on dynamic PET data and subsequent anatomical partial volume correction. The Median Root Prior (MRP) regularisation method was used in this work during reconstruction. The quantification and noise in the image domain and the time domain (time-activity curves), as well as the impact on parametric images, are assessed and compared with Ordinary Poisson Ordered Subset Expectation Maximisation (OP-OSEM) reconstruction with and without a Gaussian filter. This study shows the improvement in PET images and time-activity curves (TAC) in terms of noise, as well as in the parametric images, when using prior regularisation on dynamic PET data. Anatomical partial volume correction improves the TAC and, consequently, the parametric images. Therefore, the use of MRP with anatomical partial volume correction is of interest for dynamic PET studies.
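
    A minimal sketch of one common one-step-late formulation of the median root prior (the authors' OP-OSEM pipeline and parameter choices are not reproduced; the penalty weight, neighbourhood size and toy images below are assumptions):

      import numpy as np
      from scipy.ndimage import median_filter

      def mrp_osl_step(x_em, x_old, beta=0.3, size=3, eps=1e-12):
          """One-step-late median-root-prior correction applied to an (OS)EM update.

          x_em  : image produced by the plain (OS)EM update
          x_old : image from the previous iteration
          """
          med = np.maximum(median_filter(x_old, size=size), eps)
          return x_em / (1.0 + beta * (x_old - med) / med)

      # toy usage on one noisy 2D frame
      rng = np.random.default_rng(2)
      x_prev = rng.random((32, 32)) + 1.0
      x_em = x_prev * (1.0 + 0.1 * rng.standard_normal((32, 32)))  # stand-in for an EM update
      x_reg = mrp_osl_step(x_em, x_prev, beta=0.3)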

  19. Archive Management of NASA Earth Observation Data to Support Cloud Analysis

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark

    2017-01-01

    NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify handling such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system, with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly. Reviewed by Mark McInerney, ESDIS Deputy Project Manager.

  20. Sample size determination for bibliographic retrieval studies

    PubMed Central

    Yao, Xiaomei; Wilczynski, Nancy L; Walter, Stephen D; Haynes, R Brian

    2008-01-01

    Background Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. Methods The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset. Results For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals were adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). Conclusion The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach. PMID:18823538
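
    The sample-size logic can be sketched with the usual normal-approximation formula for the confidence interval of a proportion (the paper's exact calculation and the sensitivity value plugged in below are assumptions):

      import math

      def n_for_ci_width(sensitivity, width, z=1.96):
          """Articles needed so the 95% CI of a sensitivity estimate has full width `width`."""
          half_width = width / 2.0
          return math.ceil(z**2 * sensitivity * (1.0 - sensitivity) / half_width**2)

      # e.g. a lowest existing sensitivity around 0.95 and a desired full CI width of 10%
      print(n_for_ci_width(0.95, 0.10))   # 73 under the normal approximation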

  1. Observation of hard scattering in photoproduction at HERA

    NASA Astrophysics Data System (ADS)

    Derrick, M.; Krakauer, D.; Magill, S.; Musgrave, B.; Repond, J.; Sugano, K.; Stanek, R.; Talaga, R. L.; Thron, J.; Arzarello, F.; Ayed, R.; Barbagli, G.; Bari, G.; Basile, M.; Bellagamba, L.; Boscherini, D.; Bruni, G.; Bruni, P.; Cara Romeo, G.; Castellini, G.; Chiarini, M.; Cifarelli, L.; Cindolo, F.; Ciralli, F.; Contin, A.; D'Auria, S.; Del Papa, C.; Frasconi, F.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Lin, Q.; Lisowski, B.; Maccarrone, G.; Margotti, A.; Massam, T.; Nania, R.; Nemoz, C.; Palmonari, F.; Sartorelli, G.; Timellini, R.; Zamora Garcia, Y.; Zichichi, A.; Bargende, A.; Barreiro, F.; Crittenden, J.; Dabbous, H.; Desch, K.; Diekmann, B.; Geerts, M.; Geitz, G.; Gutjahr, B.; Hartmann, H.; Hartmann, J.; Haun, D.; Heinloth, K.; Hilger, E.; Jakob, H.-P.; Kramarczyk, S.; Kückes, M.; Mass, A.; Mengel, S.; Mollen, J.; Müsch, H.; Paul, E.; Schattevoy, R.; Schneider, B.; Schneider, J.-L.; Wedemeyer, R.; Cassidy, A.; Cussans, D. G.; Dyce, N.; Fawcett, H. F.; Foster, B.; Gilmore, R.; Heath, G. P.; Lancaster, M.; Llewellyn, T. J.; Malos, J.; Morgado, C. J. S.; Tapper, R. J.; Wilson, S. S.; Rau, R. R.; Bernstein, A.; Caldwell, A.; Gialas, I.; Parsons, J. A.; Ritz, S.; Sciulli, F.; Straub, P. B.; Wai, L.; Yang, S.; Barillari, T.; Schioppa, M.; Susinno, G.; Burkot, W.; Chwastowski, J.; Dwuraźny, A.; Eskreys, A.; Nizioł, B.; Jakubowski, Z.; Piotrzkowski, K.; Zachara, M.; Zawiejski, L.; Borzemski, P.; Eskreys, K.; Jeleń, K.; Kisielewska, D.; Kowalski, T.; Kulka, J.; Rulikowska-Zarȩbska, E.; Suszycki, L.; Zajaç, J.; Kȩdzierski, T.; Kotański, A.; Przybycień, M.; Bauerdick, L. A. T.; Behrens, U.; Bienlein, J. K.; Coldewey, C.; Dannemann, A.; Dierks, K.; Dorth, W.; Drews, G.; Erhard, P.; Flasiński, M.; Fleck, I.; Fürtjes, A.; Gläser, R.; Göttlicher, P.; Haas, T.; Hagge, L.; Hain, W.; Hasell, D.; Hultschig, H.; Jahnen, G.; Joos, P.; Kasemann, M.; Klanner, R.; Koch, W.; Kötz, U.; Kowalski, H.; Labs, J.; Ladage, A.; Löhr, B.; Löwe, M.; Lüke, D.; Mainusch, J.; Manczak, O.; Momayezi, M.; Nickel, S.; Notz, D.; Park, I.; Pösnecker, K.-U.; Rohde, M.; Ros, E.; Schneekloth, U.; Schroeder, J.; Schulz, W.; Selonke, F.; Tscheslog, E.; Tsurugai, T.; Turkot, F.; Vogel, W.; Woeniger, T.; Wolf, G.; Youngman, C.; Grabosch, H. J.; Leich, A.; Meyer, A.; Rethfeldt, C.; Schlenstedt, S.; Casalbuoni, R.; De Curtis, S.; Dominici, D.; Francescato, A.; Nuti, M.; Pelfer, P.; Anzivino, G.; Casaccia, R.; Laakso, I.; De Pasquale, S.; Qian, S.; Votano, L.; Bamberger, A.; Freidhof, A.; Poser, T.; Söldner-Rembold, S.; Theisen, G.; Trefzger, T.; Brook, N. H.; Bussey, P. J.; Doyle, A. T.; Forbes, J. R.; Jamieson, V. A.; Raine, C.; Saxon, D. H.; Gloth, G.; Holm, U.; Kammerlocher, H.; Krebs, B.; Neumann, T.; Wick, K.; Hofmann, A.; Kröger, W.; Krüger, J.; Lohrmann, E.; Milewski, J.; Nakahata, M.; Pavel, N.; Poelz, G.; Salomon, R.; Seidman, A.; Schott, W.; Wiik, B. H.; Zetsche, F.; Bacon, T. C.; Butterworth, I.; Markou, C.; McQuillan, D.; Miller, D. B.; Mobayyen, M. M.; Prinias, A.; Vorvolakos, A.; Bienz, T.; Kreutzmann, H.; Mallik, U.; McCliment, E.; Roco, M.; Wang, M. Z.; Cloth, P.; Filges, D.; Chen, L.; Imlay, R.; Kartik, S.; Kim, H.-J.; McNeil, R. R.; Metcalf, W.; Cases, G.; Hervás, L.; Labarga, L.; del Peso, J.; Roldán, J.; Terrón, J.; de Trocóniz, J. F.; Ikraiam, F.; Mayer, J. K.; Smith, G. R.; Corriveau, F.; Gilkinson, D. J.; Hanna, D. S.; Hung, L. W.; Mitchell, J. W.; Patel, P. M.; Sinclair, L. E.; Stairs, D. G.; Ullmann, R.; Bashindzhagyan, G. L.; Ermolov, P. F.; Golubkov, Y. A.; Kuzmin, V. A.; Kuznetsov, E. 
N.; Savin, A. A.; Voronin, A. G.; Zotov, N. P.; Bentvelsen, S.; Dake, A.; Engelen, J.; de Jong, P.; de Jong, S.; de Kamps, M.; Kooijman, P.; Kruse, A.; van der Lugt, H.; O'Dell, V.; Straver, J.; Tenner, A.; Tiecke, H.; Uijterwaal, H.; Vermeulen, J.; Wiggers, L.; de Wolf, E.; van Woudenberg, R.; Yoshida, R.; Bylsma, B.; Durkin, L. S.; Li, C.; Ling, T. Y.; McLean, K. W.; Murray, W. N.; Park, S. K.; Romanowski, T. A.; Seidlein, R.; Blair, G. A.; Butterworth, J. M.; Byrne, A.; Cashmore, R. J.; Cooper-Sarkar, A. M.; Devenish, R. C. E.; Gingrich, D. M.; Hallam-Baker, P. M.; Harnew, N.; Khatri, T.; Long, K. R.; Luffman, P.; McArthur, I.; Morawitz, P.; Nash, J.; Smith, S. J. P.; Roocroft, N. C.; Wilson, F. F.; Abbiendi, G.; Brugnera, R.; Carlin, R.; Dal Corso, F.; De Giorgi, M.; Dosselli, U.; Fanin, C.; Gasparini, F.; Limentani, S.; Morandin, M.; Posocco, M.; Stanco, L.; Stroili, R.; Voci, C.; Lim, J. N.; Oh, B. Y.; Whitmore, J.; Bonori, M.; Contino, U.; D'Agostini, G.; Guida, M.; Iori, M.; Mari, S.; Marini, G.; Mattioli, M.; Monaldi, D.; Nigro, A.; Hart, J. C.; McCubbin, N. A.; Shah, T. P.; Short, T. L.; Barberis, E.; Cartiglia, N.; Heusch, C.; Hubbard, B.; Leslie, J.; Ng, J. S. T.; O'Shaughnessy, K.; Sadrozinski, H. F.; Seiden, A.; Badura, E.; Biltzinger, J.; Chaves, H.; Rost, M.; Seifert, R. J.; Walenta, A. H.; Weihs, W.; Zech, G.; Dagan, S.; Heifetz, R.; Levy, A.; Zer-Zion, D.; Hasegawa, T.; Hazumi, M.; Ishii, T.; Kasai, S.; Kuze, M.; Nagasawa, Y.; Nakao, M.; Okuno, H.; Tokushuku, K.; Watanabe, T.; Yamada, S.; Chiba, M.; Hamatsu, R.; Hirose, T.; Kitamura, S.; Nagayama, S.; Nakamitsu, Y.; Arneodo, M.; Costa, M.; Ferrero, M. I.; Lamberti, L.; Maselli, S.; Peroni, C.; Solano, A.; Staiano, A.; Dardo, M.; Bailey, D. C.; Bandyopadhyay, D.; Benard, F.; Bhadra, S.; Brkic, M.; Burow, B. D.; Chlebana, F. S.; Crombie, M. B.; Hartner, G. F.; Levman, G. M.; Martin, J. F.; Orr, R. S.; Prentice, J. D.; Sampson, C. R.; Stairs, G. G.; Teuscher, R. J.; Yoon, T.-S.; Bullock, F. W.; Catterall, C. D.; Giddings, J. C.; Jones, T. W.; Khan, A. M.; Lane, J. B.; Makkar, P. L.; Shaw, D.; Shulman, J.; Blankenship, K.; Kochocki, J.; Lu, B.; Mo, L. W.; Charchuła, K.; Ciborowski, J.; Gajewski, J.; Grzelak, G.; Kasprzak, M.; Krzyżanowski, M.; Muchorowski, K.; Nowak, R. J.; Pawlak, J. M.; Stojda, K.; Stopczyński, A.; Szwed, R.; Tymieniecka, T.; Walczak, R.; Wróblewski, A. K.; Zakrzewski, J. A.; Żarnecki, A. F.; Adamus, M.; Abramowicz, H.; Eisenberg, Y.; Glasman, C.; Karshon, U.; Montag, A.; Revel, D.; Shapira, A.; Ali, I.; Behrens, B.; Camerini, U.; Dasu, S.; Fordham, C.; Foudas, C.; Goussiou, A.; Lomperski, M.; Loveless, R. J.; Nylander, P.; Ptacek, M.; Reeder, D. D.; Smith, W. H.; Silverstein, S.; Frisken, W. R.; Furutani, K. M.; Iga, Y.; ZEUS Collaboration

    1992-12-01

    We report a study of electron proton collisions at very low Q2, corresponding to virtual photoproduction at centre of mass energies in the range 100-295 GeV. The distribution in transverse energy of the observed hadrons is much harder than can be explained by soft processes. Some of the events show back-to-back two-jet production at the rate and with the characteristics expected from hard two-body scattering. A subset of the two-jet events have energy in the electron direction consistent with that expected from the photon remnant in resolved photon processes.

  2. 77 FR 43879 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Designation of a Longer Period for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-26

    ... Proposed Rule Change Amending NYSE Arca Equities Rule 7.31(h) To Add a PL Select Order Type July 20, 2012...(h) to add a PL Select Order type. The proposed rule change was published for comment in the Federal... security at a specified, undisplayed price. The PL Select Order would be a subset of the PL Order that...

  3. Mid-infrared spectroscopy predictions as indicator traits in breeding programs for enhanced coagulation properties of milk.

    PubMed

    Cecchinato, A; De Marchi, M; Gallo, L; Bittante, G; Carnier, P

    2009-10-01

    The aims of this study were to investigate variation of milk coagulation property (MCP) measures and their predictions obtained by mid-infrared spectroscopy (MIR), to investigate the genetic relationship between measures of MCP and MIR predictions, and to estimate the expected response from a breeding program focusing on the enhancement of MCP using MIR predictions as indicator traits. Individual milk samples were collected from 1,200 Brown Swiss cows (progeny of 50 artificial insemination sires) reared in 30 herds located in northern Italy. Rennet coagulation time (RCT, min) and curd firmness (a(30), mm) were measured using a computerized renneting meter. The MIR data were recorded over the spectral range of 4,000 to 900 cm(-1). Prediction models for RCT and a(30) based on MIR spectra were developed using partial least squares regression. A cross-validation procedure was carried out. The procedure involved the partition of available data into 2 subsets: a calibration subset and a test subset. The calibration subset was used to develop a calibration equation able to predict individual MCP phenotypes using MIR spectra. The test subset was used to validate the calibration equation and to estimate heritabilities and genetic correlations for measured MCP and their predictions obtained from MIR spectra and the calibration equation. Point estimates of heritability ranged from 0.30 to 0.34 and from 0.22 to 0.24 for RCT and a(30), respectively. Heritability estimates for MCP predictions were larger than those obtained for measured MCP. Estimated genetic correlations between measures and predictions of RCT were very high and ranged from 0.91 to 0.96. Estimates of the genetic correlation between measures and predictions of a(30) were large and ranged from 0.71 to 0.87. Predictions of MCP provided by MIR techniques can be proposed as indicator traits for the genetic enhancement of MCP. The expected response of RCT and a(30) ensured by the selection using MIR predictions as indicator traits was equal to or slightly less than the response achievable through a single measurement of these traits. Breeding strategies for the enhancement of MCP based on MIR predictions as indicator traits could be easily and immediately implemented for dairy cattle populations where routine acquisition of spectra from individual milk samples is already performed.
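
    The calibration/test split and the partial least squares step can be sketched as follows (the spectra and RCT values are simulated stand-ins; the number of latent components and the split ratio are assumptions, not the study's settings):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score

      # stand-in data: rows = milk samples, columns = MIR absorbances; y = measured RCT (min)
      rng = np.random.default_rng(3)
      X = rng.random((200, 300))
      y = 15.0 + X[:, :10].sum(axis=1) + 0.5 * rng.standard_normal(200)

      X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

      pls = PLSRegression(n_components=10)
      pls.fit(X_cal, y_cal)                  # calibration subset
      y_pred = pls.predict(X_test).ravel()   # predictions on the test subset
      print("test-set R^2:", round(r2_score(y_test, y_pred), 2))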

  4. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses.

    PubMed

    Faith, Daniel P

    2015-02-19

    The phylogenetic diversity measure ('PD') quantifies the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
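
    Expected PD is commonly computed by weighting each branch length by the probability that at least one descendant taxon survives; a minimal sketch on a toy tree (the branch lengths and extinction probabilities are invented):

      from math import prod

      # toy tree: each branch is (length, set of descendant taxa)
      branches = [
          (2.0, {"a", "b"}),   # internal branch above taxa a and b
          (1.0, {"a"}),
          (1.0, {"b"}),
          (3.0, {"c"}),
      ]
      p_ext = {"a": 0.5, "b": 0.9, "c": 0.1}   # hypothetical extinction probabilities

      expected_pd = sum(
          length * (1.0 - prod(p_ext[t] for t in taxa))
          for length, taxa in branches
      )
      print(expected_pd)   # 2.0*0.55 + 1.0*0.5 + 1.0*0.1 + 3.0*0.9 = 4.4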

  5. Genetic heterogeneity in Finnish hereditary prostate cancer using ordered subset analysis

    PubMed Central

    Simpson, Claire L; Cropp, Cheryl D; Wahlfors, Tiina; George, Asha; Jones, MaryPat S; Harper, Ursula; Ponciano-Jackson, Damaris; Tammela, Teuvo; Schleutker, Johanna; Bailey-Wilson, Joan E

    2013-01-01

    Prostate cancer (PrCa) is the most common male cancer in developed countries and the second most common cause of cancer death after lung cancer. We recently reported a genome-wide linkage scan in 69 Finnish hereditary PrCa (HPC) families, which replicated the HPC9 locus on 17q21-q22 and identified a locus on 2q37. The aim of this study was to detect additional loci linked to HPC. Here we used ordered subset analysis (OSA), conditioned on nonparametric linkage to these loci, to detect loci linked to HPC in subsets of families but not in the overall sample. We analyzed the families based on their evidence for linkage to chromosome 2, chromosome 17 and a maximum score using the strongest evidence of linkage from either of the two loci. Significant linkage to a 5-cM linkage interval with a peak OSA nonparametric allele-sharing LOD score of 4.876 on Xq26.3-q27 (ΔLOD=3.193, empirical P=0.009) was observed in a subset of 41 families weakly linked to 2q37, overlapping the HPCX1 locus. Two peaks that were novel to the analysis combining linkage evidence from both primary loci were identified: 18q12.1-q12.2 (OSA LOD=2.541, ΔLOD=1.651, P=0.03) and 22q11.1-q11.21 (OSA LOD=2.395, ΔLOD=2.36, P=0.006), which is close to HPC6. Using OSA allows us to find additional loci linked to HPC in subsets of families, and underlines the complex genetic heterogeneity of HPC even in highly aggregated families. PMID:22948022

  6. Multi-ray-based system matrix generation for 3D PET reconstruction

    NASA Astrophysics Data System (ADS)

    Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi

    2008-12-01

    Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
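
    The comparison between Gaussian and equally spaced integration points can be illustrated with a one-dimensional stand-in for the integral along a ray segment (the integrand below is arbitrary, not the scanner's actual crystal kernel):

      import numpy as np

      def f(x):
          # stand-in for the kernel integrated along a ray segment inside a crystal
          return np.exp(-x) * np.sin(3 * x) ** 2

      a, b = 0.0, 1.0
      n = 3

      # Gauss-Legendre points/weights mapped from [-1, 1] to [a, b]
      xg, wg = np.polynomial.legendre.leggauss(n)
      gauss = 0.5 * (b - a) * np.sum(wg * f(0.5 * (b - a) * xg + 0.5 * (a + b)))

      # equally spaced points with trapezoidal weights
      xt = np.linspace(a, b, n)
      trap = np.trapz(f(xt), xt)

      x_dense = np.linspace(a, b, 100001)
      ref = np.trapz(f(x_dense), x_dense)
      print(f"Gauss ({n} pts): {gauss:.6f}  trapezoid ({n} pts): {trap:.6f}  dense reference: {ref:.6f}")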

  7. Novel relationships among ten fish model species revealed based on a phylogenomic analysis using ESTs.

    PubMed

    Steinke, Dirk; Salzburger, Walter; Meyer, Axel

    2006-06-01

    The power of comparative phylogenomic analyses also depends on the amount of data that are included in such studies. We used expressed sequence tags (ESTs) from fish model species as a proof-of-principle approach in order to test the reliability of using ESTs for phylogenetic inference. As expected, the robustness increases with the amount of sequence data. Although some progress has been made in the elucidation of the phylogeny of teleosts, relationships among the main lineages of the derived fish (Euteleostei) remain poorly defined and are still debated. We performed a phylogenomic analysis of a set of 42 orthologous genes from 10 available fish model systems from seven different orders (Salmoniformes, Siluriformes, Cypriniformes, Tetraodontiformes, Cyprinodontiformes, Beloniformes, and Perciformes) of euteleostean fish to estimate divergence times and evolutionary relationships among those lineages. All 10 fish species serve as models for developmental, aquaculture, genomic, and comparative genetic studies. The phylogenetic signal and the strength of the contribution of each of the 42 orthologous genes were estimated with randomly chosen data subsets. Our study revealed a molecular phylogeny of higher-level relationships of derived teleosts, which indicates that the use of multiple genes produces robust phylogenies, a finding that is expected to apply to other phylogenetic issues among distantly related taxa. Our phylogenomic analyses confirm that the euteleostean superorders Ostariophysi and Acanthopterygii are monophyletic and the Protacanthopterygii and Ostariophysi are sister clades. In addition, and contrary to the traditional phylogenetic hypothesis, our analyses determine that killifish (Cyprinodontiformes), medaka (Beloniformes), and cichlids (Perciformes) appear to be more closely related to each other than either of them is to pufferfish (Tetraodontiformes). All 10 lineages split before or during the fragmentation of the supercontinent Pangea in the Jurassic.

  8. Nonuniform update for sparse target recovery in fluorescence molecular tomography accelerated by ordered subsets.

    PubMed

    Zhu, Dianwen; Li, Changqing

    2014-12-01

    Fluorescence molecular tomography (FMT) is a promising imaging modality and has been actively studied in the past two decades since it can locate the specific tumor position three-dimensionally in small animals. However, it remains a challenging task to obtain fast, robust and accurate reconstruction of fluorescent probe distribution in small animals due to the large computational burden, the noisy measurement and the ill-posed nature of the inverse problem. In this paper we propose a nonuniform preconditioning method in combination with L1 regularization and an ordered subsets technique (NUMOS) to take care of the different updating needs at different pixels, to enhance sparsity and suppress noise, and to further boost convergence of approximate solutions for fluorescence molecular tomography. Using both simulated data and a phantom experiment, we found that the proposed nonuniform updating method outperforms its popular uniform counterpart by obtaining a more localized, less noisy, more accurate image. The computational cost was greatly reduced as well. The ordered subset (OS) technique provided additional 5-fold and 3-fold speed-ups for the simulation and phantom experiments, respectively, without degrading image quality. When compared with popular L1 algorithms such as the iterative soft-thresholding algorithm (ISTA) and the fast iterative soft-thresholding algorithm (FISTA), NUMOS also outperforms them by obtaining a better image in a much shorter time.
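
    For reference, the ISTA baseline mentioned above can be sketched in a few lines (this is the generic algorithm for a least-squares data term with an L1 penalty; the paper's nonuniform preconditioning and ordered-subsets acceleration are not reproduced, and the toy forward matrix is invented):

      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista(A, y, lam=0.1, n_iters=200):
          """Plain ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
          L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iters):
              grad = A.T @ (A @ x - y)
              x = soft_threshold(x - grad / L, lam / L)
          return x

      rng = np.random.default_rng(4)
      A = rng.standard_normal((60, 120))
      x_true = np.zeros(120)
      x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]      # sparse "fluorophore" distribution
      y = A @ x_true + 0.01 * rng.standard_normal(60)
      x_hat = ista(A, y, lam=0.05)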

  9. Heterogeneous Ensemble Combination Search Using Genetic Algorithm for Class Imbalanced Data Classification.

    PubMed

    Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo

    2016-01-01

    Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble is dependent on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross-validation on training data for evaluating the quality of each candidate ensemble. In order to combine the base classifiers' decisions into the ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with the random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β)-k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI Machine Learning repository, one Alzheimer's disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction and we expect that the proposed GA-EoC would perform consistently in other cases.
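
    How a single candidate ensemble might be scored inside such a search can be sketched with scikit-learn (the classifier pool, the bit-string and the dataset are invented; the GA loop, the random sub-sampling step and the (α, β)-k feature selection are not shown):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import VotingClassifier, RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=400, n_features=20, weights=[0.9, 0.1], random_state=0)

      pool = [
          ("lr", LogisticRegression(max_iter=1000)),
          ("nb", GaussianNB()),
          ("knn", KNeighborsClassifier()),
          ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
      ]

      def fitness(mask):
          """Fitness of one candidate ensemble = 10-fold CV score of its majority vote."""
          members = [clf for bit, clf in zip(mask, pool) if bit]
          if not members:
              return 0.0
          ensemble = VotingClassifier(estimators=members, voting="hard")
          return cross_val_score(ensemble, X, y, cv=10, scoring="balanced_accuracy").mean()

      print(fitness([1, 0, 1, 1]))   # one candidate bit-string from the GA population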

  10. Heterogeneous Ensemble Combination Search Using Genetic Algorithm for Class Imbalanced Data Classification

    PubMed Central

    Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo

    2016-01-01

    Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble is dependent on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross-validation on training data for evaluating the quality of each candidate ensemble. In order to combine the base classifiers’ decisions into the ensemble’s output, we used the simple and widely used majority voting approach. The proposed algorithm, along with the random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β) − k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI Machine Learning repository, one Alzheimer’s disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction and we expect that the proposed GA-EoC would perform consistently in other cases. PMID:26764911

  11. The proposal of architecture for chemical splitting to optimize QSAR models for aquatic toxicity.

    PubMed

    Colombo, Andrea; Benfenati, Emilio; Karelson, Mati; Maran, Uko

    2008-06-01

    One of the challenges in the field of quantitative structure-activity relationship (QSAR) analysis is the correct classification of a chemical compound to an appropriate model for the prediction of activity. Thus, in previous studies, compounds have been divided into distinct groups according to their mode of action or chemical class. In the current study, theoretical molecular descriptors were used to divide 568 organic substances into subsets, with toxicity measured as the 96-h median lethal concentration for the fathead minnow (Pimephales promelas). Simple constitutional descriptors, such as the number of aliphatic and aromatic rings, and a quantum chemical descriptor, the maximum bond order of a carbon atom, divide the compounds into nine subsets. For each subset of compounds the automatic forward selection of descriptors was applied to construct QSAR models. Significant correlations were achieved for each subset of chemicals and all models were validated with the leave-one-out internal validation procedure (cross-validated R² of approximately 0.80). The results encourage consideration of this alternative approach for the prediction of toxicity using QSAR subset models without direct reference to the mechanism of toxic action or the traditional chemical classification.

  12. GPU-based prompt gamma ray imaging from boron neutron capture therapy.

    PubMed

    Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae

    2015-01-01

    The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Image reconstruction using the GPU was 196 times faster than conventional reconstruction using the CPU. For the four BURs, the area under the curve (AUC) values from the ROC analysis were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image using the prompt gamma ray events from the BNCT simulation was acquired using GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of prompt gamma ray image reconstruction using GPU computation for BNCT simulations.
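
    The ROC step can be illustrated by scoring reconstructed voxel values against a known uptake-region mask (the toy image, region and noise model below are invented, not the simulation data):

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(5)
      image = rng.normal(1.0, 0.3, size=(64, 64))        # reconstructed activity (toy)
      truth = np.zeros((64, 64), dtype=bool)
      truth[20:28, 20:28] = True                         # one hypothetical boron uptake region
      image[truth] += rng.normal(1.0, 0.3, size=truth.sum())

      auc = roc_auc_score(truth.ravel(), image.ravel())
      print(f"AUC for this region: {auc:.3f}")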

  13. PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering

    NASA Astrophysics Data System (ADS)

    Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.

    2016-02-01

    Positron emission tomography (PET) projection data, or sinograms, contain poor counting statistics and randomness that produce noisy PET images. In order to improve the PET image, we proposed an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered back-projection (FBP), ordered-subsets expectation maximization (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using a simulated NCAT phantom PET sinogram as generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal to noise ratio (SNR), local SNR, contrast to noise ratio (CNR) and edge preservation capability. Further analysis of the achieved improvement is also carried out specifically for the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering, in terms of the contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
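
    One plausible reading of a 3D mean-median pre-reconstruction filter is a median pass followed by a small mean pass over the sinogram volume; the sketch below is an assumption about the design, not the authors' exact filter, and the sinogram is a toy Poisson array:

      import numpy as np
      from scipy.ndimage import median_filter, uniform_filter

      def mean_median_filter(sinogram, median_size=3, mean_size=3):
          """Median pass (removes outlier bins, preserves edges) then mean pass (smooths flat regions)."""
          out = median_filter(sinogram, size=median_size)
          return uniform_filter(out, size=mean_size)

      rng = np.random.default_rng(6)
      sino = rng.poisson(5.0, size=(16, 64, 96)).astype(float)   # (slices, angles, bins), toy data
      sino_filtered = mean_median_filter(sino)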

  14. The nature of genetic susceptibility to multiple sclerosis: constraining the possibilities.

    PubMed

    Goodin, Douglas S

    2016-04-27

    Epidemiological observations regarding certain population-wide parameters (e.g., disease-prevalence, recurrence-risk in relatives, gender predilections, and the distribution of common genetic-variants) place important constraints on the possibilities for the genetic-basis underlying susceptibility to multiple sclerosis (MS). Using very broad range-estimates for the different population-wide epidemiological parameters, a mathematical model can help elucidate the nature and the magnitude of these constraints. For MS no more than 8.5% of the population can possibly be in the "genetically-susceptible" subset (defined as having a life-time MS-probability at least as high as the overall population average). Indeed, the expected MS-probability for this subset is more than 12 times that for every other person of the population who is not in this subset. Moreover, provided that those genetically susceptible persons (genotypes), who carry the well-established MS susceptibility allele (DRB1*1501), are equally or more likely to get MS than those susceptible persons, who don't carry this allele, then at least 84% of MS-cases must come from this "genetically susceptible" subset. Furthermore, because men, compared to women, are at least as likely (and possibly more likely) to be susceptible, it can be demonstrated that women are more responsive to the environmental factors that are involved in MS-pathogenesis (whatever these are) and, thus, susceptible women are more likely actually to develop MS than susceptible men. Finally, in contrast to genetic susceptibility, more than 70% of men (and likely also women) must have an environmental experience (including all of the necessary factors), which is sufficient to produce MS in a susceptible individual. As a result, because of these constraints, it is possible to distinguish two classes of persons, indicating either that MS can be caused by two fundamentally different pathophysiological mechanisms or that the large majority of the population is at no risk of developing this disease regardless of their environmental experience. Moreover, although environmental-factors would play a critical role in both mechanisms (if both exist), there is no reason to expect that these factors are the same (or even similar) between the two.

  15. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditionally finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
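
    The Legendre covariables used in such random regression models can be built directly from ages standardized to [-1, 1]; a small sketch (the ages, age range and polynomial order below are illustrative, not the study's exact configuration):

      import numpy as np
      from numpy.polynomial import legendre

      def legendre_covariates(age, age_min, age_max, order):
          """Legendre polynomials P_0..P_order evaluated at age standardized to [-1, 1]."""
          a = -1.0 + 2.0 * (age - age_min) / (age_max - age_min)
          basis = np.eye(order + 1)                       # row k selects P_k in legval
          return np.column_stack([legendre.legval(a, basis[k]) for k in range(order + 1)])

      ages = np.array([0, 240, 365, 550, 730, 1460, 2920])   # days, birth to mature age (toy)
      Phi = legendre_covariates(ages, 0, 2920, order=4)      # e.g. a 4th-order direct genetic effect
      print(Phi.shape)                                       # (7, 5) covariate matrix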

  16. Automated sinkhole detection using a DEM subsetting technique and fill tools at Mammoth Cave National Park

    NASA Astrophysics Data System (ADS)

    Wall, J.; Bohnenstiehl, D. R.; Levine, N. S.

    2013-12-01

    An automated workflow for sinkhole detection is developed using Light Detection and Ranging (Lidar) data from Mammoth Cave National Park (MACA). While the park is known to sit within a karst formation, the generally dense canopy cover and the size of the park (~53,000 acres) create issues for sinkhole inventorying. Lidar provides a useful remote sensing technology for peering beneath the canopy in hard-to-reach areas of the park. In order to detect sinkholes, a subsetting technique is used to interpolate a Digital Elevation Model (DEM), thereby reducing edge effects. For each subset, standard GIS fill tools are used to fill depressions within the DEM. The initial DEM is then subtracted from the filled DEM, resulting in detected depressions or sinkholes. Resulting depressions are then described in terms of size and geospatial trend.
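
    The fill-and-subtract step has a common raster analogue using morphological reconstruction (a stand-in for the GIS Fill tool; the toy DEM below is invented and the skimage-based approach is an assumption, not the workflow's actual tooling):

      import numpy as np
      from skimage.morphology import reconstruction

      def detect_depressions(dem):
          """Fill DEM depressions by morphological reconstruction, then subtract the original."""
          seed = dem.copy()
          seed[1:-1, 1:-1] = dem.max()                 # interior seeded at the maximum elevation
          filled = reconstruction(seed, dem, method="erosion")
          return filled - dem                          # > 0 where depressions/sinkholes occur

      dem = np.full((50, 50), 200.0)
      dem[20:25, 20:25] = 195.0                        # a 5-m-deep toy depression
      depth = detect_depressions(dem)
      print(depth.max())                               # ~5.0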

  17. Day-Ahead PM2.5 Concentration Forecasting Using WT-VMD Based Decomposition Method and Back Propagation Neural Network Improved by Differential Evolution

    PubMed Central

    Wang, Deyun; Liu, Yanling; Luo, Hongyuan; Yue, Chenqiang; Cheng, Sheng

    2017-01-01

    Accurate PM2.5 concentration forecasting is crucial for protecting public health and atmospheric environment. However, the intermittent and unstable nature of PM2.5 concentration series makes its forecasting become a very difficult task. In order to improve the forecast accuracy of PM2.5 concentration, this paper proposes a hybrid model based on wavelet transform (WT), variational mode decomposition (VMD) and back propagation (BP) neural network optimized by differential evolution (DE) algorithm. Firstly, WT is employed to disassemble the PM2.5 concentration series into a number of subsets with different frequencies. Secondly, VMD is applied to decompose each subset into a set of variational modes (VMs). Thirdly, DE-BP model is utilized to forecast all the VMs. Fourthly, the forecast value of each subset is obtained through aggregating the forecast results of all the VMs obtained from VMD decomposition of this subset. Finally, the final forecast series of PM2.5 concentration is obtained by adding up the forecast values of all subsets. Two PM2.5 concentration series collected from Wuhan and Tianjin, respectively, located in China are used to test the effectiveness of the proposed model. The results demonstrate that the proposed model outperforms all the other considered models in this paper. PMID:28704955
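
    The wavelet-transform stage that splits the series into frequency subsets, and the final re-aggregation, can be sketched with PyWavelets (the wavelet, decomposition level and toy series are assumptions; the VMD stage and the DE-optimized BP network are not shown):

      import numpy as np
      import pywt

      rng = np.random.default_rng(7)
      pm25 = 60 + 20 * np.sin(np.arange(512) / 24) + 10 * rng.standard_normal(512)  # toy series

      coeffs = pywt.wavedec(pm25, "db4", level=3)

      # rebuild one sub-series per frequency band by zeroing all other coefficient arrays
      subsets = []
      for i in range(len(coeffs)):
          kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
          subsets.append(pywt.waverec(kept, "db4")[: len(pm25)])

      recombined = np.sum(subsets, axis=0)
      print(np.allclose(recombined, pm25))   # the frequency subsets add back to the original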

  18. Profiling dendritic cell subsets in head and neck squamous cell tonsillar cancer and benign tonsils.

    PubMed

    Abolhalaj, Milad; Askmyr, David; Sakellariou, Christina Alexandra; Lundberg, Kristina; Greiff, Lennart; Lindstedt, Malin

    2018-05-23

    Dendritic cells (DCs) have a key role in orchestrating immune responses and are considered important targets for immunotherapy against cancer. In order to develop effective cancer vaccines, detailed knowledge of the micromilieu in cancer lesions is warranted. In this study, flow cytometry and human transcriptome arrays were used to characterize subsets of DCs in head and neck squamous cell tonsillar cancer and compare them to their counterparts in benign tonsils to evaluate subset-selective biomarkers associated with tonsillar cancer. We describe, for the first time, four subsets of DCs in tonsillar cancer: CD123+ plasmacytoid DCs (pDCs), and CD1c+, CD141+, and CD1c-CD141- myeloid DCs (mDCs). An increased frequency of DCs and an elevated mDC/pDC ratio were shown in malignant compared to benign tonsillar tissue. The microarray data demonstrate characteristics specific to tonsillar cancer DC subsets, including expression of immunosuppressive molecules and lower expression levels of genes involved in the development of effector immune responses in DCs in malignant tonsillar tissue, compared to their counterparts in benign tonsillar tissue. Finally, we present target candidates selectively expressed by different DC subsets in malignant tonsils and confirm expression of CD206/MRC1 and CD207/Langerin on CD1c+ DCs at the protein level. This study describes DC characteristics in the context of head and neck cancer and adds valuable steps towards future DC-based therapies against tonsillar cancer.

  19. Natural Killer Cells Promote Fetal Development through the Secretion of Growth-Promoting Factors.

    PubMed

    Fu, Binqing; Zhou, Yonggang; Ni, Xiang; Tong, Xianhong; Xu, Xiuxiu; Dong, Zhongjun; Sun, Rui; Tian, Zhigang; Wei, Haiming

    2017-12-19

    Natural killer (NK) cells are present in large populations at the maternal-fetal interface during early pregnancy. However, the role of NK cells in fetal growth is unclear. Here, we have identified a CD49a+Eomes+ subset of NK cells that secreted growth-promoting factors (GPFs), including pleiotrophin and osteoglycin, in both humans and mice. The crosstalk between HLA-G and ILT2 served as a stimulus for the GPF-secreting function of this NK cell subset. Decreases in this GPF-secreting NK cell subset impaired fetal development, resulting in fetal growth restriction. The transcription factor Nfil3, but not T-bet, affected the function and the number of this decidual NK cell subset. Adoptive transfer of induced CD49a+Eomes+ NK cells reversed impaired fetal growth and rebuilt an appropriate local microenvironment. These findings reveal properties of NK cells in promoting fetal growth. In addition, this research proposes approaches for therapeutic administration of NK cells in order to reverse restricted nourishment within the uterine microenvironment during early pregnancy. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Long-Range Technological Impact on Computer-Aided Product Development at DMA (Defense Mapping Agency).

    DTIC Science & Technology

    1986-07-01

    maintainability, enhanceability, portability, flexibility, reusability of components, expected market or production life span, upward compatibility, integration...cost) but, most often, they involve global marketing and production objectives. A high life-cycle cost may be accepted in exchange for some other...ease of integration. More importantly, these results could be interpreted as suggesting the need to use a mixed approach where one uses a subset of

  1. Tools for neuroanatomy and neurogenetics in Drosophila

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeiffer, Barret D.; Jenett, Arnim; Hammonds, Ann S.

    2008-08-11

    We demonstrate the feasibility of generating thousands of transgenic Drosophila melanogaster lines in which the expression of an exogenous gene is reproducibly directed to distinct small subsets of cells in the adult brain. We expect the expression patterns produced by the collection of 5,000 lines that we are currently generating to encompass all neurons in the brain in a variety of intersecting patterns. Overlapping 3-kb DNA fragments from the flanking noncoding and intronic regions of genes thought to have patterned expression in the adult brain were inserted into a defined genomic location by site-specific recombination. These fragments were then assayed for their ability to function as transcriptional enhancers in conjunction with a synthetic core promoter designed to work with a wide variety of enhancer types. An analysis of 44 fragments from four genes found that >80% drive expression patterns in the brain; the observed patterns were, on average, comprised of <100 cells. Our results suggest that the D. melanogaster genome contains >50,000 enhancers and that multiple enhancers drive distinct subsets of expression of a gene in each tissue and developmental stage. We expect that these lines will be valuable tools for neuroanatomy as well as for the elucidation of neuronal circuits and information flow in the fly brain.

  2. Threat expectancy bias and treatment outcome in patients with panic disorder and agoraphobia.

    PubMed

    Duits, Puck; Klein Hofmeijer-Sevink, Mieke; Engelhard, Iris M; Baas, Johanna M P; Ehrismann, Wieske A M; Cath, Danielle C

    2016-09-01

    Previous studies suggest that patients with panic disorder and agoraphobia (PD/A) tend to overestimate the associations between fear-relevant stimuli and threat. This so-called threat expectancy bias is thought to play a role in the development and treatment of anxiety disorders. The current study tested 1) whether patients with PD/A (N = 71) show increased threat expectancy ratings to fear-relevant and fear-irrelevant stimuli relative to a comparison group without an axis I disorder (N=65), and 2) whether threat expectancy bias before treatment predicts treatment outcome in a subset of these patients (n = 51). In a computerized task, participants saw a series of panic-related and neutral words and rated for each word the likelihood that it would be followed by a loud, aversive sound. Results showed higher threat expectancy ratings to both panic-related and neutral words in patients with PD/A compared to the comparison group. Threat expectancy ratings did not predict treatment outcome. This study only used expectancy ratings and did not include physiological measures. Furthermore, no post-treatment expectancy bias task was added to shed further light on the possibility that expectancy bias might be attenuated by treatment. Patients show higher expectancies of aversive outcome following both fear-relevant and fear-irrelevant stimuli relative to the comparison group, but this does not predict treatment outcome. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Investigating Evolutionary Conservation of Dendritic Cell Subset Identity and Functions

    PubMed Central

    Vu Manh, Thien-Phong; Bertho, Nicolas; Hosmalin, Anne; Schwartz-Cornil, Isabelle; Dalod, Marc

    2015-01-01

    Dendritic cells (DCs) were initially defined as mononuclear phagocytes with a dendritic morphology and an exquisite efficiency for naïve T-cell activation. DC encompass several subsets initially identified by their expression of specific cell surface molecules and later shown to excel in distinct functions and to develop under the instruction of different transcription factors or cytokines. Very few cell surface molecules are expressed in a specific manner on any immune cell type. Hence, to identify cell types, the sole use of a small number of cell surface markers in classical flow cytometry can be deceiving. Moreover, the markers currently used to define mononuclear phagocyte subsets vary depending on the tissue and animal species studied and even between laboratories. This has led to confusion in the definition of DC subset identity and in their attribution of specific functions. There is a strong need to identify a rigorous and consensus way to define mononuclear phagocyte subsets, with precise guidelines potentially applicable throughout tissues and species. We will discuss the advantages, drawbacks, and complementarities of different methodologies: cell surface phenotyping, ontogeny, functional characterization, and molecular profiling. We will advocate that gene expression profiling is a very rigorous, largely unbiased and accessible method to define the identity of mononuclear phagocyte subsets, which strengthens and refines surface phenotyping. It is uniquely powerful to yield new, experimentally testable, hypotheses on the ontogeny or functions of mononuclear phagocyte subsets, their molecular regulation, and their evolutionary conservation. We propose defining cell populations based on a combination of cell surface phenotyping, expression analysis of hallmark genes, and robust functional assays, in order to reach a consensus and integrate faster the huge but scattered knowledge accumulated by different laboratories on different cell types, organs, and species. PMID:26082777

  4. ECS Maintenance Wed. 12/28

    Atmospheric Science Data Center

    2016-12-27

    Date(s):  Wednesday, December 28, 2016 Time:  12 am - 12 pm EDT Event Impact:  The Data Pool, MISR order and browse tools, TAD, TES and MOPITT Search and Subset Applications, and Reverb will be unavailable...

  5. Alcohol-related expectancies in adults and adolescents: Similarities and disparities.

    PubMed

    Monk, Rebecca L; Heim, Derek

    2016-03-02

    This study aimed to contrast student and non-student outcome expectancies, and to explore the diversity of alcohol-related cognitions within a wider student sample. Participants (n=549) were college students (further education, typically aged 15-18 years), university students (higher education, typically aged 18-22 years) and business people (white-collar professionals <50 years) who completed questionnaires in their place of work or education. Overall positive expectancies were higher in the college students than in the business or university samples. However, not all expectancy subcategories followed this pattern. Participant groups of similar age were therefore alike in some aspects of their alcohol-related cognitions but different in others. Similarly, participant groups who are divergent in age appeared to be alike in some of their alcohol-related cognitions, such as tension reduction expectancies. Research often homogenises students as a specific subset of the population; this paper highlights that this may be an oversimplification. Furthermore, the largely exclusive focus on student groups within research in this area may also be an oversight, given the diversity of the findings demonstrated between these groups.

  6. Chemotherapy and target therapy in the management of adult high-grade gliomas.

    PubMed

    Spinelli, Gian Paolo; Miele, Evelina; Lo Russo, Giuseppe; Miscusi, Massimo; Codacci-Pisanelli, Giovanni; Petrozza, Vincenzo; Papa, Anselmo; Frati, Luigi; Della Rocca, Carlo; Gulino, Alberto; Tomao, Silverio

    2012-10-01

    Adult high-grade gliomas (HGG) are the most frequent and fatal primary central nervous system (CNS) tumors. Despite recent advances in the knowledge of the pathology and the molecular features of this neoplasm, its prognosis remains poor. In recent years, temozolomide (TMZ) has dramatically changed the life expectancy of these patients: the combination of this drug with radiotherapy (RT), followed by TMZ alone, is the current standard of care. However, malignant gliomas often remain resistant to chemotherapy (CHT). Therefore, preclinical and clinical research efforts have been directed at identifying and understanding the different mechanisms of chemo-resistance operating in this subset of tumors, in order to develop effective strategies to overcome resistance. Moreover, evidence of alterations in signal transduction pathways underlying tumor progression has increased the number of trials investigating molecular target agents, such as anti-epidermal growth factor receptor (EGFR) and anti-vascular endothelial growth factor (VEGF) signaling. The purpose of this review is to point out the current standard of treatment and to explore newly available targeted therapies in HGG.

  7. Pain as metaphor: metaphor and medicine

    PubMed Central

    Neilson, Shane

    2016-01-01

    Like many other disciplines, medicine often resorts to metaphor in order to explain complicated concepts that are imperfectly understood. But what happens when medicine's metaphors close off thinking, restricting interpretations and opinions to those of the negative kind? This paper considers the deleterious effects of destructive metaphors that cluster around pain. First, the metaphoric basis of all knowledge is introduced. Next, a particular subset of medical metaphors in the domain of neurology (doors/keys/wires) is shown to encourage mechanistic thinking. Because schematics are often used in medical textbooks to simplify the complex, this paper traces the visual metaphors implied in such schematics. Mechanistic-metaphorical thinking results in the accumulation of vast amounts of data through experimentation, but this paper asks what the real value of the information is since patients can generally only expect modest benefits – or none at all – for relief from chronic pain conditions. Elucidation of mechanism through careful experimentation creates an illusion of vast medical knowledge that, to a significant degree, is metaphor-based. This paper argues that for pain outcomes to change, our metaphors must change first. PMID:26253331

  8. Daily emotional states as reported by children and adolescents.

    PubMed

    Larson, R; Lampman-Petraitis, C

    1989-10-01

    Hour-to-hour emotional states reported by children, ages 9-15, were examined in order to evaluate the hypothesis that the onset of adolescence is associated with increased emotional variability. These youths carried electronic pagers for 1 week and filled out reports on their emotional states in response to signals received at random times. To evaluate possible age-related response sets, a subset of children was asked to use the same scales to rate the emotions shown in drawings of 6 faces. The expected relation between daily emotional variability and age was not found among the boys and was small among the girls. There was, however, a linear relation between age and average mood states, with the older participants reporting more dysphoric average states, especially more mildly negative states. An absence of age difference in the ratings of the faces indicated that this relation could not be attributed to age differences in response set. Thus, these findings provide little support for the hypothesis that the onset of adolescence is associated with increased emotionality but indicate significant alterations in everyday experience associated with this age period.

  9. A comparative human health risk assessment of p-dichlorobenzene-based toilet rimblock products versus fragrance/surfactant-based alternatives.

    PubMed

    Aronson, Dallas B; Bosch, Stephen; Gray, D Anthony; Howard, Philip H; Guiney, Patrick D

    2007-10-01

    A comparison of the human health risk to consumers using one of two types of toilet rimblock products, either a p-dichlorobenzene-based rimblock or two newer fragrance/surfactant-based alternatives, was conducted. Rimblock products are designed for global use by consumers worldwide and function by releasing volatile compounds into indoor air with subsequent exposure presumed to be mainly by inhalation of indoor air. Using the THERdbASE exposure model and experimentally determined emission data, indoor air concentrations and daily intake values were determined for both types of rimblock products. Modeled exposure concentrations from a representative p-dichlorobenzene rimblock product are an order of magnitude higher than those from the alternative rimblock products due to its nearly pure composition and high sublimation rate. Lifetime exposure to p-dichlorobenzene or the subset of fragrance components with available RfD values is not expected to lead to non-cancer-based adverse health effects based on the exposure concentrations estimated using the THERdbASE model. A similar comparison of cancer-based effects was not possible as insufficient data were available for the fragrance components.

  10. All Roads Lead to Rome: Exploring Human Migration to the Eternal City through Biochemistry of Skeletons from Two Imperial-Era Cemeteries (1st-3rd c AD)

    PubMed Central

    Killgrove, Kristina; Montgomery, Janet

    2016-01-01

    Migration within the Roman Empire occurred at multiple scales and was engaged in both voluntarily and involuntarily. Because of the lengthy tradition of classical studies, bioarchaeological analyses must be fully contextualized within the bounds of history, material culture, and epigraphy. In order to assess migration to Rome within an updated contextual framework, strontium isotope analysis was performed on 105 individuals from two cemeteries associated with Imperial Rome—Casal Bertone and Castellaccio Europarco—and oxygen and carbon isotope analyses were performed on a subset of 55 individuals. Statistical analysis and comparisons with expected local ranges found several outliers who likely immigrated to Rome from elsewhere. Demographics of the immigrants show men and children migrated, and a comparison of carbon isotopes from teeth and bone samples suggests the immigrants may have significantly changed their diet. These data represent the first physical evidence of individual migrants to Imperial Rome. This case study demonstrates the importance of employing bioarchaeology to generate a deeper understanding of a complex ancient urban center. PMID:26863610

  11. All Roads Lead to Rome: Exploring Human Migration to the Eternal City through Biochemistry of Skeletons from Two Imperial-Era Cemeteries (1st-3rd c AD).

    PubMed

    Killgrove, Kristina; Montgomery, Janet

    2016-01-01

    Migration within the Roman Empire occurred at multiple scales and was engaged in both voluntarily and involuntarily. Because of the lengthy tradition of classical studies, bioarchaeological analyses must be fully contextualized within the bounds of history, material culture, and epigraphy. In order to assess migration to Rome within an updated contextual framework, strontium isotope analysis was performed on 105 individuals from two cemeteries associated with Imperial Rome-Casal Bertone and Castellaccio Europarco-and oxygen and carbon isotope analyses were performed on a subset of 55 individuals. Statistical analysis and comparisons with expected local ranges found several outliers who likely immigrated to Rome from elsewhere. Demographics of the immigrants show men and children migrated, and a comparison of carbon isotopes from teeth and bone samples suggests the immigrants may have significantly changed their diet. These data represent the first physical evidence of individual migrants to Imperial Rome. This case study demonstrates the importance of employing bioarchaeology to generate a deeper understanding of a complex ancient urban center.

  12. Monochromatic-beam-based dynamic X-ray microtomography based on OSEM-TV algorithm.

    PubMed

    Xu, Liang; Chen, Rongchang; Yang, Yiming; Deng, Biao; Du, Guohao; Xie, Honglan; Xiao, Tiqiao

    2017-01-01

    Monochromatic-beam-based dynamic X-ray computed microtomography (CT) was developed to observe the evolution of microstructure inside samples. However, the low flux density results in low efficiency in data collection. Reducing the number of projections is a practical way to increase efficiency, but it degrades image reconstruction quality when the traditional filtered back projection (FBP) algorithm is used. In this study, an iterative reconstruction method using an ordered subset expectation maximization-total variation (OSEM-TV) algorithm was employed to address this problem. The simulated results demonstrated that the normalized mean square error of the image slices reconstructed by the OSEM-TV algorithm was about 1/4 of that by FBP. Experimental results also demonstrated that the density resolution of OSEM-TV was high enough to resolve different materials with fewer than 100 projections. As a result, with the introduction of OSEM-TV, monochromatic-beam-based dynamic X-ray microtomography becomes potentially practicable for quantitative and non-destructive analysis of the evolution of microstructure, with acceptable efficiency in data collection and reconstructed image quality.
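
    To make the reconstruction idea concrete, the sketch below runs multiplicative OSEM updates over ordered subsets of a small toy system matrix and applies a crude total-variation smoothing step after each full pass. The system matrix, subset count, step sizes and the sign-based TV step are illustrative assumptions only, not the OSEM-TV implementation used in the paper.

```python
# Minimal OSEM-TV sketch on a toy 1-D problem (hypothetical operators and
# parameters; not the implementation described in the abstract).
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_rays, n_subsets = 64, 256, 4

A = rng.random((n_rays, n_pix))            # toy system matrix (ray x pixel weights)
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true * 50)           # noisy projection data

def tv_smooth(x, weight=0.1, n_iter=20):
    """Very small gradient-descent TV smoothing step (1-D neighbours only)."""
    x = x.copy()
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        diff = np.diff(x)
        grad[:-1] -= np.sign(diff)         # d/dx_i of |x_{i+1} - x_i|
        grad[1:] += np.sign(diff)          # d/dx_{i+1} of the same term
        x -= weight * 0.05 * grad
    return np.clip(x, 0, None)

x = np.ones(n_pix)                          # flat initial estimate
subsets = np.array_split(np.arange(n_rays), n_subsets)
for _ in range(10):
    for s in subsets:
        As, ys = A[s], y[s]
        ratio = ys / np.maximum(As @ x, 1e-12)
        x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
    x = tv_smooth(x)                        # TV regularisation after each pass
```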

  13. On optimal fuzzy best proximity coincidence points of fuzzy order preserving proximal Ψ(σ, α)-lower-bounding asymptotically contractive mappings in non-Archimedean fuzzy metric spaces.

    PubMed

    De la Sen, Manuel; Abbas, Mujahid; Saleem, Naeem

    2016-01-01

    This paper discusses some convergence properties in fuzzy ordered proximal approaches defined by [Formula: see text]-sequences of pairs, where [Formula: see text] is a surjective self-mapping and [Formula: see text], where A and B are nonempty subsets of an abstract nonempty set X and [Formula: see text] is a partially ordered non-Archimedean fuzzy metric space which is endowed with a fuzzy metric M, a triangular norm * and an ordering [Formula: see text]. The fuzzy set M takes values in a sequence or set [Formula: see text], where the elements of the so-called switching rule [Formula: see text] are defined from [Formula: see text] to a subset of [Formula: see text]. Such a switching rule selects a particular realization of M at the nth iteration, and it is parameterized by a growth evolution sequence [Formula: see text] and a sequence or set [Formula: see text] which belongs to the so-called [Formula: see text]-lower-bounding mappings, which are defined from [0, 1] to [0, 1]. Some application examples concerning discrete systems under switching rules and best approximation solvability of algebraic equations are discussed.

  14. Effect of Using 2 mm Voxels on Observer Performance for PET Lesion Detection

    NASA Astrophysics Data System (ADS)

    Morey, A. M.; Noo, Frédéric; Kadrmas, Dan J.

    2016-06-01

    Positron emission tomography (PET) images are typically reconstructed with an in-plane pixel size of approximately 4 mm for cancer imaging. The objective of this work was to evaluate the effect of using smaller pixels on general oncologic lesion-detection. A series of observer studies was performed using experimental phantom data from the Utah PET Lesion Detection Database, which modeled whole-body FDG PET cancer imaging of a 92 kg patient. The data comprised 24 scans over 4 days on a Biograph mCT time-of-flight (TOF) PET/CT scanner, with up to 23 lesions (diam. 6-16 mm) distributed throughout the phantom each day. Images were reconstructed with 2.036 mm and 4.073 mm pixels using ordered-subsets expectation-maximization (OSEM) both with and without point spread function (PSF) modeling and TOF. Detection performance was assessed using the channelized non-prewhitened numerical observer with localization receiver operating characteristic (LROC) analysis. Tumor localization performance and the area under the LROC curve were then analyzed as functions of the pixel size. In all cases, the images with 2 mm pixels provided higher detection performance than those with 4 mm pixels. The degree of improvement from the smaller pixels was larger than that offered by PSF modeling for these data, and provided roughly half the benefit of using TOF. Key results were confirmed by two human observers, who read subsets of the test data. This study suggests that a significant improvement in tumor detection performance for PET can be attained by using smaller voxel sizes than commonly used at many centers. The primary drawback is a 4-fold increase in reconstruction time and data storage requirements.
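
    As a rough illustration of the kind of numerical observer used here, the sketch below builds a channelized non-prewhitened template from difference-of-Gaussians channels and scores lesion-present versus lesion-absent toy patches; it reports a plain ROC area rather than the LROC analysis of the study, and the lesion profile, channel widths and noise model are all hypothetical.

```python
# Hedged sketch of a channelized non-prewhitened (CNPW) observer score and a
# simple ROC area, using toy Gaussian image patches (all names hypothetical).
import numpy as np

rng = np.random.default_rng(1)
size = 32
yy, xx = np.mgrid[:size, :size] - size // 2
r = np.hypot(xx, yy)

# Difference-of-Gaussians channels, a common channel choice.
sigmas = [1, 2, 4, 8]
channels = np.stack([np.exp(-r**2 / (2 * s**2)) - np.exp(-r**2 / (2 * (2 * s)**2))
                     for s in sigmas]).reshape(len(sigmas), -1)

signal = 0.5 * np.exp(-r**2 / (2 * 2.0**2))      # toy lesion profile

def channel_outputs(img):
    return channels @ img.ravel()

# Training: mean channel-response difference defines the CNPW template.
absent = [rng.normal(0, 1, (size, size)) for _ in range(200)]
present = [a + signal for a in absent]
w = np.mean([channel_outputs(p) for p in present], axis=0) - \
    np.mean([channel_outputs(a) for a in absent], axis=0)

# Testing: the observer score is the template applied to channel outputs.
scores_a = np.array([w @ channel_outputs(rng.normal(0, 1, (size, size))) for _ in range(500)])
scores_p = np.array([w @ channel_outputs(rng.normal(0, 1, (size, size)) + signal) for _ in range(500)])
auc = np.mean(scores_p[:, None] > scores_a[None, :])   # Wilcoxon estimate of ROC area
print(f"toy ROC AUC = {auc:.2f}")
```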

  15. Feature Selection for Speech Emotion Recognition in Spanish and Basque: On the Use of Machine Learning to Improve Human-Computer Interaction

    PubMed Central

    Arruti, Andoni; Cearreta, Idoia; Álvarez, Aitor; Lazkano, Elena; Sierra, Basilio

    2014-01-01

    Study of emotions in human–computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish languages using different methods for feature selection. The RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek the most relevant feature subset. The three-phase approach was selected to check the validity of the proposed approach. Achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between them is provided. Based on achieved results, a set of most relevant non-speaker-dependent features is proposed for both languages and new perspectives are suggested. PMID:25279686

  16. Iterative CT reconstruction using coordinate descent with ordered subsets of data

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.

    2016-04-01

    Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
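
    A toy rendering of the combination may help: the sketch below sweeps coordinate-descent updates of each pixel against one ordered subset of the data at a time, for an unregularised weighted least-squares cost on a random system matrix. The penalty term, weights and operators of the actual method are omitted or replaced by stand-ins.

```python
# Minimal sketch of coordinate descent over ordered subsets of data for an
# unregularised weighted least-squares cost (toy operators, not the authors'
# implementation).
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_rays, n_subsets = 32, 128, 4
A = rng.random((n_rays, n_pix))           # toy system matrix
x_true = rng.random(n_pix)
y = A @ x_true + rng.normal(0, 0.05, n_rays)
w = np.full(n_rays, 1.0)                  # statistical weights (uniform here)

x = np.zeros(n_pix)                       # constant initial image
subsets = np.array_split(np.arange(n_rays), n_subsets)
for _ in range(10):
    for s in subsets:
        As, ys, ws = A[s], y[s], w[s]
        r = ys - As @ x                   # residual on this subset only
        for j in range(n_pix):            # one pass of coordinate updates
            aj = As[:, j]
            denom = aj @ (ws * aj)
            if denom > 0:
                step = (aj @ (ws * r)) / denom
                x[j] += step
                r -= step * aj            # keep the residual consistent
```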

  17. The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging.

    PubMed

    Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2011-07-07

    Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
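
    The relationship between MLEM and preconditioned gradient ascent that underlies these comparisons can be shown in a few lines: with the EM preconditioner x / (A^T 1) and a unit step length, a gradient-ascent update of the Poisson log-likelihood reproduces the MLEM iterate exactly. The system matrix and data below are toy stand-ins, and the line searches and preconditioners of NMML/PCG are not reproduced.

```python
# Sketch relating one MLEM update to a preconditioned gradient-ascent step on
# the Poisson log-likelihood (toy system matrix; hypothetical sizes).
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((200, 50))                  # toy PET system matrix
x_true = rng.random(50)
y = rng.poisson(A @ x_true * 40)
ones_backproj = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1

def loglike_grad(x):
    """Gradient of the Poisson log-likelihood  sum(y log(Ax) - Ax)."""
    return A.T @ (y / np.maximum(A @ x, 1e-12)) - ones_backproj

x = np.ones(50)
for _ in range(30):
    x = x * (A.T @ (y / np.maximum(A @ x, 1e-12))) / ones_backproj   # MLEM

# One MLEM step equals a unit-step gradient ascent preconditioned by
# x / (A^T 1); gradient methods like PCG/NMML swap in a line search and a
# different preconditioner instead of this fixed choice.
g = loglike_grad(x)
x_grad = x + (x / ones_backproj) * g
x_mlem = x * (A.T @ (y / np.maximum(A @ x, 1e-12))) / ones_backproj
assert np.allclose(x_grad, x_mlem)
```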

  18. GPU-based prompt gamma ray imaging from boron neutron capture therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.

  19. TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S; Suh, T; Yoon, D

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray reconstruction using the GPU computation for BNCT simulations.

  20. Resistive plate chambers in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Crespo, Paulo; Blanco, Alberto; Couceiro, Miguel; Ferreira, Nuno C.; Lopes, Luís; Martins, Paulo; Ferreira Marques, Rui; Fonte, Paulo

    2013-07-01

    Resistive plate chambers (RPC) were originally deployed for high energy physics. Realizing how their properties match the needs of nuclear medicine, a LIP team proposed applying RPCs to both preclinical and clinical positron emission tomography (RPC-PET). We show a large-area RPC-PET simulated scanner covering an axial length of 2.4 m—slightly greater than the height of the human body—allowing for whole-body, single-bed RPC-PET acquisitions. Simulations following NEMA (National Electrical Manufacturers Association, USA) protocols yield a system sensitivity at least one order of magnitude larger than that of present-day commercial PET systems. Reconstruction of whole-body simulated data is feasible using a dedicated, direct time-of-flight-based algorithm implemented on top of an ordered subsets expectation maximization parallelized strategy. Whole-body RPC-PET patient images following the injection of only 2 mCi of 18F-fluorodeoxyglucose (FDG) are expected to be ready 7 minutes after the 6 minutes necessary for data acquisition. This compares to the 10-20 mCi of FDG presently injected for a PET scan, and to the uncomfortable 20-30 minutes necessary for its data acquisition. In the preclinical field, two fully instrumented detector heads have been assembled aiming at a four-head-based, small-animal RPC-PET system. Images of a disk-shaped and a needle-like 22Na source show unprecedented sub-millimeter spatial resolution.

  1. Roughness in Lattice Ordered Effect Algebras

    PubMed Central

    Xin, Xiao Long; Hua, Xiu Juan; Zhu, Xi

    2014-01-01

    Many authors have studied roughness on various algebraic systems. In this paper, we consider a lattice ordered effect algebra and discuss its roughness in this context. Moreover, we introduce the notions of the interior and the closure of a subset and give some of their properties in effect algebras. Finally, we use a Riesz ideal induced congruence and define a function e(a, b) in a lattice ordered effect algebra E and build a relationship between it and congruence classes. Then we study some properties about approximation of lattice ordered effect algebras. PMID:25170523

  2. Deferred slanted-edge analysis: a unified approach to spatial frequency response measurement on distorted images and color filter array subsets.

    PubMed

    van den Bergh, F

    2018-03-01

    The slanted-edge method of spatial frequency response (SFR) measurement is usually applied to grayscale images under the assumption that any distortion of the expected straight edge is negligible. By decoupling the edge orientation and position estimation step from the edge spread function construction step, it is shown in this paper that the slanted-edge method can be extended to allow it to be applied to images suffering from significant geometric distortion, such as produced by equiangular fisheye lenses. This same decoupling also allows the slanted-edge method to be applied directly to Bayer-mosaicked images so that the SFR of the color filter array subsets can be measured directly without the unwanted influence of demosaicking artifacts. Numerical simulation results are presented to demonstrate the efficacy of the proposed deferred slanted-edge method in relation to existing methods.
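
    For orientation, the sketch below implements the classic straight-edge version of the slanted-edge pipeline on a synthetic binary edge: estimate the edge line, project pixels onto the edge normal, bin them into a supersampled ESF, differentiate to the LSF and take its spectrum. The deferred, distortion-aware estimation step that is the subject of the paper is not reproduced, and the windowing and binning choices are simplifications.

```python
# Toy straight-edge SFR sketch (not the deferred slanted-edge method itself).
import numpy as np

def slanted_edge_sfr(img, oversample=4):
    h, w = img.shape
    # Rough edge location per row: column of the steepest horizontal gradient.
    edge_x = np.argmax(np.abs(np.diff(img, axis=1)), axis=1) + 0.5
    rows = np.arange(h)
    slope, intercept = np.polyfit(rows, edge_x, 1)     # straight-edge model
    # Signed distance of every pixel from the fitted edge, along its normal.
    xx, yy = np.meshgrid(np.arange(w), rows)
    dist = (xx - (slope * yy + intercept)) / np.hypot(1.0, slope)
    # Supersampled edge spread function by binning projected distances.
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    sums = np.bincount(bins.ravel(), weights=img.ravel())
    esf = sums / np.maximum(counts, 1)
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)      # window to limit noise
    sfr = np.abs(np.fft.rfft(lsf))
    return sfr / sfr[0]

# Synthetic slanted edge for a quick check.
yy, xx = np.mgrid[:80, :80]
edge = (xx > 0.05 * yy + 35).astype(float)
print(slanted_edge_sfr(edge)[:5])
```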

  3. CT image reconstruction with half precision floating-point values.

    PubMed

    Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc

    2011-07-01

    Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms are finding their way into clinical routine because their image quality is superior to that of analytic image reconstruction. The authors thoroughly analyze a so far unconsidered but valuable tool of tomorrow's reconstruction hardware (CPU and GPU) that allows the forward projection and backprojection steps, which are the computationally most demanding parts of any reconstruction algorithm, to be implemented much more efficiently. Instead of the standard 32 bit floating-point value (float), a recently standardized 16 bit floating-point value (half) is adopted for data representation in the image domain and in the raw-data domain. The reduction in the total data amount reduces the traffic on the memory bus, which is the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (gold standard) and half reconstructions are visually compared via difference images and by quantitative image quality evaluation. This is done for analytical reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of quantization noise, which is caused by the reduction in the data precision of both raw data and image data during image reconstruction, is negligible. This is clearly shown for filtered backprojection and iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half precision floating-point values therefore allow CT image reconstruction to be sped up without compromising image quality.
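
    The precision experiment can be mimicked in a few lines: store toy projection data in IEEE half precision, accumulate in single precision, and compare against the float reference. The array sizes and the simple sum used as a stand-in for backprojection are assumptions for illustration only.

```python
# Small sketch of the data-precision idea: raw data stored as float16, with
# accumulation in float32 in both cases (toy data, not the authors' code).
import numpy as np

rng = np.random.default_rng(4)
proj32 = rng.random((360, 512), dtype=np.float32)      # raw data, float
proj16 = proj32.astype(np.float16)                      # raw data, half

# Only the *stored* precision differs, which is what halves memory-bus traffic.
backproj_ref = proj32.sum(axis=0)
backproj_half = proj16.astype(np.float32).sum(axis=0)

rel_err = np.abs(backproj_half - backproj_ref) / backproj_ref
print(f"median relative quantisation error: {np.median(rel_err):.2e}")
```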

  4. Functional characterization of a regulatory human T-cell subpopulation increasing during autologous MLR.

    PubMed Central

    Cosulich, M E; Risso, A; Canonica, G W; Bargellesi, A

    1986-01-01

    The present study was undertaken to investigate the heterogeneity of helper T cells in humans using two different monoclonal antibodies: 5/9 and MLR4. The former identifies 15-20% of resting T lymphocytes from peripheral blood and corresponds to an anti-helper/inducer T cell. The second antibody, MLR4, recognizes 5% of total T lymphocytes and partially overlaps with the 5/9+ T cells. In order to investigate functional differences within the 5/9+ cells, we separated two different subsets (5/9+ MLR4+ and 5/9+ MLR4-) by a rosetting technique. Although both subsets provide help for Ig synthesis in a PWM-stimulated culture, only the 5/9+ MLR4- fraction gave a proliferative response in both autologous and allogeneic MLR and to soluble protein antigens. The effect of radiation on the ability of the two subsets to provide help for Ig synthesis showed that the 5/9+ MLR4+ subset is highly radiation-sensitive, while the 5/9+ MLR4- subset is relatively radiation-resistant. In a further series of experiments, 5/9+ MLR4+ cells isolated after activation in an autologous MLR, but not by Con A, were no longer able to induce T-cell differentiation but now showed a strong suppressor effect. The 5/9+ MLR4- subset separated from the same cultures did not display any suppressor function. These data demonstrate in fresh PBL the existence of a radiation-sensitive regulatory subset exerting helper activity, which acquires suppressor activity after activation in autologous MLR. PMID:2936679

  5. Comparative quantitative analysis of cluster of differentiation 45 antigen expression on lymphocyte subsets.

    PubMed

    Im, Mijeong; Chae, Hyojin; Kim, Taehoon; Park, Hun-Hee; Lim, Jihyang; Oh, Eun-Jee; Kim, Yonggoo; Park, Yeon-Joon; Han, Kyungja

    2011-07-01

    Since the recent introduction of radioimmunotherapy (RIT) using antibodies against cluster of differentiation (CD) 45 for the treatment of lymphoma, the clinical significance of the CD45 antigen has been increasing steadily. Here, we analyzed CD45 expression on lymphocyte subsets using flow cytometry in order to predict the susceptibility of normal lymphocytes to RIT. Peripheral blood specimens were collected from 14 healthy individuals aged 25-54 yr. The mean fluorescence intensity (MFI) of the cell surface antigens was measured using a FACSCanto II system (Becton Dickinson Bioscience, USA). MFI values were converted into antibody binding capacity values using a Quantum Simply Cellular microbead kit (Bangs Laboratories, Inc., USA). Among the lymphocyte subsets, the expression of CD45 was the highest (725,368±42,763) on natural killer T (NKT) cells, 674,030±48,187 on cytotoxic/suppressor T cells, 588,750±48,090 on natural killer (NK) cells, 580,211±29,168 on helper T (Th) cells, and 499,436±21,737 on B cells. The Th cells and NK cells expressed a similar level of CD45 (P=0.502). Forward scatter was the highest in NKT cells (P<0.05), whereas side scatter differed significantly between each of the lymphocyte subsets (P<0.05). CD3 expression was highest in the Th and NKT cells. NKT cells express the highest levels of CD45 antigen. Therefore, this lymphocyte subset would be most profoundly affected by RIT or pretargeted RIT. The monitoring of this lymphocyte subset during and after RIT should prove helpful.

  6. AIRS Data Subsetting Service at the Goddard Earth Sciences (GES) DISC/DAAC

    NASA Technical Reports Server (NTRS)

    Vicente, Gilberto A.; Qin, Jianchun; Li, Jason; Gerasimov, Irina; Savtchenko, Andrey

    2004-01-01

    The AIRS mission, as a combination of the Atmospheric Infrared Sounder (AIRS), the Advanced Microwave Sounding Unit (AMSU) and the Humidity Sounder for Brazil (HSB), brings climate research and weather prediction into the 21st century. From NASA's Aqua spacecraft, the AIRS/AMSU/HSB instruments measure humidity, temperature, cloud properties and the amounts of greenhouse gases. The AIRS also reveals land and sea-surface temperatures. Measurements from these three instruments are analyzed jointly to filter out the effects of clouds from the IR data in order to derive clear-column air-temperature profiles and surface temperatures with high vertical resolution and accuracy. Together, they constitute an advanced operational sounding data system that has helped to improve global modeling efforts and numerical weather prediction; enhance studies of the global energy and water cycles, the effects of greenhouse gases, and atmosphere-surface interactions; and facilitate monitoring of climate variations and trends. The high data volume generated by the AIRS/AMSU/HSB instruments and the complexity of their data format (Hierarchical Data Format, HDF) are barriers to AIRS data use. Although many researchers are interested in only a fraction of the data they receive or request, they are forced to run their algorithms on a much larger data set to extract the information of interest. In order to better serve its users, the GES DISC/DAAC, provider of long-term archives and distribution services as well as science support for the AIRS/AMSU/HSB data products, has developed various tools for performing channel, variable, parameter, spatial and derived-product subsetting, resampling and reformatting operations. This presentation mainly describes the web-enabled subsetting services currently available at the GES DISC/DAAC that provide subsetting functions for all the Level 1B and Level 2 data products from the AIRS/AMSU/HSB instruments.

  7. Optimized diffusion gradient orientation schemes for corrupted clinical DTI data sets.

    PubMed

    Dubois, J; Poupon, C; Lethimonnier, F; Le Bihan, D

    2006-08-01

    A method is proposed for generating schemes of diffusion gradient orientations which allow the diffusion tensor to be reconstructed from partial data sets in clinical DT-MRI, should the acquisition be corrupted or terminated before completion because of patient motion. A general energy-minimization electrostatic model was developed in which the interactions between orientations are weighted according to their temporal order during acquisition. In this report, two corruption scenarios were specifically considered for generating relatively uniform schemes of 18 and 60 orientations, with useful subsets of 6 and 15 orientations. The sets and subsets were compared to conventional sets through their energy, condition number and rotational invariance. Schemes of 18 orientations were tested on a volunteer. The optimized sets were similar to uniform sets in terms of energy, condition number and rotational invariance, whether the complete set or only a subset was considered. Diffusion maps obtained in vivo were close to those for uniform sets whatever the acquisition time was. This was not the case with conventional schemes, whose subset uniformity was insufficient. With the proposed approach, sets of orientations responding to several corruption scenarios can be generated, which is potentially useful for imaging uncooperative patients or infants.
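
    The scoring idea behind such schemes can be sketched as follows: an electrostatic energy over antipodally symmetric unit vectors measures the uniformity of an orientation set, and a per-pair weight is where a temporal-order dependence could enter. The weight function shown is a hypothetical placeholder, not the weighting proposed in the paper.

```python
# Sketch of an electrostatic-energy score for a diffusion-gradient scheme,
# with an optional (hypothetical) pairwise weight standing in for the
# temporal-order weighting described in the abstract.
import numpy as np

def electrostatic_energy(dirs, weight=None):
    """dirs: (N, 3) unit vectors; antipodal pairs are treated as equivalent."""
    n = len(dirs)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_plus = np.linalg.norm(dirs[i] - dirs[j])
            d_minus = np.linalg.norm(dirs[i] + dirs[j])   # antipodal image
            w = 1.0 if weight is None else weight(i, j)
            e += w * (1.0 / max(d_plus, 1e-9) + 1.0 / max(d_minus, 1e-9))
    return e

rng = np.random.default_rng(5)
dirs = rng.normal(size=(18, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(electrostatic_energy(dirs))
# Hypothetical weight emphasising pairs acquired close together in time:
print(electrostatic_energy(dirs, weight=lambda i, j: 1.0 / (1 + abs(i - j))))
```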

  8. Redefining the endophenotype concept to accommodate transdiagnostic vulnerabilities and etiological complexity.

    PubMed

    Beauchaine, Theodore P; Constantino, John N

    2017-09-11

    In psychopathology research, endophenotypes are a subset of biomarkers that indicate genetic vulnerability independent of clinical state. To date, an explicit expectation is that endophenotypes be specific to single disorders. We evaluate this expectation considering recent advances in psychiatric genetics, recognition that transdiagnostic vulnerability traits are often more useful than clinical diagnoses in psychiatric genetics, and appreciation for etiological complexity across genetic, neural, hormonal and environmental levels of analysis. We suggest that the disorder-specificity requirement of endophenotypes be relaxed, that neural functions are preferable to behaviors as starting points in searches for endophenotypes, and that future research should focus on interactive effects of multiple endophenotypes on complex psychiatric disorders, some of which are 'phenocopies' with distinct etiologies.

  9. Efficiencies in freight & passenger routing & scheduling.

    DOT National Transportation Integrated Search

    2015-08-01

    The problem we study concerns routing a fleet of capacitated vehicles in real time to collect shipment orders placed by a known set of customers. On each day of operation, only a subset of all customers request service. Some of these requests are...

  10. Nestedness of desert bat assemblages: species composition patterns in insular and terrestrial landscapes.

    PubMed

    Frick, Winifred F; Hayes, John P; Heady, Paul A

    2009-01-01

    Nested patterns of community composition exist when species at depauperate sites are subsets of those occurring at sites with more species. Nested subset analysis provides a framework for analyzing species occurrences to determine non-random patterns in community composition and potentially identify mechanisms that may shape faunal assemblages. We examined nested subset structure of desert bat assemblages on 20 islands in the southern Gulf of California and at 27 sites along the Baja California peninsula coast, the presumable source pool for the insular faunas. Nested structure was analyzed using a conservative null model that accounts for expected variation in species richness and species incidence across sites (fixed row and column totals). Associations of nestedness and island traits, such as size and isolation, as well as species traits related to mobility, were assessed to determine the potential role of differential extinction and immigration abilities as mechanisms of nestedness. Bat faunas were significantly nested in both the insular and terrestrial landscape and island size was significantly correlated with nested structure, such that species on smaller islands tended to be subsets of species on larger islands, suggesting that differential extinction vulnerabilities may be important in shaping insular bat faunas. The role of species mobility and immigration abilities is less clearly associated with nestedness in this system. Nestedness in the terrestrial landscape is likely due to stochastic processes related to random placement of individuals and this may also influence nested patterns on islands, but additional data on abundances will be necessary to distinguish among these potential mechanisms.

  11. G-STRATEGY: Optimal Selection of Individuals for Sequencing in Genetic Association Studies

    PubMed Central

    Wang, Miaoyan; Jakobsdottir, Johanna; Smith, Albert V.; McPeek, Mary Sara

    2017-01-01

    In a large-scale genetic association study, the number of phenotyped individuals available for sequencing may, in some cases, be greater than the study’s sequencing budget will allow. In that case, it can be important to prioritize individuals for sequencing in a way that optimizes power for association with the trait. Suppose a cohort of phenotyped individuals is available, with some subset of them possibly already sequenced, and one wants to choose an additional fixed-size subset of individuals to sequence in such a way that the power to detect association is maximized. When the phenotyped sample includes related individuals, power for association can be gained by including partial information, such as phenotype data of ungenotyped relatives, in the analysis, and this should be taken into account when assessing whom to sequence. We propose G-STRATEGY, which uses simulated annealing to choose a subset of individuals for sequencing that maximizes the expected power for association. In simulations, G-STRATEGY performs extremely well for a range of complex disease models and outperforms other strategies with, in many cases, relative power increases of 20–40% over the next best strategy, while maintaining correct type 1 error. G-STRATEGY is computationally feasible even for large datasets and complex pedigrees. We apply G-STRATEGY to data on HDL and LDL from the AGES-Reykjavik and REFINE-Reykjavik studies, in which G-STRATEGY is able to closely approximate the power of sequencing the full sample by selecting for sequencing only a small subset of the individuals. PMID:27256766
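
    The simulated-annealing selection itself is generic and easy to sketch: propose single swaps between selected and unselected individuals and accept them with a temperature-dependent probability. The score() below is a toy stand-in with diminishing returns, not the expected-power criterion that G-STRATEGY actually optimises.

```python
# Toy simulated-annealing subset selection (hypothetical objective, not the
# G-STRATEGY power calculation).
import numpy as np

rng = np.random.default_rng(6)
n_total, k = 200, 30
info = rng.random(n_total)                 # toy per-individual informativeness

def score(subset):
    # Stand-in objective with diminishing returns.
    return np.sqrt(info[list(subset)].sum())

current = set(rng.choice(n_total, size=k, replace=False))
best, best_score = set(current), score(current)
temp = 1.0
for step in range(5000):
    # Propose swapping one selected individual for one unselected individual.
    out = rng.choice(list(current))
    inn = rng.choice(list(set(range(n_total)) - current))
    proposal = (current - {out}) | {inn}
    delta = score(proposal) - score(current)
    if delta > 0 or rng.random() < np.exp(delta / temp):
        current = proposal
    if score(current) > best_score:
        best, best_score = set(current), score(current)
    temp *= 0.999                          # geometric cooling schedule

print("best toy score:", best_score)
```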

  12. F-35 Joint Strike Fighter (JSF) Program

    DTIC Science & Technology

    2012-02-16

    Operational Test and Evaluation (IOT&E), a subset of SDD.61 The eight partner countries are expected to purchase hundreds of F-35s, with the United...Netherlands have agreed to participate in the IOT&E program. UK, the senior F-35 partner, will have the strongest participation in the IOT&E phase...testing. (Telephone conversation with OSD/AT&L, October 3, 2007.) Other partner nations are still weighing their option to participate in the IOT&E

  13. Use of the SF-8 to assess health-related quality of life for a chronically ill, low-income population participating in the Central Louisiana Medication Access Program (CMAP).

    PubMed

    Lefante, John J; Harmon, Gary N; Ashby, Keith M; Barnard, David; Webber, Larry S

    2005-04-01

    The utility of the SF-8 for assessing health-related quality of life (HRQL) is demonstrated. Race and gender differences in physical component (PCS) and mental component (MCS) summary scores among participants in the CENLA Medication Access Program (CMAP), along with comparisons to the United States population, are examined. Age-adjusted multiple linear regression analyses were used to compare 1687 CMAP participants to the US population. Internal race and gender comparisons, adjusting for age and the number of self-reported diagnoses, were also obtained. The paired t-test was used to assess 6-month change in PCS and MCS scores for a subset of 342 participants. CMAP participants have PCS and MCS scores that are 10-12 points lower than those of the US population, a statistically significant difference indicating lower self-reported HRQL. Females have significantly higher PCS and significantly lower MCS than males. African-Americans have significantly higher MCS than Caucasians. Significant increases in both PCS and MCS were observed for the subset of participants after 6 months of intervention. The expected lower baseline PCS and MCS measures and the expected associations with age and number of diagnoses indicate that the SF-8 survey is an effective tool for measuring the HRQL of participants in this program. Preliminary results indicate significant increases in both PCS and MCS 6 months after intervention.

  14. A comparison of mental health outcomes in persons entering U.S. military service before and after September 11, 2001.

    PubMed

    Wells, Timothy S; Ryan, Margaret A K; Jones, Kelly A; Hooper, Tomoko I; Boyko, Edward J; Jacobson, Isabel G; Smith, Tyler C; Gackstetter, Gary D

    2012-02-01

    It has been hypothesized that those who entered military service in the pre-September 11, 2001 era might have expectations incongruent with their subsequent experiences, increasing the risk for posttraumatic stress disorder (PTSD) or other mental disorders. A subset of Millennium Cohort Study participants who joined the military during 1995-1999 was selected and compared with a subset of members who joined the military in 2002 or later. Outcomes included new-onset symptoms of PTSD, depression, panic/anxiety, and alcohol-related problems. Multivariable methods adjusted for differences in demographic and military characteristics. More than 11,000 cohort members were included in the analyses. Those who entered service in the pre-September 11 era had lower odds of new-onset PTSD symptoms (odds ratio [OR] 0.74, 95% CI [0.59, 0.93]) compared with the post-September 11 cohort. There were no statistically significant differences in rates of new-onset symptoms of depression, panic/anxiety, or alcohol-related problems between the groups. The cohort who entered military service in the pre-September 11 era did not experience higher rates of new-onset mental health challenges compared with the cohort who entered service after September 11, 2001. Findings support the concept that the experience of war, and resulting psychological morbidity, is not a function of incongruent expectations. Copyright © 2012 International Society for Traumatic Stress Studies.

  15. Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction

    PubMed Central

    Jian, Y; Planeta, B; Carson, R E

    2016-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254

  16. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Planeta, B.; Carson, R. E.

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
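
    The bias and variability summary used in both versions of this study can be expressed compactly: compute ROI means over replicate reconstructions at a matched effective iteration number and report percent bias against the truth and the coefficient of variation. The placeholder arrays below stand in for the actual OSEM reconstructions.

```python
# Sketch of ROI bias and coefficient-of-variation evaluation across replicate
# reconstructions (placeholder "reconstructions", not MOLAR output).
import numpy as np

rng = np.random.default_rng(7)
n_reps = 20
true_roi_mean = {"low_activity": 1.0, "high_activity": 10.0}

def roi_mean(recon, mask):
    return recon[mask].mean()

shape = (64, 64)
masks = {"low_activity": np.zeros(shape, bool), "high_activity": np.zeros(shape, bool)}
masks["low_activity"][10:20, 10:20] = True
masks["high_activity"][40:50, 40:50] = True
truth = np.ones(shape)
truth[masks["high_activity"]] = 10.0
recons = [truth + rng.normal(0, 0.3, shape) for _ in range(n_reps)]   # stand-ins

for name, mask in masks.items():
    vals = np.array([roi_mean(r, mask) for r in recons])
    bias = 100 * (vals.mean() - true_roi_mean[name]) / true_roi_mean[name]
    cov = 100 * vals.std(ddof=1) / vals.mean()
    print(f"{name}: bias {bias:+.1f}%, CoV {cov:.1f}% across replicates")
```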

  17. Functional analysis of circadian pacemaker neurons in Drosophila melanogaster.

    PubMed

    Rieger, Dirk; Shafer, Orie Thomas; Tomioka, Kenji; Helfrich-Förster, Charlotte

    2006-03-01

    The molecular mechanisms of circadian rhythms are well known, but how multiple clocks within one organism generate a structured rhythmic output remains a mystery. Many animals show bimodal activity rhythms with morning (M) and evening (E) activity bouts. One long-standing model assumes that two mutually coupled oscillators underlie these bouts and show different sensitivities to light. Three groups of lateral neurons (LN) and three groups of dorsal neurons govern behavioral rhythmicity of Drosophila. Recent data suggest that two groups of the LN (the ventral subset of the small LN cells and the dorsal subset of LN cells) are plausible candidates for the M and E oscillator, respectively. We provide evidence that these neuronal groups respond differently to light and can be completely desynchronized from one another by constant light, leading to two activity components that free-run with different periods. As expected, a long-period component started from the E activity bout. However, a short-period component originated not exclusively from the morning peak but more prominently from the evening peak. This reveals an interesting deviation from the original Pittendrigh and Daan (1976) model and suggests that a subgroup of the ventral subset of the small LN acts as "main" oscillator controlling M and E activity bouts in Drosophila.

  18. VizieR Online Data Catalog: DIBs in APOGEE telluric standard star spectra (Elyajouri+, 2017)

    NASA Astrophysics Data System (ADS)

    Elyajouri, M.; Lallement, R.; Monreal-Ibero, A.; Capitanio, L.; Cox, N. L. J.

    2017-04-01

    A subset of ~60 target stars from the APOGEE TSS list described in Sect. 3 has been observed in the visible with NARVAL, the spectropolarimeter of the Bernard Lyot telescope (2m) at Pic du Midi observatory, used in high-resolution spectroscopic mode (R~=80000). For all data the signal-to-noise ratio (S/N) is between 50 and 100. Two targets were observed twice in order to check the estimated uncertainties. An additional subset of five targets was observed with the SOPHIE spectrograph at the 1.93m telescope of the Haute-Provence Observatory at a resolving power R~=39000. (1 data file).

  19. Development and Standardization of a Test for Pragmatic Language Skills in Egyptian Arabic: The Egyptian Arabic Pragmatic Language Test (EAPLT).

    PubMed

    Khodeir, Mona S; Hegazi, Mona A; Saleh, Marwa M

    2018-03-19

    The aim of this study was to standardize an Egyptian Arabic Pragmatic Language Test (EAPLT) using linguistically and socially suitable questions and pictures in order to be able to address specific deficits in this language domain. Questions and pictures were designed for the EAPLT to assess 3 pragmatic language subsets: pragmatic skills, functions, and factors. Ten expert phoniatricians were asked to review the EAPLT and complete a questionnaire to assess the validity of the test items. The EAPLT was applied in 120 typically developing Arabic-speaking Egyptian children (64 females and 56 males) randomly selected by inclusion and exclusion criteria in the age range between 2 years, 1 month, 1 day and 9 years, 12 months, 31 days. Children's scores were used to calculate the means and standard deviations and the 5th and 95th percentiles to determine the age of pragmatic skills acquisition. The experts largely agreed that the EAPLT gives a general idea about children's pragmatic language development. Test-retest reliability analysis proved the high reliability and internal consistency of the EAPLT subsets. A statistically significant correlation was found between the test subsets and age. The EAPLT is a valid and reliable Egyptian Arabic test that can be applied in order to detect a pragmatic language delay. © 2018 S. Karger AG, Basel.

  20. Antigen presenting capacity of murine splenic myeloid cells.

    PubMed

    Hey, Ying-Ying; Quah, Benjamin; O'Neill, Helen C

    2017-01-11

    The spleen is an important site for hematopoiesis. It supports development of myeloid cells from bone marrow-derived precursors entering from blood. Myeloid subsets in spleen are not well characterised although dendritic cell (DC) subsets are clearly defined in terms of phenotype, development and functional role. Recently a novel dendritic-like cell type in spleen named 'L-DC' was distinguished from other known dendritic and myeloid cells by its distinct phenotype and developmental origin. That study also redefined splenic eosinophils as well as resident and inflammatory monocytes in spleen. L-DC are shown to be distinct from known splenic macrophages and monocyte subsets. Using a new flow cytometric procedure, it has been possible to identify and isolate L-DC in order to assess their functional competence and ability to activate T cells both in vivo and in vitro. L-DC are readily accessible to antigen given intravenously through receptor-mediated endocytosis. They are also capable of CD8+ T cell activation through antigen cross presentation, with subsequent induction of cytotoxic effector T cells. L-DC are MHCII- cells and unable to activate CD4+ T cells, a property which clearly distinguishes them from conventional DC. The myeloid subsets of resident monocytes, inflammatory monocytes, neutrophils and eosinophils, were found to have varying capacities to take up antigen, but were uniformly unable to activate either CD4+ T cells or CD8+ T cells. The results presented here demonstrate that L-DC in spleen are distinct from other myeloid cells in that they can process antigen for CD8+ T cell activation and induction of cytotoxic effector function, while both L-DC and myeloid subsets remain unable to activate CD4+ T cells. The L-DC subset in spleen is therefore distinct as an antigen presenting cell.

  1. Gamma-ray blazars: the combined AGILE and MAGIC views

    NASA Astrophysics Data System (ADS)

    Persic, M.; De Angelis, A.; Longo, F.; Tavani, M.

    The large FOV of the AGILE Gamma-Ray Imaging Detector (GRID), 2.5 sr, will allow the whole sky to be surveyed once every 10 days in the 30 MeV - 50 GeV energy band down to 0.05 Crab Units. This gives the opportunity to perform the first flux-limited, high-energy γ-ray all-sky survey. The high Galactic latitude point-source population is expected to be largely dominated by blazars. Several tens of blazars are expected to be detected by AGILE (e.g., Costamante & Ghisellini 2002), about half of which are accessible to the ground-based MAGIC Cherenkov telescope. The latter can then carry out pointed observations of this subset of AGILE sources in the 50 GeV - 10 TeV band. Given the comparable sensitivities of AGILE/GRID and MAGIC in adjacent energy bands where the emitted radiation is produced by the same (e.g., SSC) mechanism, we expect that most of these sources can be detected by MAGIC. We expect this broadband γ-ray strategy to enable the discovery by MAGIC of 10-15 previously unknown TeV blazars.

  2. Feature selection with harmony search.

    PubMed

    Diao, Ren; Shen, Qiang

    2012-12-01

    Many search strategies have been exploited for the task of feature selection (FS), in an effort to identify more compact and better quality subsets. Such work typically involves the use of greedy hill climbing (HC), or nature-inspired heuristics, in order to discover the optimal solution without going through exhaustive search. In this paper, a novel FS approach based on harmony search (HS) is presented. It is a general approach that can be used in conjunction with many subset evaluation techniques. The simplicity of HS is exploited to reduce the overall complexity of the search process. The proposed approach is able to escape from local solutions and identify multiple solutions owing to the stochastic nature of HS. Additional parameter control schemes are introduced to reduce the effort and impact of parameter configuration. These can be further combined with the iterative refinement strategy, tailored to enforce the discovery of quality subsets. The resulting approach is compared with those that rely on HC, genetic algorithms, and particle swarm optimization, accompanied by in-depth studies of the suggested improvements.
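
    The harmony-search idea can be illustrated with a short sketch. This is a minimal, illustrative implementation assuming a user-supplied evaluate(subset) function that scores a binary feature mask (for example, a cross-validated classifier accuracy); the memory size, HMCR and PAR values are generic defaults, not the paper's settings or parameter-control schemes.

    ```python
    # Minimal harmony-search feature selection sketch (illustrative defaults only).
    import random

    def harmony_search_fs(n_features, evaluate, memory_size=10, hmcr=0.9, par=0.3, iterations=200):
        # Initialise the harmony memory with random binary feature subsets.
        memory = [[random.random() < 0.5 for _ in range(n_features)] for _ in range(memory_size)]
        scores = [evaluate(h) for h in memory]  # evaluate() must handle any subset, incl. empty

        for _ in range(iterations):
            new = []
            for j in range(n_features):
                if random.random() < hmcr:
                    # Memory consideration: copy the bit from a randomly chosen stored harmony.
                    bit = random.choice(memory)[j]
                    # Pitch adjustment: occasionally flip the copied bit.
                    if random.random() < par:
                        bit = not bit
                else:
                    # Random consideration.
                    bit = random.random() < 0.5
                new.append(bit)
            new_score = evaluate(new)
            # Replace the worst harmony if the new one is better.
            worst = min(range(memory_size), key=lambda i: scores[i])
            if new_score > scores[worst]:
                memory[worst], scores[worst] = new, new_score

        best = max(range(memory_size), key=lambda i: scores[i])
        return memory[best], scores[best]
    ```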

  3. Random-subset fitting of digital holograms for fast three-dimensional particle tracking [invited].

    PubMed

    Dimiduk, Thomas G; Perry, Rebecca W; Fung, Jerome; Manoharan, Vinothan N

    2014-09-20

    Fitting scattering solutions to time series of digital holograms is a precise way to measure three-dimensional dynamics of microscale objects such as colloidal particles. However, this inverse-problem approach is computationally expensive. We show that the computational time can be reduced by an order of magnitude or more by fitting to a random subset of the pixels in a hologram. We demonstrate our algorithm on experimentally measured holograms of micrometer-scale colloidal particles, and we show that 20-fold increases in speed, relative to fitting full frames, can be attained while introducing errors in the particle positions of 10 nm or less. The method is straightforward to implement and works for any scattering model. It also enables a parallelization strategy wherein random-subset fitting is used to quickly determine initial guesses that are subsequently used to fit full frames in parallel. This approach may prove particularly useful for studying rare events, such as nucleation, that can only be captured with high frame rates over long times.
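
    As a rough illustration of the random-subset idea, the sketch below fits a forward model to a randomly chosen fraction of hologram pixels with a standard least-squares routine. The model argument is a hypothetical stand-in for the exact scattering solution (e.g. Lorenz-Mie) used in the paper, which is not reproduced here.

    ```python
    # Random-subset fitting sketch: fit a forward model to a fraction of the pixels.
    import numpy as np
    from scipy.optimize import least_squares

    def fit_random_subset(hologram, coords, model, p0, fraction=0.05, seed=0):
        """Fit `model(params, coords)` to a random fraction of hologram pixels.

        hologram : 2D array of measured intensities
        coords   : array of per-pixel coordinates, leading dimensions matching hologram
        model    : hypothetical forward model returning predicted intensities
        p0       : initial parameter guess (e.g. particle position, radius, index)
        """
        rng = np.random.default_rng(seed)
        n = hologram.size
        idx = rng.choice(n, size=max(1, int(fraction * n)), replace=False)
        data = hologram.ravel()[idx]
        sub_coords = coords.reshape(n, -1)[idx]

        def residuals(p):
            return model(p, sub_coords) - data

        return least_squares(residuals, p0)
    ```

    The subset fit can also serve to generate initial guesses that are then refined on full frames, as the parallelization strategy in the abstract suggests.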

  4. The Cross-Entropy Based Multi-Filter Ensemble Method for Gene Selection.

    PubMed

    Sun, Yingqiang; Lu, Chengbo; Li, Xiaobo

    2018-05-17

    The gene expression profile has the characteristics of a high dimension, low sample, and continuous type, and it is a great challenge to use gene expression profile data for the classification of tumor samples. This paper proposes a cross-entropy based multi-filter ensemble (CEMFE) method for microarray data classification. Firstly, multiple filters are used to select the microarray data in order to obtain a plurality of the pre-selected feature subsets with a different classification ability. The top N genes with the highest rank of each subset are integrated so as to form a new data set. Secondly, the cross-entropy algorithm is used to remove the redundant data in the data set. Finally, the wrapper method, which is based on forward feature selection, is used to select the best feature subset. The experimental results show that the proposed method is more efficient than other gene selection methods and that it can achieve a higher classification accuracy under fewer characteristic genes.
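
    The overall flow can be sketched as follows: several filter scores rank the genes, the union of each filter's top-N forms a pre-selected pool, and a simple forward wrapper picks the final subset. The scikit-learn filters and logistic-regression wrapper below are illustrative stand-ins, and the paper's cross-entropy redundancy-removal step is not reproduced.

    ```python
    # Multi-filter ensemble sketch: filter pre-selection plus a forward wrapper.
    import numpy as np
    from sklearn.feature_selection import f_classif, mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def multi_filter_pool(X, y, top_n=50):
        # Three example filters; the paper may use different ones.
        f_scores, _ = f_classif(X, y)
        mi_scores = mutual_info_classif(X, y)
        var_scores = X.var(axis=0)
        pool = set()
        for scores in (f_scores, mi_scores, var_scores):
            pool |= set(np.argsort(scores)[::-1][:top_n])  # top-N genes of each filter
        return sorted(pool)

    def forward_wrapper(X, y, pool, max_features=20):
        selected, best = [], -np.inf
        while len(selected) < max_features:
            gains = {}
            for g in pool:
                if g in selected:
                    continue
                cols = selected + [g]
                gains[g] = cross_val_score(LogisticRegression(max_iter=1000),
                                           X[:, cols], y, cv=3).mean()
            g_best = max(gains, key=gains.get)
            if gains[g_best] <= best:   # stop when no gene improves the score
                break
            best = gains[g_best]
            selected.append(g_best)
        return selected, best
    ```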

  5. A geopotential model from satellite tracking, altimeter, and surface gravity data: GEM-T3

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Nerem, R. S.; Putney, B. H.; Felsentreger, T. L.; Sanchez, B. V.; Marshall, J. A.; Klosko, S. M.; Patel, G. B.; Williamson, R. G.; Chinn, D. S.

    1994-01-01

    An improved model of Earth's gravitational field, Goddard Earth Model T-3 (GEM-T3), has been developed from a combination of satellite tracking, satellite altimeter, and surface gravimetric data. GEM-T3 provides a significant improvement in the modeling of the gravity field at half wavelengths of 400 km and longer. This model, complete to degree and order 50, yields more accurate satellite orbits and an improved geoid representation than previous Goddard Earth Models. GEM-T3 uses altimeter data from GEOS 3 (1975-1976), Seasat (1978) and Geosat (1986-1987). Tracking information used in the solution includes more than 1300 arcs of data encompassing 31 different satellites. The recovery of the long-wavelength components of the solution relies mostly on highly precise satellite laser ranging (SLR) data, but also includes Tracking Network (TRANET) Doppler, optical, and satellite-to-satellite tracking acquired between the ATS 6 and GEOS 3 satellites. The main advances over GEM-T2 (beyond the inclusion of altimeter and surface gravity information which is essential for the resolution of the shorter wavelength geoid) are some improved tracking data analysis approaches and additional SLR data. Although the use of altimeter data has greatly enhanced the modeling of the ocean geoid between 65 deg N and 60 deg S latitudes in GEM-T3, the lack of accurate detailed surface gravimetry leaves poor geoid resolution over many continental regions of great tectonic interest (e.g., Himalayas, Andes). Estimates of polar motion, tracking station coordinates, and long-wavelength ocean tidal terms were also made (accounting for 6330 parameters). GEM-T3 has undergone error calibration using a technique based on subset solutions to produce reliable error estimates. The calibration is based on the condition that the expected mean square deviation of a subset gravity solution from the full set values is predicted by the solutions' error covariances. Data weights are iteratively adjusted until this condition for the error calibration is satisfied. In addition, gravity field tests were performed on strong satellite data sets withheld from the solution (thereby ensuring their independence). In these tests, the performance of the subset models on the withheld observations is compared to error projections based on their calibrated error covariances. These results demonstrate that orbit accuracy projections are reliable for new satellites which were not included in GEM-T3.

  6. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.
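
    A schematic sketch of the combination described above is given below, assuming generic forward_project(image, subset) and back_project(values, subset) operators and a 2D floating-point image. The "power factor" here simply raises the multiplicative ordered-subsets correction to an exponent p > 1, and the TV step is a plain gradient-descent smoothing pass; neither is the paper's exact scheme.

    ```python
    # Power-accelerated ordered-subsets update followed by a simple TV descent step.
    import numpy as np

    def os_power_tv(image, data, subsets, forward_project, back_project,
                    n_iter=5, power=1.5, tv_steps=5, tv_weight=0.01):
        """`subsets` is a list of index arrays into the projection data `data`."""
        eps = 1e-8
        for _ in range(n_iter):
            for s in subsets:
                proj = forward_project(image, s)
                ratio = data[s] / (proj + eps)
                correction = back_project(ratio, s) / (back_project(np.ones_like(ratio), s) + eps)
                image *= correction ** power          # power-accelerated OS update
            for _ in range(tv_steps):                  # crude TV regularisation pass
                gx = np.diff(image, axis=0, append=image[-1:, :])
                gy = np.diff(image, axis=1, append=image[:, -1:])
                norm = np.sqrt(gx**2 + gy**2) + eps
                div = (np.diff(gx / norm, axis=0, prepend=(gx / norm)[:1, :]) +
                       np.diff(gy / norm, axis=1, prepend=(gy / norm)[:, :1]))
                image += tv_weight * div               # descend the TV gradient
            np.clip(image, 0, None, out=image)         # enforce non-negativity
        return image
    ```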

  7. Plate-based diversity subset screening: an efficient paradigm for high throughput screening of a large screening file.

    PubMed

    Bell, Andrew S; Bradley, Joseph; Everett, Jeremy R; Knight, Michelle; Loesel, Jens; Mathias, John; McLoughlin, David; Mills, James; Sharp, Robert E; Williams, Christine; Wood, Terence P

    2013-05-01

    The screening files of many large companies, including Pfizer, have grown considerably due to internal chemistry efforts, company mergers and acquisitions, external contracted synthesis, or compound purchase schemes. In order to screen the targets of interest in a cost-effective fashion, we devised an easy-to-assemble, plate-based diversity subset (PBDS) that represents almost the entire computed chemical space of the screening file whilst comprising only a fraction of the plates in the collection. In order to create this file, we developed new design principles for the quality assessment of screening plates: the Rule of 40 (Ro40) and a plate selection process that insured excellent coverage of both library chemistry and legacy chemistry space. This paper describes the rationale, design, construction, and performance of the PBDS, that has evolved into the standard paradigm for singleton (one compound per well) high-throughput screening in Pfizer since its introduction in 2006.

  8. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853

  9. Measurements of Crossflow Instability Modes for HIFiRE 5 at Angle of Attack

    DTIC Science & Technology

    2017-11-15

    temperature sensitive paint (TSP) did not show any vortices in noisy flow, and only revealed vortices in quiet flow for a subset of the Reynolds numbers for...evidence of traveling crossflow waves with a noisy freestream, even though the spectra of the surface pressure signals showed an expected progression...cone ray describing the minor axis, and retains a 2:1 elliptical cross-section to the tip. Figure 1: Photograph of model The model is made of solid 15

  10. Scalable Amplification of Strand Subsets from Chip-Synthesized Oligonucleotide Libraries (Open Access)

    DTIC Science & Technology

    2015-11-16

    detailed discussion of barcode designs in Supplementary Note 1, Supplementary Fig. 1 and sequences in Supplementary Note 2). Whereas the nicking and...eight subpools, each as a one- or as a two-barcode version ( design details in Supplementary Note 1). All subpools amplified strands with the expected...for the c2ca designs . We used the same restriction enzymes (Nb.BsrDI and Nt.BspQI) that were encoded between the primers and the target sequences to

  11. NMR dipolar constants of motion in liquid crystals: Jeener-Broekaert, double quantum coherence experiments and numerical calculation on a 10-spin cluster.

    PubMed

    Segnorile, H H; Bonin, C J; González, C E; Acosta, R H; Zamar, R C

    2009-10-01

    Two proton quasi-equilibrium states were previously observed in nematic liquid crystals, namely the S and W quasi-invariants. Even though the experimental evidence suggested that they originate in a partition of the spin dipolar energy into a strong and a weak part, respectively, from a theoretical viewpoint, the existence of an appropriate energy scale which allows such energy separation remains to be confirmed and a representation of the quasi-invariants is still to be given. We compare the dipolar NMR signals yielded both by the Jeener-Broekaert (JB) experiment as a function of the preparation time and the free evolution of the double quantum coherence (DQC) spectra excited from the S state, with numerical calculations carried out from first principles under different models for the dipolar quasi-invariants, in a 10-spin cluster which represents the 5CB (4'-pentyl-4-biphenyl-carbonitrile) molecule. The calculated signals qualitatively agree with the experiments, and the DQC spectra as a function of the single-quantum detection time are sensitive enough to the different models to allow both probing the physical nature of the initial dipolar-ordered state and assigning a subset of dipolar interactions to each constant of motion, compatible with the experiments. As a criterion for selecting a suitable quasi-equilibrium model of the 5CB molecule, we impose on the time evolution operator consistency with the occurrence of two dipolar quasi-invariants, that is, the calculated spectra must be unaffected by truncation of non-secular terms of the weaker dipolar energy. We find that defining the S quasi-invariant as the subset of the dipolar interactions of each proton with its two nearest neighbours yields a realistic characterization of the dipolar constants of motion in 5CB. We conclude that the proton-spin system of the 5CB molecule admits a partition of the dipolar energy into bilinear strong and multiple-spin weak contributions, therefore providing two orthogonal constants of motion, which can be prepared and observed by means of the JB experiment. This feature, which implies the existence of two timescales of very different nature in the proton-spin dynamics, is ultimately dictated by the topology of the spin distribution in the dipole network and can be expected in other liquid crystals. Knowledge of the nature of the dipolar quasi-invariants will be useful in studies of dipolar-order relaxation, decoherence and multiple quantum NMR experiments where the initial state is a dipolar-ordered one.

  12. Evaluating physical habitat and water chemistry data from statewide stream monitoring programs to establish least-impacted conditions in Washington State

    USGS Publications Warehouse

    Wilmoth, Siri K.; Irvine, Kathryn M.; Larson, Chad

    2015-01-01

    Various GIS-generated land-use predictor variables, physical habitat metrics, and water chemistry variables from 75 reference streams and 351 randomly sampled sites throughout Washington State were evaluated for effectiveness at discriminating reference from random sites within level III ecoregions. A combination of multivariate clustering and ordination techniques was used. We describe average observed conditions for a subset of predictor variables and propose statistical criteria for establishing reference conditions for stream habitat in Washington. Using these criteria, we determined whether any of the random sites met expectations for reference condition and whether any of the established reference sites failed to meet expectations for reference condition. Establishing these criteria will set a benchmark against which future data will be compared.

  13. Do employee health management programs work?

    PubMed

    Serxner, Seth; Gold, Daniel; Meraz, Angela; Gray, Ann

    2009-01-01

    The current peer-reviewed literature clearly documents the economic return and return on investment (ROI) for employee health management (EHM) programs. These EHM programs are defined as health promotion, self-care, disease management, and case management programs. The evaluation literature for the subset of health promotion and disease management programs is examined in this article for specific evidence of the level of economic return in medical benefit cost reduction or avoidance. The article identifies the methodological challenges associated with determining economic return for EHM programs and summarizes the findings from 23 articles that included 120 peer-reviewed study results. The article identifies the average ROI and percent health plan cost impact to be expected for both types of EHM programs, the expected time period for their occurrence, and caveats related to their measurement.

  14. Restricted numerical range: A versatile tool in the theory of quantum information

    NASA Astrophysics Data System (ADS)

    Gawron, Piotr; Puchała, Zbigniew; Miszczak, Jarosław Adam; Skowronek, Łukasz; Życzkowski, Karol

    2010-10-01

    Numerical range of a Hermitian operator X is defined as the set of all possible expectation values of this observable over normalized quantum states. We analyze a modification of this definition in which the expectation value is taken over a certain subset of the set of all quantum states. One considers, for instance, the set of real states, the set of product states, separable states, or the set of maximally entangled states. We show exemplary applications of these algebraic tools in the theory of quantum information: analysis of k-positive maps and entanglement witnesses, as well as the study of the minimal output entropy of a quantum channel. Product numerical range of a unitary operator is used to solve the problem of local distinguishability of a family of two unitary gates.

  15. Retrieval-induced forgetting without competition: testing the retrieval specificity assumption of the inhibition theory.

    PubMed

    Raaijmakers, Jeroen G W; Jakab, Emoke

    2012-01-01

    According to the inhibition theory of forgetting (Anderson, Journal of Memory and Language 49:415-445, 2003; Anderson, Bjork, & Bjork, Psychonomic Bulletin & Review 7:522-530, 2000), retrieval practice on a subset of target items leads to forgetting for the other, nontarget items, due to the fact that these other items interfere during the retrieval process and have to be inhibited in order to resolve the interference. In this account, retrieval-induced forgetting occurs only when competition takes place between target and nontarget items during target item practice, since only in such a case is inhibition of the nontarget items necessary. Strengthening of the target item without active retrieval should not lead to such an impairment. In two experiments, we investigated this assumption by using noncompetitive retrieval during the practice phase. We strengthened the cue-target item association during practice by recall of the category name instead of the target item, and thus eliminated competition between the different item types (as in Anderson et al., Psychonomic Bulletin & Review 7:522-530 2000). In contrast to the expectations of the inhibition theory, retrieval-induced forgetting occurred even without competition, and thus the present study does not support the retrieval specificity assumption.

  16. Optimization of oncological ¹⁸F-FDG PET/CT imaging based on a multiparameter analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.

    2016-02-15

    Purpose: This paper describes a method to achieve consistent clinical image quality in ¹⁸F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
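
    For illustration, the coefficient of variation used as the image-quality metric above can be computed from a region of interest as in the small sketch below; the roi_mask (for example, a liver ROI) is a hypothetical input, and the study's count-rate-based prediction curves are not reproduced.

    ```python
    # Coefficient of variation (CV) in a region of interest, as an image-quality metric.
    import numpy as np

    def coefficient_of_variation(image, roi_mask):
        voxels = image[roi_mask]               # voxel values inside the ROI
        return voxels.std() / voxels.mean()    # CV = standard deviation / mean
    ```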

  17. Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su

    2015-10-01

    Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET was used. Spatial resolution of I-124 was measured up to a transverse offset of 50 mm from the center. FBP, 2D ordered subset expectation maximization (OSEM2D), 3D re-projection (3DRP), and maximum a posteriori (MAP) reconstruction methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm³ at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.

  18. Fluids and Materials Science Studies Utilizing the Microgravity-vibration Isolation Mount (MIM)

    NASA Technical Reports Server (NTRS)

    Herring, Rodney; Tryggvason, Bjarni; Duval, Walter

    1998-01-01

    Canada's Microgravity Sciences Program (MSP) is the smallest program of the ISS partners and so can participate in only a few, highly focused projects in order to make a scientific and technological impact. One focused project involves determining the effect of accelerations (g-jitter) on scientific measurements in a microgravity environment utilizing the Microgravity-vibration Isolation Mount (MIM). Many experiments share the common characteristic of having a fluid stage in their process. The quality of experimental measurements has been expected to be affected by g-jitter, which has led the ISS program to include specifications limiting the level of acceleration allowed on a subset of experimental racks. From finite element analysis (FEM), the ISS structure will not be able to meet the acceleration specifications. Therefore, isolation systems are necessary. Fluid science and materials science results show significant sensitivity to g-jitter. The work done to date should be viewed only as a first look at the issue of g-jitter sensitivity. The work should continue with high priority so that the international science community and the ISS program can address the requirement and settle on an agreed overall approach as soon as possible.

  19. Assessment of prostate cancer detection with a visual-search human model observer

    NASA Astrophysics Data System (ADS)

    Sen, Anando; Kalantari, Faraz; Gifford, Howard C.

    2014-03-01

    Early staging of prostate cancer (PC) is a significant challenge, in part because of the small tumor sizes involved. Our long-term goal is to determine realistic diagnostic task performance benchmarks for standard PC imaging with single photon emission computed tomography (SPECT). This paper reports on a localization receiver operating characteristic (LROC) validation study comparing human and model observers. The study made use of a digital anthropomorphic phantom and one-cm tumors within the prostate and pelvic lymph nodes. Uptake values were consistent with data obtained from clinical In-111 ProstaScint scans. The SPECT simulation modeled a parallel-hole imaging geometry with medium-energy collimators. Nonuniform attenuation and distance-dependent detector response were accounted for both in the imaging and the ordered-subset expectation-maximization (OSEM) iterative reconstruction. The observer study made use of 2D slices extracted from reconstructed volumes. All observers were informed about the prostate and nodal locations in an image. Iteration number and the level of postreconstruction smoothing were study parameters. The results show that a visual-search (VS) model observer correlates better with the average detection performance of human observers than does a scanning channelized nonprewhitening (CNPW) model observer.

  20. Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU

    PubMed Central

    Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.

    2013-01-01

    List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data are sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
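
    A CPU-only sketch of the list-mode ordered-subsets update is given below, assuming hypothetical helpers line_forward(image, event) (sum of image values along an event's line of response) and line_backproject(value, event, shape), plus a precomputed sensitivity image; the resolution kernels and GPU parallelism described in the paper are omitted.

    ```python
    # List-mode OSEM sketch: events are split into subsets and each subset
    # contributes one multiplicative image update.
    import numpy as np

    def listmode_osem(events, image_shape, sensitivity, line_forward, line_backproject,
                      n_subsets=10, n_iter=3):
        image = np.ones(image_shape)
        subsets = [events[i::n_subsets] for i in range(n_subsets)]  # interleaved subsets
        eps = 1e-8
        for _ in range(n_iter):
            for subset in subsets:
                back = np.zeros(image_shape)
                for ev in subset:
                    expected = line_forward(image, ev) + eps       # forward projection of one event
                    back += line_backproject(1.0 / expected, ev, image_shape)
                # Sensitivity is divided by the number of subsets to balance the update.
                image *= back / (sensitivity / n_subsets + eps)
        return image
    ```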

  1. Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.

    PubMed

    Bexelius, Tobias; Sohlberg, Antti

    2018-06-01

    Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between the GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was found to be 24 times faster than the multi-threaded CPU version on a standard 128 × 128 matrix size, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an everyday clinical reconstruction tool.

  2. Compositional and enumerative designs for medical language representation.

    PubMed Central

    Rassinoux, A. M.; Miller, R. A.; Baud, R. H.; Scherrer, J. R.

    1997-01-01

    Medical language is in essence highly compositional, allowing complex information to be expressed from more elementary pieces. Embedding the expressive power of medical language into formal systems of representation is recognized in the medical informatics community as a key step towards sharing such information among medical record, decision support, and information retrieval systems. Accordingly, such representation requires managing both the expressiveness of the formalism and its computational tractability, while coping with the level of detail expected by clinical applications. These desiderata can be supported by enumerative as well as compositional approaches, as argued in this paper. These principles have been applied in recasting a frame-based system for general medical findings developed during the 1980s. The new system captures the precise meaning of a subset of over 1500 medical terms for general internal medicine identified from the Quick Medical Reference (QMR) lexicon. In order to evaluate the adequacy of this formal structure in reflecting the deep meaning of the QMR findings, a validation process was implemented. It consists of automatically rebuilding the semantic representation of the QMR findings by analyzing them through the RECIT natural language analyzer, whose semantic components have been adjusted to this frame-based model for the understanding task. PMID:9357700

  3. Compositional and enumerative designs for medical language representation.

    PubMed

    Rassinoux, A M; Miller, R A; Baud, R H; Scherrer, J R

    1997-01-01

    Medical language is in essence highly compositional, allowing complex information to be expressed from more elementary pieces. Embedding the expressive power of medical language into formal systems of representation is recognized in the medical informatics community as a key step towards sharing such information among medical record, decision support, and information retrieval systems. Accordingly, such representation requires managing both the expressiveness of the formalism and its computational tractability, while coping with the level of detail expected by clinical applications. These desiderata can be supported by enumerative as well as compositional approaches, as argued in this paper. These principles have been applied in recasting a frame-based system for general medical findings developed during the 1980s. The new system captures the precise meaning of a subset of over 1500 medical terms for general internal medicine identified from the Quick Medical Reference (QMR) lexicon. In order to evaluate the adequacy of this formal structure in reflecting the deep meaning of the QMR findings, a validation process was implemented. It consists of automatically rebuilding the semantic representation of the QMR findings by analyzing them through the RECIT natural language analyzer, whose semantic components have been adjusted to this frame-based model for the understanding task.

  4. Modeling thermal sensation in a Mediterranean climate—a comparison of linear and ordinal models

    NASA Astrophysics Data System (ADS)

    Pantavou, Katerina; Lykoudis, Spyridon

    2014-08-01

    A simple thermo-physiological model of outdoor thermal sensation, adjusted with psychological factors, is developed with the aim of predicting thermal sensation in Mediterranean climates. Microclimatic measurements, together with interviews on personal and psychological conditions, were carried out in a square, a street canyon and a coastal location of the greater urban area of Athens, Greece. Multiple linear and ordinal regression were applied in order to estimate thermal sensation, using either all the recorded parameters or specific, empirically selected subsets, producing so-called extensive and empirical models, respectively. Meteorological, thermo-physiological and overall models - the latter considering psychological factors as well - were developed. Predictions improved when personal and psychological factors were taken into account, as compared to the meteorological models. The model based on ordinal regression reproduced extreme values of the thermal sensation vote more adequately than the linear regression one, while the empirical model produced satisfactory results in relation to the extensive model. The effects of adaptation and expectation on the thermal sensation vote were introduced in the models by means of exposure time, season and preference related to air temperature and irradiation. The assessment of thermal sensation could be a useful criterion in decision making regarding public health, outdoor space planning and tourism.
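
    The linear-versus-ordinal comparison can be sketched as below. This is a minimal example assuming statsmodels (>= 0.12, for OrderedModel) and a pandas DataFrame with predictor columns and an integer thermal-sensation-vote column named "tsv"; the column names are hypothetical and the paper's exact predictor sets are not reproduced.

    ```python
    # Fit the same predictors with ordinary least squares and with an ordinal
    # (proportional-odds) model, mirroring the linear/ordinal comparison above.
    import statsmodels.api as sm
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    def fit_both(df, predictors, target="tsv"):
        X = df[predictors]
        y = df[target]

        # Linear model with an intercept.
        linear = sm.OLS(y, sm.add_constant(X)).fit()

        # Ordinal logistic model; OrderedModel estimates its own thresholds,
        # so no constant is added to the predictors.
        ordinal = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
        return linear, ordinal
    ```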

  5. Pancreatic cancer early detection: Expanding higher-risk group with clinical and metabolomics parameters

    PubMed Central

    Urayama, Shiro

    2015-01-01

    Pancreatic ductal adenocarcinoma (PDAC) is the fourth and fifth leading cause of cancer death for each gender in developed countries. With no effective treatment or screening scheme available for the general population, the mortality rate is expected to increase over the next several decades, in contrast to other major malignancies such as lung, breast, prostate and colorectal cancers. Endoscopic ultrasound, with the highest detection capacity for smaller pancreatic lesions, is the commonly employed and preferred clinical imaging-based PDAC detection method. Various molecular biomarkers have been investigated for characterization of the disease, but none have been shown to be useful or validated for clinical use in early detection. As seen from studies of a small subset of familial or genetically high-risk PDAC groups, imaging-based screening methods show higher yield and utility in these groups. Multiple recent studies of the unique metabolism of cancers, including PDAC, demonstrate the potential utility of metabolites as discriminant markers for this disease. In order to generate an early PDAC detection screening strategy available for a wider population, we propose to expand the higher-risk PDAC group using a combination of clinical and metabolomics parameters. PMID:25684935

  6. Variance-reduction normalization technique for a compton camera system

    NASA Astrophysics Data System (ADS)

    Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.

    2011-01-01

    For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of detection in the Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiencies of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing these variations. A variance-reduction technique (VRT) for a Compton camera (a normalization method) was studied. For the VRT, Compton list-mode data of a planar uniform 140 keV source were generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered subset expectation maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery of uniformity in the reconstructed image.
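
    A simplified sketch of the product-of-factors normalization is shown below, assuming hypothetical arrays of per-detector efficiencies and geometric factors indexed by the (scatter, absorber) detection-position pair; the variance-reduction estimation of these factors from uniform-source data is not reproduced.

    ```python
    # Build normalization coefficients as a product of factor terms and apply them.
    import numpy as np

    def normalization_coefficients(scatter_eff, absorber_eff, geometric):
        # One coefficient per detected position pair (i, j): product of the
        # scatter-detector efficiency, absorber-detector efficiency, and a
        # geometric/incidence-angle factor.
        return scatter_eff[:, None] * absorber_eff[None, :] * geometric

    def normalize_projections(counts, norm_coeff, eps=1e-12):
        # Divide the measured counts by the normalization coefficients before
        # handing the data to the iterative reconstruction.
        return counts / (norm_coeff + eps)
    ```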

  7. Mixed input to olfactory glomeruli from two subsets of ciliated sensory neurons does not impede relay neuron specificity in the crucian carp.

    PubMed

    Hansson, Kenth-Arne; Døving, Kjell B; Skjeldal, Frode M

    2015-10-01

    The consensus view of olfactory processing is that the axons of receptor-specific primary olfactory sensory neurons (OSNs) converge to a small subset of glomeruli, thus preserving the odour identity before the olfactory information is processed in higher brain centres. In the present study, we show that two different subsets of ciliated OSNs with different odorant specificities converge to the same glomeruli. In order to stain different ciliated OSNs in the crucian carp Carassius carassius we used two different chemical odorants, a bile salt and a purported alarm substance, together with fluorescent dextrans. The dye is transported within the axons and stains glomeruli in the olfactory bulb. Interestingly, the axons from the ciliated OSNs co-converge to the same glomeruli. Despite intermingled innervation of glomeruli, axons and terminal fields from the two different subsets of ciliated OSNs remained mono-coloured. By 4-6 days after staining, the dye was transported trans-synaptically to separately stained axons of relay neurons. These findings demonstrate that specificity of the primary neurons is retained in the olfactory pathways despite mixed innervation of the olfactory glomeruli. The results are discussed in relation to the emerging concepts about non-mammalian glomeruli. © 2015. Published by The Company of Biologists Ltd.

  8. In silico mining and characterization of simple sequence repeats from gilthead sea bream (Sparus aurata) expressed sequence tags (EST-SSRs); PCR amplification, polymorphism evaluation and multiplexing and cross-species assays.

    PubMed

    Vogiatzi, Emmanouella; Lagnel, Jacques; Pakaki, Victoria; Louro, Bruno; Canario, Adelino V M; Reinhardt, Richard; Kotoulas, Georgios; Magoulas, Antonios; Tsigenopoulos, Costas S

    2011-06-01

    We screened for simple sequence repeats (SSRs) found in ESTs derived from an EST-database development project ('Marine Genomics Europe' Network of Excellence). Different motifs of di-, tri-, tetra-, penta- and hexanucleotide SSRs were evaluated for variation in length and position in the expressed sequences, relative abundance and distribution in gilthead sea bream (Sparus aurata). We found 899 ESTs that harbor 997 SSRs (4.94%). On average, one SSR was found per 2.95 kb of EST sequence and the dinucleotide SSRs are the most abundant accounting for 47.6% of the total number. EST-SSRs were used as template for primer design. 664 primer pairs could be successfully identified and a subset of 206 pairs of primers was synthesized, PCR-tested and visualized on ethidium bromide stained agarose gels. The main objective was to further assess the potential of EST-SSRs as informative markers and investigate their cross-species amplification in sixteen teleost fish species: seven sparid species and nine other species from different families. Approximately 78% of the primer pairs gave PCR products of expected size in gilthead sea bream, and as expected, the rate of successful amplification of sea bream EST-SSRs was higher in sparids, lower in other perciforms and even lower in species of the Clupeiform and Gadiform orders. We finally determined the polymorphism and the heterozygosity of 63 markers in a wild gilthead sea bream population; fifty-eight loci were found to be polymorphic with the expected heterozygosity and the number of alleles ranging from 0.089 to 0.946 and from 2 to 27, respectively. These tools and markers are expected to enhance the available genetic linkage map in gilthead sea bream, to assist comparative mapping and genome analyses for this species and further with other model fish species and finally to help advance genetic analysis for cultivated and wild populations and accelerate breeding programs. Copyright © 2011 Elsevier B.V. All rights reserved.
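
    A compact sketch of SSR motif detection with regular expressions is given below: it scans a sequence for di- to hexanucleotide repeats above a minimum repeat count. The repeat-count thresholds are illustrative only, not the exact criteria used in the screening described above.

    ```python
    # Regex-based SSR scan over a nucleotide sequence.
    import re

    def find_ssrs(seq, min_repeats=None):
        # Minimum number of tandem repeats per motif length (illustrative thresholds).
        if min_repeats is None:
            min_repeats = {2: 6, 3: 5, 4: 4, 5: 4, 6: 4}
        hits = []
        for unit_len, min_rep in min_repeats.items():
            # Capture a motif of the given length, then require (min_rep - 1) more copies.
            pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit_len, min_rep - 1))
            for m in pattern.finditer(seq.upper()):
                n_repeats = (m.end() - m.start()) // unit_len
                hits.append((m.group(1), m.start(), m.end(), n_repeats))
        return hits

    # Example: find_ssrs("TTAGAGAGAGAGAGAGCC") reports an AG dinucleotide repeat.
    ```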

  9. A Negative Partition Relation

    PubMed Central

    Hajnal, A.

    1971-01-01

    If the continuum hypothesis is assumed, there is a graph G whose vertices form an ordered set of type ω₁²; G does not contain triangles or complete even graphs of the form [ℵ₀, ℵ₀], and there is no independent subset of vertices of type ω₁². PMID:16591893

  10. MicroRNA-153 Physiologically Inhibits Expression of Amyloid-β Precursor Protein in Cultured Human Fetal Brain Cells and Is Dysregulated in a Subset of Alzheimer Disease Patients*

    PubMed Central

    Long, Justin M.; Ray, Balmiki; Lahiri, Debomoy K.

    2012-01-01

    Regulation of amyloid-β (Aβ) precursor protein (APP) expression is complex. MicroRNAs (miRNAs) are expected to participate in the molecular network that controls this process. The composition of this network is, however, still undefined. Elucidating the complement of miRNAs that regulate APP expression should reveal novel drug targets capable of modulating Aβ production in AD. Here, we investigated the contribution of miR-153 to this regulatory network. A miR-153 target site within the APP 3′-untranslated region (3′-UTR) was predicted by several bioinformatic algorithms. We found that miR-153 significantly reduced reporter expression when co-transfected with an APP 3′-UTR reporter construct. Mutation of the predicted miR-153 target site eliminated this reporter response. miR-153 delivery in both HeLa cells and primary human fetal brain cultures significantly reduced APP expression. Delivery of a miR-153 antisense inhibitor to human fetal brain cultures significantly elevated APP expression. miR-153 delivery also reduced expression of the APP paralog APLP2. High functional redundancy between APP and APLP2 suggests that miR-153 may target biological pathways in which they both function. Interestingly, in a subset of human AD brain specimens with moderate AD pathology, miR-153 levels were reduced. This same subset also exhibited elevated APP levels relative to control specimens. Therefore, endogenous miR-153 inhibits expression of APP in human neurons by specifically interacting with the APP 3′-UTR. This regulatory interaction may have relevance to AD etiology, where low miR-153 levels may drive increased APP expression in a subset of AD patients. PMID:22733824

  11. Innate NKTγδ and NKTαβ cells exert similar functions and compete for a thymic niche.

    PubMed

    Pereira, Pablo; Boucontet, Laurent

    2012-05-01

    The transcriptional regulator promyelocytic leukemia zinc finger (PLZF) is highly expressed during the differentiation of natural killer T (NKT) cells and is essential for the acquisition of their effector/memory innate-like phenotype. Staining with anti-PLZF and anti-NK1.1 Abs allows the definition of two subsets of NKTαβ and NKTγδ thymocytes that differ phenotypically and functionally: a PLZF(+) NK1.1(-) subset composed of mostly quiescent cells that secrete more IL-4 than IFN-γ upon activation and a PLZF(+/-) NK1.1(+) subset that expresses CD127, NK1.1, and other NK-cell markers, secrete more IFN-γ than IL-4 upon activation and contains a sizable fraction of dividing cells. The size of the NK1.1(+) population is very tightly regulated and NK1.1(+) αβ and γδ thymocytes compete for a thymic niche. Furthermore, the relative representation of the PLZF(+) and NK1.1(+) subsets varies in a strain-specific manner with C57BL/6 (B6) mice containing more NK1.1(+) cells and (B6 × DBA/2)F1 (B6D2F1) mice more PLZF(+) cells. Consequently, activation of NKT cells in vivo is expected to result in higher levels of IL-4 secreted in B6D2F1 mice than in B6 mice. Consistent with this possibility, B6D2F1 mice, when compared with B6 mice, contain more "innate" CD8(+) thymocytes, the generation of which depends on IL-4 secreted by NKT cells. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Folate-deficiency induced cell-specific changes in the distribution of lymphocytes and granulocytes in rats.

    PubMed

    Abe, Ikumi; Shirato, Ken; Hashizume, Yoko; Mitsuhashi, Ryosuke; Kobayashi, Ayumu; Shiono, Chikako; Sato, Shogo; Tachiyashiki, Kaoru; Imaizumi, Kazuhiko

    2013-01-01

    Folate (vitamin B9) plays key roles in cell growth and proliferation through regulating the synthesis and stabilization of DNA and RNA, and its deficiency leads to lymphocytopenia and granulocytopenia. However, precisely how folate deficiency affects the distribution of a variety of white blood cell subsets, including the minor population of basophils, and the cell specificity of the effects remain unclear. Therefore, we examined the effects of a folate-deficient diet on the circulating number of lymphocyte subsets [T-lymphocytes, B-lymphocytes, and natural killer (NK) cells] and granulocyte subsets (neutrophils, eosinophils, and basophils) in rats. Rats were divided into two groups, with one receiving the folate-deficient diet (FAD group) and the other a control diet (CON group). All rats were pair-fed for 8 weeks. The plasma folate level was dramatically lower in the FAD group than in the CON group, and the level of homocysteine in the plasma, a predictor of folate deficiency, was significantly higher in the FAD group than in the CON group. The number of T-lymphocytes, B-lymphocytes, and NK cells was significantly lower in the FAD group than in the CON group by 0.73-, 0.49-, and 0.70-fold, respectively, indicating that B-lymphocytes are more sensitive to folate deficiency than the other lymphocyte subsets. As expected, the number of neutrophils and eosinophils was significantly lower in the FAD group than in the CON group. However, the number of basophils, the least common type of granulocyte, transiently showed an increasing tendency in the FAD group as compared with the CON group. These results suggest that folate deficiency induces lymphocytopenia and granulocytopenia in a cell-specific manner.

  13. A Concise and Practical Framework for the Development and Usability Evaluation of Patient Information Websites.

    PubMed

    Peute, L W; Knijnenburg, S L; Kremer, L C; Jaspers, M W M

    2015-01-01

    The Website Developmental Model for the Healthcare Consumer (WDMHC) is an extensive and successfully evaluated framework that incorporates user-centered design principles. However, due to its extensiveness, its application is limited. In the current study we apply a subset of the WDMHC framework in a case study concerning the development and evaluation of a website aimed at childhood cancer survivors (CCS). The aim was to assess whether the implementation of a limited subset of the WDMHC framework is sufficient to deliver a high-quality website with few usability problems, aimed at a specific patient population. The website was developed using a six-step approach divided into three phases derived from the WDMHC: 1) information needs analysis, mock-up creation and focus group discussion; 2) website prototype development; and 3) heuristic evaluation (HE) and think aloud analysis (TA). The HE was performed by three double experts (knowledgeable both in usability engineering and childhood cancer survivorship), who assessed the site using the Nielsen heuristics. Eight end-users were invited to complete three scenarios covering all functionality of the website during TA. The HE and TA were performed concurrently on the website prototype. The HE resulted in 29 unique usability issues; the end-users performing the TA encountered eleven unique problems. Four issues specifically revealed by HE concerned cosmetic design flaws, whereas two problems revealed by TA were related to website content. Based on the subset of the WDMHC framework we were able to deliver a website that closely matched the expectations of the end-users and resulted in relatively few usability problems during end-user testing. With the successful application of this subset of the WDMHC, we provide developers with a clear and easily applicable framework for the development of healthcare websites with high usability aimed at specific medical populations.

  14. Possibility expectation and its decision making algorithm

    NASA Technical Reports Server (NTRS)

    Keller, James M.; Yan, Bolin

    1992-01-01

    The fuzzy integral has been shown to be an effective tool for the aggregation of evidence in decision making. Of primary importance in the development of a fuzzy integral pattern recognition algorithm is the choice (construction) of the measure which embodies the importance of subsets of sources of evidence. Sugeno fuzzy measures have received the most attention due to the recursive nature of the fabrication of the measure on nested sequences of subsets. Possibility measures exhibit an even simpler generation capability, but usually require that one of the sources of information possess complete credibility. In real applications, such normalization may not be possible, or even desirable. In this report, both the theory and a decision making algorithm for a variation of the fuzzy integral are presented. This integral is based on a possibility measure where it is not required that the measure of the universe be unity. A training algorithm for the possibility densities in a pattern recognition application is also presented with the results demonstrated on the shuttle-earth-space training and testing images.
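
    A small sketch of the Sugeno fuzzy integral with respect to a possibility measure is given below. When the measure is generated from (not necessarily normalized) possibility densities, the integral reduces to a weighted maximum, max_i min(h_i, d_i); the variable names are illustrative, and the paper's training of the densities is not reproduced.

    ```python
    # Sugeno fuzzy integral with respect to a possibility measure (weighted maximum).
    def possibility_fuzzy_integral(evidence, densities):
        """evidence[i]: support h(x_i) from source i; densities[i]: its possibility density.

        The densities need not sum to, or reach, 1 (sub-normalized possibility measure).
        """
        return max(min(h, d) for h, d in zip(evidence, densities))

    # Example: three sources supporting a class with confidences 0.7, 0.4, 0.9 and
    # densities 0.6, 1.0, 0.3 give max(min(0.7,0.6), min(0.4,1.0), min(0.9,0.3)) = 0.6.
    print(possibility_fuzzy_integral([0.7, 0.4, 0.9], [0.6, 1.0, 0.3]))
    ```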

  15. Compositional differences between meteorites and near-Earth asteroids.

    PubMed

    Vernazza, P; Binzel, R P; Thomas, C A; DeMeo, F E; Bus, S J; Rivkin, A S; Tokunaga, A T

    2008-08-14

    Understanding the nature and origin of the asteroid population in Earth's vicinity (near-Earth asteroids, and its subset of potentially hazardous asteroids) is a matter of both scientific interest and practical importance. It is generally expected that the compositions of the asteroids that are most likely to hit Earth should reflect those of the most common meteorites. Here we report that most near-Earth asteroids (including the potentially hazardous subset) have spectral properties quantitatively similar to the class of meteorites known as LL chondrites. The prominent Flora family in the inner part of the asteroid belt shares the same spectral properties, suggesting that it is a dominant source of near-Earth asteroids. The observed similarity of near-Earth asteroids to LL chondrites is, however, surprising, as this meteorite class is relatively rare ( approximately 8 per cent of all meteorite falls). One possible explanation is the role of a size-dependent process, such as the Yarkovsky effect, in transporting material from the main belt.

  16. How long will my mouse live? Machine learning approaches for prediction of mouse life span.

    PubMed

    Swindell, William R; Harper, James M; Miller, Richard A

    2008-09-01

    Prediction of individual life span based on characteristics evaluated at middle-age represents a challenging objective for aging research. In this study, we used machine learning algorithms to construct models that predict life span in a stock of genetically heterogeneous mice. Life-span prediction accuracy of 22 algorithms was evaluated using a cross-validation approach, in which models were trained and tested with distinct subsets of data. Using a combination of body weight and T-cell subset measures evaluated before 2 years of age, we show that the life-span quartile to which an individual mouse belongs can be predicted with an accuracy of 35.3% (+/-0.10%). This result provides a new benchmark for the development of life-span-predictive models, but improvement can be expected through identification of new predictor variables and development of computational approaches. Future work in this direction can provide tools for aging research and will shed light on associations between phenotypic traits and longevity.
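
    A minimal cross-validated sketch of lifespan-quartile prediction is shown below, assuming a feature matrix X (e.g. body weight and T-cell subset measures) and a vector of observed life spans; the paper compared 22 algorithms, and the random forest here stands in purely as an illustration.

    ```python
    # Cross-validated prediction of the lifespan quartile from mid-life measurements.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def quartile_accuracy(X, lifespans, cv=5, seed=0):
        # Bin each animal into its lifespan quartile (classes 0..3).
        quartiles = np.digitize(lifespans, np.quantile(lifespans, [0.25, 0.5, 0.75]))
        clf = RandomForestClassifier(n_estimators=200, random_state=seed)
        # Mean accuracy across folds; chance level for four quartiles is 25%.
        return cross_val_score(clf, X, quartiles, cv=cv).mean()
    ```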

  17. Verification of TREX1 as a promising indicator of judging the prognosis of osteosarcoma.

    PubMed

    Feng, Jinyi; Lan, Ruilong; Cai, Guanxiong; Lin, Jinluan; Wang, Xinwen; Lin, Jianhua; Han, Deping

    2016-11-24

    The study aimed to explore the correlation of TREX1 expression with metastasis, survival time, and the biological characteristics of osteosarcoma cells, for use in judging the prognosis of osteosarcoma. The correlation between the expression of TREX1 protein and the occurrence of pulmonary metastasis in 45 cases of osteosarcoma was analyzed. The CD133+ and CD133- cell subsets of osteosarcoma stem cells were sorted by flow cytometry. Tumorsphere culture, clone formation, growth curve, osteogenic and adipogenic differentiation, tumor-forming ability in nude mice, sensitivity to chemotherapeutic drugs, and other cell biology behaviors were compared between the two cell subsets; the expressions of the stem cell-related genes Nanog and Oct4 were compared; and the expressions of TREX1 protein and mRNA were compared between the two cell subsets. The data were statistically analyzed. Measurement data between the two groups were compared using the t test. Count data between the two groups were compared using the χ² test and Kaplan-Meier survival analysis. A P value <0.05 indicated that the difference was statistically significant. The expression of TREX1 protein in patients with osteosarcoma in the metastasis group was significantly lower than that in the non-metastasis group (P < 0.05). Up to the last follow-up visit, the average survival time of the metastasis group was significantly shorter than that of the non-metastasis group (P < 0.05). The expression of TREX1 in human osteosarcoma CD133+ cell subsets was significantly lower than that in CD133- cell subsets. The stemness-related genes Nanog and Oct4 were highly expressed in human osteosarcoma CD133+ cell subsets with lower expression of TREX1. The biological characterization experiments showed that human CD133+ cell subsets with low TREX1 expression could form tumorspheres, formed more colonies, had stronger cell proliferation ability, had greater osteogenic and adipogenic differentiation potential, had stronger tumor-forming ability in nude mice, and were less sensitive to the chemotherapeutic drug cisplatin. The expression of TREX1 may be related to metastasis in patients with osteosarcoma and is closely related to the cell biology characteristics of osteosarcoma stem cells. TREX1 may play an important role in the occurrence and development of osteosarcoma and is expected to become an effective new index for evaluating its prognosis.
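
    The Kaplan-Meier / log-rank comparison described above can be sketched with the third-party lifelines package, assuming numpy arrays of follow-up times, event indicators, and a boolean mask for TREX1-low tumours; this is a generic survival-comparison sketch, not the study's data or code.

    ```python
    # Kaplan-Meier curves and a log-rank test between TREX1-low and TREX1-high groups.
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    def compare_survival(times, events, low_trex1):
        kmf = KaplanMeierFitter()
        for label, mask in (("TREX1 low", low_trex1), ("TREX1 high", ~low_trex1)):
            kmf.fit(times[mask], events[mask], label=label)
            print(label, "median survival:", kmf.median_survival_time_)
        # Log-rank test for a difference between the two survival curves.
        result = logrank_test(times[low_trex1], times[~low_trex1],
                              event_observed_A=events[low_trex1],
                              event_observed_B=events[~low_trex1])
        return result.p_value
    ```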

  18. Comparison of the Functional microRNA Expression in Immune Cell Subsets of Neonates and Adults

    PubMed Central

    Yu, Hong-Ren; Hsu, Te-Yao; Huang, Hsin-Chun; Kuo, Ho-Chang; Li, Sung-Chou; Yang, Kuender D.; Hsieh, Kai-Sheng

    2016-01-01

    Diversity of biological molecules in newborn and adult immune cells contributes to differences in cell function and atopic properties. MicroRNAs (miRNAs) are reported to be involved in the regulation of the immune system. Therefore, determining the miRNA expression profile of leukocyte subpopulations is important for understanding immune system regulation. In order to explore the unique miRNA profiles that contribute to the altered immune response in neonates, we comprehensively analyzed the functional miRNA signatures of eight leukocyte subsets (polymorphonuclear cells, monocytes, CD4+ T cells, CD8+ T cells, natural killer cells, B cells, plasmacytoid dendritic cells, and myeloid dendritic cells) from neonatal umbilical cord and adult peripheral blood samples, respectively. We observed distinct miRNA profiles between adult and neonatal blood leukocyte subsets, including unique miRNA signatures for each cell lineage. Leukocyte miRNA signatures were altered after stimulation. Adult peripheral leukocytes had higher let-7b-5p expression levels compared to neonatal cord leukocytes across multiple subsets, irrespective of stimulation. Transfecting neonatal monocytes with a let-7b-5p mimic resulted in a reduction of LPS-induced interleukin (IL)-6 and TNF-α production, while transfection of a let-7b-5p inhibitor into adult monocytes enhanced IL-6 and TNF-α production. With this functional approach, we provide intact differential miRNA expression profiling of specific immune cell subsets between neonates and adults. These studies serve as a basis to further understand the altered immune response observed in neonates and advance the development of therapeutic strategies. PMID:28066425

  19. Immune reconstitution after allogeneic hematopoietic stem cell transplantation in children: a single institution study of 59 patients.

    PubMed

    Kim, Hyun O; Oh, Hyun Jin; Lee, Jae Wook; Jang, Pil-Sang; Chung, Nack-Gyun; Cho, Bin; Kim, Hack-Ki

    2013-01-01

    Lymphocyte subset recovery is an important factor that determines the success of hematopoietic stem cell transplantation (HSCT). Temporal differences in the recovery of lymphocyte subsets and the factors influencing this recovery are important variables that affect a patient's post-transplant immune reconstitution, and therefore require investigation. The time taken to achieve lymphocyte subset recovery and the factors influencing this recovery were investigated in 59 children who had undergone HSCT at the Department of Pediatrics, The Catholic University of Korea Seoul St. Mary's Hospital, and who had an uneventful follow-up period of at least 1 year. Analyses were carried out at 3 and 12 months post-transplant. An additional study was performed 1 month post-transplant to evaluate natural killer (NK) cell recovery. The impact of pre- and post-transplant variables, including diagnosis of Epstein-Barr virus (EBV) DNAemia post-transplant, on lymphocyte recovery was evaluated. The lymphocyte subsets recovered in the following order: NK cells, cytotoxic T cells, B cells, and helper T cells. At 1 month post-transplant, acute graft-versus-host disease was found to contribute significantly to the delay of CD16(+)/56(+) cell recovery. Younger patients showed delayed recovery of both CD3(+)/CD8(+) and CD19(+) cells. EBV DNAemia had a deleterious impact on the recovery of both CD3(+) and CD3(+)/CD4(+) lymphocytes at 1 year post-transplant. In our pediatric allogeneic HSCT cohort, helper T cells were the last subset to recover. Younger age and EBV DNAemia had a negative impact on the post-transplant recovery of T cells and B cells.

  20. Characterization of Conserved and Nonconserved Imprinted Genes in Swine

    USDA-ARS?s Scientific Manuscript database

    Genomic imprinting results in the silencing of a subset of mammalian alleles due to parent-of-origin inheritance. Due to the nature of their expression patterns they play a critical role in placental and early embryonic development. In order to increase our understanding of imprinted genes specifi...

  1. Let's Do It: Paper Dot Plates Give Numbers Meaning.

    ERIC Educational Resources Information Center

    Thompson, Charles S.; Van de Walle, John

    1980-01-01

    Described are activities using paper plates with dots drawn on them which place a heavy emphasis on matching and ordering sets, on developing mental images of sets, and on perceiving sets of a certain size as composed of smaller subsets. Also suggested are activities involving numerals. (Author/TG)

  2. 76 FR 24363 - Adjudication and Enforcement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-02

    ... proceedings under section 337 of the Tariff Act of 1930. The Supplement to the Strategic Human Capital Plan... in another subset of cases. In order to better allocate its resources, OUII may have to assign... in the Federal Register. The new rule will have no substantive effect on Commission practice in...

  3. Clusters of Colleges and Universities: An Empirically Determined System.

    ERIC Educational Resources Information Center

    Korb, Roslyn

    A technique for classifying higher education institutions was developed in order to identify homogenous subsets of institutions and to compare an institution with its empirically determined peers. The majority of the data were obtained from a 4-year longitudinal file that merged the finance, faculty, enrollment, and institutional characteristics…

  4. Demographic differences in Down syndrome livebirths in the US from 1989 to 2006.

    PubMed

    Egan, James F X; Smith, Kathleen; Timms, Diane; Bolnick, Jay M; Campbell, Winston A; Benn, Peter A

    2011-04-01

    To explore demographic differences in Down syndrome livebirths in the United States. Using National Center for Health Statistics (NCHS) birth certificate data from 1989 to 2006, we analyzed Down syndrome livebirths after correcting for under-reporting. We created six subsets based on maternal age (15-34 and 35-49 years old); US region (Northeast, Midwest, South and West); marital status (married, unmarried); education (≤12 years, ≥13 years); race (white, black); and Hispanic ethnicity (non-Hispanic, Hispanic). We estimated expected Down syndrome livebirths assuming no change in birth certificate reporting. The percentage of expected Down syndrome livebirths actually born was calculated by year. There were 72 613 424 livebirths from 1989 to 2006. There were 122 519 Down syndrome livebirths expected and 65 492 were actually born. The Midwest had the highest percentage of expected Down syndrome livebirths actually born (67.6%); the West had the lowest (44.4%). More expected Down syndrome livebirths were born to women who were 15 to 34 years old (61 vs 43.8%) and to those with ≤12 years of education (60.4 vs 46.9%), white race (56.6 vs 37%), unmarried status (56.0 vs 52.5%), and Hispanic ethnicity (55.0 vs 53.3%). The percentage of expected Down syndrome livebirths actually born varies by demographics. Copyright © 2011 John Wiley & Sons, Ltd.

  5. Comparative Expression Profiling of Distinct T Cell Subsets Undergoing Oxidative Stress

    PubMed Central

    Lichtenfels, Rudolf; Mougiakakos, Dimitrios; Johansson, C. Christian; Dressler, Sven P.; Recktenwald, Christian V.; Kiessling, Rolf; Seliger, Barbara

    2012-01-01

    The clinical outcome of adoptive T cell transfer-based immunotherapies is often limited due to different escape mechanisms established by tumors in order to evade the hosts' immune system. The establishment of an immunosuppressive micromilieu by tumor cells along with distinct subsets of tumor-infiltrating lymphocytes is often associated with oxidative stress that can affect antigen-specific memory/effector cytotoxic T cells thereby substantially reducing their frequency and functional activation. Therefore, protection of tumor-reactive cytotoxic T lymphocytes from oxidative stress may enhance the anti-tumor-directed immune response. In order to better define the key pathways/proteins involved in the response to oxidative stress a comparative 2-DE-based proteome analysis of naïve CD45RA+ and their memory/effector CD45RO+ T cell counterparts in the presence and absence of low dose hydrogen peroxide (H2O2) was performed in this pilot study. Based on the profiling data of these T cell subpopulations under the various conditions, a series of differentially expressed spots were defined, members thereof identified by mass spectrometry and subsequently classified according to their cellular function and localization. Representative targets responding to oxidative stress including proteins involved in signaling pathways, in regulating the cellular redox status as well as in shaping/maintaining the structural cell integrity were independently verified at the transcript and protein level under the same conditions in both T cell subsets. In conclusion the resulting profiling data describe complex, oxidative stress-induced, but not strictly concordant changes within the respective expression profiles of CD45RA+ and CD45RO+ T cells. Some of the differentially expressed genes/proteins might be further exploited as potential targets toward modulating the redox capacity of the distinct lymphocyte subsets thereby providing the basis for further studies aiming at rendering them more resistant to tumor micromilieu-induced oxidative stress. PMID:22911781

  6. SU-D-9A-01: Listmode-Driven Optimal Gating (OG) Respiratory Motion Management: Potential Impact On Quantitative PET Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, K; Hristov, D

    2014-06-01

    Purpose: To evaluate the potential impact of a listmode-driven amplitude-based optimal gating (OG) respiratory motion management technique on quantitative PET imaging. Methods: During the PET acquisitions, an optical camera tracked and recorded the motion of a tool placed on top of the patients' torso. PET event data were utilized to detect and derive a motion signal that is directly coupled with a specific internal organ. A radioactivity trace was generated from listmode data by accumulating all prompt counts in temporal bins matching the sampling rate of the external tracking device. Decay correction for 18F was performed. The image reconstructions using the OG respiratory motion management technique, which uses 35% of the total radioactivity counts within limited motion amplitudes, were performed with the external motion and radioactivity traces separately with ordered subset expectation maximization (OSEM) with 2 iterations and 21 subsets. Standard uptake values (SUVs) in a tumor region were calculated to measure the effect of using the radioactivity trace for motion compensation. A motion-blurred 3D static PET image was also reconstructed with all counts, and the SUVs derived from OG images were compared with SUVs from the 3D images. Results: A 5.7% increase of the maximum SUV in the lesion was found for optimal gating image reconstruction with the radioactivity trace when compared to a static 3D image. The mean and maximum SUVs on the image reconstructed with the radioactivity trace were comparable (0.4% and 4.5% increase, respectively) to the values derived from the image reconstructed with the external trace. Conclusion: The image reconstructed using the radioactivity trace showed that the blurring due to motion was reduced, with impact on the derived SUVs. The resolution and contrast of the images reconstructed with the radioactivity trace were comparable to those of the images reconstructed with the external respiratory traces. Research supported by Siemens.
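
    The amplitude-based optimal gating step described above keeps only the events whose respiratory amplitude falls inside the narrowest window still containing a fixed fraction (35% in this abstract) of the total counts. Below is a minimal sketch of that selection step; the function name, the synthetic trace and the per-sample treatment of counts are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def optimal_gating_window(amplitude, duty_cycle=0.35):
        """Find the narrowest amplitude window that still contains `duty_cycle`
        of all samples (a sketch of amplitude-based optimal gating)."""
        a = np.sort(np.asarray(amplitude, dtype=float))
        n = len(a)
        k = max(1, int(np.ceil(duty_cycle * n)))    # samples that must stay inside
        widths = a[k - 1:] - a[:n - k + 1]          # width of every k-sample window
        i = int(np.argmin(widths))                  # narrowest such window
        return a[i], a[i + k - 1]                   # lower and upper amplitude gate

    # usage: accept only list-mode events acquired while the trace is inside the gate
    trace = np.cumsum(np.random.default_rng(0).normal(size=10_000))  # fake motion trace
    lo, hi = optimal_gating_window(trace, 0.35)
    accepted = (trace >= lo) & (trace <= hi)
    print(f"gate [{lo:.2f}, {hi:.2f}] keeps {accepted.mean():.0%} of samples")
    ```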

  7. Determining the Minimal Required Radioactivity of 18F-FDG for Reliable Semiquantification in PET/CT Imaging: A Phantom Study.

    PubMed

    Chen, Ming-Kai; Menard, David H; Cheng, David W

    2016-03-01

    In pursuit of as-low-as-reasonably-achievable (ALARA) doses, this study investigated the minimal required radioactivity and corresponding imaging time for reliable semiquantification in PET/CT imaging. Using a phantom containing spheres of various diameters (3.4, 2.1, 1.5, 1.2, and 1.0 cm) filled with a fixed (18)F-FDG concentration of 165 kBq/mL and a background concentration of 23.3 kBq/mL, we performed PET/CT at multiple time points over 20 h of radioactive decay. The images were acquired for 10 min at a single bed position for each of 10 half-lives of decay using 3-dimensional list mode and were reconstructed into 1-, 2-, 3-, 4-, 5-, and 10-min acquisitions per bed position using an ordered-subsets expectation maximization algorithm with 24 subsets and 2 iterations and a Gaussian 2-mm filter. SUVmax and SUVavg were measured for each sphere. The minimal required activity (±10%) for precise SUVmax semiquantification in the spheres was 1.8 kBq/mL for an acquisition of 10 min, 3.7 kBq/mL for 3-5 min, 7.9 kBq/mL for 2 min, and 17.4 kBq/mL for 1 min. The minimal required activity concentration-acquisition time product per bed position was 10-15 kBq/mL⋅min for reproducible SUV measurements within the spheres without overestimation. Using the total radioactivity and counting rate from the entire phantom, we found that the minimal required total activity-time product was 17 MBq⋅min and the minimal required counting rate-time product was 100 kcps⋅min. Our phantom study determined a threshold for minimal radioactivity and acquisition time for precise semiquantification in (18)F-FDG PET imaging that can serve as a guide in pursuit of achieving ALARA doses. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
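
    The acceptance criterion above collapses to a single activity concentration-acquisition time product. The small sketch below simply encodes that check; the 10 kBq/mL·min floor is taken from the abstract, and the function name is made up for illustration.

    ```python
    def meets_suv_threshold(activity_kbq_per_ml, acq_time_min, min_product=10.0):
        """Check whether concentration x time reaches the ~10-15 kBq/mL*min product
        reported as necessary for reproducible SUV measurements."""
        product = activity_kbq_per_ml * acq_time_min
        return product, product >= min_product

    print(meets_suv_threshold(3.7, 3))   # ~ (11.1, True)  - within the reported band
    print(meets_suv_threshold(1.8, 3))   # ~ (5.4, False) - would need ~10 min instead
    ```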

  8. Extended B cell phenotype in patients with myalgic encephalomyelitis/chronic fatigue syndrome: a cross‐sectional study

    PubMed Central

    Mensah, F.; Bansal, A.; Berkovitz, S.; Sharma, A.; Reddy, V.; Leandro, M. J.

    2016-01-01

    Summary Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) is a heterogeneous condition of unknown aetiology characterized by multiple symptoms including fatigue, post‐exertional malaise and cognitive impairment, lasting for at least 6 months. Recently, two clinical trials of B cell depletion therapy with rituximab (anti‐CD20) reported convincing improvement in symptoms. A possible but undefined role for B cells has therefore been proposed. Studies of the relative percentages of B cell subsets in patients with ME/CFS have not revealed any reproducible differences from healthy controls (HC). In order to explore whether more subtle alterations in B cell subsets related to B cell differentiation exist in ME/CFS patients we used flow cytometry to immunophenotype CD19+ B cells. The panel utilized immunoglobulin (Ig)D, CD27 and CD38 (classical B cell subsets) together with additional markers. A total of 38 patients fulfilling Canadian, Centre for Disease Control and Fukuda ME/CFS criteria and 32 age‐ and sex‐matched HC were included. We found no difference in percentages of classical subsets between ME/CFS patients and HC. However, we observed an increase in frequency (P < 0·01) and expression (MFI; P = 0·03) of CD24 on total B cells, confined to IgD+ subsets. Within memory subsets, a higher frequency of CD21+CD38– B cells (>20%) was associated with the presence of ME/CFS [odds ratio: 3·47 (1·15–10·46); P = 0·03] compared with HC, and there was a negative correlation with disease duration. In conclusion, we identified possible changes in B cell phenotype in patients with ME/CFS. These may reflect altered B cell function and, if confirmed in other patient cohorts, could provide a platform for studies based on clinical course or responsiveness to rituximab therapy. PMID:26646713

  9. On the reduced dynamics of a subset of interacting bosonic particles

    NASA Astrophysics Data System (ADS)

    Gessner, Manuel; Buchleitner, Andreas

    2018-03-01

    The quantum dynamics of a subset of interacting bosons in a subspace of fixed particle number is described in terms of symmetrized many-particle states. A suitable partial trace operation over the von Neumann equation of an N-particle system produces a hierarchical expansion for the subdynamics of M ≤ N particles. Truncating this hierarchy with a pure product state ansatz yields the general, nonlinear coherent mean-field equation of motion. In the special case of a contact interaction potential, this reproduces the Gross-Pitaevskii equation. To account for incoherent effects on top of the mean-field evolution, we discuss possible extensions towards a second-order perturbation theory that accounts for interaction-induced decoherence in form of a nonlinear Lindblad-type master equation.
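
    For reference, the contact-interaction limit mentioned at the end of the abstract leads to the familiar Gross-Pitaevskii form. The equation below is the standard textbook expression (condensate wavefunction ψ, external potential V, coupling constant g set by the s-wave scattering length a_s), not a formula quoted from the paper itself.

    ```latex
    i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
      = \left[-\frac{\hbar^{2}}{2m}\nabla^{2}
              + V(\mathbf{r})
              + g\,\lvert\psi(\mathbf{r},t)\rvert^{2}\right]\psi(\mathbf{r},t),
    \qquad
    g = \frac{4\pi\hbar^{2} a_{s}}{m}.
    ```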

  10. Flexible ordering of antibody class switch and V(D)J joining during B-cell ontogeny

    PubMed Central

    Kumar, Satyendra; Wuerffel, Robert; Achour, Ikbel; Lajoie, Bryan; Sen, Ranjan; Dekker, Job; Feeney, Ann J.; Kenter, Amy L.

    2013-01-01

    V(D)J joining is mediated by RAG recombinase during early B-lymphocyte development in the bone marrow (BM). Activation-induced deaminase initiates isotype switching in mature B cells of secondary lymphoid structures. Previous studies questioned the strict ontological partitioning of these processes. We show that pro-B cells undergo robust switching to a subset of immunoglobulin H (IgH) isotypes. Chromatin studies reveal that in pro-B cells, the spatial organization of the Igh locus may restrict switching to this subset of isotypes. We demonstrate that in the BM, V(D)J joining and switching are interchangeably inducible, providing an explanation for the hyper-IgE phenotype of Omenn syndrome. PMID:24240234

  11. Replica amplification of nucleic acid arrays

    DOEpatents

    Church, George M.

    2002-01-01

    A method of producing a plurality of a nucleic acid array, comprising, in order, the steps of amplifying in situ nucleic acid molecules of a first randomly-patterned, immobilized nucleic acid array comprising a heterogeneous pool of nucleic acid molecules affixed to a support, transferring at least a subset of the nucleic acid molecules produced by such amplifying to a second support, and affixing the subset so transferred to the second support to form a second randomly-patterned, immobilized nucleic acid array, wherein the nucleic acid molecules of the second array occupy positions that correspond to those of the nucleic acid molecules from which they were amplified on the first array, so that the first array serves as a template to produce a plurality, is disclosed.

  12. Birth order, self-concept, and participation in dangerous sports.

    PubMed

    Seff, M A; Gecas, V; Frey, J H

    1993-03-01

    We examined the effect of birth order on participation in dangerous sports, using data from a mail survey of 841 members of the United States Parachute Association drawn from the membership list of over 18,000; 52% (N = 436) responded. The questionnaires included detailed information on participation in leisure activities, background characteristics, reasons for parachuting, and self-concept; answers were obtained from an overwhelmingly middle-class, White, male, young, and college educated sample. The findings (based on descriptive statistics, correlations, and regression analysis) did not support our expectations regarding birth order and participation in dangerous sports. Several were even in opposite direction to our expectations. We did find some support for our expectation that self-efficacy would be positively related to participation in dangerous sports, but not for our expectation that self-efficacy would be related to birth order. We concluded that birth order continues to be a frustrating variable in studies of socialization.

  13. Feature relevance assessment for the semantic interpretation of 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Jutzi, B.; Mallet, C.

    2013-10-01

    The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for a lack of knowledge, a crucial task such as scene interpretation can be carried out with only a few versatile features and even improved accuracy.
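
    The classifier-independent ranking sketched in the abstract amounts to scoring every feature under several relevance criteria and aggregating the per-criterion ranks. The snippet below illustrates that idea with only two stand-in criteria (mutual information and an F-test) instead of the paper's seven strategies; the data, names and choice of criteria are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.feature_selection import f_classif, mutual_info_classif

    def rank_features(X, y):
        """Rank features by averaging their ranks under several relevance scores."""
        scores = [mutual_info_classif(X, y, random_state=0), f_classif(X, y)[0]]
        ranks = [np.argsort(np.argsort(s)) for s in scores]    # 0 = least relevant
        return np.argsort(np.mean(ranks, axis=0))[::-1]        # most relevant first

    # usage: 21 features as in the paper, labels driven by features 0 and 3 only
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 21))
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)
    print(rank_features(X, y)[:5])   # features 0 and 3 should rank near the top
    ```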

  14. A protective role of murine langerin+ cells in immune responses to cutaneous vaccination with microneedle patches

    PubMed Central

    Pulit-Penaloza, Joanna A.; Esser, E. Stein; Vassilieva, Elena V.; Lee, Jeong Woo; Taherbhai, Misha T.; Pollack, Brian P.; Prausnitz, Mark R.; Compans, Richard W.; Skountzou, Ioanna

    2014-01-01

    Cutaneous vaccination with microneedle patches offers several advantages over more frequently used approaches for vaccine delivery, including improved protective immunity. However, the involvement of specific APC subsets and their contribution to the induction of immunity following cutaneous vaccine delivery is not well understood. A better understanding of the functions of individual APC subsets in the skin will allow us to target specific skin cell populations in order to further enhance vaccine efficacy. Here we use a Langerin-EGFP-DTR knock-in mouse model to determine the contribution of langerin+ subsets of skin APCs in the induction of adaptive immune responses following cutaneous microneedle delivery of influenza vaccine. Depletion of langerin+ cells prior to vaccination resulted in substantial impairment of both Th1 and Th2 responses, and decreased post-challenge survival rates, in mice vaccinated cutaneously but not in those vaccinated via the intramuscular route or in non-depleted control mice. Our results indicate that langerin+ cells contribute significantly to the induction of protective immune responses following cutaneous vaccination with a subunit influenza vaccine. PMID:25130187

  15. Research on allocation efficiency of the daisy chain allocation algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jingping; Zhang, Weiguo

    2013-03-01

    With the improvement of aircraft performance in reliability, maneuverability and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to carry out in the design of the allocation system, but it cannot solve the allocation problem for the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, and its area is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency of the algorithm, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
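
    The "micro-element" idea for measuring the attainable moment subset can be read as covering the moment plane with small cells and summing the area of those cells the allocation can actually reach. The sketch below is a rough illustration of that numerical area computation only; the attainability predicate is a hypothetical placeholder, not the daisy-chain allocator or the genetic-algorithm search from the paper.

    ```python
    import numpy as np

    def area_by_micro_elements(is_attainable, bounds, n=400):
        """Estimate the area of a (possibly non-convex) attainable moment set by
        counting grid cells whose centres satisfy the attainability predicate."""
        (xmin, xmax), (ymin, ymax) = bounds
        cell = ((xmax - xmin) / n) * ((ymax - ymin) / n)
        xs = np.linspace(xmin, xmax, n)
        ys = np.linspace(ymin, ymax, n)
        hits = sum(is_attainable((x, y)) for x in xs for y in ys)
        return hits * cell

    # toy non-convex (L-shaped) region standing in for an attainable moment subset
    l_shape = lambda m: (0 <= m[0] <= 1 and 0 <= m[1] <= 1
                         and not (m[0] > 0.5 and m[1] > 0.5))
    print(area_by_micro_elements(l_shape, ((0, 1), (0, 1))))   # close to 0.75
    ```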

  16. Engineering Design Handbook: Development Guide for Reliability. Part Three. Reliability Prediction

    DTIC Science & Technology

    1976-01-01

    Fragment of the handbook front matter: section listings for IFR and DFR distributions and for Chapter 5, "Some Advanced Mathematical Techniques," together with its list of symbols (sets, events that units are failed or good, s-expected values). The excerpt of the text itself reads: the sample space is all possible values that can arise; each value is called a sample point; there are six sample points in the sample space…

  17. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x − y| is less than the band width W, and zero otherwise. We update previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.

  18. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    PubMed

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
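
    The implication can be checked directly on data: under a one-factor model, the partial correlation of two indicators, controlling for any subset of the other indicators, should keep the sign of the zero-order correlation and not exceed it in magnitude. The snippet below is a minimal simulation of that comparison using plain least squares; it is not the authors' empirical bootstrap test.

    ```python
    import numpy as np

    def partial_corr(x, y, controls):
        """Correlation of x and y after regressing both on the control variables."""
        Z = np.column_stack([np.ones(len(x))] + list(controls))
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        return np.corrcoef(rx, ry)[0, 1]

    # simulate a unidimensional factor model with four indicators
    rng = np.random.default_rng(1)
    factor = rng.normal(size=5000)
    items = [0.8 * factor + rng.normal(size=5000) for _ in range(4)]

    r_zero = np.corrcoef(items[0], items[1])[0, 1]
    r_part = partial_corr(items[0], items[1], items[2:])
    print(f"zero-order r = {r_zero:.3f}, partial r = {r_part:.3f}")
    # expected under the model: 0 < partial r < zero-order r
    ```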

  19. Relationships between treated hypertension and subsequent mortality in an insured population.

    PubMed

    Ivanovic, Brian; Cumming, Marianne E; Pinkham, C Allen

    2004-01-01

    To investigate if a mortality differential exists between insurance policyholders with treated hypertension and policyholders who are not under such treatment, where both groups are noted to have the same blood pressure at the time of policy issue. Hypertension is a known mortality risk factor in the insured and general population. Treatment for hypertension is very common in the insured population, especially as age increases. At the time of insurance application, a subset of individuals with treated hypertension will have blood pressures that are effectively controlled and are in the normal range. These individuals often meet established preferred underwriting criteria for blood pressure. In some life insurance companies, they may be offered insurance at the same rates as individuals who are not hypertensive with the same blood pressure. Such companies make the assumption that the pharmacologically induced normotensive state confers no excess risk relative to the natural normotensive state. Given the potential pricing implications of this decision, we undertook an investigation to test this hypothesis. We studied internal data on direct and reinsurance business between 1975 and 2001 followed through anniversaries in 2002 or prior termination with an average duration of 5.2 years per policy. Actual-to-expected analyses and Cox proportional hazards models were used to assess if a mortality differential existed between policyholders coded for hypertension and policyholders with the same blood pressure that were not coded as hypertensive. Eight thousand six hundred forty-seven deaths were observed during follow-up in the standard or preferred policy cohort. Within the same blood pressure category, mortality was higher in policyholders identified as treated hypertensives compared with those in the subset of individuals who were not coded for hypertension. This finding was present in males and females and persisted across age groups in almost all age-gender-smoking status subsets examined. The differential in mortality was 125% to 160% of standard mortality based on the ratio of actual-to-expected claims. In this insured cohort, a designation of treated hypertension is associated with increased relative mortality compared to life insurance policyholders not so coded.

  20. Voxels in the Brain: Neuroscience, Informatics and Changing Notions of Objectivity.

    ERIC Educational Resources Information Center

    Beaulieu, Anne

    2001-01-01

    Examines a subset of tools (atlases of the brain) developed in the Human Brain Project (HBP) in order to understand how the use of these tools changes the practice of science. Discusses the redefinition of what constitutes 'objective' neuroscientific knowledge according to both technological possibilities built into these tools and the constraints…

  1. A Multi-Institutional Investigation of Students' Preinstructional Ideas about Cosmology

    ERIC Educational Resources Information Center

    Bailey, Janelle M.; Sanchez, Roxanne; Coble, Kim; Larrieu, Donna; Cochran, Geraldine; Cominsky, Lynn R.

    2012-01-01

    In order to improve instruction in introductory astronomy, we are investigating students' preinstructional ideas about a number of cosmology topics. This article describes one aspect of this large research study in which 1270 students responded to a subset of three questions each from a larger set of questions about the following areas: definition…

  2. Caffeine Expectancy Questionnaire (CaffEQ): construction, psychometric properties, and associations with caffeine use, caffeine dependence, and other related variables.

    PubMed

    Huntley, Edward D; Juliano, Laura M

    2012-09-01

    Expectancies for drug effects predict drug initiation, use, cessation, and relapse, and may play a causal role in drug effects (i.e., placebo effects). Surprisingly little is known about expectancies for caffeine even though it is the most widely used psychoactive drug in the world. In a series of independent studies, the nature and scope of caffeine expectancies among caffeine consumers and nonconsumers were assessed, and a comprehensive and psychometrically sound Caffeine Expectancy Questionnaire (CaffEQ) was developed. After 2 preliminary studies, the CaffEQ was administered to 1,046 individuals from the general population along with other measures of interest (e.g., caffeine use history, anxiety). Exploratory factor analysis of the CaffEQ yielded a 7-factor solution. Subsequently, an independent sample of 665 individuals completed the CaffEQ and other measures, and a subset (n = 440) completed the CaffEQ again approximately 2 weeks later. Confirmatory factor analysis revealed good model fit, and test-retest reliability was very good. The frequency and quantity of caffeine use were associated with greater expectancies for withdrawal/dependence, energy/work enhancement, appetite suppression, social/mood enhancement, and physical performance enhancement and lower expectancies for anxiety/negative physical effects and sleep disturbance. Caffeine expectancies predicted various caffeine-associated features of substance dependence (e.g., use despite harm, withdrawal incidence and severity, perceived difficulty stopping use, tolerance). Expectancies for caffeine consumed via coffee were stronger than for caffeine consumed via soft drinks or tea. The CaffEQ should facilitate the advancement of our knowledge of caffeine and drug use in general. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  3. Heterogeneity in Neutrophil Microparticles Reveals Distinct Proteome and Functional Properties*

    PubMed Central

    Dalli, Jesmond; Montero-Melendez, Trinidad; Norling, Lucy V; Yin, Xiaoke; Hinds, Charles; Haskard, Dorian; Mayr, Manuel; Perretti, Mauro

    2013-01-01

    Altered plasma neutrophil microparticle levels have recently been implicated in a number of vascular and inflammatory diseases, yet our understanding of their actions is very limited. Herein, we investigate the proteome of neutrophil microparticles in order to shed light on their biological actions. Stimulation of human neutrophils, either in suspension or adherent to an endothelial monolayer, led to the production of microparticles containing >400 distinct proteins with only 223 being shared by the two subsets. For instance, postadherent microparticles were enriched in alpha-2 macroglobulin and ceruloplasmin, whereas microparticles produced by neutrophils in suspension were abundant in heat shock 70 kDa protein 1. Annexin A1 and lactotransferrin were expressed in both microparticle subsets. We next determined the relative abundance of these proteins in three types of human microparticle samples: healthy volunteer plasma, plasma of septic patients and skin blister exudates, finding that these proteins were differentially expressed on neutrophil microparticles from these samples, reflecting in part the expression profiles we found in vitro. Functional assessment of the neutrophil microparticle subsets demonstrated that in response to direct stimulation neutrophil microparticles produced reactive oxygen species and leukotriene B4 as well as locomoted toward a chemotactic gradient. Finally, we investigated the actions of the two neutrophil microparticle subsets described herein on target cell responses. Microarray analysis with human primary endothelial cells incubated with either microparticle subset revealed a discrete modulation of the endothelial cell gene expression profile. These findings demonstrate that neutrophil microparticles are heterogeneous and can deliver packaged information propagating the activation status of the parent cell, potentially exerting novel and fundamental roles both under homeostatic and disease conditions. PMID:23660474

  4. Lupus erythematosus tumidus: a series of 26 cases.

    PubMed

    Vieira, Vanessa; Del Pozo, Jesús; Yebra-Pimentel, Maria Teresa; Martínez, Walter; Fonseca, Eduardo

    2006-05-01

    To study 26 cases of lupus erythematosus tumidus (LET), a subset of chronic cutaneous lupus erythematosus (CCLE), referred to in the literature as a rare entity. A retrospective study was conducted of 26 patients diagnosed with LET between 1996 and 2002. The clinical characteristics, histopathologic and laboratory findings, response to treatment, association with other subsets of lupus, course, and diagnostic criteria were analyzed. The incidence by sex was similar. The mean age of presentation was 49.19 years. The clinical presentation usually involved erythematous, edematous plaques located on the face, chest, back, or extremities, related to sun exposure. A dermal lymphocytic infiltrate with a perivascular disposition and differing degrees of mucin deposition was observed in all cases. Minimal epidermal changes were present in 18 cases, and 11 of these also showed minimal dermal-epidermal changes. Only one case showed dermal-epidermal changes without any epidermal alteration. Direct immunofluorescence test was performed in 15 patients, and 11 were negative. All cases showed a benign course without systemic manifestations. The response to topical steroids or antimalarial treatment was excellent, but a seasonal recurrence was usually observed. No defined criteria for LET are universally accepted. The main controversies are the acceptance of LET as a separate subset of CCLE, and the histopathologic diagnostic features, mainly the presence or absence of epidermal and dermal-epidermal changes in these lesions. No inflexible histologic criteria should be employed for the diagnosis of LET. This subset of lupus erythematosus is characterized by intense photosensitivity, definite clinical lesions, a benign course, the absence of systemic disease, good response to antimalarial treatment, and a tendency to recur. More studies should be performed in order to establish the true incidence of LET because this subset of CCLE is probably underestimated.

  5. Comparative analysis of QSAR models for predicting pK(a) of organic oxygen acids and nitrogen bases from molecular structure.

    PubMed

    Yu, Haiying; Kühne, Ralph; Ebert, Ralf-Uwe; Schüürmann, Gerrit

    2010-11-22

    For 1143 organic compounds comprising 580 oxygen acids and 563 nitrogen bases that cover more than 17 orders of experimental pK(a) (from -5.00 to 12.23), the pK(a) prediction performances of ACD, SPARC, and two calibrations of a semiempirical quantum chemical (QC) AM1 approach have been analyzed. The overall root-mean-square errors (rms) for the acids are 0.41, 0.58 (0.42 without ortho-substituted phenols with intramolecular H-bonding), and 0.55 and for the bases are 0.65, 0.70, 1.17, and 1.27 for ACD, SPARC, and both QC methods, respectively. Method-specific performances are discussed in detail for six acid subsets (phenols and aromatic and aliphatic carboxylic acids with different substitution patterns) and nine base subsets (anilines, primary, secondary and tertiary amines, meta/para-substituted and ortho-substituted pyridines, pyrimidines, imidazoles, and quinolines). The results demonstrate an overall better performance for acids than for bases but also a substantial variation across subsets. For the overall best-performing ACD, rms ranges from 0.12 to 1.11 and 0.40 to 1.21 pK(a) units for the acid and base subsets, respectively. With regard to the squared correlation coefficient r², the results are 0.86 to 0.96 (acids) and 0.79 to 0.95 (bases) for ACD, 0.77 to 0.95 (acids) and 0.85 to 0.97 (bases) for SPARC, and 0.64 to 0.87 (acids) and 0.43 to 0.83 (bases) for the QC methods, respectively. Attention is paid to structural and method-specific causes for observed pitfalls. The significant subset dependence of the prediction performances suggests a consensus modeling approach.

  6. OPTIMAL NETWORK TOPOLOGY DESIGN

    NASA Technical Reports Server (NTRS)

    Yuen, J. H.

    1994-01-01

    This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
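
    The core search described above - enumerate candidate link subsets in increasing order of total cost and stop at the first subset that forms an acceptable network - is easy to sketch when the component count is small. The connectivity test below is the simplest possible acceptance check and the data are invented; it is not a transcription of the original PASCAL program.

    ```python
    from itertools import combinations

    def first_acceptable_network(links, stations):
        """links: list of (cost, station_a, station_b).  Try link subsets in order
        of increasing total cost; return the first one connecting all stations."""
        all_subsets = (c for r in range(1, len(links) + 1)
                       for c in combinations(links, r))
        for subset in sorted(all_subsets, key=lambda s: sum(l[0] for l in s)):
            adj = {s: set() for s in stations}
            for _, a, b in subset:
                adj[a].add(b)
                adj[b].add(a)
            seen, stack = set(), [stations[0]]
            while stack:                  # acceptance test: every station reachable
                s = stack.pop()
                if s not in seen:
                    seen.add(s)
                    stack.extend(adj[s] - seen)
            if seen == set(stations):
                return subset

    links = [(1, "A", "B"), (2, "B", "C"), (2, "A", "C"), (5, "C", "D"), (4, "B", "D")]
    print(first_acceptable_network(links, ["A", "B", "C", "D"]))
    # cheapest connected design here: ((1, 'A', 'B'), (2, 'B', 'C'), (4, 'B', 'D'))
    ```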

  7. Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data

    NASA Astrophysics Data System (ADS)

    Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.

    2013-01-01

    A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated in different steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program as a command line, a user friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times as fast. In addition, Maximum Likelihood Expectation Maximization (MLEM) and its accelerated version Ordered Subsets Expectation Maximization (OSEM) algorithms were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using low number of projections or limited angle.
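
    Because MLEM and OSEM are the algorithms at the heart of this package, a compact sketch of the multiplicative update may be useful. The system matrix, sinogram and subset partition below are toy placeholders, and real STIMT/PIXET reconstructions add normalization, attenuation and other corrections that are omitted here.

    ```python
    import numpy as np

    def osem(y, A, n_subsets=4, n_iter=10, eps=1e-12):
        """Ordered Subsets Expectation Maximization for y ~ Poisson(A @ x).
        With n_subsets=1 this reduces to plain MLEM."""
        n_bins, n_vox = A.shape
        x = np.ones(n_vox)
        subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
        for _ in range(n_iter):
            for idx in subsets:
                As = A[idx]                        # projection rows of this subset
                sens = As.sum(axis=0) + eps        # subset sensitivity image
                ratio = y[idx] / (As @ x + eps)    # measured / forward-projected
                x *= (As.T @ ratio) / sens         # multiplicative EM update
        return x

    # toy problem: recover a 2-voxel "image" from a 6-bin "sinogram"
    rng = np.random.default_rng(0)
    A = rng.random((6, 2))
    x_true = np.array([3.0, 7.0])
    y = rng.poisson(A @ x_true).astype(float)
    print(osem(y, A, n_subsets=2, n_iter=50))      # roughly recovers [3, 7]
    ```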

  8. Dynamics of a reconnection-driven runaway ion tail in a reversed field pinch plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, J. K., E-mail: jkanders@wisc.edu; Kim, J.; Bonofiglo, P. J.

    2016-05-15

    While reconnection-driven ion heating is common in laboratory and astrophysical plasmas, the underlying mechanisms for converting magnetic to kinetic energy remain not fully understood. Reversed field pinch discharges are often characterized by rapid ion heating during impulsive reconnection, generating an ion distribution with an enhanced bulk temperature, mainly perpendicular to magnetic field. In the Madison Symmetric Torus, a subset of discharges with the strongest reconnection events develop a very anisotropic, high energy tail parallel to magnetic field in addition to bulk perpendicular heating, which produces a fusion neutron flux orders of magnitude higher than that expected from a Maxwellian distribution. Here, we demonstrate that two factors in addition to a perpendicular bulk heating mechanism must be considered to explain this distribution. First, ion runaway can occur in the strong parallel-to-B electric field induced by a rapid equilibrium change triggered by reconnection-based relaxation; this effect is particularly strong on perpendicularly heated ions which experience a reduced frictional drag relative to bulk ions. Second, the confinement of ions varies dramatically as a function of velocity. Whereas thermal ions are governed by stochastic diffusion along tearing-altered field lines (and radial diffusion increases with parallel speed), sufficiently energetic ions are well confined, only weakly affected by a stochastic magnetic field. High energy ions traveling mainly in the direction of toroidal plasma current are nearly classically confined, while counter-propagating ions experience an intermediate confinement, greater than that of thermal ions but significantly less than classical expectations. The details of ion confinement tend to reinforce the asymmetric drive of the parallel electric field, resulting in a very asymmetric, anisotropic distribution.

  9. Signalling changes to individuals who show resistance to change can reduce challenging behaviour.

    PubMed

    Bull, Leah E; Oliver, Chris; Woodcock, Kate A

    2017-03-01

    Several neurodevelopmental disorders are associated with resistance to change and challenging behaviours - including temper outbursts - that ensue following changes to routines, plans or expectations (here, collectively: expectations). Here, a change signalling intervention was tested for proof of concept and potential practical effectiveness. Twelve individuals with Prader-Willi syndrome participated in researcher- and caregiver-led pairing of a distinctive visual-verbal signal with subsequent changes to expectations. Specific expectations for a planned subset of five participants were systematically observed in minimally manipulated natural environments. Nine caregivers completed a temper outburst diary during a four week baseline period and a two week signalling evaluation period. Participants demonstrated consistently less temper outburst behaviour in the systematic observations when changes imposed to expectations were signalled, compared to when changes were not signalled. Four of the nine participants whose caregivers completed the behaviour diary demonstrated reliable reductions in temper outbursts between baseline and signalling evaluation. An active control group for the present initial evaluation of the signalling strategy using evidence from caregiver behaviour diaries was outside the scope of the present pilot study. Thus, findings cannot support the clinical efficacy of the present signalling approach. Proof of concept evidence that reliable pairing of a distinctive cue with a subsequent change to expectation can reduce associated challenging behaviour is provided. Data provide additional support for the importance of specific practical steps in further evaluations of the change signalling approach. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Skylab S192 data evaluation: Comparisons with ERTS-1 results. [classification results using ERTS-1 and Skylab MSS data over Holt County, Nebraska agricultural area

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.

    1974-01-01

    The author has identified the following significant results. The Skylab S192 data was evaluated by: (1) comparing the classification results using S192 and ERTS-1 data over the Holt County, Nebraska agricultural study area, and (2) investigating the impact of signal-to-noise ratio on classification accuracies using registered S192 and ERTS-1 data. Results indicate: (1) The classification accuracy obtained on S192 data using its best subset of four bands can be expected to be as high as that on ERTS-1 data. (2) When a subset of four S192 bands that are spectrally similar to the ERTS-1 bands was used for classification, an obvious deterioration in the classification accuracy was observed with respect to the ERTS-1 results. (3) The thermal bands 13 and 14 as well as the near IR bands were found to be relatively important in the classification of agricultural data. Although bands 11 and 12 were highly correlated, both were invariably included in the best subsets of the band sizes, four and beyond, according to the divergence criterion. (4) The differentiation of corn from popcorn was difficult on both S192 and ERTS-1 data acquired at an early summer date. (5) The results on both sets of data indicate that it was relatively easy to differentiate grass from any other class.

  11. Examining the Effects of Students' Classroom Expectations on Undergraduate Biology Course Reform

    ERIC Educational Resources Information Center

    Hall, Kristi Lyn

    2013-01-01

    In this dissertation, I perform and compare three studies of introductory biology students' classroom expectations--what students expect to be the nature of the knowledge that they are learning, what they think they should be (or are) doing in order to learn, and what they think they should be (or are) doing in order to be successful. Previous…

  12. A Novel Protocol for Model Calibration in Biological Wastewater Treatment

    PubMed Central

    Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen

    2015-01-01

    Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis for detecting non-identifiable factors; and iii) formation of a parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be useful for automatic calibration of ASMs and to be potentially applicable to other ordinary differential equation models. PMID:25682959

  13. Severely Impaired Control of Bacterial Infections in a Patient With Cystic Fibrosis Defective in Mucosal-Associated Invariant T Cells.

    PubMed

    Pincikova, Terezia; Paquin-Proulx, Dominic; Moll, Markus; Flodström-Tullberg, Malin; Hjelte, Lena; Sandberg, Johan K

    2018-05-01

    Here we report a unique case of a patient with cystic fibrosis characterized by severely impaired control of bacterial respiratory infections. This patient's susceptibility to such infections was much worse than expected from a cystic fibrosis clinical perspective, and he died at age 22 years despite extensive efforts and massive use of antibiotics. We found that this severe condition was associated with a near-complete deficiency in circulating mucosal-associated invariant T (MAIT) cells as measured at several time points. MAIT cells are a large, recently described subset of T cells that recognize microbial riboflavin metabolites presented by the highly evolutionarily conserved MR1 molecules. The MAIT cell deficiency was specific; other T-cell subsets were intact. Even though this is only one unique case, the findings lend significant support to the emerging role of MAIT cells in mucosal immune defense and suggest that MAIT cells may significantly modify the clinical phenotype of respiratory diseases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  14. Insights into assessing water quality using taxonomic distinctness based on a small species pool of biofilm-dwelling ciliate fauna in coastal waters of the Yellow Sea, northern China.

    PubMed

    Zhang, Wei; Liu, Yuanyuan; Warren, Alan; Xu, Henglong

    2014-12-15

    The aim of this study is to determine the feasibility of using a small species pool from a raw dataset of biofilm-dwelling ciliates for bioassessment based on taxonomic diversity. Samples were collected monthly at four stations within a gradient of environmental stress in coastal waters of the Yellow Sea, northern China from August 2011 to July 2012. A 33-species subset was identified from the raw 137-species dataset using a multivariate method. The spatial patterns of this subset were significantly correlated with the changes in the nutrients and chemical oxygen demand. The taxonomic diversity indices were significantly correlated with nutrients. The pair-wise indices of average taxonomic distinctness (Δ(+)) and the taxonomic distinctness (Λ(+)) showed a clear departure from the expected taxonomic pattern. These findings suggest that this small ciliate assemblage might be used as an adequate species pool for discriminating water quality status based on taxonomic distinctness in marine ecosystems. Copyright © 2014 Elsevier Ltd. All rights reserved.
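
    For reference, average taxonomic distinctness and its companion index are usually defined (following Clarke and Warwick) over all pairs of species present in a sample; the notation below is the standard form, stated here as an assumption about the indices the abstract refers to rather than a formula quoted from this paper.

    ```latex
    \Delta^{+} \;=\; \frac{\sum\sum_{i<j}\,\omega_{ij}}{s(s-1)/2},
    \qquad
    \Lambda^{+} \;=\; \frac{\sum\sum_{i<j}\left(\omega_{ij}-\Delta^{+}\right)^{2}}{s(s-1)/2},
    ```

    where ω_ij is the taxonomic path length between species i and j and s is the number of species in the sample.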

  15. Data approximation using a blending type spline construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalmo, Rune; Bratlie, Jostein

    2014-11-18

    Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is by partitioning the data set into subsets and fitting a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and are used to construct local surface patches, are approximated from the discrete data using finite differences.
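
    The derivative approximation mentioned above, finite differences on regular discrete data, is simple to illustrate. The sketch below uses central differences on a regular 1-D grid; it is a generic example, not the tensor-product GERBS fitting code of the paper.

    ```python
    import numpy as np

    def central_differences(values, h):
        """First-derivative estimates on a regular grid with spacing h.
        Interior points use central differences, the end points one-sided ones."""
        v = np.asarray(values, dtype=float)
        d = np.empty_like(v)
        d[1:-1] = (v[2:] - v[:-2]) / (2.0 * h)
        d[0] = (v[1] - v[0]) / h
        d[-1] = (v[-1] - v[-2]) / h
        return d

    x = np.linspace(0.0, 2.0 * np.pi, 200)
    err = central_differences(np.sin(x), x[1] - x[0]) - np.cos(x)
    print(f"max abs error: {np.max(np.abs(err)):.4f}")   # small except at the ends
    ```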

  16. Leaving Librarianship: A Study of the Determinants and Consequences of Occupational Turnover

    ERIC Educational Resources Information Center

    Rathbun-Grubb, Susan R.

    2009-01-01

    The purpose of this study was to better understand occupational turnover among librarians and archivists by examining the careers of individuals who have left or intend to leave the profession, in order to identify the factors associated with turnover, and to discover the career outcomes of those who leave. The dissertation analyzes a subset of…

  17. Comparison of the Incremental Validity of the Old and New MCAT.

    ERIC Educational Resources Information Center

    Wolf, Fredric M.; And Others

    The predictive and incremental validity of both the Old and New Medical College Admission Test (MCAT) was examined and compared with a sample of over 300 medical students. Results of zero order and incremental validity coefficients, as well as prediction models resulting from all possible subsets regression analyses using Mallow's Cp criterion,…

  18. "My Future Doesn't Know ME": Time and Subjectivity in Poetry by Young People

    ERIC Educational Resources Information Center

    Conrad, Rachel

    2012-01-01

    This article explores children's imaginative representations of time in relation to self-experience. Poems published in a young poets' anthology edited by Naomi Shihab Nye are analyzed in order to discern models of temporality and subjectivity imagined by young writers. A "dynamic temporality" is seen in a subset of poems which manipulate time…

  19. Response Latency as a Function of Hypothesis-Testing Strategies in Concept Identification

    ERIC Educational Resources Information Center

    Fink, Richard T.

    1972-01-01

    The ability of M. Levine's subset-sampling assumptions to account for the decrease in response latency following the trial of the last error was investigated by employing a distributed stimulus set composed of four binary dimensions and a procedure which required Ss to make an overt response in order to sample each dimension. (Author)

  20. Call Me "Madame": Re-Presenting Culture in the French Language Classroom

    ERIC Educational Resources Information Center

    Siskin, H. Jay

    2007-01-01

    This study examines autobiographies of American teachers of French in order to make explicit their beliefs regarding French language and culture. The themes of class and power are prominent in these teachers' belief systems, as is the desire for self-transformation through mastery of French and miming a subset of French behaviors. These notions…

  1. Natural and cross-inducible anti-SIV antibodies in Mauritian cynomolgus macaques

    PubMed Central

    Li, Hongzhao; Nykoluk, Mikaela; Li, Lin; Liu, Lewis R.; Omange, Robert W.; Soule, Geoff; Schroeder, Lukas T.; Toledo, Nikki; Kashem, Mohammad Abul; Correia-Pinto, Jorge F.; Liang, Binhua; Schultz-Darken, Nancy; Alonso, Maria J.; Whitney, James B.; Plummer, Francis A.

    2017-01-01

    Cynomolgus macaques are an increasingly important nonhuman primate model for HIV vaccine research. SIV-free animals without pre-existing anti-SIV immune responses are generally needed to evaluate the effect of vaccine-induced immune responses against the vaccine epitopes. Here, in order to select such animals for vaccine studies, we screened 108 naïve female Mauritian cynomolgus macaques for natural (baseline) antibodies to SIV antigens using a Bio-Plex multiplex system. The antigens included twelve 20mer peptides overlapping the twelve SIV protease cleavage sites (-10/+10), respectively (PCS peptides), and three non-PCS Gag or Env peptides. Natural antibodies to SIV antigens were detected in subsets of monkeys. The antibody reactivity to SIV was further confirmed by Western blot using purified recombinant SIV Gag and Env proteins. As expected, the immunization of monkeys with PCS antigens elicited anti-PCS antibodies. However, unexpectedly, antibodies to non-PCS peptides were also induced, as shown by both Bio-Plex and Western blot analyses, while the non-PCS peptides do not share sequence homology with PCS peptides. The presence of natural and vaccine cross-inducible SIV antibodies in Mauritian cynomolgus macaques should be considered in animal selection, experimental design and result interpretation, for their best use in HIV vaccine research. PMID:28982126

  2. Preclinical evaluation of parametric image reconstruction of [18F]FMISO PET: correlation with ex vivo immunohistochemistry

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaoyin; Bayer, Christine; Maftei, Constantin-Alin; Astner, Sabrina T.; Vaupel, Peter; Ziegler, Sibylle I.; Shi, Kuangyu

    2014-01-01

    Compared to indirect methods, direct parametric image reconstruction (PIR) has the advantage of high quality and low statistical errors. However, it is not yet clear if this improvement in quality is beneficial for physiological quantification. This study aimed to evaluate direct PIR for the quantification of tumor hypoxia using the hypoxic fraction (HF) assessed from immunohistological data as a physiological reference. Sixteen mice with xenografted human squamous cell carcinomas were scanned with dynamic [18F]FMISO PET. Afterward, tumors were sliced and stained with H&E and the hypoxia marker pimonidazole. The hypoxic signal was segmented using k-means clustering and HF was specified as the ratio of the hypoxic area over the viable tumor area. The parametric Patlak slope images were obtained by indirect voxel-wise modeling on reconstructed images using filtered back projection and ordered-subset expectation maximization (OSEM) and by direct PIR (e.g., parametric-OSEM, POSEM). The mean and maximum Patlak slopes of the tumor area were investigated and compared with HF. POSEM resulted in generally higher correlations between slope and HF among the investigated methods. A strategy for the delineation of the hypoxic tumor volume based on thresholding parametric images at half maximum of the slope is recommended based on the results of this study.
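
    As a rough illustration of the indirect route described above, the Patlak slope for a single voxel can be estimated by a linear fit of the normalized tissue activity against the normalized integral of the input function over the late frames. The sketch below is a generic Patlak implementation in Python, not the authors' code; the variable names, frame times, and the 20-minute linearity cutoff are illustrative assumptions.

```python
import numpy as np

def patlak_slope(c_tissue, c_plasma, t, t_start=20.0):
    """Voxel-wise Patlak slope (Ki) from a tissue TAC and a plasma input function.

    c_tissue : activity in one voxel at the frame mid-times t
    c_plasma : plasma (input) activity at the same times
    t_start  : only frames after this time (same units as t) enter the fit,
               where the Patlak plot is assumed to be linear
    """
    c_tissue = np.asarray(c_tissue, dtype=float)
    c_plasma = np.asarray(c_plasma, dtype=float)
    t = np.asarray(t, dtype=float)
    # Normalized integral of the input function (Patlak x-axis)
    x = np.array([np.trapz(c_plasma[: i + 1], t[: i + 1]) for i in range(len(t))]) / c_plasma
    y = c_tissue / c_plasma                      # Patlak y-axis
    late = t >= t_start                          # linear portion of the plot
    slope, intercept = np.polyfit(x[late], y[late], 1)
    return slope

# toy usage: a synthetic voxel built with a known slope
t = np.linspace(1, 60, 30)                       # minutes
c_p = np.exp(-0.05 * t) + 0.1                    # toy input function
ki_true = 0.02
c_t = ki_true * np.array([np.trapz(c_p[: i + 1], t[: i + 1]) for i in range(len(t))]) + 0.3 * c_p
print(round(patlak_slope(c_t, c_p, t), 3))       # ~0.02
```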

  3. Quantitative comparison of OSEM and penalized likelihood image reconstruction using relative difference penalties for clinical PET

    NASA Astrophysics Data System (ADS)

    Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.

    2015-08-01

    Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM, with a particular improvement in cold background regions such as lungs.
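
    The relative difference penalty referred to here is commonly written as psi(x_j, x_k) = (x_j - x_k)^2 / (x_j + x_k + gamma * |x_j - x_k|), summed over neighbouring voxel pairs. The sketch below evaluates that penalty for a 2-D image; the neighbourhood definition, gamma value, and weighting are assumptions for illustration and need not match the clinical implementation evaluated in the paper.

```python
import numpy as np

def relative_difference_penalty(img, gamma=2.0, beta=1.0):
    """Value of the relative difference penalty for a 2-D nonnegative image.

    Sums psi(x_j, x_k) = (x_j - x_k)^2 / (x_j + x_k + gamma*|x_j - x_k|)
    over horizontally and vertically adjacent voxel pairs; beta is the
    overall regularization weight added to the negative log-likelihood.
    """
    eps = 1e-12                                  # guard against 0/0 in empty regions
    total = 0.0
    for d in ((0, 1), (1, 0)):                   # right and down neighbours
        a = img[: img.shape[0] - d[0], : img.shape[1] - d[1]]
        b = img[d[0]:, d[1]:]
        diff = a - b
        total += np.sum(diff ** 2 / (a + b + gamma * np.abs(diff) + eps))
    return beta * total

# toy usage on a small image with one sharp edge
img = np.array([[1.0, 1.0, 5.0],
                [1.0, 2.0, 5.0],
                [1.0, 2.0, 5.0]])
print(relative_difference_penalty(img, gamma=2.0))
```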

  4. Characterization of dust aggregates in the vicinity of the Rosetta spacecraft

    NASA Astrophysics Data System (ADS)

    Güttler, C.; Hasselmann, P. H.; Li, Y.; Fulle, M.; Tubiana, C.; Kovacs, G.; Agarwal, J.; Sierks, H.; Fornasier, S.; Hofmann, M.; Gutiérrez Marqués, P.; Ott, T.; Drolshagen, E.; Bertini, I.; Osiris Team

    2017-09-01

    In a Rosetta/OSIRIS imaging activity in June 2015, we observed the dynamic motion of particles close to the spacecraft. Due to the focal setting of the OSIRIS Wide Angle Camera (WAC), these particles were blurred, which can be used to measure their distances to the spacecraft. We detected 108 dust aggregates over a 130-minute-long sequence, and find that their sizes are around a millimetre and their distances cluster between 2 m and 40 m from the spacecraft. Their number densities are about a factor of 10 higher than expected for the overall coma and highly fluctuating. Their velocities are small compared to the spacecraft orbital motion and directed away from the spacecraft, towards the comet. From this we conclude that they have interacted with the spacecraft and assess three possible scenarios. We prefer a scenario where centimeter-sized aggregates collide with the spacecraft and we would observe the fragments. Ablation of a dust layer on the spacecraft's z panel when rotated towards the sun is a reasonable alternative. We could also measure an acceleration for a subset of 18 aggregates, which is directed away from the sun and can be explained by a rocket effect, which requires a minimum ice fraction of the order of 0.1%.

  5. Characterization of dust aggregates in the vicinity of the Rosetta spacecraft

    NASA Astrophysics Data System (ADS)

    Güttler, C.; Hasselmann, P. H.; Li, Y.; Fulle, M.; Tubiana, C.; Kovacs, G.; Agarwal, J.; Sierks, H.; Fornasier, S.; Hofmann, M.; Gutiérrez Marqués, P.; Ott, T.; Drolshagen, E.; Bertini, I.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Koschny, D.; Rickman, H.; A'Hearn, M. F.; Barucci, M. A.; Bodewits, D.; Bertaux, J.-L.; Boudreault, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; De Cecco, M.; Deller, J.; Geiger, B.; Groussin, O.; Gutiérrez, P. J.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Keller, H. U.; Knollenberg, J.; Kramm, J. R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; López-Moreno, J. J.; Marzari, F.; Mottola, S.; Naletto, G.; Oklay, N.; Pajola, M.; Shi, X.; Thomas, N.; Vincent, J.-B.

    2017-07-01

    In a Rosetta/OSIRIS imaging activity in 2015 June, we observed the dynamic motion of particles close to the spacecraft. Due to the focal setting of the OSIRIS wide angle camera, these particles were blurred, which can be used to measure their distances to the spacecraft. We detected 109 dust aggregates over a 130-min-long sequence, and find that their sizes are around a millimetre and their distances cluster between 2 and 40 m from the spacecraft. Their number densities are about a factor of 10 higher than expected for the overall coma and highly fluctuating. Their velocities are small compared to the spacecraft orbital motion and directed away from the spacecraft, towards the comet. From this we conclude that they have interacted with the spacecraft and assess three possible scenarios. In the likeliest of the three scenarios, centimetre-sized aggregates collide with the spacecraft and we would observe the fragments. Ablation of a dust layer on the spacecraft's z panel (remote instrument viewing direction) when rotated towards the Sun is a reasonable alternative. We could also measure an acceleration for a subset of 18 aggregates, which is directed away from the Sun and can be explained by a rocket effect, which requires a minimum ice fraction of the order of 0.1 per cent.

  6. A CCD Color Comparison of Seyfert 1 and 2 Host Galaxies

    NASA Astrophysics Data System (ADS)

    Virani, S. N.; De Robertis, M. M.

    2001-05-01

    Wide-field, R-band CCD data of 15 Seyfert 1 and 15 Seyfert 2 galaxies taken from the CfA survey were analysed in order to compare the properties of their host galaxies. Also, B-band images for a subset of 12 Seyfert 1s and 7 Seyfert 2s were acquired and analysed in the same way. The nuclear contribution of the Seyfert host galaxies was modeled and removed empirically by using a robust technique for decomposing the nucleus, bulge and disk components (see Virani et al. 2000, De Robertis and Virani, 2001). Profile fits to the remaining bulge+disk light were then performed. Of the many B-R color comparisons that were performed (i.e., component colors, color gradient, etc.) between Seyfert 1s and 2s, only two distributions differed at greater than the 95% confidence level for the K-S test: the magnitude of the nuclear component, and the radial color gradient outside the nucleus. The former is expected. The latter could be consistent with some proposed evolutionary models. There is some suggestion that other parameters may differ, but at a lower confidence level. Color contour maps and results from all tests performed (K-S test and Wilcoxon-Rank Sum Test) are presented.

  7. NLP based congestive heart failure case finding: A prospective analysis on statewide electronic medical records.

    PubMed

    Wang, Yue; Luo, Jin; Hao, Shiying; Xu, Haihua; Shin, Andrew Young; Jin, Bo; Liu, Rui; Deng, Xiaohong; Wang, Lijuan; Zheng, Le; Zhao, Yifan; Zhu, Chunqing; Hu, Zhongkai; Fu, Changlin; Hao, Yanpeng; Zhao, Yingzhen; Jiang, Yunliang; Dai, Dorothy; Culver, Devore S; Alfreds, Shaun T; Todd, Rogow; Stearns, Frank; Sylvester, Karl G; Widen, Eric; Ling, Xuefeng B

    2015-12-01

    In order to proactively manage congestive heart failure (CHF) patients, an effective CHF case finding algorithm is required to process both structured and unstructured electronic medical records (EMR) to allow complementary and cost-efficient identification of CHF patients. We set out to identify CHF cases from both EMR codified and natural language processing (NLP) found cases. Using narrative clinical notes from all Maine Health Information Exchange (HIE) patients, the NLP case finding algorithm was retrospectively (July 1, 2012-June 30, 2013) developed with a random subset of HIE associated facilities, and blind-tested with the remaining facilities. The NLP based method was integrated into a live HIE population exploration system and validated prospectively (July 1, 2013-June 30, 2014). A total of 18,295 codified CHF patients were included in the Maine HIE. Among the 253,803 subjects without CHF codings, our case finding algorithm prospectively identified 2411 uncodified CHF cases. The positive predictive value (PPV) was 0.914, and 70.1% of these 2411 cases were found to have CHF histories in the clinical notes. A CHF case finding algorithm was developed, tested and prospectively validated. The successful integration of the CHF case finding algorithm into the Maine HIE live system is expected to improve CHF care in Maine. Copyright © 2015. Published by Elsevier Ireland Ltd.

  8. Using ecological null models to assess the potential for marine protected area networks to protect biodiversity.

    PubMed

    Semmens, Brice X; Auster, Peter J; Paddack, Michelle J

    2010-01-27

    Marine protected area (MPA) networks have been proposed as a principal method for conserving biological diversity, yet patterns of diversity may ultimately complicate or compromise the development of such networks. We show how a series of ecological null models can be applied to assemblage data across sites in order to identify non-random biological patterns likely to influence the effectiveness of MPA network design. We use fish census data from Caribbean fore-reefs as a test system and demonstrate that: 1) site assemblages were nested, such that species found on sites with relatively few species were subsets of those found on sites with relatively many species, 2) species co-occurred across sites more than expected by chance once species-habitat associations were accounted for, and 3) guilds were most evenly represented at the richest sites and richness among all guilds was correlated (i.e., species and trophic diversity were closely linked). These results suggest that the emerging Caribbean marine protected area network will likely be successful at protecting regional diversity even if planning is largely constrained by insular, inventory-based design efforts. By recasting ecological null models as tests of assemblage patterns likely to influence management action, we demonstrate how these classic tools of ecological theory can be brought to bear in applied conservation problems.
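
    For readers unfamiliar with ecological null models, the general recipe is to compare an observed assemblage statistic against its distribution under randomized site-by-species matrices that preserve chosen marginals. The sketch below uses checkerboard swaps (a fixed-fixed null) and the C-score as the co-occurrence statistic; it is a generic illustration under assumed data shapes, not the specific null models used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def checkerboard_swaps(m, n_swaps=10000):
    """Null-model randomization of a binary site-by-species matrix.

    Repeated 2x2 'checkerboard' swaps preserve both row sums (site richness)
    and column sums (species prevalence), a common fixed-fixed null model.
    """
    m = m.copy()
    rows, cols = m.shape
    for _ in range(n_swaps):
        r = rng.choice(rows, 2, replace=False)
        c = rng.choice(cols, 2, replace=False)
        sub = m[np.ix_(r, c)]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_(r, c)] = sub[::-1]          # swap the checkerboard pattern
    return m

def c_score(m):
    """Mean number of 'checkerboard units' per species pair (co-occurrence index)."""
    cols = m.shape[1]
    scores = []
    for i in range(cols):
        for j in range(i + 1, cols):
            shared = np.sum(m[:, i] * m[:, j])
            scores.append((m[:, i].sum() - shared) * (m[:, j].sum() - shared))
    return np.mean(scores)

# toy usage: observed C-score against a null distribution of 99 randomizations
obs = rng.integers(0, 2, size=(12, 8))
null = [c_score(checkerboard_swaps(obs)) for _ in range(99)]
print(c_score(obs), np.mean(null))
```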

  9. Question order sensitivity of subjective well-being measures: focus on life satisfaction, self-rated health, and subjective life expectancy in survey instruments.

    PubMed

    Lee, Sunghee; McClain, Colleen; Webster, Noah; Han, Saram

    2016-10-01

    This study examines the effect of question context created by order in questionnaires on three subjective well-being measures: life satisfaction, self-rated health, and subjective life expectancy. We conducted two Web survey experiments. The first experiment (n = 648) altered the order of life satisfaction and self-rated health: (1) life satisfaction asked immediately after self-rated health; (2) self-rated health immediately after life satisfaction; and (3) two items placed apart. We examined their correlation coefficient by experimental condition and further examined its interaction with objective health. The second experiment (n = 479) asked life expectancy before and after parental mortality questions. Responses to life expectancy were compared by order using ANOVA, and we examined interaction with parental mortality status using ANCOVA. Additionally, response time and probes were examined. Correlation coefficients between self-rated health and life satisfaction differed significantly by order: 0.313 (life satisfaction first), 0.508 (apart), and 0.643 (self-rated health first). Differences were larger among respondents with chronic conditions. Response times were the shortest when self-rated health was asked first. When life expectancy was asked after the parental mortality questions, respondents reported considering their parents more when answering the life expectancy question; respondents with deceased parents reported significantly lower expectancy, but those whose parents were alive did not. Question context effects exist. Findings suggest placing life satisfaction and self-rated health apart to avoid artificial attenuation or inflation in their association. Asking about parental mortality prior to life expectancy appears advantageous as this leads respondents to consider parental longevity more, an important factor for true longevity.

  10. Lie group classification of first-order delay ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Dorodnitsyn, Vladimir A.; Kozlov, Roman; Meleshko, Sergey V.; Winternitz, Pavel

    2018-05-01

    A group classification of first-order delay ordinary differential equations (DODEs) accompanied by an equation for the delay parameter (delay relation) is presented. A subset of such systems (delay ordinary differential systems or DODSs), which consists of linear DODEs and solution-independent delay relations, has infinite-dimensional symmetry algebras, as do nonlinear ones that are linearizable by an invertible transformation of variables. Genuinely nonlinear DODSs have symmetry algebras of finite dimension n. It is shown how exact analytical solutions of invariant DODSs can be obtained using symmetry reduction.

  11. Decision-making competence predicts domain-specific risk attitudes

    PubMed Central

    Weller, Joshua A.; Ceschi, Andrea; Randolph, Caleb

    2015-01-01

    Decision-making competence (DMC) reflects individual differences in rational responding across several classic behavioral decision-making tasks. Although it has been associated with real-world risk behavior, less is known about the degree to which DMC contributes to specific components of risk attitudes. Utilizing a psychological risk-return framework, we examined the associations between risk attitudes and DMC. Italian community residents (n = 804) completed an online DMC measure, using a subset of the original Adult-DMC battery. Participants also completed a self-reported risk attitude measure for three components of risk attitudes (risk-taking, risk perceptions, and expected benefits) across six risk domains. Overall, greater performance on the DMC component scales was inversely, albeit modestly, associated with risk-taking tendencies. Structural equation modeling results revealed that DMC was associated with lower perceived expected benefits for all domains. In contrast, its association with perceived risks was more domain-specific. These analyses also revealed stronger indirect effects for the DMC → expected benefits → risk-taking path than the DMC → perceived risk → risk-taking path, especially for behaviors that may be considered more maladaptive in nature. These results suggest that DMC performance differentially impacts specific components of risk attitudes, and may be more strongly related to the evaluation of the expected value of a specific behavior. PMID:26029128

  12. A Concise and Practical Framework for the Development and Usability Evaluation of Patient Information Websites

    PubMed Central

    Knijnenburg, S.L.; Kremer, L.C.; Jaspers, M.W.M.

    2015-01-01

    Background: The Website Developmental Model for the Healthcare Consumer (WDMHC) is an extensive and successfully evaluated framework that incorporates user-centered design principles. However, due to its extensiveness its application is limited. In the current study we apply a subset of the WDMHC framework in a case study concerning the development and evaluation of a website aimed at childhood cancer survivors (CCS). Objective: To assess whether the implementation of a limited subset of the WDMHC framework is sufficient to deliver a high-quality website with few usability problems, aimed at a specific patient population. Methods: The website was developed using a six-step approach divided into three phases derived from the WDMHC: 1) information needs analysis, mock-up creation and focus group discussion; 2) website prototype development; and 3) heuristic evaluation (HE) and think aloud analysis (TA). The HE was performed by three double experts (knowledgeable both in usability engineering and childhood cancer survivorship), who assessed the site using the Nielsen heuristics. Eight end-users were invited to complete three scenarios covering all functionality of the website by TA. Results: The HE and TA were performed concurrently on the website prototype. The HE resulted in 29 unique usability issues; the end-users performing the TA encountered eleven unique problems. Four issues specifically revealed by HE concerned cosmetic design flaws, whereas two problems revealed by TA were related to website content. Conclusion: Based on the subset of the WDMHC framework we were able to deliver a website that closely matched the expectations of the end-users and resulted in relatively few usability problems during end-user testing. With the successful application of this subset of the WDMHC, we provide developers with a clear and easily applicable framework for the development of healthcare websites with high usability aimed at specific medical populations. PMID:26171083

  13. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with the alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
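
    A minimal sketch of the two-stage idea described in the abstract is given below: a cheap relevance metric trims the atlas collection to an augmented subset, and only that subset undergoes the expensive registration before the fusion set is chosen. The function names, metrics, and subset sizes are placeholders, not the authors' implementation or their inference model for sizing the augmented subset.

```python
import numpy as np

def two_stage_selection(target, atlases, cheap_sim, full_sim, n_augmented, n_fusion):
    """Two-stage atlas selection sketch with assumed interfaces.

    cheap_sim: fast, rough relevance metric (e.g. similarity after rigid alignment)
    full_sim:  expensive relevance metric after full deformable registration
    A preliminary pass keeps the n_augmented most promising atlases, and only
    those are registered in full before picking the final n_fusion fusion set.
    """
    rough = np.array([cheap_sim(target, a) for a in atlases])
    augmented = np.argsort(rough)[::-1][:n_augmented]          # preliminary selection
    refined = np.array([full_sim(target, atlases[i]) for i in augmented])
    fusion = augmented[np.argsort(refined)[::-1][:n_fusion]]   # refinement step
    return fusion

# toy usage with correlation standing in for both relevance metrics
rng = np.random.default_rng(1)
target = rng.random(100)
atlases = [target + rng.normal(0, s, 100) for s in rng.uniform(0.1, 2.0, 30)]
corr = lambda x, y: np.corrcoef(x, y)[0, 1]
print(two_stage_selection(target, atlases, corr, corr, n_augmented=10, n_fusion=3))
```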

  14. Finding high-order analytic post-Newtonian parameters from a high-precision numerical self-force calculation

    NASA Astrophysics Data System (ADS)

    Shah, Abhay G.; Friedman, John L.; Whiting, Bernard F.

    2014-03-01

    We present a novel analytic extraction of high-order post-Newtonian (pN) parameters that govern quasicircular binary systems. Coefficients in the pN expansion of the energy of a binary system can be found from corresponding coefficients in an extreme-mass-ratio inspiral computation of the change ΔU in the redshift factor of a circular orbit at fixed angular velocity. Remarkably, by computing this essentially gauge-invariant quantity to accuracy greater than one part in 10^225, and by assuming that a subset of pN coefficients are rational numbers or products of π and a rational, we obtain the exact analytic coefficients. We find the previously unexpected result that the post-Newtonian expansion of ΔU (and of the change ΔΩ in the angular velocity at fixed redshift factor) have conservative terms at half-integral pN order beginning with a 5.5 pN term. This implies the existence of a corresponding 5.5 pN term in the expansion of the energy of a binary system. Coefficients in the pN series that do not belong to the subset just described are obtained to accuracy better than 1 part in 10^(265-23n) at nth pN order. We work in a radiation gauge, finding the radiative part of the metric perturbation from the gauge-invariant Weyl scalar ψ_0 via a Hertz potential. We use mode-sum renormalization, and find high-order renormalization coefficients by matching a series in L = ℓ + 1/2 to the large-L behavior of the expression for ΔU. The nonradiative parts of the perturbed metric associated with changes in mass and angular momentum are calculated in the Schwarzschild gauge.

  15. A fast method for the determination of fractional contributions to solvation in proteins

    PubMed Central

    Talavera, David; Morreale, Antonio; Meyer, Tim; Hospital, Adam; Ferrer-Costa, Carles; Gelpi, Josep Lluis; de la Cruz, Xavier; Soliva, Robert; Luque, F. Javier; Orozco, Modesto

    2006-01-01

    A fast method for the calculation of residue contributions to protein solvation is presented. The approach uses the exposed polar and apolar surface of protein residues and has been parametrized from the fractional contributions to solvation determined from linear response theory coupled to molecular dynamics simulations. Application of the method to a large subset of proteins taken from the Protein Data Bank allowed us to compute the expected fractional solvation of residues. This information is used to discuss when a residue or a group of residues presents an uncommon solvation profile. PMID:17001031

  16. Support for the existence of invertible maps between electronic densities and non-analytic 1-body external potentials in non-relativistic time-dependent quantum mechanics

    NASA Astrophysics Data System (ADS)

    Mosquera, Martín A.

    2017-10-01

    Provided the initial state, the Runge-Gross theorem establishes that the time-dependent (TD) external potential of a system of non-relativistic electrons determines uniquely their TD electronic density, and vice versa (up to a constant in the potential). This theorem requires the TD external potential and density to be Taylor-expandable around the initial time of the propagation. This paper presents an extension without this restriction. Given the initial state of the system and evolution of the density due to some TD scalar potential, we show that a perturbative (not necessarily weak) TD potential that induces a non-zero divergence of the external force-density, inside a small spatial subset and immediately after the initial propagation time, will cause a change in the density within that subset, implying that the TD potential uniquely determines the TD density. In this proof, we assume unitary evolution of wavefunctions and first-order differentiability (which does not imply analyticity) in time of the internal and external force-densities, electronic density, current density, and their spatial derivatives over the small spatial subset and short time interval.

  17. A Parameter Subset Selection Algorithm for Mixed-Effects Models

    DOE PAGES

    Schmidt, Kathleen L.; Smith, Ralph C.

    2016-01-01

    Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
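
    Parameter subset selection algorithms of this kind typically rank parameters using local sensitivity information. The sketch below approximates a sensitivity (Jacobian) matrix by finite differences and ranks parameters with QR column pivoting; this is a generic illustration of sensitivity-based ordering, not the specific PSS algorithm for mixed-effects models introduced in the paper, and the example model is purely synthetic.

```python
import numpy as np
from scipy.linalg import qr   # SciPy's QR supports column pivoting

def order_parameters_by_sensitivity(model, theta, t, eps=1e-6):
    """Order parameters by identifiability using a local sensitivity matrix.

    model(theta, t) returns the model response at times t; the Jacobian is
    approximated by finite differences and parameters are ranked by QR with
    column pivoting, a common ingredient of subset-selection schemes.
    """
    y0 = model(theta, t)
    S = np.empty((len(t), len(theta)))
    for j in range(len(theta)):
        dtheta = theta.copy()
        dtheta[j] += eps * max(abs(theta[j]), 1.0)
        S[:, j] = (model(dtheta, t) - y0) / (dtheta[j] - theta[j])
    # column pivoting ranks parameters by how much new information each adds
    _, _, piv = qr(S, pivoting=True)
    return piv

# toy usage: an exponential model where the offset parameter is weakly identifiable
model = lambda p, t: p[0] * np.exp(-p[1] * t) + 1e-3 * p[2]
t = np.linspace(0, 5, 50)
print(order_parameters_by_sensitivity(model, np.array([2.0, 0.7, 1.0]), t))
```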

  18. Polytomous Differential Item Functioning and Violations of Ordering of the Expected Latent Trait by the Raw Score

    ERIC Educational Resources Information Center

    DeMars, Christine E.

    2008-01-01

    The graded response (GR) and generalized partial credit (GPC) models do not imply that examinees ordered by raw observed score will necessarily be ordered on the expected value of the latent trait (OEL). Factors were manipulated to assess whether increased violations of OEL also produced increased Type I error rates in differential item…

  19. Automation and Preclinical Evaluation of a Dedicated Emission Mammotomography System for Fully 3-D Molecular Breast Imaging

    DTIC Science & Technology

    2008-10-01

    Indexed full-text excerpt (no abstract available): "…concentrated aqueous 99mTc and taped to the exterior surface of the breast phantom to act as fiducial markers for registration purposes…" Cited references include Physica Medica, vol. 21, pp. 48-55, 2006, and H. Erdogan and J. A. Fessler, "Ordered subsets algorithms for transmission tomography," Phys Med Biol.

  20. Keeping the Teacher at Arm's Length: Student Resistance in Writing Conferences in Two High School Classrooms

    ERIC Educational Resources Information Center

    Consalvo, Annamary; Maloch, Beth

    2015-01-01

    The purpose of this paper is to explore a subset of findings taken from yearlong qualitative study of writing conferences in two diversely populated, urban high school classrooms. Drawing on multiple data sources, we used case study and discourse analytic methods to follow two focal students across the year in order to examine instructional and…

  1. Concentration of the L_1-norm of trigonometric polynomials and entire functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malykhin, Yu V; Ryutin, K S

    2014-11-30

    For any sufficiently large n, the minimal measure of a subset of [−π,π] on which some nonzero trigonometric polynomial of order ≤n gains half of the L_1-norm is shown to be π/(n+1). A similar result for entire functions of exponential type is established. Bibliography: 13 titles.

  2. Inside the Black Box of School Reform: Explaining the How and Why of Change at "Getting Results" Schools

    ERIC Educational Resources Information Center

    McDougall, Dennis; Saunders, William M.; Goldenberg, Claude

    2007-01-01

    This article reports key findings from a process-focused external evaluation that compared a subset of "Getting Results" project schools and comparison schools in order to understand the dynamics of school-wide reform efforts at these primary schools. Findings shed light on the "black box" of school reform and illuminate the…

  3. Prioritizing molecular markers to test for in the initial workup of advanced non-small cell lung cancer: wants versus needs.

    PubMed

    West, Howard

    2017-09-01

    The current standard of care for molecular marker testing in patients with advanced non-small cell lung cancer (NSCLC) has been evolving over several years and is a product of the quality of the evidence supporting a targeted therapy for a specific molecular marker, the pre-test probability of that marker in the population, and the magnitude of benefit seen with that treatment. Among the markers that have one or more matched targeted therapies, only a few clearly warrant prioritized detection in the first-line setting, where they should supplant other first-line alternatives, and then only in a subset of patients, as defined currently by NSCLC histology. Specifically, this currently includes testing for an activating epidermal growth factor receptor (EGFR) mutation or an anaplastic lymphoma kinase (ALK) or ROS1 rearrangement. This article reviews the history and data supporting the prioritization of these markers in patients with non-squamous NSCLC, a histologically selected population in whom the probability of these markers combined with the anticipated efficacy of targeted therapies against them is high enough to favor these treatments in the first-line setting. In reviewing the evidence supporting this very limited core subset of the most valuable molecular markers to detect in the initial workup of such patients, we can also see the criteria that other actionable markers need to meet in order to be widely recognized as reliably valuable enough to warrant prioritization in the initial workup of advanced NSCLC as well.

  4. Subjective study of preferred listening conditions in Italian Catholic churches

    NASA Astrophysics Data System (ADS)

    Martellotta, Francesco

    2008-10-01

    The paper describes the results of research aimed at investigating the preferred subjective listening conditions inside churches. The effect of different musical motifs (ranging from Gregorian chant to symphonic music) was investigated and regression analysis was performed in order to point out the relationship between subjective ratings and acoustical parameters. In order to present realistic listening conditions to the subjects, a small subset of nine churches was selected from a larger set of acoustic data collected in several Italian churches during a widespread on-site survey. The subset represented different architectural styles and shapes, and was characterized by average listening conditions. For each church a single source-receiver combination with fixed relative positions was chosen. Measured binaural impulse responses were cross-talk cancelled and then convolved with five anechoic motifs. Paired comparisons were finally performed, asking a trained panel of subjects their preference. Factor analysis pointed out a substantially common underlying pattern characterizing the subjective responses. The results show that preferred listening conditions vary as a function of the musical motif, depending on early decay time for choral music and on a combination of initial time delay and lateral energy for instrumental music.

  5. Dynamic graphs, community detection, and Riemannian geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun

    A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g. the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.
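
    The entry-wise linear interpolation baseline mentioned above simply blends the two adjacency snapshots element by element, as in the short sketch below; the weighted adjacency matrices and the blending parameter alpha are assumed inputs, and the Riemannian alternatives are not reproduced here.

```python
import numpy as np

def entrywise_interpolation(A0, A1, alpha):
    """Entry-wise linear interpolation between two weighted adjacency snapshots.

    alpha = 0 returns the earlier snapshot and alpha = 1 the later one; this is
    the simple baseline against which the Riemannian framework is compared.
    """
    return (1.0 - alpha) * A0 + alpha * A1

# toy usage: interpolate halfway between two 3-node graph snapshots
A0 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(entrywise_interpolation(A0, A1, 0.5))
```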

  6. Global uniqueness in an inverse problem for time fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Kian, Y.; Oksanen, L.; Soccorsi, E.; Yamamoto, M.

    2018-01-01

    Given (M , g), a compact connected Riemannian manifold of dimension d ⩾ 2, with boundary ∂M, we consider an initial boundary value problem for a fractional diffusion equation on (0 , T) × M, T > 0, with time-fractional Caputo derivative of order α ∈ (0 , 1) ∪ (1 , 2). We prove uniqueness in the inverse problem of determining the smooth manifold (M , g) (up to an isometry), and various time-independent smooth coefficients appearing in this equation, from measurements of the solutions on a subset of ∂M at fixed time. In the "flat" case where M is a compact subset of R^d, two out of the three coefficients ρ (density), a (conductivity) and q (potential) appearing in the equation ρ ∂_t^α u − div(a∇u) + qu = 0 on (0 , T) × M are recovered simultaneously.

  7. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  8. Prostate specific antigen velocity as a measure of the natural history of prostate cancer: defining a 'rapid riser' subset.

    PubMed

    Nam, R K; Klotz, L H; Jewett, M A; Danjoux, C; Trachtenberg, J

    1998-01-01

    To study the rate of change in prostate specific antigen (PSA velocity) in patients with prostate cancer initially managed by 'watchful waiting'. Serial PSA levels were determined in 141 patients with prostate cancer confirmed by biopsy, who were initially managed expectantly and enrolled between May 1990 and December 1995. Sixty-seven patients eventually underwent surgery (mean age 59 years) because they chose it (the decision for surgery was not based on PSA velocity). A cohort of 74 patients remained on 'watchful waiting' (mean age 69 years). Linear regression and logarithmic transformations were used to segregate those patients who showed a rapid rise, defined as a > 50% rise in PSA per year (or a doubling time of < 2 years) and designated 'rapid risers'. An initial analysis based on a minimum of two PSA values showed that 31% were rapid risers. Only 15% of patients with more than three serial PSA determinations over > or = 6 months showed a rapid rise in PSA level. There was no advantage of log-linear analysis over linear regression models. Three serial PSA determinations over > or = 6 months in patients with clinically localized prostate cancer identifies a subset (15%) of patients with a rapidly rising PSA level. Shorter PSA surveillance with fewer PSA values may falsely identify patients with rapid rises in PSA level. However, further follow-up is required to determine if a rapid rise in PSA level identifies a subset of patients with an aggressive biological phenotype who are either still curable or who have already progressed to incurability through metastatic disease.
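
    The 'rapid riser' criterion can be checked by fitting serial PSA values on a log scale, since an exponential rise appears as a straight line in log(PSA) and the doubling time follows from the slope. The sketch below is a minimal log-linear version under assumed measurement times; note that the abstract itself reports no advantage of log-linear analysis over plain linear regression, so this is one of the two equivalent options rather than the study's definitive method.

```python
import numpy as np

def psa_doubling_time_years(times_years, psa):
    """Estimate PSA doubling time from serial PSA values by log-linear regression.

    A rise of more than 50% per year (roughly a doubling time under two years)
    is the 'rapid riser' criterion quoted in the abstract; at least three
    values spanning six months or more are recommended there.
    """
    slope, _ = np.polyfit(times_years, np.log(psa), 1)   # slope in ln(PSA)/year
    if slope <= 0:
        return float("inf")                              # PSA stable or falling
    return np.log(2) / slope

# toy usage: three measurements over one year, PSA rising ~60% per year
t = np.array([0.0, 0.5, 1.0])
psa = np.array([4.0, 5.1, 6.4])
dt = psa_doubling_time_years(t, psa)
print(round(dt, 2), "years ->", "rapid riser" if dt < 2 else "not rapid")
```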

  9. Relieving patients' pain with expectation interventions: a meta-analysis.

    PubMed

    Peerdeman, Kaya J; van Laarhoven, Antoinette I M; Keij, Sascha M; Vase, Lene; Rovers, Maroeska M; Peters, Madelon L; Evers, Andrea W M

    2016-06-01

    Patients' expectations are important predictors of the outcome of analgesic treatments, as demonstrated predominantly in research on placebo effects. Three commonly investigated interventions that have been found to induce expectations (verbal suggestion, conditioning, and mental imagery) entail promising, brief, and easy-to-implement adjunctive procedures for optimizing the effectiveness of analgesic treatments. However, evidence for their efficacy stems mostly from research on experimentally evoked pain in healthy samples, and these findings might not be directly transferable to clinical populations. The current meta-analysis investigated the effects of these expectation inductions on patients' pain relief. Five bibliographic databases were systematically searched for studies that assessed the effects of brief verbal suggestion, conditioning, or imagery interventions on pain in clinical populations, with patients experiencing experimental, acute procedural, or chronic pain, compared with no treatment or control treatment. Of the 15,955 studies retrieved, 30 met the inclusion criteria, of which 27 provided sufficient data for quantitative analyses. Overall, a medium-sized effect of the interventions on patients' pain relief was observed (Hedges g = 0.61, I² = 73%), with varying effects of verbal suggestion (k = 18, g = 0.75), conditioning (always paired with verbal suggestion, k = 3, g = 0.65), and imagery (k = 6, g = 0.27). Subset analyses indicated medium to large effects on experimental and acute procedural pain and small effects on chronic pain. In conclusion, patients' pain can be relieved with expectation interventions; particularly, verbal suggestion for acute procedural pain was found to be effective.

  10. Optimal selection of markers for validation or replication from genome-wide association studies.

    PubMed

    Greenwood, Celia M T; Rangrej, Jagadish; Sun, Lei

    2007-07-01

    With reductions in genotyping costs and the fast pace of improvements in genotyping technology, it is not uncommon for the individuals in a single study to undergo genotyping using several different platforms, where each platform may contain different numbers of markers selected via different criteria. For example, a set of cases and controls may be genotyped at markers in a small set of carefully selected candidate genes, and shortly thereafter, the same cases and controls may be used for a genome-wide single nucleotide polymorphism (SNP) association study. After such initial investigations, often, a subset of "interesting" markers is selected for validation or replication. Specifically, by validation, we refer to the investigation of associations between the selected subset of markers and the disease in independent data. However, it is not obvious how to choose the best set of markers for this validation. There may be a prior expectation that some sets of genotyping data are more likely to contain real associations. For example, it may be more likely for markers in plausible candidate genes to show disease associations than markers in a genome-wide scan. Hence, it would be desirable to select proportionally more markers from the candidate gene set. When a fixed number of markers are selected for validation, we propose an approach for identifying an optimal marker-selection configuration by basing the approach on minimizing the stratified false discovery rate. We illustrate this approach using a case-control study of colorectal cancer from Ontario, Canada, and we show that this approach leads to substantial reductions in the estimated false discovery rates in the Ontario dataset for the selected markers, as well as reductions in the expected false discovery rates for the proposed validation dataset. Copyright 2007 Wiley-Liss, Inc.
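
    One simple way to act on differing prior expectations across marker sets is to control the false discovery rate within each stratum separately, so that an enriched candidate-gene stratum contributes proportionally more selected markers. The sketch below does this with Benjamini-Hochberg adjustment per stratum; it is a simplified stand-in for the stratified-FDR optimization described in the abstract, and the strata, p-values, and threshold are illustrative assumptions.

```python
import numpy as np

def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values (monotone step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranks = np.arange(1, len(p) + 1)
    q = p[order] * len(p) / ranks
    q = np.minimum.accumulate(q[::-1])[::-1]     # enforce monotonicity from the top
    out = np.empty_like(q)
    out[order] = np.clip(q, 0, 1)
    return out

def stratified_selection(pvals, strata, q_threshold=0.05):
    """Select markers by controlling the FDR separately within each stratum
    (e.g. candidate-gene panel vs. genome-wide scan), so strata with a higher
    prior yield of true associations contribute proportionally more markers."""
    selected = []
    for s in np.unique(strata):
        idx = np.where(strata == s)[0]
        q = bh_qvalues(pvals[idx])
        selected.extend(idx[q <= q_threshold])
    return sorted(selected)

# toy usage: stratum 0 = candidate genes (enriched), stratum 1 = genome-wide scan
rng = np.random.default_rng(2)
p = np.concatenate([rng.uniform(0, 0.02, 20), rng.uniform(0, 1, 200)])
strata = np.array([0] * 20 + [1] * 200)
print(len(stratified_selection(p, strata)), "markers selected for validation")
```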

  11. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species.

    PubMed

    Klippenstein, Stephen J; Harding, Lawrence B; Ruscic, Branko

    2017-09-07

    The fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) through comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.

  12. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klippenstein, Stephen J.; Harding, Lawrence B.; Ruscic, Branko

    Here, the fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) through comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.

  13. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species

    DOE PAGES

    Klippenstein, Stephen J.; Harding, Lawrence B.; Ruscic, Branko

    2017-07-31

    Here, the fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) through comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.

  14. Nonsynchronous updating in the multiverse of cellular automata

    NASA Astrophysics Data System (ADS)

    Reia, Sandro M.; Kinouchi, Osame

    2015-04-01

    In this paper we study updating effects on cellular automata rule space. We consider a subset of 6144 order-3 automata from the space of 262144 bidimensional outer-totalistic rules. We compare synchronous to asynchronous and sequential updatings. Focusing on two automata, we discuss how update changes destroy typical structures of these rules. Besides, we show that the first-order phase transition in the multiverse of synchronous cellular automata, revealed with the use of a recently introduced control parameter, seems to be robust not only to changes in update schema but also to different initial densities.

  15. The Use of Software Agents for Autonomous Control of a DC Space Power System

    NASA Technical Reports Server (NTRS)

    May, Ryan D.; Loparo, Kenneth A.

    2014-01-01

    In order to enable manned deep-space missions, the spacecraft must be controlled autonomously using on-board algorithms. A control architecture is proposed to enable this autonomous operation for a spacecraft electric power system and then implemented using a highly distributed network of software agents. These agents collaborate and compete with each other in order to implement each of the control functions. A subset of this control architecture is tested against a steady-state power system simulation and found to be able to solve a constrained optimization problem with competing objectives using only local information.

  16. Nonsynchronous updating in the multiverse of cellular automata.

    PubMed

    Reia, Sandro M; Kinouchi, Osame

    2015-04-01

    In this paper we study updating effects on cellular automata rule space. We consider a subset of 6144 order-3 automata from the space of 262144 bidimensional outer-totalistic rules. We compare synchronous to asynchronous and sequential updatings. Focusing on two automata, we discuss how update changes destroy typical structures of these rules. Besides, we show that the first-order phase transition in the multiverse of synchronous cellular automata, revealed with the use of a recently introduced control parameter, seems to be robust not only to changes in update schema but also to different initial densities.

  17. Recent Developments in the Treatment of Spinal Epidural Abscesses

    PubMed Central

    Eltorai, Adam E.M.; Naqvi, Syed S.; Seetharam, Ashok; Brea, Bielinsky A.; Simon, Chad

    2017-01-01

    Spinal epidural abscess (SEA) is a serious condition that can be challenging to diagnose due to nonspecific symptomology and delayed presentation. Despite this, it requires prompt recognition and management in order to prevent permanent neurologic sequelae. Several recent studies have improved our understanding of SEA. Herein, we summarize the recent literature from the past 10 years relevant to SEA diagnosis, management and outcome. While surgical care remains the mainstay of treatment, a select subset of SEA patients may be managed without operative intervention. Multidisciplinary management involves internal medicine, infectious disease, critical care, and spine surgeons in order to optimize care. PMID:28713526

  18. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance is calculated between the 83 feature points of the image to be estimated and the pre-dimension-reduction feature point information of each cluster center, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimates with greatly reduced computational complexity. Compared with the traditional traversal search estimation method, the proposed method reduces the error rate by 0.49, and the number of searches decreases with the change of the category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
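
    The subset-selection step described above amounts to clustering the training models, finding the cluster centre nearest to the query image's landmarks, and restricting the depth estimation to that cluster. A minimal sketch follows; the feature definition, number of clusters, and synthetic data are assumptions, and the final depth-estimation step (the method of Kong D) is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def pick_training_subset(cluster_labels, centers, query_features):
    """Return indices of the training subset whose cluster centre is closest to the query.

    The depth-estimation step is then run only inside the returned subset,
    which is what reduces the search cost relative to scanning every model.
    """
    d = np.linalg.norm(centers - query_features, axis=1)     # Euclidean distances
    best_cluster = int(np.argmin(d))
    return np.where(cluster_labels == best_cluster)[0]

# toy usage: 200 synthetic "models", 4 clusters, one query image
rng = np.random.default_rng(3)
feats = rng.random((200, 166))                # stand-in for 83 landmarks x 2D coordinates
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(feats)
query = rng.random(166)
subset = pick_training_subset(km.labels_, km.cluster_centers_, query)
print(len(subset), "candidate models searched instead of", len(feats))
```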

  19. Evaluation of subset matching methods and forms of covariate balance.

    PubMed

    de Los Angeles Resa, María; Zubizarreta, José R

    2016-11-30

    This paper conducts a Monte Carlo simulation study to evaluate the performance of multivariate matching methods that select a subset of treatment and control observations. The matching methods studied are the widely used nearest neighbor matching with propensity score calipers and the more recently proposed methods, optimal matching of an optimally chosen subset and optimal cardinality matching. The main findings are: (i) covariate balance, as measured by differences in means, variance ratios, Kolmogorov-Smirnov distances, and cross-match test statistics, is better with cardinality matching because by construction it satisfies balance requirements; (ii) for given levels of covariate balance, the matched samples are larger with cardinality matching than with the other methods; (iii) in terms of covariate distances, optimal subset matching performs best; (iv) treatment effect estimates from cardinality matching have lower root-mean-square errors, provided strong requirements for balance, specifically, fine balance, or strength-k balance, plus close mean balance. In standard practice, a matched sample is considered to be balanced if the absolute differences in means of the covariates across treatment groups are smaller than 0.1 standard deviations. However, the simulation results suggest that stronger forms of balance should be pursued in order to remove systematic biases due to observed covariates when a difference in means treatment effect estimator is used. In particular, if the true outcome model is additive, then marginal distributions should be balanced, and if the true outcome model is additive with interactions, then low-dimensional joints should be balanced. Copyright © 2016 John Wiley & Sons, Ltd.
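
    The 0.1-standard-deviation rule of thumb quoted above is usually checked with absolute standardized mean differences computed from the matched sample. The sketch below computes them with a pooled standard deviation; it illustrates the balance diagnostic only, not the matching methods compared in the paper, and the data and threshold are illustrative.

```python
import numpy as np

def standardized_mean_differences(X, treated):
    """Absolute standardized differences in means for each covariate.

    Uses the pooled standard deviation of the treated and control groups;
    the conventional rule of thumb calls a sample balanced when every value
    is below 0.1, though the simulation results argue for stronger balance.
    """
    Xt, Xc = X[treated], X[~treated]
    pooled_sd = np.sqrt((Xt.var(axis=0, ddof=1) + Xc.var(axis=0, ddof=1)) / 2)
    return np.abs(Xt.mean(axis=0) - Xc.mean(axis=0)) / pooled_sd

# toy usage: two covariates, the second deliberately imbalanced
rng = np.random.default_rng(4)
treated = np.arange(200) < 100
X = rng.normal(size=(200, 2))
X[treated, 1] += 0.5
print(np.round(standardized_mean_differences(X, treated), 2))
```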

  20. Feature selection for the classification of traced neurons.

    PubMed

    López-Cabrera, José D; Lorenzo-Ginori, Juan V

    2018-06-01

    The great availability of computational tools to calculate the properties of traced neurons leads to the existence of many descriptors which allow the automated classification of neurons from these reconstructions. This situation determines the necessity to eliminate irrelevant features as well as making a selection of the most appropriate among them, in order to improve the quality of the classification obtained. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted by means of the L-measure software, which is one of the most used computational tools in neuroinformatics to quantify traced neurons. We review some current feature selection techniques such as filter, wrapper, embedded and ensemble methods. The stability of the feature selection methods was measured. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained applying feature selection methods were evaluated using supervised classifiers, among which Random Forest, C4.5, SVM, Naïve Bayes, Knn, Decision Table and the Logistic classifier were used as classification algorithms. Feature selection methods of the filter, embedded, wrapper and ensemble types were compared and the subsets returned were tested in classification tasks for different classification algorithms. L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons. Copyright © 2018 Elsevier B.V. All rights reserved.
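
    A compact way to reproduce the filter-plus-classifier workflow described above is a pipeline that first keeps the top-k features by a univariate score and then fits a classifier under cross-validation. The sketch below uses scikit-learn with synthetic stand-in data; the descriptor matrix, labels, k, and classifier settings are placeholders rather than the study's actual L-measure features or its full comparison of filter, wrapper, embedded and ensemble methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a 318-neuron by 40-descriptor L-measure feature matrix
rng = np.random.default_rng(5)
X = rng.random((318, 40))                      # descriptor values (placeholder)
y = rng.integers(0, 2, 318)                    # interneuron vs. pyramidal (placeholder)

pipe = make_pipeline(
    SelectKBest(f_classif, k=5),               # filter step: keep the 5 top-scoring features
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(pipe, X, y, cv=5)     # evaluate the selected subset by cross-validation
print("mean CV accuracy:", round(scores.mean(), 3))
```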

  1. Integrating multiple immunogenetic data sources for feature extraction and mining somatic hypermutation patterns: the case of "towards analysis" in chronic lymphocytic leukaemia.

    PubMed

    Kavakiotis, Ioannis; Xochelli, Aliki; Agathangelidis, Andreas; Tsoumakas, Grigorios; Maglaveras, Nicos; Stamatopoulos, Kostas; Hadzidimitriou, Anastasia; Vlahavas, Ioannis; Chouvarda, Ioanna

    2016-06-06

    Somatic Hypermutation (SHM) refers to the introduction of mutations within rearranged V(D)J genes, a process that increases the diversity of Immunoglobulins (IGs). The analysis of SHM has offered critical insight into the physiology and pathology of B cells, leading to strong prognostication markers for clinical outcome in chronic lymphocytic leukaemia (CLL), the most frequent adult B-cell malignancy. In this paper we present a methodology for integrating multiple immunogenetic and clinicobiological data sources in order to extract features and create high quality datasets for SHM analysis in IG receptors of CLL patients. This dataset is used as the basis for a higher level integration procedure, inspired by social choice theory. This is applied in the Towards Analysis, our attempt to investigate the potential ontogenetic transformation of genes belonging to specific stereotyped CLL subsets towards other genes or gene families, through SHM. The data integration process, followed by feature extraction, resulted in the generation of a dataset containing information about mutations occurring through SHM. The Towards analysis, performed on the integrated dataset applying voting techniques, revealed the distinct behaviour of subset #201 compared to other subsets, as regards SHM-related movements among gene clans, both in allele-conserved and non-conserved gene areas. With respect to movement between genes, a high percentage of movements towards pseudogenes was found in all CLL subsets. This data integration and feature extraction process can set the basis for exploratory analysis or a fully automated computational data mining approach on many as yet unanswered, clinically relevant biological questions.

  2. Binge drinking in undergraduates: relationships with sex, drinking behaviors, impulsivity, and the perceived effects of alcohol.

    PubMed

    Balodis, Iris M; Potenza, Marc N; Olmstead, Mary C

    2009-09-01

    Binge drinking on university campuses is associated with social and health-related problems. To determine the factors that may predict this behavior, we collected information on alcohol use, alcohol expectations, and impulsivity from 428 undergraduate students attending a Canadian university. The subjective effects of a binge drinking dose of alcohol were assessed in a subset of participants. In the larger sample, 72% of students reported drinking at or above binge drinking thresholds on a regular basis. Men reported greater alcohol consumption per drinking occasion, which was consistent with other studies, but the frequency of drinking occasions among women was higher than in earlier studies, suggesting that consumption in women may be increasing. Compared with men, women reported different expectations of alcohol, specifically related to sociability and sexuality. Self-reported impulsivity scores were related, albeit weakly, to drinking behaviors and to expectations in both sexes. Finally, intoxicated binge drinkers reported feeling less intoxicated, liking the effects more, and wanting more alcohol than did non-binge drinkers receiving an equivalent dose of alcohol. These results have implications for sex-specific prevention strategies for binge drinking on university campuses.

  3. Analyzing Process Data from Game/Scenario-Based Tasks: An Edit Distance Approach

    ERIC Educational Resources Information Center

    Hao, Jiangang; Shu, Zhan; von Davier, Alina

    2015-01-01

    Students' activities in game/scenario-based tasks (G/SBTs) can be characterized by a sequence of time-stamped actions of different types with different attributes. For a subset of G/SBTs in which only the order of the actions is of great interest, the process data can be well characterized as a string of characters (i.e., action string) if we…

  4. Modulating Calcium Signals to Boost AON Exon Skipping for DMD

    DTIC Science & Technology

    2016-10-01

    RNA Seq analysis to identify mechanisms of activity and specificity in order to guide discovery of second-generation skipping drugs or combinations...with greater activity. 15. SUBJECT TERMS Exon skipping, Dantrolene, Calcium, Duchenne, Dystrophy, Dystrophin, antisense oligonucleotide, DMD, RNA ...for a subset of very rare mutations. Finally, we hypothesize that by combining chemical genomics with RNA Seq analysis we can begin to identify

  5. Ecological consequences of hydropower development in Central America: Impacts of small dams and water diversion on neotropical stream fish assemblages

    USGS Publications Warehouse

    Anderson, Elizabeth P.; Freeman, Mary C.; Pringle, C.M.

    2006-01-01

    Small dams for hydropower have caused widespread alteration of Central American rivers, yet much of the recent development has gone undocumented by scientists and conservationists. We examined the ecological effects of a small hydropower plant (Dona Julia Hydroelectric Center) on two low-order streams (the Puerto Viejo River and Quebradon stream) draining a mountainous area of Costa Rica. Operation of the Dona Julia plant has dewatered these streams, reducing discharge to ~ 10% of average annual flow. This study compared fish assemblage composition and aquatic habitat upstream and downstream of diversion dams on two streams and along a ~ 4 km dewatered reach of the Puerto Viejo River in an attempt to evaluate current instream flow recommendations for regulated Costa Rican streams. Our results indicated that fish assemblages directly upstream and downstream of the dam on the third order Puerto Viejo River were dissimilar, suggesting that the small dam (< 15 m high) hindered movement of fishes. Along the ~ 4 km dewatered reach of the Puerto Viejo River, species count increased with downstream distance from the dam. However, estimated species richness and overall fish abundance were not significantly correlated with downstream distance from the dam. Our results suggested that effects of stream dewatering may be most pronounced for a subset of species with more complex reproductive requirements, classified as equilibrium-type species based on their life history. In the absence of changes to current operations, we expect that fish assemblages in the Puerto Viejo River will be increasingly dominated by opportunistic-type, colonizing fish species. Operations of many other small hydropower plants in Costa Rica and other parts of Central America mirror those of Dona Julia; the methods and results of this study may be applicable to some of those projects.

  6. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    NASA Astrophysics Data System (ADS)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). Reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from Magnetic Resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
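
    The following sketch illustrates the quantification idea described above: compute a specific uptake ratio (SUR) from ROI means and standardize measured values against true values with a simple linear regression. The SUR definition used here is the conventional (specific - reference)/reference ratio, and all numbers are simulated stand-ins, not the study's data.

```python
# SUR computation and linear-regression standardization, on synthetic values.
import numpy as np

def specific_uptake_ratio(roi_counts: float, reference_counts: float) -> float:
    """SUR = (specific - non-specific) / non-specific, from ROI mean counts."""
    return (roi_counts - reference_counts) / reference_counts

# Simulated pairs of (true SUR, measured SUR) with a partial-volume-like bias
rng = np.random.default_rng(1)
true_sur = rng.uniform(0.5, 6.0, 50)
measured_sur = 0.7 * true_sur + 0.2 + rng.normal(0, 0.15, 50)

# Fit measured = a * true + b over the dataset, then invert it to standardize
a, b = np.polyfit(true_sur, measured_sur, 1)
standardized_sur = (measured_sur - b) / a
print(f"max |standardized - true| SUR: {np.abs(standardized_sur - true_sur).max():.2f}")
```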

  7. Selection for family medicine residency training in Canada: How consistently are the same students ranked by different programs?

    PubMed

    Wycliffe-Jones, Keith; Hecker, Kent G; Schipper, Shirley; Topps, Maureen; Robinson, Jeanine; Abedin, Tasnima

    2018-02-01

    To examine the consistency of the ranking of Canadian and US medical graduates who applied to Canadian family medicine (FM) residency programs between 2007 and 2013. Descriptive cross-sectional study. Family medicine residency programs in Canada. All 17 Canadian medical schools allowed access to their anonymized program rank-order lists of students applying to FM residency programs submitted to the first iteration of the Canadian Resident Matching Service match from 2007 to 2013. The rank position of medical students who applied to more than 1 FM residency program on the rank-order lists submitted by the programs. Anonymized ranking data submitted to the Canadian Resident Matching Service from 2007 to 2013 by all 17 FM residency programs were used. Ranking data of eligible Canadian and US medical graduates were analyzed to assess the within-student and between-student variability in rank score. These covariance parameters were then used to calculate the intraclass correlation coefficient (ICC) for all programs. Program descriptions and selection criteria were also reviewed to identify sites with similar profiles for subset ICC analysis. Between 2007 and 2013, the consistency of ranking by all programs was fair at best (ICC = 0.34 to 0.39). The consistency of ranking by larger urban-based sites was weak to fair (ICC = 0.23 to 0.36), and the consistency of ranking by sites focusing on training for rural practice was weak to moderate (ICC = 0.16 to 0.55). In most cases, there is a low level of consistency of ranking of students applying for FM training in Canada. This raises concerns regarding fairness, particularly in relation to expectations around equity and distributive justice in selection processes. Copyright© the College of Family Physicians of Canada.
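
    As a rough illustration of the intraclass correlation reported above, the sketch below computes a standard one-way random-effects ICC(1,1) from an applicants-by-programs score matrix; the study's covariance-parameter estimation may differ, and the data here are synthetic.

```python
# One-way random-effects ICC(1,1) for a balanced (n_subjects x k_raters) matrix.
import numpy as np

def icc_oneway(scores: np.ndarray) -> float:
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    ss_between = k * ((subject_means - grand_mean) ** 2).sum()
    ss_within = ((scores - subject_means[:, None]) ** 2).sum()
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Synthetic example: 100 applicants each scored by 4 programs
rng = np.random.default_rng(2)
applicant_quality = rng.normal(0, 1, (100, 1))
scores = applicant_quality + rng.normal(0, 1.5, (100, 4))  # noisy program-specific scores
print("ICC:", round(icc_oneway(scores), 2))
```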

  8. Design, installation, commissioning and operation of a beamlet monitor in the negative ion beam test stand at NIFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, V.; Agostinetti, P.; Brombin, M.

    2015-04-08

    In the framework of the accompanying activity for the development of the two neutral beam injectors for the ITER fusion experiment, an instrumented beam calorimeter is being designed at Consorzio RFX, to be used in the SPIDER test facility (particle energy 100 keV; beam current 50 A), with the aim of testing beam characteristics and verifying proper operation of the source. The main components of the instrumented calorimeter are one-directional carbon-fibre-carbon composite tiles. Some prototype tiles have been used as a small-scale version of the entire calorimeter in the test stand of the neutral beam injectors of the LHD experiment, with the aim of characterising the beam features in various operating conditions. The extraction system of the NIFS test stand source was modified, by applying a mask to the first gridded electrode, in order to isolate only a subset of the beamlets, arranged in two 3×5 matrices, resembling the beamlet groups of the ITER beam sources. The present contribution gives a description of the design of the diagnostic system, including the numerical simulations of the expected thermal pattern. Moreover the dedicated thermocouple measurement system is presented. The beamlet monitor was successfully used for a full experimental campaign, during which the main parameters of the source, mainly the arc power and the grid voltages, were varied. This contribution describes the methods of fitting and data analysis applied to the infrared images of the camera to recover the beamlet optics characteristics, in order to quantify the response of the system to different operational conditions. Some results concerning the beamlet features are presented as a function of the source parameters.

  9. Specifications of a continual reassessment method design for phase I trials of combined drugs†

    PubMed Central

    Wages, Nolan A.; Conaway, Mark R.

    2013-01-01

    In studies of combinations of agents in phase I oncology trials, the dose–toxicity relationship may not be monotone for all combinations, in which case the toxicity probabilities follow a partial order. The continual reassessment method for partial orders (PO-CRM) is a design for phase I trials of combinations that leans upon identifying possible complete orders associated with the partial order. This article addresses some practical design considerations not previously undertaken when describing the PO-CRM. We describe an approach in choosing a proper subset of possible orderings, formulated according to the known toxicity relationships within a matrix of combination therapies. Other design issues, such as working model selection and stopping rules, are also discussed. We demonstrate the practical ability of PO-CRM as a phase I design for combinations through its use in a recent trial designed at the University of Virginia Cancer Center. PMID:23729323

  10. Texture analysis based on the Hermite transform for image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus

    2012-06-01

    Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, and multimedia indexing and retrieval. In order to extract visual texture features, a texture image analysis technique based on the Hermite transform is presented. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes locally basic texture features in terms of Gaussian derivatives. Multiresolution combined with several analysis orders provides detection of patterns that characterize every texture class. The analysis of the local maximum energy direction and the steering of the transformation coefficients increase the method's robustness to texture orientation. This method presents an advantage over classical filter bank design because in the latter a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve inter-class separability, reduce the dimensionality of the feature vectors, and lower the computational cost during the classification stage. We exhaustively evaluated the correct classification rate of real, randomly selected training and testing texture subsets using several kinds of commonly used texture features. A comparison between different distance measurements is also presented. Results of unsupervised real texture segmentation using this approach, and comparison with previous approaches, showed the benefits of our proposal.

  11. Altered dynamics of Candida albicans phagocytosis by macrophages and PMNs when both phagocyte subsets are present.

    PubMed

    Rudkin, Fiona M; Bain, Judith M; Walls, Catriona; Lewis, Leanne E; Gow, Neil A R; Erwig, Lars P

    2013-10-29

    An important first line of defense against Candida albicans infections is the killing of fungal cells by professional phagocytes of the innate immune system, such as polymorphonuclear cells (PMNs) and macrophages. In this study, we employed live-cell video microscopy coupled with dynamic image analysis tools to provide insights into the complexity of C. albicans phagocytosis when macrophages and PMNs were incubated with C. albicans alone and when both phagocyte subsets were present. When C. albicans cells were incubated with only one phagocyte subtype, PMNs had a lower overall phagocytic capacity than macrophages, despite engulfing fungal cells at a higher rate once fungal cells were bound to the phagocyte surface. PMNs were more susceptible to C. albicans-mediated killing than macrophages, irrespective of the number of C. albicans cells ingested. In contrast, when both phagocyte subsets were studied in coculture, the two cell types phagocytosed and cleared C. albicans at equal rates and were equally susceptible to killing by the fungus. The increase in macrophage susceptibility to C. albicans-mediated killing was a consequence of macrophages taking up a higher proportion of hyphal cells under these conditions. In the presence of both PMNs and macrophages, C. albicans yeast cells were predominantly cleared by PMNs, which migrated at a greater speed toward fungal cells and engulfed bound cells more rapidly. These observations demonstrate that the phagocytosis of fungal pathogens depends on, and is modified by, the specific phagocyte subsets present at the site of infection. Extensive work investigating fungal cell phagocytosis by macrophages and PMNs of the innate immune system has been carried out. These studies have been informative but have examined this phenomenon only when one phagocyte subset is present. The current study employed live-cell video microscopy to break down C. albicans phagocytosis into its component parts and examine the effect of a single phagocyte subset, versus a mixed phagocyte population, on these individual stages. Through this approach, we identified that the rate of fungal cell engulfment and rate of phagocyte killing altered significantly when both macrophages and PMNs were incubated in coculture with C. albicans compared to the rate of either phagocyte subset incubated alone with the fungus. This research highlights the significance of studying pathogen-host cell interactions with a combination of phagocytes in order to gain a greater understanding of the interactions that occur between cells of the host immune system in response to fungal invasion.

  12. A comparative analysis of somatolactin-related immunoreactivity in the pituitaries of four neopterygian fishes and one chondrostean fish: an immunohistochemical study.

    PubMed

    Dores, R M; Hoffman, N E; Chilcutt-Ruth, T; Lancha, A; Brown, C; Marra, L; Youson, J

    1996-04-01

    An antiserum to cod somatolactin (SL) was used for immunohistochemical screening of the pars intermedia of two teleosts (Oreochromis mossambicus and Gymnothorax meleagris), two holostean fishes (Lepisosteus osseus and Amia calva), and a chondrostean fish (Acipenser fulvescens) for SL-immunopositive (SL-IR) cells. As expected, a subset of the epithelial cells in the pars intermedia of O. mossambicus (tilapia) was immunopositive for SL, and the remainder of the epithelial cells was immunopositive for an alpha-MSH-specific antiserum (alpha-MSH-IR). SL-IR was not detected in any of the epithelial cells in the pars intermedia of the moray eel G. meleagris. To determine whether SL-IR could be detected in nonteleost fishes, immunohistochemical analyses were done on the pituitaries of two holostean fishes and one chondrostean fish. In the pars intermedia of the gar, L. osseus, a subset of cells was immunopositive for alpha-MSH only. However, in the pars intermedia of the bowfin, A. calva, all of the epithelial cells indicated the presence of both SL and alpha-MSH. Finally, no SL-positive cells were detected in the pars intermedia of the sturgeon, A. fulvescens.

  13. Detection of 224 candidate structured RNAs by comparative analysis of specific subsets of intergenic regions

    PubMed Central

    Lünse, Christina E.; Corbino, Keith A.; Ames, Tyler D.; Nelson, James W.; Roth, Adam; Perkins, Kevin R.; Sherlock, Madeline E.

    2017-01-01

    The discovery of structured non-coding RNAs (ncRNAs) in bacteria can reveal new facets of biology and biochemistry. Comparative genomics analyses executed by powerful computer algorithms have successfully been used to uncover many novel bacterial ncRNA classes in recent years. However, this general search strategy favors the discovery of more common ncRNA classes, whereas progressively rarer classes are correspondingly more difficult to identify. In the current study, we confront this problem by devising several methods to select subsets of intergenic regions that can concentrate these rare RNA classes, thereby increasing the probability that comparative sequence analysis approaches will reveal their existence. By implementing these methods, we discovered 224 novel ncRNA classes, which include ROOL RNA, an RNA class averaging 581 nt and present in multiple phyla, several highly conserved and widespread ncRNA classes with properties that suggest sophisticated biochemical functions, and a multitude of putative cis-regulatory RNA classes involved in a variety of biological processes. We expect that further research on these newly found RNA classes will reveal additional aspects of novel biology, and allow for greater insights into the biochemistry performed by ncRNAs. PMID:28977401

  14. On sample size and different interpretations of snow stability datasets

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arose of how capable such stability interpretations are of supporting such conclusions. There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements were needed to obtain results similar to those of the complete dataset (mainly stability differences in aspect or elevation). The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined for a given test, significance level and power, using the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset itself has an appropriate sample size. (ii) Smaller subsets were created with similar aspect distributions to the large dataset. We used 100 different subsets for each sample size. Statistical variations obtained in the complete dataset were also tested on the smaller subsets using the Mann-Whitney or the Kruskal-Wallis test. For each subset size, the number of subsets in which the significance level was reached was counted. For these tests no nominal data scale was assumed. (iii) For the same subsets described above, the distribution of the aspect median was determined. We counted how often this distribution differed substantially from the distribution obtained with the complete dataset. Since two valid stability interpretations were available (an objective and a subjective interpretation as described above), the effect of the arbitrary choice of the interpretation on spatial variability results was tested. In over one third of the cases the two interpretations came to different results. The effect of these differences was studied with a method similar to that described in (iii): the distribution of the aspect median was determined for subsets of the complete dataset using both interpretations and compared against each other as well as against the results of the complete dataset. For the complete dataset the two interpretations showed mainly identical results. Therefore the subset size was determined from the point at which the results of the two interpretations converged.
A universal result for the optimal subset size cannot be presented since results differed between the different situations contained in the dataset. The optimal subset size is thus dependent on the stability variation in a given situation, which is unknown initially. There are indications that for some situations even the complete dataset might not be large enough. At a subset size of approximately 25, the significant differences between aspect groups (as determined using the whole dataset) were only obtained in one out of five situations. In some situations, up to 20% of the subsets showed a substantially different distribution of the aspect median. Thus, in most cases, 25 measurements (which can be achieved by six two-person teams in one day) did not allow reliable conclusions to be drawn.
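
    The resampling exercise described above can be sketched as follows: repeatedly draw subsets of a given size and count how often a Mann-Whitney test between two aspect groups still reaches significance. The synthetic stability scores and group means are assumptions for illustration only.

```python
# Subsampling check of how often a group difference stays detectable.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
north = rng.normal(3.0, 1.0, 500)   # hypothetical stability scores, northerly aspects
south = rng.normal(3.4, 1.0, 500)   # slightly higher scores on southerly aspects

def detection_rate(subset_size: int, n_subsets: int = 100, alpha: float = 0.05) -> float:
    hits = 0
    for _ in range(n_subsets):
        a = rng.choice(north, subset_size, replace=False)
        b = rng.choice(south, subset_size, replace=False)
        if mannwhitneyu(a, b).pvalue < alpha:
            hits += 1
    return hits / n_subsets

for size in (10, 25, 50, 100):
    print(size, "measurements per group ->", detection_rate(size))
```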

  15. Metastable neural dynamics mediates expectation

    NASA Astrophysics Data System (ADS)

    Mazzucato, Luca; La Camera, Giancarlo; Fontanini, Alfredo

    Sensory stimuli are processed faster when their presentation is expected compared to when they come as a surprise. We previously showed that, in multiple single-unit recordings from alert rat gustatory cortex, taste stimuli can be decoded faster from neural activity if preceded by a stimulus-predicting cue. However, the specific computational process mediating this anticipatory neural activity is unknown. Here, we propose a biologically plausible model based on a recurrent network of spiking neurons with clustered architecture. In the absence of stimulation, the model neural activity unfolds through sequences of metastable states, each state being a population vector of firing rates. We modeled taste stimuli and cue (the same for all stimuli) as two inputs targeting subsets of excitatory neurons. As observed in experiment, stimuli evoked specific state sequences, characterized in terms of `coding states', i.e., states occurring significantly more often for a particular stimulus. When stimulus presentation is preceded by a cue, coding states show a faster and more reliable onset, and expected stimuli can be decoded more quickly than unexpected ones. This anticipatory effect is unrelated to changes of firing rates in stimulus-selective neurons and is absent in homogeneous balanced networks, suggesting that a clustered organization is necessary to mediate the expectation of relevant events. Our results demonstrate a novel mechanism for speeding up sensory coding in cortical circuits. NIDCD K25-DC013557 (LM); NIDCD R01-DC010389 (AF); NSF IIS-1161852 (GL).

  16. Stem-like plasticity and heterogeneity of circulating tumor cells: current status and prospect challenges in liver cancer

    PubMed Central

    Correnti, Margherita; Raggi, Chiara

    2017-01-01

    Poor prognosis and high recurrence remain leading causes of primary liver cancer-associated mortality. The spread of circulating tumor cells (CTCs) in the blood plays a major role in the initiation of metastasis and tumor recurrence after surgery. Nevertheless, only a subset of CTCs can survive, migrate to distant sites and establish secondary tumors. Consistent with the cancer stem cell (CSC) hypothesis, stem-like CTCs might represent a potential source of cancer relapse and distant metastasis. Thus, identification of a stem-like, metastasis-initiating CTC subset may provide clinically useful prognostic information. This review will emphasize the most relevant findings on CTCs in the context of stem-like biology associated with liver carcinogenesis. In this view, the emerging field of stem-like CTCs may deliver a substantial contribution to the liver cancer field, moving toward personalized approaches for diagnosis, prognosis and therapy. PMID:27738343

  17. A web-based platform for virtual screening.

    PubMed

    Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J

    2003-09-01

    A fully integrated, web-based virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes ATLAS, a large database of historical compounds from high-throughput screening (HTS) and chemical suppliers, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. In order to carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.

  18. Isolation and Flow Cytometry Analysis of Innate Lymphoid Cells from the Intestinal Lamina Propria.

    PubMed

    Gronke, Konrad; Kofoed-Nielsen, Michael; Diefenbach, Andreas

    2017-01-01

    The intestinal mucosa constitutes the biggest surface area of the body. It is constantly challenged by bacteria, commensal and pathogenic, protozoa, and food-derived irritants. In order to maintain homeostasis, a complex network of signaling circuits has evolved that includes contributions of immune cells. In recent years a subset of lymphocytes, which belong to the innate immune system, has attracted particular attention. These so-called innate lymphoid cells (ILC) reside within the lamina propria of the small and large intestines and rapidly respond to environmental challenges. They provide immunity to various types of infections but may also contribute to organ homeostasis as they produce factors acting on epithelial cells, thereby enhancing barrier integrity. Here, we describe how these cells can be isolated from their environment and provide an in-depth protocol for visualizing the various ILC subsets by flow cytometry.

  19. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

    The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology, which actually sits between methods for the convex feasibility problem and convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterates (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in the Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.

  20. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
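
    For context, the sketch below shows the crude Monte Carlo estimator of a small failure probability that subset simulation is designed to improve upon; the limit state function and input distribution are illustrative assumptions, not the paper's examples.

```python
# Crude Monte Carlo baseline for a small failure probability P(g(X) <= 0).
import numpy as np

def crude_monte_carlo_pf(limit_state, sample, n: int = 100_000, seed: int = 0) -> float:
    """Estimate the failure probability by plain Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    x = sample(rng, n)
    return float(np.mean(limit_state(x) <= 0.0))

# Hypothetical limit state: failure when the sum of two standard normals exceeds 5
def g(x):
    return 5.0 - x.sum(axis=1)

def sample(rng, n):
    return rng.standard_normal((n, 2))

print("estimated Pf:", crude_monte_carlo_pf(g, sample))
```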

  1. Comparison of Different EHG Feature Selection Methods for the Detection of Preterm Labor

    PubMed Central

    Alamedine, D.; Khalil, M.; Marque, C.

    2013-01-01

    Numerous types of linear and nonlinear features have been extracted from the electrohysterogram (EHG) in order to classify labor and pregnancy contractions. As a result, the number of available features is now very large. The goal of this study is to reduce the number of features by selecting only the relevant ones which are useful for solving the classification problem. This paper presents three methods for feature subset selection that can be applied to choose the best subsets for classifying labor and pregnancy contractions: an algorithm using the Jeffrey divergence (JD) distance, a sequential forward selection (SFS) algorithm, and a binary particle swarm optimization (BPSO) algorithm. The last two methods are based on a classifier and were tested with three types of classifiers. These methods have allowed us to identify common features which are relevant for contraction classification. PMID:24454536
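
    A minimal sketch of the sequential forward selection idea mentioned above is given below: greedily add the feature that most improves cross-validated accuracy of a chosen classifier. The synthetic data and the linear SVM wrapper are assumptions for demonstration, not the EHG features or classifiers of the study.

```python
# Greedy sequential forward selection wrapped around a classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)

def forward_select(X, y, clf, n_keep: int = 5):
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_keep:
        # score every candidate feature added to the current subset
        scores = {f: cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

print("selected features:", forward_select(X, y, SVC(kernel="linear")))
```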

  2. Beyond NK cells: the expanding universe of innate lymphoid cells.

    PubMed

    Cella, Marina; Miller, Hannah; Song, Christina

    2014-01-01

    For a long time, natural killer (NK) cells were thought to be the only innate immune lymphoid population capable of responding to invading pathogens under the influence of changing environmental cues. In the last few years, an increasing amount of evidence has shown that a number of different innate lymphoid cell (ILC) populations found at mucosal sites rapidly respond to locally produced cytokines in order to establish or maintain homeostasis. These ILC populations closely mirror the phenotype of adaptive T helper subsets in their repertoire of secreted soluble factors. Early in the immune response, ILCs are responsible for setting the stage to mount an adaptive T cell response that is appropriate for the incoming insult. Here, we review the diversity of ILC subsets and discuss similarities and differences between ILCs and NK cells in function and key transcriptional factors required for their development.

  3. VizieR Online Data Catalog: Short GRBs with Fermi GBM and Swift BAT (Burns+, 2016)

    NASA Astrophysics Data System (ADS)

    Burns, E.; Connaughton, V.; Zhang, B.-B.; Lien, A.; Briggs, M. S.; Goldstein, A.; Pelassa, V.; Troja, E.

    2018-01-01

    Compact binary system mergers are expected to generate gravitational radiation detectable by ground-based interferometers. A subset of these, the merger of a neutron star with another neutron star or a black hole, are also the most popular model for the production of short gamma-ray bursts (GRBs). The Swift Burst Alert Telescope (BAT) and the Fermi Gamma-ray Burst Monitor (GBM) trigger on short GRBs (SGRBs) at rates that reflect their relative sky exposures, with the BAT detecting 10 per year compared to about 45 for GBM. We examine the SGRB populations detected by Swift BAT and Fermi GBM. (4 data files).

  4. Magnetic resonance imaging for pain after surgical treatment for athletic pubalgia and the "sports hernia".

    PubMed

    Zoga, Adam C; Meyers, William C

    2011-09-01

    Magnetic resonance (MR) imaging technique and findings in the setting of athletic pubalgia, including injury at the rectus abdominis/adductor aponeurosis, are becoming widely recognized. A subset of these patients is treated with various pelvic floor repairs, mesh reinforcements, and tendon releases. Most of these patients do well after intervention, but some have persistent or refractory groin pain, and others eventually develop new injuries in the pubic region or elsewhere about the pelvic girdle. This review describes the expected and some unexpected MRI findings in patients with recurrent or persistent groin pain after a "sports hernia" repair. © Thieme Medical Publishers.

  5. Has microblogging changed stock market behavior? Evidence from China

    NASA Astrophysics Data System (ADS)

    Jin, Xi; Shen, Dehua; Zhang, Wei

    2016-06-01

    This paper examines the stock market behavior for a long-lived subset of firms in Shanghai and Shenzhen CSI 300 Index (CSI 300 Index) both before and after the establishment of firms' Microblogging in Sina Weibo. The empirical results show a significant increase in the relative trading volume as well as the decreases in the daily expected stock return and firm-level volatility in the post-Sina Weibo period. These findings suggest that Sina Weibo as an alternative information interaction channel has changed the information environment for individual stock, enhanced the speed of information diffusion and therefore changed the overall stock market behavior.

  6. [Myositis, polysynovitis and pulmonary fibrosis: anti-Jo-1 syndrome].

    PubMed

    Perrenoud, F G; Van Lindhoudt, D; Ochsner, F; Janzer, R C; Ott, H

    1996-01-27

    Polymyositis/dermatomyositis are rare autoimmune diseases. Classification is usually performed according to the criteria of Bohan and Peter. The occurrence of myositis-specific autoantibodies has recently been described in inflammatory myopathies. Approximately half of the patients can now be classified by these specific autoantibodies. Several of these autoantibodies (anti-aminoacyl-tRNA synthetases, anti-SRP, anti-Mi2) are strongly associated with the clinical presentation. We may expect that in the future different subsets of these diseases will be increasingly identified by serum antibodies. We report on a patient with myopathy, pulmonary fibrosis and polysynovitis, a typical clinical presentation of the anti-Jo1 syndrome (anti-synthetase syndrome).

  7. Quantitative and Qualitative Assessment of Yttrium-90 PET/CT Imaging

    PubMed Central

    Büsing, Karen-Anett; Schönberg, Stefan O.; Bailey, Dale L.; Willowson, Kathy; Glatting, Gerhard

    2014-01-01

    Yttrium-90 is known to have a low positron emission decay branch of 32 ppm that may allow for personalized dosimetry of liver cancer therapy with 90Y-labeled microspheres. The aim of this work was to image and quantify 90Y so that accurate predictions of the absorbed dose can be made. The measurements were performed within the QUEST study (University of Sydney, and Sirtex Medical, Australia). A NEMA IEC body phantom containing 6 fillable spheres (10-37 mm diameter) was used to measure the 90Y distribution with a Biograph mCT PET/CT (Siemens, Erlangen, Germany) with time-of-flight (TOF) acquisition. A sphere-to-background ratio of 8:1, with a total 90Y activity of 3 GBq, was used. Measurements were performed for one week (0, 3, 5 and 7 d). The acquisition protocol consisted of a 30 min, two-bed-position acquisition and a 120 min, single-bed-position acquisition. Images were reconstructed with 3D ordered subset expectation maximization (OSEM) and point spread function (PSF) modelling for iteration numbers of 1-12 with 21 (TOF) and 24 (non-TOF) subsets and CT-based attenuation and scatter correction. Convergence of the algorithms and activity recovery were assessed based on region-of-interest (ROI) analysis of the background (100 voxels), spheres (4 voxels) and the central low density insert (25 voxels). For the largest sphere, the recovery coefficient (RC) values for the 30 min two-bed-position, 30 min single-bed and 120 min single-bed acquisitions were 1.12±0.20, 1.14±0.13 and 0.97±0.07, respectively. For the smaller diameter spheres, the PSF algorithm with TOF and single-bed acquisition provided comparatively better activity recovery. Quantification of Y-90 using the Biograph mCT PET/CT is possible with reasonable accuracy, the limitations being the size of the lesion and the activity concentration present. At this stage, based on our study, it seems advantageous to use different protocols depending on the size of the lesion. PMID:25369020
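
    The recovery-coefficient analysis described above reduces to a simple ratio, sketched below with made-up numbers (not QUEST data): RC is the measured activity concentration in a sphere ROI divided by the true concentration.

```python
# Recovery coefficients for a set of phantom spheres (illustrative values only).
import numpy as np

true_concentration = 8.0  # arbitrary units; 8:1 sphere-to-background ratio vs background 1.0

sphere_diameters_mm = np.array([37, 28, 22, 17, 13, 10])
measured_concentration = np.array([7.8, 7.4, 6.5, 5.1, 3.6, 2.4])  # hypothetical ROI means

recovery_coefficient = measured_concentration / true_concentration
for d, rc in zip(sphere_diameters_mm, recovery_coefficient):
    print(f"{d:2d} mm sphere: RC = {rc:.2f}")
```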

  8. Evaluation and application of multiple scoring functions for a virtual screening experiment

    NASA Astrophysics Data System (ADS)

    Xing, Li; Hodgkin, Edward; Liu, Qian; Sedlock, David

    2004-05-01

    In order to identify novel chemical classes of factor Xa inhibitors, five scoring functions (FlexX, DOCK, GOLD, ChemScore and PMF) were used to evaluate the multiple docking poses generated by FlexX. The compound collection was composed of confirmed potent factor Xa inhibitors and a subset of the LeadQuest® screening compound library. Except for PMF, the other four scoring functions succeeded in reproducing the crystal complex (PDB code: 1FAX). During virtual screening the highest hit rate (80%) was demonstrated by FlexX at an energy cutoff of -40 kJ/mol, which is about 40-fold over random screening (2.06%). Limited results suggest that presenting more poses of a single molecule to the scoring functions could degrade their enrichment factors. A series of promising scaffolds with favorable binding scores was retrieved from LeadQuest. Consensus scoring by pair-wise intersection failed to enrich the hit rate yielded by single scorings (i.e., FlexX). We note that reported successes of consensus scoring in hit rate enrichment could be artificial because their comparisons were based on a selected subset of single scoring and a markedly reduced subset of double or triple scoring. The findings presented in this report are based upon a single biological system and support further studies.
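
    The enrichment factor underlying the hit rates quoted above is simple arithmetic, sketched below with illustrative counts (not the paper's data): the hit rate in the scored subset divided by the hit rate expected from random screening.

```python
# Enrichment factor = hit rate in the selected subset / hit rate of random screening.
def enrichment_factor(hits_selected: int, n_selected: int,
                      hits_total: int, n_total: int) -> float:
    hit_rate_selected = hits_selected / n_selected
    hit_rate_random = hits_total / n_total
    return hit_rate_selected / hit_rate_random

# e.g. an 80% hit rate in a scored subset vs a 2.06% rate for random screening
print(enrichment_factor(hits_selected=40, n_selected=50,
                        hits_total=206, n_total=10_000))  # roughly 39-fold
```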

  9. Performance Analysis of Relay Subset Selection for Amplify-and-Forward Cognitive Relay Networks

    PubMed Central

    Qureshi, Ijaz Mansoor; Malik, Aqdas Naveed; Zubair, Muhammad

    2014-01-01

    Cooperative communication is regarded as a key technology in wireless networks, including cognitive radio networks (CRNs): by allowing distributed terminals to collaborate through sophisticated signal processing, it increases the diversity order of the signal to combat the unfavorable effects of fading channels. Underlay CRNs impose strict interference constraints on the secondary users (SUs) active in the frequency band of the primary users (PUs), which limits their transmit power and their coverage area. Relay selection offers a potential solution to the challenges faced by underlay networks, by selecting either a single best relay or a subset of the potential relay set under different design requirements and assumptions. The best relay selection schemes proposed in the literature for amplify-and-forward (AF) based underlay cognitive relay networks have been well studied in terms of outage probability (OP) and bit error rate (BER), whereas such analysis is lacking for multiple relay selection schemes. The novelty of this work is to study the outage behavior of multiple relay selection in the underlay CRN and derive closed-form expressions for the OP and BER through the cumulative distribution function (CDF) of the SNR received at the destination. The effectiveness of relay subset selection is shown through simulation results. PMID:24737980
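
    A heavily simplified Monte Carlo sketch of best-relay outage probability is shown below, using the common min-SNR approximation for dual-hop AF relaying over Rayleigh fading; the underlay interference constraints and closed-form derivations of the paper are not modeled, and all parameters are assumptions.

```python
# Outage probability of best-relay selection under a simplified dual-hop model.
import numpy as np

def outage_probability(n_relays: int, mean_snr: float, threshold: float,
                       n_trials: int = 100_000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    # Exponentially distributed instantaneous SNRs for each hop of each relay
    snr_hop1 = rng.exponential(mean_snr, (n_trials, n_relays))
    snr_hop2 = rng.exponential(mean_snr, (n_trials, n_relays))
    end_to_end = np.minimum(snr_hop1, snr_hop2)   # min-SNR approximation for AF
    best = end_to_end.max(axis=1)                 # pick the best relay per trial
    return float(np.mean(best < threshold))

for k in (1, 2, 4):
    print(k, "relays:", outage_probability(k, mean_snr=10.0, threshold=2.0))
```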

  10. Enhancing the Discrimination Ability of a Gas Sensor Array Based on a Novel Feature Selection and Fusion Framework.

    PubMed

    Deng, Changjian; Lv, Kun; Shi, Debo; Yang, Bo; Yu, Song; He, Zhiyi; Yan, Jia

    2018-06-12

    In this paper, a novel feature selection and fusion framework is proposed to enhance the discrimination ability of gas sensor arrays for odor identification. Firstly, we put forward an efficient feature selection method based on separability and dissimilarity to determine the feature selection order for each type of feature when increasing the dimension of the selected feature subsets. Secondly, the k-nearest neighbor (KNN) classifier is applied to determine the dimensions of the optimal feature subsets for the different types of features. Finally, in the process of establishing the feature fusion, we propose a classification dominance feature fusion strategy that constructs an effective basic feature. Experimental results on two datasets show that the recognition rates on Database I and Database II reach 97.5% and 80.11%, respectively, when k = 1 for the KNN classifier and the distance metric is the correlation distance (COR), which demonstrates the superiority of the proposed feature selection and fusion framework in representing signal features. The novel feature selection method proposed in this paper can effectively select feature subsets that are conducive to classification, while the feature fusion framework can fuse various features which describe different characteristics of the sensor signals, enhancing the discrimination ability of gas sensors and, to a certain extent, suppressing the drift effect.
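
    The evaluation step quoted above (k = 1 KNN with correlation distance) can be sketched as follows; the gas-sensor data are replaced by a synthetic stand-in, so the accuracy printed is not comparable to the reported recognition rates.

```python
# 1-NN classifier with correlation distance, evaluated by cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=12, n_informative=6, random_state=0)

knn = KNeighborsClassifier(n_neighbors=1, metric="correlation", algorithm="brute")
scores = cross_val_score(knn, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.3f}")
```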

  11. Unique capabilities of AC frequency scanning and its implementation on a Mars Organic Molecule Analyzer linear ion trap.

    PubMed

    Snyder, Dalton T; Kaplan, Desmond A; Danell, Ryan M; van Amerom, Friso H W; Pinnick, Veronica T; Brinckerhoff, William B; Mahaffy, Paul R; Cooks, R Graham

    2017-06-21

    A limitation of conventional quadrupole ion trap scan modes which use rf amplitude control for mass scanning is that, in order to detect a subset of an ion population, the rest of the ion population must also be interrogated. That is, ions cannot be detected out of order; they must be detected in order of either increasing or decreasing mass-to-charge (m/z). However, an ion trap operated in the ac frequency scan mode, where the rf amplitude is kept constant and instead the ac frequency is used for mass-selective operations, has no such limitation because any variation in the ac frequency affects only the subset of ions whose secular frequencies match the perturbation frequency. Hence, an ion trap operated in the ac frequency scan mode can perform any arbitrary mass scan, as well as a sequence of scans, using a single ion injection; we demonstrate both capabilities here. Combining these two capabilities, we demonstrate the acquisition of a full mass spectrum, a product ion spectrum, and a second generation product ion spectrum using a single ion injection event. We further demonstrate a "segmented scan" in which different mass ranges are interrogated at different rf amplitudes in order to improve resolution over a portion of the mass range, and a "periodic scan" in which ions are continuously introduced into the ion trap to achieve a nearly 100% duty cycle. These unique scan modes, along with other characteristics of ac frequency scanning, are particularly appropriate for miniature ion trap mass spectrometers. Hence, implementation of ac frequency scanning on a prototype of the Mars Organic Molecule Analyzer mass spectrometer is also described.

  12. NIH Toolbox Cognition Battery (NIHTB-CB): list sorting test to measure working memory.

    PubMed

    Tulsky, David S; Carlozzi, Noelle; Chiaravalloti, Nancy D; Beaumont, Jennifer L; Kisala, Pamela A; Mungas, Dan; Conway, Kevin; Gershon, Richard

    2014-07-01

    The List Sorting Working Memory Test was designed to assess working memory (WM) as part of the NIH Toolbox Cognition Battery. List Sorting is a sequencing task requiring children and adults to sort and sequence stimuli that are presented visually and auditorily. Validation data are presented for 268 participants ages 20 to 85 years. A subset of participants (N=89) was retested 7 to 21 days later. As expected, the List Sorting Test had moderately high correlations with other measures of working memory and executive functioning (convergent validity) but a low correlation with a test of receptive vocabulary (discriminant validity). Furthermore, List Sorting demonstrates expected changes over the age span and has excellent test-retest reliability. Collectively, these results provide initial support for the construct validity of the List Sorting Working Memory Measure as a measure of working memory. However, the relationship between the List Sorting Test and general executive function has yet to be determined.

  13. NIH Toolbox Cognition Battery (NIHTB-CB): The List Sorting Test to Measure Working Memory

    PubMed Central

    Tulsky, David S.; Carlozzi, Noelle; Chiaravalloti, Nancy D.; Beaumont, Jennifer L.; Kisala, Pamela A.; Mungas, Dan; Conway, Kevin; Gershon, Richard

    2015-01-01

    The List Sorting Working Memory Test was designed to assess working memory (WM) as part of the NIH Toolbox Cognition Battery. List Sorting is a sequencing task requiring children and adults to sort and sequence stimuli that are presented visually and auditorily. Validation data are presented for 268 participants ages 20 to 85 years. A subset of participants (N=89) was retested 7 to 21 days later. As expected, the List Sorting Test had moderately high correlations with other measures of working memory and executive functioning (convergent validity) but a low correlation with a test of receptive vocabulary (discriminant validity). Furthermore, List Sorting demonstrates expected changes over the age span and has excellent test-retest reliability. Collectively, these results provide initial support for the construct validity of the List Sorting Working Memory Measure as a measure of working memory. However, the relationship between the List Sorting Test and general executive function has yet to be determined. PMID:24959983

  14. Bridging the "Expectation Gap" Using Student Preceptors

    ERIC Educational Resources Information Center

    Koerner, Michael

    2017-01-01

    An "Expectation Gap" can exist between what teachers expect of their students and what effort students expect to and are willing to expend. In order to get students and teachers on the same learning page, this Gap needs remedied. One successful means of bridging the Gap is the use of Student Preceptors.

  15. Volunteering for Job Enrichment: A Test of Expectancy Theory Predictions

    ERIC Educational Resources Information Center

    Giles, William F.

    1977-01-01

    In order to test predictions derived from an expectancy theory model developed by E. E. Lawler, measures of higher-order need satisfaction, locus of control, and intrinsic motivation were obtained from 252 female assembly line workers. Implications of the results for placement of individuals in enriched jobs are discussed. (Editor/RK)

  16. Computational discovery of stable M2AX phases

    NASA Astrophysics Data System (ADS)

    Ashton, Michael; Hennig, Richard G.; Broderick, Scott R.; Rajan, Krishna; Sinnott, Susan B.

    2016-08-01

    The family of layered Mn+1AXn compounds provides a large class of materials with applications ranging from magnets to high-temperature coatings to nuclear cladding. In this work, we employ a density-functional-theory-based discovery approach to identify a large number of thermodynamically stable Mn+1AXn compounds, where n = 1; M = Sc, Ti, V, Cr, Zr, Nb, Mo, Hf, Ta; A = Al, Si, P, S, Ga, Ge, As, Cd, In, Sn, Tl, Pb; and X = C, N. We calculate the formation energy for 216 pure M2AX compounds and 10 314 solid solutions, (M,M')2(A,A')(X,X'), relative to their competing phases. We find that the 49 experimentally known M2AX phases exhibit formation energies of less than 30 meV/atom. Among the 10 530 compositions considered, 3140 exhibit formation energies below 30 meV/atom, most of which have yet to be experimentally synthesized. A significant subset of 301 compositions exhibits strong exothermic stability in excess of 100 meV/atom, indicating favorable synthesis conditions. We identify empirical design rules for stable M2AX compounds. Among the metastable M2AX compounds are two Cr-based compounds with ferromagnetic ordering and expected Curie temperatures around 75 K. These results can serve as a map for the experimental design and synthesis of different M2AX compounds.

  17. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors' 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
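
    A generic, textbook-style ordered subset expectation maximization update is sketched below for readers unfamiliar with the algorithm used in the reconstruction step above; it assumes a dense system matrix and a toy problem, and is not the authors' implementation.

```python
# Minimal OSEM sketch: multiplicative updates over ordered subsets of projections.
import numpy as np

def osem(A: np.ndarray, y: np.ndarray, n_subsets: int = 4, n_iter: int = 10) -> np.ndarray:
    """A is the (n_bins x n_voxels) system matrix, y the measured projections."""
    n_bins, n_voxels = A.shape
    x = np.ones(n_voxels)                              # start from a flat image
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    eps = 1e-12
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]                                  # projection rows of this subset
            ratio = y[s] / (As @ x + eps)              # measured / estimated projections
            x *= (As.T @ ratio) / (As.sum(axis=0) + eps)  # multiplicative EM update
    return x

# Tiny toy problem: 4-voxel "image" observed through random projections
rng = np.random.default_rng(4)
A = rng.random((16, 4))
x_true = np.array([1.0, 4.0, 2.0, 0.5])
y = rng.poisson(A @ x_true)                            # Poisson-noisy projection data
print(osem(A, y))
```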

  18. Creation of an ensemble of simulated cardiac cases and a human observer study: tools for the development of numerical observers for SPECT myocardial perfusion imaging

    NASA Astrophysics Data System (ADS)

    O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.

    2012-02-01

    Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map but with use of a priori calculated average image derived from an ensemble of normal cases.

  19. Completing the gaps in Kilauea's Father's Day InSAR displacement signature with ScanSAR

    NASA Astrophysics Data System (ADS)

    Bertran Ortiz, A.; Pepe, A.; Lanari, R.; Lundgren, P.; Rosen, P. A.

    2009-12-01

    Currently there are gaps in the known displacement signature obtained with InSAR at Kilauea between 2002 and 2009. InSAR data can be richer than GPS because of their denser spatial coverage; however, for modelling rapidly varying and non-steady geophysical events InSAR is limited by its sparser temporal sampling of the area under study. The ScanSAR mode currently available on several satellites mitigates this effect because the satellite may illuminate a given area more than once within an orbit cycle. The Kilauea displacement graph from the Istituto per il Rilevamento Elettromagnetico dell'Ambiente (IREA) is a cut in space of the displacement signature obtained from a time series of several stripmap-to-stripmap interferograms. It shows that critical information is missing, especially between 2006 and 2007. The displacement is expected to be non-linear judging from the 2007-2008 displacement signature, so simple interpolation would not suffice. The gap can be filled by incorporating Envisat stripmap-to-ScanSAR interferograms available during that time period. We propose leveraging JPL's new ROI-PAC ScanSAR module to create stripmap-to-ScanSAR interferograms. The new interferograms will be added to the stripmap ones in order to extend the existing stripmap time series generated using the Small BAseline Subset (SBAS) technique. At AGU we will present denser graphs that better capture Kilauea's displacement between 2003 and 2009.
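
    A minimal sketch of the per-pixel least-squares inversion at the heart of the SBAS technique mentioned above: each small-baseline interferogram constrains the sum of the incremental displacements between its two acquisition dates. The date indices, interferogram pairs, phase values and wavelength below are illustrative, not ROI-PAC output.

      import numpy as np

      # Hypothetical acquisition dates (indices 0..4) and small-baseline interferogram pairs.
      pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
      dphi  = np.array([0.4, 0.3, 0.7, 0.5, 0.6, 1.1])     # unwrapped phase per pair (radians)

      n_dates = 5
      # Each row sums the incremental phases between the pair's master and slave dates.
      A = np.zeros((len(pairs), n_dates - 1))
      for row, (i, j) in enumerate(pairs):
          A[row, i:j] = 1.0

      increments, *_ = np.linalg.lstsq(A, dphi, rcond=None)
      cumulative_phase = np.concatenate([[0.0], np.cumsum(increments)])

      wavelength = 0.056                                    # C-band wavelength in metres (assumed)
      displacement = cumulative_phase * wavelength / (4 * np.pi)
      print("cumulative LOS displacement (m):", np.round(displacement, 4))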

  20. On normality, ethnicity, and missing values in quantitative trait locus mapping

    PubMed Central

    Labbe, Aurélie; Wormald, Hanna

    2005-01-01

    Background This paper deals with the detection of significant linkage for quantitative traits using a variance components approach. Microsatellite markers were obtained for the Genetic Analysis Workshop 14 Collaborative Study on the Genetics of Alcoholism data. Ethnic heterogeneity, highly skewed quantitative measures, and a high rate of missing values are all present in this dataset and well known to impact upon linkage analysis. This makes it a good candidate for investigation. Results As expected, we observed a number of changes in LOD scores, especially for chromosomes 1, 7, and 18, along with the three factors studied. A dramatic example of such changes can be found in chromosome 7. Highly significant linkage to one of the quantitative traits became insignificant when a proper normalizing transformation of the trait was used and when analysis was carried out on an ethnically homogeneous subset of the original pedigrees. Conclusion In agreement with existing literature, transforming a trait to ensure normality using a Box-Cox transformation is highly recommended in order to avoid false-positive linkages. Furthermore, pedigrees should be sorted by ethnic groups and analyses should be carried out separately. Finally, one should be aware that the inclusion of covariates with a high rate of missing values reduces considerably the number of subjects included in the model. In such a case, the loss in power may be large. Imputation methods are then recommended. PMID:16451664
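
    A minimal sketch of the Box-Cox normalization recommended above, applied to a simulated skewed quantitative trait; the simulated trait and the normality check are illustrative and not part of the GAW14 analysis.

      import numpy as np
      from scipy import stats

      # Hypothetical skewed trait (Box-Cox requires strictly positive values).
      trait = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.8, size=500)

      # Estimate the Box-Cox exponent lambda by maximum likelihood and transform the trait.
      trait_bc, lam = stats.boxcox(trait)

      # Compare normality before and after the transformation (Shapiro-Wilk p-values).
      print("lambda =", round(lam, 3))
      print("raw p =", stats.shapiro(trait).pvalue, " transformed p =", stats.shapiro(trait_bc).pvalue)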

  1. A new method for spatial structure detection of complex inner cavities based on 3D γ-photon imaging

    NASA Astrophysics Data System (ADS)

    Xiao, Hui; Zhao, Min; Liu, Jiantang; Liu, Jiao; Chen, Hao

    2018-05-01

    This paper presents a new three-dimensional (3D) imaging method for detecting the spatial structure of a complex inner cavity based on positron annihilation and γ-photon detection. This method first marks carrier solution by a certain radionuclide and injects it into the inner cavity where positrons are generated. Subsequently, γ-photons are released from positron annihilation, and the γ-photon detector ring is used for recording the γ-photons. Finally, the two-dimensional (2D) image slices of the inner cavity are constructed by the ordered-subset expectation maximization scheme and the 2D image slices are merged to the 3D image of the inner cavity. To eliminate the artifact in the reconstructed image due to the scattered γ-photons, a novel angle-traversal model is proposed for γ-photon single-scattering correction, in which the path of the single scattered γ-photon is analyzed from a spatial geometry perspective. Two experiments are conducted to verify the effectiveness of the proposed correction model and the advantage of the proposed testing method in detecting the spatial structure of the inner cavity, including the distribution of gas-liquid multi-phase mixture inside the inner cavity. The above two experiments indicate the potential of the proposed method as a new tool for accurately delineating the inner structures of industrial complex parts.

  2. Phantom experiments to improve parathyroid lesion detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Kenneth J.; Tronco, Gene G.; Tomas, Maria B.

    2007-12-15

    This investigation tested the hypothesis that visual analysis of iteratively reconstructed tomograms by ordered subset expectation maximization (OSEM) provides the highest accuracy for localizing parathyroid lesions using 99mTc-sestamibi SPECT data. From an Institutional Review Board approved retrospective review of 531 patients evaluated for parathyroid localization, image characteristics were determined for 85 99mTc-sestamibi SPECT studies originally read as equivocal (EQ). Seventy-two plexiglas phantoms using cylindrical simulated lesions were acquired for a clinically realistic range of counts (mean simulated lesion counts of 75±50 counts/pixel) and target-to-background (T:B) ratios (range=2.0 to 8.0) to determine an optimal filter for OSEM. Two experienced nuclear physicians graded simulated lesions, blinded to whether chambers contained radioactivity or plain water, and two observers used the same scale to read all phantom and clinical SPECT studies, blinded to pathology findings and clinical information. For phantom data and all clinical data, T:B analyses were not statistically different for OSEM versus FB, but visual readings were significantly more accurate than T:B (88±6% versus 68±6%, p=0.001) for OSEM processing, and OSEM was significantly more accurate than FB for visual readings (88±6% versus 58±6%, p<0.0001). These data suggest that visual analysis of iteratively reconstructed MIBI tomograms should be incorporated into imaging protocols performed to localize parathyroid lesions.

  3. Small-Scale Experiments.10-gallon drum experiment summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, David M.

    2015-02-05

    A series of sub-scale (10-gallon) drum experiments were conducted to characterize the reactivity, heat generation, and gas generation of mixtures of chemicals believed to be present in the drum (68660) known to have breached in association with the radiation release event at the Waste Isolation Pilot Plant (WIPP) on February 14, 2014, at a scale expected to be large enough to replicate the environment in that drum but small enough to be practical, safe, and cost effective. These tests were not intended to replicate all the properties of drum 68660 or the event that led to its breach, or to validate a particular hypothesis of the release event. They were intended to observe, in a controlled environment and with suitable diagnostics, the behavior of simple mixtures of chemicals in order to determine if they could support reactivity that could result in ignition or if some other ingredient or event would be necessary. There is a significant amount of uncertainty in the exact composition of the barrel; a limited sub-set of known components was identified, reviewed with Technical Assessment Team (TAT) members, and used in these tests. This set of experiments was intended to provide a framework to postulate realistic, data-supported hypotheses for processes that occur in a “68660-like” configuration, not to definitively prove what actually occurred in 68660.

  4. β-hCG resolution times during expectant management of tubal ectopic pregnancies.

    PubMed

    Mavrelos, D; Memtsa, M; Helmy, S; Derdelis, G; Jauniaux, E; Jurkovic, D

    2015-05-21

    A subset of women with a tubal ectopic pregnancy can be safely managed expectantly. Expectant management involves a degree of disruption with hospital visits to determine serum β-hCG (β-human chorionic gonadotrophin) concentration until the pregnancy test becomes negative and expectant management is considered complete. The length of time required for the pregnancy test to become negative and the parameters that influence this interval have not been described. Information on the likely length of follow up would be useful for women considering expectant management of their tubal ectopic pregnancy. This was a retrospective study at a tertiary referral center in an inner-city London hospital. We included women who were diagnosed with a tubal ectopic pregnancy by transvaginal ultrasound between March 2009 and March 2014. During the study period 474 women were diagnosed with a tubal ectopic pregnancy and 256 (54%) of them fulfilled our criteria for expectant management. A total of 158 (33%) women had successful expectant management and in those cases we recorded the diameter of the ectopic pregnancy (mm), the maximum serum β-hCG (IU/L) and levels during follow up until resolution as well as the interval to resolution (days). The median interval from maximum serum β-hCG concentration to resolution was 18.0 days (IQR 11.0-28.0). The maximum serum β-hCG concentration and the rate of decline of β-hCG were independently associated with the length of follow up. Women's age and size of ectopic pregnancy did not have significant effects on the length of follow up. Women undergoing expectant management of ectopic pregnancy can be informed that the likely length of follow up is under 3 weeks and that it positively correlates with initial β-hCG level at the time of diagnosis.

  5. Frontiers in Applied and Computational Mathematics 05’

    DTIC Science & Technology

    2005-03-01

    dynamics, forcing subsets to have the same oscillation numbers and interleaving spiking times. Our analysis follows the theory of coupled systems of...continuum is described by a continuous-time stochastic process, as are their internal dynamics. Soluble factors, such as cytokines, are represented...scale of a particle passage time through the reaction zone. Both are realistic for many systems of physical interest. A higher order theory includes

  6. A method for simplifying the analysis of traffic accidents injury severity on two-lane highways using Bayesian networks.

    PubMed

    Mujalli, Randa Oqab; de Oña, Juan

    2011-10-01

    This study describes a method for reducing the number of variables frequently considered in modeling the severity of traffic accidents. The method's efficiency is assessed by constructing Bayesian networks (BN). It is based on a two stage selection process. Several variable selection algorithms, commonly used in data mining, are applied in order to select subsets of variables. BNs are built using the selected subsets and their performance is compared with the original BN (with all the variables) using five indicators. The BNs that improve the indicators' values are further analyzed for identifying the most significant variables (accident type, age, atmospheric factors, gender, lighting, number of injured, and occupant involved). A new BN is built using these variables, where the results of the indicators indicate, in most of the cases, a statistically significant improvement with respect to the original BN. It is possible to reduce the number of variables used to model traffic accidents injury severity through BNs without reducing the performance of the model. The study provides the safety analysts a methodology that could be used to minimize the number of variables used in order to determine efficiently the injury severity of traffic accidents without reducing the performance of the model. Copyright © 2011 Elsevier Ltd. All rights reserved.
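
    A minimal sketch of the first stage described above: ranking candidate variables with a data-mining selection criterion and keeping only a subset before building the Bayesian network. The simulated accident records, the mutual-information criterion and the cut-off of five variables are assumptions, not the algorithms used in the study.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      rng = np.random.default_rng(1)
      # Hypothetical accident records: 10 categorical predictors, binary injury-severity outcome.
      X = rng.integers(0, 3, size=(1000, 10))
      severity = (X[:, 0] + X[:, 3] + rng.normal(0, 1, 1000) > 3).astype(int)

      # Rank variables by mutual information with the outcome and keep the strongest subset.
      mi = mutual_info_classif(X, severity, discrete_features=True, random_state=0)
      keep = np.argsort(mi)[::-1][:5]
      print("selected variable indices:", keep, "scores:", np.round(mi[keep], 3))

    In the study, the retained subset would then be used to learn a reduced Bayesian network whose performance indicators are compared against those of the network built from all variables.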

  7. 3D tomographic imaging with the γ-eye planar scintigraphic gamma camera

    NASA Astrophysics Data System (ADS)

    Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.

    2017-11-01

    γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole body, small-animal imaging. This investigation tests the viability of using γ-eye for the collection of tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), have been compared. Reconstructions have been performed using QSPECT’s implementation of the OSEM algorithm and STIR’s OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data have been assessed in terms of spatial resolution, sensitivity to varying activity levels and uniformity. The effect of varying the number of iterations, the voxel size (1.25 mm default voxel size reduced to 0.625 mm and 0.3125 mm), the point spread function correction and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in terms of sensitivity was seen throughout.

  8. Linkage Analysis Using Co-Phenotypes in the BRIGHT Study Reveals Novel Potential Susceptibility Loci for Hypertension

    PubMed Central

    Wallace, Chris; Xue, Ming-Zhan; Newhouse, Stephen J.; Marçano, Ana Carolina B.; Onipinla, Abiodun K.; Burke, Beverley; Gungadoo, Johannie; Dobson, Richard J.; Brown, Morris; Connell, John M.; Dominiczak, Anna; Lathrop, G. Mark; Webster, John; Farrall, Martin; Mein, Charles; Samani, Nilesh J.; Caulfield, Mark J.; Clayton, David G.; Munroe, Patricia B.

    2006-01-01

    Identification of the genetic influences on human essential hypertension and other complex diseases has proved difficult, partly because of genetic heterogeneity. In many complex-trait resources, additional phenotypic data have been collected, allowing comorbid intermediary phenotypes to be used to characterize more genetically homogeneous subsets. The traditional approach to analyzing covariate-defined subsets has typically depended on researchers’ previous expectations for definition of a comorbid subset and leads to smaller data sets, with a concomitant attrition in power. An alternative is to test for dependence between genetic sharing and covariates across the entire data set. This approach offers the advantage of exploiting the full data set and could be widely applied to complex-trait genome scans. However, existing maximum-likelihood methods can be prohibitively computationally expensive, especially since permutation is often required to determine significance. We developed a less computationally intensive score test and applied it to biometric and biochemical covariate data, from 2,044 sibling pairs with severe hypertension, collected by the British Genetics of Hypertension (BRIGHT) study. We found genomewide-significant evidence for linkage with hypertension and several related covariates. The strongest signals were with leaner-body-mass measures on chromosome 20q (maximum LOD=4.24) and with parameters of renal function on chromosome 5p (maximum LOD=3.71). After correction for the multiple traits and genetic locations studied, our global genomewide P value was .046. This is the first identity-by-descent regression analysis of hypertension to our knowledge, and it demonstrates the value of this approach for the incorporation of additional phenotypic information in genetic studies of complex traits. PMID:16826522

  9. Linkage analysis using co-phenotypes in the BRIGHT study reveals novel potential susceptibility loci for hypertension.

    PubMed

    Wallace, Chris; Xue, Ming-Zhan; Newhouse, Stephen J; Marcano, Ana Carolina B; Onipinla, Abiodun K; Burke, Beverley; Gungadoo, Johannie; Dobson, Richard J; Brown, Morris; Connell, John M; Dominiczak, Anna; Lathrop, G Mark; Webster, John; Farrall, Martin; Mein, Charles; Samani, Nilesh J; Caulfield, Mark J; Clayton, David G; Munroe, Patricia B

    2006-08-01

    Identification of the genetic influences on human essential hypertension and other complex diseases has proved difficult, partly because of genetic heterogeneity. In many complex-trait resources, additional phenotypic data have been collected, allowing comorbid intermediary phenotypes to be used to characterize more genetically homogeneous subsets. The traditional approach to analyzing covariate-defined subsets has typically depended on researchers' previous expectations for definition of a comorbid subset and leads to smaller data sets, with a concomitant attrition in power. An alternative is to test for dependence between genetic sharing and covariates across the entire data set. This approach offers the advantage of exploiting the full data set and could be widely applied to complex-trait genome scans. However, existing maximum-likelihood methods can be prohibitively computationally expensive, especially since permutation is often required to determine significance. We developed a less computationally intensive score test and applied it to biometric and biochemical covariate data, from 2,044 sibling pairs with severe hypertension, collected by the British Genetics of Hypertension (BRIGHT) study. We found genomewide-significant evidence for linkage with hypertension and several related covariates. The strongest signals were with leaner-body-mass measures on chromosome 20q (maximum LOD = 4.24) and with parameters of renal function on chromosome 5p (maximum LOD = 3.71). After correction for the multiple traits and genetic locations studied, our global genomewide P value was .046. This is the first identity-by-descent regression analysis of hypertension to our knowledge, and it demonstrates the value of this approach for the incorporation of additional phenotypic information in genetic studies of complex traits.

  10. Brain mediators of predictive cue effects on perceived pain

    PubMed Central

    Atlas, Lauren Y.; Bolger, Niall; Lindquist, Martin A.; Wager, Tor D.

    2010-01-01

    Information about upcoming pain strongly influences pain experience in experimental and clinical settings, but little is known about the brain mechanisms that link expectation and experience. To identify the pathways by which informational cues influence perception, analyses must jointly consider both the effects of cues on brain responses and the relationship between brain responses and changes in reported experience. Our task and analysis strategy were designed to test these relationships. Auditory cues elicited expectations for low or high painful thermal stimulation, and we assessed how cues influenced human subjects’ pain reports and BOLD fMRI responses to matched levels of noxious heat. We used multi-level mediation analysis to identify brain regions that 1) are modulated by predictive cues, 2) predict trial-to-trial variations in pain reports, and 3) formally mediate the relationship between cues and reported pain. Cues influenced heat-evoked responses in most canonical pain-processing regions, including both medial and lateral pain pathways. Effects on several regions correlated with pre-task expectations, suggesting that expectancy plays a prominent role. A subset of pain-processing regions, including anterior cingulate cortex, anterior insula, and thalamus, formally mediated cue effects on pain. Effects on these regions were in turn mediated by cue-evoked anticipatory activity in the medial orbitofrontal cortex (OFC) and ventral striatum, areas not previously directly implicated in nociception. These results suggest that activity in pain-processing regions reflects a combination of nociceptive input and top-down information related to expectations, and that anticipatory processes in OFC and striatum may play a key role in modulating pain processing. PMID:20881115
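
    A minimal single-level sketch of the product-of-coefficients logic behind the mediation analysis described above (cue -> brain response -> pain report); the simulated data, simple OLS fits and bootstrap interval are illustrative and much simpler than the authors' multi-level fMRI pipeline.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 200
      cue = rng.integers(0, 2, n).astype(float)              # low/high pain cue
      brain = 0.8 * cue + rng.normal(0, 1, n)                # mediator, e.g. a region's response
      pain = 0.5 * brain + 0.2 * cue + rng.normal(0, 1, n)   # reported pain

      def ols(y, cols):
          """Least-squares fit with an intercept; returns the coefficient vector."""
          X = np.column_stack([np.ones(len(y))] + list(cols))
          return np.linalg.lstsq(X, y, rcond=None)[0]

      a = ols(brain, [cue])[1]            # path a: cue -> mediator
      b = ols(pain, [cue, brain])[2]      # path b: mediator -> outcome, controlling for the cue
      indirect = a * b                    # mediated (indirect) effect

      # Bootstrap confidence interval for the indirect effect.
      boot = []
      for _ in range(2000):
          idx = rng.integers(0, n, n)
          a_s = ols(brain[idx], [cue[idx]])[1]
          b_s = ols(pain[idx], [cue[idx], brain[idx]])[2]
          boot.append(a_s * b_s)
      print("indirect effect:", round(indirect, 3),
            "95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))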

  11. Assessment of three risk evaluation systems for patients aged ≥70 in East China: performance of SinoSCORE, EuroSCORE II and the STS risk evaluation system.

    PubMed

    Shan, Lingtong; Ge, Wen; Pu, Yiwei; Cheng, Hong; Cang, Zhengqiang; Zhang, Xing; Li, Qifan; Xu, Anyang; Wang, Qi; Gu, Chang; Zhang, Yangyang

    2018-01-01

    To assess and compare the predictive ability of three risk evaluation systems (SinoSCORE, EuroSCORE II and the STS risk evaluation system) in patients aged ≥70 who underwent coronary artery bypass grafting (CABG) in East China. Three risk evaluation systems were applied to 1,946 consecutive patients who underwent isolated CABG from January 2004 to September 2016 in two hospitals. Patients were divided into two subsets according to their age: an elderly group (age ≥70), with a younger group (age <70) used for comparison. The outcome of interest in this study was in-hospital mortality. The entire cohort and subsets of patients were analyzed. The calibration and discrimination in total and in subsets were assessed by the Hosmer-Lemeshow test and the C statistic, respectively. Institutional overall mortality was 2.52%. The expected mortality rates of SinoSCORE, EuroSCORE II and the STS risk evaluation system were 0.78(0.64)%, 1.43(1.14)% and 0.78(0.77)%, respectively. SinoSCORE achieved the best discrimination (area under the receiver operating characteristic curve (AUC) = 0.829), followed by the STS risk evaluation system (AUC = 0.790) and EuroSCORE II (AUC = 0.769) in the entire cohort. In the elderly group, the observed mortality rate was 4.82% while it was 1.38% in the younger group. SinoSCORE (AUC = 0.829) also achieved the best discrimination in the elderly group, followed by the STS risk evaluation system (AUC = 0.730) and EuroSCORE II (AUC = 0.640), while all three risk evaluation systems had good performance in the younger group. SinoSCORE, EuroSCORE II and the STS risk evaluation system all achieved positive calibrations in the entire cohort and subsets. The performance of the three risk evaluation systems was not ideal in the entire cohort. In the elderly group, SinoSCORE appeared to achieve better predictive efficiency than EuroSCORE II and the STS risk evaluation system.
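
    A minimal sketch of how the discrimination (C statistic) and a decile-based calibration check of the kind reported above can be computed from predicted risks and observed in-hospital mortality; the simulated risks and outcomes are illustrative, not the study's data.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      n = 1946
      predicted_risk = rng.beta(1, 30, n)                          # hypothetical predicted mortality
      died = rng.binomial(1, np.clip(predicted_risk * 2, 0, 1))    # simulated in-hospital outcome

      print("C statistic (AUC):", round(roc_auc_score(died, predicted_risk), 3))

      # Hosmer-Lemeshow-style check: observed versus expected deaths within risk deciles.
      cuts = np.quantile(predicted_risk, np.linspace(0.1, 0.9, 9))
      decile = np.digitize(predicted_risk, cuts)
      for d in range(10):
          mask = decile == d
          print("decile", d, "observed:", int(died[mask].sum()),
                "expected:", round(float(predicted_risk[mask].sum()), 1))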

  12. A statistical study of merging galaxies: Theory and observations

    NASA Technical Reports Server (NTRS)

    Chatterjee, Tapan K.

    1990-01-01

    A study of the expected frequency of merging galaxies is conducted, using the impulsive approximation. Results indicate that if we consider mergers involving galaxy pairs without halos in a single crossing time or orbital period, the expected frequency of mergers is two orders of magnitude below the observed value for the present epoch. If we consider mergers involving several orbital periods or crossing times, the expected frequency goes up by an order of magnitude. Preliminary calculations indicate that if we consider galaxy mergers between pairs with massive halos, the merger is very much hastened.

  13. CD27 natural killer cell subsets play different roles during the pre-onset stage of experimental autoimmune encephalomyelitis.

    PubMed

    Gao, Ming; Yang, Yan; Li, Daling; Ming, Bingxia; Chen, Huoying; Sun, Yan; Xiao, Yifan; Lai, Lin; Zou, Huijuan; Xu, Yong; Xiong, Ping; Tan, Zheng; Gong, Feili; Zheng, Fang

    2016-08-01

    NK cells participate in the development of human multiple sclerosis (MS) and mouse experimental autoimmune encephalomyelitis (EAE), but the roles of different NK cell subsets in disease onset remain poorly understood. In this study, murine NK cells were divided into CD27(high) and CD27(low/-) subsets. The CD27(high) subset was decreased and the CD27(low/-) subset was increased in lymphoid organs during the pre-onset stage of EAE. Compared with the counterpart in naïve mice, the CD27(high) subset showed lower expression of Ly49D, Ly49H and NKG2D, and less production of IFN-γ, whereas the CD27(low/-) subset showed similar expression of the above mentioned surface receptors but higher cytotoxic activity in EAE mice. Compared with the CD27(high) subset, the CD27(low/-) subset exhibited increased promotion of DC maturation and no significant inhibition of T cell proliferation and Th17 cell differentiation in vitro. Additionally, adoptive transfer of the CD27(low/-) subset, but not the CD27(high) subset, exacerbated the severity of EAE. Collectively, our data suggest the CD27 NK cell subsets play different roles in controlling EAE onset, which provides a new understanding for the regulation of NK cell subsets in early autoimmune disease. © The Author(s) 2016.

  14. Comprehensive Astronaut Immune Assessment Following a Short-Duration Space Flight

    NASA Technical Reports Server (NTRS)

    Crucian, Brian; Stowe, Raymond; Yetman, Deborah; Pierson, Duane; Sams, Clarence

    2006-01-01

    Immune system dysregulation has been demonstrated to occur during spaceflight and has the potential to cause serious health risks to crewmembers participating in exploration class missions. As a part of an ongoing NASA flight experiment assessing viral immunity (DSO-500), a generalized immune assessment was performed on 3 crewmembers who participated in the recent STS-114 Space Shuttle mission. The following assays were performed: (1) comprehensive immunophenotype analysis; (2) T cell function/intracellular cytokine profiles; (3) secreted Th1/Th2 cytokine profiles via cytometric bead array. Immunophenotype analysis included a leukocyte differential, lymphocyte subsets, T cell subsets, cytotoxic/effector CD8+ T cells, memory/naive T cell subsets and constitutively activated T cells. Study timepoints were L-180, L-65, L-10, R+0, R+3 and R+14. Detailed data are presented in the poster text. As expected from a limited number of human subjects, data tended to vary with respect to most parameters. Specific post-flight alterations were as follows (subject number in parentheses): granulocytosis (2/3), reduced NK cells (3/3), elevated CD4/CD8 ratio (3/3), general CD8+ phenotype shift to a less differentiated phenotype (3/3), elevated levels of memory CD4+ T cells (3/3), loss of L-selectin on T cell subsets (3/3), increased levels of activated T cells (2/3), and reduced IL-2 producing T cell subsets (3/3), while levels of IFNg producing T cells were unchanged. CD8+ T cell expression of the CD69 activation marker following whole blood stimulation with SEA+SEB was dramatically reduced postflight (3/3), whereas other T cell function assessments were largely unchanged. Cytometric bead array assessment of secreted T cell cytokines was performed following whole blood stimulation with either CD3/CD28 antibodies or PMA+ionomycin for 48 hours. Specific cytokines assessed were IFNg, TNFa, IL-2, IL-4, IL-5, IL-10. Following CD3/CD28 stimulation, all three crewmembers had a mission-associated reduction in the levels of secreted IFNg. One crewmember had a post-flight inversion in the IFNg/IL-10 ratio, which trended back to baseline by R+14. Detailed cytokine data are presented in the poster text. This testing regimen was designed to correlate immunophenotype changes (thought to correspond to specific in-vivo immune responses or pathogenesis) against altered leukocyte function and cytokine profiles. In-flight studies are required to determine if post-flight alterations are reflective of the in-flight condition, or are a response to landing and readaptation.

  15. Decisions with Uncertain Consequences—A Total Ordering on Loss-Distributions

    PubMed Central

    König, Sandra; Schauer, Stefan

    2016-01-01

    Decisions are often based on imprecise, uncertain or vague information. Likewise, the consequences of an action are often equally unpredictable, thus putting the decision maker into a twofold jeopardy. Assuming that the effects of an action can be modeled by a random variable, then the decision problem boils down to comparing different effects (random variables) by comparing their distribution functions. Although the full space of probability distributions cannot be ordered, a properly restricted subset of distributions can be totally ordered in a practically meaningful way. We call these loss-distributions, since they provide a substitute for the concept of loss-functions in decision theory. This article introduces the theory behind the necessary restrictions and the hereby constructible total ordering on random loss variables, which enables decisions under uncertainty of consequences. Using data obtained from simulations, we demonstrate the practical applicability of our approach. PMID:28030572
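
    A minimal sketch of comparing two random loss variables through their empirical distributions, here with plain first-order stochastic dominance rather than the tail-based total ordering developed in the article; the simulated losses and the comparison grid are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      # Hypothetical loss samples for two alternative actions with uncertain consequences.
      loss_a = rng.gamma(shape=2.0, scale=1.0, size=5000)
      loss_b = rng.gamma(shape=2.0, scale=1.3, size=5000)

      # Compare the empirical survival functions P(loss > t) on a common grid of loss levels.
      grid = np.linspace(0.0, max(loss_a.max(), loss_b.max()), 500)
      surv_a = np.array([(loss_a > t).mean() for t in grid])
      surv_b = np.array([(loss_b > t).mean() for t in grid])

      if np.all(surv_a <= surv_b):
          print("action A preferred: its losses are stochastically smaller")
      elif np.all(surv_b <= surv_a):
          print("action B preferred")
      else:
          print("no dominance everywhere; a finer (e.g. tail-based) ordering is needed")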

  16. Significance of settling model structures and parameter subsets in modelling WWTPs under wet-weather flow and filamentous bulking conditions.

    PubMed

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-10-15

    Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark simulation model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary, depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets for WWTP model calibration, and propose optimal choice of 1-D SST models under different flow and settling boundary conditions. Additionally, the hydraulic parameters in the second-order SST model are found significant under dynamic wet-weather flow conditions. These results highlight the importance of developing a more mechanistic based flow-dependent hydraulic sub-model in second-order 1-D SST models in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.
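
    A minimal sketch of a variance-based global sensitivity analysis of the kind described above, using the pick-freeze (Saltelli) estimator of first-order Sobol indices on a toy stand-in model; the model, parameter ranges and sample size are assumptions, not the benchmark/SST configuration used in the study.

      import numpy as np

      def model(x):
          # Toy stand-in for a WWTP output (e.g. effluent solids) as a function of 3 parameters.
          return 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

      rng = np.random.default_rng(5)
      n, d = 20000, 3
      A = rng.uniform(0, 1, (n, d))                  # two independent parameter sample matrices
      B = rng.uniform(0, 1, (n, d))
      yA, yB = model(A), model(B)
      var_y = np.var(np.concatenate([yA, yB]))

      # First-order Sobol index of each parameter: share of output variance it explains alone.
      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                        # matrix A with its i-th column taken from B
          Si = np.mean(yB * (model(ABi) - yA)) / var_y
          print(f"S_{i} ~ {Si:.3f}")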

  17. Molecular Biology and Prevention of Endometrial Cancer. Addendum

    DTIC Science & Technology

    2008-07-01

    2) A subset of adenocarcinoma cases from the International DES Registry (IDESR) was analyzed for MSI; 3) A case-control study of the CASH database... DES in utero, for methylation and mutation of PTEN and MLH1 in order to determine if estrogen induces genetic alterations in these tumors...current trial within the “Gynecologic Disease Program”. Aim 2: To analyze vaginal and cervical adenocarcinomas that have arisen in women exposed to

  18. Reconfiguration Schemes for Fault-Tolerant Processor Arrays

    DTIC Science & Technology

    1992-10-15

    The notion of a linear schedule is easily related to similar models and concepts used in [1]-[13] and several other works...an ordered subset of a multidimensional integer lattice (called the index set). The points of this lattice correspond to (i.e., are the indices of) computations...These data dependencies are represented as vectors that connect points of the lattice...of all computations of the algorithm is to be minimized. If a

  19. Functional heterogeneity of human effector CD8+ T cells.

    PubMed

    Takata, Hiroshi; Naruto, Takuya; Takiguchi, Masafumi

    2012-02-09

    Effector CD8(+) T cells are believed to be terminally differentiated cells having cytotoxic activity and the ability to produce effector cytokines such as INF-γ and TNF-α. We investigated the difference between CXCR1(+) and CXCR1(-) subsets of human effector CD27(-)CD28(-)CD8(+) T cells. The subsets expressed cytolytic molecules similarly and exerted substantial cytolytic activity, whereas only the CXCR1(-) subset had IL-2 productivity and self-proliferative activity and was more resistant to cell death than the CXCR1(+) subset. These differences were explained by the specific up-regulation of CAMK4, SPRY2, and IL-7R in the CXCR1(-) subset and that of pro-apoptotic death-associated protein kinase 1 (DAPK1) in the CXCR1(+) subset. The IL-2 producers were more frequently found in the IL-7R(+) subset of the CXCR1(-) effector CD8(+) T cells than in the IL-7R(-) subset. IL-7/IL-7R signaling promoted cell survival only in the CXCR1(-) subset. The present study has highlighted a novel subset of effector CD8(+) T cells producing IL-2 and suggests the importance of this subset in the homeostasis of effector CD8(+) T cells.

  20. Simultaneous modelling of multi-purpose/multi-stop activity patterns and quantities consumed

    NASA Astrophysics Data System (ADS)

    Roy, John R.; Smith, Nariida C.; Xu, Blake

    Whereas for commuting travel there is a one-to-one correspondence between commuters and jobs, and for commodity flows a one-to-one correspondence between the size of orders and the shipping cost of the commodities, the situation is much more complex for retail/service travel. A typical shopper may make a single trip or multi-stop tour to buy/consume a quite diverse set of commodities/services at different locations in quite variable quantities. At the same time, the general pattern of the tour is clearly dependent on the activities and goods available at potential stops. These interdependencies have been alluded to in the literature, especially by spatial economists. However, until some preliminary work by the first author, there has been no attempt to formally include these interdependencies in a general model. This paper presents a framework for achieving this goal by developing an evolutionary set of models starting from the simplest forms available. From the above, it is clear that such interdependency models will inevitably have high dimensionality and combinatorial complexity. This rules out a simultaneous treatment of all the events using an individual choice approach. If an individual choice approach is to be applied in a tractable manner, the set of interdependent events needs to be segmented into several subsets, with simultaneity recognised within each subset, but a mere sequential progression occurring between subsets. In this paper, full event interdependencies are retained at the expense of modelling market segments of consumers rather than a sample of representative individuals. We couple the travel and consumption events in the only feasible way, by modelling the tours as discrete entities, in conjunction with the amount of each commodity consumed per stop on each such tour in terms of the continuous quantities of microeconomics. This is performed both under a budget/income constraint from microeconomics and a time budget constraint from time geography. The model considers both physical trips and tele-orders.

  1. Random or predictable?: Adoption patterns of chronic care management practices in physician organizations.

    PubMed

    Miake-Lye, Isomi M; Chuang, Emmeline; Rodriguez, Hector P; Kominski, Gerald F; Yano, Elizabeth M; Shortell, Stephen M

    2017-08-24

    Theories, models, and frameworks used by implementation science, including Diffusion of Innovations, tend to focus on the adoption of one innovation, when often organizations may be facing multiple simultaneous adoption decisions. For instance, despite evidence that care management practices (CMPs) are helpful in managing chronic illness, there is still uneven adoption by physician organizations. This exploratory paper leverages this natural variation in uptake to describe inter-organizational patterns in adoption of CMPs and to better understand how adoption choices may be related to one another. We assessed a cross section of national survey data from physician organizations reporting on the use of 20 CMPs (5 each for asthma, congestive heart failure, depression, and diabetes). Item response theory was used to explore patterns in adoption, first considering all 20 CMPs together and then by subsets according to disease focus or CMP type (e.g., registries, patient reminders). Mokken scale analysis explored whether adoption choices were linked by disease focus or CMP type and whether a consistent ordering of adoption choices was present. The Mokken scale for all 20 CMPs demonstrated medium scalability (H = 0.43), but no consistent ordering. Scales for subsets of CMPs sharing a disease focus had medium scalability (0.4 < H < 0.5), while subsets sharing a CMP type had strong scalability (H > 0.5). Scales for CMP type consistently ranked diabetes CMPs as most adoptable and depression CMPs as least adoptable. Within disease focus scales, patient reminders were ranked as the most adoptable CMP, while clinician feedback and patient education were ranked the least adoptable. Patterns of adoption indicate that innovation characteristics may influence adoption. CMP dissemination efforts may be strengthened by encouraging traditionally non-adopting organizations to focus on more adoptable practices first and then describing a pathway for the adoption of subsequent CMPs. Clarifying why certain CMPs are "less adoptable" may also provide insights into how to overcome CMP adoption constraints.

  2. Simulating the Performance of Ground-Based Optical Asteroid Surveys

    NASA Astrophysics Data System (ADS)

    Christensen, Eric J.; Shelly, Frank C.; Gibbs, Alex R.; Grauer, Albert D.; Hill, Richard E.; Johnson, Jess A.; Kowalski, Richard A.; Larson, Stephen M.

    2014-11-01

    We are developing a set of asteroid survey simulation tools in order to estimate the capability of existing and planned ground-based optical surveys, and to test a variety of possible survey cadences and strategies. The survey simulator is composed of several layers, including a model population of solar system objects and an orbital integrator, a site-specific atmospheric model (including inputs for seeing, haze and seasonal cloud cover), a model telescope (with a complete optical path to estimate throughput), a model camera (including FOV, pixel scale, and focal plane fill factor) and model source extraction and moving object detection layers with tunable detection requirements. We have also developed a flexible survey cadence planning tool to automatically generate nightly survey plans. Inputs to the cadence planner include camera properties (FOV, readout time), telescope limits (horizon, declination, hour angle, lunar and zenithal avoidance), preferred and restricted survey regions in RA/Dec, ecliptic, and Galactic coordinate systems, and recent coverage by other asteroid surveys. Simulated surveys are created for a subset of current and previous NEO surveys (LINEAR, Pan-STARRS and the three Catalina Sky Survey telescopes), and compared against the actual performance of these surveys in order to validate the model’s performance. The simulator tracks objects within the FOV of any pointing that were not discovered (e.g. too few observations, too trailed, focal plane array gaps, too fast or slow), thus dividing the population into “discoverable” and “discovered” subsets, to inform possible survey design changes. Ongoing and future work includes generating a realistic “known” subset of the model NEO population, running multiple independent simulated surveys in coordinated and uncoordinated modes, and testing various cadences to find optimal strategies for detecting NEO sub-populations. These tools can also assist in quantifying the efficiency of novel yet unverified survey cadences (e.g. the baseline LSST cadence) that sparsely spread the observations required for detection over several days or weeks.

  3. Client expectations and satisfaction of quality in home care services. A consumer perspective.

    PubMed

    Samuelsson, G; Wister, A

    2000-12-01

    This study examines clients' expectations of quality in home care services and their perceived satisfaction with services among a random sample of 76 home care recipients in Vancouver, Canada. The researchers conducted face-to-face interviews that applied Multiattribute Utility Technology, a procedure that organizes several quality attributes of "ideal" home care into a tree structure to compare their relative importance and ranking from the clients' perspective. Participants also were asked to state their satisfaction or dissatisfaction with the services received in these domains. Among the five main quality attributes identified, the subjects ranked suitability of the home helper and its subset, personal competence, as the most important indicators of quality, followed by continuity in service. In addition, clients tended to have a high level of satisfaction with regard to the attributes of overall home care services. The highest level of satisfaction was reported for elements of personal dispositions of home care staff. The lowest level of satisfaction involved the time/availability components of the service. Finally, comparisons between client expectations and satisfaction of received home care services showed the highest discrepancy for the attributes of influence and time/availability and the greatest congruence for personal attributes of the staff. The results are discussed in terms of their implications for the delivery of home care services.

  4. Psychometric tests of Expectations of Filial Piety Scale in a Mexican-American population.

    PubMed

    Kao, Hsueh-Fen S; McHugh, Mary L; Travis, Shirley S

    2007-08-01

    This paper reports the development of the Expectations of Filial Piety Scale for use with Mexican-American parents regarding expectations they have of their adult children for care and support. Earlier work by the authors demonstrated that filial piety is a cross-cultural construct that can be used with Hispanic/Latino populations. More refined development of the construct required testing with more homogeneous subsets (i.e. Mexican-Americans) within the broad designation of Hispanic/Latino adults. Non-experimental methodological design for field testing of the instrument's psychometric properties. A convenience sample of 80 Mexican-American adults in California and Texas completed a brief biographical survey and field tested the Expectations of Filial Piety Scale. Common factor analysis with orthogonal rotation was used to extract three factors, which accounted for 58% of the variance in scale scores. These factors included: I: respect for parents (24.05%); II: honouring parents (12.5%); and III: family unity (16.56%). Overall scale reliability was 0.87 with individual factor reliability coefficients ranging from 0.74 to 0.87 and test-retest correlation was 0.73. The results show that the Expectations of Filial Piety Scale is an internally consistent and reliable tool for use in studies of the Mexican-American population. Mexican elders historically underuse formal services; a large portion of this population will most likely depend on support from their family members when they reach advanced ages. There is a lack of culturally sensitive instruments to measure family values in caring for older adults in Mexican-Americans. This scale can enable case workers and nurses in long-term care settings to assess the elder's expectations for family support accurately and compare these expectations with available family support, children's intentions to care for a dependent parent or other family member and the need for supplemental care in Mexican-American families.

  5. Expectations for antibiotics increase their prescribing: Causal evidence about localized impact.

    PubMed

    Sirota, Miroslav; Round, Thomas; Samaranayaka, Shyamalee; Kostopoulou, Olga

    2017-04-01

    Clinically irrelevant but psychologically important factors such as patients' expectations for antibiotics encourage overprescribing. We aimed to (a) provide missing causal evidence of this effect, (b) identify whether the expectations distort the perceived probability of a bacterial infection via either a pre- or postdecisional distortion pathway, and (c) detect possible moderators of this effect. Family physicians expressed their willingness to prescribe antibiotics (Experiment 1, n₁ = 305) or their decision to prescribe (Experiment 2, n₂ = 131) and assessed the probability of a bacterial infection in hypothetical patients with infections, either with low or high expectations for antibiotics. Response order of prescribing/probability was manipulated in Experiment 1. Overall, the expectations for antibiotics increased intention to prescribe (Experiment 1, F(1, 301) = 25.32, p < .001, ηp² = .08, regardless of the response order; Experiment 2, odds ratio [OR] = 2.31, and OR = 0.75, Vignettes 1 and 2, respectively). Expectations for antibiotics did not change the perceived probability of a bacterial infection (Experiment 1, F(1, 301) = 1.86, p = .173, ηp² = .01, regardless of the response order; Experiment 2, d = -0.03, and d = +0.25, Vignettes 1 and 2, respectively). Physicians' experience was positively associated with prescribing, but it did not moderate the expectations effect on prescribing. Patients' and their parents' expectations increase antibiotics prescribing, but their effect is localized: it does not leak into the perceived probability of a bacterial infection. Interventions reducing the overprescribing of antibiotics should also target psychological factors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Imaging macrophages with nanoparticles

    NASA Astrophysics Data System (ADS)

    Weissleder, Ralph; Nahrendorf, Matthias; Pittet, Mikael J.

    2014-02-01

    Nanomaterials have much to offer, not only in deciphering innate immune cell biology and tracking cells, but also in advancing personalized clinical care by providing diagnostic and prognostic information, quantifying treatment efficacy and designing better therapeutics. This Review presents different types of nanomaterial, their biological properties and their applications for imaging macrophages in human diseases, including cancer, atherosclerosis, myocardial infarction, aortic aneurysm, diabetes and other conditions. We anticipate that future needs will include the development of nanomaterials that are specific for immune cell subsets and can be used as imaging surrogates for nanotherapeutics. New in vivo imaging clinical tools for noninvasive macrophage quantification are thus ultimately expected to become relevant to predicting patients' clinical outcome, defining treatment options and monitoring responses to therapy.

  7. Target Detection Routine (TADER). User’s Guide.

    DTIC Science & Technology

    1987-09-01

    o System range capability subset (one record - omitted for standoff SLAR and penetrating system)
    o System inherent detection probability subset (IELT ...records, i.e., one per element type)
    o System capability modifier subset/A=1, E=1 (IELT records)
    o System capability modifier subset/A=1, E=2 (IELT ...records)
    o System capability modifier subset/A=2, E=1 (IELT records)
    o System capability modifier subset/A=2, E=2 (IELT records)
    Unit Data Set (one set

  8. Barrett's Oesophagus in an Achalasia Patient: Immunological Analysis and Comparison with a Group of Achalasia Patients

    PubMed Central

    Torres-Landa, Samuel; Coss-Adame, Enrique; Valdovinos, Miguel A.; Alejandro-Medrano, Edgar; Ramos-Ávalos, Bárbara; Martínez-Benítez, Braulio

    2016-01-01

    The aim of the study was to characterize the presence of diverse CD4 and CD8 T cell subsets and regulatory cells in peripheral blood and lower oesophageal sphincter (LES) from a young patient with BE/achalasia without treatment versus achalasia group. In order to characterize the circulating cells in this patient, a cytometric analysis was performed. LES tissue was evaluated by double-immunostaining procedure. Five healthy blood donors, 5 type achalasia patients, and 5 oesophagus tissue samples (gastrooesophageal junction) from transplant donors were included as control groups. A conspicuous systemic inflammation was determined in BE/achalasia patient and achalasia versus healthy volunteer group. Nonetheless, a predominance of Th22, Th2, IFN-α-producing T cells, Tregs, Bregs, and pDCregs was observed in BE/achalasia patient versus achalasia group. A low percentage of Th1 subset in BE/achalasia versus achalasia group was determined. A noticeable increase in tissue of Th22, Th17, Th2, Tregs, Bregs, and pDCregs was observed in BE/achalasia versus achalasia group. Th1 subset was lower in the BE/achalasia patient versus achalasia group. This study suggests that inflammation is a possible factor in the pathogenesis of BE/achalasia. Further research needs to be performed to understand the specific cause of the correlation between BE and achalasia. PMID:27752370

  9. The application of subset correspondence analysis to address the problem of missing data in a study on asthma severity in childhood.

    PubMed

    Hendry, G; North, D; Zewotir, T; Naidoo, R N

    2014-09-28

    Non-response in cross-sectional data is not uncommon and requires careful handling during the analysis stage so as not to bias results. In this paper, we illustrate how subset correspondence analysis can be applied in order to manage the non-response while at the same time retaining all observed data. This variant of correspondence analysis was applied to a set of epidemiological data in which relationships between numerous environmental, genetic, behavioural and socio-economic factors and their association with asthma severity in children were explored. The application of subset correspondence analysis revealed interesting associations between the measured variables that otherwise may not have been exposed. Many of the associations found confirm established theories found in literature regarding factors that exacerbate childhood asthma. Moderate to severe asthma was found to be associated with needing neonatal care, male children, 8- to 9-year olds, exposure to tobacco smoke in vehicles and living in areas that suffer from extreme air pollution. Associations were found between mild persistent asthma and low birthweight, and being exposed to smoke in the home and living in a home with up to four people. The classification of probable asthma was associated with a group of variables that indicate low socio-economic status. Copyright © 2014 John Wiley & Sons, Ltd.
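
    A minimal sketch of the idea behind subset correspondence analysis as used above: the margins are computed from the full table (non-response included), but the singular value decomposition is restricted to the observed response categories. The toy contingency table and the choice of retained columns are illustrative only.

      import numpy as np

      # Hypothetical table: rows = asthma severity classes, columns = categories of one factor,
      # with the last column holding the non-response ("missing") counts.
      N = np.array([[30, 25, 10,  8],
                    [22, 30, 18, 12],
                    [10, 15, 35, 20]], dtype=float)

      P = N / N.sum()                     # correspondence matrix
      r = P.sum(axis=1)                   # row masses from the FULL table (missing included)
      c = P.sum(axis=0)                   # column masses from the FULL table

      # Standardized residuals of the full table, then restricted to the observed columns only.
      S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
      keep = [0, 1, 2]                    # drop the "missing" column but keep full-table margins
      S_sub = S[:, keep]

      U, sing, Vt = np.linalg.svd(S_sub, full_matrices=False)
      row_coords = (U * sing) / np.sqrt(r)[:, None]   # principal coordinates of the severity rows
      print("principal inertias:", np.round(sing ** 2, 4))
      print("row coordinates (first two dimensions):\n", np.round(row_coords[:, :2], 3))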

  10. Cellular sources and targets of IFN-gamma-mediated protection against viral demyelination and neurological deficits.

    PubMed

    Murray, Paul D; McGavern, Dorian B; Pease, Larry R; Rodriguez, Moses

    2002-03-01

    IFN-gamma is an anti-viral and immunomodulatory cytokine critical for resistance to multiple pathogens. Using mice with targeted disruption of the gene for IFN-gamma, we previously demonstrated that this cytokine is critical for resistance to viral persistence and demyelination in the Theiler's virus model of multiple sclerosis. During viral infections, IFN-gamma is produced by natural killer (NK) cells, CD4(+) and CD8(+) T cells; however, the proportions of lymphocyte subsets responding to virus infection influences the contributions to IFN-gamma-mediated protection. To determine the lymphocyte subsets that produce IFN-gamma to maintain resistance, we used adoptive transfer strategies to generate mice with lymphocyte-specific deficiencies in IFN-gamma-production. We demonstrate that IFN-gamma production by both CD4(+) and CD8(+) T cell subsets is critical for resistance to Theiler's murine encephalomyelitis virus (TMEV)-induced demyelination and neurological disease, and that CD4(+) T cells make a greater contribution to IFN-gamma-mediated protection. To determine the cellular targets of IFN-gamma-mediated responses, we used adoptive transfer studies and bone marrow chimerism to generate mice in which either hematopoietic or somatic cells lacked the ability to express IFN-gamma receptor. We demonstrate that IFN-gamma receptor must be present on central nervous system glia, but not bone marrow-derived lymphocytes, in order to maintain resistance to TMEV-induced demyelination.

  11. Cellular sources and targets of IFN-γ-mediated protection against viral demyelination and neurological deficits

    PubMed Central

    Murray, Paul D.; McGavern, Dorian B.; Pease, Larry R.; Rodriguez, Moses

    2017-01-01

    IFN-γ is an anti-viral and immunomodulatory cytokine critical for resistance to multiple pathogens. Using mice with targeted disruption of the gene for IFN-γ, we previously demonstrated that this cytokine is critical for resistance to viral persistence and demyelination in the Theiler’s virus model of multiple sclerosis. During viral infections, IFN-γ is produced by natural killer (NK) cells, CD4+ and CD8+ T cells; however, the proportions of lymphocyte subsets responding to virus infection influences the contributions to IFN-γ-mediated protection. To determine the lymphocyte subsets that produce IFN-γ to maintain resistance, we used adoptive transfer strategies to generate mice with lymphocyte-specific deficiencies in IFN-γ-production. We demonstrate that IFN-γ production by both CD4+ and CD8+ T cell subsets is critical for resistance to Theiler’s murine encephalomyelitis virus (TMEV)-induced demyelination and neurological disease, and that CD4+ T cells make a greater contribution to IFN-γ-mediated protection. To determine the cellular targets of IFN-γ-mediated responses, we used adoptive transfer studies and bone marrow chimerism to generate mice in which either hematopoietic or somatic cells lacked the ability to express IFN-γ receptor. We demonstrate that IFN-γ receptor must be present on central nervous system glia, but not bone marrow-derived lymphocytes, in order to maintain resistance to TMEV-induced demyelination. PMID:11857334

  12. 43 CFR 11.84 - Damage determination phase-implementation guidance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...

  13. 43 CFR 11.84 - Damage determination phase-implementation guidance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...

  14. 43 CFR 11.84 - Damage determination phase-implementation guidance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... expected present value of the costs of restoration, rehabilitation, replacement, and/or acquisition of... be estimated in the form of an expected present value dollar amount. In order to perform this... estimate is the expected present value of uses obtained through restoration, rehabilitation, replacement...

  15. Hemispheric asymmetry in the hierarchical perception of music and speech.

    PubMed

    Rosenthal, Matthew A

    2016-11-01

    The perception of music and speech involves a higher level, cognitive mechanism that allows listeners to form expectations for future music and speech events. This article comprehensively reviews studies on hemispheric differences in the formation of melodic and harmonic expectations in music and selectively reviews studies on hemispheric differences in the formation of syntactic and semantic expectations in speech. On the basis of this review, it is concluded that the higher level mechanism flexibly lateralizes music processing to either hemisphere depending on the expectation generated by a given musical context. When a context generates in the listener an expectation whose elements are sequentially ordered over time, higher level processing is dominant in the left hemisphere. When a context generates in the listener an expectation whose elements are not sequentially ordered over time, higher level processing is dominant in the right hemisphere. This article concludes with a spreading activation model that describes expectations for music and speech in terms of shared temporal and nontemporal representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Spontaneous and natural cytotoxicity receptor-mediated cytotoxicity are effector functions of distinct natural killer subsets in hepatitis C virus-infected chimpanzees.

    PubMed

    Verstrepen, B E; Nieuwenhuis, I G; Mooij, P; Bogers, W M; Boonstra, A; Koopman, G

    2016-07-01

    In humans, CD16 and CD56 are used to identify functionally distinct natural killer (NK) subsets. Due to ubiquitous CD56 expression, this marker cannot be used to distinguish between NK cell subsets in chimpanzees. Therefore, functional analysis of distinct NK subsets during hepatitis C virus (HCV) infection has never been performed in these animals. In the present study an alternative strategy was used to identify four distinct NK subsets on the basis of the expression of CD16 and CD94. The expression of activating and inhibiting surface receptors showed that these subsets resemble human NK subsets. CD107 expression was used to determine degranulation of the different subsets in naive and HCV-infected chimpanzees. In HCV-infected chimpanzees increased spontaneous cytotoxicity was observed in CD94(high/dim) CD16(pos) and CD94(low) CD16(pos) subsets. By contrast, increased natural cytotoxicity receptor (NCR)-mediated degranulation after NKp30 and NKp44 triggering was demonstrated in the CD94(dim) CD16(neg) subset. Our findings suggest that spontaneous and NCR-mediated cytotoxicity are effector functions of distinct NK subsets in HCV-infected chimpanzees. © 2016 British Society for Immunology.

  17. Analysis of Epstein-Barr Virus Genomes and Expression Profiles in Gastric Adenocarcinoma.

    PubMed

    Borozan, Ivan; Zapatka, Marc; Frappier, Lori; Ferretti, Vincent

    2018-01-15

    Epstein-Barr virus (EBV) is a causative agent of a variety of lymphomas, nasopharyngeal carcinoma (NPC), and ∼9% of gastric carcinomas (GCs). An important question is whether particular EBV variants are more oncogenic than others, but conclusions are currently hampered by the lack of sequenced EBV genomes. Here, we contribute to this question by mining whole-genome sequences of 201 GCs to identify 13 EBV-positive GCs and by assembling 13 new EBV genome sequences, almost doubling the number of available GC-derived EBV genome sequences and providing the first non-Asian EBV genome sequences from GC. Whole-genome sequence comparisons of all EBV isolates sequenced to date (85 from tumors and 57 from healthy individuals) showed that most GC and NPC EBV isolates were closely related although American Caucasian GC samples were more distant, suggesting a geographical component. However, EBV GC isolates were found to contain some consistent changes in protein sequences regardless of geographical origin. In addition, transcriptome data available for eight of the EBV-positive GCs were analyzed to determine which EBV genes are expressed in GC. In addition to the expected latency proteins (EBNA1, LMP1, and LMP2A), specific subsets of lytic genes were consistently expressed that did not reflect a typical lytic or abortive lytic infection, suggesting a novel mechanism of EBV gene regulation in the context of GC. These results are consistent with a model in which a combination of specific latent and lytic EBV proteins promotes tumorigenesis. IMPORTANCE Epstein-Barr virus (EBV) is a widespread virus that causes cancer, including gastric carcinoma (GC), in a small subset of individuals. An important question is whether particular EBV variants are more cancer associated than others, but more EBV sequences are required to address this question. Here, we have generated 13 new EBV genome sequences from GC, almost doubling the number of EBV sequences from GC isolates and providing the first EBV sequences from non-Asian GC. We further identify sequence changes in some EBV proteins common to GC isolates. In addition, gene expression analysis of eight of the EBV-positive GCs showed consistent expression of both the expected latency proteins and a subset of lytic proteins that was not consistent with typical lytic or abortive lytic expression. These results suggest that novel mechanisms activate expression of some EBV lytic proteins and that their expression may contribute to oncogenesis. Copyright © 2018 American Society for Microbiology.

  18. Internet Pornography Use, Perceived Addiction, and Religious/Spiritual Struggles.

    PubMed

    Grubbs, Joshua B; Exline, Julie J; Pargament, Kenneth I; Volk, Fred; Lindberg, Matthew J

    2017-08-01

    Prior work has demonstrated that religious beliefs and moral attitudes are often related to sexual functioning. The present work sought to examine another possibility: Do sexual attitudes and behaviors have a relationship with religious and spiritual functioning? More specifically, do pornography use and perceived addiction to Internet pornography predict the experience of religious and spiritual struggle? It was expected that feelings of perceived addiction to Internet pornography would indeed predict such struggles, both cross-sectionally and over time, but that actual pornography use would not. To test these ideas, two studies were conducted using a sample of undergraduate students (N = 1519) and a sample of adult Internet users in the U.S. (N = 713). Cross-sectional analyses in both samples found that elements of perceived addiction were related to the experience of religious and spiritual struggle. Additionally, longitudinal analyses over a 1-year time span with a subset of undergraduates (N = 156) and a subset of adult web users (N = 366) revealed that perceived addiction to Internet pornography predicted unique variance in struggle over time, even when baseline levels of struggle and other related variables were held constant. Collectively, these findings identify perceived addiction to Internet pornography as a reliable predictor of religious and spiritual struggle.

  19. Do Others' Views of Us Transfer to New Groups and Tasks?: An Expectation States Approach

    ERIC Educational Resources Information Center

    Kalkhoff, Will; Younts, C. Wesley; Troyer, Lisa

    2011-01-01

    The dual nature of the self has been a core concern of social psychology since its inception. We contribute to this longstanding tradition of inquiry by focusing on two lines of research within the expectation states theoretical research program: (1) the study of second-order expectations and (2) research on the durability of expectations. We…

  20. The Influence of Socioeconomic Status on Changes in Young People's Expectations of Applying to University

    ERIC Educational Resources Information Center

    Anders, Jake

    2017-01-01

    A much larger proportion of English 14-year-olds expect to apply to university than ultimately make an application by age 21, but the proportion expecting to apply falls from age 14 onwards. In order to assess the role of socioeconomic status in explaining changes in expectations, this paper applies duration modelling techniques to the…

  1. Self-Organization of Vocabularies under Different Interaction Orders.

    PubMed

    Vera, Javier

    2017-01-01

    Traditionally, the formation of vocabularies has been studied by agent-based models (primarily, the naming game) in which random pairs of agents negotiate word-meaning associations at each discrete time step. This article proposes a first approximation to a novel question: To what extent is the negotiation of word-meaning associations influenced by the order in which agents interact? Automata networks provide the adequate mathematical framework to explore this question. Computer simulations suggest that on two-dimensional lattices the typical features of the formation of word-meaning associations are recovered under random schemes that update small fractions of the population at the same time; by contrast, if larger subsets of the population are updated, a periodic behavior may appear.
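
    A minimal sketch of a pairwise naming game on a two-dimensional lattice may help illustrate the role of the interaction order. This is not the authors' automata-network formulation; the lattice size, update fraction and stopping point below are illustrative assumptions.

    ```python
    import random

    # Minimal naming game on an L x L lattice: each agent keeps a set of candidate
    # words for a single meaning; the speaker utters a random known word (inventing
    # one if needed) and, on success, both collapse their vocabularies to that word.
    L = 20
    agents = [[set() for _ in range(L)] for _ in range(L)]
    next_word = 0

    def interact(i, j):
        """One speaker-hearer round between agent (i, j) and a random lattice neighbour."""
        global next_word
        ni, nj = random.choice([((i + 1) % L, j), ((i - 1) % L, j),
                                (i, (j + 1) % L), (i, (j - 1) % L)])
        speaker, hearer = agents[i][j], agents[ni][nj]
        if not speaker:                      # invent a new word if the speaker knows none
            speaker.add(next_word)
            next_word += 1
        word = random.choice(tuple(speaker))
        if word in hearer:                   # success: both agree on the uttered word
            speaker.clear(); speaker.add(word)
            hearer.clear(); hearer.add(word)
        else:                                # failure: the hearer learns the word
            hearer.add(word)

    def step(update_fraction):
        """Update a random subset of agents; the fraction controls the interaction order."""
        for _ in range(max(1, int(update_fraction * L * L))):
            interact(random.randrange(L), random.randrange(L))

    for _ in range(5000):
        step(update_fraction=0.1)            # small fractions mimic near-asynchronous updating

    words_in_use = set().union(*(a for row in agents for a in row))
    print("distinct words remaining:", len(words_in_use))
    ```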

  2. Refined Weyl Law for Homogeneous Perturbations of the Harmonic Oscillator

    NASA Astrophysics Data System (ADS)

    Doll, Moritz; Gannot, Oran; Wunsch, Jared

    2018-02-01

    Let H denote the harmonic oscillator Hamiltonian on R^d, perturbed by an isotropic pseudodifferential operator of order 1. We consider the Schrödinger propagator U(t) = e^{-itH}, and find that while sing supp Tr U(t) ⊂ 2πZ as in the unperturbed case, there exists a large class of perturbations in dimensions d ≥ 2 for which the singularities of Tr U(t) at nonzero multiples of 2π are weaker than the singularity at t = 0. The remainder term in the Weyl law is of order o(λ^{d-1}), improving in these cases the O(λ^{d-1}) remainder previously established by Helffer-Robert.

  3. Enhanced Product Generation at NASA Data Centers Through Grid Technology

    NASA Technical Reports Server (NTRS)

    Barkstrom, Bruce R.; Hinke, Thomas H.; Gavali, Shradha; Seufzer, William J.

    2003-01-01

    This paper describes how grid technology can support the ability of NASA data centers to provide customized data products. A combination of grid technology and commodity processors is proposed to provide the bandwidth necessary to perform customized processing of data, with customized data subsetting providing the initial example. This customized subsetting engine can be used to support a new type of subsetting, called phenomena-based subsetting, where data is subsetted based on its association with some phenomenon, such as a mesoscale convective system or hurricane. This concept is expanded to allow the phenomena to be detected in one type of data, with the subsetting requirements transmitted to the subsetting engine to subset a different type of data. The subsetting requirements are generated by a data mining system and transmitted to the subsetter in the form of an XML feature index that describes the spatial and temporal extent of the phenomena. For this work, a grid-based mining system called the Grid Miner is used to identify the phenomena and generate the feature index. This paper discusses the value of grid technology in facilitating the development of high-performance customized product processing and the coupling of a grid mining system to support phenomena-based subsetting.
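
    A toy sketch of the idea of an XML feature index driving phenomena-based subsetting is given below. The element names, bounding-box representation and granule catalogue are invented for illustration and do not reflect the Grid Miner's actual schema.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical XML feature index (element names are illustrative): each feature
    # records the spatial and temporal extent of a detected phenomenon.
    feature_xml = """
    <featureIndex>
      <feature id="mcs-001">
        <time start="2003-06-01T00:00:00Z" end="2003-06-01T06:00:00Z"/>
        <bbox west="-101.5" south="32.0" east="-96.0" north="37.5"/>
      </feature>
    </featureIndex>
    """

    def parse_features(xml_text):
        """Return (id, bbox, time-range) tuples from the feature index."""
        out = []
        for feat in ET.fromstring(xml_text.strip()).findall("feature"):
            bbox = feat.find("bbox").attrib
            time = feat.find("time").attrib
            out.append((feat.get("id"),
                        tuple(float(bbox[k]) for k in ("west", "south", "east", "north")),
                        (time["start"], time["end"])))
        return out

    def overlaps(granule_bbox, feature_bbox):
        """True if two (west, south, east, north) boxes intersect."""
        gw, gs, ge, gn = granule_bbox
        fw, fs, fe, fn = feature_bbox
        return not (ge < fw or fe < gw or gn < fs or fn < gs)

    # A toy granule catalogue for a *different* data type than the one mined.
    granules = {"g1": (-105.0, 30.0, -100.0, 35.0), "g2": (-80.0, 25.0, -75.0, 30.0)}

    for fid, fbbox, ftime in parse_features(feature_xml):
        hits = [g for g, gb in granules.items() if overlaps(gb, fbbox)]
        print(fid, ftime, "-> subset granules:", hits)
    ```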

  4. HPV-Associated Head and Neck Cancer: Unique Features of Epidemiology and Clinical Management

    PubMed Central

    Maxwell, Jessica H.; Grandis, Jennifer R.; Ferris, Robert L.

    2017-01-01

    Human papillomavirus (HPV) is a recently identified causative agent for a subset of head and neck cancers, primarily in the oropharynx, and is largely responsible for the rising worldwide incidence of oropharyngeal cancer (OPC). Patients with HPV-positive OPC have distinct risk factor profiles and generally have a better prognosis than patients with traditional, HPV-negative, head and neck cancer. Concurrent chemotherapy and radiation is a widely accepted primary treatment modality for many patients with HPV-positive OPC. However, recent advances in surgical modalities, including transoral laser and robotic surgery, have led to the reemergence of primary surgical treatment for HPV-positive patients. Clinical trials are under way to determine optimal treatment strategies for the growing subset of patients with HPV-positive OPC. Similarly, identifying those patients with HPV-positive cancer who are at risk for recurrence and poor survival is critical in order to tailor individual treatment regimens and avoid potential undertreatment. PMID:26332002

  5. On the properties of energy stable flux reconstruction schemes for implicit large eddy simulation

    NASA Astrophysics Data System (ADS)

    Vermeire, B. C.; Vincent, P. E.

    2016-12-01

    We begin by investigating the stability, order of accuracy, and dispersion and dissipation characteristics of the extended range of energy stable flux reconstruction (E-ESFR) schemes in the context of implicit large eddy simulation (ILES). We proceed to demonstrate that subsets of the E-ESFR schemes are more stable than collocation nodal discontinuous Galerkin methods recovered with the flux reconstruction approach (FRDG) for marginally-resolved ILES simulations of the Taylor-Green vortex. These schemes are shown to have reduced dissipation and dispersion errors relative to FRDG schemes of the same polynomial degree and, simultaneously, have increased Courant-Friedrichs-Lewy (CFL) limits. Finally, we simulate turbulent flow over an SD7003 aerofoil using two of the most stable E-ESFR schemes identified by the aforementioned Taylor-Green vortex experiments. Results demonstrate that subsets of E-ESFR schemes appear more stable than the commonly used FRDG method, have increased CFL limits, and are suitable for ILES of complex turbulent flows on unstructured grids.

  6. Post-processing images from the WFIRST-AFTA coronagraph testbed

    NASA Astrophysics Data System (ADS)

    Zimmerman, Neil T.; Ygouf, Marie; Pueyo, Laurent; Soummer, Remi; Perrin, Marshall D.; Mennesson, Bertrand; Cady, Eric; Mejia Prada, Camilo

    2016-01-01

    The concept for the exoplanet imaging instrument on WFIRST-AFTA relies on the development of mission-specific data processing tools to reduce the speckle noise floor. No instruments have yet functioned on the sky in the planet-to-star contrast regime of the proposed coronagraph (1E-8). Therefore, starlight subtraction algorithms must be tested on a combination of simulated and laboratory data sets to give confidence that the scientific goals can be reached. The High Contrast Imaging Testbed (HCIT) at Jet Propulsion Lab has carried out several technology demonstrations for the instrument concept, demonstrating 1E-8 raw (absolute) contrast. Here, we have applied a mock reference differential imaging strategy to HCIT data sets, treating one subset of images as a reference star observation and another subset as a science target observation. We show that algorithms like KLIP (Karhunen-Loève Image Projection), by suppressing residual speckles, enable the recovery of exoplanet signals at contrast of order 2E-9.
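
    The following is a minimal, self-contained sketch of PCA-style reference differential imaging in the spirit of KLIP, using purely synthetic frames; it is not the HCIT/WFIRST-AFTA pipeline, and the frame counts and mode truncation are arbitrary assumptions.

    ```python
    import numpy as np

    # KLIP-style reference differential imaging: build Karhunen-Loeve modes from a
    # reference cube and subtract their projection from each science frame.
    rng = np.random.default_rng(0)
    npix, n_ref, n_sci, n_modes = 64 * 64, 40, 10, 5

    ref = rng.normal(size=(n_ref, npix))             # reference-star speckle frames
    sci = rng.normal(size=(n_sci, npix))             # science frames (speckles + planet)
    sci[:, 1234] += 5.0                              # injected point-source signal

    ref_mean = ref.mean(axis=0)
    R = ref - ref_mean                               # mean-subtracted reference library

    # Karhunen-Loeve modes via SVD of the reference library.
    _, _, vt = np.linalg.svd(R, full_matrices=False)
    Z = vt[:n_modes]                                 # top KL modes (rows are modes)

    residual = np.empty_like(sci)
    for k, frame in enumerate(sci):
        f = frame - ref_mean
        coeffs = Z @ f                               # project onto KL modes
        residual[k] = f - Z.T @ coeffs               # subtract modelled speckle field

    print("mean residual at planet pixel:", residual[:, 1234].mean())
    ```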

  7. Predicting Drug-Target Interactions Based on Small Positive Samples.

    PubMed

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    A basic task in drug discovery is to find new medication in the form of candidate compounds that act on a target protein. In other words, a drug has to interact with a target, and such a drug-target interaction (DTI) is not expected to be random: significant and interesting patterns are expected to be hidden in these interactions, and if they can be discovered, new drugs should be more easily discoverable. Currently, a number of computational methods have been proposed to predict DTIs based on their similarity. However, such an approach does not allow biochemical features to be considered directly. As a result, some methods have been proposed to try to discover patterns in physicochemical interactions. Since the number of potential negative DTIs is very high, both in absolute terms and in comparison to the number of known ones, these methods are rather computationally expensive and can only rely on subsets, rather than the full set, of negative DTIs for training and validation. As there is always a relatively high chance for negative DTIs to be falsely identified, and as only a partial subset of such DTIs is considered, existing approaches can be further improved to better predict DTIs. In this paper, we present a novel approach, called ODT (one-class drug-target interaction prediction), for this purpose. One main task of ODT is to discover association patterns between interacting drugs and proteins from the chemical structure of the former and the protein sequence network of the latter. ODT does so in two phases. First, the DTI network is transformed to a representation by structural properties. Second, a one-class classification algorithm is applied to build a prediction model based only on known positive interactions. We compared the best AUROC scores of ODT with several state-of-the-art approaches on gold-standard data. The prediction accuracy of ODT is superior to all the other methods on the GPCR and ion channel datasets. Performance evaluation shows that ODT can be potentially useful. It confirms that predicting potential or missing DTIs based on the known interactions is a promising direction for solving problems related to the use of uncertain and unreliable negative samples and to the great demand for computational resources. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
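
    As a rough illustration of one-class learning from positive interactions only (the general idea behind ODT, not its actual feature construction or classifier), the sketch below trains scikit-learn's OneClassSVM on synthetic "known positive" feature vectors and scores unlabeled candidates.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    # Train only on known (positive) drug-target pairs, then score unlabeled pairs;
    # the feature vectors here are random stand-ins for structural-property features.
    rng = np.random.default_rng(1)
    positives = rng.normal(loc=1.0, size=(200, 16))                 # known interacting pairs
    candidates = np.vstack([rng.normal(loc=1.0, size=(20, 16)),     # likely interactions
                            rng.normal(loc=-1.0, size=(20, 16))])   # likely non-interactions

    model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(positives)
    scores = model.decision_function(candidates)   # higher = more like known positives
    predicted = model.predict(candidates)          # +1 inlier, -1 outlier

    print("predicted interactions:", int((predicted == 1).sum()), "of", len(candidates))
    ```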

  8. Efficient Orchestration of Data Centers Via Comprehensive and Application Aware Trade Off Exploration

    DTIC Science & Technology

    2016-12-01

    proposes to save power by concentrating traffic over a small subset of links. data center architecture [12], as depicted in Figure 1.1. The fat-tree... architecture is a physical network topology commonly used in data networks representing a hierarchical multi-rooted tree consisting of four levels...milliseconds) is an order of magnitude faster than the GASO variants (tens of seconds). 3.4.3 LAW for Architectures of Different Dimensions In this section

  9. Man-portable Vector Time Domain EMI Sensor and Discrimination Processing

    DTIC Science & Technology

    2012-04-16

    points of each winding are coincident. Each receiver coil is wound helically on a set of 10 grooves etched on the surface of the cube; 36-gauge wire...subset of the data, and inject various levels of noise into the position of the MPV in order to gauge the robustness of the discrimination results...as possible. The quantity φ also provides a metric to gauge goodness of fit, being essentially an average percent error: Benjamin Barrowes, Kevin

  10. Analysis of EPA's endocrine screening battery and recommendations for further review.

    PubMed

    Schapaugh, Adam W; McFadden, Lisa G; Zorrilla, Leah M; Geter, David R; Stuchal, Leah D; Sunger, Neha; Borgert, Christopher J

    2015-08-01

    EPA's Endocrine Disruptor Screening Program Tier 1 battery consists of eleven assays intended to identify the potential of a chemical to interact with the estrogen, androgen, thyroid, or steroidogenesis systems. We have collected control data from a subset of test order recipients from the first round of screening. The analysis undertaken herein demonstrates that the EPA should review all testing methods prior to issuing further test orders. Given the frequency with which certain performance criteria were violated, a primary focus of that review should consider adjustments to these standards to better reflect biological variability. A second focus should be to provide detailed, assay-specific direction on when results should be discarded; no clear guidance exists on the degree to which assays need to be re-run for failing to meet performance criteria. A third focus should be to identify permissible differences in study design and execution that have a large influence on endpoint variance. Experimental guidelines could then be re-defined such that endpoint variances are reduced and performance criteria are violated less frequently. It must be emphasized that because we were restricted to a subset (approximately half) of the control data, our analyses serve only as examples to underscore the importance of a detailed, rigorous, and comprehensive evaluation of the performance of the battery. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Perturbative reduction of derivative order in EFT

    NASA Astrophysics Data System (ADS)

    Glavan, Dražen

    2018-02-01

    Higher derivative corrections are ubiquitous in effective field theories, which seemingly introduce new degrees of freedom at successive orders. This is actually an artefact of the implicit local derivative expansion defining effective field theories. We argue that higher derivative corrections that introduce additional degrees of freedom should be removed and their effects captured either by lower derivative corrections or by special combinations of higher derivative corrections not propagating extra degrees of freedom. Three methods adapted for this task are examined, and field redefinitions are found to be the most appropriate. First-order higher-derivative corrections in a scalar-tensor theory are removed by a field redefinition, and it is found that their effects are captured by a subset of Horndeski theories. A case is made for restricting effective field theory expansions in principle to only those terms not introducing additional degrees of freedom.
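
    The standard first-order argument behind removing such terms by a field redefinition can be summarized schematically as follows (a generic sketch with illustrative notation, not the paper's scalar-tensor computation).

    ```latex
    % A correction proportional to the lowest-order equation of motion
    % E(\phi) \equiv \delta S_0 / \delta\phi,
    S[\phi] \;=\; S_0[\phi] \;+\; \epsilon \int \mathrm{d}^4x \, f(\phi,\partial\phi)\, E(\phi)
        \;+\; O(\epsilon^2),
    % can be removed at first order by the O(\epsilon) field redefinition
    \phi \;\longrightarrow\; \phi - \epsilon\, f(\phi,\partial\phi),
    % since expanding the lowest-order action gives
    S_0[\phi - \epsilon f] \;=\; S_0[\phi] \;-\; \epsilon \int \mathrm{d}^4x \, E(\phi)\, f(\phi,\partial\phi)
        \;+\; O(\epsilon^2).
    % The higher-derivative structure hidden in f therefore does not propagate new degrees of
    % freedom at this order; its effects reappear only in redefined lower-derivative couplings.
    ```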

  12. Conformal field theory construction for non-Abelian hierarchy wave functions

    NASA Astrophysics Data System (ADS)

    Tournois, Yoran; Hermanns, Maria

    2017-12-01

    The fractional quantum Hall effect is the paradigmatic example of topologically ordered phases. One of its most fascinating aspects is the large variety of different topological orders that may be realized, in particular non-Abelian ones. Here we analyze a class of non-Abelian fractional quantum Hall model states which are generalizations of the Abelian Haldane-Halperin hierarchy. We derive their topological properties and show that the quasiparticles obey non-Abelian fusion rules of type su(q)_k. For a subset of these states we are able to derive the conformal field theory description that makes the topological properties—in particular braiding—of the state manifest. The model states we study provide explicit wave functions for a large variety of interesting topological orders, which may be relevant for certain fractional quantum Hall states observed in the first excited Landau level.

  13. Efficient Research Design: Using Value-of-Information Analysis to Estimate the Optimal Mix of Top-down and Bottom-up Costing Approaches in an Economic Evaluation alongside a Clinical Trial.

    PubMed

    Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee

    2016-04-01

    In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset. © The Author(s) 2016.
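
    A generic Monte Carlo sketch of an expected-net-gain-of-sampling calculation is shown below, using a simple normal-normal model for incremental net benefit; the priors, costs and population size are invented, and the paper's bivariate prior linking the two costing processes is not reproduced.

    ```python
    import numpy as np

    # Expected net gain of sampling (ENGS) for a study of size n informing a decision
    # driven by incremental net benefit (INB), under a normal prior and normal sampling.
    rng = np.random.default_rng(2)

    m0, s0 = 500.0, 2000.0        # prior mean and SD of per-patient INB (illustrative, GBP)
    sigma = 8000.0                # SD of individual-level INB observations
    population = 10000            # patients affected by the decision
    cost_per_obs = 400.0          # marginal cost of observing one more patient

    def engs(n, n_sim=100_000):
        """Expected net gain of sampling for a study of size n (Monte Carlo)."""
        if n == 0:
            return 0.0
        post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
        prepost_sd = np.sqrt(s0**2 - post_var)           # SD of the preposterior mean
        post_means = rng.normal(m0, prepost_sd, n_sim)   # simulated posterior means
        evsi = np.maximum(post_means, 0.0).mean() - max(m0, 0.0)
        return population * evsi - cost_per_obs * n

    for n in (0, 50, 200, 500):
        print(f"n = {n:4d}  ENGS = {engs(n):12.0f}")
    ```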

  14. Regional climate of the Subtropical Central Andes using high-resolution CMIP5 models. Part II: future projections for the twenty-first century

    NASA Astrophysics Data System (ADS)

    Zazulie, Natalia; Rusticucci, Matilde; Raga, Graciela B.

    2017-12-01

    In Part I of our study (Zazulie et al. Clim Dyn, 2017, hereafter Z17) we analyzed the ability of a subset of fifteen high-resolution global climate models (GCMs) from the Coupled Model Intercomparison Project phase 5 to reproduce the past climate of the Subtropical Central Andes (SCA) of Argentina and Chile. A subset of only five GCMs was shown to reproduce well the past climate (1980-2005), for austral summer and winter. In this study we analyze future climate projections for the twenty-first century over this complex orography region using those five GCMs. We evaluate the projections under two of the representative concentration pathways considered as future scenarios: RCP4.5 and RCP8.5. Future projections indicate warming during the twenty-first century over the SCA region, especially pronounced over the mountains. Projections of warming at high elevations in the SCA depend on altitude, and are larger than the projected global mean warming. This phenomenon is expected to strengthen by the end of the century under the high-emission scenario. Increases in winter temperatures of up to 2.5 °C, relative to 1980-2005, are projected by 2040-2065, while a 5 °C warming is expected at the highest elevations by 2075-2100. Such a large monthly-mean warming during winter would most likely result in snowpack melting by late winter-early spring, with serious implication for water availability during summer, when precipitation is a minimum over the mountains. We also explore changes in the albedo, as a contributing factor affecting the net flux of energy at the surface and found a reduction in albedo of 20-60% at high elevations, related to the elevation dependent warming. Furthermore, a decrease in winter precipitation is projected in central Chile by the end of the century, independent of the scenario considered.

  15. Towards a better understanding of critical gradients and near-marginal turbulence in burning plasma conditions

    NASA Astrophysics Data System (ADS)

    Holland, C.; Candy, J.; Howard, N. T.

    2017-10-01

    Developing accurate predictive transport models of burning plasma conditions is essential for confident prediction and optimization of next step experiments such as ITER and DEMO. Core transport in these plasmas is expected to be very small in gyroBohm-normalized units, such that the plasma should lie close to the critical gradients for onset of microturbulence instabilities. We present recent results investigating the scaling of linear critical gradients of ITG, TEM, and ETG modes as a function of parameters such as safety factor, magnetic shear, and collisionality for nominal conditions and geometry expected in ITER H-mode plasmas. A subset of these results is then compared against predictions from nonlinear gyrokinetic simulations, to quantify differences between linear and nonlinear thresholds. As part of this study, linear and nonlinear results from both GYRO and CGYRO codes will be compared against each other, as well as to predictions from the quasilinear TGLF model. Challenges arising from near-marginal turbulence dynamics are addressed. This work was supported by the US Department of Energy under US DE-SC0006957.

  16. 'Best' Practices for Aggregating Subset Results from Archived Datasets

    NASA Astrophysics Data System (ADS)

    Baskin, W. E.; Perez, J.

    2013-12-01

    In response to the exponential growth in science data analysis and visualization capabilities, data centers have been developing new delivery mechanisms to package and deliver large volumes of aggregated subsets of archived data. New standards are evolving to help data providers and application programmers deal with the growing needs of the science community; these standards evolve from the best practices gleaned from new products and capabilities. The NASA Atmospheric Sciences Data Center (ASDC) has developed and deployed production provider-specific search and subset web applications for the CALIPSO, CERES, TES, and MOPITT missions. This presentation explores several use cases that leverage aggregated subset results and examines the standards and formats ASDC developers applied to the delivered files, as well as the implementation strategies for subsetting and processing the aggregated products. The following topics are addressed: applications of NetCDF CF conventions to aggregated level 2 satellite subsets; data-provider-specific format requirements versus generalized standards; organization of the file structure of aggregated NetCDF subset output; global attributes of individual subsetted files versus aggregated results; and the specific applications and framework used for subsetting and delivering derivative data files.
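
    A toy example of writing an aggregated level-2 subset with CF-style metadata is given below, assuming xarray with a netCDF backend is available; the variable names, attributes and values are illustrative, not ASDC's actual conventions.

    ```python
    import numpy as np
    import xarray as xr

    # Toy aggregated level-2 subset carrying data from several (fictional) granules,
    # written with CF-style global and variable attributes.
    time = np.array(["2013-07-01T00:00", "2013-07-01T00:01"], dtype="datetime64[ns]")
    ds = xr.Dataset(
        {
            "aerosol_optical_depth": (("time",), np.array([0.12, 0.15]),
                                      {"long_name": "aerosol optical depth", "units": "1"}),
        },
        coords={
            "time": ("time", time),
            "lat": ("time", np.array([34.1, 34.2]), {"units": "degrees_north"}),
            "lon": ("time", np.array([-118.2, -118.1]), {"units": "degrees_east"}),
        },
        attrs={
            "Conventions": "CF-1.6",
            "title": "Aggregated level-2 subset (illustrative)",
            "source_granules": "granule_0001, granule_0002",
        },
    )
    ds.to_netcdf("aggregated_subset.nc")   # one file aggregating subsets from several granules
    print(ds)
    ```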

  17. Characteristics of CD8+ T cell subsets in Chinese patients with chronic HIV infection during initial ART.

    PubMed

    Jiao, Yanmei; Hua, Wei; Zhang, Tong; Zhang, Yonghong; Ji, Yunxia; Zhang, Hongwei; Wu, Hao

    2011-03-25

    CD8+ T cells may play an important role in protecting against HIV. However, the changes in CD8+ T cell subsets during the early period of ART have not been fully studied. Twenty-one asymptomatic, treatment-naive HIV-infected patients with fewer than 350 CD4+ T cells/μl were enrolled in the study. Naïve, central memory (CM), effector memory (EM) and terminally differentiated effector (EMRA) CD8+ cell subsets, and their activation and proliferation subsets, were evaluated in blood samples collected at baseline and at weeks 2, 4, 8 and 12 of ART. Total CD8+ T cells declined, and the naïve and CM subsets tended to increase. Activation levels of all CD8+ T cell subsets except the EMRA subset decreased after ART. However, proliferation levels of total CD8+ T cells and of the EMRA, EM and CM subsets increased during the first 4 weeks of ART and then decreased, while the proliferation level of naïve cells decreased after ART. The changes in CD8+ T cell subsets during initial ART are complex. Our results display a complete phenotypic picture of CD8+ cell subsets during initial ART and provide insights into immune status during ART.

  18. Characteristics of CD8+ T cell subsets in Chinese patients with chronic HIV infection during initial ART

    PubMed Central

    2011-01-01

    Background CD8+ T cells may play an important role in protecting against HIV. However, the changes in CD8+ T cell subsets during the early period of ART have not been fully studied. Methods Twenty-one asymptomatic, treatment-naive HIV-infected patients with fewer than 350 CD4+ T cells/μl were enrolled in the study. Naïve, central memory (CM), effector memory (EM) and terminally differentiated effector (EMRA) CD8+ cell subsets, and their activation and proliferation subsets, were evaluated in blood samples collected at baseline and at weeks 2, 4, 8 and 12 of ART. Results Total CD8+ T cells declined, and the naïve and CM subsets tended to increase. Activation levels of all CD8+ T cell subsets except the EMRA subset decreased after ART. However, proliferation levels of total CD8+ T cells and of the EMRA, EM and CM subsets increased during the first 4 weeks of ART and then decreased, while the proliferation level of naïve cells decreased after ART. Conclusion The changes in CD8+ T cell subsets during initial ART are complex. Our results display a complete phenotypic picture of CD8+ cell subsets during initial ART and provide insights into immune status during ART. PMID:21435275

  19. Patient expectations from an emergency medical service.

    PubMed

    Qidwai, Waris; Ali, Syed Sohail; Baqir, Muhammad; Ayub, Semi

    2005-01-01

    A patient expectation survey at an Emergency Medical Services department can improve patient satisfaction. A need was established to conduct such a survey in order to recommend its use as a quality improvement tool. The study was conducted on patients visiting the Emergency Medical Services of Aga Khan University, Karachi. A questionnaire was used to collect information on the demographic profile and expectations of patients, and the ethical requirements for conducting the study were met. One hundred patients were surveyed. The majority were relatively young, well-educated, married men and women of higher socioeconomic status. Most patients expected a waiting time of less than 30 minutes and a consultation time of less than 20 minutes. Most respondents expected and agreed to be examined by a trainee but were reluctant to be examined by students. There was an expectation that the consultant would examine patients in person rather than advise the attending team over the phone. Most patients expected intravenous fluid therapy, wished to have an attendant present during the consultation, and expected to pay less than three thousand rupees for the visit. Expectations also existed for investigations and hospitalization, for involvement of patients in decisions concerning their treatment, and for written feedback on their visit. We have documented the need for and value of a patient expectation survey at the Emergency Medical Services department. The use of such a tool is recommended in order to improve the satisfaction of patients visiting such facilities.

  20. Numerical simulations of imaging satellites with optical interferometry

    NASA Astrophysics Data System (ADS)

    Ding, Yuanyuan; Wang, Chaoyan; Chen, Zhendong

    2015-08-01

    An optical interferometric imaging system, composed of multiple sub-apertures, is a type of sensor that can break through the single-aperture limit and achieve high-resolution imaging. The technique can be used to precisely measure the shapes, sizes and positions of astronomical objects and satellites, and can also be applied to space exploration, space-debris surveillance, and satellite monitoring and survey. A Fizeau-type optical aperture-synthesis telescope has the advantages of short baselines, a common mount and multiple sub-apertures, which make instantaneous direct imaging through focal-plane combination feasible. Since 2002, researchers at the Shanghai Astronomical Observatory have been studying optical interferometry techniques. For array configurations, two optimized layouts have been proposed in place of the symmetrical circular distribution: an asymmetrical circular distribution and a Y-type distribution. On this basis, two kinds of structure based on the Fizeau interferometric telescope were proposed: a Y-type telescope with independent sub-apertures and a segmented-mirror telescope with a common secondary mirror. In this paper, we describe the interferometric telescope and image acquisition, and then focus on simulations of image restoration for the Y-type and segmented-mirror telescopes. The Richardson-Lucy (RL), Wiener and ordered-subsets expectation maximization (OS-EM) methods are studied, and the influence of different stopping rules is analyzed. Finally, we present reconstruction results for images of several satellites.
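
    The sketch below shows the structure of an ordered-subsets EM (OS-EM) restoration for Poisson-noise imagery with a known, shift-invariant PSF; with a single subset it reduces to Richardson-Lucy (ML-EM). It is a schematic illustration on synthetic data, not the interferometric simulation pipeline described above, and the PSF, subset count and iteration count are arbitrary assumptions.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(3)

    # Synthetic scene: two point-like "satellites" blurred by a Gaussian PSF plus Poisson noise.
    truth = np.zeros((64, 64))
    truth[20, 20], truth[40, 45] = 200.0, 120.0

    yy, xx = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
    psf /= psf.sum()
    psf_flip = psf[::-1, ::-1]

    data = rng.poisson(fftconvolve(truth, psf, mode="same") + 1e-3).astype(float)

    # Interleaved pixel subsets (here 4), cycled once per iteration.
    n_subsets = 4
    ii, jj = np.indices(data.shape)
    masks = [((ii + jj) % n_subsets == s).astype(float) for s in range(n_subsets)]

    estimate = np.full_like(data, data.mean())
    for it in range(10):                              # fixed iteration count as the stopping rule
        for mask in masks:
            model = fftconvolve(estimate, psf, mode="same") + 1e-12
            numer = fftconvolve(mask * data / model, psf_flip, mode="same")
            denom = fftconvolve(mask, psf_flip, mode="same") + 1e-12
            estimate = np.maximum(estimate * numer / denom, 0.0)  # EM update on one subset

    print("peak of restored image: %.1f" % estimate.max())
    ```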

  1. PSF reconstruction for Compton-based prompt gamma imaging

    NASA Astrophysics Data System (ADS)

    Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming

    2018-02-01

    Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.

  2. Atmospheric corrections in interferometric synthetic aperture radar surface deformation - a case study of the city of Mendoza, Argentina

    NASA Astrophysics Data System (ADS)

    Balbarani, S.; Euillades, P. A.; Euillades, L. D.; Casu, F.; Riveros, N. C.

    2013-09-01

    Differential interferometry is a remote sensing technique that allows the study of crustal deformation produced by phenomena such as earthquakes, landslides, land subsidence and volcanic eruptions. Advanced techniques, like small baseline subsets (SBAS), exploit series of images acquired by synthetic aperture radar (SAR) sensors during a given time span. Phase propagation delay in the atmosphere is the main systematic error of interferometric SAR measurements. It affects images differently when they are acquired on different days, or even at different hours of the same day. So, datasets acquired during the same time span from different sensors (or sensor configurations) often give diverging results. Here we processed two datasets acquired from June 2010 to December 2011 by COSMO-SkyMed satellites. One of them is HH-polarized, and the other is VV-polarized and acquired on different days. As expected, time series computed from these datasets show differences. We attributed them to uncompensated atmospheric artifacts and tried to correct them by using ERA-Interim global atmospheric model (GAM) data. With this method, we were able to correct less than 50% of the scenes, considering an area where no phase unwrapping errors were detected. We conclude that GAM-based corrections are not enough to explain differences in computed time series, at least in the processed area of interest. We note that no direct meteorological data were employed for the GAM-based corrections. Further research is needed in order to understand under what conditions this kind of data can be used.

  3. Reproducibility Between Brain Uptake Ratio Using Anatomic Standardization and Patlak-Plot Methods.

    PubMed

    Shibutani, Takayuki; Onoguchi, Masahisa; Noguchi, Atsushi; Yamada, Tomoki; Tsuchihashi, Hiroko; Nakajima, Tadashi; Kinuya, Seigo

    2015-12-01

    The Patlak-plot and conventional methods of determining brain uptake ratio (BUR) have some problems with reproducibility. We formulated a method of determining BUR using anatomic standardization (BUR-AS) in a statistical parametric mapping algorithm to improve reproducibility. The objective of this study was to demonstrate the inter- and intraoperator reproducibility of mean cerebral blood flow as determined using BUR-AS in comparison to the conventional-BUR (BUR-C) and Patlak-plot methods. The images of 30 patients who underwent brain perfusion SPECT were retrospectively used in this study. The images were reconstructed using ordered-subset expectation maximization and processed using an automatic quantitative analysis for cerebral blood flow of ECD tool. The mean SPECT count was calculated from axial basal ganglia slices of the normal side (slices 31-40) drawn using a 3-dimensional stereotactic region-of-interest template after anatomic standardization. The mean cerebral blood flow was calculated from the mean SPECT count. Reproducibility was evaluated using coefficient of variation and Bland-Altman plotting. For both inter- and intraoperator reproducibility, the BUR-AS method had the lowest coefficient of variation and smallest error range about the Bland-Altman plot. Mean CBF obtained using the BUR-AS method had the highest reproducibility. Compared with the Patlak-plot and BUR-C methods, the BUR-AS method provides greater inter- and intraoperator reproducibility of cerebral blood flow measurement. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
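
    The reproducibility metrics used here are straightforward to compute; a small sketch with synthetic paired measurements (not patient data) is given below.

    ```python
    import numpy as np

    # Inter-operator reproducibility sketch: coefficient of variation and Bland-Altman
    # statistics for paired mean-CBF measurements (all values synthetic).
    rng = np.random.default_rng(4)
    cbf_operator1 = rng.normal(45.0, 6.0, 30)                  # mL/100 g/min
    cbf_operator2 = cbf_operator1 + rng.normal(0.0, 1.5, 30)   # second operator, small error

    pairs = np.stack([cbf_operator1, cbf_operator2])
    cv_per_patient = pairs.std(axis=0, ddof=1) / pairs.mean(axis=0) * 100.0

    diff = cbf_operator1 - cbf_operator2
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

    print(f"mean CV: {cv_per_patient.mean():.2f}%")
    print(f"Bland-Altman bias: {bias:.2f}, 95% limits of agreement: {loa[0]:.2f} to {loa[1]:.2f}")
    ```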

  4. The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm

    NASA Astrophysics Data System (ADS)

    Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero

    1999-10-01

    We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.

  5. Characterization of the warm-hot intergalactic medium near the Coma cluster through high-resolution spectroscopy of X Comae

    NASA Astrophysics Data System (ADS)

    Bonamente, M.; Ahoranta, J.; Tilton, E.; Tempel, E.; Morandi, A.

    2017-08-01

    We have analysed all available archival XMM-Newton observations of X Comae, a bright X-ray quasar behind the Coma cluster, to study the properties of the warm-hot intergalactic medium (WHIM) in the vicinity of the nearest massive galaxy cluster. The reflection grating spectrometer observations confirm the possible presence of a Ne ix K α absorption line at the redshift of Coma, although with a limited statistical significance. This analysis is therefore in line with the earlier analysis by Takei et al. based on a subset of these data. Its large column density and optical depth, however, point to implausible conditions for the absorbing medium, thereby casting serious doubt on its reality. Chandra has never observed X Comae and therefore cannot provide additional information on this source. We combine upper limits to the presence of other X-ray absorption lines (notably from O vii and O viii) at the redshift of Coma with positive measurements of the soft excess emission from Coma measured by ROSAT (Bonamente et al.). The combination of emission from warm-hot gas at kT ˜ 1/4 keV and upper limits from absorption lines provides useful constraints on the density and the sightline length of the putative WHIM towards Coma. We conclude that the putative warm-hot medium towards Coma is consistent with expected properties, with a baryon overdensity δb ≥ 10 and a sightline extent of the order of tens of Mpc.

  6. Tackling the Four V's with NEXUS

    NASA Astrophysics Data System (ADS)

    Greguska, F. R., III; Gill, K. M.; Huang, T.; Jacob, J. C.; Quach, N.; Wilson, B. D.

    2016-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) reports that over 15 petabytes (PB) of Earth observing information are archived among the 12 NASA Distributed Active Archive Centers (DAACs), with more being archived daily. The upcoming Surface Water & Ocean Topography (SWOT) mission is expected to generate about 26 PB of data in 3 years. NEXUS is a state-of-the-art deep data analytic program developed at the Jet Propulsion Laboratory with the goal of providing near real-time analytic capabilities for this vast trove of data. Rather than develop analytic services on traditional file archives, NEXUS organizes data into tiles in order to provide a platform for horizontal computing. To provide near real-time analytic solutions for missions such as SWOT, a highly scalable data ingestion solution has been developed to quickly bring data into NEXUS. To address this formidable challenge, the "Four V's" (Volume, Velocity, Veracity, and Variety) of Big Data must be considered. NEXUS consists of an ingestion subsystem that handles the Volume of data by utilizing a generic tiling strategy that subsets a given dataset into smaller tiles. These tiles are then indexed by a search engine and stored in a NoSQL database for fast retrieval. In addition to handling the Volume of data being indexed, the NEXUS ingestion subsystem is built for horizontal scalability in order to manage the Velocity of incoming data. As the load on the system increases, the components of the ingestion subsystem can be scaled to provide more capacity. During ingestion, NEXUS also takes a unique approach to the Veracity and Variety of Earth observing information being ingested. By allowing the processing and tiling mechanisms to be customized for each dataset, the NEXUS ingest system can discard erroneous or missing data as well as adapt to the many different data structures and file formats found in satellite observation data. This talk focuses on the functionality and architecture of the data ingestion subsystem that is part of the NEXUS software architecture and how it relates to the Four V's of Big Data.
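
    A toy analogue of the tile-and-index strategy is sketched below: a gridded field is split into fixed-size tiles with simple spatial index records, and a bounding-box query touches only the intersecting tiles. The tile size, naming scheme and query are illustrative assumptions, not NEXUS's actual data model.

    ```python
    import numpy as np

    # Tile a global 1-degree gridded variable into 30x30-degree chunks with index records.
    field = np.random.default_rng(6).normal(size=(180, 360))
    tile_size = 30

    tiles = {}
    for i0 in range(0, field.shape[0], tile_size):
        for j0 in range(0, field.shape[1], tile_size):
            tile_id = f"tile_{i0:03d}_{j0:03d}"
            tiles[tile_id] = {
                "lat_range": (i0 - 90, i0 - 90 + tile_size),    # grid index -> latitude bounds
                "lon_range": (j0 - 180, j0 - 180 + tile_size),
                "data": field[i0:i0 + tile_size, j0:j0 + tile_size],
            }

    # A bounding-box query only needs the tiles whose bounds intersect it.
    query = {"lat": (10, 45), "lon": (-30, 20)}
    hits = [t for t, rec in tiles.items()
            if rec["lat_range"][1] > query["lat"][0] and rec["lat_range"][0] < query["lat"][1]
            and rec["lon_range"][1] > query["lon"][0] and rec["lon_range"][0] < query["lon"][1]]
    print(len(hits), "of", len(tiles), "tiles needed for the query")
    ```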

  7. Sibling Status Effects: Adult Expectations.

    ERIC Educational Resources Information Center

    Baskett, Linda Musun

    1985-01-01

    This study attempted to determine what expectations or beliefs adults might hold about a child based on his or her sibling status alone. Ratings on 50 adjective pairs for each of three sibling status types, only, oldest, and youngest child, were assessed in relation to adult expectations, birth order, and parental status of rater. (Author/DST)

  8. Reasonable Expectation: Limits on the Promise of Community Councils.

    ERIC Educational Resources Information Center

    Bobrow, Sue Berryman

    Implications of developmental characteristics of youth are assessed for community education-work councils and the career education programs which councils might be expected to sponsor, in order to decide whether to evaluate councils, and if so, what outcomes councils might be expected to affect. Youth are defined to include pre-adolescent children…

  9. Forecasting Advancement Rates to Petty Officer Third Class for U.S. Navy Hospital Corpsmen

    DTIC Science & Technology

    2014-06-01

    variable. c. Designation of Data Subsets for Cross-Validation In order to maintain the integrity of the analysis and test the fitted models’ predictive...two models, an H-L goodness-of-fit test is conducted on the 1,524 individual Sailors within the designated test data set; the results of which are...the total number of sea months, the proportion of vacancies to test takers, Armed Forces Qualification Test score, and performance mark average

  10. FDTD Simulation of Novel Polarimetric and Directional Reflectance and Transmittance Measurements from Optical Nano- and Micro-Structured Materials

    DTIC Science & Technology

    2012-03-22

    structures and lead to better designs. 84 Appendix A. Particle Swarm Optimization Algorithm In order to validate the need for a new BSDF model ...24 9. Hierarchy representation of a subset of ScatMech BSDF library model classes...polarimetric BRDF at λ=4.3μm of SPP structures with Λ=1.79μm (left), 2μm (middle) and 2.33μm (right). All components are normalized by dividing by s0

  11. A global magnetic anomaly map. [obtained from POGO satellite data

    NASA Technical Reports Server (NTRS)

    Regan, R. D.; Davis, W. M.; Cain, J. C.

    1974-01-01

    A subset of POGO satellite magnetometer data has been formed that is suitable for analysis of crustal magnetic anomalies. Using a thirteenth-order field model fit to these data, magnetic residuals have been calculated over the world to latitude limits of ±50°. These residuals, averaged over one-degree latitude-longitude blocks, represent a detailed global magnetic anomaly map derived solely from satellite data. Preliminary analysis of the map indicates that the anomalies are real and of geological origin.
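
    Averaging scattered residuals into one-degree blocks is a simple binning operation; the sketch below uses synthetic residuals and latitude limits of ±50° purely for illustration.

    ```python
    import numpy as np

    # Bin scattered magnetic residuals into 1-degree latitude-longitude block means.
    rng = np.random.default_rng(5)
    n = 200_000
    lat = rng.uniform(-50.0, 50.0, n)
    lon = rng.uniform(-180.0, 180.0, n)
    residual = rng.normal(0.0, 6.0, n)            # nT, after removing the main-field model

    ilat = np.floor(lat + 50).astype(int)          # 0..99  -> 100 one-degree rows
    ilon = np.floor(lon + 180).astype(int)         # 0..359 -> 360 one-degree columns
    idx = ilat * 360 + ilon

    sums = np.bincount(idx, weights=residual, minlength=100 * 360)
    counts = np.bincount(idx, minlength=100 * 360)
    block_mean = np.divide(sums, counts, out=np.full(sums.shape, np.nan), where=counts > 0)
    anomaly_map = block_mean.reshape(100, 360)     # one value per 1-degree block

    print("blocks with data:", int((counts > 0).sum()), "of", anomaly_map.size)
    ```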

  12. What to Expect If Your Legislature Orders Literacy Testing

    ERIC Educational Resources Information Center

    Van Til, William

    1978-01-01

    Based on the Florida experience, one should expect, among other things, new problems for minority students, lawsuits against the tests, calls for tests of teachers, and scapegoating and blaming. (Author/IRT)

  13. Monocyte Subset Dynamics in Human Atherosclerosis Can Be Profiled with Magnetic Nano-Sensors

    PubMed Central

    Wildgruber, Moritz; Lee, Hakho; Chudnovskiy, Aleksey; Yoon, Tae-Jong; Etzrodt, Martin; Pittet, Mikael J.; Nahrendorf, Matthias; Croce, Kevin; Libby, Peter; Weissleder, Ralph; Swirski, Filip K.

    2009-01-01

    Monocytes are circulating macrophage and dendritic cell precursors that populate healthy and diseased tissue. In humans, monocytes consist of at least two subsets whose proportions in the blood fluctuate in response to coronary artery disease, sepsis, and viral infection. Animal studies have shown that specific shifts in the monocyte subset repertoire either exacerbate or attenuate disease, suggesting a role for monocyte subsets as biomarkers and therapeutic targets. Assays are therefore needed that can selectively and rapidly enumerate monocytes and their subsets. This study shows that two major human monocyte subsets express similar levels of the receptor for macrophage colony stimulating factor (MCSFR) but differ in their phagocytic capacity. We exploit these properties and custom-engineer magnetic nanoparticles for ex vivo sensing of monocytes and their subsets. We present a two-dimensional enumerative mathematical model that simultaneously reports number and proportion of monocyte subsets in a small volume of human blood. Using a recently described diagnostic magnetic resonance (DMR) chip with 1 µl sample size and high throughput capabilities, we then show that application of the model accurately quantifies subset fluctuations that occur in patients with atherosclerosis. PMID:19461894

  14. Dynamic equilibrium of heterogeneous and interconvertible multipotent hematopoietic cell subsets

    PubMed Central

    Weston, Wendy; Zayas, Jennifer; Perez, Ruben; George, John; Jurecic, Roland

    2014-01-01

    Populations of hematopoietic stem cells and progenitors are quite heterogeneous and consist of multiple cell subsets with distinct phenotypic and functional characteristics. Some of these subsets also appear to be interconvertible and oscillate between functionally distinct states. The multipotent hematopoietic cell line EML has emerged as a unique model to study the heterogeneity and interconvertibility of multipotent hematopoietic cells. Here we describe extensive phenotypic and functional heterogeneity of EML cells which stems from the coexistence of multiple cell subsets. Each of these subsets is phenotypically and functionally heterogeneous, and displays distinct multilineage differentiation potential, cell cycle profile, proliferation kinetics, and expression pattern of HSC markers and some of the key lineage-associated transcription factors. Analysis of their maintenance revealed that on a population level all EML cell subsets exhibit cell-autonomous interconvertible properties, with the capacity to generate all other subsets and re-establish complete parental EML cell population. Moreover, all EML cell subsets generated during multiple cell generations maintain their distinct phenotypic and functional signatures and interconvertible properties. The model of EML cell line suggests that interconvertible multipotent hematopoietic cell subsets coexist in a homeostatically maintained dynamic equilibrium which is regulated by currently unknown cell-intrinsic mechanisms. PMID:24903657

  15. Dynamic equilibrium of heterogeneous and interconvertible multipotent hematopoietic cell subsets.

    PubMed

    Weston, Wendy; Zayas, Jennifer; Perez, Ruben; George, John; Jurecic, Roland

    2014-06-06

    Populations of hematopoietic stem cells and progenitors are quite heterogeneous and consist of multiple cell subsets with distinct phenotypic and functional characteristics. Some of these subsets also appear to be interconvertible and oscillate between functionally distinct states. The multipotent hematopoietic cell line EML has emerged as a unique model to study the heterogeneity and interconvertibility of multipotent hematopoietic cells. Here we describe extensive phenotypic and functional heterogeneity of EML cells which stems from the coexistence of multiple cell subsets. Each of these subsets is phenotypically and functionally heterogeneous, and displays distinct multilineage differentiation potential, cell cycle profile, proliferation kinetics, and expression pattern of HSC markers and some of the key lineage-associated transcription factors. Analysis of their maintenance revealed that on a population level all EML cell subsets exhibit cell-autonomous interconvertible properties, with the capacity to generate all other subsets and re-establish complete parental EML cell population. Moreover, all EML cell subsets generated during multiple cell generations maintain their distinct phenotypic and functional signatures and interconvertible properties. The model of EML cell line suggests that interconvertible multipotent hematopoietic cell subsets coexist in a homeostatically maintained dynamic equilibrium which is regulated by currently unknown cell-intrinsic mechanisms.

  16. Expression analysis of cancer-testis genes in prostate cancer reveals candidates for immunotherapy.

    PubMed

    Faramarzi, Sepideh; Ghafouri-Fard, Soudeh

    2017-09-01

    Prostate cancer is a prevalent disorder among men with a heterogeneous etiological background. Several molecular events and signaling perturbations have been found in this disorder. Among the genes whose expression is altered during prostate cancer development are the cancer-testis antigens (CTAs). This group of antigens has limited expression in normal adult tissues but aberrant expression in cancers. This property makes them candidate cancer biomarkers and immunotherapeutic targets. Several CTAs have been shown to be immunogenic in prostate cancer patients, and some of them have entered clinical trials. Based on the preliminary data obtained from these trials, it is expected that CTA-based therapeutic options will be beneficial for at least a subset of prostate cancer patients.

  17. Still Virtual After All These Years: Recent Developments in the Virtual Solar Observatory

    NASA Astrophysics Data System (ADS)

    Gurman, J. B.; Bogart, R. S.; Davey, A. R.; Hill, F.; Martens, P. C.; Zarro, D. M.; Team, T. v.

    2008-05-01

    While continuing to add access to data from new missions, including Hinode and STEREO, the Virtual Solar Observatory is also being enhanced as a research tool by the addition of new features such as the unified representation of catalogs and event lists (to allow joined searches in two or more catalogs) and workable representation and manipulation of large numbers of search results (as are expected from the Solar Dynamics Observatory database). Working with our RHESSI colleagues, we have also been able to improve the performance of IDL-callable vso_search and vso_get functions, to the point that use of those routines is a practical alternative to reproducing large subsets of mission data on one's own LAN.

  18. Experimental evaluation of battery cells for space-based radar application

    NASA Technical Reports Server (NTRS)

    Maskell, Craig A.; Metcalfe, John R.

    1994-01-01

    A test program was conducted to characterize five space-quality nickel-hydrogen (NiH2) battery cells. A subset of those tests was also done on five commercial nickel-cadmium (NiCd) cells, for correlation to the characteristics of an Energy Storage Unit Simulator. The test program implemented the recommendations of a 1991 study, as reported to IECEC-92. The findings of the tests are summarized, and expected impacts on the performance of the electrical power system (EPS) of a large space-based radar (SBR) surveillance satellite are derived. The main characteristics examined and compared were terminal voltage (average and transient) and capacity through discharge, equivalent series resistance, derived inductance and capacitance, charge return efficiency, and inter-pulse charge effectiveness.

  19. Still Virtual After All These Years: Recent Developments in the Virtual Solar Observatory

    NASA Technical Reports Server (NTRS)

    Gurman, Joseph B.; Bogart; Davey; Hill; Masters; Zarro

    2008-01-01

    While continuing to add access to data from new missions, including Hinode and STEREO, the Virtual Solar Observatory is also being enhanced as a research tool by the addition of new features such as the unified representation of catalogs and event lists (to allow joined searches in two or more catalogs) and workable representation and manipulation of large numbers of search results (as are expected from the Solar Dynamics Observatory database). Working with our RHESSI colleagues, we have also been able to improve the performance of IDL-callable vso_search and vso_get functions, to the point that use of those routines is a practical alternative to reproducing large subsets of mission data on one's own LAN.

  20. Data-driven confounder selection via Markov and Bayesian networks.

    PubMed

    Häggström, Jenny

    2018-06-01

    To estimate a causal effect on an outcome without bias, unconfoundedness is often assumed. If there is sufficient knowledge of the underlying causal structure, then existing confounder selection criteria can be used to select subsets of the observed pretreatment covariates, X, sufficient for unconfoundedness, if such subsets exist. Here, estimation of these target subsets is considered when the underlying causal structure is unknown. The proposed method is to model the causal structure by a probabilistic graphical model, for example, a Markov or Bayesian network, estimate this graph from observed data, and select the target subsets given the estimated graph. The approach is evaluated by simulation both in a high-dimensional setting where unconfoundedness holds given X and in a setting where unconfoundedness only holds given subsets of X. Several common target subsets are investigated and the selected subsets are compared with respect to accuracy in estimating the average causal effect. The proposed method is implemented with existing software that can easily handle high-dimensional data, in terms of both large samples and a large number of covariates. The results from the simulation study show that, if unconfoundedness holds given X, this approach is very successful in selecting the target subsets, outperforming alternative approaches based on random forests and LASSO, and that the subset estimating the target subset containing all causes of outcome yields the smallest MSE in the average causal effect estimation. © 2017, The International Biometric Society.
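
    As a rough illustration of the final selection step, the sketch below assumes a causal graph has already been estimated (the DAG, treatment T, outcome Y and covariate names are hypothetical stand-ins, not the paper's data) and extracts two common target subsets: the observed parents of treatment and the observed causes of the outcome.

        # A minimal sketch of graph-based confounder selection, assuming the causal
        # structure has already been estimated as a DAG (graph below is hypothetical).
        import networkx as nx

        def target_subsets(dag, treatment, outcome, covariates):
            """Return the observed parents of treatment and the observed causes of outcome."""
            parents_of_treatment = set(dag.predecessors(treatment)) & set(covariates)
            causes_of_outcome = set(nx.ancestors(dag, outcome)) & set(covariates)
            return parents_of_treatment, causes_of_outcome

        # Hypothetical estimated graph: X1 -> T, X1 -> Y, X2 -> Y, T -> Y
        dag = nx.DiGraph([("X1", "T"), ("X1", "Y"), ("X2", "Y"), ("T", "Y")])
        print(target_subsets(dag, "T", "Y", ["X1", "X2", "X3"]))
        # ({'X1'}, {'X1', 'X2'})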

  1. Role of miRNAs in CD4 T cell plasticity during inflammation and tolerance

    PubMed Central

    Sethi, Apoorva; Kulkarni, Neeraja; Sonar, Sandip; Lal, Girdhari

    2013-01-01

    Gene expression is tightly regulated in a tuneable, cell-specific and time-dependent manner. Recent advances in epigenetics and non-coding RNA (ncRNA) have revolutionized the concept of gene regulation. In regulating transcription, ncRNAs can respond to extracellular signals more promptly than the transcription factors present in the cells. microRNAs (miRNAs) are ncRNAs (~22 bp) encoded in the genome, present as intergenic or oriented antisense to neighboring genes. The strategic location of miRNAs within coding genes helps couple their expression with that of the host genes. miRNAs, together with the complex machinery called the RNA-induced silencing complex (RISC), interact with target mRNAs and either degrade the mRNA or inhibit its translation. CD4 T cells play an important role in the generation and maintenance of inflammation and tolerance. Cytokines and chemokines present in the inflamed microenvironment control the differentiation and function of the various subsets of CD4 T cells [Th1, Th2, Th17, and regulatory CD4 T cells (Tregs)]. Recent studies suggest that miRNAs play an important role in the development and function of all subsets of CD4 T cells. In the current review, we focus on how various miRNAs are regulated by cell-extrinsic and -intrinsic signaling, and how miRNAs affect the transdifferentiation of CD4 T cell subsets and control their plasticity during inflammation and tolerance. PMID:23386861

  2. Pedotransfer functions for isoproturon sorption on soils and vadose zone materials.

    PubMed

    Moeys, Julien; Bergheaud, Valérie; Coquet, Yves

    2011-10-01

    Sorption coefficients (the linear K(D) or the non-linear K(F) and N(F)) are critical parameters in models of pesticide transport to groundwater or surface water. In this work, a dataset of isoproturon sorption coefficients and corresponding soil properties (264 K(D) and 55 K(F)) was compiled, and pedotransfer functions were built for predicting isoproturon sorption in soils and vadose zone materials. These were benchmarked against various other prediction methods. The results show that the organic carbon content (OC) and pH are the two main soil properties influencing isoproturon K(D). The pedotransfer function is K(D) = 1.7822 + 0.0162 OC^1.5 - 0.1958 pH (K(D) in L kg(-1) and OC in g kg(-1)). For low-OC soils (OC < 6.15 g kg(-1)), clay and pH are most influential. The pedotransfer function is then K(D) = 0.9980 + 0.0002 clay - 0.0990 pH (clay in g kg(-1)). Benchmarking K(D) estimations showed that functions calibrated on more specific subsets of the data perform better on these subsets than functions calibrated on larger subsets. Predicting isoproturon sorption in soils in unsampled locations should rely, whenever possible, and by order of preference, on (a) site- or soil-specific pedotransfer functions, (b) pedotransfer functions calibrated on a large dataset, (c) K(OC) values calculated on a large dataset, or (d) K(OC) values taken from existing pesticide properties databases. Copyright © 2011 Society of Chemical Industry.
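
    The two reported pedotransfer functions can be combined into a single helper; using the OC threshold of 6.15 g kg(-1) to switch between them is an assumption based on the abstract, and the example values are purely illustrative.

        # A minimal sketch of the two pedotransfer functions above (units: K_D in L/kg,
        # OC and clay in g/kg); applying the 6.15 g/kg OC threshold to choose between
        # them is an assumption drawn from the abstract.
        def isoproturon_kd(oc, clay, ph):
            if oc < 6.15:                      # low-OC soils: clay and pH dominate
                return 0.9980 + 0.0002 * clay - 0.0990 * ph
            return 1.7822 + 0.0162 * oc**1.5 - 0.1958 * ph

        print(round(isoproturon_kd(oc=15.0, clay=250.0, ph=6.5), 3))  # 1.451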

  3. Selecting informative subsets of sparse supermatrices increases the chance to find correct trees.

    PubMed

    Misof, Bernhard; Meyer, Benjamin; von Reumont, Björn Marcus; Kück, Patrick; Misof, Katharina; Meusemann, Karen

    2013-12-03

    Character matrices with extensive missing data are frequently used in phylogenomics, with potentially detrimental effects on the accuracy and robustness of tree inference. Therefore, many investigators select taxa and genes with high data coverage. The drawback of these selections is their exclusive reliance on data coverage without consideration of actual signal in the data, which might therefore not deliver optimal data matrices in terms of potential phylogenetic signal. To circumvent this problem, we developed a heuristic, implemented in a software tool called mare, which (1) assesses the information content of genes in supermatrices using a measure of potential signal combined with data coverage, and (2) reduces supermatrices with a simple hill climbing procedure to submatrices with high total information content. We conducted simulation studies using matrices of 50 taxa × 50 genes with heterogeneous phylogenetic signal among genes and data coverage between 10-30%. On these matrices, Maximum Likelihood (ML) tree reconstructions failed to recover correct trees. Selecting a data subset with the approach proposed here increased the chance of recovering correct partial trees more than 10-fold. The selection of data subsets with the proposed simple hill climbing procedure performed well whether it considered the information content or just simple presence/absence information for genes. We also applied our approach to an empirical data set addressing questions of vertebrate systematics. With this empirical dataset, selecting a data subset with high information content that supported a tree with high average bootstrap support was most successful when the information content of genes was considered. Our analyses of simulated and empirical data demonstrate that sparse supermatrices can be reduced on a formal basis, outperforming the commonly used simple selections of taxa and genes with high data coverage.
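
    The sketch below is a hypothetical, stripped-down version of the hill climbing idea: repeatedly drop the taxon or gene whose removal most improves a score of the remaining submatrix. The score function here is only a placeholder; mare combines a signal-based information content measure with data coverage.

        # Hypothetical hill-climbing reduction of a taxa x genes coverage matrix.
        import numpy as np

        def score(m):
            # Placeholder objective: mean coverage weighted by matrix size.
            # mare uses a signal-based information content measure instead.
            return m.mean() * np.sqrt(m.size) if m.size else 0.0

        def hill_climb(matrix):
            rows, cols = list(range(matrix.shape[0])), list(range(matrix.shape[1]))
            best = score(matrix[np.ix_(rows, cols)])
            improved = True
            while improved and len(rows) > 2 and len(cols) > 2:
                improved = False
                for axis, idx in [(0, r) for r in rows] + [(1, c) for c in cols]:
                    trial_rows = [r for r in rows if not (axis == 0 and r == idx)]
                    trial_cols = [c for c in cols if not (axis == 1 and c == idx)]
                    s = score(matrix[np.ix_(trial_rows, trial_cols)])
                    if s > best:                      # accept the first improving drop
                        best, rows, cols, improved = s, trial_rows, trial_cols, True
                        break
            return rows, cols, best

        coverage = (np.random.default_rng(0).random((50, 50)) < 0.2).astype(float)
        print(hill_climb(coverage)[2])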

  4. Morphological and physiological analysis of type-5 and other bipolar cells in the Mouse Retina.

    PubMed

    Hellmer, C B; Zhou, Y; Fyk-Kolodziej, B; Hu, Z; Ichinose, T

    2016-02-19

    Retinal bipolar cells are second-order neurons in the visual system, which initiate multiple image feature-based neural streams. Among more than ten types of bipolar cells, type-5 cells are thought to play a role in motion detection pathways. Multiple subsets of type-5 cells have been reported; however, detailed characteristics of each subset have not yet been elucidated. Here, we found that they exhibit distinct morphological features as well as unique voltage-gated channel expression. We have conducted electrophysiological and immunohistochemical analysis of retinal bipolar cells. We defined type-5 cells by their axon terminal ramification in the inner plexiform layer between the border of ON/OFF sublaminae and the ON choline acetyltransferase (ChAT) band. We found three subsets of type-5 cells: XBCs had the widest axon terminals that stratified at a close approximation of the ON ChAT band as well as exhibiting large voltage-gated Na(+) channel activity, type-5-1 cells had compact terminals and no Na(+) channel activity, and type-5-2 cells contained umbrella-shaped terminals as well as large voltage-gated Na(+) channel activity. Hyperpolarization-activated cyclic nucleotide-gated (HCN) currents were also evoked in all type-5 bipolar cells. We found that XBCs and type-5-2 cells exhibited larger HCN currents than type-5-1 cells. Furthermore, the former two types showed stronger HCN1 expression than the latter. Our previous observations (Ichinose et al., 2014) match the current study: low temporal tuning cells that we named 5S corresponded to 5-1 in this study, while high temporal tuning 5f cells from the previous study corresponded to 5-2 cells. Taken together, we found three subsets of type-5 bipolar cells based on their morphologies and physiological features. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Staggered chiral perturbation theory at next-to-leading order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, Stephen R.; Van de Water, Ruth S.

    2005-06-01

    We study taste and Euclidean rotational symmetry violation for staggered fermions at nonzero lattice spacing using staggered chiral perturbation theory. We extend the staggered chiral Lagrangian to O(a^2 p^2), O(a^4), and O(a^2 m), the orders necessary for a full next-to-leading order calculation of pseudo-Goldstone boson masses and decay constants including analytic terms. We then calculate a number of SO(4) taste-breaking quantities, which involve only a small subset of these next-to-leading order operators. We predict relationships between SO(4) taste-breaking splittings in masses, pseudoscalar decay constants, and dispersion relations. We also find predictions for a few quantities that are not SO(4) breaking. All these results hold also for theories in which the fourth root of the fermionic determinant is taken to reduce the number of quark tastes; testing them will therefore provide evidence for or against the validity of this trick.

  6. HTTP-based Search and Ordering Using ECHO's REST-based and OpenSearch APIs

    NASA Astrophysics Data System (ADS)

    Baynes, K.; Newman, D. J.; Pilone, D.

    2012-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing Earth science data. NASA's Earth Observing System (EOS) ClearingHOuse (ECHO) acts as the core metadata repository for EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. By supporting both the ESIP's Federated Search API and its own search and ordering interfaces, ECHO provides multiple capabilities that facilitate ease of discovery and access to its ever-increasing holdings. Users are able to search and export metadata in a variety of formats including ISO 19115, json, and ECHO10. This presentation aims to inform technically savvy clients interested in automating search and ordering of ECHO's metadata catalog. The audience will be introduced to practical and applicable examples of end-to-end workflows that demonstrate finding, sub-setting and ordering data that is bound by keyword, temporal and spatial constraints. Interaction with the ESIP OpenSearch Interface will be highlighted, as will ECHO's own REST-based API.
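
    As a hedged illustration of the kind of client-side workflow described here, the sketch below composes a keyword/temporal/spatial search over HTTP; the endpoint URL and parameter names are placeholders, not the documented ECHO or ESIP OpenSearch interface.

        # Build (without sending) an OpenSearch-style HTTP query; endpoint and
        # parameter names are hypothetical placeholders.
        import requests

        ENDPOINT = "https://example.gov/opensearch/granules"      # hypothetical URL
        params = {
            "keyword": "MODIS",                                   # free-text constraint
            "startTime": "2012-01-01T00:00:00Z",                  # temporal bounds
            "endTime": "2012-01-31T23:59:59Z",
            "boundingBox": "-10,-10,10,10",                       # west,south,east,north
            "numberOfResults": 20,
        }

        # Prepare the request so the full query URL can be inspected; sending it with a
        # requests.Session would perform the actual search against a real endpoint.
        prepared = requests.Request("GET", ENDPOINT, params=params).prepare()
        print(prepared.url)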

  7. Driven Phases of Quantum Matter

    NASA Astrophysics Data System (ADS)

    Khemani, Vedika; von Keyserlingk, Curt; Lazarides, Achilleas; Moessner, Roderich; Sondhi, Shivaji

    Clean and interacting periodically driven quantum systems are believed to exhibit a single, trivial ``infinite-temperature'' Floquet-ergodic phase. By contrast, I will show that their disordered Floquet many-body localized counterparts can exhibit distinct ordered phases with spontaneously broken symmetries delineated by sharp transitions. Some of these are analogs of equilibrium states, while others are genuinely new to the Floquet setting. I will show that a subset of these novel phases is absolutely stable to all weak local deformations of the underlying Floquet drives, and spontaneously breaks Hamiltonian-dependent emergent symmetries. Strikingly, they simultaneously also break the underlying time-translation symmetry of the Floquet drive, and the order parameter exhibits oscillations at multiples of the fundamental period. This ``time-crystallinity'' goes hand in hand with spatial symmetry breaking and, altogether, these phases exhibit a novel form of simultaneous long-range order in space and time. I will describe how this spatiotemporal order can be detected in experiments involving quenches from a broad class of initial states.

  8. SU-E-J-100: Reconstruction of Prompt Gamma Ray Three Dimensional SPECT Image From Boron Neutron Capture Therapy(BNCT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, D; Jung, J; Suh, T

    2014-06-01

    Purpose: The purpose of this work is to confirm the feasibility of acquiring a three-dimensional single photon emission computed tomography (SPECT) image from boron neutron capture therapy (BNCT) using Monte Carlo simulation. Methods: The pixelated SPECT detector, collimator and phantom were simulated using the Monte Carlo N-Particle eXtended (MCNPX) simulation tool. A thermal neutron source (<1 eV) was used to react with the boron uptake regions (BURs) in the phantom. Each geometry had a spherical pattern, and three different BURs (A, B and C regions, density: 2.08 g/cm^3) were located in the middle of the brain phantom. The data from 128 projections for each sorting process were used to achieve image reconstruction. The ordered subset expectation maximization (OSEM) reconstruction algorithm was used to obtain a tomographic image with eight subsets and five iterations. Receiver operating characteristic (ROC) curve analysis was used to evaluate the geometric accuracy of the reconstructed image. Results: The OSEM image was compared with the original phantom pattern image. The area under the curve (AUC) was calculated as the gross area under each ROC curve. The three calculated AUC values were 0.738 (A region), 0.623 (B region), and 0.817 (C region). The differences between the distances separating the centers of two boron regions and the distances between the corresponding maximum count points were 0.3 cm, 1.6 cm and 1.4 cm. Conclusion: The possibility of extracting a 3D BNCT SPECT image was confirmed using the Monte Carlo simulation and the OSEM algorithm. The prospects for obtaining an actual BNCT SPECT image were estimated from the quality of the simulated image and the simulation conditions. When multiple tumor regions are to be treated using BNCT, a reasonable model to determine how many useful images can be obtained from the SPECT could be provided to BNCT facilities. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP) (Grant No. 200900420) and the Radiation Technology Research and Development program (Grant No. 2013043498), Republic of Korea.
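
    For readers unfamiliar with the reconstruction step, the sketch below shows a generic OSEM update for a linear emission model (eight subsets, five iterations, as above); the system matrix and projection data are toy stand-ins rather than the simulated BNCT geometry.

        # Generic OSEM sketch for y ~ Poisson(A @ x); A has shape (n_bins, n_voxels).
        import numpy as np

        def osem(A, y, n_subsets=8, n_iters=5):
            n_bins, n_voxels = A.shape
            x = np.ones(n_voxels)                                 # uniform initial image
            subsets = np.array_split(np.arange(n_bins), n_subsets)
            for _ in range(n_iters):
                for idx in subsets:                               # one update per subset
                    As, ys = A[idx], y[idx]
                    ratio = ys / np.clip(As @ x, 1e-12, None)     # measured / projected
                    x = x * (As.T @ ratio) / np.clip(As.sum(axis=0), 1e-12, None)
            return x

        rng = np.random.default_rng(1)
        A = rng.random((128, 64))                                 # toy system matrix
        x_true = rng.random(64)
        y = rng.poisson(A @ x_true * 50) / 50.0                   # noisy projections
        print(np.corrcoef(osem(A, y), x_true)[0, 1])              # strong correlation for this toy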

  9. Computational Analysis of Host-Pathogen Protein Interactions between Humans and Different Strains of Enterohemorrhagic Escherichia coli.

    PubMed

    Bose, Tungadri; Venkatesh, K V; Mande, Sharmila S

    2017-01-01

    Serotype O157:H7, an enterohemorrhagic Escherichia coli (EHEC), is known to cause gastrointestinal and systemic illnesses ranging from diarrhea and hemorrhagic colitis to potentially fatal hemolytic uremic syndrome. Specific genetic factors like ompA, nsrR , and LEE genes are known to play roles in EHEC pathogenesis. However, these factors are not specific to EHEC and their presence in several non-pathogenic strains indicates that additional factors are involved in pathogenicity. We propose a comprehensive effort to screen for such potential genetic elements, through investigation of biomolecular interactions between E. coli and their host. In this work, an in silico investigation of the protein-protein interactions (PPIs) between human cells and four EHEC strains (viz., EDL933, Sakai, EC4115, and TW14359) was performed in order to understand the virulence and host-colonization strategies of these strains. Potential host-pathogen interactions (HPIs) between human cells and the "non-pathogenic" E. coli strain MG1655 were also probed to evaluate whether and how the variations in the genomes could translate into altered virulence and host-colonization capabilities of the studied bacterial strains. Results indicate that a small subset of HPIs are unique to the studied pathogens and can be implicated in virulence. This subset of interactions involved E. coli proteins like YhdW, ChuT, EivG, and HlyA. These proteins have previously been reported to be involved in bacterial virulence. In addition, clear differences in lineage and clade-specific HPI profiles could be identified. Furthermore, available gene expression profiles of the HPI-proteins were utilized to estimate the proportion of proteins which may be involved in interactions. We hypothesized that a cumulative score of the ratios of bound:unbound proteins (involved in HPIs) would indicate the extent of colonization. Thus, we designed the Host Colonization Index (HCI) measure to determine the host colonization potential of the E. coli strains. Pathogenic strains of E. coli were observed to have higher HCIs as compared to a non-pathogenic laboratory strain. However, no significant differences among the HCIs of the two pathogenic groups were observed. Overall, our findings are expected to provide additional insights into EHEC pathogenesis and are likely to aid in designing alternate preventive and therapeutic strategies.
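
    A hedged sketch of a Host Colonization Index-style score follows: it simply accumulates bound:unbound ratios over the pathogen proteins that participate in host-pathogen interactions. The exact HCI formula in the paper may differ, and the bound proportions shown are hypothetical.

        # Hedged HCI-style score: sum of bound:unbound ratios over HPI proteins.
        # The proportions below are hypothetical expression-derived estimates.
        def colonization_index(bound_fraction):
            """bound_fraction maps each HPI protein to its estimated bound proportion."""
            return sum(f / (1.0 - f) for f in bound_fraction.values() if f < 1.0)

        strain_edl933 = {"YhdW": 0.6, "ChuT": 0.4, "HlyA": 0.7}   # hypothetical values
        strain_mg1655 = {"YhdW": 0.2, "ChuT": 0.1}
        print(colonization_index(strain_edl933) > colonization_index(strain_mg1655))  # True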

  10. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    PubMed

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128^3 or 256^3). Each computed tomography (CT) number was converted to attenuation coefficients for photo absorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128^3 voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction, e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments revealed clearly improved resolution and contrast.

  11. Logical analysis of diffuse large B-cell lymphomas.

    PubMed

    Alexe, G; Alexe, S; Axelrod, D E; Hammer, P L; Weissmann, D

    2005-07-01

    The goal of this study is to re-examine the oligonucleotide microarray dataset of Shipp et al., which contains the intensity levels of 6817 genes of 58 patients with diffuse large B-cell lymphoma (DLBCL) and 19 with follicular lymphoma (FL), by means of the combinatorics, optimisation, and logic-based methodology of logical analysis of data (LAD). The motivations for this new analysis included the previously demonstrated capabilities of LAD and its expected potential (1) to identify different informative genes than those discovered by conventional statistical methods, (2) to identify combinations of gene expression levels capable of characterizing different types of lymphoma, and (3) to assemble collections of such combinations that if considered jointly are capable of accurately distinguishing different types of lymphoma. The central concept of LAD is a pattern or combinatorial biomarker, a concept that resembles a rule as used in decision tree methods. LAD is able to exhaustively generate the collection of all those patterns which satisfy certain quality constraints, through a systematic combinatorial process guided by clear optimization criteria. Then, based on a set covering approach, LAD aggregates the collection of patterns into classification models. In addition, LAD is able to use the information provided by large collections of patterns in order to extract subsets of variables, which collectively are able to distinguish between different types of disease. For the differential diagnosis of DLBCL versus FL, a model based on eight significant genes is constructed and shown to have a sensitivity of 94.7% and a specificity of 100% on the test set. For the prognosis of good versus poor outcome among the DLBCL patients, a model is constructed on another set consisting also of eight significant genes, and shown to have a sensitivity of 87.5% and a specificity of 90% on the test set. The genes selected by LAD also work well as a basis for other kinds of statistical analysis, indicating their robustness. These two models exhibit accuracies that compare favorably to those in the original study. In addition, the current study also provides a ranking by importance of the genes in the selected significant subsets as well as a library of dozens of combinatorial biomarkers (i.e. pairs or triplets of genes) that can serve as a source of mathematically generated, statistically significant research hypotheses in need of biological explanation.
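
    The sensitivity and specificity figures quoted above follow the usual definitions; the sketch below computes them for an illustrative test set (the label counts are chosen only to produce numbers of the same order, not the study's actual data).

        # Sensitivity / specificity from labels and predictions (illustrative data).
        def sensitivity_specificity(y_true, y_pred, positive="DLBCL"):
            tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
            fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
            tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
            fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
            return tp / (tp + fn), tn / (tn + fp)

        y_true = ["DLBCL"] * 19 + ["FL"] * 10              # hypothetical test set
        y_pred = ["DLBCL"] * 18 + ["FL"] + ["FL"] * 10     # one DLBCL case missed
        print(sensitivity_specificity(y_true, y_pred))     # (0.947..., 1.0)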

  12. Order by disorder and gaugelike degeneracy in a quantum pyrochlore antiferromagnet.

    PubMed

    Henley, Christopher L

    2006-02-03

    The (three-dimensional) pyrochlore lattice antiferromagnet with Heisenberg spins of large spin length S is a highly frustrated model with a macroscopic degeneracy of classical ground states. The zero-point energy of (harmonic-order) spin-wave fluctuations distinguishes a subset of these states. I derive an approximate but illuminating effective Hamiltonian, acting within the subspace of Ising spin configurations representing the collinear ground states. It consists of products of Ising spins around loops, i.e., has the form of a Z2 lattice gauge theory. The remaining ground-state entropy is still infinite but not extensive, being O(L) for system size O(L^3). All these ground states have unit cells bigger than those considered previously.

  13. Extension of the root-locus method to a certain class of fractional-order systems.

    PubMed

    Merrikh-Bayat, Farshad; Afshar, Mahdi; Karimi-Ghartemani, Masoud

    2009-01-01

    In this paper, the well-known root-locus method is developed for the special subset of linear time-invariant systems commonly known as fractional-order systems. Transfer functions of these systems are rational functions with polynomials of rational powers of the Laplace variable s. Such systems are defined on a Riemann surface because of their multi-valued nature. A set of rules for plotting the root loci on the first Riemann sheet is presented. The important features of the classical root-locus method such as asymptotes, roots condition on the real axis and breakaway points are extended to the fractional case. It is also shown that the proposed method can assess the closed-loop stability of fractional-order systems in the presence of a varying gain in the loop. Moreover, the effect of perturbation on the root loci is discussed. Three illustrative examples are presented to confirm the effectiveness of the proposed algorithm.
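
    For a commensurate-order special case, the closed-loop poles can be located numerically by the standard substitution w = s^(1/m): solve the resulting integer-order polynomial in w and keep only the roots on the first Riemann sheet. The example characteristic equation below is hypothetical, and the sketch is not the paper's full rule set.

        # First-sheet poles of a commensurate fractional-order characteristic equation.
        import numpy as np

        def first_sheet_poles(coeffs_in_w, m):
            w_roots = np.roots(coeffs_in_w)
            keep = np.abs(np.angle(w_roots)) < np.pi / m      # first Riemann sheet
            return w_roots[keep] ** m                          # map back: s = w**m

        K = 3.0
        # Hypothetical loop: s**1.5 + 2*s**0.5 + (1 + K) = 0; with w = s**0.5 this
        # becomes w**3 + 2*w + (1 + K) = 0.
        print(first_sheet_poles([1.0, 0.0, 2.0, 1.0 + K], m=2))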

  14. Self-organized molecular films with long-range quasiperiodic order.

    PubMed

    Fournée, Vincent; Gaudry, Émilie; Ledieu, Julian; de Weerd, Marie-Cécile; Wu, Dongmei; Lograsso, Thomas

    2014-04-22

    Self-organized molecular films with long-range quasiperiodic order have been grown by using the complex potential energy landscape of quasicrystalline surfaces as templates. The long-range order arises from a specific subset of quasilattice sites acting as preferred adsorption sites for the molecules, thus enforcing a quasiperiodic structure in the film. These adsorption sites exhibit a local 5-fold symmetry resulting from the cut by the surface plane through the cluster units identified in the bulk solid. Symmetry matching between the C60 fullerene and the substrate leads to a preferred adsorption configuration of the molecules with a pentagonal face down, a feature unique to quasicrystalline surfaces, enabling efficient chemical bonding at the molecule-substrate interface. This finding offers opportunities to investigate the physical properties of model 2D quasiperiodic systems, as the molecules can be functionalized to yield architectures with tailor-made properties.

  15. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  16. Classification of endoscopic capsule images by using color wavelet features, higher order statistics and radial basis functions.

    PubMed

    Lima, C S; Barbosa, D; Ramos, J; Tavares, A; Monteiro, L; Carvalho, L

    2008-01-01

    This paper presents a system to support medical diagnosis and detection of abnormal lesions by processing capsule endoscopic images. Endoscopic images possess rich information expressed by texture. Texture information can be efficiently extracted from medium scales of the wavelet transform. The set of features proposed in this paper to code textural information is named color wavelet covariance (CWC). CWC coefficients are based on the covariances of second-order textural measures, and an optimum subset of them is proposed. Third- and fourth-order moments are added to cope with distributions that tend to become non-Gaussian, especially in some pathological cases. The proposed approach is supported by a classifier based on a radial basis function procedure for the characterization of the image regions along the video frames. The whole methodology has been applied to real data containing 6 full endoscopic exams and reached 95% specificity and 93% sensitivity.
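
    The sketch below is a simplified, hypothetical stand-in for the colour wavelet covariance pipeline: decompose each colour channel with a 2-D wavelet transform, take second-order statistics of the detail subbands and form covariances across channels. The paper's CWC features use co-occurrence-based textural measures, so this only illustrates the overall shape of the computation.

        # Simplified colour-wavelet feature sketch (not the paper's exact CWC measures).
        import numpy as np
        import pywt

        def colour_wavelet_features(rgb_image, wavelet="db2", level=2):
            feats = []
            for c in range(3):                                  # one colour channel at a time
                coeffs = pywt.wavedec2(rgb_image[..., c], wavelet, level=level)
                stats = []
                for detail in coeffs[1:]:                       # (cH, cV, cD) per level
                    for band in detail:
                        stats.extend([band.var(), np.abs(band).mean()])
                feats.append(stats)
            feats = np.asarray(feats)                           # 3 x n_statistics
            return np.cov(feats)                                # channel covariance matrix

        img = np.random.default_rng(0).random((64, 64, 3))      # toy image
        print(colour_wavelet_features(img).shape)               # (3, 3)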

  17. Exploring the Presence of a Deaf American Cultural Life Script

    ERIC Educational Resources Information Center

    Clark, M. Diane; Daggett, Dorri J.

    2015-01-01

    Cultural life scripts are defined as culturally shared expectations that focus on a series of events that are ordered in time. In these scripts, generalized expectations for what to expect through the life course are outlined. This study examined the possibility of a Deaf American Life Script developed in relationship to the use of a visual…

  18. Family Matters: Effects of Birth Order, Culture, and Family Dynamics on Surrogate Decision Making

    PubMed Central

    Su, Christopher T.; McMahan, Ryan D.; Williams, Brie A.; Sharma, Rashmi K.; Sudore, Rebecca L.

    2014-01-01

    Cultural attitudes about medical decision making and filial expectations may lead some surrogates to experience stress and family conflict. Thirteen focus groups with racially and ethnically diverse English- and Spanish-speakers from county and Veterans hospitals, senior centers, and cancer support groups were conducted to describe participants’ experiences making serious or end-of-life decisions for others. Filial expectations and family dynamics related to birth order and surrogate decision making were explored using qualitative, thematic content analysis and overarching themes from focus group transcripts were identified. The mean age of the 69 participants was 69 years ± 14 and 29% were African American, 26% were White, 26% were Asian/Pacific Islander, and 19% were Latino. Seventy percent of participants engaged in unprompted discussions about birth order and family dynamics. Six subthemes were identified within 3 overarching categories of communication, emotion, and conflict: Communication – (1) unspoken expectations and (2) discussion of death as taboo; Emotion – (3) emotional stress and (4) feelings of loneliness; and Conflict – (5) family conflict and (6) potential solutions to prevent conflict. These findings suggest that birth order and family dynamics can have profound effects on surrogate stress and coping. Clinicians should be aware of potential unspoken filial expectations for firstborns and help facilitate communication between the patient, surrogate, and extended family to reduce stress and conflict. PMID:24383459

  19. Family matters: effects of birth order, culture, and family dynamics on surrogate decision-making.

    PubMed

    Su, Christopher T; McMahan, Ryan D; Williams, Brie A; Sharma, Rashmi K; Sudore, Rebecca L

    2014-01-01

    Cultural attitudes about medical decision-making and filial expectations may lead some surrogates to experience stress and family conflict. Thirteen focus groups with racially and ethnically diverse English and Spanish speakers from county and Veterans Affairs hospitals, senior centers, and cancer support groups were conducted to describe participants' experiences making serious or end-of-life decisions for others. Filial expectations and family dynamics related to birth order and surrogate decision-making were explored using qualitative, thematic content analysis, and overarching themes from focus group transcripts were identified. The mean age of the 69 participants was 69 ± 14, and 29% were African American, 26% were white, 26% were Asian or Pacific Islander, and 19% were Latino. Seventy percent of participants engaged in unprompted discussions about birth order and family dynamics. Six subthemes were identified within three overarching categories: communication (unspoken expectations and discussion of death as taboo), emotion (emotional stress and feelings of loneliness), and conflict (family conflict and potential solutions to prevent conflict). These findings suggest that birth order and family dynamics can have profound effects on surrogate stress and coping. Clinicians should be aware of potential unspoken filial expectations for firstborns and help facilitate communication between the patient, surrogate, and extended family to reduce stress and conflict. © Published 2013. This article is a U.S. Government work and is in the public domain in the U.S.A.

  20. An Effective Post-Filtering Framework for 3-D PET Image Denoising Based on Noise and Sensitivity Characteristics

    NASA Astrophysics Data System (ADS)

    Kim, Ji Hye; Ahn, Il Jun; Nam, Woo Hyun; Ra, Jong Beom

    2015-02-01

    Positron emission tomography (PET) images usually suffer from a noticeable amount of statistical noise. In order to reduce this noise, a post-filtering process is usually adopted. However, the performance of this approach is limited because the denoising process is mostly performed on the basis of the Gaussian random noise. It has been reported that in a PET image reconstructed by the expectation-maximization (EM), the noise variance of each voxel depends on its mean value, unlike in the case of Gaussian noise. In addition, we observe that the variance also varies with the spatial sensitivity distribution in a PET system, which reflects both the solid angle determined by a given scanner geometry and the attenuation information of a scanned object. Thus, if a post-filtering process based on the Gaussian random noise is applied to PET images without consideration of the noise characteristics along with the spatial sensitivity distribution, the spatially variant non-Gaussian noise cannot be reduced effectively. In the proposed framework, to effectively reduce the noise in PET images reconstructed by the 3-D ordinary Poisson ordered subset EM (3-D OP-OSEM), we first denormalize an image according to the sensitivity of each voxel so that the voxel mean value can represent its statistical properties reliably. Based on our observation that each noisy denormalized voxel has a linear relationship between the mean and variance, we try to convert this non-Gaussian noise image to a Gaussian noise image. We then apply a block matching 4-D algorithm that is optimized for noise reduction of the Gaussian noise image, and reconvert and renormalize the result to obtain a final denoised image. Using simulated phantom data and clinical patient data, we demonstrate that the proposed framework can effectively suppress the noise over the whole region of a PET image while minimizing degradation of the image resolution.
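
    One way to read the mean-variance observation is as a variance-stabilising transform applied before a Gaussian denoiser (BM4D in the paper) and inverted afterwards. The sketch below assumes Var(x) ≈ a·E[x] + b for the sensitivity-denormalised voxels; the coefficients are hypothetical fit values, and the exact transform used in the paper may differ.

        # Hedged variance-stabilisation sketch assuming Var(x) ~ a*E[x] + b.
        import numpy as np

        def stabilise(x, a, b):
            return (2.0 / a) * np.sqrt(np.clip(a * x + b, 0.0, None))

        def unstabilise(z, a, b):
            return (((a * z) / 2.0) ** 2 - b) / a               # algebraic inverse

        a, b = 0.8, 2.0                                          # hypothetical fit values
        x = np.random.default_rng(0).gamma(shape=5.0, scale=2.0, size=10000)
        z = stabilise(x, a, b)                                   # denoise z with a Gaussian filter here
        x_back = unstabilise(z, a, b)
        print(np.allclose(x, x_back))                            # True: round-trip check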

  1. Computational prediction of the Crc regulon identifies genus-wide and species-specific targets of catabolite repression control in Pseudomonas bacteria.

    PubMed

    Browne, Patrick; Barret, Matthieu; O'Gara, Fergal; Morrissey, John P

    2010-11-25

    Catabolite repression control (CRC) is an important global control system in Pseudomonas that fine-tunes metabolism in order to optimise growth and metabolism in a range of different environments. The mechanism of CRC in Pseudomonas spp. centres on the binding of a protein, Crc, to an A-rich motif at the 5' end of an mRNA, resulting in translational down-regulation of target genes. Despite the identification of several Crc targets in Pseudomonas spp., the Crc regulon has remained largely unexplored. In order to predict direct targets of Crc, we used a bioinformatics approach based on detection of A-rich motifs near the initiation of translation of all protein-encoding genes in twelve fully sequenced Pseudomonas genomes. As expected, our data predict that genes related to the utilisation of less preferred nutrients, such as some carbohydrates, nitrogen sources and aromatic carbon compounds, are targets of Crc. A general trend in this analysis is that the regulation of transporters is conserved across species, whereas regulation of specific enzymatic steps or transcriptional activators is often conserved only within a species. Interestingly, some nucleoid-associated proteins (NAPs) such as HU and IHF are predicted to be regulated by Crc. This finding indicates a possible role of Crc in indirect control over a subset of genes that depend on the DNA bending properties of NAPs for expression or repression. Finally, some virulence traits such as alginate and rhamnolipid production also appear to be regulated by Crc, which links nutritional status cues with the regulation of virulence traits. Catabolite repression control regulates a broad spectrum of genes in Pseudomonas. Some targets are genus-wide and are typically related to central metabolism, whereas other targets are species-specific, or even unique to particular strains. Further study of these novel targets will enhance our understanding of how Pseudomonas bacteria integrate nutritional status cues with the regulation of traits that are of ecological, industrial and clinical importance.
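
    The motif-detection step can be caricatured as scanning the region around each start codon for A-rich windows; the window length, A-content threshold and the sequence below are illustrative and not the paper's actual motif model.

        # Illustrative A-rich window scan over a 5' region (parameters are assumptions).
        def a_rich_hits(utr_sequence, window=8, min_a_fraction=0.75):
            hits = []
            seq = utr_sequence.upper()
            for i in range(len(seq) - window + 1):
                sub = seq[i:i + window]
                if sub.count("A") / window >= min_a_fraction:
                    hits.append((i, sub))
            return hits

        print(a_rich_hits("GCAAUAACAAUAAGGAGGUUUGA"))
        # [(2, 'AAUAACAA'), (5, 'AACAAUAA')]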

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreyev, A.

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.

  3. Provincial health accounts in Kerman, Iran: an evidence of a "mixed" healthcare financing system.

    PubMed

    Mehrolhassani, Mohammad Hossein; Jafari, Mohammad; Zeinali, Javad; Ansari, Mina

    2014-02-01

    Provincial Health Accounts (PHA) as a subset of National Health Accounts (NHA) present financial information for health sectors. This supports logical decision-making by policy-makers in order to achieve health system goals, especially Fair Financial Contribution (FFC). This study aimed to examine Health Accounts in Kerman Province. The present analytical study was carried out retrospectively between 2008 and 2011. The research population consisted of urban and rural households as well as providers and financial agents in health sectors of Kerman Province. The purposeful sampling included 16 provincial organizations. To complete the data, the report on Kerman household expenditure was taken as a data source from the Governor-General's office. In order to classify the data, the International Classification for Health Accounts (ICHA) method was used, in which the data set was adjusted for the province. During the study, the governmental and non-governmental fund shares of the health sector in Kerman were 27.22% and 72.78% respectively. The main portion of financial sources (59.41%) was related to private household funds, of which the Out-of-Pocket (OOP) payment amounted to 92.35%. Overall, 54.86% of all financial sources were covered by OOP. The greatest portion of Total Healthcare Expenditures (THEs) (65.19%) was related to curative services. The major portion of healthcare expenditures was related to the OOP payment, which is compatible with the national average rate in Iran. However, health expenditure per capita was two and a half times higher than the national average. By emphasizing the Social Determinants of Health (SDH) approach in the Iranian health system, the portion of OOP payment and curative expenditure are expected to be controlled in the medium term. It is suggested that PHA should be examined annually in a more comprehensive manner to monitor initiatives and reforms in the healthcare sector.

  4. Validation for the Tropical Rainfall Measuring Mission: Lessons Learned and Future Plans

    NASA Technical Reports Server (NTRS)

    Wolff, David B.; Amitai, E.; Marks, D. A.; Silberstein, D.; Lawrence, R. J.

    2005-01-01

    The Tropical Rainfall Measuring Mission (TRMM) was launched in November 1997 and is a highly regarded and successful mission. A major component of the TRMM program was its Ground Validation (GV) program. Through dedicated research and hard work by many groups, both the GV and satellite-retrieved rain estimates have shown a convergence at key GV sites, lending credibility to the global TRMM estimates. To be sure, there are some regional differences between the various satellite estimates themselves, which still need to be addressed; however, it can be said with some certainty that TRMM has provided a high-quality, long-term climatological data set for researchers that provides errors on the order of 10-20%, rather than pre-TRMM era error estimates on the order of 50-100%. The TRMM GV program's main operational task is to provide rainfall products for four sites: Darwin, Australia (DARW); Houston, Texas (HSTN); Kwajalein, Republic of the Marshall Islands (KWAJ); and Melbourne, Florida (MELB). A comparison between TRMM Ground Validation (Version 5) and Satellite (Version 6) rain intensity estimates is presented. The gridded satellite product (3G68) will be compared to GV Level II rain-intensity and -type maps (2A53 and 2A54, respectively). The 3G68 product represents a 0.5 deg x 0.5 deg data grid providing estimates of rain intensities from the TRMM Precipitation Radar (PR), Microwave Imager (TMI) and Combined (COM) algorithms. The comparisons will be sub-setted according to geographical type (land, coast and ocean). The convergence of the GV and satellite estimates bodes well for the proposed Global Precipitation Measurement (GPM) program, and this study and others are being leveraged towards planning GV goals for GPM. A discussion of lessons learned and future plans for TRMM GV in planning for GPM will also be provided.

  5. Resolution recovery for Compton camera using origin ensemble algorithm.

    PubMed

    Andreyev, A; Celler, A; Ozsahin, I; Sitek, A

    2016-08-01

    Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.

  6. Accuracy of Rhenium-188 SPECT/CT activity quantification for applications in radionuclide therapy using clinical reconstruction methods.

    PubMed

    Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna

    2017-07-20

    The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm was used with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery). For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte-Carlo simulations of a commercial gamma camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW scatter correction applied to 188Re, although practical, yields only approximate estimates of the true scatter.
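
    The TEW correction mentioned above estimates the scatter inside the photopeak window from two narrow windows flanking it. A minimal sketch, with illustrative counts and window widths in keV, is given below.

        # Triple-energy-window (TEW) scatter estimate and scatter-corrected counts.
        import numpy as np

        def tew_scatter(lower_counts, upper_counts, w_lower, w_upper, w_peak):
            return (lower_counts / w_lower + upper_counts / w_upper) * w_peak / 2.0

        peak = np.array([950.0, 1020.0, 880.0])          # photopeak-window counts per bin
        lower = np.array([120.0, 135.0, 110.0])          # lower scatter window counts
        upper = np.array([60.0, 72.0, 55.0])             # upper scatter window counts
        scatter = tew_scatter(lower, upper, w_lower=10.0, w_upper=10.0, w_peak=30.0)
        primary = np.clip(peak - scatter, 0.0, None)     # scatter-corrected counts
        print(primary)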

  7. Local order and crystallization of dense polydisperse hard spheres

    NASA Astrophysics Data System (ADS)

    Coslovich, Daniele; Ozawa, Misaki; Berthier, Ludovic

    2018-04-01

    Computer simulations give precious insight into the microscopic behavior of supercooled liquids and glasses, but their typical time scales are orders of magnitude shorter than the experimentally relevant ones. We recently closed this gap for a class of models of size polydisperse fluids, which we successfully equilibrate beyond laboratory time scales by means of the swap Monte Carlo algorithm. In this contribution, we study the interplay between compositional and geometric local orders in a model of polydisperse hard spheres equilibrated with this algorithm. Local compositional order has a weak state dependence, while local geometric order associated to icosahedral arrangements grows more markedly but only at very high density. We quantify the correlation lengths and the degree of sphericity associated to icosahedral structures and compare these results to those for the Wahnström Lennard-Jones mixture. Finally, we analyze the structure of very dense samples that partially crystallized following a pattern incompatible with conventional fractionation scenarios. The crystal structure has the symmetry of aluminum diboride and involves a subset of small and large particles with size ratio approximately equal to 0.5.
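
    A single swap Monte Carlo move for polydisperse hard spheres can be sketched as follows: exchange the diameters of two randomly chosen particles and accept only if no overlap is created. The configuration handling below is deliberately simplified (no periodic boundaries) and is not the production algorithm used in the study.

        # Hedged sketch of one swap Monte Carlo move for polydisperse hard spheres.
        import numpy as np

        def overlaps(pos, diam, i):
            d = np.linalg.norm(pos - pos[i], axis=1)
            d[i] = np.inf                                        # ignore self-distance
            return np.any(d < 0.5 * (diam + diam[i]))

        def swap_move(pos, diam, rng):
            i, j = rng.choice(len(diam), size=2, replace=False)
            diam[i], diam[j] = diam[j], diam[i]                  # propose the diameter swap
            if overlaps(pos, diam, i) or overlaps(pos, diam, j):
                diam[i], diam[j] = diam[j], diam[i]              # reject: swap back
                return False
            return True                                          # accept

        rng = np.random.default_rng(0)
        pos = rng.random((100, 3)) * 10.0                        # toy configuration
        diam = rng.uniform(0.8, 1.2, size=100)
        print(sum(swap_move(pos, diam, rng) for _ in range(1000)), "of 1000 swaps accepted")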

  8. Dynamics of Quantum Causal Structures

    NASA Astrophysics Data System (ADS)

    Castro-Ruiz, Esteban; Giacomini, Flaminia; Brukner, Časlav

    2018-01-01

    It was recently suggested that causal structures are both dynamical, because of general relativity, and indefinite, because of quantum theory. The process matrix formalism furnishes a framework for quantum mechanics on indefinite causal structures, where the order between operations of local laboratories is not definite (e.g., one cannot say whether operation in laboratory A occurs before or after operation in laboratory B). Here, we develop a framework for "dynamics of causal structures," i.e., for transformations of process matrices into process matrices. We show that, under continuous and reversible transformations, the causal order between operations is always preserved. However, the causal order between a subset of operations can be changed under continuous yet nonreversible transformations. An explicit example is that of the quantum switch, where a party in the past affects the causal order of operations of future parties, leading to a transition from a channel from A to B, via superposition of causal orders, to a channel from B to A. We generalize our framework to construct a hierarchy of quantum maps based on transformations of process matrices and transformations thereof.

  9. Usability-driven pruning of large ontologies: the case of SNOMED CT.

    PubMed

    López-García, Pablo; Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan

    2012-06-01

    To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Graph-traversal heuristics provided high coverage (71-96% of terms in the test sets of discharge summaries) at the expense of subset size (17-51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24-55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available.
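
    The frequency-based filtering step can be sketched as keeping only those module concepts whose terms reach a minimum occurrence count in a reference corpus (MEDLINE in the study); the counts and concepts below are illustrative.

        # Illustrative frequency-based pruning of an extracted ontology module.
        def filter_by_frequency(module_concepts, corpus_counts, min_count=5):
            kept = {c for c in module_concepts if corpus_counts.get(c, 0) >= min_count}
            dropped = module_concepts - kept
            return kept, dropped

        module = {"myocardial infarction", "chest pain", "rare legacy concept"}
        counts = {"myocardial infarction": 120000, "chest pain": 45000, "rare legacy concept": 1}
        kept, dropped = filter_by_frequency(module, counts)
        print(len(kept), len(dropped))   # 2 1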

  10. Ovarian phagocyte subsets and their distinct tissue distribution patterns.

    PubMed

    Carlock, Colin; Wu, Jean; Zhou, Cindy; Ross, April; Adams, Henry; Lou, Yahuan

    2013-01-01

    Ovarian macrophages, which play critical roles in various ovarian events, are probably derived from multiple lineages. Thus, a systematic classification of their subsets is a necessary first step for determination of their functions. Utilizing antibodies to five phagocyte markers, i.e. IA/IE (major histocompatibility complex class II), F4/80, CD11b (Mac-1), CD11c, and CD68, this study investigated subsets of ovarian phagocytes in mice. Three-color immunofluorescence and flow cytometry, together with morphological observation on isolated ovarian cells, demonstrated complicated phenotypes of ovarian phagocytes. Four macrophage and one dendritic cell subset, in addition to many minor phagocyte subsets, were identified. A dendritic cell-like population with a unique phenotype of CD11c(high)IA/IE⁻F4/80⁻ was also frequently observed. A preliminary age-dependent study showed dramatic increases in IA/IE⁺ macrophages and IA/IE⁺ dendritic cells after puberty. Furthermore, immunofluorescence on ovarian sections showed that each subset displayed a distinct tissue distribution pattern. The pattern for each subset may hint at its role in ovarian function. In addition, partial isolation of an ovarian macrophage subset using CD11b antibodies was attempted. Establishment of this isolation method may have provided us with a tool for more precise investigation of each subset's functions at the cellular and molecular levels.

  11. Biowaste home composting: experimental process monitoring and quality control.

    PubMed

    Tatàno, Fabio; Pagliaro, Giacomo; Di Giovanni, Paolo; Floriani, Enrico; Mangani, Filippo

    2015-04-01

    Because home composting is a prevention option in managing biowaste at local levels, the objective of the present study was to contribute to the knowledge of the process evolution and compost quality that can be expected and obtained, respectively, in this decentralized option. In this study, organized as the research portion of a provincial project on home composting in the territory of Pesaro-Urbino (Central Italy), four experimental composters were first initiated and temporally monitored. Second, two small sub-sets of selected provincial composters (directly operated by households involved in the project) underwent quality control on their compost products at two different temporal steps. The monitored experimental composters showed overall decreasing profiles versus composting time for moisture, organic carbon, and C/N, as well as overall increasing profiles for electrical conductivity and total nitrogen, which represented qualitative indications of progress in the process. Comparative evaluations of the monitored experimental composters also suggested some interactions in home composting, i.e., high C/N ratios limiting organic matter decomposition rates and final humification levels; high moisture contents restricting the internal temperature regime; nearly horizontal phosphorus and potassium evolutions contributing to limit the rates of increase in electrical conductivity; and prolonged biowaste additions contributing to limit the rate of decrease in moisture. The measures of parametric data variability in the two sub-sets of controlled provincial composters showed decreased variability in moisture, organic carbon, and C/N from the seventh to fifteenth month of home composting, as well as increased variability in electrical conductivity, total nitrogen, and humification rate, which could be considered compatible with the respective nature of decreasing and increasing parameters during composting. The modeled parametric kinetics in the monitored experimental composters, along with the evaluation of the parametric central tendencies in the sub-sets of controlled provincial composters, all indicate that 12-15 months is a suitable duration for the appropriate development of home composting in final and simultaneous compliance with typical reference limits. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Paleosecular variation and time-averaged field analysis over the last 10 Ma from a new global dataset (PSV10)

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Johnson, C. L.; Tauxe, L.; Constable, C.; Jarboe, N.

    2015-12-01

    Previous paleosecular variation (PSV) and time-averaged field (TAF) models draw on compilations of paleodirectional data that lack equatorial and high latitude sites and use latitudinal virtual geomagnetic pole (VGP) cutoffs designed to remove transitional field directions. We present a new selected global dataset (PSV10) of paleodirectional data spanning the last 10 Ma. We include all results calculated with modern laboratory methods, regardless of site VGP colatitude, that meet statistically derived selection criteria. We exclude studies that target transitional field states or identify significant tectonic effects, and correct for any bias from serial correlation by averaging directions from sequential lava flows. PSV10 has an improved global distribution compared with previous compilations, comprising 1519 sites from 71 studies. VGP dispersion in PSV10 varies with latitude, exhibiting substantially higher values in the southern hemisphere than at corresponding northern latitudes. Inclination anomaly estimates at many latitudes are within error of an expected GAD field, but significant negative anomalies are found at equatorial and mid-northern latitudes. Current PSV models, Model G and TK03, do not fit observed PSV or TAF latitudinal behavior in PSV10, or subsets of normal and reverse polarity data, particularly for southern hemisphere sites. Attempts to fit these observations with simple modifications to TK03 showed slight statistical improvements, but the misfits still exceed acceptable levels. The root-mean-square misfit of TK03 (and subsequent iterations) is substantially lower for the normal polarity subset of PSV10, compared to reverse polarity data. Two-thirds of data in PSV10 are normal polarity, most of which are from the last 5 Ma, so we develop a new TAF model using this subset of data. We use the resulting TAF model to explore whether new statistical PSV models can better describe our new global compilation.
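
    For context, the VGP dispersion referred to above is usually quantified as the angular standard deviation of site VGPs about a reference pole; a common form is shown below as a reference sketch, not necessarily PSV10's exact estimator, which may also correct for within-site scatter.

```latex
% Reference sketch of the VGP angular dispersion commonly reported in PSV studies.
\[
  S_B \;=\; \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\Delta_i^{2}},
\]
% where \Delta_i is the great-circle angle between the i-th site VGP and the
% reference (e.g., spin-axis) pole, and N is the number of sites.
```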

  13. IL15 Infusion of Cancer Patients Expands the Subpopulation of Cytotoxic CD56bright NK Cells and Increases NK-Cell Cytokine Release Capabilities.

    PubMed

    Dubois, Sigrid; Conlon, Kevin C; Müller, Jürgen R; Hsu-Albert, Jennifer; Beltran, Nancy; Bryant, Bonita R; Waldmann, Thomas A

    2017-10-01

    The cytokine IL15 is required for survival and activation of natural killer (NK) cells as well as expansion of NK-cell populations. Here, we compare the effects of continuous IL15 infusions on NK-cell subpopulations in cancer patients. Infusions affected the CD56bright NK-cell subpopulation in that the expansion rates exceeded those of CD56dim NK-cell populations, with a 350-fold increase in their total cell numbers compared with 20-fold expansion for the CD56dim subset. CD56bright NK cells responded with increased cytokine release to various stimuli, as expected given their immunoregulatory functions. Moreover, CD56bright NK cells gained the ability to kill various target cells at levels that are typical for CD56dim NK cells. Some increased cytotoxic activities were also observed for CD56dim NK cells. IL15 infusions induced expression changes on the surface of both NK-cell subsets, resulting in a previously undescribed and similar phenotype. These data suggest that IL15 infusions expand and arm CD56bright NK cells that alone or in combination with tumor-targeting antibodies may be useful in the treatment of cancer. Cancer Immunol Res; 5(10); 929-38. ©2017 American Association for Cancer Research.

  14. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows residuals to be designed that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
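
    To make the role of structural model decomposition concrete, the Python sketch below uses a toy fault-signature table (hypothetical residual and fault names, not the paper's electric-circuit model) in which each residual responds to only a subset of faults, so observed deviations prune the candidate set by intersection; the paper's framework additionally reasons over deviation signs, modes, and observation delays.

```python
# Hedged sketch: fault isolation with residuals that each respond to only a
# subset of faults. Residual and fault names are hypothetical.

# Fault signature: residual -> set of faults it is sensitive to.
SIGNATURE = {
    "r1": {"f_pump", "f_valve"},
    "r2": {"f_valve", "f_sensor"},
    "r3": {"f_pump", "f_sensor"},
}

ALL_FAULTS = frozenset({"f_pump", "f_valve", "f_sensor"})

def isolate(deviated_residuals):
    """Intersect the fault sets of all deviated residuals to get candidates."""
    candidates = set(ALL_FAULTS)
    for r in deviated_residuals:
        candidates &= SIGNATURE[r]
    return candidates

if __name__ == "__main__":
    # Suppose residuals r1 and r3 deviate while r2 stays nominal.
    print(isolate(["r1", "r3"]))   # -> {'f_pump'}
```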

  15. Testing deformation hypotheses by constraints on a time series of geodetic observations

    NASA Astrophysics Data System (ADS)

    Velsink, Hiddo

    2018-01-01

    In geodetic deformation analysis, observations are used to identify form and size changes of a geodetic network, representing objects on the earth's surface. The network points are monitored, often continuously, because of suspected deformations. A deformation may affect many points during many epochs. The problem is that the best description of the deformation is, in general, unknown. To find it, different hypothesised deformation models have to be tested systematically for agreement with the observations. The tests have to be capable of stating, with a certain probability, the size of detectable deformations, and they have to be datum invariant. A statistical criterion is needed to find the best deformation model. Existing methods do not fulfil these requirements. Here we propose a method that formulates the different hypotheses as sets of constraints on the parameters of a least-squares adjustment model. The constraints can relate to subsets of epochs and to subsets of points, thus combining time series analysis and congruence model analysis. The constraints are formulated as nonstochastic observations in an adjustment model of observation equations. This gives an easy way to test the constraints and to get a quality description. The proposed method aims at providing a good discriminating method to find the best description of a deformation. The method is expected to improve the quality of geodetic deformation analysis. We demonstrate the method with an elaborate example.
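
    A minimal numerical illustration of "constraints as nonstochastic observations" is sketched below with synthetic data (not the paper's geodetic model): a deformation hypothesis is appended to a weighted least-squares adjustment as a pseudo-observation with a very small standard deviation, so it acts as an almost hard constraint.

```python
# Hedged sketch: a constraint treated as a (nearly) nonstochastic observation
# in a weighted least-squares adjustment. Synthetic example, illustrative only.
import numpy as np

def adjust(A, y, sigma):
    """Weighted least squares: minimise sum((y - A x)^2 / sigma^2)."""
    W = np.diag(1.0 / sigma**2)
    N = A.T @ W @ A
    return np.linalg.solve(N, A.T @ W @ y)

# Two-epoch point height, parameters x = [h_epoch1, h_epoch2].
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
y = np.array([10.003, 10.011])          # observed heights (m)
sigma = np.array([0.002, 0.002])        # observation standard deviations (m)

# Hypothesis "no deformation": h_epoch1 - h_epoch2 = 0, added as a pseudo-observation
# with a tiny standard deviation so it acts as an (almost) hard constraint.
A_c = np.vstack([A, [1.0, -1.0]])
y_c = np.append(y, 0.0)
sigma_c = np.append(sigma, 1e-6)

print("free adjustment:       ", adjust(A, y, sigma))
print("constrained adjustment:", adjust(A_c, y_c, sigma_c))
```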

  16. Drinking Motives and Alcohol Outcome Expectancies as Mediators of the Association between Negative Urgency and Alcohol Consumption

    PubMed Central

    Anthenien, Amber M.; Lembo, Jordanna; Neighbors, Clayton

    2017-01-01

    Objective To determine whether the effects of negative urgency, a unique facet of impulsivity marked by engaging in potentially unhealthy and rash behaviors in order to cope with anxiety or negative moods, on drinking behavior can be explained by positive and negative alcohol outcome expectancies and specific drinking motives (i.e., coping and enhancement). Methods College students (N = 194) completed web-based surveys in exchange for course credit. Students completed measures of negative urgency, comprehensive effects of alcohol, drinking motives, and alcohol use behaviors. Results Results of path analysis indicated significant indirect effects of negative urgency and alcohol use through both alcohol outcome expectancies and enhancement motives. The effects of enhancement motives on drinking were mediated by positive alcohol outcome expectancies. The effects of coping motives on drinking were not attributable to negative expectancies. Conclusions Individuals high on negative urgency may consume alcohol in order to ameliorate their emotional distress due to strong desires to increase positive and decrease negative experiences associated with drinking. Emotion-focused impulsivity’s influence on drinking outcomes can be ascribed to enhancement motives for drinking as well as positive and negative alcohol outcome expectancies. Prevention efforts should target drinking motives and alcohol outcome expectancies among those higher in negative urgency. PMID:27914226

  17. Drinking motives and alcohol outcome expectancies as mediators of the association between negative urgency and alcohol consumption.

    PubMed

    Anthenien, Amber M; Lembo, Jordanna; Neighbors, Clayton

    2017-03-01

    To determine whether the effects of negative urgency, a unique facet of impulsivity marked by engaging in potentially unhealthy and rash behaviors in order to cope with anxiety or negative moods, on drinking behavior can be explained by positive and negative alcohol outcome expectancies and specific drinking motives (i.e., coping and enhancement). College students (N=194) completed web-based surveys in exchange for course credit. Students completed measures of negative urgency, comprehensive effects of alcohol, drinking motives, and alcohol use behaviors. Results of path analysis indicated significant indirect effects of negative urgency and alcohol use through both alcohol outcome expectancies and enhancement motives. The effects of enhancement motives on drinking were mediated by positive alcohol outcome expectancies. The effects of coping motives on drinking were not attributable to negative expectancies. Individuals high on negative urgency may consume alcohol in order to ameliorate their emotional distress due to strong desires to increase positive and decrease negative experiences associated with drinking. Emotion-focused impulsivity's influence on drinking outcomes can be ascribed to enhancement motives for drinking as well as positive and negative alcohol outcome expectancies. Prevention efforts should target drinking motives and alcohol outcome expectancies among those higher in negative urgency. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Statistical Image Properties in Large Subsets of Traditional Art, Bad Art, and Abstract Art

    PubMed Central

    Redies, Christoph; Brachmann, Anselm

    2017-01-01

    Several statistical image properties have been associated with large subsets of traditional visual artworks. Here, we investigate some of these properties in three categories of art that differ in artistic claim and prestige: (1) Traditional art of different cultural origin from established museums and art collections (oil paintings and graphic art of Western provenance, Islamic book illustration and Chinese paintings), (2) Bad Art from two museums that collect contemporary artworks of lesser importance (© Museum Of Bad Art [MOBA], Somerville, and Official Bad Art Museum of Art [OBAMA], Seattle), and (3) twentieth century abstract art of Western provenance from two prestigious museums (Tate Gallery and Kunstsammlung Nordrhein-Westfalen). We measured the following four statistical image properties: the fractal dimension (a measure relating to subjective complexity); self-similarity (a measure of how much the sections of an image resemble the image as a whole), 1st-order entropy of edge orientations (a measure of how uniformly different orientations are represented in an image); and 2nd-order entropy of edge orientations (a measure of how independent edge orientations are across an image). As shown previously, traditional artworks of different styles share similar values for these measures. The values for Bad Art and twentieth century abstract art show a considerable overlap with those of traditional art, but we also identified numerous examples of Bad Art and abstract art that deviate from traditional art. By measuring statistical image properties, we quantify such differences in image composition for the first time. PMID:29118692
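
    The first-order entropy of edge orientations can be approximated with a few lines of numpy. The sketch below is a simplified stand-in for the measure discussed above (the authors' pipeline also includes second-order entropy, self-similarity, and fractal dimension); the orientation bin count and edge-strength quantile are illustrative parameter choices.

```python
# Hedged sketch: first-order entropy of edge orientations for a grayscale image.
# A simplified stand-in for the measure discussed above, not the authors' code.
import numpy as np

def edge_orientation_entropy(image, n_bins=16, magnitude_quantile=0.9):
    """Shannon entropy (bits) of the orientation histogram of strong edges."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx) % np.pi           # fold orientations into [0, pi)

    strong = magnitude >= np.quantile(magnitude, magnitude_quantile)
    hist, _ = np.histogram(orientation[strong], bins=n_bins, range=(0.0, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noise = rng.random((128, 128))                                 # isotropic edges
    stripes = np.tile(np.sin(np.linspace(0, 20, 128)), (128, 1))   # one dominant orientation
    print("noise:  ", edge_orientation_entropy(noise))   # high entropy expected
    print("stripes:", edge_orientation_entropy(stripes)) # low entropy expected
```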

  19. POLYNOMIAL AND RATIONAL APPROXIMATION OF FUNCTIONS OF SEVERAL VARIABLES WITH CONVEX DERIVATIVES IN THE L_p-METRIC (0 < p\leqslant\infty)

    NASA Astrophysics Data System (ADS)

    Khatamov, A.

    1995-02-01

    Let \operatorname{Conv}_n^{(l)}(\mathscr{G}) be the set of all functions f such that for every n-dimensional unit vector \mathbf{e} the lth derivative in the direction of \mathbf{e}, D^{(l)}(\mathbf{e})f, is continuous on a convex bounded domain \mathscr{G}\subset\mathbf{R}^n (n \geqslant 2) and convex (upwards or downwards) on the nonempty intersection of every line L\subset\mathbf{R}^n with the domain \mathscr{G}, and let M^{(l)}(f,\mathscr{G}) := \sup\bigl\{\bigl\Vert D^{(l)}(\mathbf{e})f\bigr\Vert_{\ldots}\colon \mathbf{e}\in\mathbf{R}^n,\ \Vert\mathbf{e}\Vert=1\bigr\} < \infty. Sharp, in the sense of order of smallness, estimates of best simultaneous polynomial approximations of the functions f\in\operatorname{Conv}_n^{(l)}(\mathscr{G}) for which D^{(l)}(\mathbf{e})f\in\operatorname{Lip}_K 1 for every \mathbf{e}, and of their derivatives, in the metrics of L_p(\mathscr{G}) (0 < p\leqslant\infty) are obtained. It is proved that the corresponding parts of these estimates are preserved for best rational approximations, on any n-dimensional parallelepiped Q, of functions f\in\operatorname{Conv}_n^{(l)}(Q) in the metrics of L_p(Q) (0 < p < \infty), and it is shown that they are sharp in the sense of order of smallness for 0 < p\leqslant 1.

  20. Statistical Image Properties in Large Subsets of Traditional Art, Bad Art, and Abstract Art.

    PubMed

    Redies, Christoph; Brachmann, Anselm

    2017-01-01

    Several statistical image properties have been associated with large subsets of traditional visual artworks. Here, we investigate some of these properties in three categories of art that differ in artistic claim and prestige: (1) Traditional art of different cultural origin from established museums and art collections (oil paintings and graphic art of Western provenance, Islamic book illustration and Chinese paintings), (2) Bad Art from two museums that collect contemporary artworks of lesser importance (© Museum Of Bad Art [MOBA], Somerville, and Official Bad Art Museum of Art [OBAMA], Seattle), and (3) twentieth century abstract art of Western provenance from two prestigious museums (Tate Gallery and Kunstsammlung Nordrhein-Westfalen). We measured the following four statistical image properties: the fractal dimension (a measure relating to subjective complexity); self-similarity (a measure of how much the sections of an image resemble the image as a whole), 1st-order entropy of edge orientations (a measure of how uniformly different orientations are represented in an image); and 2nd-order entropy of edge orientations (a measure of how independent edge orientations are across an image). As shown previously, traditional artworks of different styles share similar values for these measures. The values for Bad Art and twentieth century abstract art show a considerable overlap with those of traditional art, but we also identified numerous examples of Bad Art and abstract art that deviate from traditional art. By measuring statistical image properties, we quantify such differences in image composition for the first time.

  1. A Multination Study of Socioeconomic Inequality in Expectations for Progression to Higher Education: The Role of Between-School Tracking and Ability Stratification

    ERIC Educational Resources Information Center

    Parker, Philip D.; Jerrim, John; Schoon, Ingrid; Marsh, Herbert W.

    2016-01-01

    Persistent inequalities in educational expectations across societies are a growing concern. Recent research has explored the extent to which inequalities in education are due to primary effects (i.e., achievement differentials) versus secondary effects (i.e., choice behaviors net of achievement). We explore educational expectations in order to…

  2. Consumer Expectations of Online Services in the Insurance Industry: An Exploratory Study of Drivers and Outcomes.

    PubMed

    Méndez-Aparicio, M Dolores; Izquierdo-Yusta, Alicia; Jiménez-Zarco, Ana I

    2017-01-01

    Today, the customer-brand relationship is fundamental to a company's bottom line, especially in the service sector and with services offered via online channels. In order to maximize its effects, organizations need (1) to know which factors influence the formation of an individual's service expectations in an online environment; and (2) to establish the influence of these expectations on customers' likelihood of recommending a service before they have even used it. In accordance with the TAM model (Davis, 1989; Davis et al., 1992), the TRA model (Fishbein and Ajzen, 1975), the extended UTAUT model (Venkatesh et al., 2012), and the approach described by Alloza (2011), this work proposes a theoretical model of the antecedents and consequences of consumer expectations of online services. In order to validate the proposed theoretical model, a sample of individual insurance company customers was analyzed. The results showed, first, the importance of customers' expectations with regard to the intention to recommend the "private area" of the company's website to other customers prior to using it themselves. They also revealed the importance to expectations of the antecedents perceived usefulness, ease of use, frequency of use, reputation, and subjective norm.

  3. Consumer Expectations of Online Services in the Insurance Industry: An Exploratory Study of Drivers and Outcomes

    PubMed Central

    Méndez-Aparicio, M. Dolores; Izquierdo-Yusta, Alicia; Jiménez-Zarco, Ana I.

    2017-01-01

    Today, the customer-brand relationship is fundamental to a company’s bottom line, especially in the service sector and with services offered via online channels. In order to maximize its effects, organizations need (1) to know which factors influence the formation of an individual’s service expectations in an online environment; and (2) to establish the influence of these expectations on customers’ likelihood of recommending a service before they have even used it. In accordance with the TAM model (Davis, 1989; Davis et al., 1992), the TRA model (Fishbein and Ajzen, 1975), the extended UTAUT model (Venkatesh et al., 2012), and the approach described by Alloza (2011), this work proposes a theoretical model of the antecedents and consequences of consumer expectations of online services. In order to validate the proposed theoretical model, a sample of individual insurance company customers was analyzed. The results showed, first, the importance of customers’ expectations with regard to the intention to recommend the “private area” of the company’s website to other customers prior to using it themselves. They also revealed the importance to expectations of the antecedents perceived usefulness, ease of use, frequency of use, reputation, and subjective norm. PMID:28798705

  4. Critical thinking ability of new graduate and experienced nurses.

    PubMed

    Fero, Laura J; Witsberger, Catherine M; Wesmiller, Susan W; Zullo, Thomas G; Hoffman, Leslie A

    2009-01-01

    This paper is a report of a study to identify critical thinking learning needs of new and experienced nurses. Concern for patient safety has grown worldwide as high rates of error and injury continue to be reported. In order to improve patient safety, nurses must be able to recognize changes in patient condition, perform independent nursing interventions, anticipate orders and prioritize. In 2004-2006, a consecutive sample of 2144 newly hired nurses in a university-affiliated healthcare system completed the Performance Based Development System Assessment consisting of 10 videotaped vignettes depicting change in patient status. Results were reported as meeting or not meeting expectations. For nurses not meeting expectations, learning needs were identified in one of six subcategories. Overall, 74.9% met assessment expectations. Learning needs identified for nurses not meeting expectations included initiating independent nursing interventions (97.2%), differentiation of urgency (67%), reporting essential clinical data (65.4%), anticipating relevant medical orders (62.8%), providing relevant rationale to support decisions (62.6%) and problem recognition (57.1%). Controlling for level of preparation, associate (P=0.007) and baccalaureate (P<0.0001) nurses were more likely to meet expectations as years of experience increased; a similar trend was not seen for diploma nurses (P=0.10). Controlling for years of experience, new graduates were less likely to meet expectations compared with nurses with ≥10 years' experience (P=0.046). Patient safety may be compromised if a nurse cannot provide clinically competent care. Assessments such as the Performance Based Development System can provide information about learning needs and facilitate individualized orientation targeted to increase performance level.

  5. Clinical relevance and suppressive capacity of human MDSC subsets.

    PubMed

    Lang, Stephan; Bruderek, Kirsten; Kaspar, Cordelia; Höing, Benedikt; Kanaan, Oliver; Dominas, Nina; Hussain, Timon; Droege, Freya; Eyth, Christian Peter; Hadaschik, Boris; Brandau, Sven

    2018-06-18

    Myeloid-derived suppressor cells (MDSC) are a heterogeneous group of pathologically expanded myeloid cells with immunosuppressive activity. In human disease three major MDSC subpopulations can be defined as monocytic M-MDSC, granulocytic PMN-MDSC and early stage e-MDSC, which lack myeloid lineage markers of the former two subsets. It was the purpose of this study to determine and compare the immunosuppressive capacity and clinical relevance of each of these subsets in patients with solid cancer. The frequency of MDSC subsets in the peripheral blood was determined by flow cytometry in a cohort of 49 patients with advanced head and neck cancer (HNC) and 22 patients with urological cancers. Sorted and purified MDSC subsets were tested in vitro for their T cell suppressive capacity. Frequency of circulating MDSC was correlated with overall survival of HNC patients. A high frequency of PMN-MDSC most strongly correlated with poor overall survival in HNC. T cell suppressive activity was higher in PMN-MDSC compared with M-MDSC and e-MDSC. A subset of CD66b+/CD11b+/CD16+ mature PMN-MDSC displayed high expression and activity of arginase I, and was superior to the other subsets in suppressing proliferation and cytokine production of T cells in both cancer types. High levels of this CD11b+/CD16+ PMN-MDSC, but not other PMN-MDSC subsets, strongly correlated with adverse outcome in HNC. A subset of mature CD11b+/CD16+ PMN-MDSC was identified as the MDSC subset with the strongest immunosuppressive activity and the highest clinical relevance. Copyright ©2018, American Association for Cancer Research.

  6. Differential expression of CD44 and CD24 markers discriminates the epithelioid from the fibroblastoid subset in a sarcomatoid renal carcinoma cell line: evidence suggesting the existence of cancer stem cells in both subsets as studied with sorted cells.

    PubMed

    Hsieh, Chin-Hsuan; Hsiung, Shih-Chieh; Yeh, Chi-Tai; Yen, Chih-Feng; Chou, Yah-Huei Wu; Lei, Wei-Yi; Pang, See-Tong; Chuang, Cheng-Keng; Liao, Shuen-Kuei

    2017-02-28

    Epithelioid and fibroblastoid subsets coexist in the human sarcomatoid renal cell carcinoma (sRCC) cell line, RCC52, according to previous clonal studies. Herein, using monoclonal antibodies to CD44 and CD24 markers, we identified and isolated these two populations, and showed that CD44bright/CD24dim and CD44bright/CD24bright phenotypes correspond to epithelioid and fibroblastoid subsets, respectively. Both sorted subsets displayed different levels of tumorigenicity in xenotransplantation, indicating that each harbored its own cancer stem cells (CSCs). The CD44bright/CD24bright subset, associated with higher expression of MMP-7, -8 and TIMP-1 transcripts, showed greater migratory/invasive potential than the CD44bright/CD24dim subset, which was associated with higher expression of MMP-2, -9 and TIMP-2 transcripts. Both subsets differentially expressed stemness gene products c-Myc, Oct4A, Notch1, Notch2 and Notch3, and the RCC stem cell marker, CD105 in 4-5% of RCC52 cells. These results suggest the presence of CSCs in both sRCC subsets for the first time and should therefore be considered potential therapeutic targets for this aggressive malignancy.

  7. Usability-driven pruning of large ontologies: the case of SNOMED CT

    PubMed Central

    Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan

    2012-01-01

    Objectives To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Materials and Methods Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Results Graph-traversal heuristics provided high coverage (71–96% of terms in the test sets of discharge summaries) at the expense of subset size (17–51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24–55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Discussion Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Conclusion Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available. PMID:22268217

  8. Decision Variants for the Automatic Determination of Optimal Feature Subset in RF-RFE.

    PubMed

    Chen, Qi; Meng, Zhaopeng; Liu, Xinyi; Jin, Qianguo; Su, Ran

    2018-06-15

    Feature selection, which identifies a set of the most informative features from the original feature space, has been widely used to simplify the predictor. Recursive feature elimination (RFE), as one of the most popular feature selection approaches, is effective in reducing data dimensionality and increasing efficiency. A ranking of features, as well as candidate subsets with the corresponding accuracy, is produced through RFE. The subset with the highest accuracy (HA) or one with a preset number of features (PreNum) is often used as the final subset. However, this may lead to a large number of features being selected, and if there is no prior knowledge about the preset number, the final subset selection is often ambiguous and subjective. A proper decision variant is in high demand to automatically determine the optimal subset. In this study, we conduct pioneering work to explore the decision variant after obtaining a list of candidate subsets from RFE. We provide a detailed analysis and comparison of several decision variants to automatically select the optimal feature subset. A random forest (RF)-based recursive feature elimination (RF-RFE) algorithm and a voting strategy are introduced. We validated the variants on two totally different molecular biology datasets, one for a toxicogenomic study and the other for protein sequence analysis. The study provides an automated way to determine the optimal feature subset when using RF-RFE.
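
    The candidate-subset idea can be sketched as follows: a recursive elimination loop with a random forest records cross-validated accuracy for each subset size, and a simple decision variant (here, the smallest subset within a tolerance of the best accuracy, chosen purely for illustration) picks the final subset. The paper's specific variants and voting strategy are not reproduced here.

```python
# Hedged sketch: RF-based recursive feature elimination with one illustrative
# decision variant for picking the final subset size.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=30, n_informative=6, random_state=0)
features = list(range(X.shape[1]))
candidates = []                         # (subset, cv_accuracy), one per elimination step

while len(features) >= 2:
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(rf, X[:, features], y, cv=5).mean()
    candidates.append((list(features), acc))
    rf.fit(X[:, features], y)
    # Drop the least important feature (step size 1).
    worst = features[int(np.argmin(rf.feature_importances_))]
    features.remove(worst)

# Decision variant: smallest subset within 0.01 of the best accuracy.
best_acc = max(acc for _, acc in candidates)
subset = min((s for s, acc in candidates if acc >= best_acc - 0.01), key=len)
print(len(subset), "features selected; best CV accuracy", round(best_acc, 3))
```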

  9. 7 CFR 966.71 - Granting exemptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE TOMATOES GROWN IN FLORIDA Order...'s immediate production area and that the grade, size, or quality of the applicant's tomatoes have... expectation. Each certificate shall permit the producer to handle the amount of tomatoes specified thereon...

  10. Variation in geographic access to specialist inpatient hospices in England and Wales.

    PubMed

    Gatrell, Anthony C; Wood, D Justin

    2012-07-01

    We seek to map and describe variation in geographic access to the set of 189 specialist adult inpatient hospices in England and Wales. Using almost 35,000 small Census areas (Local Super Output Areas: LSOAs) as our units of analysis, the locations of hospices, and estimated drive times from LSOAs to hospices we construct an accessibility 'score' for each LSOA, for England and Wales as a whole. Data on cancer mortality are used as a proxy for the 'demand' for hospice care and we then identify that subset of small areas in which accessibility (service supply) is relatively poor yet the potential 'demand' for hospice services is above average. That subset is then filtered according to the deprivation score for each LSOA, in order to identify those LSOAs which are also above average in terms of deprivation. While urban areas are relatively well served, large parts of England and Wales have poor access to hospices, and there is a risk that the needs of those living in relatively deprived areas may be unmet. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. A proposed group management scheme for XTP multicast

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    The purpose of a group management scheme is to enable its associated transfer layer protocol to be responsive to user determined reliability requirements for multicasting. Group management (GM) must assist the client process in coordinating multicast group membership, allow the user to express the subset of the multicast group that a particular multicast distribution must reach in order to be successful (reliable), and provide the transfer layer protocol with the group membership information necessary to guarantee delivery to this subset. GM provides services and mechanisms that respond to the need of the client process or process level management protocols to coordinate, modify, and determine attributes of the multicast group, especially membership. XTP GM provides a link between process groups and their multicast groups by maintaining a group membership database that identifies members in a name space understood by the underlying transfer layer protocol. Other attributes of the multicast group useful to both the client process and the data transfer protocol may be stored in the database. Examples include the relative dispersion, most recent update, and default delivery parameters of a group.

  12. Bootstrapping under constraint for the assessment of group behavior in human contact networks

    NASA Astrophysics Data System (ADS)

    Tremblay, Nicolas; Barrat, Alain; Forest, Cary; Nornberg, Mark; Pinton, Jean-François; Borgnat, Pierre

    2013-11-01

    The increasing availability of time- and space-resolved data describing human activities and interactions gives insights into both static and dynamic properties of human behavior. In practice, nevertheless, real-world data sets can often be considered as only one realization of a particular event. This highlights a key issue in social network analysis: the statistical significance of estimated properties. In this context, we focus here on the assessment of quantitative features of specific subsets of nodes in empirical networks. We present a method of statistical resampling based on bootstrapping groups of nodes under constraints within the empirical network. The method enables us to define acceptance intervals for various null hypotheses concerning relevant properties of the subset of nodes under consideration in order to characterize, by a statistical test, its behavior as “normal” or not. We apply this method to a high-resolution data set describing the face-to-face proximity of individuals during two colocated scientific conferences. As a case study, we show how to probe whether colocating the two conferences succeeded in bringing together the two corresponding groups of scientists.
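
    The resampling idea can be illustrated with a small numpy sketch: draw many random node groups satisfying a constraint (here only the group size, a deliberate simplification of the paper's constraints), compute the group statistic for each, and use the resulting distribution as an acceptance interval for the observed group. The network and group below are synthetic.

```python
# Hedged sketch: bootstrapping node groups under a size constraint to build an
# acceptance interval for a group statistic. Synthetic network, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 60
adj = (rng.random((n, n)) < 0.1).astype(int)      # random contact network
adj = np.triu(adj, 1)
adj = adj + adj.T                                 # symmetric, no self-loops

def internal_links(group, adjacency):
    """Number of links with both endpoints inside the group."""
    sub = adjacency[np.ix_(group, group)]
    return int(sub.sum() // 2)

observed_group = np.arange(10)                    # hypothetical group of interest
observed_stat = internal_links(observed_group, adj)

# Resample groups of the same size uniformly at random; degree- or attribute-
# matched constraints would replace this simple draw.
boot = np.array([internal_links(rng.choice(n, size=len(observed_group), replace=False), adj)
                 for _ in range(2000)])
low, high = np.quantile(boot, [0.025, 0.975])
print(f"observed {observed_stat}, 95% acceptance interval [{low:.1f}, {high:.1f}]")
```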

  13. Efficient Method for Scalable Registration of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Prouty, R.; LeMoigne, J.; Halem, M.

    2017-12-01

    The goal of this project is to build a prototype of a resource-efficient pipeline that will provide registration within subpixel accuracy of multitemporal Earth science data. Accurate registration of Earth-science data is imperative to proper data integration and seamless mosaicing of data from multiple times, sensors, and/or observation geometries. Modern registration methods make use of many arithmetic operations and sometimes require complete knowledge of the image domain. As such, while sensors become more advanced and are able to provide higher-resolution data, the memory resources required to properly register these data become prohibitive. The proposed pipeline employs a region of interest extraction algorithm in order to extract image subsets with high local feature density. These image subsets are then used to generate local solutions to the global registration problem. The local solutions are then 'globalized' to determine the deformation model that best solves the registration problem. The region of interest extraction and globalization routines are tested for robustness among the variety of scene-types and spectral locations provided by Earth-observing instruments such as Landsat, MODIS, or ASTER.

  14. Using learning automata to determine proper subset size in high-dimensional spaces

    NASA Astrophysics Data System (ADS)

    Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz

    2017-03-01

    In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Considering the difficulties of dimension reduction in high-dimensional spaces, FSLA's multi-objective functionality is to determine, in an efficient manner, a feature subset that leads to an appropriate tradeoff between the learning algorithm's accuracy and efficiency. First, the feature list is sorted using an existing weighting function, and selected subsets of the list of different sizes are considered. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm and estimates its fitness based on the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification. The results confirm its promising performance in attaining the identified goal.
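
    A toy version of the idea is sketched below: features are ranked by a filter score, candidate subset sizes are the actions of a linear reward-inaction learning automaton, and the reward combines cross-validated accuracy with a size penalty. The ranking function, classifier, penalty, and update scheme are illustrative choices, not the authors' exact FSLA design.

```python
# Hedged sketch: a linear reward-inaction (L_R-I) learning automaton choosing
# among candidate feature-subset sizes. Illustrative stand-in, not FSLA itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=50, n_informative=8, random_state=0)
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]   # best feature first
sizes = [5, 10, 20, 35, 50]                      # candidate subset sizes (automaton actions)

cache = {}
def fitness(k, size_penalty=0.3):
    """Reward in [0, 1]: CV accuracy minus a penalty for large subsets."""
    if k not in cache:
        acc = cross_val_score(KNeighborsClassifier(), X[:, ranking[:k]], y, cv=5).mean()
        cache[k] = max(0.0, acc - size_penalty * k / X.shape[1])
    return cache[k]

rng = np.random.default_rng(0)
p = np.full(len(sizes), 1.0 / len(sizes))        # action probabilities
a = 0.2                                          # learning rate
for _ in range(60):
    i = rng.choice(len(sizes), p=p)
    beta = fitness(sizes[i])                     # reward signal for the chosen action
    p = p - a * beta * p                         # L_R-I update: shrink all actions...
    p[i] += a * beta                             # ...and reinforce the chosen one
print("chosen subset size:", sizes[int(np.argmax(p))])
```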

  15. On the Q-dependence of the lowest-order QED corrections and other properties of the ground 11S-states in the two-electron ions

    NASA Astrophysics Data System (ADS)

    Frolov, Alexei M.

    2015-10-01

    Formulas and expectation values which are needed to determine the lowest-order QED corrections (∼α³) and corresponding recoil (or finite mass) corrections in the two-electron helium-like ions are presented. Other important properties of the two-electron ions are also determined to high accuracy, including the expectation values of the quasi-singular Vinti operator and the ⟨r_{eN}^{-2}⟩ and ⟨r_{ee}^{-2}⟩ expectation values. Elastic scattering of fast electrons by the two-electron ions in the Born approximation is considered. Interpolation formulas are derived for the bound state properties of the two-electron ions as functions of the nuclear electric charge Q.

  16. Application of a symmetric total variation diminishing scheme to aerodynamics of rotors

    NASA Astrophysics Data System (ADS)

    Usta, Ebru

    2002-09-01

    The aerodynamic characteristics of rotors in hover have been studied on stretched non-orthogonal grids using spatially high order symmetric total variation diminishing (STVD) schemes. Several companion numerical viscosity terms have been tested. The effects of higher order metrics, higher order load integrations and turbulence effects on the rotor performance have been studied. Where possible, calculations for 1-D and 2-D benchmark problems have been done on uniform grids, and comparisons with exact solutions have been made to understand the dispersion and dissipation characteristics of these algorithms. A baseline finite volume methodology termed TURNS (Transonic Unsteady Rotor Navier-Stokes) is the starting point for this effort. The original TURNS solver solves the 3-D compressible Navier-Stokes equations in an integral form using a third order upwind scheme. It is first or second order accurate in time. In the modified solver, the inviscid flux at a cell face is decomposed into two parts. The first part of the flux is symmetric in space, while the second part consists of an upwind-biased numerical viscosity term. The symmetric part of the flux at the cell face is computed to fourth-, sixth- or eighth order accuracy in space. The numerical viscosity portion of the flux is computed using either a third order accurate MUSCL scheme or a fifth order WENO scheme. A number of results are presented for the two-bladed Caradonna-Tung rotor and for a four-bladed UH-60A rotor in hover. Comparisons with the original TURNS code and with experiments are given. Results are also presented on the effects of metrics calculations, load integration algorithms, and turbulence models on the solution accuracy. A total of 64 combinations were studied in this thesis work. For brevity, only a small subset of results highlighting the most important conclusions is presented. It should be noted that use of higher order formulations did not affect the temporal stability of the algorithm and did not require any reduction in the time step. The calculations show that the solution accuracy increases when the 3rd order upwind scheme in the baseline algorithm is replaced with 4th and 6th order accurate symmetric flux calculations. A point of diminishing returns is reached as increasingly larger stencils are used on highly stretched grids. The numerical viscosity term, when computed with the third order MUSCL scheme, is very dissipative, and does not resolve the tip vortex well. The WENO5 scheme, on the other hand, significantly improves the tip vortex capturing. The STVD6+WENO5 scheme, in particular, gave the best combination of solution accuracy and efficiency on stretched grids. Spatially fourth order accurate metric calculations were found to be beneficial, but should be used in conjunction with a limiter that drops the metric calculation to a second order accuracy in the vicinity of grid discontinuities. High order integration of loads was found to have a beneficial but small effect on the computed loads. Replacing the Baldwin-Lomax turbulence model with a one equation Spalart-Allmaras model resulted in higher than expected profile power contributions. Nevertheless, the one-equation model is recommended for its robustness, its ability to model separated flows at high thrust settings, and the natural manner in which turbulence in the rotor wake may be treated.
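
    For readers unfamiliar with the flux decomposition described above, a hedged sketch of the general form is given below; the specific symmetric coefficients and dissipation operators used in the modified TURNS solver may differ from this generic fourth-order example.

```latex
% Hedged sketch of a face-flux split into a high-order symmetric part and an
% upwind-biased numerical viscosity term (fourth-order symmetric part shown).
\[
  \hat{F}_{i+1/2}
    = \underbrace{\tfrac{7}{12}\bigl(F_i + F_{i+1}\bigr)
      - \tfrac{1}{12}\bigl(F_{i-1} + F_{i+2}\bigr)}_{\text{symmetric part}}
    \;+\;
    \underbrace{D_{i+1/2}}_{\text{numerical viscosity (e.g., MUSCL3- or WENO5-based)}}
\]
% Sixth- and eighth-order symmetric parts use wider centred stencils.
```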

  17. The relation between global migration and trade networks

    NASA Astrophysics Data System (ADS)

    Sgrignoli, Paolo; Metulini, Rodolfo; Schiavo, Stefano; Riccaboni, Massimo

    2015-01-01

    In this paper we develop a methodology to analyze and compare multiple global networks, focusing our analysis on the relation between human migration and trade. First, we identify the subset of products for which the presence of a community of migrants significantly increases trade intensity, where, to assure comparability across networks, we apply a hypergeometric filter that lets us identify those links whose intensity is significantly higher than expected. Next, proposing a new way to define country neighbors based on the most intense links in the trade network, we use spatial econometrics techniques to measure the effect of migration on international trade, while controlling for network interdependences. Overall, we find that migration significantly boosts trade across countries and we are able to identify product categories for which this effect is particularly strong.
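
    A minimal version of a hypergeometric link filter can be written with scipy. In the sketch below (synthetic integer volumes, not the paper's trade or migration data), a link is kept when the probability of observing at least its recorded volume under random allocation, given the exporter and importer marginal totals, falls below a significance threshold.

```python
# Hedged sketch: hypergeometric filter for link significance in a weighted
# directed network. Synthetic integer volumes, illustrative only.
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(0)
W = rng.integers(0, 50, size=(5, 5))       # volume from exporter i to importer j
np.fill_diagonal(W, 0)

T = W.sum()                                # total volume
row = W.sum(axis=1)                        # exporter totals
col = W.sum(axis=0)                        # importer totals

def link_pvalue(i, j):
    """P(volume >= W[i, j]) under random allocation of unit volumes,
    given the exporter and importer marginals (hypergeometric null)."""
    return hypergeom.sf(W[i, j] - 1, T, row[i], col[j])

alpha = 0.01
significant = [(i, j) for i in range(5) for j in range(5)
               if i != j and W[i, j] > 0 and link_pvalue(i, j) < alpha]
print(len(significant), "links survive the filter")
```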

  18. Electromagnetic thrusters for spacecraft prime propulsion

    NASA Technical Reports Server (NTRS)

    Rudolph, L. K.; King, D. Q.

    1984-01-01

    The benefits of electromagnetic propulsion systems for the next generation of US spacecraft are discussed. Attention is given to magnetoplasmadynamic (MPD) and arc jet thrusters, which form a subset of a larger group of electromagnetic propulsion systems including pulsed plasma thrusters, Hall accelerators, and electromagnetic launchers. Mission/system study results acquired over the last twenty years suggest that for future prime propulsion applications high-power self-field MPD thrusters and low-power arc jets have the greatest potential of all electromagnetic thruster systems. Some of the benefits they are expected to provide include major reductions in required launch mass compared to chemical propulsion systems (particularly in geostationary orbit transfer) and lower life-cycle costs (almost 50 percent less). Detailed schematic drawings are provided which describe some possible configurations for the various systems.

  19. Testing the Merger Paradigm: X-ray Observations of Radio-Selected Sub-Galactic-Scale Binary AGNs

    NASA Astrophysics Data System (ADS)

    Fu, Hai

    2016-09-01

    Interactions play an important role in galaxy evolution. Strong gas inflows are expected in the process of gas-rich mergers, which may fuel intense black hole accretion and star formation. Sub-galactic-scale binary/dual AGNs thus offer elegant laboratories to study the merger-driven co-evolution phase. However, previous samples of kpc-scale binaries are small and heterogeneous. We have identified a flux-limited sample of kpc-scale binary AGNs uniformly from a wide-area high-resolution radio survey conducted by the VLA. Here we propose Chandra X-ray characterization of a subset of four radio-confirmed binary AGNs at z 0.1. Our goal is to compare their X-ray properties with those of matched control samples to test the merger-driven co-evolution paradigm.

  20. Pilot Field Test: Results of Tandem Walk Performance Following Long-Duration Spaceflight

    NASA Technical Reports Server (NTRS)

    Cerisano, J. M.; Reschke, M. F.; Kofman, I. S.; Fisher, E. A.; Gadd, N. E.; Phillips, T. R.; Lee, S. M. C.; Laurie, S. S.; Stenger, M. B.; Bloomberg, J. J.; hide

    2016-01-01

    Coordinated locomotion has proven to be challenging for many astronauts following long duration spaceflight. As NASA's vision for spaceflight points toward interplanetary travel and missions to distant objects, astronauts will not have assistance once they land. Thus, it is vital to develop a knowledge base from which operational guidelines can be written that define when astronauts can be expected to safely perform certain tasks. Data obtained during the Field Test experiment will add important insight to this knowledge base. Specifically, we aim to develop a recovery timeline of functional sensorimotor performance during the first 24 hours and several days after landing. A forerunner of the full Field Test study, the Pilot Field Test (PFT) comprised a subset of the tasks and measurements to be included in the ultimate set.

  1. A cloud physics investigation utilizing Skylab data

    NASA Technical Reports Server (NTRS)

    Alishouse, J.; Jacobowitz, H.; Wark, D. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. The Lowtran 2 program, S191 spectral response, and solar spectrum were used to compute the expected absorption by 2.0 micron band for a variety of cloud pressure levels and solar zenith angles. Analysis of the three long wavelength data channels continued in which it was found necessary to impose a minimum radiance criterion. It was also found necessary to modify the computer program to permit the computation of mean values and standard deviations for selected subsets of data on a given tape. A technique for computing the integrated absorption in the A band was devised. The technique normalizes the relative maximum at approximately .78 micron to the solar irradiance curve and then adjusts the relative maximum at approximately .74 micron to fit the solar curve.

  2. Erlotinib as a single agent in select subsets of patients with advanced non-small-cell lung cancer.

    PubMed

    Carrión, Ramón Pérez; Gracián, Antonio Cubillo; Hernandez, Pedro Salinas

    2007-07-01

    Erlotinib is an orally active inhibitor of the epidermal growth factor receptor that is effective for the treatment of non-small-cell lung cancer (NSCLC). Patients with a poor performance status (PS) of 2 constitute up to 40% of patients with advanced NSCLC. This group of patients has a lower life expectancy and is thought to have a greater degree of treatment-related toxicity. The clinical benefit in 238 patients with poor PS included in an open-label, nonrandomized, phase II trial of erlotinib in advanced/metastatic NSCLC was 57.58%, defined as complete response plus partial response plus stable disease. Median time to progression was 2.9 months. This review will summarize available data on erlotinib in patients with a PS of 2.

  3. The BMC Medicine breast cancer collection: an illustration of contemporary research and clinical care.

    PubMed

    Tripathy, Debu

    2015-09-23

    The field of breast cancer has witnessed clear improvements in survival and reduced morbidity over the last few decades owing to earlier detection as a result of public awareness and screening, as well as treatments involving the disciplines of surgical, radiation and medical oncology along with advances in imaging and pathological diagnostics. However, in the last 5-10 years, newer assays and biological therapies have begun to cross new boundaries with higher rates of cure seen in more aggressive cancers. Even though metastatic breast cancer remains incurable, some, but not all, subsets of patients with breast cancer are living longer and more productive lives. Many challenges still remain, and the development of team science coupled with collaborative clinical research and care is expected to accelerate advances along this trajectory.

  4. Adenovirus-specific T-cell Subsets in Human Peripheral Blood and After IFN-γ Immunomagnetic Selection.

    PubMed

    Qian, Chongsheng; Wang, Yingying; Cai, Huili; Laroye, Caroline; De Carvalho Bittencourt, Marcelo; Clement, Laurence; Stoltz, Jean-François; Decot, Véronique; Reppel, Loïc; Bensoussan, Danièle

    2016-01-01

    Adoptive antiviral cellular immunotherapy by infusion of virus-specific T cells (VSTs) is becoming an alternative treatment for viral infection after hematopoietic stem cell transplantation. The T memory stem cell (TSCM) subset was recently described as exhibiting self-renewal and multipotency properties which are required for sustained efficacy in vivo. We wondered if such a crucial subset for immunotherapy was present in VSTs. We identified, by flow cytometry, TSCM in adenovirus (ADV)-specific interferon (IFN)-γ+ T cells before and after IFN-γ-based immunomagnetic selection, and analyzed the distribution of the main T-cell subsets in VSTs: naive T cells (TN), TSCM, T central memory cells (TCM), T effector memory cells (TEM), and effector T cells (TEFF). In this study all of the different T-cell subsets were observed in the blood sample from healthy donor ADV-VSTs, both before and after IFN-γ-based immunomagnetic selection. As the IFN-γ-based immunomagnetic selection system sorts mainly the most differentiated T-cell subsets, we observed that TEM was always the major T-cell subset of ADV-specific T cells after immunomagnetic isolation and especially after expansion in vitro. Comparing T-cell subpopulation profiles before and after in vitro expansion, we observed that in vitro cell culture with interleukin-2 resulted in a significant expansion of TN-like, TCM, TEM, and TEFF subsets in CD4+IFN-γ+ T cells and of TCM and TEM subsets only in CD8+IFN-γ+ T cells. We demonstrated the presence of all T-cell subsets in IFN-γ+ VSTs including the TSCM subpopulation, although this was weakly selected by the IFN-γ-based immunomagnetic selection system.

  5. Characterization Tests of WFC3 Filters

    NASA Technical Reports Server (NTRS)

    Baggett, S.; Boucarut, R.; Telfer, R.; Quijano, J. Kim; Quijada, M.; Arsenovic, P.; Brown, T.; Dailey, M.; Figer, D.; Hilbert, B.

    2006-01-01

    The WFC3 instrument to be installed on HST during the next servicing mission consists of a UVIS and an IR channel. Each channel is allocated its own complement of filters: 48 elements for the UVIS (42 filters, 5 quads, and 1 UV grism) and 17 slots for the IR (15 filters and 2 grisms). While a majority of the UVIS filters exhibit excellent performance consistent with or exceeding expectations, a subset shows significant filter ghosts. Procurement of improved replacement filters is in progress, and a summary of the characterization tests being performed on the new filters is presented. In the IR channel, while no filter ghosting was detected in any of the filters during thermal vacuum testing, the grisms were found to be installed incorrectly; they have been removed and will be reinstalled. In addition, due to the significantly improved response blueward of 800 nm expected in the new substrate-removed IR detector (see Invited talk by R. A. Kimble, this volume), two IR filters originally constructed on a fused silica substrate are being remade using an IR transmitting color glass to block any visible light transmission. Tests of the new IR filters and preparations for the grism reinstallation are summarized.

  6. Polysensitization and individual susceptibility to allergic contact dermatitis.

    PubMed

    Gosnell, Amy L; Schmotzer, Brian; Nedorost, Susan T

    2015-01-01

    Patients with allergic contact dermatitis to 1 antigen have been shown to be at increased risk of developing delayed type hypersensitivity reactions to additional antigens. Both environmental and genetic factors likely influence the risk of sensitization. The aim of this study was to determine whether polysensitization occurs at a higher frequency than would be expected based on chance and whether polysensitization occurs more often in subsets of patients with hand involvement and atopic dermatitis. From a database of patch test results from a single practitioner, the probability of having positive reactions to 3 or more unrelated allergens was calculated under the assumption that positive reactions are independent and compared with the observed proportion having positive reactions to 3 or more unrelated allergens. The analysis was repeated excluding patients with leg involvement as a proxy for venous insufficiency dermatitis. The proportion of patients from the polysensitized and nonpolysensitized cohorts with either hand involvement or a history of atopic dermatitis was also calculated. Polysensitization occurs more often than expected based on chance. Polysensitized patients were more likely to have hand dermatitis. Atopic dermatitis was not significantly associated with polysensitization in this analysis. Polysensitized individuals may represent a phenotype with increased genetic susceptibility to sensitization.
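
    The null calculation described above, i.e. the chance of reacting to three or more unrelated allergens if reactions were independent, is a Poisson-binomial tail probability and can be computed exactly; the sketch below uses made-up per-allergen positivity rates rather than the practice's patch-test data.

```python
# Hedged sketch: probability of >= 3 positive patch-test reactions assuming
# independence across allergens (Poisson-binomial tail). Rates are illustrative.
import numpy as np

def prob_at_least(k, rates):
    """Exact P(number of positives >= k) for independent reactions, via dynamic programming."""
    dist = np.zeros(len(rates) + 1)    # dist[c] = P(exactly c positives so far)
    dist[0] = 1.0
    for p in rates:
        dist[1:] = dist[1:] * (1 - p) + dist[:-1] * p
        dist[0] *= (1 - p)
    return dist[k:].sum()

rates = np.array([0.12, 0.08, 0.05, 0.04, 0.03, 0.03, 0.02, 0.02])  # per-allergen positivity
expected = prob_at_least(3, rates)
observed = 0.09                                    # hypothetical observed proportion
print(f"expected P(>=3 positives) = {expected:.4f}, observed = {observed:.2f}")
```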

  7. Transient Hypothyroidism after Radioiodine for Graves’ Disease: Challenges in Interpreting Thyroid Function Tests

    PubMed Central

    Sheehan, Michael T.; Doi, Suhail A.R.

    2016-01-01

    Graves’ disease is the most common cause of hyperthyroidism and is often managed with radioactive iodine (RAI) therapy. With current dosing schemes, the vast majority of patients develop permanent post-RAI hypothyroidism and are placed on life-long levothyroxine therapy. This hypothyroidism typically occurs within the first 3 to 6 months after RAI therapy is administered. Indeed, patients are typically told to expect life-long thyroid hormone replacement therapy to be required within this timeframe and many providers expect this post-RAI hypothyroidism to be complete and permanent. There is, however, a small subset of patients in whom a transient post-RAI hypothyroidism develops which, initially, presents exactly as the typical permanent hypothyroidism. In some cases the transient hypothyroidism leads to a period of euthyroidism of variable duration eventually progressing to permanent hypothyroidism. In others, persistent hyperthyroidism requires a second dose of RAI. Failure to appreciate and recognize the possibility of transient post-RAI hypothyroidism can delay optimal and appropriate treatment of the patient. We herein describe five cases of transient post-RAI hypothyroidism which highlight this unusual sequence of events. Increased awareness of this possible outcome after RAI for Graves’ disease will help in the timely management of patients. PMID:26864507

  8. Genetic profile for five common variants associated with age-related macular degeneration in densely affected families: a novel analytic approach

    PubMed Central

    Sobrin, Lucia; Maller, Julian B; Neale, Benjamin M; Reynolds, Robyn C; Fagerness, Jesen A; Daly, Mark J; Seddon, Johanna M

    2010-01-01

    About 40% of the genetic variance of age-related macular degeneration (AMD) can be explained by a common variation at five common single-nucleotide polymorphisms (SNPs). We evaluated the degree to which these known variants explain the clustering of AMD in a group of densely affected families. We sought to determine whether the actual number of risk alleles at the five variants in densely affected families matched the expected number. Using data from 322 families with AMD, we used a simulation strategy to generate comparison groups of families and determined whether their genetic profile at the known AMD risk loci differed from the observed genetic profile, given the density of disease observed. Overall, the genotypic loads for the five SNPs in the families did not deviate significantly from the genotypic loads predicted by the simulation. However, for a subset of densely affected families, the mean genotypic load in the families was significantly lower than the expected load determined from the simulation. Given that these densely affected families may harbor rare, more penetrant variants for AMD, linkage analyses and resequencing targeting these families may be an effective approach to finding additional implicated genes. PMID:19844262
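
    The simulation comparison can be outlined in a few lines of numpy: draw risk-allele counts at five SNPs from assumed population frequencies for simulated family members, compute the mean genotypic load, and compare an observed family load against that null distribution. The allele frequencies, family size, and observed load below are placeholders, and this sketch ignores ascertainment on affection status, which the study's simulation accounts for.

```python
# Hedged sketch: comparing an observed mean risk-allele load in a family against
# a simulated expectation. Allele frequencies and observed load are placeholders.
import numpy as np

rng = np.random.default_rng(0)
risk_allele_freq = np.array([0.35, 0.20, 0.25, 0.30, 0.15])   # 5 hypothetical SNPs
n_members, n_sim = 8, 10000

# Each member carries 0-2 risk alleles per SNP; load = total risk alleles over 5 SNPs.
sim_loads = np.array([
    rng.binomial(2, risk_allele_freq, size=(n_members, 5)).sum(axis=1).mean()
    for _ in range(n_sim)
])

observed_family_load = 4.6                                     # hypothetical observed mean
p_lower = (sim_loads <= observed_family_load).mean()           # is the family's load unusually low?
print(f"simulated mean load {sim_loads.mean():.2f}, P(load <= observed) = {p_lower:.3f}")
```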

  9. Empirical evaluation of predator-driven diel vertical migration in Lake Superior

    USGS Publications Warehouse

    Stockwell, J.D.; Hrabik, T.R.; Jensen, O.P.; Yule, D.L.; Balge, M.

    2010-01-01

    Recent studies on Lake Superior suggest that diel vertical migration (DVM) of prey (generalized Coregonus spp.) may be influenced by the density of predatory siscowet (Salvelinus namaycush). We empirically evaluated this hypothesis using data from acoustic, midwater trawl, and bottom trawl sampling at eight Lake Superior sites during three seasons in 2005 and a subset of sites in 2006. We expected the larger-bodied cisco (Coregonus artedi) to exhibit a shallower DVM compared with the smaller-bodied kiyi (Coregonus kiyi). Although DVM of kiyi and cisco were consistent with expectations of DVM as a size-dependent, predator-mediated process, we found no relationship between siscowet density and the magnitude of DVM of either coregonid. Cisco appear to have a size refuge from siscowet predation. Kiyi and siscowet co-occur in demersal habitat > 150 m during the day, where visual predation is unlikely, suggesting predator avoidance is not a factor in the daytime distribution of kiyi. Seasonal patterns of kiyi DVM were consistent with reported DVM of their primary prey Mysis relicta. Our results suggest that consideration of nonvisual foraging, rather than light-based foraging theory (i.e., the antipredation window), is necessary to understand the processes driving DVM in deepwater systems.

  10. Comprehensive molecular characterization of human colon and rectal cancer.

    PubMed

    2012-07-18

    To characterize somatic alterations in colorectal carcinoma, we conducted a genome-scale analysis of 276 samples, analysing exome sequence, DNA copy number, promoter methylation and messenger RNA and microRNA expression. A subset of these samples (97) underwent low-depth-of-coverage whole-genome sequencing. In total, 16% of colorectal carcinomas were found to be hypermutated: three-quarters of these had the expected high microsatellite instability, usually with hypermethylation and MLH1 silencing, and one-quarter had somatic mismatch-repair gene and polymerase ε (POLE) mutations. Excluding the hypermutated cancers, colon and rectum cancers were found to have considerably similar patterns of genomic alteration. Twenty-four genes were significantly mutated, and in addition to the expected APC, TP53, SMAD4, PIK3CA and KRAS mutations, we found frequent mutations in ARID1A, SOX9 and FAM123B. Recurrent copy-number alterations include potentially drug-targetable amplifications of ERBB2 and newly discovered amplification of IGF2. Recurrent chromosomal translocations include the fusion of NAV2 and WNT pathway member TCF7L1. Integrative analyses suggest new markers for aggressive colorectal carcinoma and an important role for MYC-directed transcriptional activation and repression.

  11. Development and Optimization of a Dedicated, Hybrid Dual-Modality SPECT-CmT System for Improved Breast Lesion Diagnosis

    DTIC Science & Technology

    2008-01-01

    CT contrast agent, four 6.0mm nylon balls (Small Parts, Inc. Miami Lakes, FL) soaked in aqueous 99mTc-pertechnetate were used as markers and taped ...Inc, Miramar, Fl) were soaked in concentrated aqueous 99mTc and taped to the exterior surface of the breast phantom to act as fiducial markers for...vol. 21, pp. 48-55, 2006. [16] H. Erdogan and J. A. Fessler, "Ordered subsets algorithms for transmission tomography," Phys Med Biol, vol. 44, pp

  12. Research on the application of a decoupling algorithm for structure analysis

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1980-01-01

    The mathematical theory for decoupling mth-order matrix differential equations is presented. It is shown that the decoupling procedure can be developed from the algebraic theory of matrix polynomials. The role of eigenprojectors and latent projectors in the decoupling process is discussed and the mathematical relationships between eigenvalues, eigenvectors, latent roots, and latent vectors are developed. It is shown that the eigenvectors of the companion form of a matrix contain the latent vectors as a subset. The spectral decomposition of a matrix and the application to differential equations are given.
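
    To make the companion-form relationship concrete, here is a minimal sketch for the second-order (m = 2) case; the matrices M, C, K and the companion matrix A_c are introduced purely for illustration and are not taken from the report.

        For $M\ddot{x} + C\dot{x} + Kx = 0$, the latent roots $\lambda_i$ and latent vectors $v_i$ satisfy
        \[
        \left(\lambda_i^{2} M + \lambda_i C + K\right) v_i = 0,
        \qquad
        A_c =
        \begin{pmatrix}
        0 & I \\
        -M^{-1}K & -M^{-1}C
        \end{pmatrix},
        \qquad
        A_c
        \begin{pmatrix} v_i \\ \lambda_i v_i \end{pmatrix}
        =
        \lambda_i
        \begin{pmatrix} v_i \\ \lambda_i v_i \end{pmatrix},
        \]
        so each eigenvector of the companion form carries the latent vector $v_i$ as its upper block, which is the subset relationship stated in the abstract.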

  13. How clinicians should use the diagnostic laboratory in a changing medical world.

    PubMed

    Lundberg, G D

    1999-02-01

    In developed countries, clinicians are faced with a plethora of diagnostic tests to apply to patients to guide their clinical management. The quality, effectiveness, and efficiency of patient care should be foremost in the clinician's mind. Laboratory directors should make every effort to guide clinicians in appropriate laboratory test ordering, interpretation, and resulting actions. Medicine, being at its center a moral enterprise grounded in a covenant of trust, and laboratory medicine being a subset of medicine, must first care and advocate for the patient, and consider clinical outcomes as most important.

  14. On the domain of the Nelson Hamiltonian

    NASA Astrophysics Data System (ADS)

    Griesemer, M.; Wünsch, A.

    2018-04-01

    The Nelson Hamiltonian is unitarily equivalent to a Hamiltonian defined through a closed, semibounded quadratic form, the unitary transformation being explicitly known and due to Gross. In this paper, we study the mapping properties of the Gross-transform in order to characterize the regularity properties of vectors in the form domain of the Nelson Hamiltonian. Since the operator domain is a subset of the form domain, our results apply to vectors in the domain of the Hamiltonian as well. This work is a continuation of our previous work on the Fröhlich Hamiltonian.

  15. CARIBIAM: constrained Association Rules using Interactive Biological IncrementAl Mining.

    PubMed

    Rahal, Imad; Rahhal, Riad; Wang, Baoying; Perrizo, William

    2008-01-01

    This paper analyses annotated genome data by applying a very central data-mining technique known as Association Rule Mining (ARM) with the aim of discovering rules and hypotheses capable of yielding deeper insights into this type of data. In the literature, ARM has been noted for producing an overwhelming number of rules. This work proposes a new technique capable of using domain knowledge in the form of queries in order to efficiently mine only the subset of the associations that are of interest to investigators in an incremental and interactive manner.
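
    As an illustration of the constrained-mining idea (not the CARIBIAM implementation itself), the following Python sketch runs a level-wise frequent-itemset search in which a user-supplied predicate plays the role of the query; the toy transactions and the "kinase" constraint are invented for illustration.

        from itertools import combinations

        # Toy transactions: each set holds annotation terms attached to one gene.
        transactions = [
            {"kinase", "membrane", "signaling"},
            {"kinase", "signaling", "disease"},
            {"membrane", "transport"},
            {"kinase", "signaling"},
        ]

        def frequent_itemsets(transactions, min_support, constraint):
            """Level-wise search keeping only itemsets that satisfy the user constraint."""
            n = len(transactions)
            items = sorted({i for t in transactions for i in t})
            results = {}
            size = 1
            candidates = [frozenset([i]) for i in items]
            while candidates:
                next_level = []
                for c in candidates:
                    support = sum(1 for t in transactions if c <= t) / n
                    if support >= min_support and constraint(c):
                        results[c] = support
                        next_level.append(c)
                # Generate (size+1)-item candidates from surviving itemsets.
                candidates = list({a | b for a, b in combinations(next_level, 2)
                                   if len(a | b) == size + 1})
                size += 1
            return results

        # Constraint expressed as a query: only itemsets mentioning "kinase" are of interest.
        print(frequent_itemsets(transactions, 0.5,
                                lambda s: "kinase" in s or len(s) == 1))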

  16. Measurement of global oceanic winds from Seasat-SMMR and its comparison with Seasat-SASS and ALT derived winds

    NASA Technical Reports Server (NTRS)

    Pandey, Prem C.

    1987-01-01

    The retrieval of ocean-surface wind speed from different channel combinations of Seasat SMMR measurements is demonstrated. Wind speeds derived using the best two channel subsets (10.6 H and 18.0 V) were compared with in situ data collected during the Joint Air-Sea Interaction (JASIN) experiment and an rms difference of 1.5 m/s was found. Global maps of wind speed generated with the present algorithm show that the averaged winds are arranged in well-ordered belts.

  17. Finite State Models of Manned Systems: Validation, Simplification, and Extension.

    DTIC Science & Technology

    1979-11-01

    model a time set is needed. A time set is some set T together with a binary relation defined on T which linearly orders the set. If "model time" is...discrete, so is T; continuous time is represented by a set corresponding to a subset of the non-negative real numbers. In the following discussion time...defined as sequences, over time, of input and output values. The notion of sequences or trajectories is formalized as A^T = {x | x: T → A} and B^T = {y | y: T → B}.

  18. NASA Regional Planetary Image Facility

    NASA Technical Reports Server (NTRS)

    Arvidson, Raymond E.

    2001-01-01

    The Regional Planetary Image Facility (RPIF) provided access to data from NASA planetary missions and expert assistance about the data sets and how to order subsets of the collections. This ensured that the benefit/cost of acquiring the data was maximized by widespread dissemination and use of the observations and resultant collections. The RPIF provided education and outreach functions that ranged from providing data and information to teachers and involving small groups of highly motivated students in its activities, to public lectures and tours. These activities maximized dissemination of results and data to the educational and public communities.

  19. Frontiers in Fluid Mechanics: A Collection of Research Papers Written in Commemoration of the 65th Birthday of Stanley Corrsin.

    DTIC Science & Technology

    1985-04-30

    analogous fashion. If the flow variable F_i is taken at x and F_i(x_i, t) > d_i, i = 1, ..., n, n > 1 is required, various subsets of the flow domain are obtained...discussed: non-premixed and premixed combustion. The chemistry of combustion in the gas phase involves complex systems of reaction steps with numerous...components. In order to keep the problem tractable, only a greatly simplified and global description of chemistry will be employed. In both cases V

  20. ESMPy and OpenClimateGIS: Python Interfaces for High Performance Grid Remapping and Geospatial Dataset Manipulation

    NASA Astrophysics Data System (ADS)

    O'Kuinghttons, Ryan; Koziol, Benjamin; Oehmke, Robert; DeLuca, Cecelia; Theurich, Gerhard; Li, Peggy; Jacob, Joseph

    2016-04-01

    The Earth System Modeling Framework (ESMF) Python interface (ESMPy) supports analysis and visualization in Earth system modeling codes by providing access to a variety of tools for data manipulation. ESMPy started as a Python interface to the ESMF grid remapping package, which provides mature and robust high-performance and scalable grid remapping between 2D and 3D logically rectangular and unstructured grids and sets of unconnected data. ESMPy now also interfaces with OpenClimateGIS (OCGIS), a package that performs subsetting, reformatting, and computational operations on climate datasets. ESMPy exposes a subset of ESMF grid remapping utilities. This includes bilinear, finite element patch recovery, first-order conservative, and nearest neighbor grid remapping methods. There are also options to ignore unmapped destination points, mask points on source and destination grids, and provide grid structure in the polar regions. Grid remapping on the sphere takes place in 3D Cartesian space, so the pole problem is not an issue as it can be with other grid remapping software. Remapping can be done between any combination of 2D and 3D logically rectangular and unstructured grids with overlapping domains. Grid pairs where one side of the regridding is represented by an appropriate set of unconnected data points, as is commonly found with observational data streams, is also supported. There is a developing interoperability layer between ESMPy and OpenClimateGIS (OCGIS). OCGIS is a pure Python, open source package designed for geospatial manipulation, subsetting, and computation on climate datasets stored in local NetCDF files or accessible remotely via the OPeNDAP protocol. Interfacing with OCGIS has brought GIS-like functionality to ESMPy (i.e. subsetting, coordinate transformations) as well as additional file output formats (i.e. CSV, ESRI Shapefile). ESMPy is distinguished by its strong emphasis on open source, community governance, and distributed development. The user base has grown quickly, and the package is integrating with several other software tools and frameworks. These include the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), Iris, PyFerret, cfpython, and the Community Surface Dynamics Modeling System (CSDMS). ESMPy minimum requirements include Python 2.6, Numpy 1.6.1 and an ESMF installation. Optional dependencies include NetCDF and OCGIS-related dependencies: GDAL, Shapely, and Fiona. ESMPy is regression tested nightly, and supported on Darwin, Linux and Cray systems with the GNU compiler suite and MPI communications. OCGIS is supported on Linux, and also undergoes nightly regression testing. Both packages are installable from Anaconda channels. Upcoming development plans for ESMPy involve development of a higher order conservative grid remapping method. Future OCGIS development will focus on mesh and location stream interoperability and streamlined access to ESMPy's MPI implementation.
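
    For orientation, the sketch below shows the shape of a bilinear regridding call through the ESMF Python bindings, assuming the classic ESMF import name, a single-process run, and the API names Grid, Field, Regrid, RegridMethod.BILINEAR and UnmappedAction.IGNORE; the grid sizes and the analytic source field are invented for illustration and this is not code taken from the ESMPy distribution.

        import numpy as np
        import ESMF  # newer releases expose the same interface under the name "esmpy"

        # Source and destination grids: simple 2D logically rectangular lat-lon boxes.
        src_grid = ESMF.Grid(np.array([36, 18]), staggerloc=ESMF.StaggerLoc.CENTER,
                             coord_sys=ESMF.CoordSys.SPH_DEG)
        dst_grid = ESMF.Grid(np.array([72, 36]), staggerloc=ESMF.StaggerLoc.CENTER,
                             coord_sys=ESMF.CoordSys.SPH_DEG)

        for grid, nx, ny in ((src_grid, 36, 18), (dst_grid, 72, 36)):
            lon = grid.get_coords(0)
            lat = grid.get_coords(1)
            lon[...] = np.linspace(0.0, 360.0, nx, endpoint=False)[:, np.newaxis]
            lat[...] = np.linspace(-85.0, 85.0, ny)[np.newaxis, :]

        src_field = ESMF.Field(src_grid, name="src")
        dst_field = ESMF.Field(dst_grid, name="dst")
        src_field.data[...] = np.cos(np.radians(src_grid.get_coords(1)))

        # Bilinear remapping; unmapped destination points are ignored rather than fatal.
        regrid = ESMF.Regrid(src_field, dst_field,
                             regrid_method=ESMF.RegridMethod.BILINEAR,
                             unmapped_action=ESMF.UnmappedAction.IGNORE)
        dst_field = regrid(src_field, dst_field)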

  1. How do we choose the best model? The impact of cross-validation design on model evaluation for buried threat detection in ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Malof, Jordan M.; Reichman, Daniël.; Collins, Leslie M.

    2018-04-01

    A great deal of research has been focused on the development of computer algorithms for buried threat detection (BTD) in ground penetrating radar (GPR) data. Most recently proposed BTD algorithms are supervised, and therefore they employ machine learning models that infer their parameters using training data. Cross-validation (CV) is a popular method for evaluating the performance of such algorithms, in which the available data is systematically split into N disjoint subsets, and an algorithm is repeatedly trained on N-1 subsets and tested on the excluded subset. There are several common types of CV in BTD, which vary principally in the spatial criterion used to partition the data: site-based, lane-based, region-based, etc. The performance metrics obtained via CV are often used to suggest the superiority of one model over others; however, most studies utilize just one type of CV, and the impact of this choice is unclear. Here we employ several types of CV to evaluate algorithms from a recent large-scale BTD study. The results indicate that the rank-order of the performance of the algorithms varies substantially depending upon which type of CV is used. For example, the rank-1 algorithm for region-based CV is the lowest ranked algorithm for site-based CV. This suggests that any algorithm results should be interpreted carefully with respect to the type of CV employed. We discuss some potential interpretations of performance, given a particular type of CV.
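
    The difference between a purely random partition and a spatially grouped (e.g. site-based) partition can be sketched with scikit-learn's KFold and GroupKFold; the features, labels, and site identifiers below are synthetic stand-ins, not GPR data.

        import numpy as np
        from sklearn.model_selection import KFold, GroupKFold

        rng = np.random.default_rng(0)
        n_alarms = 200
        features = rng.normal(size=(n_alarms, 10))    # stand-in for GPR-derived features
        labels = rng.integers(0, 2, size=n_alarms)    # 1 = buried threat, 0 = clutter
        site_ids = np.repeat(np.arange(5), 40)        # which collection site each alarm came from

        # Purely random partition: alarms from one site can appear in both train and test.
        random_cv = KFold(n_splits=5, shuffle=True, random_state=0)

        # Site-based partition: every alarm from a given site is held out together.
        site_cv = GroupKFold(n_splits=5)

        for name, splitter, kwargs in (("random", random_cv, {}),
                                       ("site-based", site_cv, {"groups": site_ids})):
            held_out_sites = [np.unique(site_ids[test]) for _, test in
                              splitter.split(features, labels, **kwargs)]
            print(name, "held-out sites per fold:", held_out_sites)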

  2. Aggregating job exit statuses of a plurality of compute nodes executing a parallel application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.

    Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer.
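
    The gather-at-a-leader pattern the record describes can be sketched with mpi4py, where rank 0 stands in for the job leader compute node; the status fields and the aggregation rule below are invented for illustration and are not the implementation referenced by the record.

        from mpi4py import MPI  # assumes an MPI environment; run with e.g. mpiexec -n 4

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each compute node reports a small record describing its portion of the job.
        my_status = {"rank": rank, "exit_code": 0, "elapsed_s": 1.0 + 0.1 * rank}

        # Rank 0 plays the role of the job-leader node and gathers every exit status.
        statuses = comm.gather(my_status, root=0)

        if rank == 0:
            aggregated = {
                "nodes": len(statuses),
                "worst_exit_code": max(s["exit_code"] for s in statuses),
                "total_elapsed_s": sum(s["elapsed_s"] for s in statuses),
            }
            print("aggregated exit status:", aggregated)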

  3. Development and evaluation of a self-efficacy instrument for Japanese sleep apnea patients receiving continuous positive airway pressure treatment

    PubMed Central

    Saito, Ayako; Kojima, Shigeko; Sasaki, Fumihiko; Hayashi, Masamichi; Mieno, Yuki; Sakakibara, Hiroki; Hashimoto, Shuji

    2015-01-01

    The purpose of this study was to develop and evaluate a self-efficacy instrument for Japanese obstructive sleep apnea (OSA) patients treated with continuous positive airway pressure (CPAP). Analyzed subjects were 653 Japanese OSA patients (619 males and 34 females) treated with CPAP at a sleep laboratory in a respiratory clinic in a Japanese city. Based on Bandura’s social cognitive theory, the CPAP Self-Efficacy Questionnaire for Sleep Apnea in Japanese (CSESA-J) was developed by a focus group of experts, using a group interview of OSA patients for the items of two previous self-efficacy scales for Western sleep apnea patients receiving CPAP treatment. CSESA-J has two subscales, one for self-efficacy and the other for outcome expectancy, and consists of a total of 15 items. Content validity was confirmed by the focus group. Confirmatory factor analysis showed that the factor loadings of self-efficacy and outcome expectancy were 0.47–0.76 and 0.41–0.92, respectively, for the corresponding items. CSESA-J had a significant but weak positive association with the General Self-Efficacy Scale, and a strong positive association with “Self-efficacy scale on health behavior in patients with chronic disease.” Cronbach’s alpha coefficient was 0.85 for the self-efficacy subscale and 0.89 for the outcome expectancy subscale. The intraclass correlation coefficient using data from the first and second measurements with CSESA-J for a subset of 130 subjects was 0.93 for the self-efficacy and outcome expectancy subscales. These results support CSESA-J as a reliable and valid instrument for measuring the self-efficacy of Japanese OSA patients treated with CPAP. Further studies are warranted to confirm validity for female OSA patients and generalizability. PMID:25678832

  4. Strategies for Efficient Computation of the Expected Value of Partial Perfect Information

    PubMed Central

    Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.

    2014-01-01

    Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
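
    For context, the nested (two-level) Monte Carlo estimator that the article's methods are designed to avoid can be sketched as follows; the toy net-benefit function and parameter distributions are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(1)

        def net_benefit(theta, phi):
            """Toy incremental net benefit of treatment vs. control (invented for illustration)."""
            return 1000.0 * theta - 200.0 * phi

        def evppi_nested(n_outer=1000, n_inner=1000):
            # theta is the parameter (subset) of interest; phi is the remaining uncertainty.
            outer_values = []
            for _ in range(n_outer):
                theta = rng.normal(0.6, 0.2)              # outer draw: pretend theta is now known
                phi = rng.normal(1.0, 0.5, size=n_inner)  # inner draws over remaining parameters
                inb = net_benefit(theta, phi)
                # With theta known we would pick the better of "treat" (inb) vs "do nothing" (0).
                outer_values.append(max(inb.mean(), 0.0))
            value_with_partial_info = np.mean(outer_values)

            # Value of the current-information decision, using the same model.
            theta = rng.normal(0.6, 0.2, size=n_outer * n_inner)
            phi = rng.normal(1.0, 0.5, size=n_outer * n_inner)
            value_current = max(net_benefit(theta, phi).mean(), 0.0)
            return value_with_partial_info - value_current

        print("nested Monte Carlo EVPPI estimate:", evppi_nested())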

  5. Adaptive Encoding of Outcome Prediction by Prefrontal Cortex Ensembles Supports Behavioral Flexibility.

    PubMed

    Del Arco, Alberto; Park, Junchol; Wood, Jesse; Kim, Yunbok; Moghaddam, Bita

    2017-08-30

    The prefrontal cortex (PFC) is thought to play a critical role in behavioral flexibility by monitoring action-outcome contingencies. How PFC ensembles represent shifts in behavior in response to changes in these contingencies remains unclear. We recorded single-unit activity and local field potentials in the dorsomedial PFC (dmPFC) of male rats during a set-shifting task that required them to update their behavior, among competing options, in response to changes in action-outcome contingencies. As behavior was updated, a subset of PFC ensembles encoded the current trial outcome before the outcome was presented. This novel outcome-prediction encoding was absent in a control task, in which actions were rewarded pseudorandomly, indicating that PFC neurons are not merely providing an expectancy signal. In both control and set-shifting tasks, dmPFC neurons displayed postoutcome discrimination activity, indicating that these neurons also monitor whether a behavior is successful in generating rewards. Gamma-power oscillatory activity increased before the outcome in both tasks but did not differentiate between expected outcomes, suggesting that this measure is not related to set-shifting behavior but reflects expectation of an outcome after action execution. These results demonstrate that PFC neurons support flexible rule-based action selection by predicting outcomes that follow a particular action. SIGNIFICANCE STATEMENT: Tracking action-outcome contingencies and modifying behavior when those contingencies change is critical to behavioral flexibility. We find that ensembles of dorsomedial prefrontal cortex neurons differentiate between expected outcomes when action-outcome contingencies change. This predictive mode of signaling may be used to promote a new response strategy at the service of behavioral flexibility.

  6. Development of the PROMIS positive emotional and sensory expectancies of smoking item banks.

    PubMed

    Tucker, Joan S; Shadel, William G; Edelen, Maria Orlando; Stucky, Brian D; Li, Zhen; Hansen, Mark; Cai, Li

    2014-09-01

    The positive emotional and sensory expectancies of cigarette smoking include improved cognitive abilities, positive affective states, and pleasurable sensorimotor sensations. This paper describes development of Positive Emotional and Sensory Expectancies of Smoking item banks that will serve to standardize the assessment of this construct among daily and nondaily cigarette smokers. Data came from daily (N = 4,201) and nondaily (N = 1,183) smokers who completed an online survey. To identify a unidimensional set of items, we conducted item factor analyses, item response theory analyses, and differential item functioning analyses. Additionally, we evaluated the performance of fixed-item short forms (SFs) and computer adaptive tests (CATs) to efficiently assess the construct. Eighteen items were included in the item banks (15 common across daily and nondaily smokers, 1 unique to daily, 2 unique to nondaily). The item banks are strongly unidimensional, highly reliable (reliability = 0.95 for both), and perform similarly across gender, age, and race/ethnicity groups. An SF common to daily and nondaily smokers consists of 6 items (reliability = 0.86). Results from simulated CATs indicated that, on average, fewer than 8 items are needed to assess the construct with adequate precision using the item banks. These analyses identified a new set of items that can assess the positive emotional and sensory expectancies of smoking in a reliable and standardized manner. Considerable efficiency in assessing this construct can be achieved by using the item bank SF, employing computer adaptive tests, or selecting subsets of items tailored to specific research or clinical purposes.

  7. On the reliable and flexible solution of practical subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    A new algorithm for solving subset regression problems is described. The algorithm performs a QR decomposition with a new column-pivoting strategy, which permits subset selection directly from the originally defined regression parameters. This, in combination with a number of extensions of the new technique, makes the method a very flexible tool for analyzing subset regression problems in which the parameters have a physical meaning.
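
    A pivoted QR decomposition of the design matrix, in the same spirit as (though not identical to) the column-pivoting strategy described, can be obtained with SciPy; the design matrix and the subset size k below are invented for illustration.

        import numpy as np
        from scipy.linalg import qr

        rng = np.random.default_rng(2)

        # Design matrix with 8 candidate regressors, one of which is nearly collinear.
        n, p = 100, 8
        X = rng.normal(size=(n, p))
        X[:, 5] = X[:, 1] + 1e-3 * rng.normal(size=n)          # near-duplicate column
        beta_true = np.array([1.5, -2.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0])
        y = X @ beta_true + 0.1 * rng.normal(size=n)

        # Pivoted QR orders columns so the leading ones are the most linearly independent.
        Q, R, piv = qr(X, mode="economic", pivoting=True)

        k = 3                                   # size of the regressor subset to keep
        subset = piv[:k]                        # indices of the selected original columns
        beta_subset, *_ = np.linalg.lstsq(X[:, subset], y, rcond=None)
        print("selected columns:", subset, "coefficients:", beta_subset)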

  8. Multiclass classification for skin cancer profiling based on the integration of heterogeneous gene expression series.

    PubMed

    Gálvez, Juan Manuel; Castillo, Daniel; Herrera, Luis Javier; San Román, Belén; Valenzuela, Olga; Ortuño, Francisco Manuel; Rojas, Ignacio

    2018-01-01

    Most research studies applying microarray technology to the characterization of different pathological states of a disease may fail to reach statistically significant results. This is largely due to the small repertoire of analysed samples, and to the limitation in the number of states or pathologies usually addressed. Moreover, the influence of potential deviations on the gene expression quantification is usually disregarded. In spite of the continuous changes in omic sciences, reflected for instance in the emergence of new Next-Generation Sequencing-related technologies, the existing availability of a vast amount of gene expression microarray datasets should be properly exploited. Therefore, this work proposes a novel methodological approach involving the integration of several heterogeneous skin cancer series, and a later multiclass classifier design. This approach is thus a way to provide the clinicians with an intelligent diagnosis support tool based on the use of a robust set of selected biomarkers, which simultaneously distinguishes among different cancer-related skin states. To achieve this, a multi-platform combination of microarray datasets from Affymetrix and Illumina manufacturers was carried out. This integration is expected to strengthen the statistical robustness of the study as well as the finding of highly-reliable skin cancer biomarkers. Specifically, the designed operation pipeline has allowed the identification of a small subset of 17 differentially expressed genes (DEGs) from which to distinguish among 7 involved skin states. These genes were obtained from the assessment of a number of potential batch effects on the gene expression data. The biological interpretation of these genes was inspected in the specific literature to understand their underlying information in relation to skin cancer. Finally, in order to assess their possible effectiveness in cancer diagnosis, a cross-validation Support Vector Machines (SVM)-based classification including feature ranking was performed. The accuracy attained exceeded 92% in overall recognition of the 7 different cancer-related skin states. The proposed integration scheme is expected to allow the co-integration with other state-of-the-art technologies such as RNA-seq.
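
    A cross-validated SVM classification with feature ranking, loosely mirroring the evaluation step described (but on synthetic data and with scikit-learn's RFE standing in for the authors' ranking procedure), might look like this:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        # Synthetic stand-in for integrated expression data: 7 skin states, 500 genes.
        X, y = make_classification(n_samples=350, n_features=500, n_informative=30,
                                   n_classes=7, n_clusters_per_class=1, random_state=0)

        # Rank genes with a linear SVM, keep a small subset, then classify.
        model = make_pipeline(
            StandardScaler(),
            RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=17, step=0.2),
            LinearSVC(dual=False, max_iter=5000),
        )
        scores = cross_val_score(model, X, y, cv=5)
        print("cross-validated accuracy per fold:", np.round(scores, 3))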

  9. SU-C-201-06: Utility of Quantitative 3D SPECT/CT Imaging in Patient Specific Internal Dosimetry of 153-Samarium with GATE Monte Carlo Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fallahpoor, M; Abbasi, M; Sen, A

    Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of radionuclide distribution and attenuation coefficients and 2) using a reliable method for dose calculation based on activity and attenuation map. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed using SPECT-CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (Gamma: 103keV and beta: 0.81MeV). A SPECT/CT scan was performed with the Siemens Simbia-T scanner. SPECT and CT images were registered using default registration software. SPECT quantification was achieved by compensating for all image degrading factors including body attenuation, Compton scattering and collimator-detector response (CDR). Triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping is used to convert Hounsfield units in CT image to attenuation map. Organ borders were defined by the itk-SNAP toolkit segmentation on CT image. GATE was then used for internal dose calculation. The Specific Absorbed Fractions (SAFs) and S-values were reported as MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs as expected. The S-value for lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT-CT images and Monte Carlo for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based methods or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning on a day to day basis.
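
    Since the record centres on ordered-subsets expectation maximization, a minimal NumPy sketch of the OS-EM update is given below; the toy system matrix, subset layout, and Poisson noise model are invented for illustration and do not reproduce the SPECT/GATE pipeline of the abstract.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy system: 60 detector bins, 40 image voxels, random non-negative system matrix.
        n_bins, n_vox = 60, 40
        A = rng.random((n_bins, n_vox))
        x_true = rng.random(n_vox) * 10.0
        y = rng.poisson(A @ x_true).astype(float)           # noisy projection data

        def osem(A, y, n_subsets=6, n_iters=10):
            """Ordered-subsets EM: one multiplicative update per subset of projection bins."""
            n_bins = A.shape[0]
            subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
            x = np.ones(A.shape[1])
            for _ in range(n_iters):
                for idx in subsets:
                    A_s, y_s = A[idx], y[idx]
                    forward = A_s @ x
                    forward[forward == 0] = 1e-12            # guard against division by zero
                    x *= (A_s.T @ (y_s / forward)) / A_s.sum(axis=0)
            return x

        x_rec = osem(A, y)
        print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))

    Each pass over one subset applies the usual multiplicative EM update restricted to that subset's projection bins, which is what yields the acceleration of OS-EM over plain ML-EM.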

  10. Impact of Time-of-Flight on PET Tumor Detection

    PubMed Central

    Kadrmas, Dan J.; Casey, Michael E.; Conti, Maurizio; Jakoby, Bjoern W.; Lois, Cristina; Townsend, David W.

    2009-01-01

    Time-of-flight (TOF) PET uses very fast detectors to improve localization of events along coincidence lines-of-response. This information is then utilized to improve the tomographic reconstruction. This work evaluates the effect of TOF upon an observer's performance for detecting and localizing focal warm lesions in noisy PET images. Methods: An advanced anthropomorphic lesion-detection phantom was scanned 12 times over 3 days on a prototype TOF PET/CT scanner (Siemens Medical Solutions). The phantom was devised to mimic whole-body oncologic 18F-FDG PET imaging, and a number of spheric lesions (diameters 6–16 mm) were distributed throughout the phantom. The data were reconstructed with the baseline line-of-response ordered-subsets expectation-maximization algorithm, with the baseline algorithm plus point spread function model (PSF), baseline plus TOF, and with both PSF+TOF. The lesion-detection performance of each reconstruction was compared and ranked using localization receiver operating characteristics (LROC) analysis with both human and numeric observers. The phantom results were then subjectively compared to 2 illustrative patient scans reconstructed with PSF and with PSF+TOF. Results: Inclusion of TOF information provides a significant improvement in the area under the LROC curve compared to the baseline algorithm without TOF data (P = 0.002), providing a degree of improvement similar to that obtained with the PSF model. Use of both PSF+TOF together provided a cumulative benefit in lesion-detection performance, significantly outperforming either PSF or TOF alone (P < 0.002). Example patient images reflected the same image characteristics that gave rise to improved performance in the phantom data. Conclusion: Time-of-flight PET provides a significant improvement in observer performance for detecting focal warm lesions in a noisy background. These improvements in image quality can be expected to improve performance for the clinical tasks of detecting lesions and staging disease. Further study in a large clinical population is warranted to assess the benefit of TOF for various patient sizes and count levels, and to demonstrate effective performance in the clinical environment. PMID:19617317

  11. 78 FR 28276 - Operating Limitations at John F. Kennedy International Airport

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-14

    ... Limitations at John F. Kennedy International Airport AGENCY: Federal Aviation Administration (FAA), DOT... duration of the Order. The reasons for issuing the Order have not changed appreciably since it was implemented. Without the operational limitations imposed by this Order, the FAA expects severe congestion...

  12. 78 FR 28280 - Operating Limitations at Newark Liberty International Airport

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-14

    ... Limitations at Newark Liberty International Airport AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... duration of the Order. The reasons for issuing the Order have not changed appreciably since it was implemented. Without the operational limitations imposed by this Order, the FAA expects severe congestion...

  13. Expected Number of Passes in a Binary Search Scheme

    ERIC Educational Resources Information Center

    Tenenbein, Aaron

    1974-01-01

    The binary search scheme is a method of finding a particular file from a set of ordered files stored in a computer. In this article an exact expression for the expected number of passes required to find a file is derived. (Author)
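
    The expectation can be checked numerically by enumerating the pass on which each of the N ordered files is reached, assuming every file is equally likely to be requested; the sketch below (with example values of N) is an illustration, not the closed-form expression derived in the article.

        def expected_passes(n):
            """Average number of passes to locate a file by binary search over n ordered files,
            assuming every file is equally likely to be requested."""
            total = 0
            stack = [(0, n - 1, 1)]             # (low, high, pass number of the midpoint probe)
            while stack:
                low, high, depth = stack.pop()
                if low > high:
                    continue
                mid = (low + high) // 2
                total += depth                  # the file at 'mid' is found on pass 'depth'
                stack.append((low, mid - 1, depth + 1))
                stack.append((mid + 1, high, depth + 1))
            return total / n

        for n in (1, 10, 100, 1000):
            print(n, round(expected_passes(n), 3))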

  14. CERES Search and Subset Tool

    Atmospheric Science Data Center

    2016-06-24

    ... data granules using a high resolution spatial metadata database and directly accessing the archived data granules. Subset results are ...

  15. The influence of prior experience and expected timing on vibrotactile discrimination

    PubMed Central

    Karim, Muhsin; Harris, Justin A.; Langdon, Angela; Breakspear, Michael

    2013-01-01

    Vibrotactile discrimination tasks involve perceptual judgements on stimulus pairs separated by a brief interstimulus interval (ISI). Despite their apparent simplicity, decision making during these tasks is biased by prior experience in a manner that is not well understood. A striking example is when participants perform well on trials where the first stimulus is closer to the mean of the stimulus-set than the second stimulus, and perform comparatively poorly when the first stimulus is further from the stimulus mean. This “time-order effect” suggests that participants implicitly encode the mean of the stimulus-set and use this internal standard to bias decisions on any given trial. For relatively short ISIs, the magnitude of the time-order effect typically increases with the distance of the first stimulus from the global mean. Working from the premise that the time-order effect reflects the loss of precision in working memory representations, we predicted that the influence of the time-order effect, and this superimposed “distance” effect, would monotonically increase for trials with longer ISIs. However, by varying the ISI across four intervals (300, 600, 1200, and 2400 ms) we instead found a complex, non-linear dependence of the time-order effect on both the ISI and the distance, with the time-order effect being paradoxically stronger at short ISIs. We also found that this relationship depended strongly on participants' prior experience of the ISI (from previous task titration). The time-order effect not only depends on participants' expectations concerning the distribution of stimuli, but also on the expected timing of the trials. PMID:24399927

  16. An evaluation of exact methods for the multiple subset maximum cardinality selection problem.

    PubMed

    Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas

    2016-05-01

    The maximum cardinality subset selection problem requires finding the largest possible subset from a set of objects, such that one or more conditions are satisfied. An important extension of this problem is to extract multiple subsets, where the addition of one more object to a larger subset would always be preferred to increases in the size of one or more smaller subsets. We refer to this as the multiple subset maximum cardinality selection problem (MSMCSP). A recently published branch-and-bound algorithm solves the MSMCSP as a partitioning problem. Unfortunately, the computational requirement associated with the algorithm is often enormous, thus rendering the method infeasible from a practical standpoint. In this paper, we present an alternative approach that successively solves a series of binary integer linear programs to obtain a globally optimal solution to the MSMCSP. Computational comparisons of the methods using published similarity data for 45 food items reveal that the proposed sequential method is computationally far more efficient than the branch-and-bound approach.
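
    A single binary integer linear program of the maximum-cardinality flavour can be posed with SciPy's milp; the pairwise incompatibility data below are invented, and the sequential multiple-subset scheme of the paper would call a solve of roughly this form repeatedly.

        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        # Toy compatibility data: objects i and j may not share a subset if flagged below.
        n = 8
        incompatible = [(0, 3), (1, 4), (2, 5), (3, 6), (4, 7), (1, 2)]

        c = -np.ones(n)                                   # maximize cardinality = minimize -sum(x)
        rows = np.zeros((len(incompatible), n))
        for r, (i, j) in enumerate(incompatible):
            rows[r, i] = rows[r, j] = 1.0                 # x_i + x_j <= 1 for each bad pair

        res = milp(c,
                   constraints=LinearConstraint(rows, -np.inf, 1.0),
                   integrality=np.ones(n),                # all variables integer (binary via bounds)
                   bounds=Bounds(0, 1))
        selected = np.flatnonzero(res.x > 0.5)
        print("largest compatible subset:", selected)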

  17. CD94 Defines Phenotypically and Functionally Distinct Mouse NK Cell Subsets

    PubMed Central

    Yu, Jianhua; Wei, Min; Mao, Hsiaoyin; Zhang, Jianying; Hughes, Tiffany; Mitsui, Takeki; Park, Il-kyoo; Hwang, Christine; Liu, Shujun; Marcucci, Guido; Trotta, Rossana; Benson, Don M.; Caligiuri, Michael A.

    2010-01-01

    Understanding of heterogeneous NK subsets is important for the study of NK cell biology and development, and for the application of NK cell-based therapies in the treatment of disease. Here we demonstrate that the surface expression of CD94 can distinctively divide mouse NK cells into two approximately even CD94low and CD94high subsets in all tested organs and tissues. The CD94high NK subset has significantly greater capacity to proliferate, produce IFN-γ, and lyse target cells than does the CD94low subset. The CD94high subset has exclusive expression of NKG2A/C/E, higher expression of CD117 and CD69, and lower expression of Ly49D (activating) and Ly49G2 (inhibitory). In vivo, purified mouse CD94low NK cells become CD94high NK cells, but not vice versa. Collectively, our data suggest that CD94 is an Ag that can be used to identify functionally distinct NK cell subsets in mice and could also be relevant to late-stage mouse NK cell development. PMID:19801519

  18. Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets

    PubMed Central

    Gratzl, Samuel; Gehlenborg, Nils; Lex, Alexander; Pfister, Hanspeter; Streit, Marc

    2016-01-01

    Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is dividing it into meaningful subsets. Interesting subsets can then be selected and the associated data and the relationships between the subsets visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques. In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics. PMID:26356916

  19. Reionization and its imprint of the cosmic microwave background

    NASA Technical Reports Server (NTRS)

    Dodelson, Scott; Jubas, Jay M.

    1995-01-01

    Early reionization changes the pattern of anisotropies expected in the cosmic microwave background. To explore these changes, we derive from first principles the equations governing anisotropies, focusing on the interactions of photons with electrons. Vishniac (1987) claimed that second-order terms can be large in a reionized universe, so we derive equations correct to second order in the perturbations. There are many more second-order terms than were considered by Vishniac. To understand the basic physics involved, we present a simple analytic approximation to the first-order equation. Then, turning to the second-order equation, we show that the Vishniac term is indeed the only important one. We also present numerical results for a variety of ionization histories (in a standard cold dark matter universe) and show quantitatively how the signal in several experiments depends on the ionization history. The most pronounced indication of a reionized universe would be seen in very small-scale experiments; the expected signal in the Owens Valley experiment is smaller by a factor of order 10 if the last scattering surface is at a redshift z approximately = 100 as it would be if the universe were reionized very early. On slightly larger scales, the expected signal in a reionized universe is smaller than it would be with standard recombination, but only by a factor of 2 or so. The signal is even smaller in these experiments in the intermediate case where some photons last scattered at the standard recombination epoch.

  20. Quantifying Oldowan Stone Tool Production at Olduvai Gorge, Tanzania

    PubMed Central

    Reti, Jay S.

    2016-01-01

    Recent research suggests that variation exists among and between Oldowan stone tool assemblages. Oldowan variation might represent differential constraints on raw materials used to produce these stone implements. Alternatively, variation among Oldowan assemblages could represent different methods that Oldowan producing hominins utilized to produce these lithic implements. Identifying differential patterns of stone tool production within the Oldowan has implications for assessing how stone tool technology evolved, how traditions of lithic production might have been culturally transmitted, and for defining the timing and scope of these evolutionary events. At present there is no null model to predict what morphological variation in the Oldowan should look like. Without such a model, quantifying whether Oldowan assemblages vary due to raw material constraints or whether they vary due to differences in production technique is not possible. This research establishes a null model for Oldowan lithic artifact morphological variation. To establish these expectations this research 1) models the expected range of variation through large scale reduction experiments, 2) develops an algorithm to categorize archaeological flakes based on how they are produced, and 3) statistically assesses the methods of production behavior used by Oldowan producing hominins at the site of DK from Olduvai Gorge, Tanzania via the experimental model. Results indicate that a subset of quartzite flakes deviate from the null expectations in a manner that demonstrates efficiency in flake manufacture, while some basalt flakes deviate from null expectations in a manner that demonstrates inefficiency in flake manufacture. The simultaneous presence of efficiency in stone tool production for one raw material (quartzite) and inefficiency in stone tool production for another raw material (basalt) suggests that Oldowan producing hominins at DK were able to mediate the economic costs associated with stone tool procurement by utilizing high-cost materials more efficiently than is expected and low-cost materials in an inefficient manner. PMID:26808429

  1. Reproductive endocrine patterns and volatile urinary compounds of Arctictis binturong: discovering why bearcats smell like popcorn.

    PubMed

    Greene, Lydia K; Wallen, Timothy W; Moresco, Anneke; Goodwin, Thomas E; Drea, Christine M

    2016-06-01

    Members of the order Carnivora rely on urinary scent signaling, particularly for communicating about reproductive parameters. Here, we describe reproductive endocrine patterns in relation to urinary olfactory cues in a vulnerable and relatively unknown viverrid--the binturong (Arctictis binturong). Female binturongs are larger than and dominate males, and both sexes engage in glandular and urinary scent marking. Using a large (n = 33), captive population, we collected serum samples to measure circulating sex steroids via enzyme immunoassay and urine samples to assay volatile chemicals via gas chromatography-mass spectrometry. Male binturongs had expectedly greater androgen concentrations than did females but, more unusually, had equal estrogen concentrations, which may be linked to male deference. Males also expressed a significantly richer array of volatile chemical compounds than did females. A subset of these volatile chemicals resisted decay at ambient temperatures, potentially indicating their importance as long-lasting semiochemicals. Among these compounds was 2-acetyl-1-pyrroline (2-AP), which is typically produced at high temperatures by the Maillard reaction and is likely to be responsible for the binturong's characteristic popcorn aroma. 2-AP, the only compound expressed by all of the subjects, was found in greater abundance in males than females and was significantly and positively related to circulating androstenedione concentrations in both sexes. This unusual compound may have a more significant role in mammalian semiochemistry than previously appreciated. Based on these novel data, we suggest that hormonal action and potentially complex chemical reactions mediate communication of the binturong's signature scent and convey information about sex and reproductive state.

  2. Dual Frequency Head Maps: A New Method for Indexing Mental Workload Continuously during Execution of Cognitive Tasks

    PubMed Central

    Radüntz, Thea

    2017-01-01

    One goal of advanced information and communication technology is to simplify work. However, there is growing consensus regarding the negative consequences of inappropriate workload on employees' health and the safety of persons. In order to develop a method for continuous mental workload monitoring, we implemented a task battery consisting of cognitive tasks with diverse levels of complexity and difficulty. We conducted experiments and registered the electroencephalogram (EEG), performance data, and the NASA-TLX questionnaire from 54 people. Analysis of the EEG spectra demonstrates an increase of the frontal theta band power and a decrease of the parietal alpha band power, both under increasing task difficulty level. Based on these findings we implemented a new method for monitoring mental workload, the so-called Dual Frequency Head Maps (DFHM), which are classified by support vector machines (SVMs) into three different workload levels. The results are in accordance with the expected difficulty levels arising from the requirements of the tasks on the executive functions. Furthermore, this article includes an empirical validation of the new method on a secondary subset with new subjects and one additional new task without any adjustment of the classifiers. Hence, the main advantage of the proposed method compared with the existing solutions is that it provides an automatic, continuous classification of the mental workload state without any need to retrain the classifier, either for new subjects or for new tasks. The continuous workload monitoring can help ensure good working conditions, maintain a good level of performance, and simultaneously preserve a good state of health. PMID:29276490
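
    Echoing the reported spectral findings, a toy pipeline that computes frontal theta and parietal alpha band power with Welch's method and classifies workload with an SVM might look as follows; the sampling rate, channel layout, epochs, and labels are synthetic stand-ins rather than the DFHM method itself.

        import numpy as np
        from scipy.signal import welch
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        fs = 250                                            # sampling rate in Hz (invented)
        n_epochs = 120
        epochs = rng.normal(size=(n_epochs, 2, fs * 4))     # channels: 0 = frontal, 1 = parietal
        workload = rng.integers(0, 3, size=n_epochs)        # three workload levels (toy labels)

        def band_power(sig, fs, lo, hi):
            f, pxx = welch(sig, fs=fs, nperseg=fs)
            return pxx[(f >= lo) & (f <= hi)].mean()

        # Two features per epoch: frontal theta (4-8 Hz) and parietal alpha (8-13 Hz) power.
        features = np.array([[band_power(ep[0], fs, 4, 8),
                              band_power(ep[1], fs, 8, 13)] for ep in epochs])

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        print("cross-validated accuracy:",
              cross_val_score(clf, features, workload, cv=5).mean())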

  3. Automated identification of reference genes based on RNA-seq data.

    PubMed

    Carmona, Rosario; Arroyo, Macarena; Jiménez-Quesada, María José; Seoane, Pedro; Zafra, Adoración; Larrosa, Rafael; Alché, Juan de Dios; Claros, M Gonzalo

    2017-08-18

    Gene expression analyses demand appropriate reference genes (RGs) for normalization, in order to obtain reliable assessments. Ideally, RG expression levels should remain constant in all cells, tissues or experimental conditions under study. Housekeeping genes traditionally fulfilled this requirement, but they have been reported to be less invariant than expected; therefore, RGs should be tested and validated for every particular situation. Microarray data have been used to propose new RGs, but only a limited set of model species and conditions are available; in contrast, RNA-seq experiments are increasingly frequent and constitute a new source of candidate RGs. An automated workflow based on mapped NGS reads has been constructed to obtain highly and invariantly expressed RGs based on a normalized expression in reads per mapped million and the coefficient of variation. This workflow has been tested with Roche/454 reads from reproductive tissues of olive tree (Olea europaea L.), as well as with Illumina paired-end reads from two different accessions of Arabidopsis thaliana and three different human cancers (prostate, small-cell lung cancer and lung adenocarcinoma). Candidate RGs have been proposed for each species and many of them have been previously reported as RGs in the literature. Experimental validation of significant RGs in olive tree is provided to support the algorithm. Regardless of sequencing technology, number of replicates, and library size, when RNA-seq experiments are designed and performed, the same datasets can be analyzed with our workflow to extract suitable RGs for subsequent PCR validation. Moreover, different subsets of experimental conditions can provide different suitable RGs.
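
    The core ranking step, normalizing counts to reads per mapped million and scoring genes by their coefficient of variation, can be sketched in a few lines; the count table below is invented and the thresholds are illustrative, not those of the published workflow.

        import pandas as pd

        # Toy count table: rows = genes, columns = RNA-seq libraries (invented numbers).
        counts = pd.DataFrame(
            {"lib1": [500, 40, 1200, 10], "lib2": [620, 40, 300, 12],
             "lib3": [550, 42, 2500, 11], "lib4": [600, 41, 100, 9]},
            index=["geneA", "geneB", "geneC", "geneD"])

        # Normalize to reads per mapped million so libraries of different depth are comparable.
        rpm = counts / counts.sum(axis=0) * 1e6

        # Coefficient of variation across libraries; low CV and high expression suggest a good RG.
        summary = pd.DataFrame({
            "mean_rpm": rpm.mean(axis=1),
            "cv": rpm.std(axis=1, ddof=1) / rpm.mean(axis=1),
        })
        candidates = summary[summary["mean_rpm"] > summary["mean_rpm"].median()].sort_values("cv")
        print(candidates)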

  4. Evaluation of disease progression in INCL by MR spectroscopy

    PubMed Central

    Baker, Eva H; Levin, Sondra W; Zhang, Zhongjian; Mukherjee, Anil B

    2015-01-01

    Objective Infantile neuronal ceroid lipofuscinosis (INCL) is a devastating neurodegenerative storage disease caused by palmitoyl-protein thioesterase-1 deficiency, which impairs degradation of palmitoylated proteins (constituents of ceroid) by lysosomal hydrolases. Consequent lysosomal ceroid accumulation leads to neuronal injury. As part of a pilot study to evaluate treatment benefits of cysteamine bitartrate and N-acetylcysteine, we quantitatively measured brain metabolite levels using magnetic resonance spectroscopy (MRS). Methods A subset of two patients from a larger treatment and follow-up study underwent serial quantitative single-voxel MRS examinations of five anatomical sites. Three echo times were acquired in order to estimate metabolite T2. Measured metabolite levels included correction for partial volume of cerebrospinal fluid. Comparison of INCL patients was made to a reference group composed of asymptomatic and minimally symptomatic Niemann-Pick disease type C patients. Results In INCL patients, N-acetylaspartate (NAA) was abnormally low at all locations upon initial measurement, and further declined throughout the follow-up period. In the cerebrum (affected early in the disease course), choline and myo-inositol were initially elevated and fell during the follow-up period, whereas in the cerebellum and brainstem (affected later), choline and myo-inositol were initially normal and rose subsequently. Interpretation Choline and myo-inositol levels in our patients are consistent with patterns of neuroinflammation observed in two INCL mouse models. Low, persistently declining NAA was expected based on the progressive, irreversible nature of the disease. Progression of metabolite levels in INCL has not been previously quantified; therefore the results of this study serve as a reference for quantitative evaluation of future therapeutic interventions. PMID:26339674

  5. Adoptive immunotherapy for acute leukemia: New insights in chimeric antigen receptors

    PubMed Central

    Heiblig, Maël; Elhamri, Mohamed; Michallet, Mauricette; Thomas, Xavier

    2015-01-01

    Relapses remain a major concern in acute leukemia. It is well known that leukemia stem cells (LSCs) hide in hematopoietic niches and escape immune system surveillance through the outgrowth of poorly immunogenic tumor-cell variants and the suppression of the active immune response. Despite the introduction of new reagents and new therapeutic approaches, no treatment strategies have been able to definitively eradicate LSCs. However, recent adoptive immunotherapy in cancer is expected to revolutionize our way to fight against this disease, by redirecting the immune system in order to eliminate relapse issues. Initially described in the early 1990s, chimeric antigen receptors (CARs) are recombinant receptors transferred into various T cell subsets, providing specific antigen binding in a non-major histocompatibility complex restricted manner, and effective on a large variety of human leukocyte antigen-diverse cell populations. Once transferred, engineered T cells act like an expanding "living drug" specifically targeting the tumor-associated antigen, and ensure long-term anti-tumor memory. Over the last decades, substantial improvements have been made in CAR design. CAR T cells have finally reached clinical practice, and the first clinical trials have shown promising results. In acute lymphoblastic leukemia, high rates of complete and prolonged clinical responses have been observed after anti-CD19 CAR T cell therapy, with specific but manageable adverse events. In this review, our goal was to describe CAR structures and functions, and to summarize recent data regarding pre-clinical studies and clinical trials in acute leukemia. PMID:26328018

  6. Reproductive endocrine patterns and volatile urinary compounds of Arctictis binturong: discovering why bearcats smell like popcorn

    NASA Astrophysics Data System (ADS)

    Greene, Lydia K.; Wallen, Timothy W.; Moresco, Anneke; Goodwin, Thomas E.; Drea, Christine M.

    2016-06-01

    Members of the order Carnivora rely on urinary scent signaling, particularly for communicating about reproductive parameters. Here, we describe reproductive endocrine patterns in relation to urinary olfactory cues in a vulnerable and relatively unknown viverrid—the binturong ( Arctictis binturong). Female binturongs are larger than and dominate males, and both sexes engage in glandular and urinary scent marking. Using a large ( n = 33), captive population, we collected serum samples to measure circulating sex steroids via enzyme immunoassay and urine samples to assay volatile chemicals via gas chromatography-mass spectrometry. Male binturongs had expectedly greater androgen concentrations than did females but, more unusually, had equal estrogen concentrations, which may be linked to male deference. Males also expressed a significantly richer array of volatile chemical compounds than did females. A subset of these volatile chemicals resisted decay at ambient temperatures, potentially indicating their importance as long-lasting semiochemicals. Among these compounds was 2-acetyl-1-pyrroline (2-AP), which is typically produced at high temperatures by the Maillard reaction and is likely to be responsible for the binturong's characteristic popcorn aroma. 2-AP, the only compound expressed by all of the subjects, was found in greater abundance in males than females and was significantly and positively related to circulating androstenedione concentrations in both sexes. This unusual compound may have a more significant role in mammalian semiochemistry than previously appreciated. Based on these novel data, we suggest that hormonal action and potentially complex chemical reactions mediate communication of the binturong's signature scent and convey information about sex and reproductive state.

  7. A novel automated spike sorting algorithm with adaptable feature extraction.

    PubMed

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows an accurate signal analysis at the single-cell level. Most sorting algorithms currently available only extract a specific feature type, such as the principal components or Wavelet coefficients of the measured spike signals, in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, Wavelet and principal component-based features and (ii) automatically derives a feature subset most suitable for sorting an individual set of spike signals. This new approach evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates can be formed into an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed an excellent classification result, indicating the superior performance of the described algorithm approach.
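
    A stripped-down version of the general recipe, feature extraction followed by expectation-maximisation clustering, can be sketched with scikit-learn; here PCA stands in for the adaptive feature-selection step of the paper, and the spike templates are invented.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)

        # Two toy spike templates plus noise, 64 samples per waveform.
        t = np.linspace(0, 1, 64)
        templates = [np.exp(-((t - 0.3) / 0.05) ** 2),
                     -1.5 * np.exp(-((t - 0.5) / 0.08) ** 2)]
        spikes = np.vstack([tpl + 0.1 * rng.normal(size=(200, 64)) for tpl in templates])

        # Feature extraction (here: first three principal components of the waveforms).
        features = PCA(n_components=3).fit_transform(spikes)

        # Expectation-maximisation clustering of the feature vectors.
        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
        labels = gmm.fit_predict(features)
        print("spikes assigned to each putative unit:", np.bincount(labels))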

  8. Evaluation of disease progression in INCL by MR spectroscopy.

    PubMed

    Baker, Eva H; Levin, Sondra W; Zhang, Zhongjian; Mukherjee, Anil B

    2015-08-01

    Infantile neuronal ceroid lipofuscinosis (INCL) is a devastating neurodegenerative storage disease caused by palmitoyl-protein thioesterase-1 deficiency, which impairs degradation of palmitoylated proteins (constituents of ceroid) by lysosomal hydrolases. Consequent lysosomal ceroid accumulation leads to neuronal injury. As part of a pilot study to evaluate treatment benefits of cysteamine bitartrate and N-acetylcysteine, we quantitatively measured brain metabolite levels using magnetic resonance spectroscopy (MRS). A subset of two patients from a larger treatment and follow-up study underwent serial quantitative single-voxel MRS examinations of five anatomical sites. Three echo times were acquired in order to estimate metabolite T2. Measured metabolite levels included correction for partial volume of cerebrospinal fluid. Comparison of INCL patients was made to a reference group composed of asymptomatic and minimally symptomatic Niemann-Pick disease type C patients. In INCL patients, N-acetylaspartate (NAA) was abnormally low at all locations upon initial measurement, and further declined throughout the follow-up period. In the cerebrum (affected early in the disease course), choline and myo-inositol were initially elevated and fell during the follow-up period, whereas in the cerebellum and brainstem (affected later), choline and myo-inositol were initially normal and rose subsequently. Choline and myo-inositol levels in our patients are consistent with patterns of neuroinflammation observed in two INCL mouse models. Low, persistently declining NAA was expected based on the progressive, irreversible nature of the disease. Progression of metabolite levels in INCL has not been previously quantified; therefore the results of this study serve as a reference for quantitative evaluation of future therapeutic interventions.
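
    For illustration, metabolite T2 can be estimated from signals acquired at several echo times by a mono-exponential fit, S(TE) = S0*exp(-TE/T2); the Python sketch below uses made-up echo times and signal values, not data from this study:

      # Sketch: mono-exponential T2 estimate from signals at three echo times.
      # Echo times and signal values are illustrative, not study data.
      import numpy as np

      te = np.array([30.0, 80.0, 135.0])      # echo times (ms)
      signal = np.array([12.0, 8.5, 6.0])     # metabolite signal at each TE (a.u.)

      # log-linear fit: ln S = ln S0 - TE / T2
      slope, intercept = np.polyfit(te, np.log(signal), 1)
      t2 = -1.0 / slope
      s0 = np.exp(intercept)                  # signal extrapolated to TE = 0 (T2-corrected level)
      print(f"T2 ~ {t2:.0f} ms, S0 ~ {s0:.1f}")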

  9. Inferring the colonization of a mountain range--refugia vs. nunatak survival in high alpine ground beetles.

    PubMed

    Lohse, Konrad; Nicholls, James A; Stone, Graham N

    2011-01-01

    It has long been debated whether high alpine specialists survived ice ages in situ on small ice-free islands of habitat, so-called nunataks, or whether glacial survival was restricted to larger massifs de refuge at the periphery. We evaluate these alternative hypotheses in a local radiation of high alpine carabid beetles (genus Trechus) in the Orobian Alps, Northern Italy. While summits along the northern ridge of this mountain range were surrounded by the ice sheet as nunataks during the last glacial maximum, southern areas remained unglaciated. We analyse a total of 1366 bp of mitochondrial (Cox1 and Cox2) data sampled from 150 individuals from twelve populations and 530 bp of nuclear (PEPCK) sequence sampled for a subset of 30 individuals. Using Bayesian inference, we estimate ancestral location states in the gene trees, which in turn are used to infer, from the mitochondrial data, the most likely order of recolonization under a model of sequential founder events from a massif de refuge. We test for the paraphyly expected under this model and for the reciprocal monophyly predicted by a contrasting model of prolonged persistence of nunatak populations. We find that (i) only three populations are incompatible with the paraphyly of the massif de refuge model, (ii) both mitochondrial and nuclear data support separate refugial origins for populations on the western and eastern ends of the northern ridge, and (iii) mitochondrial node ages suggest persistence on the northern ridge for part of the last ice age. © 2010 Blackwell Publishing Ltd.
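
    As an illustration of the reciprocal-monophyly test, the Python sketch below checks whether a set of tips forms a clade in a rooted gene tree; the toy tree and population labels are hypothetical, not the Trechus data:

      # Sketch: check whether a set of sampled tips is monophyletic in a rooted gene tree.
      # Tree topology and tip labels are made up for illustration.
      def clades(tree, out=None):
          """tree: nested tuples for internal nodes, strings for tips.
          Returns (list of tip-label sets, one per internal node; tip set of this node)."""
          if out is None:
              out = []
          if isinstance(tree, str):
              return out, {tree}
          tips = set()
          for child in tree:
              out, child_tips = clades(child, out)
              tips |= child_tips
          out.append(tips)
          return out, tips

      def is_monophyletic(tree, group):
          clade_sets, _ = clades(tree)
          return set(group) in clade_sets

      # Example: nunatak haplotypes nested inside a refugial clade -> paraphyly, not monophyly
      gene_tree = ((("nunatak_1", "refuge_1"), "nunatak_2"), ("refuge_2", "refuge_3"))
      print(is_monophyletic(gene_tree, ["nunatak_1", "nunatak_2"]))   # False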

  10. MetaboID: a graphical user interface package for assignment of 1H NMR spectra of bodyfluids and tissues.

    PubMed

    MacKinnon, Neil; Somashekar, Bagganahalli S; Tripathi, Pratima; Ge, Wencheng; Rajendiran, Thekkelnaycke M; Chinnaiyan, Arul M; Ramamoorthy, Ayyalusamy

    2013-01-01

    Nuclear magnetic resonance-based measurements of small-molecule mixtures continue to be confronted with the challenge of spectral assignment. While multi-dimensional experiments are capable of addressing this challenge, the imposed time constraint becomes prohibitive, particularly with the large sample sets commonly encountered in metabolomic studies. Thus, one-dimensional spectral assignment is routinely performed, guided by two-dimensional experiments on a selected sample subset; however, a publicly available graphical interface for aiding in this process is currently unavailable. We have collected spectral information for 360 unique compounds from publicly available databases, including chemical shift lists and authentic full-resolution spectra, supplemented with spectral information for 25 compounds collected in-house at a proton NMR frequency of 900 MHz. This library serves as the basis for MetaboID, a Matlab-based user interface designed to aid in the one-dimensional spectral assignment process. The tools of MetaboID were built to guide resonance assignment in order of increasing confidence, starting from cursory compound searches based on chemical shift positions and ending with analysis of authentic spike experiments. Together, these tools streamline the often repetitive task of spectral assignment. The overarching goal of the integrated MetaboID toolbox is to centralize the one-dimensional spectral assignment process, from providing access to large chemical shift libraries to providing a straightforward, intuitive means of spectral comparison. Such a toolbox is expected to be attractive to experienced and new metabolomic researchers as well as general complex-mixture analysts. Copyright © 2012 Elsevier Inc. All rights reserved.
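
    A minimal sketch of the kind of cursory chemical-shift search described above, in Python; the tiny reference library and the 0.03 ppm matching window are illustrative assumptions, not MetaboID's actual library or Matlab code:

      # Sketch: rank library compounds by the fraction of their reference 1H shifts
      # that match an observed peak within a tolerance. Library values are illustrative.
      import numpy as np

      library = {
          "lactate":  [1.33, 4.11],
          "alanine":  [1.48, 3.78],
          "creatine": [3.03, 3.93],
      }

      def rank_candidates(observed_ppm, library, tol=0.03):
          observed = np.asarray(observed_ppm)
          scores = {}
          for compound, shifts in library.items():
              hits = sum(np.any(np.abs(observed - s) <= tol) for s in shifts)
              scores[compound] = hits / len(shifts)
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      print(rank_candidates([1.32, 3.04, 3.92, 4.12], library))
      # [('lactate', 1.0), ('creatine', 1.0), ('alanine', 0.0)]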

  11. ASTEROSEISMIC-BASED ESTIMATION OF THE SURFACE GRAVITY FOR THE LAMOST GIANT STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chao; Wu, Yue; Deng, Li-Cai

    2015-07-01

    Asteroseismology is one of the most accurate approaches to estimating the surface gravity of a star. However, most of the data from current spectroscopic surveys lack asteroseismic measurements, which are expensive and time consuming to obtain. In order to improve the spectroscopic surface gravity estimates for a large amount of survey data with the help of the small subset of the data that has seismic measurements, we set up a support vector regression (SVR) model for estimating the surface gravity, supervised by 1374 Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) giant stars with Kepler seismic surface gravity. The new approach can reduce the uncertainty of the estimates to about 0.1 dex, better than the LAMOST pipeline by at least a factor of 2, for spectra with signal-to-noise ratio higher than 20. Compared with the log g estimated from the LAMOST pipeline, the revised log g values provide a significantly improved match to the expected distribution of red clump and red giant branch stars from stellar isochrones. Moreover, even the red bump stars, which span only about 0.1 dex in log g, can be discriminated from the new estimated surface gravity. The method is then applied to about 350,000 LAMOST metal-rich giant stars to provide improved surface gravity estimates. In general, the uncertainty of the distance estimate based on the SVR surface gravity can be reduced to about 12% for the LAMOST data.
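
    A minimal sketch of the supervised step, in Python with scikit-learn: train a support vector regression on stars that have seismic log g, then apply it to the remaining survey spectra. The feature matrix, kernel and hyperparameters are placeholders, not the paper's setup:

      # Sketch: SVR for log g, supervised by stars with seismic log g.
      # X rows stand in for spectral features; data and hyperparameters are placeholders.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X_seismic = rng.normal(size=(1374, 50))              # stars with Kepler log g
      logg_seismic = rng.uniform(1.5, 3.5, size=1374)      # placeholder labels

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
      scores = cross_val_score(model, X_seismic, logg_seismic,
                               scoring="neg_root_mean_squared_error", cv=5)
      print("cross-validated scatter (dex):", -scores.mean())

      model.fit(X_seismic, logg_seismic)
      # logg_survey = model.predict(X_survey)   # apply to the full survey sample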

  12. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low-data-rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence, which is maintained as long as iterative corrections stay within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to these correction limits extends the number of iterations and the overall time needed to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from the accelerated and the strictly convergent algorithms.
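
    The ordered-subsets idea is sketched below with a generic multiplicative ML-EM emission-type update, chosen for brevity; the paper's alternating-minimization transmission update differs, and the system matrix and measurements are placeholders:

      # Sketch of ordered subsets: one pass over the data makes several image updates,
      # one per subset of projections. Generic OS-EM form, not the authors' AM algorithm.
      import numpy as np

      def osem(A, y, n_subsets=8, n_iters=5, eps=1e-12):
          """A: (n_rays, n_voxels) system matrix, y: (n_rays,) measured counts."""
          n_rays, n_vox = A.shape
          x = np.ones(n_vox)
          subsets = [np.arange(s, n_rays, n_subsets) for s in range(n_subsets)]  # interleaved rays
          for _ in range(n_iters):
              for idx in subsets:
                  As = A[idx]
                  sens = As.sum(axis=0) + eps          # subset sensitivity image
                  ratio = y[idx] / (As @ x + eps)      # measured / predicted projections
                  x *= (As.T @ ratio) / sens           # multiplicative update
          return x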

  13. Algorithm For Solution Of Subset-Regression Problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, Michel

    1991-01-01

    Reliable and flexible algorithm for solution of subset-regression problem performs QR decomposition with new column-pivoting strategy, enables selection of subset directly from originally defined regression parameters. This feature, in combination with number of extensions, makes algorithm very flexible for use in analysis of subset-regression problems in which parameters have physical meanings. Also extended to enable joint processing of columns contaminated by noise with those free of noise, without using scaling techniques.
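
    A minimal sketch of subset selection via pivoted QR, using SciPy's standard column pivoting rather than the new pivoting strategy described above; the regressor matrix and response are synthetic:

      # Sketch: pick a k-regressor subset with QR column pivoting, then solve the
      # reduced least-squares problem. Standard pivoted QR, not the paper's strategy.
      import numpy as np
      from scipy.linalg import qr, lstsq

      rng = np.random.default_rng(1)
      X = rng.normal(size=(100, 8))
      X[:, 5] = X[:, 0] + 0.01 * rng.normal(size=100)     # a nearly redundant column
      y = X @ np.array([2.0, 0, 0, -1.0, 0, 0, 0, 0.5]) + 0.1 * rng.normal(size=100)

      k = 3
      _, _, piv = qr(X, pivoting=True)       # columns greedily ordered by remaining norm
      subset = np.sort(piv[:k])              # indices of the selected regressors
      beta, *_ = lstsq(X[:, subset], y)
      print("selected columns:", subset, "coefficients:", np.round(beta, 2))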

  14. Analyzing the Fierz rearrangement freedom for local chiral two-nucleon potentials

    NASA Astrophysics Data System (ADS)

    Huth, L.; Tews, I.; Lynn, J. E.; Schwenk, A.

    2017-11-01

    Chiral effective field theory is a framework to derive systematic nuclear interactions. It is based on the symmetries of quantum chromodynamics and includes long-range pion physics explicitly, while shorter-range physics is expanded in a general operator basis. The number of low-energy couplings at a particular order in the expansion can be reduced by exploiting the fact that nucleons are fermions and therefore obey the Pauli exclusion principle. The antisymmetry permits the selection of a subset of the allowed contact operators at a given order. When local regulators are used for these short-range interactions, however, this "Fierz rearrangement freedom" is violated. In this paper, we investigate the impact of this violation at leading order (LO) in the chiral expansion. We construct LO and next-to-leading order (NLO) potentials for all possible LO-operator pairs and study their reproduction of phase shifts, the 4He ground-state energy, and the neutron-matter energy at different densities. We demonstrate that the Fierz rearrangement freedom is partially restored at NLO where subleading contact interactions enter. We also discuss implications for local chiral three-nucleon interactions.

  15. Trimming a hazard logic tree with a new model-order-reduction technique

    USGS Publications Warehouse

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
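
    A minimal sketch of the tornado-diagram step, in Python: swing each logic-tree choice between its extreme values with the others held at baseline, and rank the inputs by the resulting change in the loss metric. The toy loss function and parameter values are made up:

      # Sketch of a tornado-diagram sensitivity ranking. The loss model is a stand-in.
      def expected_loss(params):
          return params["rate"] * params["magnitude_scale"] + 0.1 * params["gmpe_weight"]

      baseline = {"rate": 1.0, "magnitude_scale": 2.0, "gmpe_weight": 0.5}
      extremes = {"rate": (0.8, 1.3), "magnitude_scale": (1.5, 2.6), "gmpe_weight": (0.0, 1.0)}

      swings = {}
      for name, (low, high) in extremes.items():
          lo_params = dict(baseline, **{name: low})
          hi_params = dict(baseline, **{name: high})
          swings[name] = abs(expected_loss(hi_params) - expected_loss(lo_params))

      # Parameters with small swings are candidates for fixing at their baseline value.
      for name, swing in sorted(swings.items(), key=lambda kv: kv[1], reverse=True):
          print(f"{name:18s} swing = {swing:.2f}")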

  16. Integration by parts and Pohozaev identities for space-dependent fractional-order operators

    NASA Astrophysics Data System (ADS)

    Grubb, Gerd

    2016-08-01

    Consider a classical elliptic pseudodifferential operator P on R^n of order 2a (0 < a < 1) with even symbol. For example, P = A(x, D)^a, where A(x, D) is a second-order strongly elliptic differential operator; the fractional Laplacian (-Δ)^a is a particular case. For solutions u of the Dirichlet problem on a bounded smooth subset Ω ⊂ R^n, we show an integration-by-parts formula with a boundary integral involving (d^{-a} u)|_{∂Ω}, where d(x) = dist(x, ∂Ω). This extends recent results of Ros-Oton, Serra and Valdinoci to operators that are x-dependent, nonsymmetric, and have lower-order parts. We also generalize their formula of Pohozaev type, which can be used to prove unique continuation properties and nonexistence of nontrivial solutions of semilinear problems. An illustration is given with P = (-Δ + m^2)^a. The basic step in our analysis is a factorization P ∼ P^- P^+, where we set up a calculus for the generalized pseudodifferential operators P^± that come out of the construction.
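
    For reference, the fractional Laplacian mentioned above has the standard singular-integral definition (textbook form, not a formula taken from this paper), in LaTeX notation:

      (-\Delta)^a u(x) \;=\; c_{n,a}\,\mathrm{P.V.}\!\int_{\mathbb{R}^n} \frac{u(x)-u(y)}{|x-y|^{\,n+2a}}\,dy, \qquad 0 < a < 1.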

  17. On synchronization in power-grids modelled as networks of second-order Kuramoto oscillators

    NASA Astrophysics Data System (ADS)

    Grzybowski, J. M. V.; Macau, E. E. N.; Yoneyama, T.

    2016-11-01

    This work concerns analytical results on the role of coupling strength in the onset of complete frequency locking in power grids modelled as networks of second-order Kuramoto oscillators. These results allow the coupling strength at the onset of complete frequency locking to be estimated and the features of the network and oscillators that favor synchronization to be assessed. The analytical results are evaluated using an order parameter defined as the normalized sum of absolute values of the phase deviations of the oscillators over time. The frequency synchronization within subsets of the parameter space involved in the synchronization problem is also investigated. The analytical results are shown to be in good agreement with those observed in numerical simulations. To illustrate the methodology, a case study is presented involving the Brazilian high-voltage transmission system under a peak-load condition, in order to study the effect of load on the synchronizability of the grid. The results show that both the load and the centralized generation might have contributed to the 2014 blackout.
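
    A minimal numerical sketch of the model class, in Python: second-order Kuramoto oscillators, m*theta_i'' + d*theta_i' = P_i + K*sum_j A_ij*sin(theta_j - theta_i), with an order parameter formed from time-averaged absolute phase deviations. The three-node network, power injections and constants are illustrative, not the Brazilian grid model:

      # Sketch: second-order Kuramoto (swing-equation) oscillators on a toy network.
      import numpy as np

      def simulate(A, P, K=5.0, m=1.0, d=1.0, dt=0.01, steps=20000):
          n = len(P)
          theta = np.zeros(n)
          omega = np.zeros(n)
          dev_sum = 0.0
          for _ in range(steps):
              coupling = K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
              acc = (P - d * omega + coupling) / m
              omega += dt * acc                    # explicit Euler step (fine for a sketch)
              theta += dt * omega
              dev_sum += np.abs(theta - theta.mean()).sum()
          # time- and node-averaged absolute phase deviation; small and bounded values
          # indicate complete frequency locking, growing values indicate drift
          return dev_sum / (steps * n)

      A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # fully connected 3-node grid
      P = np.array([1.0, -0.5, -0.5])                          # generation/load, sums to zero
      print(simulate(A, P))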

  18. Dimethyl fumarate–induced lymphopenia in MS due to differential T-cell subset apoptosis

    PubMed Central

    Ghadiri, Mahtab; Rezk, Ayman; Li, Rui; Evans, Ashley; Luessi, Felix; Zipp, Frauke; Giacomini, Paul S.; Antel, Jack

    2017-01-01

    Objective: To examine the mechanism underlying the preferential CD8+ vs CD4+ T-cell lymphopenia induced by dimethyl fumarate (DMF) treatment of MS. Methods: Total lymphocyte counts and comprehensive T-cell subset analyses were performed in high-quality samples obtained from patients with MS prior to and serially following DMF treatment initiation. Random coefficient mixed-effects analysis was used to model the trajectory of T-cell subset losses in vivo. Survival and apoptosis of distinct T-cell subsets were assessed following in vitro exposure to DMF. Results: Best-fit modeling indicated that the DMF-induced preferential reductions in CD8+ vs CD4+ T-cell counts nonetheless followed similar depletion kinetics, suggesting a similar rather than distinct mechanism involved in losses of both the CD8+ and CD4+ T cells. In vitro, DMF exposure resulted in dose-dependent reductions in T-cell survival, which were found to reflect apoptotic cell death. This DMF-induced apoptosis was greater for CD8+ vs CD4+, as well as for memory vs naive, and conventional vs regulatory T-cell subsets, a pattern which mirrored preferential T-cell subset losses that we observed during in vivo treatment of patients. Conclusions: Differential apoptosis mediated by DMF may underlie the preferential lymphopenia of distinct T-cell subsets, including CD8+ and memory T-cell subsets, seen in treated patients with MS. This differential susceptibility of distinct T-cell subsets to DMF-induced apoptosis may contribute to both the safety and efficacy profiles of DMF in patients with MS. PMID:28377940
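
    A minimal sketch of a random-coefficient mixed-effects fit of cell counts over time (random intercept and slope per patient), using statsmodels; the data frame and column names are placeholders, not the study data:

      # Sketch: random-coefficient mixed-effects model of T-cell counts on treatment.
      # Synthetic data with a patient-level offset; not the study dataset.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      patients = np.repeat(np.arange(20), 5)
      months = np.tile([0, 3, 6, 9, 12], 20)
      counts = (800 - 20 * months + rng.normal(0, 50, size=100)
                + np.repeat(rng.normal(0, 80, size=20), 5))
      df = pd.DataFrame({"patient": patients, "months": months, "cd8_count": counts})

      model = smf.mixedlm("cd8_count ~ months", df, groups=df["patient"],
                          re_formula="~months")        # random intercept and slope
      result = model.fit()
      print(result.summary())   # the fixed effect on months estimates the mean monthly decline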

  19. Vitamin D 1alpha-hydroxylase (CYP1alpha) polymorphism in Graves' disease, Hashimoto's thyroiditis and type 1 diabetes mellitus.

    PubMed

    Pani, Michael A; Regulla, Karoline; Segni, Maria; Krause, Maren; Hofmann, Stefan; Hufner, Michael; Herwig, Jurgen; Pasquino, Anna Maria; Usadel, Klaus-H; Badenhoop, Klaus

    2002-06-01

    The vitamin D endocrine system plays a role in the regulation of (auto)immunity and cell proliferation. Vitamin D 1alpha-hydroxylase (CYP1alpha) is one of the key enzymes regulating both systemic and tissue levels of 1,25-dihydroxyvitamin D(3) (1,25(OH)(2)D(3)). Administration of 1,25(OH)(2)D(3), whose serum levels were found to be reduced in type 1 diabetes and thyroid autoimmunity, prevents these diseases in animal models. We therefore investigated a recently reported CYP1alpha polymorphism for an association with type 1 diabetes mellitus, Graves' disease and Hashimoto's thyroiditis. Four hundred and seven Caucasian pedigrees with one offspring affected by either type 1 diabetes (209 families), Graves' disease (92 families) or Hashimoto's thyroiditis (106 families) were genotyped for a C/T polymorphism in intron 6 of the CYP1alpha gene on chromosome 12q13.1-13.3, and transmission disequilibrium testing (TDT) was performed. Subsets of affected offspring stratified for HLA-DQ haplotype were compared using chi(2) testing. There was no deviation from the expected transmission frequency in type 1 diabetes mellitus (P=0.825), Graves' disease (P=0.909) or Hashimoto's thyroiditis (P=0.204). However, in Hashimoto's thyroiditis the CYP1alpha C allele was significantly more often transmitted to HLA-DQ2(-) patients (27 transmitted vs 14 not transmitted; TDT: P=0.042) than expected. The C allele was less often transmitted to HLA-DQ2(+) patients (9 transmitted vs 12 not transmitted; TDT: P=0.513), although the difference between the subsets was not significant (chi(2) test: P=0.143). A similar difference was observed in type 1 diabetes between offspring with high- and low-risk HLA-DQ haplotypes (chi(2) test: P=0.095). The CYP1alpha intron 6 polymorphism appears not to be associated with type 1 diabetes mellitus, Graves' disease or Hashimoto's thyroiditis. A potential association in subsets of patients with type 1 diabetes and Hashimoto's thyroiditis should be further investigated, as should its functional implications.
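
    The reported TDT result for the HLA-DQ2-negative Hashimoto subset can be checked directly from the transmitted/untransmitted counts with the standard (T - NT)^2/(T + NT) statistic:

      # Worked check of the TDT for the HLA-DQ2(-) Hashimoto subset:
      # chi2 = (T - NT)^2 / (T + NT) with 27 transmitted vs 14 untransmitted C alleles.
      from scipy.stats import chi2

      transmitted, untransmitted = 27, 14
      stat = (transmitted - untransmitted) ** 2 / (transmitted + untransmitted)
      p_value = chi2.sf(stat, df=1)
      print(round(stat, 2), round(p_value, 3))   # ~4.12, p ~ 0.042, matching the abstract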

  20. A comparative study of expectant parents' childbirth expectations.

    PubMed

    Kao, Bi-Chin; Gau, Meei-Ling; Wu, Shian-Feng; Kuo, Bih-Jaw; Lee, Tsorng-Yeh

    2004-09-01

    The purpose of this study was to understand childbirth expectations and differences in childbirth expectations among expectant parents. Using convenience sampling, 200 couples willing to participate in this study were recruited from two hospitals in central Taiwan. Inclusion criteria were a gestational age of at least 36 weeks, age 18 or above, no prenatal complications, and willingness to consent to participation in this study. Instruments used to collect data included a basic demographic data form and the Childbirth Expectations Questionnaire. Findings of the study revealed that (1) expectant parents identified five factors regarding childbirth expectations: the caregiving environment, expectation of labor pain, spousal support, control and participation, and medical and nursing support; (2) no general differences were identified in childbirth expectations between expectant fathers and expectant mothers; and (3) expectant fathers with a higher socioeconomic status and those who had received prenatal (childbirth) education had higher childbirth expectations, whereas expectant mothers' expectations did not differ by demographic characteristics. The study results may help clinical healthcare providers better understand expectant parents' childbirth expectations and how they differ during labor and birth, in order to improve medical and nursing care and promote positive childbirth experiences and satisfaction for expectant parents.
