Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings exist in the oral cavity, the appearance of metal-induced streak artefacts is not avoidable in CT images. The aim of this study was to develop a method for artefact reduction using the statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstructed images with weak artefacts were attempted using projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were continuously processed in sequence by successive iterative restoration where the projection data was generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest setting was designated. Finally, the general purpose graphic processing unit machine was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and small region of interest reduced the processing duration without apparent detriments. A general-purpose graphic processing unit realized the high performance. A statistical reconstruction method was applied for the streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, small region of interest and general-purpose graphic processing unit achieved fast artefact correction.
Ceriani, Luca; Ruberto, Teresa; Delaloye, Angelika Bischof; Prior, John O; Giovanella, Luca
2010-03-01
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (< or =10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were > or = 2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-07
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby the cross-dependencies of image voxels are reduced, which in turns results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore to improve the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization gives rise to further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches enable to further accelerate and improve the convergence properties of OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
NASA Astrophysics Data System (ADS)
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-01
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby the cross-dependencies of image voxels are reduced, which in turns results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore to improve the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization gives rise to further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches enable to further accelerate and improve the convergence properties of OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
NASA Astrophysics Data System (ADS)
Karamat, Muhammad I.; Farncombe, Troy H.
2015-10-01
Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of object background compared to alternative reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of reconstructed images and activity estimated using Sim-OSEM. In order to quantitate the possible improvement in spatial resolution and signal to noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing the fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method where projection data were generated from reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied for improving performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar
2009-02-01
Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing that leads to a better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating the low pass filters on SPECT reconstruction algorithms. Criteria for evaluating the filters are estimating the SPECT reconstructed cardiac azimuth and elevation angle. Low pass filters studied are butterworth, gaussian, hamming, hanning and parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only butterworth used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that behavior of hamming, hanning and parzen filter (used with FBP) with varying cutoff is similar for all the datasets. Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms whereas with OSEM for a cutoff < 0.4, it fails to generate cardiac orientation due to oversmoothing, and gives an unstable response with FBP and MLEM. This study on evaluating effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into optimal selection of filter parameters.
Muon tomography imaging improvement using optimized limited angle data
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Simon, Sean; Kindem, Joel; Luo, Weidong; Sossong, Michael J.; Steiger, Matthew
2014-05-01
Image resolution of muon tomography is limited by the range of zenith angles of cosmic ray muons and the flux rate at sea level. Low flux rate limits the use of advanced data rebinning and processing techniques to improve image quality. By optimizing the limited angle data, however, image resolution can be improved. To demonstrate the idea, physical data of tungsten blocks were acquired on a muon tomography system. The angular distribution and energy spectrum of muons measured on the system was also used to generate simulation data of tungsten blocks of different arrangement (geometry). The data were grouped into subsets using the zenith angle and volume images were reconstructed from the data subsets using two algorithms. One was a distributed PoCA (point of closest approach) algorithm and the other was an accelerated iterative maximal likelihood/expectation maximization (MLEM) algorithm. Image resolution was compared for different subsets. Results showed that image resolution was better in the vertical direction for subsets with greater zenith angles and better in the horizontal plane for subsets with smaller zenith angles. The overall image resolution appeared to be the compromise of that of different subsets. This work suggests that the acquired data can be grouped into different limited angle data subsets for optimized image resolution in desired directions. Use of multiple images with resolution optimized in different directions can improve overall imaging fidelity and the intended applications.
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Kinahan, P. E.; Brasse, D.; Comtat, C.; Townsend, D. W.
2002-02-01
We have evaluated the penalized ordered-subset transmission reconstruction (OSTR) algorithm for postinjection single photon transmission scanning. The OSTR algorithm of Erdogan and Fessler (1999) uses a more accurate model for transmission tomography than ordered-subsets expectation-maximization (OSEM) when OSEM is applied to the logarithm of the transmission data. The OSTR algorithm is directly applicable to postinjection transmission scanning with a single photon source, as emission contamination from the patient mimics the effect, in the original derivation of OSTR, of random coincidence contamination in a positron source transmission scan. Multiple noise realizations of simulated postinjection transmission data were reconstructed using OSTR, filtered backprojection (FBP), and OSEM algorithms. Due to the nonspecific task performance, or multiple uses, of the transmission image, multiple figures of merit were evaluated, including image noise, contrast, uniformity, and root mean square (rms) error. We show that: 1) the use of a three-dimensional (3-D) regularizing image roughness penalty with OSTR improves the tradeoffs in noise, contrast, and rms error relative to the use of a two-dimensional penalty; 2) OSTR with a 3-D penalty has improved tradeoffs in noise, contrast, and rms error relative to FBP or OSEM; and 3) the use of image standard deviation from a single realization to estimate the true noise can be misleading in the case of OSEM. We conclude that using OSTR with a 3-D penalty potentially allows for shorter postinjection transmission scans in single photon transmission tomography in positron emission tomography (PET) relative to FBP or OSEM reconstructed images with the same noise properties. This combination of singles+OSTR is particularly suitable for whole-body PET oncology imaging.
Studies of a Next-Generation Silicon-Photomultiplier-Based Time-of-Flight PET/CT System.
Hsu, David F C; Ilan, Ezgi; Peterson, William T; Uribe, Jorge; Lubberink, Mark; Levin, Craig S
2017-09-01
This article presents system performance studies for the Discovery MI PET/CT system, a new time-of-flight system based on silicon photomultipliers. System performance and clinical imaging were compared between this next-generation system and other commercially available PET/CT and PET/MR systems, as well as between different reconstruction algorithms. Methods: Spatial resolution, sensitivity, noise-equivalent counting rate, scatter fraction, counting rate accuracy, and image quality were characterized with the National Electrical Manufacturers Association NU-2 2012 standards. Energy resolution and coincidence time resolution were measured. Tests were conducted independently on two Discovery MI scanners installed at Stanford University and Uppsala University, and the results were averaged. Back-to-back patient scans were also performed between the Discovery MI, Discovery 690 PET/CT, and SIGNA PET/MR systems. Clinical images were reconstructed using both ordered-subset expectation maximization and Q.Clear (block-sequential regularized expectation maximization with point-spread function modeling) and were examined qualitatively. Results: The averaged full widths at half maximum (FWHMs) of the radial/tangential/axial spatial resolution reconstructed with filtered backprojection at 1, 10, and 20 cm from the system center were, respectively, 4.10/4.19/4.48 mm, 5.47/4.49/6.01 mm, and 7.53/4.90/6.10 mm. The averaged sensitivity was 13.7 cps/kBq at the center of the field of view. The averaged peak noise-equivalent counting rate was 193.4 kcps at 21.9 kBq/mL, with a scatter fraction of 40.6%. The averaged contrast recovery coefficients for the image-quality phantom were 53.7, 64.0, 73.1, 82.7, 86.8, and 90.7 for the 10-, 13-, 17-, 22-, 28-, and 37-mm-diameter spheres, respectively. The average photopeak energy resolution was 9.40% FWHM, and the average coincidence time resolution was 375.4 ps FWHM. Clinical image comparisons between the PET/CT systems demonstrated the high quality of the Discovery MI. Comparisons between the Discovery MI and SIGNA showed a similar spatial resolution and overall imaging performance. Lastly, the results indicated significantly enhanced image quality and contrast-to-noise performance for Q.Clear, compared with ordered-subset expectation maximization. Conclusion: Excellent performance was achieved with the Discovery MI, including 375 ps FWHM coincidence time resolution and sensitivity of 14 cps/kBq. Comparisons between reconstruction algorithms and other multimodal silicon photomultiplier and non-silicon photomultiplier PET detector system designs indicated that performance can be substantially enhanced with this next-generation system. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
Anatomically-Aided PET Reconstruction Using the Kernel Method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-01-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single proton emission computed tomography (SPECT) images are degraded by photon scatter making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that when combined the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality indicating that the coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
Anatomically-aided PET reconstruction using the kernel method
NASA Astrophysics Data System (ADS)
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
A quantitative reconstruction software suite for SPECT imaging
NASA Astrophysics Data System (ADS)
Namías, Mauro; Jeraj, Robert
2017-11-01
Quantitative Single Photon Emission Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increasing quantitative accuracy of SPECT scanners.
Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su
2015-10-01
Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET was used. Spatial resolution of I-124 was measured to a transverse offset of 50 mm from the center FBP, 2D ordered subset expectation maximization (OSEM2D), 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 was also assessed. Results: Volumetric resolution was 7.3 mm3 from the transverse FOV center when FBP reconstruction algorithms with ramp filter was used. MAP yielded minimal NU with β =1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, an FBP with Hanning or Parzen filters, or a 3DRP with Hanning filter yielded feasible I-124 PET data.Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with Hanning filter yielded feasible data for quantifying I-124 PET.
Attenuation correction strategies for multi-energy photon emitters using SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pretorius, P.H.; King, M.A.; Pan, T.S.
1996-12-31
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojectionmore » (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and (4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCR`s of sphere 4 were under-estimated, although TCR`s were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately.« less
NASA Regional Planetary Image Facility
NASA Technical Reports Server (NTRS)
Arvidson, Raymond E.
2001-01-01
The Regional Planetary Image Facility (RPIF) provided access to data from NASA planetary missions and expert assistance about the data sets and how to order subsets of the collections. This ensures that the benefit/cost of acquiring the data is maximized by widespread dissemination and use of the observations and resultant collections. The RPIF provided education and outreach functions that ranged from providing data and information to teachers, involving small groups of highly motivated students in its activities, to public lectures and tours. These activities maximized dissemination of results and data to the educational and public communities.
NASA Astrophysics Data System (ADS)
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 106). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
G-STRATEGY: Optimal Selection of Individuals for Sequencing in Genetic Association Studies
Wang, Miaoyan; Jakobsdottir, Johanna; Smith, Albert V.; McPeek, Mary Sara
2017-01-01
In a large-scale genetic association study, the number of phenotyped individuals available for sequencing may, in some cases, be greater than the study’s sequencing budget will allow. In that case, it can be important to prioritize individuals for sequencing in a way that optimizes power for association with the trait. Suppose a cohort of phenotyped individuals is available, with some subset of them possibly already sequenced, and one wants to choose an additional fixed-size subset of individuals to sequence in such a way that the power to detect association is maximized. When the phenotyped sample includes related individuals, power for association can be gained by including partial information, such as phenotype data of ungenotyped relatives, in the analysis, and this should be taken into account when assessing whom to sequence. We propose G-STRATEGY, which uses simulated annealing to choose a subset of individuals for sequencing that maximizes the expected power for association. In simulations, G-STRATEGY performs extremely well for a range of complex disease models and outperforms other strategies with, in many cases, relative power increases of 20–40% over the next best strategy, while maintaining correct type 1 error. G-STRATEGY is computationally feasible even for large datasets and complex pedigrees. We apply G-STRATEGY to data on HDL and LDL from the AGES-Reykjavik and REFINE-Reykjavik studies, in which G-STRATEGY is able to closely-approximate the power of sequencing the full sample by selecting for sequencing a only small subset of the individuals. PMID:27256766
Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.
Chou, Cheng-Ying; Dong, Yun; Hung, Yukai; Kao, Yu-Jiun; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu
2012-01-01
Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. We have developed a compact high-sensitivity PET system that consisted of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus request dramatic computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approaches ingeniously integrate the distinguished features of the symmetry properties of the imaging system and GPU architectures, including block/warp/thread assignments and effective memory usage, to accelerate the computations for ordered subset expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithms were implemented employing both CPU-based and GPU-based codes, and their computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme can drastically reduce the reconstruction time and thus can largely expand the applicability of the dual-head PET system.
Lasnon, Charline; Dugue, Audrey Emmanuelle; Briand, Mélanie; Blanc-Fournier, Cécile; Dutoit, Soizic; Louis, Marie-Hélène; Aide, Nicolas
2015-06-01
We compared conventional filtered back-projection (FBP), two-dimensional-ordered subsets expectation maximization (OSEM) and maximum a posteriori (MAP) NEMA NU 4-optimized reconstructions for therapy assessment. Varying reconstruction settings were used to determine the parameters for optimal image quality with two NEMA NU 4 phantom acquisitions. Subsequently, data from two experiments in which nude rats bearing subcutaneous tumors had received a dual PI3K/mTOR inhibitor were reconstructed with the NEMA NU 4-optimized parameters. Mann-Whitney tests were used to compare mean standardized uptake value (SUV(mean)) variations among groups. All NEMA NU 4-optimized reconstructions showed the same 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) kinetic patterns and detected a significant difference in SUV(mean) relative to day 0 between controls and treated groups for all time points with comparable p values. In the framework of therapy assessment in rats bearing subcutaneous tumors, all algorithms available on the Inveon system performed equally.
NASA Astrophysics Data System (ADS)
Hutton, Brian F.; Lau, Yiu H.
1998-06-01
Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for
data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset size 2 and 4, with/without 3D compensation for detector response (CDR). Also post-reconstruction filtering (PRF) was performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-
). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with cutoff frequency greater than
improved noise with no reduction in recovery coefficient for myocardium but the effect was less when CDR was incorporated in the reconstruction. CDR alone provided better results than use of PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
What's wrong with hazard-ranking systems? An expository note.
Cox, Louis Anthony Tony
2009-07-01
Two commonly recommended principles for allocating risk management resources to remediate uncertain hazards are: (1) select a subset to maximize risk-reduction benefits (e.g., maximize the von Neumann-Morgenstern expected utility of the selected risk-reducing activities), and (2) assign priorities to risk-reducing opportunities and then select activities from the top of the priority list down until no more can be afforded. When different activities create uncertain but correlated risk reductions, as is often the case in practice, then these principles are inconsistent: priority scoring and ranking fails to maximize risk-reduction benefits. Real-world risk priority scoring systems used in homeland security and terrorism risk assessment, environmental risk management, information system vulnerability rating, business risk matrices, and many other important applications do not exploit correlations among risk-reducing opportunities or optimally diversify risk-reducing investments. As a result, they generally make suboptimal risk management recommendations. Applying portfolio optimization methods instead of risk prioritization ranking, rating, or scoring methods can achieve greater risk-reduction value for resources spent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefferkoetter, Joshua, E-mail: dnrjds@nus.edu.sg; Ouyang, Jinsong; Rakvongthai, Yothin
2014-06-15
Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as comparedmore » to its non-TOF counterpart for all the defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For typical reconstruction protocol used in clinical practice, i.e., less than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For large number of iterations, TOF+PSF yields the best observer performance.« less
Attenuation correction strategies for multi-energy photon emitters using SPECT
NASA Astrophysics Data System (ADS)
Pretorius, P. H.; King, M. A.; Pan, T.-S.; Hutton, B. F.
1997-06-01
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation maximization (ML-OS) reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: 1) the 93 keV attenuation map for attenuation correction, 2) the 185 keV attenuation map for attenuation correction, 3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and 4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 (in proximity to the liver, spleen and backbone) were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately. They are recommended for multi-energy photon SPECT imaging quantitation when there is a need to combine the acquisitions of multiple windows.
Incorporating HYPR de-noising within iterative PET reconstruction (HYPR-OSEM)
NASA Astrophysics Data System (ADS)
(Kevin Cheng, Ju-Chieh; Matthews, Julian; Sossi, Vesna; Anton-Rodriguez, Jose; Salomon, André; Boellaard, Ronald
2017-08-01
HighlY constrained back-PRojection (HYPR) is a post-processing de-noising technique originally developed for time-resolved magnetic resonance imaging. It has been recently applied to dynamic imaging for positron emission tomography and shown promising results. In this work, we have developed an iterative reconstruction algorithm (HYPR-OSEM) which improves the signal-to-noise ratio (SNR) in static imaging (i.e. single frame reconstruction) by incorporating HYPR de-noising directly within the ordered subsets expectation maximization (OSEM) algorithm. The proposed HYPR operator in this work operates on the target image(s) from each subset of OSEM and uses the sum of the preceding subset images as the composite which is updated every iteration. Three strategies were used to apply the HYPR operator in OSEM: (i) within the image space modeling component of the system matrix in forward-projection only, (ii) within the image space modeling component in both forward-projection and back-projection, and (iii) on the image estimate after the OSEM update for each subset thus generating three forms: (i) HYPR-F-OSEM, (ii) HYPR-FB-OSEM, and (iii) HYPR-AU-OSEM. Resolution and contrast phantom simulations with various sizes of hot and cold regions as well as experimental phantom and patient data were used to evaluate the performance of the three forms of HYPR-OSEM, and the results were compared to OSEM with and without a post reconstruction filter. It was observed that the convergence in contrast recovery coefficients (CRC) obtained from all forms of HYPR-OSEM was slower than that obtained from OSEM. Nevertheless, HYPR-OSEM improved SNR without degrading accuracy in terms of resolution and contrast. It achieved better accuracy in CRC at equivalent noise level and better precision than OSEM and better accuracy than filtered OSEM in general. In addition, HYPR-AU-OSEM has been determined to be the more effective form of HYPR-OSEM in terms of accuracy and precision based on the studies conducted in this work.
Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data
NASA Astrophysics Data System (ADS)
Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.
2013-01-01
A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated in different steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program as a command line, a user friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times as fast. In addition, Maximum Likelihood Expectation Maximization (MLEM) and its accelerated version Ordered Subsets Expectation Maximization (OSEM) algorithms were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using low number of projections or limited angle.
Liu, Ming; Gao, Yue; Xiao, Rui; Zhang, Bo-li
2009-01-01
This study is to analyze microcosmic significance of Chinese medicine composing principle "principal, assistant, complement and mediating guide" and it's fuzzy mathematic quantitative law. According to molecular biology and maximal membership principle, fuzzy subset and membership functions were proposed. Using in vivo experiment on the effects of SiWu Decoction and its ingredients on mice with radiation-induced blood deficiency, it is concluded that DiHuang and DangGui belonged to the principal and assistant subset, BaiShao belonged to the contrary complement subset, ChuanXiong belonged to the mediating guide subset by maximal membership principle. It is discussed that traditional Chinese medicine will be consummate medical science when its theory can be described by mathematic language.
Restricted numerical range: A versatile tool in the theory of quantum information
NASA Astrophysics Data System (ADS)
Gawron, Piotr; Puchała, Zbigniew; Miszczak, Jarosław Adam; Skowronek, Łukasz; Życzkowski, Karol
2010-10-01
Numerical range of a Hermitian operator X is defined as the set of all possible expectation values of this observable among a normalized quantum state. We analyze a modification of this definition in which the expectation value is taken among a certain subset of the set of all quantum states. One considers, for instance, the set of real states, the set of product states, separable states, or the set of maximally entangled states. We show exemplary applications of these algebraic tools in the theory of quantum information: analysis of k-positive maps and entanglement witnesses, as well as study of the minimal output entropy of a quantum channel. Product numerical range of a unitary operator is used to solve the problem of local distinguishability of a family of two unitary gates.
Monochromatic-beam-based dynamic X-ray microtomography based on OSEM-TV algorithm.
Xu, Liang; Chen, Rongchang; Yang, Yiming; Deng, Biao; Du, Guohao; Xie, Honglan; Xiao, Tiqiao
2017-01-01
Monochromatic-beam-based dynamic X-ray computed microtomography (CT) was developed to observe evolution of microstructure inside samples. However, the low flux density results in low efficiency in data collection. To increase efficiency, reducing the number of projections should be a practical solution. However, it has disadvantages of low image reconstruction quality using the traditional filtered back projection (FBP) algorithm. In this study, an iterative reconstruction method using an ordered subset expectation maximization-total variation (OSEM-TV) algorithm was employed to address and solve this problem. The simulated results demonstrated that normalized mean square error of the image slices reconstructed by the OSEM-TV algorithm was about 1/4 of that by FBP. Experimental results also demonstrated that the density resolution of OSEM-TV was high enough to resolve different materials with the number of projections less than 100. As a result, with the introduction of OSEM-TV, the monochromatic-beam-based dynamic X-ray microtomography is potentially practicable for the quantitative and non-destructive analysis to the evolution of microstructure with acceptable efficiency in data collection and reconstructed image quality.
Pretorius, P. Hendrik; Johnson, Karen L.; King, Michael A.
2016-01-01
We have recently been successful in the development and testing of rigid-body motion tracking, estimation and compensation for cardiac perfusion SPECT based on a visual tracking system (VTS). The goal of this study was to evaluate in patients the effectiveness of our rigid-body motion compensation strategy. Sixty-four patient volunteers were asked to remain motionless or execute some predefined body motion during an additional second stress perfusion acquisition. Acquisitions were performed using the standard clinical protocol with 64 projections acquired through 180 degrees. All data were reconstructed with an ordered-subsets expectation-maximization (OSEM) algorithm using 4 projections per subset and 5 iterations. All physical degradation factors were addressed (attenuation, scatter, and distance dependent resolution), while a 3-dimensional Gaussian rotator was used during reconstruction to correct for six-degree-of-freedom (6-DOF) rigid-body motion estimated by the VTS. Polar map quantification was employed to evaluate compensation techniques. In 54.7% of the uncorrected second stress studies there was a statistically significant difference in the polar maps, and in 45.3% this made a difference in the interpretation of segmental perfusion. Motion correction reduced the impact of motion such that with it 32.8 % of the polar maps were statistically significantly different, and in 14.1% this difference changed the interpretation of segmental perfusion. The improvement shown in polar map quantitation translated to visually improved uniformity of the SPECT slices. PMID:28042170
GPU-based prompt gamma ray imaging from boron neutron capture therapy.
Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae
2015-01-01
The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron Emission Tomography (PET) projection data or sinogram contained poor statistics and randomness that produced noisy PET images. In order to improve the PET image, we proposed an implementation of pre-reconstruction sinogram filtering based on 3D mean-median filter. The proposed filter is designed based on three aims; to minimise angular blurring artifacts, to smooth flat region and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods namely filtered-backprojection (FBP), Maximum likelihood expectation maximization-Ordered Subset (OSEM) and OSEM with median root prior (OSEM-MRP) is investigated using simulated NCAT phantom PET sinogram as generated by the PET Analytical Simulator (ASIM). The improvement on the quality of the reconstructed images with and without sinogram filtering is assessed according to visual as well as quantitative evaluation based on global signal to noise ratio (SNR), local SNR, contrast to noise ratio (CNR) and edge preservation capability. Further analysis on the achieved improvement is also carried out specific to iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering in terms of contrast recovery curve (CRC) versus noise trade off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
NASA Astrophysics Data System (ADS)
Hou, Yanqing; Verhagen, Sandra; Wu, Jie
2016-12-01
Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. In case of weak models (i.e., low precision of data), however, the success rate of AR may be low, which may consequently introduce large errors to the baseline solution in cases of wrong fixing. Partial Ambiguity Resolution (PAR) is therefore proposed such that the baseline precision can be improved by fixing only a subset of ambiguities with high success rate. This contribution proposes a new PAR strategy, allowing to select the subset such that the expected precision gain is maximized among a set of pre-selected subsets, while at the same time the failure rate is controlled. These pre-selected subsets are supposed to obtain the highest success rate among those with the same subset size. The strategy is called Two-step Success Rate Criterion (TSRC) as it will first try to fix a relatively large subset with the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset will be fixed and validated by the ratio test so as to fulfill the overall failure rate criterion. It is shown how the method can be practically used, without introducing a large additional computation effort. And more importantly, how it can improve (or at least not deteriorate) the availability in terms of baseline precision comparing to classical Success Rate Criterion (SRC) PAR strategy, based on a simulation validation. In the simulation validation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For the medium- to long baselines, with dual-constellation GNSS the average improvement is around 20-30%.
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With unprecedented high channel density (450 channels/cm 3 ) image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to exploit correctly the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
NASA Astrophysics Data System (ADS)
Atalay, Bora; Berker, A. Nihat
2018-05-01
Discrete-spin systems with maximally random nearest-neighbor interactions that can be symmetric or asymmetric, ferromagnetic or antiferromagnetic, including off-diagonal disorder, are studied, for the number of states q =3 ,4 in d dimensions. We use renormalization-group theory that is exact for hierarchical lattices and approximate (Migdal-Kadanoff) for hypercubic lattices. For all d >1 and all noninfinite temperatures, the system eventually renormalizes to a random single state, thus signaling q ×q degenerate ordering. Note that this is the maximally degenerate ordering. For high-temperature initial conditions, the system crosses over to this highly degenerate ordering only after spending many renormalization-group iterations near the disordered (infinite-temperature) fixed point. Thus, a temperature range of short-range disorder in the presence of long-range order is identified, as previously seen in underfrustrated Ising spin-glass systems. The entropy is calculated for all temperatures, behaves similarly for ferromagnetic and antiferromagnetic interactions, and shows a derivative maximum at the short-range disordering temperature. With a sharp immediate contrast of infinitesimally higher dimension 1 +ɛ , the system is as expected disordered at all temperatures for d =1 .
Formation Control for the MAXIM Mission
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today s technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.
GPU-based prompt gamma ray imaging from boron neutron capture therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr
Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU).more » Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.« less
TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S; Suh, T; Yoon, D
Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU).more » Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray reconstruction using the GPU computation for BNCT simulations.« less
Resistive plate chambers in positron emission tomography
NASA Astrophysics Data System (ADS)
Crespo, Paulo; Blanco, Alberto; Couceiro, Miguel; Ferreira, Nuno C.; Lopes, Luís; Martins, Paulo; Ferreira Marques, Rui; Fonte, Paulo
2013-07-01
Resistive plate chambers (RPC) were originally deployed for high energy physics. Realizing how their properties match the needs of nuclear medicine, a LIP team proposed applying RPCs to both preclinical and clinical positron emission tomography (RPC-PET). We show a large-area RPC-PET simulated scanner covering an axial length of 2.4m —slightly superior to the height of the human body— allowing for whole-body, single-bed RPC-PET acquisitions. Simulations following NEMA (National Electrical Manufacturers Association, USA) protocols yield a system sensitivity at least one order of magnitude larger than present-day, commercial PET systems. Reconstruction of whole-body simulated data is feasible by using a dedicated, direct time-of-flight-based algorithm implemented onto an ordered subsets estimation maximization parallelized strategy. Whole-body RPC-PET patient images following the injection of only 2mCi of 18-fluorodesoxyglucose (FDG) are expected to be ready 7 minutes after the 6 minutes necessary for data acquisition. This compares to the 10-20mCi FDG presently injected for a PET scan, and to the uncomfortable 20-30minutes necessary for its data acquisition. In the preclinical field, two fully instrumented detector heads have been assembled aiming at a four-head-based, small-animal RPC-PET system. Images of a disk-shaped and a needle-like 22Na source show unprecedented sub-millimeter spatial resolution.
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combination of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations x subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254
Evaluation of bias and variance in low-count OSEM list mode reconstruction
NASA Astrophysics Data System (ADS)
Jian, Y.; Planeta, B.; Carson, R. E.
2015-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
Optimization of oncological {sup 18}F-FDG PET/CT imaging based on a multiparameter analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.
2016-02-15
Purpose: This paper describes a method to achieve consistent clinical image quality in {sup 18}F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread functionmore » and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.« less
Assessment of prostate cancer detection with a visual-search human model observer
NASA Astrophysics Data System (ADS)
Sen, Anando; Kalantari, Faraz; Gifford, Howard C.
2014-03-01
Early staging of prostate cancer (PC) is a significant challenge, in part because of the small tumor sizes in- volved. Our long-term goal is to determine realistic diagnostic task performance benchmarks for standard PC imaging with single photon emission computed tomography (SPECT). This paper reports on a localization receiver operator characteristic (LROC) validation study comparing human and model observers. The study made use of a digital anthropomorphic phantom and one-cm tumors within the prostate and pelvic lymph nodes. Uptake values were consistent with data obtained from clinical In-111 ProstaScint scans. The SPECT simulation modeled a parallel-hole imaging geometry with medium-energy collimators. Nonuniform attenua- tion and distance-dependent detector response were accounted for both in the imaging and the ordered-subset expectation-maximization (OSEM) iterative reconstruction. The observer study made use of 2D slices extracted from reconstructed volumes. All observers were informed about the prostate and nodal locations in an image. Iteration number and the level of postreconstruction smoothing were study parameters. The results show that a visual-search (VS) model observer correlates better with the average detection performance of human observers than does a scanning channelized nonprewhitening (CNPW) model observer.
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithm. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in details how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.
Bexelius, Tobias; Sohlberg, Antti
2018-06-01
Statistical SPECT reconstruction can be very time-consuming especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. Ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was noticed to be 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size 3 bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an every day clinical reconstruction tool.
Variance-reduction normalization technique for a compton camera system
NASA Astrophysics Data System (ADS)
Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.
2011-01-01
For an artifact-free dataset, pre-processing (known as normalization) is needed to correct inherent non-uniformity of detection property in the Compton camera which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiency of the scattering and absorbing detectors, different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair which is referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. The variance-reduction technique (VRT) for a Compton camera (a normalization method) was studied. For the VRT, the Compton list-mode data of a planar uniform source of 140 keV was generated from a GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered subset expectation maximization algorithm. The coefficient of variations and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides an enhanced image quality and the increased recovery rate of uniformity in the reconstructed image.
NASA Astrophysics Data System (ADS)
Ren, Xiaoqiang; Yan, Jiaqi; Mo, Yilin
2018-03-01
This paper studies binary hypothesis testing based on measurements from a set of sensors, a subset of which can be compromised by an attacker. The measurements from a compromised sensor can be manipulated arbitrarily by the adversary. The asymptotic exponential rate, with which the probability of error goes to zero, is adopted to indicate the detection performance of a detector. In practice, we expect the attack on sensors to be sporadic, and therefore the system may operate with all the sensors being benign for extended period of time. This motivates us to consider the trade-off between the detection performance of a detector, i.e., the probability of error, when the attacker is absent (defined as efficiency) and the worst-case detection performance when the attacker is present (defined as security). We first provide the fundamental limits of this trade-off, and then propose a detection strategy that achieves these limits. We then consider a special case, where there is no trade-off between security and efficiency. In other words, our detection strategy can achieve the maximal efficiency and the maximal security simultaneously. Two extensions of the secure hypothesis testing problem are also studied and fundamental limits and achievability results are provided: 1) a subset of sensors, namely "secure" sensors, are assumed to be equipped with better security countermeasures and hence are guaranteed to be benign, 2) detection performance with unknown number of compromised sensors. Numerical examples are given to illustrate the main results.
Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision
NASA Astrophysics Data System (ADS)
Hendrawan, Y.; Hawa, L. C.; Damayanti, R.
2018-03-01
This study attempted to apply machine vision-based chips drying monitoring system which is able to optimise the drying process of cassava chips. The objective of this study is to propose fish swarm intelligent (FSI) optimization algorithms to find the most significant set of image features suitable for predicting water content of cassava chips during drying process using artificial neural network model (ANN). Feature selection entails choosing the feature subset that maximizes the prediction accuracy of ANN. Multi-Objective Optimization (MOO) was used in this study which consisted of prediction accuracy maximization and feature-subset size minimization. The results showed that the best feature subset i.e. grey mean, L(Lab) Mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. The best feature subset has been tested successfully in ANN model to describe the relationship between image features and water content of cassava chips during drying process with R2 of real and predicted data was equal to 0.9.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithmmore » (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both fewer numbers of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.« less
Effect of Using 2 mm Voxels on Observer Performance for PET Lesion Detection
NASA Astrophysics Data System (ADS)
Morey, A. M.; Noo, Frédéric; Kadrmas, Dan J.
2016-06-01
Positron emission tomography (PET) images are typically reconstructed with an in-plane pixel size of approximately 4 mm for cancer imaging. The objective of this work was to evaluate the effect of using smaller pixels on general oncologic lesion-detection. A series of observer studies was performed using experimental phantom data from the Utah PET Lesion Detection Database, which modeled whole-body FDG PET cancer imaging of a 92 kg patient. The data comprised 24 scans over 4 days on a Biograph mCT time-of-flight (TOF) PET/CT scanner, with up to 23 lesions (diam. 6-16 mm) distributed throughout the phantom each day. Images were reconstructed with 2.036 mm and 4.073 mm pixels using ordered-subsets expectation-maximization (OSEM) both with and without point spread function (PSF) modeling and TOF. Detection performance was assessed using the channelized non-prewhitened numerical observer with localization receiver operating characteristic (LROC) analysis. Tumor localization performance and the area under the LROC curve were then analyzed as functions of the pixel size. In all cases, the images with 2 mm pixels provided higher detection performance than those with 4 mm pixels. The degree of improvement from the smaller pixels was larger than that offered by PSF modeling for these data, and provided roughly half the benefit of using TOF. Key results were confirmed by two human observers, who read subsets of the test data. This study suggests that a significant improvement in tumor detection performance for PET can be attained by using smaller voxel sizes than commonly used at many centers. The primary drawback is a 4-fold increase in reconstruction time and data storage requirements.
Uncountably many maximizing measures for a dense subset of continuous functions
NASA Astrophysics Data System (ADS)
Shinoda, Mao
2018-05-01
Ergodic optimization aims to single out dynamically invariant Borel probability measures which maximize the integral of a given ‘performance’ function. For a continuous self-map of a compact metric space and a dense set of continuous functions, we show the existence of uncountably many ergodic maximizing measures. We also show that, for a topologically mixing subshift of finite type and a dense set of continuous functions there exist uncountably many ergodic maximizing measures with full support and positive entropy.
Enumerating all maximal frequent subtrees in collections of phylogenetic trees
2014-01-01
Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474
Enumerating all maximal frequent subtrees in collections of phylogenetic trees.
Deepak, Akshay; Fernández-Baca, David
2014-01-01
A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.
TIME SHARING WITH AN EXPLICIT PRIORITY QUEUING DISCIPLINE.
exponentially distributed service times and an ordered priority queue. Each new arrival buys a position in this queue by offering a non-negative bribe to the...parameters is investigated through numerical examples. Finally, to maximize the expected revenue per unit time accruing from bribes , an optimization
Formation Control for the Maxim Mission.
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the spacebased scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. The Stellar Imager mission requirements are on the same order of those for MAXIM. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; (2) the formation control architecture devised for such missions; (3) the design of the formation control laws to maintain very high precision relative positions; and (4) the levels of fuel usage required in the duration of these missions. Specific preliminary results are presented for two spacecraft within the MAXIM mission.
Optimized 3D stitching algorithm for whole body SPECT based on transition error minimization (TEM)
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-02-01
Standard Single Photon Emission Computed Tomography (SPECT) has a limited field of view (FOV) and cannot provide a 3D image of an entire long whole body SPECT. To produce a 3D whole body SPECT image, two to five overlapped SPECT FOVs from head to foot are acquired and assembled using image stitching. Most commercial software from medical imaging manufacturers applies a direct mid-slice stitching method to avoid blurring or ghosting from 3D image blending. Due to intensity changes across the middle slice of overlapped images, direct mid-slice stitching often produces visible seams in the coronal and sagittal views and maximal intensity projection (MIP). In this study, we proposed an optimized algorithm to reduce the visibility of stitching edges. The new algorithm computed, based on transition error minimization (TEM), a 3D stitching interface between two overlapped 3D SPECT images. To test the suggested algorithm, four studies of 2-FOV whole body SPECT were used and included two different reconstruction methods (filtered back projection (FBP) and ordered subset expectation maximization (OSEM)) as well as two different radiopharmaceuticals (Tc-99m MDP for bone metastases and I-131 MIBG for neuroblastoma tumors). Relative transition errors of stitched whole body SPECT using mid-slice stitching and the TEM-based algorithm were measured for objective evaluation. Preliminary experiments showed that the new algorithm reduced the visibility of the stitching interface in the coronal, sagittal, and MIP views. Average relative transition errors were reduced from 56.7% of mid-slice stitching to 11.7% of TEM-based stitching. The proposed algorithm also avoids blurring artifacts by preserving the noise properties of the original SPECT images.
NASA Astrophysics Data System (ADS)
Cheng, Xiaoyin; Bayer, Christine; Maftei, Constantin-Alin; Astner, Sabrina T.; Vaupel, Peter; Ziegler, Sibylle I.; Shi, Kuangyu
2014-01-01
Compared to indirect methods, direct parametric image reconstruction (PIR) has the advantage of high quality and low statistical errors. However, it is not yet clear if this improvement in quality is beneficial for physiological quantification. This study aimed to evaluate direct PIR for the quantification of tumor hypoxia using the hypoxic fraction (HF) assessed from immunohistological data as a physiological reference. Sixteen mice with xenografted human squamous cell carcinomas were scanned with dynamic [18F]FMISO PET. Afterward, tumors were sliced and stained with H&E and the hypoxia marker pimonidazole. The hypoxic signal was segmented using k-means clustering and HF was specified as the ratio of the hypoxic area over the viable tumor area. The parametric Patlak slope images were obtained by indirect voxel-wise modeling on reconstructed images using filtered back projection and ordered-subset expectation maximization (OSEM) and by direct PIR (e.g., parametric-OSEM, POSEM). The mean and maximum Patlak slopes of the tumor area were investigated and compared with HF. POSEM resulted in generally higher correlations between slope and HF among the investigated methods. A strategy for the delineation of the hypoxic tumor volume based on thresholding parametric images at half maximum of the slope is recommended based on the results of this study.
NASA Astrophysics Data System (ADS)
Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.
2015-08-01
Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated visual image quality including lesion conspicuity in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM with a particular improvement in cold background regions such as lungs.
Designing Contributing Student Pedagogies to Promote Students' Intrinsic Motivation to Learn
ERIC Educational Resources Information Center
Herman, Geoffrey L.
2012-01-01
In order to maximize the effectiveness of our pedagogies, we must understand how our pedagogies align with prevailing theories of cognition and motivation and design our pedagogies according to this understanding. When implementing Contributing Student Pedagogies (CSPs), students are expected to make meaningful contributions to the learning of…
On the Achievable Throughput Over TVWS Sensor Networks
Caleffi, Marcello; Cacciapuoti, Angela Sara
2016-01-01
In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in presence of coexistence interference. Through the letter, we first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computational-efficient algorithm characterized by polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565
On Hardness of Pricing Items for Single-Minded Bidders
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Kimbrel, Tracy; Makarychev, Konstantin; Sviridenko, Maxim
We consider the following item pricing problem which has received much attention recently. A seller has an infinite numbers of copies of n items. There are m buyers, each with a budget and an intention to buy a fixed subset of items. Given prices on the items, each buyer buys his subset of items, at the given prices, provided the total price of the subset is at most his budget. The objective of the seller is to determine the prices such that her total profit is maximized.
Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui
2013-12-01
In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as the stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warburton, P.E.; Gosden, J.; Lawson, D.
1996-04-15
Alpha satellite DNA is a tandemly repeated DNA family found at the centromeres of all primate chromosomes examined. The fundamental repeat units of alpha satellite DNA are diverged 169- to 172-bp monomers, often found to be organized in chromosome-specific higher-order repeat units. The chromosomes of human (Homo sapiens (HSA)), chimpanzee (Pan troglodytes (PTR) and Pan paniscus), and gorilla (Gorilla gorilla) share a remarkable similarity and synteny. It is of interest to ask if alpha satellite arrays at centromeres of homologous chromosomes between these species are closely related (evolving in an orthologous manner) or if the evolutionary processes that homogenize andmore » spread these arrays within and between chromosomes result in nonorthologous evolution of arrays. By using PCR primers specific for human chromosome 17-specific alpha satellite DNA, we have amplified, cloned, and characterized a chromosome-specific subset from the PTR chimpanzee genome. Hybridization both on Southern blots and in situ as well as sequence analysis show that this subset is most closely related, as expected, to sequences on HSA 17. However, in situ hybridization reveals that this subset is not found on the homologous chromosome in chimpanzee (PTR 19), but instead on PTR 12, which is homologous to HSA 2p. 40 refs., 3 figs.« less
Constrained Fisher Scoring for a Mixture of Factor Analyzers
2016-09-01
expectation -maximization algorithm with similar computational requirements. Lastly, we demonstrate the efficacy of the proposed method for learning a... expectation maximization 44 Gene T Whipps 301 394 2372Unclassified Unclassified Unclassified UU ii Approved for public release; distribution is unlimited...14 3.6 Relationship with Expectation -Maximization 16 4. Simulation Examples 16 4.1 Synthetic MFA Example 17 4.2 Manifold Learning Example 22 5
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
NASA Astrophysics Data System (ADS)
O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.
2012-02-01
Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map but with use of a priori calculated average image derived from an ensemble of normal cases.
A new method for spatial structure detection of complex inner cavities based on 3D γ-photon imaging
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Liu, Jiao; Chen, Hao
2018-05-01
This paper presents a new three-dimensional (3D) imaging method for detecting the spatial structure of a complex inner cavity based on positron annihilation and γ-photon detection. This method first marks carrier solution by a certain radionuclide and injects it into the inner cavity where positrons are generated. Subsequently, γ-photons are released from positron annihilation, and the γ-photon detector ring is used for recording the γ-photons. Finally, the two-dimensional (2D) image slices of the inner cavity are constructed by the ordered-subset expectation maximization scheme and the 2D image slices are merged to the 3D image of the inner cavity. To eliminate the artifact in the reconstructed image due to the scattered γ-photons, a novel angle-traversal model is proposed for γ-photon single-scattering correction, in which the path of the single scattered γ-photon is analyzed from a spatial geometry perspective. Two experiments are conducted to verify the effectiveness of the proposed correction model and the advantage of the proposed testing method in detecting the spatial structure of the inner cavity, including the distribution of gas-liquid multi-phase mixture inside the inner cavity. The above two experiments indicate the potential of the proposed method as a new tool for accurately delineating the inner structures of industrial complex parts.
Phantom experiments to improve parathyroid lesion detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Kenneth J.; Tronco, Gene G.; Tomas, Maria B.
2007-12-15
This investigation tested the hypothesis that visual analysis of iteratively reconstructed tomograms by ordered subset expectation maximization (OSEM) provides the highest accuracy for localizing parathyroid lesions using {sup 99m}Tc-sestamibi SPECT data. From an Institutional Review Board approved retrospective review of 531 patients evaluated for parathyroid localization, image characteristics were determined for 85 {sup 99m}Tc-sestamibi SPECT studies originally read as equivocal (EQ). Seventy-two plexiglas phantoms using cylindrical simulated lesions were acquired for a clinically realistic range of counts (mean simulated lesion counts of 75{+-}50 counts/pixel) and target-to-background (T:B) ratios (range=2.0 to 8.0) to determine an optimal filter for OSEM. Two experiencedmore » nuclear physicians graded simulated lesions, blinded to whether chambers contained radioactivity or plain water, and two observers used the same scale to read all phantom and clinical SPECT studies, blinded to pathology findings and clinical information. For phantom data and all clinical data, T:B analyses were not statistically different for OSEM versus FB, but visual readings were significantly more accurate than T:B (88{+-}6% versus 68{+-}6%, p=0.001) for OSEM processing, and OSEM was significantly more accurate than FB for visual readings (88{+-}6% versus 58{+-}6%, p<0.0001). These data suggest that visual analysis of iteratively reconstructed MIBI tomograms should be incorporated into imaging protocols performed to localize parathyroid lesions.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, K; Hristov, D
2014-06-01
Purpose: To evaluate the potential impact of listmode-driven amplitude based optimal gating (OG) respiratory motion management technique on quantitative PET imaging. Methods: During the PET acquisitions, an optical camera tracked and recorded the motion of a tool placed on top of patients' torso. PET event data were utilized to detect and derive a motion signal that is directly coupled with a specific internal organ. A radioactivity-trace was generated from listmode data by accumulating all prompt counts in temporal bins matching the sampling rate of the external tracking device. Decay correction for 18F was performed. The image reconstructions using OG respiratorymore » motion management technique that uses 35% of total radioactivity counts within limited motion amplitudes were performed with external motion and radioactivity traces separately with ordered subset expectation maximization (OSEM) with 2 iterations and 21 subsets. Standard uptake values (SUVs) in a tumor region were calculated to measure the effect of using radioactivity trace for motion compensation. Motion-blurred 3D static PET image was also reconstructed with all counts and the SUVs derived from OG images were compared with SUVs from 3D images. Results: A 5.7 % increase of the maximum SUV in the lesion was found for optimal gating image reconstruction with radioactivity trace when compared to a static 3D image. The mean and maximum SUVs on the image that was reconstructed with radioactivity trace were found comparable (0.4 % and 4.5 % increase, respectively) to the values derived from the image that was reconstructed with external trace. Conclusion: The image reconstructed using radioactivity trace showed that the blurring due to the motion was reduced with impact on derived SUVs. The resolution and contrast of the images reconstructed with radioactivity trace were comparable to the resolution and contrast of the images reconstructed with external respiratory traces. Research supported by Siemens.« less
Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M
2012-03-01
Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as so did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.
Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos
2011-07-01
In dynamic positron emission tomography data many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [(18)F]DOPA data using reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1) initialized with the FBP parametric image for faster convergence and higher accuracy. The results compared with analytic reconstructions show quantitative robustness (i.e. minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the direct reconstructed parametric maps and the indirect FBP (--0.035x+0.48E--5). Copyright © 2011 Elsevier Ltd. All rights reserved.
Méndez-Aparicio, M Dolores; Izquierdo-Yusta, Alicia; Jiménez-Zarco, Ana I
2017-01-01
Today, the customer-brand relationship is fundamental to a company's bottom line, especially in the service sector and with services offered via online channels. In order to maximize its effects, organizations need (1) to know which factors influence the formation of an individual's service expectations in an online environment; and (2) to establish the influence of these expectations on customers' likelihood of recommending a service before they have even used it. In accordance with the TAM model (Davis, 1989; Davis et al., 1992), the TRA model (Fishbein and Ajzen, 1975), the extended UTAUT model (Venkatesh et al., 2012), and the approach described by Alloza (2011), this work proposes a theoretical model of the antecedents and consequences of consumer expectations of online services. In order to validate the proposed theoretical model, a sample of individual insurance company customers was analyzed. The results showed, first, the importance of customers' expectations with regard to the intention to recommend the "private area" of the company's website to other customers prior to using it themselves. They also revealed the importance to expectations of the antecedents perceived usefulness, ease of use, frequency of use, reputation, and subjective norm.
Méndez-Aparicio, M. Dolores; Izquierdo-Yusta, Alicia; Jiménez-Zarco, Ana I.
2017-01-01
Today, the customer-brand relationship is fundamental to a company’s bottom line, especially in the service sector and with services offered via online channels. In order to maximize its effects, organizations need (1) to know which factors influence the formation of an individual’s service expectations in an online environment; and (2) to establish the influence of these expectations on customers’ likelihood of recommending a service before they have even used it. In accordance with the TAM model (Davis, 1989; Davis et al., 1992), the TRA model (Fishbein and Ajzen, 1975), the extended UTAUT model (Venkatesh et al., 2012), and the approach described by Alloza (2011), this work proposes a theoretical model of the antecedents and consequences of consumer expectations of online services. In order to validate the proposed theoretical model, a sample of individual insurance company customers was analyzed. The results showed, first, the importance of customers’ expectations with regard to the intention to recommend the “private area” of the company’s website to other customers prior to using it themselves. They also revealed the importance to expectations of the antecedents perceived usefulness, ease of use, frequency of use, reputation, and subjective norm. PMID:28798705
Effects of Requiring Students to Meet High Expectation Levels within an On-Line Homework Environment
ERIC Educational Resources Information Center
Weber, William J., Jr.
2010-01-01
On-line homework is becoming a larger part of mathematics classrooms each year. Thus, ways to maximize the effectiveness of on-line homework for both students and teachers must be investigated. This study sought to provide one possible answer to this aim, by requiring students to achieve at least 50% for any on-line homework assignment in order to…
Probability matching and strategy availability.
Koehler, Derek J; James, Greta
2010-09-01
Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of "underthinking" rather than "overthinking": They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
The Dynamics of Crime and Punishment
NASA Astrophysics Data System (ADS)
Hausken, Kjell; Moxnes, John F.
This article analyzes crime development which is one of the largest threats in today's world, frequently referred to as the war on crime. The criminal commits crimes in his free time (when not in jail) according to a non-stationary Poisson process which accounts for fluctuations. Expected values and variances for crime development are determined. The deterrent effect of imprisonment follows from the amount of time in imprisonment. Each criminal maximizes expected utility defined as expected benefit (from crime) minus expected cost (imprisonment). A first-order differential equation of the criminal's utility-maximizing response to the given punishment policy is then developed. The analysis shows that if imprisonment is absent, criminal activity grows substantially. All else being equal, any equilibrium is unstable (labile), implying growth of criminal activity, unless imprisonment increases sufficiently as a function of criminal activity. This dynamic approach or perspective is quite interesting and has to our knowledge not been presented earlier. The empirical data material for crime intensity and imprisonment for Norway, England and Wales, and the US supports the model. Future crime development is shown to depend strongly on the societally chosen imprisonment policy. The model is intended as a valuable tool for policy makers who can envision arbitrarily sophisticated imprisonment functions and foresee the impact they have on crime development.
NASA Technical Reports Server (NTRS)
Wong, J. T.; Andre, W. L.
1981-01-01
A recent result shows that, for a certain class of systems, the interdependency among the elements of such a system together with the elements constitutes a mathematical structure a partially ordered set. It is called a loop free logic model of the system. On the basis of an intrinsic property of the mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. Also, as a consequence, information concerning the total number of failure components in the system was deduced. Detailed examples are given to show how to restructure real systems containing loops into loop free models for which the result is applicable.
Optimization of the reconstruction parameters in [123I]FP-CIT SPECT
NASA Astrophysics Data System (ADS)
Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec
2018-04-01
The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). Reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at Half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from Magnetic Resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [‑0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
Navalta, James W; Tibana, Ramires Alsamir; Fedor, Elizabeth A; Vieira, Amilton; Prestes, Jonato
2014-01-01
This investigation assessed the lymphocyte subset response to three days of intermittent run exercise to exhaustion. Twelve healthy college-aged males (n = 8) and females (n = 4) (age = 26 ± 4 years; height = 170.2 ± 10 cm; body mass = 75 ± 18 kg) completed an exertion test (maximal running speed and VO2max) and later performed three consecutive days of an intermittent run protocol to exhaustion (30 sec at maximal running speed and 30 sec at half of the maximal running speed). Blood was collected before exercise (PRE) and immediately following the treadmill bout (POST) each day. When the absolute change from baseline was evaluated (i. e., Δ baseline), a significant change in CD4+ and CD8+ for CX3CR1 cells was observed by completion of the third day. Significant changes in both apoptosis and migration were observed following two consecutive days in CD19+ lymphocytes, and the influence of apoptosis persisted following the third day. Given these lymphocyte responses, it is recommended that a rest day be incorporated following two consecutive days of a high-intensity intermittent run program to minimize immune cell modulations and reduce potential susceptibility.
Impacts of Maximizing Tendencies on Experience-Based Decisions.
Rim, Hye Bin
2017-06-01
Previous research on risky decisions has suggested that people tend to make different choices depending on whether they acquire the information from personally repeated experiences or from statistical summary descriptions. This phenomenon, called as a description-experience gap, was expected to be moderated by the individual difference in maximizing tendencies, a desire towards maximizing decisional outcome. Specifically, it was hypothesized that maximizers' willingness to engage in extensive information searching would lead maximizers to make experience-based decisions as payoff distributions were given explicitly. A total of 262 participants completed four decision problems. Results showed that maximizers, compared to non-maximizers, drew more samples before making a choice but reported lower confidence levels on both the accuracy of knowledge gained from experiences and the likelihood of satisfactory outcomes. Additionally, maximizers exhibited smaller description-experience gaps than non-maximizers as expected. The implications of the findings and unanswered questions for future research were discussed.
Reliability and cost: A sensitivity analysis
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1991-01-01
In the design phase of a system, how a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability is examined, along with the justification of the increased cost. High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. The cost of the subsystem nor the expected cost due to subsystem failure should not be considered separately but the total of the two costs should be maximized, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure.
Numerical simulations of imaging satellites with optical interferometry
NASA Astrophysics Data System (ADS)
Ding, Yuanyuan; Wang, Chaoyan; Chen, Zhendong
2015-08-01
Optical interferometry imaging system, which is composed of multiple sub-apertures, is a type of sensor that can break through the aperture limit and realize the high resolution imaging. This technique can be utilized to precisely measure the shapes, sizes and position of astronomical objects and satellites, it also can realize to space exploration and space debris, satellite monitoring and survey. Fizeau-Type optical aperture synthesis telescope has the advantage of short baselines, common mount and multiple sub-apertures, so it is feasible for instantaneous direct imaging through focal plane combination.Since 2002, the researchers of Shanghai Astronomical Observatory have developed the study of optical interferometry technique. For array configurations, there are two optimal array configurations proposed instead of the symmetrical circular distribution: the asymmetrical circular distribution and the Y-type distribution. On this basis, two kinds of structure were proposed based on Fizeau interferometric telescope. One is Y-type independent sub-aperture telescope, the other one is segmented mirrors telescope with common secondary mirror.In this paper, we will give the description of interferometric telescope and image acquisition. Then we will mainly concerned the simulations of image restoration based on Y-type telescope and segmented mirrors telescope. The Richardson-Lucy (RL) method, Winner method and the Ordered Subsets Expectation Maximization (OS-EM) method are studied in this paper. We will analyze the influence of different stop rules too. At the last of the paper, we will present the reconstruction results of images of some satellites.
PSF reconstruction for Compton-based prompt gamma imaging
NASA Astrophysics Data System (ADS)
Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming
2018-02-01
Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.
Reproducibility Between Brain Uptake Ratio Using Anatomic Standardization and Patlak-Plot Methods.
Shibutani, Takayuki; Onoguchi, Masahisa; Noguchi, Atsushi; Yamada, Tomoki; Tsuchihashi, Hiroko; Nakajima, Tadashi; Kinuya, Seigo
2015-12-01
The Patlak-plot and conventional methods of determining brain uptake ratio (BUR) have some problems with reproducibility. We formulated a method of determining BUR using anatomic standardization (BUR-AS) in a statistical parametric mapping algorithm to improve reproducibility. The objective of this study was to demonstrate the inter- and intraoperator reproducibility of mean cerebral blood flow as determined using BUR-AS in comparison to the conventional-BUR (BUR-C) and Patlak-plot methods. The images of 30 patients who underwent brain perfusion SPECT were retrospectively used in this study. The images were reconstructed using ordered-subset expectation maximization and processed using an automatic quantitative analysis for cerebral blood flow of ECD tool. The mean SPECT count was calculated from axial basal ganglia slices of the normal side (slices 31-40) drawn using a 3-dimensional stereotactic region-of-interest template after anatomic standardization. The mean cerebral blood flow was calculated from the mean SPECT count. Reproducibility was evaluated using coefficient of variation and Bland-Altman plotting. For both inter- and intraoperator reproducibility, the BUR-AS method had the lowest coefficient of variation and smallest error range about the Bland-Altman plot. Mean CBF obtained using the BUR-AS method had the highest reproducibility. Compared with the Patlak-plot and BUR-C methods, the BUR-AS method provides greater inter- and intraoperator reproducibility of cerebral blood flow measurement. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Automated Verification of Design Patterns with LePUS3
NASA Technical Reports Server (NTRS)
Nicholson, Jonathan; Gasparis, Epameinondas; Eden, Ammon H.; Kazman, Rick
2009-01-01
Specification and [visual] modelling languages are expected to combine strong abstraction mechanisms with rigour, scalability, and parsimony. LePUS3 is a visual, object-oriented design description language axiomatized in a decidable subset of the first-order predicate logic. We demonstrate how LePUS3 is used to formally specify a structural design pattern and prove ( verify ) whether any JavaTM 1.4 program satisfies that specification. We also show how LePUS3 specifications (charts) are composed and how they are verified fully automatically in the Two-Tier Programming Toolkit.
Maximizing and minimizing investment concentration with constraints of budget and investment risk
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-01-01
In this paper, as a first step in examining the properties of a feasible portfolio subset that is characterized by budget and risk constraints, we assess the maximum and minimum of the investment concentration using replica analysis. To do this, we apply an analytical approach of statistical mechanics. We note that the optimization problem considered in this paper is the dual problem of the portfolio optimization problem discussed in the literature, and we verify that these optimal solutions are also dual. We also present numerical experiments, in which we use the method of steepest descent that is based on Lagrange's method of undetermined multipliers, and we compare the numerical results to those obtained by replica analysis in order to assess the effectiveness of our proposed approach.
Frustration in protein elastic network models
NASA Astrophysics Data System (ADS)
Lezon, Timothy; Bahar, Ivet
2010-03-01
Elastic network models (ENMs) are widely used for studying the equilibrium dynamics of proteins. The most common approach in ENM analysis is to adopt a uniform force constant or a non-specific distance dependent function to represent the force constant strength. Here we discuss the influence of sequence and structure in determining the effective force constants between residues in ENMs. Using a novel method based on entropy maximization, we optimize the force constants such that they exactly reporduce a subset of experimentally determined pair covariances for a set of proteins. We analyze the optimized force constants in terms of amino acid types, distances, contact order and secondary structure, and we demonstrate that including frustrated interactions in the ENM is essential for accurately reproducing the global modes in the middle of the frequency spectrum.
Lepley, Adam S; Ericksen, Hayley M; Sohn, David H; Pietrosimone, Brian G
2014-06-01
Persistent quadriceps weakness is common following anterior cruciate ligament reconstruction (ACLr). Alterations in spinal-reflexive excitability, corticospinal excitability and voluntary activation have been hypothesized as underlying mechanisms contributing to quadriceps weakness. The aim of this study was to evaluate the predictive capabilities of spinal-reflexive excitability, corticospinal excitability and voluntary activation on quadriceps strength in healthy and ACLr participants. Quadriceps strength was measured using maximal voluntary isometric contractions (MVIC). Voluntary activation was quantified via the central activation ratio (CAR). Corticospinal and spinal-reflexive excitability were measured using active motor thresholds (AMT) and Hoffmann reflexes normalized to maximal muscle responses (H:M), respectively. ACLr individuals were also split into high and low strength subsets based on MVIC. CAR was the only significant predictor in the healthy group. In the ACLr group, CAR and H:M significantly predicted 47% of the variance in MVIC. ACLr individuals in the high strength subset demonstrated significantly higher CAR and H:M than those in the low strength subset. Increased quadriceps voluntary activation, spinal-reflexive excitability and corticospinal excitability relates to increased quadriceps strength in participants following ACLr. Rehabilitation strategies used to target neural alterations may be beneficial for the restoration of muscle strength following ACLr. Copyright © 2014 Elsevier B.V. All rights reserved.
Reliability of high-power QCW arrays
NASA Astrophysics Data System (ADS)
Feeler, Ryan; Junghans, Jeremy; Remley, Jennifer; Schnurbusch, Don; Stephens, Ed
2010-02-01
Northrop Grumman Cutting Edge Optronics has developed a family of arrays for high-power QCW operation. These arrays are built using CTE-matched heat sinks and hard solder in order to maximize the reliability of the devices. A summary of a recent life test is presented in order to quantify the reliability of QCW arrays and associated laser gain modules. A statistical analysis of the raw lifetime data is presented in order to quantify the data in such a way that is useful for laser system designers. The life tests demonstrate the high level of reliability of these arrays in a number of operating regimes. For single-bar arrays, a MTTF of 19.8 billion shots is predicted. For four-bar samples, a MTTF of 14.6 billion shots is predicted. In addition, data representing a large pump source is analyzed and shown to have an expected lifetime of 13.5 billion shots. This corresponds to an expected operational lifetime of greater than ten thousand hours at repetition rates less than 370 Hz.
2014-10-01
de l’exactitude et de la précision), comparativement au modèle de mesure plus simple qui n’utilise pas de multiplicateurs. Importance pour la défense...3) Bayesian experimental design for receptor placement in order to maximize the expected information in the measured concen- tration data for...applications of the Bayesian inferential methodology for source recon- struction have used high-quality concentration data from well- designed atmospheric
Layered motion segmentation and depth ordering by tracking edges.
Smith, Paul; Drummond, Tom; Cipolla, Roberto
2004-04-01
This paper presents a new Bayesian framework for motion segmentation--dividing a frame from an image sequence into layers representing different moving objects--by tracking edges between frames. Edges are found using the Canny edge detector, and the Expectation-Maximization algorithm is then used to fit motion models to these edges and also to calculate the probabilities of the edges obeying each motion model. The edges are also used to segment the image into regions of similar color. The most likely labeling for these regions is then calculated by using the edge probabilities, in association with a Markov Random Field-style prior. The identification of the relative depth ordering of the different motion layers is also determined, as an integral part of the process. An efficient implementation of this framework is presented for segmenting two motions (foreground and background) using two frames. It is then demonstrated how, by tracking the edges into further frames, the probabilities may be accumulated to provide an even more accurate and robust estimate, and segment an entire sequence. Further extensions are then presented to address the segmentation of more than two motions. Here, a hierarchical method of initializing the Expectation-Maximization algorithm is described, and it is demonstrated that the Minimum Description Length principle may be used to automatically select the best number of motion layers. The results from over 30 sequences (demonstrating both two and three motions) are presented and discussed.
Active inference and epistemic value.
Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni
2015-01-01
We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
Quantitative and Qualitative Assessment of Yttrium-90 PET/CT Imaging
Büsing, Karen-Anett; Schönberg, Stefan O.; Bailey, Dale L.; Willowson, Kathy; Glatting, Gerhard
2014-01-01
Yttrium-90 is known to have a low positron emission decay of 32 ppm that may allow for personalized dosimetry of liver cancer therapy with 90Y labeled microspheres. The aim of this work was to image and quantify 90Y so that accurate predictions of the absorbed dose can be made. The measurements were performed within the QUEST study (University of Sydney, and Sirtex Medical, Australia). A NEMA IEC body phantom containing 6 fillable spheres (10–37 mm ∅) was used to measure the 90Y distribution with a Biograph mCT PET/CT (Siemens, Erlangen, Germany) with time-of-flight (TOF) acquisition. A sphere to background ratio of 8∶1, with a total 90Y activity of 3 GBq was used. Measurements were performed for one week (0, 3, 5 and 7 d). he acquisition protocol consisted of 30 min-2 bed positions and 120 min-single bed position. mages were reconstructed with 3D ordered subset expectation maximization (OSEM) and point spread function (PSF) for iteration numbers of 1–12 with 21 (TOF) and 24 (non-TOF) subsets and CT based attenuation and scatter correction. Convergence of algorithms and activity recovery was assessed based on regions-of-interest (ROI) analysis of the background (100 voxels), spheres (4 voxels) and the central low density insert (25 voxels). For the largest sphere, the recovery coefficient (RC) values for the 30 min –2-bed position, 30 min-single bed and 120 min-single bed were 1.12±0.20, 1.14±0.13, 0.97±0.07 respectively. For the smaller diameter spheres, the PSF algorithm with TOF and single bed acquisition provided a comparatively better activity recovery. Quantification of Y-90 using Biograph mCT PET/CT is possible with a reasonable accuracy, the limitations being the size of the lesion and the activity concentration present. At this stage, based on our study, it seems advantageous to use different protocols depending on the size of the lesion. PMID:25369020
An Image Processing Algorithm Based On FMAT
NASA Technical Reports Server (NTRS)
Wang, Lui; Pal, Sankar K.
1995-01-01
Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT) proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderate the problem of local optima in EM.
Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C
2004-06-30
Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy -- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.
ERIC Educational Resources Information Center
Shemick, John M.
1983-01-01
In a project to identify and verify professional competencies for beginning industrial education teachers, researchers found a 173-item questionnaire unwieldy. Using multiple-matrix sampling, they distributed subsets of items to respondents, resulting in adequate returns as well as duplication, postage, and time savings. (SK)
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
Annotti, Lee A; Teglasi, Hedwig
2017-01-01
Real-world contexts differ in the clarity of expectations for desired responses, as do assessment procedures, ranging along a continuum from maximal conditions that provide well-defined expectations to typical conditions that provide ill-defined expectations. Executive functions guide effective social interactions, but relations between them have not been studied with measures that are matched in the clarity of response expectations. In predicting teacher-rated social competence (SC) from kindergarteners' performance on tasks of executive functions (EFs), we found better model-data fit indexes when both measures were similar in the clarity of response expectations for the child. The maximal EF measure, the Developmental Neuropsychological Assessment, presents well-defined response expectations, and the typical EF measure, 5 scales from the Thematic Apperception Test (TAT), presents ill-defined response expectations (i.e., Abstraction, Perceptual Integration, Cognitive-Experiential Integration, and Associative Thinking). To assess SC under maximal and typical conditions, we used 2 teacher-rated questionnaires, with items, respectively, that emphasize well-defined and ill-defined expectations: the Behavior Rating Inventory: Behavioral Regulation Index and the Social Skills Improvement System: Social Competence Scale. Findings suggest that matching clarity of expectations improves generalization across measures and highlight the usefulness of the TAT to measure EF.
Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng
2014-01-01
In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, is benefit to increase its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on the time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual time of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or be ahead of schedule but cannot be infinitely advanced or infinitely delayed. Obtaining the optimal comprehensive performance can be effective if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by the increase in the relationship coefficient of logistics service integrator (LSI) is limited. The relative concern degree of LSI on cost and service delivery punctuality leads to not only changes in CODP but also to those in the scheduling performance of the LSSC.
Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng
2014-01-01
In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, is benefit to increase its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on the time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual time of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis for a specific example. Results show that the order completion time of the LSSC can be delayed or be ahead of schedule but cannot be infinitely advanced or infinitely delayed. Obtaining the optimal comprehensive performance can be effective if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by the increase in the relationship coefficient of logistics service integrator (LSI) is limited. The relative concern degree of LSI on cost and service delivery punctuality leads to not only changes in CODP but also to those in the scheduling performance of the LSSC. PMID:24715818
The influence of individualism and drinking identity on alcohol problems.
Foster, Dawn W; Yeung, Nelson; Quist, Michelle C
2014-12-01
This study evaluated the interactive association between individualism and drinking identity predicting alcohol use and problems. Seven hundred and ten undergraduates (Mean age =22.84, SD = 5.31, 83.1% female) completed study materials. We expected that drinking identity and individualism would positively correlate with drinking variables. We further expected that individualism would moderate the association between drinking identity and drinking such that the relationship between drinking identity and alcohol outcomes would be positively associated, particularly among those high in individualism. Our findings supported our hypotheses. These findings better explain the relationship between drinking identity, individualism, and alcohol use. Furthermore, this research encourages the consideration of individual factors and personality characteristics in order to develop culturally tailored materials to maximize intervention efficacy across cultures.
Optimal execution in high-frequency trading with Bayesian learning
NASA Astrophysics Data System (ADS)
Du, Bian; Zhu, Hongliang; Zhao, Jingdong
2016-11-01
We consider optimal trading strategies in which traders submit bid and ask quotes to maximize the expected quadratic utility of total terminal wealth in a limit order book. The trader's bid and ask quotes will be changed by the Poisson arrival of market orders. Meanwhile, the trader may update his estimate of other traders' target sizes and directions by Bayesian learning. The solution of optimal execution in the limit order book is a two-step procedure. First, we model an inactive trading with no limit order in the market. The dealer simply holds dollars and shares of stocks until terminal time. Second, he calibrates his bid and ask quotes to the limit order book. The optimal solutions are given by dynamic programming and in fact they are globally optimal. We also give numerical simulation to the value function and optimal quotes at the last part of the article.
Identification of features in indexed data and equipment therefore
Jarman, Kristin H [Richland, WA; Daly, Don Simone [Richland, WA; Anderson, Kevin K [Richland, WA; Wahl, Karen L [Richland, WA
2002-04-02
Embodiments of the present invention provide methods of identifying a feature in an indexed dataset. Such embodiments encompass selecting an initial subset of indices, the initial subset of indices being encompassed by an initial window-of-interest and comprising at least one beginning index and at least one ending index; computing an intensity weighted measure of dispersion for the subset of indices using a subset of responses corresponding to the subset of indices; and comparing the intensity weighted measure of dispersion to a dispersion critical value determined from an expected value of the intensity weighted measure of dispersion under a null hypothesis of no transient feature present. Embodiments of the present invention also encompass equipment configured to perform the methods of the present invention.
Short-Term Planning of Hybrid Power System
NASA Astrophysics Data System (ADS)
Knežević, Goran; Baus, Zoran; Nikolovski, Srete
2016-07-01
In this paper short-term planning algorithm for hybrid power system consist of different types of cascade hydropower plants (run-of-the river, pumped storage, conventional), thermal power plants (coal-fired power plants, combined cycle gas-fired power plants) and wind farms is presented. The optimization process provides a joint bid of the hybrid system, and thus making the operation schedule of hydro and thermal power plants, the operation condition of pumped-storage hydropower plants with the aim of maximizing profits on day ahead market, according to expected hourly electricity prices, the expected local water inflow in certain hydropower plants, and the expected production of electrical energy from the wind farm, taking into account previously contracted bilateral agreement for electricity generation. Optimization process is formulated as hourly-discretized mixed integer linear optimization problem. Optimization model is applied on the case study in order to show general features of the developed model.
Patel, Nitin R; Ankolekar, Suresh
2007-11-30
Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
FLASH_SSF_Aqua-FM3-MODIS_Version3C
Atmospheric Science Data Center
2018-04-04
... Tool: CERES Order Tool (netCDF) Subset Data: CERES Search and Subset Tool (HDF4 & netCDF) ... Cloud Layer Area Cloud Infared Emissivity Cloud Base Pressure Surface (Radiative) Flux TOA Flux Surface Types TOT ... Radiance SW Filtered Radiance LW Flux Order Data: Earthdata Search: Order Data Guide Documents: ...
FLASH_SSF_Terra-FM1-MODIS_Version3C
Atmospheric Science Data Center
2018-04-04
... Tool: CERES Order Tool (netCDF) Subset Data: CERES Search and Subset Tool (HDF4 & netCDF) ... Cloud Layer Area Cloud Infrared Emissivity Cloud Base Pressure Surface (Radiative) Flux TOA Flux Surface Types TOT ... Radiance SW Filtered Radiance LW Flux Order Data: Earthdata Search: Order Data Guide Documents: ...
NASA Astrophysics Data System (ADS)
Kim, Ji Hye; Ahn, Il Jun; Nam, Woo Hyun; Ra, Jong Beom
2015-02-01
Positron emission tomography (PET) images usually suffer from a noticeable amount of statistical noise. In order to reduce this noise, a post-filtering process is usually adopted. However, the performance of this approach is limited because the denoising process is mostly performed on the basis of the Gaussian random noise. It has been reported that in a PET image reconstructed by the expectation-maximization (EM), the noise variance of each voxel depends on its mean value, unlike in the case of Gaussian noise. In addition, we observe that the variance also varies with the spatial sensitivity distribution in a PET system, which reflects both the solid angle determined by a given scanner geometry and the attenuation information of a scanned object. Thus, if a post-filtering process based on the Gaussian random noise is applied to PET images without consideration of the noise characteristics along with the spatial sensitivity distribution, the spatially variant non-Gaussian noise cannot be reduced effectively. In the proposed framework, to effectively reduce the noise in PET images reconstructed by the 3-D ordinary Poisson ordered subset EM (3-D OP-OSEM), we first denormalize an image according to the sensitivity of each voxel so that the voxel mean value can represent its statistical properties reliably. Based on our observation that each noisy denormalized voxel has a linear relationship between the mean and variance, we try to convert this non-Gaussian noise image to a Gaussian noise image. We then apply a block matching 4-D algorithm that is optimized for noise reduction of the Gaussian noise image, and reconvert and renormalize the result to obtain a final denoised image. Using simulated phantom data and clinical patient data, we demonstrate that the proposed framework can effectively suppress the noise over the whole region of a PET image while minimizing degradation of the image resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreyev, A.
Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlomore » simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.« less
Resolution recovery for Compton camera using origin ensemble algorithm.
Andreyev, A; Celler, A; Ozsahin, I; Sitek, A
2016-08-01
Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2017-07-20
The main applications of 188 Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone-metastases. In order to optimize 188 Re therapies, the accurate determination of radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188 Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188 Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in Air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter and resolution recovery was used). For high activities, the dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background were compared to their true values. Finally, Monte-Carlo simulations of a commercial [Formula: see text]-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW-scatter correction applied to 188 Re, although practical, yields only approximate estimates of the true scatter.
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Cassells, Benny; Sin, Gürkan; Gernaey, Krist V
2017-07-01
A novel model-based control strategy has been developed for filamentous fungal fed-batch fermentation processes. The system of interest is a pilot scale (550 L) filamentous fungus process operating at Novozymes A/S. In such processes, it is desirable to maximize the total product achieved in a batch in a defined process time. In order to achieve this goal, it is important to maximize both the product concentration, and also the total final mass in the fed-batch system. To this end, we describe the development of a control strategy which aims to achieve maximum tank fill, while avoiding oxygen limited conditions. This requires a two stage approach: (i) calculation of the tank start fill; and (ii) on-line control in order to maximize fill subject to oxygen transfer limitations. First, a mechanistic model was applied off-line in order to determine the appropriate start fill for processes with four different sets of process operating conditions for the stirrer speed, headspace pressure, and aeration rate. The start fills were tested with eight pilot scale experiments using a reference process operation. An on-line control strategy was then developed, utilizing the mechanistic model which is recursively updated using on-line measurements. The model was applied in order to predict the current system states, including the biomass concentration, and to simulate the expected future trajectory of the system until a specified end time. In this way, the desired feed rate is updated along the progress of the batch taking into account the oxygen mass transfer conditions and the expected future trajectory of the mass. The final results show that the target fill was achieved to within 5% under the maximum fill when tested using eight pilot scale batches, and over filling was avoided. The results were reproducible, unlike the reference experiments which show over 10% variation in the final tank fill, and this also includes over filling. The variance of the final tank fill is reduced by over 74%, meaning that it is possible to target the final maximum fill reproducibly. The product concentration achieved at a given set of process conditions was unaffected by the control strategy. Biotechnol. Bioeng. 2017;114: 1459-1468. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Youngrok
2013-05-15
Heterogeneity exists on a data set when samples from di erent classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed to homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating nite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates ofmore » nite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, while not completely unlabeled there is only imprecise information about class values. In this study we propose new EM algorithms that take advantages of using such partial labels, and thus incorporate more information than traditional EM algorithms. We particularly propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating substantial amount of partially labeled data can be highly signi cant. We also showed model selection based on AIC values fairly works to select the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by Surveillance, Epidemiology and End Results (SEER) program showed a superiority of EM-CPCML to not only the other proposed EM algorithms but also conventional supervised, unsupervised and semi-supervised learning algorithms.« less
Trépanier, Marc-Olivier; Lim, Joonbum; Lai, Terence K Y; Cho, Hye Jin; Domenichiello, Anthony F; Chen, Chuck T; Taha, Ameer Y; Bazinet, Richard P; Burnham, W M
2014-04-01
Docosahexaenoic acid (DHA) is an omega-3 polyunsaturated fatty acid (n-3 PUFA) which has been shown to raise seizure thresholds following acute administration in rats. The aims of the present experiment were the following: 1) to test whether subchronic DHA administration raises seizure threshold in the maximal pentylenetetrazol (PTZ) model 24h following the last injection and 2) to determine whether the increase in seizure threshold is correlated with an increase in serum and/or brain DHA. Animals received daily intraperitoneal (i.p.) injections of 50mg/kg of DHA, DHA ethyl ester (DHA EE), or volume-matched vehicle (albumin/saline) for 14days. On day 15, one subset of animals was seizure tested in the maximal PTZ model (Experiment 1). In a separate (non-seizure tested) subset of animals, blood was collected, and brains were excised following high-energy, head-focused microwave fixation. Lipid analysis was performed on serum and brain (Experiment 2). For data analysis, the DHA and DHA EE groups were combined since they did not differ significantly from each other. In the maximal PTZ model, DHA significantly increased seizure latency by approximately 3-fold as compared to vehicle-injected animals. This increase in seizure latency was associated with an increase in serum unesterified DHA. Total brain DHA and brain unesterified DHA concentrations, however, did not differ significantly in the treatment and control groups. An increase in serum unesterified DHA concentration reflecting increased flux of DHA to the brain appears to explain changes in seizure threshold, independent of changes in brain DHA concentrations. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Wells, Ryan S.; Lynch, Cassie M.; Seifert, Tricia A.
2011-01-01
A number of studies over decades have examined determinants of educational expectations. However, even among the subset of quantitative studies, there is considerable variation in the methods used to operationally define and analyze expectations. Using a systematic literature review and several regression methods to analyze Latino students'…
Weisgerber, Michael; Danduran, Michael; Meurer, John; Hartmann, Kathryn; Berger, Stuart; Flores, Glenn
2009-07-01
To evaluate Cooper 12-minute run/walk test (CT12) as a one-time estimate of cardiorespiratory fitness and marker of fitness change compared with treadmill fitness testing in young children with persistent asthma. A cohort of urban children with asthma participated in the asthma and exercise program and a subset completed pre- and postintervention fitness testing. Treadmill fitness testing was conducted by an exercise physiologist in the fitness laboratory at an academic children's hospital. CT12 was conducted in a college recreation center gymnasium. Forty-five urban children with persistent asthma aged 7 to 14 years participated in exercise interventions. A subset of 19 children completed pre- and postintervention exercise testing. Participants completed a 9-week exercise program where they participated in either swimming or golf 3 days a week for 1 hour. A subset of participants completed fitness testing by 2 methods before and after program completion. CT12 results (meters), maximal oxygen consumption ((.)Vo2max) (mL x kg(-1) x min(-1)), and treadmill exercise time (minutes). CT12 and maximal oxygen consumption were moderately correlated (preintervention: 0.55, P = 0.003; postintervention: 0.48, P = 0.04) as one-time measures of fitness. Correlations of the tests as markers of change over time were poor and nonsignificant. In children with asthma, CT12 is a reasonable one-time estimate of fitness but a poor marker of fitness change over time.
Mapping tropical rainforest canopies using multi-temporal spaceborne imaging spectroscopy
NASA Astrophysics Data System (ADS)
Somers, Ben; Asner, Gregory P.
2013-10-01
The use of imaging spectroscopy for florisic mapping of forests is complicated by the spectral similarity among coexisting species. Here we evaluated an alternative spectral unmixing strategy combining a time series of EO-1 Hyperion images and an automated feature selection strategy in MESMA. Instead of using the same spectral subset to unmix each image pixel, our modified approach allowed the spectral subsets to vary on a per pixel basis such that each pixel is evaluated using a spectral subset tuned towards maximal separability of its specific endmember class combination or species mixture. The potential of the new approach for floristic mapping of tree species in Hawaiian rainforests was quantitatively demonstrated using both simulated and actual hyperspectral image time-series. With a Cohen's Kappa coefficient of 0.65, our approach provided a more accurate tree species map compared to MESMA (Kappa = 0.54). In addition, by the selection of spectral subsets our approach was about 90% faster than MESMA. The flexible or adaptive use of band sets in spectral unmixing as such provides an interesting avenue to address spectral similarities in complex vegetation canopies.
Forecasting continuously increasing life expectancy: what implications?
Le Bourg, Eric
2012-04-01
It has been proposed that life expectancy could linearly increase in the next decades and that median longevity of the youngest birth cohorts could reach 105 years or more. These forecasts have been criticized but it seems that their implications for future maximal lifespan (i.e. the lifespan of the last survivors) have not been considered. These implications make these forecasts untenable and it is less risky to hypothesize that life expectancy and maximal lifespan will reach an asymptotic limit in some decades from now. Copyright © 2012 Elsevier B.V. All rights reserved.
A Mathematical Modelling Approach to One-Day Cricket Batting Orders
Bukiet, Bruce; Ovens, Matthews
2006-01-01
While scoring strategies and player performance in cricket have been studied, there has been little published work about the influence of batting order with respect to One-Day cricket. We apply a mathematical modelling approach to compute efficiently the expected performance (runs distribution) of a cricket batting order in an innings. Among other applications, our method enables one to solve for the probability of one team beating another or to find the optimal batting order for a set of 11 players. The influence of defence and bowling ability can be taken into account in a straightforward manner. In this presentation, we outline how we develop our Markov Chain approach to studying the progress of runs for a batting order of non- identical players along the lines of work in baseball modelling by Bukiet et al., 1997. We describe the issues that arise in applying such methods to cricket, discuss ideas for addressing these difficulties and note limitations on modelling batting order for One-Day cricket. By performing our analysis on a selected subset of the possible batting orders, we apply the model to quantify the influence of batting order in a game of One Day cricket using available real-world data for current players. Key Points Batting order does effect the expected runs distribution in one-day cricket. One-day cricket has fewer data points than baseball, thus extreme values have greater effect on estimated probabilities. Dismissals rare and probabilities very small by comparison to baseball. Probability distribution for lower order batsmen is potentially skewed due to increased risk taking. Full enumeration of all possible line-ups is impractical using a single average computer. PMID:24357943
A mathematical modelling approach to one-day cricket batting orders.
Bukiet, Bruce; Ovens, Matthews
2006-01-01
While scoring strategies and player performance in cricket have been studied, there has been little published work about the influence of batting order with respect to One-Day cricket. We apply a mathematical modelling approach to compute efficiently the expected performance (runs distribution) of a cricket batting order in an innings. Among other applications, our method enables one to solve for the probability of one team beating another or to find the optimal batting order for a set of 11 players. The influence of defence and bowling ability can be taken into account in a straightforward manner. In this presentation, we outline how we develop our Markov Chain approach to studying the progress of runs for a batting order of non- identical players along the lines of work in baseball modelling by Bukiet et al., 1997. We describe the issues that arise in applying such methods to cricket, discuss ideas for addressing these difficulties and note limitations on modelling batting order for One-Day cricket. By performing our analysis on a selected subset of the possible batting orders, we apply the model to quantify the influence of batting order in a game of One Day cricket using available real-world data for current players. Key PointsBatting order does effect the expected runs distribution in one-day cricket.One-day cricket has fewer data points than baseball, thus extreme values have greater effect on estimated probabilities.Dismissals rare and probabilities very small by comparison to baseball.Probability distribution for lower order batsmen is potentially skewed due to increased risk taking.Full enumeration of all possible line-ups is impractical using a single average computer.
Statistical Learning of Origin-Specific Statically Optimal Individualized Treatment Rules
van der Laan, Mark J.; Petersen, Maya L.
2008-01-01
Consider a longitudinal observational or controlled study in which one collects chronological data over time on a random sample of subjects. The time-dependent process one observes on each subject contains time-dependent covariates, time-dependent treatment actions, and an outcome process or single final outcome of interest. A statically optimal individualized treatment rule (as introduced in van der Laan et. al. (2005), Petersen et. al. (2007)) is a treatment rule which at any point in time conditions on a user-supplied subset of the past, computes the future static treatment regimen that maximizes a (conditional) mean future outcome of interest, and applies the first treatment action of the latter regimen. In particular, Petersen et. al. (2007) clarified that, in order to be statically optimal, an individualized treatment rule should not depend on the observed treatment mechanism. Petersen et. al. (2007) further developed estimators of statically optimal individualized treatment rules based on a past capturing all confounding of past treatment history on outcome. In practice, however, one typically wishes to find individualized treatment rules responding to a user-supplied subset of the complete observed history, which may not be sufficient to capture all confounding. The current article provides an important advance on Petersen et. al. (2007) by developing locally efficient double robust estimators of statically optimal individualized treatment rules responding to such a user-supplied subset of the past. However, failure to capture all confounding comes at a price; the static optimality of the resulting rules becomes origin-specific. We explain origin-specific static optimality, and discuss the practical importance of the proposed methodology. We further present the results of a data analysis in which we estimate a statically optimal rule for switching antiretroviral therapy among patients infected with resistant HIV virus. PMID:19122792
Ordered mapping of 3 alphoid DNA subsets on human chromosome 22
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonacci, R.; Baldini, A.; Archidiacono, N.
1994-09-01
Alpha satellite DNA consists of tandemly repeated monomers of 171 bp clustered in the centromeric region of primate chromosomes. Sequence divergence between subsets located in different human chromosomes is usually high enough to ensure chromosome-specific hybridization. Alphoid probes specific for almost every human chromosome have been reported. A single chromosome can carry different subsets of alphoid DNA and some alphoid subsets can be shared by different chromosomes. We report the physical order of three alphoid DNA subsets on human chromosome 22 determined by a combination of low and high resolution cytological mapping methods. Results visually demonstrate the presence of threemore » distinct alphoid DNA domains at the centromeric region of chromosome 22. We have measured the interphase distances between the three probes in three-color FISH experiments. Statistical analysis of the results indicated the order of the subsets. Two color experiments on prometaphase chromosomes established the order of the three domains relative to the arms of chromosome 22 and confirmed the results obtained using interphase mapping. This demonstrates the applicability of interphase mapping for alpha satellite DNA orderering. However, in our experiments, interphase mapping did not provide any information about the relationship between extremities of the repeat arrays. This information was gained from extended chromatin hybridization. The extremities of two of the repeat arrays were seen to be almost overlapping whereas the third repeat array was clearly separated from the other two. Our data show the value of extended chromatin hybridization as a complement of other cytological techniques for high resolution mapping of repetitive DNA sequences.« less
Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
Bi, Xia-An; Zhao, Junxia
2017-01-01
With the development of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are in urgent need. Among the existing algorithms, researches on packet classification algorithms based on hierarchical trie have become an important packet classification research branch because of their widely practical use. Although hierarchical trie is beneficial to save large storage space, it has several shortcomings such as the existence of backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses the formalization method to deal with the packet classification problem by means of mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, and thereby diversified clusters are formed. Thirdly, this paper proposes a hierarchical trie based on the results of expectation-maximization clustering. Finally, this paper respectively conducts simulation experiments and real-environment experiments to compare the performances of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm.
Mapping the Dark Matter with 6dFGS
NASA Astrophysics Data System (ADS)
Mould, Jeremy R.; Magoulas, C.; Springob, C.; Colless, M.; Jones, H.; Lucey, J.; Erdogdu, P.; Campbell, L.
2012-05-01
Fundamental plane distances from the 6dF Galaxy Redshift Survey are fitted to a model of the density field within 200/h Mpc. Likelihood is maximized for a single value of the local galaxy density, as expected in linear theory for the relation between overdensity and peculiar velocity. The dipole of the inferred southern hemisphere early type galaxy peculiar velocities is calculated within 150/h Mpc, before and after correction for the individual galaxy velocities predicted by the model. The former agrees with that obtained by other peculiar velocity studies (e.g. SFI++). The latter is only of order 150 km/sec and consistent with the expectations of the standard cosmological model and recent forecasts of the cosmic mach number, which show linearly declining bulk flow with increasing scale.
The influence of individualism and drinking identity on alcohol problems
Foster, Dawn W.; Yeung, Nelson; Quist, Michelle C.
2014-01-01
This study evaluated the interactive association between individualism and drinking identity predicting alcohol use and problems. Seven hundred and ten undergraduates (Mean age =22.84, SD = 5.31, 83.1% female) completed study materials. We expected that drinking identity and individualism would positively correlate with drinking variables. We further expected that individualism would moderate the association between drinking identity and drinking such that the relationship between drinking identity and alcohol outcomes would be positively associated, particularly among those high in individualism. Our findings supported our hypotheses. These findings better explain the relationship between drinking identity, individualism, and alcohol use. Furthermore, this research encourages the consideration of individual factors and personality characteristics in order to develop culturally tailored materials to maximize intervention efficacy across cultures. PMID:25525420
Volume versus value maximization illustrated for Douglas-fir with thinning
Kurt H. Riitters; J. Douglas Brodie; Chiang Kao
1982-01-01
Economic and physical criteria for selecting even-aged rotation lengths are reviewed with examples of their optimizations. To demonstrate the trade-off between physical volume, economic return, and stand diameter, examples of thinning regimes for maximizing volume, forest rent, and soil expectation are compared with an example of maximizing volume without thinning. The...
Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing
2016-01-01
In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to limited signal-to-noise ratio (SNR) of PET measurements and high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least square (WNLS) method was employed. The proposed multi-tracer DPIR (MTDPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimations.
Influence of reconstruction algorithms on image quality in SPECT myocardial perfusion imaging.
Davidsson, Anette; Olsson, Eva; Engvall, Jan; Gustafsson, Agnetha
2017-11-01
We investigated if image- and diagnostic quality in SPECT MPI could be maintained despite a reduced acquisition time adding Depth Dependent Resolution Recovery (DDRR) for image reconstruction. Images were compared with filtered back projection (FBP) and iterative reconstruction using Ordered Subsets Expectation Maximization with (IRAC) and without (IRNC) attenuation correction (AC). Stress- and rest imaging for 15 min was performed on 21 subjects with a dual head gamma camera (Infinia Hawkeye; GE Healthcare), ECG-gating with 8 frames/cardiac cycle and a low-dose CT-scan. A 9 min acquisition was generated using five instead of eight gated frames and was reconstructed with DDRR, with (IRACRR) and without AC (IRNCRR) as well as with FBP. Three experienced nuclear medicine specialists visually assessed anonymized images according to eight criteria on a four point scale, three related to image quality and five to diagnostic confidence. Statistical analysis was performed using Visual Grading Regression (VGR). Observer confidence in statements on image quality was highest for the images that were reconstructed using DDRR (P<0·01 compared to FBP). Iterative reconstruction without DDRR was not superior to FBP. Interobserver variability was significant for statements on image quality (P<0·05) but lower in the diagnostic statements on ischemia and scar. The confidence in assessing ischemia and scar was not different between the reconstruction techniques (P = n.s.). SPECT MPI collected in 9 min, reconstructed with DDRR and AC, produced better image quality than the standard procedure. The observers expressed the highest diagnostic confidence in the DDRR reconstruction. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing
2016-08-01
In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.
NASA Astrophysics Data System (ADS)
King, M.; Boening, Guido; Baker, S.; Steinmetz, N.
2004-10-01
In current clinical oncology practice, it often takes weeks or months of cancer therapy until a response to treatment can be identified by evaluation of tumor size in images. It is hypothesized that changes in relative localization of the apoptosis imaging agent Tc-99m Annexin before and after the administration of chemotherapy may be useful as an early indicator of the success of therapy. The objective of this study was to determine the minimum relative change in tumor localization that could be confidently determined as an increased localization. A modified version of the Data Spectrum Anthropomorphic Torso phantom, in which four spheres could be positioned in the lung region, was filled with organ concentrations of Tc-99m representative of those observed in clinical imaging of Tc-99m Annexin. Five acquisitions of an initial sphere to lung concentration, and at concentrations of 1.1, 1.2, 1.3, and 1.4 times the initial concentration, were acquired at clinically realistic count levels. The acquisitions were reconstructed by filtered backprojection, ordered subset expectation maximization (OSEM) without attenuation compensation (AC), and OSEM with AC. Permutation methodology was used to create multiple region-of-interest count ratios from the five noise realizations at each concentration and between the elevated and initial concentrations. The resulting distributions were approximated by Gaussians, which were then used to estimate the likelihood of Type 1 and Type 2 Errors. It was determined that for the cases investigated, greater than a 20% to 30% or more increase was needed to confidently determine that an increase in localization had occurred depending on sphere size and reconstruction strategy.
NASA Astrophysics Data System (ADS)
Fakhri, G. El; Kijewski, M. F.; Moore, S. C.
2001-06-01
Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and inaccurate scatter and attenuation corrections. We compared an analytical approach (AA) of more accurate quantitation to a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12, normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR), IA-modeled attenuation, and VCR in the projector/backprojector of an ordered subsets-expectation maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results in both Tc-99m and I-123 studies in all brain structures considered in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies, but greater than 30% for I-123 studies, due to partial volume effect in the striata. Our results were validated using I-123 physical acquisitions of an anthropomorphic brain phantom. The IA yielded quantitation accuracy comparable to that obtained with IA, while requiring much less processing time. However, in most conditions, IA yielded lower noise for the same bias than did AA.
Castro, P; Huerga, C; Chamorro, P; Garayoa, J; Roch, M; Pérez, L
2018-04-17
The goals of the study are to characterize imaging properties in 2D PET images reconstructed with the iterative algorithm ordered-subset expectation maximization (OSEM) and to propose a new method for the generation of synthetic images. The noise is analyzed in terms of its magnitude, spatial correlation, and spectral distribution through standard deviation, autocorrelation function, and noise power spectrum (NPS), respectively. Their variations with position and activity level are also analyzed. This noise analysis is based on phantom images acquired from 18 F uniform distributions. Experimental recovery coefficients of hot spheres in different backgrounds are employed to study the spatial resolution of the system through point spread function (PSF). The NPS and PSF functions provide the baseline for the proposed simulation method: convolution with PSF as kernel and noise addition from NPS. The noise spectral analysis shows that the main contribution is of random nature. It is also proven that attenuation correction does not alter noise texture but it modifies its magnitude. Finally, synthetic images of 2 phantoms, one of them an anatomical brain, are quantitatively compared with experimental images showing a good agreement in terms of pixel values and pixel correlations. Thus, the contrast to noise ratio for the biggest sphere in the NEMA IEC phantom is 10.7 for the synthetic image and 8.8 for the experimental image. The properties of the analyzed OSEM-PET images can be described by NPS and PSF functions. Synthetic images, even anatomical ones, are successfully generated by the proposed method based on the NPS and PSF. Copyright © 2018 Sociedad Española de Medicina Nuclear e Imagen Molecular. Publicado por Elsevier España, S.L.U. All rights reserved.
Furuta, Akihiro; Onishi, Hideo; Amijima, Hizuru
2018-06-01
This study aimed to evaluate the effect of ventricular enlargement on the specific binding ratio (SBR) and to validate the cerebrospinal fluid (CSF)-Mask algorithm for quantitative SBR assessment of 123 I-FP-CIT single-photon emission computed tomography (SPECT) images with the use of a 3D-striatum digital brain (SDB) phantom. Ventricular enlargement was simulated by three-dimensional extensions in a 3D-SDB phantom comprising segments representing the striatum, ventricle, brain parenchyma, and skull bone. The Evans Index (EI) was measured in 3D-SDB phantom images of an enlarged ventricle. Projection data sets were generated from the 3D-SDB phantoms with blurring, scatter, and attenuation. Images were reconstructed using the ordered subset expectation maximization (OSEM) algorithm and corrected for attenuation, scatter, and resolution recovery. We bundled DaTView (Southampton method) with the CSF-Mask processing software for SBR. We assessed SBR with the use of various coefficients (f factor) of the CSF-Mask. Specific binding ratios of 1, 2, 3, 4, and 5 corresponded to SDB phantom simulations with true values. Measured SBRs > 50% that were underestimated with EI increased compared with the true SBR and this trend was outstanding at low SBR. The CSF-Mask improved 20% underestimates and brought the measured SBR closer to the true values at an f factor of 1.0 despite an increase in EI. We connected the linear regression function (y = - 3.53x + 1.95; r = 0.95) with the EI and f factor using root-mean-square error. Processing with CSF-Mask generates accurate quantitative SBR from dopamine transporter SPECT images of patients with ventricular enlargement.
Onishi, Hideo; Motomura, Nobutoku; Takahashi, Masaaki; Yanagisawa, Masamichi; Ogawa, Koichi
2010-03-01
Degradation of SPECT images results from various physical factors. The primary aim of this study was the development of a digital phantom for use in the characterization of factors that contribute to image degradation in clinical SPECT studies. A 3-dimensional mathematic cylinder (3D-MAC) phantom was devised and developed. The phantom (200 mm in diameter and 200 mm long) comprised 3 imbedded stacks of five 30-mm-long cylinders (diameters, 4, 10, 20, 40, and 60 mm). In simulations, the 3 stacks and the background were assigned radioisotope concentrations and attenuation coefficients. SPECT projection datasets that included Compton scattering effects, photoelectric effects, and gamma-camera models were generated using the electron gamma-shower Monte Carlo simulation program. Collimator parameters, detector resolution, total photons acquired, number of projections acquired, and radius of rotation were varied in simulations. The projection data were formatted in Digital Imaging and Communications in Medicine (DICOM) and imported to and reconstructed using commercial reconstruction software on clinical SPECT workstations. Using the 3D-MAC phantom, we validated that contrast depended on size of region of interest (ROI) and was overestimated when the ROI was small. The low-energy general-purpose collimator caused a greater partial-volume effect than did the low-energy high-resolution collimator, and contrast in the cold region was higher using the filtered backprojection algorithm than using the ordered-subset expectation maximization algorithm in the SPECT images. We used imported DICOM projection data and reconstructed these data using vendor software; in addition, we validated reconstructed images. The devised and developed 3D-MAC SPECT phantom is useful for the characterization of various physical factors, contrasts, partial-volume effects, reconstruction algorithms, and such, that contribute to image degradation in clinical SPECT studies.
Digital PET compliance to EARL accreditation specifications.
Koopman, Daniëlle; Groot Koerkamp, Maureen; Jager, Pieter L; Arkies, Hester; Knollema, Siert; Slump, Cornelis H; Sanches, Pedro G; van Dalen, Jorn A
2017-12-01
Our aim was to evaluate if a recently introduced TOF PET system with digital photon counting technology (Philips Healthcare), potentially providing an improved image quality over analogue systems, can fulfil EANM research Ltd (EARL) accreditation specifications for tumour imaging with FDG-PET/CT. We have performed a phantom study on a digital TOF PET system using a NEMA NU2-2001 image quality phantom with six fillable spheres. Phantom preparation and PET/CT acquisition were performed according to the European Association of Nuclear Medicine (EANM) guidelines. We made list-mode ordered-subsets expectation maximization (OSEM) TOF PET reconstructions, with default settings, three voxel sizes (4 × 4 × 4 mm 3 , 2 × 2 × 2 mm 3 and 1 × 1 × 1 mm 3 ) and with/without point spread function (PSF) modelling. On each PET dataset, mean and maximum activity concentration recovery coefficients (RC mean and RC max ) were calculated for all phantom spheres and compared to EARL accreditation specifications. The RCs of the 4 × 4 × 4 mm 3 voxel dataset without PSF modelling proved closest to EARL specifications. Next, we added a Gaussian post-smoothing filter with varying kernel widths of 1-7 mm. EARL specifications were fulfilled when using kernel widths of 2 to 4 mm. TOF PET using digital photon counting technology fulfils EARL accreditation specifications for FDG-PET/CT tumour imaging when using an OSEM reconstruction with 4 × 4 × 4 mm 3 voxels, no PSF modelling and including a Gaussian post-smoothing filter of 2 to 4 mm.
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on peak standardized uptake value (SUV peak ). The efficacy of our method was verified using 18 F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUV peak . The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUV max ). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUV max and the SUV peak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUV max and SUV peak , respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
NASA Technical Reports Server (NTRS)
Eliason, E.; Hansen, C. J.; McEwen, A.; Delamere, W. A.; Bridges, N.; Grant, J.; Gulich, V.; Herkenhoff, K.; Keszthelyi, L.; Kirk, R.
2003-01-01
Science return from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) will be optimized by maximizing science participation in the experiment. MRO is expected to arrive at Mars in March 2006, and the primary science phase begins near the end of 2006 after aerobraking (6 months) and a transition phase. The primary science phase lasts for almost 2 Earth years, followed by a 2-year relay phase in which science observations by MRO are expected to continue. We expect to acquire approx. 10,000 images with HiRISE over the course of MRO's two earth-year mission. HiRISE can acquire images with a ground sampling dimension of as little as 30 cm (from a typical altitude of 300 km), in up to 3 colors, and many targets will be re-imaged for stereo. With such high spatial resolution, the percent coverage of Mars will be very limited in spite of the relatively high data rate of MRO (approx. 10x greater than MGS or Odyssey). We expect to cover approx. 1% of Mars at approx. 1m/pixel or better, approx. 0.1% at full resolution, and approx. 0.05% in color or in stereo. Therefore, the placement of each HiRISE image must be carefully considered in order to maximize the scientific return from MRO. We believe that every observation should be the result of a mini research project based on pre-existing datasets. During operations, we will need a large database of carefully researched 'suggested' observations to select from. The HiRISE team is dedicated to involving the broad Mars community in creating this database, to the fullest degree that is both practical and legal. The philosophy of the team and the design of the ground data system are geared to enabling community involvement. A key aspect of this is that image data will be made available to the planetary community for science analysis as quickly as possible to encourage feedback and new ideas for targets.
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation resulting in the reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during feature extraction and classification stages including parallel preprocessing, and their combinations, so-called a Parallel Expectation-Maximization PCA architecture. Comparing to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision leading to high speed face recognition systems, that is, the speed-up over nine and three times over PCA and Parallel PCA.
Sparse principal component analysis in medical shape modeling
NASA Astrophysics Data System (ADS)
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
Klinger, Regine; Flor, Herta
2014-01-01
Expectancy and learning are the core psychological mechanisms of placebo analgesia. They interact with further psychological processes such as emotions and motivations (e.g., anxiety, desire for relief), somatic focus, or cognitions (e.g., attitudes toward the treatment). The development of placebo responsiveness and the actual placebo response in a person is the result of the complex interaction between factors traced back to the individual learning history related to analgesic drugs or treatments and factors of the current context referring to the analgesic or placebo treatment. The aim of this chapter is to depict these complex interactions in a new model of analgesic placebo effects. It joins aspects of the learning history (preexisting experiences and preexisting expectations) of a patient with aspects of the current context (current expectation as a result of external and internal situation in which a pain medication/treatment/placebo is taken, e.g., current information about pain medication, current specific context/cues, desire for pain relief, certainty about upcoming pain relief, current expectation about pain reducing course, current selective attention, increased pain experience, or decreased pain experience). In order to exploit placebo efficacy for an analgesic treatment it is worthwhile to assess in which direction each of these factors exerts its influence in order to maximize placebo effects for a specific patient. By applying placebo mechanisms in this differentiated way, the efficacy of pain treatment can be deliberately boosted.
Coding for Parallel Links to Maximize the Expected Value of Decodable Messages
NASA Technical Reports Server (NTRS)
Klimesh, Matthew A.; Chang, Christopher S.
2011-01-01
When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from spacecraft under certain conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, D; Jung, J; Suh, T
2014-06-01
Purpose: Purpose of paper is to confirm the feasibility of acquisition of three dimensional single photon emission computed tomography (SPECT) image from boron neutron capture therapy (BNCT) using Monte Carlo simulation. Methods: In case of simulation, the pixelated SPECT detector, collimator and phantom were simulated using Monte Carlo n particle extended (MCNPX) simulation tool. A thermal neutron source (<1 eV) was used to react with the boron uptake region (BUR) in the phantom. Each geometry had a spherical pattern, and three different BURs (A, B and C region, density: 2.08 g/cm3) were located in the middle of the brain phantom.more » The data from 128 projections for each sorting process were used to achieve image reconstruction. The ordered subset expectation maximization (OSEM) reconstruction algorithm was used to obtain a tomographic image with eight subsets and five iterations. The receiver operating characteristic (ROC) curve analysis was used to evaluate the geometric accuracy of reconstructed image. Results: The OSEM image was compared with the original phantom pattern image. The area under the curve (AUC) was calculated as the gross area under each ROC curve. The three calculated AUC values were 0.738 (A region), 0.623 (B region), and 0.817 (C region). The differences between length of centers of two boron regions and distance of maximum count points were 0.3 cm, 1.6 cm and 1.4 cm. Conclusion: The possibility of extracting a 3D BNCT SPECT image was confirmed using the Monte Carlo simulation and OSEM algorithm. The prospects for obtaining an actual BNCT SPECT image were estimated from the quality of the simulated image and the simulation conditions. When multiple tumor region should be treated using the BNCT, a reasonable model to determine how many useful images can be obtained from the SPECT could be provided to the BNCT facilities. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP)(Grant No.200900420) and the Radiation Technology Research and Development program (Grant No.2013043498), Republic of Korea.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gheorghiu, Vlad; Yu Li; Cohen, Scott M.
We investigate the conditions under which a set S of pure bipartite quantum states on a DxD system can be locally cloned deterministically by separable operations, when at least one of the states is full Schmidt rank. We allow for the possibility of cloning using a resource state that is less than maximally entangled. Our results include that: (i) all states in S must be full Schmidt rank and equally entangled under the G-concurrence measure, and (ii) the set S can be extended to a larger clonable set generated by a finite group G of order |G|=N, the number ofmore » states in the larger set. It is then shown that any local cloning apparatus is capable of cloning a number of states that divides D exactly. We provide a complete solution for two central problems in local cloning, giving necessary and sufficient conditions for (i) when a set of maximally entangled states can be locally cloned, valid for all D; and (ii) local cloning of entangled qubit states with nonvanishing entanglement. In both of these cases, we show that a maximally entangled resource is necessary and sufficient, and the states must be related to each other by local unitary 'shift' operations. These shifts are determined by the group structure, so need not be simple cyclic permutations. Assuming this shifted form and partially entangled states, then in D=3 we show that a maximally entangled resource is again necessary and sufficient, while for higher-dimensional systems, we find that the resource state must be strictly more entangled than the states in S. All of our necessary conditions for separable operations are also necessary conditions for local operations and classical communication (LOCC), since the latter is a proper subset of the former. In fact, all our results hold for LOCC, as our sufficient conditions are demonstrated for LOCC, directly.« less
Valverde-Barrantes, Oscar J.; Horning, Amber L.; Smemo, Kurt A.; ...
2016-02-10
In this study, there is little quantitative information about the relationship between root traits and the extent of arbuscular mycorrhizal fungi (AMF) colonization. We expected that ancestral species with thick roots will maximize AMF habitat by maintaining similar root traits across root orders (i.e., high root trait integration), whereas more derived species are expected to display a sharp transition from acquisition to structural roots. Moreover, we hypothesized that interspecific morphological differences rather than soil conditions will be the main driver of AMF colonization We analyzed 14 root morphological and chemical traits and AMF colonization rates for the first three rootmore » orders of 34 temperate tree species grown in two common gardens. We also collected associated soil to measure the effect of soil conditions on AMF colonization Results Thick-root magnoliids showed less variation in root traits along root orders than more-derived angiosperm groups. Variation in stele:root diameter ratio was the best indicator of AMF colonization within and across root orders. Root functional traits rather than soil conditions largely explained the variation in AMF colonization among species. In conclusion, not only the traits of first order but the entire structuring of the root system varied among plant lineages, suggesting alternative evolutionary strategies of resource acquisition. Understanding evolutionary pathways in below ground organs could open new avenues to understand tree species influence on soil carbon and nutrient cycling.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valverde-Barrantes, Oscar J.; Horning, Amber L.; Smemo, Kurt A.
In this study, there is little quantitative information about the relationship between root traits and the extent of arbuscular mycorrhizal fungi (AMF) colonization. We expected that ancestral species with thick roots will maximize AMF habitat by maintaining similar root traits across root orders (i.e., high root trait integration), whereas more derived species are expected to display a sharp transition from acquisition to structural roots. Moreover, we hypothesized that interspecific morphological differences rather than soil conditions will be the main driver of AMF colonization We analyzed 14 root morphological and chemical traits and AMF colonization rates for the first three rootmore » orders of 34 temperate tree species grown in two common gardens. We also collected associated soil to measure the effect of soil conditions on AMF colonization Results Thick-root magnoliids showed less variation in root traits along root orders than more-derived angiosperm groups. Variation in stele:root diameter ratio was the best indicator of AMF colonization within and across root orders. Root functional traits rather than soil conditions largely explained the variation in AMF colonization among species. In conclusion, not only the traits of first order but the entire structuring of the root system varied among plant lineages, suggesting alternative evolutionary strategies of resource acquisition. Understanding evolutionary pathways in below ground organs could open new avenues to understand tree species influence on soil carbon and nutrient cycling.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fallahpoor, M; Abbasi, M; Sen, A
Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of radionuclide distribution and attenuation coefficients and 2) using a reliable method for dose calculation based on activity and attenuation map. In this research, internal dosimetry for 153-Samarium (153-Sm) was done by SPECT-CT images coupled GATE Monte Carlo package for internal dosimetry. Methods: A 50 years old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (Gamma: 103keV and beta: 0.81MeV). A SPECT/CT scan was performed with the Siemens Simbia-Tmore » scanner. SPECT and CT images were registered using default registration software. SPECT quantification was achieved by compensating for all image degrading factors including body attenuation, Compton scattering and collimator-detector response (CDR). Triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping is used to convert Hounsfield units in CT image to attenuation map. Organ borders were defined by the itk-SNAP toolkit segmentation on CT image. GATE was then used for internal dose calculation. The Specific Absorbed Fractions (SAFs) and S-values were reported as MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs as expected. S-value for lung is the highest after spine that can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT-CT images and Monte Carlo for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based methods or simplified dose estimation methods. With advent of high speed computers, Monte Carlo can be used for treatment planning on a day to day basis.« less
Impact of Time-of-Flight on PET Tumor Detection
Kadrmas, Dan J.; Casey, Michael E.; Conti, Maurizio; Jakoby, Bjoern W.; Lois, Cristina; Townsend, David W.
2009-01-01
Time-of-flight (TOF) PET uses very fast detectors to improve localization of events along coincidence lines-of-response. This information is then utilized to improve the tomographic reconstruction. This work evaluates the effect of TOF upon an observer's performance for detecting and localizing focal warm lesions in noisy PET images. Methods An advanced anthropomorphic lesion-detection phantom was scanned 12 times over 3 days on a prototype TOF PET/CT scanner (Siemens Medical Solutions). The phantom was devised to mimic whole-body oncologic 18F-FDG PET imaging, and a number of spheric lesions (diameters 6–16 mm) were distributed throughout the phantom. The data were reconstructed with the baseline line-of-response ordered-subsets expectation-maximization algorithm, with the baseline algorithm plus point spread function model (PSF), baseline plus TOF, and with both PSF+TOF. The lesion-detection performance of each reconstruction was compared and ranked using localization receiver operating characteristics (LROC) analysis with both human and numeric observers. The phantom results were then subjectively compared to 2 illustrative patient scans reconstructed with PSF and with PSF+TOF. Results Inclusion of TOF information provides a significant improvement in the area under the LROC curve compared to the baseline algorithm without TOF data (P = 0.002), providing a degree of improvement similar to that obtained with the PSF model. Use of both PSF+TOF together provided a cumulative benefit in lesion-detection performance, significantly outperforming either PSF or TOF alone (P < 0.002). Example patient images reflected the same image characteristics that gave rise to improved performance in the phantom data. Conclusion Time-of-flight PET provides a significant improvement in observer performance for detecting focal warm lesions in a noisy background. These improvements in image quality can be expected to improve performance for the clinical tasks of detecting lesions and staging disease. Further study in a large clinical population is warranted to assess the benefit of TOF for various patient sizes and count levels, and to demonstrate effective performance in the clinical environment. PMID:19617317
Koutsoukas, Alexios; Paricharak, Shardul; Galloway, Warren R J D; Spring, David R; Ijzerman, Adriaan P; Glen, Robert C; Marcus, David; Bender, Andreas
2014-01-27
Chemical diversity is a widely applied approach to select structurally diverse subsets of molecules, often with the objective of maximizing the number of hits in biological screening. While many methods exist in the area, few systematic comparisons using current descriptors in particular with the objective of assessing diversity in bioactivity space have been published, and this shortage is what the current study is aiming to address. In this work, 13 widely used molecular descriptors were compared, including fingerprint-based descriptors (ECFP4, FCFP4, MACCS keys), pharmacophore-based descriptors (TAT, TAD, TGT, TGD, GpiDAPH3), shape-based descriptors (rapid overlay of chemical structures (ROCS) and principal moments of inertia (PMI)), a connectivity-matrix-based descriptor (BCUT), physicochemical-property-based descriptors (prop2D), and a more recently introduced molecular descriptor type (namely, "Bayes Affinity Fingerprints"). We assessed both the similar behavior of the descriptors in assessing the diversity of chemical libraries, and their ability to select compounds from libraries that are diverse in bioactivity space, which is a property of much practical relevance in screening library design. This is particularly evident, given that many future targets to be screened are not known in advance, but that the library should still maximize the likelihood of containing bioactive matter also for future screening campaigns. Overall, our results showed that descriptors based on atom topology (i.e., fingerprint-based descriptors and pharmacophore-based descriptors) correlate well in rank-ordering compounds, both within and between descriptor types. On the other hand, shape-based descriptors such as ROCS and PMI showed weak correlation with the other descriptors utilized in this study, demonstrating significantly different behavior. We then applied eight of the molecular descriptors compared in this study to sample a diverse subset of sample compounds (4%) from an initial population of 2587 compounds, covering the 25 largest human activity classes from ChEMBL and measured the coverage of activity classes by the subsets. Here, it was found that "Bayes Affinity Fingerprints" achieved an average coverage of 92% of activity classes. Using the descriptors ECFP4, GpiDAPH3, TGT, and random sampling, 91%, 84%, 84%, and 84% of the activity classes were represented in the selected compounds respectively, followed by BCUT, prop2D, MACCS, and PMI (in order of decreasing performance). In addition, we were able to show that there is no visible correlation between compound diversity in PMI space and in bioactivity space, despite frequent utilization of PMI plots to this end. To summarize, in this work, we assessed which descriptors select compounds with high coverage of bioactivity space, and can hence be used for diverse compound selection for biological screening. In cases where multiple descriptors are to be used for diversity selection, this work describes which descriptors behave complementarily, and can hence be used jointly to focus on different aspects of diversity in chemical space.
González, M; Gutiérrez, C; Martínez, R
2012-09-01
A two-dimensional bisexual branching process has recently been presented for the analysis of the generation-to-generation evolution of the number of carriers of a Y-linked gene. In this model, preference of females for males with a specific genetic characteristic is assumed to be determined by an allele of the gene. It has been shown that the behavior of this kind of Y-linked gene is strongly related to the reproduction law of each genotype. In practice, the corresponding offspring distributions are usually unknown, and it is necessary to develop their estimation theory in order to determine the natural selection of the gene. Here we deal with the estimation problem for the offspring distribution of each genotype of a Y-linked gene when the only observable data are each generation's total numbers of males of each genotype and of females. We set out the problem in a non parametric framework and obtain the maximum likelihood estimators of the offspring distributions using an expectation-maximization algorithm. From these estimators, we also derive the estimators for the reproduction mean of each genotype and forecast the distribution of the future population sizes. Finally, we check the accuracy of the algorithm by means of a simulation study.
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
Choosing Objectives in Over-Subscription Planning
NASA Technical Reports Server (NTRS)
Smith, David E.
2003-01-01
Many NASA planning problems are over-subscription problems - that is, there are a large number of possible goals of differing value, and the planning system must choose a subset &it car! be accomplished within the limited time and resources available. Examples include planning for telescopes like Hubble, SIRTF, and SOFIA; scheduling for the Deep Space Network; and planning science experiments for a Mars rover. Unfortunately, existing planning systems are not designed to deal with problems like this - they expect a well-defined conjunctive goal and terminate in failure unless the entire goal is achieved. In this paper we develop techniques for over-subscription problems that assist a classical planner in choosing which goals to achieve, and the order in which to achieve them. These techniques use plan graph cost-estimation techniques to construct an orienteering problem, which is then used to provide heuristic advice on the goals and goal order that should considered by a planner.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering
Bi, Xia-an; Zhao, Junxia
2017-01-01
With the development of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are in urgent need. Among the existing algorithms, researches on packet classification algorithms based on hierarchical trie have become an important packet classification research branch because of their widely practical use. Although hierarchical trie is beneficial to save large storage space, it has several shortcomings such as the existence of backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses the formalization method to deal with the packet classification problem by means of mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, and thereby diversified clusters are formed. Thirdly, this paper proposes a hierarchical trie based on the results of expectation-maximization clustering. Finally, this paper respectively conducts simulation experiments and real-environment experiments to compare the performances of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm. PMID:28704476
Nishi, Kanae; Kewley-Port, Diane
2008-01-01
Purpose Nishi and Kewley-Port (2007) trained Japanese listeners to perceive nine American English monophthongs and showed that a protocol using all nine vowels (fullset) produced better results than the one using only the three more difficult vowels (subset). The present study extended the target population to Koreans and examined whether protocols combining the two stimulus sets would provide more effective training. Method Three groups of five Korean listeners were trained on American English vowels for nine days using one of the three protocols: fullset only, first three days on subset then six days on fullset, or first six days on fullset then three days on subset. Participants' performance was assessed by pre- and post-training tests, as well as by a mid-training test. Results 1) Fullset training was also effective for Koreans; 2) no advantage was found for the two combined protocols over the fullset only protocol, and 3) sustained “non-improvement” was observed for training using one of the combined protocols. Conclusions In using subsets for training American English vowels, care should be taken not only in the selection of subset vowels, but also for the training orders of subsets. PMID:18664694
Research priorities and plans for the International Space Station-results of the 'REMAP' Task Force
NASA Technical Reports Server (NTRS)
Kicza, M.; Erickson, K.; Trinh, E.
2003-01-01
Recent events in the International Space Station (ISS) Program have resulted in the necessity to re-examine the research priorities and research plans for future years. Due to both technical and fiscal resource constraints expected on the International Space Station, it is imperative that research priorities be carefully reviewed and clearly articulated. In consultation with OSTP and the Office of Management and budget (OMB), NASA's Office of Biological and Physical Research (OBPR) assembled an ad-hoc external advisory committee, the Biological and Physical Research Maximization and Prioritization (REMAP) Task Force. This paper describes the outcome of the Task Force and how it is being used to define a roadmap for near and long-term Biological and Physical Research objectives that supports NASA's Vision and Mission. Additionally, the paper discusses further prioritizations that were necessitated by budget and ISS resource constraints in order to maximize utilization of the International Space Station. Finally, a process has been developed to integrate the requirements for this prioritized research with other agency requirements to develop an integrated ISS assembly and utilization plan that maximizes scientific output. c2003 American Institute of Aeronautics and Astronautics. Published by Elsevier Science Ltd. All rights reserved.
Evaluating the Investment Potential of HSAs in Benefit Programs.
LaFleur, James; Magner, Liana; Domaszewicz, Sander
Despite its complexities, the health savings account (HSA) is a powerful and growing element of the U.S. financial landscape. In the future, employers will likely be expected to provide tax-advantaged savings programs for employees' current and future medical expenses. This article discusses investment lineup issues that must be addressed in order to optimize HSAs to help participants achieve successful outcomes. Plan sponsors at the forefront of addressing these issues (and perhaps others) will be in a better position to help their employees maximize both the health benefits and the wealth benefits provided for a secure retirement.
Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim
2016-01-01
The average topological overlap of two graphs of two consecutive time steps measures the amount of changes in the edge configuration between the two snapshots. This value has to be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, this methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaption of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented, which shows the expected behaviour mentioned above. The newly proposed adaption uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, for the calculation of the topological overlap. The three methods were compared with the help of vivid example networks to reveal the differences between the proposed notations. Furthermore, these three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods.
Squeezing of magnetic flux in nanorings.
Dajka, J; Ptok, A; Luczka, J
2012-12-12
We study superconducting and non-superconducting nanorings and look for non-classical features of magnetic flux passing through nanorings. We show that the magnetic flux can exhibit purely quantum properties in some peculiar states with quadrature squeezing. We identify a subset of Gazeau-Klauder states in which the magnetic flux can be squeezed and, within tailored parameter regimes, quantum fluctuations of the magnetic flux can be maximally reduced.
Michael R. Vanderberg; Kevin Boston; John Bailey
2011-01-01
Accounting for the probability of loss due to disturbance events can influence the prediction of carbon flux over a planning horizon, and can affect the determination of optimal silvicultural regimes to maximize terrestrial carbon storage. A preliminary model that includes forest disturbance-related carbon loss was developed to maximize expected values of carbon stocks...
Quantum coherence generating power, maximally abelian subalgebras, and Grassmannian geometry
NASA Astrophysics Data System (ADS)
Zanardi, Paolo; Campos Venuti, Lorenzo
2018-01-01
We establish a direct connection between the power of a unitary map in d-dimensions (d < ∞) to generate quantum coherence and the geometry of the set Md of maximally abelian subalgebras (of the quantum system full operator algebra). This set can be seen as a topologically non-trivial subset of the Grassmannian over linear operators. The natural distance over the Grassmannian induces a metric structure on Md, which quantifies the lack of commutativity between the pairs of subalgebras. Given a maximally abelian subalgebra, one can define, on physical grounds, an associated measure of quantum coherence. We show that the average quantum coherence generated by a unitary map acting on a uniform ensemble of quantum states in the algebra (the so-called coherence generating power of the map) is proportional to the distance between a pair of maximally abelian subalgebras in Md connected by the unitary transformation itself. By embedding the Grassmannian into a projective space, one can pull-back the standard Fubini-Study metric on Md and define in this way novel geometrical measures of quantum coherence generating power. We also briefly discuss the associated differential metric structures.
On Use of Multi-Chambered Fission Detectors for In-Core, Neutron Spectroscopy
NASA Astrophysics Data System (ADS)
Roberts, Jeremy A.
2018-01-01
Presented is a short, computational study on the potential use of multichambered fission detectors for in-core, neutron spectroscopy. Motivated by the development of very small fission chambers at CEA in France and at Kansas State University in the U.S., it was assumed in this preliminary analysis that devices can be made small enough to avoid flux perturbations and that uncertainties related to measurements can be ignored. It was hypothesized that a sufficient number of chambers with unique reactants can act as a real-time, foilactivation experiment. An unfolding scheme based on maximizing (Shannon) entropy was used to produce a flux spectrum from detector signals that requires no prior information. To test the method, integral, detector responses were generated for singleisotope detectors of various Th, U, Np, Pu, Am, and Cs isotopes using a simplified, pressurized-water reactor spectrum and fluxweighted, microscopic, fission cross sections, in the WIMS-69 multigroup format. An unfolded spectrum was found from subsets of these responses that had a maximum entropy while reproducing the responses considered and summing to one (that is, they were normalized). Several nuclide subsets were studied, and, as expected, the results indicate inclusion of more nuclides leads to better spectra but with diminishing improvements, with the best-case spectrum having an average, relative, group-wise error of approximately 51%. Furthermore, spectra found from minimum-norm and Tihkonov-regularization inversion were of lower quality than the maximum entropy solutions. Finally, the addition of thermal-neutron filters (here, Cd and Gd) provided substantial improvement over unshielded responses alone. The results, as a whole, suggest that in-core, neutron spectroscopy is at least marginally feasible.
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.
Very Slow Search and Reach: Failure to Maximize Expected Gain in an Eye-Hand Coordination Task
Zhang, Hang; Morvan, Camille; Etezad-Heydari, Louis-Alexandre; Maloney, Laurence T.
2012-01-01
We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt. PMID:23071430
Multi-task feature selection in microarray data by binary integer programming.
Lan, Liang; Vucetic, Slobodan
2013-12-20
A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
NASA Astrophysics Data System (ADS)
Yang, Wen; Fung, Richard Y. K.
2014-06-01
This article considers an order acceptance problem in a make-to-stock manufacturing system with multiple demand classes in a finite time horizon. Demands in different periods are random variables and are independent of one another, and replenishments of inventory deviate from the scheduled quantities. The objective of this work is to maximize the expected net profit over the planning horizon by deciding the fraction of the demand that is going to be fulfilled. This article presents a stochastic order acceptance optimization model and analyses the existence of the optimal promising policies. An example of a discrete problem is used to illustrate the policies by applying the dynamic programming method. In order to solve the continuous problems, a heuristic algorithm based on stochastic approximation (HASA) is developed. Finally, the computational results of a case example illustrate the effectiveness and efficiency of the HASA approach, and make the application of the proposed model readily acceptable.
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
Maximizing the Spread of Influence via Generalized Degree Discount.
Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun
2016-01-01
It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has severed as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all the nodes equally without any differences. To consider a general situation in real world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of original method. In our method, the status of a node is defined as a probability of not being influenced by any of its neighbors, and an index generalized discounted degree of one node is presented to measure the expected number of nodes it can influence. Then the spreaders are selected sequentially upon its generalized discounted degree in current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods.
Maximizing the Spread of Influence via Generalized Degree Discount
Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun
2016-01-01
It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has severed as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all the nodes equally without any differences. To consider a general situation in real world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of original method. In our method, the status of a node is defined as a probability of not being influenced by any of its neighbors, and an index generalized discounted degree of one node is presented to measure the expected number of nodes it can influence. Then the spreaders are selected sequentially upon its generalized discounted degree in current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods. PMID:27732681
Nyland, John; Kanouse, Zachary; Krupp, Ryan; Caborn, David; Jakob, Rolie
2011-01-01
Knee osteoarthritis is one of the most common disabling medical conditions. With longer life expectancy the number of total knee arthroplasty (TKA) procedures being performed worldwide is projected to increase dramatically. Patient education, physical activity, bodyweight levels, expectations and goals regarding the ability to continue athletic activity participation are also increasing. For the subset of motivated patients with knee osteoarthritis who have athletic activity approach type goals, early TKA may not be the best knee osteoarthritis treatment option to improve satisfaction, quality of life and outcomes. The purpose of this clinical commentary is to present a conceptual decision-making model designed to improve the knee osteoarthritis treatment intervention outcome for motivated patients with athletic activity approach type goals. The model focuses on improving knee surgeon, patient and rehabilitation clinician dialogue by rank ordering routine activities of daily living and quality of life evoking athletic activities based on knee symptom exacerbation or re-injury risk. This process should help establish realistic patient expectations and goals for a given knee osteoarthritis treatment intervention that will more likely improve self-efficacy, functional independence, satisfaction and outcomes while decreasing the failure risk associated with early TKA.
Twistor Geometry of Null Foliations in Complex Euclidean Space
NASA Astrophysics Data System (ADS)
Taghavi-Chabert, Arman
2017-01-01
We give a detailed account of the geometric correspondence between a smooth complex projective quadric hypersurface Q^n of dimension n ≥ 3, and its twistor space PT, defined to be the space of all linear subspaces of maximal dimension of Q^n. Viewing complex Euclidean space CE^n as a dense open subset of Q^n, we show how local foliations tangent to certain integrable holomorphic totally null distributions of maximal rank on CE^n can be constructed in terms of complex submanifolds of PT. The construction is illustrated by means of two examples, one involving conformal Killing spinors, the other, conformal Killing-Yano 2-forms. We focus on the odd-dimensional case, and we treat the even-dimensional case only tangentially for comparison.
An Arrival and Departure Time Predictor for Scheduling Communication in Opportunistic IoT
Pozza, Riccardo; Georgoulas, Stylianos; Moessner, Klaus; Nati, Michele; Gluhak, Alexander; Krco, Srdjan
2016-01-01
In this article, an Arrival and Departure Time Predictor (ADTP) for scheduling communication in opportunistic Internet of Things (IoT) is presented. The proposed algorithm learns about temporal patterns of encounters between IoT devices and predicts future arrival and departure times, therefore future contact durations. By relying on such predictions, a neighbour discovery scheduler is proposed, capable of jointly optimizing discovery latency and power consumption in order to maximize communication time when contacts are expected with high probability and, at the same time, saving power when contacts are expected with low probability. A comprehensive performance evaluation with different sets of synthetic and real world traces shows that ADTP performs favourably with respect to previous state of the art. This prediction framework opens opportunities for transmission planners and schedulers optimizing not only neighbour discovery, but the entire communication process. PMID:27827909
An Arrival and Departure Time Predictor for Scheduling Communication in Opportunistic IoT.
Pozza, Riccardo; Georgoulas, Stylianos; Moessner, Klaus; Nati, Michele; Gluhak, Alexander; Krco, Srdjan
2016-11-04
In this article, an Arrival and Departure Time Predictor (ADTP) for scheduling communication in opportunistic Internet of Things (IoT) is presented. The proposed algorithm learns about temporal patterns of encounters between IoT devices and predicts future arrival and departure times, therefore future contact durations. By relying on such predictions, a neighbour discovery scheduler is proposed, capable of jointly optimizing discovery latency and power consumption in order to maximize communication time when contacts are expected with high probability and, at the same time, saving power when contacts are expected with low probability. A comprehensive performance evaluation with different sets of synthetic and real world traces shows that ADTP performs favourably with respect to previous state of the art. This prediction framework opens opportunities for transmission planners and schedulers optimizing not only neighbour discovery, but the entire communication process.
Simpson, Claire L.; Wojciechowski, Robert; Ibay, Grace; Stambolian, Dwight
2011-01-01
Purpose Despite many years of research, most of the genetic factors contributing to myopia development remain unknown. Genetic studies have pointed to a strong inherited component, but although many candidate regions have been implicated, few genes have been positively identified. Methods We have previously reported 2 genomewide linkage scans in a population of 63 highly aggregated Ashkenazi Jewish families that identified a locus on chromosome 22. Here we used ordered subset analysis (OSA), conditioned on non-parametric linkage to chromosome 22 to detect other chromosomal regions which had evidence of linkage to myopia in subsets of the families, but not the overall sample. Results Strong evidence of linkage to a 19-cM linkage interval with a peak OSA nonparametric allele-sharing logarithm-of-odds (LOD) score of 3.14 on 20p12-q11.1 (ΔLOD=2.39, empirical p=0.029) was identified in a subset of 20 families that also exhibited strong evidence of linkage to chromosome 22. One other locus also presented with suggestive LOD scores >2.0 on chromosome 11p14-q14 and one locus on chromosome 6q22-q24 had an OSA LOD score=1.76 (ΔLOD=1.65, empirical p=0.02). Conclusions The chromosome 6 and 20 loci are entirely novel and appear linked in a subset of families whose myopia is known to be linked to chromosome 22. The chromosome 11 locus overlaps with the known Myopia-7 (MYP7, OMIM 609256) locus. Using ordered subset analysis allows us to find additional loci linked to myopia in subsets of families, and underlines the complex genetic heterogeneity of myopia even in highly aggregated families and genetically isolated populations such as the Ashkenazi Jews. PMID:21738393
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.
Husak, Jerry F; Fox, Stanley F
2006-09-01
To understand how selection acts on performance capacity, the ecological role of the performance trait being measured must be determined. Knowing if and when an animal uses maximal performance capacity may give insight into what specific selective pressures may be acting on performance, because individuals are expected to use close to maximal capacity only in contexts important to survival or reproductive success. Furthermore, if an ecological context is important, poor performers are expected to compensate behaviorally. To understand the relative roles of natural and sexual selection on maximal sprint speed capacity we measured maximal sprint speed of collared lizards (Crotaphytus collaris) in the laboratory and field-realized sprint speed for the same individuals in three different contexts (foraging, escaping a predator, and responding to a rival intruder). Females used closer to maximal speed while escaping predators than in the other contexts. Adult males, on the other hand, used closer to maximal speed while responding to an unfamiliar male intruder tethered within their territory. Sprint speeds during foraging attempts were far below maximal capacity for all lizards. Yearlings appeared to compensate for having lower absolute maximal capacity by using a greater percentage of their maximal capacity while foraging and escaping predators than did adults of either sex. We also found evidence for compensation within age and sex classes, where slower individuals used a greater percentage of their maximal capacity than faster individuals. However, this was true only while foraging and escaping predators and not while responding to a rival. Collared lizards appeared to choose microhabitats near refugia such that maximal speed was not necessary to escape predators. Although natural selection for predator avoidance cannot be ruled out as a selective force acting on locomotor performance in collared lizards, intrasexual selection for territory maintenance may be more important for territorial males.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, J; Yoon, D; Suh, T
2014-06-01
Purpose: The aim of our proposed system is to confirm the feasibility of extraction of two types of images from one positron emission tomography (PET) module with an insertable collimator for brain tumor treatment during the BNCT. Methods: Data from the PET module, neutron source, and collimator was entered in the Monte Carlo n-particle extended (MCNPX) source code. The coincidence events were first compiled on the PET detector, and then, the events of the prompt gamma ray were collected after neutron emission by using a single photon emission computed tomography (SPECT) collimator on the PET. The obtaining of full widthmore » at half maximum (FWHM) values from the energy spectrum was performed to collect effective events for reconstructed image. In order to evaluate the images easily, five boron regions in a brain phantom were used. The image profiles were extracted from the region of interest (ROI) of a phantom. The image was reconstructed using the ordered subsets expectation maximization (OSEM) reconstruction algorithm. The image profiles and the receiver operating characteristic (ROC) curve were compiled for quantitative analysis from the two kinds of reconstructed image. Results: The prompt gamma ray energy peak of 478 keV appeared in the energy spectrum with a FWHM of 41 keV (6.4%). On the basis of the ROC curve in Region A to Region E, the differences in the area under the curve (AUC) of the PET and SPECT images were found to be 10.2%, 11.7%, 8.2% (center, Region C), 12.6%, and 10.5%, respectively. Conclusion: We attempted to acquire the PET and SPECT images simultaneously using only PET without an additional isotope. Single photon images were acquired using an insertable collimator on a PET detector. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP)(Grant No.2009 00420) and the Radiation Technology R and D program (Grant No.2013M2A2A7043498), Republic of Korea.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Michael G.
The project seeks to investigate the mechanism by which CBMs potentiate the activity of glycoside hydrolases against complete plant cell walls. The project is based on the hypothesis that the wide range of CBMs present in bacterial enzymes maximize the potential target substrates by directing the cognate enzymes not only to different regions of a specific plant cell wall, but also increases the range of plant cell walls that can be degraded. In addition to maximizing substrate access, it was also proposed that CBMs can target specific subsets of hydrolases with complementary activities to the same region of the plantmore » cell wall, thereby maximizing the synergistic interactions between these enzymes. This synergy is based on the premise that the hydrolysis of a specific polysaccharide will increase the access of closely associated polymers to enzyme attack. In addition, it is unclear whether the catalytic module and appended CBM of modular enzymes have evolved unique complementary activities.« less
Reward-prospect interacts with trial-by-trial preparation for potential distraction
Marini, Francesco; van den Berg, Berry; Woldorff, Marty G.
2015-01-01
When attending for impending visual stimuli, cognitive systems prepare to identify relevant information while ignoring irrelevant, potentially distracting input. Recent work (Marini et al., 2013) showed that a supramodal distracter-filtering mechanism is invoked in blocked designs involving expectation of possible distracter stimuli, although this entails a cost (distraction-filtering cost) on speeded performance when distracters are expected but not presented. Here we used an arrow-flanker task to study whether an analogous cost, potentially reflecting the recruitment of a specific distraction-filtering mechanism, occurs dynamically when potential distraction is cued trial-to-trial (cued distracter-expectation cost). In order to promote the maximal utilization of cue information by participants, in some experimental conditions the cue also signaled the possibility of earning a monetary reward for fast and accurate performance. This design also allowed us to investigate the interplay between anticipation for distracters and anticipation of reward, which is known to engender attentional preparation. Only in reward contexts did participants show a cued distracter-expectation cost, which was larger with higher reward prospect and when anticipation for both distracters and reward were manipulated trial-to-trial. Thus, these results indicate that reward prospect interacts with the distracter expectation during trial-by-trial preparatory processes for potential distraction. These findings highlight how reward guides cue-driven attentional preparation. PMID:26180506
Reward-prospect interacts with trial-by-trial preparation for potential distraction.
Marini, Francesco; van den Berg, Berry; Woldorff, Marty G
2015-02-01
When attending for impending visual stimuli, cognitive systems prepare to identify relevant information while ignoring irrelevant, potentially distracting input. Recent work (Marini et al., 2013) showed that a supramodal distracter-filtering mechanism is invoked in blocked designs involving expectation of possible distracter stimuli, although this entails a cost ( distraction-filtering cost ) on speeded performance when distracters are expected but not presented. Here we used an arrow-flanker task to study whether an analogous cost, potentially reflecting the recruitment of a specific distraction-filtering mechanism, occurs dynamically when potential distraction is cued trial-to-trial ( cued distracter-expectation cost ). In order to promote the maximal utilization of cue information by participants, in some experimental conditions the cue also signaled the possibility of earning a monetary reward for fast and accurate performance. This design also allowed us to investigate the interplay between anticipation for distracters and anticipation of reward, which is known to engender attentional preparation. Only in reward contexts did participants show a cued distracter-expectation cost, which was larger with higher reward prospect and when anticipation for both distracters and reward were manipulated trial-to-trial. Thus, these results indicate that reward prospect interacts with the distracter expectation during trial-by-trial preparatory processes for potential distraction. These findings highlight how reward guides cue-driven attentional preparation.
Natural parameter values for generalized gene adjacency.
Yang, Zhenyu; Sankoff, David
2010-09-01
Given the gene orders in two modern genomes, it may be difficult to decide if some genes are close enough in both genomes to infer some ancestral proximity or some functional relationship. Current methods all depend on arbitrary parameters. We explore a class of gene proximity criteria and find two kinds of natural values for their parameters. One kind has to do with the parameter value where the expected information contained in two genomes about each other is maximized. The other kind of natural value has to do with parameter values beyond which all genes are clustered. We analyze these using combinatorial and probabilistic arguments as well as simulations.
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-03-16
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an [Formula: see text]-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-01-01
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem. PMID:25844012
Confronting Diversity in the Community College Classroom: Six Maxims for Good Teaching.
ERIC Educational Resources Information Center
Gillett-Karam, Rosemary
1992-01-01
Emphasizes the leadership role of community college faculty in developing critical teaching strategies focusing attention on the needs of women and minorities. Describes six maxims of teaching excellence: engaging students' desire to learn, increasing opportunities, eliminating obstacles, empowering students through high expectations, offering…
Core Hunter 3: flexible core subset selection.
De Beukelaer, Herman; Davenport, Guy F; Fack, Veerle
2018-05-31
Core collections provide genebank curators and plant breeders a way to reduce size of their collections and populations, while minimizing impact on genetic diversity and allele frequency. Many methods have been proposed to generate core collections, often using distance metrics to quantify the similarity of two accessions, based on genetic marker data or phenotypic traits. Core Hunter is a multi-purpose core subset selection tool that uses local search algorithms to generate subsets relying on one or more metrics, including several distance metrics and allelic richness. In version 3 of Core Hunter (CH3) we have incorporated two new, improved methods for summarizing distances to quantify diversity or representativeness of the core collection. A comparison of CH3 and Core Hunter 2 (CH2) showed that these new metrics can be effectively optimized with less complex algorithms, as compared to those used in CH2. CH3 is more effective at maximizing the improved diversity metric than CH2, still ensures a high average and minimum distance, and is faster for large datasets. Using CH3, a simple stochastic hill-climber is able to find highly diverse core collections, and the more advanced parallel tempering algorithm further increases the quality of the core and further reduces variability across independent samples. We also evaluate the ability of CH3 to simultaneously maximize diversity, and either representativeness or allelic richness, and compare the results with those of the GDOpt and SimEli methods. CH3 can sample equally representative cores as GDOpt, which was specifically designed for this purpose, and is able to construct cores that are simultaneously more diverse, and either are more representative or have higher allelic richness, than those obtained by SimEli. In version 3, Core Hunter has been updated to include two new core subset selection metrics that construct cores for representativeness or diversity, with improved performance. It combines and outperforms the strengths of other methods, as it (simultaneously) optimizes a variety of metrics. In addition, CH3 is an improvement over CH2, with the option to use genetic marker data or phenotypic traits, or both, and improved speed. Core Hunter 3 is freely available on http://www.corehunter.org .
Boise State's Idaho Eclipse Outreach Program
NASA Astrophysics Data System (ADS)
Davis, Karan; Jackson, Brian
2017-10-01
The 2017 total solar eclipse is an unprecedented opportunity for astronomical education throughout the continental United States. With the path of totality passing through 14 states, from Oregon to South Carolina, the United States is expecting visitors from all around the world. Due to the likelihood of clear skies, Idaho was a popular destination for eclipse-chasers. In spite of considerable enthusiasm and interest by the general population, the resources for STEM outreach in the rural Pacific Northwest are very limited. In order to help prepare Idaho for the eclipse, we put together a crowdfunding campaign through the university and raised over $10,000. Donors received eclipse shades as well as information about the eclipse specific to Idaho. Idaho expects 500,000 visitors, which could present a problem for the many small, rural towns scattered across the path of totality. In order to help prepare and equip the public for the solar eclipse, we conducted a series of site visits to towns in and near the path of totality throughout Idaho. To maximize the impact of this effort, the program included several partnerships with local educational and community organizations and a focus on the sizable refugee and low-income populations in Idaho, with considerable attendance at most events.
Redundant variables and Granger causality
NASA Astrophysics Data System (ADS)
Angelini, L.; de Tommaso, M.; Marinazzo, D.; Nitti, L.; Pellicoro, M.; Stramaglia, S.
2010-03-01
We discuss the use of multivariate Granger causality in presence of redundant variables: the application of the standard analysis, in this case, leads to under estimation of causalities. Using the un-normalized version of the causality index, we quantitatively develop the notions of redundancy and synergy in the frame of causality and propose two approaches to group redundant variables: (i) for a given target, the remaining variables are grouped so as to maximize the total causality and (ii) the whole set of variables is partitioned to maximize the sum of the causalities between subsets. We show the application to a real neurological experiment, aiming to a deeper understanding of the physiological basis of abnormal neuronal oscillations in the migraine brain. The outcome by our approach reveals the change in the informational pattern due to repetitive transcranial magnetic stimulations.
Chemical similarity and local community assembly in the species rich tropical genus Piper.
Salazar, Diego; Jaramillo, M Alejandra; Marquis, Robert J
2016-11-01
Community ecologists have strived to find mechanisms that mediate the assembly of natural communities. Recent evidence suggests that natural enemies could play an important role in the assembly of hyper-diverse tropical plant systems. Classic ecological theory predicts that in order for coexistence to occur, species differences must be maximized across biologically important niche dimensions. For plant-herbivore interactions, it has been recently suggested that, within a particular community, plant species that maximize the difference in chemical defense profiles compared to neighboring taxa will have a relative competitive advantage. Here we tested the hypothesis that plant chemical diversity can affect local community composition in the hyper-diverse genus Piper at a lowland wet forest location in Costa Rica. We first characterized the chemical composition of 27 of the most locally abundant species of Piper. We then tested whether species with different chemical compositions were more likely to coexist. Finally, we assessed the degree to which Piper phylogenetic relationships are related to differences in secondary chemical composition and community assembly. We found that, on average, co-occurring species were more likely to differ in chemical composition than expected by chance. Contrary to expectations, there was no phylogenetic signal for overall secondary chemical composition. In addition we found that species in local communities were, on average, more phylogenetically closely related than expected by chance, suggesting that functional traits other than those measured here also influence local assembly. We propose that selection by herbivores for divergent chemistries between closely related species facilitates the coexistence of a high diversity of congeneric taxa via apparent competition. © 2016 by the Ecological Society of America.
INDEXABILITY AND OPTIMAL INDEX POLICIES FOR A CLASS OF REINITIALISING RESTLESS BANDITS.
Villar, Sofía S
2016-01-01
Motivated by a class of Partially Observable Markov Decision Processes with application in surveillance systems in which a set of imperfectly observed state processes is to be inferred from a subset of available observations through a Bayesian approach, we formulate and analyze a special family of multi-armed restless bandit problems. We consider the problem of finding an optimal policy for observing the processes that maximizes the total expected net rewards over an infinite time horizon subject to the resource availability. From the Lagrangian relaxation of the original problem, an index policy can be derived, as long as the existence of the Whittle index is ensured. We demonstrate that such a class of reinitializing bandits in which the projects' state deteriorates while active and resets to its initial state when passive until its completion possesses the structural property of indexability and we further show how to compute the index in closed form. In general, the Whittle index rule for restless bandit problems does not achieve optimality. However, we show that the proposed Whittle index rule is optimal for the problem under study in the case of stochastically heterogenous arms under the expected total criterion, and it is further recovered by a simple tractable rule referred to as the 1-limited Round Robin rule. Moreover, we illustrate the significant suboptimality of other widely used heuristic: the Myopic index rule, by computing in closed form its suboptimality gap. We present numerical studies which illustrate for the more general instances the performance advantages of the Whittle index rule over other simple heuristics.
INDEXABILITY AND OPTIMAL INDEX POLICIES FOR A CLASS OF REINITIALISING RESTLESS BANDITS
Villar, Sofía S.
2016-01-01
Motivated by a class of Partially Observable Markov Decision Processes with application in surveillance systems in which a set of imperfectly observed state processes is to be inferred from a subset of available observations through a Bayesian approach, we formulate and analyze a special family of multi-armed restless bandit problems. We consider the problem of finding an optimal policy for observing the processes that maximizes the total expected net rewards over an infinite time horizon subject to the resource availability. From the Lagrangian relaxation of the original problem, an index policy can be derived, as long as the existence of the Whittle index is ensured. We demonstrate that such a class of reinitializing bandits in which the projects’ state deteriorates while active and resets to its initial state when passive until its completion possesses the structural property of indexability and we further show how to compute the index in closed form. In general, the Whittle index rule for restless bandit problems does not achieve optimality. However, we show that the proposed Whittle index rule is optimal for the problem under study in the case of stochastically heterogenous arms under the expected total criterion, and it is further recovered by a simple tractable rule referred to as the 1-limited Round Robin rule. Moreover, we illustrate the significant suboptimality of other widely used heuristic: the Myopic index rule, by computing in closed form its suboptimality gap. We present numerical studies which illustrate for the more general instances the performance advantages of the Whittle index rule over other simple heuristics. PMID:27212781
Iron Low-ionization Broad Absorption Line quasars - the missing link in galaxy evolution?
NASA Astrophysics Data System (ADS)
Lawther, Daniel Peter; Vestergaard, Marianne; Fan, Xiaohui
2015-08-01
A peculiar and rare type of quasar with strong low-ionization iron absorption lines - known as FeLoBAL quasars - may be the missing link between star forming (or starbursting) galaxies and quasars. They are hypothesized to be quasars breaking out of their dense birth blanket of gas and dust. In that case they are expected to have high rates of star formation in their galaxies. With the aim of addressing and settling this issue we have studied deep Hubble Space Telescope restframe UV and optical imaging of a subset of such quasars in order to characterize the host galaxy properties of these quasars. We present the results of this study along with simulations to characterize the uncertainties and robustness of our results.
The statistics of gravitational lenses. III - Astrophysical consequences of quasar lensing
NASA Technical Reports Server (NTRS)
Ostriker, J. P.; Vietri, M.
1986-01-01
The method of Schmidt and Green (1983) for calculating the luminosity function of quasars is combined with gravitational-lensing theory to compute expected properties of lensed systems. Multiple quasar images produced by galaxies are of order 0.001 of the observed quasars, with the numbers over the whole sky calculated to be (0.86, 120, 1600) to limiting B magnitudes of (16, 19, 22). The amount of 'false evolution' is small except for an interesting subset of apparently bright, large-redshift objects for which minilensing by starlike objects may be important. Some of the BL Lac objects may be in this category, with the galaxy identified as the parent object really a foreground object within which stars have lensed a background optically violent variable quasar.
A linearization of quantum channels
NASA Astrophysics Data System (ADS)
Crowder, Tanner
2015-06-01
Because the quantum channels form a compact, convex set, we can express any quantum channel as a convex combination of extremal channels. We give a Euclidean representation for the channels whose inverses are also valid channels; these are a subset of the extreme points. They form a compact, connected Lie group, and we calculate its Lie algebra. Lastly, we calculate a maximal torus for the group and provide a constructive approach to decomposing any invertible channel into a product of elementary channels.
The Self in Decision Making and Decision Implementation.
ERIC Educational Resources Information Center
Beach, Lee Roy; Mitchell, Terence R.
Since the early 1950's the principal prescriptive model in the psychological study of decision making has been maximization of Subjective Expected Utility (SEU). This SEU maximization has come to be regarded as a description of how people go about making decisions. However, while observed decision processes sometimes resemble the SEU model,…
Abundance of live 244Pu in deep-sea reservoirs on Earth points to rarity of actinide nucleosynthesis
Wallner, A.; Faestermann, T.; Feige, J.; Feldstein, C.; Knie, K.; Korschinek, G.; Kutschera, W.; Ofan, A.; Paul, M.; Quinto, F.; Rugel, G.; Steier, P.
2015-01-01
Half of the heavy elements including all actinides are produced in r-process nucleosynthesis, whose sites and history remain a mystery. If continuously produced, the Interstellar Medium is expected to build-up a quasi-steady state of abundances of short-lived nuclides (with half-lives ≤100 My), including actinides produced in r-process nucleosynthesis. Their existence in today’s interstellar medium would serve as a radioactive clock and would establish that their production was recent. In particular 244Pu, a radioactive actinide nuclide (half-life=81 My), can place strong constraints on recent r-process frequency and production yield. Here we report the detection of live interstellar 244Pu, archived in Earth’s deep-sea floor during the last 25 My, at abundances lower than expected from continuous production in the Galaxy by about 2 orders of magnitude. This large discrepancy may signal a rarity of actinide r-process nucleosynthesis sites, compatible with neutron-star mergers or with a small subset of actinide-producing supernovae. PMID:25601158
Brand, Samuel P C; Keeling, Matt J
2017-03-01
It is a long recognized fact that climatic variations, especially temperature, affect the life history of biting insects. This is particularly important when considering vector-borne diseases, especially in temperate regions where climatic fluctuations are large. In general, it has been found that most biological processes occur at a faster rate at higher temperatures, although not all processes change in the same manner. This differential response to temperature, often considered as a trade-off between onward transmission and vector life expectancy, leads to the total transmission potential of an infected vector being maximized at intermediate temperatures. Here we go beyond the concept of a static optimal temperature, and mathematically model how realistic temperature variation impacts transmission dynamics. We use bluetongue virus (BTV), under UK temperatures and transmitted by Culicoides midges, as a well-studied example where temperature fluctuations play a major role. We first consider an optimal temperature profile that maximizes transmission, and show that this is characterized by a warm day to maximize biting followed by cooler weather to maximize vector life expectancy. This understanding can then be related to recorded representative temperature patterns for England, the UK region which has experienced BTV cases, allowing us to infer historical transmissibility of BTV, as well as using forecasts of climate change to predict future transmissibility. Our results show that when BTV first invaded northern Europe in 2006 the cumulative transmission intensity was higher than any point in the last 50 years, although with climate change such high risks are the expected norm by 2050. Such predictions would indicate that regular BTV epizootics should be expected in the UK in the future. © 2017 The Author(s).
On the role of budget sufficiency, cost efficiency, and uncertainty in species management
van der Burg, Max Post; Bly, Bartholomew B.; Vercauteren, Tammy; Grand, James B.; Tyre, Andrew J.
2014-01-01
Many conservation planning frameworks rely on the assumption that one should prioritize locations for management actions based on the highest predicted conservation value (i.e., abundance, occupancy). This strategy may underperform relative to the expected outcome if one is working with a limited budget or the predicted responses are uncertain. Yet, cost and tolerance to uncertainty rarely become part of species management plans. We used field data and predictive models to simulate a decision problem involving western burrowing owls (Athene cunicularia hypugaea) using prairie dog colonies (Cynomys ludovicianus) in western Nebraska. We considered 2 species management strategies: one maximized abundance and the other maximized abundance in a cost-efficient way. We then used heuristic decision algorithms to compare the 2 strategies in terms of how well they met a hypothetical conservation objective. Finally, we performed an info-gap decision analysis to determine how these strategies performed under different budget constraints and uncertainty about owl response. Our results suggested that when budgets were sufficient to manage all sites, the maximizing strategy was optimal and suggested investing more in expensive actions. This pattern persisted for restricted budgets up to approximately 50% of the sufficient budget. Below this budget, the cost-efficient strategy was optimal and suggested investing in cheaper actions. When uncertainty in the expected responses was introduced, the strategy that maximized abundance remained robust under a sufficient budget. Reducing the budget induced a slight trade-off between expected performance and robustness, which suggested that the most robust strategy depended both on one's budget and tolerance to uncertainty. Our results suggest that wildlife managers should explicitly account for budget limitations and be realistic about their expected levels of performance.
Bureaucracy, institutional theory and institutionaucracy: applications to the hospital industry.
Bolon, D S
1998-01-01
The health care industry is experiencing rapid change and uncertainty. Hospitals, in particular, are redesigning structures and processes in order to maximize efficiencies and remain economically viable. This article uses two organizational theory perspectives (bureaucracy and institutional theory) to examine many of the trends and transitions which are occurring throughout the hospital industry. It suggests that many of the key tenets of bureaucracy (rationality, efficiency, productivity, control, etc.) have been incorporated into the institutional environment as normative expectations. This synthesis or blending of these two perspectives is labeled institutionaucracy, implying that, as productivity and efficiency considerations become institutionalized, hospitals conforming to such operational standards will gain legitimacy and additional resources from their environment.
CARHTA GENE: multipopulation integrated genetic and radiation hybrid mapping.
de Givry, Simon; Bouchez, Martin; Chabrier, Patrick; Milan, Denis; Schiex, Thomas
2005-04-15
CAR(H)(T)A GENE: is an integrated genetic and radiation hybrid (RH) mapping tool which can deal with multiple populations, including mixtures of genetic and RH data. CAR(H)(T)A GENE: performs multipoint maximum likelihood estimations with accelerated expectation-maximization algorithms for some pedigrees and has sophisticated algorithms for marker ordering. Dedicated heuristics for framework mapping are also included. CAR(H)(T)A GENE: can be used as a C++ library, through a shell command and a graphical interface. The XML output for companion tools is integrated. The program is available free of charge from www.inra.fr/bia/T/CarthaGene for Linux, Windows and Solaris machines (with Open Source). tschiex@toulouse.inra.fr.
Bois, John P; Geske, Jeffrey B; Foley, Thomas A; Ommen, Steve R; Pellikka, Patricia A
2017-02-15
Left ventricular (LV) wall thickness is a prognostic marker in hypertrophic cardiomyopathy (HC). LV wall thickness ≥30 mm (massive hypertrophy) is independently associated with sudden cardiac death. Presence of massive hypertrophy is used to guide decision making for cardiac defibrillator implantation. We sought to determine whether measurements of maximal LV wall thickness differ between cardiac magnetic resonance imaging (MRI) and transthoracic echocardiography (TTE). Consecutive patients were studied who had HC without previous septal ablation or myectomy and underwent both cardiac MRI and TTE at a single tertiary referral center. Reported maximal LV wall thickness was compared between the imaging techniques. Patients with ≥1 technique reporting massive hypertrophy received subset analysis. In total, 618 patients were evaluated from January 1, 2003, to December 21, 2012 (mean [SD] age, 53 [15] years; 381 men [62%]). In 75 patients (12%), reported maximal LV wall thickness was identical between MRI and TTE. Median difference in reported maximal LV wall thickness between the techniques was 3 mm (maximum difference, 17 mm). Of the 63 patients with ≥1 technique measuring maximal LV wall thickness ≥30 mm, 44 patients (70%) had discrepant classification regarding massive hypertrophy. MRI identified 52 patients (83%) with massive hypertrophy; TTE, 30 patients (48%). Although guidelines recommend MRI or TTE imaging to assess cardiac anatomy in HC, this study shows discrepancy between the techniques for maximal reported LV wall thickness assessment. In conclusion, because this measure clinically affects prognosis and therapeutic decision making, efforts to resolve these discrepancies are critical. Copyright © 2016 Elsevier Inc. All rights reserved.
High-resolution imaging of the large non-human primate brain using microPET: a feasibility study
NASA Astrophysics Data System (ADS)
Naidoo-Variawa, S.; Hey-Cunningham, A. J.; Lehnert, W.; Kench, P. L.; Kassiou, M.; Banati, R.; Meikle, S. R.
2007-11-01
The neuroanatomy and physiology of the baboon brain closely resembles that of the human brain and is well suited for evaluating promising new radioligands in non-human primates by PET and SPECT prior to their use in humans. These studies are commonly performed on clinical scanners with 5 mm spatial resolution at best, resulting in sub-optimal images for quantitative analysis. This study assessed the feasibility of using a microPET animal scanner to image the brains of large non-human primates, i.e. papio hamadryas (baboon) at high resolution. Factors affecting image accuracy, including scatter, attenuation and spatial resolution, were measured under conditions approximating a baboon brain and using different reconstruction strategies. Scatter fraction measured 32% at the centre of a 10 cm diameter phantom. Scatter correction increased image contrast by up to 21% but reduced the signal-to-noise ratio. Volume resolution was superior and more uniform using maximum a posteriori (MAP) reconstructed images (3.2-3.6 mm3 FWHM from centre to 4 cm offset) compared to both 3D ordered subsets expectation maximization (OSEM) (5.6-8.3 mm3) and 3D reprojection (3DRP) (5.9-9.1 mm3). A pilot 18F-2-fluoro-2-deoxy-d-glucose ([18F]FDG) scan was performed on a healthy female adult baboon. The pilot study demonstrated the ability to adequately resolve cortical and sub-cortical grey matter structures in the baboon brain and improved contrast when images were corrected for attenuation and scatter and reconstructed by MAP. We conclude that high resolution imaging of the baboon brain with microPET is feasible with appropriate choices of reconstruction strategy and corrections for degrading physical effects. Further work to develop suitable correction algorithms for high-resolution large primate imaging is warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naseri, M; Rajabi, H; Wang, J
Purpose: Respiration causes lesion smearing, image blurring and quality degradation, affecting lesion contrast and the ability to define correct lesion size. The spatial resolution of current multi pinhole SPECT (MPHS) scanners is sub-millimeter. Therefore, the effect of motion is more noticeable in comparison to conventional SPECT scanner. Gated imaging aims to reduce motion artifacts. A major issue in gating is the lack of statistics and individual reconstructed frames are noisy. The increased noise in each frame, deteriorates the quantitative accuracy of the MPHS Images. The objective of this work, is to enhance the image quality in 4D-MPHS imaging, by 4Dmore » image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid Demons registration. The algorithm is based on the motion-incorporated version of ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm is capable to make full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm a simulation study was conducted. A fast ray tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in liver in eight different respiratory phases. To evaluate the 4D-OSEM algorithm potential, tumor to liver activity ratio was compared with other image reconstruction methods including 3D-MPHS and post reconstruction registered with Demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS, motion blurring artifacts are present, leading to overestimation of the tumor size and 24% tumor contrast underestimation. This error reduced to 16% and 10% for post reconstruction registration methods and 4D-OSEM respectively. Conclusion: 4D-OSEM method can be used for motion correction in 4D-MPHS. The statistics and quantification are improved since all projection data are combined together to update the image.« less
MRI-Based Nonrigid Motion Correction in Simultaneous PET/MRI
Chun, Se Young; Reese, Timothy G.; Ouyang, Jinsong; Guerin, Bastien; Catana, Ciprian; Zhu, Xuping; Alpert, Nathaniel M.; El Fakhri, Georges
2014-01-01
Respiratory and cardiac motion is the most serious limitation to whole-body PET, resulting in spatial resolution close to 1 cm. Furthermore, motion-induced inconsistencies in the attenuation measurements often lead to significant artifacts in the reconstructed images. Gating can remove motion artifacts at the cost of increased noise. This paper presents an approach to respiratory motion correction using simultaneous PET/MRI to demonstrate initial results in phantoms, rabbits, and nonhuman primates and discusses the prospects for clinical application. Methods Studies with a deformable phantom, a free-breathing primate, and rabbits implanted with radioactive beads were performed with simultaneous PET/MRI. Motion fields were estimated from concurrently acquired tagged MR images using 2 B-spline nonrigid image registration methods and incorporated into a PET list-mode ordered-subsets expectation maximization algorithm. Using the measured motion fields to transform both the emission data and the attenuation data, we could use all the coincidence data to reconstruct any phase of the respiratory cycle. We compared the resulting SNR and the channelized Hotelling observer (CHO) detection signal-to-noise ratio (SNR) in the motion-corrected reconstruction with the results obtained from standard gating and uncorrected studies. Results Motion correction virtually eliminated motion blur without reducing SNR, yielding images with SNR comparable to those obtained by gating with 5–8 times longer acquisitions in all studies. The CHO study in dynamic phantoms demonstrated a significant improvement (166%–276%) in lesion detection SNR with MRI-based motion correction as compared with gating (P < 0.001). This improvement was 43%–92% for large motion compared with lesion detection without motion correction (P < 0.001). CHO SNR in the rabbit studies confirmed these results. Conclusion Tagged MRI motion correction in simultaneous PET/MRI significantly improves lesion detection compared with respiratory gating and no motion correction while reducing radiation dose. In vivo primate and rabbit studies confirmed the improvement in PET image quality and provide the rationale for evaluation in simultaneous whole-body PET/MRI clinical studies. PMID:22743250
NASA Astrophysics Data System (ADS)
Baghaei, H.; Wong, Wai-Hoi; Uribe, J.; Li, Hongdi; Wang, Yu; Liu, Yaqiang; Xing, Tao; Ramirez, R.; Xie, Shuping; Kim, Soonseok
2004-10-01
We compared two fully three-dimensional (3-D) image reconstruction algorithms and two 3-D rebinning algorithms followed by reconstruction with a two-dimensional (2-D) filtered-backprojection algorithm for 3-D positron emission tomography (PET) imaging. The two 3-D image reconstruction algorithms were ordered-subsets expectation-maximization (3D-OSEM) and 3-D reprojection (3DRP) algorithms. The two rebinning algorithms were Fourier rebinning (FORE) and single slice rebinning (SSRB). The 3-D projection data used for this work were acquired with a high-resolution PET scanner (MDAPET) with an intrinsic transaxial resolution of 2.8 mm. The scanner has 14 detector rings covering an axial field-of-view of 38.5 mm. We scanned three phantoms: 1) a uniform cylindrical phantom with inner diameter of 21.5 cm; 2) a uniform 11.5-cm cylindrical phantom with four embedded small hot lesions with diameters of 3, 4, 5, and 6 mm; and 3) the 3-D Hoffman brain phantom with three embedded small hot lesion phantoms with diameters of 3, 5, and 8.6 mm in a warm background. Lesions were placed at different radial and axial distances. We evaluated the different reconstruction methods for MDAPET camera by comparing the noise level of images, contrast recovery, and hot lesion detection, and visually compared images. We found that overall the 3D-OSEM algorithm, especially when images post filtered with the Metz filter, produced the best results in terms of contrast-noise tradeoff, and detection of hot spots, and reproduction of brain phantom structures. Even though the MDAPET camera has a relatively small maximum axial acceptance (/spl plusmn/5 deg), images produced with the 3DRP algorithm had slightly better contrast recovery and reproduced the structures of the brain phantom slightly better than the faster 2-D rebinning methods.
Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li
2018-01-01
Kinetic modeling of dynamic 11 C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on PET reconstruction methods. This study aims to investigate the impacts of reconstruction algorithms on the quantitative analysis of dynamic 11 C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients ( N = 24) underwent 11 C-acetate dynamic PET imaging after low dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in myocardium and blood pools of ventricles were generated from the dynamic image series. Kinetic parameters K 1 and k 2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11 C-acetate PET images. Significant image quality improvement was found in the images reconstructed using iterative OSEM-type algorithms (OSME, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K 1 and k 2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, for the correlation analysis, OSEM reconstruction presented relatively higher residual in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, and TOF and TPSF reconstruction were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11 C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP. OSEM was relatively less reliable. Both TOF and TPSF were recommended for cardiac 11 C-acetate kinetic analysis.
Evaluation of a silicon photomultiplier PET insert for simultaneous PET and MR imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Guen Bae; Kim, Kyeong Yun; Yoon, Hyun Suk
2016-01-15
Purpose: In this study, the authors present a silicon photomultiplier (SiPM)-based positron emission tomography (PET) insert dedicated to small animal imaging with high system performance and robustness to temperature change. Methods: The insert consists of 64 LYSO-SiPM detector blocks arranged in 4 rings of 16 detector blocks to yield a ring diameter of 64 mm and axial field of view of 55 mm. Each detector block consists of a 9 × 9 array of LYSO crystals (1.2 × 1.2 × 10 mm{sup 3}) and a monolithic 4 × 4 SiPM array. The temperature of each monolithic SiPM is monitored, andmore » the proper bias voltage is applied according to the temperature reading in real time to maintain uniform performance. The performance of this PET insert was characterized using National Electrical Manufacturers Association NU 4-2008 standards, and its feasibility was evaluated through in vivo mouse imaging studies. Results: The PET insert had a peak sensitivity of 3.4% and volumetric spatial resolutions of 1.92 (filtered back projection) and 0.53 (ordered subset expectation maximization) mm{sup 3} at center. The peak noise equivalent count rate and scatter fraction were 42.4 kcps at 15.08 MBq and 16.5%, respectively. By applying the real-time bias voltage adjustment, an energy resolution of 14.2% ± 0.3% was maintained and the count rate varied ≤1.2%, despite severe temperature changes (10–30 °C). The mouse imaging studies demonstrate that this PET insert can produce high-quality images useful for imaging studies on the small animals. Conclusions: The developed MR-compatible PET insert is designed for insertion into a narrow-bore magnetic resonance imaging scanner, and it provides excellent imaging performance for PET/MR preclinical studies.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maneru, F; Gracia, M; Gallardo, N
2015-06-15
Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with {sup 90}Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and dose in liver and tumor are calculated for RE patients treated with Sirtex Medical miscrospheres at our center. Data obtained from the previous simulation of treatment were the basis for calculations: Tc-99m maggregated albumin SPECT-CT study in a gammacamera (Infinia, General Electric Healthcare.). Attenuation correction and ordered-subsets expectation maximization (OSEM) algorithm were applied.For VSV calculations, both SPECT and CT were exported frommore » the gammacamera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical systems). Convolution of activity matrix and local dose deposition kernel (S values) was implemented with an in-house developed software based on Python code. The kernel was downloaded from www.medphys.it. Final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to the high dependence on its contouring. Small lesion size, hot spots in health tissue and blurred limits can affect a lot the dose distribution in tumors. Extra work includes: export and import of images and other dicom files, create and calculate a dummy plan of external radiotherapy, convolution calculation and evaluation of the dose distribution with dicompyler. Total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for patient. The total process is short enough to carry it out the same day of simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalantari, F; Wang, J; Li, T
2015-06-15
Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) Reconstruction algorithms do not make full use of all projections statistics; and 2) Image registration between noisy images can Result in poor alignment. In this study we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) method for cone beam CT for motion estimation/correction in 4D-PET. Methods: Modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM- TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data using Demons derivedmore » deformation vector fields (DVFs) as initial. Motion model update is done to obtain an optimal set of DVFs between the pmc-PET and other phases by matching the forward projection of the deformed pmc-PET and measured projections of other phases. Using updated DVFs, OSEM- TV image reconstruction is repeated and new DVFs are estimated based on updated images. 4D XCAT phantom with typical FDG biodistribution and a 10mm diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more than 5 times overestimation of the tumor size and 54% tumor to lung contrast ratio underestimation. This error reduced to 37% and 20% for post reconstruction registration methods and SMEIR respectively. Conclusion: SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics is greatly improved since all projection data are combined together to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to smoothness control parameters in the DVF estimation step.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimationmore » of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step permitted to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronfeld, Andrea; Müller-Forell, Wibke; Buchholz, Hans-Georg
Purpose: Image registration is one prerequisite for the analysis of brain regions in magnetic-resonance-imaging (MRI) or positron-emission-tomography (PET) studies. Diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) is a nonlinear, diffeomorphic algorithm for image registration and construction of image templates. The goal of this small animal study was (1) the evaluation of a MRI and calculation of several cannabinoid type 1 (CB1) receptor PET templates constructed using DARTEL and (2) the analysis of the image registration accuracy of MR and PET images to their DARTEL templates with reference to analytical and iterative PET reconstruction algorithms. Methods: Five male Sprague Dawleymore » rats were investigated for template construction using MRI and [{sup 18}F]MK-9470 PET for CB1 receptor representation. PET images were reconstructed using the algorithms filtered back-projection, ordered subset expectation maximization in 2D, and maximum a posteriori in 3D. Landmarks were defined on each MR image, and templates were constructed under different settings, i.e., based on different tissue class images [gray matter (GM), white matter (WM), and GM + WM] and regularization forms (“linear elastic energy,” “membrane energy,” and “bending energy”). Registration accuracy for MRI and PET templates was evaluated by means of the distance between landmark coordinates. Results: The best MRI template was constructed based on gray and white matter images and the regularization form linear elastic energy. In this case, most distances between landmark coordinates were <1 mm. Accordingly, MRI-based spatial normalization was most accurate, but results of the PET-based spatial normalization were quite comparable. Conclusions: Image registration using DARTEL provides a standardized and automatic framework for small animal brain data analysis. The authors were able to show that this method works with high reliability and validity. Using DARTEL templates together with nonlinear registration algorithms allows for accurate spatial normalization of combined MRI/PET or PET-only studies.« less
Anizan, Nadège; Carlier, Thomas; Hindorf, Cecilia; Barbet, Jacques; Bardiès, Manuel
2012-02-13
Noninvasive multimodality imaging is essential for preclinical evaluation of the biodistribution and pharmacokinetics of radionuclide therapy and for monitoring tumor response. Imaging with nonstandard positron-emission tomography [PET] isotopes such as 124I is promising in that context but requires accurate activity quantification. The decay scheme of 124I implies an optimization of both acquisition settings and correction processing. The PET scanner investigated in this study was the Inveon PET/CT system dedicated to small animal imaging. The noise equivalent count rate [NECR], the scatter fraction [SF], and the gamma-prompt fraction [GF] were used to determine the best acquisition parameters for mouse- and rat-sized phantoms filled with 124I. An image-quality phantom as specified by the National Electrical Manufacturers Association NU 4-2008 protocol was acquired and reconstructed with two-dimensional filtered back projection, 2D ordered-subset expectation maximization [2DOSEM], and 3DOSEM with maximum a posteriori [3DOSEM/MAP] algorithms, with and without attenuation correction, scatter correction, and gamma-prompt correction (weighted uniform distribution subtraction). Optimal energy windows were established for the rat phantom (390 to 550 keV) and the mouse phantom (400 to 590 keV) by combining the NECR, SF, and GF results. The coincidence time window had no significant impact regarding the NECR curve variation. Activity concentration of 124I measured in the uniform region of an image-quality phantom was underestimated by 9.9% for the 3DOSEM/MAP algorithm with attenuation and scatter corrections, and by 23% with the gamma-prompt correction. Attenuation, scatter, and gamma-prompt corrections decreased the residual signal in the cold insert. The optimal energy windows were chosen with the NECR, SF, and GF evaluation. Nevertheless, an image quality and an activity quantification assessment were required to establish the most suitable reconstruction algorithm and corrections for 124I small animal imaging.
Yamaguchi, Shotaro; Wagatsuma, Kei; Miwa, Kenta; Ishii, Kenji; Inoue, Kazumasa; Fukushi, Masahiro
2018-03-01
The Bayesian penalized-likelihood reconstruction algorithm (BPL), Q.Clear, uses relative difference penalty as a regularization function to control image noise and the degree of edge-preservation in PET images. The present study aimed to determine the effects of suppression on edge artifacts due to point-spread-function (PSF) correction using a Q.Clear. Spheres of a cylindrical phantom contained a background of 5.3 kBq/mL of [ 18 F]FDG and sphere-to-background ratios (SBR) of 16, 8, 4 and 2. The background also contained water and spheres containing 21.2 kBq/mL of [ 18 F]FDG as non-background. All data were acquired using a Discovery PET/CT 710 and were reconstructed using three-dimensional ordered-subset expectation maximization with time-of-flight (TOF) and PSF correction (3D-OSEM), and Q.Clear with TOF (BPL). We investigated β-values of 200-800 using BPL. The PET images were analyzed using visual assessment and profile curves, edge variability and contrast recovery coefficients were measured. The 38- and 27-mm spheres were surrounded by higher radioactivity concentration when reconstructed with 3D-OSEM as opposed to BPL, which suppressed edge artifacts. Images of 10-mm spheres had sharper overshoot at high SBR and non-background when reconstructed with BPL. Although contrast recovery coefficients of 10-mm spheres in BPL decreased as a function of increasing β, higher penalty parameter decreased the overshoot. BPL is a feasible method for the suppression of edge artifacts of PSF correction, although this depends on SBR and sphere size. Overshoot associated with BPL caused overestimation in small spheres at high SBR. Higher penalty parameter in BPL can suppress overshoot more effectively. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Comparison of TOF-PET and Bremsstrahlung SPECT Images of Yttrium-90: A Monte Carlo Simulation Study.
Takahashi, Akihiko; Himuro, Kazuhiko; Baba, Shingo; Yamashita, Yasuo; Sasaki, Masayuki
2018-01-01
Yttrium-90 ( 90 Y) is a beta particle nuclide used in targeted radionuclide therapy which is available to both single-photon emission computed tomography (SPECT) and time-of-flight (TOF) positron emission tomography (PET) imaging. The purpose of this study was to assess the image quality of PET and Bremsstrahlung SPECT by simulating PET and SPECT images of 90 Y using Monte Carlo simulation codes under the same conditions and to compare them. In-house Monte Carlo codes, MCEP-PET and MCEP-SPECT, were employed to simulate images. The phantom was a torso-shaped phantom containing six hot spheres of various sizes. The background concentrations of 90 Y were set to 50, 100, 150, and 200 kBq/mL, and the concentrations of the hot spheres were 10, 20, and 40 times of those of the background concentrations. The acquisition time was set to 30 min, and the simulated sinogram data were reconstructed using the ordered subset expectation maximization method. The contrast recovery coefficient (CRC) and contrast-to-noise ratio (CNR) were employed to evaluate the image qualities. The CRC values of SPECT images were less than 40%, while those of PET images were more than 40% when the hot sphere was larger than 20 mm in diameter. The CNR values of PET images of hot spheres of diameter smaller than 20 mm were larger than those of SPECT images. The CNR values mostly exceeded 4, which is a criterion to evaluate the discernibility of hot areas. In the case of SPECT, hot spheres of diameter smaller than 20 mm were not discernable. On the contrary, the CNR values of PET images decreased to the level of SPECT, in the case of low concentration. In almost all the cases examined in this investigation, the quantitative indexes of TOF-PET 90 Y images were better than those of Bremsstrahlung SPECT images. However, the superiority of PET image became critical in the case of low activity concentrations.
Sadjadi, Seyed J; Naeij, Jafar; Shavandi, Hasan; Makui, Ahmad
2016-06-07
This paper studying the impact of strategic customer behavior on decentralized supply chain gains and decisions, which includes a supplier, and a monopoly firm as a retailer who sells a single product over a finite two periods of selling season. We consider three types of customers: myopic, strategic and low-value customers. The problem is formulated as a bi-level game where at the second level (e.g. horizontal game), the retailer determines his/her equilibrium pricing strategy in a non-cooperative simultaneous general game with strategic customers who choose equilibrium purchasing strategy to maximize their expected surplus. At the first level (e.g. vertical game), the supplier competes with the retailer as leader and follower in the Stackelberg game. They set the wholesale price and initial stocking capacity to maximize their profits. Finally, a numerical study is presented to demonstrate the impacts of strategic behavior on supply chain gain and decisions; subsequently the effects of market parameters on decision variables and total profitability of supply chain's members is studied through a sensitivity analysis.
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM) is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood—expectation maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval valued reconstruction.
Receding and disparity cues aid relaxation of accommodation
Horwood, Anna M; Riddell, Patricia M
2015-01-01
Purpose Accommodation can mask hyperopia and reduce the accuracy of non-cycloplegic refraction. It is therefore important to minimize accommodation to obtain as accurate a measure of hyperopia as possible. In order to characterize the parameters required to measure the maximally hyperopic error using photorefraction, we used different target types and distances to determine which target was most likely to maximally relax accommodation and thus more accurately detect hyperopia in an individual. Methods A PlusoptiX SO4 infra-red photorefractor was mounted in a remote haploscope which presented the targets. All participants were tested with targets at four fixation distances between 0.3m and 2m containing all combinations of blur, disparity and proximity/looming cues. 38 infants (6-44 wks) were studied longitudinally, and 104 children (4 -15 yrs (mean 6.4)) and 85 adults, with a range of refractive errors and binocular vision status, were tested once. Cycloplegic refraction data was available for a sub-set of 59 participants spread across the age range. Results The maximally hyperopic refraction (MHR) found at any time in the session was most frequently found when fixating the most distant targets and those containing disparity and dynamic proximity/looming cues. Presence or absence of blur was less significant, and targets in which only single cues to depth were present were also less likely to produce MHR. MHR correlated closely with cycloplegic refraction (r = 0.93,mean difference 0.07D,p=n.s.,95%CI ±<0.25D) after correction by a calibration factor. Conclusion Maximum relaxation of accommodation occurred for binocular targets receding into the distance. Proximal and disparity cues aid relaxation of accommodation to a greater extent than blur, and thus non-cycloplegic refraction targets should incorporate these cues. This is especially important in screening contexts with a brief opportunity to test for significant hyperopia. MHR in our laboratory was found to be a reliable estimation of cycloplegic refraction. PMID:19770814
The benefits of social influence in optimized cultural markets.
Abeliuk, Andrés; Berbeglia, Gerardo; Cebrian, Manuel; Van Hentenryck, Pascal
2015-01-01
Social influence has been shown to create significant unpredictability in cultural markets, providing one potential explanation why experts routinely fail at predicting commercial success of cultural products. As a result, social influence is often presented in a negative light. Here, we show the benefits of social influence for cultural markets. We present a policy that uses product quality, appeal, position bias and social influence to maximize expected profits in the market. Our computational experiments show that our profit-maximizing policy leverages social influence to produce significant performance benefits for the market, while our theoretical analysis proves that our policy outperforms in expectation any policy not displaying social signals. Our results contrast with earlier work which focused on showing the unpredictability and inequalities created by social influence. Not only do we show for the first time that, under our policy, dynamically showing consumers positive social signals increases the expected profit of the seller in cultural markets. We also show that, in reasonable settings, our profit-maximizing policy does not introduce significant unpredictability and identifies "blockbusters". Overall, these results shed new light on the nature of social influence and how it can be leveraged for the benefits of the market.
Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach
NASA Astrophysics Data System (ADS)
Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar
2013-06-01
We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
Matching Pupils and Teachers to Maximize Expected Outcomes.
ERIC Educational Resources Information Center
Ward, Joe H., Jr.; And Others
To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…
Kelly, Nichole R; Mazzeo, Suzanne E; Bean, Melanie K
2013-01-01
To clarify directions for research and practice, research literature evaluating nutrition and dietary interventions in college and university settings was reviewed. Systematic search of database literature. Postsecondary education. Fourteen research articles evaluating randomized controlled trials or quasi-experimental interventions targeting dietary outcomes. Diet/nutrition intake, knowledge, motivation, self-efficacy, barriers, intentions, social support, self-regulation, outcome expectations, and sales. Systematic search of 936 articles and review of 14 articles meeting search criteria. Some in-person interventions (n = 6) show promise in improving students' dietary behaviors, although changes were minimal. The inclusion of self-regulation components, including self-monitoring and goal setting, may maximize outcomes. Dietary outcomes from online interventions (n = 5) were less promising overall, although they may be more effective with a subset of college students early in their readiness to change their eating habits. Environmental approaches (n = 3) may increase the sale of healthy food by serving as visual cues-to-action. A number of intervention approaches show promise for improving college students' dietary habits. However, much of this research has methodological limitations, rendering it difficult to draw conclusions across studies and hindering dissemination efforts. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Optimal simultaneous superpositioning of multiple structures with missing data.
Theobald, Douglas L; Steindel, Phillip A
2012-08-01
Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. dtheobald@brandeis.edu Supplementary data are available at Bioinformatics online.
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
SUMMARY Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC) that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correc tly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this corre ct selection probability, we derive an asymptotically optimal allocat ion and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocatio ns are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedur e has the potential to enhance computational efficiency for simulation optimization.
McLeish, Kenneth R.; Uriarte, Silvia M.; Tandon, Shweta; Creed, Timothy M.; Le, Junyi; Ward, Richard A.
2013-01-01
This study tested the hypothesis that priming the neutrophil respiratory burst requires both granule exocytosis and activation of the prolyl isomerase, Pin1. Fusion proteins containing the TAT cell permeability sequence and either the SNARE domain of syntaxin-4 or the N-terminal SNARE domain of SNAP-23 were used to examine the role of granule subsets in TNF-mediated respiratory burst priming using human neutrophils. Concentration-inhibition curves for exocytosis of individual granule subsets and for priming of fMLF-stimulated superoxide release and phagocytosis-stimulated H2O2 production were generated. Maximal inhibition of priming ranged from 72% to 88%. Linear regression lines for inhibition of priming versus inhibition of exocytosis did not differ from the line of identity for secretory vesicles and gelatinase granules, while the slopes or the y-intercepts were different from the line of identity for specific and azurophilic granules. Inhibition of Pin1 reduced priming by 56%, while exocytosis of secretory vesicles and specific granules was not affected. These findings indicate that exocytosis of secretory vesicles and gelatinase granules and activation of Pin1 are independent events required for TNF-mediated priming of neutrophil respiratory burst. PMID:23363774
Ecological neighborhoods as a framework for umbrella species selection
Stuber, Erica F.; Fontaine, Joseph J.
2018-01-01
Umbrella species are typically chosen because they are expected to confer protection for other species assumed to have similar ecological requirements. Despite its popularity and substantial history, the value of the umbrella species concept has come into question because umbrella species chosen using heuristic methods, such as body or home range size, are not acting as adequate proxies for the metrics of interest: species richness or population abundance in a multi-species community for which protection is sought. How species associate with habitat across ecological scales has important implications for understanding population size and species richness, and therefore may be a better proxy for choosing an umbrella species. We determined the spatial scales of ecological neighborhoods important for predicting abundance of 8 potential umbrella species breeding in Nebraska using Bayesian latent indicator scale selection in N-mixture models accounting for imperfect detection. We compare the conservation value measured as collective avian abundance under different umbrella species selected following commonly used criteria and selected based on identifying spatial land cover characteristics within ecological neighborhoods that maximize collective abundance. Using traditional criteria to select an umbrella species resulted in sub-maximal expected collective abundance in 86% of cases compared to selecting an umbrella species based on land cover characteristics that maximized collective abundance directly. We conclude that directly assessing the expected quantitative outcomes, rather than ecological proxies, is likely the most efficient method to maximize the potential for conservation success under the umbrella species concept.
Weiser, Emily; Lanctot, Richard B.; Brown, Stephen C.; Alves, José A.; Battley, Phil F.; Bentzen, Rebecca L.; Bety, Joel; Bishop, Mary Anne; Boldenow, Megan; Bollache, Loic; Casler, Bruce; Christie, Maureen; Coleman, Jonathan T.; Conklin, Jesse R.; English, Willow B.; Gates, H. River; Gilg, Olivier; Giroux, Marie-Andree; Gosbell, Ken; Hassell, Chris J.; Helmericks, Jim; Johnson, Andrew; Katrinardottir, Borgny; Koivula, Kari; Kwon, Eunbi; Lamarre, Jean-Francois; Lang, Johannes; Lank, David B.; Lecomte, Nicolas; Liebezeit, Joseph R.; Loverti, Vanessa; McKinnon, Laura; Minton, Clive; Mizrahi, David S.; Nol, Erica; Pakanen, Veli-Matti; Perz, Johanna; Porter, Ron; Rausch, Jennie; Reneerkens, Jeroen; Ronka, Nelli; Saalfeld, Sarah T.; Senner, Nathan R.; Sittler, Benoit; Smith, Paul A.; Sowl, Kristine M.; Taylor, Audrey; Ward, David H.; Yezerinac, Stephen; Sandercock, Brett K.
2016-01-01
Negative effects of geolocators occurred only for three of the smallest species in our dataset, but were substantial when present. Future studies could mitigate impacts of tags by reducing protruding parts and minimizing use of additional markers. Investigators could maximize recovery of tags by strategically deploying geolocators on males, previously marked individuals, and successful breeders, though targeting subsets of a population could bias the resulting migratory movement data in some species.
Coverability graphs for a class of synchronously executed unbounded Petri net
NASA Technical Reports Server (NTRS)
Stotts, P. David; Pratt, Terrence W.
1990-01-01
After detailing a variant of the concurrent-execution rule for firing of maximal subsets, in which the simultaneous firing of conflicting transitions is prohibited, an algorithm is constructed for generating the coverability graph of a net executed under this synchronous firing rule. The omega insertion criteria in the algorithm are shown to be valid for any net on which the algorithm terminates. It is accordingly shown that the set of nets on which the algorithm terminates includes the 'conflict-free' class.
Selecting a Subset of Stimulus-Response Pairs with Maximal Transmitted Information
1992-03-01
Maathe19tical 19 AT RCODS1.SBJCEM (continue on reverse if necessary and idertif by bock number) VSystem designer. are often faced with the task of...Gary K. Poock, Asoctate Advisor -- ~arlR’ Jones, Chairman Command, Co t and Communications Academic Group ii ABSTRACT System designers are often faced ...performance. 3. Stimulus-Response Pairs System designers are often faced with the task of choosing which of several stimuli should be used to represent 6 a
Ordering Elements and Subsets: Examples for Student Understanding
ERIC Educational Resources Information Center
Mellinger, Keith E.
2004-01-01
Teaching the art of counting can be quite difficult. Many undergraduate students have difficulty separating the ideas of permutation, combination, repetition, etc. This article develops some examples to help explain some of the underlying theory while looking carefully at the selection of various subsets of objects from a larger collection. The…
Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.
Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias
2016-12-01
Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras in the clinical practice.
NASA Astrophysics Data System (ADS)
Gobbi, G. P.; Angelini, F.; Bonasoni, P.; Verza, G. P.; Marinoni, A.; Barnaba, F.
2010-11-01
In spite of being located at the heart of the highest mountain range in the world, the Himalayan Nepal Climate Observatory (5079 m a.s.l.) at the Ev-K2-CNR Pyramid is shown to be affected by the advection of pollution aerosols from the populated regions of southern Nepal and the Indo-Gangetic plains. Such an impact is observed along most of the period April 2006-March 2007 addressed here, with a minimum in the monsoon season. Backtrajectory-analysis indicates long-range transport episodes occurring in this year to originate mainly in the west Asian deserts. At this high altitude site, the measured aerosol optical depth is observed to be about one order of magnitude lower than the one measured at Ghandi College (60 m a.s.l.), in the Indo-Gangetic basin. As for Ghandi College, and in agreement with the in situ ground observations at the Pyramid, the fine mode aerosol optical depth maximizes during winter and minimizes in the monsoon season. Conversely, total optical depth maximizes during the monsoon due to the occurrence of elevated, coarse particle layers. Possible origins of these particles are wind erosion from the surrounding peaks and hydrated/cloud-processed aerosols. Assessment of the aerosol radiative forcing is then expected to be hampered by the presence of these high altitude particle layers, which impede an effective, continuous measurement of anthropogenic aerosol radiative properties from sky radiance inversions and/or ground measurements alone. Even though the retrieved absorption coefficients of pollution aerosols were rather large (single scattering albedo of the order of 0.6-0.9 were observed in the month of April 2006), the corresponding low optical depths (~0.03 at 500 nm) are expected to limit the relevant radiative forcing. Still, the high specific forcing of this aerosol and its capability of altering snow surface albedo provide good reasons for continuous monitoring.
Techniques for cash management in scheduling manufacturing operations
NASA Astrophysics Data System (ADS)
Morady Gohareh, Mehdy; Shams Gharneh, Naser; Ghasemy Yaghin, Reza
2017-06-01
The objective in traditional scheduling is usually time based. Minimizing the makespan, total flow times, total tardi costs, etc. are instances of these objectives. In manufacturing, processing each job entails a cost paying and price receiving. Thus, the objective should include some notion of managing the flow of cash. We have defined two new objectives: maximization of average and minimum available cash. For single machine scheduling, it is demonstrated that scheduling jobs in decreasing order of profit ratios maximizes the former and improves productivity. Moreover, scheduling jobs in increasing order of costs and breaking ties in decreasing order of prices maximizes the latter and creates protection against financial instability.
Planning the FUSE Mission Using the SOVA Algorithm
NASA Technical Reports Server (NTRS)
Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly
2011-01-01
Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude- control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.
Frequency assignments for HFDF receivers in a search and rescue network
NASA Astrophysics Data System (ADS)
Johnson, Krista E.
1990-03-01
This thesis applies a multiobjective linear programming approach to the problem of assigning frequencies to high frequency direction finding (HFDF) receivers in a search-and-rescue network in order to maximize the expected number of geolocations of vessels in distress. The problem is formulated as a multiobjective integer linear programming problem. The integrality of the solutions is guaranteed by the totally unimodularity of the A-matrix. Two approaches are taken to solve the multiobjective linear programming problem: (1) the multiobjective simplex method as implemented in ADBASE; and (2) an iterative approach. In this approach, the individual objective functions are weighted and combined in a single additive objective function. The resulting single objective problem is expressed as a network programming problem and solved using SAS NETFLOW. The process is then repeated with different weightings for the objective functions. The solutions obtained from the multiobjective linear programs are evaluated using a FORTRAN program to determine which solution provides the greatest expected number of geolocations. This solution is then compared to the sample mean and standard deviation for the expected number of geolocations resulting from 10,000 random frequency assignments for the network.
Text Classification for Intelligent Portfolio Management
2002-05-01
years including nearest neighbor classification [15], naive Bayes with EM (Ex- pectation Maximization) [11] [13], Winnow with active learning [10... Active Learning and Expectation Maximization (EM). In particular, active learning is used to actively select documents for labeling, then EM assigns...generalization with active learning . Machine Learning, 15(2):201–221, 1994. [3] I. Dagan and P. Engelson. Committee-based sampling for training
Borai, Anwar; Livingstone, Callum; Alsobhi, Enaam; Al Sofyani, Abeer; Balgoon, Dalal; Farzal, Anwar; Almohammadi, Mohammed; Al-Amri, Abdulafattah; Bahijri, Suhad; Alrowaili, Daad; Bassiuni, Wafaa; Saleh, Ayman; Alrowaili, Norah; Abdelaal, Mohamed
2017-04-01
Whole blood donation has immunomodulatory effects, and most of these have been observed at short intervals following blood donation. This study aimed to investigate the impact of whole blood donation on lymphocyte subsets over a typical inter-donation interval. Healthy male subjects were recruited to study changes in complete blood count (CBC) (n = 42) and lymphocyte subsets (n = 16) before and at four intervals up to 106 days following blood donation. Repeated measures ANOVA were used to compare quantitative variables between different visits. Following blood donation, changes in CBC and erythropoietin were as expected. The neutrophil count increased by 11.3% at 8 days (p < .001). Novel changes were observed in lymphocyte subsets as the CD4/CD8 ratio increased by 9.2% (p < .05) at 8 days and 13.7% (p < .05) at 22 days. CD16-56 cells decreased by 16.2% (p < .05) at 8 days. All the subsets had returned to baseline by 106 days. Regression analysis showed that the changes in CD16-56 cells and CD4/CD8 ratio were not significant (Wilk's lambda = 0.15 and 0.94, respectively) when adjusted for BMI. In conclusion, following whole blood donation, there are transient changes in lymphocyte subsets. The effect of BMI on lymphocyte subsets and the effect of this immunomodulation on the immune response merit further investigation.
Replica analysis for the duality of the portfolio optimization problem
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-11-01
In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
Replica analysis for the duality of the portfolio optimization problem.
Shinzato, Takashi
2016-11-01
In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
Orellana, Liliana; Rotnitzky, Andrea; Robins, James M
2010-01-01
Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. Observational studies are well suited for the investigation of the effects of dynamic treatment regimes because of the variability in treatment decisions found in them. This variability exists because different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimate the optimal dynamic treatment regime among a set of enforceable regimes. This set is comprised by regimes defined by simple rules based on a subset of past information. The regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporate the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are specially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, double-robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.
NASA Astrophysics Data System (ADS)
Davendralingam, Navindran
Conceptual design of aircraft and the airline network (routes) on which aircraft fly on are inextricably linked to passenger driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs including demographics, geographic location, seasonality, socio-economic factors and naturally, the operations of directly competing airlines. The expansion of airline operations involves the identificaion of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affects the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between OD pairs and ticket pricing---this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that utilizes aircraft design, airline network design and passenger demand as a unified framework to provide better integrated design solutions in order to maximize expexted profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to leverage risk of serving future projected demand using a 'yet to be introduced' aircraft against potentially generated future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations. A dynamic programming approach is employed to simulate the reflexive nature of airline supply-demand interactions by modeling the aggregate changes in demand that would result from tactical allocations of aircraft to maximize profit. The best yet-to-be-introduced aircraft maximizes profit by minimizing the long term fleetwide direct operating costs.
Hancock, David G; Shklovskaya, Elena; Guy, Thomas V; Falsafi, Reza; Fjell, Chris D; Ritchie, William; Hancock, Robert E W; Fazekas de St Groth, Barbara
2014-01-01
Dendritic cells (DCs) are critical for regulating CD4 and CD8 T cell immunity, controlling Th1, Th2, and Th17 commitment, generating inducible Tregs, and mediating tolerance. It is believed that distinct DC subsets have evolved to control these different immune outcomes. However, how DC subsets mount different responses to inflammatory and/or tolerogenic signals in order to accomplish their divergent functions remains unclear. Lipopolysaccharide (LPS) provides an excellent model for investigating responses in closely related splenic DC subsets, as all subsets express the LPS receptor TLR4 and respond to LPS in vitro. However, previous studies of the LPS-induced DC transcriptome have been performed only on mixed DC populations. Moreover, comparisons of the in vivo response of two closely related DC subsets to LPS stimulation have not been reported in the literature to date. We compared the transcriptomes of murine splenic CD8 and CD11b DC subsets after in vivo LPS stimulation, using RNA-Seq and systems biology approaches. We identified subset-specific gene signatures, which included multiple functional immune mediators unique to each subset. To explain the observed subset-specific differences, we used a network analysis approach. While both DC subsets used a conserved set of transcription factors and major signalling pathways, the subsets showed differential regulation of sets of genes that 'fine-tune' the network Hubs expressed in common. We propose a model in which signalling through common pathway components is 'fine-tuned' by transcriptional control of subset-specific modulators, thus allowing for distinct functional outcomes in closely related DC subsets. We extend this analysis to comparable datasets from the literature and confirm that our model can account for cell subset-specific responses to LPS stimulation in multiple subpopulations in mouse and man.
Precollege Predictors of Incapacitated Rape Among Female Students in Their First Year of College
Carey, Kate B.; Durney, Sarah E.; Shepardson, Robyn L.; Carey, Michael P.
2015-01-01
Objective: The first year of college is an important transitional period for young adults; it is also a period associated with elevated risk of incapacitated rape (IR) for female students. The goal of this study was to identify prospective risk factors associated with experiencing attempted or completed IR during the first year of college. Method: Using a prospective cohort design, we recruited 483 incoming first-year female students. Participants completed a baseline survey and three follow-up surveys over the next year. At baseline, we assessed precollege alcohol use, marijuana use, sexual behavior, and, for the subset of sexually experienced participants, sex-related alcohol expectancies. At the baseline and all follow-ups, we assessed sexual victimization. Results: Approximately 1 in 6 women (18%) reported IR before entering college, and 15% reported IR during their first year of college. In bivariate analyses, precollege IR history, precollege heavy episodic drinking, number of precollege sexual partners, and sex-related alcohol expectancies (enhancement and disinhibition) predicted first-year IR. In multivariate analyses with the entire sample, only precollege IR (odds ratio = 4.98, p < .001) remained a significant predictor. However, among the subset of sexually experienced participants, both enhancement expectancies and precollege IR predicted IR during the study year. Conclusions: IR during the first year of college is independently associated with a history of IR and with expectancies about alcohol’s enhancement of sexual experience. Alcohol expectancies are a modifiable risk factor that may be a promising target for prevention efforts. PMID:26562590
When Does Reward Maximization Lead to Matching Law?
Sakai, Yutaka; Fukai, Tomoki
2008-01-01
What kind of strategies subjects follow in various behavioral circumstances has been a central issue in decision making. In particular, which behavioral strategy, maximizing or matching, is more fundamental to animal's decision behavior has been a matter of debate. Here, we prove that any algorithm to achieve the stationary condition for maximizing the average reward should lead to matching when it ignores the dependence of the expected outcome on subject's past choices. We may term this strategy of partial reward maximization “matching strategy”. Then, this strategy is applied to the case where the subject's decision system updates the information for making a decision. Such information includes subject's past actions or sensory stimuli, and the internal storage of this information is often called “state variables”. We demonstrate that the matching strategy provides an easy way to maximize reward when combined with the exploration of the state variables that correctly represent the crucial information for reward maximization. Our results reveal for the first time how a strategy to achieve matching behavior is beneficial to reward maximization, achieving a novel insight into the relationship between maximizing and matching. PMID:19030101
Computer access security code system
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr. (Inventor)
1990-01-01
A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
NASA Astrophysics Data System (ADS)
Lillo, F.
2007-02-01
I consider the problem of the optimal limit order price of a financial asset in the framework of the maximization of the utility function of the investor. The analytical solution of the problem gives insight on the origin of the recently empirically observed power law distribution of limit order prices. In the framework of the model, the most likely proximate cause of this power law is a power law heterogeneity of traders' investment time horizons.
A decision theoretical approach for diffusion promotion
NASA Astrophysics Data System (ADS)
Ding, Fei; Liu, Yun
2009-09-01
In order to maximize cost efficiency from scarce marketing resources, marketers are facing the problem of which group of consumers to target for promotions. We propose to use a decision theoretical approach to model this strategic situation. According to one promotion model that we develop, marketers balance between probabilities of successful persuasion and the expected profits on a diffusion scale, before making their decisions. In the other promotion model, the cost for identifying influence information is considered, and marketers are allowed to ignore individual heterogeneity. We apply the proposed approach to two threshold influence models, evaluate the utility of each promotion action, and provide discussions about the best strategy. Our results show that efforts for targeting influentials or easily influenced people might be redundant under some conditions.
Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong
2010-01-01
In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy. PMID:22163641
Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong
2010-01-01
In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.
NASA Astrophysics Data System (ADS)
Petrov, Alexander P.
1996-09-01
Classic colorimetry and the traditionally used color space do not represent all perceived colors (for example, browns look dark yellow in colorimetric conditions of observation) so, the specific goal of this work is to suggest another concept of color and to prove that the corresponding set of colors is complete. The idea of our approach attributing color to surface patches (not to the light) immediately ties all the problems of color perception and vision geometry. The equivalence relation in the linear space of light fluxes F established by a procedure of colorimetry gives us a 3D color space H. By definition we introduce a sample (sigma) (surface patch) as a linear mapping (sigma) : L yields H, where L is a subspace of F called the illumination space. A Dedekind structure of partial order can be defined in the set of the samples: two samples (alpha) and (Beta) belong to one chromatic class if ker(alpha) equals ker(Beta) and (alpha) > (Beta) if ker(alpha) ker(Beta) . The maximal elements of this chain create the chromatic class BLACK. There can be given geometrical arguments for L to be 3D and it can be proved that in this case the minimal element of the above Dedekind structure is unique and the corresponding chromatic class is called WHITE containing the samples (omega) such that ker(omega) equals {0} L. Color is defined as mapping C: H yields H and assuming color constancy the complete set of perceived colors is proved to be isomorphic to a subset C of 3 X 3 matrices. This subset is convex, limited and symmetrical with E/2 as the center of symmetry. The problem of metrization of the color space C is discussed and a color metric related to shape, i.e., to vision geometry, is suggested.
Investigation of Grating-Assisted Trimodal Interferometer Biosensors Based on a Polymer Platform.
Liang, Yuxin; Zhao, Mingshan; Wu, Zhenlin; Morthier, Geert
2018-05-10
A grating-assisted trimodal interferometer biosensor is proposed and numerically analyzed. A long period grating coupler, for adjusting the power between the fundamental mode and the second higher order mode, is investigated, and is shown to act as a conventional directional coupler for adjusting the power between the two arms. The trimodal interferometer can achieve maximal fringe visibility when the powers of the two modes are adjusted to the same value by the grating coupler, which means that a better limit of detection can be expected. In addition, the second higher order mode typically has a larger evanescent tail than the first higher order mode in bimodal interferometers, resulting in a higher sensitivity of the trimodal interferometer. The influence of fabrication tolerances on the performance of the designed interferometer is also investigated. The power difference between the two modes shows inertia to the fill factor of the grating, but high sensitivity to the modulation depth. Finally, a 2050 2π/RIU (refractive index unit) sensitivity and 43 dB extinction ratio of the output power are achieved.
NASA Astrophysics Data System (ADS)
Ahmadalipour, Ali; Moradkhani, Hamid; Demirel, Mehmet C.
2017-10-01
The changing climate and the associated future increases in temperature are expected to have impacts on drought characteristics and hydrologic cycle. This paper investigates the projected changes in spatiotemporal characteristics of droughts and their future attributes over the Willamette River Basin (WRB) in the Pacific Northwest U.S. The analysis is performed using two subsets of downscaled CMIP5 global climate models (GCMs) each consisting of 10 models from two future scenarios (RCP4.5 and RCP8.5) for 30 years of historical period (1970-1999) and 90 years of future projections (2010-2099). Hydrologic modeling is conducted using the Precipitation Runoff Modeling System (PRMS) as a robust distributed hydrologic model with lower computational cost compared to other models. Meteorological and hydrological droughts are studied using three drought indices (i.e. Standardized Precipitation Index, Standardized Precipitation Evapotranspiration Index, Standardized Streamflow Index). Results reveal that the intensity and duration of hydrological droughts are expected to increase over the WRB, albeit the annual precipitation is expected to increase. On the other hand, the intensity of meteorological droughts do not indicate an aggravation for most cases. We explore the changes of hydrometeolorogical variables over the basin in order to understand the causes for such differences and to discover the controlling factors of drought. Furthermore, the uncertainty of projections are quantified for model, scenario, and downscaling uncertainty.
Maintenance Downtime October 17 - 23, 2014
Atmospheric Science Data Center
2014-10-23
... Impact: The ASDC will be conducting extended system maintenance Fri 10/17@4pm - Thu 10/23@4pm EDT Please expect: ... and Customization Tool - AMAPS, CALIPSO, CERES, MOPITT, TES and TAD Search and Subset Tools All systems will be ...
Work Placement in UK Undergraduate Programmes. Student Expectations and Experiences.
ERIC Educational Resources Information Center
Leslie, David; Richardson, Anne
1999-01-01
A survey of 189 pre- and 106 post-sandwich work-experience students in tourism suggested that potential benefits were not being maximized. Students needed better preparation for the work experience, especially in terms of their expectations. The work experience needed better design, and the role of industry tutors needed clarification. (SK)
Career Preference among Universities' Faculty: Literature Review
ERIC Educational Resources Information Center
Alenzi, Faris Q.; Salem, Mohamed L.
2007-01-01
Why do people enter academic life? What are their expectations? How can they maximize their experience and achievements, both short- and long-term? How much should they move towards commercialization? What can they do to improve their career? How much autonomy can they reasonably expect? What are the key issues for academics and aspiring academics…
Picking battles wisely: plant behaviour under competition.
Novoplansky, Ariel
2009-06-01
Plants are limited in their ability to choose their neighbours, but they are able to orchestrate a wide spectrum of rational competitive behaviours that increase their prospects to prevail under various ecological settings. Through the perception of neighbours, plants are able to anticipate probable competitive interactions and modify their competitive behaviours to maximize their long-term gains. Specifically, plants can minimize competitive encounters by avoiding their neighbours; maximize their competitive effects by aggressively confronting their neighbours; or tolerate the competitive effects of their neighbours. However, the adaptive values of these non-mutually exclusive options are expected to depend strongly on the plants' evolutionary background and to change dynamically according to their past development, and relative sizes and vigour. Additionally, the magnitude of competitive responsiveness is expected to be positively correlated with the reliability of the environmental information regarding the expected competitive interactions and the expected time left for further plastic modifications. Concurrent competition over external and internal resources and morphogenetic signals may enable some plants to increase their efficiency and external competitive performance by discriminately allocating limited resources to their more promising organs at the expense of failing or less successful organs.
NASA Astrophysics Data System (ADS)
Kováčik, Roman; Murthy, Sowmya Sathyanarayana; Quiroga, Carmen E.; Ederer, Claude; Franchini, Cesare
2016-02-01
We merge advanced ab initio schemes (standard density functional theory, hybrid functionals, and the G W approximation) with model Hamiltonian approaches (tight-binding and Heisenberg Hamiltonian) to study the evolution of the electronic, magnetic, and dielectric properties of the manganite family R MnO3 (R =La,Pr,Nd,Sm,Eu, and Gd) . The link between first principles and tight binding is established by downfolding the physically relevant subset of 3 d bands with eg character by means of maximally localized Wannier functions (MLWFs) using the VASP2WANNIER90 interface. The MLWFs are then used to construct a general tight-binding Hamiltonian written as a sum of the kinetic term, the Hund's rule coupling, the JT coupling, and the electron-electron interaction. The dispersion of the tight-binding (TB) eg bands at all levels are found to match closely the MLWFs. We provide a complete set of TB parameters which can serve as guidance for the interpretation of future studies based on many-body Hamiltonian approaches. In particular, we find that the Hund's rule coupling strength, the Jahn-Teller coupling strength, and the Hubbard interaction parameter U remain nearly constant for all the members of the R MnO3 series, whereas the nearest-neighbor hopping amplitudes show a monotonic attenuation as expected from the trend of the tolerance factor. Magnetic exchange interactions, computed by mapping a large set of hybrid functional total energies onto an Heisenberg Hamiltonian, clarify the origin of the A-type magnetic ordering observed in the early rare-earth manganite series as arising from a net negative out-of-plane interaction energy. The obtained exchange parameters are used to estimate the Néel temperature by means of Monte Carlo simulations. The resulting data capture well the monotonic decrease of the ordering temperature down the series from R =La to Gd, in agreement with experiments. This trend correlates well with the modulation of structural properties, in particular with the progressive reduction of the Mn-O-Mn bond angle which is associated with the quenching of the volume and the decrease of the tolerance factor due to the shrinkage of the ionic radii of R going from La to Gd.
Clustering, Seriation, and Subset Extraction of Confusion Data
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2006-01-01
The study of confusion data is a well established practice in psychology. Although many types of analytical approaches for confusion data are available, among the most common methods are the extraction of 1 or more subsets of stimuli, the partitioning of the complete stimulus set into distinct groups, and the ordering of the stimulus set. Although…
Moore, M A; Katzgraber, Helmut G
2014-10-01
Starting from preferences on N proposed policies obtained via questionnaires from a sample of the electorate, an Ising spin-glass model in a field can be constructed from which a political party could find the subset of the proposed policies which would maximize its appeal, form a coherent choice in the eyes of the electorate, and have maximum overlap with the party's existing policies. We illustrate the application of the procedure by simulations of a spin glass in a random field on scale-free networks.
Translation Invariant Extensions of Finite Volume Measures
NASA Astrophysics Data System (ADS)
Goldstein, S.; Kuna, T.; Lebowitz, J. L.; Speer, E. R.
2017-02-01
We investigate the following questions: Given a measure μ _Λ on configurations on a subset Λ of a lattice L, where a configuration is an element of Ω ^Λ for some fixed set Ω , does there exist a measure μ on configurations on all of L, invariant under some specified symmetry group of L, such that μ _Λ is its marginal on configurations on Λ ? When the answer is yes, what are the properties, e.g., the entropies, of such measures? Our primary focus is the case in which L=Z^d and the symmetries are the translations. For the case in which Λ is an interval in Z we give a simple necessary and sufficient condition, local translation invariance ( LTI), for extendibility. For LTI measures we construct extensions having maximal entropy, which we show are Gibbs measures; this construction extends to the case in which L is the Bethe lattice. On Z we also consider extensions supported on periodic configurations, which are analyzed using de Bruijn graphs and which include the extensions with minimal entropy. When Λ subset Z is not an interval, or when Λ subset Z^d with d>1, the LTI condition is necessary but not sufficient for extendibility. For Z^d with d>1, extendibility is in some sense undecidable.
Performance of Blind Source Separation Algorithms for FMRI Analysis using a Group ICA Method
Correa, Nicolle; Adali, Tülay; Calhoun, Vince D.
2007-01-01
Independent component analysis (ICA) is a popular blind source separation (BSS) technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist, however the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely information maximization, maximization of non-gaussianity, joint diagonalization of cross-cumulant matrices, and second-order correlation based methods when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study the variability among different ICA algorithms and propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA, and JADE all yield reliable results; each having their strengths in specific areas. EVD, an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for the iterative ICA algorithms, it is important to investigate the variability of the estimates from different runs. We test the consistency of the iterative algorithms, Infomax and FastICA, by running the algorithm a number of times with different initializations and note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis. PMID:17540281
Performance of blind source separation algorithms for fMRI analysis using a group ICA method.
Correa, Nicolle; Adali, Tülay; Calhoun, Vince D
2007-06-01
Independent component analysis (ICA) is a popular blind source separation technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely, information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices and second-order correlation-based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study variability among different ICA algorithms, and we propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA and joint approximate diagonalization of eigenmatrices (JADE) all yield reliable results, with each having its strengths in specific areas. Eigenvalue decomposition (EVD), an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for iterative ICA algorithms, it is important to investigate the variability of estimates from different runs. We test the consistency of the iterative algorithms Infomax and FastICA by running the algorithm a number of times with different initializations, and we note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis.
Waye, J S; Willard, H F
1986-09-01
The centromeric regions of all human chromosomes are characterized by distinct subsets of a diverse tandemly repeated DNA family, alpha satellite. On human chromosome 17, the predominant form of alpha satellite is a 2.7-kilobase-pair higher-order repeat unit consisting of 16 alphoid monomers. We present the complete nucleotide sequence of the 16-monomer repeat, which is present in 500 to 1,000 copies per chromosome 17, as well as that of a less abundant 15-monomer repeat, also from chromosome 17. These repeat units were approximately 98% identical in sequence, differing by the exclusion of precisely 1 monomer from the 15-monomer repeat. Homologous unequal crossing-over is suggested as a probable mechanism by which the different repeat lengths on chromosome 17 were generated, and the putative site of such a recombination event is identified. The monomer organization of the chromosome 17 higher-order repeat unit is based, in part, on tandemly repeated pentamers. A similar pentameric suborganization has been previously demonstrated for alpha satellite of the human X chromosome. Despite the organizational similarities, substantial sequence divergence distinguishes these subsets. Hybridization experiments indicate that the chromosome 17 and X subsets are more similar to each other than to the subsets found on several other human chromosomes. We suggest that the chromosome 17 and X alpha satellite subsets may be related components of a larger alphoid subfamily which have evolved from a common ancestral repeat into the contemporary chromosome-specific subsets.
Large-Scale Multiantenna Multisine Wireless Power Transfer
NASA Astrophysics Data System (ADS)
Huang, Yang; Clerckx, Bruno
2017-11-01
Wireless Power Transfer (WPT) is expected to be a technology reshaping the landscape of low-power applications such as the Internet of Things, Radio Frequency identification (RFID) networks, etc. Although there has been some progress towards multi-antenna multi-sine WPT design, the large-scale design of WPT, reminiscent of massive MIMO in communications, remains an open challenge. In this paper, we derive efficient multiuser algorithms based on a generalizable optimization framework, in order to design transmit sinewaves that maximize the weighted-sum/minimum rectenna output DC voltage. The study highlights the significant effect of the nonlinearity introduced by the rectification process on the design of waveforms in multiuser systems. Interestingly, in the single-user case, the optimal spatial domain beamforming, obtained prior to the frequency domain power allocation optimization, turns out to be Maximum Ratio Transmission (MRT). In contrast, in the general weighted sum criterion maximization problem, the spatial domain beamforming optimization and the frequency domain power allocation optimization are coupled. Assuming channel hardening, low-complexity algorithms are proposed based on asymptotic analysis, to maximize the two criteria. The structure of the asymptotically optimal spatial domain precoder can be found prior to the optimization. The performance of the proposed algorithms is evaluated. Numerical results confirm the inefficiency of the linear model-based design for the single and multi-user scenarios. It is also shown that as nonlinear model-based designs, the proposed algorithms can benefit from an increasing number of sinewaves.
Impact of genetic features on treatment decisions in AML.
Döhner, Hartmut; Gaidzik, Verena I
2011-01-01
In recent years, research in molecular genetics has been instrumental in deciphering the molecular pathogenesis of acute myeloid leukemia (AML). With the advent of the novel genomics technologies such as next-generation sequencing, it is expected that virtually all genetic lesions in AML will soon be identified. Gene mutations or deregulated expression of genes or sets of genes now allow us to explore the enormous diversity among cytogenetically defined subsets of AML, in particular the large subset of cytogenetically normal AML. Nonetheless, there are several challenges, such as discriminating driver from passenger mutations, evaluating the prognostic and predictive value of a specific mutation in the concert of the various concurrent mutations, or translating findings from molecular disease pathogenesis into novel therapies. Progress is unlikely to be fast in developing molecular targeted therapies. Contrary to the initial assumption, the development of molecular targeted therapies is slow and the various reports of promising new compounds will need to be put into perspective because many of these drugs did not show the expected effects.
Curvature and gravity actions for matrix models: II. The case of general Poisson structures
NASA Astrophysics Data System (ADS)
Blaschke, Daniel N.; Steinacker, Harold
2010-12-01
We study the geometrical meaning of higher order terms in matrix models of Yang-Mills type in the semi-classical limit, generalizing recent results (Blaschke and Steinacker 2010 Class. Quantum Grav. 27 165010 (arXiv:1003.4132)) to the case of four-dimensional spacetime geometries with general Poisson structure. Such terms are expected to arise e.g. upon quantization of the IKKT-type models. We identify terms which depend only on the intrinsic geometry and curvature, including modified versions of the Einstein-Hilbert action as well as terms which depend on the extrinsic curvature. Furthermore, a mechanism is found which implies that the effective metric G on the spacetime brane {\\cal M}\\subset \\mathds{R}^D 'almost' coincides with the induced metric g. Deviations from G = g are suppressed, and characterized by the would-be U(1) gauge field.
Archive Management of NASA Earth Observation Data to Support Cloud Analysis
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark A.
2017-01-01
NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify the execution of such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly.
Evans, Melissa L; Dionne, Mélanie; Miller, Kristina M; Bernatchez, Louis
2012-01-22
Major histocompatibility complex (MHC)-dependent mating preferences have been observed across vertebrate taxa and these preferences are expected to promote offspring disease resistance and ultimately, viability. However, little empirical evidence linking MHC-dependent mate choice and fitness is available, particularly in wild populations. Here, we explore the adaptive potential of previously observed patterns of MHC-dependent mate choice in a wild population of Atlantic salmon (Salmo salar) in Québec, Canada, by examining the relationship between MHC genetic variation and adult reproductive success and offspring survival over 3 years of study. While Atlantic salmon choose their mates in order to increase MHC diversity in offspring, adult reproductive success was in fact maximized between pairs exhibiting an intermediate level of MHC dissimilarity. Moreover, patterns of offspring survival between years 0+ and 1+, and 1+ and 2+ and population genetic structure at the MHC locus relative to microsatellite loci indicate that strong temporal variation in selection is likely to be operating on the MHC. We interpret MHC-dependent mate choice for diversity as a likely bet-hedging strategy that maximizes parental fitness in the face of temporally variable and unpredictable natural selection pressures.
Evans, Melissa L.; Dionne, Mélanie; Miller, Kristina M.; Bernatchez, Louis
2012-01-01
Major histocompatibility complex (MHC)-dependent mating preferences have been observed across vertebrate taxa and these preferences are expected to promote offspring disease resistance and ultimately, viability. However, little empirical evidence linking MHC-dependent mate choice and fitness is available, particularly in wild populations. Here, we explore the adaptive potential of previously observed patterns of MHC-dependent mate choice in a wild population of Atlantic salmon (Salmo salar) in Québec, Canada, by examining the relationship between MHC genetic variation and adult reproductive success and offspring survival over 3 years of study. While Atlantic salmon choose their mates in order to increase MHC diversity in offspring, adult reproductive success was in fact maximized between pairs exhibiting an intermediate level of MHC dissimilarity. Moreover, patterns of offspring survival between years 0+ and 1+, and 1+ and 2+ and population genetic structure at the MHC locus relative to microsatellite loci indicate that strong temporal variation in selection is likely to be operating on the MHC. We interpret MHC-dependent mate choice for diversity as a likely bet-hedging strategy that maximizes parental fitness in the face of temporally variable and unpredictable natural selection pressures. PMID:21697172
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
Skedgel, Chris; Wailoo, Allan; Akehurst, Ron
2015-01-01
Economic theory suggests that resources should be allocated in a way that produces the greatest outputs, on the grounds that maximizing output allows for a redistribution that could benefit everyone. In health care, this is known as QALY (quality-adjusted life-year) maximization. This justification for QALY maximization may not hold, though, as it is difficult to reallocate health. Therefore, the allocation of health care should be seen as a matter of distributive justice as well as efficiency. A discrete choice experiment was undertaken to test consistency with the principles of QALY maximization and to quantify the willingness to trade life-year gains for distributive justice. An empirical ethics process was used to identify attributes that appeared relevant and ethically justified: patient age, severity (decomposed into initial quality and life expectancy), final health state, duration of benefit, and distributional concerns. Only 3% of respondents maximized QALYs with every choice, but scenarios with larger aggregate QALY gains were chosen more often and a majority of respondents maximized QALYs in a majority of their choices. However, respondents also appeared willing to prioritize smaller gains to preferred groups over larger gains to less preferred groups. Marginal analyses found a statistically significant preference for younger patients and a wider distribution of gains, as well as an aversion to patients with the shortest life expectancy or a poor final health state. These results support the existence of an equity-efficiency tradeoff and suggest that well-being could be enhanced by giving priority to programs that best satisfy societal preferences. Societal preferences could be incorporated through the use of explicit equity weights, although more research is required before such weights can be used in priority setting. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Nadia Dedy, Aimie; Zakuan, Norhayati; Zaidi Bahari, Ahamad; Ariff, Mohd Shoki Md; Chin, Thoo Ai; Zameri Mat Saman, Muhamad
2016-05-01
TQM is a management philosophy embracing all activities through which the needs and expectations of the customer and the community and the goals of the companies are satisfied in the most efficient and cost effective way by maximizing the potential of all workers in a continuing drive for total quality improvement. TQM is very important to the company especially in automotive industry in order for them to survive in the competitive global market. The main objective of this study is to review a relationship between TQM and employee performance. Authors review updated literature on TQM study with two main targets: (a) evolution of TQM considering as a set of practice, (b) and its impacts to employee performance. Therefore, two research questions are proposed in order to review TQM constructs and employee performance measure: (a) Is the set of critical success factors associated with TQM valid as a whole? (b) What is the critical success factors should be considered to measure employee performance in automotive industry?
Optimizing Constrained Single Period Problem under Random Fuzzy Demand
NASA Astrophysics Data System (ADS)
Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin
2008-09-01
In this paper, we consider the multi-product multi-constraint newsboy problem with random fuzzy demands and total discount. The demand of the products is often stochastic in the real word but the estimation of the parameters of distribution function may be done by fuzzy manner. So an appropriate option to modeling the demand of products is using the random fuzzy variable. The objective function of proposed model is to maximize the expected profit of newsboy. We consider the constraints such as warehouse space and restriction on quantity order for products, and restriction on budget. We also consider the batch size for products order. Finally we introduce a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP) and it is changed to a multi-objective mixed integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on genetic algorithm, Pareto and TOPSIS is presented for the developed model. Finally an illustrative example is presented to show the performance of the developed model and algorithm.
Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.
Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji
2016-09-01
It is essential to extract nonlinear dynamics from time-series data as an inverse problem in natural sciences. We propose a Bayesian statistical framework for extracting nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions with conjugation of multiple phases, and they have the intrinsic nonlinearity of their dynamics caused by the effect of surface-area between different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to partial observation problem, in order to simultaneously estimate the time course of hidden variables and the kinetic parameters underlying dynamics. The proposed belief propagation method is performed by using sequential Monte Carlo algorithm in order to estimate nonlinear dynamical system. Using our proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of solid reactants and products, were successfully estimated only from the observable temporal changes in the concentration of the dissolved intermediate product.
EVIDENCE – BASED MEDICINE/PRACTICE IN SPORTS PHYSICAL THERAPY
Lehecka, B.J.
2012-01-01
A push for the use of evidence‐based medicine and evidence‐based practice patterns has permeated most health care disciplines. The use of evidence‐based practice in sports physical therapy may improve health care quality, reduce medical errors, help balance known benefits and risks, challenge views based on beliefs rather than evidence, and help to integrate patient preferences into decision‐making. In this era of health care utilization sports physical therapists are expected to integrate clinical experience with conscientious, explicit, and judicious use of research evidence in order to make clearly informed decisions in order to help maximize and optimize patient well‐being. One of the more common reasons for not using evidence in clinical practice is the perceived lack of skills and knowledge when searching for or appraising research. This clinical commentary was developed to educate the readership on what constitutes evidence‐based practice, and strategies used to seek evidence in the daily clinical practice of sports physical therapy. PMID:23091778
Evidence - based medicine/practice in sports physical therapy.
Manske, Robert C; Lehecka, B J
2012-10-01
A push for the use of evidence-based medicine and evidence-based practice patterns has permeated most health care disciplines. The use of evidence-based practice in sports physical therapy may improve health care quality, reduce medical errors, help balance known benefits and risks, challenge views based on beliefs rather than evidence, and help to integrate patient preferences into decision-making. In this era of health care utilization sports physical therapists are expected to integrate clinical experience with conscientious, explicit, and judicious use of research evidence in order to make clearly informed decisions in order to help maximize and optimize patient well-being. One of the more common reasons for not using evidence in clinical practice is the perceived lack of skills and knowledge when searching for or appraising research. This clinical commentary was developed to educate the readership on what constitutes evidence-based practice, and strategies used to seek evidence in the daily clinical practice of sports physical therapy.
Erol, Volkan; Ozaydin, Fatih; Altintas, Azmi Ali
2014-06-24
Entanglement has been studied extensively for unveiling the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well known measures for quantifying entanglement such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. It was found that for sets of non-maximally entangled states of two qubits, comparing these entanglement measures may lead to different entanglement orderings of the states. On the other hand, although it is not an entanglement measure and not monotonic under local operations, due to its ability of detecting multipartite entanglement, quantum Fisher information (QFI) has recently received an intense attraction generally with entanglement in the focus. In this work, we revisit the state ordering problem of general two qubit states. Generating a thousand random quantum states and performing an optimization based on local general rotations of each qubit, we calculate the maximal QFI for each state. We analyze the maximized QFI in comparison with concurrence, REE and negativity and obtain new state orderings. We show that there are pairs of states having equal maximized QFI but different values for concurrence, REE and negativity and vice versa.
Erol, Volkan; Ozaydin, Fatih; Altintas, Azmi Ali
2014-01-01
Entanglement has been studied extensively for unveiling the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well known measures for quantifying entanglement such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. It was found that for sets of non-maximally entangled states of two qubits, comparing these entanglement measures may lead to different entanglement orderings of the states. On the other hand, although it is not an entanglement measure and not monotonic under local operations, due to its ability of detecting multipartite entanglement, quantum Fisher information (QFI) has recently received an intense attraction generally with entanglement in the focus. In this work, we revisit the state ordering problem of general two qubit states. Generating a thousand random quantum states and performing an optimization based on local general rotations of each qubit, we calculate the maximal QFI for each state. We analyze the maximized QFI in comparison with concurrence, REE and negativity and obtain new state orderings. We show that there are pairs of states having equal maximized QFI but different values for concurrence, REE and negativity and vice versa. PMID:24957694
Liu, Tong; Green, Angela R.; Rodríguez, Luis F.; Ramirez, Brett C.; Shike, Daniel W.
2015-01-01
The number of animals required to represent the collective characteristics of a group remains a concern in animal movement monitoring with GPS. Monitoring a subset of animals from a group instead of all animals can reduce costs and labor; however, incomplete data may cause information losses and inaccuracy in subsequent data analyses. In cattle studies, little work has been conducted to determine the number of cattle within a group needed to be instrumented considering subsequent analyses. Two different groups of cattle (a mixed group of 24 beef cows and heifers, and another group of 8 beef cows) were monitored with GPS collars at 4 min intervals on intensively managed pastures and corn residue fields in 2011. The effects of subset group size on cattle movement characterization and spatial occupancy analysis were evaluated by comparing the results between subset groups and the entire group for a variety of summarization parameters. As expected, more animals yield better results for all parameters. Results show the average group travel speed and daily travel distances are overestimated as subset group size decreases, while the average group radius is underestimated. Accuracy of group centroid locations and group radii are improved linearly as subset group size increases. A kernel density estimation was performed to quantify the spatial occupancy by cattle via GPS location data. Results show animals among the group had high similarity of spatial occupancy. Decisions regarding choosing an appropriate subset group size for monitoring depend on the specific use of data for subsequent analysis: a small subset group may be adequate for identifying areas visited by cattle; larger subset group size (e.g. subset group containing more than 75% of animals) is recommended to achieve better accuracy of group movement characteristics and spatial occupancy for the use of correlating cattle locations with other environmental factors. PMID:25647571
[Varicocele and coincidental abacterial prostato-vesiculitis: negative role about the sperm output].
Vicari, Enzo; La Vignera, Sandro; Tracia, Angelo; Cardì, Francesco; Donati, Angelo
2003-03-01
To evaluate the frequency and the role of a coincidentally expressed abacterial prostato-vesiculitis (PV) on sperm output in patients with left varicocele (Vr). We evaluated 143 selected infertile patients (mean age 27 years, range 21-43), with oligo- and/or astheno- and/or teratozoospermia (OAT) subdivided in two groups. Group A included 76 patients with previous varicocelectomy and persistent OAT. Group B included 67 infertile patients (mean age 26 years, range 21-37) with OAT and not varicocelectomized. Patients with Vr and coincidental didymo-epididymal ultrasound (US) abnormalities were excluded from the study. Following rectal prostato-vesicular ultrasonography, each group was subdivided in two subsets on the basis of the absence (group A: subset Vr-/PV-; and group B: subset Vr+/PV-) or the presence of an abacterial PV (group A: subset Vr-/PV+; group B: subset Vr+/PV+). Particularly, PV was present in 47.4% and 41.8% patients of groups A and B, respectively. This coincidental pathology was ipsilateral with Vr in the 61% of the cases. Semen analysis was performed in all patients. Patients of group A showed a total sperm number significantly higher than those found in group B. In presence of PV, sperm parameters were not significantly different between matched--subsets (Vr-/PV+ vs. Vr+/PV+). In absence of PV, the sperm density, the total sperm number and the percentage of forward motility from subset with previous varicocelectomy (Vr-/PV) exhibited values significantly higher than those found in the matched--subset (Vr+/PV-). Sperm analysis alone performed in patients with left Vr is not a useful prognostic post-varicocelectomy marker. Since following varicocelectomy a lack of sperm response could mask another coincidental pathology, the identification through US scans of a possible PV may be mandatory. On the other hand, an integrated uro-andrological approach, including US scans, allows to enucleate subsets of patients with Vr alone, who will have an expected better sperm response following Vr repair.
Ordered-subsets linkage analysis detects novel Alzheimer disease loci on chromosomes 2q34 and 15q22.
Scott, William K; Hauser, Elizabeth R; Schmechel, Donald E; Welsh-Bohmer, Kathleen A; Small, Gary W; Roses, Allen D; Saunders, Ann M; Gilbert, John R; Vance, Jeffery M; Haines, Jonathan L; Pericak-Vance, Margaret A
2003-11-01
Alzheimer disease (AD) is a complex disorder characterized by a wide range, within and between families, of ages at onset of symptoms. Consideration of age at onset as a covariate in genetic-linkage studies may reduce genetic heterogeneity and increase statistical power. Ordered-subsets analysis includes continuous covariates in linkage analysis by rank ordering families by a covariate and summing LOD scores to find a subset giving a significantly increased LOD score relative to the overall sample. We have analyzed data from 336 markers in 437 multiplex (>/=2 sampled individuals with AD) families included in a recent genomic screen for AD loci. To identify genetic heterogeneity by age at onset, families were ordered by increasing and decreasing mean and minimum ages at onset. Chromosomewide significance of increases in the LOD score in subsets relative to the overall sample was assessed by permutation. A statistically significant increase in the nonparametric multipoint LOD score was observed on chromosome 2q34, with a peak LOD score of 3.2 at D2S2944 (P=.008) in 31 families with a minimum age at onset between 50 and 60 years. The LOD score in the chromosome 9p region previously linked to AD increased to 4.6 at D9S741 (P=.01) in 334 families with minimum age at onset between 60 and 75 years. LOD scores were also significantly increased on chromosome 15q22: a peak LOD score of 2.8 (P=.0004) was detected at D15S1507 (60 cM) in 38 families with minimum age at onset >/=79 years, and a peak LOD score of 3.1 (P=.0006) was obtained at D15S153 (62 cM) in 43 families with mean age at onset >80 years. Thirty-one families were contained in both 15q22 subsets, indicating that these results are likely detecting the same locus. There is little overlap in these subsets, underscoring the utility of age at onset as a marker of genetic heterogeneity. These results indicate that linkage to chromosome 9p is strongest in late-onset AD and that regions on chromosome 2q34 and 15q22 are linked to early-onset AD and very-late-onset AD, respectively.
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Nyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
1991-12-27
session. The following gives the flavour of the comments made. 17. Prototyping captures requirements. The prototype exercises requirements and allows the...can modify the data in a given sub-set. These sub-sets can be used as granules of database distribu- tion in order to simplify access control. (3
Glenn, Jordan M; Gray, Michelle; Binns, Ashley
When evaluating health in older adults, batteries of tests are typically utilized to assess functional fitness. Unfortunately, physician's visits are time-sensitive, and it may be important to develop faster methods to assess functional fitness that can be utilized in professional or clinical settings. Therefore, the purpose of this investigation was to examine the relationship of sit-to-stand (STS) power generated through the STS task with previously established measures of functional fitness, specifically strength, endurance, speed, agility, and flexibility in older adults with and without sarcopenia. This study consisted of 57 community-dwelling older adults (n = 16 males; n = 41 females). Functional fitness was assessed using the Short Physical Performance Battery (SPPB), Senior Fitness Test, handgrip, gait speed (habitual and maximal), balance, and STS power generated via the Tendo Weightlifting Analyzer. On the basis of data distribution, second-degree polynomial (quadratic) curvilinear models (lines of best fit) were applied for the relationships of 5-time STS time with average and peak power. Zero-order correlations were evaluated between STS power and all other functional fitness measures. Older adults with sarcopenia were also identified (n = 15), and relationships were reevaluated within this subset. STS power (average and peak) was significantly (P ≤ .01) correlated with physical performance measured via previously established assessments. For average power, this was observed during the senior fitness test (6-minute walk [r = 0.39], 8-ft up-and-go [r = -0.46], arm curl [r = 0.46], and chair stand [r = 0.55]), SPPB (5-time STS time [r = -0.63] and 8-ft walk [r = -0.32]), and other independent functional fitness measures (grip strength [r = 0.65] and maximal gait speed [r = -0.31]). Similar results were observed for peak power during the senior fitness test (6-minute walk [r = 0.39], 8-ft up-and-go [r = -0.46], arm curl [r = 0.45], chair stand [r = 0.52], and sit-and-reach [r = -0.27]), SPPB (5-time STS time [r = -0.60] and 8-ft walk [r = -0.33]), and other independent functional fitness measures (grip strength [r = 0.70] and maximal gait speed [r = -0.32]). Within the sarcopenic subset, for average and peak power, respectively, significant relationships were still retained for handgrip strength (r = 0.57 and r = 0.57), 6-minute walk (r = 0.55 and r = 0.61), chair stand (r = 0.76 and r = 0.81), and 5-time STS time (r = -0.76 and r = -0.80) tests. STS power generated via the STS task significantly relates to commonly administered functional fitness measures. These relationships also appear to exist when evaluating these relationships in older adults with sarcopenia. STS power may be utilized as an independent measure of functional fitness that is feasible to incorporate in clinical settings where time and space are often limiting factors.
Discrete mixture modeling to address genetic heterogeneity in time-to-event regression
Eng, Kevin H.; Hanlon, Bret M.
2014-01-01
Motivation: Time-to-event regression models are a critical tool for associating survival time outcomes with molecular data. Despite mounting evidence that genetic subgroups of the same clinical disease exist, little attention has been given to exploring how this heterogeneity affects time-to-event model building and how to accommodate it. Methods able to diagnose and model heterogeneity should be valuable additions to the biomarker discovery toolset. Results: We propose a mixture of survival functions that classifies subjects with similar relationships to a time-to-event response. This model incorporates multivariate regression and model selection and can be fit with an expectation maximization algorithm, we call Cox-assisted clustering. We illustrate a likely manifestation of genetic heterogeneity and demonstrate how it may affect survival models with little warning. An application to gene expression in ovarian cancer DNA repair pathways illustrates how the model may be used to learn new genetic subsets for risk stratification. We explore the implications of this model for censored observations and the effect on genomic predictors and diagnostic analysis. Availability and implementation: R implementation of CAC using standard packages is available at https://gist.github.com/programeng/8620b85146b14b6edf8f Data used in the analysis are publicly available. Contact: kevin.eng@roswellpark.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24532723
Search for anomalous kinematics in tt dilepton events at CDF II.
Acosta, D; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arisawa, T; Arguin, J-F; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Bacchetta, N; Bachacou, H; Badgett, W; Barbaro-Galtieri, A; Barker, G J; Barnes, V E; Barnett, B A; Baroiant, S; Barone, M; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Ben-Haim, E; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bishai, M; Blair, R E; Blocker, C; Bloom, K; Blumenfeld, B; Bocci, A; Bodek, A; Bolla, G; Bolshov, A; Booth, P S L; Bortoletto, D; Boudreau, J; Bourov, S; Brau, B; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canepa, A; Casarsa, M; Carlsmith, D; Carron, S; Carosi, R; Cavalli-Sforza, M; Castro, A; Catastini, P; Cauz, D; Cerri, A; Cerrito, L; Chapman, J; Chen, C; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Chu, M L; Chuang, S; Chung, J Y; Chung, W-H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A G; Clark, D; Coca, M; Connolly, A; Convery, M; Conway, J; Cooper, B; Cordelli, M; Cortiana, G; Cranshaw, J; Cuevas, J; Culbertson, R; Currat, C; Cyr, D; Dagenhart, D; Da Ronco, S; D'Auria, S; de Barbaro, P; De Cecco, S; De Lentdecker, G; Dell'Agnello, S; Dell'Orso, M; Demers, S; Demortier, L; Deninno, M; De Pedis, D; Derwent, P F; Dionisi, C; Dittmann, J R; Dörr, C; Doksus, P; Dominguez, A; Donati, S; Donega, M; Donini, J; D'Onofrio, M; Dorigo, T; Drollinger, V; Ebina, K; Eddy, N; Ehlers, J; Ely, R; Erbacher, R; Erdmann, M; Errede, D; Errede, S; Eusebi, R; Fang, H-C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferretti, C; Field, R D; Flanagan, G; Flaugher, B; Flores-Castillo, L R; Foland, A; Forrester, S; Foster, G W; Franklin, M; Freeman, J C; Fujii, Y; Furic, I; Gajjar, A; Gallas, A; Galyardt, J; Gallinaro, M; Garcia-Sciveres, M; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D W; Gerchtein, E; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Ginsburg, C; Giolo, K; Giordani, M; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, D; Goldstein, J; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Gresele, A; Griffiths, M; Grosso-Pilcher, C; Grundler, U; Guenther, M; Guimaraes da Costa, J; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harr, R F; Harris, R M; Hartmann, F; Hatakeyama, K; Hauser, J; Hays, C; Hayward, H; Heider, E; Heinemann, B; Heinrich, J; Hennecke, M; Herndon, M; Hill, C; Hirschhbuehl, D; Hocker, A; Hoffman, K D; Holloway, A; Hou, S; Houlden, M A; Huffman, B T; Huang, Y; Hughes, R E; Huston, J; Ikado, K; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Issever, C; Ivanov, A; Iwata, Y; Iyutin, B; James, E; Jang, D; Jarrell, J; Jeans, D; Jensen, H; Jeon, E J; Jones, M; Joo, K K; Jun, S Y; Junk, T; Kamon, T; Kang, J; Karagoz Unel, M; Karchin, P E; Kartal, S; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, M S; Kim, S B; Kim, S H; Kim, T H; Kim, Y K; King, B T; Kirby, M; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Kobayashi, H; Koehn, P; Kong, D J; Kondo, K; Konigsberg, J; Kordas, K; Korn, A; Korytov, A; Kotelnikov, K; Kotwal, A V; Kovalev, A; Kraus, J; Kravchenko, I; Kreymer, A; Kroll, J; Kruse, M; Krutelyov, V; Kuhlmann, S E; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, J; Lancaster, M; Lander, R; Lannon, K; Lath, A; Latino, G; Lauhakangas, R; Lazzizzera, I; Le, Y; Lecci, C; LeCompte, T; Lee, J; Lee, J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Li, K; Lin, C; Lin, C S; Lindgren, M; Liss, T M; Lister, A; Litvintsev, D O; Liu, T; Liu, Y; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; MacQueen, D; Madrak, R; Maeshima, K; Maksimovic, P; Malferrari, L; Manca, G; Marginean, R; Marino, C; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Matsunaga, H; Mattson, M; Mazzanti, P; McFarland, K S; McGivern, D; McIntyre, P M; McNamara, P; NcNulty, R; Mehta, A; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miller, L; Miller, R; Miller, J S; Miquel, R; Miscetti, S; Mitselmakher, G; Miyamoto, A; Miyazaki, Y; Moggi, N; Mohr, B; Moore, R; Morello, M; Movilla Fernandez, P A; Mukherjee, A; Mulhearn, M; Muller, T; Mumford, R; Munar, A; Murat, P; Nachtman, J; Nahn, S; Nakamura, I; Nakano, I; Napier, A; Napora, R; Naumov, D; Necula, V; Niell, F; Nielsen, J; Nelson, C; Nelson, T; Neu, C; Neubauer, M S; Newman-Holmes, C; Nigmanov, T; Nodulman, L; Norniella, O; Oesterberg, K; Ogawa, T; Oh, S H; Oh, Y D; Ohsugi, T; Okusawa, T; Oldeman, R; Orava, R; Orejudos, W; Pagliarone, C; Palencia, E; Paoletti, R; Papadimitriou, V; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Pauly, T; Paus, C; Pellett, D; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pitts, K T; Plager, C; Pompos, A; Pondrom, L; Pope, G; Portell, X; Poukhov, O; Prakoshyn, F; Pratt, T; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Rademachker, J; Rahaman, M A; Rakitine, A; Rappoccio, S; Ratnikov, F; Ray, H; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Rimondi, F; Rinnert, K; Ristori, L; Robertson, W J; Robson, A; Rodrigo, T; Rolli, S; Rosenson, L; Roser, R; Rossin, R; Rott, C; Russ, J; Rusu, V; Ruiz, A; Ryan, D; Saarikko, H; Sabik, S; Safonov, A; St Denis, R; Sakumoto, W K; Salamanna, G; Saltzberg, D; Sanchez, C; Sansoni, A; Santi, L; Sarkar, S; Sato, K; Savard, P; Savoy-Navarro, A; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semeria, F; Sexton-Kennedy, L; Sfiligoi, I; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Siegrist, J; Siket, M; Sill, A; Sinervo, P; Sisakyan, A; Skiba, A; Slaughter, A J; Sliwa, K; Smirnov, D; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S V; Spalding, J; Spezziga, M; Spiegel, L; Spinella, F; Spiropulu, M; Squillacioti, P; Stadie, H; Stelzer, B; Stelzer-Chilton, O; Strologas, J; Stuart, D; Sukhanov, A; Sumorok, K; Sun, H; Suzuki, T; Taffard, A; Tafirout, R; Takach, S F; Takano, H; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tanimoto, N; Tapprogge, S; Tecchio, M; Teng, P K; Terashi, K; Tesarek, R J; Tether, S; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Trkaczyk, S; Toback, D; Tollefson, K; Tomura, T; Tonelli, D; Tönnesmann, M; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tseng, J; Tsuchiya, R; Tsuno, S; Tsybychev, D; Turini, N; Turner, M; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vacavant, L; Vaiciulis, A; Varganov, A; Vataga, E; Vejcik, S; Velev, G; Veszpremi, V; Veramendi, G; Vickey, T; Vidal, R; Vila, I; Vilar, R; Vollrath, I; Volobouev, I; von der Mey, M; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wallny, R; Walter, T; Yamashita, T; Yamamoto, K; Wan, Z; Wang, M J; Wang, S M; Warburton, A; Ward, B; Waschke, S; Waters, D; Watts, T; Weber, M; Wester, W C; Whitehouse, B; Wicklund, A B; Wicklund, E; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolter, M; Worcester, M; Worm, S; Wright, T; Wu, X; Würthwein, F; Wyatt, A; Yagil, A; Yang, C; Yang, U K; Yao, W; Yeh, G P; Yi, K; Yoh, J; Yoon, P; Yorita, K; Yoshida, T; Yu, I; Yu, S; Yu, Z; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zetti, F; Zhou, J; Zsenei, A; Zucchelli, S
2005-07-08
We report on a search for anomalous kinematics of tt dilepton events in pp collisions at square root of s=1.96 TeV using 193 pb(-1) of data collected with the CDF II detector. We developed a new a priori technique designed to isolate the subset in a data sample revealing the largest deviation from standard model (SM) expectations and to quantify the significance of this departure. In the four-variable space considered, no particular subset shows a significant discrepancy, and we find that the probability of obtaining a data sample less consistent with the SM than what is observed is 1.0%-4.5%.
Practical single-photon-assisted remote state preparation with non-maximally entanglement
NASA Astrophysics Data System (ADS)
Wang, Dong; Huang, Ai-Jun; Sun, Wen-Yang; Shi, Jia-Dong; Ye, Liu
2016-08-01
Remote state preparation (RSP) and joint remote state preparation (JRSP) protocols for single-photon states are investigated via linear optical elements with partially entangled states. In our scheme, by choosing two-mode instances from a polarizing beam splitter, only the sender in the communication protocol needs to prepare an ancillary single-photon and operate the entanglement preparation process in order to retrieve an arbitrary single-photon state from a photon pair in partially entangled state. In the case of JRSP, i.e., a canonical model of RSP with multi-party, we consider that the information of the desired state is split into many subsets and in prior maintained by spatially separate parties. Specifically, with the assistance of a single-photon state and a three-photon entangled state, it turns out that an arbitrary single-photon state can be jointly and remotely prepared with certain probability, which is characterized by the coefficients of both the employed entangled state and the target state. Remarkably, our protocol is readily to extend to the case for RSP and JRSP of mixed states with the all optical means. Therefore, our protocol is promising for communicating among optics-based multi-node quantum networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MAGEE,GLEN I.
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flightmore » modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.« less
Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.
Donoho, David; Jin, Jiashun
2008-09-30
In important application fields today-genomics and proteomics are examples-selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let pi(i) denote the two-sided P-value associated with the ith feature Z-score and pi((i)) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p - pi((i)))/sqrt{i/p(1-i/p)}. We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
Higher criticism thresholding: Optimal feature selection when useful features are rare and weak
Donoho, David; Jin, Jiashun
2008-01-01
In important application fields today—genomics and proteomics are examples—selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, …, p, let πi denote the two-sided P-value associated with the ith feature Z-score and π(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π(i))/i/p(1−i/p). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT. PMID:18815365
Alternative trailer configurations for maximizing payloads
Jason D. Thompson; Dana Mitchell; John Klepac
2017-01-01
In order for harvesting contractors to stay ahead of increasing costs, it is imperative that they employ all options to maximize productivity and efficiency. Transportation can account for half the cost to deliver wood to a mill. Contractors seek to maximize truck payload to increase productivity. The Forest Operations Research Unit, Southern Research Station, USDA...
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method called square-root cubature Kalman smoother (SCKS) for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for the system identification and state estimation for the hemodynamic model inversion literature. This paper do not compare its results with Tikhonov-regularized Newton—CKF (TNF-CKF), a recent robust method which works in filtering sense.
Competitive Facility Location with Random Demands
NASA Astrophysics Data System (ADS)
Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke
2009-10-01
This paper proposes a new location problem of competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated to a stochastic programming problem, and for finding its solution, three deterministic programming problems: expectation maximizing problem, probability maximizing problem, and satisfying level maximizing problem are considered. After showing that one of their optimal solutions can be found by solving 0-1 programming problems, their solution method is proposed by improving the tabu search algorithm with strategic vibration. Efficiency of the solution method is shown by applying to numerical examples of the facility location problems.
Physical renormalization condition for de Sitter QED
NASA Astrophysics Data System (ADS)
Hayashinaka, Takahiro; Xue, She-Sheng
2018-05-01
We considered a new renormalization condition for the vacuum expectation values of the scalar and spinor currents induced by a homogeneous and constant electric field background in de Sitter spacetime. Following a semiclassical argument, the condition named maximal subtraction imposes the exponential suppression on the massive charged particle limit of the renormalized currents. The maximal subtraction changes the behaviors of the induced currents previously obtained by the conventional minimal subtraction scheme. The maximal subtraction is favored for a couple of physically decent predictions including the identical asymptotic behavior of the scalar and spinor currents, the removal of the IR hyperconductivity from the scalar current, and the finite current for the massless fermion.
The impact of depuration on mussel hepatopancreas bacteriome composition and predicted metagenome.
Rubiolo, J A; Lozano-Leon, A; Rodriguez-Souto, R; Fol Rodríguez, N; Vieytes, M R; Botana, L M
2018-07-01
Due to the rapid elimination of bacteria through normal behaviour of filter feeding and excretion, the decontamination of hazardous contaminating bacteria from shellfish is performed by depuration. This process, under conditions that maximize shellfish filtering activity, is a useful method to eliminate microorganisms from bivalves. The microbiota composition in bivalves reflects that of the environment of harvesting waters, so quite different bacteriomes would be expected in shellfish collected in different locations. Bacterial accumulation within molluscan shellfish occurs primarily in the hepatopancreas. In order to assess the effect of the depuration process on these different bacteriomes, in this work we used 16S RNA pyrosequencing and metagenome prediction to assess the impact of 15 h of depuration on the whole hepatopancreas bacteriome of mussels collected in three different locations.
Vanness, David J
2003-09-01
This paper estimates a fully structural unitary household model of employment and health insurance decisions for dual wage-earner families with children in the United States, using data from the 1987 National Medical Expenditure Survey. Families choose hours of work and the breakdown of compensation between cash wages and health insurance benefits for each wage earner in order to maximize expected utility under uncertain need for medical care. Heterogeneous demand for the employer-sponsored health insurance is thus generated directly from variations in health status and earning potential. The paper concludes by discussing the benefits of using structural models for simulating welfare effects of insurance reform relative to the costly assumptions that must be imposed for identification. Copyright 2003 John Wiley & Sons, Ltd.
Trust regions in Kriging-based optimization with expected improvement
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2016-06-01
The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
Analysis of elliptically polarized maximally entangled states for bell inequality tests
NASA Astrophysics Data System (ADS)
Martin, A.; Smirr, J.-L.; Kaiser, F.; Diamanti, E.; Issautier, A.; Alibart, O.; Frey, R.; Zaquine, I.; Tanzilli, S.
2012-06-01
When elliptically polarized maximally entangled states are considered, i.e., states having a non random phase factor between the two bipartite polarization components, the standard settings used for optimal violation of Bell inequalities are no longer adapted. One way to retrieve the maximal amount of violation is to compensate for this phase while keeping the standard Bell inequality analysis settings. We propose in this paper a general theoretical approach that allows determining and adjusting the phase of elliptically polarized maximally entangled states in order to optimize the violation of Bell inequalities. The formalism is also applied to several suggested experimental phase compensation schemes. In order to emphasize the simplicity and relevance of our approach, we also describe an experimental implementation using a standard Soleil-Babinet phase compensator. This device is employed to correct the phase that appears in the maximally entangled state generated from a type-II nonlinear photon-pair source after the photons are created and distributed over fiber channels.
NASA Astrophysics Data System (ADS)
Tornai, Martin P.; Bowsher, James E.; Archer, Caryl N.; Peter, Jörg; Jaszczak, Ronald J.; MacDonald, Lawrence R.; Patt, Bradley E.; Iwanczyk, Jan S.
2003-01-01
A novel tomographic gantry was designed, built and initially evaluated for single photon emission imaging of metabolically active lesions in the pendant breast and near chest wall. Initial emission imaging measurements with breast lesions of various uptake ratios are presented. Methods: A prototype tomograph was constructed utilizing a compact gamma camera having a field-of-view of <13×13 cm 2 with arrays of 2×2×6 mm 3 quantized NaI(Tl) scintillators coupled to position sensitive PMTs. The camera was mounted on a radially oriented support with 6 cm variable radius-of-rotation. This unit is further mounted on a goniometric cradle providing polar motion, and in turn mounted on an azimuthal rotation stage capable of indefinite vertical axis-of-rotation about the central rotation axis (RA). Initial measurements with isotopic Tc-99 m (140 keV) to evaluate the system include acquisitions with various polar tilt angles about the RA. Tomographic measurements were made of a frequency and resolution cold-rod phantom filled with aqueous Tc-99 m. Tomographic and planar measurements of 0.6 and 1.0 cm diameter fillable spheres in an available ˜950 ml hemi-ellipsoidal (uncompressed) breast phantom attached to a life-size anthropomorphic torso phantom with lesion:breast-and-body:cardiac-and-liver activity concentration ratios of 11:1:19 were compared. Various photopeak energy windows from 10-30% widths were obtained, along with a 35% scatter window below a 15% photopeak window from the list mode data. Projections with all photopeak window and camera tilt conditions were reconstructed with an ordered subsets expectation maximization (OSEM) algorithm capable of reconstructing arbitrary tomographic orbits. Results: As iteration number increased for the tomographically measured data at all polar angles, contrasts increased while signal-to-noise ratios (SNRs) decreased in the expected way with OSEM reconstruction. The rollover between contrast improvement and SNR degradation of the lesion occurred at two to three iterations. The reconstructed tomographic data yielded SNRs with or without scatter correction that were >9 times better than the planar scans. There was up to a factor of ˜2.5 increase in total primary and scatter contamination in the photopeak window with increasing tilt angle from 15° to 45°, consistent with more direct line-of-sight of myocardial and liver activity with increased camera polar angle. Conclusion: This new, ultra-compact, dedicated tomographic imaging system has the potential of providing valuable, fully 3D functional information about small, otherwise indeterminate breast lesions as an adjunct to diagnostic mammography.
Cheng, Qiang; Zhou, Hongbo; Cheng, Jie
2011-06-01
Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain local optimum instead of global optimum. Toward the selection of the globally optimal subset of features efficiently, we introduce a new selector--which we call the Fisher-Markov selector--to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including mid--dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure says that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.
Redefining Myeloid Cell Subsets in Murine Spleen
Hey, Ying-Ying; Tan, Jonathan K. H.; O’Neill, Helen C.
2016-01-01
Spleen is known to contain multiple dendritic and myeloid cell subsets, distinguishable on the basis of phenotype, function and anatomical location. As a result of recent intensive flow cytometric analyses, splenic dendritic cell (DC) subsets are now better characterized than other myeloid subsets. In order to identify and fully characterize a novel splenic subset termed “L-DC” in relation to other myeloid cells, it was necessary to investigate myeloid subsets in more detail. In terms of cell surface phenotype, L-DC were initially characterized as a CD11bhiCD11cloMHCII−Ly6C−Ly6G− subset in murine spleen. Their expression of CD43, lack of MHCII, and a low level of CD11c was shown to best differentiate L-DC by phenotype from conventional DC subsets. A complete analysis of all subsets in spleen led to the classification of CD11bhiCD11cloMHCII−Ly6CloLy6G− cells as monocytes expressing CX3CR1, CD43 and CD115. Siglec-F expression was used to identify a specific eosinophil population, distinguishable from both Ly6Clo and Ly6Chi monocytes, and other DC subsets. L-DC were characterized as a clear subset of CD11bhiCD11cloMHCII−Ly6C−Ly6G− cells, which are CD43+, Siglec-F− and CD115−. Changes in the prevalence of L-DC compared to other subsets in spleens of mutant mice confirmed the phenotypic distinction between L-DC, cDC and monocyte subsets. L-DC development in vivo was shown to occur independently of the BATF3 transcription factor that regulates cDC development, and also independently of the FLT3L and GM-CSF growth factors which drive cDC and monocyte development, so distinguishing L-DC from these commonly defined cell types. PMID:26793192
Chen, Yi-Shin
2018-01-01
Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing ‘goal’ and ‘time’ factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight. PMID:29702665
Pan, Wei; Chen, Yi-Shin
2018-01-01
Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing 'goal' and 'time' factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight.
Can Monkeys Make Investments Based on Maximized Pay-off?
Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard
2011-01-01
Animals can maximize benefits but it is not known if they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules with each partner' quality. A single Tonkean macaque succeeded in investing a maximal amount to one experimenter and a minimal amount to the other. The fact that only one of over 21 subjects learned to maximize benefits in adapting investment according to experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible. PMID:21423777
Data imputation analysis for Cosmic Rays time series
NASA Astrophysics Data System (ADS)
Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.
2017-05-01
The occurrence of missing data concerning Galactic Cosmic Rays time series (GCR) is inevitable since loss of data is due to mechanical and human failure or technical problems and different periods of operation of GCR stations. The aim of this study was to perform multiple dataset imputation in order to depict the observational dataset. The study has used the monthly time series of GCR Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% of missing data compared to observed ROME series, with 50 replicates. Then, the CLMX station as a proxy for allocation of these scenarios was used. Three different methods for monthly dataset imputation were selected: AMÉLIA II - runs the bootstrap Expectation Maximization algorithm, MICE - runs an algorithm via Multivariate Imputation by Chained Equations and MTSDI - an Expectation Maximization algorithm-based method for imputation of missing values in multivariate normal time series. The synthetic time series compared with the observed ROME series has also been evaluated using several skill measures as such as RMSE, NRMSE, Agreement Index, R, R2, F-test and t-test. The results showed that for CLMX and ROME, the R2 and R statistics were equal to 0.98 and 0.96, respectively. It was observed that increases in the number of gaps generate loss of quality of the time series. Data imputation was more efficient with MTSDI method, with negligible errors and best skill coefficients. The results suggest a limit of about 60% of missing data for imputation, for monthly averages, no more than this. It is noteworthy that CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed reconstructing 43 time series.
Komisarchik, G; Gelbstein, Y; Fuks, D
2016-11-30
Lead telluride based compounds are of great interest due to their enhanced thermoelectric transport properties. Nevertheless, the donor type impurities in this class of materials are currently mainly limited and alternative types of donor impurities are still required for optimizing the thermoelectric performance. In the current research titanium as a donor impurity in PbTe is examined. Although titanium is known to form resonant levels above the conduction band in PbTe, it does not enhance the thermo-power beyond the classical predictions. Recent experiments showed that alloying with a small amount of Ti (∼0.1 at%) gives a significant increase in the figure of merit. In the current research ab initio calculations were applied in order to correlate the reported experimental results with a thermoelectric optimization model. It was found that a Ti concentration of ∼1.4 at% in the Pb sublattice is expected to maximize the thermoelectric power factor. Using a statistical thermodynamic approach and in agreement with the previously reported appearance of a secondary intermetallic phase, the actual Ti solubility limit in PbTe is found to be ∼0.3 at%. Based on the proposed model, the mechanism for the formation of the previously observed secondary phase is attributed to phase separation reactions, characterized by a positive enthalpy of formation in the system. With extrapolation of the obtained ab initio results, it is demonstrated that lower Ti-doping concentrations than previously experimentally reported ones are expected to provide power factor values close to the maximal one, making doping with Ti a promising opportunity for the generation of highly efficient n-type PbTe-based thermoelectric materials.
Probabilistic co-adaptive brain-computer interfacing
NASA Astrophysics Data System (ADS)
Bryan, Matthew J.; Martin, Stefan A.; Cheung, Willy; Rao, Rajesh P. N.
2013-12-01
Objective. Brain-computer interfaces (BCIs) are confronted with two fundamental challenges: (a) the uncertainty associated with decoding noisy brain signals, and (b) the need for co-adaptation between the brain and the interface so as to cooperatively achieve a common goal in a task. We seek to mitigate these challenges. Approach. We introduce a new approach to brain-computer interfacing based on partially observable Markov decision processes (POMDPs). POMDPs provide a principled approach to handling uncertainty and achieving co-adaptation in the following manner: (1) Bayesian inference is used to compute posterior probability distributions (‘beliefs’) over brain and environment state, and (2) actions are selected based on entire belief distributions in order to maximize total expected reward; by employing methods from reinforcement learning, the POMDP’s reward function can be updated over time to allow for co-adaptive behaviour. Main results. We illustrate our approach using a simple non-invasive BCI which optimizes the speed-accuracy trade-off for individual subjects based on the signal-to-noise characteristics of their brain signals. We additionally demonstrate that the POMDP BCI can automatically detect changes in the user’s control strategy and can co-adaptively switch control strategies on-the-fly to maximize expected reward. Significance. Our results suggest that the framework of POMDPs offers a promising approach for designing BCIs that can handle uncertainty in neural signals and co-adapt with the user on an ongoing basis. The fact that the POMDP BCI maintains a probability distribution over the user’s brain state allows a much more powerful form of decision making than traditional BCI approaches, which have typically been based on the output of classifiers or regression techniques. Furthermore, the co-adaptation of the system allows the BCI to make online improvements to its behaviour, adjusting itself automatically to the user’s changing circumstances.
Alpha-Voltaic Sources Using Diamond as Conversion Medium
NASA Technical Reports Server (NTRS)
Patel, Jagadish U.; Fleurial, Jean-Pierre; Kolawa, Elizabeth
2006-01-01
A family of proposed miniature sources of power would exploit the direct conversion of the kinetic energy of a particles into electricity in diamond semiconductor diodes. These power sources would function over a wide range of temperatures encountered in terrestrial and outer-space environments. These sources are expected to have operational lifetimes of 10 to 20 years and energy conversion efficiencies >35 percent. A power source according to the proposal would include a pair of devices like that shown in the figure. Each device would contain Schottky and p/n diode devices made from high-band-gap, radiation-hard diamond substrates. The n and p layers in the diode portion would be doped sparsely (<1014 cm-3) in order to maximize the volume of the depletion region and thereby maximize efficiency. The diode layers would be supported by an undoped diamond substrate. The source of a particles would be a thin film of 244Cm (half-life 18 years) sandwiched between the two paired devices. The sandwich arrangement would force almost every a particle to go through the active volume of at least one of the devices. Typical a particle track lengths in the devices would range from 20 to 30 microns. The a particles would be made to stop only in the undoped substrates to prevent damage to the crystalline structures of the diode portions. The overall dimensions of a typical source are expected to be about 2 by 2 by 1 mm. Assuming an initial 244Cm mass of 20 mg, the estimated initial output of the source is 20 mW (a current of 20 mA at a potential of 1 V).
NASA Technical Reports Server (NTRS)
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L(sup 1)(Omega), where Omega is a compact subset of the set of Real numbers (exp 2). We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Omega, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
Designing Agent Collectives For Systems With Markovian Dynamics
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lawson, John W.
2004-01-01
The Collective Intelligence (COIN) framework concerns the design of collectives of agents so that as those agents strive to maximize their individual utility functions, their interaction causes a provided world utility function concerning the entire collective to be also maximized. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counter-factual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation has benefits in scenarios not involving Markovian dynamics of an agent's utility function are observable. We investigate this transformation in simulations involving both hear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low opacity (analogous to having high signal to noise) but are not factored (analogous to not being incentive compatible), reliably improve performance over that arising with factored utilities. We also present a Taylor Series method for the fully general nonlinear case.
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents’ to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
Hoan, Tran-Nhut-Khai; Hiep, Vu-Van; Koo, In-Soo
2016-03-31
This paper considers cognitive radio networks (CRNs) utilizing multiple time-slotted primary channels in which cognitive users (CUs) are powered by energy harvesters. The CUs are under the consideration that hardware constraints on radio devices only allow them to sense and transmit on one channel at a time. For a scenario where the arrival of harvested energy packets and the battery capacity are finite, we propose a scheme to optimize (i) the channel-sensing schedule (consisting of finding the optimal action (silent or active) and sensing order of channels) and (ii) the optimal transmission energy set corresponding to the channels in the sensing order for the operation of the CU in order to maximize the expected throughput of the CRN over multiple time slots. Frequency-switching delay, energy-switching cost, correlation in spectrum occupancy across time and frequency and errors in spectrum sensing are also considered in this work. The performance of the proposed scheme is evaluated via simulation. The simulation results show that the throughput of the proposed scheme is greatly improved, in comparison to related schemes in the literature. The collision ratio on the primary channels is also investigated.
Faith, Daniel P.
2015-01-01
The phylogenetic diversity measure, (‘PD’), measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672
Serial killers: ordering caspase activation events in apoptosis.
Slee, E A; Adrain, C; Martin, S J
1999-11-01
Caspases participate in the molecular control of apoptosis in several guises; as triggers of the death machinery, as regulatory elements within it, and ultimately as a subset of the effector elements of the machinery itself. The mammalian caspase family is steadily growing and currently contains 14 members. At present, it is unclear whether all of these proteases participate in apoptosis. Thus, current research in this area is focused upon establishing the repertoire and order of caspase activation events that occur during the signalling and demolition phases of cell death. Evidence is accumulating to suggest that proximal caspase activation events are typically initiated by molecules that promote caspase aggregation. As expected, distal caspase activation events are likely to be controlled by caspases activated earlier in the cascade. However, recent data has cast doubt upon the functional demarcation of caspases into signalling (upstream) and effector (downstream) roles based upon their prodomain lengths. In particular, caspase-3 may perform an important role in propagating the caspase cascade, in addition to its role as an effector caspase within the death programme. Here, we discuss the apoptosis-associated caspase cascade and the hierarchy of caspase activation events within it.
Old-fashioned responses in an updating memory task.
Ruiz, M; Elosúa, M R; Lechuga, M T
2005-07-01
Errors in a running memory task are analysed. Participants were presented with a variable-length list of items and were asked to report the last four items. It has been proposed (Morris & Jones, 1990) that this task requires two mechanisms: the temporal storage of the target set by the articulatory loop and its updating by the central executive. Two implicit assumptions in this proposal are (a) the preservation of serial order, and (b) participants' capacity to discard earlier items from the target subset as list presentation is running, and new items are appended. Order preservation within the updated target list and the inhibition of the outdated list items should imply a relatively higher rate of location errors for items from the medial positions of the target list and a lower rate of intrusion errors from the outdated and inhibited items from the pretarget positions. Contrary to these expectations, for both consonants (Experiment 1) and words (Experiment 2) we found recency effects and a relatively high rate of intrusions from the final pretarget positions, most of them from the very last. Similar effects were apparent with the embedded four-item lists for catch trials. These results are clearly at odds with the presumed updating by the central executive.
A Comparison of Seyfert 1 and 2 Host Galaxies
NASA Astrophysics Data System (ADS)
De Robertis, M.; Virani, S.
2000-12-01
Wide-field, R-band CCD data of 15 Seyfert 1 and 15 Seyfert 2 galaxies taken from the CfA survey were analysed in order to compare the properties of their host galaxies. As well, B-band images for a subset of 12 Seyfert 1s and 7 Seyfert 2s were acquired and analysed in the same way. A robust technique for decomposing the three components---nucleus, bulge and disk---was developed in order determine the structural parameters for each galaxy. In effect, the nuclear contribution was removed empirically by using a spatially nearby, high signal-to-noise ratio point source as a template. Profile fits to the bulge+disk ignored data within three seeing disks of the nucleus. Of the many parameters that were compared between Seyfert 1s and 2s, only two distributions differed at greater than the 95% confidence level for the K-S test: the magnitude of the nuclear component, and the radial color gradient outside the nucleus. The former is expected. The latter could be consistent with some proposed evolutionary models. There is some suggestion that other parameters may differ, but at a lower confidence level.
ERIC Educational Resources Information Center
Israel, Richard G.; And Others
This study compared cardio-respiratory and perceived exertion responses for four cranking rates (50, 60, 70 and 80 rpm) during a continuous maximal arm ergometry protocol in order to determine the most efficient cranking rate for maximal testing. Fifteen male volunteers from 18-30 years of age performed a continuous arm ergometry stress test in…
A supersymmetric D4 model for μ-τ symmetry
NASA Astrophysics Data System (ADS)
Adulpravitchai, A.; Blum, A.; Hagedorn, C.
2009-03-01
We construct a supersymmeterized version of the model presented by Grimus and Lavoura (GL) in \\cite{GL1} which predicts θ23 maximal and θ13 = 0 in the lepton sector. For this purpose, we extend the flavor group, which is D4 × Z2(aux) in the original model, to D4 × Z5. An additional difference is the absence of right-handed neutrinos. Despite these changes the model is the same as the GL model, since θ23 maximal and θ13 = 0 arise through the same mismatch of D4 subgroups, D2 in the charged lepton and Z2 in the neutrino sector. In our setup D4 is solely broken by gauge singlets, the flavons. We show that their vacuum structure, which leads to the prediction of θ13 and θ23, is a natural result of the scalar potential. We find that the neutrino mass matrix only allows for inverted hierarchy, if we assume a certain form of spontaneous CP violation. The quantity |mee|, measured in neutrinoless double beta decay, is nearly equal to the lightest neutrino mass m3. The Majorana phases phi1 and phi2 are restricted to a certain range for m3lesssim0.06 eV. We discuss the next-to-leading order corrections which give rise to shifts in the vacuum expectation values of the flavons. These induce deviations from maximal atmospheric mixing and vanishing θ13. It turns out that these deviations are smaller for θ23 than for θ13.
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
NASA Astrophysics Data System (ADS)
Parker, L.; Dye, R. A.; Perez, J.; Rinsland, P.
2012-12-01
Over the past decade the Atmospheric Science Data Center (ASDC) at NASA Langley Research Center has archived and distributed a variety of satellite mission and aircraft campaign data sets. These datasets posed unique challenges to the user community at large due to the sheer volume and variety of the data and the lack of intuitive features in the order tools available to the investigator. Some of these data sets also lack sufficient metadata to provide rudimentary data discovery. To meet the needs of emerging users, the ASDC addressed issues in data discovery and delivery through the use of standards in data and access methods, and distribution through appropriate portals. The ASDC is currently undergoing a refresh of its webpages and Ordering Tools that will leverage updated collection level metadata in an effort to enhance the user experience. The ASDC is now providing search and subset capability to key mission satellite data sets. The ASDC has collaborated with Science Teams to accommodate prospective science users in the climate and modeling communities. The ASDC is using a common framework that enables more rapid development and deployment of search and subset tools that provide enhanced access features for the user community. Features of the Search and Subset web application enables a more sophisticated approach to selecting and ordering data subsets by parameter, date, time, and geographic area. The ASDC has also applied key practices from satellite missions to the multi-campaign aircraft missions executed for Earth Venture-1 and MEaSUReS
Atmospheric Science Data Center
2017-10-12
... and archived at the NASA Langley Research Center Atmospheric Science Data Center (ASDC). A MISR Order and Customization Tool is ... Pool (an on-line, short-term data cache that provides a Web interface and FTP access). Specially subsetted and/or reformatted MISR data ...
Minimizing the average distance to a closest leaf in a phylogenetic tree.
Matsen, Frederick A; Gallagher, Aaron; McCoy, Connor O
2013-11-01
When performing an analysis on a collection of molecular sequences, it can be convenient to reduce the number of sequences under consideration while maintaining some characteristic of a larger collection of sequences. For example, one may wish to select a subset of high-quality sequences that represent the diversity of a larger collection of sequences. One may also wish to specialize a large database of characterized "reference sequences" to a smaller subset that is as close as possible on average to a collection of "query sequences" of interest. Such a representative subset can be useful whenever one wishes to find a set of reference sequences that is appropriate to use for comparative analysis of environmentally derived sequences, such as for selecting "reference tree" sequences for phylogenetic placement of metagenomic reads. In this article, we formalize these problems in terms of the minimization of the Average Distance to the Closest Leaf (ADCL) and investigate algorithms to perform the relevant minimization. We show that the greedy algorithm is not effective, show that a variant of the Partitioning Around Medoids (PAM) heuristic gets stuck in local minima, and develop an exact dynamic programming approach. Using this exact program we note that the performance of PAM appears to be good for simulated trees, and is faster than the exact algorithm for small trees. On the other hand, the exact program gives solutions for all numbers of leaves less than or equal to the given desired number of leaves, whereas PAM only gives a solution for the prespecified number of leaves. Via application to real data, we show that the ADCL criterion chooses chimeric sequences less often than random subsets, whereas the maximization of phylogenetic diversity chooses them more often than random. These algorithms have been implemented in publicly available software.
Differences in Mouse and Human Non-Memory B Cell Pools1
Benitez, Abigail; Weldon, Abby J.; Tatosyan, Lynnette; Velkuru, Vani; Lee, Steve; Milford, Terry-Ann; Francis, Olivia L.; Hsu, Sheri; Nazeri, Kavoos; Casiano, Carlos M.; Schneider, Rebekah; Gonzalez, Jennifer; Su, Rui-Jun; Baez, Ineavely; Colburn, Keith; Moldovan, Ioana; Payne, Kimberly J.
2014-01-01
Identifying cross-species similarities and differences in immune development and function is critical for maximizing the translational potential of animal models. Co-expression of CD21 and CD24 distinguishes transitional and mature B cell subsets in mice. Here, we validate these markers for identifying analogous subsets in humans and use them to compare the non-memory B cell pools in mice and humans, across tissues, during fetal/neonatal and adult life. Among human CD19+IgM+ B cells, the CD21/CD24 schema identifies distinct populations that correspond to T1 (transitional 1), T2 (transitional 2), FM (follicular mature), and MZ (marginal zone) subsets identified in mice. Markers specific to human B cell development validate the identity of MZ cells and the maturation status of human CD21/CD24 non-memory B cell subsets. A comparison of the non-memory B cell pools in bone marrow (BM), blood, and spleen in mice and humans shows that transitional B cells comprise a much smaller fraction in adult humans than mice. T1 cells are a major contributor to the non-memory B cell pool in mouse BM where their frequency is more than twice that in humans. Conversely, in spleen the T1:T2 ratio shows that T2 cells are proportionally ∼8 fold higher in humans than mouse. Despite the relatively small contribution of transitional B cells to the human non-memory pool, the number of naïve FM cells produced per transitional B cell is 3-6 fold higher across tissues than in mouse. These data suggest differing dynamics or mechanisms produce the non-memory B cell compartments in mice and humans. PMID:24719464
Building Capacity through Action Research Curricula Reviews
ERIC Educational Resources Information Center
Lee, Vanessa; Coombe, Leanne; Robinson, Priscilla
2015-01-01
In Australia, graduates of Master of Public Health (MPH) programmes are expected to achieve a set of core competencies, including a subset that is specifically related to Indigenous health. This paper reports on the methods utilised in a project which was designed using action research to strengthen Indigenous public health curricula within MPH…
Minimizing Expected Maximum Risk from Cyber-Attacks with Probabilistic Attack Success
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhuiyan, Tanveer H.; Nandi, Apurba; Medal, Hugh
The goal of our work is to enhance network security by generating partial cut-sets, which are a subset of edges that remove paths from initially vulnerable nodes (initial security conditions) to goal nodes (critical assets), on an attack graph given costs for cutting an edge and a limited overall budget.
Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network
Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan
2014-01-01
Video and image sensors in wireless multimedia sensor networks (WMSNs) have directed view and limited sensing angle. So the methods to solve target coverage problem for traditional sensor networks, which use circle sensing model, are not suitable for WMSNs. Based on the FoV (field of view) sensing model and FoV disk model proposed, how expected multimedia sensor covers the target is defined by the deflection angle between target and the sensor's current orientation and the distance between target and the sensor. Then target coverage optimization algorithms based on expected coverage value are presented for single-sensor single-target, multisensor single-target, and single-sensor multitargets problems distinguishingly. Selecting the orientation that sensor rotated to cover every target falling in the FoV disk of that sensor for candidate orientations and using genetic algorithm to multisensor multitargets problem, which has NP-complete complexity, then result in the approximated minimum subset of sensors which covers all the targets in networks. Simulation results show the algorithm's performance and the effect of number of targets on the resulting subset. PMID:25136667
NASA Astrophysics Data System (ADS)
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2013-10-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose (18F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.
Aldridge, Matthew D; Waddington, Wendy W; Dickson, John C; Prakash, Vineet; Ell, Peter J; Bomanji, Jamshed B
2013-11-01
A three-dimensional model-based resolution recovery (RR) reconstruction algorithm that compensates for collimator-detector response, resulting in an improvement in reconstructed spatial resolution and signal-to-noise ratio of single-photon emission computed tomography (SPECT) images, was tested. The software is said to retain image quality even with reduced acquisition time. Clinically, any improvement in patient throughput without loss of quality is to be welcomed. Furthermore, future restrictions in radiotracer supplies may add value to this type of data analysis. The aims of this study were to assess improvement in image quality using the software and to evaluate the potential of performing reduced time acquisitions for bone and parathyroid SPECT applications. Data acquisition was performed using the local standard SPECT/CT protocols for 99mTc-hydroxymethylene diphosphonate bone and 99mTc-methoxyisobutylisonitrile parathyroid SPECT imaging. The principal modification applied was the acquisition of an eight-frame gated data set acquired using an ECG simulator with a fixed signal as the trigger. This had the effect of partitioning the data such that the effect of reduced time acquisitions could be assessed without conferring additional scanning time on the patient. The set of summed data sets was then independently reconstructed using the RR software to permit a blinded assessment of the effect of acquired counts upon reconstructed image quality as adjudged by three experienced observers. Data sets reconstructed with the RR software were compared with the local standard processing protocols; filtered back-projection and ordered-subset expectation-maximization. Thirty SPECT studies were assessed (20 bone and 10 parathyroid). The images reconstructed with the RR algorithm showed improved image quality for both full-time and half-time acquisitions over local current processing protocols (P<0.05). The RR algorithm improved image quality compared with local processing protocols and has been introduced into routine clinical use. SPECT acquisitions are now acquired at half of the time previously required. The method of binning the data can be applied to any other camera system to evaluate the reduction in acquisition time for similar processes. The potential for dose reduction is also inherent with this approach.
Bowen, Spencer L; Byars, Larry G; Michel, Christian J; Chonde, Daniel B; Catana, Ciprian
2013-10-21
Kinetic parameters estimated from dynamic (18)F-fluorodeoxyglucose ((18)F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting (18)F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.
Varrone, Andrea; Dickson, John C; Tossici-Bolt, Livia; Sera, Terez; Asenbaum, Susanne; Booij, Jan; Kapucu, Ozlem L; Kluge, Andreas; Knudsen, Gitte M; Koulibaly, Pierre Malick; Nobili, Flavio; Pagani, Marco; Sabri, Osama; Vander Borght, Thierry; Van Laere, Koen; Tatsch, Klaus
2013-01-01
Dopamine transporter (DAT) imaging with [(123)I]FP-CIT (DaTSCAN) is an established diagnostic tool in parkinsonism and dementia. Although qualitative assessment criteria are available, DAT quantification is important for research and for completion of a diagnostic evaluation. One critical aspect of quantification is the availability of normative data, considering possible age and gender effects on DAT availability. The aim of the European Normal Control Database of DaTSCAN (ENC-DAT) study was to generate a large database of [(123)I]FP-CIT SPECT scans in healthy controls. SPECT data from 139 healthy controls (74 men, 65 women; age range 20-83 years, mean 53 years) acquired in 13 different centres were included. Images were reconstructed using the ordered-subset expectation-maximization algorithm without correction (NOACSC), with attenuation correction (AC), and with both attenuation and scatter correction using the triple-energy window method (ACSC). Region-of-interest analysis was performed using the BRASS software (caudate and putamen), and the Southampton method (striatum). The outcome measure was the specific binding ratio (SBR). A significant effect of age on SBR was found for all data. Gender had a significant effect on SBR in the caudate and putamen for the NOACSC and AC data, and only in the left caudate for the ACSC data (BRASS method). Significant effects of age and gender on striatal SBR were observed for all data analysed with the Southampton method. Overall, there was a significant age-related decline in SBR of between 4 % and 6.7 % per decade. This study provides a large database of [(123)I]FP-CIT SPECT scans in healthy controls across a wide age range and with balanced gender representation. Higher DAT availability was found in women than in men. An average age-related decline in DAT availability of 5.5 % per decade was found for both genders, in agreement with previous reports. The data collected in this study may serve as a reference database for nuclear medicine centres and for clinical trials using [(123)I]FP-CIT SPECT as the imaging marker.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, F; Shandong Cancer Hospital and Insititute, Jinan, Shandong; Bowsher, J
2014-06-01
Purpose: PET imaging with F18-FDG is utilized for treatment planning, treatment assessment, and prognosis. A region of interest (ROI) encompassing the tumor may be determined on the PET image, often by a threshold T on the PET standard uptake values (SUVs). Several studies have shown prognostic value for relevant ROI properties including maximum SUV value (SUVmax), metabolic tumor volume (MTV), and total glycolytic activity (TGA). The choice of threshold T may affect mean SUV value (SUVmean), MTV, and TGA. Recently spatial resolution modeling (SRM) has been introduced on many PET systems. SRM may also affect these ROI properties. The purposemore » of this work is to investigate the relative influence of SRM and threshold choice T on SUVmean, MTV, TGA, and SUVmax. Methods: For 9 anal cancer patients, 18F-FDG PET scans were performed prior to treatment. PET images were reconstructed by 2 iterations of Ordered Subsets Expectation Maximization (OSEM), with and without SRM. ROI contours were generated by 5 different SUV threshold values T: 2.5, 3.0, 30%, 40%, and 50% of SUVmax. Paired-samples t tests were used to compare SUVmean, MTV, and TGA (a) for SRM on versus off and (b) between each pair of threshold values T. SUVmax was also compared for SRM on versus off. Results: For almost all (57/60) comparisons of 2 different threshold values, SUVmean, MTV, and TGA showed statistically significant variation. For comparison of SRM on versus off, there were no statistically significant changes in SUVmax and TGA, but there were statistically significant changes in MTV for T=2.5 and T=3.0 and in SUVmean for all T. Conclusion: The near-universal statistical significance of threshold choice T suggests that, regarding harmonization across sites, threshold choice may be a greater concern than choice of SRM. However, broader study is warranted, e.g. other iterations of OSEM should be considered.« less
Shang, Kun; Cui, Bixiao; Ma, Jie; Shuai, Dongmei; Liang, Zhigang; Jansen, Floris; Zhou, Yun; Lu, Jie; Zhao, Guoguang
2017-08-01
Hybrid positron emission tomography/magnetic resonance (PET/MR) imaging is a new multimodality imaging technology that can provide structural and functional information simultaneously. The aim of this study was to investigate the effects of the time-of-flight (TOF) and point-spread function (PSF) on small lesions observed in PET/MR images from clinical patient image sets. This study evaluated 54 small lesions in 14 patients who had undergone 18 F-fluorodeoxyglucose (FDG) PET/MR. Lesions up to 30mm in diameter were included. The PET data were reconstructed with a baseline ordered-subsets expectation-maximization (OSEM) algorithm, OSEM+PSF, OSEM+TOF and OSEM+TOF+PSF. PET image quality and small lesions were visually evaluated and scored by a 3-point scale. A quantitative analysis was then performed using the mean and maximum standardized uptake value (SUV) of the small lesions (SUV mean and SUV max ). The lesions were divided into two groups according to the long-axis diameter and the location respectively and evaluated with each reconstruction algorithm. We also evaluated the background signal by analyzing the SUV liver . OSEM+TOF+PSF provided the highest value and OSEM+TOF or PSF showed a higher value than OSEM for the visual assessment and quantitative analysis. The combination of TOF and PSF increased the SUV mean by 26.6% and the SUV max by 30.0%. The SUV liver was not influenced by PSF or TOF. For the OSEM+TOF+PSF model, the change in SUV mean and SUV max for lesions <10mm in diameter was 31.9% and 35.8%, and 24.5% and 27.6% for lesions 10-30mm in diameter, respectively. The abdominal lesions obtained the higher SUV than those of chest on the images with TOF and/or PSF. Application of TOF and PSF significantly increased the SUV of small lesions in hybrid PET/MR images, potentially improving small lesion detectability. Copyright © 2017 Elsevier B.V. All rights reserved.
Kidera, Daisuke; Kihara, Ken; Akamatsu, Go; Mikasa, Shohei; Taniguchi, Takafumi; Tsutsui, Yuji; Takeshita, Toshiki; Maebatake, Akira; Miwa, Kenta; Sasaki, Masayuki
2016-02-01
The aim of this study was to quantitatively evaluate the edge artifacts in PET images reconstructed using the point-spread function (PSF) algorithm at different sphere-to-background ratios of radioactivity (SBRs). We used a NEMA IEC body phantom consisting of six spheres with 37, 28, 22, 17, 13 and 10 mm in inner diameter. The background was filled with (18)F solution with a radioactivity concentration of 2.65 kBq/mL. We prepared three sets of phantoms with SBRs of 16, 8, 4 and 2. The PET data were acquired for 20 min using a Biograph mCT scanner. The images were reconstructed with the baseline ordered subsets expectation maximization (OSEM) algorithm, and with the OSEM + PSF correction model (PSF). For the image reconstruction, the number of iterations ranged from one to 10. The phantom PET image analyses were performed by a visual assessment of the PET images and profiles, a contrast recovery coefficient (CRC), which is the ratio of SBR in the images to the true SBR, and the percent change in the maximum count between the OSEM and PSF images (Δ % counts). In the PSF images, the spheres with a diameter of 17 mm or larger were surrounded by a dense edge in comparison with the OSEM images. In the spheres with a diameter of 22 mm or smaller, an overshoot appeared in the center of the spheres as a sharp peak in the PSF images in low SBR. These edge artifacts were clearly observed in relation to the increase of the SBR. The overestimation of the CRC was observed in 13 mm spheres in the PSF images. In the spheres with a diameter of 17 mm or smaller, the Δ % counts increased with an increasing SBR. The Δ % counts increased to 91 % in the 10-mm sphere at the SBR of 16. The edge artifacts in the PET images reconstructed using the PSF algorithm increased with an increasing SBR. In the small spheres, the edge artifact was observed as a sharp peak at the center of spheres and could result in overestimation.
Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung
2014-05-01
The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality on a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with a B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at center and at 8 cm offset from the center in transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare image quality of B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution was <3.65 mm with warm background activity. % contrast and % BV with True X reconstruction were higher than those with the OSEM reconstruction algorithm without PSF modeling. In addition, the RC with True X reconstruction was higher than that with the FBP method and the OSEM without PSF modeling method on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on microPET R4. SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45 % and its % contrast was significantly improved compared to those with the conventional OSEM without PSF modeling reconstruction algorithm. The noise level was higher than that with the other reconstruction algorithm. Therefore, True X reconstruction should be used with caution when quantifying PET data.
Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang
2015-04-01
PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods has been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity of reconstructed radioactivity between linearity of measured and true concentrations were also assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured with NEMA standard by using a 1 mm3 sized 18F point source. Image quality was assessed in terms of NU, RC and SOR. To measure the effect of reconstruction algorithms and filters, data was reconstructed using FBP, 3D reprojection algorithm (3DRP), ordered subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, image quality phantom filled with 18F was used using FBP, OSEM and MAP (β =1.5 & 5 × 10-5). The highest achievable volumetric resolution was 2.31 mm3 and the highest RCs were obtained when OSEM 2D was used. SOR was 4.87% for air and 3.97% for water, obtained OSEM 2D reconstruction was used. The measured radioactivity of reconstruction image was proportional to the injected one for radioactivity below 16 MBq/ml when FBP or OSEM 2D reconstruction methods were used. By contrast, when the MAP reconstruction method was used, activity of reconstruction image increased proportionally, regardless of the amount of injected radioactivity. When OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with true injected radioactivity for radioactivity <16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and highest RC among all the tested methods and yields a linear relation between the measured and true concentrations for radioactivity Our data collectively showed that OSEM 2D reconstruction method provides quantitatively accurate reconstructed PET data results.
High-resolution brain SPECT imaging by combination of parallel and tilted detector heads.
Suzuki, Atsuro; Takeuchi, Wataru; Ishitsu, Takafumi; Morimoto, Yuichi; Kobashi, Keiji; Ueno, Yuichiro
2015-10-01
To improve the spatial resolution of brain single-photon emission computed tomography (SPECT), we propose a new brain SPECT system in which the detector heads are tilted towards the rotation axis so that they are closer to the brain. In addition, parallel detector heads are used to obtain the complete projection data set. We evaluated this parallel and tilted detector head system (PT-SPECT) in simulations. In the simulation study, the tilt angle of the detector heads relative to the axis was 45°. The distance from the collimator surface of the parallel detector heads to the axis was 130 mm. The distance from the collimator surface of the tilted detector heads to the origin on the axis was 110 mm. A CdTe semiconductor panel with a 1.4 mm detector pitch and a parallel-hole collimator were employed in both types of detector head. A line source phantom, cold-rod brain-shaped phantom, and cerebral blood flow phantom were evaluated. The projection data were generated by forward-projection of the phantom images using physics models, and Poisson noise at clinical levels was applied to the projection data. The ordered-subsets expectation maximization algorithm with physics models was used. We also evaluated conventional SPECT using four parallel detector heads for the sake of comparison. The evaluation of the line source phantom showed that the transaxial FWHM in the central slice for conventional SPECT ranged from 6.1 to 8.5 mm, while that for PT-SPECT ranged from 5.3 to 6.9 mm. The cold-rod brain-shaped phantom image showed that conventional SPECT could visualize up to 8-mm-diameter rods. By contrast, PT-SPECT could visualize up to 6-mm-diameter rods in upper slices of a cerebrum. The cerebral blood flow phantom image showed that the PT-SPECT system provided higher resolution at the thalamus and caudate nucleus as well as at the longitudinal fissure of the cerebrum compared with conventional SPECT. PT-SPECT provides improved image resolution at not only upper but also at central slices of the cerebrum.
Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J
2011-05-21
Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Chuan, E-mail: chuan.huang@stonybrookmedicine.edu; Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115; Departments of Radiology, Psychiatry, Stony Brook Medicine, Stony Brook, New York 11794
2015-02-15
Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PETmore » using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded similar image quality as those obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction. Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide relatively accurate motion fields and yield tMR-based motion corrected PET images with similar image quality as those reconstructed using fully sampled tMR data. The reduction of tMR acquisition time makes it more compatible with routine clinical cardiac PET-MR studies.« less
Maximal sfermion flavour violation in super-GUTs
Ellis, John; Olive, Keith A.; Velasco-Sevilla, Liliana
2016-10-20
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses m 0 specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses m 1/2, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to m 1/2 and generation independent. In this case, the input scalar masses m 0 may violate flavour maximally, amore » scenario we call MaxSFV, and there is no supersymmetric flavour problem. As a result, we illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity« less
The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.
Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B
2016-08-01
We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Speeded Reaching Movements around Invisible Obstacles
Hudson, Todd E.; Wolfe, Uta; Maloney, Laurence T.
2012-01-01
We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (who chooses motor strategies maximizing expected gain) using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions. PMID:23028276
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K -means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K -means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although lots of filtering algorithms have been put forward in recent years, most of them suffer from parameters setting or thresholds adjusting, which will be time-consuming and reduce the degree of automation of the algorithm. To overcome this problem, this paper proposed a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed based on an assumption that point clouds are seen as a mixture of Gaussian models. The separation of ground points and non-ground points from point clouds can be replaced as a separation of a mixed Gaussian model. Expectation-maximization (EM) is applied for realizing the separation. EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihoods of each point belonging to ground or object can be computed. After several iterations, point clouds can be labelled as the component with a larger likelihood. Furthermore, intensity information was also utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm can obtain a 4.48 % total error which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
[The European countries confronting cancer: a set of indicators assessing public health status].
Borella, Laurent
2008-11-01
We now know that efficient public policies for cancer control need to be global and take into account each and all the factors involved: economics and level of development, style of life and risk factors, access to screening, effectiveness of the care-providing system. A very simple scorecard is proposed, based on publicized public health indicators, which allows a comparison between European countries. We extracted 49 indicators from public databases and literature concerning 22 European countries. We made correlation calculations in order to identify relevant indicators from which a global score was extracted. Using a hierarchical clustering method we were then able to identify subsets of homogeneous countries. A 7 indicator scorecard was drawn up: national gross product, scientific production, smoking rate, breast screening participating rate, all cancer mortality rate (male population), 5 years relative survival for colorectal cancer and life expectancy at birth. A global score shows: 1) the better positioned countries: Switzerland, Sweden, Finland and France; 2) the countries where cancer control is less effective: Estonia, Hungary, Poland and Slovakia. Three subsets of countries with a fairly similar profile were identified: a high level of means and results group; a high level of means but a medium level of results group; and a low level of means and results group. This work emphasizes dramatically heterogeneous situations between countries. A follow-up, using a reduced but regularly updated set of public health indicators, would help induce an active European policy for cancer control.
Optimization of Multiple Related Negotiation through Multi-Negotiation Network
NASA Astrophysics Data System (ADS)
Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi
In this paper, a Multi-Negotiation Network (MNN) and a Multi- Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular, state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not optimally execute MRN in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use a MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility by considering all related negotiations are dynamically calculated based on a MNN. Secondly, by employing a MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, through comparing expected utilities between all possible policies to conduct MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful end scenario, and avoid unnecessary losses in an unsuccessful end scenario.
Action research and millennials: Improving pedagogical approaches to encourage critical thinking.
Erlam, Gwen; Smythe, Liz; Wright-St Clair, Valerie
2018-02-01
This article examines the effects of intergenerational diversity on pedagogical practice in nursing education. While generational cohorts are not entirely homogenous, certain generational features do emerge. These features may require alternative approaches in educational design in order to maximize learning for millennial students. Action research is employed with undergraduate millennial nursing students (n=161) who are co-researchers in that they are asked for changes in current simulation environments which will improve their learning in the areas of knowledge acquisition, skill development, critical thinking, and communication. These changes are put into place and a re-evaluation of the effectiveness of simulation progresses through three action cycles. Millennials, due to a tendency for risk aversion, may gravitate towards more supportive learning environments which allow for free access to educators. This tendency is mitigated by the educator modeling expected behaviors, followed by student opportunity to repeat the behavior. Millennials tend to prefer to work in teams, see tangible improvement, and employ strategies to improve inter-professional communication. This research highlights the need for nurse educators working in simulation to engage in critical discourse regarding the adequacy and effectiveness of current pedagogy informing simulation design. Pedagogical approaches which maximize repetition, modeling, immersive feedback, and effective communication tend to be favored by millennial students. Copyright © 2017 Elsevier Ltd. All rights reserved.
Carballo, Matilde; Baldenegro, Fabiana; Bollatti, Fedra; Peretti, Alfredo V; Aisenberg, Anita
2017-07-01
Behavioral plasticity allows individuals to reversibly respond to short-term variations in their ecological and social environment in order to maximize their fitness. Allocosa senex is a burrow-digging spider that inhabits the sandy coasts of South America. This species shows a reversal in typical sex roles expected in spiders: females are wanderers that visit males at their burrows and initiate courtship. They prefer males with long burrows for mating, and males prefer virgin over mated females. We tested whether female sexual rejection induced males to enlarge their burrows and if female reproductive status affected males' responses. We exposed males who had constructed burrows to: a) virgin females or b) mated females, (n=16 for each category). If female rejection occurred, we repeated the trial 48h later with the same female. As control, we maintained a group of males without female exposure (unexposed group, n=32). Rejected males enlarged their burrows more frequently and burrows were longer compared to unexposed males. However, frequency and length of enlargement did not differ according to female reproductive status. Males of A. senex showed plasticity in digging behavior in response to the availability of females, as a way to maximize the possibilities of future mating. Copyright © 2017 Elsevier B.V. All rights reserved.
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
Nonstoichiometric defects in GaAs and the EL2 bandwagon
NASA Astrophysics Data System (ADS)
Lagowski, J.; Gatos, H. C.
1985-09-01
In the present paper, an attempt is made to formulate a common framework for a discussion of nonstoichiometric defects, especially EL2 and dislocations. An outline is provided of the most important settled and unsettled issues, taking into account not only fundamental interests, but also urgent needs in advancing IC technology. Attention is given to stoichiometry-controlled compensation, the expected role of melt stoichiometry in electrical conductivity for the basic atomic disorders, defect equilibria-dislocations and EL2, and current issues pertaining to the identification of EL2. It is concluded that nonstoichiometric defects play a critical role in the electronic properties of GaAs and its electronic applications. Very significant progress has been recently made in learning how to adjust melt stoichiometry in order to maximize its beneficial effects and minimize its detrimental ones.
Nonstoichiometric defects in GaAs and the EL2 bandwagon
NASA Technical Reports Server (NTRS)
Lagowski, J.; Gatos, H. C.
1985-01-01
In the present paper, an attempt is made to formulate a common framework for a discussion of nonstoichiometric defects, especially EL2 and dislocations. An outline is provided of the most important settled and unsettled issues, taking into account not only fundamental interests, but also urgent needs in advancing IC technology. Attention is given to stoichiometry-controlled compensation, the expected role of melt stoichiometry in electrical conductivity for the basic atomic disorders, defect equilibria-dislocations and EL2, and current issues pertaining to the identification of EL2. It is concluded that nonstoichiometric defects play a critical role in the electronic properties of GaAs and its electronic applications. Very significant progress has been recently made in learning how to adjust melt stoichiometry in order to maximize its beneficial effects and minimize its detrimental ones.
Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni; Montagnese, Sara
2017-01-01
A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists in calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh area (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, while the loss/removal of the abdominal sensor resulted in an error of 0.25°C MAE, with the greater impact on the quality of the reconstruction. This information may help researchers/clinicians: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints.
Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R.; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni
2017-01-01
A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists in calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh area (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, while the loss/removal of the abdominal sensor resulted in an error of 0.25°C MAE, with the greater impact on the quality of the reconstruction. This information may help researchers/clinicians: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints. PMID:28666029
Network clustering and community detection using modulus of families of loops.
Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina
2017-01-01
We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.
Ogawa, Takeshi; Calbet, Jose A L; Honda, Yasushi; Fujii, Naoto; Nishiyasu, Takeshi
2010-11-01
To test the hypothesis that maximal exercise pulmonary ventilation (VE max) is a limiting factor affecting maximal oxygen uptake (VO2 max) in moderate hypobaric hypoxia (H), we examined the effect of breathing a helium-oxygen gas mixture (He-O(2); 20.9% O(2)), which would reduce air density and would be expected to increase VE max. Fourteen healthy young male subjects performed incremental treadmill running tests to exhaustion in normobaric normoxia (N; sea level) and in H (atmospheric pressure equivalent to 2,500 m above sea level). These exercise tests were carried out under three conditions [H with He-O(2), H with normal air and N] in random order. VO2 max and arterial oxy-hemoglobin saturation (SaO(2)) were, respectively, 15.2, 7.5 and 4.0% higher (all p < 0.05) with He-O(2) than with normal air (VE max, 171.9 ± 16.1 vs. 150.1 ± 16.9 L/min; VO2 max, 52.50 ± 9.13 vs. 48.72 ± 5.35 mL/kg/min; arterial oxyhemoglobin saturation (SaO(2)), 79 ± 3 vs. 76 ± 3%). There was a linear relationship between the increment in VE max and the increment in VO2 max in H (r = 0.77; p < 0.05). When subjects were divided into two groups based on their VO2 max, both groups showed increased VE max and SaO(2) in H with He-O(2), but VO2 max was increased only in the high VO2 max group. These findings suggest that in acute moderate hypobaric hypoxia, air-flow resistance can be a limiting factor affecting VE max; consequently, VO2 max is limited in part by VE max especially in subjects with high VO2 max.
Optimal Resource Allocation in Library Systems
ERIC Educational Resources Information Center
Rouse, William B.
1975-01-01
Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
Acceptable regret in medical decision making.
Djulbegovic, B; Hozo, I; Schwartz, A; McMasters, K M
1999-09-01
When faced with medical decisions involving uncertain outcomes, the principles of decision theory hold that we should select the option with the highest expected utility to maximize health over time. Whether a decision proves right or wrong can be learned only in retrospect, when it may become apparent that another course of action would have been preferable. This realization may bring a sense of loss, or regret. When anticipated regret is compelling, a decision maker may choose to violate expected utility theory to avoid regret. We formulate a concept of acceptable regret in medical decision making that explicitly introduces the patient's attitude toward loss of health due to a mistaken decision into decision making. In most cases, minimizing expected regret results in the same decision as maximizing expected utility. However, when acceptable regret is taken into consideration, the threshold probability below which we can comfortably withhold treatment is a function only of the net benefit of the treatment, and the threshold probability above which we can comfortably administer the treatment depends only on the magnitude of the risks associated with the therapy. By considering acceptable regret, we develop new conceptual relations that can help decide whether treatment should be withheld or administered, especially when the diagnosis is uncertain. This may be particularly beneficial in deciding what constitutes futile medical care.
Maximization, learning, and economic behavior
Erev, Ido; Roth, Alvin E.
2014-01-01
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Sato, Atsuko; Morone, Mieko; Azuma, Yutaka
2011-01-01
At Tohoku Pharmaceutical University, problem-based learning (PBL) tutorials were incorporated into "prescription analysis" and "case analysis" for fifth-year students in 2010 with the following objectives: ① application and confirmation of acquired knowledge and skills, and acquisition of ② communication ability, ③ presentation ability, ④ cooperativeness through groupwork, and ⑤ information collecting ability. In the present study, we conducted a questionnaire survey on a total of 158 fifth-year students in order to investigate the educational benefits of PBL tutorials. The results showed that the above five objectives of PBL tutorials were being achieved, and confirmed the educational benefits expected of PBL tutorials. In contrast, it was found to be necessary to improve the contents of scenarios and lectures, time allocation regarding schedules, the learning environment, the role of tutors, and other matters. In order to maximize the educational benefits of PBL tutorials, it will be necessary in the future to continue to conduct surveys on students and make improvements to the curriculum based on survey results.
UniEnt: uniform entropy model for the dynamics of a neuronal population
NASA Astrophysics Data System (ADS)
Hernandez Lahme, Damian; Nemenman, Ilya
Sensory information and motor responses are encoded in the brain in a collective spiking activity of a large number of neurons. Understanding the neural code requires inferring statistical properties of such collective dynamics from multicellular neurophysiological recordings. Questions of whether synchronous activity or silence of multiple neurons carries information about the stimuli or the motor responses are especially interesting. Unfortunately, detection of such high order statistical interactions from data is especially challenging due to the exponentially large dimensionality of the state space of neural collectives. Here we present UniEnt, a method for the inference of strengths of multivariate neural interaction patterns. The method is based on the Bayesian prior that makes no assumptions (uniform a priori expectations) about the value of the entropy of the observed multivariate neural activity, in contrast to popular approaches that maximize this entropy. We then study previously published multi-electrode recordings data from salamander retina, exposing the relevance of higher order neural interaction patterns for information encoding in this system. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.
Distribution of a Generic Mission Planning and Scheduling Toolkit for Astronomical Spacecraft
NASA Technical Reports Server (NTRS)
Kleiner, Steven C.
1996-01-01
Work is progressing as outlined in the proposal for this contract. A working planning and scheduling system has been documented and packaged and made available to the WIRE Small Explorer group at JPL, the FUSE group at JHU, the NASA/GSFC Laboratory for Astronomy and Solar Physics and the Advanced Planning and Scheduling Branch at STScI. The package is running successfully on the WIRE computer system. It is expected that the WIRE will reuse significant portions of the SWAS code in its system. This scheduling system itself was tested successfully against the spacecraft hardware in December 1995. A fully automatic scheduling module has been developed and is being added to the toolkit. In order to maximize reuse, the code is being reorganized during the current build into object-oriented class libraries. A paper describing the toolkit has been written and is included in the software distribution. We have experienced interference between the export and production versions of the toolkit. We will be requesting permission to reprogram funds in order to purchase a standalone PC onto which to offload the export version.
Lorentz-Symmetry Test at Planck-Scale Suppression With a Spin-Polarized 133Cs Cold Atom Clock.
Pihan-Le Bars, H; Guerlin, C; Lasseri, R-D; Ebran, J-P; Bailey, Q G; Bize, S; Khan, E; Wolf, P
2018-06-01
We present the results of a local Lorentz invariance (LLI) test performed with the 133 Cs cold atom clock FO2, hosted at SYRTE. Such a test, relating the frequency shift between 133 Cs hyperfine Zeeman substates with the Lorentz violating coefficients of the standard model extension (SME), has already been realized by Wolf et al. and led to state-of-the-art constraints on several SME proton coefficients. In this second analysis, we used an improved model, based on a second-order Lorentz transformation and a self-consistent relativistic mean field nuclear model, which enables us to extend the scope of the analysis from purely proton to both proton and neutron coefficients. We have also become sensitive to the isotropic coefficient , another SME coefficient that was not constrained by Wolf et al. The resulting limits on SME coefficients improve by up to 13 orders of magnitude the present maximal sensitivities for laboratory tests and reach the generally expected suppression scales at which signatures of Lorentz violation could appear.
Rasmussen, Simon Mylius; Bilgrau, Anders Ellern; Schmitz, Alexander; Falgreen, Steffen; Bergkvist, Kim Steve; Tramm, Anette Mai; Baech, John; Jacobsen, Chris Ladefoged; Gaihede, Michael; Kjeldsen, Malene Krag; Bødker, Julie Støve; Dybkaer, Karen; Bøgsted, Martin; Johnsen, Hans Erik
2015-01-01
Cryopreservation is an acknowledged procedure to store vital cells for future biomarker analyses. Few studies, however, have analyzed the impact of the cryopreservation on phenotyping. We have performed a controlled comparison of cryopreserved and fresh cellular aliquots prepared from individual healthy donors. We studied circulating B-cell subset membrane markers and global gene expression, respectively by multiparametric flow cytometry and microarray data. Extensive statistical analysis of the generated data tested the concept that "overall, there are no phenotypic differences between cryopreserved and fresh B-cell subsets." Subsequently, we performed an uncontrolled comparison of tonsil tissue samples. By multiparametric flow analysis, we documented no significant changes following cryopreservation of subset frequencies or membrane intensity for the differentiation markers CD19, CD20, CD22, CD27, CD38, CD45, and CD200. By gene expression profiling following cryopreservation, across all samples, only 16 out of 18708 genes were significantly up or down regulated, including FOSB, KLF4, RBP7, ANXA1 or CLC, DEFA3, respectively. Implementation of cryopreserved tissue in our research program allowed us to present a performance analysis, by comparing cryopreserved and fresh tonsil tissue. As expected, phenotypic differences were identified, but to an extent that did not affect the performance of the cryopreserved tissue to generate specific B-cell subset associated gene signatures and assign subset phenotypes to independent tissue samples. We have confirmed our working concept and illustrated the usefulness of vital cryopreserved cell suspensions for phenotypic studies of the normal B-cell hierarchy; however, storage procedures need to be delineated by tissue-specific comparative analysis. © 2014 Clinical Cytometry Society.
Rasmussen, Simon Mylius; Bilgrau, Anders Ellern; Schmitz, Alexander; Falgreen, Steffen; Bergkvist, Kim Steve; Tramm, Anette Mai; Baech, John; Jacobsen, Chris Ladefoged; Gaihede, Michael; Kjeldsen, Malene Krag; Bødker, Julie Støve; Dybkaer, Karen; Bøgsted, Martin; Johnsen, Hans Erik
2014-09-20
Background Cryopreservation is an acknowledged procedure to store vital cells for future biomarker analyses. Few studies, however, have analyzed the impact of the cryopreservation on phenotyping. Methods We have performed a controlled comparison of cryopreserved and fresh cellular aliquots prepared from individual healthy donors. We studied circulating B-cell subset membrane markers and global gene expression, respectively by multiparametric flow cytometry and microarray data. Extensive statistical analysis of the generated data tested the concept that "overall, there are phenotypic differences between cryopreserved and fresh B-cell subsets". Subsequently, we performed a consecutive uncontrolled comparison of tonsil tissue samples. Results By multiparametric flow analysis, we documented no significant changes following cryopreservation of subset frequencies or membrane intensity for the differentiation markers CD19, CD20, CD22, CD27, CD38, CD45, and CD200. By gene expression profiling following cryopreservation, across all samples, only 16 out of 18708 genes were significantly up or down regulated, including FOSB, KLF4, RBP7, ANXA1 or CLC, DEFA3, respectively. Implementation of cryopreserved tissue in our research program allowed us to present a performance analysis, by comparing cryopreserved and fresh tonsil tissue. As expected, phenotypic differences were identified, but to an extent that did not affect the performance of the cryopreserved tissue to generate specific B-cell subset associated gene signatures and assign subset phenotypes to independent tissue samples. Conclusions We have confirmed our working concept and illustrated the usefulness of vital cryopreserved cell suspensions for phenotypic studies of the normal B-cell hierarchy; however, storage procedures need to be delineated by tissue specific comparative analysis. © 2014 Clinical Cytometry Society. Copyright © 2014 Clinical Cytometry Society.
Inheritance of allozyme variants in bishop pine (Pinus muricata D.Don)
Constance I. Millar
1985-01-01
Isozyme phenotypes are described for 45 structural loci and I modifier locus in bishop pine (Pinus muricata D. Don,) and segregation data are presented for a subset of 31 polymorphic loci from 19 enzyme systems. All polymorphic loci had alleles that segregated within single-focus Mendelian expectations, although one pair of alleles at each of three...
Phenomenology of maximal and near-maximal lepton mixing
NASA Astrophysics Data System (ADS)
Gonzalez-Garcia, M. C.; Peña-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.
2001-01-01
The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x=tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ɛ≡1-2 sin2 θex and quantify the present experimental status for \\|ɛ\\|<0.3. We show that both probabilities and observables depend on ɛ quadratically when effects are due to vacuum oscillations and they depend on ɛ linearly if matter effects dominate. The most important information on νe mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10-8 eV2<~Δm2<~2×10-7 eV2. In the mass ranges Δm2>~1.5×10-5 eV2 and 4×10-10 eV2<~Δm2<~2×10-7 eV2 the full interval \\|ɛ\\|<0.3 is allowed within ~4σ (99.995% CL) We suggest ways to measure ɛ in future experiments. The observable that is most sensitive to ɛ is the rate [NC]/[CC] in combination with the day-night asymmetry in the SNO detector. With theoretical and statistical uncertainties, the expected accuracy after 5 years is Δɛ~0.07. We also discuss the effects of maximal and near-maximal νe mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.
Mitral stenosis and hypertrophic obstructive cardiomyopathy: An unusual combination.
Hong, Joonhwa; Schaff, Hartzell V; Ommen, Steve R; Abel, Martin D; Dearani, Joseph A; Nishimura, Rick A
2016-04-01
Systolic anterior motion of mitral valve (MV) leaflets is a main pathophysiologic feature of left ventricular outflow tract (LVOT) obstruction in hypertrophic obstructive cardiomyopathy. Thus, restricted leaflet motion that occurs with MV stenosis might be expected to minimize outflow tract obstruction related to systolic anterior motion. From January 1993 through February 2015, we performed MV replacement and septal myectomy in 12 patients with mitral stenosis and hypertrophic obstructive cardiomyopathy at Mayo Clinic Hospital in Rochester, Minn. Preoperative data, echocardiographic images, operative records, and postoperative outcomes were reviewed. Mean (standard deviation) age was 70 (7.6) years. Preoperative mean (standard deviation) maximal LVOT pressure gradient was 75.0 (35.0) mm Hg; MV gradient was 13.7 (2.8) mm Hg. From echocardiographic images, 4 mechanisms of outflow tract obstruction were identified: systolic anterior motion without severe limitation in MV leaflet excursion, severe limitation in MV leaflet mobility with systolic anterior motion at the tip of the MV anterior leaflet, septal encroachment toward the LVOT, and MV displacement toward the LVOT by calcification. Mitral valve replacement and extended septal myectomy relieved outflow gradients in all patients, with no death or serious morbidity. Patients with mitral stenosis and hypertrophic obstructive cardiomyopathy have multiple LVOT obstruction mechanisms, and MV replacement may not be adequate treatment. We favor septal myectomy and MV replacement in this complex subset of hypertrophic obstructive cardiomyopathy. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
If every cycle of a graph is chordal length greater than three then it contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’more » parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.« less
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
Effects of Regularisation Priors and Anatomical Partial Volume Correction on Dynamic PET Data
NASA Astrophysics Data System (ADS)
Caldeira, Liliana L.; Silva, Nuno da; Scheins, Jürgen J.; Gaens, Michaela E.; Shah, N. Jon
2015-08-01
Dynamic PET provides temporal information about the tracer uptake. However, each PET frame has usually low statistics, resulting in noisy images. Furthermore, PET images suffer from partial volume effects. The goal of this study is to understand the effects of prior regularisation on dynamic PET data and subsequent anatomical partial volume correction. The Median Root Prior (MRP) regularisation method was used in this work during reconstruction. The quantification and noise in image-domain and time-domain (time-activity curves) as well as the impact on parametric images is assessed and compared with Ordinary Poisson Ordered Subset Expectation Maximisation (OP-OSEM) reconstruction with and without Gaussian filter. This study shows the improvement in PET images and time-activity curves (TAC) in terms of noise as well as in the parametric images when using prior regularisation in dynamic PET data. Anatomical partial volume correction improves the TAC and consequently, parametric images. Therefore, the use of MRP with anatomical partial volume correction is of interest for dynamic PET studies.
Archive Management of NASA Earth Observation Data to Support Cloud Analysis
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark
2017-01-01
NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can simplify the execution of such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring of the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly. Reviewed by Mark McInerney ESDIS Deputy Project Manager.
Assessing park-and-ride impacts.
DOT National Transportation Integrated Search
2010-06-01
Efficient transportation systems are vital to quality-of-life and mobility issues, and an effective park-and-ride (P&R) : network can help maximize system performance. Properly placed P&R facilities are expected to result in fewer calls : to increase...
Three faces of node importance in network epidemiology: Exact results for small graphs
NASA Astrophysics Data System (ADS)
Holme, Petter
2017-12-01
We investigate three aspects of the importance of nodes with respect to susceptible-infectious-removed (SIR) disease dynamics: influence maximization (the expected outbreak size given a set of seed nodes), the effect of vaccination (how much deleting nodes would reduce the expected outbreak size), and sentinel surveillance (how early an outbreak could be detected with sensors at a set of nodes). We calculate the exact expressions of these quantities, as functions of the SIR parameters, for all connected graphs of three to seven nodes. We obtain the smallest graphs where the optimal node sets are not overlapping. We find that (i) node separation is more important than centrality for more than one active node, (ii) vaccination and influence maximization are the most different aspects of importance, and (iii) the three aspects are more similar when the infection rate is low.
Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro
2015-01-01
Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute 39% increase in estimated global production potentials to increasing cropping intensities and 30% to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification. PMID:26558436
Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro
2015-11-12
Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute 39% increase in estimated global production potentials to increasing cropping intensities and 30% to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification.
Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp; Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp; Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it
2013-02-15
We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand itmore » considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).« less
Schrempf, Alexandra; Giehr, Julia; Röhrl, Ramona; Steigleder, Sarah; Heinze, Jürgen
2017-04-01
One of the central tenets of life-history theory is that organisms cannot simultaneously maximize all fitness components. This results in the fundamental trade-off between reproduction and life span known from numerous animals, including humans. Social insects are a well-known exception to this rule: reproductive queens outlive nonreproductive workers. Here, we take a step forward and show that under identical social and environmental conditions the fecundity-longevity trade-off is absent also within the queen caste. A change in reproduction did not alter life expectancy, and even a strong enforced increase in reproductive efforts did not reduce residual life span. Generally, egg-laying rate and life span were positively correlated. Queens of perennial social insects thus seem to maximize at the same time two fitness parameters that are normally negatively correlated. Even though they are not immortal, they best approach a hypothetical "Darwinian demon" in the animal kingdom.
WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization
NASA Astrophysics Data System (ADS)
Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry
2018-01-01
We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission with a fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based off altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
A practical approach to virtualization in HEP
NASA Astrophysics Data System (ADS)
Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.
2011-01-01
In the attempt to solve the problem of processing data coming from LHC experiments at CERN at a rate of 15PB per year, for almost a decade the High Enery Physics (HEP) community has focused its efforts on the development of the Worldwide LHC Computing Grid. This generated large interest and expectations promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm which aims to solve problems on a similar scale and in equally seamless way as it was expected in the idealized Grid approach. A key enabling technology behind Cloud computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we shall first briefly compare Grid and Cloud computing paradigms and then summarize the results of the R&D activity pointing out where and how virtualization technology could be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.
[Measures to reduce lighting-related energy use and costs at hospital nursing stations].
Su, Chiu-Ching; Chen, Chen-Hui; Chen, Shu-Hwa; Ping, Tsui-Chu
2011-06-01
Hospitals have long been expected to deliver medical services in an environment that is comfortable and bright. This expectation keeps hospital energy demand stubbornly high and energy costs spiraling due to escalating utility fees. Hospitals must identify appropriate strategies to control electricity usage in order to control operating costs effectively. This paper proposes several electricity saving measures that both support government policies aimed at reducing global warming and help reduce energy consumption at the authors' hospital. The authors held educational seminars, established a website teaching energy saving methods, maximized facility and equipment use effectiveness (e.g., adjusting lamp placements, power switch and computer saving modes), posted signs promoting electricity saving, and established a regularized energy saving review mechanism. After implementation, average nursing staff energy saving knowledge had risen from 71.8% to 100% and total nursing station electricity costs fell from NT$16,456 to NT$10,208 per month, representing an effective monthly savings of 37.9% (NT$6,248). This project demonstrated the ability of a program designed to slightly modify nursing staff behavior to achieve effective and meaningful results in reducing overall electricity use.
In Search of the Neural Circuits of Intrinsic Motivation
Kaplan, Frederic; Oudeyer, Pierre-Yves
2007-01-01
Children seem to acquire new know-how in a continuous and open-ended manner. In this paper, we hypothesize that an intrinsic motivation to progress in learning is at the origins of the remarkable structure of children's developmental trajectories. In this view, children engage in exploratory and playful activities for their own sake, not as steps toward other extrinsic goals. The central hypothesis of this paper is that intrinsically motivating activities correspond to expected decrease in prediction error. This motivation system pushes the infant to avoid both predictable and unpredictable situations in order to focus on the ones that are expected to maximize progress in learning. Based on a computational model and a series of robotic experiments, we show how this principle can lead to organized sequences of behavior of increasing complexity characteristic of several behavioral and developmental patterns observed in humans. We then discuss the putative circuitry underlying such an intrinsic motivation system in the brain and formulate two novel hypotheses. The first one is that tonic dopamine acts as a learning progress signal. The second is that this progress signal is directly computed through a hierarchy of microcortical circuits that act both as prediction and metaprediction systems. PMID:18982131
Sample size determination for bibliographic retrieval studies
Yao, Xiaomei; Wilczynski, Nancy L; Walter, Stephen D; Haynes, R Brian
2008-01-01
Background Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. Methods The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset. Results For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals were adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). Conclusion The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach. PMID:18823538
Observation of hard scattering in photoproduction at HERA
NASA Astrophysics Data System (ADS)
Derrick, M.; Krakauer, D.; Magill, S.; Musgrave, B.; Repond, J.; Sugano, K.; Stanek, R.; Talaga, R. L.; Thron, J.; Arzarello, F.; Ayed, R.; Barbagli, G.; Bari, G.; Basile, M.; Bellagamba, L.; Boscherini, D.; Bruni, G.; Bruni, P.; Cara Romeo, G.; Castellini, G.; Chiarini, M.; Cifarelli, L.; Cindolo, F.; Ciralli, F.; Contin, A.; D'Auria, S.; Del Papa, C.; Frasconi, F.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Lin, Q.; Lisowski, B.; Maccarrone, G.; Margotti, A.; Massam, T.; Nania, R.; Nemoz, C.; Palmonari, F.; Sartorelli, G.; Timellini, R.; Zamora Garcia, Y.; Zichichi, A.; Bargende, A.; Barreiro, F.; Crittenden, J.; Dabbous, H.; Desch, K.; Diekmann, B.; Geerts, M.; Geitz, G.; Gutjahr, B.; Hartmann, H.; Hartmann, J.; Haun, D.; Heinloth, K.; Hilger, E.; Jakob, H.-P.; Kramarczyk, S.; Kückes, M.; Mass, A.; Mengel, S.; Mollen, J.; Müsch, H.; Paul, E.; Schattevoy, R.; Schneider, B.; Schneider, J.-L.; Wedemeyer, R.; Cassidy, A.; Cussans, D. G.; Dyce, N.; Fawcett, H. F.; Foster, B.; Gilmore, R.; Heath, G. P.; Lancaster, M.; Llewellyn, T. J.; Malos, J.; Morgado, C. J. S.; Tapper, R. J.; Wilson, S. S.; Rau, R. R.; Bernstein, A.; Caldwell, A.; Gialas, I.; Parsons, J. A.; Ritz, S.; Sciulli, F.; Straub, P. B.; Wai, L.; Yang, S.; Barillari, T.; Schioppa, M.; Susinno, G.; Burkot, W.; Chwastowski, J.; Dwuraźny, A.; Eskreys, A.; Nizioł, B.; Jakubowski, Z.; Piotrzkowski, K.; Zachara, M.; Zawiejski, L.; Borzemski, P.; Eskreys, K.; Jeleń, K.; Kisielewska, D.; Kowalski, T.; Kulka, J.; Rulikowska-Zarȩbska, E.; Suszycki, L.; Zajaç, J.; Kȩdzierski, T.; Kotański, A.; Przybycień, M.; Bauerdick, L. A. T.; Behrens, U.; Bienlein, J. K.; Coldewey, C.; Dannemann, A.; Dierks, K.; Dorth, W.; Drews, G.; Erhard, P.; Flasiński, M.; Fleck, I.; Fürtjes, A.; Gläser, R.; Göttlicher, P.; Haas, T.; Hagge, L.; Hain, W.; Hasell, D.; Hultschig, H.; Jahnen, G.; Joos, P.; Kasemann, M.; Klanner, R.; Koch, W.; Kötz, U.; Kowalski, H.; Labs, J.; Ladage, A.; Löhr, B.; Löwe, M.; Lüke, D.; Mainusch, J.; Manczak, O.; Momayezi, M.; Nickel, S.; Notz, D.; Park, I.; Pösnecker, K.-U.; Rohde, M.; Ros, E.; Schneekloth, U.; Schroeder, J.; Schulz, W.; Selonke, F.; Tscheslog, E.; Tsurugai, T.; Turkot, F.; Vogel, W.; Woeniger, T.; Wolf, G.; Youngman, C.; Grabosch, H. J.; Leich, A.; Meyer, A.; Rethfeldt, C.; Schlenstedt, S.; Casalbuoni, R.; De Curtis, S.; Dominici, D.; Francescato, A.; Nuti, M.; Pelfer, P.; Anzivino, G.; Casaccia, R.; Laakso, I.; De Pasquale, S.; Qian, S.; Votano, L.; Bamberger, A.; Freidhof, A.; Poser, T.; Söldner-Rembold, S.; Theisen, G.; Trefzger, T.; Brook, N. H.; Bussey, P. J.; Doyle, A. T.; Forbes, J. R.; Jamieson, V. A.; Raine, C.; Saxon, D. H.; Gloth, G.; Holm, U.; Kammerlocher, H.; Krebs, B.; Neumann, T.; Wick, K.; Hofmann, A.; Kröger, W.; Krüger, J.; Lohrmann, E.; Milewski, J.; Nakahata, M.; Pavel, N.; Poelz, G.; Salomon, R.; Seidman, A.; Schott, W.; Wiik, B. H.; Zetsche, F.; Bacon, T. C.; Butterworth, I.; Markou, C.; McQuillan, D.; Miller, D. B.; Mobayyen, M. M.; Prinias, A.; Vorvolakos, A.; Bienz, T.; Kreutzmann, H.; Mallik, U.; McCliment, E.; Roco, M.; Wang, M. Z.; Cloth, P.; Filges, D.; Chen, L.; Imlay, R.; Kartik, S.; Kim, H.-J.; McNeil, R. R.; Metcalf, W.; Cases, G.; Hervás, L.; Labarga, L.; del Peso, J.; Roldán, J.; Terrón, J.; de Trocóniz, J. F.; Ikraiam, F.; Mayer, J. K.; Smith, G. R.; Corriveau, F.; Gilkinson, D. J.; Hanna, D. S.; Hung, L. W.; Mitchell, J. W.; Patel, P. M.; Sinclair, L. E.; Stairs, D. G.; Ullmann, R.; Bashindzhagyan, G. L.; Ermolov, P. F.; Golubkov, Y. A.; Kuzmin, V. A.; Kuznetsov, E. N.; Savin, A. A.; Voronin, A. G.; Zotov, N. P.; Bentvelsen, S.; Dake, A.; Engelen, J.; de Jong, P.; de Jong, S.; de Kamps, M.; Kooijman, P.; Kruse, A.; van der Lugt, H.; O'Dell, V.; Straver, J.; Tenner, A.; Tiecke, H.; Uijterwaal, H.; Vermeulen, J.; Wiggers, L.; de Wolf, E.; van Woudenberg, R.; Yoshida, R.; Bylsma, B.; Durkin, L. S.; Li, C.; Ling, T. Y.; McLean, K. W.; Murray, W. N.; Park, S. K.; Romanowski, T. A.; Seidlein, R.; Blair, G. A.; Butterworth, J. M.; Byrne, A.; Cashmore, R. J.; Cooper-Sarkar, A. M.; Devenish, R. C. E.; Gingrich, D. M.; Hallam-Baker, P. M.; Harnew, N.; Khatri, T.; Long, K. R.; Luffman, P.; McArthur, I.; Morawitz, P.; Nash, J.; Smith, S. J. P.; Roocroft, N. C.; Wilson, F. F.; Abbiendi, G.; Brugnera, R.; Carlin, R.; Dal Corso, F.; De Giorgi, M.; Dosselli, U.; Fanin, C.; Gasparini, F.; Limentani, S.; Morandin, M.; Posocco, M.; Stanco, L.; Stroili, R.; Voci, C.; Lim, J. N.; Oh, B. Y.; Whitmore, J.; Bonori, M.; Contino, U.; D'Agostini, G.; Guida, M.; Iori, M.; Mari, S.; Marini, G.; Mattioli, M.; Monaldi, D.; Nigro, A.; Hart, J. C.; McCubbin, N. A.; Shah, T. P.; Short, T. L.; Barberis, E.; Cartiglia, N.; Heusch, C.; Hubbard, B.; Leslie, J.; Ng, J. S. T.; O'Shaughnessy, K.; Sadrozinski, H. F.; Seiden, A.; Badura, E.; Biltzinger, J.; Chaves, H.; Rost, M.; Seifert, R. J.; Walenta, A. H.; Weihs, W.; Zech, G.; Dagan, S.; Heifetz, R.; Levy, A.; Zer-Zion, D.; Hasegawa, T.; Hazumi, M.; Ishii, T.; Kasai, S.; Kuze, M.; Nagasawa, Y.; Nakao, M.; Okuno, H.; Tokushuku, K.; Watanabe, T.; Yamada, S.; Chiba, M.; Hamatsu, R.; Hirose, T.; Kitamura, S.; Nagayama, S.; Nakamitsu, Y.; Arneodo, M.; Costa, M.; Ferrero, M. I.; Lamberti, L.; Maselli, S.; Peroni, C.; Solano, A.; Staiano, A.; Dardo, M.; Bailey, D. C.; Bandyopadhyay, D.; Benard, F.; Bhadra, S.; Brkic, M.; Burow, B. D.; Chlebana, F. S.; Crombie, M. B.; Hartner, G. F.; Levman, G. M.; Martin, J. F.; Orr, R. S.; Prentice, J. D.; Sampson, C. R.; Stairs, G. G.; Teuscher, R. J.; Yoon, T.-S.; Bullock, F. W.; Catterall, C. D.; Giddings, J. C.; Jones, T. W.; Khan, A. M.; Lane, J. B.; Makkar, P. L.; Shaw, D.; Shulman, J.; Blankenship, K.; Kochocki, J.; Lu, B.; Mo, L. W.; Charchuła, K.; Ciborowski, J.; Gajewski, J.; Grzelak, G.; Kasprzak, M.; Krzyżanowski, M.; Muchorowski, K.; Nowak, R. J.; Pawlak, J. M.; Stojda, K.; Stopczyński, A.; Szwed, R.; Tymieniecka, T.; Walczak, R.; Wróblewski, A. K.; Zakrzewski, J. A.; Żarnecki, A. F.; Adamus, M.; Abramowicz, H.; Eisenberg, Y.; Glasman, C.; Karshon, U.; Montag, A.; Revel, D.; Shapira, A.; Ali, I.; Behrens, B.; Camerini, U.; Dasu, S.; Fordham, C.; Foudas, C.; Goussiou, A.; Lomperski, M.; Loveless, R. J.; Nylander, P.; Ptacek, M.; Reeder, D. D.; Smith, W. H.; Silverstein, S.; Frisken, W. R.; Furutani, K. M.; Iga, Y.; ZEUS Collaboration
1992-12-01
We report a study of electron proton collisions at very low Q2, corresponding to virtual photoproduction at centre of mass energies in the range 100-295 GeV. The distribution in transverse energy of the observed hadrons is much harder than can be explained by soft processes. Some of the events show back-to-back two-jet production at the rate and with the characteristics expected from hard two-body scattering. A subset of the two-jet events have energy in the electron direction consistent with that expected from the photon remnant in resolved photon processes.
A dual-route approach to orthographic processing.
Grainger, Jonathan; Ziegler, Johannes C
2011-01-01
In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fined-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes).
A Dual-Route Approach to Orthographic Processing
Grainger, Jonathan; Ziegler, Johannes C.
2011-01-01
In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fined-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes). PMID:21716577
A demonstration of an intelligent control system for a reusable rocket engine
NASA Technical Reports Server (NTRS)
Musgrave, Jeffrey L.; Paxson, Daniel E.; Litt, Jonathan S.; Merrill, Walter C.
1992-01-01
An Intelligent Control System for reusable rocket engines is under development at NASA Lewis Research Center. The primary objective is to extend the useful life of a reusable rocket propulsion system while minimizing between flight maintenance and maximizing engine life and performance through improved control and monitoring algorithms and additional sensing and actuation. This paper describes current progress towards proof-of-concept of an Intelligent Control System for the Space Shuttle Main Engine. A subset of identifiable and accommodatable engine failure modes is selected for preliminary demonstration. Failure models are developed retaining only first order effects and included in a simplified nonlinear simulation of the rocket engine for analysis under closed loop control. The engine level coordinator acts as an interface between the diagnostic and control systems, and translates thrust and mixture ratio commands dictated by mission requirements, and engine status (health) into engine operational strategies carried out by a multivariable control. Control reconfiguration achieves fault tolerance if the nominal (healthy engine) control cannot. Each of the aforementioned functionalities is discussed in the context of an example to illustrate the operation of the system in the context of a representative failure. A graphical user interface allows the researcher to monitor the Intelligent Control System and engine performance under various failure modes selected for demonstration.
Statistical physics of medical diagnostics: Study of a probabilistic model.
Mashaghi, Alireza; Ramezanpour, Abolfazl
2018-03-01
We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.
Statistical physics of medical diagnostics: Study of a probabilistic model
NASA Astrophysics Data System (ADS)
Mashaghi, Alireza; Ramezanpour, Abolfazl
2018-03-01
We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.
Efficient Approximation Algorithms for Weighted $b$-Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Arif; Pothen, Alex; Mostofa Ali Patwary, Md.
2016-01-01
We describe a half-approximation algorithm, b-Suitor, for computing a b-Matching of maximum weight in a graph with weights on the edges. b-Matching is a generalization of the well-known Matching problem in graphs, where the objective is to choose a subset of M edges in the graph such that at most a specified number b(v) of edges in M are incident on each vertex v. Subject to this restriction we maximize the sum of the weights of the edges in M. We prove that the b-Suitor algorithm computes the same b-Matching as the one obtained by the greedy algorithm for themore » problem. We implement the algorithm on serial and shared-memory parallel processors, and compare its performance against a collection of approximation algorithms that have been proposed for the Matching problem. Our results show that the b-Suitor algorithm outperforms the Greedy and Locally Dominant edge algorithms by one to two orders of magnitude on a serial processor. The b-Suitor algorithm has a high degree of concurrency, and it scales well up to 240 threads on a shared memory multiprocessor. The b-Suitor algorithm outperforms the Locally Dominant edge algorithm by a factor of fourteen on 16 cores of an Intel Xeon multiprocessor.« less
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-26
... Proposed Rule Change Amending NYSE Arca Equities Rule 7.31(h) To Add a PL Select Order Type July 20, 2012...(h) to add a PL Select Order type. The proposed rule change was published for comment in the Federal... security at a specified, undisplayed price. The PL Select Order would be a subset of the PL Order that...
Cecchinato, A; De Marchi, M; Gallo, L; Bittante, G; Carnier, P
2009-10-01
The aims of this study were to investigate variation of milk coagulation property (MCP) measures and their predictions obtained by mid-infrared spectroscopy (MIR), to investigate the genetic relationship between measures of MCP and MIR predictions, and to estimate the expected response from a breeding program focusing on the enhancement of MCP using MIR predictions as indicator traits. Individual milk samples were collected from 1,200 Brown Swiss cows (progeny of 50 artificial insemination sires) reared in 30 herds located in northern Italy. Rennet coagulation time (RCT, min) and curd firmness (a(30), mm) were measured using a computerized renneting meter. The MIR data were recorded over the spectral range of 4,000 to 900 cm(-1). Prediction models for RCT and a(30) based on MIR spectra were developed using partial least squares regression. A cross-validation procedure was carried out. The procedure involved the partition of available data into 2 subsets: a calibration subset and a test subset. The calibration subset was used to develop a calibration equation able to predict individual MCP phenotypes using MIR spectra. The test subset was used to validate the calibration equation and to estimate heritabilities and genetic correlations for measured MCP and their predictions obtained from MIR spectra and the calibration equation. Point estimates of heritability ranged from 0.30 to 0.34 and from 0.22 to 0.24 for RCT and a(30), respectively. Heritability estimates for MCP predictions were larger than those obtained for measured MCP. Estimated genetic correlations between measures and predictions of RCT were very high and ranged from 0.91 to 0.96. Estimates of the genetic correlation between measures and predictions of a(30) were large and ranged from 0.71 to 0.87. Predictions of MCP provided by MIR techniques can be proposed as indicator traits for the genetic enhancement of MCP. The expected response of RCT and a(30) ensured by the selection using MIR predictions as indicator traits was equal to or slightly less than the response achievable through a single measurement of these traits. Breeding strategies for the enhancement of MCP based on MIR predictions as indicator traits could be easily and immediately implemented for dairy cattle populations where routine acquisition of spectra from individual milk samples is already performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon Craig; Sallaberry, Cedric J. PhD.; .)
2007-04-01
A deep geologic repository for high level radioactive waste is under development by the U.S. Department of Energy at Yucca Mountain (YM), Nevada. As mandated in the Energy Policy Act of 1992, the U.S. Environmental Protection Agency (EPA) has promulgated public health and safety standards (i.e., 40 CFR Part 197) for the YM repository, and the U.S. Nuclear Regulatory Commission has promulgated licensing standards (i.e., 10 CFR Parts 2, 19, 20, etc.) consistent with 40 CFR Part 197 that the DOE must establish are met in order for the YM repository to be licensed for operation. Important requirements in 40more » CFR Part 197 and 10 CFR Parts 2, 19, 20, etc. relate to the determination of expected (i.e., mean) dose to a reasonably maximally exposed individual (RMEI) and the incorporation of uncertainty into this determination. This presentation describes and illustrates how general and typically nonquantitive statements in 40 CFR Part 197 and 10 CFR Parts 2, 19, 20, etc. can be given a formal mathematical structure that facilitates both the calculation of expected dose to the RMEI and the appropriate separation in this calculation of aleatory uncertainty (i.e., randomness in the properties of future occurrences such as igneous and seismic events) and epistemic uncertainty (i.e., lack of knowledge about quantities that are poorly known but assumed to have constant values in the calculation of expected dose to the RMEI).« less
Predictive uncertainty in auditory sequence processing
Hansen, Niels Chr.; Pearce, Marcus T.
2014-01-01
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
Faith, Daniel P
2015-02-19
The phylogenetic diversity measure, ('PD'), measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Genetic heterogeneity in Finnish hereditary prostate cancer using ordered subset analysis
Simpson, Claire L; Cropp, Cheryl D; Wahlfors, Tiina; George, Asha; Jones, MaryPat S; Harper, Ursula; Ponciano-Jackson, Damaris; Tammela, Teuvo; Schleutker, Johanna; Bailey-Wilson, Joan E
2013-01-01
Prostate cancer (PrCa) is the most common male cancer in developed countries and the second most common cause of cancer death after lung cancer. We recently reported a genome-wide linkage scan in 69 Finnish hereditary PrCa (HPC) families, which replicated the HPC9 locus on 17q21-q22 and identified a locus on 2q37. The aim of this study was to identify and to detect other loci linked to HPC. Here we used ordered subset analysis (OSA), conditioned on nonparametric linkage to these loci to detect other loci linked to HPC in subsets of families, but not the overall sample. We analyzed the families based on their evidence for linkage to chromosome 2, chromosome 17 and a maximum score using the strongest evidence of linkage from either of the two loci. Significant linkage to a 5-cM linkage interval with a peak OSA nonparametric allele-sharing LOD score of 4.876 on Xq26.3-q27 (ΔLOD=3.193, empirical P=0.009) was observed in a subset of 41 families weakly linked to 2q37, overlapping the HPCX1 locus. Two peaks that were novel to the analysis combining linkage evidence from both primary loci were identified; 18q12.1-q12.2 (OSA LOD=2.541, ΔLOD=1.651, P=0.03) and 22q11.1-q11.21 (OSA LOD=2.395, ΔLOD=2.36, P=0.006), which is close to HPC6. Using OSA allows us to find additional loci linked to HPC in subsets of families, and underlines the complex genetic heterogeneity of HPC even in highly aggregated families. PMID:22948022
Maximizing investments in work zone safety in Oregon : final report.
DOT National Transportation Integrated Search
2011-05-01
Due to the federal stimulus program and the 2009 Jobs and Transportation Act, the Oregon Department of Transportation (ODOT) anticipates that a large increase in highway construction will occur. There is the expectation that, since transportation saf...
ERIC Educational Resources Information Center
Lashway, Larry
1997-01-01
Principals today are expected to maximize their schools' performances with limited resources while also adopting educational innovations. This synopsis reviews five recent publications that offer some important insights about the nature of principals' leadership strategies: (1) "Leadership Styles and Strategies" (Larry Lashway); (2) "Facilitative…
Densest local sphere-packing diversity. II. Application to three dimensions
NASA Astrophysics Data System (ADS)
Hopkins, Adam B.; Stillinger, Frank H.; Torquato, Salvatore
2011-01-01
The densest local packings of N three-dimensional identical nonoverlapping spheres within a radius Rmin(N) of a fixed central sphere of the same size are obtained for selected values of N up to N=1054. In the predecessor to this paper [A. B. Hopkins, F. H. Stillinger, and S. Torquato, Phys. Rev. EPLEEE81063-651X10.1103/PhysRevE.81.041305 81, 041305 (2010)], we described our method for finding the putative densest packings of N spheres in d-dimensional Euclidean space Rd and presented those packings in R2 for values of N up to N=348. Here we analyze the properties and characteristics of the densest local packings in R3 and employ knowledge of the Rmin(N), using methods applicable in any d, to construct both a realizability condition for pair correlation functions of sphere packings and an upper bound on the maximal density of infinite sphere packings. In R3, we find wide variability in the densest local packings, including a multitude of packing symmetries such as perfect tetrahedral and imperfect icosahedral symmetry. We compare the densest local packings of N spheres near a central sphere to minimal-energy configurations of N+1 points interacting with short-range repulsive and long-range attractive pair potentials, e.g., 12-6 Lennard-Jones, and find that they are in general completely different, a result that has possible implications for nucleation theory. We also compare the densest local packings to finite subsets of stacking variants of the densest infinite packings in R3 (the Barlow packings) and find that the densest local packings are almost always most similar as measured by a similarity metric, to the subsets of Barlow packings with the smallest number of coordination shells measured about a single central sphere, e.g., a subset of the fcc Barlow packing. Additionally, we observe that the densest local packings are dominated by the dense arrangement of spheres with centers at distance Rmin(N). In particular, we find two “maracas” packings at N=77 and N=93, each consisting of a few unjammed spheres free to rattle within a “husk” composed of the maximal number of spheres that can be packed with centers at respective Rmin(N).
Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource-Roundup-using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
Salinity effect on the maximal growth temperature of some bacteria isolated from marine enviroments.
Stanley, S O; Morita, R Y
1968-01-01
Salinity of the growth medium was found to have a marked effect on the maximal growth temperature of four bacteria isolated from marine sources. Vibrio marinus MP-1 had a maximal growth temperature of 21.2 C at a salinity of 35% and a maximal growth temperature of 10.5 C at a salinity of 7%, the lowest salinity at which it would grow. This effect was shown to be due to the presence of various cations in the medium. The order of effectiveness of cations in restoring the normal maximal growth temperature, when added to dilute seawater, was Na(+) > Li(+) > Mg(++) > K(+) > Rb(+) > NH(4) (+). The anions tested, with the exception of SO(4)=, had no marked effect on the maximal growth temperature response. In a completely defined medium, the highest maximal growth temperature was 20.0 C at 0.40 m NaCl. A decrease in the maximal growth temperature was observed at both low and high concentrations of NaCl.
Steinke, Dirk; Salzburger, Walter; Meyer, Axel
2006-06-01
The power of comparative phylogenomic analyses also depends on the amount of data that are included in such studies. We used expressed sequence tags (ESTs) from fish model species as a proof of principle approach in order to test the reliability of using ESTs for phylogenetic inference. As expected, the robustness increases with the amount of sequences. Although some progress has been made in the elucidation of the phylogeny of teleosts, relationships among the main lineages of the derived fish (Euteleostei) remain poorly defined and are still debated. We performed a phylogenomic analysis of a set of 42 of orthologous genes from 10 available fish model systems from seven different orders (Salmoniformes, Siluriformes, Cypriniformes, Tetraodontiformes, Cyprinodontiformes, Beloniformes, and Perciformes) of euteleostean fish to estimate divergence times and evolutionary relationships among those lineages. All 10 fish species serve as models for developmental, aquaculture, genomic, and comparative genetic studies. The phylogenetic signal and the strength of the contribution of each of the 42 orthologous genes were estimated with randomly chosen data subsets. Our study revealed a molecular phylogeny of higher-level relationships of derived teleosts, which indicates that the use of multiple genes produces robust phylogenies, a finding that is expected to apply to other phylogenetic issues among distantly related taxa. Our phylogenomic analyses confirm that the euteleostean superorders Ostariophysi and Acanthopterygii are monophyletic and the Protacanthopterygii and Ostariophysi are sister clades. In addition, and contrary to the traditional phylogenetic hypothesis, our analyses determine that killifish (Cyprinodontiformes), medaka (Beloniformes), and cichlids (Perciformes) appear to be more closely related to each other than either of them is to pufferfish (Tetraodontiformes). All 10 lineages split before or during the fragmentation of the supercontinent Pangea in the Jurassic.
Energy efficiency analysis and optimization for mobile platforms
NASA Astrophysics Data System (ADS)
Metri, Grace Camille
The introduction of mobile devices changed the landscape of computing. Gradually, these devices are replacing traditional personal computer (PCs) to become the devices of choice for entertainment, connectivity, and productivity. There are currently at least 45.5 million people in the United States who own a mobile device, and that number is expected to increase to 1.5 billion by 2015. Users of mobile devices expect and mandate that their mobile devices have maximized performance while consuming minimal possible power. However, due to the battery size constraints, the amount of energy stored in these devices is limited and is only growing by 5% annually. As a result, we focused in this dissertation on energy efficiency analysis and optimization for mobile platforms. We specifically developed SoftPowerMon, a tool that can power profile Android platforms in order to expose the power consumption behavior of the CPU. We also performed an extensive set of case studies in order to determine energy inefficiencies of mobile applications. Through our case studies, we were able to propose optimization techniques in order to increase the energy efficiency of mobile devices and proposed guidelines for energy-efficient application development. In addition, we developed BatteryExtender, an adaptive user-guided tool for power management of mobile devices. The tool enables users to extend battery life on demand for a specific duration until a particular task is completed. Moreover, we examined the power consumption of System-on-Chips (SoCs) and observed the impact on the energy efficiency in the event of offloading tasks from the CPU to the specialized custom engines. Based on our case studies, we were able to demonstrate that current software-based power profiling techniques for SoCs can have an error rate close to 12%, which needs to be addressed in order to be able to optimize the energy consumption of the SoC. Finally, we summarize our contributions and outline possible direction for future research in this field.
NASA Astrophysics Data System (ADS)
Grecu, M.; Tian, L.; Heymsfield, G. M.
2017-12-01
A major challenge in deriving accurate estimates of physical properties of falling snow particles from single frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate an two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ECTs) from triple frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that snowfall estimates above the freezing level in ETCs consistent with the triple frequency radar observations as well as with independent rainfall estimates below the freezing level may be derived using the EM methodology formulated in the study.
Cosmological perturbations in inflation and in de Sitter space
NASA Astrophysics Data System (ADS)
Pimentel, Guilherme Leite
This thesis focuses on various aspects of inflationary fluctuations. First, we study gravitational wave fluctuations in de Sitter space. The isometries of the spacetime constrain to a few parameters the Wheeler-DeWitt wavefunctional of the universe, to cubic order in fluctuations. At cubic order, there are three independent terms in the wavefunctional. From the point of view of the bulk action, one term corresponds to Einstein gravity, and a new term comes from a cubic term in the curvature tensor. The third term is a pure phase and does not give rise to a new shape for expectation values of graviton fluctuations. These results can be seen as the leading order non-gaussian contributions in a slow-roll expansion for inflationary observables. We also use the wavefunctional approach to explain a universal consistency condition of n-point expectation values in single field inflation. This consistency condition relates a soft limit of an n-point expectation value to ( n-1)-point expectation values. We show how these conditions can be easily derived from the wavefunctional point of view. Namely, they follow from the momentum constraint of general relativity, which is equivalent to the constraint of spatial diffeomorphism invariance. We also study expectation values beyond tree level. We show that subhorizon fluctuations in loop diagrams do not generate a mass term for superhorizon fluctuations. Such a mass term could spoil the predictivity of inflation, which is based on the existence of properly defined field variables that become constant once their wavelength is bigger than the size of the horizon. Such a mass term would be seen in the two point expectation value as a contribution that grows linearly with time at late times. The absence of this mass term is closely related to the soft limits studied in previous chapters. It is analogous to the absence of a mass term for the photon in quantum electrodynamics, due to gauge symmetry. Finally, we use the tools of holography and entanglement entropy to study superhorizon correlations in quantum field theories in de Sitter space. The entropy has interesting terms that have no equivalent in flat space field theories. These new terms are due to particle creation in an expanding universe. The entropy is calculated directly for free massive scalar theories. For theories with holographic duals, it is determined by the area of some extremal surface in the bulk geometry. We calculate the entropy for different classes of holographic duals. For one of these classes, the holographic dual geometry is an asymptotically Anti-de Sitter space that decays into a crunching cosmology, an open Friedmann-Robertson-Walker universe. The extremal surface used in the calculation of the entropy lies almost entirely on the slice of maximal scale factor of the crunching cosmology.
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
Optimized up-down asymmetry to drive fast intrinsic rotation in tokamaks
NASA Astrophysics Data System (ADS)
Ball, Justin; Parra, Felix I.; Landreman, Matt; Barnes, Michael
2018-02-01
Breaking the up-down symmetry of the tokamak poloidal cross-section can significantly increase the spontaneous rotation due to turbulent momentum transport. In this work, we optimize the shape of flux surfaces with both tilted elongation and tilted triangularity in order to maximize this drive of intrinsic rotation. Nonlinear gyrokinetic simulations demonstrate that adding optimally-tilted triangularity can double the momentum transport of a tilted elliptical shape. This work indicates that tilting the elongation and triangularity in an ITER-like device can reduce the energy transport and drive intrinsic rotation with an Alfvén Mach number of roughly 1% . This rotation is four times larger than the rotation expected in ITER and is approximately what is needed to stabilize MHD instabilities. It is shown that this optimal shape can be created using the shaping coils of several present-day experiments.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex wishart probability density function (PDF) for the probabilities. The mixture model enables to consider heterogeneous thematic classes which could not be better fitted by the unimodal wishart distribution. In order to make it fast and robust to calculate, we use the recently proposed generalized gamma distribution (GΓD) for the single polarization intensity data to make the initial partition. Then we use the wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and weights for all class parameter updates. The proposed method is evaluated and compared with the wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
Effect of solenoidal magnetic field on drifting laser plasma
NASA Astrophysics Data System (ADS)
Takahashi, Kazumasa; Okamura, Masahiro; Sekine, Megumi; Cushing, Eric; Jandovitz, Peter
2013-04-01
An ion source for accelerators requires to provide a stable waveform with a certain pulse length appropriate to the application. The pulse length of laser ion source is easy to control because it is expected to be proportional to plasma drifting distance. However, current density decay is proportional to the cube of the drifting distance, so large current loss will occur under unconfined drift. We investigated the stability and current decay of a Nd:YAG laser generated copper plasma confined by a solenoidal field using a Faraday cup to measure the current waveform. It was found that the plasma was unstable at certain magnetic field strengths, so a baffle was introduced to limit the plasma diameter at injection and improve the stability. Magnetic field, solenoid length, and plasma diameter were varied in order to find the conditions that minimize current decay and maximize stability.
Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems
NASA Astrophysics Data System (ADS)
Wu, Sau-Hsuan; Kuo, C.-C. Jay
2002-11-01
The technique of joint blind channel estimation and multiple access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels and to avoid the phase ambiguity that come with the second-order statistic approaches, a sliding-window scheme using the expectation maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with system loading and the channel memory. The situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity even when channel gains attenuate close to zero.
Millimeter wave backscatter measurements in support of collision avoidance applications
NASA Astrophysics Data System (ADS)
Narayanan, Ram M.; Snuttjer, Brett R. J.
1997-11-01
Millimeter-wave short range radar systems have unique advantages in surface navigation applications, such as military vehicle mobility, aircraft landing assistance, and automotive collision avoidance. In collision avoidance applications, characterization of clutter due to terrain and roadside objects is necessary in order to maximize the signal-to-clutter ratio (SCR) and to minimize false alarms. The results of two types of radar cross section (RCS) measurements at 95 GHz are reported in this paper. The first set of measurements presents data on the normalized RCS (NRCS) as well as clutter distributions of various terrain types at low grazing angles of 5° and 7.5°. The second set of measurements presents RCS data and statistics on various types of roadside objects, such as metallic and wooden sign posts. These results are expected to be useful for designers of short-range millimeter-wave collision avoidance radar systems.
Results from a Test Fixture for button BPM Trapped Mode Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron,P.; Bacha, B.; Blednykh, A.
2009-05-04
A variety of measures have been suggested to mitigate the problem of button BPM trapped mode heating. A test fixture, using a combination of commercial-off-the-shelf and custom machined components, was assembled to validate the simulations. We present details of the fixture design, measurement results, and a comparison of the results with the simulations. A brief history of the trapped mode button heating problem and a set of design rules for BPM button optimization are presented elsewhere in these proceedings. Here we present measurements on a test fixture that was assembled to confirm, if possible, a subset of those rules: (1)more » Minimize the trapped mode impedance and the resulting power deposited in this mode by the beam. (2) Maximize the power re-radiated back into the beampipe. (3) Maximize electrical conductivity of the outer circumference of the button and minimize conductivity of the inner circumference of the shell, to shift power deposition from the button to the shell. The problem is then how to extract useful and relevant information from S-parameter measurements of the test fixture.« less
Designing Agent Collectives For Systems With Markovian Dynamics
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lawson, John W.; Clancy, Daniel (Technical Monitor)
2001-01-01
The "Collective Intelligence" (COIN) framework concerns the design of collectives of agents so that as those agents strive to maximize their individual utility functions, their interaction causes a provided "world" utility function concerning the entire collective to be also maximized. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counter-factual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation has benefits in scenarios not involving Markovian dynamics, in particular scenarios where not all of the arguments of an agent's utility function are observable. We investigate this transformation in simulations involving both linear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low "opacity (analogous to having high signal to noise) but are not "factored" (analogous to not being incentive compatible), reliably improve performance over that arising with factored utilities. We also present a Taylor Series method for the fully general nonlinear case.
Maximizing the Effective Use of Formative Assessments
ERIC Educational Resources Information Center
Riddell, Nancy B.
2016-01-01
In the current age of accountability, teachers must be able to produce tangible evidence of students' concept mastery. This article focuses on implementation of formative assessments before, during, and after instruction in order to maximize teachers' ability to effectively monitor student achievement. Suggested strategies are included to help…
Zhu, Dianwen; Li, Changqing
2014-12-01
Fluorescence molecular tomography (FMT) is a promising imaging modality and has been actively studied in the past two decades since it can locate the specific tumor position three-dimensionally in small animals. However, it remains a challenging task to obtain fast, robust and accurate reconstruction of fluorescent probe distribution in small animals due to the large computational burden, the noisy measurement and the ill-posed nature of the inverse problem. In this paper we propose a nonuniform preconditioning method in combination with L (1) regularization and ordered subsets technique (NUMOS) to take care of the different updating needs at different pixels, to enhance sparsity and suppress noise, and to further boost convergence of approximate solutions for fluorescence molecular tomography. Using both simulated data and phantom experiment, we found that the proposed nonuniform updating method outperforms its popular uniform counterpart by obtaining a more localized, less noisy, more accurate image. The computational cost was greatly reduced as well. The ordered subset (OS) technique provided additional 5 times and 3 times speed enhancements for simulation and phantom experiments, respectively, without degrading image qualities. When compared with the popular L (1) algorithms such as iterative soft-thresholding algorithm (ISTA) and Fast iterative soft-thresholding algorithm (FISTA) algorithms, NUMOS also outperforms them by obtaining a better image in much shorter period of time.
Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo
2016-01-01
Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble is dependent on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10 fold-cross validation on training data for evaluating the quality of each candidate ensembles. In order to combine the base classifiers decision into ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with the random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β) - k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI-Machine Learning repository, one Alzheimer's disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction and we expect that the proposed GA-EoC would perform consistently in other cases.
Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo
2016-01-01
Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble is dependent on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10 fold-cross validation on training data for evaluating the quality of each candidate ensembles. In order to combine the base classifiers decision into ensemble’s output, we used the simple and widely used majority voting approach. The proposed algorithm, along with the random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β) − k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI-Machine Learning repository, one Alzheimer’s disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction and we expect that the proposed GA-EoC would perform consistently in other cases. PMID:26764911
NASA Technical Reports Server (NTRS)
Schwab, Andrew J. (Inventor); Aylor, James (Inventor); Hitchcock, Charles Young (Inventor); Wulf, William A. (Inventor); McKee, Sally A. (Inventor); Moyer, Stephen A. (Inventor); Klenke, Robert (Inventor)
2000-01-01
A data processing system is disclosed which comprises a data processor and memory control device for controlling the access of information from the memory. The memory control device includes temporary storage and decision ability for determining what order to execute the memory accesses. The compiler detects the requirements of the data processor and selects the data to stream to the memory control device which determines a memory access order. The order in which to access said information is selected based on the location of information stored in the memory. The information is repeatedly accessed from memory and stored in the temporary storage until all streamed information is accessed. The information is stored until required by the data processor. The selection of the order in which to access information maximizes bandwidth and decreases the retrieval time.
The proposal of architecture for chemical splitting to optimize QSAR models for aquatic toxicity.
Colombo, Andrea; Benfenati, Emilio; Karelson, Mati; Maran, Uko
2008-06-01
One of the challenges in the field of quantitative structure-activity relationship (QSAR) analysis is the correct classification of a chemical compound to an appropriate model for the prediction of activity. Thus, in previous studies, compounds have been divided into distinct groups according to their mode of action or chemical class. In the current study, theoretical molecular descriptors were used to divide 568 organic substances into subsets with toxicity measured for the 96-h lethal median concentration for the Fathead minnow (Pimephales promelas). Simple constitutional descriptors such as the number of aliphatic and aromatic rings and a quantum chemical descriptor, maximum bond order of a carbon atom divide compounds into nine subsets. For each subset of compounds the automatic forward selection of descriptors was applied to construct QSAR models. Significant correlations were achieved for each subset of chemicals and all models were validated with the leave-one-out internal validation procedure (R(2)(cv) approximately 0.80). The results encourage to consider this alternative way for the prediction of toxicity using QSAR subset models without direct reference to the mechanism of toxic action or the traditional chemical classification.
Gehring, Dominic; Wissler, Sabrina; Lohrer, Heinz; Nauck, Tanja; Gollhofer, Albert
2014-03-01
A thorough understanding of the functional aspects of ankle joint control is essential to developing effective injury prevention. It is of special interest to understand how neuromuscular control mechanisms and mechanical constraints stabilize the ankle joint. Therefore, the aim of the present study was to determine how expecting ankle tilts and the application of an ankle brace influence ankle joint control when imitating the ankle sprain mechanism during walking. Ankle kinematics and muscle activity were assessed in 17 healthy men. During gait rapid perturbations were applied using a trapdoor (tilting with 24° inversion and 15° plantarflexion). The subjects either knew that a perturbation would definitely occur (expected tilts) or there was only the possibility that a perturbation would occur (potential tilts). Both conditions were conducted with and without a semi-rigid ankle brace. Expecting perturbations led to an increased ankle eversion at foot contact, which was mediated by an altered muscle preactivation pattern. Moreover, the maximal inversion angle (-7%) and velocity (-4%), as well as the reactive muscle response were significantly reduced when the perturbation was expected. While wearing an ankle brace did not influence muscle preactivation nor the ankle kinematics before ground contact, it significantly reduced the maximal ankle inversion angle (-14%) and velocity (-11%) as well as reactive neuromuscular responses. The present findings reveal that expecting ankle inversion modifies neuromuscular joint control prior to landing. Although such motor control strategies are weaker in their magnitude compared with braces, they seem to assist ankle joint stabilization in a close-to-injury situation. Copyright © 2013 Elsevier B.V. All rights reserved.
Increase in Jumping Height Associated with Maximal Effort Vertical Depth Jumps.
ERIC Educational Resources Information Center
Bedi, John F.; And Others
1987-01-01
In order to assess if there existed a statistically significant increase in jumping performance when dropping from different heights, 32 males, aged 19 to 26, performed a series of maximal effort vertical jumps after dropping from eight heights onto a force plate. Results are analyzed. (Author/MT)
USDA-ARS?s Scientific Manuscript database
We measured plasma markers of cholesterol synthesis (lathosterol) and absorption (campesterol, sitosterol, and cholestanol) in order to compare the effects of maximal doses of rosuvastatin with atorvastatin and investigate the basis for the significant individual variation in lipid lowering response...
Evidential analysis of difference images for change detection of multitemporal remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Yin; Peng, Lijuan; Cremers, Armin B.
2018-03-01
In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability of difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we firstly develop an evidence theory based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Secondly, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
Measurement of the ratio $$B(t \\to W b)/B(t \\to W q)$$ in $$t\\bar{t}$$ dilepton channel at CDF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galloni, Camilla
2012-01-01
My analysis is based on the number of b-jets found in t¯t events using the dilepton sample with at least 2 jets in the final state. The charged leptons could be either electrons or muons. Tau leptons are not included. We use SecVtx algorithm, based on the reconstruction of a secondary vertex in the event, in order to identify a jet coming from b-quark fragmentation (b-tagging). Due to the high purity of the t¯t signal in dilepton events it is possible to perform a kinematic measurement of the t¯t cross section. Our strategy is to use this result to makemore » prediction on the number of t¯t events. We divide our sample in subsets according to dilepton type (combination of the lepton type), number of jets in the final states and events with zero, one or two tags. The comparison between events and the prediction, given by the sum of the expected t¯t estimate and the background yield, in each subsample is made using a Likelihood function. Our measured value for R is the one which maximizes the Likelihood, i.e. gives the best match between our expectation and the observed data. We measure: p¯p!t¯t = 7.05±0.53stat±0.42lumi , R= 0.86±0.06 (stat+syst) and, in the hypothesis of CKM matrix unitarity with three quark generations, | Vtb | = 0.93 ± 0.03. Our analysis on the p¯p!t¯t was performed independently of the official dilepton analisys on the t¯t production cross section. So it represents also a valuable crosscheck for the official analysis. In chapter 1, a brief introduction to the theoretical framework is given. The standard model of elementary particles and the Quantum Cromodynamic theories are introduced. Then the top quark is presented, with a short descpription of its properties, as its mass, its production mode and its cross section. Some previous results on R are listed as well. Later we present the experiment that collected our data, both the collider (chapter 2) and the detector (CDF)(chapter 3). In chapter 4 we describe the physics object reconstruction, so how we collect the detector signals and translate those into physical particles traversing our detector. The event selection is described in chapter 5, where we report the complete list of the selection requirements and we estimate our sample composition. In chapters 6 and 7 we report our results for the t¯t production cross section and R. An indirect measurement for |Vtb| is given as well.« less
Zhang, ZhiZhuo; Chang, Cheng Wei; Hugo, Willy; Cheung, Edwin; Sung, Wing-Kin
2013-03-01
Although de novo motifs can be discovered through mining over-represented sequence patterns, this approach misses some real motifs and generates many false positives. To improve accuracy, one solution is to consider some additional binding features (i.e., position preference and sequence rank preference). This information is usually required from the user. This article presents a de novo motif discovery algorithm called SEME (sampling with expectation maximization for motif elicitation), which uses pure probabilistic mixture model to model the motif's binding features and uses expectation maximization (EM) algorithms to simultaneously learn the sequence motif, position, and sequence rank preferences without asking for any prior knowledge from the user. SEME is both efficient and accurate thanks to two important techniques: the variable motif length extension and importance sampling. Using 75 large-scale synthetic datasets, 32 metazoan compendium benchmark datasets, and 164 chromatin immunoprecipitation sequencing (ChIP-Seq) libraries, we demonstrated the superior performance of SEME over existing programs in finding transcription factor (TF) binding sites. SEME is further applied to a more difficult problem of finding the co-regulated TF (coTF) motifs in 15 ChIP-Seq libraries. It identified significantly more correct coTF motifs and, at the same time, predicted coTF motifs with better matching to the known motifs. Finally, we show that the learned position and sequence rank preferences of each coTF reveals potential interaction mechanisms between the primary TF and the coTF within these sites. Some of these findings were further validated by the ChIP-Seq experiments of the coTFs. The application is available online.
Maximum projection designs for computer experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, V. Roshan; Gul, Evren; Ba, Shan
Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost nomore » more than a design criterion that ignores projection properties.« less
Seth, Ashok; Gupta, Sajal; Pratap Singh, Vivudh; Kumar, Vijay
2017-09-01
Final stent dimensions remain an important predictor of restenosis, target vessel revascularisation (TVR) and subacute stent thrombosis (ST), even in the drug-eluting stent (DES) era. Stent balloons are usually semi-compliant and thus even high-pressure inflation may not achieve uniform or optimal stent expansion. Post-dilatation with non-compliant (NC) balloons after stent deployment has been shown to enhance stent expansion and could reduce TVR and ST. Based on supporting evidence and in the absence of large prospective randomised outcome-based trials, post-dilatation with an NC balloon to achieve optimal stent expansion and maximal luminal area is a logical technical recommendation, particularly in complex lesion subsets.
Maximum projection designs for computer experiments
Joseph, V. Roshan; Gul, Evren; Ba, Shan
2015-03-18
Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost nomore » more than a design criterion that ignores projection properties.« less
Mining subspace clusters from DNA microarray data using large itemset techniques.
Chang, Ye-In; Chen, Jiun-Rung; Tsai, Yueh-Chi
2009-05-01
Mining subspace clusters from the DNA microarrays could help researchers identify those genes which commonly contribute to a disease, where a subspace cluster indicates a subset of genes whose expression levels are similar under a subset of conditions. Since in a DNA microarray, the number of genes is far larger than the number of conditions, those previous proposed algorithms which compute the maximum dimension sets (MDSs) for any two genes will take a long time to mine subspace clusters. In this article, we propose the Large Itemset-Based Clustering (LISC) algorithm for mining subspace clusters. Instead of constructing MDSs for any two genes, we construct only MDSs for any two conditions. Then, we transform the task of finding the maximal possible gene sets into the problem of mining large itemsets from the condition-pair MDSs. Since we are only interested in those subspace clusters with gene sets as large as possible, it is desirable to pay attention to those gene sets which have reasonable large support values in the condition-pair MDSs. From our simulation results, we show that the proposed algorithm needs shorter processing time than those previous proposed algorithms which need to construct gene-pair MDSs.
A Deficit in Older Adults' Effortful Selection of Cued Responses
Proctor, Robert W.; Vu, Kim-Phuong L.; Pick, David F.
2007-01-01
J. J. Adam et al. (1998) provided evidence for an “age-related deficit in preparing 2 fingers on 2 hands, but not on 1 hand” (p. 870). Instead of having an anatomical basis, the deficit could result from the effortful processing required for individuals to select cued subsets of responses that do not coincide with left and right subgroups. The deficit also could involve either the ultimate benefit that can be attained or the time required to attain that benefit. The authors report 3 experiments (Ns = 40, 48, and 32 participants, respectively) in which they tested those distinctions by using an overlapped hand placement (participants alternated the index and middle fingers of the hands), a normal hand placement, and longer precuing intervals than were used in previous studies. The older adults were able to achieve the full precuing benefit shown by younger adults but required longer to achieve the maximal benefit for most pairs of responses. The deficit did not depend on whether the responses were from different hands, suggesting that it lies primarily in the effortful processing required for those subsets of cued responses that are not selected easily. PMID:16801319
Rough sets and Laplacian score based cost-sensitive feature selection
Yu, Shenglong
2018-01-01
Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Recently, most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features with locality preservation of Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects out a predetermined number of “good” features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms. PMID:29912884
Hypergraph Based Feature Selection Technique for Medical Diagnosis.
Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar
2016-11-01
The impact of internet and information systems across various domains have resulted in substantial generation of multidimensional datasets. The use of data mining and knowledge discovery techniques to extract the original information contained in the multidimensional datasets play a significant role in the exploitation of complete benefit provided by them. The presence of large number of features in the high dimensional datasets incurs high computational cost in terms of computing power and time. Hence, feature selection technique has been commonly used to build robust machine learning models to select a subset of relevant features which projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K - Helly feature selection technique (RSKHT) which hybridize Rough Set Theory (RST) and K - Helly property of hypergraph representation had been designed to identify the optimal feature subset or reduct for medical diagnostic applications. Experiments carried out using the medical datasets from the UCI repository proves the dominance of the RSKHT over other feature selection techniques with respect to the reduct size, classification accuracy and time complexity. The performance of the RSKHT had been validated using WEKA tool, which shows that RSKHT had been computationally attractive and flexible over massive datasets.
Rough sets and Laplacian score based cost-sensitive feature selection.
Yu, Shenglong; Zhao, Hong
2018-01-01
Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Recently, most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features with locality preservation of Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects out a predetermined number of "good" features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms.
Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ.
The nature of genetic susceptibility to multiple sclerosis: constraining the possibilities.
Goodin, Douglas S
2016-04-27
Epidemiological observations regarding certain population-wide parameters (e.g., disease-prevalence, recurrence-risk in relatives, gender predilections, and the distribution of common genetic-variants) place important constraints on the possibilities for the genetic-basis underlying susceptibility to multiple sclerosis (MS). Using very broad range-estimates for the different population-wide epidemiological parameters, a mathematical model can help elucidate the nature and the magnitude of these constraints. For MS no more than 8.5 % of the population can possibly be in the "genetically-susceptible" subset (defined as having a life-time MS-probability at least as high as the overall population average). Indeed, the expected MS-probability for this subset is more than 12 times that for every other person of the population who is not in this subset. Moreover, provided that those genetically susceptible persons (genotypes), who carry the well-established MS susceptibility allele (DRB1*1501), are equally or more likely to get MS than those susceptible persons, who don't carry this allele, then at least 84 % of MS-cases must come from this "genetically susceptible" subset. Finally, because men, compared to women, are at least as likely (and possibly more likely) to be susceptible, it can be demonstrated that women are more responsive to the environmental factors that are involved in MS-pathogenesis (whatever these are) and, thus, susceptible women are more likely actually to develop MS than susceptible men. Finally, in contrast to genetic susceptibility, more than 70 % of men (and likely also women) must have an environmental experience (including all of the necessary factors), which is sufficient to produce MS in a susceptible individual. As a result, because of these constraints, it is possible to distinguish two classes of persons, indicating either that MS can be caused by two fundamentally different pathophysiological mechanisms or that the large majority of the population is at no risk of the developing this disease regardless of their environmental experience. Moreover, although environmental-factors would play a critical role in both mechanisms (if both exist), there is no reason to expect that these factors are the same (or even similar) between the two.
Maximally-localized position, Euclidean path-integral, and thermodynamics in GUP quantum mechanics
NASA Astrophysics Data System (ADS)
Bernardo, Reginald Christian S.; Esguerra, Jose Perico H.
2018-04-01
In dealing with quantum mechanics at very high energies, it is essential to adapt to a quasiposition representation using the maximally-localized states because of the generalized uncertainty principle. In this paper, we look at maximally-localized states as eigenstates of the operator ξ = X + iβP that we refer to as the maximally-localized position. We calculate the overlap between maximally-localized states and show that the identity operator can be expressed in terms of the maximally-localized states. Furthermore, we show that the maximally-localized position is diagonal in momentum-space and that the maximally-localized position and its adjoint satisfy commutation and anti-commutation relations reminiscent of the harmonic oscillator commutation and anti-commutation relations. As application, we use the maximally-localized position in developing the Euclidean path-integral and introduce the compact form of the propagator for maximal localization. The free particle momentum-space propagator and the propagator for maximal localization are analytically evaluated up to quadratic-order in β. Finally, we obtain a path-integral expression for the partition function of a thermodynamic system using the maximally-localized states. The partition function of a gas of noninteracting particles is evaluated. At temperatures exceeding the Planck energy, we obtain the gas' maximum internal energy N / 2 β and recover the zero heat capacity of an ideal gas.
Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G
2011-06-28
We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditionally finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
ERIC Educational Resources Information Center
Winter, Paul A.
1996-01-01
Applicant evaluations of job messages conveyed through formal position advertisements were studied with 136 role-playing teachers. Findings indicate that administrators can maximize advertisement attractiveness to women by using intrinsic job attributes and placing them first, and maximize attractiveness to men by using extrinsic attributes and…
Do Nondomestic Undergraduates Choose a Major Field in Order to Maximize Grade Point Averages?
ERIC Educational Resources Information Center
Bergman, Matthew E.; Fass-Holmes, Barry
2016-01-01
The authors investigated whether undergraduates attending an American West Coast public university who were not U.S. citizens (nondomestic) maximized their grade point averages (GPA) through their choice of major field. Multiple regression hierarchical linear modeling analyses showed that major field's effect size was small for these…
Engaging Older Adult Volunteers in National Service
ERIC Educational Resources Information Center
McBride, Amanda Moore; Greenfield, Jennifer C.; Morrow-Howell, Nancy; Lee, Yung Soo; McCrary, Stacey
2012-01-01
Volunteer-based programs are increasingly designed as interventions to affect the volunteers and the beneficiaries of the volunteers' activities. To achieve the intended impacts for both, programs need to leverage the volunteers' engagement by meeting their expectations, retaining them, and maximizing their perceptions of benefits. Programmatic…
NASA Astrophysics Data System (ADS)
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accident has posed a potential threat to human beings. Predicting atmospheric dispersion and estimating its source become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirement of emergency management because they are not equipped with high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large amount of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict concentration distribution accurately and efficiently. PSO and EM are applied for estimating the source parameters, which can effectively accelerate the process of convergence. The method is verified by the Indianapolis field study with a SF6 release source. The results demonstrate the effectiveness of the method.
Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.
Lennartsson, Jan; Lindberg, Carl
2015-01-01
To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification to this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem as in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to the insufficient measurements and diffuses nature of the light propagation. Predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the region of PSR is usually hard to determine and can be easily affected by subjective consciousness. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method can avoid predefining the PSR and provide a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP N ) was applied to characterize diffuse light propagation in medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method by the regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract The filtered MLEM-based global reconstruction method for BLT.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Martin, Kevin
2017-05-01
This paper† describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification though the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing the rare event detection via P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP stimuli based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence, and then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (azimuth).
Choosing Fitness-Enhancing Innovations Can Be Detrimental under Fluctuating Environments
Xue, Julian Z.; Costopoulos, Andre; Guichard, Frederic
2011-01-01
The ability to predict the consequences of one's behavior in a particular environment is a mechanism for adaptation. In the absence of any cost to this activity, we might expect agents to choose behaviors that maximize their fitness, an example of directed innovation. This is in contrast to blind mutation, where the probability of becoming a new genotype is independent of the fitness of the new genotypes. Here, we show that under environments punctuated by rapid reversals, a system with both genetic and cultural inheritance should not always maximize fitness through directed innovation. This is because populations highly accurate at selecting the fittest innovations tend to over-fit the environment during its stable phase, to the point that a rapid environmental reversal can cause extinction. A less accurate population, on the other hand, can track long term trends in environmental change, keeping closer to the time-average of the environment. We use both analytical and agent-based models to explore when this mechanism is expected to occur. PMID:22125601
Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego
2017-01-01
A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model in synthetic data and brain MRI data. The proposed methodology presents two main advantages: Firstly, it is more robust to outliers. Secondly, we obtain similar results than using Gaussian when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
Using return on investment to maximize conservation effectiveness in Argentine grasslands.
Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James
2010-12-07
The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land ("minimize cost"), maximizing conservation benefit regardless of cost ("maximize benefit"), and maximizing conservation benefit per dollar ("return on investment"). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy.
Statistical modeling, detection, and segmentation of stains in digitized fabric images
NASA Astrophysics Data System (ADS)
Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.
2007-02-01
This paper will describe a novel and automated system based on a computer vision approach, for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision theoretic problem, where the null hypothesis corresponds to absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first order GMM and a second order GMM respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, ragu sauce, revlon makeup and grape juice. The decision theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on these set of images.
NASA Technical Reports Server (NTRS)
Frith, James M.; Buckalew, Brent A.; Cowardin, Heather M.; Lederer, Susan M.
2018-01-01
The Gaia catalogue second data release and its implications to optical observations of man-made Earth orbiting objects. Abstract and not the Final Paper is attached. The Gaia spacecraft was launched in December 2013 by the European Space Agency to produce a three-dimensional, dynamic map of objects within the Milky Way. Gaia's first year of data was released in September 2016. Common sources from the first data release have been combined with the Tycho-2 catalogue to provide a 5 parameter astrometric solution for approximately 2 million stars. The second Gaia data release is scheduled to come out in April 2018 and is expected to provide astrometry and photometry for more than 1 billion stars, a subset of which with a the full 6 parameter astrometric solution (adding radial velocity) and positional accuracy better than 0.002 arcsec (2 mas). In addition to precise astrometry, a unique opportunity exists with the Gaia catalogue in its production of accurate, broadband photometry using the Gaia G filter. In the past, clear filters have been used by various groups to maximize likelihood of detection of dim man-made objects but these data were very difficult to calibrate. With the second release of the Gaia catalogue, a ground based system utilizing the G band filter will have access to 1.5 billion all-sky calibration sources down to an accuracy of 0.02 magnitudes or better. In this talk, we will discuss the advantages and practicalities of implementing the Gaia filters and catalogue into data pipelines designed for optical observations of man-made objects.
Age and Disability Employment Discrimination: Occupational Rehabilitation Implications
Bjelland, Melissa J.; von Schrader, Sarah; Houtenville, Andrew J.; Ruiz-Quintanilla, Antonio; Webber, Douglas A.
2009-01-01
Introduction As concerns grow that a thinning labor force due to retirement will lead to worker shortages, it becomes critical to support positive employment outcomes of groups who have been underutilized, specifically older workers and workers with disabilities. Better understanding perceived age and disability discrimination and their intersection can help rehabilitation specialists and employers address challenges expected as a result of the evolving workforce. Methods Using U.S. Equal Employment Opportunity Commission Integrated Mission System data, we investigate the nature of employment discrimination charges that cite the Americans with Disabilities Act or Age Discrimination in Employment Act individually or jointly. We focus on trends in joint filings over time and across categories of age, types of disabilities, and alleged discriminatory behavior. Results We find that employment discrimination claims that originate from older or disabled workers are concentrated within a subset of issues that include reasonable accommodation, retaliation, and termination. Age-related disabilities are more frequently referenced in joint cases than in the overall pool of ADA filings, while the psychiatric disorders are less often referenced in joint cases. When examining charges made by those protected under both the ADA and ADEA, results from a logit model indicate that in comparison to charges filed under the ADA alone, jointly-filed ADA/ADEA charges are more likely to be filed by older individuals, by those who perceive discrimination in hiring and termination, and to originate from within the smallest firms. Conclusion In light of these findings, rehabilitation and workplace practices to maximize the hiring and retention of older workers and those with disabilities are discussed. PMID:19680793
Age and disability employment discrimination: occupational rehabilitation implications.
Bjelland, Melissa J; Bruyère, Susanne M; von Schrader, Sarah; Houtenville, Andrew J; Ruiz-Quintanilla, Antonio; Webber, Douglas A
2010-12-01
As concerns grow that a thinning labor force due to retirement will lead to worker shortages, it becomes critical to support positive employment outcomes of groups who have been underutilized, specifically older workers and workers with disabilities. Better understanding perceived age and disability discrimination and their intersection can help rehabilitation specialists and employers address challenges expected as a result of the evolving workforce. Using U.S. Equal Employment Opportunity Commission Integrated Mission System data, we investigate the nature of employment discrimination charges that cite the Americans with Disabilities Act or Age Discrimination in Employment Act individually or jointly. We focus on trends in joint filings over time and across categories of age, types of disabilities, and alleged discriminatory behavior. We find that employment discrimination claims that originate from older or disabled workers are concentrated within a subset of issues that include reasonable accommodation, retaliation, and termination. Age-related disabilities are more frequently referenced in joint cases than in the overall pool of ADA filings, while the psychiatric disorders are less often referenced in joint cases. When examining charges made by those protected under both the ADA and ADEA, results from a logit model indicate that in comparison to charges filed under the ADA alone, jointly-filed ADA/ADEA charges are more likely to be filed by older individuals, by those who perceive discrimination in hiring and termination, and to originate from within the smallest firms. In light of these findings, rehabilitation and workplace practices to maximize the hiring and retention of older workers and those with disabilities are discussed.
Robust Bayesian Experimental Design for Conceptual Model Discrimination
NASA Astrophysics Data System (ADS)
Pham, H. V.; Tsai, F. T. C.
2015-12-01
A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information of a system can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment design and the Bayesian model averaging (BMA) framework. A max-min programming is introduced to choose the robust design that maximizes the minimal Box-Hill EED subject to that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by the Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify future observation uncertainty arising from conceptual and parametric uncertainties in calculating EED. Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested by a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed due to uncertain geological architecture and boundary condition. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data as well as uncertainty sources on potential pumping and observation locations.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
Shifting orders among suppliers considering risk, price and transportation cost
NASA Astrophysics Data System (ADS)
Revitasari, C.; Pujawan, I. N.
2018-04-01
Supplier order allocation is an important supply chain decision for an enterprise. It is related to the supplier’s function as a raw material provider and other supporting materials that will be used in production process. Most of works on order allocation has been based on costs and other supply chain performance, but very limited of them taking risks into consideration. In this paper we address the problem of order allocation of a single commodity sourced from multiple suppliers considering supply risks in addition to the attempt of minimizing transportation costs. The supply chain risk was investigated and a procedure was proposed in the risk mitigation phase as a form of risk profile. The objective including risk profile in order allocation is to maximize the product flow from a risky supplier to a relatively less risky supplier. The proposed procedure is applied to a sugar company. The result suggests that order allocations should be maximized to suppliers that have a relatively low risk and minimized to suppliers that have a relatively larger risks.
NASA Astrophysics Data System (ADS)
Wall, J.; Bohnenstiehl, D. R.; Levine, N. S.
2013-12-01
An automated workflow for sinkhole detection is developed using Light Detection and Ranging (Lidar) data from Mammoth Cave National Park (MACA). While the park is known to sit within a karst formation, the generally dense canopy cover and the size of the park (~53,000 acres) creates issues for sinkhole inventorying. Lidar provides a useful remote sensing technology for peering beneath the canopy in hard to reach areas of the park. In order to detect sinkholes, a subsetting technique is used to interpolate a Digital Elevation Model (DEM) thereby reducing edge effects. For each subset, standard GIS fill tools are used to fill depressions within the DEM. The initial DEM is then subtracted from the filled DEM resulting in detected depressions or sinkholes. Resulting depressions are then described in terms of size and geospatial trend.
Wang, Deyun; Liu, Yanling; Luo, Hongyuan; Yue, Chenqiang; Cheng, Sheng
2017-01-01
Accurate PM2.5 concentration forecasting is crucial for protecting public health and atmospheric environment. However, the intermittent and unstable nature of PM2.5 concentration series makes its forecasting become a very difficult task. In order to improve the forecast accuracy of PM2.5 concentration, this paper proposes a hybrid model based on wavelet transform (WT), variational mode decomposition (VMD) and back propagation (BP) neural network optimized by differential evolution (DE) algorithm. Firstly, WT is employed to disassemble the PM2.5 concentration series into a number of subsets with different frequencies. Secondly, VMD is applied to decompose each subset into a set of variational modes (VMs). Thirdly, DE-BP model is utilized to forecast all the VMs. Fourthly, the forecast value of each subset is obtained through aggregating the forecast results of all the VMs obtained from VMD decomposition of this subset. Finally, the final forecast series of PM2.5 concentration is obtained by adding up the forecast values of all subsets. Two PM2.5 concentration series collected from Wuhan and Tianjin, respectively, located in China are used to test the effectiveness of the proposed model. The results demonstrate that the proposed model outperforms all the other considered models in this paper. PMID:28704955
On the Teaching of Portfolio Theory.
ERIC Educational Resources Information Center
Biederman, Daniel K.
1992-01-01
Demonstrates how a simple portfolio problem expressed explicitly as an expected utility maximization problem can be used to instruct students in portfolio theory. Discusses risk aversion, decision making under uncertainty, and the limitations of the traditional mean variance approach. Suggests students may develop a greater appreciation of general…
Program Monitoring: Problems and Cases.
ERIC Educational Resources Information Center
Lundin, Edward; Welty, Gordon
Designed as the major component of a comprehensive model of educational management, a behavioral model of decision making is presented that approximates the synoptic model of neoclassical economic theory. The synoptic model defines all possible alternatives and provides a basis for choosing that alternative which maximizes expected utility. The…
A Bayesian Approach to Interactive Retrieval
ERIC Educational Resources Information Center
Tague, Jean M.
1973-01-01
A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…
Creating an Agent Based Framework to Maximize Information Utility
2008-03-01
information utility may be a qualitative description of the information, where one would expect the adjectives low value, fair value , high value. For...operations. Information in this category may have a fair value rating. Finally, many seemingly unrelated events, such as reports of snipers in buildings
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
Can differences in breast cancer utilities explain disparities in breast cancer care?
Schleinitz, Mark D; DePalo, Dina; Blume, Jeffrey; Stein, Michael
2006-12-01
Black, older, and less affluent women are less likely to receive adjuvant breast cancer therapy than their counterparts. Whereas preference contributes to disparities in other health care scenarios, it is unclear if preference explains differential rates of breast cancer care. To ascertain utilities from women of diverse backgrounds for the different stages of, and treatments for, breast cancer and to determine whether a treatment decision modeled from utilities is associated with socio-demographic characteristics. A stratified sample (by age and race) of 156 English-speaking women over 25 years old not currently undergoing breast cancer treatment. We assessed utilities using standard gamble for 5 breast cancer stages, and time-tradeoff for 3 therapeutic modalities. We incorporated each subject's utilities into a Markov model to determine whether her quality-adjusted life expectancy would be maximized with chemotherapy for a hypothetical, current diagnosis of stage II breast cancer. We used logistic regression to determine whether socio-demographic variables were associated with this optimal strategy. Median utilities for the 8 health states were: stage I disease, 0.91 (interquartile range 0.50 to 1.00); stage II, 0.75 (0.26 to 0.99); stage III, 0.51 (0.25 to 0.94); stage IV (estrogen receptor positive), 0.36 (0 to 0.75); stage IV (estrogen receptor negative), 0.40 (0 to 0.79); chemotherapy 0.50 (0 to 0.92); hormonal therapy 0.58 (0 to 1); and radiation therapy 0.83 (0.10 to 1). Utilities for early stage disease and treatment modalities, but not metastatic disease, varied with socio-demographic characteristics. One hundred and twenty-two of 156 subjects had utilities that maximized quality-adjusted life expectancy given stage II breast cancer with chemotherapy. Age over 50, black race, and low household income were associated with at least 5-fold lower odds of maximizing quality-adjusted life expectancy with chemotherapy, whereas women who were married or had a significant other were 4-fold more likely to maximize quality-adjusted life expectancy with chemotherapy. Differences in utility for breast cancer health states may partially explain the lower rate of adjuvant therapy for black, older, and less affluent women. Further work must clarify whether these differences result from health preference alone or reflect women's perceptions of sources of disparity, such as access to care, poor communication with providers, limitations in health knowledge or in obtaining social and workplace support during therapy.
Profiling dendritic cell subsets in head and neck squamous cell tonsillar cancer and benign tonsils.
Abolhalaj, Milad; Askmyr, David; Sakellariou, Christina Alexandra; Lundberg, Kristina; Greiff, Lennart; Lindstedt, Malin
2018-05-23
Dendritic cells (DCs) have a key role in orchestrating immune responses and are considered important targets for immunotherapy against cancer. In order to develop effective cancer vaccines, detailed knowledge of the micromilieu in cancer lesions is warranted. In this study, flow cytometry and human transcriptome arrays were used to characterize subsets of DCs in head and neck squamous cell tonsillar cancer and compare them to their counterparts in benign tonsils to evaluate subset-selective biomarkers associated with tonsillar cancer. We describe, for the first time, four subsets of DCs in tonsillar cancer: CD123 + plasmacytoid DCs (pDC), CD1c + , CD141 + , and CD1c - CD141 - myeloid DCs (mDC). An increased frequency of DCs and an elevated mDC/pDC ratio were shown in malignant compared to benign tonsillar tissue. The microarray data demonstrates characteristics specific for tonsil cancer DC subsets, including expression of immunosuppressive molecules and lower expression levels of genes involved in development of effector immune responses in DCs in malignant tonsillar tissue, compared to their counterparts in benign tonsillar tissue. Finally, we present target candidates selectively expressed by different DC subsets in malignant tonsils and confirm expression of CD206/MRC1 and CD207/Langerin on CD1c + DCs at protein level. This study descibes DC characteristics in the context of head and neck cancer and add valuable steps towards future DC-based therapies against tonsillar cancer.
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K
2016-09-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preferences-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology which was implemented in the R package ROCt ( www.divat.fr ). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients: we conclude to the uselessness of the Kidney Transplant Failure Score to adapt the frequency of clinical visits. Applying such a patient-centered methodology may improve future transfer of novel prognostic scoring systems or markers in clinical practice.
Adar, Shay; Dor, Roi
2018-02-01
Habitat choice is an important decision that influences animals' fitness. Insect larvae are less mobile than the adults. Consequently, the contribution of the maternal choice of habitat to the survival and development of the offspring is considered to be crucial. According to the "preference-performance hypothesis", ovipositing females are expected to choose habitats that will maximize the performance of their offspring. We tested this hypothesis in wormlions (Diptera: Vermileonidae), which are small sand-dwelling insects that dig pit-traps in sandy patches and ambush small arthropods. Larvae prefer relatively deep and obstacle-free sand, and here we tested the habitat preference of the ovipositing female. In contrast to our expectation, ovipositing females showed no clear preference for either a deep sand or obstacle-free habitat, in contrast to the larval choice. This suboptimal female choice led to smaller pits being constructed later by the larvae, which may reduce prey capture success of the larvae. We offer several explanations for this apparently suboptimal female behavior, related either to maximizing maternal rather than offspring fitness, or to constraints on the female's behavior. Female's ovipositing habitat choice may have weaker negative consequences than expected for the offspring, as larvae can partially correct suboptimal maternal choice. Copyright © 2017 Elsevier B.V. All rights reserved.
Natural Killer Cells Promote Fetal Development through the Secretion of Growth-Promoting Factors.
Fu, Binqing; Zhou, Yonggang; Ni, Xiang; Tong, Xianhong; Xu, Xiuxiu; Dong, Zhongjun; Sun, Rui; Tian, Zhigang; Wei, Haiming
2017-12-19
Natural killer (NK) cells are present in large populations at the maternal-fetal interface during early pregnancy. However, the role of NK cells in fetal growth is unclear. Here, we have identified a CD49a + Eomes + subset of NK cells that secreted growth-promoting factors (GPFs), including pleiotrophin and osteoglycin, in both humans and mice. The crosstalk between HLA-G and ILT2 served as a stimulus for GPF-secreting function of this NK cell subset. Decreases in this GPF-secreting NK cell subset impaired fetal development, resulting in fetal growth restriction. The transcription factor Nfil3, but not T-bet, affected the function and the number of this decidual NK cell subset. Adoptive transfer of induced CD49a + Eomes + NK cells reversed impaired fetal growth and rebuilt an appropriate local microenvironment. These findings reveal properties of NK cells in promoting fetal growth. In addition, this research proposes approaches for therapeutic administration of NK cells in order to reverse restricted nourishments within the uterine microenvironment during early pregnancy. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vertesi, T.; Bene, E.
A bipartite Bell inequality is derived which is maximally violated on the two-qubit state space if measurements describable by positive operator valued measure (POVM) elements are allowed, rather than restricting the possible measurements to projective ones. In particular, the presented Bell inequality requires POVMs in order to be maximally violated by a maximally entangled two-qubit state. This answers a question raised by N. Gisin [in Quantum Reality, Relativistic Causality, and Closing the Epistemic Circle: Essays in Honour of Abner Shimony, edited by W. C. Myrvold and J. Christian (Springer, The Netherlands, 2009), pp. 125-138].
1986-07-01
maintainability, enhanceability, portability, flexibility, reusability of components, expected market or production life span, upward compatibility, integration...cost) but, most often, they involve global marketing and production objectives. A high life- cycle cost may be accepted in exchange for some other...ease of integration. More importantly, these results could be interpreted as suggesting the need to use a mixed approach where one uses a subset of
Tools for neuroanatomy and neurogenetics in Drosophila
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, Barret D.; Jenett, Arnim; Hammonds, Ann S.
2008-08-11
We demonstrate the feasibility of generating thousands of transgenic Drosophila melanogaster lines in which the expression of an exogenous gene is reproducibly directed to distinct small subsets of cells in the adult brain. We expect the expression patterns produced by the collection of 5,000 lines that we are currently generating to encompass all neurons in the brain in a variety of intersecting patterns. Overlapping 3-kb DNA fragments from the flanking noncoding and intronic regions of genes thought to have patterned expression in the adult brain were inserted into a defined genomic location by site-specific recombination. These fragments were then assayedmore » for their ability to function as transcriptional enhancers in conjunction with a synthetic core promoter designed to work with a wide variety of enhancer types. An analysis of 44 fragments from four genes found that >80% drive expression patterns in the brain; the observed patterns were, on average, comprised of <100 cells. Our results suggest that the D. melanogaster genome contains >50,000 enhancers and that multiple enhancers drive distinct subsets of expression of a gene in each tissue and developmental stage. We expect that these lines will be valuable tools for neuroanatomy as well as for the elucidation of neuronal circuits and information flow in the fly brain.« less
Threat expectancy bias and treatment outcome in patients with panic disorder and agoraphobia.
Duits, Puck; Klein Hofmeijer-Sevink, Mieke; Engelhard, Iris M; Baas, Johanna M P; Ehrismann, Wieske A M; Cath, Danielle C
2016-09-01
Previous studies suggest that patients with panic disorder and agoraphobia (PD/A) tend to overestimate the associations between fear-relevant stimuli and threat. This so-called threat expectancy bias is thought to play a role in the development and treatment of anxiety disorders. The current study tested 1) whether patients with PD/A (N = 71) show increased threat expectancy ratings to fear-relevant and fear-irrelevant stimuli relative to a comparison group without an axis I disorder (N=65), and 2) whether threat expectancy bias before treatment predicts treatment outcome in a subset of these patients (n = 51). In a computerized task, participants saw a series of panic-related and neutral words and rated for each word the likelihood that it would be followed by a loud, aversive sound. Results showed higher threat expectancy ratings to both panic-related and neutral words in patients with PD/A compared to the comparison group. Threat expectancy ratings did not predict treatment outcome. This study only used expectancy ratings and did not include physiological measures. Furthermore, no post-treatment expectancy bias task was added to shed further light on the possibility that expectancy bias might be attenuated by treatment. Patients show higher expectancies of aversive outcome following both fear-relevant and fear-irrelevant stimuli relative to the comparison group, but this does not predict treatment outcome. Copyright © 2016 Elsevier Ltd. All rights reserved.
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages to cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different number of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yan; Sahinidis, Nikolaos V.
2013-03-06
In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using amore » classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO{sub 2} underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO{sub 2} injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.« less
Investigating Evolutionary Conservation of Dendritic Cell Subset Identity and Functions
Vu Manh, Thien-Phong; Bertho, Nicolas; Hosmalin, Anne; Schwartz-Cornil, Isabelle; Dalod, Marc
2015-01-01
Dendritic cells (DCs) were initially defined as mononuclear phagocytes with a dendritic morphology and an exquisite efficiency for naïve T-cell activation. DC encompass several subsets initially identified by their expression of specific cell surface molecules and later shown to excel in distinct functions and to develop under the instruction of different transcription factors or cytokines. Very few cell surface molecules are expressed in a specific manner on any immune cell type. Hence, to identify cell types, the sole use of a small number of cell surface markers in classical flow cytometry can be deceiving. Moreover, the markers currently used to define mononuclear phagocyte subsets vary depending on the tissue and animal species studied and even between laboratories. This has led to confusion in the definition of DC subset identity and in their attribution of specific functions. There is a strong need to identify a rigorous and consensus way to define mononuclear phagocyte subsets, with precise guidelines potentially applicable throughout tissues and species. We will discuss the advantages, drawbacks, and complementarities of different methodologies: cell surface phenotyping, ontogeny, functional characterization, and molecular profiling. We will advocate that gene expression profiling is a very rigorous, largely unbiased and accessible method to define the identity of mononuclear phagocyte subsets, which strengthens and refines surface phenotyping. It is uniquely powerful to yield new, experimentally testable, hypotheses on the ontogeny or functions of mononuclear phagocyte subsets, their molecular regulation, and their evolutionary conservation. We propose defining cell populations based on a combination of cell surface phenotyping, expression analysis of hallmark genes, and robust functional assays, in order to reach a consensus and integrate faster the huge but scattered knowledge accumulated by different laboratories on different cell types, organs, and species. PMID:26082777
Atmospheric Science Data Center
2016-12-27
Date(s): Wednesday, December 28, 2016 Time: 12 am - 12 pm EDT Event Impact: The Data Pool, MISR order and browse tools, TAD, TES and MOPITT Search and Subset Applications, and Reverb will be unavailable...
Alcohol-related expectancies in adults and adolescents: Similarities and disparities.
Monk, Rebecca L; Heim, Derek
2016-03-02
This study aimed to contrast student and not student outcome expectancies, and explore the diversity of alcohol-related cognitions within a wider student sample. Participants (n=549) were college students (higher education-typically aged 15-18 years), university students (further education-typically aged 18-22 years) and business people (white collar professionals <50 years) who completed questionnaires in their place of work or education. Overall positive expectancies were higher in the college students than in the business or university samples. However, not all expectancy subcategories followed this pattern. Participant groups of similar age were therefore alike in some aspects of their alcohol-related cognitions but different in others. Similarly, participant groups whom are divergent in age appeared to be alike in some of their alcohol-related cognitions, such as tension reduction expectancies. Research often homogenises students as a specific sub-set of the population, this paper hi-lights that this may be an over-simplification. Furthermore, the largely exclusive focus on student groups within research in this area may also be an oversight, given the diversity of the findings demonstrated between these groups.
Chemotherapy and target therapy in the management of adult high- grade gliomas.
Spinelli, Gian Paolo; Miele, Evelina; Lo Russo, Giuseppe; Miscusi, Massimo; Codacci-Pisanelli, Giovanni; Petrozza, Vincenzo; Papa, Anselmo; Frati, Luigi; Della Rocca, Carlo; Gulino, Alberto; Tomao, Silverio
2012-10-01
Adult high grade gliomas (HGG) are the most frequent and fatal primary central nervous system (CNS) tumors. Despite recent advances in the knowledge of the pathology and the molecular features of this neoplasm, its prognosis remains poor. In the last years temozolomide (TMZ) has dramatically changed the life expectancy of these patients: the association of this drug with radiotherapy (RT), followed by TMZ alone, is the current standard of care. However, malignant gliomas often remain resistant to chemotherapy (CHT). Therefore, preclinical and clinical research efforts have been directed on identifying and understanding the different mechanisms of chemo-resistance operating in this subset of tumors,in order to develop effective strategies to overcome resistance. Moreover, the evidence of alterations in signal transduction pathways underlying tumor progression, has increased the number of trials investigating molecular target agents, such as anti-epidermal growth factor receptor (EGFR) and anti- vascular endothelial growth factor (VEGF) signaling. The purpose of this review is to point out the current standard of treatment and to explore new available target therapies in HGG.
Pain as metaphor: metaphor and medicine
Neilson, Shane
2016-01-01
Like many other disciplines, medicine often resorts to metaphor in order to explain complicated concepts that are imperfectly understood. But what happens when medicine's metaphors close off thinking, restricting interpretations and opinions to those of the negative kind? This paper considers the deleterious effects of destructive metaphors that cluster around pain. First, the metaphoric basis of all knowledge is introduced. Next, a particular subset of medical metaphors in the domain of neurology (doors/keys/wires) are shown to encourage mechanistic thinking. Because schematics are often used in medical textbooks to simplify the complex, this paper traces the visual metaphors implied in such schematics. Mechanistic-metaphorical thinking results in the accumulation of vast amounts of data through experimentation, but this paper asks what the real value of the information is since patients can generally only expect modest benefits – or none at all – for relief from chronic pain conditions. Elucidation of mechanism through careful experimentation creates an illusion of vast medical knowledge that, to a significant degree, is metaphor-based. This paper argues that for pain outcomes to change, our metaphors must change first. PMID:26253331
Daily emotional states as reported by children and adolescents.
Larson, R; Lampman-Petraitis, C
1989-10-01
Hour-to-hour emotional states reported by children, ages 9-15, were examined in order to evaluate the hypothesis that the onset of adolescence is associated with increased emotional variability. These youths carried electronic pagers for 1 week and filled out reports on their emotional states in response to signals received at random times. To evaluate possible age-related response sets, a subset of children was asked to use the same scales to rate the emotions shown in drawings of 6 faces. The expected relation between daily emotional variability and age was not found among the boys and was small among the girls. There was, however, a linear relation between age and average mood states, with the older participants reporting more dysphoric average states, especially more mildly negative states. An absence of age difference in the ratings of the faces indicated that this relation could not be attributed to age differences in response set. Thus, these findings provide little support for the hypothesis that the onset of adolescence is associated with increased emotionality but indicate significant alterations in everyday experience associated with this age period.
Aronson, Dallas B; Bosch, Stephen; Gray, D Anthony; Howard, Philip H; Guiney, Patrick D
2007-10-01
A comparison of the human health risk to consumers using one of two types of toilet rimblock products, either a p-dichlorobenzene-based rimblock or two newer fragrance/surfactant-based alternatives, was conducted. Rimblock products are designed for global use by consumers worldwide and function by releasing volatile compounds into indoor air with subsequent exposure presumed to be mainly by inhalation of indoor air. Using the THERdbASE exposure model and experimentally determined emission data, indoor air concentrations and daily intake values were determined for both types of rimblock products. Modeled exposure concentrations from a representative p-dichlorobenzene rimblock product are an order of magnitude higher than those from the alternative rimblock products due to its nearly pure composition and high sublimation rate. Lifetime exposure to p-dichlorobenzene or the subset of fragrance components with available RfD values is not expected to lead to non-cancer-based adverse health effects based on the exposure concentrations estimated using the THERdbASE model. A similar comparison of cancer-based effects was not possible as insufficient data were available for the fragrance components.
Killgrove, Kristina; Montgomery, Janet
2016-01-01
Migration within the Roman Empire occurred at multiple scales and was engaged in both voluntarily and involuntarily. Because of the lengthy tradition of classical studies, bioarchaeological analyses must be fully contextualized within the bounds of history, material culture, and epigraphy. In order to assess migration to Rome within an updated contextual framework, strontium isotope analysis was performed on 105 individuals from two cemeteries associated with Imperial Rome—Casal Bertone and Castellaccio Europarco—and oxygen and carbon isotope analyses were performed on a subset of 55 individuals. Statistical analysis and comparisons with expected local ranges found several outliers who likely immigrated to Rome from elsewhere. Demographics of the immigrants show men and children migrated, and a comparison of carbon isotopes from teeth and bone samples suggests the immigrants may have significantly changed their diet. These data represent the first physical evidence of individual migrants to Imperial Rome. This case study demonstrates the importance of employing bioarchaeology to generate a deeper understanding of a complex ancient urban center. PMID:26863610
Killgrove, Kristina; Montgomery, Janet
2016-01-01
Migration within the Roman Empire occurred at multiple scales and was engaged in both voluntarily and involuntarily. Because of the lengthy tradition of classical studies, bioarchaeological analyses must be fully contextualized within the bounds of history, material culture, and epigraphy. In order to assess migration to Rome within an updated contextual framework, strontium isotope analysis was performed on 105 individuals from two cemeteries associated with Imperial Rome-Casal Bertone and Castellaccio Europarco-and oxygen and carbon isotope analyses were performed on a subset of 55 individuals. Statistical analysis and comparisons with expected local ranges found several outliers who likely immigrated to Rome from elsewhere. Demographics of the immigrants show men and children migrated, and a comparison of carbon isotopes from teeth and bone samples suggests the immigrants may have significantly changed their diet. These data represent the first physical evidence of individual migrants to Imperial Rome. This case study demonstrates the importance of employing bioarchaeology to generate a deeper understanding of a complex ancient urban center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Q; Herrick, A; Hoke, S
Purpose: A new readout technology based on pulsed optically stimulating luminescence is introduced (microSTARii, Landauer, Inc, Glenwood, IL60425). This investigation searches for approaches that maximizes the dosimetry accuracy in clinical applications. Methods: The sensitivity of each optically stimulated luminescence dosimeter (OSLD) was initially characterized by exposing it to a given radiation beam. After readout, the luminescence signal stored in the OSLD was erased by exposing its sensing area to a 21W white LED light for 24 hours. A set of OSLDs with consistent sensitivities was selected to calibrate the dose reader. Higher order nonlinear curves were also derived from themore » calibration readings. OSLDs with cumulative doses below 15 Gy were reused. Before an in-vivo dosimetry, the OSLD luminescence signal was erased with the white LED light. Results: For a set of 68 manufacturer-screened OSLDs, the measured sensitivities vary in a range of 17.3%. A sub-set of the OSLDs with sensitivities within ±1% was selected for the reader calibration. Three OSLDs in a group were exposed to a given radiation. Nine groups were exposed to radiation doses ranging from 0 to 13 Gy. Additional verifications demonstrated that the reader uncertainty is about 3%. With an external calibration function derived by fitting the OSLD readings to a 3rd-order polynomial, the dosimetry uncertainty dropped to 0.5%. The dose-luminescence response curves of individual OSLDs were characterized. All curves converge within 1% after the sensitivity correction. With all uncertainties considered, the systematic uncertainty is about 2%. Additional tests emulating in-vivo dosimetry by exposing the OSLDs under different radiation sources confirmed the claim. Conclusion: The sensitivity of individual OSLD should be characterized initially. A 3rd-order polynomial function is a more accurate representation of the dose-luminescence response curve. The dosimetry uncertainty specified by the manufacturer is 4%. Following the proposed approach, it can be controlled to 2%.« less
Allocating dissipation across a molecular machine cycle to maximize flux
Brown, Aidan I.; Sivak, David A.
2017-01-01
Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model by using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first one is the maximization of methane percentage with single output. The second one is the maximization of biogas production with single output. The last one is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentage are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of input variables and their corresponding maximum output values are found out for each model. It is expected that the application of the integrated prediction and optimization models increases the biogas production and biogas quality, and contributes to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimisation of the mean boat velocity in rowing.
Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P
2012-01-01
In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.
Uchiyama, Ikuo
2008-10-31
Identifying the set of intrinsically conserved genes, or the genomic core, among related genomes is crucial for understanding prokaryotic genomes where horizontal gene transfers are common. Although core genome identification appears to be obvious among very closely related genomes, it becomes more difficult when more distantly related genomes are compared. Here, we consider the core structure as a set of sufficiently long segments in which gene orders are conserved so that they are likely to have been inherited mainly through vertical transfer, and developed a method for identifying the core structure by finding the order of pre-identified orthologous groups (OGs) that maximally retains the conserved gene orders. The method was applied to genome comparisons of two well-characterized families, Bacillaceae and Enterobacteriaceae, and identified their core structures comprising 1438 and 2125 OGs, respectively. The core sets contained most of the essential genes and their related genes, which were primarily included in the intersection of the two core sets comprising around 700 OGs. The definition of the genomic core based on gene order conservation was demonstrated to be more robust than the simpler approach based only on gene conservation. We also investigated the core structures in terms of G+C content homogeneity and phylogenetic congruence, and found that the core genes primarily exhibited the expected characteristic, i.e., being indigenous and sharing the same history, more than the non-core genes. The results demonstrate that our strategy of genome alignment based on gene order conservation can provide an effective approach to identify the genomic core among moderately related microbial genomes.
Meurrens, Julie; Steiner, Thomas; Ponette, Jonathan; Janssen, Hans Antonius; Ramaekers, Monique; Wehrlin, Jon Peter; Vandekerckhove, Philippe; Deldicque, Louise
2016-12-01
The aims of the present study were to investigate the impact of three whole blood donations on endurance capacity and hematological parameters and to determine the duration to fully recover initial endurance capacity and hematological parameters after each donation. Twenty-four moderately trained subjects were randomly divided in a donation (n = 16) and a placebo (n = 8) group. Each of the three donations was interspersed by 3 months, and the recovery of endurance capacity and hematological parameters was monitored up to 1 month after donation. Maximal power output, peak oxygen consumption, and hemoglobin mass decreased (p < 0.001) up to 4 weeks after a single blood donation with a maximal decrease of 4, 10, and 7%, respectively. Hematocrit, hemoglobin concentration, ferritin, and red blood cell count (RBC), all key hematological parameters for oxygen transport, were lowered by a single donation (p < 0.001) and cumulatively further affected by the repetition of the donations (p < 0.001). The maximal decrease after a blood donation was 11% for hematocrit, 10% for hemoglobin concentration, 50% for ferritin, and 12% for RBC (p < 0.001). Maximal power output cumulatively increased in the placebo group as the maximal exercise tests were repeated (p < 0.001), which indicates positive training adaptations. This increase in maximal power output over the whole duration of the study was not observed in the donation group. Maximal, but not submaximal, endurance capacity was altered after blood donation in moderately trained people and the expected increase in capacity after multiple maximal exercise tests was not present when repeating whole blood donations.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…