Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, images with weak artefacts were first reconstructed using the projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, in which the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization algorithm and the small region of interest reduced the processing duration without apparent detriment. The general-purpose graphics processing unit realized high performance. In summary, a statistical reconstruction method was applied for streak artefact reduction, and the alternative algorithms were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit, achieved fast artefact correction.
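The successive restoration described above builds on the standard MLEM update. The following is a minimal, generic sketch of that update (not the authors' implementation), assuming a precomputed dense system matrix A and a measured sinogram y.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic maximum likelihood-expectation maximization (MLEM) sketch.

    A : (n_bins, n_voxels) system matrix mapping image to projections
    y : (n_bins,) measured projection (sinogram) data
    Returns the reconstructed image estimate.
    """
    x = np.ones(A.shape[1])                 # uniform initial image
    sens = A.sum(axis=0)                    # sensitivity image: A^T 1
    for _ in range(n_iter):
        y_est = A @ x + eps                 # forward projection
        ratio = y / y_est                   # measured / estimated projections
        x *= (A.T @ ratio) / (sens + eps)   # multiplicative EM update
    return x
```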
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were at least two projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
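As an illustration of the trade-off analysis described above (a generic sketch, not the authors' code), the following computes contrast recovery and pixel-level noise from a reconstructed image, assuming hypothetical hot-region, background and uniform-region masks and a known true contrast.

```python
import numpy as np

def contrast_recovery(img, roi_hot, roi_bkg, true_contrast):
    """Measured contrast divided by true contrast (as defined in the study)."""
    measured = (img[roi_hot].mean() - img[roi_bkg].mean()) / img[roi_bkg].mean()
    return measured / true_contrast

def noise_cov(img, roi_uniform):
    """Statistical noise as the coefficient of variation of pixel values."""
    vals = img[roi_uniform]
    return vals.std() / vals.mean()

# Trade-off plot: one (noise, contrast recovery) point per iteration number,
# e.g. for OSEM with a fixed subset size (recon(i), hot_mask, bkg_mask assumed):
# points = [(noise_cov(recon(i), bkg_mask),
#            contrast_recovery(recon(i), hot_mask, bkg_mask, true_c))
#           for i in iteration_numbers]
```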
Muon tomography imaging improvement using optimized limited angle data
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Simon, Sean; Kindem, Joel; Luo, Weidong; Sossong, Michael J.; Steiger, Matthew
2014-05-01
Image resolution in muon tomography is limited by the range of zenith angles of cosmic ray muons and by the flux rate at sea level. The low flux rate limits the use of advanced data rebinning and processing techniques to improve image quality. By optimizing the limited-angle data, however, image resolution can be improved. To demonstrate the idea, physical data of tungsten blocks were acquired on a muon tomography system. The angular distribution and energy spectrum of muons measured on the system were also used to generate simulated data of tungsten blocks in different arrangements (geometries). The data were grouped into subsets by zenith angle, and volume images were reconstructed from the data subsets using two algorithms: a distributed PoCA (point of closest approach) algorithm and an accelerated iterative maximum likelihood expectation maximization (MLEM) algorithm. Image resolution was compared across the different subsets. Results showed that image resolution was better in the vertical direction for subsets with greater zenith angles and better in the horizontal plane for subsets with smaller zenith angles. The overall image resolution appeared to be a compromise among the different subsets. This work suggests that the acquired data can be grouped into limited-angle data subsets to optimize image resolution in desired directions. Using multiple images with resolution optimized in different directions can improve overall imaging fidelity for the intended applications.
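For reference, a minimal sketch of the point-of-closest-approach calculation that underlies PoCA-type reconstruction is given below (an illustration only, not the distributed algorithm used in the study). Each muon is described by an incoming and an outgoing track, each given by a point and a direction vector.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach between incoming and outgoing muon tracks.

    p_in, p_out : points (3-vectors) on the incoming / outgoing tracks
    d_in, d_out : direction vectors of the tracks
    Returns the PoCA point and the scattering angle in radians.
    """
    w0 = p_in - p_out
    a, b, c = d_in @ d_in, d_in @ d_out, d_out @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:                 # (nearly) parallel tracks: no unique PoCA
        t = s = 0.0
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    closest_in = p_in + t * d_in
    closest_out = p_out + s * d_out
    cos_theta = (d_in @ d_out) / (np.linalg.norm(d_in) * np.linalg.norm(d_out))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return 0.5 * (closest_in + closest_out), theta
```

Grouping events by zenith angle before accumulating PoCA points (or before the MLEM update) is then a simple filter on the incoming track direction.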
Ceriani, Luca; Ruberto, Teresa; Delaloye, Angelika Bischof; Prior, John O; Giovanella, Luca
2010-03-01
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I × S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-07
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization achieves further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
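A minimal sketch of the interleaving idea (an illustration, not the authors' implementation): data bins are assigned to subsets by interleaving over TOF and azimuthal indices, so that each subset samples the full TOF and angular range.

```python
import numpy as np

def interleaved_subsets(n_azim, n_tof, n_subsets):
    """Partition (azimuthal bin, TOF bin) pairs into interleaved subsets.

    Interleaving over both indices keeps each subset spread across the full
    angular and TOF range, which is the property the subsetization relies on
    to reduce spatial coupling between TOF bins.
    """
    subsets = [[] for _ in range(n_subsets)]
    for a in range(n_azim):
        for t in range(n_tof):
            subsets[(a + t) % n_subsets].append((a, t))
    return subsets

# Example (hypothetical bin counts): 12 azimuthal bins and 13 TOF bins split into
# 4 interleaved subsets; each OSEM sub-iteration then uses one subset only.
parts = interleaved_subsets(12, 13, 4)
print([len(p) for p in parts])
```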
G-STRATEGY: Optimal Selection of Individuals for Sequencing in Genetic Association Studies
Wang, Miaoyan; Jakobsdottir, Johanna; Smith, Albert V.; McPeek, Mary Sara
2017-01-01
In a large-scale genetic association study, the number of phenotyped individuals available for sequencing may, in some cases, be greater than the study’s sequencing budget will allow. In that case, it can be important to prioritize individuals for sequencing in a way that optimizes power for association with the trait. Suppose a cohort of phenotyped individuals is available, with some subset of them possibly already sequenced, and one wants to choose an additional fixed-size subset of individuals to sequence in such a way that the power to detect association is maximized. When the phenotyped sample includes related individuals, power for association can be gained by including partial information, such as phenotype data of ungenotyped relatives, in the analysis, and this should be taken into account when assessing whom to sequence. We propose G-STRATEGY, which uses simulated annealing to choose a subset of individuals for sequencing that maximizes the expected power for association. In simulations, G-STRATEGY performs extremely well for a range of complex disease models and outperforms other strategies with, in many cases, relative power increases of 20–40% over the next best strategy, while maintaining correct type 1 error. G-STRATEGY is computationally feasible even for large datasets and complex pedigrees. We apply G-STRATEGY to data on HDL and LDL from the AGES-Reykjavik and REFINE-Reykjavik studies, in which G-STRATEGY is able to closely approximate the power of sequencing the full sample by selecting only a small subset of the individuals for sequencing. PMID:27256766
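The selection step can be sketched generically as follows: a simple simulated-annealing search over fixed-size subsets. This is not the G-STRATEGY software itself; the `score` callable standing in for the expected-power computation is a placeholder.

```python
import math, random

def anneal_subset(candidates, k, score, n_steps=20000, t0=1.0, cooling=0.9995):
    """Simulated annealing for a fixed-size subset that maximizes `score`.

    candidates : list of individual IDs available for sequencing
    k          : number of individuals the budget allows
    score      : callable mapping a frozenset of IDs to expected power (placeholder)
    """
    current = set(random.sample(candidates, k))
    best, best_val = set(current), score(frozenset(current))
    cur_val, temp = best_val, t0
    for _ in range(n_steps):
        # propose a swap: remove one selected individual, add one unselected
        out_id = random.choice(tuple(current))
        in_id = random.choice([c for c in candidates if c not in current])
        proposal = (current - {out_id}) | {in_id}
        prop_val = score(frozenset(proposal))
        # accept uphill moves always, downhill moves with Boltzmann probability
        if prop_val >= cur_val or random.random() < math.exp((prop_val - cur_val) / temp):
            current, cur_val = proposal, prop_val
            if cur_val > best_val:
                best, best_val = set(current), cur_val
        temp *= cooling
    return best, best_val
```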
What's wrong with hazard-ranking systems? An expository note.
Cox, Louis Anthony Tony
2009-07-01
Two commonly recommended principles for allocating risk management resources to remediate uncertain hazards are: (1) select a subset to maximize risk-reduction benefits (e.g., maximize the von Neumann-Morgenstern expected utility of the selected risk-reducing activities), and (2) assign priorities to risk-reducing opportunities and then select activities from the top of the priority list down until no more can be afforded. When different activities create uncertain but correlated risk reductions, as is often the case in practice, then these principles are inconsistent: priority scoring and ranking fails to maximize risk-reduction benefits. Real-world risk priority scoring systems used in homeland security and terrorism risk assessment, environmental risk management, information system vulnerability rating, business risk matrices, and many other important applications do not exploit correlations among risk-reducing opportunities or optimally diversify risk-reducing investments. As a result, they generally make suboptimal risk management recommendations. Applying portfolio optimization methods instead of risk prioritization ranking, rating, or scoring methods can achieve greater risk-reduction value for resources spent.
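To make the inconsistency concrete, here is a small constructed example (not from the paper): when two activities address the same hazard and their risk reductions therefore overlap, taking items from the top of a priority list is outperformed by optimizing over affordable portfolios.

```python
from itertools import combinations

# Hypothetical activities: (name, cost, hazard addressed, stand-alone risk reduction)
acts = [("A", 4, "flood", 10.0), ("B", 4, "flood", 9.0), ("C", 4, "fire", 6.0)]
budget = 8

def portfolio_benefit(subset):
    """Overlapping (correlated) reductions: within one hazard only the largest
    reduction counts in full, the rest contribute half (a toy assumption)."""
    total = 0.0
    for hazard in {a[2] for a in subset}:
        vals = sorted((a[3] for a in subset if a[2] == hazard), reverse=True)
        total += vals[0] + 0.5 * sum(vals[1:])
    return total

# 1) Priority ranking: take activities from the top of the list until the budget runs out.
ranked = sorted(acts, key=lambda a: a[3], reverse=True)
chosen, spent = [], 0
for a in ranked:
    if spent + a[1] <= budget:
        chosen.append(a); spent += a[1]

# 2) Portfolio optimization: exhaustive search over affordable subsets.
best = max((s for r in range(len(acts) + 1) for s in combinations(acts, r)
            if sum(a[1] for a in s) <= budget), key=portfolio_benefit)

print("ranking picks:", [a[0] for a in chosen], "benefit:", portfolio_benefit(chosen))
print("optimal picks:", [a[0] for a in best],   "benefit:", portfolio_benefit(best))
```

Here the ranking selects the two flood activities (benefit 14.5) while the diversified portfolio {A, C} achieves 16.0, illustrating why ranking fails to maximize risk-reduction benefits under correlated reductions.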
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. First, images were reconstructed using the projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, a small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts, albeit with slightly decreased gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
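The OS-EM variant examined here differs from MLEM only in that each sub-iteration updates the image from one subset of the projection bins. A generic sketch (again assuming a dense system matrix A and sinogram y; not the authors' implementation):

```python
import numpy as np

def osem(A, y, n_subsets=8, n_iter=4, eps=1e-12):
    """Ordered subset-expectation maximization (OS-EM), generic sketch.

    A : (n_bins, n_voxels) system matrix, y : (n_bins,) projection data.
    Projection bins are partitioned into interleaved subsets; each
    sub-iteration updates the image using only one subset.
    """
    n_bins, n_vox = A.shape
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    x = np.ones(n_vox)
    for _ in range(n_iter):
        for idx in subsets:
            A_s = A[idx]
            sens_s = A_s.sum(axis=0) + eps        # subset sensitivity image
            ratio = y[idx] / (A_s @ x + eps)      # measured / estimated projections
            x *= (A_s.T @ ratio) / sens_s         # multiplicative update
    return x
```

Restricting the update to a small ROI, as in the study, amounts to applying the same update only to the voxels (columns of A) inside the ROI mask.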
Liu, Ming; Gao, Yue; Xiao, Rui; Zhang, Bo-li
2009-01-01
This study analyzes the microcosmic significance of the Chinese medicine composing principle "principal, assistant, complement and mediating guide" and its fuzzy mathematical quantitative law. Based on molecular biology and the maximal membership principle, fuzzy subsets and membership functions were proposed. Using an in vivo experiment on the effects of SiWu Decoction and its ingredients on mice with radiation-induced blood deficiency, it was concluded by the maximal membership principle that DiHuang and DangGui belonged to the principal and assistant subset, BaiShao belonged to the contrary complement subset, and ChuanXiong belonged to the mediating guide subset. It is argued that traditional Chinese medicine will become a more complete medical science when its theory can be described in mathematical language.
Restricted numerical range: A versatile tool in the theory of quantum information
NASA Astrophysics Data System (ADS)
Gawron, Piotr; Puchała, Zbigniew; Miszczak, Jarosław Adam; Skowronek, Łukasz; Życzkowski, Karol
2010-10-01
The numerical range of a Hermitian operator X is defined as the set of all possible expectation values of this observable over normalized quantum states. We analyze a modification of this definition in which the expectation value is taken over a certain subset of the set of all quantum states. One considers, for instance, the set of real states, the set of product states, separable states, or the set of maximally entangled states. We show exemplary applications of these algebraic tools in the theory of quantum information: analysis of k-positive maps and entanglement witnesses, as well as study of the minimal output entropy of a quantum channel. The product numerical range of a unitary operator is used to solve the problem of local distinguishability of a family of two unitary gates.
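In standard notation (chosen here for illustration), the unrestricted and restricted numerical ranges discussed above can be written as

```latex
W(X) = \{\, \langle\psi| X |\psi\rangle \;:\; |\psi\rangle \in \mathcal{H},\ \langle\psi|\psi\rangle = 1 \,\},
\qquad
W_{\Omega}(X) = \{\, \langle\psi| X |\psi\rangle \;:\; |\psi\rangle \in \Omega \,\},
```

where Ω is the chosen subset of normalized states; taking Ω to be the set of product states gives the product numerical range used for the two-gate distinguishability result.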
NASA Astrophysics Data System (ADS)
Karamat, Muhammad I.; Farncombe, Troy H.
2015-10-01
Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC-estimated scatter contamination from one radionuclide in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm, named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of the object background compared with reconstruction methods implementing other scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of reconstructed images and activity estimated using Sim-OSEM. In order to quantify the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
NASA Astrophysics Data System (ADS)
Hou, Yanqing; Verhagen, Sandra; Wu, Jie
2016-12-01
Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. In case of weak models (i.e., low precision of data), however, the success rate of AR may be low, which may consequently introduce large errors into the baseline solution in cases of wrong fixing. Partial Ambiguity Resolution (PAR) is therefore proposed such that the baseline precision can be improved by fixing only a subset of ambiguities with a high success rate. This contribution proposes a new PAR strategy that selects the subset such that the expected precision gain is maximized among a set of pre-selected subsets, while at the same time the failure rate is controlled. These pre-selected subsets are constructed to achieve the highest success rate among subsets of the same size. The strategy is called the Two-step Success Rate Criterion (TSRC), as it first tries to fix a relatively large subset and uses the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset is fixed and validated by the ratio test so as to fulfil the overall failure rate criterion. It is shown how the method can be practically used without introducing a large additional computational effort and, more importantly, how it can improve (or at least not deteriorate) the availability in terms of baseline precision compared to the classical Success Rate Criterion (SRC) PAR strategy, based on a simulation validation. In the simulation validation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For medium to long baselines with dual-constellation GNSS, the average improvement is around 20-30%.
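Subset success rates of this kind are typically evaluated with the standard bootstrapping bound computed from the conditional standard deviations of the decorrelated ambiguities. A minimal sketch (a generic illustration, not the TSRC implementation):

```python
import numpy as np
from math import erf, sqrt

def bootstrapped_success_rate(cond_std):
    """Bootstrapped ambiguity success rate from the conditional standard
    deviations of the (decorrelated) ambiguities:
        P_s = prod_i [ 2 * Phi(1 / (2 * sigma_{i|I})) - 1 ]
    the usual lower bound used when pre-selecting subsets for PAR.
    """
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return float(np.prod([2.0 * phi(1.0 / (2.0 * s)) - 1.0 for s in cond_std]))

# Simplified PAR-style pre-selection: keep the most precise conditional
# ambiguities while the subset success rate stays above a target threshold.
# sigmas = sorted(conditional_stds)
# subset = [s for i, s in enumerate(sigmas)
#           if bootstrapped_success_rate(sigmas[:i + 1]) >= 0.999]
```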
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. PMID:25479254
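A generic sketch of how such a bias/variability evaluation can be set up across replicate reconstructions (an illustration, not the MOLAR evaluation code); the ROI mask, truth image and list of replicate reconstructions are assumed inputs.

```python
import numpy as np

def roi_bias_and_noise(recons, truth, roi):
    """Percent bias and variability of an ROI mean across replicate reconstructions.

    recons : list of reconstructed images (independent noise realizations)
    truth  : reference image (or known phantom activity map)
    roi    : boolean mask of the region of interest
    """
    means = np.array([img[roi].mean() for img in recons])
    bias = 100.0 * (means.mean() - truth[roi].mean()) / truth[roi].mean()
    cov = 100.0 * means.std(ddof=1) / means.mean()   # coefficient of variation
    return bias, cov

# Matched effective iterations: compare, e.g., 2 iterations x 16 subsets with
# 4 iterations x 8 subsets, both 32 EM-equivalent iterations.
```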
NASA Astrophysics Data System (ADS)
Ren, Xiaoqiang; Yan, Jiaqi; Mo, Yilin
2018-03-01
This paper studies binary hypothesis testing based on measurements from a set of sensors, a subset of which can be compromised by an attacker. The measurements from a compromised sensor can be manipulated arbitrarily by the adversary. The asymptotic exponential rate with which the probability of error goes to zero is adopted to indicate the detection performance of a detector. In practice, we expect the attack on sensors to be sporadic, and therefore the system may operate with all sensors being benign for extended periods of time. This motivates us to consider the trade-off between the detection performance of a detector, i.e., the probability of error, when the attacker is absent (defined as efficiency) and the worst-case detection performance when the attacker is present (defined as security). We first provide the fundamental limits of this trade-off, and then propose a detection strategy that achieves these limits. We then consider a special case, where there is no trade-off between security and efficiency. In other words, our detection strategy can achieve maximal efficiency and maximal security simultaneously. Two extensions of the secure hypothesis testing problem are also studied, and fundamental limits and achievability results are provided: 1) a subset of sensors, namely "secure" sensors, is assumed to be equipped with better security countermeasures and hence guaranteed to be benign; 2) detection performance with an unknown number of compromised sensors. Numerical examples are given to illustrate the main results.
Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision
NASA Astrophysics Data System (ADS)
Hendrawan, Y.; Hawa, L. C.; Damayanti, R.
2018-03-01
This study attempted to apply a machine vision-based chip drying monitoring system that is able to optimise the drying process of cassava chips. The objective of this study is to propose a fish swarm intelligence (FSI) optimization algorithm to find the most significant set of image features suitable for predicting the water content of cassava chips during the drying process using an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-Objective Optimization (MOO) was used in this study, consisting of prediction accuracy maximization and feature-subset size minimization. The results showed that the best feature subset comprised grey mean, L (Lab) mean, a (Lab) energy, red entropy, hue contrast, and grey homogeneity. The best feature subset was tested successfully in the ANN model to describe the relationship between image features and water content of cassava chips during the drying process, with an R2 between real and predicted data of 0.9.
Uncountably many maximizing measures for a dense subset of continuous functions
NASA Astrophysics Data System (ADS)
Shinoda, Mao
2018-05-01
Ergodic optimization aims to single out dynamically invariant Borel probability measures which maximize the integral of a given ‘performance’ function. For a continuous self-map of a compact metric space and a dense set of continuous functions, we show the existence of uncountably many ergodic maximizing measures. We also show that, for a topologically mixing subshift of finite type and a dense set of continuous functions there exist uncountably many ergodic maximizing measures with full support and positive entropy.
Enumerating all maximal frequent subtrees in collections of phylogenetic trees
2014-01-01
Background: A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results: We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions: Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474
Enumerating all maximal frequent subtrees in collections of phylogenetic trees.
Deepak, Akshay; Fernández-Baca, David
2014-01-01
A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.
Studies of a Next-Generation Silicon-Photomultiplier-Based Time-of-Flight PET/CT System.
Hsu, David F C; Ilan, Ezgi; Peterson, William T; Uribe, Jorge; Lubberink, Mark; Levin, Craig S
2017-09-01
This article presents system performance studies for the Discovery MI PET/CT system, a new time-of-flight system based on silicon photomultipliers. System performance and clinical imaging were compared between this next-generation system and other commercially available PET/CT and PET/MR systems, as well as between different reconstruction algorithms. Methods: Spatial resolution, sensitivity, noise-equivalent counting rate, scatter fraction, counting rate accuracy, and image quality were characterized with the National Electrical Manufacturers Association NU-2 2012 standards. Energy resolution and coincidence time resolution were measured. Tests were conducted independently on two Discovery MI scanners installed at Stanford University and Uppsala University, and the results were averaged. Back-to-back patient scans were also performed between the Discovery MI, Discovery 690 PET/CT, and SIGNA PET/MR systems. Clinical images were reconstructed using both ordered-subset expectation maximization and Q.Clear (block-sequential regularized expectation maximization with point-spread function modeling) and were examined qualitatively. Results: The averaged full widths at half maximum (FWHMs) of the radial/tangential/axial spatial resolution reconstructed with filtered backprojection at 1, 10, and 20 cm from the system center were, respectively, 4.10/4.19/4.48 mm, 5.47/4.49/6.01 mm, and 7.53/4.90/6.10 mm. The averaged sensitivity was 13.7 cps/kBq at the center of the field of view. The averaged peak noise-equivalent counting rate was 193.4 kcps at 21.9 kBq/mL, with a scatter fraction of 40.6%. The averaged contrast recovery coefficients for the image-quality phantom were 53.7, 64.0, 73.1, 82.7, 86.8, and 90.7 for the 10-, 13-, 17-, 22-, 28-, and 37-mm-diameter spheres, respectively. The average photopeak energy resolution was 9.40% FWHM, and the average coincidence time resolution was 375.4 ps FWHM. Clinical image comparisons between the PET/CT systems demonstrated the high quality of the Discovery MI. Comparisons between the Discovery MI and SIGNA showed a similar spatial resolution and overall imaging performance. Lastly, the results indicated significantly enhanced image quality and contrast-to-noise performance for Q.Clear, compared with ordered-subset expectation maximization. Conclusion: Excellent performance was achieved with the Discovery MI, including 375 ps FWHM coincidence time resolution and sensitivity of 14 cps/kBq. Comparisons between reconstruction algorithms and other multimodal silicon photomultiplier and non-silicon photomultiplier PET detector system designs indicated that performance can be substantially enhanced with this next-generation system. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
On Hardness of Pricing Items for Single-Minded Bidders
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Kimbrel, Tracy; Makarychev, Konstantin; Sviridenko, Maxim
We consider the following item pricing problem, which has received much attention recently. A seller has an infinite number of copies of n items. There are m buyers, each with a budget and an intention to buy a fixed subset of items. Given prices on the items, each buyer buys his subset of items, at the given prices, provided the total price of the subset is at most his budget. The objective of the seller is to determine the prices such that her total profit is maximized.
Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar
2009-02-01
Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low pass filters for SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the cardiac azimuth and elevation angles from the SPECT reconstruction. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM for a cutoff < 0.4 it fails to generate a cardiac orientation due to oversmoothing, and it gives an unstable response with FBP and MLEM. This study on evaluating the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into optimal selection of filter parameters.
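For orientation, the Butterworth filter used above is a frequency-space window parameterized by a cutoff and an order. The sketch below is a generic illustration (conventions for the exact functional form vary between vendors; the form and parameter values here are assumptions):

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order):
    """2D Butterworth low-pass filter in frequency space.

    cutoff is in cycles/pixel (Nyquist = 0.5); here H = 1 / (1 + (f / cutoff)**(2 * order)),
    one of several conventions in use.
    """
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx**2 + fy**2)
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order))

def filter_projection(proj, cutoff=0.4, order=5):
    """Smooth one projection view (or reconstructed slice) with the filter."""
    H = butterworth_lowpass(proj.shape, cutoff, order)
    return np.real(np.fft.ifft2(np.fft.fft2(proj) * H))
```

Varying `cutoff` and `order` in such a function is the experiment the study performs: lower cutoffs smooth more strongly, which is why very low Butterworth cutoffs oversmooth the reconstruction and destabilize the orientation estimate.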
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su
2015-10-01
Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET was used. Spatial resolution of I-124 was measured to a transverse offset of 50 mm from the center. FBP, 2D ordered subset expectation maximization (OSEM2D), 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm3 at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.
Constrained Fisher Scoring for a Mixture of Factor Analyzers
2016-09-01
The proposed constrained Fisher scoring method is compared with an expectation-maximization algorithm with similar computational requirements, and its efficacy is demonstrated on a synthetic mixture-of-factor-analyzers example and a manifold learning example.
Anatomically-Aided PET Reconstruction Using the Kernel Method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-01-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
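A simplified sketch of the general kernel idea follows (an illustration of the approach, not the authors' implementation; the kNN construction, kernel width and function names are assumptions): the image is represented as x = Kα, where K is built from anatomical feature vectors, and the EM update is applied to the coefficients α with effective system matrix P K.

```python
import numpy as np

def build_kernel_matrix(anat_features, k=48, sigma=1.0):
    """Row-normalized kernel matrix K from anatomical feature vectors (one per voxel).

    Each voxel gets Gaussian weights to its k nearest neighbours in anatomical
    feature space (a simplified kNN construction for illustration).
    """
    n = anat_features.shape[0]
    K = np.zeros((n, n))
    for j in range(n):
        d2 = np.sum((anat_features - anat_features[j]) ** 2, axis=1)
        nbrs = np.argsort(d2)[:k]
        w = np.exp(-d2[nbrs] / (2.0 * sigma ** 2))
        K[j, nbrs] = w / w.sum()
    return K

def kernel_em(P, y, K, n_iter=50, eps=1e-12):
    """ML-EM on kernel coefficients alpha with effective system matrix P @ K;
    the reconstructed image is x = K @ alpha."""
    alpha = np.ones(K.shape[1])
    PK = P @ K
    sens = PK.sum(axis=0) + eps
    for _ in range(n_iter):
        ratio = y / (PK @ alpha + eps)
        alpha *= (PK.T @ ratio) / sens
    return K @ alpha
```

Because the update has the same multiplicative form as standard EM, splitting the projection bins into ordered subsets (as noted in the abstract) carries over directly.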
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
A quantitative reconstruction software suite for SPECT imaging
NASA Astrophysics Data System (ADS)
Namías, Mauro; Jeraj, Robert
2017-11-01
Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction in hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.
Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos
2011-07-01
In dynamic positron emission tomography, many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [(18)F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1) initialized with the FBP parametric image for faster convergence and higher accuracy. The results, compared with analytic reconstructions, show quantitative robustness (i.e. minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the directly reconstructed parametric maps and the indirect FBP (-0.035x + 0.48E-5). Copyright © 2011 Elsevier Ltd. All rights reserved.
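For reference, the Patlak graphical model whose parameters are estimated directly in this kind of approach can be written, with the reference-tissue curve playing the role of the input function (standard notation, chosen here for illustration):

```latex
\frac{C_T(t)}{C_R(t)} \;=\; K_i \, \frac{\int_0^{t} C_R(\tau)\,\mathrm{d}\tau}{C_R(t)} \;+\; V, \qquad t > t^{*},
```

where C_T(t) is the tissue time-activity curve, C_R(t) the reference-tissue input, K_i the Patlak slope and V the intercept. Direct reconstruction estimates K_i and V per voxel inside the EM update rather than fitting them to independently reconstructed frames.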
Probability matching and strategy availability.
Koehler, Derek J; James, Greta
2010-09-01
Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of "underthinking" rather than "overthinking": They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
Incorporating HYPR de-noising within iterative PET reconstruction (HYPR-OSEM)
NASA Astrophysics Data System (ADS)
Cheng, Ju-Chieh (Kevin); Matthews, Julian; Sossi, Vesna; Anton-Rodriguez, Jose; Salomon, André; Boellaard, Ronald
2017-08-01
HighlY constrained back-PRojection (HYPR) is a post-processing de-noising technique originally developed for time-resolved magnetic resonance imaging. It has been recently applied to dynamic imaging for positron emission tomography and shown promising results. In this work, we have developed an iterative reconstruction algorithm (HYPR-OSEM) which improves the signal-to-noise ratio (SNR) in static imaging (i.e. single frame reconstruction) by incorporating HYPR de-noising directly within the ordered subsets expectation maximization (OSEM) algorithm. The proposed HYPR operator in this work operates on the target image(s) from each subset of OSEM and uses the sum of the preceding subset images as the composite which is updated every iteration. Three strategies were used to apply the HYPR operator in OSEM: (i) within the image space modeling component of the system matrix in forward-projection only, (ii) within the image space modeling component in both forward-projection and back-projection, and (iii) on the image estimate after the OSEM update for each subset thus generating three forms: (i) HYPR-F-OSEM, (ii) HYPR-FB-OSEM, and (iii) HYPR-AU-OSEM. Resolution and contrast phantom simulations with various sizes of hot and cold regions as well as experimental phantom and patient data were used to evaluate the performance of the three forms of HYPR-OSEM, and the results were compared to OSEM with and without a post reconstruction filter. It was observed that the convergence in contrast recovery coefficients (CRC) obtained from all forms of HYPR-OSEM was slower than that obtained from OSEM. Nevertheless, HYPR-OSEM improved SNR without degrading accuracy in terms of resolution and contrast. It achieved better accuracy in CRC at equivalent noise level and better precision than OSEM and better accuracy than filtered OSEM in general. In addition, HYPR-AU-OSEM has been determined to be the more effective form of HYPR-OSEM in terms of accuracy and precision based on the studies conducted in this work.
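The HYPR operator itself is compact; a generic sketch follows (a Gaussian kernel stands in for the low-pass filter, and the function is not tied to any of the three HYPR-OSEM variants above).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hypr_denoise(target, composite, sigma=2.0, eps=1e-8):
    """HYPR-style de-noising of a noisy target image using a composite image:
        I_HYPR = I_composite * smooth(I_target) / smooth(I_composite)
    In HYPR-OSEM the composite would be the sum of preceding subset images;
    here it is simply passed in, with a Gaussian filter as the smoothing step.
    """
    num = gaussian_filter(target, sigma)
    den = gaussian_filter(composite, sigma) + eps
    return composite * (num / den)
```

Applying this operator inside forward-projection, inside both projection steps, or to the image after each subset update corresponds, respectively, to the HYPR-F-OSEM, HYPR-FB-OSEM and HYPR-AU-OSEM forms described above.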
Navalta, James W; Tibana, Ramires Alsamir; Fedor, Elizabeth A; Vieira, Amilton; Prestes, Jonato
2014-01-01
This investigation assessed the lymphocyte subset response to three days of intermittent run exercise to exhaustion. Twelve healthy college-aged participants, males (n = 8) and females (n = 4) (age = 26 ± 4 years; height = 170.2 ± 10 cm; body mass = 75 ± 18 kg), completed an exertion test (maximal running speed and VO2max) and later performed three consecutive days of an intermittent run protocol to exhaustion (30 sec at maximal running speed and 30 sec at half of the maximal running speed). Blood was collected before exercise (PRE) and immediately following the treadmill bout (POST) each day. When the absolute change from baseline was evaluated (i.e., Δ baseline), a significant change in CD4+ and CD8+ CX3CR1 cells was observed by completion of the third day. Significant changes in both apoptosis and migration were observed following two consecutive days in CD19+ lymphocytes, and the influence of apoptosis persisted following the third day. Given these lymphocyte responses, it is recommended that a rest day be incorporated following two consecutive days of a high-intensity intermittent run program to minimize immune cell modulations and reduce potential susceptibility.
Impacts of Maximizing Tendencies on Experience-Based Decisions.
Rim, Hye Bin
2017-06-01
Previous research on risky decisions has suggested that people tend to make different choices depending on whether they acquire the information from personally repeated experiences or from statistical summary descriptions. This phenomenon, called the description-experience gap, was expected to be moderated by individual differences in maximizing tendencies, a desire to maximize decisional outcomes. Specifically, it was hypothesized that maximizers' willingness to engage in extensive information searching would lead maximizers to make experience-based decisions as if payoff distributions were given explicitly. A total of 262 participants completed four decision problems. Results showed that maximizers, compared to non-maximizers, drew more samples before making a choice but reported lower confidence levels on both the accuracy of knowledge gained from experiences and the likelihood of satisfactory outcomes. Additionally, maximizers exhibited smaller description-experience gaps than non-maximizers, as expected. The implications of the findings and unanswered questions for future research were discussed.
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data
NASA Astrophysics Data System (ADS)
Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.
2013-01-01
A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program as a command line, a user friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times as fast. In addition, Maximum Likelihood Expectation Maximization (MLEM) and its accelerated version Ordered Subsets Expectation Maximization (OSEM) algorithms were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using a low number of projections or a limited angular range.
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Kinahan, P. E.; Brasse, D.; Comtat, C.; Townsend, D. W.
2002-02-01
We have evaluated the penalized ordered-subset transmission reconstruction (OSTR) algorithm for postinjection single photon transmission scanning. The OSTR algorithm of Erdogan and Fessler (1999) uses a more accurate model for transmission tomography than ordered-subsets expectation-maximization (OSEM) when OSEM is applied to the logarithm of the transmission data. The OSTR algorithm is directly applicable to postinjection transmission scanning with a single photon source, as emission contamination from the patient mimics the effect, in the original derivation of OSTR, of random coincidence contamination in a positron source transmission scan. Multiple noise realizations of simulated postinjection transmission data were reconstructed using OSTR, filtered backprojection (FBP), and OSEM algorithms. Due to the nonspecific task performance, or multiple uses, of the transmission image, multiple figures of merit were evaluated, including image noise, contrast, uniformity, and root mean square (rms) error. We show that: 1) the use of a three-dimensional (3-D) regularizing image roughness penalty with OSTR improves the tradeoffs in noise, contrast, and rms error relative to the use of a two-dimensional penalty; 2) OSTR with a 3-D penalty has improved tradeoffs in noise, contrast, and rms error relative to FBP or OSEM; and 3) the use of image standard deviation from a single realization to estimate the true noise can be misleading in the case of OSEM. We conclude that using OSTR with a 3-D penalty potentially allows for shorter postinjection transmission scans in single photon transmission tomography in positron emission tomography (PET) relative to FBP or OSEM reconstructed images with the same noise properties. This combination of singles+OSTR is particularly suitable for whole-body PET oncology imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefferkoetter, Joshua, E-mail: dnrjds@nus.edu.sg; Ouyang, Jinsong; Rakvongthai, Yothin
2014-06-15
Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as compared to its non-TOF counterpart for all defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed a larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For a typical reconstruction protocol used in clinical practice, i.e., fewer than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded lower detectability. For a large number of iterations, TOF+PSF yields the best observer performance.
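The channelized Hotelling observer SNR used as the figure of merit above can be sketched as follows. This is a generic formulation rather than the study's exact pipeline; the channel matrix U and the image ensembles are placeholders.

    import numpy as np

    def cho_snr(signal_imgs, noise_imgs, U):
        # signal_imgs, noise_imgs: (n_images, n_pixels); U: (n_pixels, n_channels)
        s = signal_imgs @ U                              # channel outputs, defect present
        n = noise_imgs @ U                               # channel outputs, defect absent
        ds = s.mean(axis=0) - n.mean(axis=0)             # mean channel-output difference
        K = 0.5 * (np.cov(s, rowvar=False) + np.cov(n, rowvar=False))
        w = np.linalg.solve(K, ds)                       # Hotelling template
        return np.sqrt(ds @ w)                           # observer SNR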
Lepley, Adam S; Ericksen, Hayley M; Sohn, David H; Pietrosimone, Brian G
2014-06-01
Persistent quadriceps weakness is common following anterior cruciate ligament reconstruction (ACLr). Alterations in spinal-reflexive excitability, corticospinal excitability and voluntary activation have been hypothesized as underlying mechanisms contributing to quadriceps weakness. The aim of this study was to evaluate the predictive capabilities of spinal-reflexive excitability, corticospinal excitability and voluntary activation on quadriceps strength in healthy and ACLr participants. Quadriceps strength was measured using maximal voluntary isometric contractions (MVIC). Voluntary activation was quantified via the central activation ratio (CAR). Corticospinal and spinal-reflexive excitability were measured using active motor thresholds (AMT) and Hoffmann reflexes normalized to maximal muscle responses (H:M), respectively. ACLr individuals were also split into high and low strength subsets based on MVIC. CAR was the only significant predictor in the healthy group. In the ACLr group, CAR and H:M significantly predicted 47% of the variance in MVIC. ACLr individuals in the high strength subset demonstrated significantly higher CAR and H:M than those in the low strength subset. Increased quadriceps voluntary activation, spinal-reflexive excitability and corticospinal excitability relates to increased quadriceps strength in participants following ACLr. Rehabilitation strategies used to target neural alterations may be beneficial for the restoration of muscle strength following ACLr. Copyright © 2014 Elsevier B.V. All rights reserved.
Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.
Chou, Cheng-Ying; Dong, Yun; Hung, Yukai; Kao, Yu-Jiun; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu
2012-01-01
Positron emission tomography (PET) is an important imaging modality in both clinical use and research. We have developed a compact high-sensitivity PET system consisting of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus impose dramatic computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), the NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approach integrates the symmetry properties of the imaging system with features of the GPU architecture, including block/warp/thread assignments and effective memory usage, to accelerate the computations for ordered subset expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithm was implemented in both CPU-based and GPU-based codes, and their computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme drastically reduces the reconstruction time and thus largely expands the applicability of the dual-head PET system.
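A minimal CPU-side sketch of the ordered-subsets update that such a GPU implementation accelerates is given below; the dense matrix A stands in for the on-the-fly projector, and the symmetry-aware GPU kernels described above are not modeled here.

    import numpy as np

    def osem(A, y, n_subsets=8, n_iter=4, eps=1e-12):
        # A: system matrix (lines of response x voxels), y: measured counts
        subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            for idx in subsets:                       # one MLEM-style update per subset
                As = A[idx]
                proj = As @ x
                x *= (As.T @ (y[idx] / np.maximum(proj, eps))) \
                     / np.maximum(As.sum(axis=0), eps)
        return x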
Lasnon, Charline; Dugue, Audrey Emmanuelle; Briand, Mélanie; Blanc-Fournier, Cécile; Dutoit, Soizic; Louis, Marie-Hélène; Aide, Nicolas
2015-06-01
We compared conventional filtered back-projection (FBP), two-dimensional ordered-subsets expectation maximization (OSEM) and maximum a posteriori (MAP) NEMA NU 4-optimized reconstructions for therapy assessment. Varying reconstruction settings were used to determine the parameters for optimal image quality with two NEMA NU 4 phantom acquisitions. Subsequently, data from two experiments in which nude rats bearing subcutaneous tumors had received a dual PI3K/mTOR inhibitor were reconstructed with the NEMA NU 4-optimized parameters. Mann-Whitney tests were used to compare mean standardized uptake value (SUV(mean)) variations among groups. All NEMA NU 4-optimized reconstructions showed the same 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) kinetic patterns and detected a significant difference in SUV(mean) relative to day 0 between controls and treated groups for all time points with comparable p values. In the framework of therapy assessment in rats bearing subcutaneous tumors, all algorithms available on the Inveon system performed equally.
NASA Regional Planetary Image Facility
NASA Technical Reports Server (NTRS)
Arvidson, Raymond E.
2001-01-01
The Regional Planetary Image Facility (RPIF) provided access to data from NASA planetary missions and expert assistance about the data sets and how to order subsets of the collections. This ensured that the benefit/cost of acquiring the data was maximized by widespread dissemination and use of the observations and resultant collections. The RPIF provided education and outreach functions that ranged from providing data and information to teachers, to involving small groups of highly motivated students in its activities, to public lectures and tours. These activities maximized dissemination of results and data to the educational and public communities.
Active inference and epistemic value.
Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni
2015-01-01
We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
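In one common notation (a hedged restatement; sign and notation conventions differ across the active inference literature), the decomposition described above can be written as

    -G(\pi) \;=\; \underbrace{\mathbb{E}_{Q(o\mid\pi)}\!\left[\ln P(o\mid C)\right]}_{\text{extrinsic value}}
    \;+\; \underbrace{\mathbb{E}_{Q(o\mid\pi)}\!\left[D_{\mathrm{KL}}\!\left(Q(s\mid o,\pi)\,\middle\|\,Q(s\mid\pi)\right)\right]}_{\text{epistemic value}}

where G(\pi) is the expected free energy of policy \pi, C encodes prior preferences over outcomes o, and s are hidden states; the first term corresponds to expected utility and the second to expected information gain.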
An Image Processing Algorithm Based On FMAT
NASA Technical Reports Server (NTRS)
Wang, Lui; Pal, Sankar K.
1995-01-01
Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT) proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM mitigates the problem of local optima in EM.
Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C
2004-06-30
Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.
ERIC Educational Resources Information Center
Shemick, John M.
1983-01-01
In a project to identify and verify professional competencies for beginning industrial education teachers, researchers found a 173-item questionnaire unwieldy. Using multiple-matrix sampling, they distributed subsets of items to respondents, resulting in adequate returns as well as duplication, postage, and time savings. (SK)
Pretorius, P. Hendrik; Johnson, Karen L.; King, Michael A.
2016-01-01
We have recently been successful in the development and testing of rigid-body motion tracking, estimation and compensation for cardiac perfusion SPECT based on a visual tracking system (VTS). The goal of this study was to evaluate in patients the effectiveness of our rigid-body motion compensation strategy. Sixty-four patient volunteers were asked to remain motionless or execute some predefined body motion during an additional second stress perfusion acquisition. Acquisitions were performed using the standard clinical protocol with 64 projections acquired through 180 degrees. All data were reconstructed with an ordered-subsets expectation-maximization (OSEM) algorithm using 4 projections per subset and 5 iterations. All physical degradation factors were addressed (attenuation, scatter, and distance-dependent resolution), while a 3-dimensional Gaussian rotator was used during reconstruction to correct for six-degree-of-freedom (6-DOF) rigid-body motion estimated by the VTS. Polar map quantification was employed to evaluate compensation techniques. In 54.7% of the uncorrected second stress studies there was a statistically significant difference in the polar maps, and in 45.3% this made a difference in the interpretation of segmental perfusion. With motion correction, the impact of motion was reduced: 32.8% of the polar maps were statistically significantly different, and in 14.1% this difference changed the interpretation of segmental perfusion. The improvement shown in polar map quantitation translated to visually improved uniformity of the SPECT slices. PMID:28042170
Annotti, Lee A; Teglasi, Hedwig
2017-01-01
Real-world contexts differ in the clarity of expectations for desired responses, as do assessment procedures, ranging along a continuum from maximal conditions that provide well-defined expectations to typical conditions that provide ill-defined expectations. Executive functions guide effective social interactions, but relations between them have not been studied with measures that are matched in the clarity of response expectations. In predicting teacher-rated social competence (SC) from kindergarteners' performance on tasks of executive functions (EFs), we found better model-data fit indexes when both measures were similar in the clarity of response expectations for the child. The maximal EF measure, the Developmental Neuropsychological Assessment, presents well-defined response expectations, and the typical EF measure, 5 scales from the Thematic Apperception Test (TAT), presents ill-defined response expectations (i.e., Abstraction, Perceptual Integration, Cognitive-Experiential Integration, and Associative Thinking). To assess SC under maximal and typical conditions, we used 2 teacher-rated questionnaires, with items, respectively, that emphasize well-defined and ill-defined expectations: the Behavior Rating Inventory: Behavioral Regulation Index and the Social Skills Improvement System: Social Competence Scale. Findings suggest that matching clarity of expectations improves generalization across measures and highlight the usefulness of the TAT to measure EF.
Identification of features in indexed data and equipment therefore
Jarman, Kristin H [Richland, WA; Daly, Don Simone [Richland, WA; Anderson, Kevin K [Richland, WA; Wahl, Karen L [Richland, WA
2002-04-02
Embodiments of the present invention provide methods of identifying a feature in an indexed dataset. Such embodiments encompass selecting an initial subset of indices, the initial subset of indices being encompassed by an initial window-of-interest and comprising at least one beginning index and at least one ending index; computing an intensity weighted measure of dispersion for the subset of indices using a subset of responses corresponding to the subset of indices; and comparing the intensity weighted measure of dispersion to a dispersion critical value determined from an expected value of the intensity weighted measure of dispersion under a null hypothesis of no transient feature present. Embodiments of the present invention also encompass equipment configured to perform the methods of the present invention.
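One plausible illustration of the windowed test described above is sketched below: within a window of indices, an intensity-weighted dispersion of the index values is computed (responses act as weights) and compared to a critical value expected under the "no transient feature" null. The specific statistic, threshold and decision direction are assumptions for illustration, not the patent's exact formulation.

    import numpy as np

    def weighted_dispersion(indices, responses):
        # intensity-weighted variance of the index positions within the window
        w = responses / responses.sum()
        mu = np.sum(w * indices)
        return np.sum(w * (indices - mu) ** 2)

    def has_feature(indices, responses, critical_value):
        # a concentrated peak yields a dispersion well below that of flat noise
        return weighted_dispersion(indices, responses) < critical_value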
Patel, Nitin R; Ankolekar, Suresh
2007-11-30
Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
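A toy sketch of the core idea, choosing the Phase III sample size that maximizes expected profit under a Bayesian prior on the effect size, is given below. The simple normal model, the one-sided test, and all numbers are illustrative assumptions, not the authors' model.

    import numpy as np
    from scipy.stats import norm

    def expected_profit(n, prior_mean=0.3, prior_sd=0.15, sigma=1.0,
                        alpha=0.025, value=500e6, cost_per_patient=20e3):
        # sample plausible true effects from the company's prior
        deltas = np.random.default_rng(0).normal(prior_mean, prior_sd, 20000)
        se = sigma * np.sqrt(2.0 / n)                      # SE of the two-arm comparison
        power = norm.sf(norm.isf(alpha) - deltas / se)     # regulatory success probability
        return power.mean() * value - 2 * n * cost_per_patient

    best_n = max(range(50, 2001, 50), key=expected_profit)  # per-arm sample size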
Attenuation correction strategies for multi-energy photon emitters using SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pretorius, P.H.; King, M.A.; Pan, T.S.
1996-12-31
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better not to combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and (4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Youngrok
2013-05-15
Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can be used to represent the survival time distribution of a heterogeneous patient group by the proportions of each class and the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their classes of origin; this impossibility of decomposition is a barrier to estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples when labels are missing for a subset or for the entire data set. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels and thus incorporate more information than traditional EM algorithms. In particular, we propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world gastric cancer data set provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
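The flavor of a partially labeled E-step can be sketched as follows for a two-class exponential mixture: fully labeled samples keep hard responsibilities, while unlabeled ones receive soft responsibilities from the current parameters. This is a generic illustration, not any of the named EM-OCML/PCML/HCML/CPCML variants.

    import numpy as np

    def e_step(t, labels, pi, rates):
        # t: survival times; labels: 0/1 class, or -1 if unknown
        # pi: mixing proportions (2,); rates: exponential rates (2,)
        dens = np.stack([r * np.exp(-r * t) for r in rates], axis=1)  # class densities
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)        # soft responsibilities
        for k in (0, 1):                               # override with hard labels where known
            resp[labels == k] = np.eye(2)[k]
        return resp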
Trépanier, Marc-Olivier; Lim, Joonbum; Lai, Terence K Y; Cho, Hye Jin; Domenichiello, Anthony F; Chen, Chuck T; Taha, Ameer Y; Bazinet, Richard P; Burnham, W M
2014-04-01
Docosahexaenoic acid (DHA) is an omega-3 polyunsaturated fatty acid (n-3 PUFA) which has been shown to raise seizure thresholds following acute administration in rats. The aims of the present experiment were the following: 1) to test whether subchronic DHA administration raises seizure threshold in the maximal pentylenetetrazol (PTZ) model 24h following the last injection and 2) to determine whether the increase in seizure threshold is correlated with an increase in serum and/or brain DHA. Animals received daily intraperitoneal (i.p.) injections of 50mg/kg of DHA, DHA ethyl ester (DHA EE), or volume-matched vehicle (albumin/saline) for 14days. On day 15, one subset of animals was seizure tested in the maximal PTZ model (Experiment 1). In a separate (non-seizure tested) subset of animals, blood was collected, and brains were excised following high-energy, head-focused microwave fixation. Lipid analysis was performed on serum and brain (Experiment 2). For data analysis, the DHA and DHA EE groups were combined since they did not differ significantly from each other. In the maximal PTZ model, DHA significantly increased seizure latency by approximately 3-fold as compared to vehicle-injected animals. This increase in seizure latency was associated with an increase in serum unesterified DHA. Total brain DHA and brain unesterified DHA concentrations, however, did not differ significantly in the treatment and control groups. An increase in serum unesterified DHA concentration reflecting increased flux of DHA to the brain appears to explain changes in seizure threshold, independent of changes in brain DHA concentrations. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Wells, Ryan S.; Lynch, Cassie M.; Seifert, Tricia A.
2011-01-01
A number of studies over decades have examined determinants of educational expectations. However, even among the subset of quantitative studies, there is considerable variation in the methods used to operationally define and analyze expectations. Using a systematic literature review and several regression methods to analyze Latino students'…
Weisgerber, Michael; Danduran, Michael; Meurer, John; Hartmann, Kathryn; Berger, Stuart; Flores, Glenn
2009-07-01
To evaluate the Cooper 12-minute run/walk test (CT12) as a one-time estimate of cardiorespiratory fitness and marker of fitness change compared with treadmill fitness testing in young children with persistent asthma. A cohort of urban children with asthma participated in the asthma and exercise program and a subset completed pre- and postintervention fitness testing. Treadmill fitness testing was conducted by an exercise physiologist in the fitness laboratory at an academic children's hospital. CT12 was conducted in a college recreation center gymnasium. Forty-five urban children with persistent asthma aged 7 to 14 years participated in exercise interventions. A subset of 19 children completed pre- and postintervention exercise testing. Participants completed a 9-week exercise program where they participated in either swimming or golf 3 days a week for 1 hour. A subset of participants completed fitness testing by 2 methods before and after program completion. CT12 results (meters), maximal oxygen consumption (VO2max) (mL x kg(-1) x min(-1)), and treadmill exercise time (minutes). CT12 and maximal oxygen consumption were moderately correlated (preintervention: 0.55, P = 0.003; postintervention: 0.48, P = 0.04) as one-time measures of fitness. Correlations of the tests as markers of change over time were poor and nonsignificant. In children with asthma, CT12 is a reasonable one-time estimate of fitness but a poor marker of fitness change over time.
Mapping tropical rainforest canopies using multi-temporal spaceborne imaging spectroscopy
NASA Astrophysics Data System (ADS)
Somers, Ben; Asner, Gregory P.
2013-10-01
The use of imaging spectroscopy for floristic mapping of forests is complicated by the spectral similarity among coexisting species. Here we evaluated an alternative spectral unmixing strategy combining a time series of EO-1 Hyperion images with an automated feature selection strategy in MESMA. Instead of using the same spectral subset to unmix each image pixel, our modified approach allowed the spectral subsets to vary on a per pixel basis such that each pixel is evaluated using a spectral subset tuned towards maximal separability of its specific endmember class combination or species mixture. The potential of the new approach for floristic mapping of tree species in Hawaiian rainforests was quantitatively demonstrated using both simulated and actual hyperspectral image time-series. With a Cohen's Kappa coefficient of 0.65, our approach provided a more accurate tree species map compared to MESMA (Kappa = 0.54). In addition, through the selection of spectral subsets, our approach was about 90% faster than MESMA. The flexible or adaptive use of band sets in spectral unmixing as such provides an interesting avenue to address spectral similarities in complex vegetation canopies.
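An illustrative sketch of the per-pixel idea follows: instead of unmixing every pixel with one fixed band subset, each candidate endmember combination is unmixed with the band subset tuned for that combination, and the best-fitting model is kept. The endmember matrices, band subsets and the unconstrained least-squares solver are placeholders, not the study's MESMA implementation.

    import numpy as np

    def unmix_pixel(pixel, endmember_sets, band_subsets):
        # endmember_sets[i]: (n_bands, n_members) matrix for one class combination
        # band_subsets[i]:   band indices tuned for maximal separability of that combination
        best = None
        for E, bands in zip(endmember_sets, band_subsets):
            Eb, pb = E[bands, :], pixel[bands]
            f, *_ = np.linalg.lstsq(Eb, pb, rcond=None)     # abundance estimates
            rmse = np.sqrt(np.mean((Eb @ f - pb) ** 2))     # model fit on the selected bands
            if best is None or rmse < best[0]:
                best = (rmse, f)
        return best                                          # (rmse, abundances) of best model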
Forecasting continuously increasing life expectancy: what implications?
Le Bourg, Eric
2012-04-01
It has been proposed that life expectancy could linearly increase in the next decades and that median longevity of the youngest birth cohorts could reach 105 years or more. These forecasts have been criticized but it seems that their implications for future maximal lifespan (i.e. the lifespan of the last survivors) have not been considered. These implications make these forecasts untenable and it is less risky to hypothesize that life expectancy and maximal lifespan will reach an asymptotic limit in some decades from now. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
Monochromatic-beam-based dynamic X-ray microtomography based on OSEM-TV algorithm.
Xu, Liang; Chen, Rongchang; Yang, Yiming; Deng, Biao; Du, Guohao; Xie, Honglan; Xiao, Tiqiao
2017-01-01
Monochromatic-beam-based dynamic X-ray computed microtomography (CT) was developed to observe the evolution of microstructure inside samples. However, the low flux density results in low data-collection efficiency. Reducing the number of projections is a practical way to increase efficiency, but it degrades image quality when the traditional filtered back projection (FBP) algorithm is used. In this study, an iterative reconstruction method using an ordered subset expectation maximization-total variation (OSEM-TV) algorithm was employed to address this problem. The simulated results demonstrated that the normalized mean square error of the image slices reconstructed by the OSEM-TV algorithm was about 1/4 of that obtained with FBP. Experimental results also demonstrated that the density resolution of OSEM-TV was high enough to resolve different materials with fewer than 100 projections. As a result, with the introduction of OSEM-TV, monochromatic-beam-based dynamic X-ray microtomography is potentially practicable for quantitative and non-destructive analysis of the evolution of microstructure with acceptable data-collection efficiency and reconstructed image quality.
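A minimal sketch of the OSEM-TV idea, alternating an OSEM subset pass with a crude total-variation smoothing step, is given below. The 1-D subgradient TV step, step size and the dense system matrix are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def tv_step(x, lam=0.05, n=10):
        # a few subgradient-descent steps on the 1-D total variation of x
        for _ in range(n):
            g = np.sign(np.diff(x))
            div = np.concatenate(([g[0]], np.diff(g), [-g[-1]]))
            x = x + lam * div
        return np.clip(x, 0, None)

    def osem_tv(A, y, n_subsets=4, n_iter=10, eps=1e-12):
        subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            for idx in subsets:                           # OSEM pass over the subsets
                As = A[idx]
                x *= (As.T @ (y[idx] / np.maximum(As @ x, eps))) \
                     / np.maximum(As.sum(axis=0), eps)
            x = tv_step(x)                                # TV regularization step
        return x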
NASA Astrophysics Data System (ADS)
Hutton, Brian F.; Lau, Yiu H.
1998-06-01
Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for … data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset sizes 2 and 4, with/without 3D compensation for detector response (CDR). Also, post-reconstruction filtering (PRF) was performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-…). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with cutoff frequency greater than … improved noise with no reduction in recovery coefficient for the myocardium, but the effect was less when CDR was incorporated in the reconstruction. CDR alone provided better results than use of PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
Bi, Xia-An; Zhao, Junxia
2017-01-01
As network bandwidth grows, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing approaches, algorithms based on hierarchical tries have become an important branch of packet classification research because of their wide practical use. Although the hierarchical trie saves substantial storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). First, the packet classification problem is formalized by mapping the rules and data packets into a two-dimensional space. Second, the expectation-maximization algorithm is used to cluster the rules based on their aggregate characteristics, forming diversified clusters. Third, a hierarchical trie is built from the results of the expectation-maximization clustering. Finally, simulation and real-environment experiments are conducted to compare the performance of the algorithm with other typical algorithms, and the results are analyzed. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also addresses the low efficiency of trie updates, which greatly improves the performance of the algorithm.
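As a stand-in for the clustering step described above, the snippet below groups rules mapped to points in a two-dimensional space with an EM-fitted Gaussian mixture; the rule-to-point mapping, the toy data and the number of clusters are placeholders, not the HTEMC specifics.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    rule_points = rng.uniform(0, 2**16, size=(1000, 2))       # toy (src, dst) rule coordinates
    gmm = GaussianMixture(n_components=4, random_state=0).fit(rule_points)
    cluster_of_rule = gmm.predict(rule_points)                 # each cluster feeds one sub-trie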
Volume versus value maximization illustrated for Douglas-fir with thinning
Kurt H. Riitters; J. Douglas Brodie; Chiang Kao
1982-01-01
Economic and physical criteria for selecting even-aged rotation lengths are reviewed with examples of their optimizations. To demonstrate the trade-off between physical volume, economic return, and stand diameter, examples of thinning regimes for maximizing volume, forest rent, and soil expectation are compared with an example of maximizing volume without thinning. The...
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis (PCA) has traditionally been used as a feature extraction technique in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. This research therefore presents an alternative approach that utilizes an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in reduced complexity of these stages. To improve computation time, a novel parallel architecture was employed to exploit parallelized matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared with traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of more than nine and three times over PCA and parallel PCA, respectively.
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
Squeezing of magnetic flux in nanorings.
Dajka, J; Ptok, A; Luczka, J
2012-12-12
We study superconducting and non-superconducting nanorings and look for non-classical features of magnetic flux passing through nanorings. We show that the magnetic flux can exhibit purely quantum properties in some peculiar states with quadrature squeezing. We identify a subset of Gazeau-Klauder states in which the magnetic flux can be squeezed and, within tailored parameter regimes, quantum fluctuations of the magnetic flux can be maximally reduced.
Michael R. Vanderberg; Kevin Boston; John Bailey
2011-01-01
Accounting for the probability of loss due to disturbance events can influence the prediction of carbon flux over a planning horizon, and can affect the determination of optimal silvicultural regimes to maximize terrestrial carbon storage. A preliminary model that includes forest disturbance-related carbon loss was developed to maximize expected values of carbon stocks...
Quantum coherence generating power, maximally abelian subalgebras, and Grassmannian geometry
NASA Astrophysics Data System (ADS)
Zanardi, Paolo; Campos Venuti, Lorenzo
2018-01-01
We establish a direct connection between the power of a unitary map in d-dimensions (d < ∞) to generate quantum coherence and the geometry of the set Md of maximally abelian subalgebras (of the quantum system full operator algebra). This set can be seen as a topologically non-trivial subset of the Grassmannian over linear operators. The natural distance over the Grassmannian induces a metric structure on Md, which quantifies the lack of commutativity between the pairs of subalgebras. Given a maximally abelian subalgebra, one can define, on physical grounds, an associated measure of quantum coherence. We show that the average quantum coherence generated by a unitary map acting on a uniform ensemble of quantum states in the algebra (the so-called coherence generating power of the map) is proportional to the distance between a pair of maximally abelian subalgebras in Md connected by the unitary transformation itself. By embedding the Grassmannian into a projective space, one can pull-back the standard Fubini-Study metric on Md and define in this way novel geometrical measures of quantum coherence generating power. We also briefly discuss the associated differential metric structures.
On Use of Multi-Chambered Fission Detectors for In-Core, Neutron Spectroscopy
NASA Astrophysics Data System (ADS)
Roberts, Jeremy A.
2018-01-01
Presented is a short computational study on the potential use of multichambered fission detectors for in-core neutron spectroscopy. Motivated by the development of very small fission chambers at CEA in France and at Kansas State University in the U.S., it was assumed in this preliminary analysis that devices can be made small enough to avoid flux perturbations and that uncertainties related to measurements can be ignored. It was hypothesized that a sufficient number of chambers with unique reactants can act as a real-time foil-activation experiment. An unfolding scheme based on maximizing (Shannon) entropy was used to produce a flux spectrum from detector signals that requires no prior information. To test the method, integral detector responses were generated for single-isotope detectors of various Th, U, Np, Pu, Am, and Cs isotopes using a simplified pressurized-water reactor spectrum and flux-weighted, microscopic fission cross sections in the WIMS-69 multigroup format. An unfolded spectrum was found from subsets of these responses that had maximum entropy while reproducing the responses considered and summing to one (that is, they were normalized). Several nuclide subsets were studied, and, as expected, the results indicate that inclusion of more nuclides leads to better spectra but with diminishing improvements, with the best-case spectrum having an average relative group-wise error of approximately 51%. Furthermore, spectra found from minimum-norm and Tikhonov-regularization inversion were of lower quality than the maximum entropy solutions. Finally, the addition of thermal-neutron filters (here, Cd and Gd) provided substantial improvement over unshielded responses alone. The results, as a whole, suggest that in-core neutron spectroscopy is at least marginally feasible.
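A rough sketch of the unfolding idea follows: among normalized group fluxes that reproduce the measured detector responses, pick the one with maximum Shannon entropy. The response matrix R, measured responses c and the generic constrained optimizer are placeholders, not the study's solver.

    import numpy as np
    from scipy.optimize import minimize

    def unfold_max_entropy(R, c):
        # R: (n_detectors, n_groups) flux-weighted response matrix; c: measured responses
        n = R.shape[1]

        def neg_entropy(phi):
            p = np.clip(phi, 1e-12, None)
            return np.sum(p * np.log(p))                 # minimize negative Shannon entropy

        cons = [{"type": "eq", "fun": lambda phi: R @ phi - c},   # reproduce the responses
                {"type": "eq", "fun": lambda phi: phi.sum() - 1.0}]  # normalization
        x0 = np.full(n, 1.0 / n)
        res = minimize(neg_entropy, x0, constraints=cons, bounds=[(0, 1)] * n)
        return res.x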
Effect of Using 2 mm Voxels on Observer Performance for PET Lesion Detection
NASA Astrophysics Data System (ADS)
Morey, A. M.; Noo, Frédéric; Kadrmas, Dan J.
2016-06-01
Positron emission tomography (PET) images are typically reconstructed with an in-plane pixel size of approximately 4 mm for cancer imaging. The objective of this work was to evaluate the effect of using smaller pixels on general oncologic lesion-detection. A series of observer studies was performed using experimental phantom data from the Utah PET Lesion Detection Database, which modeled whole-body FDG PET cancer imaging of a 92 kg patient. The data comprised 24 scans over 4 days on a Biograph mCT time-of-flight (TOF) PET/CT scanner, with up to 23 lesions (diam. 6-16 mm) distributed throughout the phantom each day. Images were reconstructed with 2.036 mm and 4.073 mm pixels using ordered-subsets expectation-maximization (OSEM) both with and without point spread function (PSF) modeling and TOF. Detection performance was assessed using the channelized non-prewhitened numerical observer with localization receiver operating characteristic (LROC) analysis. Tumor localization performance and the area under the LROC curve were then analyzed as functions of the pixel size. In all cases, the images with 2 mm pixels provided higher detection performance than those with 4 mm pixels. The degree of improvement from the smaller pixels was larger than that offered by PSF modeling for these data, and provided roughly half the benefit of using TOF. Key results were confirmed by two human observers, who read subsets of the test data. This study suggests that a significant improvement in tumor detection performance for PET can be attained by using smaller voxel sizes than commonly used at many centers. The primary drawback is a 4-fold increase in reconstruction time and data storage requirements.
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.
Very Slow Search and Reach: Failure to Maximize Expected Gain in an Eye-Hand Coordination Task
Zhang, Hang; Morvan, Camille; Etezad-Heydari, Louis-Alexandre; Maloney, Laurence T.
2012-01-01
We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt. PMID:23071430
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
Attenuation correction strategies for multi-energy photon emitters using SPECT
NASA Astrophysics Data System (ADS)
Pretorius, P. H.; King, M. A.; Pan, T.-S.; Hutton, B. F.
1997-06-01
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation maximization (ML-OS) reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: 1) the 93 keV attenuation map for attenuation correction, 2) the 185 keV attenuation map for attenuation correction, 3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and 4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 (in proximity to the liver, spleen and backbone) were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately. They are recommended for multi-energy photon SPECT imaging quantitation when there is a need to combine the acquisitions of multiple windows.
Optimization of oncological (18)F-FDG PET/CT imaging based on a multiparameter analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.
2016-02-15
Purpose: This paper describes a method to achieve consistent clinical image quality in (18)F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
Assessment of prostate cancer detection with a visual-search human model observer
NASA Astrophysics Data System (ADS)
Sen, Anando; Kalantari, Faraz; Gifford, Howard C.
2014-03-01
Early staging of prostate cancer (PC) is a significant challenge, in part because of the small tumor sizes involved. Our long-term goal is to determine realistic diagnostic task performance benchmarks for standard PC imaging with single photon emission computed tomography (SPECT). This paper reports on a localization receiver operating characteristic (LROC) validation study comparing human and model observers. The study made use of a digital anthropomorphic phantom and one-cm tumors within the prostate and pelvic lymph nodes. Uptake values were consistent with data obtained from clinical In-111 ProstaScint scans. The SPECT simulation modeled a parallel-hole imaging geometry with medium-energy collimators. Nonuniform attenuation and distance-dependent detector response were accounted for both in the imaging and the ordered-subset expectation-maximization (OSEM) iterative reconstruction. The observer study made use of 2D slices extracted from reconstructed volumes. All observers were informed about the prostate and nodal locations in an image. Iteration number and the level of postreconstruction smoothing were study parameters. The results show that a visual-search (VS) model observer correlates better with the average detection performance of human observers than does a scanning channelized nonprewhitening (CNPW) model observer.
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
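The following is a minimal CPU sketch of the list-mode EM update that the paper accelerates on the GPU; it is not the authors' kernel. The event representation, the line-projection weights and the sensitivity image are illustrative assumptions.

```python
import numpy as np

def listmode_em_update(image, event_rows, sensitivity):
    """One list-mode EM update (CPU sketch, not the paper's GPU kernel).

    image       : current estimate, flat array of J voxels
    event_rows  : list of (voxel_indices, weights) pairs, one per recorded event;
                  the weights stand in for the line-projection (and resolution
                  kernel) coefficients a_ij along each line of response
    sensitivity : per-voxel sum of a_ij over all possible events (toy values below)
    """
    backproj = np.zeros_like(image)
    for idx, w in event_rows:
        forward = np.dot(w, image[idx])      # forward projection along one LOR
        if forward > 0:
            backproj[idx] += w / forward     # back-project the ratio 1 / forward
    return image * backproj / np.maximum(sensitivity, 1e-12)

# Tiny example: 4 voxels, 3 recorded events.
img = np.ones(4)
events = [(np.array([0, 1]), np.array([1.0, 0.5])),
          (np.array([1, 2]), np.array([0.7, 0.7])),
          (np.array([2, 3]), np.array([0.5, 1.0]))]
sens = np.array([1.0, 1.2, 1.2, 1.0])
print(listmode_em_update(img, events, sens))
```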
Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.
Bexelius, Tobias; Sohlberg, Antti
2018-06-01
Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between the GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was 24 times faster than the multi-threaded CPU version on a standard 128 × 128 matrix, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an everyday clinical reconstruction tool.
Maximizing the Spread of Influence via Generalized Degree Discount.
Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun
2016-01-01
It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has served as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all nodes identically, without any differences. To consider a general situation in real-world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of the original method. In our method, the status of a node is defined as the probability of not being influenced by any of its neighbors, and an index, the generalized discounted degree of a node, is presented to measure the expected number of nodes it can influence. Spreaders are then selected sequentially according to their generalized discounted degree in the current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than those identified by several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods.
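For orientation, the sketch below implements the DegreeDiscount benchmark heuristic that the abstract extends; the generalized discounted degree itself is not specified in the abstract and is not reproduced here. The discount formula and the propagation probability p follow the commonly cited form of DegreeDiscount and should be treated as assumptions.

```python
def degree_discount(adj, k, p=0.01):
    """DegreeDiscount heuristic (the benchmark the abstract extends).

    adj : dict mapping each node to a set of neighbours
    k   : number of spreaders to select
    p   : propagation probability of the independent cascade model
    The discount d_v - 2*t_v - (d_v - t_v)*t_v*p is the standard form;
    the generalized version proposed in the paper refines it.
    """
    degree = {v: len(nb) for v, nb in adj.items()}
    dd = dict(degree)                  # discounted degree
    t = {v: 0 for v in adj}            # number of already-selected neighbours
    seeds = []
    for _ in range(k):
        u = max((v for v in adj if v not in seeds), key=lambda v: dd[v])
        seeds.append(u)
        for v in adj[u]:
            if v not in seeds:
                t[v] += 1
                dd[v] = degree[v] - 2 * t[v] - (degree[v] - t[v]) * t[v] * p
    return seeds

# Toy graph: a hub (node 0) plus a small clique (4, 5, 6).
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0},
       4: {5, 6}, 5: {4, 6}, 6: {4, 5}}
print(degree_discount(adj, k=2))       # e.g. [0, 4]
```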
Variance-reduction normalization technique for a compton camera system
NASA Astrophysics Data System (ADS)
Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.
2011-01-01
For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of detection in a Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiencies of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. A variance-reduction technique (VRT) was studied as a normalization method for the Compton camera. For the VRT, Compton list-mode data of a planar uniform 140 keV source were generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered subset expectation maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery of uniformity in the reconstructed image.
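A minimal sketch of the normalization step described above: each detected position pair is divided by a normalization coefficient expressed as a product of component factors. The specific factorisation (detector efficiencies times a geometric factor) and the toy values are assumptions for illustration only.

```python
import numpy as np

def normalization_coefficients(eff_scatter, eff_absorber, geom):
    """Normalization coefficient per detected position pair, written as a
    product of component factors (detector efficiencies and geometry).
    The factorisation used here is purely illustrative.
    """
    # Outer product: one factor per scatterer pixel i and absorber pixel j,
    # scaled by a geometric factor for the pair (i, j).
    return np.outer(eff_scatter, eff_absorber) * geom

def normalize(projections, norm_coeff, eps=1e-9):
    """Pre-correct measured pair counts before reconstruction."""
    return projections / np.maximum(norm_coeff, eps)

eff_s = np.array([0.9, 1.0, 1.1])     # scatterer detection efficiencies (toy)
eff_a = np.array([1.05, 0.95])        # absorber detection efficiencies (toy)
geom = np.ones((3, 2))                # flat geometric factor for the toy case
counts = np.random.poisson(100, size=(3, 2)).astype(float)
print(normalize(counts, normalization_coefficients(eff_s, eff_a, geom)))
```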
Twistor Geometry of Null Foliations in Complex Euclidean Space
NASA Astrophysics Data System (ADS)
Taghavi-Chabert, Arman
2017-01-01
We give a detailed account of the geometric correspondence between a smooth complex projective quadric hypersurface Q^n of dimension n ≥ 3, and its twistor space PT, defined to be the space of all linear subspaces of maximal dimension of Q^n. Viewing complex Euclidean space CE^n as a dense open subset of Q^n, we show how local foliations tangent to certain integrable holomorphic totally null distributions of maximal rank on CE^n can be constructed in terms of complex submanifolds of PT. The construction is illustrated by means of two examples, one involving conformal Killing spinors, the other, conformal Killing-Yano 2-forms. We focus on the odd-dimensional case, and we treat the even-dimensional case only tangentially for comparison.
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model, assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of the cure rate. Finally, we analyze a well-known melanoma dataset with the model and the inferential method developed here.
Husak, Jerry F; Fox, Stanley F
2006-09-01
To understand how selection acts on performance capacity, the ecological role of the performance trait being measured must be determined. Knowing if and when an animal uses maximal performance capacity may give insight into what specific selective pressures may be acting on performance, because individuals are expected to use close to maximal capacity only in contexts important to survival or reproductive success. Furthermore, if an ecological context is important, poor performers are expected to compensate behaviorally. To understand the relative roles of natural and sexual selection on maximal sprint speed capacity we measured maximal sprint speed of collared lizards (Crotaphytus collaris) in the laboratory and field-realized sprint speed for the same individuals in three different contexts (foraging, escaping a predator, and responding to a rival intruder). Females used closer to maximal speed while escaping predators than in the other contexts. Adult males, on the other hand, used closer to maximal speed while responding to an unfamiliar male intruder tethered within their territory. Sprint speeds during foraging attempts were far below maximal capacity for all lizards. Yearlings appeared to compensate for having lower absolute maximal capacity by using a greater percentage of their maximal capacity while foraging and escaping predators than did adults of either sex. We also found evidence for compensation within age and sex classes, where slower individuals used a greater percentage of their maximal capacity than faster individuals. However, this was true only while foraging and escaping predators and not while responding to a rival. Collared lizards appeared to choose microhabitats near refugia such that maximal speed was not necessary to escape predators. Although natural selection for predator avoidance cannot be ruled out as a selective force acting on locomotor performance in collared lizards, intrasexual selection for territory maintenance may be more important for territorial males.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Michael G.
The project seeks to investigate the mechanism by which CBMs potentiate the activity of glycoside hydrolases against complete plant cell walls. The project is based on the hypothesis that the wide range of CBMs present in bacterial enzymes maximizes the range of potential target substrates by directing the cognate enzymes to different regions of a specific plant cell wall, and also increases the range of plant cell walls that can be degraded. In addition to maximizing substrate access, it was also proposed that CBMs can target specific subsets of hydrolases with complementary activities to the same region of the plant cell wall, thereby maximizing the synergistic interactions between these enzymes. This synergy is based on the premise that the hydrolysis of a specific polysaccharide will increase the access of closely associated polymers to enzyme attack. In addition, it is unclear whether the catalytic module and appended CBM of modular enzymes have evolved unique complementary activities.
Optimized 3D stitching algorithm for whole body SPECT based on transition error minimization (TEM)
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-02-01
Standard single photon emission computed tomography (SPECT) has a limited field of view (FOV) and cannot image the whole body in a single 3D acquisition. To produce a 3D whole body SPECT image, two to five overlapping SPECT FOVs from head to foot are acquired and assembled by image stitching. Most commercial software from medical imaging manufacturers applies a direct mid-slice stitching method to avoid the blurring or ghosting that 3D image blending can introduce. Because of intensity changes across the middle slice of overlapped images, direct mid-slice stitching often produces visible seams in the coronal and sagittal views and in the maximum intensity projection (MIP). In this study, we proposed an optimized algorithm to reduce the visibility of stitching edges. The new algorithm computes, based on transition error minimization (TEM), a 3D stitching interface between two overlapped 3D SPECT images. To test the suggested algorithm, four studies of 2-FOV whole body SPECT were used, covering two different reconstruction methods (filtered back projection (FBP) and ordered subset expectation maximization (OSEM)) and two different radiopharmaceuticals (Tc-99m MDP for bone metastases and I-131 MIBG for neuroblastoma tumors). Relative transition errors of stitched whole body SPECT using mid-slice stitching and the TEM-based algorithm were measured for objective evaluation. Preliminary experiments showed that the new algorithm reduced the visibility of the stitching interface in the coronal, sagittal, and MIP views. Average relative transition errors were reduced from 56.7% with mid-slice stitching to 11.7% with TEM-based stitching. The proposed algorithm also avoids blurring artifacts by preserving the noise properties of the original SPECT images.
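A simplified 2D sketch of the idea behind TEM-based stitching is given below: instead of always cutting at the mid-slice, each column is cut at the overlap row where the two acquisitions disagree least. The real method optimizes a full 3D interface; the array sizes and noise model here are illustrative.

```python
import numpy as np

def tem_stitch_2d(top, bottom, overlap):
    """Simplified 2D sketch of transition-error-minimized stitching.

    top, bottom : 2D arrays (rows x columns); the last `overlap` rows of `top`
                  image the same slab as the first `overlap` rows of `bottom`.
    For each column, pick the stitch row inside the overlap where the two
    acquisitions disagree the least, instead of cutting at the mid-slice.
    """
    a = top[-overlap:, :]
    b = bottom[:overlap, :]
    err = np.abs(a - b)                      # transition error per overlap voxel
    cut = np.argmin(err, axis=0)             # best stitch row per column
    rows = top.shape[0] - overlap + bottom.shape[0]
    out = np.zeros((rows, top.shape[1]))
    for c in range(top.shape[1]):
        k = cut[c]
        out[:top.shape[0] - overlap + k, c] = top[:top.shape[0] - overlap + k, c]
        out[top.shape[0] - overlap + k:, c] = bottom[k:, c]
    return out

top = np.random.rand(40, 8)
bottom = np.vstack([top[-10:, :] + 0.05 * np.random.rand(10, 8),
                    np.random.rand(30, 8)])
print(tem_stitch_2d(top, bottom, overlap=10).shape)   # (70, 8)
```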
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-03-16
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
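As a concrete illustration of the game described above, the sketch below simulates alternating moves with both agents playing a simple greedy rule (take the largest remaining own item that still fits). The abstract does not spell out the two heuristics analyzed in the paper, so this rule, and the simplification that an agent passes when none of its items fit, are assumptions.

```python
def play_subset_sum_game(items_a, items_b, capacity):
    """Simulate the alternating-move game with both agents playing greedily.

    Greedy = take the largest remaining own item that still fits; an agent
    passes when nothing fits (a simplification of the rule in the abstract).
    The game ends when no remaining item of either agent fits.
    """
    remaining = {'A': sorted(items_a, reverse=True),
                 'B': sorted(items_b, reverse=True)}
    packed = {'A': 0, 'B': 0}
    turn = 'A'
    while any(w <= capacity for r in remaining.values() for w in r):
        choice = next((w for w in remaining[turn] if w <= capacity), None)
        if choice is not None:
            remaining[turn].remove(choice)
            packed[turn] += choice
            capacity -= choice
        turn = 'B' if turn == 'A' else 'A'
    return packed

print(play_subset_sum_game([7, 5, 3], [6, 4, 2], capacity=15))  # {'A': 7, 'B': 8}
```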
Confronting Diversity in the Community College Classroom: Six Maxims for Good Teaching.
ERIC Educational Resources Information Center
Gillett-Karam, Rosemary
1992-01-01
Emphasizes the leadership role of community college faculty in developing critical teaching strategies focusing attention on the needs of women and minorities. Describes six maxims of teaching excellence: engaging students' desire to learn, increasing opportunities, eliminating obstacles, empowering students through high expectations, offering…
Core Hunter 3: flexible core subset selection.
De Beukelaer, Herman; Davenport, Guy F; Fack, Veerle
2018-05-31
Core collections provide genebank curators and plant breeders a way to reduce the size of their collections and populations, while minimizing the impact on genetic diversity and allele frequency. Many methods have been proposed to generate core collections, often using distance metrics to quantify the similarity of two accessions, based on genetic marker data or phenotypic traits. Core Hunter is a multi-purpose core subset selection tool that uses local search algorithms to generate subsets relying on one or more metrics, including several distance metrics and allelic richness. In version 3 of Core Hunter (CH3) we have incorporated two new, improved methods for summarizing distances to quantify diversity or representativeness of the core collection. A comparison of CH3 and Core Hunter 2 (CH2) showed that these new metrics can be effectively optimized with less complex algorithms than those used in CH2. CH3 is more effective at maximizing the improved diversity metric than CH2, still ensures a high average and minimum distance, and is faster for large datasets. Using CH3, a simple stochastic hill-climber is able to find highly diverse core collections, and the more advanced parallel tempering algorithm further increases the quality of the core and further reduces variability across independent samples. We also evaluate the ability of CH3 to simultaneously maximize diversity, and either representativeness or allelic richness, and compare the results with those of the GDOpt and SimEli methods. CH3 can sample equally representative cores as GDOpt, which was specifically designed for this purpose, and is able to construct cores that are simultaneously more diverse, and either are more representative or have higher allelic richness, than those obtained by SimEli. In version 3, Core Hunter has been updated to include two new core subset selection metrics that construct cores for representativeness or diversity, with improved performance. It combines and outperforms the strengths of other methods, as it (simultaneously) optimizes a variety of metrics. In addition, CH3 is an improvement over CH2, with the option to use genetic marker data or phenotypic traits, or both, and improved speed. Core Hunter 3 is freely available at http://www.corehunter.org.
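The sketch below illustrates the general idea of core subset selection with a simple stochastic hill climber optimizing an entry-to-nearest-entry diversity score; it is not the Core Hunter 3 implementation. The distance matrix, objective and swap move are illustrative choices.

```python
import random
import numpy as np

def hill_climb_core(dist, core_size, iters=2000, seed=0):
    """Toy stochastic hill climber for core subset selection.

    dist      : full pairwise distance matrix between accessions
    core_size : desired number of accessions in the core
    Objective : average distance from each selected accession to its nearest
                selected neighbour (an entry-to-nearest-entry diversity score).
    """
    rng = random.Random(seed)
    n = dist.shape[0]

    def score(core):
        idx = np.array(sorted(core))
        sub = dist[np.ix_(idx, idx)].copy()
        np.fill_diagonal(sub, np.inf)
        return sub.min(axis=1).mean()

    core = set(rng.sample(range(n), core_size))
    best = score(core)
    for _ in range(iters):
        out = rng.choice(sorted(core))
        inn = rng.choice([v for v in range(n) if v not in core])
        cand = (core - {out}) | {inn}
        s = score(cand)
        if s >= best:                      # accept non-worsening swaps
            core, best = cand, s
    return sorted(core), best

rng = np.random.default_rng(1)
pts = rng.random((30, 5))                  # 30 accessions, 5 marker traits (toy)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(hill_climb_core(d, core_size=8))
```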
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups, under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The Alternating Expectation Conditional Maximization (AECM) algorithm provides a somewhat faster variation on EM and can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results of our simulation experiments show improved performance in terms of both the number of iterations and the computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
Redundant variables and Granger causality
NASA Astrophysics Data System (ADS)
Angelini, L.; de Tommaso, M.; Marinazzo, D.; Nitti, L.; Pellicoro, M.; Stramaglia, S.
2010-03-01
We discuss the use of multivariate Granger causality in the presence of redundant variables: the application of the standard analysis, in this case, leads to underestimation of causalities. Using the un-normalized version of the causality index, we quantitatively develop the notions of redundancy and synergy in the framework of causality and propose two approaches to group redundant variables: (i) for a given target, the remaining variables are grouped so as to maximize the total causality and (ii) the whole set of variables is partitioned to maximize the sum of the causalities between subsets. We show the application to a real neurological experiment, aiming at a deeper understanding of the physiological basis of abnormal neuronal oscillations in the migraine brain. The outcome of our approach reveals the change in the informational pattern due to repetitive transcranial magnetic stimulation.
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show great potential for the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With such an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Optimization is therefore needed to find the algorithm that best exploits the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulations, including the expected CdTe and electronics specifics.
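A small sketch of the image quality merit parameters mentioned above (bias, variance and MSE of a set of reconstructions against the true phantom) is given below. The voxel-wise definitions and the synthetic data are assumptions; the paper's exact conventions may differ.

```python
import numpy as np

def image_quality_metrics(true_img, recon_stack):
    """Bias, variance and MSE of reconstructions against the true phantom.

    recon_stack : array of shape (n_realisations, ...) of reconstructed images.
    Uses the usual voxel-wise decomposition MSE = bias^2 + variance, averaged
    over the image.
    """
    mean_img = recon_stack.mean(axis=0)
    bias = mean_img - true_img
    var = recon_stack.var(axis=0)
    mse = ((recon_stack - true_img) ** 2).mean(axis=0)
    return bias.mean(), var.mean(), mse.mean()

truth = np.zeros((32, 32)); truth[12:20, 12:20] = 1.0        # simple hot square
recons = truth + 0.1 * np.random.randn(50, 32, 32) + 0.02    # 50 noisy realisations
print(image_quality_metrics(truth, recons))
```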
INDEXABILITY AND OPTIMAL INDEX POLICIES FOR A CLASS OF REINITIALISING RESTLESS BANDITS.
Villar, Sofía S
2016-01-01
Motivated by a class of Partially Observable Markov Decision Processes with application in surveillance systems, in which a set of imperfectly observed state processes is to be inferred from a subset of available observations through a Bayesian approach, we formulate and analyze a special family of multi-armed restless bandit problems. We consider the problem of finding an optimal policy for observing the processes that maximizes the total expected net rewards over an infinite time horizon subject to the resource availability. From the Lagrangian relaxation of the original problem, an index policy can be derived, as long as the existence of the Whittle index is ensured. We demonstrate that such a class of reinitializing bandits, in which a project's state deteriorates while active and resets to its initial state when passive until its completion, possesses the structural property of indexability, and we further show how to compute the index in closed form. In general, the Whittle index rule for restless bandit problems does not achieve optimality. However, we show that the proposed Whittle index rule is optimal for the problem under study in the case of stochastically heterogeneous arms under the expected total criterion, and it is further recovered by a simple tractable rule referred to as the 1-limited Round Robin rule. Moreover, we illustrate the significant suboptimality of another widely used heuristic, the Myopic index rule, by computing its suboptimality gap in closed form. We present numerical studies which illustrate, for the more general instances, the performance advantages of the Whittle index rule over other simple heuristics.
Formation Control for the Maxim Mission.
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions, including the Micro-Arcsecond X-ray Imaging Mission (MAXIM) and the Stellar Imager, will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility of formation control for the MAXIM mission. The Stellar Imager mission requirements are on the same order as those for MAXIM. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; (2) the formation control architecture devised for such missions; (3) the design of the formation control laws to maintain very high precision relative positions; and (4) the levels of fuel usage required in the duration of these missions. Specific preliminary results are presented for two spacecraft within the MAXIM mission.
A linearization of quantum channels
NASA Astrophysics Data System (ADS)
Crowder, Tanner
2015-06-01
Because the quantum channels form a compact, convex set, we can express any quantum channel as a convex combination of extremal channels. We give a Euclidean representation for the channels whose inverses are also valid channels; these are a subset of the extreme points. They form a compact, connected Lie group, and we calculate its Lie algebra. Lastly, we calculate a maximal torus for the group and provide a constructive approach to decomposing any invertible channel into a product of elementary channels.
The Self in Decision Making and Decision Implementation.
ERIC Educational Resources Information Center
Beach, Lee Roy; Mitchell, Terence R.
Since the early 1950's the principal prescriptive model in the psychological study of decision making has been maximization of Subjective Expected Utility (SEU). This SEU maximization has come to be regarded as a description of how people go about making decisions. However, while observed decision processes sometimes resemble the SEU model,…
Brand, Samuel P C; Keeling, Matt J
2017-03-01
It is a long recognized fact that climatic variations, especially temperature, affect the life history of biting insects. This is particularly important when considering vector-borne diseases, especially in temperate regions where climatic fluctuations are large. In general, it has been found that most biological processes occur at a faster rate at higher temperatures, although not all processes change in the same manner. This differential response to temperature, often considered as a trade-off between onward transmission and vector life expectancy, leads to the total transmission potential of an infected vector being maximized at intermediate temperatures. Here we go beyond the concept of a static optimal temperature, and mathematically model how realistic temperature variation impacts transmission dynamics. We use bluetongue virus (BTV), under UK temperatures and transmitted by Culicoides midges, as a well-studied example where temperature fluctuations play a major role. We first consider an optimal temperature profile that maximizes transmission, and show that this is characterized by a warm day to maximize biting followed by cooler weather to maximize vector life expectancy. This understanding can then be related to recorded representative temperature patterns for England, the UK region which has experienced BTV cases, allowing us to infer historical transmissibility of BTV, as well as using forecasts of climate change to predict future transmissibility. Our results show that when BTV first invaded northern Europe in 2006 the cumulative transmission intensity was higher than any point in the last 50 years, although with climate change such high risks are the expected norm by 2050. Such predictions would indicate that regular BTV epizootics should be expected in the UK in the future. © 2017 The Author(s).
On the role of budget sufficiency, cost efficiency, and uncertainty in species management
van der Burg, Max Post; Bly, Bartholomew B.; Vercauteren, Tammy; Grand, James B.; Tyre, Andrew J.
2014-01-01
Many conservation planning frameworks rely on the assumption that one should prioritize locations for management actions based on the highest predicted conservation value (i.e., abundance, occupancy). This strategy may underperform relative to the expected outcome if one is working with a limited budget or the predicted responses are uncertain. Yet, cost and tolerance to uncertainty rarely become part of species management plans. We used field data and predictive models to simulate a decision problem involving western burrowing owls (Athene cunicularia hypugaea) using prairie dog colonies (Cynomys ludovicianus) in western Nebraska. We considered 2 species management strategies: one maximized abundance and the other maximized abundance in a cost-efficient way. We then used heuristic decision algorithms to compare the 2 strategies in terms of how well they met a hypothetical conservation objective. Finally, we performed an info-gap decision analysis to determine how these strategies performed under different budget constraints and uncertainty about owl response. Our results suggested that when budgets were sufficient to manage all sites, the maximizing strategy was optimal and suggested investing more in expensive actions. This pattern persisted for restricted budgets up to approximately 50% of the sufficient budget. Below this budget, the cost-efficient strategy was optimal and suggested investing in cheaper actions. When uncertainty in the expected responses was introduced, the strategy that maximized abundance remained robust under a sufficient budget. Reducing the budget induced a slight trade-off between expected performance and robustness, which suggested that the most robust strategy depended both on one's budget and tolerance to uncertainty. Our results suggest that wildlife managers should explicitly account for budget limitations and be realistic about their expected levels of performance.
Bois, John P; Geske, Jeffrey B; Foley, Thomas A; Ommen, Steve R; Pellikka, Patricia A
2017-02-15
Left ventricular (LV) wall thickness is a prognostic marker in hypertrophic cardiomyopathy (HC). LV wall thickness ≥30 mm (massive hypertrophy) is independently associated with sudden cardiac death. Presence of massive hypertrophy is used to guide decision making for cardiac defibrillator implantation. We sought to determine whether measurements of maximal LV wall thickness differ between cardiac magnetic resonance imaging (MRI) and transthoracic echocardiography (TTE). Consecutive patients were studied who had HC without previous septal ablation or myectomy and underwent both cardiac MRI and TTE at a single tertiary referral center. Reported maximal LV wall thickness was compared between the imaging techniques. Patients with ≥1 technique reporting massive hypertrophy received subset analysis. In total, 618 patients were evaluated from January 1, 2003, to December 21, 2012 (mean [SD] age, 53 [15] years; 381 men [62%]). In 75 patients (12%), reported maximal LV wall thickness was identical between MRI and TTE. Median difference in reported maximal LV wall thickness between the techniques was 3 mm (maximum difference, 17 mm). Of the 63 patients with ≥1 technique measuring maximal LV wall thickness ≥30 mm, 44 patients (70%) had discrepant classification regarding massive hypertrophy. MRI identified 52 patients (83%) with massive hypertrophy; TTE, 30 patients (48%). Although guidelines recommend MRI or TTE imaging to assess cardiac anatomy in HC, this study shows discrepancy between the techniques for maximal reported LV wall thickness assessment. In conclusion, because this measure clinically affects prognosis and therapeutic decision making, efforts to resolve these discrepancies are critical. Copyright © 2016 Elsevier Inc. All rights reserved.
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
The benefits of social influence in optimized cultural markets.
Abeliuk, Andrés; Berbeglia, Gerardo; Cebrian, Manuel; Van Hentenryck, Pascal
2015-01-01
Social influence has been shown to create significant unpredictability in cultural markets, providing one potential explanation why experts routinely fail at predicting commercial success of cultural products. As a result, social influence is often presented in a negative light. Here, we show the benefits of social influence for cultural markets. We present a policy that uses product quality, appeal, position bias and social influence to maximize expected profits in the market. Our computational experiments show that our profit-maximizing policy leverages social influence to produce significant performance benefits for the market, while our theoretical analysis proves that our policy outperforms in expectation any policy not displaying social signals. Our results contrast with earlier work which focused on showing the unpredictability and inequalities created by social influence. We show for the first time that, under our policy, dynamically showing consumers positive social signals increases the expected profit of the seller in cultural markets. We also show that, in reasonable settings, our profit-maximizing policy does not introduce significant unpredictability and identifies "blockbusters". Overall, these results shed new light on the nature of social influence and how it can be leveraged for the benefits of the market.
Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach
NASA Astrophysics Data System (ADS)
Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar
2013-06-01
We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
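The sketch below simulates a two-asset threshold rebalanced portfolio with proportional transaction costs, to make the mechanism concrete. The target fraction, threshold band, cost rate and synthetic log-normal price sequence are illustrative assumptions, not the paper's derivation of the optimal thresholds.

```python
import numpy as np

def threshold_rebalance(prices, target=0.5, eps=0.1, cost=0.002, wealth0=1.0):
    """Two-asset threshold rebalanced portfolio (illustrative simulation).

    Trade only when the fraction of wealth in asset 0 leaves the band
    [target - eps, target + eps]; a proportional cost is charged on the
    traded amount.  Parameter values are illustrative, not the paper's.
    """
    w = np.array([target, 1.0 - target]) * wealth0       # dollars in each asset
    for t in range(1, prices.shape[0]):
        w = w * prices[t] / prices[t - 1]                 # market move
        frac = w[0] / w.sum()
        if abs(frac - target) > eps:                      # breach -> rebalance
            total = w.sum()
            trade = abs(total * target - w[0])
            total -= cost * trade                         # proportional cost
            w = np.array([target, 1.0 - target]) * total
    return w.sum()

np.random.seed(0)
rel = np.exp(0.0005 + 0.01 * np.random.randn(250, 2))    # log-normal relative prices
prices = np.vstack([np.ones(2), np.cumprod(rel, axis=0)])
print(threshold_rebalance(prices))
```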
Matching Pupils and Teachers to Maximize Expected Outcomes.
ERIC Educational Resources Information Center
Ward, Joe H., Jr.; And Others
To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…
Kelly, Nichole R; Mazzeo, Suzanne E; Bean, Melanie K
2013-01-01
To clarify directions for research and practice, research literature evaluating nutrition and dietary interventions in college and university settings was reviewed. Systematic search of database literature. Postsecondary education. Fourteen research articles evaluating randomized controlled trials or quasi-experimental interventions targeting dietary outcomes. Diet/nutrition intake, knowledge, motivation, self-efficacy, barriers, intentions, social support, self-regulation, outcome expectations, and sales. Systematic search of 936 articles and review of 14 articles meeting search criteria. Some in-person interventions (n = 6) show promise in improving students' dietary behaviors, although changes were minimal. The inclusion of self-regulation components, including self-monitoring and goal setting, may maximize outcomes. Dietary outcomes from online interventions (n = 5) were less promising overall, although they may be more effective with a subset of college students early in their readiness to change their eating habits. Environmental approaches (n = 3) may increase the sale of healthy food by serving as visual cues-to-action. A number of intervention approaches show promise for improving college students' dietary habits. However, much of this research has methodological limitations, rendering it difficult to draw conclusions across studies and hindering dissemination efforts. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Optimal simultaneous superpositioning of multiple structures with missing data.
Theobald, Douglas L; Steindel, Phillip A
2012-08-01
Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Cheng, Xiaoyin; Bayer, Christine; Maftei, Constantin-Alin; Astner, Sabrina T.; Vaupel, Peter; Ziegler, Sibylle I.; Shi, Kuangyu
2014-01-01
Compared to indirect methods, direct parametric image reconstruction (PIR) has the advantage of high quality and low statistical errors. However, it is not yet clear if this improvement in quality is beneficial for physiological quantification. This study aimed to evaluate direct PIR for the quantification of tumor hypoxia using the hypoxic fraction (HF) assessed from immunohistological data as a physiological reference. Sixteen mice with xenografted human squamous cell carcinomas were scanned with dynamic [18F]FMISO PET. Afterward, tumors were sliced and stained with H&E and the hypoxia marker pimonidazole. The hypoxic signal was segmented using k-means clustering and HF was specified as the ratio of the hypoxic area over the viable tumor area. The parametric Patlak slope images were obtained by indirect voxel-wise modeling on reconstructed images using filtered back projection and ordered-subset expectation maximization (OSEM) and by direct PIR (e.g., parametric-OSEM, POSEM). The mean and maximum Patlak slopes of the tumor area were investigated and compared with HF. POSEM resulted in generally higher correlations between slope and HF among the investigated methods. A strategy for the delineation of the hypoxic tumor volume based on thresholding parametric images at half maximum of the slope is recommended based on the results of this study.
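For reference, the sketch below estimates a Patlak slope from a tissue time-activity curve and a plasma input function by linear regression over the late frames, which corresponds to the indirect (post-reconstruction) form of the analysis mentioned above. Frame times, the linearity start time and the synthetic curves are assumptions.

```python
import numpy as np

def patlak_slope(c_tissue, c_plasma, times, t_start=30.0):
    """Patlak slope (Ki) from a tissue TAC and an input function (toy sketch).

    x = integral of the plasma curve divided by plasma activity,
    y = tissue activity divided by plasma activity; the slope of the late,
    linear part of y(x) is the Patlak influx rate.  Times are in minutes.
    """
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1])
                                           * np.diff(times))])
    x = cum / c_plasma
    y = c_tissue / c_plasma
    late = times >= t_start
    slope, intercept = np.polyfit(x[late], y[late], 1)
    return slope

t = np.linspace(0.5, 60, 120)                      # frame mid-times in minutes
cp = 10.0 * np.exp(-0.1 * t) + 1.0                 # synthetic input function
ki_true = 0.015
integ = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = ki_true * integ + 0.3 * cp                    # synthetic tissue curve
print(patlak_slope(ct, cp, t))                     # ~0.015
```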
GPU-based prompt gamma ray imaging from boron neutron capture therapy.
Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae
2015-01-01
The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
NASA Astrophysics Data System (ADS)
Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.
2015-08-01
Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty, with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate the lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM, with a particular improvement in cold background regions such as the lungs.
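A commonly quoted form of the relative difference penalty is sketched below for two neighbouring voxel values; the scanner implementation evaluated in the paper may differ in neighbourhood weighting and in the global strength parameter. The gamma value and test numbers are illustrative.

```python
import numpy as np

def relative_difference_penalty(f_j, f_k, gamma=2.0):
    """Relative difference penalty between two neighbouring voxel values.

    A commonly quoted form is (f_j - f_k)^2 / (f_j + f_k + gamma*|f_j - f_k|),
    whose strength adapts to the local activity level; gamma controls edge
    preservation.  This is a sketch of the penalty term only, not the full
    penalized likelihood reconstruction.
    """
    diff = np.asarray(f_j, dtype=float) - np.asarray(f_k, dtype=float)
    denom = np.asarray(f_j, dtype=float) + np.asarray(f_k, dtype=float) + gamma * np.abs(diff)
    out = np.zeros_like(diff)
    np.divide(diff ** 2, denom, out=out, where=denom > 0)   # avoid division by zero
    return out

a = np.array([10.0, 10.0, 100.0])
b = np.array([12.0, 30.0, 140.0])
print(relative_difference_penalty(a, b))
```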
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron emission tomography (PET) projection data, or sinograms, have poor statistics and randomness that produce noisy PET images. In order to improve the PET image, we propose an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions and to preserve edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), ordered-subset maximum likelihood expectation maximization (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using a simulated NCAT phantom PET sinogram generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed visually as well as quantitatively based on global signal to noise ratio (SNR), local SNR, contrast to noise ratio (CNR) and edge preservation capability. Further analysis of the achieved improvement is also carried out specifically for the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering, in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
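One plausible reading of a mean-median hybrid is sketched below: flat sinogram regions receive the local mean while high-variation regions receive the local median. The switching rule, threshold and neighbourhood size are assumptions; the paper's filter is designed specifically around the sinogram's angular dimension.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mean_median_filter(sino, size=3, edge_thresh=0.2):
    """Simplified 3D mean-median sinogram filter (illustrative, not the paper's).

    Flat regions get the local mean (smoothing); regions with strong local
    variation get the local median (edge preservation).
    """
    mean = uniform_filter(sino, size=size)
    med = median_filter(sino, size=size)
    var = np.maximum(uniform_filter(sino ** 2, size=size) - mean ** 2, 0.0)
    edge = np.sqrt(var) > edge_thresh * (sino.std() + 1e-12)   # crude edge mask
    return np.where(edge, med, mean)

sino = np.random.poisson(20, size=(16, 64, 96)).astype(float)  # (planes, angles, bins)
print(mean_median_filter(sino).shape)
```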
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC), that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
McLeish, Kenneth R.; Uriarte, Silvia M.; Tandon, Shweta; Creed, Timothy M.; Le, Junyi; Ward, Richard A.
2013-01-01
This study tested the hypothesis that priming the neutrophil respiratory burst requires both granule exocytosis and activation of the prolyl isomerase, Pin1. Fusion proteins containing the TAT cell permeability sequence and either the SNARE domain of syntaxin-4 or the N-terminal SNARE domain of SNAP-23 were used to examine the role of granule subsets in TNF-mediated respiratory burst priming using human neutrophils. Concentration-inhibition curves for exocytosis of individual granule subsets and for priming of fMLF-stimulated superoxide release and phagocytosis-stimulated H2O2 production were generated. Maximal inhibition of priming ranged from 72% to 88%. Linear regression lines for inhibition of priming versus inhibition of exocytosis did not differ from the line of identity for secretory vesicles and gelatinase granules, while the slopes or the y-intercepts were different from the line of identity for specific and azurophilic granules. Inhibition of Pin1 reduced priming by 56%, while exocytosis of secretory vesicles and specific granules was not affected. These findings indicate that exocytosis of secretory vesicles and gelatinase granules and activation of Pin1 are independent events required for TNF-mediated priming of neutrophil respiratory burst. PMID:23363774
Ecological neighborhoods as a framework for umbrella species selection
Stuber, Erica F.; Fontaine, Joseph J.
2018-01-01
Umbrella species are typically chosen because they are expected to confer protection for other species assumed to have similar ecological requirements. Despite its popularity and substantial history, the value of the umbrella species concept has come into question because umbrella species chosen using heuristic methods, such as body or home range size, are not acting as adequate proxies for the metrics of interest: species richness or population abundance in a multi-species community for which protection is sought. How species associate with habitat across ecological scales has important implications for understanding population size and species richness, and therefore may be a better proxy for choosing an umbrella species. We determined the spatial scales of ecological neighborhoods important for predicting abundance of 8 potential umbrella species breeding in Nebraska using Bayesian latent indicator scale selection in N-mixture models accounting for imperfect detection. We compare the conservation value measured as collective avian abundance under different umbrella species selected following commonly used criteria and selected based on identifying spatial land cover characteristics within ecological neighborhoods that maximize collective abundance. Using traditional criteria to select an umbrella species resulted in sub-maximal expected collective abundance in 86% of cases compared to selecting an umbrella species based on land cover characteristics that maximized collective abundance directly. We conclude that directly assessing the expected quantitative outcomes, rather than ecological proxies, is likely the most efficient method to maximize the potential for conservation success under the umbrella species concept.
Weiser, Emily; Lanctot, Richard B.; Brown, Stephen C.; Alves, José A.; Battley, Phil F.; Bentzen, Rebecca L.; Bety, Joel; Bishop, Mary Anne; Boldenow, Megan; Bollache, Loic; Casler, Bruce; Christie, Maureen; Coleman, Jonathan T.; Conklin, Jesse R.; English, Willow B.; Gates, H. River; Gilg, Olivier; Giroux, Marie-Andree; Gosbell, Ken; Hassell, Chris J.; Helmericks, Jim; Johnson, Andrew; Katrinardottir, Borgny; Koivula, Kari; Kwon, Eunbi; Lamarre, Jean-Francois; Lang, Johannes; Lank, David B.; Lecomte, Nicolas; Liebezeit, Joseph R.; Loverti, Vanessa; McKinnon, Laura; Minton, Clive; Mizrahi, David S.; Nol, Erica; Pakanen, Veli-Matti; Perz, Johanna; Porter, Ron; Rausch, Jennie; Reneerkens, Jeroen; Ronka, Nelli; Saalfeld, Sarah T.; Senner, Nathan R.; Sittler, Benoit; Smith, Paul A.; Sowl, Kristine M.; Taylor, Audrey; Ward, David H.; Yezerinac, Stephen; Sandercock, Brett K.
2016-01-01
Negative effects of geolocators occurred only for three of the smallest species in our dataset, but were substantial when present. Future studies could mitigate impacts of tags by reducing protruding parts and minimizing use of additional markers. Investigators could maximize recovery of tags by strategically deploying geolocators on males, previously marked individuals, and successful breeders, though targeting subsets of a population could bias the resulting migratory movement data in some species.
Coverability graphs for a class of synchronously executed unbounded Petri net
NASA Technical Reports Server (NTRS)
Stotts, P. David; Pratt, Terrence W.
1990-01-01
After detailing a variant of the concurrent-execution rule for firing of maximal subsets, in which the simultaneous firing of conflicting transitions is prohibited, an algorithm is constructed for generating the coverability graph of a net executed under this synchronous firing rule. The omega insertion criteria in the algorithm are shown to be valid for any net on which the algorithm terminates. It is accordingly shown that the set of nets on which the algorithm terminates includes the 'conflict-free' class.
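For illustration, the following is a minimal sketch (not the authors' construction) of the maximal-subset firing rule described above: at each step, a candidate synchronous step is any maximal set of enabled transitions whose combined input demand the current marking can cover, so conflicting transitions are never fired together. The toy net and its encoding are assumptions made for the example.

```python
from itertools import combinations

# A tiny Petri net: the marking maps places to token counts; each transition
# has 'in' and 'out' arc-weight dicts.  The net and names are illustrative only.
marking = {"p1": 1, "p2": 1, "p3": 0}
transitions = {
    "t1": {"in": {"p1": 1}, "out": {"p3": 1}},
    "t2": {"in": {"p1": 1, "p2": 1}, "out": {"p3": 1}},
    "t3": {"in": {"p2": 1}, "out": {"p3": 1}},
}

def firable(subset, marking):
    """A subset is firable if its combined input demand is covered by the marking."""
    need = {}
    for t in subset:
        for p, w in transitions[t]["in"].items():
            need[p] = need.get(p, 0) + w
    return all(marking.get(p, 0) >= w for p, w in need.items())

def maximal_firable_subsets(marking):
    enabled = [t for t in transitions if firable([t], marking)]
    firable_sets = [set(s) for r in range(1, len(enabled) + 1)
                    for s in combinations(enabled, r) if firable(s, marking)]
    # keep only sets not strictly contained in another firable set
    return [s for s in firable_sets
            if not any(s < other for other in firable_sets)]

def fire(subset, marking):
    m = dict(marking)
    for t in subset:
        for p, w in transitions[t]["in"].items():
            m[p] -= w
        for p, w in transitions[t]["out"].items():
            m[p] = m.get(p, 0) + w
    return m

for s in maximal_firable_subsets(marking):
    print(sorted(s), "->", fire(s, marking))   # e.g. {t1, t3} and {t2} are maximal steps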
Selecting a Subset of Stimulus-Response Pairs with Maximal Transmitted Information
1992-03-01
System designers are often faced with the task of choosing which of several stimuli should be used to represent ...
Text Classification for Intelligent Portfolio Management
2002-05-01
Text classification methods studied in recent years include nearest neighbor classification [15], naive Bayes with EM (Expectation Maximization) [11][13], and Winnow with active learning [10]. In particular, active learning is used to actively select documents for labeling, then EM assigns ...
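The naive Bayes with EM approach mentioned in the snippet can be sketched as follows: labeled documents fix their class responsibilities, unlabeled documents start uniform, and EM alternates between re-estimating class priors and word probabilities and re-assigning soft labels to the unlabeled documents. This is a generic illustration with assumed inputs, not the report's implementation.

```python
import numpy as np

def nb_em(X_lab, y_lab, X_unlab, n_classes, n_iter=20, alpha=1.0):
    # X_lab / X_unlab: word-count matrices; y_lab: integer class labels
    X = np.vstack([X_lab, X_unlab])
    resp = np.zeros((X.shape[0], n_classes))
    resp[np.arange(len(y_lab)), y_lab] = 1.0             # labeled docs: fixed labels
    resp[len(y_lab):] = 1.0 / n_classes                  # unlabeled docs: uniform start
    for _ in range(n_iter):
        # M-step: class priors and per-class word probabilities (Laplace smoothed)
        prior = resp.sum(0) / resp.sum()
        word_counts = resp.T @ X + alpha
        word_prob = word_counts / word_counts.sum(1, keepdims=True)
        # E-step: posterior class responsibilities, updated for unlabeled docs only
        log_post = np.log(prior) + X @ np.log(word_prob).T
        log_post -= log_post.max(1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(1, keepdims=True)
        resp[len(y_lab):] = post[len(y_lab):]
    return prior, word_prob
```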
Borai, Anwar; Livingstone, Callum; Alsobhi, Enaam; Al Sofyani, Abeer; Balgoon, Dalal; Farzal, Anwar; Almohammadi, Mohammed; Al-Amri, Abdulafattah; Bahijri, Suhad; Alrowaili, Daad; Bassiuni, Wafaa; Saleh, Ayman; Alrowaili, Norah; Abdelaal, Mohamed
2017-04-01
Whole blood donation has immunomodulatory effects, and most of these have been observed at short intervals following blood donation. This study aimed to investigate the impact of whole blood donation on lymphocyte subsets over a typical inter-donation interval. Healthy male subjects were recruited to study changes in complete blood count (CBC) (n = 42) and lymphocyte subsets (n = 16) before and at four intervals up to 106 days following blood donation. Repeated measures ANOVA was used to compare quantitative variables between different visits. Following blood donation, changes in CBC and erythropoietin were as expected. The neutrophil count increased by 11.3% at 8 days (p < .001). Novel changes were observed in lymphocyte subsets as the CD4/CD8 ratio increased by 9.2% (p < .05) at 8 days and 13.7% (p < .05) at 22 days. CD16-56 cells decreased by 16.2% (p < .05) at 8 days. All the subsets had returned to baseline by 106 days. Regression analysis showed that the changes in CD16-56 cells and CD4/CD8 ratio were not significant (Wilks' lambda = 0.15 and 0.94, respectively) when adjusted for BMI. In conclusion, following whole blood donation, there are transient changes in lymphocyte subsets. The effect of BMI on lymphocyte subsets and the effect of this immunomodulation on the immune response merit further investigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, K; Hristov, D
2014-06-01
Purpose: To evaluate the potential impact of listmode-driven amplitude based optimal gating (OG) respiratory motion management technique on quantitative PET imaging. Methods: During the PET acquisitions, an optical camera tracked and recorded the motion of a tool placed on top of patients' torso. PET event data were utilized to detect and derive a motion signal that is directly coupled with a specific internal organ. A radioactivity-trace was generated from listmode data by accumulating all prompt counts in temporal bins matching the sampling rate of the external tracking device. Decay correction for 18F was performed. The image reconstructions using OG respiratory motion management technique that uses 35% of total radioactivity counts within limited motion amplitudes were performed with external motion and radioactivity traces separately with ordered subset expectation maximization (OSEM) with 2 iterations and 21 subsets. Standard uptake values (SUVs) in a tumor region were calculated to measure the effect of using radioactivity trace for motion compensation. Motion-blurred 3D static PET image was also reconstructed with all counts and the SUVs derived from OG images were compared with SUVs from 3D images. Results: A 5.7% increase of the maximum SUV in the lesion was found for optimal gating image reconstruction with radioactivity trace when compared to a static 3D image. The mean and maximum SUVs on the image that was reconstructed with radioactivity trace were found comparable (0.4% and 4.5% increase, respectively) to the values derived from the image that was reconstructed with external trace. Conclusion: The image reconstructed using radioactivity trace showed that the blurring due to the motion was reduced with impact on derived SUVs. The resolution and contrast of the images reconstructed with radioactivity trace were comparable to the resolution and contrast of the images reconstructed with external respiratory traces. Research supported by Siemens.
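For reference, a generic ordered subset expectation maximization (OSEM) update of the kind used above (2 iterations, 21 subsets) can be sketched as follows; the dense system matrix and the simple interleaved subset scheme are simplifying assumptions, not the scanner's reconstruction code.

```python
import numpy as np

def osem(A, y, n_iter=2, n_subsets=21, eps=1e-12):
    """A: system matrix (bins x voxels), y: measured sinogram counts."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                                   # initial image estimate
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]                                  # rows of this subset
            proj = As @ x + eps                          # forward projection
            ratio = y[idx] / proj                        # measured / estimated counts
            x *= (As.T @ ratio) / (As.sum(axis=0) + eps) # multiplicative MLEM update
    return x
```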
Replica analysis for the duality of the portfolio optimization problem
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-11-01
In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
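A small numerical sketch of the primal problem discussed above (investment risk minimization under budget and expected return constraints) is given below; the scenario data, the budget convention (weights summing to N), and the use of a generic solver are assumptions for illustration, and the replica analysis itself is analytic and not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 10
R = rng.normal(0.05, 0.02, size=(250, N))        # synthetic return scenarios
mu, Sigma = R.mean(0), np.cov(R.T)
target_return = 0.05

res = minimize(
    lambda w: w @ Sigma @ w,                     # investment risk
    x0=np.ones(N),
    constraints=[
        {"type": "eq", "fun": lambda w: w.sum() - N},                 # budget constraint
        {"type": "eq", "fun": lambda w: w @ mu - target_return * N},  # expected return constraint
    ],
)
print("minimal risk:", res.fun)
```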
Orellana, Liliana; Rotnitzky, Andrea; Robins, James M
2010-01-01
Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. Observational studies are well suited for the investigation of the effects of dynamic treatment regimes because of the variability in treatment decisions found in them. This variability exists because different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimate the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information. The regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, double-robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.
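The inverse probability weighting idea underlying these estimators can be illustrated with a toy, single-decision-point example: weight subjects whose observed treatment agrees with a candidate regime by the inverse of their treatment probability, then compare the weighted mean utility across regimes. This is only a sketch of the weighting principle, not the authors' dynamic regime marginal structural mean models or their double-robust estimators; the data-generating model and threshold regimes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
L = rng.normal(size=n)                               # baseline covariate
p_treat = 1 / (1 + np.exp(-L))                       # physicians treat sicker patients more often
A = rng.binomial(1, p_treat)                         # observed treatment
Y = 1.0 + 0.5 * L + 1.0 * A * (L > 0) + rng.normal(size=n)   # utility (benefit only if L > 0)

def ipw_value(theta):
    d = (L > theta).astype(int)                      # regime: treat if L > theta
    follows = (A == d)
    pA = np.where(A == 1, p_treat, 1 - p_treat)      # probability of the observed treatment
    w = follows / pA
    return np.sum(w * Y) / np.sum(w)                 # weighted (Hajek) value estimate

thetas = np.linspace(-2, 2, 41)
values = [ipw_value(t) for t in thetas]
print("estimated optimal threshold:", thetas[int(np.argmax(values))])   # should be near 0
```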
Precollege Predictors of Incapacitated Rape Among Female Students in Their First Year of College
Carey, Kate B.; Durney, Sarah E.; Shepardson, Robyn L.; Carey, Michael P.
2015-01-01
Objective: The first year of college is an important transitional period for young adults; it is also a period associated with elevated risk of incapacitated rape (IR) for female students. The goal of this study was to identify prospective risk factors associated with experiencing attempted or completed IR during the first year of college. Method: Using a prospective cohort design, we recruited 483 incoming first-year female students. Participants completed a baseline survey and three follow-up surveys over the next year. At baseline, we assessed precollege alcohol use, marijuana use, sexual behavior, and, for the subset of sexually experienced participants, sex-related alcohol expectancies. At the baseline and all follow-ups, we assessed sexual victimization. Results: Approximately 1 in 6 women (18%) reported IR before entering college, and 15% reported IR during their first year of college. In bivariate analyses, precollege IR history, precollege heavy episodic drinking, number of precollege sexual partners, and sex-related alcohol expectancies (enhancement and disinhibition) predicted first-year IR. In multivariate analyses with the entire sample, only precollege IR (odds ratio = 4.98, p < .001) remained a significant predictor. However, among the subset of sexually experienced participants, both enhancement expectancies and precollege IR predicted IR during the study year. Conclusions: IR during the first year of college is independently associated with a history of IR and with expectancies about alcohol’s enhancement of sexual experience. Alcohol expectancies are a modifiable risk factor that may be a promising target for prevention efforts. PMID:26562590
When Does Reward Maximization Lead to Matching Law?
Sakai, Yutaka; Fukai, Tomoki
2008-01-01
What kind of strategies subjects follow in various behavioral circumstances has been a central issue in decision making. In particular, which behavioral strategy, maximizing or matching, is more fundamental to animal's decision behavior has been a matter of debate. Here, we prove that any algorithm to achieve the stationary condition for maximizing the average reward should lead to matching when it ignores the dependence of the expected outcome on subject's past choices. We may term this strategy of partial reward maximization “matching strategy”. Then, this strategy is applied to the case where the subject's decision system updates the information for making a decision. Such information includes subject's past actions or sensory stimuli, and the internal storage of this information is often called “state variables”. We demonstrate that the matching strategy provides an easy way to maximize reward when combined with the exploration of the state variables that correctly represent the crucial information for reward maximization. Our results reveal for the first time how a strategy to achieve matching behavior is beneficial to reward maximization, achieving a novel insight into the relationship between maximizing and matching. PMID:19030101
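A toy simulation of the "matching strategy" can make the point concrete: on two concurrent schedules where a reward, once armed, waits until that side is chosen, an agent that shifts its allocation toward the side with the higher local return, while ignoring how its own choice history shapes those returns, settles where local returns are equal, which is exactly the matching relation between choice and reward ratios. The schedule parameters and update rule below are illustrative assumptions, not the paper's formal derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(p_arm=(0.05, 0.15), n_trials=200_000, lr=5e-4, ema=1e-3):
    armed = [False, False]
    local = np.array([0.5, 0.5])    # running estimate of reward-per-choice on each side
    choice_p = 0.5                  # probability of choosing side 0
    counts = np.zeros(2)
    rewards = np.zeros(2)
    for _ in range(n_trials):
        for i in (0, 1):
            armed[i] = armed[i] or (rng.random() < p_arm[i])   # reward arms and waits
        a = 0 if rng.random() < choice_p else 1
        r = 1.0 if armed[a] else 0.0
        armed[a] = False
        counts[a] += 1
        rewards[a] += r
        local[a] += ema * (r - local[a])                       # local return estimate
        # "matching strategy": shift allocation toward the higher local return,
        # ignoring that the allocation itself changes each side's expected payoff
        choice_p = float(np.clip(choice_p + lr * (local[0] - local[1]), 0.01, 0.99))
    return counts, rewards

counts, rewards = simulate()
print("choice ratio:", counts[0] / counts[1])
print("reward ratio:", rewards[0] / rewards[1])   # matching predicts these two ratios coincide
```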
Maintenance Downtime October 17 - 23, 2014
Atmospheric Science Data Center
2014-10-23
Impact: The ASDC will be conducting extended system maintenance Fri 10/17 @ 4pm - Thu 10/23 @ 4pm EDT. Please expect: ... and Customization Tool - AMAPS, CALIPSO, CERES, MOPITT, TES and TAD Search and Subset Tools. All systems will be ...
Work Placement in UK Undergraduate Programmes. Student Expectations and Experiences.
ERIC Educational Resources Information Center
Leslie, David; Richardson, Anne
1999-01-01
A survey of 189 pre- and 106 post-sandwich work-experience students in tourism suggested that potential benefits were not being maximized. Students needed better preparation for the work experience, especially in terms of their expectations. The work experience needed better design, and the role of industry tutors needed clarification. (SK)
Career Preference among Universities' Faculty: Literature Review
ERIC Educational Resources Information Center
Alenzi, Faris Q.; Salem, Mohamed L.
2007-01-01
Why do people enter academic life? What are their expectations? How can they maximize their experience and achievements, both short- and long-term? How much should they move towards commercialization? What can they do to improve their career? How much autonomy can they reasonably expect? What are the key issues for academics and aspiring academics…
Picking battles wisely: plant behaviour under competition.
Novoplansky, Ariel
2009-06-01
Plants are limited in their ability to choose their neighbours, but they are able to orchestrate a wide spectrum of rational competitive behaviours that increase their prospects to prevail under various ecological settings. Through the perception of neighbours, plants are able to anticipate probable competitive interactions and modify their competitive behaviours to maximize their long-term gains. Specifically, plants can minimize competitive encounters by avoiding their neighbours; maximize their competitive effects by aggressively confronting their neighbours; or tolerate the competitive effects of their neighbours. However, the adaptive values of these non-mutually exclusive options are expected to depend strongly on the plants' evolutionary background and to change dynamically according to their past development, and relative sizes and vigour. Additionally, the magnitude of competitive responsiveness is expected to be positively correlated with the reliability of the environmental information regarding the expected competitive interactions and the expected time left for further plastic modifications. Concurrent competition over external and internal resources and morphogenetic signals may enable some plants to increase their efficiency and external competitive performance by discriminately allocating limited resources to their more promising organs at the expense of failing or less successful organs.
Formation Control for the MAXIM Mission
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions, including the Micro-Arcsecond X-ray Imaging Mission (MAXIM) and the Stellar Imager, will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the groundwork for linear formation control designs.
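As a simpler stand-in for the linearized relative-motion equations mentioned in item (2), the sketch below propagates the classical Clohessy-Wiltshire (Hill) linearization about a circular two-body reference orbit; the paper's own development is for an n-body gravitational field, and the orbital rate and initial state here are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2 * np.pi / 5400.0          # mean motion of a roughly 90-minute reference orbit [rad/s]

def cw_rhs(t, s):
    # Clohessy-Wiltshire equations: x radial, y along-track, z cross-track
    x, y, z, vx, vy, vz = s
    ax = 3 * n**2 * x + 2 * n * vy
    ay = -2 * n * vx
    az = -n**2 * z
    return [vx, vy, vz, ax, ay, az]

# initial relative state [m, m/s]; vy0 = -2*n*x0 gives a bounded relative ellipse
s0 = [100.0, 0.0, 20.0, 0.0, -2 * n * 100.0, 0.0]
sol = solve_ivp(cw_rhs, (0.0, 5400.0), s0, max_step=10.0)
print("relative position after one orbit:", sol.y[:3, -1])
```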
Moore, M A; Katzgraber, Helmut G
2014-10-01
Starting from preferences on N proposed policies obtained via questionnaires from a sample of the electorate, an Ising spin-glass model in a field can be constructed from which a political party could find the subset of the proposed policies which would maximize its appeal, form a coherent choice in the eyes of the electorate, and have maximum overlap with the party's existing policies. We illustrate the application of the procedure by simulations of a spin glass in a random field on scale-free networks.
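A toy version of this procedure is sketched below: pairwise policy couplings are built from respondents' agreement patterns, a field term encodes each policy's standalone appeal plus overlap with the party's platform, and a simulated annealing search over include/exclude assignments looks for a low-energy (high-appeal, coherent) subset. The synthetic data, weights, and annealing schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voters, n_policies = 500, 12
prefs = rng.choice([-1, 1], size=(n_voters, n_policies))   # questionnaire answers
party = rng.choice([-1, 1], size=n_policies)               # existing party stance

J = (prefs.T @ prefs) / n_voters                           # policy-policy coherence couplings
np.fill_diagonal(J, 0.0)
h = prefs.mean(axis=0) + 0.5 * party                       # appeal plus platform overlap

def energy(s):                       # s[i] = +1 adopt policy i, -1 drop it
    return -0.5 * s @ J @ s - h @ s

s = rng.choice([-1, 1], size=n_policies)
T = 1.0
for step in range(20_000):
    i = rng.integers(n_policies)
    dE = 2 * s[i] * (J[i] @ s + h[i])        # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] *= -1
    T = max(0.01, T * 0.9997)                # slow geometric cooling
print("adopt policies:", np.where(s == 1)[0], "energy:", energy(s))
```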
Translation Invariant Extensions of Finite Volume Measures
NASA Astrophysics Data System (ADS)
Goldstein, S.; Kuna, T.; Lebowitz, J. L.; Speer, E. R.
2017-02-01
We investigate the following questions: Given a measure μ_Λ on configurations on a subset Λ of a lattice L, where a configuration is an element of Ω^Λ for some fixed set Ω, does there exist a measure μ on configurations on all of L, invariant under some specified symmetry group of L, such that μ_Λ is its marginal on configurations on Λ? When the answer is yes, what are the properties, e.g., the entropies, of such measures? Our primary focus is the case in which L=Z^d and the symmetries are the translations. For the case in which Λ is an interval in Z we give a simple necessary and sufficient condition, local translation invariance (LTI), for extendibility. For LTI measures we construct extensions having maximal entropy, which we show are Gibbs measures; this construction extends to the case in which L is the Bethe lattice. On Z we also consider extensions supported on periodic configurations, which are analyzed using de Bruijn graphs and which include the extensions with minimal entropy. When Λ ⊂ Z is not an interval, or when Λ ⊂ Z^d with d>1, the LTI condition is necessary but not sufficient for extendibility. For Z^d with d>1, extendibility is in some sense undecidable.
Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M
2012-03-01
Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
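The workflow that performed best above (univariate chi-square screening at P < 0.05 followed by step-wise sub-selection within a discriminant model) can be sketched with generic tools as follows; the synthetic data and the scikit-learn routines are stand-ins for the authors' species data and software, and the chi-square screen here is only a rough univariate filter on rescaled predictors.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import chi2, SequentialFeatureSelector
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 25))                       # environmental predictors
# synthetic presence/absence response driven by a few predictors
y = (X[:, 0] + 0.8 * X[:, 3] + 0.6 * X[:, 7] - 0.5 * X[:, 12]
     + rng.normal(size=300) > 0).astype(int)

# 1. univariate chi-square screening (sklearn's chi2 needs non-negative inputs)
Xs = MinMaxScaler().fit_transform(X)
_, pvals = chi2(Xs, y)
keep = np.where(pvals < 0.05)[0]
keep = keep if keep.size >= 2 else np.argsort(pvals)[:2]   # guard for the toy data

# 2. step-wise (forward) sub-selection of the screened variables for the discriminant model
lda = LinearDiscriminantAnalysis()
sfs = SequentialFeatureSelector(lda, n_features_to_select="auto", tol=0.01,
                                direction="forward", cv=5)
sfs.fit(X[:, keep], y)
print("screened:", keep, "final subset:", keep[sfs.get_support()])
```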
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warburton, P.E.; Gosden, J.; Lawson, D.
1996-04-15
Alpha satellite DNA is a tandemly repeated DNA family found at the centromeres of all primate chromosomes examined. The fundamental repeat units of alpha satellite DNA are diverged 169- to 172-bp monomers, often found to be organized in chromosome-specific higher-order repeat units. The chromosomes of human (Homo sapiens (HSA)), chimpanzee (Pan troglodytes (PTR) and Pan paniscus), and gorilla (Gorilla gorilla) share a remarkable similarity and synteny. It is of interest to ask if alpha satellite arrays at centromeres of homologous chromosomes between these species are closely related (evolving in an orthologous manner) or if the evolutionary processes that homogenize and spread these arrays within and between chromosomes result in nonorthologous evolution of arrays. By using PCR primers specific for human chromosome 17-specific alpha satellite DNA, we have amplified, cloned, and characterized a chromosome-specific subset from the PTR chimpanzee genome. Hybridization both on Southern blots and in situ as well as sequence analysis show that this subset is most closely related, as expected, to sequences on HSA 17. However, in situ hybridization reveals that this subset is not found on the homologous chromosome in chimpanzee (PTR 19), but instead on PTR 12, which is homologous to HSA 2p. 40 refs., 3 figs.
Impact of genetic features on treatment decisions in AML.
Döhner, Hartmut; Gaidzik, Verena I
2011-01-01
In recent years, research in molecular genetics has been instrumental in deciphering the molecular pathogenesis of acute myeloid leukemia (AML). With the advent of novel genomics technologies such as next-generation sequencing, it is expected that virtually all genetic lesions in AML will soon be identified. Gene mutations or deregulated expression of genes or sets of genes now allow us to explore the enormous diversity among cytogenetically defined subsets of AML, in particular the large subset of cytogenetically normal AML. Nonetheless, there are several challenges, such as discriminating driver from passenger mutations, evaluating the prognostic and predictive value of a specific mutation in the concert of the various concurrent mutations, or translating findings from molecular disease pathogenesis into novel therapies. Contrary to initial assumptions, progress in developing molecular targeted therapies is slow, and the various reports of promising new compounds will need to be put into perspective because many of these drugs did not show the expected effects.
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
Skedgel, Chris; Wailoo, Allan; Akehurst, Ron
2015-01-01
Economic theory suggests that resources should be allocated in a way that produces the greatest outputs, on the grounds that maximizing output allows for a redistribution that could benefit everyone. In health care, this is known as QALY (quality-adjusted life-year) maximization. This justification for QALY maximization may not hold, though, as it is difficult to reallocate health. Therefore, the allocation of health care should be seen as a matter of distributive justice as well as efficiency. A discrete choice experiment was undertaken to test consistency with the principles of QALY maximization and to quantify the willingness to trade life-year gains for distributive justice. An empirical ethics process was used to identify attributes that appeared relevant and ethically justified: patient age, severity (decomposed into initial quality and life expectancy), final health state, duration of benefit, and distributional concerns. Only 3% of respondents maximized QALYs with every choice, but scenarios with larger aggregate QALY gains were chosen more often and a majority of respondents maximized QALYs in a majority of their choices. However, respondents also appeared willing to prioritize smaller gains to preferred groups over larger gains to less preferred groups. Marginal analyses found a statistically significant preference for younger patients and a wider distribution of gains, as well as an aversion to patients with the shortest life expectancy or a poor final health state. These results support the existence of an equity-efficiency tradeoff and suggest that well-being could be enhanced by giving priority to programs that best satisfy societal preferences. Societal preferences could be incorporated through the use of explicit equity weights, although more research is required before such weights can be used in priority setting. © The Author(s) 2014.
Liu, Tong; Green, Angela R.; Rodríguez, Luis F.; Ramirez, Brett C.; Shike, Daniel W.
2015-01-01
The number of animals required to represent the collective characteristics of a group remains a concern in animal movement monitoring with GPS. Monitoring a subset of animals from a group instead of all animals can reduce costs and labor; however, incomplete data may cause information losses and inaccuracy in subsequent data analyses. In cattle studies, little work has been conducted to determine the number of cattle within a group needed to be instrumented considering subsequent analyses. Two different groups of cattle (a mixed group of 24 beef cows and heifers, and another group of 8 beef cows) were monitored with GPS collars at 4 min intervals on intensively managed pastures and corn residue fields in 2011. The effects of subset group size on cattle movement characterization and spatial occupancy analysis were evaluated by comparing the results between subset groups and the entire group for a variety of summarization parameters. As expected, more animals yield better results for all parameters. Results show the average group travel speed and daily travel distances are overestimated as subset group size decreases, while the average group radius is underestimated. Accuracy of group centroid locations and group radii are improved linearly as subset group size increases. A kernel density estimation was performed to quantify the spatial occupancy by cattle via GPS location data. Results show animals among the group had high similarity of spatial occupancy. Decisions regarding choosing an appropriate subset group size for monitoring depend on the specific use of data for subsequent analysis: a small subset group may be adequate for identifying areas visited by cattle; larger subset group size (e.g. subset group containing more than 75% of animals) is recommended to achieve better accuracy of group movement characteristics and spatial occupancy for the use of correlating cattle locations with other environmental factors. PMID:25647571
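The subset-size comparison can be illustrated with a small simulation: generate GPS fixes for a herd, then compute the group centroid, mean group radius, and centroid path length from random subsets and compare them with the full-group values. The movement model and numbers below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_fixes = 24, 360                       # 4-minute fixes over 24 hours
herd_path = np.cumsum(rng.normal(0, 5, size=(n_fixes, 2)), axis=0)      # common herd drift [m]
tracks = herd_path[None] + rng.normal(0, 20, size=(n_animals, n_fixes, 2))  # per-animal scatter

def group_metrics(tracks):
    centroid = tracks.mean(axis=0)                                    # group centre at each fix
    radius = np.linalg.norm(tracks - centroid, axis=2).mean()         # mean distance to centre
    travel = np.linalg.norm(np.diff(centroid, axis=0), axis=1).sum()  # centroid path length
    return centroid, radius, travel

full_centroid, full_radius, full_travel = group_metrics(tracks)
for k in (4, 8, 16, 24):
    idx = rng.choice(n_animals, size=k, replace=False)
    c, r, t = group_metrics(tracks[idx])
    err = np.linalg.norm(c - full_centroid, axis=1).mean()
    print(f"subset {k:2d}: centroid error {err:6.2f} m, radius {r:6.1f} m, travel {t:7.1f} m")
```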
[Varicocele and coincidental abacterial prostato-vesiculitis: negative role on sperm output].
Vicari, Enzo; La Vignera, Sandro; Tracia, Angelo; Cardì, Francesco; Donati, Angelo
2003-03-01
To evaluate the frequency and the role of a coincidentally expressed abacterial prostato-vesiculitis (PV) on sperm output in patients with left varicocele (Vr). We evaluated 143 selected infertile patients (mean age 27 years, range 21-43), with oligo- and/or astheno- and/or teratozoospermia (OAT) subdivided into two groups. Group A included 76 patients with previous varicocelectomy and persistent OAT. Group B included 67 infertile patients (mean age 26 years, range 21-37) with OAT who were not varicocelectomized. Patients with Vr and coincidental didymo-epididymal ultrasound (US) abnormalities were excluded from the study. Following rectal prostato-vesicular ultrasonography, each group was subdivided into two subsets on the basis of the absence (group A: subset Vr-/PV-; and group B: subset Vr+/PV-) or the presence of an abacterial PV (group A: subset Vr-/PV+; group B: subset Vr+/PV+). PV was present in 47.4% and 41.8% of patients of groups A and B, respectively. This coincidental pathology was ipsilateral with Vr in 61% of the cases. Semen analysis was performed in all patients. Patients of group A showed a total sperm number significantly higher than those found in group B. In the presence of PV, sperm parameters were not significantly different between matched subsets (Vr-/PV+ vs. Vr+/PV+). In the absence of PV, the sperm density, the total sperm number and the percentage of forward motility from the subset with previous varicocelectomy (Vr-/PV-) exhibited values significantly higher than those found in the matched subset (Vr+/PV-). Sperm analysis alone performed in patients with left Vr is not a useful prognostic post-varicocelectomy marker. Since following varicocelectomy a lack of sperm response could mask another coincidental pathology, the identification through US scans of a possible PV may be mandatory. On the other hand, an integrated uro-andrological approach, including US scans, makes it possible to identify subsets of patients with Vr alone, who will have an expected better sperm response following Vr repair.
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
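The maximum expected utility rule at the core of such a model is simple to state: given the posterior over the three classes and a utility for each (decision, truth) pair, choose the decision with the highest expected utility. A minimal sketch, with an illustrative utility matrix:

```python
import numpy as np

def meu_decision(posterior, utilities):
    """posterior[j]: P(class j | data); utilities[i, j]: utility of deciding i when j is true."""
    return int(np.argmax(utilities @ posterior))   # expected utility of each decision

# example: correct decisions worth 1, all errors worth 0 (a special case of equal error utility)
U = np.eye(3)
print(meu_decision(np.array([0.2, 0.5, 0.3]), U))  # -> 1, which here equals the maximum-posterior choice
```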
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
NASA Astrophysics Data System (ADS)
O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.
2012-02-01
Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map but with use of a priori calculated average image derived from an ensemble of normal cases.
A new method for spatial structure detection of complex inner cavities based on 3D γ-photon imaging
NASA Astrophysics Data System (ADS)
Xiao, Hui; Zhao, Min; Liu, Jiantang; Liu, Jiao; Chen, Hao
2018-05-01
This paper presents a new three-dimensional (3D) imaging method for detecting the spatial structure of a complex inner cavity based on positron annihilation and γ-photon detection. The method first labels a carrier solution with a radionuclide and injects it into the inner cavity, where positrons are generated. Subsequently, γ-photons are released from positron annihilation, and the γ-photon detector ring is used for recording the γ-photons. Finally, the two-dimensional (2D) image slices of the inner cavity are constructed by the ordered-subset expectation maximization scheme and the 2D image slices are merged into a 3D image of the inner cavity. To eliminate the artifact in the reconstructed image due to the scattered γ-photons, a novel angle-traversal model is proposed for γ-photon single-scattering correction, in which the path of the single scattered γ-photon is analyzed from a spatial geometry perspective. Two experiments are conducted to verify the effectiveness of the proposed correction model and the advantage of the proposed testing method in detecting the spatial structure of the inner cavity, including the distribution of a gas-liquid multi-phase mixture inside the inner cavity. The above two experiments indicate the potential of the proposed method as a new tool for accurately delineating the inner structures of complex industrial parts.
GPU-based prompt gamma ray imaging from boron neutron capture therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr
Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S; Suh, T; Yoon, D
Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray reconstruction using the GPU computation for BNCT simulations.
Phantom experiments to improve parathyroid lesion detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Kenneth J.; Tronco, Gene G.; Tomas, Maria B.
2007-12-15
This investigation tested the hypothesis that visual analysis of iteratively reconstructed tomograms by ordered subset expectation maximization (OSEM) provides the highest accuracy for localizing parathyroid lesions using 99mTc-sestamibi SPECT data. From an Institutional Review Board approved retrospective review of 531 patients evaluated for parathyroid localization, image characteristics were determined for 85 99mTc-sestamibi SPECT studies originally read as equivocal (EQ). Seventy-two plexiglas phantoms using cylindrical simulated lesions were acquired for a clinically realistic range of counts (mean simulated lesion counts of 75 ± 50 counts/pixel) and target-to-background (T:B) ratios (range = 2.0 to 8.0) to determine an optimal filter for OSEM. Two experienced nuclear physicians graded simulated lesions, blinded to whether chambers contained radioactivity or plain water, and two observers used the same scale to read all phantom and clinical SPECT studies, blinded to pathology findings and clinical information. For phantom data and all clinical data, T:B analyses were not statistically different for OSEM versus FB, but visual readings were significantly more accurate than T:B (88 ± 6% versus 68 ± 6%, p = 0.001) for OSEM processing, and OSEM was significantly more accurate than FB for visual readings (88 ± 6% versus 58 ± 6%, p < 0.0001). These data suggest that visual analysis of iteratively reconstructed MIBI tomograms should be incorporated into imaging protocols performed to localize parathyroid lesions.
Resistive plate chambers in positron emission tomography
NASA Astrophysics Data System (ADS)
Crespo, Paulo; Blanco, Alberto; Couceiro, Miguel; Ferreira, Nuno C.; Lopes, Luís; Martins, Paulo; Ferreira Marques, Rui; Fonte, Paulo
2013-07-01
Resistive plate chambers (RPC) were originally deployed for high energy physics. Realizing how their properties match the needs of nuclear medicine, a LIP team proposed applying RPCs to both preclinical and clinical positron emission tomography (RPC-PET). We show a large-area RPC-PET simulated scanner covering an axial length of 2.4 m, slightly greater than the height of the human body, allowing for whole-body, single-bed RPC-PET acquisitions. Simulations following NEMA (National Electrical Manufacturers Association, USA) protocols yield a system sensitivity at least one order of magnitude larger than that of present-day commercial PET systems. Reconstruction of whole-body simulated data is feasible by using a dedicated, direct time-of-flight-based algorithm implemented within a parallelized ordered subsets expectation maximization strategy. Whole-body RPC-PET patient images following the injection of only 2 mCi of 18F-fluorodeoxyglucose (FDG) are expected to be ready 7 minutes after the 6 minutes necessary for data acquisition. This compares to the 10-20 mCi FDG presently injected for a PET scan, and to the uncomfortable 20-30 minutes necessary for its data acquisition. In the preclinical field, two fully instrumented detector heads have been assembled aiming at a four-head-based, small-animal RPC-PET system. Images of a disk-shaped and a needle-like 22Na source show unprecedented sub-millimeter spatial resolution.
Discrete mixture modeling to address genetic heterogeneity in time-to-event regression
Eng, Kevin H.; Hanlon, Bret M.
2014-01-01
Motivation: Time-to-event regression models are a critical tool for associating survival time outcomes with molecular data. Despite mounting evidence that genetic subgroups of the same clinical disease exist, little attention has been given to exploring how this heterogeneity affects time-to-event model building and how to accommodate it. Methods able to diagnose and model heterogeneity should be valuable additions to the biomarker discovery toolset. Results: We propose a mixture of survival functions that classifies subjects with similar relationships to a time-to-event response. This model incorporates multivariate regression and model selection and can be fit with an expectation maximization algorithm that we call Cox-assisted clustering (CAC). We illustrate a likely manifestation of genetic heterogeneity and demonstrate how it may affect survival models with little warning. An application to gene expression in ovarian cancer DNA repair pathways illustrates how the model may be used to learn new genetic subsets for risk stratification. We explore the implications of this model for censored observations and the effect on genomic predictors and diagnostic analysis. Availability and implementation: R implementation of CAC using standard packages is available at https://gist.github.com/programeng/8620b85146b14b6edf8f. Data used in the analysis are publicly available. Contact: kevin.eng@roswellpark.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24532723
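As a simplified stand-in for the mixture-of-survival-functions idea, the sketch below runs EM for a two-component mixture of exponential survival distributions with right censoring: the E-step uses the density for events and the survival function for censored times, and the M-step performs weighted censored-exponential estimation. The real Cox-assisted clustering uses Cox regression components and model selection, which are not reproduced here; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.binomial(1, 0.4, n)                       # latent subgroup
t_true = rng.exponential(np.where(z == 1, 5.0, 1.0))
c = rng.exponential(6.0, n)                       # censoring times
t, d = np.minimum(t_true, c), (t_true <= c).astype(float)   # observed time, event indicator

pi, lam = np.array([0.5, 0.5]), np.array([0.5, 2.0])        # mixing weights, hazard rates
for _ in range(200):
    # E-step: responsibilities use the density for events, the survival function if censored
    like = pi * lam ** d[:, None] * np.exp(-lam * t[:, None])
    w = like / like.sum(axis=1, keepdims=True)
    # M-step: weighted maximum likelihood for censored exponential components
    pi = w.mean(axis=0)
    lam = (w * d[:, None]).sum(axis=0) / (w * t[:, None]).sum(axis=0)
print("mixing weights:", pi, "rates:", lam)       # recovers the two latent rates (about 1.0 and 0.2)
```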
Search for anomalous kinematics in tt dilepton events at CDF II.
Acosta, D; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arisawa, T; Arguin, J-F; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Bacchetta, N; Bachacou, H; Badgett, W; Barbaro-Galtieri, A; Barker, G J; Barnes, V E; Barnett, B A; Baroiant, S; Barone, M; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Ben-Haim, E; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bishai, M; Blair, R E; Blocker, C; Bloom, K; Blumenfeld, B; Bocci, A; Bodek, A; Bolla, G; Bolshov, A; Booth, P S L; Bortoletto, D; Boudreau, J; Bourov, S; Brau, B; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canepa, A; Casarsa, M; Carlsmith, D; Carron, S; Carosi, R; Cavalli-Sforza, M; Castro, A; Catastini, P; Cauz, D; Cerri, A; Cerrito, L; Chapman, J; Chen, C; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Chu, M L; Chuang, S; Chung, J Y; Chung, W-H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A G; Clark, D; Coca, M; Connolly, A; Convery, M; Conway, J; Cooper, B; Cordelli, M; Cortiana, G; Cranshaw, J; Cuevas, J; Culbertson, R; Currat, C; Cyr, D; Dagenhart, D; Da Ronco, S; D'Auria, S; de Barbaro, P; De Cecco, S; De Lentdecker, G; Dell'Agnello, S; Dell'Orso, M; Demers, S; Demortier, L; Deninno, M; De Pedis, D; Derwent, P F; Dionisi, C; Dittmann, J R; Dörr, C; Doksus, P; Dominguez, A; Donati, S; Donega, M; Donini, J; D'Onofrio, M; Dorigo, T; Drollinger, V; Ebina, K; Eddy, N; Ehlers, J; Ely, R; Erbacher, R; Erdmann, M; Errede, D; Errede, S; Eusebi, R; Fang, H-C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferretti, C; Field, R D; Flanagan, G; Flaugher, B; Flores-Castillo, L R; Foland, A; Forrester, S; Foster, G W; Franklin, M; Freeman, J C; Fujii, Y; Furic, I; Gajjar, A; Gallas, A; Galyardt, J; Gallinaro, M; Garcia-Sciveres, M; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D W; Gerchtein, E; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Ginsburg, C; Giolo, K; Giordani, M; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, D; Goldstein, J; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Gresele, A; Griffiths, M; Grosso-Pilcher, C; Grundler, U; Guenther, M; Guimaraes da Costa, J; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harr, R F; Harris, R M; Hartmann, F; Hatakeyama, K; Hauser, J; Hays, C; Hayward, H; Heider, E; Heinemann, B; Heinrich, J; Hennecke, M; Herndon, M; Hill, C; Hirschhbuehl, D; Hocker, A; Hoffman, K D; Holloway, A; Hou, S; Houlden, M A; Huffman, B T; Huang, Y; Hughes, R E; Huston, J; Ikado, K; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Issever, C; Ivanov, A; Iwata, Y; Iyutin, B; James, E; Jang, D; Jarrell, J; Jeans, D; Jensen, H; Jeon, E J; Jones, M; Joo, K K; Jun, S Y; Junk, T; Kamon, T; Kang, J; Karagoz Unel, M; Karchin, P E; Kartal, S; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, M S; Kim, S B; Kim, S H; Kim, T H; Kim, Y K; King, B T; Kirby, M; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Kobayashi, H; Koehn, P; Kong, D J; Kondo, K; Konigsberg, J; Kordas, K; Korn, A; Korytov, A; Kotelnikov, K; Kotwal, A V; Kovalev, A; Kraus, J; 
Kravchenko, I; Kreymer, A; Kroll, J; Kruse, M; Krutelyov, V; Kuhlmann, S E; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, J; Lancaster, M; Lander, R; Lannon, K; Lath, A; Latino, G; Lauhakangas, R; Lazzizzera, I; Le, Y; Lecci, C; LeCompte, T; Lee, J; Lee, J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Li, K; Lin, C; Lin, C S; Lindgren, M; Liss, T M; Lister, A; Litvintsev, D O; Liu, T; Liu, Y; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; MacQueen, D; Madrak, R; Maeshima, K; Maksimovic, P; Malferrari, L; Manca, G; Marginean, R; Marino, C; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Matsunaga, H; Mattson, M; Mazzanti, P; McFarland, K S; McGivern, D; McIntyre, P M; McNamara, P; NcNulty, R; Mehta, A; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miller, L; Miller, R; Miller, J S; Miquel, R; Miscetti, S; Mitselmakher, G; Miyamoto, A; Miyazaki, Y; Moggi, N; Mohr, B; Moore, R; Morello, M; Movilla Fernandez, P A; Mukherjee, A; Mulhearn, M; Muller, T; Mumford, R; Munar, A; Murat, P; Nachtman, J; Nahn, S; Nakamura, I; Nakano, I; Napier, A; Napora, R; Naumov, D; Necula, V; Niell, F; Nielsen, J; Nelson, C; Nelson, T; Neu, C; Neubauer, M S; Newman-Holmes, C; Nigmanov, T; Nodulman, L; Norniella, O; Oesterberg, K; Ogawa, T; Oh, S H; Oh, Y D; Ohsugi, T; Okusawa, T; Oldeman, R; Orava, R; Orejudos, W; Pagliarone, C; Palencia, E; Paoletti, R; Papadimitriou, V; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Pauly, T; Paus, C; Pellett, D; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pitts, K T; Plager, C; Pompos, A; Pondrom, L; Pope, G; Portell, X; Poukhov, O; Prakoshyn, F; Pratt, T; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Rademachker, J; Rahaman, M A; Rakitine, A; Rappoccio, S; Ratnikov, F; Ray, H; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Rimondi, F; Rinnert, K; Ristori, L; Robertson, W J; Robson, A; Rodrigo, T; Rolli, S; Rosenson, L; Roser, R; Rossin, R; Rott, C; Russ, J; Rusu, V; Ruiz, A; Ryan, D; Saarikko, H; Sabik, S; Safonov, A; St Denis, R; Sakumoto, W K; Salamanna, G; Saltzberg, D; Sanchez, C; Sansoni, A; Santi, L; Sarkar, S; Sato, K; Savard, P; Savoy-Navarro, A; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semeria, F; Sexton-Kennedy, L; Sfiligoi, I; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Siegrist, J; Siket, M; Sill, A; Sinervo, P; Sisakyan, A; Skiba, A; Slaughter, A J; Sliwa, K; Smirnov, D; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S V; Spalding, J; Spezziga, M; Spiegel, L; Spinella, F; Spiropulu, M; Squillacioti, P; Stadie, H; Stelzer, B; Stelzer-Chilton, O; Strologas, J; Stuart, D; Sukhanov, A; Sumorok, K; Sun, H; Suzuki, T; Taffard, A; Tafirout, R; Takach, S F; Takano, H; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tanimoto, N; Tapprogge, S; Tecchio, M; Teng, P K; Terashi, K; Tesarek, R J; Tether, S; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Trkaczyk, S; Toback, D; Tollefson, K; Tomura, T; Tonelli, D; Tönnesmann, M; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tseng, J; Tsuchiya, R; Tsuno, S; Tsybychev, D; Turini, N; Turner, M; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vacavant, L; Vaiciulis, A; Varganov, A; Vataga, E; Vejcik, S; Velev, G; Veszpremi, V; Veramendi, G; Vickey, T; Vidal, R; Vila, I; 
Vilar, R; Vollrath, I; Volobouev, I; von der Mey, M; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wallny, R; Walter, T; Yamashita, T; Yamamoto, K; Wan, Z; Wang, M J; Wang, S M; Warburton, A; Ward, B; Waschke, S; Waters, D; Watts, T; Weber, M; Wester, W C; Whitehouse, B; Wicklund, A B; Wicklund, E; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolter, M; Worcester, M; Worm, S; Wright, T; Wu, X; Würthwein, F; Wyatt, A; Yagil, A; Yang, C; Yang, U K; Yao, W; Yeh, G P; Yi, K; Yoh, J; Yoon, P; Yorita, K; Yoshida, T; Yu, I; Yu, S; Yu, Z; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zetti, F; Zhou, J; Zsenei, A; Zucchelli, S
2005-07-08
We report on a search for anomalous kinematics of tt dilepton events in pp collisions at √s = 1.96 TeV using 193 pb⁻¹ of data collected with the CDF II detector. We developed a new a priori technique designed to isolate the subset in a data sample revealing the largest deviation from standard model (SM) expectations and to quantify the significance of this departure. In the four-variable space considered, no particular subset shows a significant discrepancy, and we find that the probability of obtaining a data sample less consistent with the SM than what is observed is 1.0%-4.5%.
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Earlier sequential Monte Carlo methods were reliable only for the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton CKF (TNF-CKF), a recent robust method that works in the filtering sense.
Competitive Facility Location with Random Demands
NASA Astrophysics Data System (ADS)
Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke
2009-10-01
This paper proposes a new location problem for competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and to find its solution, three deterministic programming problems are considered: an expectation maximizing problem, a probability maximizing problem, and a satisfying level maximizing problem. After showing that one of their optimal solutions can be found by solving 0-1 programming problems, a solution method is proposed that improves the tabu search algorithm with strategic vibration. The efficiency of the solution method is shown by applying it to numerical examples of facility location problems.
Physical renormalization condition for de Sitter QED
NASA Astrophysics Data System (ADS)
Hayashinaka, Takahiro; Xue, She-Sheng
2018-05-01
We considered a new renormalization condition for the vacuum expectation values of the scalar and spinor currents induced by a homogeneous and constant electric field background in de Sitter spacetime. Following a semiclassical argument, the condition named maximal subtraction imposes the exponential suppression on the massive charged particle limit of the renormalized currents. The maximal subtraction changes the behaviors of the induced currents previously obtained by the conventional minimal subtraction scheme. The maximal subtraction is favored for a couple of physically decent predictions including the identical asymptotic behavior of the scalar and spinor currents, the removal of the IR hyperconductivity from the scalar current, and the finite current for the massless fermion.
Trust regions in Kriging-based optimization with expected improvement
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2016-06-01
The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
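As an illustration of the acquisition step these methods share, the sketch below evaluates the standard Expected Improvement formula for a minimization problem and picks the best of a set of candidate points inside a box standing in for a trust region. The surrogate mean and standard deviation here are made-up stand-ins, not output of the TRIKE or CYCLONE implementations.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Expected Improvement (minimization) of Gaussian predictions N(mu, sigma^2)
    over the incumbent best observed value f_best."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(1)
candidates = rng.uniform(-1.0, 1.0, size=(200, 2))   # samples in a stand-in trust region
mu = (candidates ** 2).sum(axis=1)                   # stand-in surrogate (Kriging) mean
sigma = 0.1 + 0.05 * rng.random(200)                 # stand-in surrogate std. dev.
best = candidates[np.argmax(expected_improvement(mu, sigma, f_best=0.5))]
print("next iterate:", best)
```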
Cheng, Qiang; Zhou, Hongbo; Cheng, Jie
2011-06-01
Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward the efficient selection of the globally optimal subset of features, we introduce a new selector, which we call the Fisher-Markov selector, to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.
Pan, Wei; Chen, Yi-Shin
2018-01-01
Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing 'goal' and 'time' factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight. PMID:29702665
Can Monkeys Make Investments Based on Maximized Pay-off?
Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard
2011-01-01
Animals can maximize benefits, but it is not known whether they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by applying different decision rules according to each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting investment according to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible. PMID:21423777
NASA Technical Reports Server (NTRS)
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
Designing Agent Collectives For Systems With Markovian Dynamics
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lawson, John W.
2004-01-01
The Collective Intelligence (COIN) framework concerns the design of collectives of agents so that as those agents strive to maximize their individual utility functions, their interaction causes a provided world utility function concerning the entire collective to be also maximized. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counter-factual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation also has benefits in scenarios not involving Markovian dynamics, where only some arguments of an agent's utility function are observable. We investigate this transformation in simulations involving both linear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low opacity (analogous to having high signal to noise) but are not factored (analogous to not being incentive compatible), reliably improve performance over that arising with factored utilities. We also present a Taylor Series method for the fully general nonlinear case.
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus 'keep their options open'. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
Faith, Daniel P.
2015-01-01
The phylogenetic diversity measure ('PD') measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672
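For readers unfamiliar with expected PD, here is a toy calculation under the usual independence assumption: every branch contributes its length weighted by the probability that at least one descendant taxon survives. The tree, branch lengths and survival probabilities below are invented for illustration.

```python
# Hypothetical 4-taxon tree: each branch given as (length, descendant tips).
branches = [
    (1.0, {"A"}), (1.0, {"B"}), (2.0, {"C"}), (2.0, {"D"}),
    (0.5, {"A", "B"}), (1.5, {"C", "D"}),
]
survival = {"A": 0.9, "B": 0.4, "C": 0.7, "D": 0.2}   # 1 - extinction probability

def expected_pd(branches, survival):
    """Expected PD: a branch is retained with the probability that at least
    one of its descendant taxa survives (independent extinctions assumed)."""
    total = 0.0
    for length, tips in branches:
        p_all_lost = 1.0
        for t in tips:
            p_all_lost *= 1.0 - survival[t]
        total += length * (1.0 - p_all_lost)
    return total

print(round(expected_pd(branches, survival), 3))
```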
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
Minimizing the average distance to a closest leaf in a phylogenetic tree.
Matsen, Frederick A; Gallagher, Aaron; McCoy, Connor O
2013-11-01
When performing an analysis on a collection of molecular sequences, it can be convenient to reduce the number of sequences under consideration while maintaining some characteristic of a larger collection of sequences. For example, one may wish to select a subset of high-quality sequences that represent the diversity of a larger collection of sequences. One may also wish to specialize a large database of characterized "reference sequences" to a smaller subset that is as close as possible on average to a collection of "query sequences" of interest. Such a representative subset can be useful whenever one wishes to find a set of reference sequences that is appropriate to use for comparative analysis of environmentally derived sequences, such as for selecting "reference tree" sequences for phylogenetic placement of metagenomic reads. In this article, we formalize these problems in terms of the minimization of the Average Distance to the Closest Leaf (ADCL) and investigate algorithms to perform the relevant minimization. We show that the greedy algorithm is not effective, show that a variant of the Partitioning Around Medoids (PAM) heuristic gets stuck in local minima, and develop an exact dynamic programming approach. Using this exact program we note that the performance of PAM appears to be good for simulated trees, and is faster than the exact algorithm for small trees. On the other hand, the exact program gives solutions for all numbers of leaves less than or equal to the given desired number of leaves, whereas PAM only gives a solution for the prespecified number of leaves. Via application to real data, we show that the ADCL criterion chooses chimeric sequences less often than random subsets, whereas the maximization of phylogenetic diversity chooses them more often than random. These algorithms have been implemented in publicly available software.
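A small sketch of the ADCL objective itself, with brute-force minimization over fixed-size subsets standing in for the exact dynamic program described in the abstract; the distance matrix is an invented stand-in for patristic distances read off a tree.

```python
import itertools
import numpy as np

def adcl(dist, subset):
    """Average Distance to the Closest Leaf: mean, over all leaves, of the
    distance to the nearest leaf kept in `subset`."""
    return float(np.mean([min(dist[i][j] for j in subset) for i in range(len(dist))]))

# Hypothetical pairwise patristic distances between 5 leaves.
dist = np.array([
    [0, 2, 5, 9, 8],
    [2, 0, 4, 8, 7],
    [5, 4, 0, 6, 5],
    [9, 8, 6, 0, 3],
    [8, 7, 5, 3, 0],
], dtype=float)

best = min(itertools.combinations(range(5), 2), key=lambda s: adcl(dist, s))
print("best size-2 subset:", best, "ADCL:", round(adcl(dist, best), 2))
```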
Differences in Mouse and Human Non-Memory B Cell Pools
Benitez, Abigail; Weldon, Abby J.; Tatosyan, Lynnette; Velkuru, Vani; Lee, Steve; Milford, Terry-Ann; Francis, Olivia L.; Hsu, Sheri; Nazeri, Kavoos; Casiano, Carlos M.; Schneider, Rebekah; Gonzalez, Jennifer; Su, Rui-Jun; Baez, Ineavely; Colburn, Keith; Moldovan, Ioana; Payne, Kimberly J.
2014-01-01
Identifying cross-species similarities and differences in immune development and function is critical for maximizing the translational potential of animal models. Co-expression of CD21 and CD24 distinguishes transitional and mature B cell subsets in mice. Here, we validate these markers for identifying analogous subsets in humans and use them to compare the non-memory B cell pools in mice and humans, across tissues, during fetal/neonatal and adult life. Among human CD19+IgM+ B cells, the CD21/CD24 schema identifies distinct populations that correspond to T1 (transitional 1), T2 (transitional 2), FM (follicular mature), and MZ (marginal zone) subsets identified in mice. Markers specific to human B cell development validate the identity of MZ cells and the maturation status of human CD21/CD24 non-memory B cell subsets. A comparison of the non-memory B cell pools in bone marrow (BM), blood, and spleen in mice and humans shows that transitional B cells comprise a much smaller fraction in adult humans than mice. T1 cells are a major contributor to the non-memory B cell pool in mouse BM where their frequency is more than twice that in humans. Conversely, in spleen the T1:T2 ratio shows that T2 cells are proportionally ∼8 fold higher in humans than mouse. Despite the relatively small contribution of transitional B cells to the human non-memory pool, the number of naïve FM cells produced per transitional B cell is 3-6 fold higher across tissues than in mouse. These data suggest differing dynamics or mechanisms produce the non-memory B cell compartments in mice and humans. PMID:24719464
Building Capacity through Action Research Curricula Reviews
ERIC Educational Resources Information Center
Lee, Vanessa; Coombe, Leanne; Robinson, Priscilla
2015-01-01
In Australia, graduates of Master of Public Health (MPH) programmes are expected to achieve a set of core competencies, including a subset that is specifically related to Indigenous health. This paper reports on the methods utilised in a project which was designed using action research to strengthen Indigenous public health curricula within MPH…
Minimizing Expected Maximum Risk from Cyber-Attacks with Probabilistic Attack Success
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhuiyan, Tanveer H.; Nandi, Apurba; Medal, Hugh
The goal of our work is to enhance network security by generating partial cut-sets, i.e. subsets of edges whose removal eliminates paths from initially vulnerable nodes (initial security conditions) to goal nodes (critical assets) in an attack graph, given a cost for cutting each edge and a limited overall budget.
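The budget-constrained partial cut described here is harder than a plain minimum cut, but the unconstrained version gives the intuition; the sketch below uses networkx to find the cheapest edge set separating a hypothetical entry node from a critical asset. The graph, costs and node names are all made up.

```python
import networkx as nx

# Hypothetical attack graph: edges are attacker steps, weights are cutting costs.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("entry", "web", 3), ("entry", "mail", 2),
    ("web", "db", 4), ("mail", "db", 1), ("web", "mail", 1),
])

# Minimum-cost edge set disconnecting the critical asset from the entry point
# (no budget constraint in this simplified sketch).
cut_cost, (src_side, _) = nx.minimum_cut(G, "entry", "db", capacity="weight")
cut_edges = [(u, v) for u, v in G.edges if u in src_side and v not in src_side]
print("cut cost:", cut_cost, "edges to cut:", cut_edges)
```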
Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network
Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan
2014-01-01
Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods that solve the target coverage problem for traditional sensor networks, which use a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, how well a multimedia sensor is expected to cover a target is defined by the deflection angle between the target and the sensor's current orientation and by the distance between the target and the sensor. Target coverage optimization algorithms based on this expected coverage value are then presented for the single-sensor single-target, multisensor single-target, and single-sensor multitarget problems, respectively. For the multisensor multitarget problem, which is NP-complete, candidate orientations are generated by selecting, for each sensor, the orientations to which it can rotate to cover each target falling in its FoV disk, and a genetic algorithm then yields an approximate minimum subset of sensors that covers all targets in the network. Simulation results show the algorithm's performance and the effect of the number of targets on the resulting subset. PMID:25136667
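To make the coverage definition concrete, here is a toy expected-coverage score for a single sensor and target: zero outside the FoV disk, otherwise decreasing with both deflection angle and distance. The linear weighting is an assumption; the paper's exact scoring function may differ.

```python
import numpy as np

def expected_coverage(sensor_pos, orientation_deg, target_pos,
                      radius=10.0, fov_deg=60.0):
    """Toy expected-coverage value: 0 if the target lies outside the FoV disk,
    otherwise a score that decreases with deflection angle and distance."""
    vec = np.asarray(target_pos, float) - np.asarray(sensor_pos, float)
    dist = np.linalg.norm(vec)
    if dist > radius:
        return 0.0
    target_bearing = np.degrees(np.arctan2(vec[1], vec[0]))
    deflection = abs((target_bearing - orientation_deg + 180) % 360 - 180)
    if deflection > fov_deg / 2:
        return 0.0
    return (1 - deflection / (fov_deg / 2)) * (1 - dist / radius)

print(expected_coverage((0, 0), 45.0, (3, 4)))
```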
Maximal sfermion flavour violation in super-GUTs
Ellis, John; Olive, Keith A.; Velasco-Sevilla, Liliana
2016-10-20
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses m_0 specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses m_1/2, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to m_1/2 and generation independent. In this case, the input scalar masses m_0 may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. As a result, we illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.
Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui
2013-12-01
In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as the stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.
Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B
2016-08-01
We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Speeded Reaching Movements around Invisible Obstacles
Hudson, Todd E.; Wolfe, Uta; Maloney, Laurence T.
2012-01-01
We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (who chooses motor strategies maximizing expected gain) using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions. PMID:23028276
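The ideal-planner computation can be sketched in one dimension: given Gaussian end-point noise, score every candidate aim point by its Monte Carlo expected gain and pick the maximizer. The reward, penalty, region boundaries and noise level below are invented, not the experiment's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D layout (mm): reward region on the right, penalty region on the left.
TARGET = (-4.0, 4.0)        # reward region
OBSTACLE = (-14.0, -6.0)    # penalty region
REWARD, PENALTY = 25, -125  # points
MOTOR_SD = 3.0              # end-point variability

def expected_gain(aim, n=50_000):
    """Monte Carlo expected gain of aiming at `aim` under Gaussian end-point noise."""
    hits = aim + rng.normal(0.0, MOTOR_SD, size=n)
    gain = REWARD * ((hits > TARGET[0]) & (hits < TARGET[1]))
    gain = gain + PENALTY * ((hits > OBSTACLE[0]) & (hits < OBSTACLE[1]))
    return gain.mean()

aims = np.linspace(-4, 6, 101)
best_aim = aims[np.argmax([expected_gain(a) for a in aims])]
print("gain-maximizing aim point:", best_aim)
```

As expected, the maximizing aim point shifts away from the penalty region rather than sitting at the target centre.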
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has seen an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
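As a point of reference for the benchmark model, the sketch below simulates the Van der Pol oscillator and generates irregularly spaced, noisy observations of the kind the proposed algorithm is meant to fit; it does not implement the stochastic approximation EM itself, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=1.0):
    """Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0 as a first-order system."""
    x, v = state
    return [v, mu * (1.0 - x ** 2) * v - x]

rng = np.random.default_rng(3)
t_obs = np.sort(rng.uniform(0.0, 20.0, size=40))          # irregularly spaced sampling times
sol = solve_ivp(van_der_pol, (0.0, 20.0), [2.0, 0.0], t_eval=t_obs, rtol=1e-8)
y_obs = sol.y[0] + rng.normal(0.0, 0.2, size=t_obs.size)   # noisy observations of x(t)
print(t_obs[:5], y_obs[:5])
```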
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithms. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
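A minimal side-by-side of the two clustering algorithms on synthetic two-group data, using scikit-learn's KMeans and GaussianMixture (the latter fitted by EM); the data are stand-ins, not the red-wine records analysed in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Two synthetic "quality" groups standing in for the wine data.
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(3.0, 1.5, size=(100, 2))])

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
em_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

print("cluster sizes (K-means):", np.bincount(kmeans_labels))
print("cluster sizes (EM/GMM): ", np.bincount(em_labels))
```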
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The algorithm is developed on the assumption that a point cloud can be modelled as a mixture of Gaussian distributions, so the separation of ground points from non-ground points can be recast as separating the components of that mixture. Expectation-maximization (EM) is applied to perform the separation: EM computes maximum likelihood estimates of the mixture parameters, and using the estimated parameters the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is also utilized to refine the filtering results acquired with the EM method. The proposed algorithm was tested on two datasets used in practice, and the experimental results showed that it can filter non-ground points effectively. For quantitative evaluation, the paper adopted the benchmark dataset provided by the ISPRS; the proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
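A stripped-down sketch of the idea, assuming a one-dimensional two-component Gaussian mixture over point elevations (ground vs. object) fitted by EM from scratch; the real method works on richer attributes and adds an intensity-based refinement, and all the numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic elevations: a "ground" component plus a higher "object" component.
z = np.concatenate([rng.normal(100.0, 0.3, 700), rng.normal(103.0, 1.0, 300)])

# EM for a two-component 1-D Gaussian mixture.
w = np.array([0.5, 0.5])
mu = np.array([z.min(), z.max()])
var = np.array([z.var(), z.var()])
for _ in range(100):
    # E-step: responsibilities of each component for every point.
    pdf = np.exp(-(z[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: maximum-likelihood updates of weights, means and variances.
    n_k = resp.sum(axis=0)
    w = n_k / len(z)
    mu = (resp * z[:, None]).sum(axis=0) / n_k
    var = (resp * (z[:, None] - mu) ** 2).sum(axis=0) / n_k

ground = resp[:, np.argmin(mu)] > 0.5    # label points via the larger responsibility
print("estimated means:", mu.round(2), "ground fraction:", round(ground.mean(), 2))
```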
Optimization of Multiple Related Negotiation through Multi-Negotiation Network
NASA Astrophysics Data System (ADS)
Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi
In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular, state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not optimally execute MRN in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use an MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on an MNN. Secondly, by employing an MNID, an agent's possible decision on each related negotiation is reflected by its expected utility value. Lastly, by comparing the expected utilities of all possible policies for conducting MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful end scenario, and avoid unnecessary losses in an unsuccessful end scenario.
Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni; Montagnese, Sara
2017-01-01
A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists in calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh area (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, while the loss/removal of the abdominal sensor resulted in an error of 0.25°C MAE, with the greater impact on the quality of the reconstruction. This information may help researchers/clinicians: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints. PMID:28666029
Network clustering and community detection using modulus of families of loops.
Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina
2017-01-01
We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
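For orientation, the baseline MLEM update that OSEM and the gradient-based methods accelerate can be written in a few lines; the toy system matrix and Poisson data below are random stand-ins for a real scanner geometry.

```python
import numpy as np

rng = np.random.default_rng(6)

# Tiny toy system: 40 detector bins, 25 image pixels, non-negative system matrix A.
A = rng.random((40, 25))
x_true = rng.random(25) * 10
y = rng.poisson(A @ x_true)                      # Poisson projection data

# MLEM iterations: x <- x / (A^T 1) * A^T ( y / (A x) )
x = np.ones(25)
sens = A.T @ np.ones(40)                         # sensitivity image
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)
    x = x / sens * (A.T @ ratio)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

OSEM applies the same multiplicative update but cycles over subsets of the rows of A and y, which is where its speed-up comes from.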
Quantitative and Qualitative Assessment of Yttrium-90 PET/CT Imaging
Büsing, Karen-Anett; Schönberg, Stefan O.; Bailey, Dale L.; Willowson, Kathy; Glatting, Gerhard
2014-01-01
Yttrium-90 is known to have a low positron emission decay of 32 ppm that may allow for personalized dosimetry of liver cancer therapy with 90Y labeled microspheres. The aim of this work was to image and quantify 90Y so that accurate predictions of the absorbed dose can be made. The measurements were performed within the QUEST study (University of Sydney and Sirtex Medical, Australia). A NEMA IEC body phantom containing 6 fillable spheres (10–37 mm ∅) was used to measure the 90Y distribution with a Biograph mCT PET/CT (Siemens, Erlangen, Germany) with time-of-flight (TOF) acquisition. A sphere to background ratio of 8:1, with a total 90Y activity of 3 GBq, was used. Measurements were performed for one week (0, 3, 5 and 7 d). The acquisition protocol consisted of a 30 min, 2-bed-position scan and a 120 min, single-bed-position scan. Images were reconstructed with 3D ordered subset expectation maximization (OSEM) and point spread function (PSF) modelling for iteration numbers of 1–12 with 21 (TOF) and 24 (non-TOF) subsets and CT based attenuation and scatter correction. Convergence of algorithms and activity recovery was assessed based on regions-of-interest (ROI) analysis of the background (100 voxels), spheres (4 voxels) and the central low density insert (25 voxels). For the largest sphere, the recovery coefficient (RC) values for the 30 min 2-bed-position, 30 min single-bed and 120 min single-bed acquisitions were 1.12±0.20, 1.14±0.13 and 0.97±0.07, respectively. For the smaller diameter spheres, the PSF algorithm with TOF and single bed acquisition provided comparatively better activity recovery. Quantification of Y-90 using the Biograph mCT PET/CT is possible with reasonable accuracy, the limitations being the size of the lesion and the activity concentration present. At this stage, based on our study, it seems advantageous to use different protocols depending on the size of the lesion. PMID:25369020
Optimal Resource Allocation in Library Systems
ERIC Educational Resources Information Center
Rouse, William B.
1975-01-01
Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
The Dynamics of Crime and Punishment
NASA Astrophysics Data System (ADS)
Hausken, Kjell; Moxnes, John F.
This article analyzes crime development which is one of the largest threats in today's world, frequently referred to as the war on crime. The criminal commits crimes in his free time (when not in jail) according to a non-stationary Poisson process which accounts for fluctuations. Expected values and variances for crime development are determined. The deterrent effect of imprisonment follows from the amount of time in imprisonment. Each criminal maximizes expected utility defined as expected benefit (from crime) minus expected cost (imprisonment). A first-order differential equation of the criminal's utility-maximizing response to the given punishment policy is then developed. The analysis shows that if imprisonment is absent, criminal activity grows substantially. All else being equal, any equilibrium is unstable (labile), implying growth of criminal activity, unless imprisonment increases sufficiently as a function of criminal activity. This dynamic approach or perspective is quite interesting and has to our knowledge not been presented earlier. The empirical data material for crime intensity and imprisonment for Norway, England and Wales, and the US supports the model. Future crime development is shown to depend strongly on the societally chosen imprisonment policy. The model is intended as a valuable tool for policy makers who can envision arbitrarily sophisticated imprisonment functions and foresee the impact they have on crime development.
Acceptable regret in medical decision making.
Djulbegovic, B; Hozo, I; Schwartz, A; McMasters, K M
1999-09-01
When faced with medical decisions involving uncertain outcomes, the principles of decision theory hold that we should select the option with the highest expected utility to maximize health over time. Whether a decision proves right or wrong can be learned only in retrospect, when it may become apparent that another course of action would have been preferable. This realization may bring a sense of loss, or regret. When anticipated regret is compelling, a decision maker may choose to violate expected utility theory to avoid regret. We formulate a concept of acceptable regret in medical decision making that explicitly introduces the patient's attitude toward loss of health due to a mistaken decision into decision making. In most cases, minimizing expected regret results in the same decision as maximizing expected utility. However, when acceptable regret is taken into consideration, the threshold probability below which we can comfortably withhold treatment is a function only of the net benefit of the treatment, and the threshold probability above which we can comfortably administer the treatment depends only on the magnitude of the risks associated with the therapy. By considering acceptable regret, we develop new conceptual relations that can help decide whether treatment should be withheld or administered, especially when the diagnosis is uncertain. This may be particularly beneficial in deciding what constitutes futile medical care.
Maximization, learning, and economic behavior
Erev, Ido; Roth, Alvin E.
2014-01-01
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
Rasmussen, Simon Mylius; Bilgrau, Anders Ellern; Schmitz, Alexander; Falgreen, Steffen; Bergkvist, Kim Steve; Tramm, Anette Mai; Baech, John; Jacobsen, Chris Ladefoged; Gaihede, Michael; Kjeldsen, Malene Krag; Bødker, Julie Støve; Dybkaer, Karen; Bøgsted, Martin; Johnsen, Hans Erik
2015-01-01
Cryopreservation is an acknowledged procedure to store vital cells for future biomarker analyses. Few studies, however, have analyzed the impact of the cryopreservation on phenotyping. We have performed a controlled comparison of cryopreserved and fresh cellular aliquots prepared from individual healthy donors. We studied circulating B-cell subset membrane markers and global gene expression, respectively by multiparametric flow cytometry and microarray data. Extensive statistical analysis of the generated data tested the concept that "overall, there are no phenotypic differences between cryopreserved and fresh B-cell subsets." Subsequently, we performed an uncontrolled comparison of tonsil tissue samples. By multiparametric flow analysis, we documented no significant changes following cryopreservation of subset frequencies or membrane intensity for the differentiation markers CD19, CD20, CD22, CD27, CD38, CD45, and CD200. By gene expression profiling following cryopreservation, across all samples, only 16 out of 18708 genes were significantly up or down regulated, including FOSB, KLF4, RBP7, ANXA1 or CLC, DEFA3, respectively. Implementation of cryopreserved tissue in our research program allowed us to present a performance analysis, by comparing cryopreserved and fresh tonsil tissue. As expected, phenotypic differences were identified, but to an extent that did not affect the performance of the cryopreserved tissue to generate specific B-cell subset associated gene signatures and assign subset phenotypes to independent tissue samples. We have confirmed our working concept and illustrated the usefulness of vital cryopreserved cell suspensions for phenotypic studies of the normal B-cell hierarchy; however, storage procedures need to be delineated by tissue-specific comparative analysis. © 2014 Clinical Cytometry Society.
Rasmussen, Simon Mylius; Bilgrau, Anders Ellern; Schmitz, Alexander; Falgreen, Steffen; Bergkvist, Kim Steve; Tramm, Anette Mai; Baech, John; Jacobsen, Chris Ladefoged; Gaihede, Michael; Kjeldsen, Malene Krag; Bødker, Julie Støve; Dybkaer, Karen; Bøgsted, Martin; Johnsen, Hans Erik
2014-09-20
Background Cryopreservation is an acknowledged procedure to store vital cells for future biomarker analyses. Few studies, however, have analyzed the impact of the cryopreservation on phenotyping. Methods We have performed a controlled comparison of cryopreserved and fresh cellular aliquots prepared from individual healthy donors. We studied circulating B-cell subset membrane markers and global gene expression, respectively by multiparametric flow cytometry and microarray data. Extensive statistical analysis of the generated data tested the concept that "overall, there are phenotypic differences between cryopreserved and fresh B-cell subsets". Subsequently, we performed a consecutive uncontrolled comparison of tonsil tissue samples. Results By multiparametric flow analysis, we documented no significant changes following cryopreservation of subset frequencies or membrane intensity for the differentiation markers CD19, CD20, CD22, CD27, CD38, CD45, and CD200. By gene expression profiling following cryopreservation, across all samples, only 16 out of 18708 genes were significantly up or down regulated, including FOSB, KLF4, RBP7, ANXA1 or CLC, DEFA3, respectively. Implementation of cryopreserved tissue in our research program allowed us to present a performance analysis, by comparing cryopreserved and fresh tonsil tissue. As expected, phenotypic differences were identified, but to an extent that did not affect the performance of the cryopreserved tissue to generate specific B-cell subset associated gene signatures and assign subset phenotypes to independent tissue samples. Conclusions We have confirmed our working concept and illustrated the usefulness of vital cryopreserved cell suspensions for phenotypic studies of the normal B-cell hierarchy; however, storage procedures need to be delineated by tissue specific comparative analysis. © 2014 Clinical Cytometry Society.
Inheritance of allozyme variants in bishop pine (Pinus muricata D.Don)
Constance I. Millar
1985-01-01
Isozyme phenotypes are described for 45 structural loci and 1 modifier locus in bishop pine (Pinus muricata D. Don), and segregation data are presented for a subset of 31 polymorphic loci from 19 enzyme systems. All polymorphic loci had alleles that segregated within single-locus Mendelian expectations, although one pair of alleles at each of three...
Phenomenology of maximal and near-maximal lepton mixing
NASA Astrophysics Data System (ADS)
Gonzalez-Garcia, M. C.; Peña-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.
2001-01-01
The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 - 2 sin²θex and quantify the present experimental status for |ε| < 0.3. We show that both probabilities and observables depend on ε quadratically when effects are due to vacuum oscillations and they depend on ε linearly if matter effects dominate. The most important information on νe mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 2×10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 2×10⁻⁷ eV², the full interval |ε| < 0.3 is allowed within ~4σ (99.995% CL). We suggest ways to measure ε in future experiments. The observable that is most sensitive to ε is the rate [NC]/[CC] in combination with the day-night asymmetry in the SNO detector. With theoretical and statistical uncertainties, the expected accuracy after 5 years is Δε ~ 0.07. We also discuss the effects of maximal and near-maximal νe mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.
Numerical simulations of imaging satellites with optical interferometry
NASA Astrophysics Data System (ADS)
Ding, Yuanyuan; Wang, Chaoyan; Chen, Zhendong
2015-08-01
An optical interferometry imaging system, composed of multiple sub-apertures, is a type of sensor that can break through the aperture limit and achieve high-resolution imaging. This technique can be used to precisely measure the shapes, sizes and positions of astronomical objects and satellites, and it can also be applied to space exploration, space debris surveys, and satellite monitoring. A Fizeau-type optical aperture synthesis telescope has the advantages of short baselines, a common mount and multiple sub-apertures, so it is feasible for instantaneous direct imaging through focal-plane combination. Since 2002, researchers at Shanghai Astronomical Observatory have been developing optical interferometry techniques. For array configurations, two optimal layouts have been proposed instead of the symmetrical circular distribution: the asymmetrical circular distribution and the Y-type distribution. On this basis, two kinds of structure based on the Fizeau interferometric telescope were proposed: a Y-type independent sub-aperture telescope, and a segmented-mirror telescope with a common secondary mirror. In this paper, we describe the interferometric telescope and image acquisition, and then focus on simulations of image restoration for the Y-type telescope and the segmented-mirror telescope. The Richardson-Lucy (RL) method, the Wiener method and the ordered subsets expectation maximization (OS-EM) method are studied, and the influence of different stopping rules is analyzed. Finally, we present reconstruction results for images of some satellites.
PSF reconstruction for Compton-based prompt gamma imaging
NASA Astrophysics Data System (ADS)
Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming
2018-02-01
Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.
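A shift-invariant Richardson-Lucy sketch (a single PSF rather than the shift-variant model used in the paper), applied to a toy blurred and noisy emission image; the PSF width, image size and source positions are invented for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50):
    """Shift-invariant Richardson-Lucy deconvolution with a single PSF."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: blur two point-like emission sources and recover them.
rng = np.random.default_rng(7)
truth = np.zeros((64, 64))
truth[32, 20] = 100.0
truth[32, 40] = 60.0
xx, yy = np.meshgrid(np.arange(-7, 8), np.arange(-7, 8))
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
observed = rng.poisson(fftconvolve(truth, psf, mode="same").clip(min=0)).astype(float)
restored = richardson_lucy(observed, psf)
print("brightest restored voxel:", np.unravel_index(restored.argmax(), restored.shape))
```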
Mitral stenosis and hypertrophic obstructive cardiomyopathy: An unusual combination.
Hong, Joonhwa; Schaff, Hartzell V; Ommen, Steve R; Abel, Martin D; Dearani, Joseph A; Nishimura, Rick A
2016-04-01
Systolic anterior motion of mitral valve (MV) leaflets is a main pathophysiologic feature of left ventricular outflow tract (LVOT) obstruction in hypertrophic obstructive cardiomyopathy. Thus, restricted leaflet motion that occurs with MV stenosis might be expected to minimize outflow tract obstruction related to systolic anterior motion. From January 1993 through February 2015, we performed MV replacement and septal myectomy in 12 patients with mitral stenosis and hypertrophic obstructive cardiomyopathy at Mayo Clinic Hospital in Rochester, Minn. Preoperative data, echocardiographic images, operative records, and postoperative outcomes were reviewed. Mean (standard deviation) age was 70 (7.6) years. Preoperative mean (standard deviation) maximal LVOT pressure gradient was 75.0 (35.0) mm Hg; MV gradient was 13.7 (2.8) mm Hg. From echocardiographic images, 4 mechanisms of outflow tract obstruction were identified: systolic anterior motion without severe limitation in MV leaflet excursion, severe limitation in MV leaflet mobility with systolic anterior motion at the tip of the MV anterior leaflet, septal encroachment toward the LVOT, and MV displacement toward the LVOT by calcification. Mitral valve replacement and extended septal myectomy relieved outflow gradients in all patients, with no death or serious morbidity. Patients with mitral stenosis and hypertrophic obstructive cardiomyopathy have multiple LVOT obstruction mechanisms, and MV replacement may not be adequate treatment. We favor septal myectomy and MV replacement in this complex subset of hypertrophic obstructive cardiomyopathy. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Reproducibility Between Brain Uptake Ratio Using Anatomic Standardization and Patlak-Plot Methods.
Shibutani, Takayuki; Onoguchi, Masahisa; Noguchi, Atsushi; Yamada, Tomoki; Tsuchihashi, Hiroko; Nakajima, Tadashi; Kinuya, Seigo
2015-12-01
The Patlak-plot and conventional methods of determining brain uptake ratio (BUR) have some problems with reproducibility. We formulated a method of determining BUR using anatomic standardization (BUR-AS) in a statistical parametric mapping algorithm to improve reproducibility. The objective of this study was to demonstrate the inter- and intraoperator reproducibility of mean cerebral blood flow as determined using BUR-AS in comparison to the conventional-BUR (BUR-C) and Patlak-plot methods. The images of 30 patients who underwent brain perfusion SPECT were retrospectively used in this study. The images were reconstructed using ordered-subset expectation maximization and processed using an automatic quantitative analysis for cerebral blood flow of ECD tool. The mean SPECT count was calculated from axial basal ganglia slices of the normal side (slices 31-40) drawn using a 3-dimensional stereotactic region-of-interest template after anatomic standardization. The mean cerebral blood flow was calculated from the mean SPECT count. Reproducibility was evaluated using coefficient of variation and Bland-Altman plotting. For both inter- and intraoperator reproducibility, the BUR-AS method had the lowest coefficient of variation and smallest error range about the Bland-Altman plot. Mean CBF obtained using the BUR-AS method had the highest reproducibility. Compared with the Patlak-plot and BUR-C methods, the BUR-AS method provides greater inter- and intraoperator reproducibility of cerebral blood flow measurement. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
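Reproducibility in this study is summarized with the coefficient of variation and Bland-Altman plots. The snippet below shows the generic form of those two statistics for paired operator measurements; it is a plain illustration of the metrics, not the authors' processing pipeline, and the example numbers are invented.

    import numpy as np

    def coefficient_of_variation(x):
        x = np.asarray(x, dtype=float)
        return x.std(ddof=1) / x.mean()           # relative dispersion of repeated measurements

    def bland_altman(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        diff = a - b
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)      # 95% limits of agreement half-width
        return bias, (bias - half_width, bias + half_width)

    # Hypothetical mean CBF values (mL/100 g/min) from two operators for the same patients
    op1 = [42.1, 38.7, 45.3, 40.2, 39.9]
    op2 = [41.5, 39.2, 44.8, 40.9, 39.1]
    print(coefficient_of_variation(op1), bland_altman(op1, op2))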
Assessing park-and-ride impacts.
DOT National Transportation Integrated Search
2010-06-01
Efficient transportation systems are vital to quality-of-life and mobility issues, and an effective park-and-ride (P&R) network can help maximize system performance. Properly placed P&R facilities are expected to result in fewer calls to increase...
Three faces of node importance in network epidemiology: Exact results for small graphs
NASA Astrophysics Data System (ADS)
Holme, Petter
2017-12-01
We investigate three aspects of the importance of nodes with respect to susceptible-infectious-removed (SIR) disease dynamics: influence maximization (the expected outbreak size given a set of seed nodes), the effect of vaccination (how much deleting nodes would reduce the expected outbreak size), and sentinel surveillance (how early an outbreak could be detected with sensors at a set of nodes). We calculate the exact expressions of these quantities, as functions of the SIR parameters, for all connected graphs of three to seven nodes. We obtain the smallest graphs where the optimal node sets are not overlapping. We find that (i) node separation is more important than centrality for more than one active node, (ii) vaccination and influence maximization are the most different aspects of importance, and (iii) the three aspects are more similar when the infection rate is low.
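The paper derives exact closed-form expressions for these three quantities on small graphs; those derivations are not reproduced here. A common approximate alternative, sketched below under an assumed recovery rate of 1, is to estimate the expected outbreak size of a seed set (the influence-maximization objective) by Monte Carlo simulation of Markovian SIR dynamics on an adjacency list; the toy graph and parameter values are invented.

    import random
    from statistics import mean

    def sir_outbreak_size(adj, seeds, beta, rng):
        """One realization of Markovian SIR (recovery rate 1): number of nodes ever infected."""
        infected = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                t_rec = rng.expovariate(1.0)                  # recovery time of u
                for v in adj[u]:
                    # transmission happens if an Exp(beta) waiting time beats u's recovery
                    if v not in infected and rng.expovariate(beta) < t_rec:
                        infected.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(infected)

    def expected_outbreak_size(adj, seeds, beta=1.0, n_runs=20000, seed=0):
        rng = random.Random(seed)
        return mean(sir_outbreak_size(adj, seeds, beta, rng) for _ in range(n_runs))

    # Toy 5-node path graph with node 2 as the single seed (illustrative only)
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(expected_outbreak_size(adj, seeds=[2], beta=1.0))

The vaccination and sentinel-surveillance aspects would reuse the same simulation with deleted nodes or with detection times recorded at sensor nodes.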
Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro
2015-11-12
Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute 39% increase in estimated global production potentials to increasing cropping intensities and 30% to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification.
Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp; Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp; Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it
2013-02-15
We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).
Schrempf, Alexandra; Giehr, Julia; Röhrl, Ramona; Steigleder, Sarah; Heinze, Jürgen
2017-04-01
One of the central tenets of life-history theory is that organisms cannot simultaneously maximize all fitness components. This results in the fundamental trade-off between reproduction and life span known from numerous animals, including humans. Social insects are a well-known exception to this rule: reproductive queens outlive nonreproductive workers. Here, we take a step forward and show that under identical social and environmental conditions the fecundity-longevity trade-off is absent also within the queen caste. A change in reproduction did not alter life expectancy, and even a strong enforced increase in reproductive efforts did not reduce residual life span. Generally, egg-laying rate and life span were positively correlated. Queens of perennial social insects thus seem to maximize at the same time two fitness parameters that are normally negatively correlated. Even though they are not immortal, they best approach a hypothetical "Darwinian demon" in the animal kingdom.
WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization
NASA Astrophysics Data System (ADS)
Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry
2018-01-01
We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission with a fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
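As described, targets carry a reward (single-visit completeness) and a cost (integration plus overhead time) and are selected greedily until the mission time is spent. The sketch below shows that generic benefit-per-cost greedy selection; the completeness values and integration times are placeholders rather than EXOSIMS or WFIRST quantities, and the AYO-specific refinements are not modeled.

    def greedy_schedule(targets, total_time):
        """Pick targets by completeness per unit time until the time budget is exhausted.

        targets: list of (name, completeness, integration_time_plus_overhead) tuples.
        Returns the selected names and the summed completeness.
        """
        remaining = total_time
        selected, total_completeness = [], 0.0
        for name, comp, cost in sorted(targets, key=lambda t: t[1] / t[2], reverse=True):
            if cost <= remaining:
                selected.append(name)
                total_completeness += comp
                remaining -= cost
        return selected, total_completeness

    # Invented example: (star, completeness, days of integration + overhead)
    stars = [("HIP-A", 0.08, 5.0), ("HIP-B", 0.15, 14.0), ("HIP-C", 0.05, 2.0)]
    print(greedy_schedule(stars, total_time=10.0))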
Sample size determination for bibliographic retrieval studies
Yao, Xiaomei; Wilczynski, Nancy L; Walter, Stephen D; Haynes, R Brian
2008-01-01
Background: Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. Methods: The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset. Results: For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals was adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). Conclusion: The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach. PMID:18823538
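The design criterion is the width W of the 95% confidence interval around the lowest sensitivity of the existing strategies. Under a simple normal approximation to a binomial proportion, the number of high-quality articles needed for a given W follows from the standard sample-size formula sketched below; the sensitivity value is an invented example and the paper's exact calculation may differ.

    import math

    def articles_needed(sensitivity, width, z=1.96):
        """Normal-approximation sample size so the 95% CI for a proportion has total width `width`."""
        half = width / 2.0
        return math.ceil(z**2 * sensitivity * (1.0 - sensitivity) / half**2)

    # e.g. lowest sensitivity 0.93 and desired CI width 0.10 (both illustrative)
    print(articles_needed(0.93, 0.10))   # -> on the order of 100 high-quality articles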
Observation of hard scattering in photoproduction at HERA
NASA Astrophysics Data System (ADS)
Derrick, M.; Krakauer, D.; Magill, S.; Musgrave, B.; Repond, J.; Sugano, K.; Stanek, R.; Talaga, R. L.; Thron, J.; Arzarello, F.; Ayed, R.; Barbagli, G.; Bari, G.; Basile, M.; Bellagamba, L.; Boscherini, D.; Bruni, G.; Bruni, P.; Cara Romeo, G.; Castellini, G.; Chiarini, M.; Cifarelli, L.; Cindolo, F.; Ciralli, F.; Contin, A.; D'Auria, S.; Del Papa, C.; Frasconi, F.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Lin, Q.; Lisowski, B.; Maccarrone, G.; Margotti, A.; Massam, T.; Nania, R.; Nemoz, C.; Palmonari, F.; Sartorelli, G.; Timellini, R.; Zamora Garcia, Y.; Zichichi, A.; Bargende, A.; Barreiro, F.; Crittenden, J.; Dabbous, H.; Desch, K.; Diekmann, B.; Geerts, M.; Geitz, G.; Gutjahr, B.; Hartmann, H.; Hartmann, J.; Haun, D.; Heinloth, K.; Hilger, E.; Jakob, H.-P.; Kramarczyk, S.; Kückes, M.; Mass, A.; Mengel, S.; Mollen, J.; Müsch, H.; Paul, E.; Schattevoy, R.; Schneider, B.; Schneider, J.-L.; Wedemeyer, R.; Cassidy, A.; Cussans, D. G.; Dyce, N.; Fawcett, H. F.; Foster, B.; Gilmore, R.; Heath, G. P.; Lancaster, M.; Llewellyn, T. J.; Malos, J.; Morgado, C. J. S.; Tapper, R. J.; Wilson, S. S.; Rau, R. R.; Bernstein, A.; Caldwell, A.; Gialas, I.; Parsons, J. A.; Ritz, S.; Sciulli, F.; Straub, P. B.; Wai, L.; Yang, S.; Barillari, T.; Schioppa, M.; Susinno, G.; Burkot, W.; Chwastowski, J.; Dwuraźny, A.; Eskreys, A.; Nizioł, B.; Jakubowski, Z.; Piotrzkowski, K.; Zachara, M.; Zawiejski, L.; Borzemski, P.; Eskreys, K.; Jeleń, K.; Kisielewska, D.; Kowalski, T.; Kulka, J.; Rulikowska-Zarȩbska, E.; Suszycki, L.; Zajaç, J.; Kȩdzierski, T.; Kotański, A.; Przybycień, M.; Bauerdick, L. A. T.; Behrens, U.; Bienlein, J. K.; Coldewey, C.; Dannemann, A.; Dierks, K.; Dorth, W.; Drews, G.; Erhard, P.; Flasiński, M.; Fleck, I.; Fürtjes, A.; Gläser, R.; Göttlicher, P.; Haas, T.; Hagge, L.; Hain, W.; Hasell, D.; Hultschig, H.; Jahnen, G.; Joos, P.; Kasemann, M.; Klanner, R.; Koch, W.; Kötz, U.; Kowalski, H.; Labs, J.; Ladage, A.; Löhr, B.; Löwe, M.; Lüke, D.; Mainusch, J.; Manczak, O.; Momayezi, M.; Nickel, S.; Notz, D.; Park, I.; Pösnecker, K.-U.; Rohde, M.; Ros, E.; Schneekloth, U.; Schroeder, J.; Schulz, W.; Selonke, F.; Tscheslog, E.; Tsurugai, T.; Turkot, F.; Vogel, W.; Woeniger, T.; Wolf, G.; Youngman, C.; Grabosch, H. J.; Leich, A.; Meyer, A.; Rethfeldt, C.; Schlenstedt, S.; Casalbuoni, R.; De Curtis, S.; Dominici, D.; Francescato, A.; Nuti, M.; Pelfer, P.; Anzivino, G.; Casaccia, R.; Laakso, I.; De Pasquale, S.; Qian, S.; Votano, L.; Bamberger, A.; Freidhof, A.; Poser, T.; Söldner-Rembold, S.; Theisen, G.; Trefzger, T.; Brook, N. H.; Bussey, P. J.; Doyle, A. T.; Forbes, J. R.; Jamieson, V. A.; Raine, C.; Saxon, D. H.; Gloth, G.; Holm, U.; Kammerlocher, H.; Krebs, B.; Neumann, T.; Wick, K.; Hofmann, A.; Kröger, W.; Krüger, J.; Lohrmann, E.; Milewski, J.; Nakahata, M.; Pavel, N.; Poelz, G.; Salomon, R.; Seidman, A.; Schott, W.; Wiik, B. H.; Zetsche, F.; Bacon, T. C.; Butterworth, I.; Markou, C.; McQuillan, D.; Miller, D. B.; Mobayyen, M. M.; Prinias, A.; Vorvolakos, A.; Bienz, T.; Kreutzmann, H.; Mallik, U.; McCliment, E.; Roco, M.; Wang, M. Z.; Cloth, P.; Filges, D.; Chen, L.; Imlay, R.; Kartik, S.; Kim, H.-J.; McNeil, R. R.; Metcalf, W.; Cases, G.; Hervás, L.; Labarga, L.; del Peso, J.; Roldán, J.; Terrón, J.; de Trocóniz, J. F.; Ikraiam, F.; Mayer, J. K.; Smith, G. R.; Corriveau, F.; Gilkinson, D. J.; Hanna, D. S.; Hung, L. W.; Mitchell, J. W.; Patel, P. M.; Sinclair, L. E.; Stairs, D. G.; Ullmann, R.; Bashindzhagyan, G. L.; Ermolov, P. F.; Golubkov, Y. A.; Kuzmin, V. A.; Kuznetsov, E. 
N.; Savin, A. A.; Voronin, A. G.; Zotov, N. P.; Bentvelsen, S.; Dake, A.; Engelen, J.; de Jong, P.; de Jong, S.; de Kamps, M.; Kooijman, P.; Kruse, A.; van der Lugt, H.; O'Dell, V.; Straver, J.; Tenner, A.; Tiecke, H.; Uijterwaal, H.; Vermeulen, J.; Wiggers, L.; de Wolf, E.; van Woudenberg, R.; Yoshida, R.; Bylsma, B.; Durkin, L. S.; Li, C.; Ling, T. Y.; McLean, K. W.; Murray, W. N.; Park, S. K.; Romanowski, T. A.; Seidlein, R.; Blair, G. A.; Butterworth, J. M.; Byrne, A.; Cashmore, R. J.; Cooper-Sarkar, A. M.; Devenish, R. C. E.; Gingrich, D. M.; Hallam-Baker, P. M.; Harnew, N.; Khatri, T.; Long, K. R.; Luffman, P.; McArthur, I.; Morawitz, P.; Nash, J.; Smith, S. J. P.; Roocroft, N. C.; Wilson, F. F.; Abbiendi, G.; Brugnera, R.; Carlin, R.; Dal Corso, F.; De Giorgi, M.; Dosselli, U.; Fanin, C.; Gasparini, F.; Limentani, S.; Morandin, M.; Posocco, M.; Stanco, L.; Stroili, R.; Voci, C.; Lim, J. N.; Oh, B. Y.; Whitmore, J.; Bonori, M.; Contino, U.; D'Agostini, G.; Guida, M.; Iori, M.; Mari, S.; Marini, G.; Mattioli, M.; Monaldi, D.; Nigro, A.; Hart, J. C.; McCubbin, N. A.; Shah, T. P.; Short, T. L.; Barberis, E.; Cartiglia, N.; Heusch, C.; Hubbard, B.; Leslie, J.; Ng, J. S. T.; O'Shaughnessy, K.; Sadrozinski, H. F.; Seiden, A.; Badura, E.; Biltzinger, J.; Chaves, H.; Rost, M.; Seifert, R. J.; Walenta, A. H.; Weihs, W.; Zech, G.; Dagan, S.; Heifetz, R.; Levy, A.; Zer-Zion, D.; Hasegawa, T.; Hazumi, M.; Ishii, T.; Kasai, S.; Kuze, M.; Nagasawa, Y.; Nakao, M.; Okuno, H.; Tokushuku, K.; Watanabe, T.; Yamada, S.; Chiba, M.; Hamatsu, R.; Hirose, T.; Kitamura, S.; Nagayama, S.; Nakamitsu, Y.; Arneodo, M.; Costa, M.; Ferrero, M. I.; Lamberti, L.; Maselli, S.; Peroni, C.; Solano, A.; Staiano, A.; Dardo, M.; Bailey, D. C.; Bandyopadhyay, D.; Benard, F.; Bhadra, S.; Brkic, M.; Burow, B. D.; Chlebana, F. S.; Crombie, M. B.; Hartner, G. F.; Levman, G. M.; Martin, J. F.; Orr, R. S.; Prentice, J. D.; Sampson, C. R.; Stairs, G. G.; Teuscher, R. J.; Yoon, T.-S.; Bullock, F. W.; Catterall, C. D.; Giddings, J. C.; Jones, T. W.; Khan, A. M.; Lane, J. B.; Makkar, P. L.; Shaw, D.; Shulman, J.; Blankenship, K.; Kochocki, J.; Lu, B.; Mo, L. W.; Charchuła, K.; Ciborowski, J.; Gajewski, J.; Grzelak, G.; Kasprzak, M.; Krzyżanowski, M.; Muchorowski, K.; Nowak, R. J.; Pawlak, J. M.; Stojda, K.; Stopczyński, A.; Szwed, R.; Tymieniecka, T.; Walczak, R.; Wróblewski, A. K.; Zakrzewski, J. A.; Żarnecki, A. F.; Adamus, M.; Abramowicz, H.; Eisenberg, Y.; Glasman, C.; Karshon, U.; Montag, A.; Revel, D.; Shapira, A.; Ali, I.; Behrens, B.; Camerini, U.; Dasu, S.; Fordham, C.; Foudas, C.; Goussiou, A.; Lomperski, M.; Loveless, R. J.; Nylander, P.; Ptacek, M.; Reeder, D. D.; Smith, W. H.; Silverstein, S.; Frisken, W. R.; Furutani, K. M.; Iga, Y.; ZEUS Collaboration
1992-12-01
We report a study of electron proton collisions at very low Q2, corresponding to virtual photoproduction at centre of mass energies in the range 100-295 GeV. The distribution in transverse energy of the observed hadrons is much harder than can be explained by soft processes. Some of the events show back-to-back two-jet production at the rate and with the characteristics expected from hard two-body scattering. A subset of the two-jet events have energy in the electron direction consistent with that expected from the photon remnant in resolved photon processes.
Cecchinato, A; De Marchi, M; Gallo, L; Bittante, G; Carnier, P
2009-10-01
The aims of this study were to investigate variation of milk coagulation property (MCP) measures and their predictions obtained by mid-infrared spectroscopy (MIR), to investigate the genetic relationship between measures of MCP and MIR predictions, and to estimate the expected response from a breeding program focusing on the enhancement of MCP using MIR predictions as indicator traits. Individual milk samples were collected from 1,200 Brown Swiss cows (progeny of 50 artificial insemination sires) reared in 30 herds located in northern Italy. Rennet coagulation time (RCT, min) and curd firmness (a(30), mm) were measured using a computerized renneting meter. The MIR data were recorded over the spectral range of 4,000 to 900 cm(-1). Prediction models for RCT and a(30) based on MIR spectra were developed using partial least squares regression. A cross-validation procedure was carried out. The procedure involved the partition of available data into 2 subsets: a calibration subset and a test subset. The calibration subset was used to develop a calibration equation able to predict individual MCP phenotypes using MIR spectra. The test subset was used to validate the calibration equation and to estimate heritabilities and genetic correlations for measured MCP and their predictions obtained from MIR spectra and the calibration equation. Point estimates of heritability ranged from 0.30 to 0.34 and from 0.22 to 0.24 for RCT and a(30), respectively. Heritability estimates for MCP predictions were larger than those obtained for measured MCP. Estimated genetic correlations between measures and predictions of RCT were very high and ranged from 0.91 to 0.96. Estimates of the genetic correlation between measures and predictions of a(30) were large and ranged from 0.71 to 0.87. Predictions of MCP provided by MIR techniques can be proposed as indicator traits for the genetic enhancement of MCP. The expected response of RCT and a(30) ensured by the selection using MIR predictions as indicator traits was equal to or slightly less than the response achievable through a single measurement of these traits. Breeding strategies for the enhancement of MCP based on MIR predictions as indicator traits could be easily and immediately implemented for dairy cattle populations where routine acquisition of spectra from individual milk samples is already performed.
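The calibration equations map MIR spectra to measured coagulation traits with partial least squares regression and are checked by cross-validation. A minimal sketch of that generic workflow with scikit-learn is given below; the number of latent components, the spectral matrix and the trait vector are all placeholders, not the data or settings used in the study.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 300))       # stand-in for MIR absorbance spectra (samples x wavenumbers)
    y = rng.normal(size=120)              # stand-in for measured RCT or a30 phenotypes

    pls = PLSRegression(n_components=10)  # number of latent variables is a tuning choice
    y_hat = cross_val_predict(pls, X, y, cv=5).ravel()

    r = np.corrcoef(y, y_hat)[0, 1]       # cross-validated correlation between measured and predicted
    print(f"cross-validated r = {r:.2f}")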
Faith, Daniel P
2015-02-19
The phylogenetic diversity measure ('PD') measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
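Expected PD weights each branch of the phylogeny by the probability that at least one descendant taxon survives. A minimal sketch of that calculation on a toy tree is given below; the branch lengths and extinction probabilities are invented, and the paper's extensions (key biodiversity areas, worst-case losses) are not covered.

    def expected_pd(branches, p_extinct):
        """Expected phylogenetic diversity.

        branches: list of (branch_length, descendant_taxa) pairs.
        p_extinct: dict taxon -> probability of extinction.
        Each branch contributes length * P(at least one descendant survives).
        """
        total = 0.0
        for length, taxa in branches:
            p_all_lost = 1.0
            for t in taxa:
                p_all_lost *= p_extinct[t]
            total += length * (1.0 - p_all_lost)
        return total

    # Toy 3-taxon tree ((A,B),C): terminal branches plus the internal branch above (A,B)
    branches = [(2.0, ["A"]), (2.0, ["B"]), (4.0, ["C"]), (1.5, ["A", "B"])]
    p_extinct = {"A": 0.5, "B": 0.2, "C": 0.1}
    print(expected_pd(branches, p_extinct))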
Maximizing investments in work zone safety in Oregon : final report.
DOT National Transportation Integrated Search
2011-05-01
Due to the federal stimulus program and the 2009 Jobs and Transportation Act, the Oregon Department of Transportation (ODOT) anticipates that a large increase in highway construction will occur. There is the expectation that, since transportation saf...
ERIC Educational Resources Information Center
Lashway, Larry
1997-01-01
Principals today are expected to maximize their schools' performances with limited resources while also adopting educational innovations. This synopsis reviews five recent publications that offer some important insights about the nature of principals' leadership strategies: (1) "Leadership Styles and Strategies" (Larry Lashway); (2) "Facilitative…
Densest local sphere-packing diversity. II. Application to three dimensions
NASA Astrophysics Data System (ADS)
Hopkins, Adam B.; Stillinger, Frank H.; Torquato, Salvatore
2011-01-01
The densest local packings of N three-dimensional identical nonoverlapping spheres within a radius Rmin(N) of a fixed central sphere of the same size are obtained for selected values of N up to N=1054. In the predecessor to this paper [A. B. Hopkins, F. H. Stillinger, and S. Torquato, Phys. Rev. E 81, 041305 (2010)], we described our method for finding the putative densest packings of N spheres in d-dimensional Euclidean space Rd and presented those packings in R2 for values of N up to N=348. Here we analyze the properties and characteristics of the densest local packings in R3 and employ knowledge of the Rmin(N), using methods applicable in any d, to construct both a realizability condition for pair correlation functions of sphere packings and an upper bound on the maximal density of infinite sphere packings. In R3, we find wide variability in the densest local packings, including a multitude of packing symmetries such as perfect tetrahedral and imperfect icosahedral symmetry. We compare the densest local packings of N spheres near a central sphere to minimal-energy configurations of N+1 points interacting with short-range repulsive and long-range attractive pair potentials, e.g., 12-6 Lennard-Jones, and find that they are in general completely different, a result that has possible implications for nucleation theory. We also compare the densest local packings to finite subsets of stacking variants of the densest infinite packings in R3 (the Barlow packings) and find that the densest local packings are almost always most similar, as measured by a similarity metric, to the subsets of Barlow packings with the smallest number of coordination shells measured about a single central sphere, e.g., a subset of the fcc Barlow packing. Additionally, we observe that the densest local packings are dominated by the dense arrangement of spheres with centers at distance Rmin(N). In particular, we find two “maracas” packings at N=77 and N=93, each consisting of a few unjammed spheres free to rattle within a “husk” composed of the maximal number of spheres that can be packed with centers at respective Rmin(N).
On the Achievable Throughput Over TVWS Sensor Networks
Caleffi, Marcello; Cacciapuoti, Angela Sara
2016-01-01
In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in presence of coexistence interference. Through the letter, we first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computational-efficient algorithm characterized by polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565
NASA Astrophysics Data System (ADS)
Grecu, M.; Tian, L.; Heymsfield, G. M.
2017-12-01
A major challenge in deriving accurate estimates of physical properties of falling snow particles from single frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that snowfall estimates above the freezing level in ETCs that are consistent with the triple frequency radar observations, as well as with independent rainfall estimates below the freezing level, may be derived using the EM methodology formulated in the study.
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
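For readers unfamiliar with the EM baseline that the scoring methods are compared against, the classical ML-EM update for emission tomography multiplies the current image by the backprojected ratio of measured to predicted counts. A compact dense-matrix sketch is shown below; real implementations use sparse or on-the-fly system matrices, and the Fisher-scoring, Jacobi and Gauss-Seidel variants discussed in the paper are not reproduced here.

    import numpy as np

    def mlem(A, counts, n_iter=20, eps=1e-12):
        """Basic ML-EM for emission tomography.

        A: (n_bins, n_voxels) system matrix; counts: measured projection data.
        """
        sensitivity = A.sum(axis=0)                    # backprojection of ones
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            expected = A @ x                           # forward projection
            ratio = counts / np.maximum(expected, eps)
            x *= (A.T @ ratio) / np.maximum(sensitivity, eps)
        return x

    # Tiny invented example: 4 detector bins, 3 voxels
    A = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.5],
                  [0.0, 0.5, 1.0],
                  [0.3, 0.3, 0.3]])
    true_x = np.array([2.0, 0.5, 1.0])
    print(mlem(A, A @ true_x))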
Results from a Test Fixture for button BPM Trapped Mode Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron,P.; Bacha, B.; Blednykh, A.
2009-05-04
A variety of measures have been suggested to mitigate the problem of button BPM trapped mode heating. A test fixture, using a combination of commercial-off-the-shelf and custom machined components, was assembled to validate the simulations. We present details of the fixture design, measurement results, and a comparison of the results with the simulations. A brief history of the trapped mode button heating problem and a set of design rules for BPM button optimization are presented elsewhere in these proceedings. Here we present measurements on a test fixture that was assembled to confirm, if possible, a subset of those rules: (1) Minimize the trapped mode impedance and the resulting power deposited in this mode by the beam. (2) Maximize the power re-radiated back into the beampipe. (3) Maximize electrical conductivity of the outer circumference of the button and minimize conductivity of the inner circumference of the shell, to shift power deposition from the button to the shell. The problem is then how to extract useful and relevant information from S-parameter measurements of the test fixture.
Designing Agent Collectives For Systems With Markovian Dynamics
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lawson, John W.; Clancy, Daniel (Technical Monitor)
2001-01-01
The "Collective Intelligence" (COIN) framework concerns the design of collectives of agents so that as those agents strive to maximize their individual utility functions, their interaction causes a provided "world" utility function concerning the entire collective to be also maximized. Here we show how to extend that framework to scenarios having Markovian dynamics when no re-evolution of the system from counter-factual initial conditions (an often expensive calculation) is permitted. Our approach transforms the (time-extended) argument of each agent's utility function before evaluating that function. This transformation has benefits in scenarios not involving Markovian dynamics, in particular scenarios where not all of the arguments of an agent's utility function are observable. We investigate this transformation in simulations involving both linear and quadratic (nonlinear) dynamics. In addition, we find that a certain subset of these transformations, which result in utilities that have low "opacity (analogous to having high signal to noise) but are not "factored" (analogous to not being incentive compatible), reliably improve performance over that arising with factored utilities. We also present a Taylor Series method for the fully general nonlinear case.
Gehring, Dominic; Wissler, Sabrina; Lohrer, Heinz; Nauck, Tanja; Gollhofer, Albert
2014-03-01
A thorough understanding of the functional aspects of ankle joint control is essential to developing effective injury prevention. It is of special interest to understand how neuromuscular control mechanisms and mechanical constraints stabilize the ankle joint. Therefore, the aim of the present study was to determine how expecting ankle tilts and the application of an ankle brace influence ankle joint control when imitating the ankle sprain mechanism during walking. Ankle kinematics and muscle activity were assessed in 17 healthy men. During gait, rapid perturbations were applied using a trapdoor (tilting with 24° inversion and 15° plantarflexion). The subjects either knew that a perturbation would definitely occur (expected tilts) or there was only the possibility that a perturbation would occur (potential tilts). Both conditions were conducted with and without a semi-rigid ankle brace. Expecting perturbations led to an increased ankle eversion at foot contact, which was mediated by an altered muscle preactivation pattern. Moreover, the maximal inversion angle (-7%) and velocity (-4%), as well as the reactive muscle response were significantly reduced when the perturbation was expected. While wearing an ankle brace did not influence either muscle preactivation or ankle kinematics before ground contact, it significantly reduced the maximal ankle inversion angle (-14%) and velocity (-11%) as well as reactive neuromuscular responses. The present findings reveal that expecting ankle inversion modifies neuromuscular joint control prior to landing. Although such motor control strategies are weaker in their magnitude compared with braces, they seem to assist ankle joint stabilization in a close-to-injury situation. Copyright © 2013 Elsevier B.V. All rights reserved.
Zhang, ZhiZhuo; Chang, Cheng Wei; Hugo, Willy; Cheung, Edwin; Sung, Wing-Kin
2013-03-01
Although de novo motifs can be discovered through mining over-represented sequence patterns, this approach misses some real motifs and generates many false positives. To improve accuracy, one solution is to consider some additional binding features (i.e., position preference and sequence rank preference). This information is usually required from the user. This article presents a de novo motif discovery algorithm called SEME (sampling with expectation maximization for motif elicitation), which uses a pure probabilistic mixture model to model the motif's binding features and uses expectation maximization (EM) algorithms to simultaneously learn the sequence motif, position, and sequence rank preferences without asking for any prior knowledge from the user. SEME is both efficient and accurate thanks to two important techniques: the variable motif length extension and importance sampling. Using 75 large-scale synthetic datasets, 32 metazoan compendium benchmark datasets, and 164 chromatin immunoprecipitation sequencing (ChIP-Seq) libraries, we demonstrated the superior performance of SEME over existing programs in finding transcription factor (TF) binding sites. SEME is further applied to a more difficult problem of finding the co-regulated TF (coTF) motifs in 15 ChIP-Seq libraries. It identified significantly more correct coTF motifs and, at the same time, predicted coTF motifs with better matching to the known motifs. Finally, we show that the learned position and sequence rank preferences of each coTF reveal potential interaction mechanisms between the primary TF and the coTF within these sites. Some of these findings were further validated by the ChIP-Seq experiments of the coTFs. The application is available online.
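SEME itself layers position and sequence-rank preferences, variable motif length extension and importance sampling on top of the basic motif mixture model, none of which is reproduced here. The sketch below shows only the core expectation-maximization loop for the simplest "one occurrence per sequence" motif model with a 0-order background, so the E- and M-steps are visible; the toy sequences and motif width are invented.

    import numpy as np

    ALPHABET = "ACGT"
    IDX = {a: i for i, a in enumerate(ALPHABET)}

    def oops_em(seqs, w, n_iter=50, rng=None):
        """EM for the OOPS motif model (one motif occurrence per sequence), 0-order background."""
        rng = rng or np.random.default_rng(0)
        bg = np.full(4, 0.25)
        pwm = rng.dirichlet(np.ones(4), size=w)              # w x 4 position weight matrix
        for _ in range(n_iter):
            counts = np.full((w, 4), 0.1)                    # pseudocounts
            for s in seqs:
                x = np.array([IDX[c] for c in s])
                n_starts = len(s) - w + 1
                # E-step: posterior over the motif start position in this sequence
                scores = np.array([np.prod(pwm[np.arange(w), x[j:j + w]] / bg[x[j:j + w]])
                                   for j in range(n_starts)])
                post = scores / scores.sum()
                # M-step contribution: expected letter counts at each motif column
                for j, p in enumerate(post):
                    for k in range(w):
                        counts[k, x[j + k]] += p
            pwm = counts / counts.sum(axis=1, keepdims=True)
        return pwm

    seqs = ["ACGTACGTTTACGG", "TTTTTACGGAAAAC", "GGACGGTTTTTTTT"]   # toy sequences sharing ACGG-like sites
    print(oops_em(seqs, w=4).round(2))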
Maximum projection designs for computer experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, V. Roshan; Gul, Evren; Ba, Shan
Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost no more than a design criterion that ignores projection properties.
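For illustration, the criterion minimized by maximum projection (MaxPro) designs can be written as an average, over design-point pairs, of the reciprocal product of squared coordinate-wise differences, which penalizes any pair of points that nearly coincides in any single factor. A small sketch of that criterion is below; the optimization over candidate designs (e.g., annealing over Latin hypercubes) is not shown, and the exact normalization should be checked against the paper.

    import numpy as np
    from itertools import combinations

    def maxpro_criterion(design):
        """MaxPro-style criterion: smaller values mean better projected space-filling.

        design: (n, p) array with points scaled to [0, 1]^p.
        """
        n, p = design.shape
        total = 0.0
        for i, j in combinations(range(n), 2):
            diff_sq = (design[i] - design[j]) ** 2
            total += 1.0 / np.prod(diff_sq)      # blows up if two points share a coordinate
        return (total / (n * (n - 1) / 2)) ** (1.0 / p)

    rng = np.random.default_rng(1)
    candidate = rng.random((8, 3))               # a random 8-run, 3-factor design (illustrative)
    print(maxpro_criterion(candidate))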
Seth, Ashok; Gupta, Sajal; Pratap Singh, Vivudh; Kumar, Vijay
2017-09-01
Final stent dimensions remain an important predictor of restenosis, target vessel revascularisation (TVR) and subacute stent thrombosis (ST), even in the drug-eluting stent (DES) era. Stent balloons are usually semi-compliant and thus even high-pressure inflation may not achieve uniform or optimal stent expansion. Post-dilatation with non-compliant (NC) balloons after stent deployment has been shown to enhance stent expansion and could reduce TVR and ST. Based on supporting evidence and in the absence of large prospective randomised outcome-based trials, post-dilatation with an NC balloon to achieve optimal stent expansion and maximal luminal area is a logical technical recommendation, particularly in complex lesion subsets.
Mining subspace clusters from DNA microarray data using large itemset techniques.
Chang, Ye-In; Chen, Jiun-Rung; Tsai, Yueh-Chi
2009-05-01
Mining subspace clusters from the DNA microarrays could help researchers identify those genes which commonly contribute to a disease, where a subspace cluster indicates a subset of genes whose expression levels are similar under a subset of conditions. Since, in a DNA microarray, the number of genes is far larger than the number of conditions, those previously proposed algorithms which compute the maximum dimension sets (MDSs) for any two genes will take a long time to mine subspace clusters. In this article, we propose the Large Itemset-Based Clustering (LISC) algorithm for mining subspace clusters. Instead of constructing MDSs for any two genes, we construct only MDSs for any two conditions. Then, we transform the task of finding the maximal possible gene sets into the problem of mining large itemsets from the condition-pair MDSs. Since we are only interested in those subspace clusters with gene sets as large as possible, it is desirable to pay attention to those gene sets which have reasonably large support values in the condition-pair MDSs. From our simulation results, we show that the proposed algorithm needs shorter processing time than those previously proposed algorithms which need to construct gene-pair MDSs.
A Deficit in Older Adults' Effortful Selection of Cued Responses
Proctor, Robert W.; Vu, Kim-Phuong L.; Pick, David F.
2007-01-01
J. J. Adam et al. (1998) provided evidence for an “age-related deficit in preparing 2 fingers on 2 hands, but not on 1 hand” (p. 870). Instead of having an anatomical basis, the deficit could result from the effortful processing required for individuals to select cued subsets of responses that do not coincide with left and right subgroups. The deficit also could involve either the ultimate benefit that can be attained or the time required to attain that benefit. The authors report 3 experiments (Ns = 40, 48, and 32 participants, respectively) in which they tested those distinctions by using an overlapped hand placement (participants alternated the index and middle fingers of the hands), a normal hand placement, and longer precuing intervals than were used in previous studies. The older adults were able to achieve the full precuing benefit shown by younger adults but required longer to achieve the maximal benefit for most pairs of responses. The deficit did not depend on whether the responses were from different hands, suggesting that it lies primarily in the effortful processing required for those subsets of cued responses that are not selected easily. PMID:16801319
Hypergraph Based Feature Selection Technique for Medical Diagnosis.
Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar
2016-11-01
The impact of the internet and information systems across various domains has resulted in the substantial generation of multidimensional datasets. The use of data mining and knowledge discovery techniques to extract the original information contained in multidimensional datasets plays a significant role in exploiting the full benefit they provide. The presence of a large number of features in high dimensional datasets incurs a high computational cost in terms of computing power and time. Hence, feature selection techniques are commonly used to build robust machine learning models by selecting a subset of relevant features that projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K-Helly feature selection technique (RSKHT), which hybridizes Rough Set Theory (RST) and the K-Helly property of hypergraph representation, was designed to identify the optimal feature subset or reduct for medical diagnostic applications. Experiments carried out using medical datasets from the UCI repository prove the dominance of RSKHT over other feature selection techniques with respect to reduct size, classification accuracy and time complexity. The performance of RSKHT was validated using the WEKA tool, which shows that RSKHT is computationally attractive and flexible over massive datasets.
Rough sets and Laplacian score based cost-sensitive feature selection.
Yu, Shenglong; Zhao, Hong
2018-01-01
Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Recently, most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features with locality preservation of Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects out a predetermined number of "good" features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms.
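The Laplacian score half of the method rewards features that respect the local neighborhood structure of the data. A compact sketch of the standard Laplacian score computation (in the usual kNN heat-kernel formulation) is below; smaller scores indicate more locality-preserving features. The rough-set reducts, the cost distributions and the way the two scores are combined in the paper are not shown, and the neighborhood size and kernel width are arbitrary choices.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def laplacian_scores(X, k=5, t=1.0):
        """Laplacian score per feature; smaller scores mean better locality preservation."""
        dist = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
        dist = np.maximum(dist, dist.T)                      # symmetrize the kNN graph
        S = np.where(dist > 0, np.exp(-dist**2 / t), 0.0)    # heat-kernel affinities
        d = S.sum(axis=1)                                    # degree vector (diagonal of D)
        L = np.diag(d) - S                                   # graph Laplacian
        scores = []
        for r in range(X.shape[1]):
            f = X[:, r]
            f_tilde = f - (f @ d) / d.sum()                  # remove the D-weighted mean
            num = f_tilde @ L @ f_tilde
            den = f_tilde @ (d * f_tilde)
            scores.append(num / max(den, 1e-12))
        return np.array(scores)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 6))                            # invented data: 100 samples, 6 features
    print(laplacian_scores(X).round(3))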
Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ.
The nature of genetic susceptibility to multiple sclerosis: constraining the possibilities.
Goodin, Douglas S
2016-04-27
Epidemiological observations regarding certain population-wide parameters (e.g., disease-prevalence, recurrence-risk in relatives, gender predilections, and the distribution of common genetic-variants) place important constraints on the possibilities for the genetic-basis underlying susceptibility to multiple sclerosis (MS). Using very broad range-estimates for the different population-wide epidemiological parameters, a mathematical model can help elucidate the nature and the magnitude of these constraints. For MS no more than 8.5 % of the population can possibly be in the "genetically-susceptible" subset (defined as having a life-time MS-probability at least as high as the overall population average). Indeed, the expected MS-probability for this subset is more than 12 times that for every other person of the population who is not in this subset. Moreover, provided that those genetically susceptible persons (genotypes), who carry the well-established MS susceptibility allele (DRB1*1501), are equally or more likely to get MS than those susceptible persons, who don't carry this allele, then at least 84 % of MS-cases must come from this "genetically susceptible" subset. Furthermore, because men, compared to women, are at least as likely (and possibly more likely) to be susceptible, it can be demonstrated that women are more responsive to the environmental factors that are involved in MS-pathogenesis (whatever these are) and, thus, susceptible women are more likely actually to develop MS than susceptible men. Finally, in contrast to genetic susceptibility, more than 70 % of men (and likely also women) must have an environmental experience (including all of the necessary factors), which is sufficient to produce MS in a susceptible individual. As a result, because of these constraints, it is possible to distinguish two classes of persons, indicating either that MS can be caused by two fundamentally different pathophysiological mechanisms or that the large majority of the population is at no risk of developing this disease regardless of their environmental experience. Moreover, although environmental-factors would play a critical role in both mechanisms (if both exist), there is no reason to expect that these factors are the same (or even similar) between the two.
Engaging Older Adult Volunteers in National Service
ERIC Educational Resources Information Center
McBride, Amanda Moore; Greenfield, Jennifer C.; Morrow-Howell, Nancy; Lee, Yung Soo; McCrary, Stacey
2012-01-01
Volunteer-based programs are increasingly designed as interventions to affect the volunteers and the beneficiaries of the volunteers' activities. To achieve the intended impacts for both, programs need to leverage the volunteers' engagement by meeting their expectations, retaining them, and maximizing their perceptions of benefits. Programmatic…
NASA Astrophysics Data System (ADS)
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict concentration distribution accurately and efficiently. PSO and EM are applied for estimating the source parameters, which can effectively accelerate the process of convergence. The method is verified by the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
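The source-estimation half of such a method searches over source parameters (e.g., location and release rate) so that predicted concentrations match the sensor readings, with particle swarm optimization driving the search. A generic PSO minimizer is sketched below with an invented quadratic cost standing in for the ANN-based dispersion model; the EM step and the trained network itself are not reproduced.

    import numpy as np

    def pso_minimize(cost, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Plain particle swarm optimization over a box-bounded parameter space."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([cost(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Stand-in cost: squared mismatch to an assumed "true" source (x, y, release rate)
    true_theta = np.array([120.0, 45.0, 2.5])
    cost = lambda theta: float(np.sum((theta - true_theta) ** 2))
    print(pso_minimize(cost, bounds=[(0, 500), (0, 500), (0, 10)]))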
Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.
Lennartsson, Jan; Lindberg, Carl
2015-01-01
To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification to this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem as in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgement. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method can avoid predefining the PSR and provide a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method by the regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Martin, Kevin
2017-05-01
This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing the rare event detection via P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence, and then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (azimuth).
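The first step, padding an RSVP sequence with distracters so the target-to-nontarget ratio stays near a chosen value, is simple arithmetic. The sketch below shows one way to express it; the desired ratio and the example numbers are invented, not the values used in the study.

    import math

    def distracters_needed(n_images, n_expected_targets, target_ratio=0.1):
        """Distracter images to append so targets make up at most `target_ratio` of the sequence."""
        required_length = n_expected_targets / target_ratio
        return max(0, math.ceil(required_length - n_images))

    # e.g. a 40-image chip sequence expected to contain 8 targets, aiming for a 10% target rate
    print(distracters_needed(40, 8))   # -> 40 extra distracter images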
Choosing Fitness-Enhancing Innovations Can Be Detrimental under Fluctuating Environments
Xue, Julian Z.; Costopoulos, Andre; Guichard, Frederic
2011-01-01
The ability to predict the consequences of one's behavior in a particular environment is a mechanism for adaptation. In the absence of any cost to this activity, we might expect agents to choose behaviors that maximize their fitness, an example of directed innovation. This is in contrast to blind mutation, where the probability of becoming a new genotype is independent of the fitness of the new genotypes. Here, we show that under environments punctuated by rapid reversals, a system with both genetic and cultural inheritance should not always maximize fitness through directed innovation. This is because populations highly accurate at selecting the fittest innovations tend to over-fit the environment during its stable phase, to the point that a rapid environmental reversal can cause extinction. A less accurate population, on the other hand, can track long term trends in environmental change, keeping closer to the time-average of the environment. We use both analytical and agent-based models to explore when this mechanism is expected to occur. PMID:22125601
Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego
2017-01-01
A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, the intensity histograms of white matter and gray matter are not symmetric and exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: first, it is more robust to outliers; second, it yields results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
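As a point of reference, below is a minimal sketch of the Gaussian special case that the α-stable EM-HMRF generalizes: a plain 1-D Gaussian mixture fitted by EM, with no MRF spatial term and no α-stable components. All names and settings are illustrative.

```python
import numpy as np

def em_gmm(x, K=3, n_iter=100, seed=0):
    """EM for a 1-D Gaussian mixture (the Gaussian special case; no MRF)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, K)                 # initial means from the data
    var = np.full(K, x.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each voxel intensity
        d = x[:, None] - mu[None, :]
        logp = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
print(em_gmm(x, K=2))
```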
Using return on investment to maximize conservation effectiveness in Argentine grasslands.
Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James
2010-12-07
The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land ("minimize cost"), maximizing conservation benefit regardless of cost ("maximize benefit"), and maximizing conservation benefit per dollar ("return on investment"). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy.
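A minimal sketch of the "return on investment" strategy as a greedy benefit-per-dollar selection under a budget; the parcel data and the single up-front ranking are illustrative simplifications (a fuller treatment would recompute marginal benefit after each purchase to capture the diminishing returns described above).

```python
def select_parcels(parcels, budget):
    """Greedy return-on-investment selection: repeatedly buy the parcel with
    the highest conservation benefit per dollar until the budget is spent.

    parcels : list of (name, benefit, cost) tuples (illustrative inputs)
    budget  : total funds available
    """
    chosen, spent = [], 0.0
    for name, benefit, cost in sorted(parcels, key=lambda p: p[1] / p[2],
                                      reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

parcels = [("A", 10.0, 2.0), ("B", 30.0, 10.0), ("C", 8.0, 1.0)]
print(select_parcels(parcels, budget=11.0))   # -> (['C', 'A'], 3.0)
```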
NASA Technical Reports Server (NTRS)
Frith, James M.; Buckalew, Brent A.; Cowardin, Heather M.; Lederer, Susan M.
2018-01-01
The Gaia catalogue second data release and its implications for optical observations of man-made Earth-orbiting objects. The Gaia spacecraft was launched in December 2013 by the European Space Agency to produce a three-dimensional, dynamic map of objects within the Milky Way. Gaia's first year of data was released in September 2016. Common sources from the first data release have been combined with the Tycho-2 catalogue to provide a 5-parameter astrometric solution for approximately 2 million stars. The second Gaia data release is scheduled to come out in April 2018 and is expected to provide astrometry and photometry for more than 1 billion stars, a subset of which will have the full 6-parameter astrometric solution (adding radial velocity) and positional accuracy better than 0.002 arcsec (2 mas). In addition to precise astrometry, a unique opportunity exists with the Gaia catalogue in its production of accurate, broadband photometry using the Gaia G filter. In the past, clear filters have been used by various groups to maximize the likelihood of detecting dim man-made objects, but those data were very difficult to calibrate. With the second release of the Gaia catalogue, a ground-based system utilizing the G-band filter will have access to 1.5 billion all-sky calibration sources down to an accuracy of 0.02 magnitudes or better. In this talk, we will discuss the advantages and practicalities of implementing the Gaia filters and catalogue into data pipelines designed for optical observations of man-made objects.
Age and Disability Employment Discrimination: Occupational Rehabilitation Implications
Bjelland, Melissa J.; von Schrader, Sarah; Houtenville, Andrew J.; Ruiz-Quintanilla, Antonio; Webber, Douglas A.
2009-01-01
Introduction As concerns grow that a thinning labor force due to retirement will lead to worker shortages, it becomes critical to support positive employment outcomes of groups who have been underutilized, specifically older workers and workers with disabilities. Better understanding perceived age and disability discrimination and their intersection can help rehabilitation specialists and employers address challenges expected as a result of the evolving workforce. Methods Using U.S. Equal Employment Opportunity Commission Integrated Mission System data, we investigate the nature of employment discrimination charges that cite the Americans with Disabilities Act or Age Discrimination in Employment Act individually or jointly. We focus on trends in joint filings over time and across categories of age, types of disabilities, and alleged discriminatory behavior. Results We find that employment discrimination claims that originate from older or disabled workers are concentrated within a subset of issues that include reasonable accommodation, retaliation, and termination. Age-related disabilities are more frequently referenced in joint cases than in the overall pool of ADA filings, while the psychiatric disorders are less often referenced in joint cases. When examining charges made by those protected under both the ADA and ADEA, results from a logit model indicate that in comparison to charges filed under the ADA alone, jointly-filed ADA/ADEA charges are more likely to be filed by older individuals, by those who perceive discrimination in hiring and termination, and to originate from within the smallest firms. Conclusion In light of these findings, rehabilitation and workplace practices to maximize the hiring and retention of older workers and those with disabilities are discussed. PMID:19680793
Age and disability employment discrimination: occupational rehabilitation implications.
Bjelland, Melissa J; Bruyère, Susanne M; von Schrader, Sarah; Houtenville, Andrew J; Ruiz-Quintanilla, Antonio; Webber, Douglas A
2010-12-01
As concerns grow that a thinning labor force due to retirement will lead to worker shortages, it becomes critical to support positive employment outcomes of groups who have been underutilized, specifically older workers and workers with disabilities. Better understanding perceived age and disability discrimination and their intersection can help rehabilitation specialists and employers address challenges expected as a result of the evolving workforce. Using U.S. Equal Employment Opportunity Commission Integrated Mission System data, we investigate the nature of employment discrimination charges that cite the Americans with Disabilities Act or Age Discrimination in Employment Act individually or jointly. We focus on trends in joint filings over time and across categories of age, types of disabilities, and alleged discriminatory behavior. We find that employment discrimination claims that originate from older or disabled workers are concentrated within a subset of issues that include reasonable accommodation, retaliation, and termination. Age-related disabilities are more frequently referenced in joint cases than in the overall pool of ADA filings, while the psychiatric disorders are less often referenced in joint cases. When examining charges made by those protected under both the ADA and ADEA, results from a logit model indicate that in comparison to charges filed under the ADA alone, jointly-filed ADA/ADEA charges are more likely to be filed by older individuals, by those who perceive discrimination in hiring and termination, and to originate from within the smallest firms. In light of these findings, rehabilitation and workplace practices to maximize the hiring and retention of older workers and those with disabilities are discussed.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
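A minimal sketch of the ordered-subsets acceleration idea cited above, applied to the generic EM update rather than the authors' alternating-minimization algorithm; the matrix A, data y, and subset/iteration counts are illustrative placeholders.

```python
import numpy as np

def osem(A, y, n_subsets=8, n_iter=5, eps=1e-12):
    """Ordered-subsets EM: each sub-iteration updates the image using only
    one subset of the projection rows, which is where the speed-up comes from
    (a generic sketch, not the authors' AM reconstruction code)."""
    m, n = A.shape
    x = np.ones(n)
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            sens = As.T @ np.ones(len(idx))          # subset sensitivity
            ratio = ys / np.maximum(As @ x, eps)
            x = x * (As.T @ ratio) / np.maximum(sens, eps)
    return x

rng = np.random.default_rng(0)
A = rng.random((64, 32))
y = rng.poisson(A @ rng.random(32) * 40).astype(float)
x_hat = osem(A, y)
```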
On the Teaching of Portfolio Theory.
ERIC Educational Resources Information Center
Biederman, Daniel K.
1992-01-01
Demonstrates how a simple portfolio problem expressed explicitly as an expected utility maximization problem can be used to instruct students in portfolio theory. Discusses risk aversion, decision making under uncertainty, and the limitations of the traditional mean variance approach. Suggests students may develop a greater appreciation of general…
TIME SHARING WITH AN EXPLICIT PRIORITY QUEUING DISCIPLINE.
exponentially distributed service times and an ordered priority queue. Each new arrival buys a position in this queue by offering a non-negative bribe to the...parameters is investigated through numerical examples. Finally, to maximize the expected revenue per unit time accruing from bribes , an optimization
Program Monitoring: Problems and Cases.
ERIC Educational Resources Information Center
Lundin, Edward; Welty, Gordon
Designed as the major component of a comprehensive model of educational management, a behavioral model of decision making is presented that approximates the synoptic model of neoclassical economic theory. The synoptic model defines all possible alternatives and provides a basis for choosing that alternative which maximizes expected utility. The…
A Bayesian Approach to Interactive Retrieval
ERIC Educational Resources Information Center
Tague, Jean M.
1973-01-01
A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…
Creating an Agent Based Framework to Maximize Information Utility
2008-03-01
information utility may be a qualitative description of the information, where one would expect the adjectives low value, fair value, high value. For...operations. Information in this category may have a fair value rating. Finally, many seemingly unrelated events, such as reports of snipers in buildings
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration, or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration, such as the loss in flux or coherence and the appearance of spurious sources, can be attributed to deviations from the assumed noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to the case of a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model, and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
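A minimal sketch of the robustness mechanism: under a Student's t noise model, the E-step induces per-sample weights that down-weight outliers, and each conditional-maximization step then amounts to a weighted least-squares fit (a real implementation would call a Levenberg-Marquardt solver at that point). The residual function, nu, and sigma2 below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def t_noise_weights(residuals, nu=2.0, sigma2=1.0):
    """E-step weights under a Student's t noise model: large residuals
    (outliers from interference or sky-model errors) get down-weighted."""
    return (nu + 1.0) / (nu + residuals**2 / sigma2)

def weighted_objective(residual_fn, theta, data, nu=2.0, sigma2=1.0):
    """Weighted least-squares objective one CM step would minimize over theta."""
    r = residual_fn(theta, data)
    w = t_noise_weights(r, nu, sigma2)
    return float(np.sum(w * r**2))

r = np.array([0.1, -0.2, 5.0])      # the 5.0 is an outlier
print(t_noise_weights(r))           # outlier receives a much smaller weight
```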
Can differences in breast cancer utilities explain disparities in breast cancer care?
Schleinitz, Mark D; DePalo, Dina; Blume, Jeffrey; Stein, Michael
2006-12-01
Black, older, and less affluent women are less likely to receive adjuvant breast cancer therapy than their counterparts. Whereas preference contributes to disparities in other health care scenarios, it is unclear if preference explains differential rates of breast cancer care. To ascertain utilities from women of diverse backgrounds for the different stages of, and treatments for, breast cancer and to determine whether a treatment decision modeled from utilities is associated with socio-demographic characteristics. A stratified sample (by age and race) of 156 English-speaking women over 25 years old not currently undergoing breast cancer treatment. We assessed utilities using standard gamble for 5 breast cancer stages, and time-tradeoff for 3 therapeutic modalities. We incorporated each subject's utilities into a Markov model to determine whether her quality-adjusted life expectancy would be maximized with chemotherapy for a hypothetical, current diagnosis of stage II breast cancer. We used logistic regression to determine whether socio-demographic variables were associated with this optimal strategy. Median utilities for the 8 health states were: stage I disease, 0.91 (interquartile range 0.50 to 1.00); stage II, 0.75 (0.26 to 0.99); stage III, 0.51 (0.25 to 0.94); stage IV (estrogen receptor positive), 0.36 (0 to 0.75); stage IV (estrogen receptor negative), 0.40 (0 to 0.79); chemotherapy 0.50 (0 to 0.92); hormonal therapy 0.58 (0 to 1); and radiation therapy 0.83 (0.10 to 1). Utilities for early stage disease and treatment modalities, but not metastatic disease, varied with socio-demographic characteristics. One hundred and twenty-two of 156 subjects had utilities that maximized quality-adjusted life expectancy given stage II breast cancer with chemotherapy. Age over 50, black race, and low household income were associated with at least 5-fold lower odds of maximizing quality-adjusted life expectancy with chemotherapy, whereas women who were married or had a significant other were 4-fold more likely to maximize quality-adjusted life expectancy with chemotherapy. Differences in utility for breast cancer health states may partially explain the lower rate of adjuvant therapy for black, older, and less affluent women. Further work must clarify whether these differences result from health preference alone or reflect women's perceptions of sources of disparity, such as access to care, poor communication with providers, limitations in health knowledge or in obtaining social and workplace support during therapy.
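A minimal sketch of how elicited utilities can feed a Markov cohort model to compute quality-adjusted life expectancy for a treatment strategy, which is the kind of calculation described above; the transition matrix, state utilities, and cycle length below are purely illustrative, not the study's model.

```python
import numpy as np

def qale(transition, utilities, start_state=0, horizon=240, cycle_years=1/12):
    """Quality-adjusted life expectancy from a simple Markov cohort model.
    `transition` is a monthly transition matrix over health states (the last
    state is death, with utility 0); all numbers are illustrative."""
    p = np.zeros(len(utilities))
    p[start_state] = 1.0
    total = 0.0
    for _ in range(horizon):
        total += float(p @ utilities) * cycle_years   # QALYs accrued this cycle
        p = p @ transition                             # advance the cohort
    return total

# toy example: stage II disease -> progression -> death
T = np.array([[0.97, 0.02, 0.01],
              [0.00, 0.95, 0.05],
              [0.00, 0.00, 1.00]])
u = np.array([0.75, 0.40, 0.0])    # utilities, e.g. from standard gamble
print(round(qale(T, u), 2))        # QALYs over 20 years of monthly cycles
```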
Optimal joint detection and estimation that maximizes ROC-type curves
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.
2017-01-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544
Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K
2016-09-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.
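For the simple binary-detection case that the abstract uses as its point of comparison, the expected-utility-maximizing rule reduces to a likelihood-ratio threshold derived from the utilities; the sketch below shows that rule only, not the paper's joint detection-estimation framework, and all utility values are illustrative placeholders.

```python
def decide(likelihood_ratio, prevalence, u_tp, u_fp, u_tn, u_fn):
    """Signal-present decision that maximizes expected utility for simple
    binary detection: decide 'present' when the likelihood ratio exceeds the
    utility-derived threshold (u_tp etc. are illustrative utilities)."""
    threshold = ((u_tn - u_fp) * (1 - prevalence)) / ((u_tp - u_fn) * prevalence)
    return likelihood_ratio > threshold

# example: rare signal, costly misses -> low threshold, so we call it present
print(decide(likelihood_ratio=4.0, prevalence=0.1,
             u_tp=1.0, u_fp=-0.2, u_tn=0.0, u_fn=-1.0))
```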
Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe
2018-06-01
Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalyzing data from a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
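A minimal sketch of the underlying idea: scan candidate marker thresholds and keep the one that maximizes mean QALYs when patients above the threshold are treated and the rest are not. Unlike the paper's time-dependent estimator, this ignores censoring, and all inputs are simulated placeholders (the actual implementation is the R package ROCt).

```python
import numpy as np

def best_threshold(marker, qaly_treated, qaly_untreated, candidates):
    """Return the marker threshold maximizing mean QALYs under a
    treat-above-threshold policy (simplified, uncensored sketch)."""
    best_t, best_u = None, -np.inf
    for t in candidates:
        treated = marker > t
        utility = np.where(treated, qaly_treated, qaly_untreated).mean()
        if utility > best_u:
            best_t, best_u = t, utility
    return best_t, best_u

rng = np.random.default_rng(1)
marker = rng.uniform(0, 100, 500)
qaly_treated = 8 + 0.02 * marker + rng.normal(0, 1, 500)     # toy outcomes
qaly_untreated = 10 - 0.03 * marker + rng.normal(0, 1, 500)
print(best_threshold(marker, qaly_treated, qaly_untreated,
                     candidates=range(0, 101, 5)))
```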
Adar, Shay; Dor, Roi
2018-02-01
Habitat choice is an important decision that influences animals' fitness. Insect larvae are less mobile than the adults. Consequently, the contribution of the maternal choice of habitat to the survival and development of the offspring is considered to be crucial. According to the "preference-performance hypothesis", ovipositing females are expected to choose habitats that will maximize the performance of their offspring. We tested this hypothesis in wormlions (Diptera: Vermileonidae), which are small sand-dwelling insects that dig pit-traps in sandy patches and ambush small arthropods. Larvae prefer relatively deep and obstacle-free sand, and here we tested the habitat preference of the ovipositing female. Contrary to our expectation, and unlike the larvae, ovipositing females showed no clear preference for either deep sand or an obstacle-free habitat. This suboptimal female choice led to smaller pits being constructed later by the larvae, which may reduce their prey capture success. We offer several explanations for this apparently suboptimal female behavior, related either to maximizing maternal rather than offspring fitness, or to constraints on the female's behavior. The female's oviposition habitat choice may have weaker negative consequences than expected for the offspring, as larvae can partially correct a suboptimal maternal choice. Copyright © 2017 Elsevier B.V. All rights reserved.
1986-07-01
maintainability, enhanceability, portability, flexibility, reusability of components, expected market or production life span, upward compatibility, integration...cost) but, most often, they involve global marketing and production objectives. A high life-cycle cost may be accepted in exchange for some other...ease of integration. More importantly, these results could be interpreted as suggesting the need to use a mixed approach where one uses a subset of
Tools for neuroanatomy and neurogenetics in Drosophila
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, Barret D.; Jenett, Arnim; Hammonds, Ann S.
2008-08-11
We demonstrate the feasibility of generating thousands of transgenic Drosophila melanogaster lines in which the expression of an exogenous gene is reproducibly directed to distinct small subsets of cells in the adult brain. We expect the expression patterns produced by the collection of 5,000 lines that we are currently generating to encompass all neurons in the brain in a variety of intersecting patterns. Overlapping 3-kb DNA fragments from the flanking noncoding and intronic regions of genes thought to have patterned expression in the adult brain were inserted into a defined genomic location by site-specific recombination. These fragments were then assayed for their ability to function as transcriptional enhancers in conjunction with a synthetic core promoter designed to work with a wide variety of enhancer types. An analysis of 44 fragments from four genes found that >80% drive expression patterns in the brain; the observed patterns were, on average, comprised of <100 cells. Our results suggest that the D. melanogaster genome contains >50,000 enhancers and that multiple enhancers drive distinct subsets of expression of a gene in each tissue and developmental stage. We expect that these lines will be valuable tools for neuroanatomy as well as for the elucidation of neuronal circuits and information flow in the fly brain.
Threat expectancy bias and treatment outcome in patients with panic disorder and agoraphobia.
Duits, Puck; Klein Hofmeijer-Sevink, Mieke; Engelhard, Iris M; Baas, Johanna M P; Ehrismann, Wieske A M; Cath, Danielle C
2016-09-01
Previous studies suggest that patients with panic disorder and agoraphobia (PD/A) tend to overestimate the associations between fear-relevant stimuli and threat. This so-called threat expectancy bias is thought to play a role in the development and treatment of anxiety disorders. The current study tested 1) whether patients with PD/A (N = 71) show increased threat expectancy ratings to fear-relevant and fear-irrelevant stimuli relative to a comparison group without an axis I disorder (N=65), and 2) whether threat expectancy bias before treatment predicts treatment outcome in a subset of these patients (n = 51). In a computerized task, participants saw a series of panic-related and neutral words and rated for each word the likelihood that it would be followed by a loud, aversive sound. Results showed higher threat expectancy ratings to both panic-related and neutral words in patients with PD/A compared to the comparison group. Threat expectancy ratings did not predict treatment outcome. This study only used expectancy ratings and did not include physiological measures. Furthermore, no post-treatment expectancy bias task was added to shed further light on the possibility that expectancy bias might be attenuated by treatment. Patients show higher expectancies of aversive outcome following both fear-relevant and fear-irrelevant stimuli relative to the comparison group, but this does not predict treatment outcome. Copyright © 2016 Elsevier Ltd. All rights reserved.
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation-based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
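A minimal worked sketch of the two linear combinations described above (expected loss given failure across failure modes, and expected early-life losses across time intervals); all probabilities, counts, and costs are illustrative numbers, not values from the paper.

```python
def expected_loss_given_failure(mode_probs, mode_losses):
    """Expected loss given failure: per-mode losses weighted by the
    conditional probabilities that each mode initiates the failure."""
    return sum(p * c for p, c in zip(mode_probs, mode_losses))

def expected_early_life_losses(expected_failures, losses_given_failure):
    """Expected early-life losses: sum over time intervals of the expected
    number of failures times the expected loss given failure in that interval."""
    return sum(n * c for n, c in zip(expected_failures, losses_given_failure))

# illustrative numbers only
loss_gf = expected_loss_given_failure([0.7, 0.3], [1000.0, 5000.0])   # 2200.0
print(loss_gf)
print(expected_early_life_losses([0.10, 0.05], [loss_gf, loss_gf]))   # 330.0
```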
Alcohol-related expectancies in adults and adolescents: Similarities and disparities.
Monk, Rebecca L; Heim, Derek
2016-03-02
This study aimed to contrast student and non-student outcome expectancies, and to explore the diversity of alcohol-related cognitions within a wider student sample. Participants (n=549) were college students (higher education; typically aged 15-18 years), university students (further education; typically aged 18-22 years) and business people (white collar professionals <50 years) who completed questionnaires in their place of work or education. Overall positive expectancies were higher in the college students than in the business or university samples. However, not all expectancy subcategories followed this pattern. Participant groups of similar age were therefore alike in some aspects of their alcohol-related cognitions but different in others. Similarly, participant groups divergent in age appeared to be alike in some of their alcohol-related cognitions, such as tension reduction expectancies. Research often homogenises students as a specific subset of the population; this paper highlights that this may be an over-simplification. Furthermore, the largely exclusive focus on student groups within research in this area may also be an oversight, given the diversity of the findings demonstrated between these groups.
Allocating dissipation across a molecular machine cycle to maximize flux
Brown, Aidan I.; Sivak, David A.
2017-01-01
Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and the percentage of other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are determined for each model. It is expected that the application of the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity produced at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
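A minimal sketch of the optimization half of such a pipeline: a basic particle swarm maximizer applied to a black-box objective that would, in the study's setting, be the trained neural-network surrogate predicting biogas output from the operating variables. The swarm constants, bounds, and toy objective are illustrative assumptions.

```python
import numpy as np

def pso_maximize(f, bounds, n_particles=30, n_iter=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm maximization of a black-box objective f."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmax()].copy()              # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals > pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmax()].copy()
    return g, pbest_val.max()

# toy objective standing in for the trained surrogate model
best_x, best_val = pso_maximize(lambda z: -((z - 3.0)**2).sum(),
                                bounds=[(0, 10), (0, 10)])
print(best_x, best_val)
```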
Optimisation of the mean boat velocity in rowing.
Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P
2012-01-01
In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.
Meurrens, Julie; Steiner, Thomas; Ponette, Jonathan; Janssen, Hans Antonius; Ramaekers, Monique; Wehrlin, Jon Peter; Vandekerckhove, Philippe; Deldicque, Louise
2016-12-01
The aims of the present study were to investigate the impact of three whole blood donations on endurance capacity and hematological parameters and to determine the duration to fully recover initial endurance capacity and hematological parameters after each donation. Twenty-four moderately trained subjects were randomly divided in a donation (n = 16) and a placebo (n = 8) group. Each of the three donations was interspersed by 3 months, and the recovery of endurance capacity and hematological parameters was monitored up to 1 month after donation. Maximal power output, peak oxygen consumption, and hemoglobin mass decreased (p < 0.001) up to 4 weeks after a single blood donation with a maximal decrease of 4, 10, and 7%, respectively. Hematocrit, hemoglobin concentration, ferritin, and red blood cell count (RBC), all key hematological parameters for oxygen transport, were lowered by a single donation (p < 0.001) and cumulatively further affected by the repetition of the donations (p < 0.001). The maximal decrease after a blood donation was 11% for hematocrit, 10% for hemoglobin concentration, 50% for ferritin, and 12% for RBC (p < 0.001). Maximal power output cumulatively increased in the placebo group as the maximal exercise tests were repeated (p < 0.001), which indicates positive training adaptations. This increase in maximal power output over the whole duration of the study was not observed in the donation group. Maximal, but not submaximal, endurance capacity was altered after blood donation in moderately trained people and the expected increase in capacity after multiple maximal exercise tests was not present when repeating whole blood donations.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Aging Education: A Worldwide Imperative
ERIC Educational Resources Information Center
McGuire, Sandra L.
2017-01-01
Life expectancy is increasing worldwide. Unfortunately, people are generally not prepared for this long life ahead and have ageist attitudes that inhibit maximizing the "longevity dividend" they have been given. Aging education can prepare people for life's later years and combat ageism. It can reimage aging as a time of continued…
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
The Probabilistic Nature of Preferential Choice
ERIC Educational Resources Information Center
Rieskamp, Jorg
2008-01-01
Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes…
Relevance of a Managerial Decision-Model to Educational Administration.
ERIC Educational Resources Information Center
Lundin, Edward.; Welty, Gordon
The rational model of classical economic theory assumes that the decision maker has complete information on alternatives and consequences, and that he chooses the alternative that maximizes expected utility. This model does not allow for constraints placed on the decision maker resulting from lack of information, organizational pressures,…
India's growing participation in global clinical trials.
Gupta, Yogendra K; Padhy, Biswa M
2011-06-01
Lower operational costs, recent regulatory reforms and several logistic advantages make India an attractive destination for conducting clinical trials. Efforts for maintaining stringent ethical standards and the launch of Pharmacovigilance Program of India are expected to maximize the potential of the country for clinical research. Copyright © 2011. Published by Elsevier Ltd.
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
ERIC Educational Resources Information Center
Chen, Ping
2017-01-01
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
Optimization Techniques for College Financial Aid Managers
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.
2010-01-01
In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…
2005-04-01
experience. The critical incident interview uses recollection of a specific incident as its starting point and employs a semistructured interview format...context assessment, expectancies, and judgments. The four sweeps in the critical incident interview include: Sweep 1 - Prompting the interviewee to
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
ERIC Educational Resources Information Center
Hess, Frederick M.; McShane, Michael Q.
2013-01-01
There are at least four key places where the Common Core intersects with current efforts to improve education in the United States--testing, professional development, expectations, and accountability. Understanding them can help educators, parents, and policymakers maximize the chance that the Common Core is helpful to these efforts and, perhaps…
Designing Contributing Student Pedagogies to Promote Students' Intrinsic Motivation to Learn
ERIC Educational Resources Information Center
Herman, Geoffrey L.
2012-01-01
In order to maximize the effectiveness of our pedagogies, we must understand how our pedagogies align with prevailing theories of cognition and motivation and design our pedagogies according to this understanding. When implementing Contributing Student Pedagogies (CSPs), students are expected to make meaningful contributions to the learning of…
Charter School Discipline: Examples of Policies and School Climate Efforts from the Field
ERIC Educational Resources Information Center
Kern, Nora; Kim, Suzie
2016-01-01
Students need a safe and supportive school environment to maximize their academic and social-emotional learning potential. A school's discipline policies and practices directly impact school climate and student achievement. Together, discipline policies and positive school climate efforts can reinforce behavioral expectations and ensure student…
Llewellyn-Thomas, H; Thiel, E; Paterson, M; Naylor, D
1999-04-01
To elicit patients' maximal acceptable waiting times (MAWT) for non-urgent coronary artery bypass grafting (CABG), and to determine if MAWT is related to prior expectations of waiting times, symptom burden, expected relief, or perceived risks of myocardial infarction while waiting. Seventy-two patients on an elective CABG waiting list chose between two hypothetical but plausible options: a 1-month wait with 2% risk of surgical mortality, and a 6-month wait with 1% risk of surgical mortality. Waiting time in the 6-month option was varied up if respondents chose the 6-month/lower risk option, and down if they chose the 1-month/higher risk option, until the MAWT switch point was reached. Patients also reported their expected waiting time, perceived risks of myocardial infarction while waiting, current function, expected functional improvement and the value of that improvement. Only 17 (24%) patients chose the 6-month/1% risk option, while 55 (76%) chose the 1-month/2% risk option. The median MAWT was 2 months; scores ranged from 1 to 12 months (with two outliers). Many perceived high cumulative risks of myocardial infarction if waiting for 1 (upper quartile, > or = 1.45%) or 6 (upper quartile, > or = 10%) months. However, MAWT scores were related only to expected waiting time (r = 0.47; P < 0.0001). Most patients reject waiting 6 months for elective CABG, even if offered along with a halving in surgical mortality (from 2% to 1%). Intolerance for further delay seems to be determined primarily by patients' attachment to their scheduled surgical dates. Many also have severely inflated perceptions of their risk of myocardial infarction in the queue. These results suggest a need for interventions to modify patients' inaccurate risk perceptions, particularly if a scheduled surgical date must be deferred.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, D; Jung, J; Suh, T
2014-06-01
Purpose: To confirm the feasibility of acquiring a three-dimensional single photon emission computed tomography (SPECT) image from boron neutron capture therapy (BNCT) using Monte Carlo simulation. Methods: The pixelated SPECT detector, collimator and phantom were simulated using the Monte Carlo N-Particle eXtended (MCNPX) simulation tool. A thermal neutron source (<1 eV) was used to react with the boron uptake region (BUR) in the phantom. Each geometry had a spherical pattern, and three different BURs (A, B and C regions, density: 2.08 g/cm3) were located in the middle of the brain phantom. The data from 128 projections for each sorting process were used to achieve image reconstruction. The ordered subset expectation maximization (OSEM) reconstruction algorithm was used to obtain a tomographic image with eight subsets and five iterations. Receiver operating characteristic (ROC) curve analysis was used to evaluate the geometric accuracy of the reconstructed image. Results: The OSEM image was compared with the original phantom pattern image. The area under the curve (AUC) was calculated as the gross area under each ROC curve. The three calculated AUC values were 0.738 (A region), 0.623 (B region), and 0.817 (C region). The differences between the center-to-center distances of the boron regions and the corresponding distances between maximum count points were 0.3 cm, 1.6 cm and 1.4 cm. Conclusion: The possibility of extracting a 3D BNCT SPECT image was confirmed using the Monte Carlo simulation and the OSEM algorithm. The prospects for obtaining an actual BNCT SPECT image were estimated from the quality of the simulated image and the simulation conditions. When multiple tumor regions are to be treated using BNCT, a reasonable model for determining how many useful images can be obtained from SPECT could be provided to BNCT facilities. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP) (Grant No. 200900420) and the Radiation Technology Research and Development program (Grant No. 2013043498), Republic of Korea.
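A minimal sketch of how an AUC like the values reported above can be computed from reconstructed-image scores by the trapezoidal rule; the random scores and labels are illustrative stand-ins, not the study's evaluation code.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the trapezoidal rule: sweep a threshold
    over voxel scores and integrate TPR against FPR.
    scores = reconstructed intensities, labels = 1 inside the true boron region."""
    order = np.argsort(-scores)
    labels = np.asarray(labels)[order]
    tpr = np.concatenate([[0.0], np.cumsum(labels) / labels.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - labels) / (1 - labels).sum()])
    return float(np.trapz(tpr, fpr))

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
scores = labels + rng.normal(0, 1.0, 200)   # noisy scores, AUC well above 0.5
print(round(roc_auc(scores, labels), 3))
```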
Impact of chronobiology on neuropathic pain treatment.
Gilron, Ian
2016-01-01
Inflammatory pain exhibits circadian rhythmicity. Recently, a distinct diurnal pattern has been described for peripheral neuropathic conditions. This diurnal variation has several implications: advancing understanding of chronobiology may facilitate identification of new and improved treatments; developing pain-contingent strategies that maximize treatment at times of the day associated with highest pain intensity may provide optimal pain relief as well as minimize treatment-related adverse effects (e.g., daytime cognitive dysfunction); and consideration of the impact of chronobiology on pain measurement may lead to improvements in analgesic study design that will maximize assay sensitivity of clinical trials. Recent and ongoing chronobiology studies are thus expected to advance knowledge and treatment of neuropathic pain.
NASA Technical Reports Server (NTRS)
Eliason, E.; Hansen, C. J.; McEwen, A.; Delamere, W. A.; Bridges, N.; Grant, J.; Gulich, V.; Herkenhoff, K.; Keszthelyi, L.; Kirk, R.
2003-01-01
Science return from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) will be optimized by maximizing science participation in the experiment. MRO is expected to arrive at Mars in March 2006, and the primary science phase begins near the end of 2006 after aerobraking (6 months) and a transition phase. The primary science phase lasts for almost 2 Earth years, followed by a 2-year relay phase in which science observations by MRO are expected to continue. We expect to acquire approx. 10,000 images with HiRISE over the course of MRO's two earth-year mission. HiRISE can acquire images with a ground sampling dimension of as little as 30 cm (from a typical altitude of 300 km), in up to 3 colors, and many targets will be re-imaged for stereo. With such high spatial resolution, the percent coverage of Mars will be very limited in spite of the relatively high data rate of MRO (approx. 10x greater than MGS or Odyssey). We expect to cover approx. 1% of Mars at approx. 1m/pixel or better, approx. 0.1% at full resolution, and approx. 0.05% in color or in stereo. Therefore, the placement of each HiRISE image must be carefully considered in order to maximize the scientific return from MRO. We believe that every observation should be the result of a mini research project based on pre-existing datasets. During operations, we will need a large database of carefully researched 'suggested' observations to select from. The HiRISE team is dedicated to involving the broad Mars community in creating this database, to the fullest degree that is both practical and legal. The philosophy of the team and the design of the ground data system are geared to enabling community involvement. A key aspect of this is that image data will be made available to the planetary community for science analysis as quickly as possible to encourage feedback and new ideas for targets.
Maximal qubit violation of n-locality inequalities in a star-shaped quantum network
NASA Astrophysics Data System (ADS)
Andreoli, Francesco; Carvacho, Gonzalo; Santodonato, Luca; Chaves, Rafael; Sciarrino, Fabio
2017-11-01
Bell's theorem was a cornerstone for our understanding of quantum theory, and the establishment of Bell non-locality played a crucial role in the development of quantum information. Recently, its extension to complex networks has been attracting growing attention, but a deep characterization of quantum behavior is still missing for this novel context. In this work we analyze quantum correlations arising in the bilocality scenario, that is, a tripartite quantum network where the correlations between the parties are mediated by two independent sources of states. First, we prove that non-bilocal correlations witnessed through a Bell-state measurement in the central node of the network form a subset of those obtainable by means of a local projective measurement. This leads us to derive the maximal violation of the bilocality inequality that can be achieved by arbitrary two-qubit quantum states and arbitrary local projective measurements. We then analyze in detail the relation between the violation of the bilocality inequality and the CHSH inequality. Finally, we show how our method can be extended to the n-locality scenario consisting of n two-qubit quantum states distributed among n+1 nodes of a star-shaped network.
Artacho, Paulina; Jouanneau, Isabelle; Le Galliard, Jean-François
2013-01-01
Studies of the relationship of performance and behavioral traits with environmental factors have tended to neglect interindividual variation even though quantification of this variation is fundamental to understanding how phenotypic traits can evolve. In ectotherms, functional integration of locomotor performance, thermal behavior, and energy metabolism is of special interest because of the potential for coadaptation among these traits. For this reason, we analyzed interindividual variation, covariation, and repeatability of the thermal sensitivity of maximal sprint speed, preferred body temperature, thermal precision, and resting metabolic rate measured in ca. 200 common lizards (Zootoca vivipara) that varied by sex, age, and body size. We found significant interindividual variation in selected body temperatures and in the thermal performance curve of maximal sprint speed for both the intercept (expected trait value at the average temperature) and the slope (measure of thermal sensitivity). Interindividual differences in maximal sprint speed across temperatures, preferred body temperature, and thermal precision were significantly repeatable. A positive relationship existed between preferred body temperature and thermal precision, implying that individuals selecting higher temperatures were more precise. The resting metabolic rate was highly variable but was not related to thermal sensitivity of maximal sprint speed or thermal behavior. Thus, locomotor performance, thermal behavior, and energy metabolism were not directly functionally linked in the common lizard.
Using return on investment to maximize conservation effectiveness in Argentine grasslands
Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James
2010-01-01
The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land (“minimize cost”), maximizing conservation benefit regardless of cost (“maximize benefit”), and maximizing conservation benefit per dollar (“return on investment”). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy. PMID:21098281
Optimization of detectors for the ILC
NASA Astrophysics Data System (ADS)
Suehara, Taikan; ILD Group; SID Group
2016-04-01
The International Linear Collider (ILC) is a next-generation e+e- linear collider to explore the Higgs boson, Beyond-Standard-Model physics, the top quark and electroweak particles with great precision. We are optimizing our two detectors, the International Large Detector (ILD) and the Silicon Detector (SiD), to maximize the physics reach expected at the ILC with reasonable detector cost and good reliability. The optimization study on vertex detectors, main trackers and calorimeters is underway. We aim to conclude the optimization and establish the final designs within a few years, in time to finish the detector TDR and proposal in reply to the expected "green sign" of the ILC project.
Semiquantum secret sharing using entangled states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Qin; Department of Computer Science, Sun Yat-sen University, Guangzhou 510006; Department of Mathematics, Hong Kong Baptist University, Kowloon
Secret sharing is a procedure for sharing a secret among a number of participants such that only the qualified subsets of participants have the ability to reconstruct the secret. Even in the presence of eavesdropping, secret sharing can be achieved when all the members are quantum. So what happens if not all the members are quantum? In this paper, we propose two semiquantum secret sharing protocols by using maximally entangled Greenberger-Horne-Zeilinger-type states in which quantum Alice shares a secret with two classical parties, Bob and Charlie, in a way that both parties together are sufficient to obtain the secret, but neither of them can obtain it alone. The presented protocols are also shown to be secure against eavesdropping.
NASA Technical Reports Server (NTRS)
Wong, J. T.; Andre, W. L.
1981-01-01
A recent result shows that, for a certain class of systems, the interdependency among the elements of such a system, together with the elements themselves, constitutes a mathematical structure known as a partially ordered set. It is called a loop-free logic model of the system. On the basis of an intrinsic property of this mathematical structure, a characterization of system component failure in terms of maximal subsets of bad test signals of the system was obtained. As a consequence, information concerning the total number of failed components in the system was also deduced. Detailed examples are given to show how to restructure real systems containing loops into loop-free models to which the result is applicable.
van den Bergh, F
2018-03-01
The slanted-edge method of spatial frequency response (SFR) measurement is usually applied to grayscale images under the assumption that any distortion of the expected straight edge is negligible. By decoupling the edge orientation and position estimation step from the edge spread function construction step, it is shown in this paper that the slanted-edge method can be extended to allow it to be applied to images suffering from significant geometric distortion, such as produced by equiangular fisheye lenses. This same decoupling also allows the slanted-edge method to be applied directly to Bayer-mosaicked images so that the SFR of the color filter array subsets can be measured directly without the unwanted influence of demosaicking artifacts. Numerical simulation results are presented to demonstrate the efficacy of the proposed deferred slanted-edge method in relation to existing methods.
Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.
2015-01-01
We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ. PMID:26029092
Hiwa, Ryosuke; Ikari, Katsunori; Ohmura, Koichiro; Nakabo, Shuichiro; Matsuo, Keitaro; Saji, Hiroh; Yurugi, Kimiko; Miura, Yasuo; Maekawa, Taira; Taniguchi, Atsuo; Yamanaka, Hisashi; Matsuda, Fumihiko; Mimori, Tsuneyo; Terao, Chikashi
2018-04-01
HLA-DRB1 is the most important locus associated with rheumatoid arthritis (RA) and anticitrullinated protein antibodies (ACPA). However, fluctuations of rheumatoid factor (RF) over the disease course have made it difficult to define fine subgroups according to consistent RF positivity for the analyses of genetic background and the levels of RF. A total of 2873 patients with RA and 2008 healthy controls were recruited. We genotyped HLA-DRB1 alleles for the participants and collected consecutive data of RF in the case subjects. In addition to RF+ and RF- subsets, we classified the RF+ subjects into group 1 (constant RF+) and group 2 (seroconversion). We compared HLA-DRB1 alleles between the RA subsets and controls and performed linear regression analysis to identify HLA-DRB1 alleles associated with maximal RF levels. Omnibus tests were conducted to assess important amino acid positions. RF positivity was 88%, and 1372 and 970 RF+ subjects were classified into groups 1 and 2, respectively. RF+ and RF- showed similar genetic associations to ACPA+ and ACPA- RA, respectively. We found that shared epitope (SE) was more enriched in group 2 than in group 1 (p = 2.0 × 10⁻⁵), and that amino acid position 11 showed a significant association between groups 1 and 2 (p = 2.7 × 10⁻⁵). These associations were independent of ACPA positivity. SE showed a tendency to be negatively correlated with RF titer (p = 0.012). HLA-DRB1*09:01, which reduces ACPA titer, was not associated with RF levels (p = 0.70). The seroconversion group was shown to have distinct genetic characteristics. The genetic architecture of RF levels is different from that of ACPA.
Statistical Learning of Origin-Specific Statically Optimal Individualized Treatment Rules
van der Laan, Mark J.; Petersen, Maya L.
2008-01-01
Consider a longitudinal observational or controlled study in which one collects chronological data over time on a random sample of subjects. The time-dependent process one observes on each subject contains time-dependent covariates, time-dependent treatment actions, and an outcome process or single final outcome of interest. A statically optimal individualized treatment rule (as introduced in van der Laan et al. (2005), Petersen et al. (2007)) is a treatment rule which at any point in time conditions on a user-supplied subset of the past, computes the future static treatment regimen that maximizes a (conditional) mean future outcome of interest, and applies the first treatment action of the latter regimen. In particular, Petersen et al. (2007) clarified that, in order to be statically optimal, an individualized treatment rule should not depend on the observed treatment mechanism. Petersen et al. (2007) further developed estimators of statically optimal individualized treatment rules based on a past capturing all confounding of past treatment history on outcome. In practice, however, one typically wishes to find individualized treatment rules responding to a user-supplied subset of the complete observed history, which may not be sufficient to capture all confounding. The current article provides an important advance on Petersen et al. (2007) by developing locally efficient double robust estimators of statically optimal individualized treatment rules responding to such a user-supplied subset of the past. However, failure to capture all confounding comes at a price; the static optimality of the resulting rules becomes origin-specific. We explain origin-specific static optimality, and discuss the practical importance of the proposed methodology. We further present the results of a data analysis in which we estimate a statically optimal rule for switching antiretroviral therapy among patients infected with resistant HIV. PMID:19122792
Muirhead, K A; Wallace, P K; Schmitt, T C; Frescatore, R L; Franco, J A; Horan, P K
1986-01-01
As the diagnostic utility of lymphocyte subset analysis has been recognized in the clinical research laboratory, a wide variety of reagents and cell preparation, staining and analysis methods have also been described. Methods that are perfectly suitable for analysis of smaller sample numbers in the biological or clinical research setting are not always appropriate and/or applicable in the setting of a high volume clinical reference laboratory. We describe here some of the specific considerations involved in choosing a method for flow cytometric analysis which minimizes sample preparation and data analysis time while maximizing sample stability, viability, and reproducibility. Monoclonal T- and B-cell reagents from three manufacturers were found to give equivalent results for a reference population of healthy individuals. This was true whether direct or indirect immunofluorescence staining was used and whether cells were prepared by Ficoll-Hypaque fractionation (FH) or by lysis of whole blood. When B cells were enumerated using a polyclonal anti-immunoglobulin reagent, less cytophilic immunoglobulin staining was present after lysis than after FH preparation. However, both preparation methods required additional incubation at 37 degrees C to obtain results concordant with monoclonal B-cell reagents. Standard reagents were chosen on the basis of maximum positive/negative separation and the availability of appropriate negative controls. The effects of collection medium and storage conditions on sample stability and reproducibility of subset analysis were also assessed. Specimens collected in heparin and stored at room temperature in buffered medium gave reproducible results for 3 days after specimen collection, using either FH or lysis as the preparation method. General strategies for instrument optimization, quality control, and biohazard containment are also discussed.
Ballabio, Davide; Consonni, Viviana; Mauri, Andrea; Todeschini, Roberto
2010-01-11
In multivariate regression and classification problems, variable selection is an important procedure used to select an optimal subset of variables with the aim of producing more parsimonious and ultimately more predictive models. Variable selection is often necessary when dealing with methodologies that produce thousands of variables, such as Quantitative Structure-Activity Relationships (QSARs) and highly dimensional analytical procedures. In this paper a novel method for variable selection for classification purposes is introduced. This method exploits the recently proposed Canonical Measure of Correlation between two sets of variables (CMC index). The CMC index is in this case calculated for two specific sets of variables, the former being comprised of the independent variables and the latter of the unfolded class matrix. The CMC values, calculated by considering one variable at a time, can be sorted, yielding a ranking of the variables on the basis of their class discrimination capabilities. Alternatively, the CMC index can be calculated for all the possible combinations of variables and the variable subset with the maximal CMC can be selected, but this procedure is computationally more demanding and the classification performance of the selected subset is not always the best one. The effectiveness of the CMC index in selecting variables with discriminative ability was compared with that of other well-known strategies for variable selection, such as the Wilks' Lambda, the VIP index based on Partial Least Squares-Discriminant Analysis, and the selection provided by classification trees. A variable Forward Selection based on the CMC index was finally used in conjunction with Linear Discriminant Analysis. This approach was tested on several chemical data sets. The results obtained were encouraging.
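The per-variable ranking idea described above can be illustrated with a small stand-in: the sketch below scores each variable one at a time by its correlation ratio with the class labels (a proxy for the CMC index, whose exact formula is not reproduced here) and sorts the variables by that score.

```python
# Illustrative sketch only: ranking variables one at a time by how well they
# separate classes. The correlation ratio (eta^2) between each variable and
# the class labels stands in for the CMC index.
import numpy as np
from sklearn.datasets import load_wine

X, y = load_wine(return_X_y=True)

def correlation_ratio(x, y):
    """eta^2: between-class sum of squares over total sum of squares."""
    grand_mean = x.mean()
    ss_total = ((x - grand_mean) ** 2).sum()
    ss_between = sum(len(x[y == c]) * (x[y == c].mean() - grand_mean) ** 2
                     for c in np.unique(y))
    return ss_between / ss_total

scores = np.array([correlation_ratio(X[:, j], y) for j in range(X.shape[1])])
ranking = np.argsort(scores)[::-1]
print("variables ranked by class-discrimination score:", ranking)
print("scores:", np.round(scores[ranking], 3))
```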
Reliability and cost: A sensitivity analysis
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1991-01-01
In the design phase of a system, how a design engineer or manager chooses between a subsystem with 0.990 reliability and a more costly subsystem with 0.995 reliability is examined, along with the justification of the increased cost. High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds, since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. Neither the cost of the subsystem nor the expected cost due to subsystem failure should be considered separately; rather, the total of the two costs should be minimized.
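The point is easy to check numerically. The sketch below uses invented costs, reliabilities, and failure consequences; it is not the paper's sensitivity analysis.

```python
# Toy numerical check (invented numbers): choose the subsystem that minimizes
# subsystem cost plus the expected cost incurred if the subsystem fails.
candidates = {
    # name: (reliability, subsystem cost in $M)
    "A": (0.990, 4.0),
    "B": (0.995, 6.5),
}
cost_of_failure = 300.0  # $M consequence of a subsystem failure (assumed)

for name, (reliability, cost) in candidates.items():
    expected_total = cost + (1.0 - reliability) * cost_of_failure
    print(f"subsystem {name}: total expected cost = {expected_total:.2f} $M")
# With these numbers, A: 4.0 + 0.010*300 = 7.0 and B: 6.5 + 0.005*300 = 8.0,
# so the cheaper, less reliable subsystem is the better buy in this example.
```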
The genome architecture of the Collaborative Cross mouse genetic reference population.
2012-02-01
The Collaborative Cross Consortium reports here on the development of a unique genetic resource population. The Collaborative Cross (CC) is a multiparental recombinant inbred panel derived from eight laboratory mouse inbred strains. Breeding of the CC lines was initiated at multiple international sites using mice from The Jackson Laboratory. Currently, this innovative project is breeding independent CC lines at the University of North Carolina (UNC), at Tel Aviv University (TAU), and at Geniad in Western Australia (GND). These institutions aim to make publicly available the completed CC lines and their genotypes and sequence information. We genotyped, and report here, results from 458 extant lines from UNC, TAU, and GND using a custom genotyping array with 7500 SNPs designed to be maximally informative in the CC and used a novel algorithm to infer inherited haplotypes directly from hybridization intensity patterns. We identified lines with breeding errors and cousin lines generated by splitting incipient lines into two or more cousin lines at early generations of inbreeding. We then characterized the genome architecture of 350 genetically independent CC lines. Results showed that founder haplotypes are inherited at the expected frequency, although we also consistently observed highly significant transmission ratio distortion at specific loci across all three populations. On chromosome 2, there is significant overrepresentation of WSB/EiJ alleles, and on chromosome X, there is a large deficit of CC lines with CAST/EiJ alleles. Linkage disequilibrium decays as expected and we saw no evidence of gametic disequilibrium in the CC population as a whole or in random subsets of the population. Gametic equilibrium in the CC population is in marked contrast to the gametic disequilibrium present in a large panel of classical inbred strains. Finally, we discuss access to the CC population and to the associated raw data describing the genetic structure of individual lines. Integration of rich phenotypic and genomic data over time and across a wide variety of fields will be vital to delivering on one of the key attributes of the CC, a common genetic reference platform for identifying causative variants and genetic networks determining traits in mammals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fallahpoor, M; Abbasi, M; Sen, A
Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of radionuclide distribution and attenuation coefficients and 2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed using SPECT-CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (gamma: 103 keV; beta: 0.81 MeV). A SPECT/CT scan was performed with the Siemens Simbia-T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image-degrading factors including body attenuation, Compton scattering and the collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP toolkit segmentation on the CT image. GATE was then used for internal dose calculation. The Specific Absorbed Fractions (SAFs) and S-values were reported following the MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs, as expected. The S-value for lung is the highest after spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT-CT images and Monte Carlo simulation for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based methods or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning on a day-to-day basis.
Impact of Time-of-Flight on PET Tumor Detection
Kadrmas, Dan J.; Casey, Michael E.; Conti, Maurizio; Jakoby, Bjoern W.; Lois, Cristina; Townsend, David W.
2009-01-01
Time-of-flight (TOF) PET uses very fast detectors to improve localization of events along coincidence lines-of-response. This information is then utilized to improve the tomographic reconstruction. This work evaluates the effect of TOF upon an observer's performance for detecting and localizing focal warm lesions in noisy PET images. Methods An advanced anthropomorphic lesion-detection phantom was scanned 12 times over 3 days on a prototype TOF PET/CT scanner (Siemens Medical Solutions). The phantom was devised to mimic whole-body oncologic 18F-FDG PET imaging, and a number of spheric lesions (diameters 6–16 mm) were distributed throughout the phantom. The data were reconstructed with the baseline line-of-response ordered-subsets expectation-maximization algorithm, with the baseline algorithm plus point spread function model (PSF), baseline plus TOF, and with both PSF+TOF. The lesion-detection performance of each reconstruction was compared and ranked using localization receiver operating characteristics (LROC) analysis with both human and numeric observers. The phantom results were then subjectively compared to 2 illustrative patient scans reconstructed with PSF and with PSF+TOF. Results Inclusion of TOF information provides a significant improvement in the area under the LROC curve compared to the baseline algorithm without TOF data (P = 0.002), providing a degree of improvement similar to that obtained with the PSF model. Use of both PSF+TOF together provided a cumulative benefit in lesion-detection performance, significantly outperforming either PSF or TOF alone (P < 0.002). Example patient images reflected the same image characteristics that gave rise to improved performance in the phantom data. Conclusion Time-of-flight PET provides a significant improvement in observer performance for detecting focal warm lesions in a noisy background. These improvements in image quality can be expected to improve performance for the clinical tasks of detecting lesions and staging disease. Further study in a large clinical population is warranted to assess the benefit of TOF for various patient sizes and count levels, and to demonstrate effective performance in the clinical environment. PMID:19617317
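For readers unfamiliar with the reconstruction algorithms named above, the following sketch shows the basic MLEM multiplicative update on a toy one-dimensional system; OSEM applies the same update over ordered subsets of the projection bins to accelerate convergence. The system matrix and data are synthetic and do not represent any scanner model.

```python
# Sketch of the basic MLEM update underlying the OSEM reconstructions
# discussed above; a toy system matrix stands in for a real scanner model.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bins = 32, 64
A = rng.random((n_bins, n_pixels))          # toy system matrix a_ij
x_true = rng.random(n_pixels) * 10
y = rng.poisson(A @ x_true)                 # noisy projection data

x = np.ones(n_pixels)                       # uniform initial image
sensitivity = A.sum(axis=0)                 # s_j = sum_i a_ij
for _ in range(50):                         # MLEM iterations
    expected = A @ x                        # forward projection (Ax)_i
    ratio = y / np.maximum(expected, 1e-12) # measured / expected counts
    x *= (A.T @ ratio) / sensitivity        # multiplicative EM update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```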
Maximizing Your Grant Development: A Guide for CEOs.
ERIC Educational Resources Information Center
Snyder, Thomas
1993-01-01
Since most private and public sources of external funding generally expect increased effort and accountability, Chief Executive Officers (CEOs) at two-year colleges must inform faculty and staff that if they do not expend extra effort their college will not receive significant grants. The CEO must also work with the college's professional…
A Prelude to Strategic Management of an Online Enterprise
ERIC Educational Resources Information Center
Pan, Cheng-Chang; Sivo, Stephen A.; Goldsmith, Clair
2016-01-01
Strategic management is expected to allow an organization to maximize given constraints and optimize limited resources in an effort to create a competitive advantage that leads to better results. For both for-profit and non-profit organizations, such strategic thinking helps the management make informed decisions and sustain long-term planning. To…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-22
... state-operated permit banks for the purpose of maximizing the fishing opportunities made available by... activity to regain their DAS for that trip, providing another opportunity to profit from the DAS that would... entities. Further, no reductions in profit are expected for any small entities, so the profitability...
Smooth Transitions: Helping Students with Autism Spectrum Disorder Navigate the School Day
ERIC Educational Resources Information Center
Hume, Kara; Sreckovic, Melissa; Snyder, Kate; Carnahan, Christina R.
2014-01-01
In school, students are expected to navigate different types of transitions every day, including those between instructors, subjects, and instructional formats, as well as classrooms. Despite the routines that many teachers develop to facilitate efficient transitions and maximize instructional time, many learners with ASD continue to struggle with…
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
A Benefit-Maximization Solution to Our Faculty Promotion and Tenure Process
ERIC Educational Resources Information Center
Barat, Somjit; Harvey, Hanafiah
2015-01-01
Tenure-track/tenured faculty at higher education institutions are expected to teach, conduct research and provide service as part of their promotion and tenure process, the relative importance of each component varying with the position and/or the university. However, based on the author's personal experience, feedback received from several…
"At Least One" Way to Add Value to Conferences
ERIC Educational Resources Information Center
Wilson, Warren J.
2005-01-01
In "EDUCAUSE Quarterly," Volume 25, Number 3, 2002, Joan Getman and Nikki Reynolds published an excellent article about getting the most from a conference. They listed 10 strategies that a conference attendee could use to maximize the conference's yield in information and motivation: (1) Plan ahead; (2) Set realistic expectations; (3) Use e-mail…
ERIC Educational Resources Information Center
Tseng, Hung Wei; Yeh, Hsin-Te
2013-01-01
Teamwork factors can facilitate team members, committing themselves to the purposes of maximizing their own and others' contributions and successes. It is important for online instructors to comprehend students' expectations on learning collaboratively. The aims of this study were to investigate online collaborative learning experiences and to…
A Probability Based Framework for Testing the Missing Data Mechanism
ERIC Educational Resources Information Center
Lin, Johnny Cheng-Han
2013-01-01
Many methods exist for imputing missing data but fewer methods have been proposed to test the missing data mechanism. Little (1988) introduced a multivariate chi-square test for the missing completely at random data mechanism (MCAR) that compares observed means for each pattern with expectation-maximization (EM) estimated means. As an alternative,…
Effects of Missing Data Methods in Structural Equation Modeling with Nonnormal Longitudinal Data
ERIC Educational Resources Information Center
Shin, Tacksoo; Davison, Mark L.; Long, Jeffrey D.
2009-01-01
The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation and maximization algorithm with a…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-03
... the status quo. The action is expected to maximize the profitability for the spiny dogfish fishery... possible commercial quotas by not making a deduction from the ACL accounting for management uncertainty...) in 2015; however, not accounting for management uncertainty would have increased the risk of...
ERIC Educational Resources Information Center
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses
ERIC Educational Resources Information Center
Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu
2011-01-01
Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…
Modeling Adversaries in Counterterrorism Decisions Using Prospect Theory.
Merrick, Jason R W; Leclerc, Philip
2016-04-01
Counterterrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. However, each of these approaches assumes that the attacker is maximizing its utility. Experimental research shows that human beings do not make decisions by maximizing expected utility without aid, but instead deviate in specific ways such as loss aversion or likelihood insensitivity. In this article, we modify existing methods for counterterrorism decisions. We keep expected utility as the defender's paradigm to seek the rational decision, but we use prospect theory to solve for the attacker's decision to descriptively model the attacker's loss aversion and likelihood insensitivity. We study the effects of this approach in a critical decision: whether to screen containers entering the United States for radioactive materials. We find that the defender's optimal decision is sensitive to the attacker's levels of loss aversion and likelihood insensitivity, meaning that understanding such descriptive decision effects is important in making such decisions. © 2014 Society for Risk Analysis.
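A minimal sketch of the descriptive attacker model follows, using the standard prospect-theory value and probability-weighting functions in a simplified (non-cumulative) form with illustrative parameters; it is not the authors' implementation.

```python
# Hedged sketch of a prospect-theory attacker evaluation: loss aversion via
# lambda in the value function, likelihood insensitivity via gamma in the
# probability weighting function. Parameter values are illustrative only.
def pt_value(x, alpha=0.88, lam=2.25):
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def pt_weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect(outcomes):
    """Prospect value of a list of (probability, payoff) pairs (simplified form)."""
    return sum(pt_weight(p) * pt_value(x) for p, x in outcomes)

# Attack succeeds with probability 0.05 (payoff 100), otherwise fails (payoff -10).
attack = [(0.05, 100.0), (0.95, -10.0)]
print("expected value:", sum(p * x for p, x in attack))
print("prospect value:", prospect(attack))
```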
Steganalysis feature improvement using expectation maximization
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which focuses on both blind identification, in which only normal images are available for training, and multi-class identification, in which both the clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (expectation maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is addressed for both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with embedding percentages between 1% and 10%. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
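A minimal sketch of the mixture-model idea for the blind (anomaly-detection) case is given below, using scikit-learn's EM-fitted Gaussian mixture on synthetic feature vectors; the multi-class case would instead fit mixture components for clean images and for each embedding rate.

```python
# Sketch in the spirit of the EM-with-mixture-models classifier described
# above: fit a Gaussian mixture (estimated by EM) to features from "clean"
# images only, then flag images whose features have low likelihood.
# Features and thresholds here are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
clean_features = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in feature vectors
stego_features = rng.normal(0.4, 1.0, size=(50, 8))    # slightly shifted by embedding

gmm = GaussianMixture(n_components=3, random_state=0).fit(clean_features)

threshold = np.percentile(gmm.score_samples(clean_features), 5)  # ~5% false-alarm rate
suspect = gmm.score_samples(stego_features) < threshold
print(f"flagged {suspect.sum()} of {len(stego_features)} embedded images")
```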
Liu, Haiguang; Spence, John C H
2014-11-01
Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these 'stills'. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated.
NASA Astrophysics Data System (ADS)
Cardoso, T.; Oliveira, M. D.; Barbosa-Póvoa, A.; Nickel, S.
2015-05-01
Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of services location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment.
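The ε-constraint idea can be sketched on a toy instance: maximize expected health gains subject to a cost cap, then sweep the cap to trace the trade-off. The model below is invented for illustration (and uses the plain rather than augmented ε-constraint method), with PuLP as a generic MILP solver.

```python
# Illustrative epsilon-constraint sketch (not the authors' model): maximize
# expected health gains from opening candidate long-term-care services while
# capping expected cost at epsilon, then sweep epsilon. Data are invented.
import pulp

services = range(6)
health_gain = [30, 22, 18, 40, 15, 27]     # expected health gains per service (assumed)
cost       = [50, 30, 25, 70, 20, 45]      # expected costs per service (assumed)

def solve(eps):
    prob = pulp.LpProblem("ltc_network", pulp.LpMaximize)
    open_s = [pulp.LpVariable(f"open_{i}", cat="Binary") for i in services]
    prob += pulp.lpSum(health_gain[i] * open_s[i] for i in services)   # objective 1
    prob += pulp.lpSum(cost[i] * open_s[i] for i in services) <= eps   # objective 2 as constraint
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(prob.objective), [i for i in services if open_s[i].value() > 0.5]

for eps in (60, 100, 150, 240):
    gain, opened = solve(eps)
    print(f"budget {eps:>3}: health gain {gain:.0f}, open services {opened}")
```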
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bar-Shalom, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; 
Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Rajaraman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyria, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; 
Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, F; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2009-01-30
Models of maximal flavor violation (MxFV) in elementary particle physics may contain at least one new scalar SU(2) doublet field Φ_FV = (η⁰, η⁺) that couples the first and third generation quarks (q₁, q₃) via a Lagrangian term L_FV = ξ₁₃ Φ_FV q₁ q₃. These models have a distinctive signature of same-charge top-quark pairs and evade flavor-changing limits from meson mixing measurements. Data corresponding to 2 fb⁻¹ collected by the Collider Detector at Fermilab II detector in pp̄ collisions at √s = 1.96 TeV are analyzed for evidence of the MxFV signature. For a neutral scalar η⁰ with m_η⁰ = 200 GeV/c² and coupling ξ₁₃ = 1, approximately 11 signal events are expected over a background of 2.1 ± 1.8 events. Three events are observed in the data, consistent with background expectations, and limits are set on the coupling ξ₁₃ for m_η⁰ = 180-300 GeV/c².
A CCR2+ myeloid cell niche required for pancreatic β cell growth
Mussar, Kristin; Pardike, Stephanie; Hohl, Tobias M.; Hardiman, Gary; Cirulli, Vincenzo
2017-01-01
Organ-specific patterns of myeloid cells may contribute to tissue-specific growth and/or regenerative potentials. The perinatal stage of pancreas development marks a time characterized by maximal proliferation of pancreatic islets, ensuring the maintenance of glucose homeostasis throughout life. Ontogenically distinct CX3CR1+ and CCR2+ macrophage populations have been reported in the adult pancreas, but their functional contribution to islet cell growth at birth remains unknown. Here, we uncovered a temporally restricted requirement for CCR2+ myeloid cells in the perinatal proliferation of the endocrine pancreatic epithelium. CCR2+ macrophages are transiently enriched over CX3CR1+ subsets in the neonatal pancreas through both local expansion and recruitment of immature precursors. Using CCR2-specific depletion models, we show that loss of this myeloid population leads to a striking reduction in β cell proliferation, dysfunctional islet phenotypes, and glucose intolerance in newborns. Replenishment of pancreatic CCR2+ myeloid compartments by adoptive transfer rescues these defects. Gene profiling identifies pancreatic CCR2+ myeloid cells as a prominent source of IGF2, which contributes to IGF1R-mediated islet proliferation. These findings uncover proproliferative functions of CCR2+ myeloid subsets and identify myeloid-dependent regulation of IGF signaling as a local cue supporting pancreatic proliferation. PMID:28768911
NASA Astrophysics Data System (ADS)
Fukao, Takeshi; Kurima, Shunsuke; Yokota, Tomomi
2018-05-01
This paper develops an abstract theory for subdifferential operators to give existence and uniqueness of solutions to the initial-boundary problem (P) for the nonlinear diffusion equation in an unbounded domain $\Omega\subset\mathbb{R}^N$ ($N\in{\mathbb N}$), written as \[ \frac{\partial u}{\partial t} + (-\Delta+1)\beta(u) = g \quad \mbox{in}\ \Omega\times(0, T), \] which represents the porous media, the fast diffusion equations, etc., where $\beta$ is a single-valued maximal monotone function on $\mathbb{R}$, and $T>0$. Existence and uniqueness for (P) were directly proved under a growth condition for $\beta$ even though the Stefan problem was excluded from examples of (P). This paper completely removes the growth condition for $\beta$ by confirming Cauchy's criterion for solutions of the following approximate problem (P)$_{\varepsilon}$ with approximate parameter $\varepsilon>0$: \[ \frac{\partial u_{\varepsilon}}{\partial t} + (-\Delta+1)(\varepsilon(-\Delta+1)u_{\varepsilon} + \beta(u_{\varepsilon}) + \pi_{\varepsilon}(u_{\varepsilon})) = g \quad \mbox{in}\ \Omega\times(0, T), \] which is called the Cahn--Hilliard system, even if $\Omega \subset \mathbb{R}^N$ ($N \in \mathbb{N}$) is an unbounded domain. Moreover, it can be seen that the Stefan problem is covered in the framework of this paper.
Optimal flight initiation distance.
Cooper, William E; Frederick, William G
2007-01-07
Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
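The optimization itself is straightforward once benefit, cost, and risk functions are specified. The sketch below uses invented functional forms, not the authors' derivations, simply to show how the optimal flight initiation distance emerges as the maximizer of expected fitness.

```python
# Sketch of the optimization idea with made-up functional forms: expected
# fitness at flight initiation distance d is survival probability times
# (initial fitness plus benefit gained by delaying flight) minus escape cost.
import numpy as np

d = np.linspace(0.1, 30, 500)          # candidate flight initiation distances (m)
F0 = 1.0                               # initial expected fitness
benefit  = 0.6 * np.exp(-d / 8.0)      # foraging gain from letting the predator approach
risk     = np.exp(-d / 4.0)            # probability of being killed if fleeing at distance d
escape_cost = 0.02 * d                 # cost grows with earlier, longer flights

fitness = (1.0 - risk) * (F0 + benefit) - escape_cost
d_opt = d[np.argmax(fitness)]
print(f"optimal flight initiation distance ~ {d_opt:.1f} m")
```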
Graham, Jeffrey K; Smith, Myron L; Simons, Andrew M
2014-07-22
All organisms are faced with environmental uncertainty. Bet-hedging theory expects unpredictable selection to result in the evolution of traits that maximize the geometric-mean fitness even though such traits appear to be detrimental over the shorter term. Despite the centrality of fitness measures to evolutionary analysis, no direct test of the geometric-mean fitness principle exists. Here, we directly distinguish between predictions of competing fitness maximization principles by testing Cohen's 1966 classic bet-hedging model using the fungus Neurospora crassa. The simple prediction is that propagule dormancy will evolve in proportion to the frequency of 'bad' years, whereas the prediction of the alternative arithmetic-mean principle is the evolution of zero dormancy as long as the expectation of a bad year is less than 0.5. Ascospore dormancy fraction in N. crassa was allowed to evolve under five experimental selection regimes that differed in the frequency of unpredictable 'bad years'. Results were consistent with bet-hedging theory: final dormancy fraction in 12 genetic lineages across 88 independently evolving samples was proportional to the frequency of bad years, and evolved both upwards and downwards as predicted from a range of starting dormancy fractions. These findings suggest that selection results in adaptation to variable rather than to expected environments. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
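Cohen-style bet-hedging logic can be reproduced in a few lines. The sketch below uses assumed fitness values (not the experimental parameters) to show that the dormancy fraction maximizing geometric-mean fitness rises with the frequency of bad years, whereas the arithmetic-mean criterion favors zero dormancy for the same settings.

```python
# Toy version of Cohen's bet-hedging logic (assumed fitness values): long-run
# growth is the geometric mean of year-to-year fitness, so the optimal
# dormancy fraction rises with the frequency of bad years, while the
# arithmetic mean favors zero dormancy for every frequency tried here.
import numpy as np

dormancy = np.linspace(0, 0.99, 200)
w_good, w_bad, w_dormant = 2.0, 0.05, 0.9   # fitness of active vs dormant spores (assumed)

for p_bad in (0.1, 0.3, 0.5):
    fit_good = (1 - dormancy) * w_good + dormancy * w_dormant
    fit_bad  = (1 - dormancy) * w_bad  + dormancy * w_dormant
    geo = fit_good ** (1 - p_bad) * fit_bad ** p_bad     # geometric-mean fitness
    arith = (1 - p_bad) * fit_good + p_bad * fit_bad     # arithmetic-mean fitness
    print(f"p(bad)={p_bad:.1f}: geometric-mean optimum at dormancy "
          f"{dormancy[np.argmax(geo)]:.2f}, arithmetic-mean optimum at "
          f"{dormancy[np.argmax(arith)]:.2f}")
```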
Defender-Attacker Decision Tree Analysis to Combat Terrorism.
Garcia, Ryan J B; von Winterfeldt, Detlof
2016-12-01
We propose a methodology, called defender-attacker decision tree analysis, to evaluate defensive actions against terrorist attacks in a dynamic and hostile environment. Like most game-theoretic formulations of this problem, we assume that the defenders act rationally by maximizing their expected utility or minimizing their expected costs. However, we do not assume that attackers maximize their expected utilities. Instead, we encode the defender's limited knowledge about the attacker's motivations and capabilities as a conditional probability distribution over the attacker's decisions. We apply this methodology to the problem of defending against possible terrorist attacks on commercial airplanes, using one of three weapons: infrared-guided MANPADS (man-portable air defense systems), laser-guided MANPADS, or visually targeted RPGs (rocket propelled grenades). We also evaluate three countermeasures against these weapons: DIRCMs (directional infrared countermeasures), perimeter control around the airport, and hardening airplanes. The model includes deterrence effects, the effectiveness of the countermeasures, and the substitution of weapons and targets once a specific countermeasure is selected. It also includes a second stage of defensive decisions after an attack occurs. Key findings are: (1) due to the high cost of the countermeasures, not implementing countermeasures is the preferred defensive alternative for a large range of parameters; (2) if the probability of an attack and the associated consequences are large, a combination of DIRCMs and ground perimeter control are preferred over any single countermeasure. © 2016 Society for Risk Analysis.
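A stylized version of the evaluation logic is sketched below with wholly invented numbers: the attacker's weapon choice is a probability distribution conditioned on the defender's countermeasure (capturing deterrence and substitution), and the defender picks the countermeasure with the lowest expected total cost.

```python
# Stylized sketch (all numbers invented): evaluate each countermeasure by its
# direct cost plus the expected loss, where the attacker's weapon choice is a
# conditional probability distribution rather than a best response.
countermeasure_cost = {"none": 0.0, "DIRCM": 25.0, "perimeter": 10.0}   # $B over the horizon
p_attack = 0.05                                                          # probability of an attempt

# P(weapon | countermeasure) and P(success | weapon, countermeasure), illustrative only
weapon_mix = {
    "none":      {"IR-MANPADS": 0.6, "laser-MANPADS": 0.2, "RPG": 0.2},
    "DIRCM":     {"IR-MANPADS": 0.1, "laser-MANPADS": 0.4, "RPG": 0.5},
    "perimeter": {"IR-MANPADS": 0.5, "laser-MANPADS": 0.3, "RPG": 0.2},
}
p_success = {
    "none":      {"IR-MANPADS": 0.5, "laser-MANPADS": 0.5, "RPG": 0.3},
    "DIRCM":     {"IR-MANPADS": 0.1, "laser-MANPADS": 0.4, "RPG": 0.3},
    "perimeter": {"IR-MANPADS": 0.4, "laser-MANPADS": 0.4, "RPG": 0.1},
}
consequence = 50.0   # $B loss from a successful attack

for cm, cost in countermeasure_cost.items():
    expected_loss = p_attack * sum(weapon_mix[cm][w] * p_success[cm][w] * consequence
                                   for w in weapon_mix[cm])
    print(f"{cm:>9}: expected total cost = {cost + expected_loss:.2f} $B")
```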
Optimal rotation sequences for active perception
NASA Astrophysics Data System (ADS)
Nakath, David; Rachuy, Carsten; Clemens, Joachim; Schill, Kerstin
2016-05-01
One major objective of autonomous systems navigating in dynamic environments is gathering information needed for self-localization, decision making, and path planning. To account for this, such systems are usually equipped with multiple types of sensors. As these sensors often have a limited field of view and a fixed orientation, the task of active perception breaks down to the problem of calculating alignment sequences which maximize the information gain regarding expected measurements. Action sequences that rotate the system according to the calculated optimal patterns then have to be generated. In this paper we present an approach for calculating these sequences for an autonomous system equipped with multiple sensors. We use a particle filter for multi-sensor fusion and state estimation. The planning task is modeled as a Markov decision process (MDP), where the system decides at each step what actions to perform next. The optimal control policy, which provides the best action depending on the current estimated state, maximizes the expected cumulative reward. The latter is computed from the expected information gain of all sensors over time using value iteration. The algorithm is applied to a manifold representation of the joint space of rotation and time. We show the performance of the approach in a spacecraft navigation scenario where the information gain is changing over time, caused by the dynamic environment and the continuous movement of the spacecraft.
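The planning step can be illustrated with a toy value-iteration loop over discretized sensor orientations; the reward below is a made-up stand-in for the expected information gain, and none of this reflects the authors' spacecraft model.

```python
# Minimal value-iteration sketch: states are discretized sensor orientations,
# actions rotate the sensor, and the reward approximates the expected
# information gain of observing in a given orientation (toy stand-in).
import numpy as np

n_orient = 12                                   # discretized sensor orientations
actions = (-1, 0, +1)                           # rotate left, hold, rotate right
info_gain = np.abs(np.sin(np.linspace(0, 2 * np.pi, n_orient, endpoint=False)))
gamma, theta = 0.95, 1e-6                       # discount factor, convergence tolerance

V = np.zeros(n_orient)
while True:
    Q = np.array([[info_gain[(s + a) % n_orient] + gamma * V[(s + a) % n_orient]
                   for a in actions] for s in range(n_orient)])
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < theta:
        break
    V = V_new

policy = [actions[i] for i in Q.argmax(axis=1)]
print("optimal rotation per orientation:", policy)
```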
Coding for Parallel Links to Maximize the Expected Value of Decodable Messages
NASA Technical Reports Server (NTRS)
Klimesh, Matthew A.; Chang, Christopher S.
2011-01-01
When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from spacecraft under certain conditions.
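Part (2) of the innovation can be sketched as a small linear program. The example below covers only the simple message-splitting strategy (not the cross-link coding that motivates the work) and uses invented link probabilities, capacities, message sizes, and values.

```python
# Simplified LP sketch: split messages across links so that the expected value
# of the bits arriving over working links is maximized. Illustrative numbers.
import numpy as np
from scipy.optimize import linprog

p_work   = np.array([0.9, 0.7, 0.5])      # probability each link is working
capacity = np.array([40., 60., 80.])      # bits each link can carry
size     = np.array([50., 30., 70.])      # message sizes in bits
value    = np.array([10., 9., 4.])        # message values (proportional to size sent)

n_links, n_msgs = len(p_work), len(size)
# decision variable x[i, m] = bits of message m sent on link i, flattened row-major
c = -np.repeat(p_work, n_msgs) * np.tile(value / size, n_links)   # expected value per bit, negated

A_ub, b_ub = [], []
for i in range(n_links):                   # each link's capacity
    row = np.zeros(n_links * n_msgs); row[i * n_msgs:(i + 1) * n_msgs] = 1
    A_ub.append(row); b_ub.append(capacity[i])
for m in range(n_msgs):                    # cannot send more than each message's size in total
    row = np.zeros(n_links * n_msgs); row[m::n_msgs] = 1
    A_ub.append(row); b_ub.append(size[m])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print("expected decodable value:", -res.fun)
print("allocation (links x messages):\n", res.x.reshape(n_links, n_msgs).round(1))
```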
Beauchaine, Theodore P; Constantino, John N
2017-09-11
In psychopathology research, endophenotypes are a subset of biomarkers that indicate genetic vulnerability independent of clinical state. To date, an explicit expectation is that endophenotypes be specific to single disorders. We evaluate this expectation considering recent advances in psychiatric genetics, recognition that transdiagnostic vulnerability traits are often more useful than clinical diagnoses in psychiatric genetics, and appreciation for etiological complexity across genetic, neural, hormonal and environmental levels of analysis. We suggest that the disorder-specificity requirement of endophenotypes be relaxed, that neural functions are preferable to behaviors as starting points in searches for endophenotypes, and that future research should focus on interactive effects of multiple endophenotypes on complex psychiatric disorders, some of which are 'phenocopies' with distinct etiologies.
Oyster reef restoration in the northern Gulf of Mexico: extent, methods and outcomes
LaPeyre, Megan K.; Furlong, Jessica N.; Brown, Laura A.; Piazza, Bryan P.; Brown, Ken
2014-01-01
Shellfish reef restoration to support ecological services has become more common in recent decades, driven by increasing awareness of the functional decline of shellfish systems. Maximizing restoration benefits and increasing efficiency of shellfish restoration activities would greatly benefit from understanding and measurement of system responses to management activities. This project (1) compiles a database of northern Gulf of Mexico inshore artificial oyster reefs created for restoration purposes, and (2) quantitatively assesses a subset of reefs to determine project outcomes. We documented 259 artificial inshore reefs created for ecological restoration. Information on reef material, reef design and monitoring was located for 94, 43 and 20% of the reefs identified. To quantify restoration success, we used diver surveys to quantitatively sample oyster density and substrate volume of 11 created reefs across the coast (7 with rock; 4 with shell), paired with 7 historic reefs. Reefs were defined as fully successful if there were live oysters, and partially successful if there was hard substrate. Of these created reefs, 73% were fully successful, while 82% were partially successful. These data highlight that critical information related to reef design, cost, and success remain difficult to find and are generally inaccessible or lost, ultimately hindering efforts to maximize restoration success rates. Maintenance of reef creation information data, development of standard reef performance measures, and inclusion of material and reef design testing within reef creation projects would be highly beneficial in implementing adaptive management. Adaptive management protocols seek specifically to maximize short and long-term restoration success, but are critically dependent on tracking and measuring system responses to management activities.
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.
Liu, Shu-Ming; Wang, Shi-Jun; Song, Si-Yao; Zou, Yong; Wang, Jun-Ru; Sun, Bing-Yin
Great variations have been found in the composition and content of the essential oil of Zanthoxylum bungeanum Maxim. (Rutaceae), resulting from various factors such as harvest time, drying and extraction methods (Huang et al., 2006; Shao et al., 2013), and the solvent and herbal parts used (Zhang, 1996; Cao and Zhang, 2010; Wang et al., 2011). However, in terms of artificial introduction and cultivation, there is little research on the chemical composition of essential oil extracted from Z. bungeanum Maxim. cultivars that have been introduced from different origins. In this study, the composition and content of essential oil from six cultivars (I-VI) were investigated. The cultivars had been introduced and cultivated for 11 years under the same cultivation conditions. They were as follows: Qin'an (I), originally introduced from Qin'an City in Gansu Province; Dahongpao A (II), from She County in Hebei Province; Dahongpao B (III), from Fuping County; Dahongpao C (IV), from Tongchuan City; Meifengjiao (V), from Feng County; and Shizitou (VI), from Hancheng City, in Shaanxi Province, China. This research is expected to provide a theoretical basis for further introduction, cultivation, and commercial development of Z. bungeanum Maxim.
NASA Astrophysics Data System (ADS)
Atalay, Bora; Berker, A. Nihat
2018-05-01
Discrete-spin systems with maximally random nearest-neighbor interactions that can be symmetric or asymmetric, ferromagnetic or antiferromagnetic, including off-diagonal disorder, are studied for the number of states q = 3, 4 in d dimensions. We use renormalization-group theory that is exact for hierarchical lattices and approximate (Migdal-Kadanoff) for hypercubic lattices. For all d > 1 and all noninfinite temperatures, the system eventually renormalizes to a random single state, thus signaling q × q degenerate ordering. Note that this is the maximally degenerate ordering. For high-temperature initial conditions, the system crosses over to this highly degenerate ordering only after spending many renormalization-group iterations near the disordered (infinite-temperature) fixed point. Thus, a temperature range of short-range disorder in the presence of long-range order is identified, as previously seen in underfrustrated Ising spin-glass systems. The entropy is calculated for all temperatures, behaves similarly for ferromagnetic and antiferromagnetic interactions, and shows a derivative maximum at the short-range disordering temperature. In sharp and immediate contrast with the infinitesimally higher dimension 1 + ɛ, the system is, as expected, disordered at all temperatures for d = 1.
Frick, Winifred F; Hayes, John P; Heady, Paul A
2009-01-01
Nested patterns of community composition exist when species at depauperate sites are subsets of those occurring at sites with more species. Nested subset analysis provides a framework for analyzing species occurrences to determine non-random patterns in community composition and potentially identify mechanisms that may shape faunal assemblages. We examined nested subset structure of desert bat assemblages on 20 islands in the southern Gulf of California and at 27 sites along the Baja California peninsula coast, the presumable source pool for the insular faunas. Nested structure was analyzed using a conservative null model that accounts for expected variation in species richness and species incidence across sites (fixed row and column totals). Associations of nestedness and island traits, such as size and isolation, as well as species traits related to mobility, were assessed to determine the potential role of differential extinction and immigration abilities as mechanisms of nestedness. Bat faunas were significantly nested in both the insular and terrestrial landscape and island size was significantly correlated with nested structure, such that species on smaller islands tended to be subsets of species on larger islands, suggesting that differential extinction vulnerabilities may be important in shaping insular bat faunas. The role of species mobility and immigration abilities is less clearly associated with nestedness in this system. Nestedness in the terrestrial landscape is likely due to stochastic processes related to random placement of individuals and this may also influence nested patterns on islands, but additional data on abundances will be necessary to distinguish among these potential mechanisms.
F-35 Joint Strike Fighter (JSF) Program
2012-02-16
Operational Test and Evaluation (IOT&E), a subset of SDD.61 The eight partner countries are expected to purchase hundreds of F-35s, with the United...Netherlands have agreed to participate in the IOT&E program. UK, the senior F-35 partner, will have the strongest participation in the IOT&E phase...testing. (Telephone conversation with OSD/AT&L, October 3, 2007.) Other partner nations are still weighing their option to participate in the IOT&E
Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A
2014-12-01
Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design, such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected value of additional information and the expected cost of the trial. The aim was to apply value of information methods to the results of a pilot study on catheter securement devices in order to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit, corresponding to a 130% return-on-investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide lower net benefit with a return-on-investment of 79%. Cost-effectiveness and value of information analyses were based on the data from a single pilot trial, which might affect the accuracy of our uncertainty estimation. Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.
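As a rough illustration of the value-of-information machinery described above, the sketch below computes the per-patient and population expected value of perfect information (EVPI) from Monte Carlo samples of net benefit. The four strategy means, the noise level, and the population size are invented for illustration and are not the trial's data; the full analysis in the study also needs EVSI and trial-cost terms, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000

# Hypothetical Monte Carlo samples of net monetary benefit (per patient)
# for four securement strategies; means and SDs are made up for illustration.
nb = np.column_stack([
    rng.normal(100, 60, n_sim),   # standard polyurethane
    rng.normal(110, 60, n_sim),   # tissue adhesive
    rng.normal(105, 60, n_sim),   # bordered polyurethane
    rng.normal(108, 60, n_sim),   # sutureless device
])

# Expected value of perfect information (per patient):
# E[max over strategies of NB] - max over strategies of E[NB].
evpi_per_patient = nb.max(axis=1).mean() - nb.mean(axis=0).max()

# Scale by the (assumed) population expected to benefit from the decision.
population = 50_000
print(f"EVPI per patient: {evpi_per_patient:.2f}")
print(f"Population EVPI:  {evpi_per_patient * population:,.0f}")
```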
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gheorghiu, Vlad; Yu Li; Cohen, Scott M.
We investigate the conditions under which a set S of pure bipartite quantum states on a DxD system can be locally cloned deterministically by separable operations, when at least one of the states is full Schmidt rank. We allow for the possibility of cloning using a resource state that is less than maximally entangled. Our results include that: (i) all states in S must be full Schmidt rank and equally entangled under the G-concurrence measure, and (ii) the set S can be extended to a larger clonable set generated by a finite group G of order |G|=N, the number of states in the larger set. It is then shown that any local cloning apparatus is capable of cloning a number of states that divides D exactly. We provide a complete solution for two central problems in local cloning, giving necessary and sufficient conditions for (i) when a set of maximally entangled states can be locally cloned, valid for all D; and (ii) local cloning of entangled qubit states with nonvanishing entanglement. In both of these cases, we show that a maximally entangled resource is necessary and sufficient, and the states must be related to each other by local unitary 'shift' operations. These shifts are determined by the group structure, so need not be simple cyclic permutations. Assuming this shifted form and partially entangled states, then in D=3 we show that a maximally entangled resource is again necessary and sufficient, while for higher-dimensional systems, we find that the resource state must be strictly more entangled than the states in S. All of our necessary conditions for separable operations are also necessary conditions for local operations and classical communication (LOCC), since the latter is a proper subset of the former. In fact, all our results hold for LOCC, as our sufficient conditions are demonstrated for LOCC directly.
Effect of menstrual cycle phase on exercise performance of high-altitude native women at 3600 m.
Brutsaert, Tom D; Spielvogel, Hilde; Caceres, Esperanza; Araoz, Mauricio; Chatterton, Robert T; Vitzthum, Virginia J
2002-01-01
At sea level, normally menstruating women show increased ventilation (VE) and hemodynamic changes due to increased progesterone (P) and estrogen (E2) levels during the mid-luteal (L) compared to the mid-follicular (F) phase of the ovarian cycle. Such changes may affect maximal exercise performance. This repeated-measures, randomized study, conducted at 3600 m, tests the hypothesis that a P-mediated increase in VE increases maximal oxygen consumption (VO2max) during the L phase relative to the F phase in Bolivian women, either born and raised at high altitude (HA) or resident at HA since early childhood. Subjects (N=30) enrolled in the study were aged 27.7 +/- 0.7 years (mean +/- S.E.M.) and non-pregnant, non-lactating, relatively sedentary residents of La Paz, Bolivia, who were not using hormonal contraceptives. Mean salivary P levels at the time of the exercise tests were 63.3 pg/ml and 22.9 pg/ml for the L and F phases, respectively. Subset analyses of submaximal (N=23) and maximal (N=13) exercise responses were conducted only with women showing increased P levels from F to L and, in the latter case, with those also achieving true VO2max. Submaximal exercise VE and ventilatory equivalents were higher in the L phase (P<0.001). P levels were significantly correlated with the submaximal exercise VE (r=0.487, P=0.006). Maximal work output (W) was higher (approximately 5%) during the L phase (P=0.044), but VO2max (l/min) was unchanged (P=0.063). Post-hoc analyses revealed no significant relationship between changes in P levels and changes in VO2max from F to L (P=0.072). In sum, the menstrual cycle phase has relatively modest effects on ventilation, but no effect on the VO2max of HA native women.
Lefante, John J; Harmon, Gary N; Ashby, Keith M; Barnard, David; Webber, Larry S
2005-04-01
The utility of the SF-8 for assessing health-related quality of life (HRQL) is demonstrated. Race and gender differences in physical component (PCS) and mental component (MCS) summary scores among participants in the CENLA Medication Access Program (CMAP), along with comparisons to the United States population, are made. Age-adjusted multiple linear regression analyses were used to compare 1687 CMAP participants to the US population. Internal race and gender comparisons, adjusting for age and the number of self-reported diagnoses, were also obtained. The paired t-test was used to assess 6-month change in PCS and MCS scores for a subset of 342 participants. CMAP participants have PCS and MCS scores that are significantly lower, by 10-12 points, than those of the US population, indicating lower self-reported HRQL. Females have significantly higher PCS and significantly lower MCS than males. African-Americans have significantly higher MCS than Caucasians. Significant increases in both PCS and MCS were observed for the subset of participants after 6 months of intervention. The expected lower baseline PCS and MCS measures and the expected associations with age and number of diagnoses indicate that the SF-8 survey is an effective tool for measuring the HRQL of participants in this program. Preliminary results indicate significant increases in both PCS and MCS 6 months after intervention.
Wells, Timothy S; Ryan, Margaret A K; Jones, Kelly A; Hooper, Tomoko I; Boyko, Edward J; Jacobson, Isabel G; Smith, Tyler C; Gackstetter, Gary D
2012-02-01
It has been hypothesized that those who entered military service in the pre-September 11, 2001 era might have expectations incongruent with their subsequent experiences, increasing the risk for posttraumatic stress disorder (PTSD) or other mental disorders. A subset of Millennium Cohort Study participants who joined the military during 1995-1999 was selected and compared with a subset of members who joined the military in 2002 or later. Outcomes included new-onset symptoms of PTSD, depression, panic/anxiety, and alcohol-related problems. Multivariable methods adjusted for differences in demographic and military characteristics. More than 11,000 cohort members were included in the analyses. Those who entered service in the pre-September 11 era had lower odds of new-onset PTSD symptoms (odds ratio [OR] 0.74, 95% CI [0.59, 0.93]) compared with the post-September 11 cohort. There were no statistically significant differences in rates of new-onset symptoms of depression, panic/anxiety, or alcohol-related problems between the groups. The cohort who entered military service in the pre-September 11 era did not experience higher rates of new-onset mental health challenges compared with the cohort who entered service after September 11, 2001. Findings support the concept that the experience of war, and resulting psychological morbidity, is not a function of incongruent expectations. Copyright © 2012 International Society for Traumatic Stress Studies.
Functional analysis of circadian pacemaker neurons in Drosophila melanogaster.
Rieger, Dirk; Shafer, Orie Thomas; Tomioka, Kenji; Helfrich-Förster, Charlotte
2006-03-01
The molecular mechanisms of circadian rhythms are well known, but how multiple clocks within one organism generate a structured rhythmic output remains a mystery. Many animals show bimodal activity rhythms with morning (M) and evening (E) activity bouts. One long-standing model assumes that two mutually coupled oscillators underlie these bouts and show different sensitivities to light. Three groups of lateral neurons (LN) and three groups of dorsal neurons govern behavioral rhythmicity of Drosophila. Recent data suggest that two groups of the LN (the ventral subset of the small LN cells and the dorsal subset of LN cells) are plausible candidates for the M and E oscillator, respectively. We provide evidence that these neuronal groups respond differently to light and can be completely desynchronized from one another by constant light, leading to two activity components that free-run with different periods. As expected, a long-period component started from the E activity bout. However, a short-period component originated not exclusively from the morning peak but more prominently from the evening peak. This reveals an interesting deviation from the original Pittendrigh and Daan (1976) model and suggests that a subgroup of the ventral subset of the small LN acts as "main" oscillator controlling M and E activity bouts in Drosophila.
Design and Application of Drought Indexes in Highly Regulated Mediterranean Water Systems
NASA Astrophysics Data System (ADS)
Castelletti, A.; Zaniolo, M.; Giuliani, M.
2017-12-01
Costs of drought are progressively increasing due to the ongoing alteration of hydro-meteorological regimes induced by climate change. Although drought management is widely studied in the literature, most traditional drought indexes fail to detect critical events in highly regulated systems, which generally rely on ad-hoc formulations that cannot be generalized to different contexts. In this study, we contribute a novel framework for the design of a basin-customized drought index. This index represents a surrogate of the state of the basin and is computed by combining the available information about water availability in the system to reproduce a target variable representative of the drought condition of the basin (e.g., water deficit). To select the relevant variables and combinations thereof, we use an advanced feature extraction algorithm called Wrapper for Quasi Equally Informative Subset Selection (W-QEISS). W-QEISS relies on a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables, and optimizing relevance and redundancy of the subset. The accuracy objective is evaluated through the calibration of an extreme learning machine predicting the water deficit for each candidate subset of variables, with the index selected from the resulting solutions as a suitable compromise between accuracy, cardinality, relevance, and redundancy. The approach is tested on Lake Como, Italy, a regulated lake mainly operated for irrigation supply. In the absence of an institutional drought monitoring system, we constructed the combined index using all the hydrological variables from the existing monitoring system as well as common drought indicators at multiple time aggregations. The soil moisture deficit in the root zone, computed by a distributed-parameter water balance model of the agricultural districts, is used as the target variable. Numerical results show that our combined drought index successfully reproduces the deficit. The index provides valuable information for supporting appropriate drought management strategies, including the possibility of directly informing the lake operations about drought conditions and improving the overall reliability of the irrigation supply system.
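A heavily simplified sketch of the wrapper idea behind W-QEISS follows: exhaustive search over small candidate subsets, an ordinary linear regressor standing in for the extreme learning machine, and only two of the four objectives (cross-validated accuracy and cardinality). The data, subset sizes, and model choice are all assumptions; the actual method uses a multi-objective evolutionary search with relevance and redundancy objectives as well.

```python
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical candidate predictors (e.g., inflows, storage, SPI at several
# aggregations) and a target water-deficit series; all synthetic here.
n, p = 300, 8
X = rng.normal(size=(n, p))
deficit = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=n)

results = []
for k in range(1, 4):                                   # subset cardinality
    for subset in itertools.combinations(range(p), k):
        model = LinearRegression()                      # stand-in for the ELM wrapper
        score = cross_val_score(model, X[:, subset], deficit,
                                cv=5, scoring="r2").mean()
        results.append((score, k, subset))

# Keep the Pareto-efficient (accuracy, cardinality) trade-offs: a subset is kept
# if no other subset is at least as accurate with fewer variables, or strictly
# more accurate with no more variables.
pareto = [r for r in results
          if not any((o[0] >= r[0] and o[1] < r[1]) or (o[0] > r[0] and o[1] <= r[1])
                     for o in results)]
for score, k, subset in sorted(pareto, key=lambda r: r[1]):
    print(f"k={k}  vars={subset}  CV R^2={score:.3f}")
```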
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
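The gap between a greedy heuristic and exact optimization for a budget-constrained selection problem can be illustrated with the toy sketch below, which uses a 0/1-knapsack dynamic program as a stand-in for the integer-programming solver and invented parcel utilities and costs rather than the Central Valley data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parcels: conservation utility and acquisition cost.
n_parcels = 60
utility = rng.uniform(1, 10, n_parcels)
cost = rng.integers(1, 20, n_parcels)          # integer costs keep the DP small
budget = 100

# --- Greedy heuristic: pick by utility per unit cost until the budget runs out.
order = np.argsort(-utility / cost)
greedy_utility, spent = 0.0, 0
for i in order:
    if spent + cost[i] <= budget:
        spent += cost[i]
        greedy_utility += utility[i]

# --- Exact 0/1 knapsack by dynamic programming over budget levels.
best = np.zeros(budget + 1)
for u, c in zip(utility, cost):
    # iterate budgets downwards so each parcel is used at most once
    for b in range(budget, c - 1, -1):
        best[b] = max(best[b], best[b - c] + u)
exact_utility = best[budget]

print(f"greedy utility: {greedy_utility:.2f}")
print(f"exact utility:  {exact_utility:.2f}")
print(f"gain from optimization: {100 * (exact_utility / greedy_utility - 1):.1f}%")
```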
Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing
2016-01-01
In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378
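A bare-bones numpy sketch of the plain ordered-subsets EM update that forms the backbone of such reconstructions is shown below; it uses a random toy system matrix and omits the total-variation term, motion model, attenuation, and scatter handling described in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy tomographic problem: y ~ Poisson(A x), with a random nonnegative
# system matrix standing in for the real projector.
n_pix, n_bins = 64, 256
A = rng.uniform(0.0, 1.0, (n_bins, n_pix))
x_true = rng.uniform(0.5, 2.0, n_pix)
y = rng.poisson(A @ x_true).astype(float)

def osem(A, y, n_subsets=8, n_iter=10, eps=1e-12):
    """Ordered-subsets EM: cycle the MLEM multiplicative update over disjoint subsets of bins."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            ratio = ys / np.maximum(As @ x, eps)       # measured / expected counts
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), eps)
    return x

x_hat = osem(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```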
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to limited signal-to-noise ratio (SNR) of PET measurements and high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least square (WNLS) method was employed. The proposed multi-tracer DPIR (MTDPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimations.
Influence of reconstruction algorithms on image quality in SPECT myocardial perfusion imaging.
Davidsson, Anette; Olsson, Eva; Engvall, Jan; Gustafsson, Agnetha
2017-11-01
We investigated whether image and diagnostic quality in SPECT MPI could be maintained despite a reduced acquisition time when adding depth-dependent resolution recovery (DDRR) for image reconstruction. Images were compared with filtered back projection (FBP) and iterative reconstruction using ordered subsets expectation maximization with (IRAC) and without (IRNC) attenuation correction (AC). Stress and rest imaging for 15 min was performed on 21 subjects with a dual-head gamma camera (Infinia Hawkeye; GE Healthcare), ECG-gating with 8 frames/cardiac cycle, and a low-dose CT scan. A 9 min acquisition was generated using five instead of eight gated frames and was reconstructed with DDRR, with (IRACRR) and without AC (IRNCRR), as well as with FBP. Three experienced nuclear medicine specialists visually assessed anonymized images according to eight criteria on a four-point scale, three related to image quality and five to diagnostic confidence. Statistical analysis was performed using Visual Grading Regression (VGR). Observer confidence in statements on image quality was highest for the images that were reconstructed using DDRR (P<0.01 compared to FBP). Iterative reconstruction without DDRR was not superior to FBP. Interobserver variability was significant for statements on image quality (P<0.05) but lower in the diagnostic statements on ischemia and scar. The confidence in assessing ischemia and scar was not different between the reconstruction techniques (P = n.s.). SPECT MPI collected in 9 min, reconstructed with DDRR and AC, produced better image quality than the standard procedure. The observers expressed the highest diagnostic confidence in the DDRR reconstruction. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Winhusen, Theresa; Wilder, Christine; Wexelblatt, Scott L; Theobald, Jeffrey; Hall, Eric S; Lewis, Daniel; Van Hook, James; Marcotte, Michael
2014-09-01
In recent years, the U.S. has experienced a significant increase in the prevalence of pregnant opioid-dependent women and of neonatal abstinence syndrome (NAS), which is caused by withdrawal from in-utero drug exposure. While methadone-maintenance currently is the standard of care for opioid dependence during pregnancy, research suggests that buprenorphine-maintenance may be associated with shorter infant hospital lengths of stay (LOS) relative to methadone-maintenance. There is no "gold standard" treatment for NAS but there is evidence that buprenorphine, relative to morphine or methadone, treatment may reduce LOS and length of treatment. Point-of-care clinical trial (POCCT) designs, maximizing external validity while reducing cost and complexity associated with classic randomized clinical trials, were selected for two planned trials to compare methadone to buprenorphine treatment for opioid dependence during pregnancy and for NAS. This paper describes design considerations for the Medication-assisted treatment for Opioid-dependent expecting Mothers (MOMs; estimated N = 370) and Investigation of Narcotics for Ameliorating Neonatal abstinence syndrome on Time in hospital (INFANTs; estimated N = 284) POCCTs, both of which are randomized, intent-to-treat, two-group trials. Outcomes would be obtained from participants' electronic health record at three participating hospitals. Additionally, a subset of infants in the INFANTs POCCT would be from mothers in the MOMs POCCT and, thus, potential interaction between medication treatment of mother and infant could be evaluated. This pair of planned POCCTs would evaluate the comparative effectiveness of treatments for opioid dependence during pregnancy and for NAS. The results could have a significant impact on practice. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
King, M.; Boening, Guido; Baker, S.; Steinmetz, N.
2004-10-01
In current clinical oncology practice, it often takes weeks or months of cancer therapy until a response to treatment can be identified by evaluation of tumor size in images. It is hypothesized that changes in relative localization of the apoptosis imaging agent Tc-99m Annexin before and after the administration of chemotherapy may be useful as an early indicator of the success of therapy. The objective of this study was to determine the minimum relative change in tumor localization that could be confidently identified as an increase in localization. A modified version of the Data Spectrum Anthropomorphic Torso phantom, in which four spheres could be positioned in the lung region, was filled with organ concentrations of Tc-99m representative of those observed in clinical imaging of Tc-99m Annexin. Five acquisitions were obtained at an initial sphere-to-lung concentration, and five at each of 1.1, 1.2, 1.3, and 1.4 times the initial concentration, all at clinically realistic count levels. The acquisitions were reconstructed by filtered backprojection, ordered subset expectation maximization (OSEM) without attenuation compensation (AC), and OSEM with AC. Permutation methodology was used to create multiple region-of-interest count ratios from the five noise realizations at each concentration and between the elevated and initial concentrations. The resulting distributions were approximated by Gaussians, which were then used to estimate the likelihood of type I and type II errors. It was determined that, for the cases investigated, an increase of 20% to 30% or more was needed to confidently determine that an increase in localization had occurred, depending on sphere size and reconstruction strategy.
NASA Astrophysics Data System (ADS)
Fakhri, G. El; Kijewski, M. F.; Moore, S. C.
2001-06-01
Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and inaccurate scatter and attenuation corrections. We compared a fast analytical approach (AA) to more accurate quantitation with a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12 normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR); the IA modeled attenuation and VCR in the projector/backprojector of an ordered subsets-expectation maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results in both Tc-99m and I-123 studies in all brain structures considered, in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies, but greater than 30% for I-123 studies, due to the partial volume effect in the striata. Our results were validated using I-123 physical acquisitions of an anthropomorphic brain phantom. The AA yielded quantitation accuracy comparable to that obtained with IA, while requiring much less processing time. However, in most conditions, IA yielded lower noise for the same bias than did AA.
Optimization of the reconstruction parameters in [123I]FP-CIT SPECT
NASA Astrophysics Data System (ADS)
Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec
2018-04-01
The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). Reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
Castro, P; Huerga, C; Chamorro, P; Garayoa, J; Roch, M; Pérez, L
2018-04-17
The goals of the study are to characterize imaging properties in 2D PET images reconstructed with the iterative algorithm ordered-subset expectation maximization (OSEM) and to propose a new method for the generation of synthetic images. The noise is analyzed in terms of its magnitude, spatial correlation, and spectral distribution through standard deviation, autocorrelation function, and noise power spectrum (NPS), respectively. Their variations with position and activity level are also analyzed. This noise analysis is based on phantom images acquired from 18F uniform distributions. Experimental recovery coefficients of hot spheres in different backgrounds are employed to study the spatial resolution of the system through the point spread function (PSF). The NPS and PSF functions provide the baseline for the proposed simulation method: convolution with PSF as kernel and noise addition from NPS. The noise spectral analysis shows that the main contribution is of random nature. It is also proven that attenuation correction does not alter noise texture but it modifies its magnitude. Finally, synthetic images of 2 phantoms, one of them an anatomical brain, are quantitatively compared with experimental images showing a good agreement in terms of pixel values and pixel correlations. Thus, the contrast-to-noise ratio for the biggest sphere in the NEMA IEC phantom is 10.7 for the synthetic image and 8.8 for the experimental image. The properties of the analyzed OSEM-PET images can be described by NPS and PSF functions. Synthetic images, even anatomical ones, are successfully generated by the proposed method based on the NPS and PSF. Copyright © 2018 Sociedad Española de Medicina Nuclear e Imagen Molecular. Publicado por Elsevier España, S.L.U. All rights reserved.
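A minimal sketch of the simulation idea (blur a digital phantom with a PSF, then add correlated noise shaped by a power spectrum) might look like the following; the Gaussian PSF width, the NPS shape, and the noise magnitude are all assumed values, not the measured characteristics reported in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
n = 128

# Hypothetical digital phantom: uniform background with a hot disc.
yy, xx = np.mgrid[:n, :n]
phantom = np.ones((n, n))
phantom[(xx - 64) ** 2 + (yy - 40) ** 2 < 15 ** 2] = 4.0

# 1) Resolution: convolve with a Gaussian PSF (sigma in pixels, assumed).
blurred = gaussian_filter(phantom, sigma=2.0)

# 2) Noise: shape white noise so its power spectrum follows an assumed NPS.
f = np.fft.fftfreq(n)
fr = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2)
nps = 1.0 / (1.0 + (fr / 0.15) ** 2)            # assumed low-pass NPS shape
white = rng.normal(size=(n, n))
noise = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps)))
noise *= 0.3 / noise.std()                      # scale to an assumed noise magnitude

synthetic = blurred + noise
print("background mean/std:", synthetic[:20, :20].mean(), synthetic[:20, :20].std())
```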
Furuta, Akihiro; Onishi, Hideo; Amijima, Hizuru
2018-06-01
This study aimed to evaluate the effect of ventricular enlargement on the specific binding ratio (SBR) and to validate the cerebrospinal fluid (CSF)-Mask algorithm for quantitative SBR assessment of 123I-FP-CIT single-photon emission computed tomography (SPECT) images with the use of a 3D-striatum digital brain (SDB) phantom. Ventricular enlargement was simulated by three-dimensional extensions in a 3D-SDB phantom comprising segments representing the striatum, ventricle, brain parenchyma, and skull bone. The Evans Index (EI) was measured in 3D-SDB phantom images of an enlarged ventricle. Projection data sets were generated from the 3D-SDB phantoms with blurring, scatter, and attenuation. Images were reconstructed using the ordered subset expectation maximization (OSEM) algorithm and corrected for attenuation, scatter, and resolution recovery. We used DaTView (Southampton method) bundled with the CSF-Mask processing software to compute the SBR, and assessed the SBR for various coefficients (f factors) of the CSF-Mask. The SDB phantom was simulated with true SBR values of 1, 2, 3, 4, and 5. Measured SBRs were underestimated, by more than 50% as the EI increased relative to the true SBR, and this trend was most pronounced at low SBR. The CSF-Mask reduced this underestimation by 20% and brought the measured SBR closer to the true values at an f factor of 1.0, despite the increase in EI. A linear regression function (y = -3.53x + 1.95; r = 0.95) relating the EI and the f factor was obtained using root-mean-square error. Processing with CSF-Mask generates accurate quantitative SBR from dopamine transporter SPECT images of patients with ventricular enlargement.
Onishi, Hideo; Motomura, Nobutoku; Takahashi, Masaaki; Yanagisawa, Masamichi; Ogawa, Koichi
2010-03-01
Degradation of SPECT images results from various physical factors. The primary aim of this study was the development of a digital phantom for use in the characterization of factors that contribute to image degradation in clinical SPECT studies. A 3-dimensional mathematic cylinder (3D-MAC) phantom was devised and developed. The phantom (200 mm in diameter and 200 mm long) comprised 3 imbedded stacks of five 30-mm-long cylinders (diameters, 4, 10, 20, 40, and 60 mm). In simulations, the 3 stacks and the background were assigned radioisotope concentrations and attenuation coefficients. SPECT projection datasets that included Compton scattering effects, photoelectric effects, and gamma-camera models were generated using the electron gamma-shower Monte Carlo simulation program. Collimator parameters, detector resolution, total photons acquired, number of projections acquired, and radius of rotation were varied in simulations. The projection data were formatted in Digital Imaging and Communications in Medicine (DICOM) and imported to and reconstructed using commercial reconstruction software on clinical SPECT workstations. Using the 3D-MAC phantom, we validated that contrast depended on size of region of interest (ROI) and was overestimated when the ROI was small. The low-energy general-purpose collimator caused a greater partial-volume effect than did the low-energy high-resolution collimator, and contrast in the cold region was higher using the filtered backprojection algorithm than using the ordered-subset expectation maximization algorithm in the SPECT images. We used imported DICOM projection data and reconstructed these data using vendor software; in addition, we validated reconstructed images. The devised and developed 3D-MAC SPECT phantom is useful for the characterization of various physical factors, contrasts, partial-volume effects, reconstruction algorithms, and such, that contribute to image degradation in clinical SPECT studies.
Digital PET compliance to EARL accreditation specifications.
Koopman, Daniëlle; Groot Koerkamp, Maureen; Jager, Pieter L; Arkies, Hester; Knollema, Siert; Slump, Cornelis H; Sanches, Pedro G; van Dalen, Jorn A
2017-12-01
Our aim was to evaluate if a recently introduced TOF PET system with digital photon counting technology (Philips Healthcare), potentially providing an improved image quality over analogue systems, can fulfil EANM Research Ltd (EARL) accreditation specifications for tumour imaging with FDG-PET/CT. We have performed a phantom study on a digital TOF PET system using a NEMA NU2-2001 image quality phantom with six fillable spheres. Phantom preparation and PET/CT acquisition were performed according to the European Association of Nuclear Medicine (EANM) guidelines. We made list-mode ordered-subsets expectation maximization (OSEM) TOF PET reconstructions, with default settings, three voxel sizes (4 × 4 × 4 mm³, 2 × 2 × 2 mm³ and 1 × 1 × 1 mm³) and with/without point spread function (PSF) modelling. On each PET dataset, mean and maximum activity concentration recovery coefficients (RCmean and RCmax) were calculated for all phantom spheres and compared to EARL accreditation specifications. The RCs of the 4 × 4 × 4 mm³ voxel dataset without PSF modelling proved closest to EARL specifications. Next, we added a Gaussian post-smoothing filter with varying kernel widths of 1-7 mm. EARL specifications were fulfilled when using kernel widths of 2 to 4 mm. TOF PET using digital photon counting technology fulfils EARL accreditation specifications for FDG-PET/CT tumour imaging when using an OSEM reconstruction with 4 × 4 × 4 mm³ voxels, no PSF modelling and including a Gaussian post-smoothing filter of 2 to 4 mm.
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
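A small sketch of a fixed 40% threshold referenced to SUVpeak is given below; it approximates SUVpeak by the mean over a cubic neighbourhood around the hottest voxel (rather than the usual 1 cm³ sphere), and the phantom-like test volume and voxel size are assumptions, not the patient data of the study.

```python
import numpy as np

def tumor_volume_40pct_peak(suv, voxel_volume_ml, half_width=1):
    """Volume of voxels at or above 40% of SUVpeak.

    SUVpeak is approximated by the mean SUV in a (2*half_width+1)^3 cube
    centred on the hottest voxel (a stand-in for the 1 cm^3 sphere).
    """
    idx = np.unravel_index(np.argmax(suv), suv.shape)
    lo = [max(i - half_width, 0) for i in idx]
    hi = [min(i + half_width + 1, s) for i, s in zip(idx, suv.shape)]
    peak_region = suv[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    suv_peak = peak_region.mean()
    n_voxels = int(np.count_nonzero(suv >= 0.4 * suv_peak))
    return n_voxels * voxel_volume_ml, suv_peak

# Hypothetical example: a 4 x 4 x 4 mm voxel grid with a Gaussian "tumor".
rng = np.random.default_rng(5)
z, y, x = np.mgrid[:40, :40, :40]
suv = 0.5 + 8.0 * np.exp(-((x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2) / (2 * 3.0 ** 2))
suv += rng.normal(scale=0.1, size=suv.shape)

vol_ml, peak = tumor_volume_40pct_peak(suv, voxel_volume_ml=0.064)
print(f"SUVpeak = {peak:.2f}, volume = {vol_ml:.1f} mL")
```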
Gamma-ray blazars: the combined AGILE and MAGIC views
NASA Astrophysics Data System (ADS)
Persic, M.; De Angelis, A.; Longo, F.; Tavani, M.
The large FOV of the AGILE Gamma-Ray Imaging Detector (GRID), 2.5 sr, will allow the whole sky to be surveyed once every 10 days in the 30 MeV-50 GeV energy band down to 0.05 Crab Units. This gives the opportunity of performing the first flux-limited, high-energy γ-ray all-sky survey. The high Galactic latitude point-source population is expected to be largely dominated by blazars. Several tens of blazars are expected to be detected by AGILE (e.g., Costamante & Ghisellini 2002), about half of which are accessible to the ground-based MAGIC Cherenkov telescope. The latter can then carry out pointed observations of this subset of AGILE sources in the 50 GeV-10 TeV band. Given the comparable sensitivities of AGILE/GRID and MAGIC in adjacent energy bands where the emitted radiation is produced by the same (e.g., SSC) mechanism, we expect that most of these sources can be detected by MAGIC. We expect this broadband γ-ray strategy to enable the discovery by MAGIC of 10-15 previously unknown TeV blazars.
Condition-dependent mate choice: A stochastic dynamic programming approach.
Frame, Alicia M; Mills, Alex F
2014-09-01
We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. Copyright © 2014 Elsevier Inc. All rights reserved.
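A toy backward-induction sketch of this kind of model is shown below, under assumed specifics: discrete condition states, a multiplicative offspring-value function, a uniform male-quality distribution, and simple stochastic condition decay while searching. The quality-threshold structure of the optimal policy can be read off the resulting value table.

```python
import numpy as np

T = 10                         # remaining encounters in the season (assumed)
conditions = np.arange(1, 6)   # female condition states 1..5 (assumed)
qualities = np.arange(1, 11)   # male quality 1..10 (assumed)
q_prob = np.full(len(qualities), 1 / len(qualities))  # uniform quality distribution

def offspring_value(c, q):
    return c * q               # assumed: value multiplicative in condition and quality

def next_condition_dist(c):
    """Assumed condition dynamics while searching: drop by 1 w.p. 0.3, else stay."""
    dist = np.zeros(len(conditions))
    i = c - 1
    dist[i] += 0.7
    dist[max(i - 1, 0)] += 0.3
    return dist

# Backward induction: V[t, c] = expected fitness with t encounters left, condition c.
V = np.zeros((T + 1, len(conditions)))          # V[0, :] = 0: season over, unmated
for t in range(1, T + 1):
    for ci, c in enumerate(conditions):
        continue_value = next_condition_dist(c) @ V[t - 1]
        mate_values = offspring_value(c, qualities)
        V[t, ci] = q_prob @ np.maximum(mate_values, continue_value)

# The optimal policy is a quality threshold: accept q if c*q >= continuation value.
for ci, c in enumerate(conditions):
    thresh = next_condition_dist(c) @ V[T - 1] / c
    print(f"condition {c}: accept males with quality >= {np.ceil(thresh):.0f}")
```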
ERIC Educational Resources Information Center
Lucas, Christopher M.
2009-01-01
For educators in the field of higher education and judicial affairs, issues are growing. Campus adjudicators must somehow maximize every opportunity for student education and development in the context of declining resources and increasing expectations of public accountability. Numbers of student misconduct cases, including matters of violence and…
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
Charles T. Stiff; William F. Stansfield
2004-01-01
Separate thinning guidelines were developed for maximizing land expectation value (LEV), present net worth (PNW), and total sawlog yield (TSY) of existing and future loblolly pine (Pinus taeda L.) plantations in eastern Texas. The guidelines were created using data from simulated stands which were thinned one time during their rotation using a...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... the season through December 31, the end of the fishing year, thus maximizing this sector's opportunity... expected to significantly reduce profits for a substantial number of small entities. This proposed rule... and associated increased profits for for-hire entities associated with the recreational harvest of red...
ERIC Educational Resources Information Center
Bouchet, Francois; Harley, Jason M.; Trevors, Gregory J.; Azevedo, Roger
2013-01-01
In this paper, we present the results obtained using a clustering algorithm (Expectation-Maximization) on data collected from 106 college students learning about the circulatory system with MetaTutor, an agent-based Intelligent Tutoring System (ITS) designed to foster self-regulated learning (SRL). The three extracted clusters were validated and…
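As a minimal illustration of Expectation-Maximization clustering of learner data, the sketch below fits Gaussian mixtures to synthetic two-dimensional "student" features and picks the number of clusters by BIC; the features and group structure are invented and are not the MetaTutor data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# Hypothetical learner features (e.g., counts of SRL events, quiz scores),
# synthesized as three loose groups of 106 "students".
X = np.vstack([
    rng.normal([2, 60], [1, 8], size=(40, 2)),
    rng.normal([6, 70], [1, 8], size=(36, 2)),
    rng.normal([4, 85], [1, 6], size=(30, 2)),
])

# Fit mixtures with 2-5 components via EM and keep the one with the lowest BIC.
models = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
          for k in range(2, 6)}
best_k, best_model = min(models.items(), key=lambda kv: kv[1].bic(X))

labels = best_model.predict(X)
print("chosen number of clusters:", best_k)
print("cluster sizes:", np.bincount(labels))
```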
USDA-ARS?s Scientific Manuscript database
Water shortages are responsible for the greatest crop losses around the world and are expected to worsen. In arid areas where agriculture is dependent on irrigation, various forms of deficit irrigation management have been suggested to optimize crop yields for available soil water. The relationshi...
Optimizing reserve expansion for disjunct populations of San Joaquin kit fox
Robert G. Haight; Brian Cypher; Patrick A. Kelly; Scott Phillips; Katherine Ralls; Hugh P. Possingham
2004-01-01
Expanding habitat protection is a common strategy for species conservation. We present a model to optimize the expansion of reserves for disjunct populations of an endangered species. The objective is to maximize the expected number of surviving populations subject to budget and habitat constraints. The model accounts for benefits of reserve expansion in terms of...
Benefits of advanced software techniques for mission planning systems
NASA Technical Reports Server (NTRS)
Gasquet, A.; Parrod, Y.; Desaintvincent, A.
1994-01-01
The increasing complexity of modern spacecraft, and the stringent requirement for maximizing their mission return, call for a new generation of Mission Planning Systems (MPS). In this paper, we discuss the requirements for the Space Mission Planning and the benefits which can be expected from Artificial Intelligence techniques through examples of applications developed by Matra Marconi Space.
ERIC Educational Resources Information Center
Köse, Alper
2014-01-01
The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…
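A small sketch contrasting listwise deletion with model-based imputation on synthetic multivariate-normal data is shown below; sklearn's IterativeImputer is used as a convenient stand-in for regression/EM imputation, and recovering the covariance matrix stands in for the CFA fit statistics examined in the study.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(7)
n = 500

# Synthetic multivariate-normal "indicator" data with a known covariance.
cov = np.array([[1.0, 0.6, 0.5],
                [0.6, 1.0, 0.4],
                [0.5, 0.4, 1.0]])
X = rng.multivariate_normal(np.zeros(3), cov, size=n)

# Impose 20% missingness completely at random.
mask = rng.random(X.shape) < 0.2
X_miss = X.copy()
X_miss[mask] = np.nan

# Listwise deletion: keep only complete rows.
complete = X_miss[~np.isnan(X_miss).any(axis=1)]
cov_listwise = np.cov(complete, rowvar=False)

# Iterative (regression-style) imputation as a stand-in for EM imputation.
X_imp = IterativeImputer(random_state=0, max_iter=20).fit_transform(X_miss)
cov_imputed = np.cov(X_imp, rowvar=False)

print("rows retained by listwise deletion:", len(complete), "of", n)
print("max |error| in covariance, listwise:", np.abs(cov_listwise - cov).max())
print("max |error| in covariance, imputed: ", np.abs(cov_imputed - cov).max())
```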
What Influences Young Canadians to Pursue Post-Secondary Studies? Final Report
ERIC Educational Resources Information Center
Dubois, Julie
2002-01-01
This paper uses the theory of human capital to model post-secondary education enrolment decisions. The model is based on the assumption that high school graduates assess the costs and benefits associated with various levels of post-secondary education (college or university) and select the option that maximizes the expected net present value.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramachandran, Thiagarajan; Kundu, Soumya; Chen, Yan
This paper develops and utilizes an optimization based framework to investigate the maximal energy efficiency potentially attainable by HVAC system operation in a non-predictive context. Performance is evaluated relative to the existing state of the art set point reset strategies. The expected efficiency increase driven by operation constraints relaxations is evaluated.
Optimizing the Use of Response Times for Item Selection in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Choe, Edison M.; Kern, Justin L.; Chang, Hua-Hua
2018-01-01
Despite common operationalization, measurement efficiency of computerized adaptive testing should not only be assessed in terms of the number of items administered but also the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response…
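One way to operationalize such a criterion, sketched below under assumed item parameters, is to score each 2PL item by its Fisher information at the current ability estimate divided by its expected response time from a lognormal response-time model; this is an illustration of the general idea, not the specific criterion introduced in the cited study.

```python
import numpy as np

rng = np.random.default_rng(8)
n_items = 200

# Hypothetical item bank: 2PL discrimination/difficulty plus a time-intensity
# parameter for a lognormal response-time model (all parameters assumed).
a = rng.uniform(0.8, 2.0, n_items)            # discrimination
b = rng.normal(0.0, 1.0, n_items)             # difficulty
beta = rng.normal(np.log(30), 0.3, n_items)   # log time intensity (seconds)
sigma_t = 0.4                                 # RT dispersion (assumed common)

def select_item(theta, administered):
    """Pick the item maximizing Fisher information per unit expected response time."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))       # 2PL correct-response probability
    info = a ** 2 * p * (1.0 - p)                    # Fisher information at theta
    expected_rt = np.exp(beta + 0.5 * sigma_t ** 2)  # lognormal mean response time
    criterion = info / expected_rt
    criterion[list(administered)] = -np.inf          # never repeat an item
    return int(np.argmax(criterion))

item = select_item(theta=0.3, administered={5, 17})
print("next item:", item)
```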
ERIC Educational Resources Information Center
Schulze, Pamela A.; Harwood, Robin L.; Schoelmerich, Axel
2001-01-01
Investigated differences in beliefs and practices about infant feeding among middle class Anglo and Puerto Rican mothers. Interviews and observations indicated that Anglo mothers reported earlier attainment of self-feeding and more emphasis on child rearing goals related to self-maximization. Puerto Rican mothers reported later attainment of…
Solar-Energy System for a Commercial Building--Topeka, Kansas
NASA Technical Reports Server (NTRS)
1982-01-01
Report describes a solar-energy system for space heating, cooling and domestic hot water at a 5,600 square-foot (520-square-meter) Topeka, Kansas, commercial building. System is expected to provide 74% of annual cooling load, 47% of heating load, and 95% of domestic hot-water load. System was included in building design to maximize energy conservation.
Magnetic Tape Storage and Handling: A Guide for Libraries and Archives.
ERIC Educational Resources Information Center
Van Bogart, John W. C.
This document provides a guide on how to properly store and care for magnetic media to maximize their life expectancies. An introduction compares magnetic media to paper and film and outlines the scope of the report. The second section discusses things that can go wrong with magnetic media. Binder degradation, magnetic particle instabilities,…
Autonomous entropy-based intelligent experimental design
NASA Astrophysics Data System (ADS)
Malakar, Nabin Kumar
2011-07-01
The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extend the method of maximizing entropy and develop a method of maximizing joint entropy so that it can be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units toward a shared goal in an automated fashion.
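To make the inquiry-engine step above concrete, here is a minimal sketch in Python of choosing the next experiment as the one whose predicted-outcome distribution has maximum Shannon entropy. The forward model, the candidate grid and the posterior samples are illustrative placeholders, not the thesis's actual robotic-arm code.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def predicted_outcome_distribution(experiment, posterior_samples, n_bins=20):
    """Histogram of predicted outcomes under the current posterior samples.
    The sine forward model is a stand-in for the real sensor model."""
    predictions = [np.sin(theta * experiment) for theta in posterior_samples]
    hist, _ = np.histogram(predictions, bins=n_bins)
    return hist / hist.sum()

def select_experiment(candidates, posterior_samples):
    """Inquiry engine: pick the candidate whose predicted-outcome
    distribution has maximum entropy (maximum expected information gain)."""
    entropies = [shannon_entropy(predicted_outcome_distribution(c, posterior_samples))
                 for c in candidates]
    return candidates[int(np.argmax(entropies))]

rng = np.random.default_rng(0)
posterior = rng.normal(1.0, 0.3, size=500)   # samples of a model parameter
candidates = np.linspace(0.0, 10.0, 50)      # candidate measurement settings
print("next experiment:", select_experiment(candidates, posterior))
```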
Maximizing and minimizing investment concentration with constraints of budget and investment risk
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-01-01
In this paper, as a first step in examining the properties of a feasible portfolio subset that is characterized by budget and risk constraints, we assess the maximum and minimum of the investment concentration using replica analysis. To do this, we apply an analytical approach of statistical mechanics. We note that the optimization problem considered in this paper is the dual problem of the portfolio optimization problem discussed in the literature, and we verify that these optimal solutions are also dual. We also present numerical experiments, in which we use the method of steepest descent that is based on Lagrange's method of undetermined multipliers, and we compare the numerical results to those obtained by replica analysis in order to assess the effectiveness of our proposed approach.
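As an illustration of the extremization described above (maximal and minimal investment concentration under budget and risk constraints), the following sketch checks the two bounds numerically with a generic constrained optimizer rather than replica analysis or the paper's steepest-descent scheme; the return scenarios, risk matrix and risk level are assumptions for the toy example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 50                                    # number of assets
scenarios = rng.normal(size=(200, N))     # toy return scenarios
C = np.cov(scenarios, rowvar=False)       # risk (covariance) matrix

budget = float(N)       # budget constraint: sum_i w_i = N
risk_level = 2.0 * N    # fixed portfolio risk: w^T C w = risk_level

def concentration(w):
    """Investment concentration q_w = (1/N) * sum_i w_i^2."""
    return float(np.dot(w, w)) / N

constraints = [
    {"type": "eq", "fun": lambda w: np.sum(w) - budget},
    {"type": "eq", "fun": lambda w: w @ C @ w - risk_level},
]
w0 = np.ones(N) + 0.1 * rng.normal(size=N)   # slightly asymmetric start

res_min = minimize(concentration, w0, constraints=constraints, method="SLSQP")
res_max = minimize(lambda w: -concentration(w), w0, constraints=constraints, method="SLSQP")
print("minimal concentration ~", round(res_min.fun, 3))
print("maximal concentration ~", round(-res_max.fun, 3))
```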
Developing a framework for energy technology portfolio selection
NASA Astrophysics Data System (ADS)
Davoudpour, Hamid; Ashrafi, Maryam
2012-11-01
Today, the increased consumption of energy in the world, in addition to the risk of rapid exhaustion of fossil resources, has forced industrial firms and organizations to utilize energy technology portfolio management tools, viewed both as a process of diversification of energy sources and as optimal use of available energy sources. Furthermore, the rapid development of technologies, their increasing complexity and variety, and market dynamics have made the task of technology portfolio selection difficult. Considering the high level of competitiveness, organizations need to strategically allocate their limited resources to the best subset of possible candidates. This paper presents the results of developing a mathematical model for energy technology portfolio selection at an R&D center, maximizing support of the organization's strategy and values. The model balances the cost and benefit of the entire portfolio.
Frustration in protein elastic network models
NASA Astrophysics Data System (ADS)
Lezon, Timothy; Bahar, Ivet
2010-03-01
Elastic network models (ENMs) are widely used for studying the equilibrium dynamics of proteins. The most common approach in ENM analysis is to adopt a uniform force constant or a non-specific distance-dependent function to represent the force constant strength. Here we discuss the influence of sequence and structure in determining the effective force constants between residues in ENMs. Using a novel method based on entropy maximization, we optimize the force constants such that they exactly reproduce a subset of experimentally determined pair covariances for a set of proteins. We analyze the optimized force constants in terms of amino acid types, distances, contact order and secondary structure, and we demonstrate that including frustrated interactions in the ENM is essential for accurately reproducing the global modes in the middle of the frequency spectrum.
Nava, Stefano; Fasano, Luca
2011-01-01
Weaning from prolonged mechanical ventilation is a complex, time-consuming process that involves the loss of force-generating capacity of the inspiratory muscles. In their study 'Inspiratory muscle strength training improves the outcome in failure to wean patients: a randomized trial', Martin and colleagues showed that the use of an inspiratory muscle strength program increased the maximal inspiratory pressure and improved weaning success compared to a control group. The study was performed mainly in post-surgical patients, however, and the results, therefore, may not be generalizable to other subsets of patients, such as those with chronic obstructive pulmonary disease or congestive heart failure. Indeed, the study applied so-called 'strength training' and not 'endurance training', which may be more appropriate in certain circumstances.
Maximum entropy models as a tool for building precise neural controls.
Savin, Cristina; Tkačik, Gašper
2017-10-01
Neural responses are highly structured, with population activity restricted to a small subset of the astronomical range of possible activity patterns. Characterizing these statistical regularities is important for understanding circuit computation, but challenging in practice. Here we review recent approaches based on the maximum entropy principle used for quantifying collective behavior in neural activity. We highlight recent models that capture population-level statistics of neural data, yielding insights into the organization of the neural code and its biological substrate. Furthermore, the MaxEnt framework provides a general recipe for constructing surrogate ensembles that preserve aspects of the data, but are otherwise maximally unstructured. This idea can be used to generate a hierarchy of controls against which rigorous statistical tests are possible. Copyright © 2017 Elsevier Ltd. All rights reserved.
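The simplest member of the control hierarchy described above is an independent-neuron surrogate: shuffling each cell's spike train in time preserves single-neuron firing rates (the constraints of a first-order maximum entropy model) while destroying correlations. A hedged sketch, with a synthetic raster standing in for real data:

```python
import numpy as np

def independent_surrogate(spikes, rng):
    """Shuffle each neuron's spike train across time bins independently.
    Firing rates are preserved; pairwise and higher-order correlations are
    destroyed, i.e. a sample from the first-order (independent) MaxEnt model."""
    surrogate = spikes.copy()
    for row in surrogate:          # each row is one neuron's binary spike train
        rng.shuffle(row)
    return surrogate

def mean_offdiag_corr(x):
    """Mean pairwise correlation, excluding the diagonal."""
    c = np.corrcoef(x)
    n = c.shape[0]
    return (c.sum() - np.trace(c)) / (n * (n - 1))

rng = np.random.default_rng(6)
# toy raster: 20 neurons x 5000 bins with a shared drive inducing correlations
shared = rng.random(5000) < 0.2
spikes = ((rng.random((20, 5000)) < 0.05) |
          (shared & (rng.random((20, 5000)) < 0.5))).astype(float)

surr = independent_surrogate(spikes, rng)
print(f"mean pairwise correlation, data:      {mean_offdiag_corr(spikes):.3f}")
print(f"mean pairwise correlation, surrogate: {mean_offdiag_corr(surr):.3f}")
```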
Characterisation of an OCS-dependent severe asthma population treated with mepolizumab.
Prazma, C M; Wenzel, S; Barnes, N; Douglass, J A; Hartley, B F; Ortega, H
2014-12-01
A subpopulation of patients with asthma treated with maximal inhaled treatments is unable to maintain asthma control and requires additional therapy with oral corticosteroids (OCS); a subset of this population continues to have frequent exacerbations. Alternate treatment options are needed as daily use of OCS is associated with significant systemic adverse effects that affect many body systems and have a direct association with the dose and duration of OCS use. We compared the population demographics, medical conditions and efficacy responses of the OCS-dependent group from the DREAM study of mepolizumab with the group not managed with daily OCS. NCT01000506. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Waste Collector System Technology Comparisons for Constellation Applications
NASA Technical Reports Server (NTRS)
Broyan, James Lee, Jr.
2006-01-01
The Waste Collection Systems (WCS) for space vehicles have utilized a variety of hardware for collecting human metabolic wastes. It has typically required multiple missions to resolve crew usability and hardware performance issues that are difficult to duplicate on the ground. New space vehicles should leverage past WCS designs. Past WCS hardware designs are substantially different and unique for each vehicle. However, each WCS can be analyzed and compared as a subset of technologies which encompass fecal collection, urine collection, air systems, and pretreatment systems. Technology components from the WCS of various vehicles can then be combined to reduce hardware mass and volume while maximizing use of previous technology and proven human-equipment interfaces. Analyses of past US and Russian WCS are compared and extrapolated to Constellation missions.
Kim, Dae-Young; Seo, Byoung-Do; Choi, Pan-Am
2014-04-01
[Purpose] This study was conducted to determine the influence of Taekwondo as security martial arts training on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery. [Subjects and Methods] Fourteen healthy university students were recruited and divided into an exercise group and a control group (n = 7 in each group). The subjects who participated in the experiment were subjected to an exercise loading test in which anaerobic threshold, value of ventilation, oxygen uptake, maximal oxygen uptake, heart rate, and maximal values of ventilation / heart rate were measured during the exercise, immediately after maximum exercise loading, and at 1, 3, 5, 10, and 15 min of recovery. [Results] At the anaerobic threshold time point, the exercise group showed a significantly longer time to reach anaerobic threshold. The exercise group showed significantly higher values for the time to reach VO2max, maximal values of ventilation, maximal oxygen uptake and maximal values of ventilation / heart rate. Significant changes were observed in the value of ventilation volumes at the 1- and 5-min recovery time points within the exercise group; oxygen uptake and maximal oxygen uptake were significantly different at the 5- and 10-min time points; heart rate was significantly different at the 1- and 3-min time points; and maximal values of ventilation / heart rate was significantly different at the 5-min time point. The exercise group showed significant decreases in blood lactate levels at the 15- and 30-min recovery time points. [Conclusion] The study results revealed that Taekwondo as a security martial arts training increases the maximal oxygen uptake and anaerobic threshold and accelerates an individual's recovery to the normal state of cardiorespiratory fitness and blood lactate level. These results are expected to contribute to the execution of more effective security services in emergencies in which violence can occur.
The value of foresight: how prospection affects decision-making.
Pezzulo, Giovanni; Rigoli, Francesco
2011-01-01
Traditional theories of decision-making assume that utilities are based on the intrinsic value of outcomes; in turn, these values depend on associations between expected outcomes and the current motivational state of the decision-maker. This view disregards the fact that humans (and possibly other animals) have prospection abilities, which permit anticipating future mental processes and motivational and emotional states. For instance, we can evaluate future outcomes in light of the motivational state we expect to have when the outcome is collected, not (only) when we make a decision. Consequently, we can plan for the future and choose to store food to be consumed when we expect to be hungry, not immediately. Furthermore, similarly to any expected outcome, we can assign a value to our anticipated mental processes and emotions. It has been reported that (in some circumstances) human subjects prefer to receive an unavoidable punishment immediately, probably because they are anticipating the dread associated with the time spent waiting for the punishment. This article offers a formal framework to guide neuroeconomic research on how prospection affects decision-making. The model has two characteristics. First, it uses model-based Bayesian inference to describe anticipation of cognitive and motivational processes. Second, the utility-maximization process considers these anticipations in two ways: to evaluate outcomes (e.g., the pleasure of eating a pie is evaluated differently at the beginning of a dinner, when one is hungry, and at the end of the dinner, when one is satiated), and as outcomes having a value themselves (e.g., the case of dread as a cost of waiting for punishment). By explicitly accounting for the relationship between prospection and value, our model provides a framework to reconcile the utility-maximization approach with psychological phenomena such as planning for the future and dread.
Measurements of Crossflow Instability Modes for HIFiRE 5 at Angle of Attack
2017-11-15
temperature sensitive paint (TSP) did not show any vortices in noisy flow, and only revealed vortices in quiet flow for a subset of the Reynolds numbers for... evidence of traveling crossflow waves with a noisy freestream, even though the spectra of the surface pressure signals showed an expected progression... cone ray describing the minor axis, and retains a 2:1 elliptical cross-section to the tip. [Figure 1: photograph of the model.] The model is made of solid 15
2015-11-16
detailed discussion of barcode designs in Supplementary Note 1, Supplementary Fig. 1 and sequences in Supplementary Note 2). Whereas the nicking and... eight subpools, each as a one- or as a two-barcode version (design details in Supplementary Note 1). All subpools amplified strands with the expected... for the c2ca designs. We used the same restriction enzymes (Nb.BsrDI and Nt.BspQI) that were encoded between the primers and the target sequences to
Quantum secret sharing for a general quantum access structure
NASA Astrophysics Data System (ADS)
Bai, Chen-Ming; Li, Zhi-Hui; Si, Meng-Meng; Li, Yong-Ming
2017-10-01
Quantum secret sharing is a procedure for sharing a secret among a number of participants such that only certain subsets of participants can collaboratively reconstruct it, which are called authorized sets. The quantum access structure of a secret sharing is the family of all authorized sets. Firstly, in this paper, we propose the concept of decomposition of quantum access structure to design a quantum secret sharing scheme. Secondly, based on a maximal quantum access structure (MQAS) [D. Gottesman, Phys. Rev. A 61, 042311 (2000)], we propose an algorithm to improve a MQAS and obtain an improved maximal quantum access structure (IMQAS). Then, we present a sufficient and necessary condition about IMQAS, which shows the relationship between the minimal authorized sets and the players. In accordance with these properties, we construct an efficient quantum secret sharing scheme with a decomposition and IMQAS. A major advantage of these techniques is that they allow us to construct a method to realize a general quantum access structure. Finally, we present two kinds of quantum secret sharing schemes via the thought of concatenation or a decomposition of quantum access structure. As a consequence, we find that the application of these techniques allows us to save more quantum shares and reduce costs relative to the existing scheme.
Effects of Renal Denervation on Renal Artery Function in Humans: Preliminary Study
Doltra, Adelina; Hartmann, Arthur; Stawowy, Philipp; Goubergrits, Leonid; Kuehne, Titus; Wellnhofer, Ernst; Gebker, Rolf; Schneeweis, Christopher; Schnackenburg, Bernhard; Esler, Murray; Fleck, Eckart; Kelle, Sebastian
2016-01-01
Aim: To study the effects of renal denervation (RD) on renal artery wall function non-invasively using magnetic resonance. Methods and Results: 32 patients undergoing RD were included. A 3.0 Tesla magnetic resonance of the renal arteries was performed before RD and after 6 months. We quantified the vessel sharpness of both renal arteries using a quantitative analysis tool (Soap-Bubble®). In 17 patients we assessed the maximal and minimal cross-sectional area of both arteries, peak velocity, mean flow, and renal artery distensibility. In a subset of patients wall shear stress was assessed with computational flow dynamics. Neither renal artery sharpness nor renal artery distensibility differed significantly. A significant increase in minimal and maximal areas (by 25.3%, p = 0.008, and 24.6%, p = 0.007, respectively), peak velocity (by 16.9%, p = 0.021), and mean flow (by 22.4%, p = 0.007) was observed after RD. Wall shear stress significantly decreased (by 25%, p = 0.029). These effects were observed in blood pressure responders and non-responders. Conclusions: RD is not associated with adverse effects at the renal artery level, and leads to an increase in cross-sectional areas, velocity and flow and a decrease in wall shear stress. PMID:27003912
Estrogen has opposing effects on vascular reactivity in obese, insulin-resistant male Zucker rats
NASA Technical Reports Server (NTRS)
Brooks-Asplund, Esther M.; Shoukas, Artin A.; Kim, Soon-Yul; Burke, Sean A.; Berkowitz, Dan E.
2002-01-01
We hypothesized that estradiol treatment would improve vascular dysfunction commonly associated with obesity, hyperlipidemia, and insulin resistance. A sham operation or 17beta-estradiol pellet implantation was performed in male lean and obese Zucker rats. Maximal vasoconstriction (VC) to phenylephrine (PE) and potassium chloride was exaggerated in control obese rats compared with lean rats, but estradiol significantly attenuated VC in the obese rats. Estradiol reduced the PE EC50 in all groups. This effect was cyclooxygenase independent, because preincubation with indomethacin reduced VC response to PE similarly in a subset of control and estrogen-treated lean rats. Endothelium-independent vasodilation (VD) to sodium nitroprusside was similar among groups, but endothelium-dependent VD to ACh was significantly impaired in obese compared with lean rats. Estradiol improved VD in lean and obese rats by decreasing EC50 but impaired function by decreasing maximal VD. The shift in EC50 corresponded to an upregulation in nitric oxide synthase III protein expression in the aorta of the estrogen-treated obese rats. In summary, estrogen treatment improves vascular function in male insulin-resistant, obese rats, partially via an upregulation of nitric oxide synthase III protein expression. These effects are counteracted by adverse factors, such as hyperlipidemia and, potentially, a release of an endothelium-derived contractile agent.
Effect of maximal oxygen uptake and different forms of physical training on serum lipoproteins.
Schnabel, A; Kindermann, W
1982-01-01
260 well trained male sportsmen between 17 and 30 years of age participating in a variety of events were examined for total serum cholesterol and lipoprotein cholesterol and compared with 37 moderately active leisure-time sportsmen and 20 sedentary controls of similar ages and sex. Lipoprotein cholesterol distribution was determined by quantitative electrophoresis. Mean HDL-cholesterol increased progressively from the mean of the sedentary control to the mean of the long-distance runners, indicating a graded effect of physical activity on HDL-cholesterol. In all sporting groups mean LDL-cholesterol tended to be lower than in the controls, no association between LDL-cholesterol and form of training being apparent. Except for the long-distance runners, all sporting groups tended to be lower in total cholesterol than the controls. The HDL-/total cholesterol and LDL/HDL ratios yielded a better discrimination between the physically active and inactive than the HDL-cholesterol alone. Significant positive correlations with maximal oxygen uptake and roentgenologically determined heart volume were found for HDL-cholesterol and HDL-/total cholesterol, and negative ones for LDL/HDL. Differences in the regressions among subsets made up of sporting groups under different physical demands suggest a positive relationship between lipoprotein distribution and the magnitude of the trained muscle mass.
An analysis of competitive bidding by providers for indigent medical care contracts.
Kirkman-Liff, B L; Christianson, J B; Hillman, D G
1985-01-01
This article develops a model of behavior in bidding for indigent medical care contracts in which bidders set bid prices to maximize their expected utility, conditional on estimates of variables which affect the payoff associated with winning or losing a contract. The hypotheses generated by this model are tested empirically using data from the first round of bidding in the Arizona indigent health care experiment. The behavior of bidding organizations in Arizona is found to be consistent in most respects with the predictions of the model. Bid prices appear to have been influenced by estimated costs and by expectations concerning the potential loss from not securing a contract, the initial wealth of the bidding organization, and the expected number of competitors in the bidding process. PMID:4086301
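A hedged sketch of the kind of bid-setting calculation the model above formalizes: the bidder chooses the price that maximizes expected utility, given an estimated probability of winning and the payoff (or loss) attached to each outcome. The win-probability model, cost figures and utility parameters below are illustrative assumptions, not values from the Arizona data.

```python
import numpy as np

def win_probability(bid, n_competitors=4):
    """Toy model: probability that `bid` undercuts every competitor,
    each competitor's bid assumed uniform on [80, 120]."""
    p_single = np.clip((120.0 - bid) / 40.0, 0.0, 1.0)
    return p_single ** n_competitors

def expected_utility(bid, cost=90.0, loss_if_no_contract=5.0, risk_aversion=0.02):
    """Expected CARA utility of submitting `bid` for the contract."""
    u = lambda x: 1.0 - np.exp(-risk_aversion * x)     # concave utility
    p = win_probability(bid)
    return p * u(bid - cost) + (1.0 - p) * u(-loss_if_no_contract)

bids = np.linspace(80.0, 120.0, 401)
best_bid = bids[int(np.argmax([expected_utility(b) for b in bids]))]
print("expected-utility-maximizing bid:", round(float(best_bid), 2))
```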
Assessment of Optimal Flexibility in Ensemble of Frequency Responsive Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kundu, Soumya; Hansen, Jacob; Lian, Jianming
2018-04-19
The potential of electrical loads in providing grid ancillary services is often limited due to the uncertainties associated with load behavior. Knowledge of the expected uncertainties in a load control program would invariably yield better-informed control policies, opening up the possibility of extracting the maximal load control potential without affecting grid operations. In the context of frequency responsive load control, a probabilistic uncertainty analysis framework is presented to quantify the expected error between the target and actual load response, under uncertainties in the load dynamics. A closed-form expression of an optimal demand flexibility, minimizing the expected error between actual and committed flexibility, is provided. Analytical results are validated through Monte Carlo simulations of ensembles of electric water heaters.
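A minimal Monte Carlo sketch of the underlying idea (not the paper's closed-form expression): choose the committed flexibility that minimizes the expected absolute error between committed and actually delivered load response, under an assumed probabilistic model of water-heater availability. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_loads = 1000        # ensemble of electric water heaters
p_available = 0.7     # each load responds to a frequency event with this probability
kw_per_load = 4.5     # curtailable power per responsive load (kW)

def expected_error(committed_kw, n_mc=2000):
    """Monte Carlo estimate of E|actual - committed| flexibility."""
    responsive = rng.binomial(n_loads, p_available, size=n_mc)
    actual_kw = responsive * kw_per_load
    return np.mean(np.abs(actual_kw - committed_kw))

candidates = np.linspace(0, n_loads * kw_per_load, 200)
errors = [expected_error(c) for c in candidates]
best = candidates[int(np.argmin(errors))]
print(f"optimal committed flexibility ~ {best:.0f} kW "
      f"(mean available ~ {n_loads * p_available * kw_per_load:.0f} kW)")
```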
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time-consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
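Stripped of multichannel inputs, bias-field correction and MRF priors, stage (1) of such a pipeline reduces to fitting a Gaussian mixture to voxel intensities by EM. A minimal single-channel sketch (one Gaussian per class, synthetic intensities) follows; it illustrates the EM step in general, not the authors' GEM implementation.

```python
import numpy as np

def em_gaussian_mixture(intensities, n_classes=3, n_iter=50, seed=0):
    """Classify voxel intensities with a Gaussian mixture fitted by EM.
    Returns mixture weights, means, variances and per-voxel responsibilities."""
    rng = np.random.default_rng(seed)
    x = np.asarray(intensities, dtype=float).ravel()
    n = x.size
    mu = rng.choice(x, n_classes, replace=False)   # random initial means
    var = np.full(n_classes, x.var())
    w = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to w_k * N(x_i | mu_k, var_k)
        d = x[:, None] - mu[None, :]
        log_pdf = -0.5 * (np.log(2 * np.pi * var) + d**2 / var)
        log_r = np.log(w) + log_pdf
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk + 1e-8
    return w, mu, var, r

# toy "image": three tissue classes with different mean intensities
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 8, 4000) for m in (30, 90, 150)])
w, mu, var, r = em_gaussian_mixture(img)
labels = r.argmax(axis=1)          # hard segmentation from soft responsibilities
print("estimated class means:", np.sort(mu).round(1))
```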
Agent-Based Model Approach to Complex Phenomena in Real Economy
NASA Astrophysics Data System (ADS)
Iyetomi, H.; Aoyama, H.; Fujiwara, Y.; Ikeda, Y.; Souma, W.
An agent-based model for firms' dynamics is developed. The model consists of firm agents with identical characteristic parameters and a bank agent. Dynamics of those agents are described by their balance sheets. Each firm tries to maximize its expected profit under possible market risks. Infinite growth of a firm directed by the "profit maximization" principle is suppressed by the concept of a "going concern". The possibility of bankruptcy of firms is also introduced by incorporating a retardation effect of information on firms' decisions. The firms, mutually interacting through the monopolistic bank, become heterogeneous in the course of temporal evolution. Statistical properties of firms' dynamics obtained by simulations based on the model are discussed in light of observations in the real economy.
Wilmoth, Siri K.; Irvine, Kathryn M.; Larson, Chad
2015-01-01
Various GIS-generated land-use predictor variables, physical habitat metrics, and water chemistry variables from 75 reference streams and 351 randomly sampled sites throughout Washington State were evaluated for effectiveness at discriminating reference from random sites within level III ecoregions. A combination of multivariate clustering and ordination techniques were used. We describe average observed conditions for a subset of predictor variables as well as proposing statistical criteria for establishing reference conditions for stream habitat in Washington. Using these criteria, we determined whether any of the random sites met expectations for reference condition and whether any of the established reference sites failed to meet expectations for reference condition. Establishing these criteria will set a benchmark from which future data will be compared.
Do employee health management programs work?
Serxner, Seth; Gold, Daniel; Meraz, Angela; Gray, Ann
2009-01-01
Current peer-reviewed literature clearly documents the economic return and return on investment (ROI) for employee health management (EHM) programs. These EHM programs are defined as: health promotion, self-care, disease management, and case management programs. The evaluation literature for the sub-set of health promotion and disease management programs is examined in this article for specific evidence of the level of economic return in medical benefit cost reduction or avoidance. The article identifies the methodological challenges associated with determination of economic return for EHM programs and summarizes the findings from 23 articles that included 120 peer-reviewed study results. The article identifies the average ROI and percent health plan cost impact to be expected for both types of EHM programs, the expected time period for their occurrence, and caveats related to their measurement.
Vasudevan, Abhinav; Gibson, Peter R; van Langenberg, Daniel R
2017-01-01
An awareness of the expected time for therapies to induce symptomatic improvement and remission is necessary for determining the timing of follow-up, disease (re)assessment, and the duration to persist with therapies, yet this is seldom reported as an outcome in clinical trials. In this review, we explore the time to clinical response and remission of current therapies for inflammatory bowel disease (IBD) as well as medication, patient and disease related factors that may influence the time to clinical response. It appears that the time to therapeutic response varies depending on the indication for therapy (Crohn’s disease or ulcerative colitis). Agents with the most rapid time to clinical response included corticosteroids, calcineurin inhibitors, exclusive enteral nutrition, aminosalicylates and anti-tumor necrosis factor therapy which will work in most patients within the first 2 mo. Vedolizumab, methotrexate and thiopurines had a longer time to clinical response and can take several months to achieve maximal efficacy. Factors affecting the time to clinical response of therapies included use of concomitant therapy, disease duration, smoking status, disease phenotype and advanced age. There appears to be marked variation in time to clinical response for therapies used in IBD which is further influenced by disease and patient related factors. Understanding the expected time to therapeutic response is integral to inform further decision making, maintain a patient-centered approach and ensure treatment is given an appropriate timeframe to achieve maximal benefit prior to cessation. PMID:29085188
Gwak, Jae Ha; Lee, Bo Kyeong; Lee, Won Kyung; Sohn, So Young
2017-03-15
This study proposes a new framework for the selection of optimal locations for green roofs to achieve a sustainable urban ecosystem. The proposed framework selects building sites that can maximize the benefits of green roofs, based not only on the socio-economic and environmental benefits to urban residents, but also on the provision of urban foraging sites for honeybees. The framework comprises three steps. First, building candidates for green roofs are selected considering the building type. Second, the selected building candidates are ranked in terms of their expected socio-economic and environmental effects. The benefits of green roofs are improved energy efficiency and air quality, reduction of urban flood risk and infrastructure improvement costs, reuse of storm water, and creation of space for education and leisure. Furthermore, the estimated cost of installing green roofs is also considered. We employ spatial data to determine the expected effects of green roofs on each building unit, because the benefits and costs may vary depending on the location of the building. This is due to the heterogeneous spatial conditions. In the third step, the final building sites are proposed by solving the maximal covering location problem (MCLP) to determine the optimal locations for green roofs as urban honeybee foraging sites. As an illustrative example, we apply the proposed framework in Seoul, Korea. This new framework is expected to contribute to sustainable urban ecosystems. Copyright © 2016 Elsevier Ltd. All rights reserved.
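The third step above is a maximal covering location problem (MCLP). As an illustration of that problem's structure (the study solves it exactly; the greedy heuristic below is only a standard approximation), here is a sketch that picks k candidate roofs to maximize the weighted demand covered within a foraging radius; all coordinates, weights and radii are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(3)
n_candidates, n_demand, k = 30, 200, 5
candidates = rng.uniform(0, 10, (n_candidates, 2))   # candidate rooftop locations (km)
demand_pts = rng.uniform(0, 10, (n_demand, 2))       # foraging demand points (km)
demand_wt = rng.uniform(1, 5, n_demand)              # weight (value) of each demand point
radius = 1.5                                         # cover/foraging radius (km)

# coverage[i, j] = True if candidate roof i covers demand point j
dists = np.linalg.norm(candidates[:, None, :] - demand_pts[None, :, :], axis=2)
coverage = dists <= radius

def greedy_mclp(coverage, weights, k):
    """Greedy heuristic for the maximal covering location problem."""
    chosen, covered = [], np.zeros(coverage.shape[1], dtype=bool)
    for _ in range(k):
        gains = [-1.0 if i in chosen else weights[coverage[i] & ~covered].sum()
                 for i in range(coverage.shape[0])]
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, weights[covered].sum()

sites, total = greedy_mclp(coverage, demand_wt, k)
print("selected roofs:", sites, "| covered demand weight:", round(float(total), 1))
```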
Maximizing Federal IT Dollars: A Connection Between IT Investments and Organizational Performance
2011-04-01
Theory for investments, where diversification of financial assets (stocks, bonds, and cash) is balanced by expected returns and risk (Markowitz, 1952... Stakeholder satisfaction (stakeholder may not pay proportionally for service); stakeholders: stockholders, owners, market; taxpayers; legislative... Adviser for Off-Campus Programs in the Department of Engineering Management and Systems Engineering. His current research interests include stochastic
Expectation Maximization and its Application in Modeling, Segmentation and Anomaly Detection
2008-05-01
...incomplete data problems. The incompleteness of the data may be due to missing data, censored distributions, etc. One such case is a... Estimation Techniques in Computer... Huiyan, Z., Yongfeng, C., Wen, Y., SAR Image Segmentation Using MPM Constrained Stochastic Relaxation. Civil Engineering
ERIC Educational Resources Information Center
Song, Hairong; Ferrer, Emilio
2009-01-01
This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother, combined with the expectation-maximization algorithm, is used to obtain maximum likelihood parameter estimates. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…
Optimal control of orientation and entanglement for two dipole-dipole coupled quantum planar rotors.
Yu, Hongling; Ho, Tak-San; Rabitz, Herschel
2018-05-09
Optimal control simulations are performed for orientation and entanglement of two dipole-dipole coupled identical quantum rotors. The rotors at various fixed separations lie on a model non-interacting plane with an applied control field. It is shown that optimal control of orientation or entanglement represents two contrasting control scenarios. In particular, the maximally oriented state (MOS) of the two rotors has a zero entanglement entropy and is readily attainable at all rotor separations. In contrast, the maximally entangled state (MES) has a zero orientation expectation value and is most conveniently attainable at small separations where the dipole-dipole coupling is strong. It is demonstrated that the peak orientation expectation value attained by the MOS at large separations exhibits a long time revival pattern due to the small energy splittings arising from the extremely weak dipole-dipole coupling between the degenerate product states of the two free rotors. Moreover, it is found that the peak entanglement entropy value attained by the MES remains largely unchanged as the two rotors are transported to large separations after turning off the control field. Finally, optimal control simulations of transition dynamics between the MOS and the MES reveal the intricate interplay between orientation and entanglement.
Time perspective and well-being: Swedish survey questionnaires and data.
Garcia, Danilo; Nima, Ali Al; Lindskär, Erik
2016-12-01
The data pertain to 448 Swedes' responses to questionnaires on time perspective (Zimbardo Time Perspective Inventory), temporal life satisfaction (Temporal Satisfaction with Life Scale), affect (Positive Affect and Negative Affect Schedule), and psychological well-being (Ryff's Scales of Psychological Well-Being, short version). The data was collected among university students and individuals at a training facility (see U. Sailer, P. Rosenberg, A.A. Nima, A. Gamble, T. Gärling, T. Archer, D. Garcia, 2014; [1]). Since there were no differences in any of the background variables other than exercise frequency, all subsequent analyses were conducted on the 448 participants as a single sample. In this article we include the Swedish versions of the questionnaires used to operationalize the time perspective and well-being variables. The data are available as an SPSS file in the Supplementary material of this article. We used the Expectation-Maximization Algorithm to impute missing values. Little's chi-square test for Missing Completely at Random gave χ²=67.25 (df=53, p=.09) for men and χ²=77.65 (df=72, p=.31) for women. These values suggested that the Expectation-Maximization Algorithm was suitable for missing data imputation on these data.
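For readers unfamiliar with EM-based imputation, the following is a minimal sketch of the algorithm for multivariate-normal data: the E-step replaces missing entries by their conditional means (and accumulates the conditional covariances), and the M-step re-estimates the mean vector and covariance matrix. It illustrates the method in general, not the SPSS routine used for the published dataset.

```python
import numpy as np

def em_impute(X, n_iter=200, tol=1e-6):
    """EM for multivariate-normal data with missing entries marked as NaN."""
    X = np.array(X, dtype=float)
    n, p = X.shape
    miss = np.isnan(X)
    X_filled = np.where(miss, np.nanmean(X, axis=0), X)   # start from column means
    mu, sigma = X_filled.mean(axis=0), np.cov(X_filled, rowvar=False)
    for _ in range(n_iter):
        extra_cov = np.zeros((p, p))
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            if m.all():                      # nothing observed in this row
                X_filled[i] = mu
                extra_cov += sigma
                continue
            o = ~m
            s_oo = sigma[np.ix_(o, o)]
            s_mo = sigma[np.ix_(m, o)]
            w = np.linalg.solve(s_oo, X_filled[i, o] - mu[o])
            X_filled[i, m] = mu[m] + s_mo @ w            # conditional mean
            extra_cov[np.ix_(m, m)] += (sigma[np.ix_(m, m)]
                                        - s_mo @ np.linalg.solve(s_oo, s_mo.T))
        mu_new = X_filled.mean(axis=0)
        centered = X_filled - mu_new
        sigma_new = (centered.T @ centered + extra_cov) / n
        converged = np.max(np.abs(mu_new - mu)) < tol
        mu, sigma = mu_new, sigma_new
        if converged:
            break
    return X_filled, mu, sigma

# toy demonstration: correlated data with ~15% of entries missing at random
rng = np.random.default_rng(7)
true_cov = np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.5], [0.3, 0.5, 1.0]])
data = rng.multivariate_normal([0, 0, 0], true_cov, size=400)
data_missing = np.where(rng.random(data.shape) < 0.15, np.nan, data)
completed, mu_hat, sigma_hat = em_impute(data_missing)
print("estimated covariance:\n", sigma_hat.round(2))
```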
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Srivastava, Ashok N.
2009-01-01
This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment, such as clustering, anomaly detection, and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications for bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably correct, i.e., it converges to the same result as a similar centralized algorithm, and can automatically adapt to changes to the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
Long, Justin M.; Ray, Balmiki; Lahiri, Debomoy K.
2012-01-01
Regulation of amyloid-β (Aβ) precursor protein (APP) expression is complex. MicroRNAs (miRNAs) are expected to participate in the molecular network that controls this process. The composition of this network is, however, still undefined. Elucidating the complement of miRNAs that regulate APP expression should reveal novel drug targets capable of modulating Aβ production in AD. Here, we investigated the contribution of miR-153 to this regulatory network. A miR-153 target site within the APP 3′-untranslated region (3′-UTR) was predicted by several bioinformatic algorithms. We found that miR-153 significantly reduced reporter expression when co-transfected with an APP 3′-UTR reporter construct. Mutation of the predicted miR-153 target site eliminated this reporter response. miR-153 delivery in both HeLa cells and primary human fetal brain cultures significantly reduced APP expression. Delivery of a miR-153 antisense inhibitor to human fetal brain cultures significantly elevated APP expression. miR-153 delivery also reduced expression of the APP paralog APLP2. High functional redundancy between APP and APLP2 suggests that miR-153 may target biological pathways in which they both function. Interestingly, in a subset of human AD brain specimens with moderate AD pathology, miR-153 levels were reduced. This same subset also exhibited elevated APP levels relative to control specimens. Therefore, endogenous miR-153 inhibits expression of APP in human neurons by specifically interacting with the APP 3′-UTR. This regulatory interaction may have relevance to AD etiology, where low miR-153 levels may drive increased APP expression in a subset of AD patients. PMID:22733824
Innate NKTγδ and NKTαβ cells exert similar functions and compete for a thymic niche.
Pereira, Pablo; Boucontet, Laurent
2012-05-01
The transcriptional regulator promyelocytic leukemia zinc finger (PLZF) is highly expressed during the differentiation of natural killer T (NKT) cells and is essential for the acquisition of their effector/memory innate-like phenotype. Staining with anti-PLZF and anti-NK1.1 Abs allows the definition of two subsets of NKTαβ and NKTγδ thymocytes that differ phenotypically and functionally: a PLZF(+) NK1.1(-) subset composed of mostly quiescent cells that secrete more IL-4 than IFN-γ upon activation and a PLZF(+/-) NK1.1(+) subset that expresses CD127, NK1.1, and other NK-cell markers, secrete more IFN-γ than IL-4 upon activation and contains a sizable fraction of dividing cells. The size of the NK1.1(+) population is very tightly regulated and NK1.1(+) αβ and γδ thymocytes compete for a thymic niche. Furthermore, the relative representation of the PLZF(+) and NK1.1(+) subsets varies in a strain-specific manner with C57BL/6 (B6) mice containing more NK1.1(+) cells and (B6 × DBA/2)F1 (B6D2F1) mice more PLZF(+) cells. Consequently, activation of NKT cells in vivo is expected to result in higher levels of IL-4 secreted in B6D2F1 mice than in B6 mice. Consistent with this possibility, B6D2F1 mice, when compared with B6 mice, contain more "innate" CD8(+) thymocytes, the generation of which depends on IL-4 secreted by NKT cells. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abe, Ikumi; Shirato, Ken; Hashizume, Yoko; Mitsuhashi, Ryosuke; Kobayashi, Ayumu; Shiono, Chikako; Sato, Shogo; Tachiyashiki, Kaoru; Imaizumi, Kazuhiko
2013-01-01
Folate (vitamin B(9)) plays key roles in cell growth and proliferation through regulating the synthesis and stabilization of DNA and RNA, and its deficiency leads to lymphocytopenia and granulocytopenia. However, precisely how folate deficiency affects the distribution of a variety of white blood cell subsets, including the minor population of basophils, and the cell specificity of the effects remain unclear. Therefore, we examined the effects of a folate-deficient diet on the circulating number of lymphocyte subsets [T-lymphocytes, B-lymphocytes, and natural killer (NK) cells] and granulocyte subsets (neutrophils, eosinophils, and basophils) in rats. Rats were divided into two groups, with one receiving the folate-deficient diet (FAD group) and the other a control diet (CON group). All rats were pair-fed for 8 weeks. Plasma folate level was dramatically lower in the FAD group than in the CON group, and the level of homocysteine in the plasma, a predictor of folate deficiency was significantly higher in the FAD group than in the CON group. The number of T-lymphocytes, B-lymphocytes, and NK cells was significantly lower in the FAD group than in the CON group by 0.73-, 0.49-, and 0.70-fold, respectively, indicating that B-lymphocytes are more sensitive to folate deficiency than the other lymphocyte subsets. As expected, the number of neutrophils and eosinophils was significantly lower in the FAD group than in the CON group. However, the number of basophils, the least common type of granulocyte, showed transiently an increasing tendency in the FAD group as compared with the CON group. These results suggest that folate deficiency induces lymphocytopenia and granulocytopenia in a cell-specific manner.
Peute, L W; Knijnenburg, S L; Kremer, L C; Jaspers, M W M
2015-01-01
The Website Developmental Model for the Healthcare Consumer (WDMHC) is an extensive and successfully evaluated framework that incorporates user-centered design principles. However, due to its extensiveness its application is limited. In the current study we apply a subset of the WDMHC framework in a case study concerning the development and evaluation of a website aimed at childhood cancer survivors (CCS). To assess whether the implementation of a limited subset of the WDMHC-framework is sufficient to deliver a high-quality website with few usability problems, aimed at a specific patient population. The website was developed using a six-step approach divided into three phases derived from the WDMHC: 1) information needs analysis, mock-up creation and focus group discussion; 2) website prototype development; and 3) heuristic evaluation (HE) and think aloud analysis (TA). The HE was performed by three double experts (knowledgeable both in usability engineering and childhood cancer survivorship), who assessed the site using the Nielsen heuristics. Eight end-users were invited to complete three scenarios covering all functionality of the website by TA. The HE and TA were performed concurrently on the website prototype. The HE resulted in 29 unique usability issues; the end-users performing the TA encountered eleven unique problems. Four issues specifically revealed by HE concerned cosmetic design flaws, whereas two problems revealed by TA were related to website content. Based on the subset of the WDMHC framework we were able to deliver a website that closely matched the expectancy of the end-users and resulted in relatively few usability problems during end-user testing. With the successful application of this subset of the WDMHC, we provide developers with a clear and easily applicable framework for the development of healthcare websites with high usability aimed at specific medical populations.
Possibility expectation and its decision making algorithm
NASA Technical Reports Server (NTRS)
Keller, James M.; Yan, Bolin
1992-01-01
The fuzzy integral has been shown to be an effective tool for the aggregation of evidence in decision making. Of primary importance in the development of a fuzzy integral pattern recognition algorithm is the choice (construction) of the measure which embodies the importance of subsets of sources of evidence. Sugeno fuzzy measures have received the most attention due to the recursive nature of the fabrication of the measure on nested sequences of subsets. Possibility measures exhibit an even simpler generation capability, but usually require that one of the sources of information possess complete credibility. In real applications, such normalization may not be possible, or even desirable. In this report, both the theory and a decision making algorithm for a variation of the fuzzy integral are presented. This integral is based on a possibility measure where it is not required that the measure of the universe be unity. A training algorithm for the possibility densities in a pattern recognition application is also presented with the results demonstrated on the shuttle-earth-space training and testing images.
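A small sketch of the decision rule described above: a discrete Sugeno-style fuzzy integral computed with a possibility measure generated by densities whose maximum need not equal one (so the measure of the whole universe can be below unity). The sources, evidence values and densities are invented for illustration; in a classifier one would compute this integral per class hypothesis and pick the largest.

```python
def possibility_measure(subset, densities):
    """Possibility measure of a subset of sources: the max of their densities.
    The densities need not contain a value of 1, so g(X) may be < 1."""
    return max(densities[s] for s in subset) if subset else 0.0

def sugeno_possibility_integral(h, densities):
    """Discrete Sugeno fuzzy integral of evidence h over a possibility measure."""
    sources = sorted(h, key=h.get, reverse=True)   # sort sources by evidence value
    best = 0.0
    for i in range(1, len(sources) + 1):
        top_i = sources[:i]
        best = max(best, min(h[sources[i - 1]], possibility_measure(top_i, densities)))
    return best

# evidence h(x) supplied by three sources for one class hypothesis,
# and (unnormalized) possibility densities reflecting source importance
h = {"sensor_A": 0.9, "sensor_B": 0.6, "sensor_C": 0.3}
densities = {"sensor_A": 0.5, "sensor_B": 0.8, "sensor_C": 0.4}
print("class confidence:", sugeno_possibility_integral(h, densities))
```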
Compositional differences between meteorites and near-Earth asteroids.
Vernazza, P; Binzel, R P; Thomas, C A; DeMeo, F E; Bus, S J; Rivkin, A S; Tokunaga, A T
2008-08-14
Understanding the nature and origin of the asteroid population in Earth's vicinity (near-Earth asteroids, and its subset of potentially hazardous asteroids) is a matter of both scientific interest and practical importance. It is generally expected that the compositions of the asteroids that are most likely to hit Earth should reflect those of the most common meteorites. Here we report that most near-Earth asteroids (including the potentially hazardous subset) have spectral properties quantitatively similar to the class of meteorites known as LL chondrites. The prominent Flora family in the inner part of the asteroid belt shares the same spectral properties, suggesting that it is a dominant source of near-Earth asteroids. The observed similarity of near-Earth asteroids to LL chondrites is, however, surprising, as this meteorite class is relatively rare ( approximately 8 per cent of all meteorite falls). One possible explanation is the role of a size-dependent process, such as the Yarkovsky effect, in transporting material from the main belt.
How long will my mouse live? Machine learning approaches for prediction of mouse life span.
Swindell, William R; Harper, James M; Miller, Richard A
2008-09-01
Prediction of individual life span based on characteristics evaluated at middle-age represents a challenging objective for aging research. In this study, we used machine learning algorithms to construct models that predict life span in a stock of genetically heterogeneous mice. Life-span prediction accuracy of 22 algorithms was evaluated using a cross-validation approach, in which models were trained and tested with distinct subsets of data. Using a combination of body weight and T-cell subset measures evaluated before 2 years of age, we show that the life-span quartile to which an individual mouse belongs can be predicted with an accuracy of 35.3% (+/-0.10%). This result provides a new benchmark for the development of life-span-predictive models, but improvement can be expected through identification of new predictor variables and development of computational approaches. Future work in this direction can provide tools for aging research and will shed light on associations between phenotypic traits and longevity.
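A hedged sketch of the cross-validation protocol described above, using scikit-learn: a classifier is trained and tested on disjoint folds to estimate how well midlife predictors assign animals to life-span quartiles (chance is 25%). The predictors and life spans below are synthetic stand-ins, not the heterogeneous-mouse data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_mice = 600
# hypothetical midlife predictors: body weight plus a few T-cell subset fractions
X = rng.normal(size=(n_mice, 5))
# toy life spans weakly related to the predictors (for illustration only)
lifespan = 900 + 40 * X[:, 0] - 30 * X[:, 2] + rng.normal(0, 120, n_mice)
quartile = np.digitize(lifespan, np.quantile(lifespan, [0.25, 0.5, 0.75]))

model = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, quartile, cv=10)   # held-out accuracy per fold
print(f"cross-validated quartile accuracy: {scores.mean():.3f} (chance = 0.25)")
```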
Verification of TREX1 as a promising indicator of judging the prognosis of osteosarcoma.
Feng, Jinyi; Lan, Ruilong; Cai, Guanxiong; Lin, Jinluan; Wang, Xinwen; Lin, Jianhua; Han, Deping
2016-11-24
The study aimed to explore the correlation between TREX1 expression and both metastasis and survival time in patients with osteosarcoma, as well as the biological characteristics of osteosarcoma cells, for judging the prognosis of osteosarcoma. The correlation between TREX1 protein expression and the occurrence of pulmonary metastasis in 45 cases of osteosarcoma was analyzed. The CD133+ and CD133− cell subsets of osteosarcoma stem cells were sorted by flow cytometry. Tumorsphere culture, clone formation, growth curves, osteogenic and adipogenic differentiation, tumor-formation ability in nude mice, sensitivity to chemotherapeutic drugs, and other cell biology behaviors were compared between the two cell subsets; the expression of the stem cell-related genes Nanog and Oct4 was compared; and the expression of TREX1 protein and mRNA was compared between the two cell subsets. The data were statistically analyzed: measurement data were compared between the two groups using the t-test, and count data were compared using the χ² test and Kaplan-Meier survival analysis. A P value <0.05 indicated that the difference was statistically significant. TREX1 protein expression in osteosarcoma patients in the metastasis group was significantly lower than in the non-metastasis group (P < 0.05). Up to the last follow-up visit, the average survival time of the former was significantly lower than that of the latter (P < 0.05). TREX1 expression in human osteosarcoma CD133+ cell subsets was significantly lower than in CD133− cell subsets. The stemness-related genes Nanog and Oct4 were highly expressed in human osteosarcoma CD133+ cell subsets with lower TREX1 expression; the biological characterization experiments showed that CD133+ cell subsets with low TREX1 expression could form tumorspheres, formed more colonies, proliferated strongly, had greater osteogenic and adipogenic differentiation potential and stronger tumor-forming ability in nude mice, and were less sensitive to cisplatin. TREX1 expression may be related to metastasis in patients with osteosarcoma and is closely related to the cell biology characteristics of osteosarcoma stem cells. TREX1 may play an important role in osteosarcoma occurrence and development and is expected to become an effective new index for evaluating prognosis.
Pollock, Ross D; O'Brien, Katie A; Daniels, Lorna J; Nielsen, Kathrine B; Rowlerson, Anthea; Duggal, Niharika A; Lazarus, Norman R; Lord, Janet M; Philp, Andrew; Harridge, Stephen D R
2018-04-01
In this study, results are reported from the analyses of vastus lateralis muscle biopsy samples obtained from a subset (n = 90) of 125 previously phenotyped, highly active male and female cyclists aged 55-79 years in regard to age. We then subsequently attempted to uncover associations between the findings in muscle and in vivo physiological functions. Muscle fibre type and composition (ATPase histochemistry), size (morphometry), capillary density (immunohistochemistry) and mitochondrial protein content (Western blot) in relation to age were determined in the biopsy specimens. Aside from an age-related change in capillary density in males (r = -.299; p = .02), no other parameter measured in the muscle samples showed an association with age. However, in males type I fibres and capillarity (p < .05) were significantly associated with training volume, maximal oxygen uptake, oxygen uptake kinetics and ventilatory threshold. In females, the only association observed was between capillarity and training volume (p < .05). In males, both type II fibre proportion and area (p < .05) were associated with peak power during sprint cycling and with maximal rate of torque development during a maximal voluntary isometric contraction. Mitochondrial protein content was not associated with any cardiorespiratory parameter in either males or females (p > .05). We conclude in this highly active cohort, selected to mitigate most of the effects of inactivity, that there is little evidence of age-related changes in the properties of VL muscle across the age range studied. By contrast, some of these muscle characteristics were correlated with in vivo physiological indices. © 2018 The Authors. Aging Cell published by the Anatomical Society and John Wiley & Sons Ltd.
Multi-Objective Bidding Strategy for Genco Using Non-Dominated Sorting Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Saksinchai, Apinat; Boonchuay, Chanwit; Ongsakul, Weerakorn
2010-06-01
This paper proposes a multi-objective bidding strategy for a generation company (GenCo) in a uniform-price spot market using non-dominated sorting particle swarm optimization (NSPSO). Instead of using a tradeoff technique, NSPSO is introduced to solve the multi-objective strategic bidding problem, considering expected profit maximization and risk (profit variation) minimization. Monte Carlo simulation is employed to simulate rivals' bidding behavior. Test results indicate that the proposed approach can provide the efficient non-dominated solution front effectively. In addition, it can be used as a decision-making tool for a GenCo compromising between expected profit and price risk in the spot market.
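As a rough illustration of the non-dominated sorting step that NSPSO relies on, the sketch below extracts the Pareto front of candidate bids whose expected profit cannot be improved without increasing risk. The profit and risk figures are invented, not the paper's market data.

```python
import numpy as np

def non_dominated_front(profit, risk):
    """Return indices of bids that are Pareto-efficient:
    no other bid has higher expected profit AND lower (or equal) risk."""
    idx = []
    for i in range(len(profit)):
        dominated = np.any((profit > profit[i]) & (risk <= risk[i])) or \
                    np.any((profit >= profit[i]) & (risk < risk[i]))
        if not dominated:
            idx.append(i)
    return idx

# Toy candidate bids: expected profit (e.g. from Monte Carlo runs) and profit variation.
rng = np.random.default_rng(0)
profit = rng.uniform(50, 100, 200)               # expected profit of each candidate bid
risk = 0.5 * profit + rng.uniform(0, 30, 200)    # higher profit tends to carry more risk
front = non_dominated_front(profit, risk)
print(sorted(zip(profit[front].round(1), risk[front].round(1))))
```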
Tootelian, Dennis H; Mikhailitchenko, Andrey; Holst, Cindy; Gaedeke, Ralph M
2016-01-01
The health care landscape has changed dramatically. Consumers now seek plans whose benefits better fit their health care needs and desires for access to providers. This exploratory survey of more than 1,000 HMO and non-HMO customers found significant differences with respect to their selection processes for health plans and providers, and their expectations regarding access to and communication with health care providers. While there are some similarities in factors affecting choice, segmentation strategies are necessary to maximize the appeal of a plan, satisfy customers in the selection of physicians, and meet their expectations regarding access to those physicians.
Collective states in social systems with interacting learning agents
NASA Astrophysics Data System (ADS)
Semeshenko, Viktoriya; Gordon, Mirta B.; Nadal, Jean-Pierre
2008-08-01
We study the implications of social interactions and individual learning features on consumer demand in a simple market model. We consider a social system of interacting heterogeneous agents with learning abilities. Given a fixed price, agents repeatedly decide whether or not to buy a unit of a good, so as to maximize their expected utilities. This model is close to Random Field Ising Models, where the random field corresponds to the idiosyncratic willingness to pay. We show that the equilibrium reached depends on the nature of the information agents use to estimate their expected utilities. It may be different from the system's Nash equilibria.
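A minimal sketch of the kind of buy/no-buy dynamics described, assuming a simple mean-field (global) interaction term and illustrative parameter values rather than the authors' exact learning model:

```python
import numpy as np

rng = np.random.default_rng(1)
N, price, J = 1000, 1.0, 0.8              # agents, fixed price, interaction strength
idiosyncratic = rng.normal(1.0, 0.6, N)   # willingness to pay (the "random field")

buy = np.zeros(N)                         # initial choices
for _ in range(50):                       # repeated best-response / learning rounds
    eta = buy.mean()                      # fraction of buyers observed by everyone
    # an agent buys if its surplus (willingness + social term - price) is positive
    buy = (idiosyncratic + J * eta - price > 0).astype(float)

print("equilibrium fraction of buyers:", buy.mean())
```

Depending on the interaction strength and the spread of the random field, such dynamics can settle into different collective states from the same price, which is the kind of equilibrium multiplicity the abstract points to.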
Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J
2013-04-01
Large databases (>10(6) sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes the peptide matching sensitivity for applications requiring large databases, especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.
2014-01-01
We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be as nearly effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. 
Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management. We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
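The value-of-information, maxi-min, and equal-weights comparisons can be illustrated with a small payoff matrix of objective values; the numbers below are hypothetical, not the pink-footed goose results.

```python
import numpy as np

# Hypothetical objective values: value[m, s] = performance if model m is true
# and the strategy optimal under model s is applied (illustrative numbers only).
value = np.array([[10.0, 8.5, 7.0],
                  [ 8.0, 9.5, 8.8],
                  [ 6.5, 7.5, 9.0]])
weights = np.full(3, 1 / 3)                      # prior credibility of each model

best_known = weights @ value.max(axis=1)         # expected value if the true model were known
expected_by_strategy = weights @ value           # expected value of each fixed strategy
best_uncertain = expected_by_strategy.max()
evpi = best_known - best_uncertain               # expected value of eliminating model uncertainty

maximin_strategy = value.min(axis=0).argmax()    # strategy maximizing the worst-case value
print("EVPI:", round(evpi, 3), "maxi-min strategy index:", int(maximin_strategy))
```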
Demographic differences in Down syndrome livebirths in the US from 1989 to 2006.
Egan, James F X; Smith, Kathleen; Timms, Diane; Bolnick, Jay M; Campbell, Winston A; Benn, Peter A
2011-04-01
To explore demographic differences in Down syndrome livebirths in the United States. Using National Center for Health Statistics (NCHS) birth certificate data from 1989 to 2006 we analyzed Down syndrome livebirths after correcting for under-reporting. We created six subsets based on maternal age (15-34 and 35-49 years old); US region (Northeast, Midwest, South and West); marital status (married, unmarried); education (≤12 years, ≥13 years); race (white, black); and Hispanic ethnicity (non-Hispanic, Hispanic). We estimated expected Down syndrome livebirths assuming no change in birth certificate reporting. The percentage of expected Down syndrome livebirths actually born was calculated by year. There were 72,613,424 livebirths from 1989 to 2006. There were 122,519 Down syndrome livebirths expected and 65,492 were actually born. The Midwest had the most expected Down syndrome livebirths actually born (67.6%); the West was lowest (44.4%). More expected Down syndrome livebirths were born to women who were 15 to 34 years old (61 vs 43.8%) and to those with ≤12 years education (60.4 vs 46.9%), white race (56.6 vs 37%), unmarried (56.0 vs 52.5%), and of Hispanic ethnicity (55.0 vs 53.3%). The percentage of expected Down syndrome livebirths actually born varies by demographics. Copyright © 2011 John Wiley & Sons, Ltd.
The Quantification of Consistent Subjective Logic Tree Branch Weights for PSHA
NASA Astrophysics Data System (ADS)
Runge, A. K.; Scherbaum, F.
2012-04-01
The development of quantitative models for the rate of exceedance of seismically generated ground motion parameters is the target of probabilistic seismic hazard analysis (PSHA). In regions of low to moderate seismicity, the selection and evaluation of source- and/or ground-motion models is often a major challenge to hazard analysts and affected by large epistemic uncertainties. In PSHA this type of uncertainty is commonly treated within a logic tree framework in which the branch weights express the degree-of-belief values of an expert in the corresponding set of models. For the calculation of the distribution of hazard curves, these branch weights are subsequently used as subjective probabilities. However, the quality of the results depends strongly on the "quality" of the expert knowledge. A major challenge for experts in this context is to provide weight estimates which are logically consistent (in the sense of Kolmogorov's axioms) and to be aware of and to deal with the multitude of heuristics and biases which affect human judgment under uncertainty. For example, experts tend to give smaller weights to each branch of a logic tree the more branches it has, starting with equal weights for all branches and then adjusting this uniform distribution based on their beliefs about how the branches differ; this effect is known as pruning bias. A similar unwanted effect, which may even wrongly suggest robustness of the corresponding hazard estimates, appears in cases where all models are first judged according to some numerical quality measure and the resulting weights are subsequently normalized to sum to one. To address these problems, we have developed interactive graphical tools for the determination of logic tree branch weights in the form of logically consistent subjective probabilities, based on the concepts suggested in Curtis and Wood (2004). Instead of determining the set of weights for all the models in a single step, the computer-driven elicitation process is performed as a sequence of evaluations of relative weights for small subsets of models which are presented to the analyst. From these, the distribution of logic tree weights for the whole model set is determined as the solution of an optimization problem. The model subset presented to the analyst in each step is designed to maximize the expected information. The result of this process is a set of logically consistent weights together with a measure of confidence determined from the amount of conflicting information provided by the expert during the relative weighting process.
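One simple way to turn relative weight judgments on small model subsets into a consistent set of branch weights is a least-squares fit on log-weights followed by normalization. The sketch below is only a simplified stand-in for the elicitation tool described, and the pairwise ratios are invented.

```python
import numpy as np

# Pairwise judgments r ~ w_i / w_j elicited for small subsets of models
# (hypothetical values; a real elicitation would present a few comparisons per step).
pairs = [(0, 1, 2.0), (1, 2, 1.5), (0, 2, 3.5), (2, 3, 0.8)]
n = 4

# Solve log(w_i) - log(w_j) = log(r_ij) in the least-squares sense.
A = np.zeros((len(pairs), n))
b = np.zeros(len(pairs))
for k, (i, j, r) in enumerate(pairs):
    A[k, i], A[k, j], b[k] = 1.0, -1.0, np.log(r)
logw, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm solution; additive constant is arbitrary

w = np.exp(logw)
w /= w.sum()                                   # normalize to a proper probability distribution
print("consistent branch weights:", w.round(3))
```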
He, Ye; Lin, Huazhen; Tu, Dongsheng
2018-06-04
In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.
Unified sensor management in unknown dynamic clutter
NASA Astrophysics Data System (ADS)
Mahler, Ronald; El-Fallah, Adel
2010-04-01
In recent years the first author has developed a unified, computationally tractable approach to multisensor-multitarget sensor management. This approach consists of closed-loop recursion of a PHD or CPHD filter with maximization of a "natural" sensor management objective function called PENT (posterior expected number of targets). In this paper we extend this approach so that it can be used in unknown, dynamic clutter backgrounds.
ERIC Educational Resources Information Center
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
Effects of Requiring Students to Meet High Expectation Levels within an On-Line Homework Environment
ERIC Educational Resources Information Center
Weber, William J., Jr.
2010-01-01
On-line homework is becoming a larger part of mathematics classrooms each year. Thus, ways to maximize the effectiveness of on-line homework for both students and teachers must be investigated. This study sought to provide one possible answer to this aim, by requiring students to achieve at least 50% for any on-line homework assignment in order to…
ERIC Educational Resources Information Center
Enders, Craig K.; Peugh, James L.
2004-01-01
Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…
State-Dependent Risk Preferences in Evolutionary Games
NASA Astrophysics Data System (ADS)
Roos, Patrick; Nau, Dana
There is much empirical evidence that human decision-making under risk does not correspond to the decision-theoretic notion of "rational" decision making, namely making choices that maximize the expected value. An open question is how such behavior could have arisen evolutionarily. We believe that the answer to this question lies, at least in part, in the interplay between risk-taking and sequentiality of choice in evolutionary environments.
Decision Making Analysis: Critical Factors-Based Methodology
2010-04-01
the pitfalls associated with current wargaming methods such as assuming a western view of rational values in decision-making regardless of the cultures...Utilization theory slightly expands the rational decision-making model as it states that "actors try to maximize their expected utility by weighing the...items to categorize the decision-making behavior of political leaders which tend to demonstrate either a rational or cognitive leaning. Leaders
Effective return, risk aversion and drawdowns
NASA Astrophysics Data System (ADS)
Dacorogna, Michel M.; Gençay, Ramazan; Müller, Ulrich A.; Pictet, Olivier V.
2001-01-01
We derive two risk-adjusted performance measures for investors with risk-averse preferences. Maximizing these measures is equivalent to maximizing the expected utility of an investor. The first measure, Xeff, is derived assuming a constant risk aversion, while the second measure, Reff, is based on a stronger risk aversion to clustering of losses than of gains. The clustering of returns is captured through a multi-horizon framework. The empirical properties of Xeff and Reff are studied within the context of real-time trading models for foreign exchange rates, and their properties are compared to those of more traditional measures such as the annualized return, the Sharpe ratio and the maximum drawdown. Our measures are shown to be more robust against clustering of losses and have the ability to fully characterize the dynamic behaviour of investment strategies.
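For reference, the traditional comparison measures mentioned (annualized return, Sharpe ratio, maximum drawdown) can be computed as in the sketch below on a synthetic daily return series; the Xeff and Reff formulas themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
daily_returns = rng.normal(0.0004, 0.01, 252 * 5)      # five years of toy daily returns

ann_return = daily_returns.mean() * 252
ann_vol = daily_returns.std(ddof=1) * np.sqrt(252)
sharpe = ann_return / ann_vol                           # risk-free rate assumed to be zero

equity = np.cumprod(1 + daily_returns)                  # cumulative value of the strategy
running_max = np.maximum.accumulate(equity)
max_drawdown = ((running_max - equity) / running_max).max()

print(f"annualized return {ann_return:.2%}, Sharpe {sharpe:.2f}, max drawdown {max_drawdown:.2%}")
```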
Influencing Busy People in a Social Network
Sarkar, Kaushik; Sundaram, Hari
2016-01-01
We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions and resource constraints. Second, we show that the multiple behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors and network resource utilization. Our heuristics produce 15-51% increase in expected resource utilization over the naïve approach. PMID:27711127
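The submodularity result is what justifies a greedy seed-selection procedure with a constant-factor guarantee. The sketch below uses a toy one-hop coverage function in place of the paper's behavior-spread model, so it only illustrates the greedy idea on hypothetical data.

```python
import numpy as np

def spread(seeds, neighbors):
    """Toy influence function: nodes reached within one hop of the seed set."""
    reached = set(seeds)
    for s in seeds:
        reached |= neighbors[s]
    return len(reached)

def greedy_seeds(neighbors, k):
    seeds = []
    for _ in range(k):
        best = max((n for n in neighbors if n not in seeds),
                   key=lambda n: spread(seeds + [n], neighbors))
        seeds.append(best)          # marginal gains shrink because spread() is submodular
    return seeds

# Tiny random graph as adjacency sets (hypothetical data).
rng = np.random.default_rng(3)
neighbors = {i: set(rng.choice(50, size=4, replace=False)) for i in range(50)}
print("greedy seed set:", greedy_seeds(neighbors, k=3))
```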
Multiway spectral community detection in networks
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Newman, M. E. J.
2015-11-01
One of the most widely used methods for community detection in networks is the maximization of the quality function known as modularity. Of the many maximization techniques that have been used in this context, some of the most conceptually attractive are the spectral methods, which are based on the eigenvectors of the modularity matrix. Spectral algorithms have, however, been limited, by and large, to the division of networks into only two or three communities, with divisions into more than three being achieved by repeated two-way division. Here we present a spectral algorithm that can directly divide a network into any number of communities. The algorithm makes use of a mapping from modularity maximization to a vector partitioning problem, combined with a fast heuristic for vector partitioning. We compare the performance of this spectral algorithm with previous approaches and find it to give superior results, particularly in cases where community sizes are unbalanced. We also give demonstrative applications of the algorithm to two real-world networks and find that it produces results in good agreement with expectations for the networks studied.
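The building block the multiway algorithm generalizes is the modularity matrix and its leading eigenvector. A minimal two-way division sketch on a synthetic two-group network (not the vector-partitioning step itself) might look as follows.

```python
import numpy as np

def modularity_matrix(A):
    k = A.sum(axis=1)                 # node degrees
    two_m = A.sum()                   # 2m, total count of edge ends
    return A - np.outer(k, k) / two_m

# Two planted groups of 10 nodes each (hypothetical network).
rng = np.random.default_rng(4)
A = (rng.random((20, 20)) < 0.1).astype(float)
A[:10, :10] = rng.random((10, 10)) < 0.6
A[10:, 10:] = rng.random((10, 10)) < 0.6
A = np.triu(A, 1)
A = A + A.T                           # symmetric adjacency matrix, no self-loops

B = modularity_matrix(A)
vals, vecs = np.linalg.eigh(B)
split = np.sign(vecs[:, -1])          # signs of the leading eigenvector give a two-way division
print(split)
```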
Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.
Shinzato, Takashi
2015-01-01
In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.
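As background, the textbook mean-variance quantity that the abstract contrasts with its self-averaging analysis is the global minimum-variance portfolio, w = Σ⁻¹1 / (1ᵀΣ⁻¹1), sketched below for a toy sample covariance.

```python
import numpy as np

rng = np.random.default_rng(5)
returns = rng.normal(0.001, 0.02, size=(250, 5))   # toy return history for 5 assets

sigma = np.cov(returns, rowvar=False)              # sample covariance matrix
ones = np.ones(sigma.shape[0])
w = np.linalg.solve(sigma, ones)
w /= ones @ w                                      # w = Σ⁻¹1 / (1ᵀΣ⁻¹1), weights sum to 1

min_risk = w @ sigma @ w                           # minimized portfolio variance
print("weights:", w.round(3), "minimal variance:", round(min_risk, 6))
```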
Stolyarova, Alexandra; Izquierdo, Alicia
2017-01-01
We make choices based on the values of expected outcomes, informed by previous experience in similar settings. When the outcomes of our decisions consistently violate expectations, new learning is needed to maximize rewards. Yet not every surprising event indicates a meaningful change in the environment. Even when conditions are stable overall, outcomes of a single experience can still be unpredictable due to small fluctuations (i.e., expected uncertainty) in reward or costs. In the present work, we investigate causal contributions of the basolateral amygdala (BLA) and orbitofrontal cortex (OFC) in rats to learning under expected outcome uncertainty in a novel delay-based task that incorporates both predictable fluctuations and directional shifts in outcome values. We demonstrate that OFC is required to accurately represent the distribution of wait times to stabilize choice preferences despite trial-by-trial fluctuations in outcomes, whereas BLA is necessary for the facilitation of learning in response to surprising events. DOI: http://dx.doi.org/10.7554/eLife.27483.001 PMID:28682238
Méndez-Aparicio, M Dolores; Izquierdo-Yusta, Alicia; Jiménez-Zarco, Ana I
2017-01-01
Today, the customer-brand relationship is fundamental to a company's bottom line, especially in the service sector and with services offered via online channels. In order to maximize its effects, organizations need (1) to know which factors influence the formation of an individual's service expectations in an online environment; and (2) to establish the influence of these expectations on customers' likelihood of recommending a service before they have even used it. In accordance with the TAM model (Davis, 1989; Davis et al., 1992), the TRA model (Fishbein and Ajzen, 1975), the extended UTAUT model (Venkatesh et al., 2012), and the approach described by Alloza (2011), this work proposes a theoretical model of the antecedents and consequences of consumer expectations of online services. In order to validate the proposed theoretical model, a sample of individual insurance company customers was analyzed. The results showed, first, the importance of customers' expectations with regard to the intention to recommend the "private area" of the company's website to other customers prior to using it themselves. They also revealed the importance to expectations of the antecedents perceived usefulness, ease of use, frequency of use, reputation, and subjective norm.
Interaction and Synergism of Microbial Fuel Cell Bacteria within Methanogenesis
NASA Technical Reports Server (NTRS)
Klaus, David
2004-01-01
Biological hydrogen production from waste biomass has both terrestrial and Martian advanced life support applications. On Earth, biological hydrogen production is being explored as a greenhouse-neutral form of clean and efficient energy. In a permanently enclosed space habitat, carbon loop closure is required to reduce mission costs. Plants are grown to revitalize the oxygen supply and are consumed by habitat inhabitants. Unharvested portions must then be recycled for reuse in the habitat. Several biological degradation techniques exist, but one process, biophotolysis, can be used to produce hydrogen from inedible plant biomass. This process has two stages, the first using dark fermentation to convert plant wastes into organic acids. The second stage, photofermentation, uses photoheterotrophic purple non-sulfur bacteria with the addition of light to turn the organic acids into hydrogen and carbon dioxide. Such a system can prove useful as a co-generation scheme, providing some of the energy needed to power a larger primary carbon recovery system, such as composting. Since butyrate is expected to be one of the major inputs into photofermentation, a characterization study was conducted with Rhodobacter sphaeroides SCJ, a novel photoheterotrophic non-sulfur purple bacterium, to examine hydrogen production performance at 10 mM-100 mM butyrate concentrations. As butyrate levels increased, hydrogen production increased up to 25 mM, then decreased and ceased by 100 mM. Additionally, the lag phase increased with butyrate concentration, possibly indicating some product inhibition. The maximal substrate conversion efficiency was 8.0%, the maximal light efficiency was 0.89%, and the maximal hydrogen production rate was 7.7 µmol/mg cdw/hr (173 µl/mg cdw/hr). These values were consistent with or lower than those expected from the literature.
Engineering Design Handbook: Development Guide for Reliability. Part Three. Reliability Prediction
1976-01-01
Front-matter excerpts: IFR and DFR distributions (Chapter 4); Chapter 5, Some Advanced Mathematical Techniques, with its list of symbols defining sets, events that units are failed or good, and s-expected values. From the introductory text: the sample space is all possible values that can arise, each value is called a sample point, and there are six sample points in the sample space.
Automated Verification of Design Patterns with LePUS3
NASA Technical Reports Server (NTRS)
Nicholson, Jonathan; Gasparis, Epameinondas; Eden, Ammon H.; Kazman, Rick
2009-01-01
Specification and [visual] modelling languages are expected to combine strong abstraction mechanisms with rigour, scalability, and parsimony. LePUS3 is a visual, object-oriented design description language axiomatized in a decidable subset of the first-order predicate logic. We demonstrate how LePUS3 is used to formally specify a structural design pattern and prove (verify) whether any Java™ 1.4 program satisfies that specification. We also show how LePUS3 specifications (charts) are composed and how they are verified fully automatically in the Two-Tier Programming Toolkit.
Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model
NASA Astrophysics Data System (ADS)
Margarint, Vlad
2018-06-01
We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_xy, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We update the previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
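A minimal construction of the d = 1 case described, with illustrative sizes, might look as follows.

```python
import numpy as np

def random_band_matrix(n, W, rng):
    """Hermitian matrix with i.i.d. uniform entries where |x - y| < W, zero otherwise."""
    H = np.zeros((n, n), dtype=complex)
    for x in range(n):
        for y in range(x, min(n, x + W)):          # only |x - y| < W is populated
            z = rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)
            H[x, y] = z
            H[y, x] = np.conj(z)
    np.fill_diagonal(H, np.real(np.diag(H)))       # real diagonal so H is exactly Hermitian
    return H

H = random_band_matrix(n=200, W=10, rng=np.random.default_rng(6))
print(np.allclose(H, H.conj().T))                  # True: Hermitian by construction
```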
Automatic design of basin-specific drought indexes for highly regulated water systems
NASA Astrophysics Data System (ADS)
Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea Francesco; Pulido-Velazquez, Manuel
2018-04-01
Socio-economic costs of drought are progressively increasing worldwide due to ongoing alterations of hydro-meteorological regimes induced by climate change. Although drought management is largely studied in the literature, traditional drought indexes often fail at detecting critical events in highly regulated systems, where natural water availability is conditioned by the operation of water infrastructures such as dams, diversions, and pumping wells. Here, ad hoc index formulations are usually adopted based on empirical combinations of several, supposed-to-be significant, hydro-meteorological variables. These customized formulations, however, while effective in the design basin, can hardly be generalized and transferred to different contexts. In this study, we contribute FRIDA (FRamework for Index-based Drought Analysis), a novel framework for the automatic design of basin-customized drought indexes. In contrast to ad hoc empirical approaches, FRIDA is fully automated, generalizable, and portable across different basins. FRIDA builds an index representing a surrogate of the drought conditions of the basin, computed by combining all the relevant available information about the water circulating in the system identified by means of a feature extraction algorithm. We used the Wrapper for Quasi-Equally Informative Subset Selection (W-QEISS), which features a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables, and optimizing relevance and redundancy of the subset. The preferred variable subset is selected among the efficient solutions and used to formulate the final index according to alternative model structures. We apply FRIDA to the case study of the Jucar river basin (Spain), a drought-prone and highly regulated Mediterranean water resource system, where an advanced drought management plan relying on the formulation of an ad hoc state index is used for triggering drought management measures. The state index was constructed empirically with a trial-and-error process begun in the 1980s and finalized in 2007, guided by experts from the Confederación Hidrográfica del Júcar (CHJ). Our results show that the automated variable selection outcomes align with CHJ's 25-year-long empirical refinement. In addition, the resultant FRIDA index outperforms the official State Index in terms of accuracy in reproducing the target variable and cardinality of the selected input set.
Designing basin-customized combined drought indices via feature extraction
NASA Astrophysics Data System (ADS)
Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea
2017-04-01
The socio-economic costs of drought are progressively increasing worldwide due to the ongoing alteration of hydro-meteorological regimes induced by climate change. Although drought management is largely studied in the literature, most traditional drought indexes fail in detecting critical events in highly regulated systems; such indexes generally rely on ad hoc formulations and cannot be generalized to different contexts. In this study, we contribute a novel framework for the design of a basin-customized drought index. This index represents a surrogate of the state of the basin and is computed by combining the available information about the water available in the system to reproduce a representative target variable for the drought condition of the basin (e.g., water deficit). To select the relevant variables and how to combine them, we use an advanced feature extraction algorithm called Wrapper for Quasi Equally Informative Subset Selection (W-QEISS). The W-QEISS algorithm relies on a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables (cardinality) and optimizing relevance and redundancy of the subset. The accuracy objective is evaluated through the calibration of a pre-defined model (i.e., an extreme learning machine) of the water deficit for each candidate subset of variables, with the index selected from the resulting solutions identifying a suitable compromise between accuracy, cardinality, relevance, and redundancy. The proposed methodology is tested in the case study of Lake Como in northern Italy, a regulated lake mainly operated for irrigation supply to four downstream agricultural districts. In the absence of an institutional drought monitoring system, we constructed the combined index using all the hydrological variables from the existing monitoring system as well as the most common drought indicators at multiple time aggregations. The soil moisture deficit in the root zone computed by a distributed-parameter water balance model of the agricultural districts is used as the target variable. Numerical results show that our framework succeeds in constructing a combined drought index that reproduces the soil moisture deficit. Moreover, this index represents valuable information for supporting appropriate drought management strategies, including the possibility of directly informing the lake operations about the drought conditions and improving the overall reliability of the irrigation supply system.
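W-QEISS itself couples a multi-objective evolutionary search with an extreme learning machine; the sketch below is only a simplified greedy forward wrapper on synthetic data, meant to illustrate the accuracy-versus-cardinality idea rather than the actual algorithm.

```python
import numpy as np

def fit_r2(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 12))                    # 12 candidate hydro-meteorological inputs (toy)
y = 0.8 * X[:, 2] - 0.5 * X[:, 7] + rng.normal(0, 0.3, 300)   # synthetic target deficit

selected, remaining = [], list(range(X.shape[1]))
for _ in range(3):                                 # greedily add up to 3 variables
    best = max(remaining, key=lambda j: fit_r2(X[:, selected + [j]], y))
    selected.append(best)
    remaining.remove(best)
    print("selected:", selected, "R^2 =", round(fit_r2(X[:, selected], y), 3))
```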
Relationships between treated hypertension and subsequent mortality in an insured population.
Ivanovic, Brian; Cumming, Marianne E; Pinkham, C Allen
2004-01-01
To investigate if a mortality differential exists between insurance policyholders with treated hypertension and policyholders who are not under such treatment, where both groups are noted to have the same blood pressure at the time of policy issue. Hypertension is a known mortality risk factor in the insured and general population. Treatment for hypertension is very common in the insured population, especially as age increases. At the time of insurance application, a subset of individuals with treated hypertension will have blood pressures that are effectively controlled and are in the normal range. These individuals often meet established preferred underwriting criteria for blood pressure. In some life insurance companies, they may be offered insurance at the same rates as individuals who are not hypertensive with the same blood pressure. Such companies make the assumption that the pharmacologically induced normotensive state confers no excess risk relative to the natural normotensive state. Given the potential pricing implications of this decision, we undertook an investigation to test this hypothesis. We studied internal data on direct and reinsurance business between 1975 and 2001 followed through anniversaries in 2002 or prior termination with an average duration of 5.2 years per policy. Actual-to-expected analyses and Cox proportional hazards models were used to assess if a mortality differential existed between policyholders coded for hypertension and policyholders with the same blood pressure that were not coded as hypertensive. Eight thousand six hundred forty-seven deaths were observed during follow-up in the standard or preferred policy cohort. Within the same blood pressure category, mortality was higher in policyholders identified as treated hypertensives compared with those in the subset of individuals who were not coded for hypertension. This finding was present in males and females and persisted across age groups in almost all age-gender-smoking status subsets examined. The differential in mortality was 125% to 160% of standard mortality based on the ratio of actual-to-expected claims. In this insured cohort, a designation of treated hypertension is associated with increased relative mortality compared to life insurance policyholders not so coded.
Variation in opsin genes correlates with signaling ecology in North American fireflies
Sander, Sarah E.; Hall, David W.
2015-01-01
Genes underlying signal reception should evolve to maximize signal detection in a particular environment. In animals, opsins, the protein component of visual pigments, are predicted to evolve according to this expectation. Fireflies are known for their bioluminescent mating signals. The eyes of nocturnal species are expected to maximize detection of conspecific signal colors emitted in the typical low-light environment. This is not expected for species that have transitioned to diurnal activity in bright daytime environments. Here we test the hypothesis that opsin gene sequence plays a role in modifying firefly eye spectral sensitivity. We use genome and transcriptome sequencing in four firefly species, transcriptome sequencing in six additional species, and targeted gene sequencing in 28 other species to identify all opsin genes present in North American fireflies and to elucidate amino acid sites under positive selection. We also determine whether amino acid substitutions in opsins are linked to evolutionary changes in signal mode, signal color, and light environment. We find only two opsins, one long wavelength and one ultraviolet, in all firefly species and identify 25 candidate sites that may be involved in determining spectral sensitivity. In addition, we find elevated rates of evolution at transitions to diurnal activity, and changes in selective constraint on LW opsin associated with changes in light environment. Our results suggest that changes in eye spectral sensitivity are at least partially due to opsin sequence. Fireflies continue to be a promising system in which to investigate the evolution of signals, receptors, and signaling environments. PMID:26289828
Sequoia Messaging Rate Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedley, Andrew
2008-01-22
The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinman, N.D.; Yancey, M.A.
1997-12-31
One of the main functions of government is to invest taxpayers' dollars in projects, programs, and properties that will result in social benefit. Public programs focused on the development of technology are examples of such opportunities. Selecting these programs requires the same investment analysis approaches that private companies and individuals use. Good use of investment analysis approaches to these programs will minimize our tax costs and maximize public benefit from tax dollars invested. This article describes the use of the net present value (NPV) analysis approach to select public R&D programs and valuate expected private sector participation in the programs.
Noisy preferences in risky choice: A cautionary note.
Bhatia, Sudeep; Loomes, Graham
2017-10-01
We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual's preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
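A small simulation of the distortion being warned about: an expected-value maximizer whose preference parameter and responses are noisy turns down a higher-EV gamble on a sizeable fraction of trials, so its observed choice proportions no longer track the underlying preference cleanly. The lotteries and noise levels below are arbitrary choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(8)
n_trials = 100_000

# Choice: gamble (50% chance of 120, else 0) vs. a sure 50. EV(gamble) = 60 > 50,
# so a noiseless expected-value maximizer always takes the gamble.
alpha = 1 + rng.normal(0, 0.3, n_trials)       # trial-level noise around a core EV exponent of 1
v_gamble = 0.5 * 120.0 ** alpha
v_sure = 50.0 ** alpha
noise = rng.logistic(0, 2.0, n_trials)         # additional response noise
choose_gamble = (v_gamble - v_sure + noise) > 0

print("observed proportion choosing the higher-EV gamble:", choose_gamble.mean())
```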
Huntley, Edward D; Juliano, Laura M
2012-09-01
Expectancies for drug effects predict drug initiation, use, cessation, and relapse, and may play a causal role in drug effects (i.e., placebo effects). Surprisingly little is known about expectancies for caffeine even though it is the most widely used psychoactive drug in the world. In a series of independent studies, the nature and scope of caffeine expectancies among caffeine consumers and nonconsumers were assessed, and a comprehensive and psychometrically sound Caffeine Expectancy Questionnaire (CaffEQ) was developed. After 2 preliminary studies, the CaffEQ was administered to 1,046 individuals from the general population along with other measures of interest (e.g., caffeine use history, anxiety). Exploratory factor analysis of the CaffEQ yielded a 7-factor solution. Subsequently, an independent sample of 665 individuals completed the CaffEQ and other measures, and a subset (n = 440) completed the CaffEQ again approximately 2 weeks later. Confirmatory factor analysis revealed good model fit, and test-retest reliability was very good. The frequency and quantity of caffeine use were associated with greater expectancies for withdrawal/dependence, energy/work enhancement, appetite suppression, social/mood enhancement, and physical performance enhancement and lower expectancies for anxiety/negative physical effects and sleep disturbance. Caffeine expectancies predicted various caffeine-associated features of substance dependence (e.g., use despite harm, withdrawal incidence and severity, perceived difficulty stopping use, tolerance). Expectancies for caffeine consumed via coffee were stronger than for caffeine consumed via soft drinks or tea. The CaffEQ should facilitate the advancement of our knowledge of caffeine and drug use in general. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Michel, Christian J
2017-04-18
In 1996, a set X of 20 trinucleotides was identified in genes of both prokaryotes and eukaryotes which has on average the highest occurrence in reading frame compared to its two shifted frames. Furthermore, this set X has an interesting mathematical property as X is a maximal C3 self-complementary trinucleotide circular code. In 2015, by quantifying the inspection approach used in 1996, the circular code X was confirmed in the genes of bacteria and eukaryotes and was also identified in the genes of plasmids and viruses. The method was based on the preferential occurrence of trinucleotides among the three frames at the gene population level. We extend here this definition at the gene level. This new statistical approach considers all the genes, i.e., of large and small lengths, with the same weight for searching the circular code X. As a consequence, the concept of circular code, in particular the reading frame retrieval, is directly associated to each gene. At the gene level, the circular code X is strengthened in the genes of bacteria, eukaryotes, plasmids, and viruses, and is now also identified in the genes of archaea. The genes of mitochondria and chloroplasts contain a subset of the circular code X. Finally, by studying viral genes, the circular code X was found in DNA genomes, RNA genomes, double-stranded genomes, and single-stranded genomes.
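The self-complementarity property and the frame-occurrence counts underlying the method can be checked with a few lines of code; the trinucleotide subset and sequence below are illustrative only, not the full 20-word code X.

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(t):
    """Reverse complement of a trinucleotide."""
    return t.translate(COMP)[::-1]

def is_self_complementary(words):
    """A set is self-complementary if it contains the reverse complement of each of its words."""
    return all(revcomp(w) in words for w in words)

def frame_counts(seq, words):
    """Occurrences of the words in reading frames 0, 1 and 2 of a sequence."""
    counts = []
    for frame in range(3):
        codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        counts.append(sum(c in words for c in codons))
    return counts

# Illustrative subset only (the full code X has 20 trinucleotides).
words = {"AAC", "GTT", "GAG", "CTC", "ATC", "GAT"}
print(is_self_complementary(words))
print(frame_counts("ATGGAGAACATCGATGTTCTCGAGTAA", words))
```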
Repeated high-intensity exercise modulates Ca(2+) sensitivity of human skeletal muscle fibers.
Gejl, K D; Hvid, L G; Willis, S J; Andersson, E; Holmberg, H-C; Jensen, R; Frandsen, U; Hansen, J; Plomgaard, P; Ørtenblad, N
2016-05-01
The effects of short-term high-intensity exercise on single fiber contractile function in humans are unknown. Therefore, the purposes of this study were: (a) to assess the acute effects of repeated high-intensity exercise on human single muscle fiber contractile function; and (b) to examine whether contractile function was affected by alterations in the redox balance. Eleven elite cross-country skiers performed four maximal bouts of 1300 m treadmill skiing with 45 min recovery. Contractile function of chemically skinned single fibers from triceps brachii was examined before the first and following the fourth sprint with respect to Ca(2+) sensitivity and maximal Ca(2+) -activated force. To investigate the oxidative effects of exercise on single fiber contractile function, a subset of fibers was incubated with dithiothreitol (DTT) before analysis. Ca(2+) sensitivity was enhanced by exercise in both MHC I (17%, P < 0.05) and MHC II (15%, P < 0.05) fibers. This potentiation was not present after incubation of fibers with DTT. Specific force of both MHC I and MHC II fibers was unaffected by exercise. In conclusion, repeated high-intensity exercise increased Ca(2+) sensitivity in both MHC I and MHC II fibers. This effect was not observed in a reducing environment, indicative of an exercise-induced oxidation of the human contractile apparatus. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
McTwo: a two-step feature selection algorithm based on maximal information coefficient.
Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng
2016-03-23
High-throughput bio-OMIC technologies are producing high-dimension data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increased time requirement for finding the globally optimal solution, all the existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features associated with phenotypes, independently of each other, and achieving high classification performance of the nearest neighbor algorithm. Based on the comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes from the literature. McTwo selects a feature subset with very good classification performance, as well as a small feature number. So McTwo may represent a complementary feature selection algorithm for the high-dimensional biomedical datasets.
Formal Darwinism, the individual-as-maximizing-agent analogy and bet-hedging
Grafen, A.
1999-01-01
The central argument of The origin of species was that mechanical processes (inheritance of features and the differential reproduction they cause) can give rise to the appearance of design. The 'mechanical processes' are now mathematically represented by the dynamic systems of population genetics, and the appearance of design by optimization and game theory in which the individual plays the part of the maximizing agent. Establishing a precise individual-as-maximizing-agent (IMA) analogy for a population-genetics system justifies optimization approaches, and so provides a modern formal representation of the core of Darwinism. It is a hitherto unnoticed implication of recent population-genetics models that, contrary to a decades-long consensus, an IMA analogy can be found in models with stochastic environments (subject to a convexity assumption), in which individuals maximize expected reproductive value. The key is that the total reproductive value of a species must be considered as constant, so therefore reproductive value should always be calculated in relative terms. This result removes a major obstacle from the theoretical challenge to find a unifying framework which establishes the IMA analogy for all of Darwinian biology, including as special cases inclusive fitness, evolutionarily stable strategies, evolutionary life-history theory, age-structured models and sex ratio theory. This would provide a formal, mathematical justification of fruitful and widespread but 'intentional' terms in evolutionary biology, such as 'selfish', 'altruism' and 'conflict'.
Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim
2016-01-01
The average topological overlap of two graphs of two consecutive time steps measures the amount of changes in the edge configuration between the two snapshots. This value has to be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, this methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaption of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented, which shows the expected behaviour mentioned above. The newly proposed adaption uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, for the calculation of the topological overlap. The three methods were compared with the help of vivid example networks to reveal the differences between the proposed notations. Furthermore, these three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods.
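A sketch of the adapted coefficient under one reading of the description, normalizing the summed per-node overlap by the maximal number of nodes with at least one edge in the two snapshots; the example graphs are toy data, not the animal-movement network.

```python
import numpy as np

def topological_overlap(A1, A2):
    """Per-node overlap of edges between two consecutive snapshots (0 where undefined)."""
    shared = (A1 * A2).sum(axis=1)
    denom = np.sqrt(A1.sum(axis=1) * A2.sum(axis=1))
    return np.divide(shared, denom, out=np.zeros_like(shared), where=denom > 0)

def temporal_correlation(A1, A2):
    """Average overlap normalized by the maximal number of active nodes
    (nodes with at least one edge) in the two snapshots."""
    n_active = max(int((A1.sum(axis=1) > 0).sum()), int((A2.sum(axis=1) > 0).sum()))
    return topological_overlap(A1, A2).sum() / n_active if n_active else 0.0

# Toy movement networks for 5 holdings at two consecutive time steps.
A1 = np.array([[0, 1, 1, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 1, 0],
               [0, 0, 1, 0, 0], [0, 0, 0, 0, 0]], dtype=float)
A2 = np.array([[0, 1, 0, 0, 0], [1, 0, 0, 0, 0], [0, 0, 0, 1, 0],
               [0, 0, 1, 0, 0], [0, 0, 0, 0, 0]], dtype=float)
print(round(temporal_correlation(A1, A2), 3))   # 1.0 would mean identical edge configurations
```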
Do framing effects reveal irrational choice?
Mandel, David R
2014-06-01
Framing effects have long been viewed as compelling evidence of irrationality in human decision making, yet that view rests on the questionable assumption that numeric quantifiers used to convey the expected values of choice options are uniformly interpreted as exact values. Two experiments show that when the exactness of such quantifiers is made explicit by the experimenter, framing effects vanish. However, when the same quantifiers are given a lower bound (at least) meaning, the typical framing effect is found. A 3rd experiment confirmed that most people spontaneously interpret the quantifiers in standard framing tests as lower bounded and that their interpretations strongly moderate the framing effect. Notably, in each experiment, a significant majority of participants made rational choices, either choosing the option that maximized expected value (i.e., lives saved) or choosing consistently across frames when the options were of equal expected value. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Blonder, Benjamin
2016-04-01
Hypervolumes are used widely to conceptualize niches and trait distributions for both species and communities. Some hypervolumes are expected to be convex, with boundaries defined by only upper and lower limits (e.g., fundamental niches), while others are expected to be maximal, with boundaries defined by the limits of available space (e.g., potential niches). However, observed hypervolumes (e.g., realized niches) could also have holes, defined as unoccupied hyperspace representing deviations from these expectations that may indicate unconsidered ecological or evolutionary processes. Detecting holes in more than two dimensions has to date not been possible. I develop a mathematical approach, implemented in the hypervolume R package, to infer holes in large and high-dimensional data sets. As a demonstration analysis, I assess evidence for vacant niches in a Galapagos finch community on Isabela Island. These mathematical concepts and software tools for detecting holes provide approaches for addressing contemporary research questions across ecology and evolutionary biology.
Short-Term Planning of Hybrid Power System
NASA Astrophysics Data System (ADS)
Knežević, Goran; Baus, Zoran; Nikolovski, Srete
2016-07-01
In this paper, a short-term planning algorithm is presented for a hybrid power system consisting of different types of cascade hydropower plants (run-of-the-river, pumped storage, conventional), thermal power plants (coal-fired power plants, combined-cycle gas-fired power plants) and wind farms. The optimization process provides a joint bid of the hybrid system, and thus determines the operation schedule of the hydro and thermal power plants and the operating condition of the pumped-storage hydropower plants, with the aim of maximizing profits on the day-ahead market, according to expected hourly electricity prices, the expected local water inflow at certain hydropower plants, and the expected production of electrical energy from the wind farm, taking into account previously contracted bilateral agreements for electricity generation. The optimization process is formulated as an hourly-discretized mixed integer linear optimization problem. The optimization model is applied to a case study in order to show the general features of the developed model.
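A heavily simplified sketch of the day-ahead profit-maximization idea, reduced to a single thermal unit with a daily energy budget and solved as a plain linear program (the paper's model is a full mixed-integer formulation with hydro cascades, pumping and wind); prices and parameters are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical day-ahead prices (EUR/MWh) and a single thermal unit.
price = np.array([30, 28, 27, 29, 35, 45, 60, 72, 70, 62, 55, 50,
                  48, 47, 46, 50, 58, 75, 80, 68, 55, 45, 38, 33], dtype=float)
cost, p_max, energy_cap = 40.0, 100.0, 1200.0    # marginal cost, capacity (MW), daily MWh budget

# Decision variables: hourly generation g[t]. Maximize sum((price - cost) * g);
# linprog minimizes, so the objective is negated.
res = linprog(c=-(price - cost),
              A_ub=np.ones((1, 24)), b_ub=[energy_cap],   # daily energy budget
              bounds=[(0.0, p_max)] * 24, method="highs")

print("profit:", round(-res.fun, 1), "EUR")
print("generation schedule (MW):", res.x.round(1))
```

With these numbers the energy budget binds, so the unit runs at capacity only in the most profitable hours, which is the basic arbitrage behaviour the full hybrid model exploits on a much larger scale.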
The heart-break of social rejection versus the brain wave of social acceptance
van der Molen, Maurits W.; Sahibdin, Priya P.; Franken, Ingmar H. A.
2014-01-01
The effect of social rejection on cardiac and brain responses was examined in a study in which participants had to decide on the basis of pictures of virtual peers whether these peers would like them or not. Physiological and behavioral responses to expected and unexpected acceptance and rejection were compared. It was found that participants expected that about 50% of the virtual judges gave them a positive judgment. Cardiac deceleration was strongest for unexpected social rejection. In contrast, the brain response was strongest to expected acceptance and was characterized by a positive deflection peaking around 325 ms following stimulus onset and the observed difference was maximal at fronto-central positions. The cardiac and electro-cortical responses were not related. It is hypothesized that these differential response patterns might be related to earlier described differential involvement of the dorsal and ventral portion of the anterior cingulate cortex. PMID:23887821
ERIC Educational Resources Information Center
American Council on Education, Washington, DC.
Guidelines for colleges concerning the privacy of employee records are presented in two policy statements. Institutional policy should minimize intrusiveness, maximize fairness, and create legitimate expectations of confidentiality. In addition to strengthening professional equity of treatment, confidentiality permits consideration of both adverse…
Wireless Sensor Network Metrics for Real-Time Systems
2009-05-20
to compute the probability of end-to-end packet delivery as a function of latency, the expected radio energy consumption on the nodes from relaying... schedules for WSNs. Particularly, we focus on the impact scheduling has on path diversity, using short repeating schedules and Greedy Maximal Matching... a greedy algorithm for constructing a mesh routing topology. Finally, we study the implications of using distributed scheduling schemes to generate
Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images
Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali
2015-01-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA-dependent RNA polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077
Gupta, Rahul; Audhkhasi, Kartik; Jacokes, Zach; Rozga, Agata; Narayanan, Shrikanth
2018-01-01
Studies of time-continuous human behavioral phenomena often rely on ratings from multiple annotators. Since the ground truth of the target construct is often latent, the standard practice is to use ad-hoc metrics (such as averaging annotator ratings). Despite being easy to compute, such metrics may not provide accurate representations of the underlying construct. In this paper, we present a novel method for modeling multiple time series annotations over a continuous variable that computes the ground truth by modeling annotator-specific distortions. We condition the ground truth on a set of features extracted from the data and further assume that the annotators provide their ratings as modifications of the ground truth, with each annotator having specific distortion tendencies. We train the model using an Expectation-Maximization-based algorithm and evaluate it on a study involving natural interaction between a child and a psychologist, to predict confidence ratings of the children's smiles. We compare and analyze the model against two baselines where: (i) the ground truth is considered to be the framewise mean of ratings from the various annotators, and (ii) each annotator is assumed to bear a distinct time delay in annotation and their annotations are aligned before computing the framewise mean.
Model-based clustering for RNA-seq data.
Si, Yaqing; Liu, Peng; Li, Pinghua; Brutnell, Thomas P
2014-01-15
RNA-seq technology has been widely adopted as an attractive alternative to microarray-based methods to study global gene expression. However, robust statistical tools to analyze these complex datasets are still lacking. By grouping genes with similar expression profiles across treatments, cluster analysis provides insight into gene functions and networks, and hence is an important technique for RNA-seq data analysis. In this manuscript, we derive clustering algorithms based on appropriate probability models for RNA-seq data. An expectation-maximization (EM) algorithm and two stochastic variants of the EM algorithm are described. In addition, a strategy for initialization based on likelihood is proposed to improve the clustering algorithms. Moreover, we present a model-based hybrid-hierarchical clustering method to generate a tree structure that allows visualization of relationships among clusters as well as flexibility of choosing the number of clusters. Results from both simulation studies and analysis of a maize RNA-seq dataset show that our proposed methods provide better clustering results than alternative methods such as the K-means algorithm and hierarchical clustering methods that are not based on probability models. An R package, MBCluster.Seq, has been developed to implement our proposed algorithms. This R package provides fast computation and is publicly available at http://www.r-project.org
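The general idea of model-based clustering of count profiles by EM can be illustrated with a bare-bones Poisson-mixture sketch; this is not the MBCluster.Seq implementation, and the synthetic counts, cluster number and initialization are assumptions for illustration only.
```python
import numpy as np

def poisson_mixture_em(counts, K, n_iter=100, seed=0):
    """Cluster genes (rows of `counts`) by a K-component Poisson mixture via EM."""
    rng = np.random.default_rng(seed)
    n, d = counts.shape
    pi = np.full(K, 1.0 / K)                               # mixing proportions
    lam = counts[rng.choice(n, K, replace=False)] + 0.5    # component means (clusters x treatments)
    for _ in range(n_iter):
        # E-step: responsibilities from Poisson log-likelihoods (x! term cancels)
        log_r = np.log(pi) + counts @ np.log(lam).T - lam.sum(axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update proportions and component means
        nk = r.sum(axis=0)
        pi = nk / n
        lam = (r.T @ counts) / nk[:, None] + 1e-9
    return r.argmax(axis=1), lam

counts = np.random.default_rng(1).poisson(
    lam=np.repeat([[5, 5, 50], [50, 5, 5]], 100, axis=0))   # two synthetic expression patterns
labels, centers = poisson_mixture_em(counts, K=2)
print(np.bincount(labels))
```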
NASA Astrophysics Data System (ADS)
Floberg, J. M.; Holden, J. E.
2013-02-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three- and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
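A rough sketch of the two ingredients as described (4D Gaussian smoothing followed by an EM, Richardson-Lucy-style deconvolution using the same symmetric kernel) is given below; the kernel widths, iteration count and the random test volume are assumptions, not values from the study.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stem_like_filter(dyn_pet, sigma=(1.0, 2.0, 2.0, 2.0), n_em=10, eps=1e-8):
    """Denoise a dynamic PET array (time, z, y, x): Gaussian smoothing, then
    Richardson-Lucy (EM) deconvolution with the same symmetric Gaussian kernel
    to restore the frequencies suppressed by the smoothing."""
    smoothed = gaussian_filter(dyn_pet, sigma=sigma)
    estimate = smoothed.copy()
    for _ in range(n_em):
        blurred = gaussian_filter(estimate, sigma=sigma) + eps
        estimate *= gaussian_filter(smoothed / blurred, sigma=sigma)
    return estimate

noisy = np.random.poisson(50, size=(8, 16, 32, 32)).astype(float)  # toy 4D data
print(stem_like_filter(noisy).shape)
```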
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd. Zubir; Abdulbaqi, Hayder Saad; Mutter, Kussay N.; Mustapha, Iskandar Shahrim; Omar, Ahmad Fairuz
2017-06-01
A brain tumour is an abnormal growth of tissue in the brain. Most tumour volume measurement processes are carried out manually by the radiographer and radiologist without relying on any automated program. This manual method is a time-consuming task and may give inaccurate results. Treatment, diagnosis, and the signs and symptoms of brain tumours mainly depend on the tumour volume and its location. In this paper, an approach is proposed to improve volume measurement of brain tumours, together with a new method to determine the brain tumour location. The current study presents a hybrid method that combines two techniques. The first is hidden Markov random field-expectation maximization (HMRF-EM), which provides an initial classification of the image. The second employs thresholding, which enables the final segmentation. In this method, the tumour volume is calculated using voxel dimension measurements. The brain tumour location was determined accurately in T2-weighted MRI images using a new algorithm. According to the results, this process was proven to be more useful than the manual method. Thus, it provides the possibility of calculating the volume and determining the location of a brain tumour.
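The voxel-counting step described here is simple to illustrate; the segmentation mask, voxel spacing and the centroid used as a stand-in for "location" in the sketch below are placeholders, not the paper's actual pipeline.
```python
import numpy as np

def tumour_volume_and_location(mask, voxel_size_mm=(1.0, 1.0, 5.0)):
    """Volume (in cm^3) and centre-of-mass location (voxel indices) of a
    binary tumour segmentation mask with axes (z, y, x)."""
    mask = np.asarray(mask, dtype=bool)
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    volume_cm3 = mask.sum() * voxel_volume_mm3 / 1000.0
    centroid = np.array(np.nonzero(mask)).mean(axis=1)     # (z, y, x) indices
    return volume_cm3, centroid

mask = np.zeros((20, 64, 64), dtype=bool)
mask[8:12, 30:40, 30:40] = True                             # synthetic "tumour"
print(tumour_volume_and_location(mask))
```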
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper essentially focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and to separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternately executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The verification of the proposed method is demonstrated with simulations. PMID:29617323
González, M; Gutiérrez, C; Martínez, R
2012-09-01
A two-dimensional bisexual branching process has recently been presented for the analysis of the generation-to-generation evolution of the number of carriers of a Y-linked gene. In this model, preference of females for males with a specific genetic characteristic is assumed to be determined by an allele of the gene. It has been shown that the behavior of this kind of Y-linked gene is strongly related to the reproduction law of each genotype. In practice, the corresponding offspring distributions are usually unknown, and it is necessary to develop their estimation theory in order to determine the natural selection of the gene. Here we deal with the estimation problem for the offspring distribution of each genotype of a Y-linked gene when the only observable data are each generation's total numbers of males of each genotype and of females. We set out the problem in a nonparametric framework and obtain the maximum likelihood estimators of the offspring distributions using an expectation-maximization algorithm. From these estimators, we also derive the estimators for the reproduction mean of each genotype and forecast the distribution of the future population sizes. Finally, we check the accuracy of the algorithm by means of a simulation study.
NASA Astrophysics Data System (ADS)
Coogan, A.; Avanzi, F.; Akella, R.; Conklin, M. H.; Bales, R. C.; Glaser, S. D.
2017-12-01
Automatic meteorological and snow stations provide large amounts of information at dense temporal resolution, but data quality is often compromised by noise and missing values. We present a new gap-filling and cleaning procedure for networks of these stations based on Kalman filtering and expectation maximization. Our method utilizes a multi-sensor, regime-switching Kalman filter to learn a latent process that captures dependencies between nearby stations and handles sharp changes in snowfall rate. Since the latent process is inferred using observations across working stations in the network, it can be used to fill in large data gaps for a malfunctioning station. The procedure was tested on meteorological and snow data from Wireless Sensor Networks (WSN) in the American River basin of the Sierra Nevada. Data include air temperature, relative humidity, and snow depth from dense networks of 10 to 12 stations within 1 km² swaths. Both wet and dry water years have similar data issues. Data with artificially created gaps were used to quantify the method's performance. Our multi-sensor approach performs better than a single-sensor one, especially with large data gaps, as it learns and exploits the dominant underlying processes in snowpack at each site.
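The full method uses a multi-sensor, regime-switching Kalman filter learned by expectation maximization; the sketch below is a much simpler single-sensor local-level Kalman filter that skips the update step at missing samples, meant only to illustrate the gap-filling mechanics. All parameter values and the toy series are assumptions.
```python
import numpy as np

def kalman_gap_fill(y, process_var=0.05, obs_var=0.5):
    """Fill NaN gaps in a 1D series with a local-level (random walk) Kalman filter.
    At missing samples only the prediction step is applied, so the estimate
    coasts through the gap while its uncertainty grows."""
    n = len(y)
    x_hat = np.empty(n)
    first = np.flatnonzero(~np.isnan(y))[0]
    x, p = y[first], obs_var                      # initial state and variance
    for t in range(n):
        p = p + process_var                       # predict
        if not np.isnan(y[t]):                    # update only where data exist
            k = p / (p + obs_var)
            x = x + k * (y[t] - x)
            p = (1.0 - k) * p
        x_hat[t] = x
    return x_hat

snow_depth = np.array([50, 52, np.nan, np.nan, np.nan, 60, 61, 63.0])
print(np.round(kalman_gap_fill(snow_depth), 1))
```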
Wang, Xiaorong; Kang, Yu; Luo, Chunxiong; Zhao, Tong; Liu, Lin; Jiang, Xiangdan; Fu, Rongrong; An, Shuchang; Chen, Jichao; Jiang, Ning; Ren, Lufeng; Wang, Qi; Baillie, J Kenneth; Gao, Zhancheng; Yu, Jun
2014-02-11
Heteroresistance refers to phenotypic heterogeneity of microbial clonal populations under antibiotic stress, and it has been thought to be an allocation of a subset of "resistant" cells for surviving in higher concentrations of antibiotic. The assumption fits the so-called bet-hedging strategy, where a bacterial population "hedges" its "bet" on different phenotypes to be selected by unpredicted environmental stresses. To test this hypothesis, we constructed a heteroresistance model by introducing a blaCTX-M-14 gene (coding for a cephalosporin hydrolase) into a sensitive Escherichia coli strain. We confirmed heteroresistance in this clone and that a subset of the cells expressed more hydrolase and formed more colonies in the presence of ceftriaxone (exhibited stronger "resistance"). However, subsequent single-cell-level investigation by using a microfluidic device showed that a subset of cells with a distinguishable phenotype of slowed growth and intensified hydrolase expression emerged, and they were not positively selected but increased their proportion in the population with ascending antibiotic concentrations. Therefore, heteroresistance--the gradually decreased colony-forming capability in the presence of antibiotic--was a result of a decreased growth rate rather than of selection for resistant cells. Using a mock strain without the resistance gene, we further demonstrated the existence of two nested growth-centric feedback loops that control the expression of the hydrolase and maximize population growth in various antibiotic concentrations. In conclusion, phenotypic heterogeneity is a population-based strategy beneficial for bacterial survival and propagation through task allocation and interphenotypic collaboration, and the growth rate provides a critical control for the expression of stress-related genes and an essential mechanism in responding to environmental stresses. Heteroresistance is essentially phenotypic heterogeneity, in which a population-based strategy is thought to be at work, with cell-to-cell variation in resistance assumed to be selected under antibiotic stress. Exact mechanisms of heteroresistance and its roles in adaptation to antibiotic stress have yet to be fully understood at the molecular and single-cell levels. In our study, we have not been able to detect any apparent subset of "resistant" cells selected by antibiotics; on the contrary, cell populations differentiate into phenotypic subsets with variable growth statuses and hydrolase expression. The growth rate appears to be sensitive to stress intensity and plays a key role in controlling hydrolase expression at both the bulk population and single-cell levels. We have shown here, for the first time, that phenotypic heterogeneity can be beneficial to a growing bacterial population through task allocation and interphenotypic collaboration, rather than through partitioning cells into different categories of selective advantage.
Signalling changes to individuals who show resistance to change can reduce challenging behaviour.
Bull, Leah E; Oliver, Chris; Woodcock, Kate A
2017-03-01
Several neurodevelopmental disorders are associated with resistance to change and challenging behaviours - including temper outbursts - that ensue following changes to routines, plans or expectations (here, collectively: expectations). Here, a change signalling intervention was tested for proof of concept and potential practical effectiveness. Twelve individuals with Prader-Willi syndrome participated in researcher- and caregiver-led pairing of a distinctive visual-verbal signal with subsequent changes to expectations. Specific expectations for a planned subset of five participants were systematically observed in minimally manipulated natural environments. Nine caregivers completed a temper outburst diary during a four week baseline period and a two week signalling evaluation period. Participants demonstrated consistently less temper outburst behaviour in the systematic observations when changes imposed to expectations were signalled, compared to when changes were not signalled. Four of the nine participants whose caregivers completed the behaviour diary demonstrated reliable reductions in temper outbursts between baseline and signalling evaluation. An active control group for the present initial evaluation of the signalling strategy using evidence from caregiver behaviour diaries was outside the scope of the present pilot study. Thus, findings cannot support the clinical efficacy of the present signalling approach. Proof of concept evidence that reliable pairing of a distinctive cue with a subsequent change to expectation can reduce associated challenging behaviour is provided. Data provide additional support for the importance of specific practical steps in further evaluations of the change signalling approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Chang, C. Y.
1974-01-01
The author has identified the following significant results. The Skylab S192 data was evaluated by: (1) comparing the classification results using S192 and ERTS-1 data over the Holt County, Nebraska agricultural study area, and (2) investigating the impact of signal-to-noise ratio on classification accuracies using registered S192 and ERTS-1 data. Results indicate: (1) The classification accuracy obtained on S192 data using its best subset of four bands can be expected to be as high as that on ERTS-1 data. (2) When a subset of four S192 bands that are spectrally similar to the ERTS-1 bands was used for classification, an obvious deterioration in the classification accuracy was observed with respect to the ERTS-1 results. (3) The thermal bands 13 and 14 as well as the near IR bands were found to be relatively important in the classification of agricultural data. Although bands 11 and 12 were highly correlated, both were invariably included in the best subsets of the band sizes, four and beyond, according to the divergence criterion. (4) The differentiation of corn from popcorn was difficult on both S192 and ERTS-1 data acquired at an early summer date. (5) The results on both sets of data indicate that it was relatively easy to differentiate grass from any other class.
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2014-06-01
We give evidence for the Big Fix. The theory of wormholes and the multiverse suggests that the parameters of the Standard Model are fixed in such a way that the total entropy at the late stage of the universe is maximized, which we call the maximum entropy principle. In this paper, we discuss how it can be confirmed by the experimental data, and we show that it is indeed true for the Higgs vacuum expectation value v_h. We assume that the baryon number is produced by the sphaleron process, and that the current quark masses, the gauge couplings and the Higgs self-coupling are fixed when we vary v_h. It turns out that the existence of atomic nuclei plays a crucial role in maximizing the entropy. This is reminiscent of the anthropic principle; in our case, however, it is required by the fundamental law.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lami, L.; Giovannetti, V.
The set of Entanglement Saving (ES) quantum channels is introduced and characterized. These are completely positive, trace-preserving transformations which, when acting locally on a bipartite quantum system initially prepared in a maximally entangled configuration, preserve its entanglement even when applied an arbitrary number of times. In other words, a quantum channel ψ is said to be ES if its powers ψ^n are not entanglement-breaking for all integers n. We also characterize the properties of the Asymptotic Entanglement Saving (AES) maps. These form a proper subset of the ES channels, constituted by those maps that not only preserve entanglement for all finite n but also sustain a strictly nonzero level of entanglement in the asymptotic limit n → ∞. Structure theorems are provided for ES and for AES maps which yield an almost complete characterization of the former and a full characterization of the latter.
Spread of risk across financial markets: better to invest in the peripheries
NASA Astrophysics Data System (ADS)
Pozzi, F.; Di Matteo, T.; Aste, T.
2013-04-01
Risk is not uniformly spread across financial markets and this fact can be exploited to reduce investment risk contributing to improve global financial stability. We discuss how, by extracting the dependency structure of financial equities, a network approach can be used to build a well-diversified portfolio that effectively reduces investment risk. We find that investments in stocks that occupy peripheral, poorly connected regions in financial filtered networks, namely Minimum Spanning Trees and Planar Maximally Filtered Graphs, are most successful in diversifying, improving the ratio between returns' average and standard deviation, reducing the likelihood of negative returns, while keeping profits in line with the general market average even for small baskets of stocks. On the contrary, investments in subsets of central, highly connected stocks are characterized by greater risk and worse performance. This methodology has the added advantage of visualizing portfolio choices directly over the graphic layout of the network.
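A toy illustration of the "invest in the peripheries" idea is sketched below using a Minimum Spanning Tree built from return correlations, with low degree centrality in the tree as a proxy for peripherality; the returns are random, the basket size is arbitrary, and the Planar Maximally Filtered Graph construction is omitted.
```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 20))                 # 500 days x 20 synthetic stocks
corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))                   # correlation -> metric distance

G = nx.Graph()
n = corr.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        G.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(G, weight="weight")
centrality = nx.degree_centrality(mst)               # peripheral = low centrality in the MST
peripheral = sorted(centrality, key=centrality.get)[:5]
print("peripheral basket:", peripheral)
```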
Local electric dipole moments for periodic systems via density functional theory embedding.
Luber, Sandra
2014-12-21
We describe a novel approach for the calculation of local electric dipole moments for periodic systems. Since the position operator is ill-defined in periodic systems, maximally localized Wannier functions based on the Berry-phase approach are usually employed for the evaluation of local contributions to the total electric dipole moment of the system. We propose an alternative approach: within a subsystem-density functional theory based embedding scheme, subset electric dipole moments are derived without any additional localization procedure, both for hybrid and non-hybrid exchange-correlation functionals. This opens the way to a computationally efficient evaluation of local electric dipole moments in (molecular) periodic systems as well as their rigorous splitting into atomic electric dipole moments. As examples, infrared spectra of liquid ethylene carbonate and dimethyl carbonate are presented, which are commonly employed as solvents in lithium-ion batteries.
Piezoelectric ribbons printed onto rubber for flexible energy conversion.
Qi, Yi; Jafferis, Noah T; Lyons, Kenneth; Lee, Christine M; Ahmad, Habib; McAlpine, Michael C
2010-02-10
The development of a method for integrating highly efficient energy conversion materials onto stretchable, biocompatible rubbers could yield breakthroughs in implantable or wearable energy harvesting systems. Being electromechanically coupled, piezoelectric crystals represent a particularly interesting subset of smart materials that function as sensors/actuators, bioMEMS devices, and energy converters. Yet, the crystallization of these materials generally requires high temperatures for maximally efficient performance, rendering them incompatible with temperature-sensitive plastics and rubbers. Here, we overcome these limitations by presenting a scalable and parallel process for transferring crystalline piezoelectric nanothick ribbons of lead zirconate titanate from host substrates onto flexible rubbers over macroscopic areas. Fundamental characterization of the ribbons by piezo-force microscopy indicates that their electromechanical energy conversion metrics are among the highest reported on a flexible medium. The excellent performance of the piezo-ribbon assemblies coupled with stretchable, biocompatible rubber may enable a host of exciting avenues in fundamental research and novel applications.
Renal Denervation: Intractable Hypertension and Beyond
Ariyanon, Wassawon; Mao, Huijuan; Adýbelli, Zelal; Romano, Silvia; Rodighiero, Mariapia; Reimers, Bernhard; La Vecchia, Luigi; Ronco, Claudio
2014-01-01
Background: Hypertension continues to be a major burden of public health concern despite the recent advances and proven benefit of pharmacological therapy. A certain subset of patients has hypertension resistant to maximal medical therapy and appropriate lifestyle measures. A novel catheter-based technique for renal denervation (RDN) as a new therapeutic avenue has great promise for the treatment of refractory hypertension. Summary: This review included the physiology of the renal sympathetic nervous system and the renal nerve anatomy. Furthermore, the RDN procedure, technology systems, and RDN clinical trials as well as findings besides antihypertensive effects were discussed. Findings on safety and efficacy seem to suggest that renal sympathetic denervation could be of therapeutic benefit in refractory hypertensive patients. Despite the fast pace of development in RDN therapies, only initial and very limited clinical data are available. Large gaps in knowledge concerning the long-term effects and consequences of RDN still exist, and solid, randomized data are warranted. PMID:24847331
Random access in large-scale DNA data storage.
Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin
2018-03-01
Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
Hybrid Photon-Plasmon Coupling and Ultrafast Control of Nanoantennas on a Silicon Photonic Chip.
Chen, Bigeng; Bruck, Roman; Traviss, Daniel; Khokhar, Ali Z; Reynolds, Scott; Thomson, David J; Mashanovich, Goran Z; Reed, Graham T; Muskens, Otto L
2018-01-10
Hybrid integration of nanoplasmonic devices with silicon photonic circuits holds promise for a range of applications in on-chip sensing, field-enhanced and nonlinear spectroscopy, and integrated nanophotonic switches. Here, we demonstrate a new regime of photon-plasmon coupling by combining a silicon photonic resonator with plasmonic nanoantennas. Using principles from coherent perfect absorption, we make use of standing-wave light fields to maximize the photon-plasmon interaction strength. Precise placement of the broadband antennas with respect to the narrowband photonic racetrack modes results in controlled hybridization of only a subset of these modes. By combining antennas into groups of radiating dipoles with opposite phase, far-field scattering is effectively suppressed. We achieve ultrafast tuning of photon-plasmon hybridization including reconfigurable routing of the standing-wave input between two output ports. Hybrid photonic-plasmonic resonators provide conceptually new approaches for on-chip integrated nanophotonic devices.
Particle chaos and pitch angle scattering
NASA Technical Reports Server (NTRS)
Burkhart, G. R.; Dusenbery, P. B.; Speiser, T. W.
1995-01-01
Pitch angle scattering is a factor that helps determine the dawn-to-dusk current, controls particle energization, and has also been used as a remote probe of the current sheet structure. Previous studies have interpreted their results under the expectation that randomization will be greatest when the ratio of the two timescales of motion (gyration parallel to and perpendicular to the current sheet) is closest to one. Recently, the average exponential divergence rate (AEDR) has been calculated for particle motion in a hyperbolic current sheet (Chen, 1992). It is claimed that this AEDR measures the degree of chaos and therefore may be thought to measure the randomization. In contrast to previous expectations, the AEDR is not maximized when Kappa is approximately equal to 1 but instead increases with decreasing Kappa. Also contrary to previous expectations, the AEDR is dependent upon the parameter b_z. In response to the challenge to previous expectations that has been raised by this calculation of the AEDR, we have investigated the dependence of a measure of particle pitch angle scattering on both the parameters Kappa and b_z. We find that, as was previously expected, particle pitch angle scattering is maximized near Kappa = 1 provided that Kappa/b_z is greater than 1. In the opposite regime, Kappa/b_z less than 1, we find that particle pitch angle scattering is still largest when the two timescales are equal, but the ratio of the timescales is proportional to b_z. In this second regime, particle pitch angle scattering is not due to randomization, but is instead due to a systematic pitch angle change. This result shows that particle pitch angle scattering need not be due to randomization and indicates how a measure of pitch angle scattering can exhibit a different behavior than a measure of chaos.
Alpha-Fair Resource Allocation under Incomplete Information and Presence of a Jammer
NASA Astrophysics Data System (ADS)
Altman, Eitan; Avrachenkov, Konstantin; Garnaev, Andrey
In the present work we deal with the concept of alpha-fair resource allocation in the situation where the decision maker (in our case, the base station) does not have complete information about the environment. Namely, we develop a concept of α-fairness under uncertainty for allocating power in the presence of a jammer under two types of uncertainty: (a) the decision maker does not have complete knowledge about the parameters of the environment, but knows only their distribution; (b) the jammer can come into the environment with some probability, bringing extra background noise. The goal of the decision maker is to maximize the α-fairness utility function with respect to the SNIR (signal to noise-plus-interference ratio). Here we consider the concept of the expected α-fairness utility function (short-term fairness) as well as fairness of expectation (long-term fairness). In the scenario with unknown parameters of the environment, the most adequate approach is a zero-sum game, since it can also be viewed as a minimax problem for the decision maker playing against nature, where the decision maker has to apply the best allocation under the worst circumstances. In the scenario with uncertainty about whether the jammer is in the system, the Nash equilibrium concept is employed, since the agents have non-zero-sum payoffs: the decision maker would like to maximize either the expected fairness or the fairness of expectation, while the jammer would like to minimize the fairness if he comes onto the scene. For all the scenarios, the equilibrium strategies are found in closed form. We have shown that for all the scenarios the equilibrium has to be constructed in two steps. In the first step, the equilibrium jamming strategy is constructed based on a solution of the corresponding modification of the water-filling equation. In the second step, the decision maker's equilibrium strategy is constructed by equalizing the background noise induced by the jammer.
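A numerical sketch of maximizing the α-fair utility of per-channel SNIR under a total power budget is given below; the channel gains, noise levels and α are made-up values, the generic constrained optimizer stands in for the water-filling solution, and the game against the jammer is left out.
```python
import numpy as np
from scipy.optimize import minimize

def alpha_fair_power(gain, noise, total_power, alpha=2.0):
    """Maximize sum_i U_alpha(SNIR_i) with SNIR_i = gain_i * p_i / noise_i,
    subject to p_i >= 0 and sum_i p_i = total_power.
    U_alpha(x) = log(x) for alpha = 1, else x^(1-alpha)/(1-alpha)."""
    def neg_utility(p):
        snir = gain * p / noise + 1e-12
        if np.isclose(alpha, 1.0):
            u = np.log(snir)
        else:
            u = snir ** (1.0 - alpha) / (1.0 - alpha)
        return -u.sum()

    n = len(gain)
    p0 = np.full(n, total_power / n)
    res = minimize(neg_utility, p0,
                   bounds=[(0.0, total_power)] * n,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - total_power}])
    return res.x

gain = np.array([1.0, 0.5, 0.2])
noise = np.array([0.1, 0.1, 0.2])
print(np.round(alpha_fair_power(gain, noise, total_power=1.0), 3))
```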
AVC: Selecting discriminative features on basis of AUC by maximizing variable complementarity.
Sun, Lei; Wang, Jun; Wei, Jinmao
2017-03-14
The Receiver Operator Characteristic (ROC) curve is well known for evaluating classification performance in the biomedical field. Owing to its superiority in dealing with imbalanced and cost-sensitive data, the ROC curve has been exploited as a popular metric to evaluate and find out disease-related genes (features). The existing ROC-based feature selection approaches are simple and effective in evaluating individual features. However, these approaches may fail to find the real target feature subset due to their lack of effective means to reduce the redundancy between features, which is essential in machine learning. In this paper, we propose to assess feature complementarity by a trick of measuring the distances between the misclassified instances and their nearest misses on the dimensions of pairwise features. If a misclassified instance and its nearest miss on one feature dimension are far apart on another feature dimension, the two features are regarded as complementary to each other. Subsequently, we propose a novel filter feature selection approach on the basis of the ROC analysis. The new approach employs an efficient heuristic search strategy to select optimal features with the highest complementarities. The experimental results on a broad range of microarray data sets validate that the classifiers built on the feature subset selected by our approach can get the minimal balanced error rate with a small number of significant features. Compared with other ROC-based feature selection approaches, our new approach can select fewer features and effectively improve the classification performance.
Koutsoukas, Alexios; Paricharak, Shardul; Galloway, Warren R J D; Spring, David R; Ijzerman, Adriaan P; Glen, Robert C; Marcus, David; Bender, Andreas
2014-01-27
Chemical diversity is a widely applied approach to select structurally diverse subsets of molecules, often with the objective of maximizing the number of hits in biological screening. While many methods exist in the area, few systematic comparisons using current descriptors in particular with the objective of assessing diversity in bioactivity space have been published, and this shortage is what the current study is aiming to address. In this work, 13 widely used molecular descriptors were compared, including fingerprint-based descriptors (ECFP4, FCFP4, MACCS keys), pharmacophore-based descriptors (TAT, TAD, TGT, TGD, GpiDAPH3), shape-based descriptors (rapid overlay of chemical structures (ROCS) and principal moments of inertia (PMI)), a connectivity-matrix-based descriptor (BCUT), physicochemical-property-based descriptors (prop2D), and a more recently introduced molecular descriptor type (namely, "Bayes Affinity Fingerprints"). We assessed both the similar behavior of the descriptors in assessing the diversity of chemical libraries, and their ability to select compounds from libraries that are diverse in bioactivity space, which is a property of much practical relevance in screening library design. This is particularly evident, given that many future targets to be screened are not known in advance, but that the library should still maximize the likelihood of containing bioactive matter also for future screening campaigns. Overall, our results showed that descriptors based on atom topology (i.e., fingerprint-based descriptors and pharmacophore-based descriptors) correlate well in rank-ordering compounds, both within and between descriptor types. On the other hand, shape-based descriptors such as ROCS and PMI showed weak correlation with the other descriptors utilized in this study, demonstrating significantly different behavior. We then applied eight of the molecular descriptors compared in this study to sample a diverse subset of sample compounds (4%) from an initial population of 2587 compounds, covering the 25 largest human activity classes from ChEMBL and measured the coverage of activity classes by the subsets. Here, it was found that "Bayes Affinity Fingerprints" achieved an average coverage of 92% of activity classes. Using the descriptors ECFP4, GpiDAPH3, TGT, and random sampling, 91%, 84%, 84%, and 84% of the activity classes were represented in the selected compounds respectively, followed by BCUT, prop2D, MACCS, and PMI (in order of decreasing performance). In addition, we were able to show that there is no visible correlation between compound diversity in PMI space and in bioactivity space, despite frequent utilization of PMI plots to this end. To summarize, in this work, we assessed which descriptors select compounds with high coverage of bioactivity space, and can hence be used for diverse compound selection for biological screening. In cases where multiple descriptors are to be used for diversity selection, this work describes which descriptors behave complementarily, and can hence be used jointly to focus on different aspects of diversity in chemical space.
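One common way to turn any of these fingerprint descriptors into a diverse-subset selection is greedy MaxMin picking on Tanimoto distances; the sketch below operates on generic random binary fingerprints rather than actual ECFP4/FCFP4 bit vectors, and it is offered only as an illustration of diversity selection, not as the procedure used in the study.
```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprint vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def maxmin_pick(fps, n_pick, seed=0):
    """Greedy MaxMin diversity selection: repeatedly add the compound whose
    minimum Tanimoto distance (1 - similarity) to the picked set is largest."""
    rng = np.random.default_rng(seed)
    picked = [int(rng.integers(len(fps)))]
    min_dist = np.array([1.0 - tanimoto(fps[picked[0]], f) for f in fps])
    while len(picked) < n_pick:
        nxt = int(min_dist.argmax())
        picked.append(nxt)
        dist_to_new = np.array([1.0 - tanimoto(fps[nxt], f) for f in fps])
        min_dist = np.minimum(min_dist, dist_to_new)
    return picked

fps = np.random.default_rng(1).integers(0, 2, size=(2587, 128)).astype(bool)
subset = maxmin_pick(fps, n_pick=int(0.04 * len(fps)))   # a 4% diverse subset
print(len(subset))
```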
A Novel Protocol for Model Calibration in Biological Wastewater Treatment
Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen
2015-01-01
Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using the conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme: i) global factor sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis for detecting non-identifiable factors; and iii) formation of a parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for automatic calibration of ASMs and can potentially be applied to other ordinary differential equation models. PMID:25682959
Pincikova, Terezia; Paquin-Proulx, Dominic; Moll, Markus; Flodström-Tullberg, Malin; Hjelte, Lena; Sandberg, Johan K
2018-05-01
Here we report a unique case of a patient with cystic fibrosis characterized by severely impaired control of bacterial respiratory infections. This patient's susceptibility to such infections was much worse than expected from a cystic fibrosis clinical perspective, and he died at age 22 years despite extensive efforts and massive use of antibiotics. We found that this severe condition was associated with a near-complete deficiency in circulating mucosal-associated invariant T (MAIT) cells as measured at several time points. MAIT cells are a large, recently described subset of T cells that recognize microbial riboflavin metabolites presented by the highly evolutionarily conserved MR1 molecules. The MAIT cell deficiency was specific; other T-cell subsets were intact. Even though this is only one unique case, the findings lend significant support to the emerging role of MAIT cells in mucosal immune defense and suggest that MAIT cells may significantly modify the clinical phenotype of respiratory diseases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Zhang, Wei; Liu, Yuanyuan; Warren, Alan; Xu, Henglong
2014-12-15
The aim of this study is to determine the feasibility of using a small species pool from a raw dataset of biofilm-dwelling ciliates for bioassessment based on taxonomic diversity. Samples were collected monthly at four stations within a gradient of environmental stress in coastal waters of the Yellow Sea, northern China from August 2011 to July 2012. A 33-species subset was identified from the raw 137-species dataset using a multivariate method. The spatial patterns of this subset were significantly correlated with the changes in the nutrients and chemical oxygen demand. The taxonomic diversity indices were significantly correlated with nutrients. The pair-wise indices of average taxonomic distinctness (Δ+) and the taxonomic distinctness (Λ+) showed a clear departure from the expected taxonomic pattern. These findings suggest that this small ciliate assemblage might be used as an adequate species pool for discriminating water quality status based on taxonomic distinctness in marine ecosystems. Copyright © 2014 Elsevier Ltd. All rights reserved.
Watson, Jean-Paul; Murray, Regan; Hart, William E.
2009-11-13
We report that the sensor placement problem in contamination warning system design for municipal water distribution networks involves maximizing the protection level afforded by limited numbers of sensors, typically quantified as the expected impact of a contamination event; the issue of how to mitigate against high-consequence events is either handled implicitly or ignored entirely. Consequently, expected-case sensor placements run the risk of failing to protect against high-consequence 9/11-style attacks. In contrast, robust sensor placements address this concern by focusing strictly on high-consequence events and placing sensors to minimize the impact of these events. We introduce several robust variations of the sensor placement problem, distinguished by how they quantify the potential damage due to high-consequence events. We explore the nature of robust versus expected-case sensor placements on three real-world large-scale distribution networks. We find that robust sensor placements can yield large reductions in the number and magnitude of high-consequence events, with only modest increases in expected impact. Finally, the ability to trade off between robust and expected-case impacts is a key unexplored dimension in contamination warning system design.
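The contrast between the expected-case and robust (minimax) objectives can be illustrated with a tiny exhaustive enumeration over a precomputed impact matrix (rows: contamination scenarios, columns: candidate sensor locations); the random impact values, network size and scoring rule below are purely illustrative assumptions, not the paper's formulation or solvers.
```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
impact = rng.gamma(2.0, 50.0, size=(200, 12))   # impact[s, j]: damage of scenario s if first detected by sensor j
n_sensors = 3

def placement_impacts(chosen):
    """Per-scenario impact of a sensor set: each scenario is scored by the
    best (lowest-impact) sensor in the set."""
    return impact[:, list(chosen)].min(axis=1)

best_expected = min(itertools.combinations(range(impact.shape[1]), n_sensors),
                    key=lambda s: placement_impacts(s).mean())
best_robust = min(itertools.combinations(range(impact.shape[1]), n_sensors),
                  key=lambda s: placement_impacts(s).max())

for name, sel in [("expected-case", best_expected), ("robust (minimax)", best_robust)]:
    imp = placement_impacts(sel)
    print(f"{name}: sensors {sel}, mean impact {imp.mean():.1f}, worst case {imp.max():.1f}")
```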
Deterministic annealing for density estimation by multivariate normal mixtures
NASA Astrophysics Data System (ADS)
Kloppenburg, Martin; Tavan, Paul
1997-03-01
An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.
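A minimal one-dimensional sketch of the deterministic-annealing idea follows: responsibilities are tempered by an inverse temperature that is gradually raised to 1 (a DAEM-style schedule), which is a related but simpler stabilization than the soft-constraint formulation of the paper; the mixture size, schedule and synthetic data are assumptions.
```python
import numpy as np

def annealed_em_gmm(x, K=2, betas=(0.2, 0.4, 0.6, 0.8, 1.0), iters_per_beta=30):
    """Deterministic-annealing EM for a 1D Gaussian mixture: at low beta the
    responsibilities are nearly uniform (one broad cluster); raising beta to 1
    recovers standard EM while avoiding many poor local optima."""
    rng = np.random.default_rng(0)
    mu = rng.choice(x, K)
    var = np.full(K, x.var())
    pi = np.full(K, 1.0 / K)
    for beta in betas:
        for _ in range(iters_per_beta):
            # E-step with tempered posteriors p(k|x) proportional to (pi_k N(x|mu_k,var_k))^beta
            log_p = np.log(pi) - 0.5 * np.log(2 * np.pi * var) \
                    - 0.5 * (x[:, None] - mu) ** 2 / var
            r = np.exp(beta * (log_p - log_p.max(axis=1, keepdims=True)))
            r /= r.sum(axis=1, keepdims=True)
            # M-step
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

x = np.concatenate([np.random.default_rng(1).normal(-2, 1, 500),
                    np.random.default_rng(2).normal(3, 0.5, 500)])
print(annealed_em_gmm(x))
```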
Maritime Anomaly Detection: Domain Introduction and Review of Selected Literature
2011-10-01
This “side” information can be used to justify the behaviour of an entity. This is one more reason why good situational awareness is needed to... the probability density function is updated using expectation maximization and the Kullback-Leibler information metric. The last step tries to... on this assessment, recommendations about future directions are made. To prevent confusion, the use of the terms “data” and “information” must be
Planning with Continuous Resources in Stochastic Domains
NASA Technical Reports Server (NTRS)
Mausam, Mausau; Benazera, Emmanuel; Brafman, Roneu; Hansen, Eric
2005-01-01
We consider the problem of optimal planning in stochastic domains with metric resource constraints. Our goal is to generate a policy whose expected sum of rewards is maximized for a given initial state. We consider a general formulation motivated by our application domain--planetary exploration--in which the choice of an action at each step may depend on the current resource levels. We adapt the forward search algorithm AO* to handle our continuous state space efficiently.
Easily Processed Host-Guest Polymer Systems with High-Tg Characteristics (First-year Report)
2012-05-01
manner such that the effective electro-optical coefficient is maximized. Unfortunately, relaxation of the chromophore in the host polymer leads to... polished stainless steel facing plates (0.25 in thickness, McMaster) and window molds cut from aluminum stock (1 mm thickness, McMaster). Both facing... plasticization from the chromophore. Both chromophores resulted in substantial red-shifted absorption compared to a sample prepared in virgin PMMA. We expect
Decision-making competence predicts domain-specific risk attitudes
Weller, Joshua A.; Ceschi, Andrea; Randolph, Caleb
2015-01-01
Decision-making competence (DMC) reflects individual differences in rational responding across several classic behavioral decision-making tasks. Although it has been associated with real-world risk behavior, less is known about the degree to which DMC contributes to specific components of risk attitudes. Utilizing a psychological risk-return framework, we examined the associations between risk attitudes and DMC. Italian community residents (n = 804) completed an online DMC measure, using a subset of the original Adult-DMC battery. Participants also completed a self-reported risk attitude measure for three components of risk attitudes (risk-taking, risk perceptions, and expected benefits) across six risk domains. Overall, greater performance on the DMC component scales was inversely, albeit modestly, associated with risk-taking tendencies. Structural equation modeling results revealed that DMC was associated with lower perceived expected benefits for all domains. In contrast, its association with perceived risks was more domain-specific. These analyses also revealed stronger indirect effects for the DMC → expected benefits → risk-taking path than the DMC → perceived risk → risk-taking path, especially for behaviors that may be considered more maladaptive in nature. These results suggest that DMC performance differentially impacts specific components of risk attitudes, and may be more strongly related to the evaluation of expected value of a specific behavior. PMID:26029128
Knijnenburg, S.L.; Kremer, L.C.; Jaspers, M.W.M.
2015-01-01
Summary. Background: The Website Developmental Model for the Healthcare Consumer (WDMHC) is an extensive and successfully evaluated framework that incorporates user-centered design principles. However, due to its extensiveness, its application is limited. In the current study we apply a subset of the WDMHC framework in a case study concerning the development and evaluation of a website aimed at childhood cancer survivors (CCS). Objective: To assess whether the implementation of a limited subset of the WDMHC framework is sufficient to deliver a high-quality website with few usability problems, aimed at a specific patient population. Methods: The website was developed using a six-step approach divided into three phases derived from the WDMHC: 1) information needs analysis, mock-up creation and focus group discussion; 2) website prototype development; and 3) heuristic evaluation (HE) and think-aloud analysis (TA). The HE was performed by three double experts (knowledgeable both in usability engineering and childhood cancer survivorship), who assessed the site using the Nielsen heuristics. Eight end-users were invited to complete three scenarios covering all functionality of the website by TA. Results: The HE and TA were performed concurrently on the website prototype. The HE resulted in 29 unique usability issues; the end-users performing the TA encountered eleven unique problems. Four issues specifically revealed by HE concerned cosmetic design flaws, whereas two problems revealed by TA were related to website content. Conclusion: Based on the subset of the WDMHC framework, we were able to deliver a website that closely matched the expectations of the end-users and resulted in relatively few usability problems during end-user testing. With the successful application of this subset of the WDMHC, we provide developers with a clear and easily applicable framework for the development of healthcare websites with high usability aimed at specific medical populations. PMID:26171083
SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
2015-06-15
Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden from extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
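The two-stage selection logic (cheap preliminary ranking of the full collection, full-fledged re-ranking of an augmented subset) can be sketched generically; the similarity surrogates used below (downsampled versus full-resolution correlation), the subset sizes and the synthetic images are illustrative placeholders, not the paper's registration pipeline or inference model.
```python
import numpy as np

def two_stage_atlas_selection(target, atlases, n_augmented=10, n_fusion=4):
    """Stage 1: rank all atlases by a cheap, low-resolution similarity and keep an
    augmented subset. Stage 2: re-rank only that subset with the expensive,
    full-resolution metric and return the final fusion set."""
    def cheap_score(a):     # preliminary relevance: correlation on 4x-downsampled images
        return np.corrcoef(target[::4, ::4].ravel(), a[::4, ::4].ravel())[0, 1]

    def full_score(a):      # refined relevance: correlation at full resolution
        return np.corrcoef(target.ravel(), a.ravel())[0, 1]

    prelim = sorted(range(len(atlases)), key=lambda i: cheap_score(atlases[i]),
                    reverse=True)[:n_augmented]
    refined = sorted(prelim, key=lambda i: full_score(atlases[i]), reverse=True)
    return refined[:n_fusion]

rng = np.random.default_rng(0)
target = rng.normal(size=(64, 64))
atlases = [target + rng.normal(scale=s, size=(64, 64)) for s in np.linspace(0.2, 3.0, 30)]
print(two_stage_atlas_selection(target, atlases))
```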
Lee, Jia-Cheng; Chuang, Keh-Shih; Chen, Yi-Wei; Hsu, Fang-Yuh; Chou, Fong-In; Yen, Sang-Hue; Wu, Yuan-Hung
2017-01-01
Diffuse intrinsic pontine glioma is a very frustrating disease. Since the tumor infiltrates the brain stem, surgical removal is often impossible. For conventional radiotherapy, the dose constraint of the brain stem impedes attempts at further dose escalation. Boron neutron capture therapy (BNCT), a targeted radiotherapy, carries the potential to selectively irradiate tumors with an adequate dose while sparing adjacent normal tissue. In this study, 12 consecutive patients treated with conventional radiotherapy in our institute were reviewed to evaluate the feasibility of BNCT. NCTPlan Ver. 1.1.44 was used for dose calculations. Compared with two and three fields, the average maximal dose to the normal brain may be lowered to 7.35 ± 0.72 Gy-Eq by four-field irradiation. The mean ratio of minimal dose to clinical target volume and maximal dose to normal tissue was 2.41 ± 0.26 by four-field irradiation. A therapeutic benefit may be expected with multi-field boron neutron capture therapy to treat diffuse intrinsic pontine glioma without craniotomy, while the maximal dose to the normal brain would be minimized by using the four-field setting.
Research priorities and plans for the International Space Station-results of the 'REMAP' Task Force
NASA Technical Reports Server (NTRS)
Kicza, M.; Erickson, K.; Trinh, E.
2003-01-01
Recent events in the International Space Station (ISS) Program have resulted in the necessity to re-examine the research priorities and research plans for future years. Due to both technical and fiscal resource constraints expected on the International Space Station, it is imperative that research priorities be carefully reviewed and clearly articulated. In consultation with OSTP and the Office of Management and Budget (OMB), NASA's Office of Biological and Physical Research (OBPR) assembled an ad-hoc external advisory committee, the Biological and Physical Research Maximization and Prioritization (REMAP) Task Force. This paper describes the outcome of the Task Force and how it is being used to define a roadmap for near- and long-term Biological and Physical Research objectives that support NASA's Vision and Mission. Additionally, the paper discusses further prioritizations that were necessitated by budget and ISS resource constraints in order to maximize utilization of the International Space Station. Finally, a process has been developed to integrate the requirements for this prioritized research with other agency requirements to develop an integrated ISS assembly and utilization plan that maximizes scientific output. © 2003 American Institute of Aeronautics and Astronautics. Published by Elsevier Science Ltd. All rights reserved.
Quality competition and uncertainty in a horizontally differentiated hospital market.
Montefiori, Marcello
2014-01-01
The chapter studies hospital competition in a spatially differentiated market in which patient demand reflects the quality/distance mix that maximizes their utility. Treatment is free at the point of use and patients freely choose the provider which best fits their expectations. Hospitals might have asymmetric objectives and costs, however they are reimbursed using a uniform prospective payment. The chapter provides different equilibrium outcomes, under perfect and asymmetric information. The results show that asymmetric costs, in the case where hospitals are profit maximizers, allow for a social welfare and quality improvement. On the other hand, the presence of a publicly managed hospital which pursues the objective of quality maximization is able to ensure a higher level of quality, patient surplus and welfare. However, the extent of this outcome might be considerably reduced when high levels of public hospital inefficiency are detectable. Finally, the negative consequences caused by the presence of asymmetric information are highlighted in the different scenarios of ownership/objectives and costs. The setting adopted in the model aims at describing the up-coming European market for secondary health care, focusing on hospital behavior and it is intended to help the policy-maker in understanding real world dynamics.
The influence of individualism and drinking identity on alcohol problems.
Foster, Dawn W; Yeung, Nelson; Quist, Michelle C
2014-12-01
This study evaluated the interactive association between individualism and drinking identity in predicting alcohol use and problems. Seven hundred and ten undergraduates (mean age = 22.84, SD = 5.31, 83.1% female) completed study materials. We expected that drinking identity and individualism would positively correlate with drinking variables. We further expected that individualism would moderate the association between drinking identity and drinking, such that drinking identity and alcohol outcomes would be more strongly positively associated among those high in individualism. Our findings supported our hypotheses. These findings better explain the relationship between drinking identity, individualism, and alcohol use. Furthermore, this research encourages the consideration of individual factors and personality characteristics in order to develop culturally tailored materials to maximize intervention efficacy across cultures.