Sample records for iterative multidimensional statistics

  1. Nuclear Forensic Inferences Using Iterative Multidimensional Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robel, M; Kristo, M J; Heller, M A

    2009-06-09

    Nuclear forensics involves the analysis of interdicted nuclear material for specific material characteristics (referred to as 'signatures') that imply specific geographical locations, production processes, culprit intentions, etc. Predictive signatures rely on expert knowledge of physics, chemistry, and engineering to develop inferences from these material characteristics. Comparative signatures, on the other hand, rely on comparison of the material characteristics of the interdicted sample (the 'questioned sample' in FBI parlance) with those of a set of known samples. In the ideal case, the set of known samples would be a comprehensive nuclear forensics database, a database which does not currently exist. In fact, our ability to analyze interdicted samples and produce an extensive list of precise materials characteristics far exceeds our ability to interpret the results. Therefore, as we seek to develop the extensive databases necessary for nuclear forensics, we must also develop the methods necessary to produce the necessary inferences from comparison of our analytical results with these large, multidimensional sets of data. In the work reported here, we used a large, multidimensional dataset of results from quality control analyses of uranium ore concentrate (UOC, sometimes called 'yellowcake'). We have found that traditional multidimensional techniques, such as principal components analysis (PCA), are especially useful for understanding such datasets and drawing relevant conclusions. In particular, we have developed an iterative partial least squares-discriminant analysis (PLS-DA) procedure that has proven especially adept at identifying the production location of unknown UOC samples. By removing classes which fell far outside the initial decision boundary, and then rebuilding the PLS-DA model, we have consistently produced better and more definitive attributions than with a single pass classification approach. Performance of the iterative PLS-DA method compared favorably to that of classification and regression tree (CART) and k nearest neighbor (KNN) algorithms, with the best combination of accuracy and robustness, as tested by classifying samples measured independently in our laboratories against the vendor QC based reference set.
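
    A rough sketch of the class-pruning loop described in this abstract (classify, drop the classes that score worst for the questioned sample, rebuild the model, repeat) can be put together with scikit-learn, using PLSRegression on one-hot class indicators as a stand-in for PLS-DA. The retention fraction, round count, and component count are illustrative assumptions, not the authors' procedure; X, y, and x_q are assumed to be NumPy arrays.

```python
# Minimal sketch of an iterative PLS-DA attribution loop (assumptions:
# X is samples x analytes, y holds class labels such as production site,
# x_q is the questioned sample). The pruning rule is illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def iterative_pls_da(X, y, x_q, n_components=5, keep_frac=0.5, rounds=3):
    classes = np.unique(y)
    for r in range(rounds):
        Y = (y[:, None] == classes[None, :]).astype(float)  # one-hot targets
        n_comp = min(n_components, X.shape[1], len(classes))
        pls = PLSRegression(n_components=n_comp).fit(X, Y)
        scores = pls.predict(x_q.reshape(1, -1)).ravel()    # per-class response
        if len(classes) <= 2 or r == rounds - 1:
            break
        # Drop the classes that score worst, then rebuild on the survivors.
        keep = np.argsort(scores)[::-1][: max(2, int(len(classes) * keep_frac))]
        mask = np.isin(y, classes[keep])
        X, y, classes = X[mask], y[mask], classes[keep]
    return classes[np.argmax(scores)], dict(zip(classes, scores))
```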

  2. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    NASA Astrophysics Data System (ADS)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight into the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.

  3. Evaluating Item Fit for Multidimensional Item Response Models

    ERIC Educational Resources Information Center

    Zhang, Bo; Stone, Clement A.

    2008-01-01

    This research examines the utility of the S-X² statistic proposed by Orlando and Thissen (2000) in evaluating item fit for multidimensional item response models. Monte Carlo simulation was conducted to investigate both the Type I error and statistical power of this fit statistic in analyzing two kinds of multidimensional test…

  4. Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys

    NASA Astrophysics Data System (ADS)

    Han, Chao; Shen, Yuzhen; Ma, Wenlin

    2017-12-01

    An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, high-capacity, low-noise information transmission. The multiple images to be encrypted are transformed into phase-only images with an iterative algorithm and are then encrypted with different random phases, respectively. The encrypted phase-only images are inverse Fourier transformed, generating new object functions. The new functions are located in different blocks and zero-padded for a sparse distribution; they then propagate to a specific region at different distances by angular spectrum diffraction and are superposed to form a single image. The single image is multiplied with a random phase in the frequency domain, the phase part of the frequency spectrum is truncated, and the amplitude information is retained. The random phases, propagation distances, and truncated phase information in the frequency domain are employed as multidimensional keys. The iteration processing and sparse distribution greatly reduce the crosstalk among the multiple encrypted images, and the superposition of image sequences greatly increases the capacity of the encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance encrypted-information capacity and keep image transmission at a highly desired security level.
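
    The "transform an image into a phase-only function" step reads like a Gerchberg-Saxton-type iteration, which can be sketched as follows; the random-phase encryption, zero-padded sparse blocks, angular-spectrum propagation, and amplitude-truncation stages of the scheme are not reproduced. Purely illustrative.

```python
# Gerchberg-Saxton-style sketch: iteratively find a pure-phase field whose
# Fourier amplitude approximates a (normalized) input image. This is only
# an assumed stand-in for the paper's phase-only transformation step.
import numpy as np

def phase_only_encode(img, n_iter=50):
    target = img / img.max()                                  # target Fourier amplitude
    field = np.exp(2j * np.pi * np.random.rand(*img.shape))   # random phase start
    for _ in range(n_iter):
        F = np.fft.fft2(field)                    # propagate to Fourier plane
        F = target * np.exp(1j * np.angle(F))     # impose target amplitude
        f = np.fft.ifft2(F)                       # propagate back
        field = np.exp(1j * np.angle(f))          # enforce phase-only constraint
    return field

img = np.random.rand(64, 64)                      # stand-in image
kinoform = phase_only_encode(img)
assert np.allclose(np.abs(kinoform), 1.0)         # unit amplitude everywhere
```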

  5. The Multidimensional Assessment of Interoceptive Awareness (MAIA)

    PubMed Central

    Mehling, Wolf E.; Price, Cynthia; Daubenmier, Jennifer J.; Acree, Mike; Bartmess, Elizabeth; Stewart, Anita

    2012-01-01

    This paper describes the development of a multidimensional self-report measure of interoceptive body awareness. The systematic mixed-methods process involved reviewing the current literature, specifying a multidimensional conceptual framework, evaluating prior instruments, developing items, and analyzing focus group responses to scale items by instructors and patients of body awareness-enhancing therapies. Following refinement by cognitive testing, items were field-tested in students and instructors of mind-body approaches. Final item selection was achieved by submitting the field test data to an iterative process using multiple validation methods, including exploratory cluster and confirmatory factor analyses, comparison between known groups, and correlations with established measures of related constructs. The resulting 32-item multidimensional instrument assesses eight concepts. The psychometric properties of these final scales suggest that the Multidimensional Assessment of Interoceptive Awareness (MAIA) may serve as a starting point for research and further collaborative refinement. PMID:23133619

  6. Multidimensional effects in nonadiabatic statistical theories of spin-forbidden kinetics. A case study of ³O + CO → CO₂

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasper, Ahren

    2015-04-14

    The appropriateness of treating crossing seams of electronic states of different spins as nonadiabatic transition states in statistical calculations of spin-forbidden reaction rates is considered. We show that the spin-forbidden reaction coordinate, the nuclear coordinate perpendicular to the crossing seam, is coupled to the remaining nuclear degrees of freedom. We found that this coupling gives rise to multidimensional effects that are not typically included in statistical treatments of spin-forbidden kinetics. Three qualitative categories of multidimensional effects may be identified: static multidimensional effects due to the geometry-dependence of the local shape of the crossing seam and of the spin–orbit coupling, dynamical multidimensional effects due to energy exchange with the reaction coordinate during the seam crossing, and nonlocal (history-dependent) multidimensional effects due to interference of the electronic variables at second, third, and later seam crossings. Nonlocal multidimensional effects are intimately related to electronic decoherence, where electronic dephasing acts to erase the history of the system. A semiclassical model based on short-time full-dimensional trajectories that includes all three multidimensional effects as well as a model for electronic decoherence is presented. The results of this multidimensional nonadiabatic statistical theory (MNST) for the ³O + CO → CO₂ reaction are compared with the results of statistical theories employing one-dimensional (Landau–Zener and weak coupling) models for the transition probability and with those calculated previously using multistate trajectories. The MNST method is shown to accurately reproduce the multistate decay-of-mixing trajectory results, so long as consistent thresholds are used. Furthermore, the MNST approach has several advantages over multistate trajectory approaches and is more suitable in chemical kinetics calculations at low temperatures and for complex systems. The error in statistical calculations that neglect multidimensional effects is shown to be as large as a factor of 2 for this system, with static multidimensional effects identified as the largest source of error.

  7. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.

  8. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  9. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.

  10. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning fast Big Data technology called SciSpark based on Apache™ Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache™ Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This 2nd generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) Decrease the time to compute comparison statistics and plots from minutes to seconds; (2) Allow for interactive exploration of time-series properties over seasons and years; (3) Decrease the time for satellite data ingestion into RCMES to hours; (4) Allow for Level-2 comparisons with higher-order statistics or PDFs in minutes to hours; and (5) Move RCMES into a near real time decision-making platform. We will report on: the architecture and design of SciSpark, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory & disk usage.
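
    The caching pattern that makes such iterative algorithms feasible in Spark can be sketched in a few lines of PySpark; the gridded records and the threshold loop below are hypothetical, and a real SciSpark job would ingest HDF/NetCDF partitions instead of synthetic arrays.

```python
# Illustrative PySpark fragment of the pattern SciSpark exploits: cache a
# dataset in cluster memory once, then iterate over it without re-reading
# from disk each pass. Records and loop body are hypothetical stand-ins.
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iterative-demo").getOrCreate()
sc = spark.sparkContext

# Pretend each record is one time slice of a gridded temperature field.
slices = sc.parallelize([np.random.rand(90, 180) for _ in range(100)]).cache()

threshold = 0.99
for _ in range(10):                                    # iterative refinement
    count = slices.map(lambda g: int((g > threshold).sum())).sum()
    threshold -= 0.01                                  # adjust, re-scan in memory
print("cells above final threshold:", count)
```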

  11. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.

  12. On Green's function retrieval by iterative substitution of the coupled Marchenko equations

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Vasconcelos, Ivan; Wapenaar, Kees

    2015-11-01

    Iterative substitution of the coupled Marchenko equations is a novel methodology to retrieve the Green's functions from a source or receiver array at an acquisition surface to an arbitrary location in an acoustic medium. The methodology requires as input the single-sided reflection response at the acquisition surface and an initial focusing function, being the time-reversed direct wavefield from the acquisition surface to a specified location in the subsurface. We express the iterative scheme that is applied by this methodology explicitly as the successive actions of various linear operators, acting on an initial focusing function. These operators involve multidimensional crosscorrelations with the reflection data and truncations in time. We offer physical interpretations of the multidimensional crosscorrelations by subtracting traveltimes along common ray paths at the stationary points of the underlying integrals. This provides a clear understanding of how individual events are retrieved by the scheme. Our interpretation also exposes some of the scheme's limitations in terms of what can be retrieved in case of a finite recording aperture. Green's function retrieval is only successful if the relevant stationary points are sampled. As a consequence, internal multiples can only be retrieved at a subsurface location with a particular ray parameter if this location is illuminated by the direct wavefield with this specific ray parameter. Several assumptions are required to solve the Marchenko equations. We show that these assumptions are not always satisfied in arbitrary heterogeneous media, which can result in incomplete Green's function retrieval and the emergence of artefacts. Despite these limitations, accurate Green's functions can often be retrieved by the iterative scheme, which is highly relevant for seismic imaging and inversion of internal multiple reflections.
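
    The structure of the scheme (an initial focusing function repeatedly crosscorrelated with the reflection response and truncated in time, roughly f ← f0 + Θ(R ⋆ f)) can be mimicked with a one-dimensional toy. The reflection series, window, and amplitudes below are synthetic stand-ins chosen so the fixed-point iteration converges; this is not a real Marchenko solve.

```python
# Toy 1-D mimic of the operator structure: f <- f0 + window * corr(R, f).
# All signals are synthetic; amplitudes are kept small so the iteration
# is a contraction and converges.
import numpy as np

nt = 256
t_direct = 60                                   # direct-arrival sample (assumed known)
R = np.zeros(nt); R[90] = 0.4; R[150] = -0.2    # toy reflection impulse response
f0 = np.zeros(nt); f0[t_direct] = 1.0           # initial focusing function

def crosscorrelate(r, f):
    # crosscorrelation via FFT, kept at length nt for simplicity
    return np.fft.irfft(np.fft.rfft(r) * np.conj(np.fft.rfft(f)), nt)

window = np.zeros(nt); window[:t_direct] = 1.0  # truncation in time

f = f0.copy()
for k in range(20):                             # iterative substitution
    f = f0 + window * crosscorrelate(R, f)
```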

  13. Adaptive Statistical Iterative Reconstruction-V Versus Adaptive Statistical Iterative Reconstruction: Impact on Dose Reduction and Image Quality in Body Computed Tomography.

    PubMed

    Gatti, Marco; Marchisio, Filippo; Fronda, Marco; Rampado, Osvaldo; Faletti, Riccardo; Bergamasco, Laura; Ropolo, Roberto; Fonio, Paolo

    The aim of this study was to evaluate the impact on dose reduction and image quality of the new iterative reconstruction technique adaptive statistical iterative reconstruction-V (ASIR-V). Fifty consecutive oncologic patients acted as case controls, undergoing during their follow-up computed tomography scans with both ASIR and ASIR-V. Each study was analyzed in a double-blinded fashion by 2 radiologists. Both quantitative and qualitative analyses of image quality were conducted. Computed tomography scanner radiation output was 38% (29%-45%) lower (P < 0.0001) for the ASIR-V examinations than for the ASIR ones. The quantitative image noise was significantly lower (P < 0.0001) for ASIR-V. Adaptive statistical iterative reconstruction-V had a higher performance for subjective image noise (P = 0.01 for 5 mm and P = 0.009 for 1.25 mm), with the other parameters (image sharpness, diagnostic acceptability, and overall image quality) being similar (P > 0.05). Adaptive statistical iterative reconstruction-V is a new iterative reconstruction technique that has the potential to provide image quality equal to or greater than ASIR, with a dose reduction of around 40%.

  14. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V): A Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    PubMed

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05). Adaptive statistical iterative reconstruction-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series of ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.0001). Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  15. Transcending the Quantitative-Qualitative Divide with Mixed Methods Research: A Multidimensional Framework for Understanding Congruence and Completeness in the Study of Values

    ERIC Educational Resources Information Center

    McLafferty, Charles L., Jr.; Slate, John R.; Onwuegbuzie, Anthony J.

    2010-01-01

    Quantitative research dominates published literature in the helping professions. Mixed methods research, which integrates quantitative and qualitative methodologies, has received a lukewarm reception. The authors address the iterative separation that infuses theory, praxis, philosophy, methodology, training, and public perception and propose a…

  16. Finite element concepts in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    Finite element theory was employed to establish an implicit numerical solution algorithm for the time-averaged unsteady Navier-Stokes equations. Both the multidimensional and a time-split form of the algorithm were considered, the latter of particular interest for problem specification on a regular mesh. A Newton matrix iteration procedure is outlined for solving the resultant nonlinear algebraic equation systems. Multidimensional discretization procedures are discussed with emphasis on automated generation of specific nonuniform solution grids and accounting for curved surfaces. The time-split algorithm was evaluated with regard to accuracy and convergence properties for hyperbolic equations on rectangular coordinates. An overall assessment of the viability of the finite element concept for computational aerodynamics is made.

  17. Morphological representation of order-statistics filters.

    PubMed

    Charif-Chefchaouni, M; Schonfeld, D

    1995-01-01

    We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided.

  18. A Multidimensional Scaling Approach to Dimensionality Assessment for Measurement Instruments Modeled by Multidimensional Item Response Theory

    ERIC Educational Resources Information Center

    Toro, Maritsa

    2011-01-01

    The statistical assessment of dimensionality provides evidence of the underlying constructs measured by a survey or test instrument. This study focuses on educational measurement, specifically tests comprised of items described as multidimensional. That is, items that require examinee proficiency in multiple content areas and/or multiple cognitive…

  19. SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data

    NASA Astrophysics Data System (ADS)

    Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework that extends Apache™ Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache™ Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate performance of the various matrix libraries in distributed pipelines, such as Nd4j™ and Breeze™. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, and parallel ingest and partitioning (sharding) of A-Train satellite observations and model grids. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.

  20. Ice Shape Characterization Using Self-Organizing Maps

    NASA Technical Reports Server (NTRS)

    McClain, Stephen T.; Tino, Peter; Kreeger, Richard E.

    2011-01-01

    A method for characterizing ice shapes using a self-organizing map (SOM) technique is presented. Self-organizing maps are neural-network techniques for representing noisy, multi-dimensional data aligned along a lower-dimensional and possibly nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. In information processing, the intent of SOM methods is to transmit the codebook vectors, which contain far fewer elements and require much less memory or bandwidth than the original noisy data set. When applied to airfoil ice accretion shapes, the properties of the codebook vectors and the statistical nature of the SOM methods allow for a quantitative comparison of experimentally measured mean or average ice shapes to ice shapes predicted using computer codes such as LEWICE. The nature of the codebook vectors also enables grid generation and surface roughness descriptions for use with the discrete-element roughness approach. In the present study, SOM characterizations are applied to a rime ice shape, a glaze ice shape at an angle of attack, a bi-modal glaze ice shape, and a multi-horn glaze ice shape. Improvements and future explorations will be discussed.
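
    The codebook update described here is compact enough to state directly. With the neighborhood function reduced to the winner only, the sketch below degenerates to k-means-style vector quantization, so treat it as the skeleton of an SOM rather than a full implementation.

```python
# Minimal codebook-update sketch: each noisy sample pulls its winning
# codebook vector toward itself. A full SOM would also update the
# winner's neighbors on the map lattice, which is omitted here.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2))            # noisy multidimensional samples
codebook = rng.normal(size=(16, 2))          # 16 codebook vectors

lr = 0.5
for epoch in range(30):
    for x in data:
        winner = np.argmin(((codebook - x) ** 2).sum(axis=1))  # closest vector
        codebook[winner] += lr * (x - codebook[winner])         # move toward sample
    lr *= 0.9                                 # shrink the step each pass
```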

  1. A Multidimensional Partial Credit Model with Associated Item and Test Statistics: An Application to Mixed-Format Tests

    ERIC Educational Resources Information Center

    Yao, Lihua; Schwarz, Richard D.

    2006-01-01

    Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…

  2. Path integral learning of multidimensional movement trajectories

    NASA Astrophysics Data System (ADS)

    André, João; Santos, Cristina; Costa, Lino

    2013-10-01

    This paper explores the use of Path Integral Methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm in multidimensional movement parametrized policy learning. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods in the learning of optimal policy parameters, according to different cost functions that inherently encode movement objectives. Additionally we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and iterative final cost, which leads to an increased interest in its application to more complex robotic problems.
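
    The cost-weighted averaging at the heart of PI2/PIBB-style updates can be sketched in a few lines. Here rollout_cost is a hypothetical stand-in for an actual DMP rollout, and the temperature h and noise level are arbitrary; the CMA variants additionally adapt the exploration covariance, which this sketch omits.

```python
# Black-box policy improvement sketch: sample perturbed parameters,
# evaluate a rollout cost, and recombine the perturbations with softmax
# weights favoring low cost. The quadratic cost is a toy placeholder.
import numpy as np

def rollout_cost(theta):
    return ((theta - 3.0) ** 2).sum()          # hypothetical movement cost

theta = np.zeros(5)                            # DMP-like policy parameters
sigma, n_samples, h = 1.0, 20, 10.0
for iteration in range(100):
    eps = sigma * np.random.randn(n_samples, theta.size)   # exploration noise
    costs = np.array([rollout_cost(theta + e) for e in eps])
    s = (costs - costs.min()) / max(costs.max() - costs.min(), 1e-12)
    w = np.exp(-h * s); w /= w.sum()           # low cost -> high weight
    theta = theta + w @ eps                    # cost-weighted update
```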

  3. DICON: interactive visual analysis of multidimensional clusters.

    PubMed

    Cao, Nan; Gotz, David; Sun, Jimeng; Qu, Huamin

    2011-12-01

    Clustering as a fundamental data analysis technique has been widely used in many analytic applications. However, it is often difficult for users to understand and evaluate multidimensional clustering results, especially the quality of clusters and their semantics. For large and complex data, high-level statistical information about the clusters is often needed for users to evaluate cluster quality while a detailed display of multidimensional attributes of the data is necessary to understand the meaning of clusters. In this paper, we introduce DICON, an icon-based cluster visualization that embeds statistical information into a multi-attribute display to facilitate cluster interpretation, evaluation, and comparison. We design a treemap-like icon to represent a multidimensional cluster, and the quality of the cluster can be conveniently evaluated with the embedded statistical information. We further develop a novel layout algorithm which can generate similar icons for similar clusters, making comparisons of clusters easier. User interaction and clutter reduction are integrated into the system to help users more effectively analyze and refine clustering results for large datasets. We demonstrate the power of DICON through a user study and a case study in the healthcare domain. Our evaluation shows the benefits of the technique, especially in support of complex multidimensional cluster analysis. © 2011 IEEE

  4. Proceedings of Colloquium on Stable Solutions of Some Ill-Posed Problems, October 9, 1979.

    DTIC Science & Technology

    1980-06-30

    4. In [24] the iterative process (9) was applied to the calculation of the magnetization of thin magnetic films. This problem is of interest for computer... equation ∫_1^∞ k(x − t) f(t) dt = g(x), x > 1. (1) Its multidimensional analogue ∫_A K(x − t) f(t) dt = g(x), x ∈ A, (2) can be interpreted as the problem of

  5. A hybrid heuristic for the multiple choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd

    2013-08-01

    In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
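
    For orientation, a deliberately naive greedy baseline for the same problem (pick exactly one item per class while respecting m resource constraints) is sketched below. The paper's hybrid heuristic, with LP relaxations, variable fixing, cuts, and reformulation, is far more sophisticated; this only shows the problem's structure.

```python
# Greedy baseline for the multiple choice multidimensional knapsack
# problem. Scoring and tie-breaking are arbitrary illustrative choices.
import numpy as np

def greedy_mcmkp(profit, weight, capacity):
    """profit[c][i]: profit of item i in class c; weight[c][i]: length-m
    resource vector for that item; capacity: length-m vector."""
    capacity = np.asarray(capacity, dtype=float)
    used = np.zeros_like(capacity)
    choice = []
    for c in range(len(profit)):
        best, best_score = None, -np.inf
        for i in range(len(profit[c])):
            w = np.asarray(weight[c][i], dtype=float)
            if np.all(used + w <= capacity):
                # profit per unit of (normalized) resource consumption
                score = profit[c][i] / (1.0 + (w / capacity).sum())
                if score > best_score:
                    best, best_score = i, score
        if best is None:
            return None                      # this greedy order hit a dead end
        used += np.asarray(weight[c][best], dtype=float)
        choice.append(best)
    return choice, sum(profit[c][choice[c]] for c in range(len(choice)))
```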

  6. FBILI method for multi-level line transfer

    NASA Astrophysics Data System (ADS)

    Kuzmanovska, O.; Atanacković, O.; Faurobert, M.

    2017-07-01

    Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such non-linear and non-local problem is as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme that is characterized by a very high convergence rate without the need of complementing it with additional acceleration techniques. In this paper we make the implementation of the FBILI method to the multilevel atom line transfer in 1D more explicit. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well known code MULTI.

  7. Multidimensional FEM-FCT schemes for arbitrary time stepping

    NASA Astrophysics Data System (ADS)

    Kuzmin, D.; Möller, M.; Turek, S.

    2003-05-01

    The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions.
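
    A single corrected step in one dimension gives the flavor of the FCT construction: a monotone low-order (upwind) update receives limited antidiffusive fluxes from a high-order (Lax-Wendroff) scheme via a Zalesak-type limiter. The paper's finite-element, arbitrary-time-stepping generalization is considerably richer; the sketch below assumes constant-speed advection on a periodic grid.

```python
# One explicit FCT step for u_t + v u_x = 0 on a periodic 1-D grid.
import numpy as np

n, v, dt, dx = 100, 1.0, 0.5, 1.0            # CFL = v*dt/dx = 0.5
lam = dt / dx
u = np.zeros(n); u[40:60] = 1.0              # square pulse

up1 = np.roll(u, -1)                         # u[i+1]
F_low = v * u                                # upwind flux at face i+1/2 (v > 0)
F_high = 0.5 * v * (u + up1) - 0.5 * v * v * lam * (up1 - u)   # Lax-Wendroff
A = F_high - F_low                           # antidiffusive flux at face i+1/2

u_td = u - lam * (F_low - np.roll(F_low, 1)) # monotone low-order update

u_max = np.max([np.roll(u_td, 1), u_td, np.roll(u_td, -1)], axis=0)
u_min = np.min([np.roll(u_td, 1), u_td, np.roll(u_td, -1)], axis=0)

Am1 = np.roll(A, 1)                          # flux at face i-1/2
P_plus  = lam * (np.maximum(0.0, Am1) - np.minimum(0.0, A))   # possible gain
P_minus = lam * (np.maximum(0.0, A) - np.minimum(0.0, Am1))   # possible loss
R_plus  = np.where(P_plus  > 0, np.minimum(1.0, (u_max - u_td) / (P_plus  + 1e-15)), 0.0)
R_minus = np.where(P_minus > 0, np.minimum(1.0, (u_td - u_min) / (P_minus + 1e-15)), 0.0)

# Face limiter: the accepted correction must not overshoot either cell.
C = np.where(A >= 0,
             np.minimum(np.roll(R_plus, -1), R_minus),
             np.minimum(R_plus, np.roll(R_minus, -1)))
u_new = u_td - lam * (C * A - np.roll(C * A, 1))
```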

  8. Non-homogeneous updates for the iterative coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang

    2007-02-01

    Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD) uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
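
    The coordinate-descent core, and the non-homogeneous idea of revisiting the most active coordinates first, can be sketched on a generic least-squares objective; the random system matrix below is a stand-in, not a CT forward model, and the update ordering is a simplified caricature of NH-ICD.

```python
# Coordinate descent for ||y - A x||^2 with a "non-homogeneous" ordering:
# coordinates whose last update moved the most are revisited first.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(80, 40))
y = A @ rng.normal(size=40)
x = np.zeros(40)
r = y - A @ x                                  # running residual
col_sq = (A ** 2).sum(axis=0)
delta = np.full(40, np.inf)                    # recent-change score per voxel

for sweep in range(20):
    order = np.argsort(-delta)                 # most active coordinates first
    for j in order:
        step = (A[:, j] @ r) / col_sq[j]       # exact 1-D minimizer
        x[j] += step
        r -= step * A[:, j]                    # keep residual consistent
        delta[j] = abs(step)
```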

  9. The multidimensional perturbation value: a single metric to measure similarity and activity of treatments in high-throughput multidimensional screens.

    PubMed

    Hutz, Janna E; Nelson, Thomas; Wu, Hua; McAllister, Gregory; Moutsatsos, Ioannis; Jaeger, Savina A; Bandyopadhyay, Somnath; Nigsch, Florian; Cornett, Ben; Jenkins, Jeremy L; Selinger, Douglas W

    2013-04-01

    Screens using high-throughput, information-rich technologies such as microarrays, high-content screening (HCS), and next-generation sequencing (NGS) have become increasingly widespread. Compared with single-readout assays, these methods produce a more comprehensive picture of the effects of screened treatments. However, interpreting such multidimensional readouts is challenging. Univariate statistics such as t-tests and Z-factors cannot easily be applied to multidimensional profiles, leaving no obvious way to answer common screening questions such as "Is treatment X active in this assay?" and "Is treatment X different from (or equivalent to) treatment Y?" We have developed a simple, straightforward metric, the multidimensional perturbation value (mp-value), which can be used to answer these questions. Here, we demonstrate application of the mp-value to three data sets: a multiplexed gene expression screen of compounds and genomic reagents, a microarray-based gene expression screen of compounds, and an HCS compound screen. In all data sets, active treatments were successfully identified using the mp-value, and simulations and follow-up analyses supported the mp-value's statistical and biological validity. We believe the mp-value represents a promising way to simplify the analysis of multidimensional data while taking full advantage of its richness.
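
    A generic stand-in for the mp-value workflow: score a treatment by the distance between treatment and control replicate profiles, then calibrate that score with label permutations. The published metric differs in details (scaling and covariance handling), so treat this only as the shape of the computation.

```python
# Permutation-calibrated multivariate activity score, in the spirit of
# the mp-value but not the published definition.
import numpy as np

def activity_pvalue(treat, ctrl, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    X = np.vstack([treat, ctrl])
    labels = np.array([1] * len(treat) + [0] * len(ctrl))
    def stat(lab):
        return np.linalg.norm(X[lab == 1].mean(0) - X[lab == 0].mean(0))
    observed = stat(labels)
    null = [stat(rng.permutation(labels)) for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (n_perm + 1)
```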

  10. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
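
    The plug-in point for such a stopping rule is easy to see in a bare MLEM loop. The criterion below (stop when the per-iteration gain in Poisson log-likelihood becomes negligible) is a simple placeholder, not the paper's heuristic statistical rule.

```python
# MLEM with a placeholder stopping criterion showing where a statistical
# stopping rule would plug in. A must be a nonnegative system matrix.
import numpy as np

def mlem(A, y, n_max=200, tol=1e-6):
    x = np.ones(A.shape[1])                    # flat initial image
    sens = A.sum(axis=0)                       # sensitivity image A^T 1
    prev_ll = -np.inf
    for k in range(n_max):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
        ll = np.sum(y * np.log(np.maximum(proj, 1e-12)) - proj)  # Poisson LL
        if ll - prev_ll < tol * abs(prev_ll):  # negligible improvement: stop
            return x, k
        prev_ll = ll
    return x, n_max

rng = np.random.default_rng(4)
A = rng.random((200, 50))                      # toy nonnegative system matrix
y = rng.poisson(A @ (rng.random(50) * 10))     # Poisson-distributed counts
x_hat, stopped_at = mlem(A, y)
```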

  11. Assessment of Ice Shape Roughness Using a Self-Organizing Map Approach

    NASA Technical Reports Server (NTRS)

    McClain, Stephen T.; Kreeger, Richard E.

    2013-01-01

    Self-organizing maps are neural-network techniques for representing noisy, multidimensional data aligned along a lower-dimensional and nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. Prior investigations of ice shapes have focused on using self-organizing maps to characterize mean ice forms. The Icing Research Branch has recently acquired a high-resolution three-dimensional scanner system capable of resolving ice shape surface roughness. A method is presented for the evaluation of surface roughness variations using high-resolution surface scans based on a self-organizing map representation of the mean ice shape. The new method is demonstrated for 1) an 18-in. NACA 23012 airfoil at 2° AOA just after initial ice coverage of the leading 5% of the suction surface of the airfoil, 2) a 21-in. NACA 0012 at 0° AOA following coverage of the leading 10% of the airfoil surface, and 3) a cold-soaked 21-in. NACA 0012 airfoil without ice. The SOM method resulted in descriptions of the statistical coverage limits and a quantitative representation of the early stages of ice roughness formation on the airfoils. Limitations of the SOM method are explored, and the uncertainty limits of the method are investigated using the non-iced NACA 0012 airfoil measurements.

  12. An Iterative Decambering Approach for Post-Stall Prediction of Wing Characteristics using known Section Data

    NASA Technical Reports Server (NTRS)

    Mukherjee, Rinku; Gopalarathnam, Ashok; Kim, Sung Wan

    2003-01-01

    An iterative decambering approach for the post-stall prediction of wings using known section data as inputs is presented. The method can currently be used for incompressible flow and can be extended to compressible subsonic flow using Mach number correction schemes. A detailed discussion of past work on this topic is presented first. Next, an overview of the decambering approach is presented and is illustrated by applying the approach to the prediction of the two-dimensional C(sub l) and C(sub m) curves for an airfoil. The implementation of the approach for iterative decambering of wing sections is then discussed. A novel feature of the current effort is the use of a multidimensional Newton iteration for taking into consideration the coupling between the different sections of the wing. The approach lends itself to implementation in a variety of finite-wing analysis methods such as lifting-line theory, discrete-vortex Weissinger's method, and vortex lattice codes. Results are presented for a rectangular wing for α from 0 to 25 deg. The results are compared for both increasing and decreasing directions of α, and they show that a hysteresis loop can be predicted for post-stall angles of attack.
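
    The multidimensional Newton iteration mentioned above can be sketched generically: all sections' corrections are updated simultaneously from one Jacobian solve, so the cross-section coupling enters through the off-diagonal Jacobian entries. The three-section residual below is a made-up stand-in for "section Cl from the wing solution minus Cl from the known section data".

```python
# Generic coupled Newton iteration with a finite-difference Jacobian.
import numpy as np

def newton_coupled(residual, x0, tol=1e-10, max_iter=50, fd=1e-6):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))         # finite-difference Jacobian
        for j in range(x.size):
            xp = x.copy(); xp[j] += fd
            J[:, j] = (residual(xp) - r) / fd  # column j: coupling to unknown j
        x = x - np.linalg.solve(J, r)          # full Newton step
    return x

# Hypothetical 3-section residual, nonlinear and cross-coupled:
f = lambda d: np.array([d[0] + 0.1 * np.tanh(d[1]) - 0.5,
                        d[1] + 0.1 * np.tanh(d[0] + d[2]) - 0.2,
                        d[2] + 0.1 * np.tanh(d[1]) - 0.1])
print(newton_coupled(f, np.zeros(3)))
```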

  13. Multiple pathways to identification: exploring the multidimensionality of academic identity formation in ethnic minority males.

    PubMed

    Matthews, Jamaal S

    2014-04-01

    Empirical trends denote the academic underachievement of ethnic minority males across various academic domains. Identity-based explanations for this persistent phenomenon describe ethnic minority males as disidentified with academics, alienated, and oppositional. The present work interrogates these theoretical explanations and empirically substantiates a multidimensional lens for discussing academic identity formation within 330 African American and Latino early-adolescent males. Both hierarchical and iterative person-centered methods were utilized and reveal 5 distinct profiles derived from 6 dimensions of academic identity. These profiles predict self-reported classroom grades, mastery orientation, and self-handicapping in meaningful and varied ways. The results demonstrate multiple pathways to motivation and achievement, challenging previous oversimplified stereotypes of marginalized males. This exploratory study triangulates unique interpersonal and intrapersonal attributes for promoting healthy identity development and academic achievement among ethnic minority adolescent males.

  14. Accelerating NLTE radiative transfer by means of the Forth-and-Back Implicit Lambda Iteration: A two-level atom line formation in 2D Cartesian coordinates

    NASA Astrophysics Data System (ADS)

    Milić, Ivan; Atanacković, Olga

    2014-10-01

    State-of-the-art methods in multidimensional NLTE radiative transfer are based on the use of a local approximate lambda operator within either Jacobi or Gauss-Seidel iterative schemes. Here we propose another approach to the solution of 2D NLTE RT problems, the Forth-and-Back Implicit Lambda Iteration (FBILI), developed earlier for 1D geometry. In order to present the method and examine its convergence properties we use the well-known instance of two-level atom line formation with complete frequency redistribution. In the formal solution of the RT equation we employ short characteristics with a two-point algorithm. Using an implicit representation of the source function in the computation of the specific intensities, we compute and store the coefficients of the linear relations J=a+bS between the mean intensity J and the corresponding source function S. The use of iteration factors in the ‘local’ coefficients of these implicit relations in two ‘inward’ sweeps of the 2D grid, along with the update of the source function in the other two ‘outward’ sweeps, leads to a solution four times faster than Jacobi’s. Moreover, the update made in all four consecutive sweeps of the grid leads to an acceleration by a factor of 6-7 compared to the Jacobi iterative scheme.
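
    The implicit-coefficient idea can be demonstrated on a toy two-level-atom problem S = (1 − eps) Λ[S] + eps B: write the radiation field at each point as J_i = a_i + b_i S_i and solve for S_i implicitly while sweeping forth and back. The Λ operator below is a synthetic smoothing matrix, not a formal solution of the transfer equation, so this only illustrates the algebra, not the radiative transfer.

```python
# Toy forth-and-back sweep with implicit linear coefficients J_i = a + b*S_i.
import numpy as np

n, eps = 60, 1e-3
B = np.ones(n)                                   # Planck function (constant)
# Synthetic, row-stochastic "Lambda operator" coupling nearby points.
idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Lam = np.exp(-idx / 4.0); Lam /= Lam.sum(axis=1, keepdims=True)

S = B.copy()
for sweep in range(200):
    for i in list(range(n)) + list(range(n - 1, -1, -1)):   # forth and back
        a = Lam[i] @ S - Lam[i, i] * S[i]        # J_i = a + b * S_i
        b = Lam[i, i]
        # solve S_i = (1 - eps) * (a + b * S_i) + eps * B_i implicitly:
        S[i] = ((1 - eps) * a + eps * B[i]) / (1 - (1 - eps) * b)
```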

  15. Emerging Techniques for Dose Optimization in Abdominal CT

    PubMed Central

    Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit

    2014-01-01

    Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277

  16. Using Multidimensional Scaling To Assess the Dimensionality of Dichotomous Item Data.

    ERIC Educational Resources Information Center

    Meara, Kevin; Robin, Frederic; Sireci, Stephen G.

    2000-01-01

    Investigated the usefulness of multidimensional scaling (MDS) for assessing the dimensionality of dichotomous test data. Focused on two MDS proximity measures, one based on the PC statistic (T. Chen and M. Davidson, 1996) and the other on interitem Euclidean distances. Simulation results show that both MDS procedures correctly identify…
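
    The interitem-distance variant is straightforward to sketch with scikit-learn: build an item-by-item distance matrix from 0/1 response vectors and watch the MDS stress as the embedding dimension grows. The data here are random, and the PC-statistic proximity measure is not reproduced.

```python
# Interitem Euclidean distances fed to MDS; inspect stress by dimension.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
responses = (rng.random((500, 20)) > 0.5).astype(int)   # persons x items

diff = responses.T[:, None, :] - responses.T[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=2))                    # item-by-item distances

for k in (1, 2, 3):
    mds = MDS(n_components=k, dissimilarity="precomputed", random_state=0)
    mds.fit(D)
    print(k, round(mds.stress_, 1))                     # look for an elbow
```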

  17. RADC Multi-Dimensional Signal-Processing Research Program.

    DTIC Science & Technology

    1980-09-30

    Formulation; 3.2.2 Methods of Accelerating Convergence; 3.2.3 Application to Image Deblurring; 3.2.4 Extensions; 3.3 Convergence of Iterative Signal... noise-driven linear filters permit development of the joint probability density function or likelihood function for the image. With an expression... spatial linear filter driven by white noise (see Fig. 1). If the probability density function for the white noise is known... [Fig. 1: Model for image]

  18. CRISM Hyperspectral Data Filtering with Application to MSL Landing Site Selection

    NASA Astrophysics Data System (ADS)

    Seelos, F. P.; Parente, M.; Clark, T.; Morgan, F.; Barnouin-Jha, O. S.; McGovern, A.; Murchie, S. L.; Taylor, H.

    2009-12-01

    We report on the development and implementation of a custom filtering procedure for Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) IR hyperspectral data that is suitable for incorporation into the CRISM Reduced Data Record (RDR) calibration pipeline. Over the course of the Mars Reconnaissance Orbiter (MRO) Primary Science Phase (PSP) and the ongoing Extended Science Phase (ESP) CRISM has operated with an IR detector temperature between ~107 K and ~127 K. This ~20 K range in operational temperature has resulted in variable data quality, with observations acquired at higher detector temperatures exhibiting a marked increase in both systematic and stochastic noise. The CRISM filtering procedure consists of two main data processing capabilities. The primary systematic noise component in CRISM IR data appears as along track or column oriented striping. This is addressed by the robust derivation and application of an inter-column ratio correction frame. The correction frame is developed through the serial evaluation of band specific column ratio statistics and so does not compromise the spectral fidelity of the image cube. The dominant CRISM IR stochastic noise components appear as isolated data spikes or column oriented segments of variable length with erroneous data values. The non-systematic noise is identified and corrected through the application of an iterative-recursive kernel modeling procedure which employs a formal statistical outlier test as the iteration control and recursion termination criterion. This allows the filtering procedure to make a statistically supported determination between high frequency (spatial/spectral) signal and high frequency noise based on the information content of a given multidimensional data kernel. The governing statistical test also allows the kernel filtering procedure to be self regulating and adaptive to the intrinsic noise level in the data. The CRISM IR filtering procedure is scheduled to be incorporated into the next augmentation of the CRISM IR calibration (version 3). The filtering algorithm will be applied to the I/F data (IF) delivered to the Planetary Data System (PDS), but the radiance on sensor data (RA) will remain unfiltered. The development of CRISM hyperspectral analysis products in support of the Mars Science Laboratory (MSL) landing site selection process has motivated the advance of CRISM-specific data processing techniques. The quantitative results of the CRISM IR filtering procedure as applied to CRISM observations acquired in support of MSL landing site selection will be presented.
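
    A one-dimensional caricature of the despiking logic (fit a local kernel model, flag statistical outliers, replace them, repeat until nothing is flagged) is sketched below. CRISM's actual filter operates on multidimensional spatial/spectral kernels with a formal outlier test; the median model and MAD threshold here are simplifying assumptions.

```python
# Iterative-recursive despiking sketch: local median model + robust
# outlier test, repeated until no sample is flagged.
import numpy as np

def iterative_despike(x, halfwidth=3, z_crit=4.0, max_pass=10):
    x = x.astype(float).copy()
    for _ in range(max_pass):
        med = np.array([np.median(x[max(0, i - halfwidth): i + halfwidth + 1])
                        for i in range(len(x))])       # local kernel model
        resid = x - med
        sigma = 1.4826 * np.median(np.abs(resid))      # robust scale (MAD)
        bad = np.abs(resid) > z_crit * max(sigma, 1e-12)
        if not bad.any():
            break                                      # nothing flagged: done
        x[bad] = med[bad]                              # replace and re-test
    return x

spectrum = np.sin(np.linspace(0, 6, 200))
spectrum[[30, 31, 120]] += 5.0                         # inject spikes
clean = iterative_despike(spectrum)
```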

  19. 2.5D transient electromagnetic inversion with OCCAM method

    NASA Astrophysics Data System (ADS)

    Li, R.; Hu, X.

    2016-12-01

    In applications of the time-domain electromagnetic (TEM) method, multidimensional inversion schemes have been adopted over the past few decades to overcome the large errors produced by 1D model inversion when the subsurface structure is complex. The current mainstream multidimensional inversion for EM data, built on a finite-difference time-domain (FDTD) forward solver, is mainly implemented with the Nonlinear Conjugate Gradient (NLCG) method. But the convergence rate of NLCG depends heavily on the Lagrange multiplier, and the method may fail to converge. We use the OCCAM inversion method to avoid this weakness; OCCAM inversion has proven to be a more stable and reliable method for imaging 2.5D electrical conductivity. First, we simulate the 3D transient EM fields governed by Maxwell's equations with the FDTD method. Second, we use the OCCAM inversion scheme, with an appropriate objective error functional that we establish, to image the 2.5D structure; a data-space OCCAM inversion (DASOCC) strategy based on the OCCAM scheme is also given in this paper. The sensitivity matrix is calculated with the method of time-integrated back-propagated fields. Imaging results for the example model shown in Fig. 1 demonstrate that the OCCAM scheme is an efficient inversion method for TEM with the FDTD forward solver, and the inversion converges in only a few iterations. Summarizing the imaging process, we draw the following conclusions. First, 2.5D imaging in an FDTD system with OCCAM inversion recovers the desired resistivity structure in a homogeneous half-space. Second, the imaging results do not usually depend strongly on the initial model, but the number of iterations can be reduced markedly if the background resistivity of the initial model is close to the true model, so it is better to set the initial model from other geologic information where available; when the background resistivity fits the true model well, imaging the anomalous body requires only a few iteration steps. Finally, vertical boundaries are imaged more slowly than horizontal boundaries.
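
    A minimal sketch of one linearized OCCAM update may help fix ideas: scan the regularization (Lagrange) multiplier and keep the smoothest model whose misfit still meets the target, which is what makes the scheme less sensitive to the multiplier than NLCG. This is a generic Python illustration with a first-difference roughness operator, not the authors' DASOCC implementation.

      import numpy as np

      def occam_step(J, d_hat, target_misfit, mus=np.logspace(-4, 4, 33)):
          """One linearized OCCAM update.

          J     : Jacobian of the forward operator at the current model
          d_hat : linearized data vector d - F(m) + J @ m
          """
          n = J.shape[1]
          R = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # first-difference roughness
          best = None
          for mu in mus:                         # scan the Lagrange multiplier
              A = mu * R.T @ R + J.T @ J
              m = np.linalg.solve(A, J.T @ d_hat)
              if np.linalg.norm(J @ m - d_hat) <= target_misfit:
                  best = m       # misfit grows with mu, so the last feasible
                                 # model in the scan is the smoothest one
          if best is None:       # no feasible mu: fall back to least misfit
              best = np.linalg.solve(mus[0] * R.T @ R + J.T @ J, J.T @ d_hat)
          return best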

  20. Reporting of Subscores Using Multidimensional Item Response Theory

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Recently, there has been increasing interest in reporting subscores. This paper examines reporting of subscores using multidimensional item response theory (MIRT) models (e.g., Reckase in "Appl. Psychol. Meas." 21:25-36, 1997; C.R. Rao and S. Sinharay (Eds), "Handbook of Statistics, vol. 26," pp. 607-642, North-Holland, Amsterdam, 2007; Beguin &…

  1. Formulation for Simultaneous Aerodynamic Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.

    1993-01-01

    An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.

  2. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. The subdomains of which the original domain of definition is comprised are assigned to independent processors, at the price of periodic coordination between processors to compute global parameters and to maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it an approximately 10-fold speedup on 16 processors.
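
    The winning serial combination — GMRES with ILU-type preconditioning — is easy to reproduce with standard sparse tools. A minimal Python sketch follows, assuming a 2-D Laplacian as a stand-in for a reacting-flow Jacobian and scalar ILU in place of the paper's block-ILU.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # sparse test Jacobian: a 2-D Laplacian on an n-by-n grid
      n = 50
      lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      A = sp.csc_matrix(sp.kronsum(lap1d, lap1d))
      b = np.ones(A.shape[0])

      ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factors
      M = spla.LinearOperator(A.shape, matvec=ilu.solve)   # M approximates A^-1

      x, info = spla.gmres(A, b, M=M, restart=30)          # preconditioned GMRES
      print(info, np.linalg.norm(b - A @ x))               # info == 0 on convergence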

  3. Hydrometeor classification through statistical clustering of polarimetric radar measurements: a semi-supervised approach

    NASA Astrophysics Data System (ADS)

    Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis

    2016-09-01

    Polarimetric radar-based hydrometeor classification is the procedure of identifying different types of hydrometeors by exploiting polarimetric radar observations. The main drawback of the existing supervised classification methods, mostly based on fuzzy logic, is a significant dependency on a presumed electromagnetic behaviour of different hydrometeor types. Namely, the results of the classification largely rely upon the quality of scattering simulations. The unsupervised approach, on the other hand, lacks the constraints related to hydrometeor microphysics. The idea of the proposed method is to compensate for these drawbacks by combining the two approaches in a way that microphysical hypotheses can, to a degree, adjust the content of the classes obtained statistically from the observations. This is done by means of an iterative approach, performed offline, which, in a statistical framework, examines clustered representative polarimetric observations by comparing them to the presumed polarimetric properties of each hydrometeor class. Aside from the comparison, a routine alters the content of clusters by encouraging further statistical clustering in cases of non-identification. By merging all identified clusters, the multi-dimensional polarimetric signatures of various hydrometeor types are obtained for each of the studied representative datasets, i.e. for each radar system of interest. These are depicted by sets of centroids which are then employed in operational labelling of different hydrometeors. The method has been applied on three C-band datasets, each acquired by a different operational radar of the MeteoSwiss Rad4Alp network, as well as on two X-band datasets acquired by two research mobile radars. The results are discussed through a comparative analysis which includes a corresponding supervised and unsupervised approach, emphasising the operational potential of the proposed method.
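
    A stripped-down sketch of that iterative loop — cluster the observations, test each centroid against the presumed class signatures, and re-cluster whatever is not identified — is given below. The use of k-means, the Euclidean matching test, and the thresholds are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      def semi_supervised_centroids(X, class_signatures, dist_thresh=1.0,
                                    k0=5, max_rounds=5, seed=0):
          """X: (n_obs, n_vars) observations; class_signatures: dict of
          presumed per-class property vectors. Returns accepted centroids."""
          rng = np.random.default_rng(seed)
          identified, pool, k = [], X, k0
          for _ in range(max_rounds):
              if len(pool) < k:
                  break
              centroids, labels = kmeans2(pool, k, minit='++', seed=rng)
              unmatched = []
              for j, c in enumerate(centroids):
                  members = pool[labels == j]
                  if not len(members):
                      continue
                  # microphysical hypothesis check: nearest presumed signature
                  d = min(np.linalg.norm(c - s) for s in class_signatures.values())
                  if d < dist_thresh:
                      identified.append(c)       # accept centroid for labelling
                  else:
                      unmatched.append(members)  # encourage further clustering
              if not unmatched:
                  break
              pool, k = np.vstack(unmatched), k + 2   # finer re-clustering
          return np.array(identified)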

  4. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoa T. Nguyen; Stone, Daithi; E. Wes Bethel

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed reduced-resolution versions of data, projections of data, or both. These approaches still have limitations, such as high computational cost or susceptibility to error. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea, one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
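
    The core contrast — block averaging versus a variation-preserving statistic — fits in a few lines of Python. The block size, the toy field, and the choice of standard deviation as the statistical measure are illustrative assumptions.

      import numpy as np

      def reduce_resolution(field, block=4, stat="std"):
          """Summarize non-overlapping blocks of a 2-D field by their mean
          (smooths variation away) or standard deviation (preserves it)."""
          h, w = field.shape
          blocks = field[:h - h % block, :w - w % block].reshape(
              h // block, block, w // block, block)
          if stat == "mean":
              return blocks.mean(axis=(1, 3))
          return blocks.std(axis=(1, 3))

      # toy field: weak noise plus one high-variance patch
      rng = np.random.default_rng(1)
      field = rng.normal(0.0, 0.1, (64, 64))
      field[20:28, 20:28] += rng.normal(0.0, 2.0, (8, 8))
      print(reduce_resolution(field, stat="mean").max())  # patch nearly invisible
      print(reduce_resolution(field, stat="std").max())   # patch stands out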

  5. Multidimensional Rasch Analysis of a Psychological Test with Multiple Subtests: A Statistical Solution for the Bandwidth-Fidelity Dilemma

    ERIC Educational Resources Information Center

    Cheng, Ying-Yao; Wang, Wen-Chung; Ho, Yi-Hui

    2009-01-01

    Educational and psychological tests are often composed of multiple short subtests, each measuring a distinct latent trait. Unfortunately, short subtests suffer from low measurement precision, which makes the bandwidth-fidelity dilemma inevitable. In this study, the authors demonstrate how a multidimensional Rasch analysis can be employed to take…

  6. Measuring Multidimensional Latent Growth. Research Report. ETS RR-10-24

    ERIC Educational Resources Information Center

    Rijmen, Frank

    2010-01-01

    As is the case for any statistical model, a multidimensional latent growth model comes with certain requirements with respect to the data collection design. In order to measure growth, repeated measurements of the same set of individuals are required. Furthermore, the data collection design should be specified such that no individual is given the…

  7. Comparison of Unidimensional and Multidimensional Approaches to IRT Parameter Estimation. Research Report. ETS RR-04-44

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2004-01-01

    It is common to assume during statistical analysis of a multiscale assessment that the assessment has simple structure or that it is composed of several unidimensional subtests. Under this assumption, both the unidimensional and multidimensional approaches can be used to estimate item parameters. This paper theoretically demonstrates that these…

  8. Methods for a longitudinal quantitative outcome with a multivariate Gaussian distribution multi-dimensionally censored by therapeutic intervention.

    PubMed

    Sun, Wanjie; Larsen, Michael D; Lachin, John M

    2014-04-15

    In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as a one-step, two-step, or full-iteration algorithm. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. This approach is particularly attractive when outcomes reach a plateau after intervention for various reasons. The methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. The methods proved robust to high dimensions, large amounts of censored data, and low within-subject correlation, and to settings where subjects receive a non-trial intervention only to treat the underlying condition (with high Y), or to treat the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.
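
    The one-dimensional building block behind such EM-type imputation is the conditional mean of a normal variable known only to lie beyond a censoring threshold; the multi-dimensionally censored MVN posterior that the authors approximate generalizes this quantity. A minimal sketch, with illustrative blood-pressure numbers:

      from scipy.stats import norm

      def censored_normal_mean(mu, sigma, c):
          """E[X | X > c] for X ~ N(mu, sigma^2) -- the value an EM-type
          E-step substitutes for an observation censored at c."""
          alpha = (c - mu) / sigma
          mills = norm.pdf(alpha) / norm.sf(alpha)   # inverse Mills ratio
          return mu + sigma * mills

      # untreated mean 120, SD 10, observation censored at 130 -> about 135.3
      print(censored_normal_mean(mu=120.0, sigma=10.0, c=130.0))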

  9. Efficient computation paths for the systematic analysis of sensitivities

    NASA Astrophysics Data System (ADS)

    Greppi, Paolo; Arato, Elisabetta

    2013-01-01

    A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The issues in performing such analyses efficiently on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other a quasi-spiral path produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
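
    The grid-scanning idea is easy to make concrete: a reflected mixed-radix Gray code visits every point of the grid while changing exactly one input per step, keeping successive model evaluations close together and aiding warm-started solvers. A minimal Python sketch of the boustrophedon variant (an assumption; the paper's quasi-spiral heuristic differs):

      def gray_path(shape):
          """Yield the points of a mixed-radix grid so that consecutive
          points differ by one step in exactly one coordinate (a Hamiltonian
          path on the grid graph)."""
          point = [0] * len(shape)
          step = [1] * len(shape)        # current sweep direction per axis
          total = 1
          for s in shape:
              total *= s
          yield tuple(point)
          for _ in range(total - 1):
              for k in reversed(range(len(shape))):  # last axis varies fastest
                  nxt = point[k] + step[k]
                  if 0 <= nxt < shape[k]:
                      point[k] = nxt
                      yield tuple(point)
                      break
                  step[k] = -step[k]     # reflect this axis, carry to the next

      # a 3 x 3 domain: (0,0) (0,1) (0,2) (1,2) (1,1) (1,0) (2,0) (2,1) (2,2)
      print(list(gray_path((3, 3))))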

  10. A geochemical transport model for redox-controlled movement of mineral fronts in groundwater flow systems: A case of nitrate removal by oxidation of pyrite

    USGS Publications Warehouse

    Engesgaard, Peter; Kipp, Kenneth L.

    1992-01-01

    A one-dimensional prototype geochemical transport model was developed in order to handle simultaneous precipitation-dissolution and oxidation-reduction reactions governed by chemical equilibria. Total aqueous component concentrations are the primary dependent variables, and a sequential iterative approach is used for the calculation. The model was verified by analytical and numerical comparisons and is able to simulate sharp mineral fronts. At a site in Denmark, denitrification has been observed by oxidation of pyrite. Simulation of nitrate movement at this site showed a redox front movement rate of 0.58 m yr⁻¹, which agreed with calculations of others. It appears that the sequential iterative approach is the most practical for extension to multidimensional simulation and for handling large numbers of components and reactions. However, slow convergence may limit the size of redox systems that can be handled.

  11. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing a nonlinear hypothesis using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. In the present paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test the nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing a nonlinear hypothesis using the iterative NLLS estimator based on nonlinear studentized residuals is also proposed. In this research article an innovative method of testing a nonlinear hypothesis using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jennrich [8]. The main purpose of this paper is to provide innovative methods of testing a nonlinear hypothesis using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.

  12. Four-level conservative finite-difference schemes for Boussinesq paradigm equation

    NASA Astrophysics Data System (ADS)

    Kolkovska, N.

    2013-10-01

    In this paper a two-parameter family of four-level conservative finite-difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed to evaluate the numerical solution. Preservation of the discrete energy by this method is proved. The schemes have been numerically tested on a single-soliton propagation model and a two-soliton interaction model. The numerical experiments demonstrate that the proposed family of schemes has second-order convergence in the space and time steps in the discrete maximum norm.

  13. ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1990-01-01

    For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.

  14. The Cognitive Visualization System with the Dynamic Projection of Multidimensional Data

    NASA Astrophysics Data System (ADS)

    Gorohov, V.; Vitkovskiy, V.

    2008-08-01

    The phenomenon of cognitive computer graphics consists in generating on screen special graphic representations that create visual images in the mind of the human operator. These images are aesthetically attractive and thus stimulate the operator's visual imagination, which is closely related to the intuitive mechanisms of thinking. The essence of the cognitive effect is that the operator perceives the moving projection as a pseudo-three-dimensional object characterizing multidimensional data in multidimensional space. After a thorough qualitative study of the visual aspects of the multidimensional data with the aid of the enumerated algorithms, it becomes possible, using standard computer-graphics algorithms, to colour the objects or groups of objects of interest to the user. One can then return to the dynamic rotation of the data in order to check the user's intuitive ideas about clusters and connections in the multidimensional data. These cognitive graphics methods can be developed further in combination with other information technologies, above all with packages for digital image processing and multidimensional statistical analysis.

  15. Developing measures of educational change for academic health care teams implementing the chronic care model in teaching practices.

    PubMed

    Bowen, Judith L; Stevens, David P; Sixta, Connie S; Provost, Lloyd; Johnson, Julie K; Woods, Donna M; Wagner, Edward H

    2010-09-01

    The Chronic Care Model (CCM) is a multidimensional framework designed to improve care for patients with chronic health conditions. The model strives for productive interactions between informed, activated patients and proactive practice teams, resulting in better clinical outcomes and greater satisfaction. While measures for improving care may be clear, measures of residents' competency to provide chronic care do not exist. This report describes the process used to develop educational measures and results from CCM settings that used them to monitor curricular innovations. Participants were twenty-six academic health care teams participating in the national and California Academic Chronic Care Collaboratives. Using successive discussion groups and surveys, participants engaged in an iterative process to identify desirable and feasible educational measures for curricula that addressed educational objectives linked to the CCM. The measures were designed to facilitate residency programs' abilities to address new accreditation requirements and were tested with teams actively engaged in redesigning educational programs. Field notes from each discussion and lists from work groups were synthesized using the CCM framework. Descriptive statistics were used to report survey results and measurement performance. Work groups generated educational objectives and 17 associated measurements. Seventeen (65%) teams provided feasibility and desirability ratings for the 17 measures. Two process measures were selected for use by all teams. Teams reported variable success using the measures. Several teams reported use of additional measures, suggesting more extensive curricular change. Using an iterative process in collaboration with program participants, we successfully defined a set of feasible and desirable education measures for academic health care teams using the CCM. These were used variably to measure the results of curricular changes, while simultaneously addressing requirements for residency accreditation.

  16. Multidimensional Modeling of Coronal Rain Dynamics

    NASA Astrophysics Data System (ADS)

    Fang, X.; Xia, C.; Keppens, R.

    2013-07-01

    We present the first multidimensional, magnetohydrodynamic simulations that capture the initial formation and long-term sustainment of the enigmatic coronal rain phenomenon. We demonstrate how thermal instability can induce a spectacular display of in situ forming blob-like condensations which then start their intimate ballet on top of initially linear force-free arcades. Our magnetic arcades host a chromospheric, transition region, and coronal plasma. Following coronal rain dynamics for over 80 minutes of physical time, we collect enough statistics to quantify blob widths, lengths, velocity distributions, and other characteristics which directly match modern observational knowledge. Our virtual coronal rain displays the deformation of blobs into V-shaped features, interactions of blobs due to mostly pressure-mediated levitations, and gives the first views of blobs that evaporate in situ or are siphoned over the apex of the background arcade. Our simulations pave the way for systematic surveys of coronal rain showers in true multidimensional settings to connect parameterized heating prescriptions with rain statistics, ultimately allowing us to quantify the coronal heating input.

  17. Computers as an Instrument for Data Analysis. Technical Report No. 11.

    ERIC Educational Resources Information Center

    Muller, Mervin E.

    A review of statistical data analysis involving computers as a multi-dimensional problem provides the perspective for consideration of the use of computers in statistical analysis and the problems associated with large data files. An overall description of STATJOB, a particular system for doing statistical data analysis on a digital computer,…

  18. Development, Content Validity, and User Review of a Web-based Multidimensional Pain Diary for Adolescent and Young Adults With Sickle Cell Disease.

    PubMed

    Bakshi, Nitya; Stinson, Jennifer N; Ross, Diana; Lukombo, Ines; Mittal, Nonita; Joshi, Saumya V; Belfer, Inna; Krishnamurti, Lakshmanan

    2015-06-01

    Vaso-occlusive pain, the hallmark of sickle cell disease (SCD), is a major contributor to morbidity, poor health-related quality of life, and health care utilization associated with this disease. There is wide variation in the burden, frequency, and severity of pain experienced by patients with SCD. As compared with health care utilization for pain, a daily pain diary captures the breadth of the pain experience and is a superior measure of pain burden and its impact on patients. Electronic pain diaries based on real-time data capture methods overcome methodological barriers and limitations of paper pain diaries, but their psychometric properties have not been formally established in patients with SCD. Our objectives were to develop and establish the content validity of a web-based multidimensional pain diary for adolescents and young adults with SCD, and to conduct an end-user review to refine the prototype. Following identification of items, a conceptual model was developed. Interviews with adolescents and young adults with SCD were conducted. Subsequently, an end-user review using the electronic pain diary prototype was conducted. Two iterative cycles of in-depth cognitive interviews in adolescents and young adults with SCD informed the design and guided the addition, removal, and modification of items in the multidimensional pain diary. Potential end-users provided positive feedback on the design and prototype of the electronic diary. A multidimensional web-based electronic pain diary for adolescents and young adults with SCD has been developed, and content validity and initial end-user reviews have been completed.

  19. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    PubMed

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes; however, the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification method in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we also propose a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).
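
    For reference, the matched filter that the authors take as a baseline classifier is a one-liner in linear algebra: score each pixel by its background-whitened correlation with the target spectrum. A minimal Python sketch, with the diagonal regularization and the use of global background statistics as simplifying assumptions:

      import numpy as np

      def matched_filter_scores(cube, target):
          """cube: (n_pixels, n_bands) spectra; target: (n_bands,) plume
          signature. Scores are ~0 for background and ~1 for a pure target."""
          mu = cube.mean(axis=0)                   # background mean estimate
          X = cube - mu
          C = np.cov(X, rowvar=False) + 1e-6 * np.eye(cube.shape[1])
          w = np.linalg.solve(C, target - mu)      # whitened target direction
          return X @ w / ((target - mu) @ w)

      # thresholding the scores yields the plume mask that a post-processing
      # step such as MIF-based filtering could then refine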

  20. Development and Validation of a Teaching Practice Scale (TISS) for Instructors of Introductory Statistics at the College Level

    ERIC Educational Resources Information Center

    Hassad, Rossi A.

    2009-01-01

    This study examined the teaching practices of 227 college instructors of introductory statistics (from the health and behavioral sciences). Using primarily multidimensional scaling (MDS) techniques, a two-dimensional, 10-item teaching practice scale, TISS (Teaching of Introductory Statistics Scale), was developed and validated. The two dimensions…

  1. Dose reduction with adaptive statistical iterative reconstruction for paediatric CT: phantom study and clinical experience on chest and abdomen CT.

    PubMed

    Gay, F; Pavia, Y; Pierrat, N; Lasalle, S; Neuenschwander, S; Brisse, H J

    2014-01-01

    To assess the benefit and limits of iterative reconstruction of paediatric chest and abdominal computed tomography (CT). The study compared adaptive statistical iterative reconstruction (ASIR) with filtered back projection (FBP) on 64-channel MDCT. A phantom study was first performed using variable tube potential, tube current and ASIR settings. The assessed image quality indices were the signal-to-noise ratio (SNR), the noise power spectrum, low contrast detectability (LCD) and spatial resolution. A clinical retrospective study of 26 children (M:F = 14/12, mean age: 4 years, range: 1-9 years) was secondarily performed allowing comparison of 18 chest and 14 abdominal CT pairs, one with a routine CT dose and FBP reconstruction, and the other with 30 % lower dose and 40 % ASIR reconstruction. Two radiologists independently compared the images for overall image quality, noise, sharpness and artefacts, and measured image noise. The phantom study demonstrated a significant increase in SNR without impairment of the LCD or spatial resolution, except for tube current values below 30-50 mA. On clinical images, no significant difference was observed between FBP and reduced dose ASIR images. Iterative reconstruction allows at least 30 % dose reduction in paediatric chest and abdominal CT, without impairment of image quality. • Iterative reconstruction helps lower radiation exposure levels in children undergoing CT. • Adaptive statistical iterative reconstruction (ASIR) significantly increases SNR without impairing spatial resolution. • For abdomen and chest CT, ASIR allows at least a 30 % dose reduction.

  2. Numerical methods of solving a system of multi-dimensional nonlinear equations of the diffusion type

    NASA Technical Reports Server (NTRS)

    Agapov, A. V.; Kolosov, B. I.

    1979-01-01

    The principles of conservation and stability of difference schemes achieved using the iteration control method are examined. For the predictor-corrector type schemes obtained, convergence of the controlled sequences of approximate solutions to the exact solutions is proved in Sobolev metrics. Algorithms were developed for reducing the differential problem to integral relationships whose solution methods are known. The algorithms for solving the problem are classified according to the nonlinearity of the diffusion coefficients, and practical recommendations for their effective use are given.

  3. Proceedings of the NATO-Advanced Study Institute on Computer Aided Analysis of Rigid and Flexible Mechanical Systems Held in Troia, Portugal on 27 Jun-9 Jul, 1993. Volume 2. Contributed Papers

    DTIC Science & Technology

    1993-07-09

    ... solve iteratively equation (18) for q ... solve the velocity problem through equation (19) to calculate q ... object-oriented models for the database to store the system information. Using OOP on the formalism level is more difficult and a current field of... Multidimensional Physical Systems: Graph-theoretic Modeling, Systems and Cybernetics, vol. 21 (1992)... A RELATIONAL DATABASE FOR GENERAL

  4. Vectorized and multitasked solution of the few-group neutron diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zee, S.K.; Turinsky, P.J.; Shayer, Z.

    1989-03-01

    A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method that allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of ~61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.

  5. Visualizing multiattribute Web transactions using a freeze technique

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Cotting, Daniel; Dayal, Umeshwar; Machiraju, Vijay; Garg, Pankaj

    2003-05-01

    Web transactions are multidimensional and have a number of attributes: client, URL, response times, and numbers of messages. One of the key questions is how to simultaneously lay out in a graph the multiple relationships, such as the relationships between the web client response times and URLs in a web access application. In this paper, we describe a freeze technique to enhance a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set of objects during the construction of the graph. As a result, we substantially reduce the force computation time. This technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett Packard Laboratories. It has been used to visually analyze large volumes of service and sales transactions at online web sites.
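
    A toy version of the freeze idea in Python: lay out one batch of nodes with a simple force model, freeze it, then lay out the next batch against everything already placed, so the expensive pairwise force computation is never repeated for frozen-frozen pairs. The force law and constants are illustrative, not those of the prototype.

      import numpy as np

      def layout_with_freeze(batch_sizes, n_iter=200, seed=0):
          rng = np.random.default_rng(seed)
          placed = np.empty((0, 2))
          for size in batch_sizes:              # e.g. one class of transactions
              pos = rng.uniform(-1, 1, (size, 2))
              for _ in range(n_iter):
                  allpos = np.vstack([placed, pos])
                  diff = pos[:, None, :] - allpos[None, :, :]
                  dist = np.linalg.norm(diff, axis=-1) + 1e-9
                  repulse = (diff / dist[..., None] ** 3).sum(axis=1)
                  pos += 0.001 * repulse - 0.01 * pos   # repulsion + weak centering
              placed = np.vstack([placed, pos])  # freeze this batch
          return placed

      print(layout_with_freeze([5, 5, 5]).shape)   # (15, 2)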

  6. Developing a patient-centered outcome measure for complementary and alternative medicine therapies II: Refining content validity through cognitive interviews

    PubMed Central

    2011-01-01

    Background Available measures of patient-reported outcomes for complementary and alternative medicine (CAM) inadequately capture the range of patient-reported treatment effects. The Self-Assessment of Change questionnaire was developed to measure multi-dimensional shifts in well-being for CAM users. With content derived from patient narratives, items were subsequently focused through interviews on a new cohort of participants. Here we present the development of the final version, in which the content and format are refined through cognitive interviews. Methods We conducted cognitive interviews across five iterations of questionnaire refinement with a culturally diverse sample of 28 CAM users. In each iteration, participant critiques were used to revise the questionnaire, which was then re-tested in subsequent rounds of cognitive interviews. Following all five iterations, transcripts of cognitive interviews were systematically coded and analyzed to examine participants' understanding of the format and content of the final questionnaire. Based on these data, we established summary descriptions and selected exemplar quotations for each word pair on the final questionnaire. Results The final version of the Self-Assessment of Change questionnaire (SAC) includes 16 word pairs, nine of which remained unchanged from the original draft. Participants consistently said that these stable word pairs represented opposite ends of the same domain of experience, and the meanings of these terms were stable across the participant pool. Five pairs underwent revision and two word pairs were added. Four word pairs were eliminated for redundancy or because participants did not agree on the meaning of the terms. Cognitive interviews indicate that participants understood the format of the questionnaire and considered each word pair to represent opposite poles of a shared domain of experience. Conclusions We have placed lay language and direct experience at the center of questionnaire revision and refinement. In so doing, we provide an innovative model for the development of truly patient-centered outcome measures. Although this instrument was designed and tested in a CAM-specific population, it may be useful in assessing multi-dimensional shifts in well-being across a broader patient population. PMID:22206409

  7. Multidimensional poverty and child survival in India.

    PubMed

    Mohanty, Sanjay K

    2011-01-01

    Though the concept of multidimensional poverty has been acknowledged across the disciplines (among economists, public health professionals, development thinkers, social scientists, policy makers and international organizations) and included in the development agenda, its measurement and application are still limited. OBJECTIVES AND METHODOLOGY: Using unit data from the National Family and Health Survey 3, India, this paper measures poverty in multidimensional space and examines the linkages of multidimensional poverty with child survival. Multidimensional poverty is measured in the dimensions of knowledge, health and wealth, and child survival is measured with respect to infant mortality and under-five mortality. Descriptive statistics, principal component analyses and life table methods are used in the analyses. The estimates of multidimensional poverty are robust and the inter-state differentials are large. While the infant mortality rate and under-five mortality rate are disproportionately higher among the abject poor compared to the non-poor, there are no significant differences in child survival among the educationally, economically and health poor at the national level. State patterns in child survival among the educationally, economically and health poor are mixed. Use of multidimensional poverty measures helps to identify the abject poor, who are unlikely to come out of the poverty trap. Child survival is significantly lower among the abject poor compared to the moderate poor and non-poor. We urge that the concept of multiple deprivations be popularized in research and programs so as to reduce poverty and inequality in the population.

  8. Biodiversity as a multidimensional construct: a review, framework and case study of herbivory's impact on plant biodiversity

    PubMed Central

    Naeem, S.; Prager, Case; Weeks, Brian; Varga, Alex; Flynn, Dan F. B.; Griffin, Kevin; Muscarella, Robert; Palmer, Matthew; Wood, Stephen; Schuster, William

    2016-01-01

    Biodiversity is inherently multidimensional, encompassing taxonomic, functional, phylogenetic, genetic, landscape and many other elements of variability of life on the Earth. However, this fundamental principle of multidimensionality is rarely applied in research aimed at understanding biodiversity's value to ecosystem functions and the services they provide. This oversight means that our current understanding of the ecological and environmental consequences of biodiversity loss is limited primarily to what unidimensional studies have revealed. To address this issue, we review the literature, develop a conceptual framework for multidimensional biodiversity research based on this review and provide a case study to explore the framework. Our case study specifically examines how herbivory by whitetail deer (Odocoileus virginianus) alters the multidimensional influence of biodiversity on understory plant cover at Black Rock Forest, New York. Using three biodiversity dimensions (taxonomic, functional and phylogenetic diversity) to explore our framework, we found that herbivory alters biodiversity's multidimensional influence on plant cover; an effect not observable through a unidimensional approach. Although our review, framework and case study illustrate the advantages of multidimensional over unidimensional approaches, they also illustrate the statistical and empirical challenges such work entails. Meeting these challenges, however, where data and resources permit, will be important if we are to better understand and manage the consequences we face as biodiversity continues to decline in the foreseeable future. PMID:27928041

  9. Biodiversity as a multidimensional construct: a review, framework and case study of herbivory's impact on plant biodiversity.

    PubMed

    Naeem, S; Prager, Case; Weeks, Brian; Varga, Alex; Flynn, Dan F B; Griffin, Kevin; Muscarella, Robert; Palmer, Matthew; Wood, Stephen; Schuster, William

    2016-12-14

    Biodiversity is inherently multidimensional, encompassing taxonomic, functional, phylogenetic, genetic, landscape and many other elements of variability of life on the Earth. However, this fundamental principle of multidimensionality is rarely applied in research aimed at understanding biodiversity's value to ecosystem functions and the services they provide. This oversight means that our current understanding of the ecological and environmental consequences of biodiversity loss is limited primarily to what unidimensional studies have revealed. To address this issue, we review the literature, develop a conceptual framework for multidimensional biodiversity research based on this review and provide a case study to explore the framework. Our case study specifically examines how herbivory by whitetail deer (Odocoileus virginianus) alters the multidimensional influence of biodiversity on understory plant cover at Black Rock Forest, New York. Using three biodiversity dimensions (taxonomic, functional and phylogenetic diversity) to explore our framework, we found that herbivory alters biodiversity's multidimensional influence on plant cover; an effect not observable through a unidimensional approach. Although our review, framework and case study illustrate the advantages of multidimensional over unidimensional approaches, they also illustrate the statistical and empirical challenges such work entails. Meeting these challenges, however, where data and resources permit, will be important if we are to better understand and manage the consequences we face as biodiversity continues to decline in the foreseeable future. © 2016 The Authors.

  10. The Understanding and Interpretation of Innovative Technology-Enabled Multidimensional Physical Activity Feedback in Patients at Risk of Future Chronic Disease

    PubMed Central

    Western, Max J.; Peacock, Oliver J.; Stathi, Afroditi; Thompson, Dylan

    2015-01-01

    Background Innovative physical activity monitoring technology can be used to depict rich visual feedback that encompasses the various aspects of physical activity known to be important for health. However, it is unknown whether patients who are at risk of chronic disease would understand such sophisticated personalised feedback or whether they would find it useful and motivating. The purpose of the present study was to determine whether technology-enabled multidimensional physical activity graphics and visualisations are comprehensible and usable for patients at risk of chronic disease. Method We developed several iterations of graphics depicting minute-by-minute activity patterns and integrated physical activity health targets. Subsequently, patients at moderate/high risk of chronic disease (n=29) and healthcare practitioners (n=15) from South West England underwent full 7-day activity monitoring followed by individual semi-structured interviews in which they were asked to comment on their own personalised visual feedback. Framework analysis was used to gauge their interpretation of personalised feedback, graphics and visualisations. Results We identified two main components focussing on (a) the interpretation of feedback designs and data and (b) the impact of personalised visual physical activity feedback on facilitation of health behaviour change. Participants demonstrated a clear ability to understand the sophisticated personal information, along with enhanced physical activity knowledge. They reported that receiving multidimensional feedback was motivating and could be usefully applied to facilitate their efforts in becoming more physically active. Conclusion Multidimensional physical activity feedback can be made comprehensible, informative and motivational by using appropriate graphics and visualisations. There is an opportunity to exploit the full potential created by technological innovation and provide sophisticated personalised physical activity feedback as an adjunct to support behaviour change. PMID:25938455

  11. Adapting an Agent-Based Model of Socio-Technical Systems to Analyze System and Security Failures

    DTIC Science & Technology

    2016-05-09

    ...statistically significant amount, which it did with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 1 column of... Blackout metric to a statistically significant amount, with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 2... Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, vol. 1, pp. 1007-1014, International Foundation

  12. DEVELOPMENT AND PSYCHOMETRIC TESTING OF A MULTIDIMENSIONAL INSTRUMENT OF PERCEIVED DISCRIMINATION AMONG AFRICAN AMERICANS IN THE JACKSON HEART STUDY

    PubMed Central

    Sims, Mario; Wyatt, Sharon B.; Gutierrez, Mary Lou; Taylor, Herman A.; Williams, David R.

    2009-01-01

    Objective Assessing the discrimination-health disparities hypothesis requires psychometrically sound, multidimensional measures of discrimination. Among the available discrimination measures, few are multidimensional and none have adequate psychometric testing in a large, African American sample. We report the development and psychometric testing of the multidimensional Jackson Heart Study Discrimination (JHSDIS) Instrument. Methods A multidimensional measure assessing the occurrence, frequency, attribution, and coping responses to perceived everyday and lifetime discrimination; lifetime burden of discrimination; and effect of skin color was developed and tested in the 5302-member cohort of the Jackson Heart Study. Internal consistency was calculated using Cronbach's α coefficient. Confirmatory factor analysis established the dimensions, and intercorrelation coefficients assessed the discriminant validity of the instrument. Setting Tri-county area of the Jackson, MS metropolitan statistical area. Results The JHSDIS was psychometrically sound (overall α = .78; α = .84 and .77 for the everyday and lifetime subscales, respectively). Confirmatory factor analysis yielded 11 factors, which confirmed the a priori dimensions represented. Conclusions The JHSDIS combined three scales into a single multidimensional instrument with good psychometric properties in a large sample of African Americans. This analysis lays the foundation for using this instrument in research that will examine the association between perceived discrimination and CVD among African Americans. PMID:19341164

  13. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
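
    The difference between the two sweeps is easiest to see on a model linear system: Jacobi updates every unknown from the previous iterate, whereas Gauss-Seidel consumes freshly updated values within the sweep, which roughly doubles the asymptotic convergence rate, the same factor 2 reported above. A minimal Python sketch on a 1-D diffusion matrix (an illustrative stand-in, not a transfer-equation solver):

      import numpy as np

      def jacobi(A, b, x, n_iter):
          D = np.diag(A)
          for _ in range(n_iter):
              x = (b - (A @ x - D * x)) / D          # all updates use old values
          return x

      def gauss_seidel(A, b, x, n_iter):
          for _ in range(n_iter):
              for i in range(len(b)):                # sweep uses fresh values
                  x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
          return x

      n = 50
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      exact = np.linalg.solve(A, b)
      for it in (10, 100, 400):
          print(it,
                np.linalg.norm(jacobi(A, b, np.zeros(n), it) - exact),
                np.linalg.norm(gauss_seidel(A, b, np.zeros(n), it) - exact))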

  14. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)

  15. Performance of the S-χ² Statistic for Full-Information Bifactor Models

    ERIC Educational Resources Information Center

    Li, Ying; Rupp, Andre A.

    2011-01-01

    This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…

  16. Multidimensional Poverty and Child Survival in India

    PubMed Central

    Mohanty, Sanjay K.

    2011-01-01

    Background Though the concept of multidimensional poverty has been acknowledged across the disciplines (among economists, public health professionals, development thinkers, social scientists, policy makers and international organizations) and included in the development agenda, its measurement and application are still limited. Objectives and Methodology Using unit data from the National Family and Health Survey 3, India, this paper measures poverty in multidimensional space and examines the linkages of multidimensional poverty with child survival. Multidimensional poverty is measured in the dimensions of knowledge, health and wealth, and child survival is measured with respect to infant mortality and under-five mortality. Descriptive statistics, principal component analyses and life table methods are used in the analyses. Results The estimates of multidimensional poverty are robust and the inter-state differentials are large. While the infant mortality rate and under-five mortality rate are disproportionately higher among the abject poor compared to the non-poor, there are no significant differences in child survival among the educationally, economically and health poor at the national level. State patterns in child survival among the educationally, economically and health poor are mixed. Conclusion Use of multidimensional poverty measures helps to identify the abject poor, who are unlikely to come out of the poverty trap. Child survival is significantly lower among the abject poor compared to the moderate poor and non-poor. We urge that the concept of multiple deprivations be popularized in research and programs so as to reduce poverty and inequality in the population. PMID:22046384

  17. Determination of optimal imaging settings for urolithiasis CT using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR): a physical human phantom study

    PubMed Central

    Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In

    2016-01-01

    Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) according to various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm³ uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current–time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5–7) and knowledge-based IMR (soft-tissue Levels 1–3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated after a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while maintaining subjective image quality. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with the imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542

  18. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    PubMed

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
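
    To make the estimator concrete, the following is a minimal sketch of the classical (non-overlapping) Allan variance for a scalar series; the WAVAR, MAVAR, and WMAVAR modifications reviewed in the paper extend this same first-difference form with weights and vector-valued samples. The function name and the plain NumPy implementation are illustrative, not taken from the paper.

```python
import numpy as np

def allan_variance(y, m=1):
    """Classical (non-overlapping) Allan variance of a scalar series y at
    averaging factor m: AVAR = 0.5 * mean((ybar[k+1] - ybar[k])**2), where
    ybar[k] are means over consecutive blocks of m samples."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m                      # number of complete blocks
    ybar = y[:n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)
```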

  19. High resolution 4-D spectroscopy with sparse concentric shell sampling and FFT-CLEAN.

    PubMed

    Coggins, Brian E; Zhou, Pei

    2008-12-01

    Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise.
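
    The CLEAN step referenced here can be illustrated with a toy Högbom-style loop on a 2-D array: repeatedly locate the strongest residual peak and subtract a gain-scaled, shifted copy of the point-spread function. This is only a structural sketch of the deconvolution idea, not the paper's FFT-CLEAN for 4-D NMR spectra; all names and defaults are illustrative, and the PSF is assumed centered with the same shape as the dirty image.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=1000):
    """Toy Hogbom CLEAN; psf is assumed centered and the same shape as dirty."""
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    center = tuple(s // 2 for s in psf.shape)
    for _ in range(max_iter):
        peak = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        amp = residual[peak]
        if abs(amp) < threshold:          # residual down to the noise floor
            break
        components[peak] += gain * amp
        shift = tuple(p - c for p, c in zip(peak, center))
        residual -= gain * amp * np.roll(psf, shift, axis=(0, 1))
    return components, residual
```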

  20. High Resolution 4-D Spectroscopy with Sparse Concentric Shell Sampling and FFT-CLEAN

    PubMed Central

    Coggins, Brian E.; Zhou, Pei

    2009-01-01

    SUMMARY Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise. PMID:18853260

  1. Bayesian reconstruction of projection reconstruction NMR (PR-NMR).

    PubMed

    Yoon, Ji Won

    2014-11-01

    Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using a peak-by-peak reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm that replaces the simple linear model with a linear mixed model to reconstruct closely spaced NMR spectra into the true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of the protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Effectiveness of Multidimensional Cancer Survivor Rehabilitation and Cost-Effectiveness of Cancer Rehabilitation in General: A Systematic Review

    PubMed Central

    Mewes, Janne C.; IJzerman, Maarten J.; van Harten, Wim H.

    2012-01-01

    Introduction. Many cancer survivors suffer from a combination of disease- and treatment-related morbidities and complaints after primary treatment. There is a growing evidence base for the effectiveness of monodimensional rehabilitation interventions; in practice, however, patients often participate in multidimensional programs. This study systematically reviews evidence regarding effectiveness of multidimensional rehabilitation programs for cancer survivors and cost-effectiveness of cancer rehabilitation in general. Methods. The published literature was systematically reviewed. Data were extracted using standardized forms and were summarized narratively. Results. Sixteen effectiveness and six cost-effectiveness studies were included. Multidimensional rehabilitation programs were found to be effective, but not more effective than monodimensional interventions, and not on all outcome measures. Effect sizes for quality of life were in the range of −0.12 (95% confidence interval [CI], −0.45 to 0.20) to 0.98 (95% CI, 0.69 to 1.29). Incremental cost-effectiveness ratios ranged from −€16,976, indicating cost savings, to €11,057 per quality-adjusted life year. Conclusions. The evidence for multidimensional interventions and the economic impact of rehabilitation studies is scarce and dominated by breast cancer studies. Studies published so far report statistically significant benefits for multidimensional interventions over usual care, most notably for the outcomes fatigue and physical functioning. An additional benefit of multidimensional over monodimensional rehabilitation was not found, but this was also sparsely reported on. Available economic evaluations assessed very different rehabilitation interventions. Yet, despite low comparability, all showed favorable cost-effectiveness ratios. Future studies should focus their designs on the comparative effectiveness and cost-effectiveness of multidimensional programs. PMID:22982580

  3. ComVisMD - compact visualization of multidimensional data: experimenting with cricket players data

    NASA Astrophysics Data System (ADS)

    Dandin, Shridhar B.; Ducassé, Mireille

    2018-03-01

    Database information is multidimensional and often displayed in tabular format (row/column display). Presented in aggregated form, multidimensional data can be used to analyze the records or objects. Online Analytical Processing (OLAP) proposes mechanisms to display multidimensional data in aggregated forms. A choropleth map is a thematic map in which areas are colored in proportion to the measurement of a statistical variable being displayed, such as population density; such maps are used mostly for compact graphical representation of geographical information. We propose a system, ComVisMD, inspired by choropleth maps and the OLAP cube, to visualize multidimensional data in a compact way. ComVisMD displays multidimensional data in the manner of an OLAP cube: an attribute a (first dimension, e.g. year started playing cricket) is mapped to the vertical direction, objects are colored according to attribute b (second dimension, e.g. batting average), circles of varying size encode attribute c (third dimension, e.g. highest score), and overlaid numbers encode attribute d (fourth dimension, e.g. matches played). We illustrate our approach on cricket players data, namely on two tables, Country and Player, which have a large number of rows and columns: 246 rows and 17 columns for players of one country. ComVisMD’s visualization reduces the size of the tabular display by a factor of about 4, allowing users to grasp more information at a time than the bare table display.
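
    As a rough illustration of the four-attribute mapping described above, the following matplotlib sketch plots hypothetical player records, with vertical position, color, circle size, and an overlaid number each carrying one dimension. The data values and column choices are invented for the example and are not from the paper.

```python
import matplotlib.pyplot as plt

# Hypothetical records: (year started, batting average, highest score, matches)
players = [(2005, 53.8, 183, 254), (2008, 44.1, 264, 186), (2011, 38.9, 137, 92)]
year, avg, high, matches = zip(*players)

fig, ax = plt.subplots()
# vertical position <- attribute a, color <- b, circle size <- c
sc = ax.scatter([0] * len(players), year, c=avg, s=[2 * h for h in high],
                cmap="viridis")
for y, m in zip(year, matches):            # overlaid number <- attribute d
    ax.annotate(str(m), (0, y), ha="center", va="center")
fig.colorbar(sc, ax=ax, label="batting average")
ax.set_ylabel("year started playing")
plt.show()
```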

  4. Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.

    NASA Astrophysics Data System (ADS)

    Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.

    1997-08-01

    A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few (<10) formal solutions of the RT equation. It combines, for the first time, non-linear multigrid iteration (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho, 1995ApJ...455..646T), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy, a converged solution with the desired true error is automatically guaranteed. Contrary to current operator splitting methods, the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method, non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is thus particularly attractive in complicated multilevel transfer problems where small grid sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.
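
    The multigrid mechanics can be sketched on a linear 1-D model problem; the paper's method is non-linear and couples the kinetic and radiative transfer equations, so the following two-grid cycle with Gauss-Seidel smoothing for -u'' = f (zero boundary values) is only a structural analogy, with all names and sweep counts illustrative.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps=3):
    """In-place Gauss-Seidel sweeps for the 1-D Poisson problem -u'' = f."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def two_grid_cycle(u, f, h):
    """Smooth, restrict the residual, solve the coarse error equation,
    prolong the correction, and smooth again."""
    u = gauss_seidel(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2   # residual
    rc = r[::2]                                  # restriction by injection
    ec = gauss_seidel(np.zeros_like(rc), rc, 2 * h, sweeps=50)  # coarse solve
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return gauss_seidel(u + e, f, h)             # post-smoothing
```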

  5. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

    Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using an adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in a phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving CNR of low contrast soft tissue targets, and improving spatial resolution of high contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  6. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
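
    In spirit, the NCRE feedback can be caricatured as a loop that retunes a regularization weight until the denoising residual is statistically consistent with the assumed additive noise level. Everything below (the function, the std-matching update rule, and the generic denoise(noisy, lam) callable) is an illustrative stand-in rather than the authors' algorithm.

```python
import numpy as np

def tune_denoiser(noisy, denoise, sigma, lam=0.1, tol=0.05, max_iter=20):
    """Adjust lam until the residual std matches the noise level sigma."""
    estimate = denoise(noisy, lam)
    for _ in range(max_iter):
        resid_std = np.std(noisy - estimate)
        if abs(resid_std - sigma) / sigma < tol:
            break                       # residual now looks like the noise
        # residual too weak -> denoise harder; too strong -> relax
        lam *= sigma / max(resid_std, 1e-12)
        estimate = denoise(noisy, lam)
    return estimate, lam
```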

  7. Quantitative evaluation of ASiR image quality: an adaptive statistical iterative reconstruction technique

    NASA Astrophysics Data System (ADS)

    Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan

    2012-03-01

    Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This reconstruction method combines the idealized system representation, as known from the standard filtered back projection (FBP) algorithm, with the strength of iterative reconstruction by including a noise model in the reconstruction scheme. It models how noise propagates through the reconstruction steps, feeds this model back into the loop, and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper, the effect of ASiR on the contrast-to-noise ratio is studied using the low-contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast-to-noise ratio, the images from ASiR can be obtained using 60% less current, leading to a dose reduction of the same amount.
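
    For reference, the contrast-to-noise ratio in such phantom studies is typically computed from region-of-interest statistics; a minimal version, assuming roi and background are pixel arrays drawn from the low-contrast insert and its surround:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio of a low-contrast insert vs. background."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)
```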

  8. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in quantifying coronary calcium.

    PubMed

    Takahashi, Masahiro; Kimura, Fumiko; Umezawa, Tatsuya; Watanabe, Yusuke; Ogawa, Harumi

    2016-01-01

    Adaptive statistical iterative reconstruction (ASIR) has been used to reduce radiation dose in cardiac computed tomography. However, changes in image parameters caused by ASIR, as compared to filtered back projection (FBP), may influence the quantification of coronary calcium. The aim was to investigate the influence of ASIR on calcium quantification in comparison to FBP. In 352 patients, CT images were reconstructed using FBP alone, FBP combined with ASIR 30%, 50%, 70%, and ASIR 100%, based on the same raw data. Image noise, plaque density, Agatston scores and calcium volumes were compared among the techniques. Image noise, Agatston score, and calcium volume decreased significantly with ASIR compared to FBP (each P < 0.001). Use of ASIR reduced the Agatston score by 10.5% to 31.0%. In calcified plaques of both patients and a phantom, ASIR decreased maximum CT values and calcified plaque size. In comparison to FBP, adaptive statistical iterative reconstruction (ASIR) may significantly decrease Agatston scores and calcium volumes. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  9. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
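
    A drastically simplified sketch of the iterative-subspace idea: sample many random low-dimensional feature subsets, fit a PCA model of normality in each, and score a test sample by its reconstruction errors across subspaces. The paper's target-specific feature selection and estimability criterion are omitted, and all names and defaults are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def abnormality_scores(normals, test, n_subspaces=50, dim=10, seed=0):
    """normals: (n_samples, n_features) healthy data; test: (n_features,).
    Returns one PCA reconstruction-error score per sampled subspace."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_subspaces):
        feats = rng.choice(normals.shape[1], size=dim, replace=False)
        pca = PCA(n_components=min(dim, normals.shape[0] - 1))
        pca.fit(normals[:, feats])
        x = test[feats][None, :]
        recon = pca.inverse_transform(pca.transform(x))
        scores.append(float(np.linalg.norm(x - recon)))
    return np.array(scores)   # consistently large scores flag abnormality
```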

  10. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  11. Iterative simulated quenching for designing irregular-spot-array generators.

    PubMed

    Gillet, J N; Sheng, Y

    2000-07-10

    We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics, to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves a much lower cost-function value and reconstruction error and a higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule, and it is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrary-shaped apertures better than do trapezoidal apertures of fixed heights.
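
    The quench-and-rescale structure can be sketched as repeated fast annealing runs whose starting temperature is reset between runs. The rescaling rule below is a crude stand-in for the paper's ensemble-statistics rescaling, and cost and perturb are user-supplied placeholders.

```python
import numpy as np

def iterated_quenching(cost, perturb, x0, t0=1.0, quenches=5, steps=2000,
                       cooling=0.99, seed=0):
    """Repeated fast quenches; the temperature is rescaled after each
    quench to re-thermalize the frozen state (crude stand-in rule)."""
    rng = np.random.default_rng(seed)
    x, e = x0, cost(x0)
    for _ in range(quenches):
        t = t0
        for _ in range(steps):
            xn = perturb(x, rng)
            en = cost(xn)
            if en < e or rng.random() < np.exp(-(en - e) / t):
                x, e = xn, en           # Metropolis acceptance
            t *= cooling                # fast exponential cooling: a quench
        t0 = max(2.0 * abs(e), 1e-9)    # rescale temperature for next quench
    return x, e
```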

  12. The effects of iterative reconstruction in CT on low-contrast liver lesion volumetry: a phantom study

    NASA Astrophysics Data System (ADS)

    Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas

    2017-03-01

    Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared in this study. Statistics-based and filtered back-projection yielded similar estimation performance, while model-based yielded higher precision but lower accuracy in the case of small lesions. Iterative reconstructions exhibited higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.

  13. Accelerated Path-following Iterative Shrinkage Thresholding Algorithm with Application to Semiparametric Graph Estimation

    PubMed Central

    Zhao, Tuo; Liu, Han

    2016-01-01

    We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
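
    The plain (non-accelerated, non-path-following) shrinkage-thresholding iteration underlying PISTA and APISTA is easy to state for the convex lasso case; the nonconvex penalties, path-following scheme, and coordinate-descent subroutine of APISTA are beyond this sketch.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, steps=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```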

  14. Evidence against the continuum structure underlying motivation measures derived from self-determination theory.

    PubMed

    Chemolli, Emanuela; Gagné, Marylène

    2014-06-01

    Self-determination theory (SDT) proposes a multidimensional conceptualization of motivation in which the different regulations are said to fall along a continuum of self-determination. The continuum has been used as a basis for using a relative autonomy index as a means to create motivational scores. Rasch analysis was used to verify the continuum structure of the Multidimensional Work Motivation Scale and of the Academic Motivation Scale. We discuss the concept of continuum against SDT's conceptualization of motivation and argue against the use of the relative autonomy index on the grounds that evidence for a continuum structure underlying the regulations is weak and because the index is statistically problematic. We suggest exploiting the full richness of SDT's multidimensional conceptualization of motivation through the use of alternative scoring methods when investigating motivational dynamics across life domains.

  15. Calibration and Data Analysis of the MC-130 Air Balance

    NASA Technical Reports Server (NTRS)

    Booth, Dennis; Ulbrich, N.

    2012-01-01

    Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.

  16. Assessment of health surveys: fitting a multidimensional graded response model.

    PubMed

    Depaoli, Sarah; Tiemensma, Jitske; Felt, John M

    The multidimensional graded response model, an item response theory (IRT) model, can be used to improve the assessment of surveys, even when sample sizes are restricted. Typically, health-based survey development utilizes classical statistical techniques (e.g. reliability and factor analysis). In a review of four prominent journals within the field of Health Psychology, we found that IRT-based models were used in less than 10% of the studies examining scale development or assessment. However, implementing IRT-based methods can provide more details about individual survey items, which is useful when determining the final item content of surveys. An example using a quality of life survey for Cushing's syndrome (CushingQoL) highlights the main components of implementing the multidimensional graded response model. Patients with Cushing's syndrome (n = 397) completed the CushingQoL. Results from the multidimensional graded response model supported a 2-subscale scoring process for the survey. All items were deemed worthy contributors to the survey. The graded response model can accommodate unidimensional or multidimensional scales, can be used with relatively low sample sizes, and is implemented in free software (example code provided in online Appendix). Use of this model can help to improve the quality of health-based scales being developed within the Health Sciences.
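
    For concreteness, the graded response model's category probabilities for a single item take only a few lines: a (possibly multidimensional) latent trait theta enters through a discrimination vector a, and ordered thresholds b define cumulative logistic curves. This is one common parameterization, not necessarily the exact one used in the CushingQoL analysis.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """P(X = k), k = 0..K-1, for one graded-response item.
    theta: latent trait vector; a: discrimination vector;
    b: increasing thresholds of length K - 1.
    Cumulative curves P(X >= k) are logistic in (a . theta - b[k])."""
    cum = 1.0 / (1.0 + np.exp(-(np.dot(a, theta) - np.asarray(b, float))))
    cum = np.concatenate(([1.0], cum, [0.0]))
    return cum[:-1] - cum[1:]
```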

  17. Introductory Statistics in the Garden

    ERIC Educational Resources Information Center

    Wagaman, John C.

    2017-01-01

    This article describes four semesters of introductory statistics courses that incorporate service learning and gardening into the curriculum with applications of the binomial distribution, least squares regression and hypothesis testing. The activities span multiple semesters and are iterative in nature.

  18. HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2005-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  19. Hybrid Neural Network and Support Vector Machine Method for Optimization

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2007-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  20. Application of a simple cerebellar model to geologic surface mapping

    USGS Publications Warehouse

    Hagens, A.; Doveton, J.H.

    1991-01-01

    Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater. © 1991.
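
    A toy one-dimensional CMAC conveys the mechanics: overlapping tilings map an input to a small set of active cells, the prediction averages their weights, and feedback nudges only the active weights. The paper maps 2-D surfaces, but the principle is the same; all class and parameter names here are illustrative.

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC with overlapping, offset tilings."""
    def __init__(self, n_tilings=8, n_cells=64, lo=0.0, hi=1.0, lr=0.1):
        self.w = np.zeros((n_tilings, n_cells))
        self.n_tilings, self.n_cells = n_tilings, n_cells
        self.lo, self.hi, self.lr = lo, hi, lr

    def _active(self, x):
        """One active cell per tiling; each tiling is offset slightly."""
        frac = (x - self.lo) / (self.hi - self.lo)
        return [int(frac * (self.n_cells - 1) + t / self.n_tilings)
                % self.n_cells for t in range(self.n_tilings)]

    def predict(self, x):
        return float(np.mean([self.w[t, c]
                              for t, c in enumerate(self._active(x))]))

    def train(self, x, target):
        err = target - self.predict(x)   # feedback from the observed surface
        for t, c in enumerate(self._active(x)):
            self.w[t, c] += self.lr * err
```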

  1. The assessment of function: How is it measured? A clinical perspective

    PubMed Central

    Reiman, Michael P; Manske, Robert C

    2011-01-01

    Testing for outcome or performance can take many forms, including multiple iterations of self-reported measures of function (an assessment of the individual’s perceived dysfunction) and/or clinical special tests (which are primarily assessments of impairments). Typically absent within these testing mechanisms is whether or not one can perform a specific task associated with function. The paper will operationally define function, discuss the construct of function within the disablement model, overview the multi-dimensional nature of ‘function’ as a concept, examine the current evidence for functional testing methods, and propose a functional testing continuum. Limitations of functional performance testing will be discussed, including recommendations for future research. PMID:22547919

  2. The Detection of Focal Liver Lesions Using Abdominal CT: A Comparison of Image Quality Between Adaptive Statistical Iterative Reconstruction V and Adaptive Statistical Iterative Reconstruction.

    PubMed

    Lee, Sangyun; Kwon, Heejin; Cho, Jihan

    2016-12-01

    To investigate image quality characteristics of abdominal computed tomography (CT) scans reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) vs the currently applied adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved study included 35 consecutive patients who underwent CT of the abdomen. Among these 35 patients, 27 with focal liver lesions underwent abdominal CT with a 128-slice multidetector unit using the following parameters: fixed noise index of 30, 1.25 mm slice thickness, 120 kVp, and a gantry rotation time of 0.5 seconds. CT images were analyzed depending on the method of reconstruction: ASIR (30%, 50%, and 70%) vs ASIR-V (30%, 50%, and 70%). Three radiologists independently assessed randomized images in a blinded manner. Imaging sets were compared with respect to focal lesion detection numbers, overall image quality, and objective noise using a paired sample t test. Interobserver agreement was assessed with the intraclass correlation coefficient. The detection of small focal liver lesions (<10 mm) was significantly higher when ASIR-V was used compared to ASIR (P <0.001). Subjective image noise, artifact, and objective image noise in the liver were generally significantly better for ASIR-V compared to ASIR, especially at 50% ASIR-V. Image sharpness and diagnostic acceptability were significantly worse for 70% ASIR-V compared to the various levels of ASIR. Images analyzed using 50% ASIR-V were significantly better than the three series of ASIR or the other ASIR-V conditions at providing diagnostically acceptable CT scans without compromising image quality and in the detection of focal liver lesions. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  3. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).

    PubMed

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-01

    Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition), and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exceptions of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms, such that volumes quantified from scans of different reconstruction algorithms can be compared. The little difference found between the precision of FBP and iterative reconstructions could be a result of both iterative reconstruction's diminished noise reduction at the edge of the nodules and the loss of resolution at high noise levels with iterative reconstruction. The findings do not rule out a potential advantage of IR that might be evident in a study that uses a larger number of nodules or repeated scans.

  4. Development of a Multidimensional Functional Health Scale for Older Adults in China.

    PubMed

    Mao, Fanzhen; Han, Yaofeng; Chen, Junze; Chen, Wei; Yuan, Manqiong; Alicia Hong, Y; Fang, Ya

    2016-05-01

    A first step toward successful aging is assessing the functional wellbeing of older adults. This study reports the development of a culturally appropriate brief scale (the Multidimensional Functional Health Scale for Chinese Elderly, MFHSCE) to assess the functional health of Chinese elderly. Through systematic literature review, the Delphi method, cultural adaptation, synthetic statistical item selection, Cronbach's alpha and confirmatory factor analysis, we developed the item pool, conducted two rounds of item selection, and performed psychometric evaluation. Synthetic statistical item selection and psychometric evaluation were conducted with 539 and 2032 older adults, respectively. The MFHSCE consists of 30 items, covering activities of daily living, social relationships, physical health, mental health, cognitive function, and economic resources. The Cronbach's alpha was 0.92, and the comparative fit index was 0.917. The MFHSCE has good internal consistency and construct validity; it is also concise and easy to use in general practice, especially in communities in China.
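
    Since internal consistency is summarized with Cronbach's alpha, the standard computation is shown below, assuming a complete respondents-by-items score matrix; this is a generic illustration, not the authors' code.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)
```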

  5. Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.

    2017-03-01

    In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially mitigates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.

  6. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.

  7. Fitting multidimensional splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
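
    A rough sketch of backward knot elimination with SciPy's least-squares B-spline fitter: greedily drop the interior knot whose removal least inflates the residual sum of squares. The statistical variable-selection tests of the report are replaced here by a crude RSS-ratio stopping rule, and all names and thresholds are illustrative.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def backward_knot_elimination(x, y, knots, min_knots=2, inflate=1.1):
    """x must be increasing; knots are interior knots for a cubic spline."""
    knots = list(knots)
    best = LSQUnivariateSpline(x, y, knots)
    while len(knots) > min_knots:
        trials = []
        for i in range(len(knots)):
            spl = LSQUnivariateSpline(x, y, knots[:i] + knots[i + 1:])
            trials.append((np.sum((spl(x) - y) ** 2), i))
        rss, i = min(trials)
        # stop when removing any knot appreciably worsens the fit
        if rss > inflate * np.sum((best(x) - y) ** 2):
            break
        knots.pop(i)
        best = LSQUnivariateSpline(x, y, knots)
    return knots, best
```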

  8. Using Action Research to Develop a Course in Statistical Inference for Workplace-Based Adults

    ERIC Educational Resources Information Center

    Forbes, Sharleen

    2014-01-01

    Many adults who need an understanding of statistical concepts have limited mathematical skills. They need a teaching approach that includes as little mathematical context as possible. Iterative participatory qualitative research (action research) was used to develop a statistical literacy course for adult learners informed by teaching in…

  9. Accelerated iterative beam angle selection in IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan

    2016-03-15

    Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
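
    The greedy loop itself is compact once the expensive converged FMO solve is swapped for a surrogate score, e.g. the objective value after five gradient iterations; the fmo_surrogate callable below is a placeholder for whichever surrogate is used.

```python
import numpy as np

def greedy_beam_selection(candidates, fmo_surrogate, n_beams):
    """Grow a beam ensemble one orientation at a time, scoring each
    candidate with a cheap surrogate of the FMO objective value."""
    ensemble = []
    for _ in range(n_beams):
        remaining = [c for c in candidates if c not in ensemble]
        scores = [fmo_surrogate(ensemble + [c]) for c in remaining]
        ensemble.append(remaining[int(np.argmin(scores))])
    return ensemble
```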

  10. Accelerated iterative beam angle selection in IMRT.

    PubMed

    Bangert, Mark; Unkelbach, Jan

    2016-03-01

    Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n - 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.

  11. Territories typification technique with use of statistical models

    NASA Astrophysics Data System (ADS)

    Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.

    2018-05-01

    Typification of territories is required for the solution of many problems. The results of geological zoning obtained by various methods do not always agree. The main goal of this research is therefore to develop a technique for obtaining a multidimensional standard classified indicator for geological zoning. In the course of the research, a probabilistic approach was used. In order to increase the reliability of geological information classification, the authors suggest using the complex multidimensional probabilistic indicator P_K as a criterion of the classification. The second criterion chosen is the multidimensional standard classified indicator Z. Both can serve as characteristics of classification in geological-engineering zoning. The above-mentioned indicators P_K and Z are in good correlation: correlation coefficient values for the entire territory, regardless of structural solidity, equal r = 0.95, so each indicator can be used in geological-engineering zoning. The method suggested has been tested and a schematic map of zoning has been drawn.

  12. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-02-01

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edges of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. Copyright © 2016 Elsevier B.V. All rights reserved.
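
    For reference, a compact dense-matrix SIRT update; real electron tomography uses sparse projectors, and the paper's contribution is the edge-profile criterion for choosing the iteration count, which is why this sketch keeps a per-iteration history to inspect. The names and relaxation default are illustrative, and the system matrix A is assumed nonnegative.

```python
import numpy as np

def sirt(A, b, iters, relax=1.0):
    """SIRT update x <- x + relax * C A^T R (b - A x), with R and C the
    inverse row and column sums of the (nonnegative) system matrix A."""
    row = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    col = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    history = []
    for _ in range(iters):
        x = x + relax * col * (A.T @ (row * (b - A @ x)))
        history.append(x.copy())   # e.g. analyze edge profiles per iteration
    return x, history
```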

  13. Ultralow-dose CT of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction and model-based iterative reconstruction: 2D and 3D image quality.

    PubMed

    Widmann, Gerlig; Schullian, Peter; Gassner, Eva-Maria; Hoermann, Romed; Bale, Reto; Puelacher, Wolfgang

    2015-03-01

    OBJECTIVE. The purpose of this article is to evaluate 2D and 3D image quality of high-resolution ultralow-dose CT images of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) in comparison with standard filtered backprojection (FBP). MATERIALS AND METHODS. A formalin-fixed human cadaver head was scanned using a clinical reference protocol at a CT dose index volume of 30.48 mGy and a series of five ultralow-dose protocols at 3.48, 2.19, 0.82, 0.44, and 0.22 mGy using FBP and ASIR at 50% (ASIR-50), ASIR at 100% (ASIR-100), and MBIR. Blinded 2D axial and 3D volume-rendered images were compared with each other by three readers using top-down scoring. Scores were analyzed per protocol or dose and reconstruction. All images were compared with the FBP reference at 30.48 mGy. A nonparametric Mann-Whitney U test was used. Statistical significance was set at p < 0.05. RESULTS. For 2D images, the FBP reference at 30.48 mGy did not statistically significantly differ from ASIR-100 at 3.48 mGy, ASIR-100 at 2.19 mGy, and MBIR at 0.82 mGy. MBIR at 2.19 and 3.48 mGy scored statistically significantly better than the FBP reference (p = 0.032 and 0.001, respectively). For 3D images, the FBP reference at 30.48 mGy did not statistically significantly differ from all reconstructions at 3.48 mGy; FBP and ASIR-100 at 2.19 mGy; FBP, ASIR-100, and MBIR at 0.82 mGy; MBIR at 0.44 mGy; and MBIR at 0.22 mGy. CONCLUSION. MBIR (2D and 3D) and ASIR-100 (2D) may significantly improve subjective image quality of ultralow-dose images and may allow more than 90% dose reductions.

  14. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  15. Proficiency Testing for Determination of Water Content in Toluene of Chemical Reagents by iteration robust statistic technique

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Wang, Qunwei; He, Ming

    2018-05-01

    In order to investigate and improve the level of detection technology for water content in liquid chemical reagents among domestic laboratories, the proficiency testing provider PT0031 (CNAS) organized a proficiency testing program for water content in toluene; 48 laboratories from 18 provinces/cities/municipalities took part in the PT. This paper introduces the implementation process of the proficiency testing for determination of water content in toluene, including sample preparation and homogeneity and stability testing; presents the statistical results of the iterative robust statistics technique and their analysis; summarizes and analyzes the different test standards widely used in the laboratories; and puts forward technical suggestions for improving the quality of water content testing. Satisfactory results were obtained by 43 laboratories, amounting to 89.6% of the total participating laboratories.
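
    Iterative robust statistics of the kind used in proficiency testing (for example, ISO 13528's Algorithm A) alternate winsorization with re-estimation of a robust mean and standard deviation until both stabilize. The sketch below follows that scheme under the stated assumption; the constants 1.483, 1.5, and 1.134 are the standard Algorithm A values.

```python
import numpy as np

def algorithm_a(x, tol=1e-9, max_iter=100):
    """Robust mean and std in the style of ISO 13528 Algorithm A."""
    x = np.asarray(x, dtype=float)
    x_star = np.median(x)
    s_star = 1.483 * np.median(np.abs(x - x_star))
    for _ in range(max_iter):
        delta = 1.5 * s_star
        w = np.clip(x, x_star - delta, x_star + delta)   # winsorize
        new_x, new_s = w.mean(), 1.134 * w.std(ddof=1)
        if abs(new_x - x_star) < tol and abs(new_s - s_star) < tol:
            break
        x_star, s_star = new_x, new_s
    return x_star, s_star
```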

  16. Rapid acquisition of data dense solid-state CPMG NMR spectral sets using multi-dimensional statistical analysis

    DOE PAGES

    Mason, H. E.; Uribe, E. C.; Shusterman, J. A.

    2018-01-01

    Tensor-rank decomposition methods have been applied to variable contact time ²⁹Si{¹H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.
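
    Tensor-rank (canonical polyadic, CP) decomposition itself can be sketched as an alternating-least-squares loop over the factor matrices of a 3-way array. This generic implementation is illustrative only and is not the authors' processing code.

```python
import numpy as np

def cp_als(X, rank, iters=100, seed=0):
    """Rank-R CP decomposition of a 3-way array by alternating least
    squares: X[i, j, k] ~ sum_r A0[i, r] * A1[j, r] * A2[k, r]."""
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((n, rank)) for n in X.shape]
    for _ in range(iters):
        for m in range(3):
            o1, o2 = [A[k] for k in range(3) if k != m]
            # Khatri-Rao product of the other two factor matrices
            kr = (o1[:, None, :] * o2[None, :, :]).reshape(-1, rank)
            unfold = np.moveaxis(X, m, 0).reshape(X.shape[m], -1)
            gram = (o1.T @ o1) * (o2.T @ o2)
            A[m] = unfold @ kr @ np.linalg.pinv(gram)
    return A
```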

  17. Rapid acquisition of data dense solid-state CPMG NMR spectral sets using multi-dimensional statistical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, H. E.; Uribe, E. C.; Shusterman, J. A.

    Tensor-rank decomposition methods have been applied to variable contact time ²⁹Si{¹H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.

  18. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
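
    Fisher scoring is Newton's method with the Hessian replaced by the expected Fisher information. A minimal sketch for a Poisson log-likelihood with a log link, the generic GLM setting rather than the paper's tomographic system model (all names are illustrative):

    ```python
    import numpy as np

    def fisher_scoring_poisson(X, y, n_iter=25, tol=1e-8):
        """Fisher scoring for a Poisson GLM: E[y] = exp(X @ beta)."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            mu = np.exp(X @ beta)
            score = X.T @ (y - mu)             # score vector U(beta)
            info = X.T @ (mu[:, None] * X)     # expected Fisher information
            step = np.linalg.solve(info, score)
            beta += step                       # Jacobi-style simultaneous update
            if np.max(np.abs(step)) < tol:
                break
        return beta
    ```

    A Gauss-Seidel variant would instead update one coordinate (or block) of beta at a time within each sweep, using the freshest values of the other coordinates.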

  19. High-Level Performance Modeling of SAR Systems

    NASA Technical Reports Server (NTRS)

    Chen, Curtis

    2006-01-01

    SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.

  20. SHARE: system design and case studies for statistical health information release

    PubMed Central

    Gardner, James; Xiong, Li; Xiao, Yonghui; Gao, Jingjing; Post, Andrew R; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2013-01-01

    Objectives We present SHARE, a new system for statistical health information release with differential privacy. We present two case studies that evaluate the software on real medical datasets and demonstrate the feasibility and utility of applying the differential privacy framework on biomedical data. Materials and Methods SHARE releases statistical information in electronic health records with differential privacy, a strong privacy framework for statistical data release. It includes a number of state-of-the-art methods for releasing multidimensional histograms and longitudinal patterns. We performed a variety of experiments on two real datasets, the surveillance, epidemiology and end results (SEER) breast cancer dataset and the Emory electronic medical record (EeMR) dataset, to demonstrate the feasibility and utility of SHARE. Results Experimental results indicate that SHARE can deal with heterogeneous data present in medical data, and that the released statistics are useful. The Kullback–Leibler divergence between the released multidimensional histograms and the original data distribution is below 0.5 and 0.01 for seven-dimensional and three-dimensional data cubes generated from the SEER dataset, respectively. The relative error for longitudinal pattern queries on the EeMR dataset varies between 0 and 0.3. While the results are promising, they also suggest that challenges remain in applying statistical data release using the differential privacy framework for higher dimensional data. Conclusions SHARE is one of the first systems to provide a mechanism for custodians to release differentially private aggregate statistics for a variety of use cases in the medical domain. This proof-of-concept system is intended to be applied to large-scale medical data warehouses. PMID:23059729
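
    Multidimensional histograms of the kind described here are typically released via the Laplace mechanism, and the quoted utility metric is the Kullback-Leibler divergence between the released and true distributions. A minimal sketch, not SHARE's actual implementation; the unit L1 sensitivity assumes one record is added or removed, and all names are illustrative:

    ```python
    import numpy as np

    def dp_histogram(counts, epsilon):
        """Release histogram counts under epsilon-differential privacy."""
        noisy = counts + np.random.laplace(scale=1.0 / epsilon, size=counts.shape)
        return np.clip(noisy, 0, None)   # post-processing preserves the guarantee

    def kl_divergence(p, q, tiny=1e-12):
        """KL divergence between two (unnormalized) histograms."""
        p = p / p.sum()
        q = q / q.sum()
        return float(np.sum(p * np.log((p + tiny) / (q + tiny))))
    ```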

  1. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing the norm of a weighted sum of past error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations, and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
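
    For contrast with LCIIS, the standard DIIS step solves a small augmented linear system for the mixing coefficients: it minimizes the norm of the combined error vector subject to the coefficients summing to one. A minimal sketch of that textbook step, not the LCIIS quartic solver (names are illustrative):

    ```python
    import numpy as np

    def diis_coefficients(error_vecs):
        """Coefficients minimizing ||sum_i c_i e_i|| subject to sum_i c_i = 1."""
        m = len(error_vecs)
        B = np.empty((m + 1, m + 1))
        for i in range(m):
            for j in range(m):
                B[i, j] = np.dot(error_vecs[i], error_vecs[j])
        B[:m, m] = -1.0          # Lagrange-multiplier border
        B[m, :m] = -1.0
        B[m, m] = 0.0
        rhs = np.zeros(m + 1)
        rhs[m] = -1.0
        return np.linalg.solve(B, rhs)[:m]   # last entry is the multiplier
    ```

    The next density matrix is then the corresponding linear combination of past iterates; LCIIS differs in replacing this error-norm objective with the Frobenius norm of the density-Fock commutator.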

  2. Influence of Ultra-Low-Dose and Iterative Reconstructions on the Visualization of Orbital Soft Tissues on Maxillofacial CT.

    PubMed

    Widmann, G; Juranek, D; Waldenberger, F; Schullian, P; Dennhardt, A; Hoermann, R; Steurer, M; Gassner, E-M; Puelacher, W

    2017-08-01

    Dose reduction on CT scans for surgical planning and postoperative evaluation of midface and orbital fractures is an important concern. The purpose of this study was to evaluate the variability of various low-dose and iterative reconstruction techniques on the visualization of orbital soft tissues. Contrast-to-noise ratios of the optic nerve and inferior rectus muscle and subjective scores of a human cadaver were calculated from CT with a reference dose protocol (CT dose index volume = 36.69 mGy) and a subsequent series of low-dose protocols (LDPs I-IV: CT dose index volume = 4.18, 2.64, 0.99, and 0.53 mGy) with filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)-50, ASIR-100, and model-based iterative reconstruction. The Dunn multiple comparison test was used to compare each combination of protocols (α = .05). Compared with the reference dose protocol with FBP, statistically significant differences in contrast-to-noise ratios were shown (all P ≤ .012) for the following: 1) optic nerve: LDP-I with FBP; LDP-II with FBP and ASIR-50; LDP-III with FBP, ASIR-50, and ASIR-100; and LDP-IV with FBP, ASIR-50, and ASIR-100; and 2) inferior rectus muscle: LDP-II with FBP; LDP-III with FBP and ASIR-50; and LDP-IV with FBP, ASIR-50, and ASIR-100. Model-based iterative reconstruction showed the best contrast-to-noise ratio in all images and provided similar subjective scores for LDP-II. ASIR-50 had no remarkable effect, and ASIR-100 a small effect, on subjective scores. Compared with a reference dose protocol with FBP, model-based iterative reconstruction may show similar diagnostic visibility of orbital soft tissues at a CT dose index volume of 2.64 mGy. Low-dose technology and iterative reconstruction technology may redefine current reference dose levels in maxillofacial CT. © 2017 by American Journal of Neuroradiology.
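
    The contrast-to-noise ratio used here is the absolute difference between the mean attenuation in a structure and in adjacent reference tissue, divided by the image noise. A minimal sketch (the ROI arrays are illustrative assumptions):

    ```python
    import numpy as np

    def cnr(roi_structure, roi_background, roi_noise):
        """CNR = |mean(structure) - mean(background)| / SD(noise region)."""
        return abs(np.mean(roi_structure) - np.mean(roi_background)) / np.std(roi_noise)

    # e.g. cnr(optic_nerve_hu, orbital_fat_hu, uniform_region_hu)
    ```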

  3. Study of multi-dimensional radiative energy transfer in molecular gases

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.

  4. Multidimensional competences of supply chain managers: an empirical study

    NASA Astrophysics Data System (ADS)

    Shou, Yongyi; Wang, Weijiao

    2017-01-01

    Supply chain manager competences have attracted increasing attention from both practitioners and scholars in recent years. This paper conducted an explorative study to understand the dimensionality of supply chain manager competences. Online job advertisements for supply chain managers were collected as secondary data, since these advertisements reflect employers' real job requirements. We adopted the multidimensional scaling (MDS) technique to process and analyse the data. Five dimensions of supply chain manager competences are identified: generic skills, functional skills, supply chain management (SCM) qualifications and leadership, SCM expertise, and industry-specific and senior management skills. Statistical tests indicate that supply chain manager competence saliences vary in different industries and regions.
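
    Multidimensional scaling places items in a low-dimensional space so that embedded distances approximate the observed dissimilarities. A minimal sketch with scikit-learn; the dissimilarity matrix is an illustrative placeholder, not the job-advertisement data:

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # hypothetical dissimilarities between three competence terms
    D = np.array([[0.0, 0.3, 0.8],
                  [0.3, 0.0, 0.6],
                  [0.8, 0.6, 0.0]])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)   # 2D configuration for interpreting dimensions
    ```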

  5. Composite scores in comparative effectiveness research: counterbalancing parsimony and dimensionality in patient-reported outcomes.

    PubMed

    Schwartz, Carolyn E; Patrick, Donald L

    2014-07-01

    When planning a comparative effectiveness study comparing disease-modifying treatments, competing demands influence choice of outcomes. Current practice emphasizes parsimony, although understanding multidimensional treatment impact can help to personalize medical decision-making. We discuss both sides of this 'tug of war'. We discuss the assumptions, advantages and drawbacks of composite scores and multidimensional outcomes. We describe possible solutions to the multiple comparison problem, including conceptual hierarchy distinctions, statistical approaches, 'real-world' benchmarks of effectiveness and subgroup analysis. We conclude that comparative effectiveness research should consider multiple outcome dimensions and compare different approaches that fit the individual context of study objectives.

  6. Validation of Brief Multidimensional Spirituality/Religiousness Inventory (BMMRS) in Italian Adult Participants and in Participants with Medical Diseases.

    PubMed

    Vespa, Anna; Giulietti, Maria Velia; Spatuzzi, Roberta; Fabbietti, Paolo; Meloni, Cristina; Gattafoni, Pisana; Ottaviani, Marica

    2017-06-01

    This study aimed at assessing the reliability and construct validity of the Brief Multidimensional Measure of Religiousness/Spirituality (BMMRS) in an Italian sample of 353 participants: 58.9% affected by different diseases and 41.1% healthy subjects. Descriptive statistics of internal consistency reliability (Cronbach's coefficient) revealed remarkable consistency and reliability for the different scales (DSE, SpC, SC, CSC, VB, SPY-WELL) and good inter-class correlations (≥.70), indicating good stability of the measures over time. The BMMRS is a useful inventory for the evaluation of the principal spiritual dimensions.
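
    Cronbach's coefficient alpha is computed from the item variances and the variance of the total scale score. A minimal sketch (the data matrix is an illustrative placeholder):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, k_items) matrix of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)
    ```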

  7. Algorithm for loading shot noise microbunching in multi-dimensional, free-electron laser simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot-noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes that are not necessary in one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent spontaneous emission, as tests of the shot-noise algorithm's correctness.

  8. Statistical Downscaling in Multi-dimensional Wave Climate Forecast

    NASA Astrophysics Data System (ADS)

    Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.

    2009-04-01

    Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multiple dimensions of wave climate in terms of a set of clusters projected onto a low-dimensional lattice with a spatial organization, providing Probability Density Functions (PDFs) on the lattice. On the other hand, wind and storm surge depend on the instantaneous local large-scale sea level pressure (SLP) fields, while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict monthly multi-dimensional wave climate. This method establishes relationships between the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) and local wave databases of observations (monthly wave climate SOM PDFs as predictands) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is considered as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sizes of the sea level pressure grid and different temporal domain resolutions are compared to obtain the optimal statistical model that best represents the monthly wave climate at a particular site. In this work we examine the potential skill of this downscaling approach considering perfect-model conditions, but we also analyze the suitability of this methodology for seasonal forecasting and for long-term climate change scenario projection of wave climate.
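
    The nearest-neighbors analog method predicts local wave climate by locating the historical months whose large-scale SLP fields most resemble the target field and averaging the corresponding SOM-based wave-climate PDFs. A minimal sketch with scikit-learn; the array names and the plain Euclidean metric are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def analog_forecast(slp_hist, wave_pdf_hist, slp_target, k=5):
        """Average the wave-climate PDFs of the k closest SLP analogs.

        slp_hist:      (n_months, n_gridpoints) historical SLP predictors
        wave_pdf_hist: (n_months, n_som_cells) monthly SOM PDFs (predictands)
        slp_target:    (n_gridpoints,) SLP field of the month to predict
        """
        nn = NearestNeighbors(n_neighbors=k).fit(slp_hist)
        _, idx = nn.kneighbors(slp_target.reshape(1, -1))
        return wave_pdf_hist[idx[0]].mean(axis=0)
    ```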

  9. An R package for analyzing and modeling ranking data

    PubMed Central

    2013-01-01

    Background In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty's and Koczkodaj's inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Results Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) to the computerization of clinical practice. The mean rank showed that item 4 was the most preferred item and item 3 the least preferred, and a significant difference was found between physicians' preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference for the seven items (labeled "internal/external"), and the second as the variance of those preferences (labeled "push/pull factors"). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman's footrule distance. Conclusions In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data by applying multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose the one most suitable to their specific situations. PMID:23672645

  10. An R package for analyzing and modeling ranking data.

    PubMed

    Lee, Paul H; Yu, Philip L H

    2013-05-14

    In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty's and Koczkodaj's inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) to the computerization of clinical practice. The mean rank showed that item 4 was the most preferred item and item 3 the least preferred, and a significant difference was found between physicians' preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference for the seven items (labeled "internal/external"), and the second as the variance of those preferences (labeled "push/pull factors"). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman's footrule distance. In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data by applying multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose the one most suitable to their specific situations.

  11. [Impact to Z-score Mapping of Hyperacute Stroke Images by Computed Tomography in Adaptive Statistical Iterative Reconstruction].

    PubMed

    Watanabe, Shota; Sakaguchi, Kenta; Hosono, Makoto; Ishii, Kazunari; Murakami, Takamichi; Ichikawa, Katsuhiro

    The purpose of this study was to evaluate the effect of a hybrid-type iterative reconstruction method on Z-score mapping of hyperacute stroke in unenhanced computed tomography (CT) images. We used a hybrid-type iterative reconstruction method [adaptive statistical iterative reconstruction (ASiR)] implemented in a CT system (Optima CT660 Pro advance, GE Healthcare). For 15 normal brain cases, we reconstructed CT images with filtered back projection (FBP) and with ASiR at a blending factor of 100% (ASiR100%). Two standardized normal brain databases were created from the FBP images (FBP-NDB) and the ASiR100% images (ASiR-NDB), and standard deviation (SD) values in the basal ganglia were measured. Z-score mapping was performed for 12 hyperacute stroke cases using FBP-NDB and ASiR-NDB, and Z-score values in the hyperacute stroke and normal areas were compared between FBP-NDB and ASiR-NDB. With ASiR-NDB, the SD value of the standardized brain was decreased by 16%. The Z-score value of ASiR-NDB in the hyperacute stroke area was significantly higher than that of FBP-NDB (p<0.05). Therefore, the use of images reconstructed with ASiR100% for Z-score mapping has the potential to improve the accuracy of Z-score mapping.
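
    Z-score mapping compares each spatially normalized voxel against the normal database, so a lower database SD (as obtained with ASiR100%) yields a larger Z for the same attenuation change. A minimal sketch (array names are illustrative):

    ```python
    import numpy as np

    def z_score_map(ct_volume, ndb_mean, ndb_sd):
        """Voxelwise Z-scores against a normal database (after registration)."""
        return (ct_volume - ndb_mean) / ndb_sd   # ndb_sd assumed nonzero

    # a 16% lower ndb_sd inflates |Z| by about 1.19x for the same deviation
    ```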

  12. For the Love of Statistics: Appreciating and Learning to Apply Experimental Analysis and Statistics through Computer Programming Activities

    ERIC Educational Resources Information Center

    Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.

    2016-01-01

    For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics for environmental and biological sciences students through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…

  13. Statistical Physics for Adaptive Distributed Control

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    A viewgraph presentation on statistical physics for distributed adaptive control is shown. The topics include: 1) The Golden Rule; 2) Advantages; 3) Roadmap; 4) What is Distributed Control? 5) Review of Information Theory; 6) Iterative Distributed Control; 7) Minimizing L(q) Via Gradient Descent; and 8) Adaptive Distributed Control.

  14. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.

    PubMed

    Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B

    2016-01-01

    We investigated the effects of low-dose multi-detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard-dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.

  15. Lab-X-ray multidimensional imaging of processes inside porous media

    NASA Astrophysics Data System (ADS)

    Godinho, Jose

    2017-04-01

    Time-lapse and other multidimensional X-ray imaging techniques have mostly been applied using synchrotron radiation, which limits accessibility and complicates data analysis. Here, we present new time-lapse imaging approaches using laboratory X-ray computed microtomography (CT) to study transformations inside porous media. Specifically, three methods will be presented: 1) quantitative time-lapse radiography to study sub-second processes, for example the penetration of particles into fractures and pores, which is essential to understand how proppants keep fractures open during hydraulic fracturing and how filter cakes form during borehole drilling; 2) combination of time-lapse CT with diffraction tomography to study the transformation between bio-inspired polymorphs in 6D, e.g. mineral phase transformations between the CaCO3 phases ACC, vaterite, and calcite, and between the CaSO4 phases ACS, anhydrite, and gypsum; crystals can be resolved in nanopores down to 7 nm (over 100 times smaller than the resolution of CT), which allows studying the effect of confinement on phase stability and growth rates; and 3) fast iterative helical micro-CT scanning to study samples with a high height-to-width ratio (e.g. long cores) at optimal resolution. Here we show how this can be useful to study the distribution of the products of fluid-mediated mineral reactions throughout longer reaction paths and more representative volumes. Using state-of-the-art reconstruction algorithms allows reducing the scanning times from over ten hours to below two hours, enabling time-lapse studies. It is expected that these new techniques will open new possibilities for time-lapse imaging of a wider range of geological processes using laboratory X-ray CT, thereby increasing the accessibility of multidimensional imaging to a larger number of users and applications in geology.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, X.; Xia, C.; Keppens, R.

    We present the first multidimensional, magnetohydrodynamic simulations that capture the initial formation and long-term sustainment of the enigmatic coronal rain phenomenon. We demonstrate how thermal instability can induce a spectacular display of in situ forming blob-like condensations which then start their intimate ballet on top of initially linear force-free arcades. Our magnetic arcades host chromospheric, transition region, and coronal plasma. Following coronal rain dynamics for over 80 minutes of physical time, we collect enough statistics to quantify blob widths, lengths, velocity distributions, and other characteristics which directly match modern observational knowledge. Our virtual coronal rain displays the deformation of blobs into V-shaped features and interactions of blobs due to mostly pressure-mediated levitations, and gives the first views of blobs that evaporate in situ or are siphoned over the apex of the background arcade. Our simulations pave the way for systematic surveys of coronal rain showers in true multidimensional settings to connect parameterized heating prescriptions with rain statistics, ultimately allowing us to quantify the coronal heating input.

  17. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.

    PubMed

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM obtains the optimal statistical estimator of the density of states from the distribution overlaps, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid, and an aqueous protein solution.
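
    WHAM alternates between estimating the unbiased distribution from all histograms and updating the per-simulation free-energy shifts until self-consistency; DIIS accelerates precisely this fixed-point iteration. A minimal sketch of the plain (unaccelerated) iteration on a 1D grid; names and conventions are illustrative assumptions:

    ```python
    import numpy as np

    def wham(hists, bias, n_counts, beta=1.0, tol=1e-10, max_iter=100000):
        """hists: (S, M) histogram counts from S biased runs over M bins;
        bias: (S, M) bias energies U_i(x_m); n_counts: (S,) samples per run."""
        f = np.zeros(len(hists))                  # free-energy shifts
        numer = hists.sum(axis=0)                 # total counts per bin
        for _ in range(max_iter):
            denom = (n_counts[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
            p = numer / denom
            p /= p.sum()                          # unbiased distribution estimate
            f_new = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
            f_new -= f_new[0]                     # fix the arbitrary offset
            if np.max(np.abs(f_new - f)) < tol:
                break
            f = f_new
        return p, f
    ```

    A DIIS-accelerated variant would extrapolate the next f from the history of (f, f_new - f) pairs instead of taking f_new directly, substantially reducing the iteration count.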

  18. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  19. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    PubMed

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.

  20. Hierarchical and Multidimensional Academic Self-Concept of Commercial Students.

    PubMed

    Yeung; Chui; Lau

    1999-10-01

    Adapting the Marsh (1990) Academic Self-Description Questionnaire (ASDQ), this study examined the academic self-concept of students in a school of commerce in Hong Kong (N = 212). Confirmatory factor analysis found that students clearly distinguished among self-concept constructs in English, Chinese, Math and Statistics, Economics, and Principles of Accounting, and each of these constructs was highly associated with a global Academic self-concept construct, reflecting the validity of each construct in measuring an academic component of self-concept. Domain-specific self-concepts were more highly related to students' intentions of course selection in corresponding areas than in nonmatching areas, further supporting the multidimensionality of the students' academic self-concept. Students' self-concepts in the five curriculum domains can be represented by the global Academic self-concept, supporting the hierarchical structure of students' academic self-concept in an educational institution with a specific focus, such as commercial studies. The academic self-concepts of the commercial students are thus both multidimensional and hierarchical. Copyright 1999 Academic Press.

  1. Reciprocal effects between academic self-concept, self-esteem, achievement, and attainment over seven adolescent years: unidimensional and multidimensional perspectives of self-concept.

    PubMed

    Marsh, Herbert W; O'Mara, Alison

    2008-04-01

    In their influential review, Baumeister, Campbell, Krueger, and Vohs (2003) concluded that self-esteem, the global component of self-concept, has no effect on subsequent academic performance. In contrast, Marsh and Craven's (2006) review of reciprocal effects models from an explicitly multidimensional perspective demonstrated that academic self-concept and achievement are both a cause and an effect of each other. Ironically, both reviews cited classic Youth in Transition studies in support of their respective claims. In definitive tests of these counterclaims, the authors reanalyze these data, including self-esteem (emphasized by Baumeister et al.), academic self-concept (emphasized by Marsh and Craven), and postsecondary educational attainment, using stronger statistical methods based on five waves of data (grade 10 through 5 years after graduation; N=2,213). Integrating apparently discrepant findings under a common theoretical framework based on a multidimensional perspective, academic self-concept had consistent reciprocal effects with both achievement and educational attainment, whereas self-esteem had almost none.

  2. High-frequency stock linkage and multi-dimensional stationary processes

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Bao, Si; Chen, Jingchao

    2017-02-01

    In recent years, China's stock market has experienced dramatic fluctuations; in particular, in the second half of 2014 and in 2015, the market rose sharply and fell quickly. Many classical financial phenomena, such as stock plate linkage, appeared repeatedly during this period. In general, these phenomena have usually been studied using daily-level or minute-level data. Our paper focuses on the linkage phenomenon in Chinese stock 5-second-level data during this extremely volatile period. The method used to select the linkage points and the arbitrage strategy are both based on multi-dimensional stationary processes. A new programmatic method for testing multi-dimensional stationarity is proposed in our paper, with the detailed program presented in the paper's appendix. Because of the existence of the stationary process, the strategy's logarithmic cumulative average return converges by the strong ergodic theorem, which ensures the effectiveness of the stocks' linkage points and a more stable statistical arbitrage strategy.
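
    Stationarity of a linkage spread can be screened series by series with a unit-root test; the paper's multi-dimensional test program lives in its appendix and is not reproduced here. A minimal per-series sketch using statsmodels (all names are illustrative):

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    def is_stationary(series, alpha=0.05):
        """Augmented Dickey-Fuller test: reject the unit-root null if p < alpha."""
        p_value = adfuller(series)[1]
        return p_value < alpha

    # for a linked pair, the log-price spread should test stationary:
    # spread = np.log(price_a) - np.log(price_b); is_stationary(spread)
    ```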

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krause, Josua; Dasgupta, Aritra; Fekete, Jean-Daniel

    Dealing with the curse of dimensionality is a key challenge in high-dimensional data visualization. We present SeekAView to address three main gaps in the existing research literature. First, automated methods like dimensionality reduction or clustering suffer from a lack of transparency in letting analysts interact with their outputs in real time to suit their exploration strategies, and the results often lack interpretability, especially for domain experts not trained in statistics and machine learning. Second, exploratory visualization techniques like scatter plots or parallel coordinates suffer from a lack of visual scalability: it is difficult to present a coherent overview of interesting combinations of dimensions. Third, the existing techniques do not provide a flexible workflow that allows for multiple perspectives into the analysis process by automatically detecting and suggesting potentially interesting subspaces. In SeekAView we address these issues using suggestion-based visual exploration of interesting patterns for building and refining multidimensional subspaces. Compared to the state of the art in subspace search and visualization methods, we achieve higher transparency by showing not only the results of the algorithms, but also interesting dimensions calibrated against different metrics. We integrate a visually scalable design space with an iterative workflow guiding the analysts by choosing the starting points and letting them slice and dice through the data to find interesting subspaces and detect correlations, clusters, and outliers. We present two usage scenarios demonstrating how SeekAView can be applied in real-world data analysis.

  4. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms.

    PubMed

    Tang, Jie; Nett, Brian E; Chen, Guang-Hong

    2009-10-07

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
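
    Total variation regularization penalizes the integrated gradient magnitude of the image; the patchy appearance reported at very low dose is a known side effect of this penalty. A toy denoising sketch of gradient descent on a smoothed TV objective, standing in for (not reproducing) the authors' reconstruction from projection data; all names are illustrative:

    ```python
    import numpy as np

    def tv_denoise(y, lam=0.1, step=0.2, n_iter=200, eps=1e-8):
        """Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x)."""
        x = y.copy()
        for _ in range(n_iter):
            dx = np.diff(x, axis=0, append=x[-1:, :])     # forward differences
            dy = np.diff(x, axis=1, append=x[:, -1:])
            mag = np.sqrt(dx**2 + dy**2 + eps)            # smoothed |grad x|
            px, py = dx / mag, dy / mag
            div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
            x -= step * ((x - y) - lam * div)             # full objective gradient
        return x
    ```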

  5. Multidimensional Raman spectroscopic signature of sweat and its potential application to forensic body fluid identification.

    PubMed

    Sikirzhytski, Vitali; Sikirzhytskaya, Aliaksandra; Lednev, Igor K

    2012-03-09

    This proof-of-concept study demonstrated the potential of Raman microspectroscopy for nondestructive identification of traces of sweat for forensic purposes. Advanced statistical analysis of Raman spectra revealed that dry sweat was intrinsically heterogeneous, and its biochemical composition varies significantly with the donor. As a result, no single Raman spectrum could adequately represent sweat traces. Instead, a multidimensional spectroscopic signature of sweat was built that allowed for the presentation of any single experimental spectrum as a linear combination of two fluorescent backgrounds and three Raman spectral components dominated by the contribution from lactate, lactic acid, urea and single amino acids. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Adaptive Statistical Iterative Reconstruction-V: Impact on Image Quality in Ultralow-Dose Coronary Computed Tomography Angiography.

    PubMed

    Benz, Dominik C; Gräni, Christoph; Mikulicic, Fran; Vontobel, Jan; Fuchs, Tobias A; Possner, Mathias; Clerc, Olivier F; Stehli, Julia; Gaemperli, Oliver; Pazhenkottil, Aju P; Buechel, Ronny R; Kaufmann, Philipp A

    The clinical utility of a latest-generation iterative reconstruction algorithm (adaptive statistical iterative reconstruction-V [ASiR-V]) has yet to be elucidated for coronary computed tomography angiography (CCTA). This study evaluates the impact of ASiR-V on signal, noise, and image quality in CCTA. Sixty-five patients underwent clinically indicated CCTA on a 256-slice CT scanner using an ultralow-dose protocol. Data sets from each patient were reconstructed at 6 different levels of ASiR-V. Signal intensity was measured by placing a region of interest in the aortic root, left main artery (LMA), and right coronary artery (RCA). Similarly, noise was measured in the aortic root. Image quality was visually assessed by 2 readers. Median radiation dose was 0.49 mSv. Image noise decreased with increasing levels of ASiR-V, resulting in a significant increase in signal-to-noise ratio in the RCA and LMA (P < 0.001). Correspondingly, image quality significantly increased with higher levels of ASiR-V (P < 0.001). ASiR-V yields substantial noise reduction and improved image quality, enabling introduction of ultralow-dose CCTA.

  7. Single-indicator-based Multidimensional Sensing: Detection and Identification of Heavy Metal Ions and Understanding the Foundations from Experiment to Simulation

    PubMed Central

    Leng, Yumin; Qian, Sihua; Wang, Yuhui; Lu, Cheng; Ji, Xiaoxu; Lu, Zhiwen; Lin, Hengwei

    2016-01-01

    Multidimensional sensing offers advantages in accuracy, diversity and capability for the simultaneous detection and discrimination of multiple analytes; however, previous reports usually require a complicated synthesis/fabrication process and/or a variety of techniques (or instruments) to acquire signals. Therefore, to take full advantage of this concept, simple designs are highly desirable. Herein, a novel concept is conceived to construct multidimensional sensing platforms based on a single indicator that is capable of showing diverse color/fluorescence responses upon the addition of different analytes. Through extracting hidden information from these responses, such as red, green and blue (RGB) alterations, a triple-channel-based multidimensional sensing platform can be fabricated, and the RGB alterations are further amenable to standard statistical methods. As a proof-of-concept study, a triple-channel sensing platform is fabricated solely using dithizone with the assistance of cetyltrimethylammonium bromide (CTAB) for hyperchromicity and sensitization, which demonstrates superior capabilities in the detection and identification of ten common heavy metal ions at the standard concentrations of China's wastewater-discharge regulations. Moreover, this sensing platform exhibits promising applications in semi-quantitative and even quantitative analysis of individual heavy metal ions with high sensitivity as well. Finally, density functional theory calculations are performed to reveal the foundations of this analysis. PMID:27146105

  8. Data Visualization for ESM and ELINT: Visualizing 3D and Hyper Dimensional Data

    DTIC Science & Technology

    2011-06-01

    A technique to present multiple 2D views was devised by D. Asimov, who assembled multiple two-dimensional scatter-plot views of the hyper-dimensional data into a "grand tour". Reference: "The Grand Tour: A Tool for Viewing Multidimensional Data", D. Asimov, SIAM Journal on Scientific and Statistical Computing, vol. 6, no. 1, pp. 128-143, 1985.

  9. Examining Multidimensional Middle Grade Outcomes after Early Elementary School Grade Retention

    ERIC Educational Resources Information Center

    Hwang, Sophia; Cappella, Elise; Schwartz, Kate

    2016-01-01

    Recently, researchers have begun to employ rigorous statistical methods and developmentally-informed theories to evaluate outcomes for students retained in non-kindergarten early elementary school. However, the majority of this research focuses on academic outcomes. Gaps remain regarding retention's effects on psychosocial outcomes important to…

  10. Relationship between Service Quality, Satisfaction, Motivation and Loyalty: A Multi-Dimensional Perspective

    ERIC Educational Resources Information Center

    Subrahmanyam, Annamdevula

    2017-01-01

    Purpose: This paper aims to identify and test four competing models with the interrelationships between students' perceived service quality, students' satisfaction, loyalty and motivation using structural equation modeling (SEM), and to select the best model using the chi-square difference (Δχ²) statistic test. Design/methodology/approach: The study…

  11. Bayesian analysis of spatially-dependent functional responses with spatially-dependent multi-dimensional functional predictors

    USDA-ARS?s Scientific Manuscript database

    Recent advances in technology have led to the collection of high-dimensional data not previously encountered in many scientific environments. As a result, scientists are often faced with the challenging task of including these high-dimensional data into statistical models. For example, data from sen...

  12. Model-based iterative reconstruction in low-dose CT colonography-feasibility study in 65 patients for symptomatic investigation.

    PubMed

    Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl

    2015-05-01

    To compare image quality on computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered-back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled for the study and underwent STD and LD CTC with filtered-back projection, adaptive statistical iterative reconstruction, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image analyses, and polyp detection were assessed. Objective image noise analysis demonstrated significant noise reduction with the MBIR technique (P < .05) despite acquisition at lower doses. Subjective image analyses were superior for LD MBIR in all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of the colonic wall (three-dimensional), where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Doses were LD (dose-length product, 257.7 mGy-cm) and STD (dose-length product, 483.6 mGy-cm). LD MBIR CTC objectively shows improved image noise using the parameters in our study. Subjectively, image quality is maintained. Polyp detection shows no significant difference but, because of the small numbers, needs further validation. An average dose reduction of 47% can be achieved. This study confirms the feasibility of using MBIR for CTC in a symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  13. Model-based iterative reconstruction for reduction of radiation dose in abdominopelvic CT: comparison to adaptive statistical iterative reconstruction.

    PubMed

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2013-12-01

    To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR (all p<0.01). UL-MBIR was significantly better for subjective image noise and streak artifacts than L-ASIR and UL-ASIR (all p<0.01). There were no significant differences between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65), or diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.

  14. The Use of Computer-Assisted Identification of ARIMA Time-Series.

    ERIC Educational Resources Information Center

    Brown, Roger L.

    This study was conducted to determine the effects of using various levels of tutorial statistical software for the tentative identification of nonseasonal ARIMA models, a statistical technique proposed by Box and Jenkins for the interpretation of time-series data. The Box-Jenkins approach is an iterative process encompassing several stages of…
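
    Box-Jenkins tentative identification inspects the sample ACF and PACF for spikes outside approximate 95% bounds to suggest candidate AR and MA orders. A minimal sketch of such a computer-assisted aid using statsmodels, not the tutorial software studied here (all names are illustrative):

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import acf, pacf

    def tentative_identification(series, nlags=20):
        """Flag ACF/PACF lags outside the ~95% white-noise bounds."""
        bound = 1.96 / np.sqrt(len(series))
        a = acf(series, nlags=nlags)[1:]    # drop lag 0
        p = pacf(series, nlags=nlags)[1:]
        return {"acf_spikes": np.flatnonzero(np.abs(a) > bound) + 1,
                "pacf_spikes": np.flatnonzero(np.abs(p) > bound) + 1}
    ```

    A sharp PACF cutoff at lag p with a decaying ACF suggests AR(p); the mirror pattern suggests MA(q).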

  15. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.

  16. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and the application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate-dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  17. A Model For Selecting An Environmentally Responsive Trait: Evaluating Micro-scale Fitness Through UV-C Resistance and Exposure in Escherichia coli.

    NASA Astrophysics Data System (ADS)

    Schenone, D. J.; Igama, S.; Marash-Whitman, D.; Sloan, C.; Okansinski, A.; Moffet, A.; Grace, J. M.; Gentry, D.

    2015-12-01

    Experimental evolution of microorganisms in controlled microenvironments serves as a powerful tool for understanding the relationship between micro-scale microbial interactions and local- to global-scale environmental factors. In response to iterative and targeted environmental pressures, mutagenesis drives the emergence of novel phenotypes. Current methods to induce expression of these phenotypes require repetitive, time-intensive procedures and do not allow for the continuous monitoring of conditions such as optical density, pH, and temperature. To address this shortcoming, an Automated Dynamic Directed Evolution Chamber is being developed. It will initially produce Escherichia coli cells with an elevated UV-C resistance phenotype and will ultimately be adapted for different organisms and for studying environmental effects. A useful phenotype and environmental factor for examining this relationship are UV-C resistance and exposure. In order to establish a baseline for the device's operational parameters, a UV-C assay was performed on six E. coli replicates with three exposure fluxes across seven iterations. The fluxes were a 0-second exposure (control), 6 seconds at 3.3 J/m2/s, and 40 seconds at 0.5 J/m2/s. After each iteration the cells were regrown and tested for UV-C resistance. We sought to quantify the increase and variability of UV-C resistance among the different fluxes and to observe changes in each replicate at each iteration in terms of variance. We observed that the 0 s control showed no significant increase in resistance, while the 6 s and 40 s fluxes showed increased resistance as the number of iterations increased. A one-million-fold increase in survivability was observed after seven iterations. Statistical analysis using Spearman's rank correlation showed that the 40 s exposure produced more consistently increased resistance, but seven iterations were insufficient to demonstrate statistical significance; to test this further, our experiments will include more iterations. Furthermore, we plan to sequence all the replicates. As adaptation dynamics under intense UV exposure lead to a high rate of change, it would be useful to observe differences in tolerance-related and non-tolerance-related genes between the original and UV-resistant strains.

  18. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

    A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest-match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, the underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower-dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP: one lossless and the others lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images produced with the local minimax distortion have good perceptual fidelity compared with the other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
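
    MDIP generalizes Lempel-Ziv incremental parsing to multiple dimensions. For orientation, the one-dimensional LZ78 parsing it builds on emits (phrase index, next symbol) pairs while growing a dictionary of previously seen phrases; the multidimensional decimation matching of the paper is not reproduced here. A minimal sketch:

    ```python
    def lz78_parse(seq):
        """1D LZ78 incremental parsing: emit (dictionary index, symbol) pairs."""
        dictionary = {(): 0}            # phrase -> index; empty phrase is 0
        phrase, out = (), []
        for s in seq:
            candidate = phrase + (s,)
            if candidate in dictionary:
                phrase = candidate      # keep extending the current match
            else:
                dictionary[candidate] = len(dictionary)
                out.append((dictionary[phrase], s))
                phrase = ()
        if phrase:
            out.append((dictionary[phrase], None))   # flush any unfinished tail
        return out

    print(lz78_parse("abababa"))   # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a')]
    ```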

  19. Arc detection for the ICRF system on ITER

    NASA Astrophysics Data System (ADS)

    D'Inca, R.

    2011-12-01

    The ICRF system for ITER is designed to respect high-voltage breakdown limits. However, arcs can still occur statistically and must be quickly detected and suppressed by shutting the RF power down. For the conception of a reliable and efficient detector, analysis of the mechanism of arcs is necessary to find their unique signature. Numerous systems have been conceived to address the issues of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors, and S-matrix-based detectors. Until now, none of them has succeeded in demonstrating the fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new, theoretically fully compliant detectors (like the GUIDAR), and combination of several detectors to benefit from the advantages of each. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns about extrapolating results from basic experiments and present machines to the ITER-scale ICRF system and about conducting a relevant risk analysis.

  20. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    PubMed

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multidimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically founded comparisons with a wide variety of established classifier paradigms over a range of datasets, and we find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At worst, our method displays no statistical difference from established classifier/dataset combinations in a few pairwise comparisons; crucially, none of the misclassification results produced by our method is worse than that of any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain good classification accuracy.

  1. Performance comparison of LUR and OK in PM2.5 concentration mapping: a multidimensional perspective

    PubMed Central

    Zou, Bin; Luo, Yanqing; Wan, Neng; Zheng, Zhong; Sternberg, Troy; Liao, Yilan

    2015-01-01

    Land Use Regression (LUR) modeling and Ordinary Kriging (OK) interpolation have been widely used to offset the shortcomings of PM2.5 data observed at sparse monitoring sites. However, the traditional point-based strategy for evaluating these methods is limited and can leave unreasonable mapping results undetected. To address this challenge, this study employs 'information entropy', an area-based statistic, along with traditional point-based statistics (e.g. error rate, RMSE) to evaluate the performance of the LUR model and OK interpolation in mapping PM2.5 concentrations in Houston from a multidimensional perspective. The point-based validation reveals significant differences between LUR and OK at different test sites despite similar end-result accuracy (error rate 6.13% vs. 7.01%). Meanwhile, the area-based validation demonstrates that the PM2.5 concentrations simulated by the LUR model exhibit more detailed variation than those interpolated by the OK method (information entropy 7.79 vs. 3.63). The results suggest that LUR modeling better captures the spatial distribution of PM2.5 concentrations than OK interpolation. The significance of this study lies primarily in promoting the integration of point- and area-based statistics for model performance evaluation in air pollution mapping. PMID:25731103
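
    A minimal sketch of the area-based 'information entropy' idea: discretize a mapped concentration surface into bins and compute Shannon entropy, so that a surface with more spatial detail spreads over more bins and scores higher (the bin count and toy surfaces are illustrative assumptions, not the paper's exact computation):

```python
import numpy as np

def map_entropy(surface, bins=64):
    """Shannon entropy (bits) of a mapped concentration surface."""
    counts, _ = np.histogram(surface, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
detailed = rng.normal(35, 10, size=(200, 200))                    # high variability
smooth = np.full((200, 200), 35.0) + rng.normal(0, 0.5, (200, 200))  # near-flat map
print(map_entropy(detailed), map_entropy(smooth))  # detailed > smooth
```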

  2. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification.

    PubMed

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Iterative reconstruction methods have attracted attention as a way to reduce radiation doses in computed tomography (CT). The aim was to investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered back projection (R-FBP, used to establish the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists using a five-point certainty scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. Sensitivity for detecting pancreatic calcification was high with UL-MBIR (0.67-0.89) compared to L-ASIR or UL-ASIR (0.11-0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818-0.860) was comparable to that for L-ASIR (0.696-0.844). Specificity was lower with UL-MBIR (0.79-0.92) than with L-ASIR or UL-ASIR (0.96-0.99), and a significant difference was seen for one reader (P < 0.01). With UL-MBIR, pancreatic calcification can be detected with high sensitivity, although attention should be paid to the slightly lower specificity.

  3. Micromagnetic Simulation of Thermal Effects in Magnetic Nanostructures

    DTIC Science & Technology

    2003-01-01

    Thermal properties of NiFe magnetic nano-elements are calculated. With decreasing size of magnetic nanostructures, thermal effects become increasingly important. The thermal field is assumed to be a Gaussian random process with zero mean, ⟨H_th(t)⟩ = 0; the second-moment (correlation) expression is garbled in the source record. The condition labeled (12), D_⊥ = ∇E(M^(k)) − [∇E(M^(k)) · t] t = 0 for k = 1, ..., m, characterizes the optimal path, which can be found using an iterative scheme; the record is truncated at this point.

  4. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    NASA Astrophysics Data System (ADS)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

    Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure that eliminates the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; an iterative censoring CFAR algorithm is then used to detect ship candidates from each target block adaptively and efficiently. Parallel detection is possible, and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be estimated quickly using an integral-image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
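
    The integral-image trick that makes local clutter statistics cheap to estimate can be sketched as follows: after one cumulative-sum pass, the sum over any window costs four lookups. This illustrates the speed-up only; it is not the paper's G0-distribution parameter estimator.

```python
import numpy as np

def integral_image(img):
    """2-D prefix sums, zero-padded so ii[r, c] = sum of img[:r, :c]."""
    return np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def window_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four integral-image lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
print(window_sum(ii, 1, 1, 3, 3), img[1:3, 1:3].sum())  # both 30.0
# Local means follow by dividing by the window area; a second integral image
# of img**2 gives local variances at the same O(1) cost per window.
```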

  5. A General Family of Limited Information Goodness-of-Fit Statistics for Multinomial Data

    ERIC Educational Resources Information Center

    Joe, Harry; Maydeu-Olivares, Alberto

    2010-01-01

    Maydeu-Olivares and Joe (J. Am. Stat. Assoc. 100:1009-1020, 2005; Psychometrika 71:713-732, 2006) introduced classes of chi-square tests for (sparse) multidimensional multinomial data based on low-order marginal proportions. Our extension provides general conditions under which quadratic forms in linear functions of cell residuals are…

  6. Incremental Validity of Multidimensional Proficiency Scores from Diagnostic Classification Models: An Illustration for Elementary School Mathematics

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, André A.; Wilhelm, Oliver

    2017-01-01

    Diagnostic classification models (DCMs) hold great potential for applications in summative and formative assessment by providing discrete multivariate proficiency scores that yield statistically driven classifications of students. Using data from a newly developed diagnostic arithmetic assessment that was administered to 2032 fourth-grade students…

  7. A Procedure To Detect Test Bias Present Simultaneously in Several Items.

    ERIC Educational Resources Information Center

    Shealy, Robin; Stout, William

    A statistical procedure is presented that is designed to test for unidirectional test bias existing simultaneously in several items of an ability test, based on the assumption that test bias is incipient within the two groups' ability differences. The proposed procedure--Simultaneous Item Bias (SIB)--is based on a multidimensional item response…

  8. A Monte Carlo Approach to Unidimensionality Testing in Polytomous Rasch Models

    ERIC Educational Resources Information Center

    Christensen, Karl Bang; Kreiner, Svend

    2007-01-01

    Many statistical tests are designed to test the different assumptions of the Rasch model, but only a few are directed at detecting multidimensionality. The Martin-Lof test is an attractive approach, its disadvantage being that its null distribution deviates strongly from the asymptotic chi-square distribution for most realistic sample sizes. A Monte…

  9. An Investigation of Sample Size Splitting on ATFIND and DIMTEST

    ERIC Educational Resources Information Center

    Socha, Alan; DeMars, Christine E.

    2013-01-01

    Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…

  10. Calibration of Response Data Using MIRT Models with Simple and Mixed Structures

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2012-01-01

    It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…

  11. Multidimensional Analysis of Linguistic Networks

    NASA Astrophysics Data System (ADS)

    Araújo, Tanya; Banisch, Sven

    Network-based approaches play an increasingly important role in data analysis, even in systems for which a network representation is not immediately apparent. This is particularly true for linguistic networks, which are typically induced from a linguistic data set for which a network perspective is only one of several options for representation. Here we introduce a multidimensional framework for network construction and analysis, with a special focus on linguistic networks. The framework is used to show that the higher the abstraction level of network induction, the harder the interpretation of the topological indicators used in network analysis. Several examples are provided, allowing comparison of different linguistic networks as well as comparison with networks in other fields of application of network theory. The computation and intelligibility of some statistical indicators frequently used in linguistic networks are discussed. This suggests that the field of linguistic networks, by applying statistical tools inspired by network studies in other domains, may in its current state make only a limited contribution to the development of linguistic theory.

  12. Condenser: a statistical aggregation tool for multi-sample quantitative proteomic data from Matrix Science Mascot Distiller™.

    PubMed

    Knudsen, Anders Dahl; Bennike, Tue; Kjeldal, Henrik; Birkelund, Svend; Otzen, Daniel Erik; Stensballe, Allan

    2014-05-30

    We describe Condenser, a freely available, comprehensive open-source tool for merging multidimensional quantitative proteomics data from the Matrix Science Mascot Distiller Quantitation Toolbox into a common format ready for subsequent bioinformatic analysis. A number of different relative quantitation technologies are supported, such as metabolic ¹⁵N incorporation, amino acid stable isotope incorporation, label-free quantitation, and chemical-label quantitation. The program features multiple options for curative filtering of the quantified peptides, allowing the user to choose data quality thresholds appropriate for the current dataset and to ensure the quality of the calculated relative protein abundances. Condenser also features optional global normalization, peptide outlier removal, multiple-testing correction, and calculation of t-test statistics for highlighting and evaluating proteins with significantly altered relative abundances. Condenser provides an attractive addition to the gold-standard quantitative workflow of Mascot Distiller, allowing easy handling of larger multi-dimensional experiments. Source code, binaries, a test data set, and documentation are available at http://condenser.googlecode.com/.

  13. Reduced Radiation Dose with Model-based Iterative Reconstruction versus Standard Dose with Adaptive Statistical Iterative Reconstruction in Abdominal CT for Diagnosis of Acute Renal Colic.

    PubMed

    Fontarensky, Mikael; Alfidja, Agaïcha; Perignon, Renan; Schoenig, Arnaud; Perrier, Christophe; Mulliez, Aurélien; Guy, Laurent; Boyer, Louis

    2015-07-01

    To evaluate the accuracy of reduced-dose abdominal computed tomographic (CT) imaging using new-generation model-based iterative reconstruction (MBIR) for diagnosing acute renal colic, compared with standard-dose abdominal CT with 50% adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved prospective study included 118 patients with symptoms of acute renal colic who underwent two successive CT examinations: standard-dose ASIR 50% and reduced-dose MBIR. Two radiologists independently reviewed both CT examinations for the presence or absence of renal calculi, differential diagnoses, and associated abnormalities. The imaging findings, radiation dose estimates, and image quality of the two CT reconstruction methods were compared. Concordance was evaluated by κ coefficient, and descriptive statistics and the t test were used for statistical analysis. Intraobserver correlation was 100% for the diagnosis of renal calculi (κ = 1). Renal calculus (τ = 98.7%; κ = 0.97) and obstructive upper urinary tract disease (τ = 98.16%; κ = 0.95) were detected, and differential or alternative diagnoses were made (τ = 98.87%; κ = 0.95). MBIR allowed a dose reduction of 84% versus standard-dose ASIR 50% (mean volume CT dose index, 1.7 mGy ± 0.8 [standard deviation] vs 10.9 mGy ± 4.6; mean size-specific dose estimate, 2.2 mGy ± 0.7 vs 13.7 mGy ± 3.9; P < .001) without a conspicuous deterioration in image quality (reduced-dose MBIR vs ASIR 50% mean scores, 3.83 ± 0.49 vs 3.92 ± 0.27, respectively; P = .32) or an increase in noise (reduced-dose MBIR vs ASIR 50% mean, 18.36 HU ± 2.53 vs 17.40 HU ± 3.42, respectively). Its main drawback remains the long reconstruction time (mean, 40 minutes). A reduced-dose protocol with MBIR allowed a dose reduction of 84% without increasing noise and without a conspicuous deterioration in image quality in patients suspected of having renal colic.

  14. Controlling specific locomotor behaviors through multidimensional monoaminergic modulation of spinal circuitries

    PubMed Central

    Musienko, Pavel; van den Brand, Rubia; Märzendorfer, Olivia; Roy, Roland R.; Gerasimenko, Yury; Edgerton, V. Reggie; Courtine, Grégoire

    2012-01-01

    Descending monoaminergic inputs markedly influence spinal locomotor circuits, but the functional relationships between specific receptors and the control of walking behavior remain poorly understood. To identify these interactions, we manipulated serotonergic, dopaminergic, and noradrenergic neural pathways pharmacologically during locomotion enabled by electrical spinal cord stimulation in adult spinal rats in vivo. Using advanced neurobiomechanical recordings and multidimensional statistical procedures, we reveal that each monoaminergic receptor modulates a broad but distinct spectrum of kinematic, kinetic and EMG characteristics, which we expressed into receptor–specific functional maps. We then exploited this catalogue of monoaminergic tuning functions to devise optimal pharmacological combinations to encourage locomotion in paralyzed rats. We found that, in most cases, receptor-specific modulatory influences summed near algebraically when stimulating multiple pathways concurrently. Capitalizing on these predictive interactions, we elaborated a multidimensional monoaminergic intervention that restored coordinated hindlimb locomotion with normal levels of weight bearing and partial equilibrium maintenance in spinal rats. These findings provide new perspectives on the functions of and interactions between spinal monoaminergic receptor systems in producing stepping, and define a framework to tailor pharmacotherapies for improving neurological functions after CNS disorders. PMID:21697376

  15. Metaheuristics-Assisted Combinatorial Screening of Eu2+-Doped Ca-Sr-Ba-Li-Mg-Al-Si-Ge-N Compositional Space in Search of a Narrow-Band Green Emitting Phosphor and Density Functional Theory Calculations.

    PubMed

    Lee, Jin-Woong; Singh, Satendra Pal; Kim, Minseuk; Hong, Sung Un; Park, Woon Bae; Sohn, Kee-Sun

    2017-08-21

    A metaheuristics-based design would be of great help in relieving the enormous experimental burdens faced during the combinatorial screening of a huge, multidimensional search space, while providing the same effect as total enumeration. In order to tackle the high-throughput powder processing complications and to secure practical phosphors, a metaheuristic, the elitism-reinforced nondominated sorting genetic algorithm (NSGA-II), was employed in this study. The NSGA-II iteration targeted two objective functions: the first was to search for higher emission efficacy; the second was to search for narrow-band green color emission. The NSGA-II iteration finally converged on BaLi2Al2Si2N6:Eu2+ phosphors in the Eu2+-doped Ca-Sr-Ba-Li-Mg-Al-Si-Ge-N compositional search space. The BaLi2Al2Si2N6:Eu2+ phosphor, which was synthesized with no human intervention via the assistance of NSGA-II, was a clear single phase and gave acceptable luminescence. The BaLi2Al2Si2N6:Eu2+ phosphor, as well as all other phosphors that appeared during the NSGA-II iterations, was examined in detail by employing powder X-ray diffraction-based Rietveld refinement, X-ray absorption near edge structure, density functional theory calculation, and time-resolved photoluminescence. The thermodynamic stability and the band structure plausibility were confirmed, and, more importantly, a novel approach to energy transfer analysis was introduced for BaLi2Al2Si2N6:Eu2+ phosphors.
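
    At the heart of NSGA-II is Pareto ranking over the two objectives described above (maximize emission efficacy, minimize emission band width). A minimal sketch of nondominated filtering follows; real NSGA-II adds crowding distance, selection, crossover, and mutation, and the candidate tuples here are hypothetical:

```python
def dominates(a, b):
    """a, b = (efficacy, bandwidth_nm); higher efficacy and lower width win."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def nondominated_front(population):
    """Candidates not dominated by any other candidate (Pareto front)."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

candidates = [(0.70, 55.0), (0.62, 48.0), (0.55, 60.0), (0.66, 50.0)]
print(nondominated_front(candidates))
# -> [(0.70, 55.0), (0.62, 48.0), (0.66, 50.0)]; (0.55, 60.0) is dominated
```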

  16. RELAP5 Model of the First Wall/Blanket Primary Heat Transfer System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popov, Emilian L; Yoder Jr, Graydon L; Kim, Seokho H

    2010-06-01

    ITER inductive power operation is modeled and simulated using a system-level computer code to evaluate the behavior of the Primary Heat Transfer System (PHTS) and predict parameter operational ranges. The control algorithm strategy and derivation are also summarized in this report. A major feature of ITER is pulsed operation: the plasma does not burn continuously, but the power is pulsed, with long periods of zero power between pulses. This feature requires active temperature control to maintain a constant blanket inlet temperature and requires accommodation of coolant thermal expansion during the pulse. In view of the transient nature of the power (plasma) operation state, a transient system thermal-hydraulics code was selected: RELAP5. The code has a well-documented history for nuclear reactor transient analyses, it has been benchmarked against numerous experiments, and a large user database of commonly accepted modeling practices exists. The process of heat deposition and transfer in the blanket modules is multi-dimensional and cannot be accurately captured by a one-dimensional code such as RELAP5. To resolve this, a separate CFD calculation of blanket thermal power evolution was performed using the 3-D SC/Tetra thermofluid code. A 1D-3D co-simulation more realistically models FW/blanket internal time-dependent thermal inertia while eliminating uncertainties in the time constant assumed in a 1-D system code. Blanket water outlet temperature and heat release histories for any given ITER pulse operation scenario are calculated. These results provide the basis for developing time-dependent power forcing functions, which are used as input in the RELAP5 calculations.

  17. Simplified energy-balance model for pragmatic multi-dimensional device simulation

    NASA Astrophysics Data System (ADS)

    Chang, Duckhyun; Fossum, Jerry G.

    1997-11-01

    To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS[16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.

  18. A mass, momentum, and energy conserving, fully implicit, scalable algorithm for the multi-dimensional, multi-species Rosenbluth-Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Taitano, W. T.; Chacón, L.; Simakov, A. N.; Molvig, K.

    2015-09-01

    In this study, we demonstrate a fully implicit algorithm for the multi-species, multidimensional Rosenbluth-Fokker-Planck equation which is exactly mass-, momentum-, and energy-conserving, and which preserves positivity. Unlike most earlier studies, we base our development on the Rosenbluth (rather than Landau) form of the Fokker-Planck collision operator, which reduces complexity while allowing for an optimal fully implicit treatment. Our discrete conservation strategy employs nonlinear constraints that force the continuum symmetries of the collision operator to be satisfied upon discretization. We converge the resulting nonlinear system iteratively using Jacobian-free Newton-Krylov methods, effectively preconditioned with multigrid methods for efficiency. Single- and multi-species numerical examples demonstrate the advertised accuracy properties of the scheme, and the superior algorithmic performance of our approach. In particular, the discretization approach is numerically shown to be second-order accurate in time and velocity space and to exhibit manifestly positive entropy production. That is, H-theorem behavior is indicated for all the examples we have tested. The solution approach is demonstrated to scale optimally with respect to grid refinement (with CPU time growing linearly with the number of mesh points), and timestep (showing very weak dependence of CPU time with time-step size). As a result, the proposed algorithm delivers several orders-of-magnitude speedup vs. explicit algorithms.

  19. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of computation required for full-FOV iterative reconstructions has posed a huge challenge for clinical use. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop computer with a single graphics processing unit (GPU). The rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using an object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using data acquired with PROPELLER MRI, the reconstructed images were saved in the Digital Imaging and Communications in Medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, with a total iteration time of 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which was over four times higher than with density compensation. The image sharpness index was improved using the regularized reconstruction. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images; substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstruction to shorten reconstruction time.

  20. Intra-patient comparison of reduced-dose model-based iterative reconstruction with standard-dose adaptive statistical iterative reconstruction in the CT diagnosis and follow-up of urolithiasis.

    PubMed

    Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth; Vardhanabhuti, Varut; Stuckey, Colin; Gutteridge, Catherine; Hyde, Christopher; Roobottom, Carl

    2017-10-01

    To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. • MBIR allows reduced CT dose with similar diagnostic accuracy • MBIR outperforms ASIR when used for the reconstruction of reduced-dose scans • MBIR can be used to accurately assess stones 3 mm and above.

  1. COMPARISON OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASIR™) AND MODEL-BASED ITERATIVE RECONSTRUCTION (VEO™) FOR PAEDIATRIC ABDOMINAL CT EXAMINATIONS: AN OBSERVER PERFORMANCE STUDY OF DIAGNOSTIC IMAGE QUALITY.

    PubMed

    Hultenmo, Maria; Caisander, Håkan; Mack, Karsten; Thilander-Klang, Anne

    2016-06-01

    The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70% ASiR using the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm, depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, Veo reconstruction resulted in significantly improved ratings of the diagnostic usefulness of the image quality compared with ASiR reconstruction; this was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices.

  2. Dynamic re-weighted total variation technique and statistic iterative reconstruction method for x-ray CT metal artifact reduction

    NASA Astrophysics Data System (ADS)

    Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming

    2017-07-01

    Over the years, X-ray computed tomography (CT) has been used successfully in clinical diagnosis. However, when the body of the patient being examined contains metal objects, the reconstructed image is polluted by severe metal artifacts, which affect the diagnosis of disease. In this work, we propose a dynamic re-weighted total variation (DRWTV) technique combined with a statistical iterative reconstruction (SIR) method to reduce these artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. Besides suppressing artifacts and noise, the DRWTV also accelerates SIR convergence. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset: a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm performs best in reducing metal artifacts and protecting tissue details.
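
    The re-weighting idea underlying RWTV (and, with iteration-dependent updates, DRWTV) can be sketched generically: weights are recomputed from the current image so that strong edges are penalized less. This is a generic weight update under stated assumptions, not the authors' exact DRWTV scheme.

```python
import numpy as np

def rwtv_weights(img, eps=1e-3):
    """Re-weighted TV weights: small on edges, large on flat regions."""
    gx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal gradient
    gy = np.diff(img, axis=0, append=img[-1:, :])   # vertical gradient
    mag = np.sqrt(gx**2 + gy**2)
    return 1.0 / (mag + eps)    # recomputed at each outer iteration

img = np.zeros((64, 64)); img[:, 32:] = 1.0         # a single vertical edge
w = rwtv_weights(img)
print(w[10, 31], w[10, 5])   # the edge pixel gets a much smaller weight
```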

  3. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping

    2015-09-15

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors' goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.

  4. Initial phantom study comparing image quality in computed tomography using adaptive statistical iterative reconstruction and new adaptive statistical iterative reconstruction v.

    PubMed

    Lim, Kyungjae; Kwon, Heejin; Cho, Jinhan; Oh, Jongyoung; Yoon, Seongkuk; Kang, Myungjin; Ha, Dongho; Lee, Jinhwa; Kang, Eunju

    2015-01-01

    The purpose of this study was to assess the image quality of a novel advanced iterative reconstruction (IR) method called "adaptive statistical IR V" (ASIR-V) by comparing image noise, contrast-to-noise ratio (CNR), and spatial resolution with those of filtered back projection (FBP) and adaptive statistical IR (ASIR) on computed tomography (CT) phantom images. We performed CT scans at 5 different tube currents (50, 70, 100, 150, and 200 mA) using 3 types of CT phantoms. Scanned images were subsequently reconstructed with 7 different settings: FBP and 3 levels each of ASIR and ASIR-V (30%, 50%, and 70%). Image noise was measured in the first study using a body phantom, CNR was measured in the second study using a contrast phantom, and spatial resolution was measured in the third study using a high-resolution phantom. We compared the image noise, CNR, and spatial resolution among the 7 reconstruction settings to determine whether noise reduction, high CNR, and high spatial resolution could be achieved with ASIR-V. Quantitative analysis of the first and second studies showed that images reconstructed using ASIR-V had reduced image noise and improved CNR compared with those of FBP and ASIR (P < 0.001). Qualitative analysis of the third study also showed that images reconstructed using ASIR-V had significantly improved spatial resolution over those of FBP and ASIR (P < 0.001). Our phantom studies showed that ASIR-V provides a significant reduction in image noise and a significant improvement in CNR as well as spatial resolution. Therefore, this technique has the potential to further reduce the radiation dose without compromising image quality.
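
    A minimal sketch of the CNR figure of merit used in such phantom comparisons (ROI coordinates and image values are placeholders): contrast between an object ROI and the background, divided by the background noise.

```python
import numpy as np

def cnr(image, roi_obj, roi_bg):
    """Contrast-to-noise ratio between an object ROI and a background ROI."""
    obj = image[roi_obj]
    bg = image[roi_bg]
    return abs(obj.mean() - bg.mean()) / bg.std()

rng = np.random.default_rng(1)
img = rng.normal(40.0, 5.0, size=(128, 128))   # background value + noise
img[40:60, 40:60] += 20.0                      # low-contrast insert
print(cnr(img, (slice(40, 60), slice(40, 60)),
          (slice(80, 120), slice(80, 120))))   # roughly contrast / noise = 4
```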

  5. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    NASA Astrophysics Data System (ADS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  6. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baraffe, I.; Pratt, J.; Goffrey, T.

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  7. Harvest: a web-based biomedical data discovery and reporting application development platform.

    PubMed

    Italia, Michael J; Pennington, Jeffrey W; Ruth, Byron; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; Miller, Jeffrey; White, Peter S

    2013-01-01

    Biomedical researchers share a common challenge of making complex data understandable and accessible. This need is increasingly acute as investigators seek opportunities for discovery amidst an exponential growth in the volume and complexity of laboratory and clinical data. To address this need, we developed Harvest, an open source framework that provides a set of modular components to aid the rapid development and deployment of custom data discovery software applications. Harvest incorporates visual representations of multidimensional data types in an intuitive, web-based interface that promotes a real-time, iterative approach to exploring complex clinical and experimental data. The Harvest architecture capitalizes on standards-based, open source technologies to address multiple functional needs critical to a research and development environment, including domain-specific data modeling, abstraction of complex data models, and a customizable web client.

  8. Using partially labeled data for normal mixture identification with application to class definition

    NASA Technical Reports Server (NTRS)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and unsupervised learning processes.
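
    A minimal sketch of the modified E-step, assuming a 1-D normal mixture and a simple class-to-component partition (all names and values are hypothetical): a partially labeled sample spreads its responsibility only over the components of its known class, while an unlabeled sample uses all components.

```python
import numpy as np
from scipy.stats import norm

def em_step(x, labels, class_to_comps, w, mu, sigma):
    """One EM iteration for a 1-D normal mixture with partially labeled data."""
    n, k = len(x), len(w)
    resp = np.zeros((n, k))
    for i in range(n):
        # E-step: restrict responsibilities to the known class, if any
        comps = list(range(k)) if labels[i] is None else class_to_comps[labels[i]]
        dens = np.array([w[j] * norm.pdf(x[i], mu[j], sigma[j]) for j in comps])
        resp[i, comps] = dens / dens.sum()
    # M-step: standard responsibility-weighted updates
    nk = resp.sum(axis=0)
    w = nk / n
    mu = resp.T @ x / nk
    sigma = np.sqrt(np.array([(resp[:, j] * (x - mu[j]) ** 2).sum()
                              for j in range(k)]) / nk)
    return w, mu, sigma

x = np.array([0.1, -0.2, 3.1, 2.8, 6.0, 5.5])
labels = ["A", None, "A", None, "B", None]   # class known, component unknown
class_to_comps = {"A": [0, 1], "B": [2]}     # hypothetical class partition
w, mu, sigma = em_step(x, labels, class_to_comps,
                       np.full(3, 1 / 3), np.array([0.0, 3.0, 6.0]), np.ones(3))
print(np.round(mu, 2))
```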

  9. Multi-dimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations in curvilinear geometry

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacon, Luis

    2015-11-01

    We discuss a new, conservative, fully implicit 2D3V Vlasov-Darwin particle-in-cell algorithm in curvilinear geometry for non-radiative, electromagnetic kinetic plasma simulations. Unlike standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. Here, we extend these algorithms to curvilinear geometry. The algorithm retains its exact conservation properties on curvilinear grids. The nonlinear iteration is effectively accelerated with a fluid preconditioner for weakly to modestly magnetized plasmas, which allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL limit. In this presentation, we introduce the main algorithmic components of the approach and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D (slow shock) and 2D (island coalescence).

  10. Phase Helps Find Geometrically Optimal Gaits

    NASA Astrophysics Data System (ADS)

    Revzen, Shai; Hatton, Ross

    Geometric motion planning describes motions of animals and machines governed by ġ = g A(q) q̇, where the connection A(·) relates shape q and shape velocity q̇ to body-frame velocity g⁻¹ġ ∈ se(3). Measuring the entire connection over a multidimensional q is often infeasible with current experimental methods. We show how using a phase estimator can make tractable the measurement of the local structure of the connection surrounding a periodic motion q(φ) driven by a phase φ ∈ S¹. This approach reduces the complexity of the estimation problem by a factor of dim q. The results suggest that phase estimation can be combined with geometric optimization into an iterative gait optimization algorithm usable on experimental systems, or alternatively, that it can allow the geometric optimality of an observed gait to be detected. ARO W911NF-14-1-0573, NSF 1462555.

  11. A split finite element algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1979-01-01

    An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.

  12. Model Selection for Monitoring CO2 Plume during Sequestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-12-31

    The model selection method developed as part of this project includes four main steps: (1) assessing the connectivity/dynamic characteristics of a large prior ensemble of models; (2) model clustering using multidimensional scaling coupled with k-means clustering; (3) model selection using Bayes' rule in the reduced model space; (4) model expansion using iterative resampling of the posterior models. The fourth step expresses one of the advantages of the method: it provides a built-in means of quantifying the uncertainty in predictions made with the selected models. In our application to plume monitoring, by expanding the posterior space of models, the final ensemble of representations of the geological model can be used to assess the uncertainty in predicting the future displacement of the CO2 plume. The software implementation of this approach is attached here.
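
    Steps (2) and (3) can be sketched in spirit with off-the-shelf tools: embed the models with multidimensional scaling of a pairwise-distance matrix, cluster the embedding with k-means, and keep one representative per cluster. The random distance matrix below is a stand-in for the connectivity/dynamic dissimilarities used in the project.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pts = rng.normal(size=(60, 5))                            # stand-in model responses
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # pairwise distances

# Embed the distance matrix in 2-D, then cluster the embedding
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(d)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(coords)

# One representative model per cluster: the member closest to its centroid
for c in range(4):
    members = np.flatnonzero(km.labels_ == c)
    rep = members[np.argmin(np.linalg.norm(
        coords[members] - km.cluster_centers_[c], axis=1))]
    print(c, rep)
```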

  13. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem in which the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when applied to the analysis of count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is a classical expectation maximization method, applied here not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule, which regularizes the solution while providing, at the same time, a very satisfactory Cash statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In the second case, expectation maximization, compared to Pixon image reconstruction, shows comparable accuracy with a notably reduced computational burden; compared to CLEAN, it shows better fidelity to the measurements with comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
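
    The positivity-preserving expectation maximization update for Poisson count data takes the classical multiplicative form sketched below (with a generic system matrix standing in for RHESSI's modulation patterns):

```python
# ML-EM for Poisson data:  x <- x * ( A^T (y / (A x)) ) / ( A^T 1 )
import numpy as np

def mlem(A, y, n_iter=50):
    x = np.ones(A.shape[1])                # positive initial guess
    norm = A.T @ np.ones(A.shape[0])       # sensitivity term A^T 1
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / norm  # multiplicative update stays positive
    return x

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(40, 10))   # toy response matrix (assumption)
x_true = rng.uniform(0.0, 5.0, size=10)
y = rng.poisson(A @ x_true)                # simulated counts
print(np.round(mlem(A, y), 2))
# In practice the iteration is stopped early, since the stopping rule itself
# acts as the regularizer, as the abstract emphasizes.
```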

  14. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate of the multivariate probability, using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
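
    The core IPF loop can be sketched on a two-way table: alternately rescale rows and columns until both imposed marginals are matched. The article applies the same idea to a multivariate facies probability under bivariate constraints; this toy shows only the fitting iteration.

```python
import numpy as np

def ipf_2d(p0, row_marg, col_marg, n_iter=100, tol=1e-10):
    """Fit a 2-D probability table to given row and column marginals."""
    p = p0.copy()
    for _ in range(n_iter):
        p *= (row_marg / p.sum(axis=1))[:, None]   # match row marginals
        p *= (col_marg / p.sum(axis=0))[None, :]   # match column marginals
        if np.allclose(p.sum(axis=1), row_marg, atol=tol):
            break
    return p

p0 = np.full((3, 3), 1 / 9)                        # initial estimate
p = ipf_2d(p0, np.array([0.5, 0.3, 0.2]), np.array([0.6, 0.3, 0.1]))
print(p.round(3), p.sum(axis=1), p.sum(axis=0))    # marginals reproduced
```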

  15. Translation and Validation of the Multidimensional Dyspnea-12 Questionnaire.

    PubMed

    Amado Diago, Carlos Antonio; Puente Maestu, Luis; Abascal Bolado, Beatriz; Agüero Calvo, Juan; Hernando Hernando, Mercedes; Puente Bats, Irene; Agüero Balbín, Ramón

    2018-02-01

    Dyspnea is a multidimensional symptom, but this multidimensionality is not considered in most dyspnea questionnaires. The Dyspnea-12 takes a multidimensional approach to the assessment of dyspnea, specifically the sensory and the affective response. The objective of this study was to translate the Dyspnea-12 questionnaire into Spanish and validate it. The original English version of the Dyspnea-12 was translated into Spanish and back-translated to analyze its equivalence. Comprehension of the text was verified by analyzing the responses of 10 patients. Reliability and validity of the questionnaire were studied in an independent group of COPD patients attending the pulmonology clinics of Hospital Universitario Marqués de Valdecilla, diagnosed and categorized according to GOLD guidelines. The mean age of the group (n=51) was 65 years and mean FEV1 was 50%. All patients understood all questions of the translated version of the Dyspnea-12. Internal consistency of the questionnaire was α=0.937 and the intraclass correlation coefficient was 0.969 (P<.001). Statistically significant correlations were found with HADS (anxiety r=.608 and depression r=.615), mMRC dyspnea (r=.592), 6MWT (r=-0.445), FEV1 (r=-0.312), all dimensions of the CRQ-SAS (dyspnea r=-0.626; fatigue r=-0.718; emotional function r=-0.663; mastery r=-0.740), CAT (r=0.669), and the baseline dyspnea index (r=-0.615). Dyspnea-12 scores were 10.32 points higher in the symptomatic GOLD groups (B and D) (P<.001). The Spanish version of the Dyspnea-12 is a valid and reliable instrument for studying the multidimensional nature of dyspnea.

  16. Adaptive Statistical Iterative Reconstruction-Applied Ultra-Low-Dose CT with Radiography-Comparable Radiation Dose: Usefulness for Lung Nodule Detection.

    PubMed

    Yoon, Hyun Jung; Chung, Myung Jin; Hwang, Hye Sun; Moon, Jung Won; Lee, Kyung Soo

    2015-01-01

    To assess the performance of adaptive statistical iterative reconstruction (ASIR)-applied ultra-low-dose CT (ULDCT) in detecting small lung nodules. Thirty patients underwent both ULDCT and standard dose CT (SCT). After determining the reference standard nodules, five observers, blinded to the reference standard reading results, independently evaluated SCT and both subsets of ASIR- and filtered back projection (FBP)-driven ULDCT images. Data assessed by observers were compared statistically. Converted effective doses in SCT and ULDCT were 2.81 ± 0.92 and 0.17 ± 0.02 mSv, respectively. A total of 114 lung nodules were detected on SCT as a standard reference. There was no statistically significant difference in sensitivity between ASIR-driven ULDCT and SCT for three out of the five observers (p = 0.678, 0.735, < 0.01, 0.038, and < 0.868 for observers 1, 2, 3, 4, and 5, respectively). The sensitivity of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT in three out of the five observers (p < 0.01 for three observers, and p = 0.064 and 0.146 for two observers). In jackknife alternative free-response receiver operating characteristic analysis, the mean values of figure-of-merit (FOM) for FBP, ASIR-driven ULDCT, and SCT were 0.682, 0.772, and 0.821, respectively, and there were no significant differences in FOM values between ASIR-driven ULDCT and SCT (p = 0.11), but the FOM value of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT and SCT (p = 0.01 and 0.00). Adaptive statistical iterative reconstruction-driven ULDCT delivering a radiation dose of only 0.17 mSv offers acceptable sensitivity in nodule detection compared with SCT and has better performance than FBP-driven ULDCT.

  17. Adaptive Statistical Iterative Reconstruction-Applied Ultra-Low-Dose CT with Radiography-Comparable Radiation Dose: Usefulness for Lung Nodule Detection

    PubMed Central

    Yoon, Hyun Jung; Hwang, Hye Sun; Moon, Jung Won; Lee, Kyung Soo

    2015-01-01

    Objective To assess the performance of adaptive statistical iterative reconstruction (ASIR)-applied ultra-low-dose CT (ULDCT) in detecting small lung nodules. Materials and Methods Thirty patients underwent both ULDCT and standard dose CT (SCT). After determining the reference standard nodules, five observers, blinded to the reference standard reading results, independently evaluated SCT and both subsets of ASIR- and filtered back projection (FBP)-driven ULDCT images. Data assessed by observers were compared statistically. Results Converted effective doses in SCT and ULDCT were 2.81 ± 0.92 and 0.17 ± 0.02 mSv, respectively. A total of 114 lung nodules were detected on SCT as a standard reference. There was no statistically significant difference in sensitivity between ASIR-driven ULDCT and SCT for three out of the five observers (p = 0.678, 0.735, < 0.01, 0.038, and < 0.868 for observers 1, 2, 3, 4, and 5, respectively). The sensitivity of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT in three out of the five observers (p < 0.01 for three observers, and p = 0.064 and 0.146 for two observers). In jackknife alternative free-response receiver operating characteristic analysis, the mean values of figure-of-merit (FOM) for FBP, ASIR-driven ULDCT, and SCT were 0.682, 0.772, and 0.821, respectively, and there were no significant differences in FOM values between ASIR-driven ULDCT and SCT (p = 0.11), but the FOM value of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT and SCT (p = 0.01 and 0.00). Conclusion Adaptive statistical iterative reconstruction-driven ULDCT delivering a radiation dose of only 0.17 mSv offers acceptable sensitivity in nodule detection compared with SCT and has better performance than FBP-driven ULDCT. PMID:26357505

  18. Confirmatory Factor Analysis of Persian Adaptation of Multidimensional Students' Life Satisfaction Scale (MSLSS)

    ERIC Educational Resources Information Center

    Hatami, Gissou; Motamed, Niloofar; Ashrafzadeh, Mahshid

    2010-01-01

    The validity and reliability of the Persian adaptation of the MSLSS in 12-18-year-old middle and high school students (430 students in grades 6-12 in Bushehr port, Iran) were checked using confirmatory factor analysis with the LISREL statistical package. Internal consistency reliability estimates (Cronbach's coefficient [alpha]) were all above the…

  19. The Social Profile of Students in Basic General Education in Ecuador: A Data Analysis

    ERIC Educational Resources Information Center

    Buri, Olga Elizabeth Minchala; Stefos, Efstathios

    2017-01-01

    The objective of this study is to examine the social profile of students who are enrolled in Basic General Education in Ecuador. Both a descriptive and a multidimensional statistical analysis were carried out, based on the data provided by the National Survey of Employment, Unemployment and Underemployment in 2015. The descriptive analysis shows the…

  20. Visualizing Big Data Outliers through Distributed Aggregation.

    PubMed

    Wilkinson, Leland

    2017-08-29

    Visualizing outliers in massive datasets requires statistical pre-processing in order to reduce the scale of the problem to a size amenable to rendering systems like D3, Plotly or analytic systems like R or SAS. This paper presents a new algorithm, called hdoutliers, for detecting multidimensional outliers. It is unique for a) dealing with a mixture of categorical and continuous variables, b) dealing with big-p (many columns of data), c) dealing with big-n (many rows of data), d) dealing with outliers that mask other outliers, and e) dealing consistently with unidimensional and multidimensional datasets. Unlike ad hoc methods found in many machine learning papers, hdoutliers is based on a distributional model that allows outliers to be tagged with a probability. This critical feature reduces the likelihood of false discoveries.
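
    The abstract is concrete enough to sketch the core idea: model the distribution of nearest-neighbor distances and tag as outliers the points whose distances are improbably large. The sketch below is a minimal Python illustration under stated assumptions (an exponential tail model and a Bonferroni-style cutoff); the published hdoutliers additionally uses Leader-style aggregation for big-n data and handles mixed variable types, neither of which is reproduced here.

        import numpy as np

        def nn_distance_outliers(X, alpha=0.05):
            """Tag multidimensional outliers via large nearest-neighbor distances.

            Assumed simplification of the hdoutliers idea: fit an exponential
            to the upper tail of the nearest-neighbor distances and flag points
            beyond a probability-based cutoff.
            """
            n = X.shape[0]
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            np.fill_diagonal(d2, np.inf)        # a point is not its own neighbor
            nn = np.sqrt(d2.min(axis=1))
            tail = np.sort(nn)[n // 2:]         # upper half of the NN distances
            rate = 1.0 / tail.mean()            # exponential tail model
            cutoff = -np.log(alpha / n) / rate  # Bonferroni-style threshold
            return nn > cutoff

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(size=(200, 5)), [[8.0] * 5]])  # one planted outlier
        print(np.where(nn_distance_outliers(X))[0])              # -> [200]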

  1. Evaluating Cellular Polyfunctionality with a Novel Polyfunctionality Index

    PubMed Central

    Larsen, Martin; Sauce, Delphine; Arnaud, Laurent; Fastenackels, Solène; Appay, Victor; Gorochov, Guy

    2012-01-01

    Functional evaluation of naturally occurring or vaccination-induced T cell responses in mice, men and monkeys has in recent years advanced from single-parameter (e.g. IFN-γ-secretion) to much more complex multidimensional measurements. Co-secretion of multiple functional molecules (such as cytokines and chemokines) at the single-cell level is now measurable due primarily to major advances in multiparametric flow cytometry. The very extensive and complex datasets generated by this technology raise the demand for proper analytical tools that enable the analysis of combinatorial functional properties of T cells, hence polyfunctionality. Presently, multidimensional functional measures are analysed either by evaluating all combinations of parameters individually or by summing frequencies of combinations that include the same number of simultaneous functions. Often these evaluations are visualized as pie charts. Whereas pie charts effectively represent and compare average polyfunctionality profiles of particular T cell subsets or patient groups, they do not document the degree or variation of polyfunctionality within a group, nor do they allow more sophisticated statistical analysis. Here we propose a novel polyfunctionality index that numerically evaluates the degree and variation of polyfunctionality, and enables comparative and correlative parametric and non-parametric statistical tests. Moreover, it allows the use of more advanced statistical approaches, such as cluster analysis. We believe that the polyfunctionality index will render polyfunctionality an appropriate end-point measure in future studies of T cell responsiveness. PMID:22860124
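
    The abstract does not reproduce the index itself; the sketch below shows one plausible weighted-sum form in which cells co-expressing more of the n measured functions contribute more heavily (the exponent q and the helper name are assumptions for illustration). Reducing each sample to one number is what enables the parametric, non-parametric and cluster analyses mentioned.

        import numpy as np

        def polyfunctionality_index(freqs, q=1.0):
            """freqs[i]: fraction of responding cells co-expressing exactly i+1
            of the n functions; weight (i/n)**q favors higher co-expression.
            Illustrative form, not necessarily the exact published definition."""
            freqs = np.asarray(freqs, dtype=float)
            n = len(freqs)
            weights = (np.arange(1, n + 1) / n) ** q
            return float((weights * freqs).sum())

        # 50% single-, 30% double-, 15% triple-, 5% quadruple-function cells
        print(polyfunctionality_index([0.50, 0.30, 0.15, 0.05]))   # 0.4375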

  2. Probabilistic-driven oriented Speckle reducing anisotropic diffusion with application to cardiac ultrasonic images.

    PubMed

    Vegas-Sanchez-Ferrero, G; Aja-Fernandez, S; Martin-Fernandez, M; Frangi, A F; Palencia, C

    2010-01-01

    A novel anisotropic diffusion filter is proposed in this work with application to cardiac ultrasonic images. It includes probabilistic models which describe the probability density function (PDF) of tissues and adapts the diffusion tensor to the image iteratively. For this purpose, a preliminary study is performed in order to select the probability models that best fit the statistical behavior of each tissue class in cardiac ultrasonic images. Then, the parameters of the diffusion tensor are defined taking into account the statistical properties of the image at each voxel. When the structure tensor of the probability of belonging to each tissue is included in the diffusion tensor definition, better boundary estimates can be obtained than by calculating the boundaries directly from the image. This is the main contribution of this work. Additionally, the proposed method follows the statistical properties of the image in each iteration. This is a second contribution, since state-of-the-art methods assume that the noise and statistical properties of the image do not change during the filtering process.
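
    For orientation, the classical scalar anisotropic diffusion iteration that such filters generalize; in the method above, the scalar edge-stopping conductance g is replaced by a diffusion tensor adapted at every iteration to the per-tissue intensity statistics (this sketch is plain Perona-Malik, not the proposed filter):

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
            u = img.astype(float).copy()
            g = lambda d: np.exp(-(d / kappa) ** 2)   # scalar edge-stopping function
            for _ in range(n_iter):
                dn = np.roll(u, -1, 0) - u            # differences to 4 neighbors
                ds = np.roll(u, 1, 0) - u
                de = np.roll(u, -1, 1) - u
                dw = np.roll(u, 1, 1) - u
                u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u

        smoothed = perona_malik(np.random.default_rng(0).normal(size=(64, 64)))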

  3. The Biomarker-Surrogacy Evaluation Schema: a review of the biomarker-surrogate literature and a proposal for a criterion-based, quantitative, multidimensional hierarchical levels of evidence schema for evaluating the status of biomarkers as surrogate endpoints.

    PubMed

    Lassere, Marissa N

    2008-06-01

    There are clear advantages to using biomarkers and surrogate endpoints, but concerns about clinical and statistical validity and systematic methods to evaluate these aspects hinder their efficient application. Section 2 is a systematic, historical review of the biomarker-surrogate endpoint literature with special reference to the nomenclature, the systems of classification and the statistical methods developed for their evaluation. In Section 3 an explicit, criterion-based, quantitative, multidimensional hierarchical levels of evidence schema - the Biomarker-Surrogacy Evaluation Schema - is proposed to evaluate and co-ordinate the multiple dimensions (biological, epidemiological, statistical, clinical trial and risk-benefit evidence) of the biomarker-clinical endpoint relationships. The schema systematically evaluates and ranks the surrogacy status of biomarkers and surrogate endpoints using defined levels of evidence. The schema incorporates three independent domains: Study Design, Target Outcome and Statistical Evaluation. Each domain has items ranked from zero to five. An additional category called Penalties incorporates further considerations of biological plausibility, risk-benefit and generalizability. The total score (0-15) determines the level of evidence, with Level 1 the strongest and Level 5 the weakest. The term 'surrogate' is restricted to markers attaining Levels 1 or 2 only. The surrogacy status of markers can then be directly compared within and across different areas of medicine to guide individual, trial-based or drug-development decisions. This schema would facilitate communication between the clinical, research, regulatory, industry and consumer participants necessary for evaluation of the biomarker-surrogate-clinical endpoint relationship in their different settings.
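
    The scoring arithmetic is explicit enough to sketch; the cut-points that map a 0-15 total onto Levels 1-5, and the treatment of Penalties as a subtraction, are assumptions for illustration (the abstract fixes only the domain ranges and the endpoints, Level 1 strongest and Level 5 weakest).

        def surrogacy_level(study_design, target_outcome, statistical_evaluation,
                            penalties=0):
            # each of the three independent domains is ranked from zero to five
            for s in (study_design, target_outcome, statistical_evaluation):
                assert 0 <= s <= 5
            total = max(0, study_design + target_outcome
                        + statistical_evaluation - penalties)   # 0-15
            level = 5 - min(total // 3, 4)   # assumed equal-width banding
            return total, level

        print(surrogacy_level(5, 4, 4, penalties=1))   # (12, 1) under the banding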

  4. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low-data-rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
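
    For orientation, the multiplicative EM building block that these statistical methods start from, in its unaccelerated emission-tomography form (toy system matrix; the paper's alternating minimization algorithm and its accelerations are not reproduced here). Ordered-subsets variants apply the same update using only a subset of the rows of A per sub-iteration.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """MLEM for Poisson data y ~ Poisson(A @ x):
            x <- x * A^T(y / (A x)) / (A^T 1)."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])     # sensitivity image A^T 1
            for _ in range(n_iter):
                ratio = y / np.maximum(A @ x, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
        x_true = np.array([2.0, 3.0])
        print(mlem(A, A @ x_true))   # converges toward [2, 3]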

  5. Sorting points into neighborhoods (SPIN): data analysis and visualization by ordering distance matrices.

    PubMed

    Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E

    2005-05-15

    We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects since they are often associated with a set of hidden variables underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-complete, and therefore an iterative search algorithm with O(n^3) step-complexity is suggested. By using sorting points into neighborhoods (SPIN) to analyze colon cancer expression data, we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis-related genes in our data. Our methodology brings to light the continuous variation of heterogeneity: starting with homogeneous tumor samples and gradually increasing the amount of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. A software package will be available for academic users upon request.
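
    A minimal sketch of the ordering search (not the paper's exact update or scoring rules): repeatedly re-sort the points by a position-weighted sum of their distances until the permutation stabilizes, which pushes mutually close points toward adjacent positions.

        import numpy as np

        def spin_like_ordering(D, n_iter=30):
            n = D.shape[0]
            order = np.arange(n)
            w = np.arange(n) - (n - 1) / 2.0     # signed position weights
            for _ in range(n_iter):
                score = D[order][:, order] @ w   # distance-weighted positions
                new = order[np.argsort(score)]
                if np.array_equal(new, order):
                    break
                order = new
            return order

        # points along a noisy curve, shuffled; the ordering should recover them
        rng = np.random.default_rng(1)
        t = rng.permutation(np.linspace(0.0, 1.0, 40))
        X = np.c_[t, t ** 2] + rng.normal(scale=0.01, size=(40, 2))
        D = np.hypot(*(X[:, None, :] - X[None, :, :]).transpose(2, 0, 1))
        print(t[spin_like_ordering(D)])   # roughly monotone, up to reversal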

  6. Diagnostic accuracy of combined FP-CIT, IBZM, and MIBG scintigraphy in the differential diagnosis of degenerative parkinsonism: a multidimensional statistical approach.

    PubMed

    Südmeyer, Martin; Antke, Christina; Zizek, Tanja; Beu, Markus; Nikolaus, Susanne; Wojtecki, Lars; Schnitzler, Alfons; Müller, Hans-Wilhelm

    2011-05-01

    In vivo molecular imaging of pre- and postsynaptic nigrostriatal neuronal degeneration and sympathetic cardiac innervation with SPECT is used to distinguish idiopathic Parkinson disease (PD) from atypical parkinsonian disorder (APD). However, the diagnostic accuracy of these imaging approaches as stand-alone procedures is often unsatisfying. The aim of this study was therefore to evaluate to which extent diagnostic accuracy can be increased by their combined use together with a multidimensional statistical algorithm. The SPECT radiotracers (123)I-(S)-2-hydroxy-3-iodo-6-methoxy-N-[1-ethyl-2-pyrrodinyl)-methyl]benzamide (IBZM), (123)I-N-ω-fluoropropyl-2β-carbomethoxy-3β-(4-iodophenyl)nortropan (FP-CIT), and meta-(123)I-iodobenzylguanidine (MIBG) were used to assess striatal postsynaptic D(2) receptor binding, striatal presynaptic dopamine transporter binding, and myocardial adrenergic innervation, respectively. Thirty-one PD and 17 APD patients were prospectively investigated. PD and APD diagnoses were established using consensus criteria and reevaluated after 37.4 ± 12.4 and 26 ± 11.6 mo in PD and APD, respectively. Test accuracy (TA) for PD-APD differentiation was computed for all logical (Boolean) combinations of imaging modalities by receiver-operating-characteristic analysis--that is, after multidimensional optimization of cutoff values. Analysis showed moderate TA for PD-APD differentiation using each molecular approach alone (IBZM, 79%; MIBG, 73%; and FP-CIT, 73%). For combined use, the highest TA resulted under the assumption that at least 2 of the 3 biologic markers had to be positive for APD using the following cutoff values: 1.46 or less for IBZM, less than 2.10 for FP-CIT, and greater than 1.43 for MIBG. This algorithm distinguished APD from PD with a sensitivity of 94%, specificity of 94% (TA, 94%), positive predictive value of 89%, and negative predictive value of 97%. Results suggest that the multidimensional combination of FP-CIT, IBZM, and MIBG scintigraphy is likely to significantly increase TA in differentiating PD from APD. The differential diagnosis of degenerative parkinsonism may thus be facilitated.
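
    The reported decision rule is concrete enough to state directly in code (function name assumed; cutoffs as given above):

        def classify_apd(ibzm, fp_cit, mibg):
            """At least 2 of the 3 markers positive for APD -> classify as APD."""
            votes = (ibzm <= 1.46) + (fp_cit < 2.10) + (mibg > 1.43)
            return "APD" if votes >= 2 else "PD"

        print(classify_apd(ibzm=1.40, fp_cit=2.30, mibg=1.60))  # 2 votes -> "APD"
        print(classify_apd(ibzm=1.80, fp_cit=2.50, mibg=1.20))  # 0 votes -> "PD"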

  7. Changes in frontal plane dynamics and the loading response phase of the gait cycle are characteristic of severe knee osteoarthritis: application of a multidimensional analysis technique.

    PubMed

    Astephen, J L; Deluzio, K J

    2005-02-01

    Osteoarthritis of the knee is related to many correlated mechanical factors that can be measured with gait analysis. Gait analysis results in large data sets. The analysis of these data is difficult due to the correlated, multidimensional nature of the measures. A multidimensional model that uses two multivariate statistical techniques, principal component analysis and discriminant analysis, was used to discriminate between the gait patterns of the normal subject group and the osteoarthritis subject group. Nine time varying gait measures and eight discrete measures were included in the analysis. All interrelationships between and within the measures were retained in the analysis. The multidimensional analysis technique successfully separated the gait patterns of normal and knee osteoarthritis subjects with a misclassification error rate of <6%. The most discriminatory feature described a static and dynamic alignment factor. The second most discriminatory feature described a gait pattern change during the loading response phase of the gait cycle. The interrelationships between gait measures and between the time instants of the gait cycle can provide insight into the mechanical mechanisms of pathologies such as knee osteoarthritis. These results suggest that changes in frontal plane loading and alignment and the loading response phase of the gait cycle are characteristic of severe knee osteoarthritis gait patterns. Subsequent investigations earlier in the disease process may suggest the importance of these factors to the progression of knee osteoarthritis.
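
    The two-stage pipeline named in the abstract maps directly onto standard tools; a minimal scikit-learn sketch with random stand-in data (array shapes, 101 time samples per waveform, and the component count are assumptions for illustration):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        # rows: subjects; columns: nine time-varying gait waveforms laid end to
        # end plus eight discrete measures (random stand-ins, not gait data)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 9 * 101 + 8))
        y = np.r_[np.zeros(30), np.ones(30)]   # 0 = normal, 1 = knee OA

        # PCA retains the correlated structure of the measures; the discriminant
        # then separates the groups, mirroring the two-stage model described
        model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        print(cross_val_score(model, X, y, cv=5).mean())   # ~0.5 on random data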

  8. Iterative LQG Controller Design Through Closed-Loop Identification

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.

    1996-01-01

    This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed from the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.

  9. Comparison of 3D displacements of screw-retained zirconia implant crowns into implants with different internal connections with respect to screw tightening.

    PubMed

    Rebeeah, Hanadi A; Yilmaz, Burak; Seidt, Jeremy D; McGlumphy, Edwin; Clelland, Nancy; Brantley, William

    2018-01-01

    Internal conical implant-abutment connections without horizontal platforms may lead to crown displacement during screw tightening and torque application. This displacement may affect the proximal contacts and occlusion of the definitive prosthesis. The purpose of this in vitro study was to evaluate the displacement of custom screw-retained zirconia single crowns into a recently introduced internal conical seal implant-abutment connection in 3D during hand and torque driver screw tightening. Stereolithic acrylic resin models were printed using computed tomography data from a patient missing the maxillary right central incisor. Two different internal connection implant systems (both ∼11.5 mm) were placed in the edentulous site in each model using a surgical guide. Five screw-retained single zirconia computer-aided design and computer-aided manufacturing (CAD-CAM) crowns were fabricated for each system. A pair of high-resolution digital cameras was used to record the relationship of the crown to the model. The crowns were tightened according to the manufacturers' specifications using a torque driver, and the cameras recorded their relative position again. Three-dimensional image correlation was used to measure and compare crown positions, first hand tightened and then torque driven. The displacement test was repeated 3 times for each crown. Commercial image correlation software was used to extract the data and compare the amount of displacement vertically, mesiodistally, and buccolingually. Repeated-measures ANOVA calculated the relative displacements for all 5 specimens for each implant for both crown screw hand tightening and after applied torque. A Student t test with Bonferroni correction was used for pairwise comparison of interest to determine statistical differences between the 2 implants (α=.05). The mean vertical displacements were statistically higher than the mean displacements in the mesiodistal and buccolingual directions for both implants (P<.001). Mean displacements in all directions were statistically significant between iterations for both implants (P<.001). No statistically significant differences were found for displacements between implants at different directions and at different iterations (P>.05). Within the limitations of this in vitro study, screw-retained zirconia crowns tended to displace in all 3 directions, with the highest mean displacement in the vertical direction at iteration 1. However, the amount of displacement of crowns between the 2 different implants was statistically insignificant for all directions and iterations. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  10. Assessing the Primary Schools--A Multi-Dimensional Approach: A School Level Analysis Based on Indian Data

    ERIC Educational Resources Information Center

    Sengupta, Atanu; Pal, Naibedya Prasun

    2012-01-01

    Primary education is essential for economic development in any country. Most studies place more emphasis on the final output (literacy, enrolment, etc.) than on the delivery of the entire primary education system. In this paper, we study school-level data from an Indian district, collected under the official DISE statistics. We…

  11. Citation Patterns of Engineering, Statistics, and Computer Science Researchers: An Internal and External Citation Analysis across Multiple Engineering Subfields

    ERIC Educational Resources Information Center

    Kelly, Madeline

    2015-01-01

    This study takes a multidimensional approach to citation analysis, examining citations in multiple subfields of engineering, from both scholarly journals and doctoral dissertations. The three major goals of the study are to determine whether there are differences between citations drawn from dissertations and those drawn from journal articles; to…

  12. Reduction of Metal Artifact in Single Photon-Counting Computed Tomography by Spectral-Driven Iterative Reconstruction Technique

    PubMed Central

    Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.

    2015-01-01

    Purpose The exciting prospect of Spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifact in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, the spectral data acquisitions with an energy-resolving PCD were simulated using a Monte-Carlo simulator based on EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as an object in this study. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data was first decomposed into three basis functions: photoelectric absorption, Compton scattering and attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input in the reconstruction, while the spatial information of the gold implant was used as a prior. The results from the algorithm were assessed and benchmarked with state-of-the-art reconstruction methods. Results Decomposition results illustrate that gold implant of any shape can be distinguished from other components of the phantom. Additionally, the result from the penalized maximum likelihood iterative reconstruction shows that artifacts are significantly reduced in SPIR reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to other algorithms. Conclusion It is demonstrated that the combination of the additional information from Spectral CT and statistical reconstruction can significantly improve image quality, especially streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019

  13. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification

    PubMed Central

    Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Background Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). Purpose To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). Material and Methods This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Results Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67–0.89) compared to L-ASIR or UL-ASIR (0.11–0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818–0.860) was comparable to that for L-ASIR (0.696–0.844). The specificity was lower with UL-MBIR (0.79–0.92) than with L-ASIR or UL-ASIR (0.96–0.99), and a significant difference was seen for one reader (P < 0.01). Conclusion In UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, we should pay attention to the slightly lower specificity. PMID:27110389

  14. Statistical segmentation of multidimensional brain datasets

    NASA Astrophysics Data System (ADS)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

    This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes some of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
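
    Stage 2 is standard EM fitting of a full-covariance Gaussian mixture; a minimal scikit-learn sketch on invented stand-in two-channel intensities (background masking, logistic discrimination and the MRF stage of the paper are omitted):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # stand-in for co-registered T1/T2 intensities of non-background voxels
        rng = np.random.default_rng(2)
        voxels = np.vstack([
            rng.multivariate_normal([30, 80], [[20, 8], [8, 15]], 4000),    # "CSF"
            rng.multivariate_normal([70, 50], [[15, -5], [-5, 12]], 4000),  # "grey"
            rng.multivariate_normal([95, 30], [[10, 3], [3, 10]], 4000),    # "white"
        ])

        # full (not diagonal) covariance matrices, as advocated in the abstract
        gmm = GaussianMixture(n_components=3, covariance_type="full",
                              random_state=0).fit(voxels)
        labels = gmm.predict(voxels)   # tissue class per voxel
        print(gmm.means_.round(1))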

  15. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measurements of the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require large memory and computational resources. However, recent achievements in the technology of multi-core microprocessors and graphics cards allow real-time operation of the multidimensional methods, as is shown and analyzed in this paper based on real examples of object detection in digital images.

  16. Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions

    PubMed Central

    Li, Haoran; Xiong, Li; Jiang, Xiaoqian

    2014-01-01

    Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
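
    The Laplace-mechanism ingredient is easy to illustrate; the helper below is an assumed sketch, not the DPCopula estimator: it privatizes a single Kendall's tau using a global sensitivity bound of 4/n (changing one of the n records perturbs at most n-1 of the n(n-1)/2 concordance comparisons, each by at most 2 in the numerator).

        import numpy as np
        from scipy.stats import kendalltau

        def dp_kendall_tau(x, y, epsilon, rng=None):
            # epsilon-DP release of Kendall's tau via the Laplace mechanism;
            # the 4/n sensitivity bound is an illustrative assumption
            rng = rng or np.random.default_rng()
            tau = kendalltau(x, y)[0]
            return tau + rng.laplace(scale=(4.0 / len(x)) / epsilon)

        rng = np.random.default_rng(3)
        x = rng.normal(size=1000)
        y = 0.7 * x + rng.normal(scale=0.5, size=1000)
        print(dp_kendall_tau(x, y, epsilon=1.0, rng=rng))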

  17. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data, such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.

  18. Adaptive statistical iterative reconstruction use for radiation dose reduction in pediatric lower-extremity CT: impact on diagnostic image quality.

    PubMed

    Shah, Amisha; Rees, Mitchell; Kar, Erica; Bolton, Kimberly; Lee, Vincent; Panigrahy, Ashok

    2018-06-01

    For the past several years, increased levels of imaging radiation and cumulative radiation to children have been a significant concern. Although several measures have been taken to reduce radiation dose during computed tomography (CT) scans, the newer dose reduction software adaptive statistical iterative reconstruction (ASIR) has been an effective technique in reducing radiation dose. To our knowledge, no published studies assess the effect of ASIR on extremity CT scans in children. To compare radiation dose, image noise, and subjective image quality in pediatric lower extremity CT scans acquired with and without ASIR. The study group consisted of 53 patients imaged on a CT scanner equipped with ASIR software. The control group consisted of 37 patients whose CT images were acquired without ASIR. Image noise, Computed Tomography Dose Index (CTDI) and dose length product (DLP) were measured. Two pediatric radiologists rated the studies in subjective categories: image sharpness, noise, diagnostic acceptability, and artifacts. The CTDI (p = 0.0184) and DLP (p < 0.0002) were significantly decreased with the use of ASIR compared with non-ASIR studies. However, the subjective ratings for sharpness (p < 0.0001) and diagnostic acceptability (p < 0.0128) of the ASIR images were decreased compared with standard, non-ASIR CT studies. Adaptive statistical iterative reconstruction reduces radiation dose for lower extremity CTs in children, but at the expense of diagnostic imaging quality. Further studies are warranted to determine the specific utility of ASIR for pediatric musculoskeletal CT imaging.

  19. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
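
    For orientation, the baseline computation that fastKDE improves on, here with scipy's Gaussian KDE and its rule-of-thumb bandwidth (fastKDE's contribution is choosing kernel shape and bandwidth objectively, orders of magnitude faster; the grid and sample below are illustrative assumptions):

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(4)
        xy = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 10_000).T

        kde = gaussian_kde(xy)                   # Scott's rule bandwidth by default
        xs = ys = np.linspace(-3.0, 3.0, 100)
        grid = np.vstack([g.ravel() for g in np.meshgrid(xs, ys, indexing="ij")])
        pdf = kde(grid).reshape(100, 100)        # pdf[i, j] ~ p(xs[i], ys[j])

        # conditional PDF p(y | x), the capability highlighted in the abstract
        dy = ys[1] - ys[0]
        cond = pdf / (pdf.sum(axis=1, keepdims=True) * dy)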

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flory, John Andrew; Padilla, Denise D.; Gauthier, John H.

    Upcoming weapon programs require an aggressive increase in Application Specific Integrated Circuit (ASIC) production at Sandia National Laboratories (SNL). SNL has developed unique modeling and optimization tools that have been instrumental in improving ASIC production productivity and efficiency, identifying optimal operational and tactical execution plans under resource constraints, and providing confidence in successful mission execution. With ten products and unprecedented levels of demand, a single set of shared resources, highly variable processes, and the need for external supplier task synchronization, scheduling is an integral part of successful manufacturing. The scheduler uses an iterative multi-objective genetic algorithm and a multi-dimensional performance evaluator. Schedule feasibility is assessed using a discrete event simulation (DES) that incorporates operational uncertainty, variability, and resource availability. The tools provide rapid scenario assessments and responses to variances in the operational environment, and have been used to inform major equipment investments and workforce planning decisions in multiple SNL facilities.

  1. A mixed-effects regression model for longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C; Hedeker, Donald

    2006-03-01

    A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.
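
    The quadrature step is standard enough to sketch: for a scalar random effect b ~ N(0, sigma^2), the marginal likelihood contribution is an integral that Gauss-Hermite quadrature approximates by a weighted sum over fixed nodes (the multidimensional case in the paper uses tensor products of such rules). The logistic response g below is an assumed example, not the paper's model.

        import numpy as np

        # nodes and weights of the 15-point Gauss-Hermite rule, which integrates
        # against exp(-t^2); a change of variables maps it onto the normal density
        nodes, weights = np.polynomial.hermite.hermgauss(15)

        def expected_response(sigma, g):
            b = np.sqrt(2.0) * sigma * nodes          # b = sqrt(2) * sigma * t
            return float(((weights / np.sqrt(np.pi)) * g(b)).sum())

        logistic = lambda b: 1.0 / (1.0 + np.exp(-(0.5 + b)))
        print(expected_response(sigma=1.2, g=logistic))   # E[g(b)], b ~ N(0, 1.44)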

  2. Thermal Model of a Current-Carrying Wire in a Vacuum

    NASA Technical Reports Server (NTRS)

    Border, James

    2006-01-01

    A computer program implements a thermal model of an insulated wire carrying electric current and surrounded by a vacuum. The model includes the effects of Joule heating, conduction of heat along the wire, and radiation of heat from the outer surface of the insulation on the wire. The model takes account of the temperature dependences of the thermal and electrical properties of the wire, the emissivity of the insulation, and the possibility that not only can temperature vary along the wire but, in addition, the ends of the wire can be thermally grounded at different temperatures. The resulting second-order differential equation for the steady-state temperature as a function of position along the wire is highly nonlinear. The wire is discretized along its length, and the equation is solved numerically by use of an iterative algorithm that utilizes a multidimensional version of the Newton-Raphson method.
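
    A minimal numerical sketch of the discretized problem under assumed constant coefficients (the program described above additionally lets conductivity, resistivity and emissivity vary with temperature); scipy's fsolve supplies the multidimensional Newton-type iteration:

        import numpy as np
        from scipy.optimize import fsolve

        N, L = 51, 1.0                  # nodes, wire length [m]
        dx = L / (N - 1)
        k_A = 5e-3                      # thermal conductivity * cross-section [W*m/K]
        q_joule = 500.0                 # Joule heating per unit length [W/m] (assumed)
        eps_sig_P = 5e-9                # emissivity * sigma * perimeter [W/(m*K^4)]
        T_left, T_right = 300.0, 350.0  # ends thermally grounded at different temperatures

        def residual(T_inner):
            # conduction + Joule heating - radiation = 0 at each interior node
            T = np.concatenate([[T_left], T_inner, [T_right]])
            cond = k_A * (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx**2
            return cond + q_joule - eps_sig_P * T[1:-1] ** 4

        T = fsolve(residual, np.full(N - 2, 450.0))
        print(round(T.min(), 1), round(T.max(), 1))   # mid-wire approaches ~562 K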

  3. Application of the Disruption Predictor Feature Developer to developing a machine-portable disruption predictor

    NASA Astrophysics Data System (ADS)

    Parsons, Matthew; Tang, William; Feibush, Eliot

    2016-10-01

    Plasma disruptions pose a major threat to the operation of tokamaks which confine a large amount of stored energy. In order to effectively mitigate this damage it is necessary to predict an oncoming disruption with sufficient warning time to take mitigative action. Machine learning approaches to this problem have shown promise but require further developments to address (1) the need for machine-portable predictors and (2) the availability of multi-dimensional signal inputs. Here we demonstrate progress in these two areas by applying the Disruption Predictor Feature Developer to data from JET and NSTX, and discuss topics of focus for ongoing work in support of ITER. The author is also supported under the Fulbright U.S. Student Program as a graduate student in the department of Nuclear, Plasma and Radiological Engineering at the University of Illinois at Urbana-Champaign.

  4. Mixture Model and MDSDCA for Textual Data

    NASA Astrophysics Data System (ADS)

    Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît

    E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue but surprisingly few have tackled the problem of the files attached to e-mails that, in many cases, contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the aim of the MDSDCA algorithm based on the Difference of Convex functions is to optimize the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments are investigated using simulations and textual data.

  5. Restoration of MRI Data for Field Nonuniformities using High Order Neighborhood Statistics

    PubMed Central

    Hadjidemetriou, Stathis; Studholme, Colin; Mueller, Susanne; Weiner, Michael; Schuff, Norbert

    2007-01-01

    MRI at high magnetic fields (> 3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes termed the “bias field”. These lead to nonuniformity of image intensity, greatly complicating further analysis such as registration and segmentation. Existing methods for bias field correction are effective for 1.5 T or 3.0 T MRI, but are not completely satisfactory for higher field data. This paper develops an effective bias field correction for high field MRI based on the assumption that the nonuniformity is smoothly varying in space. Also, nonuniformity is quantified and unmixed using high-order neighborhood statistics of intensity cooccurrences. They are computed within spherical windows of limited size over the entire image. The restoration is iterative and makes use of a novel stable stopping criterion that depends on the scaled entropy of the cooccurrence statistics (the Shannon entropy of the cooccurrence statistics normalized to the effective dynamic range of the image), which is a non-monotonic function of the iterations. The algorithm restores whole head data, is robust to intense nonuniformities present in high field acquisitions, and is robust to variations in anatomy. This algorithm significantly improves bias field correction in comparison to N3 on phantom 1.5 T head data and high field 4 T human head data. PMID:18193095

  6. Subjective health literacy: Development of a brief instrument for school-aged children.

    PubMed

    Paakkari, Olli; Torppa, Minna; Kannas, Lasse; Paakkari, Leena

    2016-12-01

    The present paper focuses on the measurement of health literacy (HL), which is an important determinant of health and health behaviours. HL starts to develop in childhood and adolescence; hence, there is a need for instruments to monitor HL among younger age groups. These instruments are still rare. The aim of the project reported here was, therefore, to develop a brief, multidimensional, theory-based instrument to measure subjective HL among school-aged children. The development of the instrument covered four phases: item generation based on a conceptual framework; a pilot study (n = 405); test-retest (n = 117); and construction of the instrument (n = 3853). All the samples were taken from Finnish 7th and 9th graders. Initially, 65 items were generated, of which 32 items were selected for the pilot study. After item reduction, the instrument contained 16 items. The test-retest phase produced estimates of stability. In the final phase, a 10-item instrument was constructed, referred to as Health Literacy for School-Aged Children (HLSAC). The instrument exhibited a high Cronbach alpha (0.93), and included two items from each of the five predetermined theoretical components (theoretical knowledge, practical knowledge, critical thinking, self-awareness, citizenship). The iterative and validity-driven development process made it possible to construct a brief multidimensional HLSAC instrument. Such instruments are suitable for large-scale studies, and for use with children and adolescents. Validation will require further testing for use in other countries.

  7. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache™ Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache™ Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This 2nd generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URL's (i.e. using OPeNDAP model.nc?varname, or local files). The specified variables are partitioned by time/space and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several N-dim. array libraries (scala breeze, java jblas & netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the pyspark side, many of our science codes already use the numpy and SciPy ecosystems. The talk will cover: the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory & disk usage.

  8. Reciprocity and depressive symptoms in Belgian workers: a cross-sectional multilevel analysis.

    PubMed

    De Clercq, Bart; Clays, Els; Janssens, Heidi; De Bacquer, Dirk; Casini, Annalisa; Kittel, France; Braeckman, Lutgart

    2013-07-01

    This study examines the multidimensional association between reciprocity at work and depressive symptoms. Data from the Belgian BELSTRESS survey (32 companies; N = 24,402) were analyzed. Multilevel statistical procedures were used to account for company-level associations while controlling for individual-level associations. Different dimensions of individual reciprocity were negatively associated with depressive symptoms. On the company level, only vertical emotional reciprocity was negatively associated (β = -4.660; SE = 1.117) independently from individual reciprocity (β = -0.557; SE = 0.042). Complex interactions were found such that workplace reciprocity (1) may not uniformly benefit individuals and (2) related differently to depressive symptoms, depending on occupational group. This study extends the existing literature with evidence on the multidimensional, contextual, and cross-level interaction associations of reciprocity as a key aspect of social capital on depressive symptoms.

  9. Efficient multidimensional free energy calculations for ab initio molecular dynamics using classical bias potentials

    NASA Astrophysics Data System (ADS)

    VandeVondele, Joost; Rothlisberger, Ursula

    2000-09-01

    We present a method for calculating multidimensional free energy surfaces within the limited time scale of a first-principles molecular dynamics scheme. The sampling efficiency is enhanced using selected terms of a classical force field as a bias potential. This simple procedure yields a very substantial increase in sampling accuracy while retaining the high quality of the underlying ab initio potential surface and can thus be used for a parameter free calculation of free energy surfaces. The success of the method is demonstrated by the applications to two gas phase molecules, ethane and peroxynitrous acid, as test case systems. A statistical analysis of the results shows that the entire free energy landscape is well converged within a 40 ps simulation at 500 K, even for a system with barriers as high as 15 kcal/mol.
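
    The reweighting step implicit in this scheme is the standard umbrella-sampling identity; in assumed notation (s a conformational coordinate, V_b the classical bias potential, P_b the distribution sampled on the biased ab initio surface, C an additive constant), the unbiased free energy surface is recovered as

        F(s) = -k_B T \, \ln P_b(s) \; - \; V_b(s) \; + \; C .

    Because the bias acts only as a sampling device, the accuracy of F(s) is set entirely by the underlying ab initio potential, which is the point made above.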

  10. Reasons for leaving nursing: a study among Turkish nurses.

    PubMed

    Gök, Ayşen Uğur; Kocaman, Gülseren

    2011-08-01

    Reasons for the growing nursing shortage are often complex and multidimensional. To explore the phenomenon of why Turkish nurses leave nursing. The sample in this descriptive study was 134 nurses who had left the profession. A snowball sampling method was used to identify subjects and multiple methods were used to elicit reasons for leaving. Data analysis included descriptive statistics. The main reasons for leaving nursing were related to unsatisfactory working conditions and a negative perception of nursing. Of the respondents, 69.4% received education in a non-nursing field. The most popular career choice was teaching (27.6%). The results of this study indicate that working conditions and public opinion adversely affect a nurse's interest in the profession. The results of the study indicate a need to improve working conditions and to approach this subject from a multidimensional perspective.

  11. Iterated function systems for DNA replication

    NASA Astrophysics Data System (ADS)

    Gaspard, Pierre

    2017-10-01

    The kinetic equations of DNA replication are shown to be exactly solved in terms of iterated function systems, running along the template sequence and giving the statistical properties of the copy sequences, as well as the kinetic and thermodynamic properties of the replication process. With this method, different effects due to sequence heterogeneity can be studied, in particular, a transition between linear and sublinear growths in time of the copies, and a transition between continuous and fractal distributions of the local velocities of the DNA polymerase along the template. The method is applied to the human mitochondrial DNA polymerase γ without and with exonuclease proofreading.

  12. The CLASSY clustering algorithm: Description, evaluation, and comparison with the iterative self-organizing clustering system (ISOCLS). [used for LACIE data

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Malek, H.

    1978-01-01

    A clustering method, CLASSY, was developed which alternates maximum likelihood iteration with a procedure for splitting, combining, and eliminating the resulting statistics. The method maximizes the fit of a mixture of normal distributions to the observed first through fourth central moments of the data and produces an estimate of the proportions, means, and covariances in this mixture. The mathematical model which is the basis for CLASSY is described, along with the actual operation of the algorithm. Data comparing the performance of CLASSY and ISOCLS on simulated and actual LACIE data are presented.

  13. ICRH system performance during ITER-Like Wall operations at JET and the outlook for DT campaign

    NASA Astrophysics Data System (ADS)

    Monakhov, Igor; Blackman, Trevor; Dumortier, Pierre; Durodié, Frederic; Jacquet, Philippe; Lerche, Ernesto; Noble, Craig

    2017-10-01

    The performance of the JET ICRH system since the installation of the metal ITER-Like Wall (ILW) has been assessed statistically. The data demonstrate a steady increase of the RF power coupled to plasmas over recent years, with the maximum pulse-average and peak values exceeding 6 MW and 8 MW, respectively, in 2016. Analysis and extrapolation of the power capabilities of conventional JET ICRH antennas are provided and key performance-limiting factors are discussed. The RF plant operational frequency options are presented, highlighting the issues of efficient ICRH application within a foreseeable range of DT plasma scenarios.

  14. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Influence of iterative reconstruction on coronary calcium scores at multiple heart rates: a multivendor phantom study on state-of-the-art CT systems.

    PubMed

    van der Werf, N R; Willemink, M J; Willems, T P; Greuter, M J W; Leiner, T

    2017-12-28

    The objective of this study was to evaluate the influence of iterative reconstruction on coronary calcium scores (CCS) at different heart rates for four state-of-the-art CT systems. Within an anthropomorphic chest phantom, artificial coronary arteries were translated in a water-filled compartment. The arteries contained three different calcifications with low (38 mg), medium (80 mg) and high (157 mg) mass. Linear velocities were applied, corresponding to heart rates of 0, < 60, 60-75 and > 75 bpm. Data were acquired on four state-of-the-art CT systems (CT1-CT4) with routinely used CCS protocols. Filtered back projection (FBP) and three increasing levels of iterative reconstruction (L1-L3) were used for reconstruction. CCS were quantified as Agatston score and mass score. An iterative reconstruction susceptibility (IRS) index was used to assess the susceptibility of the Agatston score (IRS-AS) and mass score (IRS-MS) to iterative reconstruction. IRS values were compared between CT systems and between calcification masses. For each heart rate, differences in CCS of iteratively reconstructed images were evaluated with CCS of FBP images as reference, and indicated as small (< 5%), medium (5-10%) or large (> 10%). Statistical analysis was performed with repeated measures ANOVA tests. While subtle differences were found for Agatston scores of the low mass calcification, medium and high mass calcifications showed increased CCS up to 77% with increasing heart rates. IRS-AS values of CT1-CT4 were 17, 41, 130 and 22% higher than the corresponding IRS-MS values. Not only were the IRS values significantly different between all CT systems, but also between calcification masses. Up to a fourfold increase in IRS was found for the low mass calcification in comparison with the high mass calcification. With increasing iterative reconstruction strength, maximum decreases of 21 and 13% for Agatston and mass scores were found. In total, 21 large differences between Agatston scores from FBP and iterative reconstruction were found, while only five large differences were found between FBP and iterative reconstruction mass scores. Iterative reconstruction results in reduced CCS. The effect of iterative reconstruction on CCS is more prominent with low-density calcifications, high heart rates and increasing iterative reconstruction strength.

  16. Molecular Modeling in Drug Design for the Development of Organophosphorus Antidotes/Prophylactics.

    DTIC Science & Technology

    1986-06-01

    multidimensional statistical QSAR analysis techniques to suggest new structures for synthesis and evaluation. C. Application of quantum chemical techniques to...compounds for synthesis and testing for antidotal potency. E. Use of computer-assisted methods to determine the steric constraints at the active site...modeling techniques to model the enzyme acetylcholinesterase. H. Suggestion of some novel compounds for synthesis and testing for reactivating

  17. Nonlinear Multidimensional Assignment Problems: Efficient Conic Optimization Methods and Applications

    DTIC Science & Technology

    2015-06-24

    Arizona State University, School of Mathematical & Statistical Sciences. The major goals of this project were completed: the exact solution of previously unsolved challenging combinatorial optimization... combinatorial optimization problem, the Directional Sensor Problem, was solved in two ways. First, heuristically in an engineering fashion and second, exactly

  18. A method of using cluster analysis to study statistical dependence in multivariate data

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Card, D. H.; Lyle, G. C.

    1975-01-01

    A technique is presented that uses both cluster analysis and a Monte Carlo significance test of clusters to discover associations between variables in multidimensional data. The method is applied to an example of a noisy function in three-dimensional space, to a sample from a mixture of three bivariate normal distributions, and to the well-known Fisher's Iris data.
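
    The combination of clustering with a Monte Carlo significance test can be sketched as follows; this is an illustrative stand-in rather than the paper's exact procedure, using k-means inertia as the cluster statistic and independent column permutations as the null model, applied to Fisher's Iris data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

def cluster_mc_test(X, k=3, n_mc=200, seed=0):
    """Compare the k-means within-cluster dispersion of X against surrogate
    datasets whose columns are permuted independently, which destroys
    between-variable associations while preserving the marginals."""
    rng = np.random.default_rng(seed)
    stat = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_
    null = []
    for _ in range(n_mc):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        null.append(KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xp).inertia_)
    p = (1 + sum(s <= stat for s in null)) / (n_mc + 1)
    return stat, p

stat, p = cluster_mc_test(load_iris().data)
print(stat, p)    # a small p suggests genuine structure among the variables
```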

  19. Adaptive iterated function systems filter for images highly corrupted with fixed - Value impulse noise

    NASA Astrophysics Data System (ADS)

    Shanmugavadivu, P.; Eliahim Jeevaraj, P. S.

    2014-06-01

    The Adaptive Iterated Function Systems (AIFS) filter presented in this paper has an outstanding potential to attenuate fixed-value impulse noise in images. The filter has two distinct phases, noise detection and noise correction, which use measures of statistics and Iterated Function Systems (IFS), respectively. The performance of the AIFS filter is assessed by three metrics: Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index (MSSIM) and Human Visual Perception (HVP). The quantitative measures PSNR and MSSIM endorse the merit of this filter in terms of degree of noise suppression and detail/edge preservation, respectively, in comparison with the high-performing filters reported in the recent literature. The qualitative measure HVP confirms the noise suppression ability of the devised filter. This computationally simple noise filter broadly finds application wherever images are highly degraded by fixed-value impulse noise.
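
    A two-phase detect-then-correct filter of this kind can be sketched as below; the paper's IFS-based correction is replaced by a median of uncorrupted neighbours purely for illustration, and the fixed impulse values (0 and 255) are assumptions:

```python
import numpy as np

def detect_and_correct(img, low=0, high=255, win=1):
    """Phase 1 detects pixels holding the fixed impulse values; phase 2
    corrects only those pixels from uncorrupted neighbours."""
    noisy = (img == low) | (img == high)
    out = img.astype(float).copy()
    H, W = img.shape
    for i, j in zip(*np.nonzero(noisy)):
        i0, i1 = max(i - win, 0), min(i + win + 1, H)
        j0, j1 = max(j - win, 0), min(j + win + 1, W)
        patch = img[i0:i1, j0:j1]
        good = patch[(patch != low) & (patch != high)]
        if good.size:                      # leave pixel as-is if no clean neighbour
            out[i, j] = np.median(good)
    return out.astype(img.dtype)

img = np.full((8, 8), 128, dtype=np.uint8)
img[2, 3], img[5, 5] = 0, 255              # inject fixed-value impulses
print(detect_and_correct(img)[2, 3])       # restored toward the local median
```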

  20. The solution of radiative transfer problems in molecular bands without the LTE assumption by accelerated lambda iteration methods

    NASA Technical Reports Server (NTRS)

    Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.

    1991-01-01

    An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line formation problems in stellar atmospheres, is adapted and applied to the solution of the NLTE molecular band radiative transfer in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the earth's atmosphere) is described.
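
    The core idea of accelerated lambda iteration can be shown on a toy scattering problem S = εB + (1 − ε)ΛS, with the diagonal of Λ used as the approximate operator; this is a schematic illustration, not the authors' molecular-band formulation, and the kernel below is an arbitrary toy choice:

```python
import numpy as np

def ali_solve(Lam, eps=1e-3, B=1.0, n_iter=500, tol=1e-10):
    """Solve S = eps*B + (1-eps) * Lam @ S by accelerated lambda iteration,
    preconditioning the residual with the diagonal of Lam as Lam*."""
    S = np.full(Lam.shape[0], eps * B)                   # cold start
    precond = 1.0 / (1.0 - (1.0 - eps) * np.diag(Lam))   # (1 - (1-eps)Lam*)^-1
    for _ in range(n_iter):
        residual = eps * B + (1.0 - eps) * (Lam @ S) - S
        S = S + precond * residual                       # precond=1 gives plain iteration
        if np.max(np.abs(residual)) < tol:
            break
    return S

# toy lambda operator: exponential kernel with spectral radius just below one
n = 100
idx = np.arange(n)
Lam = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
Lam /= 1.001 * Lam.sum(axis=1, keepdims=True)
print(ali_solve(Lam)[:5])
```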

  1. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity.

    PubMed

    Dummer, Benjamin; Wieland, Stefan; Lindner, Benjamin

    2014-01-01

    A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme in which we simulate a single neuron over several generations. In each generation, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime, close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of spike trains in the recurrent network.
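
    The renewal variant (i) can be sketched in a few lines: each generation resamples interspike intervals from the previous generation's output to build surrogate input trains for a leaky integrate-and-fire neuron. All parameter values are arbitrary toy choices, and the input rate is pinned between generations purely to keep the toy stable:

```python
import numpy as np

def lif_generation(isi_pool, n_inputs=100, T=50.0, dt=1e-3, seed=0,
                   tau=0.02, v_th=1.0, w=0.06):
    """One generation: drive a leaky integrate-and-fire neuron with n_inputs
    surrogate renewal trains whose ISIs are resampled from isi_pool, and
    return the output interspike intervals."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    drive = np.zeros(n_steps)                      # summed input spikes per bin
    for _ in range(n_inputs):
        t = rng.choice(isi_pool) * rng.random()    # random initial phase
        while t < T:
            drive[int(t / dt)] += 1
            t += rng.choice(isi_pool)
    v, spikes = 0.0, []
    for k in range(n_steps):
        v += dt * (-v / tau) + w * drive[k]
        if v >= v_th:
            spikes.append(k * dt)
            v = 0.0                                # reset after a spike
    return np.diff(spikes)

isis = np.random.default_rng(1).exponential(0.1, 5000)   # generation 0: Poisson-like
for gen in range(4):
    isis = lif_generation(isis, seed=gen)
    print(gen, round(isis.mean(), 4), round(isis.std() / isis.mean(), 3))
    isis = isis * (0.1 / isis.mean())              # pin the input rate (toy simplification)
```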

  2. The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Xu, X.; Tong, S.; Wang, L.

    2017-12-01

    Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion rests on second-order statistics and cannot achieve multiple attenuation when the primaries and multiples are non-orthogonal. In order to solve this problem, we combine the feedback iteration method based on the wave equation with improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filter to obtain a more accurate matching result. Finally, we apply an improved FastICA algorithm, based on the maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no prior information is needed to predict the multiples, and the separation results are better. The method has been applied to several synthetic datasets generated by finite-difference modelling and to the Sigsbee2B model multiple data, in which the primaries and multiples are non-orthogonal. The experiments show that after three to four iterations we obtain satisfactory multiple predictions. Using our matching method and FastICA adaptive multiple subtraction, we can not only preserve the energy of the effective wavefield in the seismic records but also effectively suppress the free-surface multiples, especially those related to the middle and deep sections.
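
    The separation step can be illustrated with scikit-learn's FastICA on a toy record: the observed trace and the matched multiple prediction form a two-channel mixture that ICA separates by maximising non-Gaussianity, with no orthogonality assumption. The traces below are synthetic stand-ins, not the paper's data or matching filter:

```python
import numpy as np
from sklearn.decomposition import FastICA

# synthetic stand-ins for a recorded trace and its matched multiple prediction
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
primary = np.sign(np.sin(9.0 * t**2)) * np.exp(-3.0 * t)
multiple = 0.6 * np.roll(primary, 400)             # toy free-surface multiple
record = primary + multiple + 0.01 * rng.standard_normal(t.size)

X = np.column_stack([record, multiple])            # two-channel observation
sources = FastICA(n_components=2, random_state=0).fit_transform(X)
# sources holds the primary and multiple estimates, up to scale and sign
```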

  3. Right adrenal vein: comparison between adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    PubMed

    Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M

    2018-06-01

    To compare right adrenal vein (RAV) visualisation and degree of contrast enhancement on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 for the expert and 93% versus 75%, p=0.002 for the beginner). RAV visualisation confidence ratings with MBIR were significantly greater than with ASiR (p<0.0001 for both the beginner and the expert). The mean background noise was significantly lower with MBIR than with ASiR (p<0.0001). Mean CT attenuation values of the RAV, right adrenal gland, IVC, and hepatic vein were comparable between the two techniques (p=0.12-0.91). Mean CT attenuation values of the bilateral renal veins were significantly higher with MBIR than with ASiR (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to improved RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  4. Psychometric properties of the Polish version of the Multidimensional Fatigue Inventory-20 in cancer patients.

    PubMed

    Buss, Tomasz; Kruk, Agnieszka; Wiśniewski, Piotr; Modlinska, Aleksandra; Janiszewska, Justyna; Lichodziejewska-Niemierko, Monika

    2014-10-01

    Multidimensional questionnaires estimating cancer-related fatigue (CRF) as a symptom cluster or a clinical syndrome have primarily been used and validated in English-speaking populations. However, cultural issues and language peculiarities can affect CRF assessment. The main aims of this study were to evaluate the psychometric properties of the Polish version of the Multidimensional Fatigue Inventory-20 (MFI-20) and to deliver to clinicians a multidimensional tool for CRF assessment in Polish-speaking patients with cancer. After forward-backward translation procedures, the Polish version of the MFI-20 was administered to 340 cancer patients. The Polish MFI-20 was appraised in terms of acceptability, reliability, and validity. Internal consistency was assessed by calculating Cronbach's alpha coefficients. Structural validity was evaluated with confirmatory factor analysis. The translated MFI-20 was well accepted; 90% of subjects fully completed the questionnaire. The overall Cronbach's alpha coefficient was 0.9, with subscale values ranging from 0.57 to 0.81. All correlation coefficients among the Numeric Rating Scale-fatigue, the fatigue-related items from the European Organization for Research and Treatment of Cancer Quality of Life Core-30 questionnaire, and the MFI-20 were statistically significant (P < 0.001). Confirmatory factor analysis demonstrated good structural validity and revealed only three dimensions in the Polish version of the MFI-20: physical and mental fatigue, as well as reduced motivation. The Polish version of the MFI-20 is a well-accepted, reliable, and valid instrument to assess CRF in Polish cancer patients. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  5. Ultralow dose computed tomography attenuation correction for pediatric PET CT using adaptive statistical iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brady, Samuel L., E-mail: samuel.brady@stjude.org; Shulkin, Barry L.

    2015-02-15

    Purpose: To develop ultralow dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% volume computed tomography dose index (0.39/3.64; mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% of the nondose-reduced CTAC image for 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution was found for PET images reconstructed with CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2). Noise magnitude in dose-reduced patient images increased but was not statistically different from predose-reduced patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  6. Integrated Array/Metadata Analytics

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

    Data comes in various forms and types, and integration usually presents a problem that is often simply ignored or solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type found at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive or entirely absent in modern relational DBMS. Recognizing this, we extended SQL with a new SQL/MDA part, seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.

  7. Multidimensional Scaling Analysis of the Dynamics of a Country Economy

    PubMed Central

    Mata, Maria Eugénia

    2013-01-01

    This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, identifying the similarities and dissimilarities that mark periods of prosperity and crisis, growth and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for Portuguese society in the rising globalization process. PMID:24294132
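
    The mapping step of such an analysis can be sketched with scikit-learn's MDS: years become points in a 2-D map whose distances mirror the dissimilarity of their macroeconomic profiles. The indicators below are random stand-ins, not the paper's historical series:

```python
import numpy as np
from sklearn.manifold import MDS

# hypothetical standardized macro indicators: rows are years, columns are
# variables such as budget balance, current account and GDP growth
rng = np.random.default_rng(0)
X = rng.standard_normal((150, 3)).cumsum(axis=0)     # smooth toy series
X = (X - X.mean(axis=0)) / X.std(axis=0)

coords = MDS(n_components=2, random_state=0).fit_transform(X)
# nearby points in the 2-D map correspond to years with similar profiles,
# so clusters of years stand in for similar historical periods
print(coords[:3])
```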

  8. Parametric instabilities and their control in multidimensional nonuniform gain media

    NASA Astrophysics Data System (ADS)

    Charbonneau-Lefort, Mathieu; Afeyan, Bedros; Fejer, Martin

    2007-11-01

    In order to control parametric instabilities in large scale, long pulse laser-produced plasmas, optical mixing techniques seem most promising [1]. We examine ways of controlling the growth of some modes while creating other unstable ones in nonuniform gain media, including the effects of transverse localization of the pump wave. We show that multidimensional effects are essential to understanding laser-gain medium interactions [2] and that one-dimensional models such as the celebrated Rosenbluth result [3] can be misleading [4]. These findings are verified in experiments carried out in chirped quasi-phase-matched gratings in optical parametric amplifiers, where thousands of shots can be taken and statistically significant and stable results obtained. [1] B. Afeyan, et al., IFSA Proceedings, 2003. [2] M. M. Sushchik and G. I. Freidman, Radiofizika 13, 1354 (1970). [3] M. N. Rosenbluth, Phys. Rev. Lett. 29, 565 (1972). [4] M. Charbonneau-Lefort, PhD thesis, Stanford University, 2007.

  9. Facilities Performance Indicators Report, 2008-09

    ERIC Educational Resources Information Center

    Hills, Christina, Ed.

    2010-01-01

    This paper features another expanded Web-based Facilities Performance Indicators Report (FPI). The purpose of APPA's Facilities Performance Indicators is to provide a representative set of statistics about facilities in educational institutions. The 2008-09 iteration of the Web-based Facilities Performance Indicators Survey was posted and…

  10. Solution of a tridiagonal system of equations on the finite element machine

    NASA Technical Reports Server (NTRS)

    Bostic, S. W.

    1984-01-01

    Two parallel algorithms for the solution of tridiagonal systems of equations were implemented on the Finite Element Machine. The Accelerated Parallel Gauss method, an iterative method, and the Buneman algorithm, a direct method, are discussed and execution statistics are presented.
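
    For context, the sequential baseline that such parallel tridiagonal solvers are measured against is the Thomas algorithm; a sketch follows (this is not the paper's parallel Gauss or Buneman implementation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a is the sub-diagonal
    (length n-1), b the diagonal (length n), c the super-diagonal
    (length n-1), d the right-hand side."""
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check against a dense solve
rng = np.random.default_rng(0)
n = 6
a, c = rng.random(n - 1), rng.random(n - 1)
b, d = 4 + rng.random(n), rng.random(n)         # diagonally dominant system
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))
```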

  11. Pain assessment in cats undergoing ovariohysterectomy by midline or lateral celiotomy through use of a previously validated multidimensional composite pain scale.

    PubMed

    Oliveira, Jéssica Pecene; Mencalha, Rodrigo; Sousa, Carlos Augusto dos Santos; Abidu-Figueiredo, Marcelo; Jorge, Síria da Fonseca

    2014-10-01

    To assess pain in the immediate postoperative period in cats submitted to two different celiotomy techniques for ovariohysterectomy. Fourteen healthy female cats up to three years old, with a mean weight of 2.75 kg and without breed specification, were used in this double-blind experiment. The animals were randomly assigned to two treatments: I, ovariohysterectomy by lateral approach (LA); or II, by midline approach (MA). The anesthesia consisted of acepromazine (0.1 mg.kg-1) and midazolam (0.25 mg.kg-1), followed by isoflurane vaporization to induce and maintain hypnosis. A bolus of fentanyl (5 μg.kg-1) was administered intravenously to provide intraoperative analgesia. After surgery, pain scores were assessed through a multidimensional composite pain scale at four different times. In general, all factors related to psychomotor changes and pain expression showed higher scores in cats neutered by LA, but only psychomotor changes and total pain score presented statistical differences (p<0.05). The animals that underwent lateral celiotomy showed higher pain scores at 1, 4 and 6 hours after surgery. Multidimensional analgesic scales were highly reliable. There was a tendency for the cats neutered by the lateral approach to suffer more postoperative pain, including requiring a larger number of analgesic rescues.

  12. Multi-dimensional self-esteem and magnitude of change in the treatment of anorexia nervosa.

    PubMed

    Collin, Paula; Karatzias, Thanos; Power, Kevin; Howard, Ruth; Grierson, David; Yellowlees, Alex

    2016-03-30

    Self-esteem improvement is one of the main targets of inpatient eating disorder programmes. The present study sought to examine multi-dimensional self-esteem and magnitude of change in eating psychopathology among adults participating in a specialist inpatient treatment programme for anorexia nervosa. A standardised assessment battery, including multi-dimensional measures of eating psychopathology and self-esteem, was completed pre- and post-treatment for 60 participants (all white Scottish female, mean age=25.63 years). Statistical analyses indicated that self-esteem improved with eating psychopathology and weight over the course of treatment, but that improvements were domain-specific and small in size. Global self-esteem was not predictive of treatment outcome. Dimensions of self-esteem at baseline (Lovability and Moral Self-approval), however, were predictive of magnitude of change in dimensions of eating psychopathology (Shape and Weight Concern). Magnitude of change in Self-Control and Lovability dimensions were predictive of magnitude of change in eating psychopathology (Global, Dietary Restraint, and Shape Concern). The results of this study demonstrate that the relationship between self-esteem and eating disorder is far from straightforward, and suggest that future research and interventions should focus less exclusively on self-esteem as a uni-dimensional psychological construct. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. State Energy Data System

    EIA Publications

    2017-01-01

    The State Energy Data System (SEDS) is the U.S. Energy Information Administration's (EIA) source for comprehensive state energy statistics. Included are estimates of energy production, consumption, prices, and expenditures broken down by energy source and sector. Production and consumption estimates begin with the year 1960 while price and expenditure estimates begin with 1970. The multidimensional completeness of SEDS allows users to make comparisons across states, energy sources, sectors, and over time.

  14. Accuracy Quantification of the Loci-CHEM Code for Chamber Wall Heat Transfer in a GO2/GH2 Single Element Injector Model Problem

    NASA Technical Reports Server (NTRS)

    West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin

    2006-01-01

    A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration, and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were obtained with three variations of the k-omega turbulence model.

  15. Accuracy Quantification of the Loci-CHEM Code for Chamber Wall Heat Fluxes in a GO2/GH2 Single Element Injector Model Problem

    NASA Technical Reports Server (NTRS)

    West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin

    2006-01-01

    A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration, and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were also obtained with three variations of the k-omega turbulence model.

  16. Measurement of the relationship between perceived and computed color differences

    NASA Astrophysics Data System (ADS)

    García, Pedro A.; Huertas, Rafael; Melgosa, Manuel; Cui, Guihua

    2007-07-01

    Using simulated data sets, we have analyzed some mathematical properties of different statistical measurements that have been employed in previous literature to test the performance of color-difference formulas. Specifically, the properties of the combined index PF/3 (performance factor obtained as the average of three terms), widely employed in the current literature, have been considered. A new index named standardized residual sum of squares (STRESS), employed in multidimensional scaling techniques, is recommended. The main difference between PF/3 and STRESS is that the latter is simpler and allows inferences on the statistical significance of the difference between two color-difference formulas with respect to a given set of visual data.
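
    The STRESS index has a compact closed form; a sketch of one common formulation, where dE are computed colour differences and dV the corresponding visual differences:

```python
import numpy as np

def stress(dE, dV):
    """STRESS index between computed colour differences dE and visual
    differences dV: 0 means perfect agreement, larger is worse."""
    dE, dV = np.asarray(dE, float), np.asarray(dV, float)
    F = np.sum(dE * dV) / np.sum(dV ** 2)        # optimal scaling factor
    return 100.0 * np.sqrt(np.sum((dE - F * dV) ** 2) / np.sum(F ** 2 * dV ** 2))

print(round(stress([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]), 2))
```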

  17. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    PubMed

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized 2.5-mm multiplanar reformations with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher with MBIR than ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR-reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.
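
    The SNR and CNR figures of merit used above are simple ROI statistics; a sketch, noting that several CNR definitions exist and the background-noise normalisation here is one common assumption:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of one ROI: mean attenuation over its SD."""
    roi = np.asarray(roi, float)
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio between two tissues, normalised by the
    noise (SD) of a background region."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(background, ddof=1)

# hypothetical HU samples from three regions
caudate, white_matter, ventricle = [38, 40, 39, 41], [28, 30, 29, 31], [4, 6, 5, 7]
print(snr(caudate), cnr(caudate, white_matter, ventricle))
```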

  18. Iterative blip-summed path integral for quantum dynamics in strongly dissipative environments

    NASA Astrophysics Data System (ADS)

    Makri, Nancy

    2017-04-01

    The iterative decomposition of the blip-summed path integral [N. Makri, J. Chem. Phys. 141, 134117 (2014)] is described. The starting point is the expression of the reduced density matrix for a quantum system interacting with a harmonic dissipative bath in the form of a forward-backward path sum, where the effects of the bath enter through the Feynman-Vernon influence functional. The path sum is evaluated iteratively in time by propagating an array that stores blip configurations within the memory interval. Convergence with respect to the number of blips and the memory length yields numerically exact results which are free of statistical error. In situations of strongly dissipative, sluggish baths, the algorithm leads to a dramatic reduction of computational effort in comparison with iterative path integral methods that do not implement the blip decomposition. This gain in efficiency arises from (i) the rapid convergence of the blip series and (ii) circumventing the explicit enumeration of between-blip path segments, whose number grows exponentially with the memory length. Application to an asymmetric dissipative two-level system illustrates the rapid convergence of the algorithm even when the bath memory is extremely long.

  19. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.
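
    The flavour of this hidden connection can be reproduced with a toy linear balance: when the gage outputs are a nearly linear map of the loads, the least-squares coefficient matrices of the two fits (loads from outputs, outputs from loads) are approximately matrix inverses of each other. This is an illustrative linear sketch, not the paper's regression-model machinery, and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.uniform(-1, 1, size=(200, 3))               # applied calibration loads
C_true = np.array([[2.0, 0.1, 0.0],
                   [0.2, 1.5, 0.1],
                   [0.0, 0.1, 1.8]])
R = L @ C_true + 1e-4 * rng.standard_normal((200, 3))   # highly linear gage outputs

C_fit, *_ = np.linalg.lstsq(L, R, rcond=None)        # gage-output model: R ~ L C
B_fit, *_ = np.linalg.lstsq(R, L, rcond=None)        # load model:       L ~ R B
print(np.allclose(B_fit, np.linalg.inv(C_fit), atol=1e-3))   # True: B ~ C^-1
```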

  20. Inverse imaging of the breast with a material classification technique.

    PubMed

    Manry, C W; Broschat, S L

    1998-03-01

    In recent publications [Chew et al., IEEE Trans. Blomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.

  1. X-ray dose reduction in abdominal computed tomography using advanced iterative reconstruction algorithms.

    PubMed

    Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua

    2014-01-01

    This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation doses in abdominal imaging. CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9%, respectively, when compared with FBP. Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.

  2. Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, Surendra N.

    1994-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between the transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations. Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.

  3. [Statistical validity of the Mexican Food Security Scale and the Latin American and Caribbean Food Security Scale].

    PubMed

    Villagómez-Ornelas, Paloma; Hernández-López, Pedro; Carrasco-Enríquez, Brenda; Barrios-Sánchez, Karina; Pérez-Escamilla, Rafael; Melgar-Quiñónez, Hugo

    2014-01-01

    This article validates the statistical consistency of two food security scales: the Mexican Food Security Scale (EMSA) and the Latin American and Caribbean Food Security Scale (ELCSA). Validity tests were conducted in order to verify that both scales are consistent instruments, composed of independent, properly calibrated and adequately sorted items arranged along a continuum of severity. The following tests were performed: sorting of items; Cronbach's alpha analysis; parallelism of prevalence curves; Rasch models; and sensitivity analysis through hypothesis tests of mean differences. The tests showed that both scales meet the required attributes and are robust statistical instruments for food security measurement. This is relevant given that the lack-of-access-to-food indicator included in multidimensional poverty measurement in Mexico is calculated with EMSA.
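
    One of the validity tests named above, Cronbach's alpha, has a compact closed form; a sketch for an (n_respondents, n_items) matrix of item scores, with hypothetical data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances /
    variance of the total score), for k items."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

scores = np.array([[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 5], [1, 2, 1, 2]])
print(round(cronbach_alpha(scores), 3))
```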

  4. Fetoscopic Open Neural Tube Defect Repair: Development and Refinement of a Two-Port, Carbon Dioxide Insufflation Technique.

    PubMed

    Belfort, Michael A; Whitehead, William E; Shamshirsaz, Alireza A; Bateni, Zhoobin H; Olutoye, Oluyinka O; Olutoye, Olutoyin A; Mann, David G; Espinoza, Jimmy; Williams, Erin; Lee, Timothy C; Keswani, Sundeep G; Ayres, Nancy; Cassady, Christopher I; Mehollin-Ray, Amy R; Sanz Cortes, Magdalena; Carreras, Elena; Peiro, Jose L; Ruano, Rodrigo; Cass, Darrell L

    2017-04-01

    To describe development of a two-port fetoscopic technique for spina bifida repair in the exteriorized, carbon dioxide-filled uterus and report early results of two cohorts of patients: the first 15 treated with an iterative technique and the latter 13 with a standardized technique. This was a retrospective cohort study (2014-2016). All patients met Management of Myelomeningocele Study selection criteria. The intraoperative approach was iterative in the first 15 patients and was then standardized. Obstetric, maternal, fetal, and early neonatal outcomes were compared. Standard parametric and nonparametric tests were used as appropriate. Data for 28 patients (22 endoscopic only, four hybrid, two abandoned) are reported, but only those with a complete fetoscopic repair were analyzed (iterative technique [n=10] compared with standardized technique [n=12]). Maternal demographics and gestational age (median [range]) at fetal surgery (25.4 [22.9-25.9] compared with 24.8 [24-25.6] weeks) were similar, but delivery occurred at 35.9 (26-39) weeks of gestation with the iterative technique compared with 39 (35.9-40) weeks of gestation with the standardized technique (P<.01). Duration of surgery (267 [107-434] compared with 246 [206-333] minutes), complication rates, preterm prelabor rupture of membranes rates (4/12 [33%] compared with 1/10 [10%]), and vaginal delivery rates (5/12 [42%] compared with 6/10 [60%]) were not statistically different in the iterative and standardized techniques, respectively. In 6 of 12 (50%) compared with 1 of 10 (10%), respectively (P=.07), there was leakage of cerebrospinal fluid from the repair site at birth. Management of Myelomeningocele Study criteria for hydrocephalus-death at discharge were met in 9 of 12 (75%) and 3 of 10 (30%), respectively, and 7 of 12 (58%) compared with 2 of 10 (20%) have been treated for hydrocephalus to date. These latter differences were not statistically significant. Fetoscopic open neural tube defect repair does not appear to increase maternal-fetal complications as compared with repair by hysterotomy, allows for vaginal delivery, and may reduce long-term maternal risks. ClinicalTrials.gov, https://clinicaltrials.gov, NCT02230072.

  5. Incorporating Multi-criteria Optimization and Uncertainty Analysis in the Model-Based Systems Engineering of an Autonomous Surface Craft

    DTIC Science & Technology

    2009-09-01

    ...management occurs (OSD 2002). The Systems Engineering Process (SEP), displayed in Figure 2, is a comprehensive, iterative and recursive problem

  6. HEROIC: 3D general relativistic radiative post-processor with comptonization for black hole accretion discs

    NASA Astrophysics Data System (ADS)

    Narayan, Ramesh; Zhu, Yucong; Psaltis, Dimitrios; Sądowski, Aleksander

    2016-03-01

    We describe Hybrid Evaluator for Radiative Objects Including Comptonization (HEROIC), an upgraded version of the relativistic radiative post-processor code HERO described in a previous paper, but which now Includes Comptonization. HEROIC models Comptonization via the Kompaneets equation, using a quadratic approximation for the source function in a short characteristics radiation solver. It employs a simple form of accelerated lambda iteration to handle regions of high scattering opacity. In addition to solving for the radiation field, HEROIC also solves for the gas temperature by applying the condition of radiative equilibrium. We present benchmarks and tests of the Comptonization module in HEROIC with simple 1D and 3D scattering problems. We also test the ability of the code to handle various relativistic effects using model atmospheres and accretion flows in a black hole space-time. We present two applications of HEROIC to general relativistic magnetohydrodynamics simulations of accretion discs. One application is to a thin accretion disc around a black hole. We find that the gas below the photosphere in the multidimensional HEROIC solution is nearly isothermal, quite different from previous solutions based on 1D plane parallel atmospheres. The second application is to a geometrically thick radiation-dominated accretion disc accreting at 11 times the Eddington rate. Here, the multidimensional HEROIC solution shows that, for observers who are on axis and look down the polar funnel, the isotropic equivalent luminosity could be more than 10 times the Eddington limit, even though the spectrum might still look thermal and show no signs of relativistic beaming.

  7. Classification of the European Union member states according to the relative level of sustainable development.

    PubMed

    Bluszcz, Anna

    Methods for measuring and assessing the level of sustainable development at the international, national and regional levels are a current research problem that requires multi-dimensional analysis. The aim of the studies presented in this article was the relative assessment of the sustainability level of the European Union member states and a comparative analysis of the position of Poland relative to the other countries. EU member states were treated as objects in a multi-dimensional space whose dimensions were specified by ten diagnostic variables describing the sustainability level of EU countries in three dimensions, i.e., social, economic and environmental. Because the compiled statistical data were expressed in different units of measure, taxonomic methods were used to build an aggregated measure for assessing the level of sustainable development of the EU member states, which, through normalisation of variables, enabled a comparative analysis between countries. The methodology consisted of eight stages, including: defining the data matrices; calculating the variability coefficient for all variables and discarding those with a coefficient under 10%; dividing the variables into stimulants and destimulants; selecting the method of variable normalisation; developing matrices of normalised data; selecting the formula for, and calculating, the aggregated indicator of the relative level of sustainable development of the EU countries; calculating partial development indicators for the three studied dimensions (social, economic and environmental); and classifying the EU countries according to the relative level of sustainable development. Statistical data were collected from publications of the Polish Central Statistical Office.

  8. A new multidimensional population health indicator for policy makers: absolute level, inequality and spatial clustering - an empirical application using global sub-national infant mortality data.

    PubMed

    Sartorius, Benn K D; Sartorius, Kurt

    2014-11-01

    The need for a multidimensional measure of population health that accounts for its distribution remains a central problem to guide the allocation of limited resources. Absolute proxy measures, like the infant mortality rate (IMR), are limited because they ignore inequality and spatial clustering. We propose a novel, three-part, multidimensional mortality indicator that can be used as the first step to differentiate interventions in a region or country. The three-part indicator (MortalityABC index) combines absolute mortality rate, the Theil Index to calculate mortality inequality and the Getis-Ord G statistic to determine the degree of spatial clustering. The analysis utilises global sub-national IMR data to empirically illustrate the proposed indicator. The three-part indicator is mapped globally to display regional/country variation and further highlight its potential application. Developing countries (e.g. in sub-Saharan Africa) display high levels of absolute mortality as well as variable mortality inequality with evidence of spatial clustering within certain sub-national units ("hotspots"). Although greater inequality is observed outside developed regions, high mortality inequality and spatial clustering are common in both developed and developing countries. Significant positive correlation was observed between the degree of spatial clustering and absolute mortality. The proposed multidimensional indicator should prove useful for spatial allocation of healthcare resources within a country, because it can prompt a wide range of policy options and prioritise high-risk areas. The new indicator demonstrates the inadequacy of IMR as a single measure of population health, and it can also be adapted to lower administrative levels within a country and other population health measures.
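
    Of the indicator's three components, the Theil index is the most compact to state; a sketch over a vector of sub-national mortality rates (the absolute-level component is a plain rate, and the Getis-Ord clustering statistic is omitted here):

```python
import numpy as np

def theil_index(rates):
    """Theil inequality index of sub-national rates: 0 for perfect equality,
    growing as mortality concentrates in a few units."""
    x = np.asarray(rates, float)
    ratio = x / x.mean()
    return np.mean(ratio * np.log(ratio))

print(theil_index([10, 12, 11, 9]))    # low inequality
print(theil_index([2, 2, 2, 40]))      # high inequality ("hotspot")
```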

  9. Statistical Mechanics of Combinatorial Auctions

    NASA Astrophysics Data System (ADS)

    Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo

    2006-09-01

    Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.

  10. A Visual Analytic for High-Dimensional Data Exploitation: The Heterogeneous Data-Reduction Proximity Tool

    DTIC Science & Technology

    2013-07-01

    structure of the data and Gower's similarity coefficient as the algorithm for calculating the proximity matrices. The following section provides a...representative set of terrorist event data with the attributes Day, Location, Time, Prim/Attack and Sec/Attack, each weighted 1 and measured on nominal, nominal, interval and nominal... scales. To calculate the similarity it uses Gower's similarity and multidimensional scaling algorithms contained in an R statistical computing environment
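
    Gower's coefficient on mixed nominal/interval attributes can be sketched as follows; the event attributes, unit weights and the 24-hour range for the interval attribute are hypothetical stand-ins for the report's data:

```python
import numpy as np

def gower_similarity(a, b, scales, ranges, weights=None):
    """Gower's coefficient between two records with mixed attribute types:
    nominal attributes score 1 on a match and 0 otherwise; interval
    attributes score 1 - |a-b|/range. ranges maps attribute index -> range."""
    weights = np.ones(len(a)) if weights is None else np.asarray(weights, float)
    sims = np.empty(len(a))
    for i, scale in enumerate(scales):
        if scale == "nominal":
            sims[i] = 1.0 if a[i] == b[i] else 0.0
        else:                                    # interval attribute
            sims[i] = 1.0 - abs(a[i] - b[i]) / ranges[i]
    return np.sum(weights * sims) / np.sum(weights)

# two hypothetical events: (day, location, time-of-day, primary attack type)
e1 = ("Mon", "cityA", 14.0, "bombing")
e2 = ("Mon", "cityB", 9.0, "bombing")
print(gower_similarity(e1, e2, ["nominal", "nominal", "interval", "nominal"],
                       ranges={2: 24.0}))
```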

  11. Enhancement of event related potentials by iterative restoration algorithms

    NASA Astrophysics Data System (ADS)

    Pomalaza-Raez, Carlos A.; McGillem, Clare D.

    1986-12-01

    An iterative procedure for the restoration of event related potentials (ERPs) is proposed and implemented. The method makes use of assumed or measured statistical information about latency variations in the individual ERP components. The signal model used for the restoration algorithm consists of a time-varying linear distortion and a positivity/negativity constraint. Additional preprocessing in the form of low-pass filtering is needed in order to mitigate the effects of additive noise. Numerical results obtained with real data clearly show the presence of enhanced and regenerated components in the restored ERPs. The procedure is easy to implement, which makes it convenient compared to other techniques proposed for the restoration of ERP signals.

  12. Learning to improve iterative repair scheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene

    1992-01-01

    This paper presents a general learning method for dynamically selecting between repair heuristics in an iterative repair scheduling system. The system employs a version of explanation-based learning called Plausible Explanation-Based Learning (PEBL) that uses multiple examples to confirm conjectured explanations. The basic approach is to conjecture contradictions between a heuristic and statistics that measure the quality of the heuristic. When these contradictions are confirmed, a different heuristic is selected. To motivate the utility of this approach, we present an empirical evaluation of the performance of a scheduling system with respect to two different repair strategies. We show that the scheduler that learns to choose between the heuristics outperforms the same scheduler with either heuristic alone.
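
    The idea of switching repair heuristics when running statistics contradict their expected benefit can be sketched on a toy scheduling problem; everything below (the conflict measure, the two heuristics, the plateau test) is an illustrative stand-in for the PEBL machinery:

```python
import random

def conflicts(s):
    """Toy conflict count: number of adjacent equal slots."""
    return sum(a == b for a, b in zip(s, s[1:]))

def random_repair(s, rng):
    s = s[:]
    s[rng.randrange(len(s))] = rng.randrange(3)    # resample one slot
    return s

def greedy_repair(s, rng):
    s = s[:]
    for i in range(1, len(s)):
        if s[i] == s[i - 1]:
            s[i] = (s[i] + 1) % 3                  # fix the leftmost conflict
            break
    return s

def repair(s, heuristics, window=20, seed=0):
    """Switch heuristics when a running statistic (recent progress)
    contradicts the expectation that the current one is still helping."""
    rng = random.Random(seed)
    current, history = 0, []
    while conflicts(s) > 0:
        s = heuristics[current](s, rng)
        history.append(conflicts(s))
        if len(history) >= window and history[-window] <= history[-1]:
            current = (current + 1) % len(heuristics)   # plateau: try another
            history.clear()
    return s

print(repair([0] * 12, [random_repair, greedy_repair]))
```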

  13. A support vector machine based test for incongruence between sets of trees in tree space

    PubMed Central

    2012-01-01

    Background The increased use of multi-locus data sets for phylogenetic reconstruction has increased the need to determine whether a set of gene trees significantly deviates from the phylogenetic patterns of other genes. Such unusual gene trees may have been influenced by other evolutionary processes such as selection, gene duplication, or horizontal gene transfer. Results Motivated by this problem we propose a nonparametric goodness-of-fit test for two empirical distributions of gene trees, and we developed the software GeneOut to estimate a p-value for the test. Our approach maps trees into a multi-dimensional vector space and then applies support vector machines (SVMs) to measure the separation between two sets of pre-defined trees. We use a permutation test to assess the significance of the SVM separation. To demonstrate the performance of GeneOut, we applied it to the comparison of gene trees simulated within different species trees across a range of species tree depths. Applied directly to sets of simulated gene trees with large sample sizes, GeneOut was able to detect very small differences between two sets of gene trees generated under different species trees. Our statistical test can also incorporate tree reconstruction into its framework through a variety of phylogenetic optimality criteria. When applied to DNA sequence data simulated from different sets of gene trees, results in the form of receiver operating characteristic (ROC) curves indicated that GeneOut performed well in the detection of differences between sets of trees with different distributions in a multi-dimensional space. Furthermore, it controlled false positive and false negative rates very well, indicating a high degree of accuracy. Conclusions The non-parametric nature of our statistical test provides fast and efficient analyses, and makes it an applicable test for any scenario where evolutionary or other factors can lead to trees with different multi-dimensional distributions. The software GeneOut is freely available under the GNU public license. PMID:22909268
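
    The SVM-separation-plus-permutation recipe described above can be sketched generically; the "tree vectors" below are synthetic stand-ins for vectorised gene trees, and the cross-validated accuracy used as the separation statistic is one simple choice among several:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def svm_permutation_test(X, y, n_perm=500, seed=0):
    """Measure how well a linear SVM separates two groups of vectors, then
    assess significance by permuting the group labels."""
    rng = np.random.default_rng(seed)
    observed = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    null = [cross_val_score(SVC(kernel="linear"), X, rng.permutation(y), cv=5).mean()
            for _ in range(n_perm)]
    p = (1 + sum(s >= observed for s in null)) / (n_perm + 1)
    return observed, p

# toy "tree vectors": two sets drawn from slightly shifted distributions
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 10)), rng.normal(0.7, 1.0, (40, 10))])
y = np.repeat([0, 1], 40)
print(svm_permutation_test(X, y, n_perm=100))
```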

  14. Full data acquisition in Kelvin Probe Force Microscopy: Mapping dynamic electric phenomena in real space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balke, Nina; Kalinin, Sergei V.; Jesse, Stephen

    Kelvin probe force microscopy (KPFM) has provided deep insights into the role local electronic, ionic and electrochemical processes play in the global functionality of materials and devices, even down to the atomic scale. Conventional KPFM utilizes heterodyne detection and bias feedback to measure the contact potential difference (CPD) between tip and sample. This measurement paradigm, however, permits only partial recovery of the information encoded in bias- and time-dependent electrostatic interactions between the tip and sample and effectively down-samples the cantilever response to a single measurement of CPD per pixel. This level of detail is insufficient for electroactive materials, devices, or solid-liquid interfaces, where non-linear dielectrics are present or spurious electrostatic events are possible. Here, we simulate and experimentally validate a novel approach for spatially resolved KPFM capable of a full information transfer of the dynamic electric processes occurring between tip and sample. General acquisition mode, or G-Mode, adopts a big data approach utilising high speed detection, compression, and storage of the raw cantilever deflection signal in its entirety at high sampling rates (> 4 MHz), providing a permanent record of the tip trajectory. We develop a range of methodologies for analysing the resultant large multidimensional datasets involving classical, physics-based and information-based approaches. Physics-based analysis of G-Mode KPFM data recovers the parabolic bias dependence of the electrostatic force for each cycle of the excitation voltage, leading to a multidimensional dataset containing the spatial and temporal dependence of the CPD and capacitance channels. We use multivariate statistical methods to reduce data volume and separate the complex multidimensional data sets into statistically significant components that can then be mapped onto separate physical mechanisms. Overall, G-Mode KPFM offers a new paradigm to study dynamic electric phenomena in electroactive interfaces, as well as a promising approach to extend KPFM to solid-liquid interfaces.
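
    The "parabolic bias dependence" recovery mentioned above reduces to fitting a quadratic per excitation cycle: the electrostatic force goes as F(V) = -0.5 * dC/dz * (V - V_CPD)^2, so the fitted vertex gives the CPD and the curvature a capacitance-gradient channel. A toy sketch with hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.linspace(-5, 5, 200)                      # one bias sweep (cycle)
cpd_true, dCdz = 0.35, 2.0
F = -0.5 * dCdz * (V - cpd_true) ** 2 + 0.01 * rng.standard_normal(V.size)

a, b, c = np.polyfit(V, F, 2)                    # fit a*V^2 + b*V + c
print("CPD =", -b / (2 * a), " dC/dz =", -2 * a)
```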

  15. Evaluation of multidimensional COPD-related subjective fatigue following a pulmonary rehabilitation programme.

    PubMed

    Lewko, Agnieszka; Bidgood, Penelope L; Jewell, Andy; Garrod, Rachel

    2014-01-01

    Subjective fatigue has been recognised as an important, multi-component symptom in COPD. Pulmonary rehabilitation (PR) improves the fatigue component of the Chronic Respiratory Questionnaire, a quality of life (QoL) measure. However, it is not clear whether all fatigue dimensions are affected equally. This study aims to evaluate changes in subjective multidimensional fatigue among people with COPD who participated in PR. Thirty-seven stable COPD patients were recruited; 23 patients (15 male), mean age 68.5 (range 49-86) years, mean (SD) % predicted FEV1 45.3 (19.8), completed 7 weeks of PR. Assessments (pre and post PR) consisted of the Multidimensional Fatigue Inventory (MFI-20), QoL (SGRQ), anxiety and depression (HADS), the London Chest Activity of Daily Living Scale (LCADL), muscle strength, and incremental (ISWT) and endurance (ESWT) shuttle walk tests. Differences between pre- and post-PR fatigue were tested using Wilcoxon's test, and relationships with other outcomes were examined using Spearman's correlation. There were statistically significant improvements in the Reduced Activity (RA) (p = 0.01), General (GF) (p < 0.01) and Physical Fatigue (PF) (p = 0.03) components of the MFI-20 after PR, but no differences in Motivation or Mental Fatigue (p > 0.05). There were significant improvements in ISWT (p < 0.05), ESWT (p < 0.01) and muscle strength (p = 0.03). Statistically significant correlations (p < 0.05) were found between changes in GF and in both ISWT (r = -0.43) and SGRQ impact (r = 0.46), and between RA and ESWT changes (r = -0.45). Some dimensions of fatigue in COPD are modifiable by a 7-week PR programme. Change in fatigue dimensions in COPD may be associated with a change in maximal or endurance walking distance or QoL. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. The relation between cognitive and metacognitive strategic processing during a science simulation.

    PubMed

    Dinsmore, Daniel L; Zoellner, Brian P

    2018-03-01

    This investigation was designed to uncover the relations between students' cognitive and metacognitive strategies used during a complex climate simulation. While cognitive strategy use during science inquiry has been studied, the factors related to this strategy use, such as concurrent metacognition, prior knowledge, and prior interest, have not been investigated in a multidimensional fashion. This study addressed current issues in strategy research by examining not only how metacognitive, surface-level, and deep-level strategies influence performance, but also how these strategies related to each other during a contextually relevant science simulation. The sample for this study consisted of 70 undergraduates from a mid-sized Southeastern university in the United States. These participants were recruited from both physical and life science (e.g., biology) and education majors to obtain a sample with variance in terms of their prior knowledge, interest, and strategy use. Participants completed measures of prior knowledge and interest about global climate change. Then, they were asked to engage in an online climate simulator for up to 30 min while thinking aloud. Finally, participants were asked to answer three outcome questions about global climate change. Results indicated a poor fit for the statistical model of the frequency and level of processing predicting performance. However, a statistical model that independently examined the influence of metacognitive monitoring and control of cognitive strategies showed a very strong relation between the metacognitive and cognitive strategies. Finally, smallest space analysis results provided evidence that strategy use may be better captured in a multidimensional fashion, particularly with attention paid towards the combination of strategies employed. Conclusions drawn from the evidence point to the need for more dynamic, multidimensional models of strategic processing that account for the patterns of optimal and non-optimal strategy use. Additionally, analyses that can capture these complex patterns need to be further explored. © 2017 The British Psychological Society.

  17. Full data acquisition in Kelvin Probe Force Microscopy: Mapping dynamic electric phenomena in real space

    DOE PAGES

    Balke, Nina; Kalinin, Sergei V.; Jesse, Stephen; ...

    2016-08-12

    Kelvin probe force microscopy (KPFM) has provided deep insights into the role local electronic, ionic and electrochemical processes play on the global functionality of materials and devices, even down to the atomic scale. Conventional KPFM utilizes heterodyne detection and bias feedback to measure the contact potential difference (CPD) between tip and sample. This measurement paradigm, however, permits only partial recovery of the information encoded in bias- and time-dependent electrostatic interactions between the tip and sample and effectively down-samples the cantilever response to a single measurement of CPD per pixel. This level of detail is insufficient for electroactive materials, devices, or solid-liquid interfaces, where non-linear dielectrics are present or spurious electrostatic events are possible. Here, we simulate and experimentally validate a novel approach for spatially resolved KPFM capable of a full information transfer of the dynamic electric processes occurring between tip and sample. General acquisition mode, or G-Mode, adopts a big data approach utilising high speed detection, compression, and storage of the raw cantilever deflection signal in its entirety at high sampling rates (> 4 MHz), providing a permanent record of the tip trajectory. We develop a range of methodologies for analysing the resultant large multidimensional datasets involving classical, physics-based and information-based approaches. Physics-based analysis of G-Mode KPFM data recovers the parabolic bias dependence of the electrostatic force for each cycle of the excitation voltage, leading to a multidimensional dataset containing spatial and temporal dependence of the CPD and capacitance channels. We use multivariate statistical methods to reduce data volume and separate the complex multidimensional data sets into statistically significant components that can then be mapped onto separate physical mechanisms. Overall, G-Mode KPFM offers a new paradigm to study dynamic electric phenomena in electroactive interfaces and offers a promising approach to extend KPFM to solid-liquid interfaces.
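    As a concrete illustration of the per-cycle parabolic analysis described above, the sketch below fits F(V) = a*V^2 + b*V + c to one excitation cycle and reads the CPD off the vertex; the function name and the synthetic data are ours, not from the paper.

```python
import numpy as np

def fit_kpfm_cycle(bias, force):
    """Fit the parabolic bias dependence F(V) = a*V**2 + b*V + c to one
    excitation cycle; the vertex gives the CPD and the curvature is
    proportional to the tip-sample capacitance gradient."""
    a, b, c = np.polyfit(bias, force, 2)
    v_cpd = -b / (2.0 * a)   # vertex of the parabola
    curvature = 2.0 * a      # ~ capacitance gradient, up to calibration
    return v_cpd, curvature

# Synthetic single cycle with V_cpd = 0.3 V and additive noise.
rng = np.random.default_rng(0)
v = np.linspace(-5.0, 5.0, 512)
f = -0.8 * (v - 0.3) ** 2 + 0.02 * rng.standard_normal(v.size)
print(fit_kpfm_cycle(v, f))  # approximately (0.3, -1.6)
```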

  18. Statistical mechanics of unsupervised feature learning in a restricted Boltzmann machine with binary synapses

    NASA Astrophysics Data System (ADS)

    Huang, Haiping

    2017-05-01

    Revealing hidden features in unlabeled data is called unsupervised feature learning, which plays an important role in pretraining a deep neural network. Here we provide a statistical mechanics analysis of unsupervised learning in a restricted Boltzmann machine with binary synapses. A message passing equation to infer the hidden feature is derived, and furthermore, variants of this equation are analyzed. A statistical analysis by replica theory describes the thermodynamic properties of the model. Our analysis confirms an entropy crisis preceding the non-convergence of the message passing equation, suggesting a discontinuous phase transition as a key characteristic of the restricted Boltzmann machine. A continuous phase transition is also confirmed, depending on the strength of the feature embedded in the data. The mean-field result under the replica symmetric assumption agrees with that obtained by running message passing algorithms on single instances of finite sizes. Interestingly, in an approximate Hopfield model, the entropy crisis is absent, and a continuous phase transition is observed instead. We also develop an iterative equation to infer the hyper-parameter (temperature) hidden in the data, which in physics corresponds to iteratively imposing the Nishimori condition. Our study provides insights into the thermodynamic properties of restricted Boltzmann machine learning and, moreover, an important theoretical basis for building simplified deep networks.

  19. Improved Diffuse Foreground Subtraction with the ILC Method: CMB Map and Angular Power Spectrum Using Planck and WMAP Observations

    NASA Astrophysics Data System (ADS)

    Sudevan, Vipin; Aluri, Pavan K.; Yadav, Sarvesh Kumar; Saha, Rajib; Souradeep, Tarun

    2017-06-01

    We report an improved technique for diffuse foreground minimization from Cosmic Microwave Background (CMB) maps using a new multiphase iterative harmonic space internal-linear-combination (HILC) approach. Our method nullifies a foreground leakage that was present in the usual iterative HILC method. In phase 1 of the multiphase technique, we obtain an initial cleaned map using the single-iteration HILC approach over the desired portion of the sky. In phase 2, we obtain a final CMB map using the iterative HILC approach; however, to nullify the leakage, during each iteration some of the regions of the sky that are not being cleaned in the current iteration are replaced by the corresponding cleaned portions of the phase 1 map. We bring all input frequency maps to a common and maximum possible beam and pixel resolution at the beginning of the analysis, which significantly reduces data redundancy, memory usage, and computational cost, and avoids, during the HILC weight calculation, the deconvolution of partial sky harmonic coefficients by the azimuthally symmetric beam and pixel window functions, which in a strict mathematical sense are not well defined. Using WMAP 9 year and Planck 2015 frequency maps, we obtain foreground-cleaned CMB maps and a CMB angular power spectrum for the multipole range 2 ≤ ℓ ≤ 2500. Our power spectrum matches the published Planck results, with some differences at different multipole ranges. We validate our method by performing Monte Carlo simulations. Finally, we show that the weights for HILC foreground minimization have the intrinsic characteristic that they also tend to produce a statistically isotropic CMB map.
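    For readers unfamiliar with the ILC step at the core of the method, the following minimal sketch computes the standard variance-minimising ILC weights under the unit-response constraint, here in pixel space for brevity; it omits the harmonic-space treatment, the multiphase iteration, and the leakage correction that are the paper's actual contributions.

```python
import numpy as np

def ilc_weights(maps):
    """Variance-minimising ILC weights for frequency maps of shape
    (n_freq, n_pix), subject to sum(w) = 1 so the frequency-independent
    CMB signal is preserved:  w = C^-1 e / (e^T C^-1 e)."""
    c = np.cov(maps)              # n_freq x n_freq empirical covariance
    cinv = np.linalg.pinv(c)      # pseudo-inverse for numerical safety
    e = np.ones(maps.shape[0])
    return cinv @ e / (e @ cinv @ e)

def ilc_clean(maps):
    """Combined, foreground-reduced map."""
    return ilc_weights(maps) @ maps
```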

  20. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

    The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for, and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst-case and the statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance as needed for the operation of the ITER machine.
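    The classical expression validated in the paper links the flux linkage of an ideal Rogowski coil to the enclosed current, Lambda = mu0*n*A*I. A minimal sketch of recovering I(t) by integrating the coil voltage follows (trapezoidal integrator; names are illustrative, and none of the paper's error terms or the non-ideal winding correction are modelled).

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def rogowski_current(voltage, dt, turns_per_m, turn_area):
    """Recover the enclosed current from the measured coil voltage.

    Ideal-coil model: flux linkage Lambda(t) = MU0 * n * A * I(t) and
    v(t) = -dLambda/dt, so I(t) follows from cumulative trapezoidal
    integration of -v(t)."""
    sensitivity = MU0 * turns_per_m * turn_area  # volt-seconds per ampere
    flux = np.concatenate(
        ([0.0], -np.cumsum(0.5 * (voltage[1:] + voltage[:-1]) * dt))
    )
    return flux / sensitivity
```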

  1. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
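    The deconvolution step described above is directly implementable; a minimal sketch follows, with a small numerical floor added by us to guard against division by spectral values that underflow to zero (the original method assumes the noise has already been removed, e.g. by Morrison's smoothing).

```python
import numpy as np

def inverse_filter_deconvolve(data, response, filter_length=None):
    """Deconvolve by convolution with an inverse filter: the inverse
    DFT of the reciprocal of the DFT of the system response, optionally
    truncated to a chosen length (the quantity optimised in the study)."""
    n = len(data)
    h = np.fft.fft(response, n)
    # Numerical floor (our addition, not part of the original method).
    eps = 1e-12 * np.abs(h).max()
    h = np.where(np.abs(h) < eps, eps, h)
    inv = np.fft.ifft(1.0 / h).real
    if filter_length is not None:
        inv = inv[:filter_length]  # truncated inverse filter
    return np.convolve(data, inv, mode="same")
```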

  2. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography.

    PubMed

    Precht, Helle; Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-12-01

    Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR.
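    The objective metrics used in the study (contrast, noise, CNR) reduce to simple region-of-interest statistics; a minimal sketch under the usual definitions (the ROI arrays and names are ours):

```python
import numpy as np

def image_quality_metrics(signal_roi, background_roi):
    """Contrast (mean difference between a signal ROI and a homogeneous
    background ROI), noise (SD of the background ROI), and their ratio,
    the contrast-to-noise ratio (CNR)."""
    contrast = float(np.mean(signal_roi) - np.mean(background_roi))
    noise = float(np.std(background_roi, ddof=1))
    return contrast, noise, contrast / noise
```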

  3. 2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lau, Kristen C.; Roth, Susan; Maidment, Andrew D. A.

    2014-03-01

    Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but suffers from motion artifacts. Employing image registration further helps to correct for motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner. Mutual information optimization first corrects large-scale motions. Normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that iterative registration qualitatively improves with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively and structures within the breast (e.g. blood vessels, surgical clip) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.
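    The fine-scale stage optimises normalized cross-correlation; as a toy stand-in for one such iteration, the sketch below scores integer-pixel shifts by NCC (the actual study uses ANTS with mutual information followed by NCC optimisation, in 2D slice-by-slice or fully in 3D; function names are ours).

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-shape images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def best_shift(fixed, moving, max_shift=5):
    """Exhaustive integer-pixel search maximising NCC. np.roll wraps
    around the borders, which is acceptable only for this toy example."""
    best_score, best_s = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, shifted)
            if score > best_score:
                best_score, best_s = score, (dy, dx)
    return best_s, best_score
```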

  4. M-estimator for the 3D symmetric Helmert coordinate transformation

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2018-01-01

    The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. A 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration, if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to theoretically show the robustness. In the solution process, the parameter is rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases are considered to investigate whether the assumed stochastic model is correct. The results with the simulated data slightly deviating from the true model are used to show the developed method's statistical efficacy under the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
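    The iteratively reweighted least-squares scheme has the familiar generic form sketched below (the Huber weight function and MAD scale estimate are our choices for the sketch; the paper's linearization of the rotation via the multiplicative error vector is omitted).

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weight function on standardized residuals."""
    w = np.ones_like(r)
    big = np.abs(r) > k
    w[big] = k / np.abs(r[big])
    return w

def irls(A, y, n_iter=20):
    """Generic IRLS M-estimation for a linearized model A x ~ y.
    Each pass re-solves a weighted least-squares problem with weights
    built from the previous residuals."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # LS initialisation
    for _ in range(n_iter):
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r)) or 1.0   # robust scale (MAD)
        w = huber_weights(r / s)
        Aw = A * w[:, None]                        # rows scaled by weights
        # Weighted normal equations: (A^T W A) x = A^T W y
        x = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)[0]
    return x
```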

  5. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    USGS Publications Warehouse

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  6. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    PubMed

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  7. Graph-based analysis of kinetics on multidimensional potential-energy surfaces.

    PubMed

    Okushima, T; Niiyama, T; Ikeda, K S; Shimizu, Y

    2009-09-01

    The aim of this paper is twofold: one is to give a detailed description of an alternative graph-based analysis method, which we call saddle connectivity graph, for analyzing the global topography and the dynamical properties of many-dimensional potential-energy landscapes and the other is to give examples of applications of this method in the analysis of the kinetics of realistic systems. A Dijkstra-type shortest path algorithm is proposed to extract dynamically dominant transition pathways by kinetically defining transition costs. The applicability of this approach is first confirmed by an illustrative example of a low-dimensional random potential. We then show that a coarse-graining procedure tailored for saddle connectivity graphs can be used to obtain the kinetic properties of 13- and 38-atom Lennard-Jones clusters. The coarse-graining method not only reduces the complexity of the graphs, but also, with iterative use, reveals a self-similar hierarchical structure in these clusters. We also propose that the self-similarity is common to many-atom Lennard-Jones clusters.
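    A Dijkstra-type search over kinetically weighted edges, as proposed in the paper, can be sketched as follows (the graph encoding and cost values are illustrative; a real saddle connectivity graph would carry rate-derived costs on minimum-saddle edges).

```python
import heapq

def dijkstra_min_cost_path(graph, source, target):
    """Minimum-cost path where graph[u] is a list of (v, cost) pairs and
    each cost is a kinetically defined transition cost. Assumes the
    target is reachable from the source."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:
            break
        for v, cost in graph.get(u, ()):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the dominant pathway.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[target]

g = {"A": [("S1", 2.0)], "S1": [("B", 1.0)], "B": []}
print(dijkstra_min_cost_path(g, "A", "B"))  # (['A', 'S1', 'B'], 3.0)
```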

  8. Application of differential evolution algorithm on self-potential data.

    PubMed

    Li, Xiangtao; Yin, Minghao

    2012-01-01

    Differential evolution (DE) is a population-based evolutionary algorithm widely used for solving multidimensional global optimization problems over continuous spaces, and has been successfully used to solve several kinds of problems. In this paper, differential evolution is used for quantitative interpretation of self-potential data in geophysics. Six parameters are estimated, including the electrical dipole moment, the depth of the source, the distance from the origin, the polarization angle, and the regional coefficients. This study considers three kinds of data from Turkey: noise-free data, contaminated synthetic data, and a field example. The differential evolution and the corresponding model parameters are examined as a function of the number of generations. Then, we show the variation of the parameters in the vicinity of the low-misfit area. Moreover, we show how the frequency distribution of each parameter is related to the number of DE iterations. Experimental results show that DE can be used for solving the quantitative interpretation of self-potential data efficiently compared with previous methods.
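    For reference, the classic DE/rand/1/bin scheme underlying this kind of inversion can be sketched in a few lines (our generic implementation; for the self-potential problem, x would hold the six model parameters and obj the data misfit).

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=30, F=0.8, CR=0.9,
                           n_gen=200, seed=0):
    """Classic DE/rand/1/bin minimiser of obj over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([obj(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # at least one gene
            trial = np.where(cross, mutant, pop[i])
            f = obj(trial)
            if f < fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f
    best = fit.argmin()
    return pop[best], fit[best]

best_x, best_f = differential_evolution(
    lambda x: ((x - 0.7) ** 2).sum(), bounds=[(-1.0, 1.0)] * 6)
print(best_x, best_f)
```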

  9. Application of Differential Evolution Algorithm on Self-Potential Data

    PubMed Central

    Li, Xiangtao; Yin, Minghao

    2012-01-01

    Differential evolution (DE) is a population-based evolutionary algorithm widely used for solving multidimensional global optimization problems over continuous spaces, and has been successfully used to solve several kinds of problems. In this paper, differential evolution is used for quantitative interpretation of self-potential data in geophysics. Six parameters are estimated, including the electrical dipole moment, the depth of the source, the distance from the origin, the polarization angle, and the regional coefficients. This study considers three kinds of data from Turkey: noise-free data, contaminated synthetic data, and a field example. The differential evolution and the corresponding model parameters are examined as a function of the number of generations. Then, we show the variation of the parameters in the vicinity of the low-misfit area. Moreover, we show how the frequency distribution of each parameter is related to the number of DE iterations. Experimental results show that DE can be used for solving the quantitative interpretation of self-potential data efficiently compared with previous methods. PMID:23240004

  10. Nonparametric estimation and testing of fixed effects panel data models

    PubMed Central

    Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi

    2009-01-01

    In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335

  11. Automated information and control complex of hydro-gas endogenous mine processes

    NASA Astrophysics Data System (ADS)

    Davkaev, K. S.; Lyakhovets, M. V.; Gulevich, T. M.; Zolin, K. A.

    2017-09-01

    The automated information and control complex considered here is designed to prevent accidents related to the aerological situation in underground workings, to account for individual devices received and handed over, to transmit and display measurement data, and to form preemptive solutions. Examples of the automated workstation of an air-gas control operator using individual devices are given. The statistical characteristics of field data characterizing the aerological situation in the mine are obtained. The conducted studies of these statistical characteristics confirm the feasibility of creating a subsystem of controlled gas distribution with an adaptive arrangement of gas control points. An adaptive (multivariant) algorithm for processing measurement information on continuous multidimensional quantities and influencing factors has been developed.

  12. MANCOVA for one way classification with homogeneity of regression coefficient vectors

    NASA Astrophysics Data System (ADS)

    Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.

    2017-11-01

    MANOVA and MANCOVA are the extensions of the univariate ANOVA and ANCOVA techniques to multidimensional, or vector-valued, observations. The assumption of a Gaussian distribution is replaced with a multivariate Gaussian distribution for the vector-valued data and residual terms in the statistical models of these techniques. The objective of MANCOVA is to determine whether there are statistically reliable mean differences between groups after adjusting the dependent variables for the covariates. When randomized assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting dependent variables as if all subjects scored the same on the covariates. In this research article, the MANCOVA technique is extended to a larger number of covariates, and the homogeneity of the regression coefficient vectors is also tested.

  13. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique.

    PubMed

    Kwon, Heejin; Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-10-01

    To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically competent features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. 27 consecutive patients (mean body mass index: 23.55 kg m(-2)) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at 21.45 noise index levels of automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at 30 noise index levels. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19-49%). The overall subjective image quality and diagnostic acceptability of the 50% ASIR-V scores at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality with lower noise and artefacts as well as good sharpness when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option.

  14. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique

    PubMed Central

    Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-01-01

    Objective: To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically competent features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. Methods: 27 consecutive patients (mean body mass index: 23.55 kg m−2) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at 21.45 noise index levels of automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at 30 noise index levels. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. Results: At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19–49%). The overall subjective image quality and diagnostic acceptability of the 50% ASIR-V scores at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality with lower noise and artefacts as well as good sharpness when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Conclusion: Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. Advances in knowledge: This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option. PMID:26234823

  15. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Guoyan

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.

  16. Is Going Beyond Rasch Analysis Necessary to Assess the Construct Validity of a Motor Function Scale?

    PubMed

    Guillot, Tiffanie; Roche, Sylvain; Rippert, Pascal; Hamroun, Dalil; Iwaz, Jean; Ecochard, René; Vuillerot, Carole

    2018-04-03

    To examine whether a Rasch analysis is sufficient to establish the construct validity of the Motor Function Measure (MFM) and discuss whether weighting the MFM item scores would improve the MFM construct validity. Observational cross-sectional multicenter study. Twenty-three physical medicine departments, neurology departments, or reference centers for neuromuscular diseases. Patients (N=911) aged 6 to 60 years with Charcot-Marie-Tooth disease (CMT), facioscapulohumeral dystrophy (FSHD), or myotonic dystrophy type 1 (DM1). None. Comparison of the goodness-of-fit of the confirmatory factor analysis (CFA) model vs that of a modified multidimensional Rasch model on MFM item scores in each considered disease. The CFA model showed good fit to the data and significantly better goodness of fit than the modified multidimensional Rasch model regardless of the disease (P<.001). Statistically significant differences in item standardized factor loadings were found between DM1, CMT, and FSHD in only 6 of 32 items (items 6, 27, 2, 7, 9 and 17). For multidimensional scales designed to measure patient abilities in various diseases, a Rasch analysis might not be the most convenient, whereas a CFA is able to establish the scale construct validity and provide weights to adapt the item scores to a specific disease. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  17. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
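    A minimal sketch of an iterative, nonparametric outlier screen in the same spirit follows (modified z-scores built from the median absolute deviation, iterated to convergence); the published algorithm additionally exploits the spatial distribution of the gauges, which is omitted here.

```python
import numpy as np

def flag_outliers(readings, z=3.5, max_iter=10):
    """Return a boolean mask of suspect readings. The modified z-score
    0.6745*|x - median| / MAD is recomputed from the currently retained
    readings until the retained set stops changing."""
    x = np.asarray(readings, dtype=float)
    keep = np.ones(x.size, dtype=bool)
    for _ in range(max_iter):
        med = np.median(x[keep])
        mad = np.median(np.abs(x[keep] - med)) or 1e-12  # avoid /0
        score = 0.6745 * np.abs(x - med) / mad
        new_keep = score <= z
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return ~keep  # True where a reading is flagged as an outlier
```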

  18. Performance of spectral MSE diagnostic on C-Mod and ITER

    NASA Astrophysics Data System (ADS)

    Liao, Ken; Rowan, William; Mumgaard, Robert; Granetz, Robert; Scott, Steve; Marchuk, Oleksandr; Ralchenko, Yuri; Alcator C-Mod Team

    2015-11-01

    The magnetic field was measured on Alcator C-Mod by applying spectral Motional Stark Effect techniques based on line shift (MSE-LS) and line ratio (MSE-LR) to the H-alpha emission spectrum of the diagnostic neutral beam atoms. The high field of Alcator C-Mod allows measurements to be made at close to ITER values of Stark splitting (~ Bv⊥) with similar background levels to those expected for ITER. Accurate modeling of the spectrum requires a non-statistical, collisional-radiative analysis of the excited beam population and quadratic and Zeeman corrections to the Stark shift. A detailed synthetic diagnostic was developed and used to estimate the performance of the diagnostic at C-Mod and ITER parameters. Our analysis includes the sensitivity to view and beam geometry, aperture and divergence broadening, magnetic field, pixel size, background noise, and signal levels. Analysis of preliminary experiments agrees with Kinetic+(polarization)MSE EFIT within ~2° in pitch angle, and simulations predict uncertainties of 20 mT in |B| and <2° in pitch angle. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Numbers DE-FG03-96ER-54373 and DE-FC02-99ER54512.

  19. Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model

    NASA Astrophysics Data System (ADS)

    Williamson, Daniel B.; Blaker, Adam T.; Sinha, Bablu

    2017-04-01

    In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. It does so by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations from observations, and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
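    The core of iterative refocussing is the implausibility measure used to rule parameter choices out; a minimal one-output sketch follows (the stand-in model, threshold, and variance values are ours).

```python
import numpy as np

def implausibility(sim_out, obs, var_obs, var_disc, var_emul=0.0):
    """History-matching implausibility for one output:
        I(theta) = |obs - f(theta)| / sqrt(V_obs + V_disc + V_emul).
    Parameter settings with I > 3 are ruled out; the not-ruled-out
    space is re-sampled in the next wave."""
    return np.abs(obs - sim_out) / np.sqrt(var_obs + var_disc + var_emul)

# One wave on a toy 3-parameter "model" (sum of the parameters).
theta = np.random.uniform(0.0, 1.0, size=(1000, 3))
f = theta.sum(axis=1)
keep = implausibility(f, obs=1.5, var_obs=0.01, var_disc=0.04) <= 3.0
print(theta[keep].shape)  # the not-ruled-out candidates
```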

  20. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms non-source-controlled decoding by up to 5 dB in terms of PSNR for various reconstructed images.
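    The feedback step can be sketched as a reweighting of channel LLRs between LDPC iterations (the weight values below are placeholders; in the paper the factor is a function of the channel SNR).

```python
import numpy as np

def reweight_llrs(llr, known_ok, known_bad, w_ok=2.0, w_bad=0.1):
    """Adjust channel LLRs using source-decoder feedback.

    known_ok / known_bad: boolean masks derived from JPEG2000
    error-resilience checks, marking bits believed correct / in error.
    Scaling an LLR up strengthens confidence; scaling it down softens
    a bit the source decoder flagged as erroneous."""
    out = llr.copy()
    out[known_ok] *= w_ok
    out[known_bad] *= w_bad
    return out
```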

  1. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. The thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
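    Inverse iteration with a random starting vector, whose statistics the thesis analyzes, has the following generic shape (a dense solve is used for brevity; production code would factor the shifted tridiagonal matrix in O(n), and the shift mu should not coincide exactly with an eigenvalue).

```python
import numpy as np

def inverse_iteration(T, mu, n_iter=5, seed=0):
    """Eigenvector of symmetric matrix T for the eigenvalue nearest the
    shift mu: repeatedly solve (T - mu*I) v_new = v and renormalise.
    The random starting vector has, with probability one, a nonzero
    component along the desired eigenvector."""
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    v = rng.standard_normal(n)
    A = T - mu * np.eye(n)
    for _ in range(n_iter):
        v = np.linalg.solve(A, v)
        v /= np.linalg.norm(v)
    return v
```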

  2. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hyper-cube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  3. Global strength assessment in oblique waves of a large gas carrier ship, based on a non-linear iterative method

    NASA Astrophysics Data System (ADS)

    Domnisoru, L.; Modiga, A.; Gasparotti, C.

    2016-08-01

    At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D hull offset lines, and involves three interlinked iterative cycles on the floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical case study we consider a large liquefied petroleum gas (LPG) carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
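    As a miniature of the floating-equilibrium cycle, the sketch below bisects on draft until buoyancy balances weight for a box-shaped hull (purely illustrative; the paper iterates this balance jointly with pitch and roll trim on the real 3-D hull offsets).

```python
def equilibrium_draft(weight_n, length_m, breadth_m,
                      rho=1025.0, g=9.81, tol=1e-8):
    """Draft at which the buoyancy of a box hull, rho*g*L*B*d, equals
    the ship weight (in newtons), found by bisection."""
    lo, hi = 0.0, 50.0
    while hi - lo > tol:
        draft = 0.5 * (lo + hi)
        buoyancy = rho * g * length_m * breadth_m * draft
        if buoyancy < weight_n:
            lo = draft
        else:
            hi = draft
    return 0.5 * (lo + hi)

# A 180 m x 30 m box hull displacing 40,000 t floats at ~7.2 m draft.
print(equilibrium_draft(40_000_000 * 9.81, 180.0, 30.0))
```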

  4. Ultra-Low-Dose Fetal CT With Model-Based Iterative Reconstruction: A Prospective Pilot Study.

    PubMed

    Imai, Rumi; Miyazaki, Osamu; Horiuchi, Tetsuya; Asano, Keisuke; Nishimura, Gen; Sago, Haruhiko; Nosaka, Shunsuke

    2017-06-01

    Prenatal diagnosis of skeletal dysplasia by means of 3D skeletal CT examination is highly accurate. However, it carries a risk of fetal exposure to radiation. Model-based iterative reconstruction (MBIR) technology can reduce radiation exposure; however, to our knowledge, the lower limit of an optimal dose is currently unknown. The objectives of this study are to establish ultra-low-dose fetal CT as a method for prenatal diagnosis of skeletal dysplasia and to evaluate the appropriate radiation dose for ultra-low-dose fetal CT. Relationships between tube current and image noise in adaptive statistical iterative reconstruction and MBIR were examined using a 32-cm CT dose index (CTDI) phantom. On the basis of the results of this examination, the recommended methods for the MBIR option, and the known relationship between noise and tube current for filtered back projection, represented by the expression SD ∝ (mA)^(-1/2), the lower limit of the optimal dose in ultra-low-dose fetal CT with MBIR was set. The diagnostic power of the CT images obtained using the aforementioned scanning conditions was evaluated, and the radiation exposure associated with ultra-low-dose fetal CT was compared with that noted in previous reports. Noise increased in nearly inverse proportion to the square root of the dose in adaptive statistical iterative reconstruction and in inverse proportion to the fourth root of the dose in MBIR. Ultra-low-dose fetal CT was found to have a volume CTDI of 0.5 mGy. Prenatal diagnosis was accurately performed on the basis of ultra-low-dose fetal CT images that were obtained using this protocol. The level of fetal exposure to radiation was 0.7 mSv. The use of ultra-low-dose fetal CT with MBIR led to a substantial reduction in radiation exposure, compared with the CT imaging method currently used at our institution, but it still enabled diagnosis of skeletal dysplasia without reducing diagnostic power.
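    The two noise-dose power laws reported above imply very different dose economies; a small sketch makes the comparison concrete (the reference numbers are illustrative).

```python
def predicted_noise(noise_ref, dose_ref, dose, exponent):
    """Power-law noise model: SD ~ dose**(-exponent), with exponent 0.5
    for FBP/ASIR-like behaviour and 0.25 for MBIR as observed in the
    phantom study."""
    return noise_ref * (dose_ref / dose) ** exponent

# Halving the dose raises noise ~41% under ASIR but only ~19% under MBIR.
print(predicted_noise(10.0, 1.0, 0.5, 0.5))   # ~14.1
print(predicted_noise(10.0, 1.0, 0.5, 0.25))  # ~11.9
```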

  5. Update on the non-prewhitening model observer in computed tomography for the assessment of the adaptive statistical and model-based iterative reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.

    2014-08-01

    The state of the art to describe image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit in various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already shown it could accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. It was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the image contrast and noise levels on the TTF for both ASIR and MBIR. Moreover, FBP also proved to be dependent of the contrast and noise when using the lung kernel. Those results were then introduced in the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
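    With the TTF substituted for the MTF as proposed, the NPW figure of merit takes the standard form SNR^2 = [∫|S·TTF|^2 df]^2 / ∫|S·TTF|^2 NPS df; a radial 1-D sketch on a uniform frequency grid follows (the profile inputs and names are ours).

```python
import numpy as np

def npw_snr(freq, signal_spec, ttf, nps):
    """Non-prewhitening observer SNR from radial 1-D profiles sampled
    on the uniform grid `freq`; the 2*pi*f factor performs the radial
    integration of the 2-D spectra."""
    df = freq[1] - freq[0]
    w2 = (signal_spec * ttf) ** 2 * 2.0 * np.pi * freq
    num = (w2.sum() * df) ** 2          # [integral of |S*TTF|^2]^2
    den = (w2 * nps).sum() * df         # integral of |S*TTF|^2 * NPS
    return float(np.sqrt(num / den))
```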

  6. A multidimensional model of police legitimacy: A cross-cultural assessment.

    PubMed

    Tankebe, Justice; Reisig, Michael D; Wang, Xia

    2016-02-01

    This study used survey data from cross-sectional, university-based samples of young adults in different cultural settings (i.e., the United States and Ghana) to accomplish 2 main objectives: (1) to construct a 4-dimensional police legitimacy scale, and (2) to assess the relationship that police legitimacy and feelings of obligation to obey the police have with 2 outcome measures. The fit statistics for the second-order confirmatory factor models indicated that the 4-dimensional police legitimacy model is reasonably consistent with the data in both samples. Results from the linear regression analyses showed that the police legitimacy scale is related to cooperation with the police, and that the observed association is attenuated when the obligation to obey scale is included in the model specification in both the United States and Ghana data. A similar pattern emerged in the U.S. sample when estimating compliance with the law models. However, although police legitimacy was associated with compliance in the Ghana sample, both this relationship and the test statistic for the sense-of-obligation-to-obey estimate were null in the fully saturated equation. The findings provide support for Bottoms and Tankebe's (2012) argument that legitimacy is multidimensional, comprising police lawfulness, distributive fairness, procedural fairness, and effectiveness. However, the link between police legitimacy and social order appears to be culturally variable. PsycINFO Database Record (c) 2016 APA, all rights reserved.

  7. The Serbian version of the Juvenile Arthritis Multidimensional Assessment Report (JAMAR).

    PubMed

    Susic, Gordana; Vojinovic, Jelena; Vijatov-Djuric, Gordana; Stevanovic, Dejan; Lazarevic, Dragana; Djurovic, Nada; Novakovic, Dusica; Consolaro, Alessandro; Bovis, Francesca; Ruperto, Nicolino

    2018-04-01

    The Juvenile Arthritis Multidimensional Assessment Report (JAMAR) is a new parent/patient-reported outcome measure that enables a thorough assessment of the disease status in children with juvenile idiopathic arthritis (JIA). We report the results of the cross-cultural adaptation and validation of the parent and patient versions of the JAMAR in the Serbian language. The reading comprehension of the questionnaire was tested in 10 JIA parents and patients. Each participating centre was asked to collect demographic and clinical data and the JAMAR in 100 consecutive JIA patients or all consecutive patients seen in a 6-month period, and to administer the JAMAR to 100 healthy children and their parents. The statistical validation phase explored descriptive statistics and the psychometric issues of the JAMAR: the three Likert assumptions, floor/ceiling effects, internal consistency, Cronbach's alpha, interscale correlations, test-retest reliability, and construct validity (convergent and discriminant validity). A total of 248 JIA patients (5.2% systemic, 44.3% oligoarticular, 23.8% RF-negative polyarthritis, 26.7% other categories) and 100 healthy children were enrolled in three centres. The JAMAR components discriminated healthy subjects from JIA patients. All JAMAR components revealed good psychometric performances. In conclusion, the Serbian version of the JAMAR is a valid tool for the assessment of children with JIA and is suitable for use both in routine clinical practice and clinical research.
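    Among the psychometric checks listed, internal consistency is the most mechanical; a minimal sketch of Cronbach's alpha for a subjects-by-items score matrix (our generic implementation, not the study's code):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)
```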

  8. The German version of the Juvenile Arthritis Multidimensional Assessment Report (JAMAR).

    PubMed

    Holzinger, Dirk; Foell, Dirk; Horneff, Gerd; Foeldvari, Ivan; Tzaribachev, Nikolay; Tzaribachev, Catrin; Minden, Kirsten; Kallinich, Tilmann; Ganser, Gerd; Clara, Lucia; Haas, Johannes-Peter; Hügle, Boris; Huppertz, Hans-Iko; Weller, Frank; Consolaro, Alessandro; Bovis, Francesca; Ruperto, Nicolino

    2018-04-01

    The Juvenile Arthritis Multidimensional Assessment Report (JAMAR) is a new parent/patient-reported outcome measure that enables a thorough assessment of the disease status in children with juvenile idiopathic arthritis (JIA). We report the results of the cross-cultural adaptation and validation of the parent and patient versions of the JAMAR in the German language. The reading comprehension of the questionnaire was tested in 10 JIA parents and patients. The participating centres were asked to collect demographic and clinical data along with the JAMAR questionnaire in 100 consecutive JIA patients or all consecutive patients seen in a 6-month period and to administer the JAMAR to 100 healthy children and their parents. The statistical validation phase explored descriptive statistics and the psychometric issues of the JAMAR: the three Likert assumptions, floor/ceiling effects, internal consistency, Cronbach's alpha, interscale correlations, test-retest reliability, and construct validity (convergent and discriminant validity). A total of 319 JIA patients (2.8% systemic, 36.7% oligoarticular, 23.5% RF-negative polyarthritis, and 37% other categories) and 100 healthy children were enrolled in eight centres. The JAMAR components discriminated well between healthy subjects and JIA patients. All JAMAR components revealed good psychometric performances. In conclusion, the German version of the JAMAR is a valid tool for the assessment of children with JIA and is suitable for use both in routine clinical practice and in clinical research.

  9. Facilities Performance Indicators Report, 2004-05. Facilities Core Data Survey

    ERIC Educational Resources Information Center

    Glazner, Steve, Ed.

    2006-01-01

    The purpose of "Facilities Performance Indicators" is to provide a representative set of statistics about facilities in educational institutions. The second iteration of the web-based Facilities Core Data Survey was posted and available to facilities professionals at more than 3,000 institutions in the Fall of 2005. The website offered a printed…

  10. Standard and Robust Methods in Regression Imputation

    ERIC Educational Resources Information Center

    Moraveji, Behjat; Jafarian, Koorosh

    2014-01-01

    The aim of this paper is to provide an introduction to new imputation algorithms for estimating missing values and outliers in larger official-statistics data sets during data pre-processing. The goal is to propose a new algorithm called IRMI (iterative robust model-based imputation). This algorithm is able to deal with all challenges like…

  11. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  12. Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
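
    One standard metric for flagging near-linear dependencies between candidate regression terms is the variance inflation factor, which the sketch below computes directly with numpy. The rule-of-thumb threshold and the toy regressors are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def variance_inflation_factors(X):
        """VIF per column of a regressor matrix X (n samples x p terms).

        A large VIF (a common rule of thumb is > 10) flags a near-linear
        dependency between one candidate term and the remaining terms.
        """
        X = np.asarray(X, dtype=float)
        vifs = []
        for j in range(X.shape[1]):
            y = X[:, j]
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(y)), others])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            r2 = 1.0 - (y - A @ coef).var() / y.var()
            vifs.append(1.0 / max(1.0 - r2, 1e-12))
        return vifs

    rng = np.random.default_rng(0)
    x1, x2 = rng.normal(size=50), rng.normal(size=50)
    x3 = x1 + 0.01 * rng.normal(size=50)     # nearly collinear with x1
    print([round(v, 1) for v in variance_inflation_factors(np.column_stack([x1, x2, x3]))])
    ```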

  13. SOCIAL STABILITY AND HIV RISK BEHAVIOR: EVALUATING THE ROLE OF ACCUMULATED VULNERABILITY

    PubMed Central

    German, Danielle; Latkin, Carl A.

    2011-01-01

    This study evaluated a cumulative and syndromic relationship among commonly co-occurring vulnerabilities (homelessness, incarceration, low income, residential transition) in association with HIV-related risk behaviors among 635 low-income women in Baltimore. Analysis included descriptive statistics, logistic regression, latent class analysis and latent class regression. Both methods of assessing multidimensional instability showed significant associations with risk indicators. Risk of multiple partners, sex exchange, and drug use decreased significantly with each additional stability domain. Higher stability class membership (77%) was associated with decreased likelihood of multiple partners, exchange partners, recent drug use, and recent STI. Multidimensional social vulnerabilities were cumulatively and synergistically linked to HIV risk behavior. Independent instability measures may miss important contextual determinants of risk. Social stability offers a useful framework to understand the synergy of social vulnerabilities that shape sexual risk behavior. Social policies and programs aiming to enhance housing and overall social stability are likely to be beneficial for HIV prevention. PMID:21259043

  14. Machine Detection of Enhanced Electromechanical Energy Conversion in PbZr 0.2Ti 0.8O 3 Thin Films

    DOE PAGES

    Agar, Joshua C.; Cao, Ye; Naul, Brett; ...

    2018-05-28

    Many energy conversion, sensing, and microelectronic applications based on ferroic materials are determined by the domain structure evolution under applied stimuli. New hyperspectral, multidimensional spectroscopic techniques now probe dynamic responses at relevant length and time scales to provide an understanding of how these nanoscale domain structures impact macroscopic properties. Such approaches, however, remain limited in use because of the difficulties that exist in extracting and visualizing scientific insights from these complex datasets. Using multidimensional band-excitation scanning probe spectroscopy and adapting tools from both computer vision and machine learning, an automated workflow is developed to featurize, detect, and classify signatures of ferroelectric/ferroelastic switching processes in complex ferroelectric domain structures. This approach enables the identification and nanoscale visualization of varied modes of response and a pathway to statistically meaningful quantification of the differences between those modes. Lastly, among other things, the importance of domain geometry is spatially visualized for enhancing nanoscale electromechanical energy conversion.

  15. A Computational Model of Multidimensional Shape

    PubMed Central

    Liu, Xiuwen; Shi, Yonggang; Dinov, Ivo

    2010-01-01

    We develop a computational model of shape that extends existing Riemannian models of curves to multidimensional objects of general topological type. We construct shape spaces equipped with geodesic metrics that measure how costly it is to interpolate two shapes through elastic deformations. The model employs a representation of shape based on the discrete exterior derivative of parametrizations over a finite simplicial complex. We develop algorithms to calculate geodesics and geodesic distances, as well as tools to quantify local shape similarities and contrasts, thus obtaining a formulation that accounts for regional differences and integrates them into a global measure of dissimilarity. The Riemannian shape spaces provide a common framework to treat numerous problems such as the statistical modeling of shapes, the comparison of shapes associated with different individuals or groups, and modeling and simulation of shape dynamics. We give multiple examples of geodesic interpolations and illustrations of the use of the models in brain mapping, particularly, the analysis of anatomical variation based on neuroimaging data. PMID:21057668

  16. Entropy of Leukemia on Multidimensional Morphological and Molecular Landscapes

    NASA Astrophysics Data System (ADS)

    Vilar, Jose M. G.

    2014-04-01

    Leukemia epitomizes the class of highly complex diseases that new technologies aim to tackle by using large sets of single-cell-level information. Achieving such a goal depends critically not only on experimental techniques but also on approaches to interpret the data. A most pressing issue is to identify the salient quantitative features of the disease from the resulting massive amounts of information. Here, I show that the entropies of cell-population distributions on specific multidimensional molecular and morphological landscapes provide a set of measures for the precise characterization of normal and pathological states, such as those corresponding to healthy individuals and acute myeloid leukemia (AML) patients. I provide a systematic procedure to identify the specific landscapes and illustrate how, applied to cell samples from peripheral blood and bone marrow aspirates, this characterization accurately diagnoses AML from just flow cytometry data. The methodology can generally be applied to other types of cell populations and establishes a straightforward link between the traditional statistical thermodynamics methodology and biomedical applications.
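
    The core measure is easy to state in code: bin the cell population on a chosen landscape and take the Shannon entropy of the resulting distribution. The sketch below does this for simulated two-marker populations; the markers, bin count and populations are hypothetical stand-ins, not the landscapes identified in the paper.

    ```python
    import numpy as np

    def landscape_entropy(cells, bins=16):
        """Shannon entropy (bits) of a cell population binned on a
        multidimensional landscape, here a plain marker histogram."""
        hist, _ = np.histogramdd(cells, bins=bins)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    rng = np.random.default_rng(3)
    healthy = rng.normal(0.0, 1.0, size=(5000, 2))          # compact population
    aml_like = np.vstack([healthy, rng.normal(3.0, 2.0, size=(2000, 2))])
    print(round(landscape_entropy(healthy), 2),
          round(landscape_entropy(aml_like), 2))            # pathological state is broader
    ```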

  17. Multidimensional scaling of D15 caps: color-vision defects among tobacco smokers?

    PubMed

    Bimler, David; Kirkland, John

    2004-01-01

    Tobacco smoke contains a range of toxins including carbon monoxide and cyanide. With specialized cells and high metabolic demands, the optic nerve and retina are vulnerable to toxic exposure. We examined the possible effects of smoking on color vision: specifically, whether smokers perceive a different pattern of suprathreshold color dissimilarities from nonsmokers. It is already known that smokers differ in threshold color discrimination, with elevated scores on the Roth 28-Hue Desaturated panel test. Groups of smokers and nonsmokers, matched for sex and age, followed a triadic procedure to compare dissimilarities among 32 pigmented stimuli (the caps of the saturated and desaturated versions of the D15 panel test). Multidimensional scaling was applied to quantify individual variations in the salience of the axes of color space. Despite the briefness, simplicity, and "low-tech" nature of the procedure, subtle but statistically significant differences did emerge: on average the smoking group were significantly less sensitive to red-green differences. This is consistent with some form of injury to the optic nerve.
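
    As a sketch of the scaling step itself (not of the triadic experimental procedure), classical Torgerson multidimensional scaling embeds a dissimilarity matrix by double centering and eigendecomposition; the 4 x 4 matrix below is a hypothetical stand-in for cap dissimilarities.

    ```python
    import numpy as np

    def classical_mds(D, k=2):
        """Classical (Torgerson) MDS of a dissimilarity matrix D into k dims."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D ** 2) @ J              # double-centred squared distances
        w, v = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:k]            # keep the largest eigenvalues
        return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

    # Hypothetical dissimilarities among 4 colour caps (not D15 data)
    D = np.array([[0.0, 1.0, 2.0, 2.2],
                  [1.0, 0.0, 1.1, 2.0],
                  [2.0, 1.1, 0.0, 1.0],
                  [2.2, 2.0, 1.0, 0.0]])
    print(np.round(classical_mds(D), 2))
    ```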

  18. Machine Detection of Enhanced Electromechanical Energy Conversion in PbZr 0.2Ti 0.8O 3 Thin Films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agar, Joshua C.; Cao, Ye; Naul, Brett

    Many energy conversion, sensing, and microelectronic applications based on ferroic materials are determined by the domain structure evolution under applied stimuli. New hyperspectral, multidimensional spectroscopic techniques now probe dynamic responses at relevant length and time scales to provide an understanding of how these nanoscale domain structures impact macroscopic properties. Such approaches, however, remain limited in use because of the difficulties that exist in extracting and visualizing scientific insights from these complex datasets. Using multidimensional band-excitation scanning probe spectroscopy and adapting tools from both computer vision and machine learning, an automated workflow is developed to featurize, detect, and classify signatures of ferroelectric/ferroelastic switching processes in complex ferroelectric domain structures. This approach enables the identification and nanoscale visualization of varied modes of response and a pathway to statistically meaningful quantification of the differences between those modes. Lastly, among other things, the importance of domain geometry is spatially visualized for enhancing nanoscale electromechanical energy conversion.

  19. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way of optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations, and it typically requires fewer iterations to find effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well suited for practical drug optimization applications. PMID:23134742
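
    The published ARU update rule is not reproduced here; the sketch below only illustrates the general idea of comparing the current combination against an adaptively updated reference and keeping moves that improve the response. The response surface, step size and blending factor are all assumptions for illustration.

    ```python
    import numpy as np

    def reference_guided_search(response, x0, n_iter=200, step=0.05, seed=1):
        """Reference-guided stochastic search in the spirit of ARU.

        NOT the published ARU update rule: this only illustrates comparing
        the current combination against an adaptively updated reference and
        keeping moves that improve the measured response.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        reference = response(x)                      # running reference response
        for _ in range(n_iter):
            candidate = np.clip(x + step * rng.normal(size=x.size), 0.0, 1.0)
            r = response(candidate)
            if r > reference:                        # beneficial direction found
                x = candidate
            reference = 0.9 * reference + 0.1 * r    # adapt the reference
        return x

    # Hypothetical smooth two-drug response surface peaking at (0.3, 0.7)
    resp = lambda c: np.exp(-8.0 * ((c[0] - 0.3) ** 2 + (c[1] - 0.7) ** 2))
    print(np.round(reference_guided_search(resp, [0.5, 0.5]), 2))
    ```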

  20. Is the Bifactor Model a Better Model or Is It Just Better at Modeling Implausible Responses? Application of Iteratively Reweighted Least Squares to the Rosenberg Self-Esteem Scale.

    PubMed

    Reise, Steven P; Kim, Dale S; Mansolf, Maxwell; Widaman, Keith F

    2016-01-01

    Although the structure of the Rosenberg Self-Esteem Scale (RSES) has been exhaustively evaluated, questions regarding dimensionality and direction of wording effects continue to be debated. To shed new light on these issues, we ask (a) for what percentage of individuals is a unidimensional model adequate, (b) what additional percentage of individuals can be modeled with multidimensional specifications, and (c) what percentage of individuals respond so inconsistently that they cannot be well modeled? To estimate these percentages, we applied iteratively reweighted least squares (IRLS) to examine the structure of the RSES in a large, publicly available data set. A distance measure, ds, reflecting a distance between a response pattern and an estimated model, was used for case weighting. We found that a bifactor model provided the best overall model fit, with one general factor and two wording-related group factors. However, on the basis of dr values, a distance measure based on individual residuals, we concluded that approximately 86% of cases were adequately modeled through a unidimensional structure, and only an additional 3% required a bifactor model. Roughly 11% of cases were judged as "unmodelable" due to their significant residuals in all models considered. Finally, analysis of ds revealed that some, but not all, of the superior fit of the bifactor model is owed to that model's ability to better accommodate implausible and possibly invalid response patterns, and not necessarily because it better accounts for the effects of direction of wording.
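
    The case-weighting idea can be seen in miniature in any IRLS estimator: cases far from the current model receive small weights and the fit is repeated until stable. The sketch below applies this to a robust location estimate with Huber weights; it is an illustration of the reweighting principle, not the factor-analytic IRLS used in the paper.

    ```python
    import numpy as np

    def irls_location(x, n_iter=50, c=1.345):
        """Iteratively reweighted least squares for a robust location estimate.

        Cases far from the current model get small Huber weights; the fit is
        repeated until the estimate stabilizes.
        """
        x = np.asarray(x, dtype=float)
        mu = np.median(x)
        w = np.ones_like(x)
        for _ in range(n_iter):
            resid = x - mu
            scale = np.median(np.abs(resid)) / 0.6745 + 1e-12
            d = np.abs(resid) / scale            # residual distance per case
            w = np.where(d <= c, 1.0, c / d)     # Huber case weights
            mu_new = np.sum(w * x) / np.sum(w)
            if abs(mu_new - mu) < 1e-10:
                break
            mu = mu_new
        return mu, w

    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(0.0, 1.0, 95), [8, 9, 10, 11, 12]])
    mu, w = irls_location(x)
    print(round(mu, 2), "outlier weights:", np.round(np.sort(w)[:3], 2))
    ```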

  1. A rapid local singularity analysis algorithm with applications

    NASA Astrophysics Data System (ADS)

    Chen, Zhijun; Cheng, Qiuming; Agterberg, Frits

    2015-04-01

    The local singularity model developed by Cheng is fast gaining popularity in characterizing mineralization and detecting anomalies in geochemical, geophysical and remote sensing data. However, one of the conventional algorithms, which involves computing moving-average values at different scales, is time-consuming, especially when analyzing a large dataset. The summed area table (SAT), also called the integral image, is a fast algorithm used within the Viola-Jones object detection framework in the computer vision area. Historically, the principle of the SAT is well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (the area under the probability distribution) from the respective cumulative distribution functions. In this study we introduce the SAT and its variant, the rotated summed area table, for isotropic, anisotropic or directional local singularity mapping. Once the SAT has been computed, any rectangular sum can be obtained at any scale or location in constant time. The sum over any rectangular region in the image can be computed with only 4 array accesses, independently of the size of the region, effectively reducing the time complexity from O(n) to O(1). New programs in Python, Julia, MATLAB and C++ are implemented, respectively, to serve different applications, especially big-data analysis. Several large geochemical and remote sensing datasets are tested. A wide variety of scale changes (linear spacing or log spacing) for non-iterative or iterative approaches are adopted to calculate the singularity index values and compare the results. The results indicate that local singularity analysis with the SAT is more robust than, and superior to, the traditional approach in identifying anomalies.
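
    A minimal sketch of the data structure follows: after one pass to build the table, the sum over any rectangle comes from exactly four array accesses. (The singularity-index pipeline built around this step is not reproduced here.)

    ```python
    import numpy as np

    def summed_area_table(img):
        """Integral image with a zero border: sat[i, j] = img[:i, :j].sum()."""
        sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
        sat[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return sat

    def rect_sum(sat, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] from exactly 4 array accesses: O(1)."""
        return sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]

    img = np.arange(16.0).reshape(4, 4)
    sat = summed_area_table(img)
    assert rect_sum(sat, 1, 1, 3, 4) == img[1:3, 1:4].sum()
    print(rect_sum(sat, 1, 1, 3, 4))   # 48.0
    ```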

  2. Is the Bifactor Model a Better Model or is it Just Better at Modeling Implausible Responses? Application of Iteratively Reweighted Least Squares to the Rosenberg Self-Esteem Scale

    PubMed Central

    Reise, Steven P.; Kim, Dale S.; Mansolf, Maxwell; Widaman, Keith F.

    2017-01-01

    Although the structure of the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965) has been exhaustively evaluated, questions regarding dimensionality and direction of wording effects continue to be debated. To shed new light on these issues, we ask: (1) for what percentage of individuals is a unidimensional model adequate, (2) what additional percentage of individuals can be modeled with multidimensional specifications, and (3) what percentage of individuals respond so inconsistently that they cannot be well modeled? To estimate these percentages, we applied iteratively reweighted least squares (IRLS; Yuan & Bentler, 2000) to examine the structure of the RSES in a large, publicly available dataset. A distance measure, ds, reflecting a distance between a response pattern and an estimated model, was used for case weighting. We found that a bifactor model provided the best overall model fit, with one general factor and two wording-related group factors. But, based on dr values, a distance measure based on individual residuals, we concluded that approximately 86% of cases were adequately modeled through a unidimensional structure, and only an additional 3% required a bifactor model. Roughly 11% of cases were judged as “unmodelable” due to their significant residuals in all models considered. Finally, analysis of ds revealed that some, but not all, of the superior fit of the bifactor model is owed to that model’s ability to better accommodate implausible and possibly invalid response patterns, and not necessarily because it better accounts for the effects of direction of wording. PMID:27834509

  3. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    PubMed Central

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and the images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, mean decrease in CTDIvol was 46% (range, 19%–65%) and mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P < .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). By using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. © RSNA, 2013 Online supplemental material is available for this article. PMID:24091359

  4. Upgrade to iterative image reconstruction (IR) in abdominal MDCT imaging: a clinical study for detailed parameter optimization beyond vendor recommendations using the adaptive statistical iterative reconstruction environment (ASIR).

    PubMed

    Mueck, F G; Körner, M; Scherr, M K; Geyer, L L; Deak, Z; Linsenmaier, U; Reiser, M; Wirth, S

    2012-03-01

    To compare the image quality of dose-reduced 64-row abdominal CT reconstructed at different levels of adaptive statistical iterative reconstruction (ASIR) to full-dose baseline examinations reconstructed with filtered back-projection (FBP) in a clinical setting and upgrade situation. Abdominal baseline examinations (noise index NI = 29; LightSpeed VCT XT, GE) were intra-individually compared to follow-up studies on a CT with an ASIR option (NI = 43; Discovery HD750, GE), n = 42. Standard-kernel images were calculated with ASIR blendings of 0-100% in slice and volume mode, respectively. Three experienced radiologists compared the image quality of these 567 sets to their corresponding full-dose baseline examinations (-2: diagnostically inferior, -1: inferior, 0: equal, +1: superior, +2: diagnostically superior). Furthermore, a phantom was scanned. Statistical analysis used the Wilcoxon test, the Mann-Whitney U-test and the intra-class correlation (ICC). The mean CTDIvol decreased from 19.7 ± 5.5 to 12.2 ± 4.7 mGy (p < 0.001). The ICC was 0.861. The total image quality of the dose-reduced ASIR studies was comparable to the baseline at ASIR 50% in slice mode (p = 0.18) and ASIR 50-100% in volume mode (p > 0.10). Volume mode performed 73% slower than slice mode (p < 0.01). After the system upgrade, the vendor recommendation of ASIR 50% in slice mode allowed for a dose reduction of 38% in abdominal CT with comparable image quality and time expenditure. However, there is still further dose-reduction potential in more complex reconstruction settings. © Georg Thieme Verlag KG Stuttgart · New York.

  5. Upgrade to iterative image reconstruction (IR) in MDCT imaging: a clinical study for detailed parameter optimization beyond vendor recommendations using the adaptive statistical iterative reconstruction environment (ASIR) Part2: The chest.

    PubMed

    Mueck, F G; Michael, L; Deak, Z; Scherr, M K; Maxien, D; Geyer, L L; Reiser, M; Wirth, S

    2013-07-01

    To compare the image quality in dose-reduced 64-row CT of the chest at different levels of adaptive statistical iterative reconstruction (ASIR) to full-dose baseline examinations reconstructed solely with filtered back projection (FBP) in a realistic upgrade scenario. A waiver of consent was granted by the institutional review board (IRB). The noise index (NI) relates to the standard deviation of Hounsfield units in a water phantom. Baseline exams of the chest (NI = 29; LightSpeed VCT XT, GE Healthcare) were intra-individually compared to follow-up studies on a CT with ASIR after system upgrade (NI = 45; Discovery HD750, GE Healthcare), n = 46. Images were calculated in slice and volume mode with ASIR levels of 0-100% in the standard and lung kernels. Three radiologists independently compared the image quality to the corresponding full-dose baseline examinations (-2: diagnostically inferior, -1: inferior, 0: equal, +1: superior, +2: diagnostically superior). Statistical analysis used Wilcoxon's test, the Mann-Whitney U test and the intraclass correlation coefficient (ICC). The mean CTDIvol decreased by 53% from the FBP baseline to 8.0 ± 2.3 mGy for ASIR follow-ups; p < 0.001. The ICC was 0.70. Regarding the standard kernel, the image quality in dose-reduced studies was comparable to the baseline at ASIR 70% in volume mode (-0.07 ± 0.29, p = 0.29). Concerning the lung kernel, every ASIR level outperformed the baseline image quality (p < 0.001), with ASIR 30% rated best (slice: 0.70 ± 0.6, volume: 0.74 ± 0.61). The vendor's recommendation of 50% ASIR is fair. In detail, ASIR 70% in volume mode for the standard kernel and ASIR 30% for the lung kernel performed best, allowing for a dose reduction of approximately 50%. © Georg Thieme Verlag KG Stuttgart · New York.

  6. An Iterative Procedure for Obtaining I-Projections onto the Intersection of Convex Sets.

    DTIC Science & Technology

    1984-06-01

    Dykstra, Department of Statistics and Actuarial Science, The University of Iowa, Iowa City, Iowa 52242; Technical Report #106, June 1984. Only garbled scan fragments of this record survive; they state a theorem in which the constraint sets are closed, convex sets of probability distributions and the reference R is a nonnegative vector such that there exists a T in the intersection with I(T|R) finite.

  7. Additive scales in degenerative disease--calculation of effect sizes and clinical judgment.

    PubMed

    Riepe, Matthias W; Wilkinson, David; Förstl, Hans; Brieden, Andreas

    2011-12-16

    The therapeutic efficacy of an intervention is often assessed in clinical trials by scales measuring multiple diverse activities that are added to produce a cumulative global score. Medical communities and health care systems subsequently use these data to calculate pooled effect sizes to compare treatments. This is done because major doubt has been cast over the clinical relevance of statistically significant findings that rely on p values, which have the potential to report chance findings. Hence, in an aim to overcome this, pooling the results of clinical studies into a meta-analysis with a statistical calculus has been assumed to be a more definitive way of deciding on efficacy. We simulate therapeutic effects as measured with additive scales in patient cohorts with different disease severities and assess the limitations of effect size calculations for additive scales, which we prove mathematically. We demonstrate that the major problem, which cannot be overcome by current numerical methods, is the complex nature and neurobiological foundation of clinical psychiatric endpoints in particular and additive scales in general. This is particularly relevant for endpoints used in dementia research. 'Cognition' is composed of functions such as memory, attention, orientation and many more. These individual functions decline in varied and non-linear ways. Here we demonstrate that with progressive diseases cumulative values from multidimensional scales are subject to distortion by the limitations of the additive scale. The non-linearity of the decline of function impedes the calculation of effect sizes based on cumulative values from these multidimensional scales. Statistical analysis needs to be guided by boundaries of the biological condition. Alternatively, we suggest a different approach avoiding the error imposed by over-analysis of cumulative global scores from additive scales.
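
    A toy simulation makes the distortion visible: two subscales decline at different non-linear rates, one hits its floor in advanced disease, and the same absolute slowing of decline then yields very different effect sizes in mild versus advanced cohorts. All trajectories and numbers below are illustrative assumptions, not the authors' simulation.

    ```python
    import numpy as np

    def cohens_d(a, b):
        """Pooled-SD standardized mean difference."""
        s = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
        return (a.mean() - b.mean()) / s

    rng = np.random.default_rng(10)

    def global_score(severity, n=200):
        # Two subscales with different non-linear trajectories; the second
        # hits its floor early, so the summed score loses sensitivity there.
        f1 = 10.0 * (1.0 - severity)
        f2 = 10.0 * np.clip(1.0 - 2.0 * severity, 0.0, 1.0)
        return f1 + f2 + rng.normal(0.0, 1.5, n)

    for sev in (0.2, 0.7):                    # mild vs. advanced cohorts
        treated = global_score(sev - 0.05)    # identical slowing of decline
        placebo = global_score(sev)
        print(f"severity {sev}: d = {cohens_d(treated, placebo):.2f}")
    ```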

  8. Iterative Monte Carlo analysis of spin-dependent parton distributions

    DOE PAGES

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  9. Design and optimization of color lookup tables on a simplex topology.

    PubMed

    Monga, Vishal; Bala, Raja; Mo, Xuan

    2012-04-01

    An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of the nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or the output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against the state-of-the-art lattice-based (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy even with a much smaller number of nodes.

  10. Body Awareness: Construct and Self-Report Measures

    PubMed Central

    Mehling, Wolf E.; Gopisetty, Viranjini; Daubenmier, Jennifer; Price, Cynthia J.; Hecht, Frederick M.; Stewart, Anita

    2009-01-01

    Objectives Heightened body awareness can be adaptive and maladaptive. Improving body awareness has been suggested as an approach for treating patients with conditions such as chronic pain, obesity and post-traumatic stress disorder. We assessed the psychometric quality of selected self-report measures and examined their items for underlying definitions of the construct. Data sources PubMed, PsycINFO, HaPI, Embase, Digital Dissertations Database. Review methods Abstracts were screened; potentially relevant instruments were obtained and systematically reviewed. Instruments were excluded if they exclusively measured anxiety, covered emotions without related physical sensations, used observer ratings only, or were unobtainable. We restricted our study to the proprioceptive and interoceptive channels of body awareness. The psychometric properties of each scale were rated using a structured evaluation according to the method of McDowell. Following a working definition of the multi-dimensional construct, an inter-disciplinary team systematically examined the items of existing body awareness instruments, identified the dimensions queried and used an iterative qualitative process to refine the dimensions of the construct. Results From 1,825 abstracts, 39 instruments were screened. Twelve were included for psychometric evaluation. Only two were rated as high standard for reliability, four for validity. Four domains of body awareness with 11 sub-domains emerged. Neither a single nor a compilation of several instruments covered all dimensions. Key domains that might potentially differentiate adaptive and maladaptive aspects of body awareness were missing in the reviewed instruments. Conclusion Existing self-report instruments do not address important domains of the construct of body awareness, are unable to discern between adaptive and maladaptive aspects of body awareness, or exhibit other psychometric limitations. Restricting the construct to its proprio- and interoceptive channels, we explore the current understanding of the multi-dimensional construct and suggest next steps for further research. PMID:19440300

  11. Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, F. E.; Malamud, B. D.

    2012-04-01

    During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and we apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse-gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in the existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to give an average density of 1 landslide km-2, i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen, in a 'T'-shape configuration, with one road of 1 x 4000 cells (5 m x 20 km) in a 'T' formation with another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km2 region, the number of road blocks per iteration, NBL, ranges from 0 to 7. The average blockage area over the 500 iterations (ĀBL) is about 3000 m2, which closely matches the value of ĀL for the triggered landslide inventories. We further find that over the 500 iterations, the probability of a given number of road blocks occurring on any given iteration, p(NBL), as a function of NBL follows reasonably well a three-parameter inverse-gamma probability density distribution with an exponential rollover (i.e., the most frequent value) at NBL = 1.3. In this paper we have begun to calculate the probability of a given number of landslides blocking roads during a triggering event, and have found that this follows an inverse-gamma distribution, similar to that found for the statistics of landslide areas resulting from triggers. As we progress to model more realistic road networks, this work will aid both long-term and disaster management for road networks by allowing probabilistic assessment of potential road network damage during landslide triggering event scenarios of different magnitudes.
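
    A stripped-down version of such a Monte Carlo run fits in a few lines: draw inverse-gamma landslide areas, drop them on a region, and count hits on a single straight road. The shape and scale parameters and the one-road geometry below are illustrative assumptions, not the study's configuration.

    ```python
    import numpy as np

    def simulate_blockages(n_iter=500, n_slides=400, region_m=20000.0,
                           road_y=10000.0, seed=4):
        """Toy road-blockage Monte Carlo: drop landslides with inverse-gamma
        areas onto a square region and count hits on one straight road."""
        rng = np.random.default_rng(seed)
        counts = []
        for _ in range(n_iter):
            # If G ~ Gamma(a, scale=1/b), then 1/G ~ InvGamma(a, scale=b).
            areas = 1.0 / rng.gamma(shape=1.4, scale=1.0 / 1500.0, size=n_slides)
            radii = np.sqrt(areas / np.pi)             # circular slides, in m
            y = rng.uniform(0.0, region_m, n_slides)   # centre offsets from road
            counts.append(int(np.sum(np.abs(y - road_y) < radii)))
        return np.array(counts)

    counts = simulate_blockages()
    print("mean road blockages per event:", counts.mean())
    ```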

  12. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    NASA Astrophysics Data System (ADS)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
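
    For reference, the basic ordered-subset EM iteration that such comparisons build on can be sketched for a generic Poisson system y ~ Poisson(Ax); the RBI variant differs in how each block update is rescaled, which is not reproduced here. The system matrix and counts below are toy values.

    ```python
    import numpy as np

    def os_em(A, y, n_subsets=4, n_iter=10):
        """Bare-bones ordered-subset EM (OS-EM) for y ~ Poisson(Ax)."""
        m, n = A.shape
        x = np.ones(n)
        subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
        for _ in range(n_iter):
            for idx in subsets:                  # one multiplicative update per subset
                As, ys = A[idx], y[idx]
                ratio = ys / np.maximum(As @ x, 1e-12)
                x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(idx)), 1e-12)
        return x

    rng = np.random.default_rng(11)
    A = rng.uniform(0.0, 1.0, size=(64, 16))
    x_true = rng.uniform(0.5, 2.0, size=16)
    y = rng.poisson(A @ x_true).astype(float)
    print(np.round(os_em(A, y), 2))              # roughly recovers x_true
    ```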

  13. BaTMAn: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2016-12-01

    Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.

  14. Compressing random microstructures via stochastic Wang tilings.

    PubMed

    Novák, Jan; Kučerová, Anna; Zeman, Jan

    2012-10-01

    This Rapid Communication presents a stochastic Wang tiling-based technique to compress or reconstruct disordered microstructures on the basis of given spatial statistics. Unlike the existing approaches based on a single unit cell, it utilizes a finite set of tiles assembled by a stochastic tiling algorithm, thereby allowing one to accurately reproduce long-range orientation orders in a computationally efficient manner. Although the basic features of the method are demonstrated for a two-dimensional particulate suspension, the present framework is fully extensible to generic multidimensional media.

  15. Integral equations in the study of polar and ionic interaction site fluids

    PubMed Central

    Howard, Jesse J.

    2011-01-01

    In this review article we consider some of the current integral equation approaches and their application to model polar liquid mixtures. We consider the use of multidimensional integral equations and, in particular, progress on the theory and applications of three-dimensional integral equations. The IEs we consider may be derived from equilibrium statistical mechanical expressions incorporating a classical Hamiltonian description of the system. We give examples including salt solutions, inhomogeneous solutions and systems containing proteins and nucleic acids. PMID:22383857

  16. Trends in modeling Biomedical Complex Systems

    PubMed Central

    Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro

    2009-01-01

    In this paper we provide an introduction to the techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts of mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068

  17. Path integral molecular dynamics for exact quantum statistics of multi-electronic-state systems.

    PubMed

    Liu, Xinzijian; Liu, Jian

    2018-03-14

    An exact approach to compute physical properties for general multi-electronic-state (MES) systems in thermal equilibrium is presented. The approach is extended from our recent progress on path integral molecular dynamics (PIMD), Liu et al. [J. Chem. Phys. 145, 024103 (2016)] and Zhang et al. [J. Chem. Phys. 147, 034109 (2017)], for quantum statistical mechanics when a single potential energy surface is involved. We first define an effective potential function that is numerically favorable for MES-PIMD and then derive corresponding estimators in MES-PIMD for evaluating various physical properties. Its application to several representative one-dimensional and multi-dimensional models demonstrates that MES-PIMD in principle offers a practical tool in either of the diabatic and adiabatic representations for studying exact quantum statistics of complex/large MES systems when the Born-Oppenheimer approximation, Condon approximation, and harmonic bath approximation are broken.

  18. Path integral molecular dynamics for exact quantum statistics of multi-electronic-state systems

    NASA Astrophysics Data System (ADS)

    Liu, Xinzijian; Liu, Jian

    2018-03-01

    An exact approach to compute physical properties for general multi-electronic-state (MES) systems in thermal equilibrium is presented. The approach is extended from our recent progress on path integral molecular dynamics (PIMD), Liu et al. [J. Chem. Phys. 145, 024103 (2016)] and Zhang et al. [J. Chem. Phys. 147, 034109 (2017)], for quantum statistical mechanics when a single potential energy surface is involved. We first define an effective potential function that is numerically favorable for MES-PIMD and then derive corresponding estimators in MES-PIMD for evaluating various physical properties. Its application to several representative one-dimensional and multi-dimensional models demonstrates that MES-PIMD in principle offers a practical tool in either of the diabatic and adiabatic representations for studying exact quantum statistics of complex/large MES systems when the Born-Oppenheimer approximation, Condon approximation, and harmonic bath approximation are broken.

  19. Continuity vs. the Crowd: Tradeoffs Between Continuous and Intermittent Citizen Hydrology Streamflow Observations.

    PubMed

    Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine

    2017-07-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
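
    The subsampling experiment is straightforward to emulate: reshape a 15-min record into one row per day, draw one observation per day per bootstrap iteration, and recompute the statistics. The synthetic 'flashy' record below is an illustrative assumption, not USGS data.

    ```python
    import numpy as np

    def subsample_stats(flow_15min, per_period, n_boot=50, seed=5):
        """Bootstrap subsampling of a 15-min flow record: keep one randomly
        chosen observation per period and recompute min, max and a runoff
        proxy, mimicking sparser citizen observations."""
        rng = np.random.default_rng(seed)
        periods = flow_15min.reshape(-1, per_period)   # one row per period
        n = len(periods)
        out = []
        for _ in range(n_boot):
            picks = periods[np.arange(n), rng.integers(0, per_period, n)]
            out.append((picks.min(), picks.max(), picks.sum()))
        return np.array(out)

    # Synthetic 'flashy' record: baseflow plus a few short storm spikes
    rng = np.random.default_rng(7)
    flow = 1.0 + 0.1 * rng.normal(size=96 * 365)       # 96 samples per day
    flow[rng.integers(0, flow.size, 20)] += 50.0
    stats = subsample_stats(flow, per_period=96)       # daily observations
    print("true max:", round(flow.max(), 1),
          "| median subsampled max:", round(float(np.median(stats[:, 1])), 1))
    ```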

  20. Validating Lung Models Using the ASL 5000 Breathing Simulator.

    PubMed

    Dexter, Amanda; McNinch, Neil; Kaznoch, Destiny; Volsko, Teresa A

    2018-04-01

    This study sought to validate pediatric models with normal and altered pulmonary mechanics. PubMed and CINAHL databases were searched for studies directly measuring pulmonary mechanics of healthy infants and children, infants with severe bronchopulmonary dysplasia, and children with neuromuscular disease. The ASL 5000 was used to construct models using tidal volume (VT), inspiratory time (TI), respiratory rate, resistance, compliance, and esophageal pressure gleaned from the literature. Data were collected for a 1-minute period and repeated three times for each model. t tests compared modeled data with data abstracted from the literature. Repeated measures analyses evaluated model performance over multiple iterations. Statistical significance was established at a P value of less than 0.05. Maximum differences of means (experimental iteration mean - clinical standard mean) for TI and VT are the following: term infant without lung disease (TI = 0.09 s, VT = 0.29 mL), severe bronchopulmonary dysplasia (TI = 0.08 s, VT = 0.17 mL), child without lung disease (TI = 0.10 s, VT = 0.17 mL), and child with neuromuscular disease (TI = 0.09 s, VT = 0.57 mL). One-sample testing demonstrated statistically significant differences between clinical controls and VT and TI values produced by the ASL 5000 for each iteration and model (P < 0.01). The greatest magnitude of differences was negligible (VT < 1.6%, TI = 18%) and not clinically relevant. Inconsistencies occurred with the models constructed on the ASL 5000. It was deemed accurate for the study purposes. It is therefore essential to test models and evaluate magnitude of differences before use.

  1. Evaluation of reconstruction techniques in regional cerebral blood flow SPECT using trade-off plots: a Monte Carlo study.

    PubMed

    Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha

    2007-09-01

    The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥ 2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.

  2. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography

    PubMed Central

    Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-01-01

    Background Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. Purpose To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Material and Methods Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. Results VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. Conclusion ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR. PMID:28405477

  3. Multidimensional poverty, household environment and short-term morbidity in India.

    PubMed

    Dehury, Bidyadhar; Mohanty, Sanjay K

    2017-01-01

    Using the unit data from the second round of the Indian Human Development Survey (IHDS-II), 2011-2012, which covered 42,152 households, this paper examines the association between multidimensional poverty, household environmental deprivation and short-term morbidities (fever, cough and diarrhoea) in India. Poverty is measured in a multidimensional framework that includes the dimensions of education, health and income, while household environmental deprivation is defined as lack of access to improved sanitation, drinking water and cooking fuel. A composite index combining multidimensional poverty and household environmental deprivation has been computed, and households are classified as follows: multidimensional poor and living in a poor household environment, multidimensional non-poor and living in a poor household environment, multidimensional poor and living in a good household environment and multidimensional non-poor and living in a good household environment. Results suggest that about 23% of the population belonging to multidimensional poor households and living in a poor household environment had experienced short-term morbidities in a reference period of 30 days compared to 20% of the population belonging to multidimensional non-poor households and living in a poor household environment, 19% of the population belonging to multidimensional poor households and living in a good household environment and 15% of the population belonging to multidimensional non-poor households and living in a good household environment. Controlling for socioeconomic covariates, the odds of short-term morbidity were 1.47 [CI 1.40-1.53] among the multidimensional poor and living in a poor household environment, 1.28 [CI 1.21-1.37] among the multidimensional non-poor and living in a poor household environment and 1.21 [CI 1.64-1.28] among the multidimensional poor and living in a good household environment compared to the multidimensional non-poor and living in a good household environment. Results are robust across states and hold good for each of the three morbidities: fever, cough and diarrhoea. This establishes that along with poverty, household environmental conditions have a significant bearing on short-term morbidities in India. Public investment in sanitation, drinking water and cooking fuel can reduce the morbidity and improve the health of the population.

  4. Adaptive statistical iterative reconstruction improves image quality without affecting perfusion CT quantitation in primary colorectal cancer.

    PubMed

    Prezzi, D; Goh, V; Virdi, S; Mallett, S; Grierson, C; Breen, D J

    2017-01-01

    To determine the effect of Adaptive Statistical Iterative Reconstruction (ASIR) on perfusion CT (pCT) parameter quantitation and image quality in primary colorectal cancer. Prospective observational study. Following institutional review board approval and informed consent, 32 patients with colorectal adenocarcinoma underwent pCT (100 kV, 150 mA, 120 s acquisition, axial mode). Tumour regional blood flow (BF), blood volume (BV), mean transit time (MTT) and permeability surface area product (PS) were determined using identical regions-of-interest for ASIR percentages of 0%, 20%, 40%, 60%, 80% and 100%. Image noise, contrast-to-noise ratio (CNR) and pCT parameters were assessed across ASIR percentages. Coefficients of variation (CV), repeated measures analysis of variance (rANOVA) and Spearman's rank order correlation were performed with statistical significance at 5%. With increasing ASIR percentages, image noise decreased by 33% while CNR increased by 61%; peak tumour CNR was greater than 1.5 with 60% ASIR and above. Mean BF, BV, MTT and PS differed by less than 1.8%, 2.9%, 2.5% and 2.6% across ASIR percentages. CV were 4.9%, 4.2%, 3.3% and 7.9%; rANOVA P values: 0.85, 0.62, 0.02 and 0.81 respectively. ASIR improves image noise and CNR without altering pCT parameters substantially.

  5. FAST COGNITIVE AND TASK ORIENTED, ITERATIVE DATA DISPLAY (FACTOID)

    DTIC Science & Technology

    2017-06-01

    approaches. As a result, the following assumptions guided our efforts in developing modeling and descriptive metrics for evaluation purposes... Application Evaluation. Our analytic workflow for evaluation is to first provide descriptive statistics about applications across metrics (performance... distributions for evaluation purposes because the goal of evaluation is accurate description, not inference (e.g., prediction). Outliers depicted

  6. Using Learning Trajectories for Teacher Learning to Structure Professional Development

    ERIC Educational Resources Information Center

    Bargagliotti, Anna E.; Anderson, Celia Rousseau

    2017-01-01

    As a result of the increased focus on data literacy and data science across the world, there has been a large demand for professional development in statistics. However, exactly how these professional development opportunities should be structured remains an open question. The purpose of this paper is to describe the first iteration of a design…

  7. Panic Disorder and Agoraphobia: Considerations for DSM-V

    ERIC Educational Resources Information Center

    Schmidt, Norman B.; Norr, Aaron M.; Korte, Kristina J.

    2014-01-01

    With the upcoming release of the fifth edition of the "Diagnostic and Statistical Manual of Mental Disorders" (DSM-V) there has been a necessary critique of the DSM-IV including questions regarding how to best improve the next iteration of the DSM classification system. The aim of this article is to provide commentary on the probable…

  8. Coherent changes of multifractal properties of continuous acoustic emission at failure of heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Panteleev, Ivan; Bayandin, Yuriy; Naimark, Oleg

    2017-12-01

    This work performs a correlation analysis of the statistical properties of continuous acoustic emission recorded in different parts of marble and fiberglass laminate samples under quasi-static deformation. A spectral coherence measure of time series, a generalization of the squared coherence spectrum to multidimensional series, was chosen. The spectral coherence measure was estimated in a sliding time window for two parameters of the acoustic emission multifractal singularity spectrum: the spectrum width and the generalized Hurst exponent at which the singularity spectrum attains its maximum. It is shown that the development of the macrofracture focus is accompanied by synchronization (coherent behavior) of the statistical properties of the acoustic emission in selected frequency intervals.
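
    The multidimensional coherence measure used above generalizes the ordinary magnitude-squared coherence of a pair of time series. A sliding-window sketch of that pairwise building block, using scipy, might look as follows; the window sizes and the synthetic signals are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

def sliding_coherence(x, y, fs, win, step, nperseg=64):
    """Mean magnitude-squared coherence of two parameter series in sliding windows."""
    out = []
    for start in range(0, len(x) - win + 1, step):
        f, cxy = coherence(x[start:start + win], y[start:start + win], fs=fs, nperseg=nperseg)
        out.append((start / fs, cxy.mean()))  # (window start time, mean coherence)
    return np.array(out)

# x and y stand in for a singularity-spectrum parameter (e.g. spectrum width)
# estimated in two different sensor channels
rng = np.random.default_rng(0)
t = np.arange(4096)
common = np.sin(2 * np.pi * 0.01 * t)           # shared low-frequency component
x = common + rng.normal(0, 1, t.size)
y = common + rng.normal(0, 1, t.size)
print(sliding_coherence(x, y, fs=1.0, win=1024, step=512)[:3])
```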

  9. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction

    PubMed Central

    Nikazad, T; Davidi, R; Herman, G. T.

    2013-01-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least-squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a speed-up of more than an order of magnitude, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are shown to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911

  10. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction.

    PubMed

    Nikazad, T; Davidi, R; Herman, G T

    2012-03-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least-squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a speed-up of more than an order of magnitude, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are shown to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data.
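
    As a rough illustration of the unaccelerated baseline that these two records improve upon, here is a minimal Cimmino-type block-iterative projection sketch for a linear system; the authors' accelerated, perturbation-resilient algorithms (and their order-of-magnitude speed-up) are not reproduced here.

```python
import numpy as np

def block_iterative_projection(A, b, blocks, n_iters=500, relaxation=1.0):
    """Cimmino-type block-iterative projections for A x = b.

    Within each block, x is projected onto every hyperplane a_i . x = b_i
    and the projections are averaged; blocks are swept cyclically.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iters):
        for block in blocks:
            residual = (b[block] - A[block] @ x) / row_norms[block]
            x = x + relaxation * (A[block].T @ residual) / len(block)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
b = A @ rng.normal(size=10)                      # consistent system
blocks = [list(range(i, i + 10)) for i in range(0, 40, 10)]
x_hat = block_iterative_projection(A, b, blocks)
print(np.linalg.norm(A @ x_hat - b))             # residual shrinks toward zero
```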

  11. Multidimensional chromatography in food analysis.

    PubMed

    Herrero, Miguel; Ibáñez, Elena; Cifuentes, Alejandro; Bernal, Jose

    2009-10-23

    In this work, the main developments and applications of multidimensional chromatographic techniques in food analysis are reviewed. Different aspects related to the existing couplings involving chromatographic techniques are examined. These couplings include multidimensional GC, multidimensional LC, multidimensional SFC as well as all their possible combinations. Main advantages and drawbacks of each coupling are critically discussed and their key applications in food analysis described.

  12. Multidimensional Riemann problem with self-similar internal structure - part III - a multidimensional analogue of the HLLI Riemann solver for conservative hyperbolic systems

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Nkonga, Boniface

    2017-10-01

    Just as the quality of a one-dimensional approximate Riemann solver is improved by the inclusion of internal sub-structure, the quality of a multidimensional Riemann solver is similarly improved. Such multidimensional Riemann problems arise when multiple states come together at the vertex of a mesh. The interaction of the resulting one-dimensional Riemann problems gives rise to a strongly-interacting state. We wish to endow this strongly-interacting state with physically-motivated sub-structure. The fastest way of endowing such sub-structure consists of making a multidimensional extension of the HLLI Riemann solver for hyperbolic conservation laws. Presenting such a multidimensional analogue of the HLLI Riemann solver with linear sub-structure, for use on structured meshes, is the goal of this work. The multidimensional MuSIC Riemann solver documented here is universal in the sense that it can be applied to any hyperbolic conservation law. The multidimensional Riemann solver is made consistent with constraints that emerge naturally from the Galerkin projection of the self-similar states within the wave model. When the full eigenstructure in both directions is used in the present Riemann solver, it becomes a complete Riemann solver in the multidimensional sense; that is, all the intermediate waves are represented in the multidimensional wave model. The work also presents, for the first time, an analysis of the dissipation characteristics of multidimensional Riemann solvers. The present Riemann solver results in the most efficient implementation of a multidimensional Riemann solver with sub-structure. Because it preserves stationary linearly degenerate waves, it might also help with well-balancing. Implementation-related details are presented in pointwise fashion for the one-dimensional HLLI Riemann solver as well as the multidimensional MuSIC Riemann solver.
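
    For orientation, the one-dimensional two-wave HLL flux that HLLI-type solvers enrich with sub-structure can be sketched as follows; the wave-speed estimates and the Burgers example are illustrative, and none of the multidimensional sub-structure described above is included.

```python
def hll_flux(u_l, u_r, f_l, f_r, s_l, s_r):
    """Classic two-wave HLL flux for a hyperbolic conservation law.

    u_l, u_r : conserved states left/right of the interface
    f_l, f_r : physical fluxes evaluated at those states
    s_l, s_r : estimates of the slowest/fastest signal speeds
    """
    if s_l >= 0.0:
        return f_l
    if s_r <= 0.0:
        return f_r
    # Flux of the single intermediate ("strongly interacting") state
    return (s_r * f_l - s_l * f_r + s_l * s_r * (u_r - u_l)) / (s_r - s_l)

# Example: Burgers' equation, f(u) = u^2 / 2, at one interface
u_l, u_r = 1.0, -0.5
f = lambda u: 0.5 * u * u
print(hll_flux(u_l, u_r, f(u_l), f(u_r), s_l=min(u_l, u_r), s_r=max(u_l, u_r)))
```

    The HLLI and MuSIC solvers replace the single intermediate state above with a linear sub-structure so that intermediate (e.g. linearly degenerate) waves are also represented.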

  13. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    NASA Astrophysics Data System (ADS)

    Zerr, Robert Joseph

    2011-12-01

    The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iteration (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, to determine the most suitable way to parallelize the entire procedure, and to evaluate the behavior and performance of the developed methods for an increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI-DSA, namely highly scattering and optically thick ones. SI-DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed, it cannot be concluded that SI-DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI-DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, the practical relevance of these problems is much lower, further limiting the instances where it would be beneficial to select the ITMM over SI-DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of the scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems: it tends to lower the rate of growth of the number of iterations with increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times. Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, it predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM, once developed, will improve the convergence rate and its competitiveness. (Abstract shortened by UMI.)
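
    The domain-decomposed iteration pattern described above (independent sub-domain solves coupled through interface data that lags by one iteration) is easiest to see in a generic block-Jacobi sketch for a linear system; this is not the ITMM itself, and the transport-specific operators are replaced by plain matrix blocks.

```python
import numpy as np

def block_jacobi(A, b, block_size, n_iters=100):
    """Block-Jacobi iteration for A x = b: each diagonal block is solved
    independently (as each spatial sub-domain would be); the off-block
    coupling uses the previous iterate, mimicking lagged interface fluxes."""
    n = len(b)
    x = np.zeros(n)
    starts = list(range(0, n, block_size))
    inverses = [np.linalg.inv(A[s:s + block_size, s:s + block_size]) for s in starts]
    for _ in range(n_iters):
        x_new = np.empty_like(x)
        for inv, s in zip(inverses, starts):
            e = slice(s, s + block_size)
            r = b[e] - A[e] @ x + A[e, e] @ x[e]   # remove own-block coupling
            x_new[e] = inv @ r
        x = x_new
    return x

# Diagonally dominant test system, for which block Jacobi converges
n = 12
A = np.eye(n) * 4 + np.random.default_rng(2).normal(0, 0.1, (n, n))
b = np.ones(n)
print(np.linalg.norm(A @ block_jacobi(A, b, block_size=4) - b))
```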

  14. Penalized weighted least-squares approach for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

    The noise of low-dose computed tomography (CT) sinogram data approximately follows a Gaussian distribution, with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle; however, the correlation coefficient matrix of the data signal indicates a strong correlation among neighboring views. Based on these observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm that uses Gauss-Seidel iteration to minimize the PWLS objective function in the image domain. We also compared KL-PWLS with an iterative sinogram smoothing algorithm that uses iterated conditional modes to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in reconstructed images. Computer simulations concur with the phantom experiments in terms of the noise-resolution tradeoff and detectability in low-contrast environments. KL-PWLS noise reduction may have a computational advantage for low-dose CT imaging, especially for dynamic high-resolution studies.
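
    A minimal sketch of the PWLS building block, applied to a single noisy profile with signal-dependent variance, might look as follows; the KL transform stage and the CT system specifics are omitted, and the penalty weight and step-size rule are illustrative assumptions.

```python
import numpy as np

def pwls_smooth(y, variances, beta=1.0, n_iters=500):
    """Penalized weighted least-squares smoothing of one sinogram profile.

    Minimizes  sum_i (s_i - y_i)^2 / var_i + beta * sum_i (s_{i+1} - s_i)^2
    by gradient descent, weighting each sample by its inverse variance.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    s = np.array(y, dtype=float)
    step = 0.4 / (w.max() + 4.0 * beta)   # conservative stability bound
    for _ in range(n_iters):
        grad = 2.0 * w * (s - y)
        grad[:-1] += 2.0 * beta * (s[:-1] - s[1:])
        grad[1:] += 2.0 * beta * (s[1:] - s[:-1])
        s -= step * grad
    return s

rng = np.random.default_rng(3)
truth = 100.0 * np.sin(np.linspace(0, np.pi, 128)) + 200.0
var = 1.0 + truth / 10.0        # variance grows with the mean, as in low-dose data
noisy = truth + rng.normal(0.0, np.sqrt(var))
print(np.abs(pwls_smooth(noisy, var, beta=5.0) - truth).mean())
```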

  15. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes prominent and results in some non-positive signals in the raw measurements. The non-positive signal must be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signal to a positive signal by controlling mainly the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, raw measurements smoothed by the iterative algorithm are converted to a positive signal according to a function that replaces the non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also work successfully in combination with the post-log data filter to reduce streak artifacts.
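
    The second step described above, replacing non-positive samples with a local mean so that the log transform is defined, can be sketched in a few lines; the window size is an illustrative assumption, and the preceding PWLS restoration step is not reproduced.

```python
import numpy as np

def to_positive(raw, window=5, eps=1e-6):
    """Replace non-positive raw measurements with a local mean so that the
    subsequent log transform is defined; positive samples are untouched."""
    raw = np.asarray(raw, dtype=float)
    pad = window // 2
    padded = np.pad(raw, pad, mode="edge")
    local_mean = np.convolve(padded, np.ones(window) / window, mode="valid")
    return np.where(raw > 0, raw, np.maximum(local_mean, eps))

counts = np.array([120.0, 80.0, -3.0, 0.0, 60.0, 200.0])  # electronic noise gives negatives
print(np.log(to_positive(counts)))
```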

  16. Limiting CT radiation dose in children with craniosynostosis: phantom study using model-based iterative reconstruction.

    PubMed

    Kaasalainen, Touko; Palmu, Kirsi; Lampinen, Anniina; Reijonen, Vappu; Leikola, Junnu; Kivisaari, Riku; Kortesniemi, Mika

    2015-09-01

    Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging, we scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased by up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings of the MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with an effective dose of 20 μSv, corresponding to the radiation exposure of plain skull radiography, without compromising the required image quality.

  17. Multidimensionality of the Zarit Burden Interview across the severity spectrum of cognitive impairment: an Asian perspective.

    PubMed

    Cheah, Wee Kooi; Han, Huey Charn; Chong, Mei Sian; Anthony, Philomena Vasantha; Lim, Wee Shiong

    2012-11-01

    We aimed to examine the multidimensionality of the Zarit Burden Interview (ZBI) beyond the conventional dual-factor structure among caregivers of persons with cognitive impairment in a predominantly Chinese multiethnic Asian population, and ascertain how these dimensions vary across the spectrum of disease severity. We studied 130 consecutive dyads of primary caregivers and patients attending a memory clinic over a six-month period. Caregiver burden was measured by the 22-item ZBI, and disease severity was staged via the Clinical Dementia Rating (CDR) scale. We performed principal component analysis (PCA) with varimax rotation to determine the factor structure of the ZBI. The magnitude of burden in each factor was expressed as the item to total ratio (ITR) and plotted against the stages of cognitive impairment. Descriptive and inferential statistics were applied to study the relationships between dimensions with disease and caregiver characteristics. We identified four factors: demands of care and social impact, control over the situation, psychological impact, and worry about caregiving performance. ITRs of the first three factors increased with severity of disease and were related to recipients' functional status and disease characteristics. ITR in the dimension of worry about performance was endorsed highest across the spectrum of disease severity, starting as early as the stage of mild cognitive impairment and peaking at CDR 1. Multidimensionality of ZBI was confirmed in our local setting. Each dimension of burden was unique and expressed differentially across disease severity. The dimension of worry about performance merits further study.

  18. Multidimensional spectrometer

    DOEpatents

    Zanni, Martin Thomas; Damrauer, Niels H.

    2010-07-20

    A multidimensional spectrometer for the infrared, visible, and ultraviolet regions of the electromagnetic spectrum, and a method for making multidimensional spectroscopic measurements in the infrared, visible, and ultraviolet regions of the electromagnetic spectrum. The multidimensional spectrometer facilitates measurements of inter- and intra-molecular interactions.

  19. An accelerated lambda iteration method for multilevel radiative transfer. I - Non-overlapping lines with background continuum

    NASA Technical Reports Server (NTRS)

    Rybicki, G. B.; Hummer, D. G.

    1991-01-01

    A method is presented for solving multilevel transfer problems when nonoverlapping lines and background continuum are present and active continuum transfer is absent. An approximate lambda operator is employed to derive linear, 'preconditioned' statistical-equilibrium equations. A method is described for finding the diagonal elements of the 'true' numerical lambda operator, and hence the coefficients of these equations. Iterations of the preconditioned equations, in conjunction with the formal solution of the transfer equation, are used to solve the linear equations. Several multilevel problems are considered, including an eleven-level neutral helium atom. Diagonal and tridiagonal approximate lambda operators are used in these problems to examine the convergence properties of the method, which is found to be effective for line transfer problems.
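
    The core of such schemes is the preconditioned update built from the approximate operator. For the simplest two-level-atom illustration (the paper treats the multilevel generalization), the iteration can be written as:

```latex
% Accelerated lambda iteration with approximate operator \Lambda^{*}
% (two-level-atom illustration with photon destruction probability \epsilon)
\begin{align*}
  \bar{J}^{(n)} &= \Lambda\!\left[S^{(n)}\right]
      && \text{(formal solution of the transfer equation)} \\
  S^{(n+1)} &= \left[1 - (1-\epsilon)\Lambda^{*}\right]^{-1}
      \left[(1-\epsilon)\left(\bar{J}^{(n)} - \Lambda^{*}S^{(n)}\right)
      + \epsilon B\right]
      && \text{(preconditioned update)}
\end{align*}
```

    With a diagonal Λ*, the operator inverse reduces to a scalar division at each depth point, which is what makes the preconditioning cheap; a tridiagonal Λ*, as considered above, trades a little extra cost for faster convergence.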

  20. The French version of the Juvenile Arthritis Multidimensional Assessment Report (JAMAR).

    PubMed

    Quartier, Pierre; Hofer, Michael; Wouters, Carine; Truong, Thi Thanh Thao; Duong, Ngoc-Phoi; Agbo-Kpati, Kokou-Placide; Uettwiller, Florence; Melki, Isabelle; Mouy, Richard; Bader-Meunier, Brigitte; Consolaro, Alessandro; Bovis, Francesca; Ruperto, Nicolino

    2018-04-01

    The Juvenile Arthritis Multidimensional Assessment Report (JAMAR) is a new parent/patient-reported outcome measure that enables a thorough assessment of disease status in children with juvenile idiopathic arthritis (JIA). We report the results of the cross-cultural adaptation and validation of the parent and patient versions of the JAMAR in the French language. The reading comprehension of the questionnaire was tested in 10 JIA parents and patients. Each participating centre was asked to collect demographic and clinical data and the JAMAR in 100 consecutive JIA patients or all consecutive patients seen in a 6-month period, and to administer the JAMAR to 100 healthy children and their parents. The statistical validation phase explored descriptive statistics and the psychometric properties of the JAMAR: the three Likert assumptions, floor/ceiling effects, internal consistency, Cronbach's alpha, interscale correlations and construct validity (convergent and discriminant validity). A total of 100 JIA patients (23% systemic, 45% oligoarticular, 20% RF-negative polyarthritis, 12% other categories) and 122 healthy children were enrolled at the paediatric rheumatology centre of the Necker Children's Hospital in Paris. Notably, none of the enrolled JIA patients was affected by psoriatic arthritis. The JAMAR components discriminated well between healthy subjects and JIA patients. All JAMAR components revealed good psychometric performance. In conclusion, the French version of the JAMAR is a valid tool for the assessment of children with JIA and is suitable for use both in routine clinical practice and in clinical research.

  1. The Italian version of the Juvenile Arthritis Multidimensional Assessment Report (JAMAR).

    PubMed

    Consolaro, Alessandro; Bovis, Francesca; Pistorio, Angela; Cimaz, Rolando; De Benedetti, Fabrizio; Miniaci, Angela; Corona, Fabrizia; Gerloni, Valeria; Martino, Silvana; Pastore, Serena; Barone, Patrizia; Pieropan, Sara; Cortis, Elisabetta; Podda, Rosa Anna; Gallizzi, Romina; Civino, Adele; Torre, Francesco La; Rigante, Donato; Consolini, Rita; Maggio, Maria Cristina; Magni-Manzoni, Silvia; Perfetti, Francesca; Filocamo, Giovanni; Toppino, Claudia; Licciardi, Francesco; Garrone, Marco; Scala, Silvia; Patrone, Elisa; Tonelli, Monica; Tani, Daniela; Ravelli, Angelo; Martini, Alberto; Ruperto, Nicolino

    2018-04-01

    The Juvenile Arthritis Multidimensional Assessment Report (JAMAR) is a new parent/patient-reported outcome measure that enables a thorough assessment of disease status in children with juvenile idiopathic arthritis (JIA). We report the results of the cross-cultural adaptation and validation of the parent and patient versions of the JAMAR in the Italian language. The reading comprehension of the questionnaire was tested in 10 JIA parents and patients. Each participating centre was asked to collect demographic and clinical data and the JAMAR in 100 consecutive JIA patients or all consecutive patients seen in a 6-month period, and to administer the JAMAR to 100 healthy children and their parents. The statistical validation phase explored descriptive statistics and the psychometric properties of the JAMAR: the three Likert assumptions, floor/ceiling effects, internal consistency, Cronbach's alpha, interscale correlations, test-retest reliability, and construct validity (convergent and discriminant validity). A total of 1296 JIA patients (7.2% systemic, 59.5% oligoarticular, 21.4% RF-negative polyarthritis, 11.9% other categories) and 100 healthy children were enrolled in 18 centres. The JAMAR components discriminated well between healthy subjects and JIA patients, except for the Health Related Quality of Life (HRQoL) Psychosocial Health (PsH) subscales. All JAMAR components revealed good psychometric performance. In conclusion, the Italian version of the JAMAR is a valid tool for the assessment of children with JIA and is suitable for use both in routine clinical practice and in clinical research.

  2. The Paraguayan Spanish version of the Juvenile Arthritis Multidimensional Assessment Report (JAMAR).

    PubMed

    Morel Ayala, Zoilo; Burgos-Vargas, Ruben; Consolaro, Alessandro; Bovis, Francesca; Ruperto, Nicolino

    2018-04-01

    The Juvenile Arthritis Multidimensional Assessment Report (JAMAR) is a new parent/patient-reported outcome measure that enables a thorough assessment of disease status in children with juvenile idiopathic arthritis (JIA). We report the results of the cross-cultural adaptation and validation of the parent and patient versions of the JAMAR in the Paraguayan Spanish language. The reading comprehension of the questionnaire was tested in 10 JIA parents and patients. Each participating centre was asked to collect demographic and clinical data and the JAMAR in 100 consecutive JIA patients or all consecutive patients seen in a 6-month period, and to administer the JAMAR to 100 healthy children and their parents. The statistical validation phase explored descriptive statistics and the psychometric properties of the JAMAR: the three Likert assumptions, floor/ceiling effects, internal consistency, Cronbach's alpha, interscale correlations, and construct validity (convergent and discriminant validity). A total of 51 JIA patients (2% systemic, 27.4% oligoarticular, 37.2% RF-negative polyarthritis, 33.4% other categories) and 100 healthy children were enrolled. The JAMAR components discriminated well between healthy subjects and JIA patients. Notably, there was no significant difference between healthy subjects and their affected peers in the school-related problem variable. All JAMAR components revealed good psychometric performance. In conclusion, the Paraguayan Spanish version of the JAMAR is a valid tool for the assessment of children with JIA and is suitable for use both in routine clinical practice and in clinical research.

  3. An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models

    NASA Astrophysics Data System (ADS)

    Dukkipati, Ambedkar; Manathara, Joel George

    In this paper, we study the representation of KL-divergence minimization, in cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner basis method to compute an implicit representation of minimum KL-divergence models.
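
    As a purely numerical counterpart to the algebraic treatment above, a Kullback-Csiszár-style KL-divergence minimization under integer sufficient statistics can be sketched with generalized iterative scaling; the three-state example and the slack-feature construction are illustrative assumptions.

```python
import numpy as np

def iterative_scaling(features, targets, prior, n_iters=200):
    """Minimize KL(p || prior) subject to E_p[features] = targets via
    generalized iterative scaling; features are non-negative integers."""
    F = np.asarray(features, dtype=float)                 # (n_states, n_features)
    C = F.sum(axis=1).max()                               # GIS constant
    F = np.hstack([F, C - F.sum(axis=1, keepdims=True)])  # slack feature: constant row sums
    t = np.append(np.asarray(targets, dtype=float), C - np.sum(targets))
    p = np.asarray(prior, dtype=float)
    p = p / p.sum()
    for _ in range(n_iters):
        expect = p @ F                                    # current expectations
        p = p * np.prod((t / expect) ** (F / C), axis=1)  # multiplicative GIS update
        p = p / p.sum()
    return p

# Three states, one integer sufficient statistic, target expectation 1.2
p = iterative_scaling([[0], [1], [2]], [1.2], [0.5, 0.3, 0.2])
print(p, p @ np.array([0.0, 1.0, 2.0]))  # expectation approaches 1.2
```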

  4. A statistically valid method for using FIA plots to guide spectral class rejection in producing stratification maps

    Treesearch

    Michael L. Hoppus; Andrew J. Lister

    2002-01-01

    A Landsat TM classification method (iterative guided spectral class rejection) produced a forest cover map of southern West Virginia that provided the stratification layer for producing estimates of timberland area from Forest Service FIA ground plots using a stratified sampling technique. These same high quality and expensive FIA ground plots provided ground reference...

  5. A study of the effects of strong magnetic fields on the image resolution of PET scanners

    NASA Astrophysics Data System (ADS)

    Burdette, Don J.

    Very high resolution images can be achieved in small-animal PET systems utilizing solid-state silicon pad detectors. In such systems, using detectors with sub-millimeter intrinsic resolution, the range of the positron is the largest contributor to image blur. The size of the positron range effect depends on the initial positron energy and hence on the radioactive tracer used. For higher-energy positron emitters, such as 68Ga and 94mTc, the variation of the annihilation point dominates the spatial resolution. In this study, two techniques are investigated to improve the image resolution of PET scanners limited by the range of the positron. First, the positron range can be reduced by embedding the PET field of view in a strong magnetic field. We have developed a silicon pad detector-based PET instrument with an image resolution of 0.7 mm FWHM that can operate in strong magnetic fields to study this effect. Second, iterative reconstruction methods can be used to statistically correct for the range of the positron. Both strong magnetic fields and iterative reconstruction algorithms that statistically account for the positron range distribution are investigated in this work.

  6. An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction.

    PubMed

    Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua

    2015-01-01

    Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms related to SIR unavoidably suffer from a heavy computational load and a slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above-mentioned issues of general SIR algorithms, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed 'ALM-ANAD'. The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and the universal quality index metric.
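
    The alternating-direction idea behind such augmented Lagrangian solvers can be sketched on a small penalized weighted least-squares problem with an l1 (edge-preserving) penalty; this generic ADMM loop omits the paper's adaptive nonmonotone line search and is not the ALM-ANAD algorithm itself.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_pwls_l1(A, y, w, beta=1.0, rho=1.0, n_iters=100):
    """Augmented-Lagrangian (ADMM) sketch for
    min_x 0.5 * || sqrt(w) * (A x - y) ||^2 + beta * || D x ||_1,
    where D is the 1-D finite-difference operator (split as z = D x)."""
    n = A.shape[1]
    D = (-np.eye(n) + np.eye(n, k=1))[:-1]       # (n-1, n) forward differences
    W = np.diag(w)
    lhs = A.T @ W @ A + rho * D.T @ D
    x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iters):
        x = np.linalg.solve(lhs, A.T @ W @ y + rho * D.T @ (z - u))
        z = soft(D @ x + u, beta / rho)          # prox step on the penalty variable
        u = u + D @ x - z                        # dual (multiplier) update
    return x

rng = np.random.default_rng(4)
x_true = np.concatenate([np.zeros(25), np.ones(25)])   # piecewise-constant "image"
A = rng.normal(size=(80, 50))
y = A @ x_true + rng.normal(0, 0.1, 80)
print(np.abs(admm_pwls_l1(A, y, np.ones(80), beta=2.0, rho=2.0) - x_true).mean())
```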

  7. ASSESSMENT OF CLINICAL IMAGE QUALITY IN PAEDIATRIC ABDOMINAL CT EXAMINATIONS: DEPENDENCY ON THE LEVEL OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASiR) AND THE TYPE OF CONVOLUTION KERNEL.

    PubMed

    Larsson, Joel; Båth, Magnus; Ledenius, Kerstin; Caisander, Håkan; Thilander-Klang, Anne

    2016-06-01

    The purpose of this study was to investigate the effect of different combinations of convolution kernel and level of Adaptive Statistical iterative Reconstruction (ASiR™) on diagnostic image quality as well as visualisation of anatomical structures in paediatric abdominal computed tomography (CT) examinations. Thirty-five paediatric patients with abdominal pain of non-specified pathology undergoing abdominal CT were included in the study. Transaxial stacks of 5-mm-thick images were retrospectively reconstructed at various ASiR levels, in combination with three convolution kernels. Four paediatric radiologists rated the diagnostic image quality and the delineation of six anatomical structures in a blinded, randomised visual grading study. Image quality at a given ASiR level was found to be dependent on the kernel, and a more edge-enhancing kernel benefitted from a higher ASiR level. An ASiR level of 70% together with the Soft™ or Standard™ kernel was suggested to be the optimal combination for paediatric abdominal CT examinations. © The Author 2016. Published by Oxford University Press. All rights reserved.

  8. Low-dose CT image reconstruction using gain intervention-based dictionary learning

    NASA Astrophysics Data System (ADS)

    Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra

    2018-05-01

    Computed tomography (CT) is extensively utilized in clinical diagnosis. However, the X-ray dose absorbed by the human body may cause somatic damage such as cancer. Owing to this radiation risk, research has focused on the radiation exposure delivered to patients through CT investigations, and low-dose CT has become a significant research area. Many researchers have proposed different low-dose CT reconstruction techniques, but these techniques suffer from various issues such as over-smoothing, artifacts and noise. Therefore, in this paper we propose a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR). If the dictionary (D) is predetermined, GDSIR can be used; if D is adaptively defined, ADSIR is the appropriate choice. A gain intervention-based filter is also used as a post-processing technique for removing artifacts from low-dose CT reconstructed images. Experiments have been performed comparing the proposed and other low-dose CT reconstruction techniques on well-known benchmark CT images. Extensive experiments have shown that the proposed technique outperforms the available approaches.
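
    For context, the dictionary-learning ingredient (in its global, GDSIR-like flavour) can be sketched in the image domain with scikit-learn (version >= 1.1 assumed for the max_iter argument); this is only a patch-based denoising toy, not the authors' statistical iterative reconstruction, which couples the dictionary penalty to the CT forward model.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(7)
clean = np.kron(rng.uniform(0, 1, (8, 8)), np.ones((8, 8)))  # piecewise-constant 64x64 "slice"
noisy = clean + rng.normal(0, 0.1, clean.shape)

patches = extract_patches_2d(noisy, (6, 6))
X = patches.reshape(len(patches), -1)
mean = X.mean(axis=1, keepdims=True)                  # remove per-patch DC component

dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.5, max_iter=200, random_state=0)
code = dico.fit_transform(X - mean)                   # sparse codes over the learned dictionary
denoised_patches = (code @ dico.components_ + mean).reshape(patches.shape)
denoised = reconstruct_from_patches_2d(denoised_patches, clean.shape)
print(float(np.abs(denoised - clean).mean()), float(np.abs(noisy - clean).mean()))
```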

  9. Finnish upper secondary students' collaborative processes in learning statistics in a CSCL environment

    NASA Astrophysics Data System (ADS)

    Kaleva Oikarinen, Juho; Järvelä, Sanna; Kaasila, Raimo

    2014-04-01

    This design-based research project focuses on documenting statistical learning among 16-17-year-old Finnish upper secondary school students (N = 78) in a computer-supported collaborative learning (CSCL) environment. One novel contribution of this study is its report of the shift from teacher-led mathematics teaching to autonomous small-group learning in statistics. The main aim of this study is to examine how student collaboration occurs when learning statistics in a CSCL environment. The data include material from videotaped classroom observations and the researcher's notes. In this paper, the inter-subjective phenomena of students' interactions in a CSCL environment are analysed using a contact summary sheet (CSS). The development of the multi-dimensional coding procedure of the CSS instrument is presented. Selected video episodes were transcribed and coded in terms of conversational acts, which were divided into non-task-related and task-related categories to depict students' levels of collaboration. The results show that collaborative learning (CL) can facilitate cohesion and responsibility and reduce students' feelings of detachment in our classless, periodic school system. The interactive .pdf material and collaboration in small groups enable statistical learning. It is concluded that CSCL is one possible method of promoting statistical teaching. CL using interactive materials seems to foster and facilitate statistical learning processes.

  10. Numeric invariants from multidimensional persistence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skryzalin, Jacek; Carlsson, Gunnar

    2017-05-19

    In this paper, we analyze the space of multidimensional persistence modules from the perspective of algebraic geometry. We first build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, constructed specifically to capture much of the information that can be gained by using multidimensional persistence over one-dimensional persistence. We argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Lastly, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data.

  11. Systematic iteration between model and methodology: A proposed approach to evaluating unintended consequences.

    PubMed

    Morell, Jonathan A

    2018-06-01

    This article argues that evaluators could better deal with unintended consequences if they improved their methods of systematically and methodically combining empirical data collection and model building over the life cycle of an evaluation. This process would be helpful because it can increase the timespan from when the need for a change in methodology is first suspected to the time when the new element of the methodology is operational. The article begins with an explanation of why logic models are so important in evaluation, and why the utility of models is limited if they are not continually revised based on empirical evaluation data. It sets the argument within the larger context of the value and limitations of models in the scientific enterprise. Following will be a discussion of various issues that are relevant to model development and revision. What is the relevance of complex system behavior for understanding predictable and unpredictable unintended consequences, and the methods needed to deal with them? How might understanding of unintended consequences be improved with an appreciation of generic patterns of change that are independent of any particular program or change effort? What are the social and organizational dynamics that make it rational and adaptive to design programs around single-outcome solutions to multi-dimensional problems? How does cognitive bias affect our ability to identify likely program outcomes? Why is it hard to discern change as a result of programs being embedded in multi-component, continually fluctuating, settings? The last part of the paper outlines a process for actualizing systematic iteration between model and methodology, and concludes with a set of research questions that speak to how the model/data process can be made efficient and effective. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. The effects of context on multidimensional spatial cognitive models. Ph.D. Thesis - Arizona Univ.

    NASA Technical Reports Server (NTRS)

    Dupnick, E. G.

    1979-01-01

    Spatial cognitive models obtained by multidimensional scaling represent cognitive structure by defining alternatives as points in a coordinate space based on relevant dimensions such that interstimulus dissimilarities perceived by the individual correspond to distances between the respective alternatives. The dependence of spatial models on the context of the judgments required of the individual was investigated. Context, which is defined as a perceptual interpretation and cognitive understanding of a judgment situation, was analyzed and classified with respect to five characteristics: physical environment, social environment, task definition, individual perspective, and temporal setting. Four experiments designed to produce changes in the characteristics of context and to test the effects of these changes upon individual cognitive spaces are described with focus on experiment design, objectives, statistical analysis, results, and conclusions. The hypothesis is advanced that an individual can be characterized as having a master cognitive space for a set of alternatives. When the context changes, the individual appears to change the dimension weights to give a new spatial configuration. Factor analysis was used in the interpretation and labeling of cognitive space dimensions.
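
    A small sketch of the underlying machinery: multidimensional scaling recovers coordinates from dissimilarity judgments, and a context change under the master-cognitive-space hypothesis would amount to re-weighting the recovered dimensions. The dissimilarity matrix and the weights below are hypothetical.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarity judgments among five alternatives (symmetric, zero diagonal)
d = np.array([
    [0.0, 1.0, 3.0, 4.0, 4.5],
    [1.0, 0.0, 2.5, 3.5, 4.0],
    [3.0, 2.5, 0.0, 1.0, 2.0],
    [4.0, 3.5, 1.0, 0.0, 1.5],
    [4.5, 4.0, 2.0, 1.5, 0.0],
])

# Recover a two-dimensional spatial model of the judgments
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)

# A context change modeled as a re-weighting of the dimensions:
# stretch dimension 1, shrink dimension 2
coords_in_new_context = coords * np.array([1.5, 0.5])
print(coords.round(2))
```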

  13. Validation of a systems-actuarial computer process for multidimensional classification of child psychopathology.

    PubMed

    McDermott, P A; Hale, R L

    1982-07-01

    Tested diagnostic classifications of child psychopathology produced by a computerized technique known as multidimensional actuarial classification (MAC) against the criterion of expert psychological opinion. The MAC program applies series of statistical decision rules to assess the importance of and relationships among several dimensions of classification, i.e., intellectual functioning, academic achievement, adaptive behavior, and social and behavioral adjustment, to perform differential diagnosis of children's mental retardation, specific learning disabilities, behavioral and emotional disturbance, possible communication or perceptual-motor impairment, and academic under- and overachievement in reading and mathematics. Classifications rendered by MAC are compared to those offered by two expert child psychologists for cases of 73 children referred for psychological services. Experts' agreement with MAC was significant for all classification areas, as was MAC's agreement with the experts held as a conjoint reference standard. Whereas the experts' agreement with MAC averaged 86.0% above chance, their agreement with one another averaged 76.5% above chance. Implications of the findings are explored and potential advantages of the systems-actuarial approach are discussed.

  14. Computed tomography imaging with the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm: dependence of image quality on the blending level of reconstruction.

    PubMed

    Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide

    2018-06-01

    Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, noise power spectrum (NPS), contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. With increasing blending level of reconstruction, noise decreased non-linearly by up to 50% and CNR increased non-linearly by up to 100%. ASIR was also shown to modify the shape of the NPS curve. The MTF of ASIR-reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low-contrast acquisitions, ASIR showed lower performance than FBP in terms of spatial resolution for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, it is recommended to use an optimal value of this parameter for each specific clinical application.
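
    The noise power spectrum mentioned above can be estimated from an ensemble of noise-only ROIs by averaging squared DFT magnitudes; a minimal sketch, with a standard normalization convention and synthetic white noise standing in for CT data, follows.

```python
import numpy as np

def nps_2d(noise_rois, pixel_size_mm):
    """Ensemble-averaged 2-D noise power spectrum from detrended noise ROIs:
    NPS(u, v) = (dx * dy / (Nx * Ny)) * < |DFT2{ROI - mean}|^2 >."""
    rois = np.asarray(noise_rois, dtype=float)
    _, nx, ny = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove each ROI's mean
    spectra = np.abs(np.fft.fft2(rois)) ** 2
    return (pixel_size_mm ** 2 / (nx * ny)) * spectra.mean(axis=0)

# White noise gives an approximately flat NPS; iterative reconstruction
# typically shifts the noise power toward lower frequencies instead.
rois = np.random.default_rng(5).normal(0, 10, size=(64, 32, 32))
nps = nps_2d(rois, pixel_size_mm=0.5)
print(nps.mean(), nps.std())
```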

  15. Can use of adaptive statistical iterative reconstruction reduce radiation dose in unenhanced head CT? An analysis of qualitative and quantitative image quality

    PubMed Central

    Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T

    2016-01-01

    Background: Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. Purpose: To evaluate qualitative and quantitative image quality for full-dose and dose-reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Material and Methods: Fourteen patients undergoing follow-up head CT were included. All patients underwent a full-dose (FD) exam and a subsequent 15% dose-reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, and peripheral and central gray matter. Additionally, quantitative image quality was measured in the Catphan and the vendor's water phantom. Results: There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same-scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between -3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF and between -7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3-28%, depending on image content. Conclusion: There was no significant difference in qualitative image quality between full-dose FBP and dose-reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality. PMID:27583169

  16. The use of adaptive statistical iterative reconstruction in pediatric head CT: a feasibility study.

    PubMed

    Vorona, G A; Zuccoli, G; Sutcavage, T; Clayton, B L; Ceschin, R C; Panigrahy, A

    2013-01-01

    Iterative reconstruction techniques facilitate CT dose reduction, though to our knowledge no group has explored using iterative reconstruction with pediatric head CT. Our purpose was to perform a feasibility study assessing the use of ASIR in a small group of pediatric patients undergoing head CT. An Alderson-Rando head phantom was scanned at decreasing 10% mA intervals relative to our standard protocol, and each study was then reconstructed at 10% ASIR intervals. An intracranial region of interest was consistently placed to estimate noise. Our ventriculoperitoneal shunt CT protocol was subsequently modified, and patients were scanned at 20% ASIR with approximately 20% mA reductions. ASIR studies were anonymously compared with older non-ASIR studies from the same patients by two attending pediatric neuroradiologists for diagnostic utility, sharpness, noise, and artifacts. The phantom study demonstrated similar noise at 100% mA/0% ASIR (3.9) and 80% mA/20% ASIR (3.7). Twelve pediatric patients were scanned at reduced dose with 20% ASIR. The average CTDI(vol) and DLP values of the 20% ASIR studies were 22.4 mGy and 338.4 mGy-cm, and for the non-ASIR studies they were 28.8 mGy and 444.5 mGy-cm, representing statistically significant decreases in the CTDI(vol) (22.1%, P = .00007) and DLP (23.9%, P = .0005) values. There were no significant differences between the ASIR and non-ASIR studies with respect to diagnostic acceptability, sharpness, noise, or artifacts. Our findings suggest that 20% ASIR can provide approximately 22% dose reduction in pediatric head CT without affecting image quality.

  17. Can use of adaptive statistical iterative reconstruction reduce radiation dose in unenhanced head CT? An analysis of qualitative and quantitative image quality.

    PubMed

    Østerås, Bjørn Helge; Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T

    2016-08-01

    Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. To evaluate qualitative and quantitative image quality for full-dose and dose-reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR), fourteen patients undergoing follow-up head CT were included. All patients underwent a full-dose (FD) exam and a subsequent 15% dose-reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, and peripheral and central gray matter. Additionally, quantitative image quality was measured in the Catphan and the vendor's water phantom. There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same-scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between -3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF and between -7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3-28%, depending on image content. There was no significant difference in qualitative image quality between full-dose FBP and dose-reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality.

  18. Feasibility of dual-low scheme combined with iterative reconstruction technique in acute cerebral infarction volume CT whole brain perfusion imaging.

    PubMed

    Wang, Tao; Gong, Yi; Shi, Yibing; Hua, Rong; Zhang, Qingshan

    2017-07-01

    The feasibility of applying a low-concentration contrast agent and low tube voltage combined with iterative reconstruction in whole-brain computed tomography perfusion (CTP) imaging of patients with acute cerebral infarction was investigated. Fifty-nine patients who underwent whole-brain CTP examination and were diagnosed with acute cerebral infarction from September 2014 to March 2016 were selected. Patients were randomly divided into groups A and B. There were 28 cases in group A [tube voltage, 100 kV; contrast agent, iohexol (350 mg I/ml); reconstructed by filtered back projection] and 31 cases in group B [tube voltage, 80 kV; contrast agent, iodixanol (270 mg I/ml); reconstructed by the algebraic reconstruction technique]. The artery CT value, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), dose-length product, effective dose (ED) of radiation and brain iodine intake of both groups were measured and statistically analyzed. Two physicians carried out kappa (κ) analysis of the consistency of image quality evaluation. The difference in subjective image quality evaluation between the groups was tested by χ². The differences in CT value, SNR, CNR, and CTP and CT angiography subjective image quality evaluation between the two groups were not statistically significant (P>0.05), and the diagnosis rate of acute infarcts did not differ significantly between the two groups, while the ED and iodine intake in group B (the dual low-dose group) were lower than in group A. In conclusion, combining low tube voltage and an iterative reconstruction technique with a low-concentration contrast agent (270 mg I/ml) in whole-brain CTP examination reduced ED and iodine intake without compromising image quality, thereby reducing the risk of contrast-induced nephropathy.

  19. A model for incomplete longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C

    2008-12-30

    In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for the analysis of multivariate ordinal outcomes that are repeatedly measured. Under the missing-at-random (MAR) assumption, this model accommodates missing data at any level (missing items at any time point and/or missing time points). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration over the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior were repeatedly measured over time. Because of a planned missingness design, subjects answered only two-thirds of all items at any given time point. Copyright 2008 John Wiley & Sons, Ltd.
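
    The Gauss-Hermite integration at the heart of the estimation can be sketched for the simplest special case: a one-factor probit model for binary items with a single random subject effect. The multidimensional quadrature, the ordinal thresholds and the Fisher-scoring updates of the full model are omitted, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def marginal_loglik(responses, discrimination, thresholds, n_quad=21):
    """Marginal log-likelihood of binary items under a one-factor probit model,
    integrating out the N(0, 1) subject effect by Gauss-Hermite quadrature.

    responses: (n_subjects, n_items) array in {0, 1}; NaN marks a missing item,
    which simply drops out of the product (the MAR case described above).
    """
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    weights = weights / np.sqrt(2.0 * np.pi)          # normalize to the N(0,1) density
    ll = 0.0
    for y in responses:
        obs = ~np.isnan(y)
        p = norm.cdf(discrimination[obs][:, None] * nodes[None, :]
                     - thresholds[obs][:, None])      # P(y_j = 1 | theta) per node
        like = np.where(y[obs][:, None] == 1, p, 1.0 - p).prod(axis=0)
        ll += np.log(like @ weights)
    return ll

a = np.array([1.0, 1.5, 0.8])                         # item discriminations
b = np.array([0.0, -0.5, 0.7])                        # item thresholds
y = np.array([[1.0, 0.0, np.nan],                     # planned missingness: one item skipped
              [0.0, 1.0, 1.0]])
print(marginal_loglik(y, a, b))
```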

  20. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics

    NASA Astrophysics Data System (ADS)

    Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.

    2017-07-01

    The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on a neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on the calculated values of the continuity modulus of the inverse operator and its modifications, which determine the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail for a specified degree of ambiguity, with a total number of sought medium parameters on the order of n × 10³. A priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The method is illustrated by the example of the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio-magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
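
    The central idea, training a network on simulated (data, parameters) pairs so that it approximates the inverse operator, can be sketched with a toy forward model; the AMTS physics, the regularized parameterization grid and the ambiguity estimates of the actual method are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n_params, n_data = 8, 20
W = rng.normal(size=(n_params, n_data)) / np.sqrt(n_params)

def forward(m):
    """Toy nonlinear forward model standing in for the geoelectrical response."""
    return np.tanh(m @ W) + 0.5 * (m @ W) ** 2

m_train = rng.uniform(-1, 1, size=(5000, n_params))     # sampled block models
d_train = forward(m_train)                              # simulated observations

# Learn the inverse operator: data -> medium parameters
inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
inverse_net.fit(d_train, m_train)

m_true = rng.uniform(-1, 1, size=(1, n_params))
m_est = inverse_net.predict(forward(m_true))
print(np.abs(m_est - m_true).max())
```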

  1. Cultural Determinants of Help Seeking: A model for research and practice

    PubMed Central

    2007-01-01

    Increasing access to, and use of, health promotion strategies and health care services for diverse cultural groups is a national priority. While theories about the structural determinants of help seeking have received empirical testing, studies of cultural determinants have been primarily descriptive, making theoretical and empirical analysis difficult. This article synthesizes concepts and research by the author and others from diverse disciplines to develop the mid-range theoretical model called the Cultural Determinants of Help Seeking (CDHS). The multidimensional construct of culture, which comprises the iterative dimensions of ideology, political economy, practice and the body, is outlined. The notion of cultural models of wellness and illness as cognitive guides for perception, emotion and behavior, as well as the synthesized concept of idioms of wellness and distress, are introduced. The CDHS theory then proposes that sign and symptom perception, the interpretation of their meaning and the dynamics of the social distribution of resources are all shaped by cultural models. The CDHS model is then applied to practice using research with Asians. Lastly, implications for research and practice are discussed. PMID:19999745

  2. A rigorous full-dimensional quantum dynamics study of tunneling splitting of rovibrational states of vinyl radical C2H3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Hua-Gen; Song, Hongwei; Yang, Minghui

    Here, we report a rigorous quantum mechanical study of the rovibrational energy levels of the vinyl radical C2H3. The calculations are carried out using a real two-component multi-layer Lanczos algorithm in a set of orthogonal polyspherical coordinates, based on a recently developed accurate ab initio potential energy surface of C2H3. All 158 well-converged vibrational bands up to 3200 cm^-1 are determined, together with a comparison to previous calculations and experimental results. Our results show a remarkable multi-dimensional tunneling effect on the vibrational spectra of the radical. The vibrational tunneling splitting is substantially different from that of previous reduced-dimensional calculations. The rotational constants of the fundamental vibrational bands of C2H3 are also given. It was found that the rovibrational states are strongly coupled, especially among the bending vibrational modes. Additionally, the perturbative iteration approach of Gruebele has been extended to assign the rovibrational energy levels of C2H3 without the requirement of explicit wavefunctions.
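
    For orientation, a minimal real symmetric Lanczos iteration (without reorthogonalization) is sketched below; the paper's real two-component multi-layer Lanczos algorithm is a substantially more elaborate relative, and the random matrix here merely stands in for a Hamiltonian:

    ```python
    import numpy as np

    def lanczos(H, v0, m=60):
        """Basic symmetric Lanczos: build a tridiagonal T whose eigenvalues
        (Ritz values) approximate the extreme eigenvalues of H."""
        n = len(v0)
        V = np.zeros((m, n)); alpha = np.zeros(m); beta = np.zeros(m - 1)
        V[0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = H @ V[j] - (beta[j - 1] * V[j - 1] if j > 0 else 0)
            alpha[j] = V[j] @ w
            w -= alpha[j] * V[j]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                V[j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return np.linalg.eigvalsh(T)

    rng = np.random.default_rng(4)
    H = rng.standard_normal((400, 400)); H = (H + H.T) / 2   # toy "Hamiltonian"
    print(lanczos(H, rng.standard_normal(400))[:3])           # lowest Ritz values
    ```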

  3. A rigorous full-dimensional quantum dynamics study of tunneling splitting of rovibrational states of vinyl radical C2H3.

    PubMed

    Yu, Hua-Gen; Song, Hongwei; Yang, Minghui

    2017-06-14

    We report a rigorous quantum mechanical study of the rovibrational energy levels of the vinyl radical C2H3. The calculations are carried out using a real two-component multi-layer Lanczos algorithm in a set of orthogonal polyspherical coordinates, based on a recently developed accurate ab initio potential energy surface of C2H3. All 158 well-converged vibrational bands up to 3200 cm^-1 are determined, together with a comparison to previous calculations and experimental results. Results show a remarkable multi-dimensional tunneling effect on the vibrational spectra of the radical. The vibrational tunneling splitting is substantially different from that of previous reduced-dimensional calculations. The rotational constants of the fundamental vibrational bands of C2H3 are also given. It was found that the rovibrational states are strongly coupled, especially among the bending vibrational modes. In addition, the perturbative iteration approach of Gruebele has been extended to assign the rovibrational energy levels of C2H3 without the requirement of explicit wavefunctions.

  4. A rigorous full-dimensional quantum dynamics study of tunneling splitting of rovibrational states of vinyl radical C2H3

    DOE PAGES

    Yu, Hua-Gen; Song, Hongwei; Yang, Minghui

    2017-06-12

    Here, we report a rigorous quantum mechanical study of the rovibrational energy levels of the vinyl radical C2H3. The calculations are carried out using a real two-component multi-layer Lanczos algorithm in a set of orthogonal polyspherical coordinates, based on a recently developed accurate ab initio potential energy surface of C2H3. All 158 well-converged vibrational bands up to 3200 cm^-1 are determined, together with a comparison to previous calculations and experimental results. Our results show a remarkable multi-dimensional tunneling effect on the vibrational spectra of the radical. The vibrational tunneling splitting is substantially different from that of previous reduced-dimensional calculations. The rotational constants of the fundamental vibrational bands of C2H3 are also given. It was found that the rovibrational states are strongly coupled, especially among the bending vibrational modes. Additionally, the perturbative iteration approach of Gruebele has been extended to assign the rovibrational energy levels of C2H3 without the requirement of explicit wavefunctions.

  5. A Comparative Study of Online Item Calibration Methods in Multidimensional Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Chen, Ping

    2017-01-01

    Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…

  6. Best Design for Multidimensional Computerized Adaptive Testing with the Bifactor Model

    ERIC Educational Resources Information Center

    Seo, Dong Gi; Weiss, David J.

    2015-01-01

    Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…

  7. Multidimensional Measurement of Poverty among Women in Sub-Saharan Africa

    ERIC Educational Resources Information Center

    Batana, Yele Maweki

    2013-01-01

    Since the seminal work of Sen, poverty has been recognized as a multidimensional phenomenon. The recent availability of relevant databases renewed the interest in this approach. This paper estimates multidimensional poverty among women in fourteen Sub-Saharan African countries using the Alkire and Foster multidimensional poverty measures, whose…

  8. The Efficacy of Multidimensional Constraint Keys in Database Query Performance

    ERIC Educational Resources Information Center

    Cardwell, Leslie K.

    2012-01-01

    This work is intended to introduce a database design method to resolve the two-dimensional complexities inherent in the relational data model and its resulting performance challenges through abstract multidimensional constructs. A multidimensional constraint is derived and utilized to implement an indexed Multidimensional Key (MK) to abstract a…

  9. GSHSite: Exploiting an Iteratively Statistical Method to Identify S-Glutathionylation Sites with Substrate Specificity

    PubMed Central

    Chen, Yi-Ju; Lu, Cheng-Tsung; Huang, Kai-Yao; Wu, Hsin-Yi; Chen, Yu-Ju; Lee, Tzong-Yi

    2015-01-01

    S-glutathionylation, the covalent attachment of glutathione (GSH) to the sulfur atom of cysteine, is a selective and reversible protein post-translational modification (PTM) that regulates protein activity, localization, and stability. Despite its implication in the regulation of protein functions and cell signaling, the substrate specificity of cysteine S-glutathionylation remains unknown. Based on a total of 1783 experimentally identified S-glutathionylation sites from mouse macrophages, this work presents an informatics investigation of S-glutathionylation sites, including structural factors such as flanking amino acid composition and accessible surface area (ASA). TwoSampleLogo analysis shows that positively charged amino acids flanking the S-glutathionylated cysteine may influence the formation of S-glutathionylation in a closed three-dimensional environment. A statistical method is further applied to iteratively detect conserved substrate motifs with statistical significance. A support vector machine (SVM) is then applied to generate predictive models incorporating the substrate motifs. According to five-fold cross-validation, the SVMs trained with substrate motifs achieve enhanced sensitivity, specificity, and accuracy, and provide promising performance on an independent test set. The effectiveness of the proposed method is demonstrated by the correct identification of previously reported S-glutathionylation sites of mouse thioredoxin (TXN) and human protein tyrosine phosphatase 1B (PTP1B). Finally, the constructed models are adopted to implement an effective web-based tool, named GSHSite (http://csb.cse.yzu.edu.tw/GSHSite/), for identifying uncharacterized GSH substrate sites in protein sequences. PMID:25849935
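
    A small illustration of the classification stage under stated assumptions: toy flanking-residue windows (not the curated mouse macrophage data), binary one-hot encoding, and an RBF-kernel SVM scored by k-fold cross-validation (4-fold here only because the toy set is tiny, versus the paper's five-fold):

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def one_hot(window):
        """Binary encoding of a fixed-length flanking-residue window."""
        x = np.zeros((len(window), len(AA)))
        for i, aa in enumerate(window):
            x[i, AA.index(aa)] = 1.0
        return x.ravel()

    # Toy positive/negative windows centered on cysteine (invented examples)
    pos = ["KRCAK", "RKCGK", "KKCAR", "RRCGA"]
    neg = ["DECLE", "EDCAD", "LLCEE", "DDCLL"]
    X = np.array([one_hot(w) for w in pos + neg])
    y = np.array([1] * len(pos) + [0] * len(neg))

    svm = SVC(kernel="rbf", C=1.0)
    print(cross_val_score(svm, X, y, cv=4))   # per-fold accuracy
    ```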

  10. Sources of Safety Data and Statistical Strategies for Design and Analysis: Transforming Data Into Evidence.

    PubMed

    Ma, Haijun; Russek-Cohen, Estelle; Izem, Rima; Marchenko, Olga V; Jiang, Qi

    2018-03-01

    Safety evaluation is a key aspect of medical product development. It is a continual and iterative process requiring thorough thinking and dedicated time and resources. In this article, we discuss how safety data are transformed into evidence to establish and refine the safety profile of a medical product, and how the focus of safety evaluation, data sources, and statistical methods change throughout a medical product's life cycle. Some challenges and statistical strategies for medical product safety evaluation are discussed. Examples of safety issues identified in different periods, that is, premarketing and postmarketing, are discussed to illustrate how different sources are used in safety signal identification and the iterative process of safety assessment. The examples highlighted range from a commonly used pediatric vaccine given to healthy children to medical products primarily used to treat a medical condition in adults. These case studies illustrate that different products may require different approaches, and once a signal is discovered, it could impact future safety assessments. Many challenges still remain in this area despite advances in methodologies, infrastructure, public awareness, international harmonization, and regulatory enforcement. Innovations in safety assessment methodologies are needed to make the medical product development process more efficient and effective, and the assessment of medical product marketing approval more streamlined and structured. Health care payers, providers, and patients may have different perspectives when weighing clinical, financial, and personal needs as therapies are evaluated.

  11. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction of multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, images with weak artefacts were reconstructed using projection data generated from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. Ordered subset-expectation maximization and the small region of interest reduced the processing duration without apparent detriment. The general-purpose graphics processing unit delivered high performance. In summary, a statistical reconstruction method was applied for streak artefact reduction, and the alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest, and a general-purpose graphics processing unit, achieved fast artefact correction.
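
    A compact sketch of the basic maximum likelihood-expectation maximization (MLEM) update named above, on a toy system matrix rather than a real multidetector CT geometry; the ordered-subset variant would apply the same update cyclically over subsets of the projection rows:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.uniform(0.0, 1.0, size=(40, 16))       # toy projection geometry
    x_true = rng.uniform(0.5, 2.0, size=16)
    y = rng.poisson(A @ x_true * 50) / 50.0        # noisy projection data

    x = np.ones(16)                                 # positive initial image
    sens = A.T @ np.ones(40)                        # sensitivity (column sums)
    for _ in range(100):
        ratio = y / np.maximum(A @ x, 1e-12)        # measured / predicted
        x *= (A.T @ ratio) / sens                   # multiplicative MLEM update
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```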

  12. Recent developments in the theory of protein folding: searching for the global energy minimum.

    PubMed

    Scheraga, H A

    1996-04-16

    Statistical mechanical theories and computer simulation are being used to gain an understanding of the fundamental features of protein folding. A major obstacle in the computation of protein structures is the multiple-minima problem arising from the existence of many local minima in the multidimensional energy landscape of the protein. This problem has been surmounted for small open-chain and cyclic peptides, and for regular-repeating sequences of models of fibrous proteins. Progress is being made in resolving this problem for globular proteins.

  13. PROC IRT: A SAS Procedure for Item Response Theory

    PubMed Central

    Matlock Cole, Ki; Paek, Insu

    2017-01-01

    This article reviews the item response theory procedure (PROC IRT) in SAS/STAT 14.1 for conducting item response theory (IRT) analyses of dichotomous and polytomous datasets that are unidimensional or multidimensional. The review provides an overview of available features, including models, estimation procedures, interfacing, input, and output files. A small-scale simulation study evaluates the IRT model parameter recovery of the PROC IRT procedure. The IRT procedure in Statistical Analysis Software (SAS) may be useful for researchers who frequently utilize SAS for analyses, research, and teaching.

  14. Authenticating concealed private data while maintaining concealment

    DOEpatents

    Thomas, Edward V [Albuquerque, NM; Draelos, Timothy J [Albuquerque, NM

    2007-06-26

    A method of and system for authenticating concealed and statistically varying multi-dimensional data comprising: acquiring an initial measurement of an item, wherein the initial measurement is subject to measurement error; applying a transformation to the initial measurement to generate reference template data; acquiring a subsequent measurement of the item, wherein the subsequent measurement is subject to measurement error; applying the transformation to the subsequent measurement; and calculating a Euclidean distance metric between the transformed measurements; wherein the calculated Euclidean distance metric is identical to the Euclidean distance metric between the measurements prior to transformation.
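
    The central property claimed, distance preservation under a concealing transformation, can be illustrated with a random orthogonal matrix; this is one transformation with the stated property, not necessarily the one used in the patent:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    Q, _ = np.linalg.qr(rng.standard_normal((10, 10)))  # random orthogonal Q

    ref = rng.standard_normal(10)                  # reference measurement
    new = ref + 0.05 * rng.standard_normal(10)     # later, error-bearing measurement

    d_plain = np.linalg.norm(ref - new)            # distance before concealment
    d_hidden = np.linalg.norm(Q @ ref - Q @ new)   # distance after concealment
    print(d_plain, d_hidden)                       # equal up to floating-point error
    ```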

  15. The COSMIC-DANCE project: Unravelling the origin of the mass function

    NASA Astrophysics Data System (ADS)

    Bouy, H.; Bertin, E.; Sarro, L. M.; Barrado, D.; Berihuete, A.; Olivares, J.; Moraux, E.; Bouvier, J.; Tamura, M.; Cuillandre, J.-C.; Beletsky, Y.; Wright, N.; Huelamo, N.; Allen, L.; Solano, E.; Brandner, B.

    2017-03-01

    The COSMIC-DANCE project is an observational program aiming at understanding the origin and evolution of ultracool objects by measuring the mass function and internal dynamics of young nearby associations down to the fragmentation limit. The least massive members of young nearby associations are identified using modern statistical methods in a multi-dimensional space composed of optical and infrared luminosities, colors, and proper motions. The photometry and astrometry are obtained by combining ground-based and, in some cases, space-based archival observations with new observations, covering between one and two decades.

  16. A new multidimensional measure of African adolescents' perceptions of teachers' behaviors.

    PubMed

    Mboya, M M

    1994-04-01

    The Perceived Teacher Behavior Inventory was designed to measure three dimensions of students' perceptions of the behaviors of their teachers. This research was conducted to assess the statistical validity and reliability of the instrument administered to 770 students attending two coeducational high schools in Cape Town, South Africa. Factor analysis clearly identified three subscales indicating that the instrument distinguished the students' perceptions of their teachers' behaviors in three areas. Estimates of internal consistency of the subscales were assessed using the squared multiple correlation as the index of reliability.

  17. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least-squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness of parallel preconditioners in the specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation; these are often difficult linear systems to solve by iterative methods. 4. We also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations when the matrix is highly indefinite; the strategy uses shifting in some optimal way, and the method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA, which was made publicly available; it was the first such library to offer complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We released a new version (version 3) of our parallel solver, pARMS; as part of this, we tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning, we considered the problem of evaluating f(A)v, which arises in statistical sampling. 11. As an application of the methods we developed, we tackled the problem of computing the diagonal of the inverse of a matrix, which arises in statistical applications as well as in many applications in physics; we explored probing methods as well as domain-decomposition-type methods. 12. A collaboration with researchers from Toulouse, France, considered the important problem of computing the Schur complement in a domain-decomposition approach. 13. We explored new ways of preconditioning linear systems based on low-rank approximations.
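
    A minimal example of the recurring theme in items 1-5, an ILU-type factorization used to precondition a Krylov solver; this uses SciPy's generic spilu/gmres routines rather than the project's pARMS software:

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom, eye
    from scipy.sparse.linalg import gmres, spilu, LinearOperator

    n = 500
    # Diagonally dominant sparse test matrix (easy case, for illustration)
    A = (sprandom(n, n, density=0.01, random_state=0) + 10 * eye(n)).tocsc()
    b = np.ones(n)

    ilu = spilu(A, drop_tol=1e-4)                    # incomplete LU factors
    M = LinearOperator((n, n), matvec=ilu.solve)     # preconditioner M ~ A^-1
    x, info = gmres(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b))           # info == 0 on convergence
    ```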

  18. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space at each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks at each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE), and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab code for implementing VISSA is freely available at: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
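
    A hedged sketch of the WBMS idea as described in the abstract: inclusion weights per variable, random binary sub-models, and weight updates from the best-scoring sub-models so the variable space shrinks; the sampling sizes, retained fraction, and regression model below are invented for illustration (the authors' Matlab code is at the URL above):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 30))
    y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(60)  # 5 informative vars

    w = np.full(30, 0.5)                      # inclusion probability per variable
    for step in range(10):
        B = rng.random((200, 30)) < w         # weighted binary sampling matrix
        scores = [cross_val_score(LinearRegression(), X[:, m], y, cv=5).mean()
                  if m.sum() >= 2 else -np.inf for m in B]
        best = B[np.argsort(scores)[-20:]]    # keep the best 10% of sub-models
        w = best.mean(axis=0)                 # update weights; space shrinks
    print(np.nonzero(w > 0.8)[0])             # retained variables (expect 0..4)
    ```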

  19. Multimachine data–based prediction of high-frequency sensor signal noise for resistive wall mode control in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yueqiang; Sabbagh, S. A.; Chapman, I. T.

    The high-frequency noise measured by magnetic sensors, at levels above the typical frequency of resistive wall modes, is analyzed across a range of present tokamak devices including DIII-D, JET, MAST, ASDEX Upgrade, JT-60U, and NSTX. A high-pass filter enables identification of the noise component with Gaussian-like statistics that shares certain common characteristics in all devices considered. A conservative prediction is made for ITER plasma operation of the high-frequency noise component of the sensor signals, to be used for resistive wall mode feedback stabilization, based on the multimachine database. The predicted root-mean-square n = 1 (n is the toroidal mode number) noise level is 10^4 to 10^5 G/s for the voltage signal, and 0.1 to 1 G for the perturbed magnetic field signal. The lower cutoff frequency of the Gaussian pickup noise scales linearly with the sampling frequency, with a scaling coefficient of about 0.1. As a result, these basic noise characteristics should be useful for the modeling-based design of the feedback control system for the resistive wall mode in ITER.

  20. Multimachine data–based prediction of high-frequency sensor signal noise for resistive wall mode control in ITER

    DOE PAGES

    Liu, Yueqiang; Sabbagh, S. A.; Chapman, I. T.; ...

    2017-03-27

    The high-frequency noise measured by magnetic sensors, at levels above the typical frequency of resistive wall modes, is analyzed across a range of present tokamak devices including DIII-D, JET, MAST, ASDEX Upgrade, JT-60U, and NSTX. A high-pass filter enables identification of the noise component with Gaussian-like statistics that shares certain common characteristics in all devices considered. A conservative prediction is made for ITER plasma operation of the high-frequency noise component of the sensor signals, to be used for resistive wall mode feedback stabilization, based on the multimachine database. The predicted root-mean-square n = 1 (n is the toroidal mode number) noise level is 10^4 to 10^5 G/s for the voltage signal, and 0.1 to 1 G for the perturbed magnetic field signal. The lower cutoff frequency of the Gaussian pickup noise scales linearly with the sampling frequency, with a scaling coefficient of about 0.1. As a result, these basic noise characteristics should be useful for the modeling-based design of the feedback control system for the resistive wall mode in ITER.

  1. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. Emphasis is placed on nonlinear iterative algorithms that enforce the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lantéri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
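
    A minimal sketch of the Richardson-Lucy special case mentioned above: the iteration is multiplicative, so the estimate stays non-negative, and with a normalized PSF the total flux is approximately conserved (toy PSF and image here, not interferometric data):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=50):
        """Basic Richardson-Lucy deconvolution for non-negative images."""
        est = np.full_like(image, image.mean())       # flat, positive start
        psf_flip = psf[::-1, ::-1]
        for _ in range(n_iter):
            conv = fftconvolve(est, psf, mode="same")
            ratio = image / np.maximum(conv, 1e-12)   # data / model prediction
            est *= fftconvolve(ratio, psf_flip, mode="same")
        return est

    g = np.exp(-np.linspace(-2, 2, 9) ** 2)
    psf = np.outer(g, g); psf /= psf.sum()            # normalized Gaussian PSF
    truth = np.zeros((64, 64)); truth[20, 20] = truth[40, 45] = 1.0
    blurred = fftconvolve(truth, psf, mode="same")
    print(np.unravel_index(richardson_lucy(blurred, psf).argmax(), (64, 64)))
    ```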

  2. The ALI-ARMS Code for Modeling Atmospheric non-LTE Molecular Band Emissions: Current Status and Applications

    NASA Technical Reports Server (NTRS)

    Kutepov, A. A.; Feofilov, A. G.; Manuilova, R. O.; Yankovsky, V. A.; Rezac, L.; Pesnell, W. D.; Goldberg, R. A.

    2008-01-01

    The Accelerated Lambda Iteration (ALI) technique was developed in stellar astrophysics at the beginning of the 1990s for solving the non-LTE radiative transfer problem in atomic lines and multiplets in stellar atmospheres. It was later successfully applied to modeling the non-LTE emissions and radiative cooling/heating in the vibrational-rotational bands of molecules in planetary atmospheres. Similar to standard lambda iteration, ALI operates with matrices of minimal dimension. However, it provides a higher convergence rate and better stability by removing from the iterative process the photons trapped in the optically thick line cores. In the current version of the ALI-ARMS (ALI for Atmospheric Radiation and Molecular Spectra) code, additional acceleration is provided by utilizing the opacity distribution function (ODF) approach and "decoupling". The former allows replacing the band branches by single lines of special shape, whereas the latter treats the non-linearity caused by strong near-resonant vibration-vibrational level coupling without additional linearization of the statistical equilibrium equations. The latest applications of the code to the non-LTE diagnostics of molecular band emissions of the Earth's and Martian atmospheres, as well as to non-LTE IR cooling/heating calculations, are discussed.

  3. Iterative reconstruction for x-ray computed tomography using prior-image induced nonlocal regularization.

    PubMed

    Zhang, Hua; Huang, Jing; Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-09-01

    Repeated X-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly compared with that of a conventional CT scan, which has raised major concerns among patients. In this study, to realize radiation dose reduction by reducing the X-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive overrelaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation.
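
    A hedged sketch of the PWLS objective with data-dependent weights; for brevity, a quadratic finite-difference penalty stands in for the PINL regularizer, and the toy problem is solved in closed form rather than by the paper's successive overrelaxation:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.uniform(0, 1, size=(80, 32))           # toy projection matrix
    x_true = np.sin(np.linspace(0, np.pi, 32))
    var = 0.01 * (1 + A @ x_true)                  # data-dependent variance model
    y = A @ x_true + np.sqrt(var) * rng.standard_normal(80)
    W = np.diag(1.0 / var)                         # statistical weights

    D = np.eye(32, k=1)[:-1] - np.eye(32)[:-1]     # finite-difference operator
    beta = 0.5                                     # regularization strength
    H = A.T @ W @ A + beta * D.T @ D               # PWLS normal equations
    x = np.linalg.solve(H, A.T @ W @ y)            # closed form for this toy case
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```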

  4. Iterative Reconstruction for X-Ray Computed Tomography using Prior-Image Induced Nonlocal Regularization

    PubMed Central

    Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-01-01

    Repeated x-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly compared with that of a conventional CT scan, which has raised major concerns among patients. In this study, to realize radiation dose reduction by reducing the x-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as “PWLS-PINL”. Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive over-relaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation. PMID:24235272

  5. Operational Planning of Channel Airlift Missions Using Forecasted Demand

    DTIC Science & Technology

    2013-03-01

    tailored to the specific problem (Metaheuristics, 2005). As seen in the section Cargo Loading Algorithm, heuristic methods are often iterative...that are equivalent to the forecasted cargo amount. The simulated pallets are then used in a heuristic cargo loading algorithm. The loading...algorithm places cargo onto available aircraft (based on real schedules) given the date and the destination and outputs statistics based on the aircraft ton

  6. Radiation dose reduction with chest computed tomography using adaptive statistical iterative reconstruction technique: initial experience.

    PubMed

    Prakash, Priyanka; Kalra, Mannudeep K; Digumarthy, Subba R; Hsieh, Jiang; Pien, Homer; Singh, Sarabjeet; Gilman, Matthew D; Shepard, Jo-Anne O

    2010-01-01

    To assess radiation dose reduction and image quality for weight-based chest computed tomography (CT) examinations reconstructed using the adaptive statistical iterative reconstruction (ASIR) technique. With local ethics committee approval, weight-adjusted chest CT examinations were performed using ASIR in 98 patients and filtered backprojection (FBP) in 54 weight-matched patients on a 64-slice multidetector CT. Patients were categorized into 3 groups: 60 kg or less (n = 32), 61 to 90 kg (n = 77), and 91 kg or more (n = 43) for weight-based adjustment of noise indices for automatic exposure control (Auto mA; GE Healthcare, Waukesha, Wis). Remaining scan parameters were held constant at 0.984:1 pitch, 120 kilovolts (peak), 40-mm table feed per rotation, and 2.5-mm section thickness. Patients' weight, scanning parameters, and CT dose index volume were recorded. Effective doses (EDs) were estimated. Image noise was measured in the descending thoracic aorta at the level of the carina. Data were analyzed using analysis of variance. Compared with FBP, ASIR was associated with an overall mean (SD) decrease of 27.6% in ED (ASIR, 8.8 [2.3] mSv; FBP, 12.2 [2.1] mSv; P < 0.0001). With the use of ASIR, the ED values were 6.5 (1.8) mSv (28.8% decrease), 7.3 (1.6) mSv (27.3% decrease), and 12.8 (2.3) mSv (26.8% decrease) for the weight groups of 60 kg or less, 61 to 90 kg, and 91 kg or more, respectively, compared with 9.2 (2.3) mSv, 10.0 (2.0) mSv, and 17.4 (2.1) mSv with FBP (P < 0.0001). Despite the dose reduction, there was less image noise with ASIR (12.6 [2.9]) than with FBP (16.6 [6.2]; P < 0.0001). Adaptive statistical iterative reconstruction helps reduce chest CT radiation dose and improve image quality compared with the conventionally used FBP image reconstruction.

  7. Microwave beam broadening due to turbulent plasma density fluctuations within the limit of the Born approximation and beyond

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Guidi, L.; Holzhauer, E.; Maj, O.; Poli, E.; Snicker, A.; Weber, H.

    2018-07-01

    Plasma turbulence, and edge density fluctuations in particular, can under certain conditions broaden the cross-section of injected microwave beams significantly. This can be a severe problem for applications relying on well-localized deposition of the microwave power, like the control of MHD instabilities. Here we investigate this broadening mechanism as a function of fluctuation level, background density and propagation length in a fusion-relevant scenario using two numerical codes, the full-wave code IPF-FDMC and the novel wave kinetic equation solver WKBeam. The latter treats the effects of fluctuations using a statistical approach, based on an iterative solution of the scattering problem (Born approximation). The full-wave simulations are used to benchmark this approach. The Born approximation is shown to be valid over a large parameter range, including ITER-relevant scenarios.

  8. Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.

    2014-01-01

    Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an operational data cube service to be delivered in mid-2014. An associated user-installable VO data service framework will provide the capabilities required to publish VO-compatible multi-dimensional images or data cubes.

  9. Students' proficiency scores within multitrait item response theory

    NASA Astrophysics Data System (ADS)

    Scott, Terry F.; Schumayer, Daniel

    2015-12-01

    In this paper we present a series of item response models of data collected using the Force Concept Inventory. The Force Concept Inventory (FCI) was designed to poll the Newtonian conception of force viewed as a multidimensional concept, that is, as a complex of distinguishable conceptual dimensions. Several previous studies have developed single-trait item response models of FCI data; however, we feel that multidimensional models are also appropriate given the explicitly multidimensional design of the inventory. The models employed in the research reported here vary in both the number of fitting parameters and the number of underlying latent traits assumed. We calculate several model information statistics to ensure adequate model fit and to determine which of the models provides the optimal balance of information and parsimony. Our analysis indicates that all item response models tested, from the single-trait Rasch model through to a model with ten latent traits, satisfy the standard requirements of fit. However, analysis of model information criteria indicates that the five-trait model is optimal. We note that an earlier factor analysis of the same FCI data also led to a five-factor model. Furthermore the factors in our previous study and the traits identified in the current work match each other well. The optimal five-trait model assigns proficiency scores to all respondents for each of the five traits. We construct a correlation matrix between the proficiencies in each of these traits. This correlation matrix shows strong correlations between some proficiencies, and strong anticorrelations between others. We present an interpretation of this correlation matrix.

  10. Construct validity of the Swedish version of the revised piper fatigue scale in an oncology sample--a Rasch analysis.

    PubMed

    Lundgren-Nilsson, Asa; Dencker, Anna; Jakobsson, Sofie; Taft, Charles; Tennant, Alan

    2014-06-01

    Fatigue is a common and distressing symptom in cancer patients due to both the disease and its treatments. The concept of fatigue is multidimensional and includes both physical and mental components. The 22-item Revised Piper Fatigue Scale (RPFS) is a multidimensional instrument developed to assess cancer-related fatigue. This study reports on the construct validity of the Swedish version of the RPFS from the perspective of Rasch measurement. The Swedish version of the RPFS was answered by 196 cancer patients fatigued after 4 to 5 weeks of curative radiation therapy. Data from the scale were fitted to the Rasch measurement model. This involved testing a series of assumptions, including the stochastic ordering of items, local response dependency, and unidimensionality. A series of fit statistics were computed, differential item functioning (DIF) was tested, and local response dependency was accommodated through testlets. The Behavioral, Affective and Sensory domains all satisfied the Rasch model expectations. No DIF was observed, and all domains were found to be unidimensional. The Mood/Cognitive scale failed to fit the model, and substantial multidimensionality was found. Splitting the scale between Mood and Cognitive items resolved fit to the Rasch model, and new domains were unidimensional without DIF. The current Rasch analyses add to the evidence of measurement properties of the scale and show that the RPFS has good psychometric properties and works well to measure fatigue. The original four-factor structure, however, was not supported. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Multidimensional Single-Cell Analysis of BCR Signaling Reveals Proximal Activation Defect As a Hallmark of Chronic Lymphocytic Leukemia B Cells

    PubMed Central

    Palomba, M. Lia; Piersanti, Kelly; Ziegler, Carly G. K.; Decker, Hugo; Cotari, Jesse W.; Bantilan, Kurt; Rijo, Ivelise; Gardner, Jeff R.; Heaney, Mark; Bemis, Debra; Balderas, Robert; Malek, Sami N.; Seymour, Erlene; Zelenetz, Andrew D.

    2014-01-01

    Purpose Chronic Lymphocytic Leukemia (CLL) is defined by a perturbed B-cell receptor-mediated signaling machinery. We aimed to model differential signaling behavior between B cells from CLL and healthy individuals to pinpoint modes of dysregulation. Experimental Design We developed an experimental methodology combining immunophenotyping, multiplexed phosphospecific flow cytometry, and multifactorial statistical modeling. Utilizing patterns of signaling network covariance, we modeled BCR signaling in 67 CLL patients using Partial Least Squares Regression (PLSR). Results from multidimensional modeling were validated using an independent test cohort of 38 patients. Results We identified a dynamic and variable imbalance between proximal (pSYK, pBTK) and distal (pPLCγ2, pBLNK, ppERK) phosphoresponses. PLSR identified the relationship between upstream tyrosine kinase SYK and its target, PLCγ2, as maximally predictive and sufficient to distinguish CLL from healthy samples, pointing to this juncture in the signaling pathway as a hallmark of CLL B cells. Specific BCR pathway signaling signatures that correlate with the disease and its degree of aggressiveness were identified. Heterogeneity in the PLSR response variable within the B cell population is both a characteristic mark of healthy samples and predictive of disease aggressiveness. Conclusion Single-cell multidimensional analysis of BCR signaling permitted focused analysis of the variability and heterogeneity of signaling behavior from patient-to-patient, and from cell-to-cell. Disruption of the pSYK/pPLCγ2 relationship is uncovered as a robust hallmark of CLL B cell signaling behavior. Together, these observations implicate novel elements of the BCR signal transduction as potential therapeutic targets. PMID:24489640

  12. Multidimensional Unfolding by Nonmetric Multidimensional Scaling of Spearman Distances in the Extended Permutation Polytope

    ERIC Educational Resources Information Center

    Van Deun, Katrijn; Heiser, Willem J.; Delbeke, Luc

    2007-01-01

    A multidimensional unfolding technique that is not prone to degenerate solutions and is based on multidimensional scaling of a complete data matrix is proposed: distance information about the unfolding data and about the distances both among judges and among objects is included in the complete matrix. The latter information is derived from the…

  13. Statistical distributions of ultra-low dose CT sinograms and their fundamental limits

    NASA Astrophysics Data System (ADS)

    Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.

    2017-03-01

    Low-dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically principled image reconstruction methods that make full use of the raw data information. Since most statistically based iterative reconstruction methods assume that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption in the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where a normal distribution remains a reasonable approximation and statistically principled (post-log) methods that assume a normal distribution have an advantage; and (3) a ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (the ideal pi for 24 cm of water) at 120 kVp and 0.5 mAs is the maximum pi value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
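
    A small simulation in the spirit of the regime study, assuming an idealized monoenergetic Beer-Lambert model: Poisson counts at three flux levels, a crude non-positivity correction, and the skewness of the post-log data as a rough measure of departure from the Gaussian assumption (all numbers illustrative):

    ```python
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(7)
    I0, pi_true = 1e5, 4.8                      # incident photons, line integral
    for scale in (1.0, 1e-2, 1e-4):             # diagnostic -> low-dose -> ULD
        counts = rng.poisson(scale * I0 * np.exp(-pi_true), size=200000)
        counts = np.maximum(counts, 1)          # crude non-positivity correction
        post_log = -np.log(counts / (scale * I0))
        # mean should stay near pi_true; skewness near 0 if Gaussian holds
        print(scale, post_log.mean(), skew(post_log))
    ```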

  14. Multivariate meta-analysis: a robust approach based on the theory of U-statistic.

    PubMed

    Ma, Yan; Mazumdar, Madhu

    2011-10-30

    Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between them. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analyses with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and has been shown to perform equally well to REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect of a non-normal data distribution on the REML estimates is marginal and that the estimates from the MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.

  15. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    NASA Astrophysics Data System (ADS)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of image segmentation and extraction of remarkable regions is an important research subject in image understanding; however, algorithms based on global features are rarely found. The requirement for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics, such as the mean value, standard deviation, maximum value, and minimum value of pixel location, brightness, and color elements, with the statistics updated as regions grow. We also introduced a new concept of conspicuity degree and applied the method to 21 various images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who participated in the psychological experiment.

  16. Application of artificial neural network to search for gravitational-wave signals associated with short gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Kim, Kyungmin; Harry, Ian W.; Hodge, Kari A.; Kim, Young-Min; Lee, Chang-Hwan; Lee, Hyun Kyu; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.

    2015-12-01

    We apply a machine learning algorithm, the artificial neural network, to the search for gravitational-wave signals associated with short gamma-ray bursts (GRBs). Multi-dimensional samples, consisting of the statistical and physical quantities produced by the coherent search pipeline, are fed into the artificial neural network to distinguish simulated gravitational-wave signals from background noise artifacts. Our result shows that the data classification efficiency at a fixed false alarm probability (FAP) is improved by the artificial neural network in comparison to the conventional detection statistic. Specifically, the distance at 50% detection probability at a fixed false positive rate is increased by about 8%-14% for the considered waveform models. We also evaluate a few seconds of the gravitational-wave data segment using the trained networks and obtain the FAP. We suggest that the artificial neural network can be a complementary method to the conventional detection statistic for identifying gravitational-wave signals related to the short GRBs.
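
    A toy version of the classification step under stated assumptions: two synthetic populations in a six-dimensional feature space, a small feed-forward network, and a detection threshold set from a pure-noise sample at a fixed FAP (the real pipeline quantities and waveform injections are not reproduced here):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(5)
    noise = rng.standard_normal((2000, 6))
    signal = rng.standard_normal((2000, 6)) + 1.0        # offset toy population
    X = np.vstack([noise, signal])
    y = np.repeat([0, 1], 2000)

    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800).fit(X, y)
    scores = net.predict_proba(rng.standard_normal((20000, 6)))[:, 1]  # pure noise
    threshold = np.quantile(scores, 1 - 1e-3)            # threshold at FAP = 0.1%
    det = net.predict_proba(rng.standard_normal((5000, 6)) + 1.0)[:, 1]
    print((det > threshold).mean())                      # detection probability
    ```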

  17. Chain Pooling model selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    As many as three iterated statistical model deletion procedures were considered for an experiment. Population model coefficients were chosen to simulate a saturated 2 to the 4th power experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely, (1) a strategy to be used in anticipation of large coefficients of variation, approximately 65 percent, (2) a strategy to be used in anticipation of small coefficients of variation, 4 percent or less, and (3) a security regret strategy to be used in the absence of such prior knowledge.

  18. Optimal Power Allocation for CC-HARQ-based Cognitive Radio with Statistical CSI in Nakagami Slow Fading Channels

    NASA Astrophysics Data System (ADS)

    Xu, Ding; Li, Qun

    2017-01-01

    This paper addresses the power allocation problem for cognitive radio (CR) based on hybrid automatic repeat request (HARQ) with Chase combining (CC) in Nakagami-m slow fading channels. We assume that, instead of perfect instantaneous channel state information (CSI), only statistical CSI is available at the secondary user (SU) transmitter. The aim is to minimize the SU outage probability under the primary user (PU) interference outage constraint. Using the Lagrange multiplier method, an iterative and recursive algorithm is derived to obtain the optimal power allocation for each transmission round. Extensive numerical results are presented to illustrate the performance of the proposed algorithm.

  19. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  20. Evaluating self-esteem modifications after a Life Skills Based Education (LSBE) intervention.

    PubMed

    Zangirolami, Francesca; Iemmi, Diego; Vighi, Valentina; Pellai, Alberto

    2016-12-22

    A satisfactory level of self-esteem has been recognized as a crucial factor contributing to a healthy lifestyle, especially among children and adolescents. We analyzed the impact of Life Skills Based Education (LSBE) in a cohort of pupils in a primary school of Sondrio (Northern Italy) and made a comparison with a control group in a primary school of the same province where no intervention was performed. Changes in levels of self-esteem were assessed through the Italian version of the Multidimensional Self-concept Test of Bruce Bracken (T.M.A.). For research purposes we used four of the six scales of the Italian version of the Multidimensional Self-esteem Test (T.M.A.). The questionnaire was handed out to a total of 318 pupils: 132 students had received an LSBE intervention and 186 had not received any intervention. Median and interquartile range were in the normal range for both the intervention and control groups. The four subscales showed an improving trend from the beginning (T1) to the end (T2) of the school year in both the intervention and control groups. Regarding the intervention group, we found statistically significant changes in the subscales of quality of interpersonal relationships (p=0.003) and emotional competencies (p=0.02); regarding the control group, we found statistically significant changes in all the subscales analyzed. Considering the variable "sex", we found a statistically significant improvement only for male students and only for the subscale "quality of interpersonal relationships" (p=0.007). The observed trend suggests an improvement in competencies and levels of self-esteem in the cohort subjected to an LSBE intervention. Data analysis revealed significant differences in the subscales of quality of interpersonal relationships and emotional competencies, suggesting that LSBE interventions have a higher impact on males than on females. A longer follow-up could be useful in order to provide more reliable and significant data about the real efficacy of LSBE programs.

  1. Data-adaptive harmonic spectra and multilayer Stuart-Landau models

    NASA Astrophysics Data System (ADS)

    Chekroun, Mickaël D.; Kondrashov, Dmitri

    2017-09-01

    Harmonic decompositions of multivariate time series are considered, for which we adopt an integral operator approach with periodic semigroup kernels. Spectral decomposition theorems are derived that cover the important cases of two-time statistics drawn from a mixing invariant measure. The corresponding eigenvalues can be grouped per Fourier frequency and are actually given, at each frequency, as the singular values of a cross-spectral matrix depending on the data. These eigenvalues obey, furthermore, a variational principle that allows us to define naturally a multidimensional power spectrum. The eigenmodes, in turn, exhibit a data-adaptive character manifested in their phase, which allows us to define a multidimensional phase spectrum. The resulting data-adaptive harmonic (DAH) modes allow for reducing the data-driven modeling effort to elemental models stacked per frequency, coupled across frequencies only by the same noise realization. In particular, the DAH decomposition extracts time-dependent coefficients stacked by Fourier frequency which, provided the decay of temporal correlations is sufficiently well resolved, can be efficiently modeled within a class of multilayer stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators. Applications to the Lorenz 96 model and to a stochastic heat equation driven by space-time white noise are considered. In both cases, the DAH decomposition allows for an extraction of spatio-temporal modes revealing key features of the dynamics in the embedded phase space. The multilayer Stuart-Landau models (MSLMs) are shown to successfully model the typical patterns of the corresponding time-evolving fields, as well as their statistics of occurrence.
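
    A hedged numerical sketch of the central construction: per-frequency cross-spectral matrices estimated by segment averaging, whose singular values organize the spectrum by Fourier frequency; the windowing and segmentation choices below are illustrative, not those of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    T, d = 4096, 3
    t = np.arange(T)
    # Three noisy channels sharing one oscillation with different phases
    X = np.stack([np.sin(0.1 * t + ph) for ph in (0.0, 0.7, 1.4)], axis=1)
    X += 0.3 * rng.standard_normal((T, d))

    # Welch-style segment averaging of cross-spectra
    segs = X.reshape(8, T // 8, d)
    F = np.fft.rfft(segs * np.hanning(T // 8)[None, :, None], axis=1)
    S = np.einsum("sfa,sfb->fab", F, F.conj()) / segs.shape[0]  # (freq, d, d)
    sv = np.linalg.svd(S, compute_uv=False)   # singular values per frequency
    print(np.argmax(sv[:, 0]))                # dominant frequency bin
    ```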

  2. Dynamic State Estimation of Terrestrial and Solar Plasmas

    NASA Astrophysics Data System (ADS)

    Kamalabadi, Farzad

    A pervasive problem in virtually all branches of space science is the estimation of multi-dimensional state parameters of a dynamical system from a collection of indirect, often incomplete, and imprecise measurements. Subsequent scientific inference is predicated on rigorous analysis, interpretation, and understanding of physical observations, and on the reliability of the associated quantitative statistical bounds and performance characteristics of the algorithms used. In this work, we focus on these dynamic state estimation problems and illustrate their importance in the context of two timely activities in space remote sensing. First, we discuss the estimation of multi-dimensional ionospheric state parameters from UV spectral imaging measurements anticipated to be acquired by the recently selected NASA Heliophysics mission, the Ionospheric Connection Explorer (ICON). Next, we illustrate that similar state-space formulations provide the means for the estimation of 3D, time-dependent densities and temperatures in the solar corona from a series of white-light and EUV measurements. We demonstrate that, while a general framework for the stochastic formulation of the state estimation problem is suited for systematic inference of the parameters of a hidden Markov process, several challenges must be addressed in the assimilation of an increasing volume and diversity of space observations. These challenges are: (1) the computational tractability when faced with voluminous and multimodal data, (2) the inherent limitations of the underlying models, which often incorrectly assume linear dynamics and Gaussian noise, and (3) the unavailability or inaccuracy of transition probabilities and noise statistics. We argue that pursuing answers to these questions necessitates cross-disciplinary research that enables progress toward systematically reconciling observational and theoretical understanding of the space environment.

  3. Development and validation of the multidimensional vaginal penetration disorder questionnaire (MVPDQ) for assessment of lifelong vaginismus in a sample of Iranian women.

    PubMed

    Molaeinezhad, Mitra; Roudsari, Robab Latifnejad; Yousefy, Alireza; Salehi, Mehrdad; Khoei, Effat Merghati

    2014-04-01

    Vaginismus is considered one of the most common female psychosexual dysfunctions. Although the importance of using a multidisciplinary approach for assessment of vaginal penetration disorder is emphasized, the paucity of instruments for this purpose is clear. We designed a study to develop and investigate the psychometric properties of a multidimensional vaginal penetration disorder questionnaire (MVPDQ), thereby assisting specialists in the clinical assessment of women with lifelong vaginismus (LLV). The MVPDQ was developed using the findings from a thematic qualitative study of 20 unconsummated couples conducted earlier, followed by an extensive literature review. Then, in a cross-sectional design, a consecutive sample of 214 women diagnosed with LLV based on Diagnostic and Statistical Manual of Mental Disorders (DSM)-IV-TR criteria completed the MVPDQ and additional questions regarding their demographic and sexual history. Validity and reliability were tested by exploratory factor analysis and Cronbach's alpha coefficient via Statistical Package for the Social Sciences (SPSS) version 16. After exploratory factor analysis, the MVPDQ emerged with 72 items and 9 dimensions: catastrophic cognitions and tightening, helplessness, marital adjustment, hypervigilance, avoidance, penetration motivation, sexual information, genital incompatibility, and optimism. Subscales of the MVPDQ showed good reliability, with coefficients varying between 0.70 and 0.87, and test-retest results were satisfactory. The present study shows that the MVPDQ is a valid and reliable self-report questionnaire for clinical assessment of women complaining of LLV. This instrument may assist specialists in making a clinical judgment and planning appropriately for clinical management.
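
    For reference, the internal-consistency statistic reported above can be computed from a respondents-by-items score matrix as in the following sketch (standard Cronbach's alpha formula; the variable names are illustrative):

      import numpy as np

      def cronbach_alpha(scores):
          # scores: (n_respondents, n_items) array of item responses.
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_variances = scores.var(axis=0, ddof=1).sum()
          total_variance = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_variances / total_variance)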

  4. Machine Learning Approaches for Clinical Psychology and Psychiatry.

    PubMed

    Dwyer, Dominic B; Falkai, Peter; Koutsouleris, Nikolaos

    2018-05-07

    Machine learning approaches for clinical psychology and psychiatry explicitly focus on learning statistical functions from multidimensional data sets to make generalizable predictions about individuals. The goal of this review is to provide an accessible understanding of why this approach is important for future practice given its potential to augment decisions associated with the diagnosis, prognosis, and treatment of people suffering from mental illness using clinical and biological data. To this end, the limitations of current statistical paradigms in mental health research are critiqued, and an introduction is provided to critical machine learning methods used in clinical studies. A selective literature review is then presented aiming to reinforce the usefulness of machine learning methods and provide evidence of their potential. In the context of promising initial results, the current limitations of machine learning approaches are addressed, and considerations for future clinical translation are outlined.
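
    The core methodological point above, out-of-sample prediction for individuals rather than in-sample group statistics, can be illustrated with a generic cross-validated classifier; the data below are random stand-ins, not clinical or biological features.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 30))       # placeholder multidimensional features
      y = rng.integers(0, 2, size=200)     # placeholder diagnostic labels

      model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      # Cross-validated accuracy estimates generalizability to new individuals.
      print(cross_val_score(model, X, y, cv=5).mean())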

  5. SU-G-IeP2-12: The Effect of Iterative Reconstruction and CT Tube Voltage On Hounsfield Unit Values of Iodinated Contrast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogden, K; Greene-Donnelly, K; Vallabhaneni, D

    Purpose: To investigate the effects of changing iterative reconstruction strength and tube voltage on Hounsfield Unit (HU) values of varying concentrations of iodinated contrast medium in a phantom. Method: Iodinated contrast (Omnipaque 300, GE Healthcare, Princeton, NJ) was diluted with distilled water to concentrations of 0.6, 0.9, 1.8, 3.6, 7.2, and 10.8 mg/mL of iodine. The solutions were scanned in a patient-equivalent water phantom on two MDCT scanners: a VCT 64-slice scanner (GE Medical Systems, Waukesha, WI) and an Aquilion One 320-slice scanner (Toshiba America Medical Systems, Tustin, CA). The phantom was scanned at 80, 100, 120, and 140 kV using 400, 255, 180, and 130 mAs, respectively, for the VCT scanner, and at 80, 100, 120, and 135 kV using 400, 250, 200, and 150 mAs, respectively, on the Aquilion One. Images were reconstructed at 2.5 mm (VCT) and 0.5 mm (Aquilion One). The VCT images were reconstructed using Adaptive Statistical Iterative Reconstruction (ASIR) at 6 different strengths: 0%, 20%, 40%, 60%, 80%, and 100%. Aquilion One images were reconstructed using Adaptive Iterative Dose Reduction (AIDR) at 4 strengths: no AIDR, Weak AIDR, Standard AIDR, and Strong AIDR. Regions of interest (ROIs) were drawn on the images to measure the HU values and standard deviations of the diluted contrast. Second-order polynomials were used to fit the HU values as a function of iodine concentration. Results: For both scanners, there was no significant effect of changing the iterative reconstruction strength. The polynomial fits yielded goodness-of-fit (R²) values averaging 0.997. Conclusion: Changing the strength of the iterative reconstruction has no significant effect on the HU values of iodinated contrast in a tissue-equivalent phantom. Fit values of HU vs iodine concentration are useful in quantitative imaging protocols such as the determination of cardiac output from time-density curves in the main pulmonary artery.
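
    A second-order fit of HU against iodine concentration, of the kind reported above, reduces to a few lines; only the concentrations below come from the abstract, the HU values are made-up placeholders.

      import numpy as np

      conc = np.array([0.6, 0.9, 1.8, 3.6, 7.2, 10.8])   # mg/mL iodine (from the abstract)
      hu = np.array([22., 31., 58., 110., 212., 308.])   # hypothetical measured HU values

      coeffs = np.polyfit(conc, hu, 2)                   # second-order polynomial fit
      residuals = hu - np.polyval(coeffs, conc)
      ss_res = np.sum(residuals ** 2)
      ss_tot = np.sum((hu - hu.mean()) ** 2)
      r2 = 1 - ss_res / ss_tot                           # goodness of fit, cf. the ~0.997 reported
      print(coeffs, r2)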

  6. Population Estimates, Health Care Characteristics, and Material Hardship Experiences of U.S. Children with Parent-Reported Speech-Language Difficulties: Evidence from Three Nationally Representative Surveys

    ERIC Educational Resources Information Center

    Sonik, Rajan A.; Parish, Susan L.; Akorbirshoev, Ilhom; Son, Esther; Rosenthal, Eliana

    2014-01-01

    Purpose: To provide estimates for the prevalence of parent-reported speech-language difficulties in U.S. children, and to describe the levels of health care access and material hardship in this population. Method: We tabulated descriptive and bivariate statistics using cross-sectional data from the 2007 and 2011/2012 iterations of the National…

  7. Chemical Plume Detection with an Iterative Background Estimation Technique

    DTIC Science & Technology

    2016-05-17

    schemes because of contamination of background statistics by the plume. To mitigate the effects of plume contamination, a first pass of the detector ... can be used to create a background mask. However, large diffuse plumes are typically not removed by a single pass. Instead, contamination can be ... is estimated using plume pixels, the covariance matrix is contaminated and detection performance may be significantly reduced. ...

  8. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    PubMed

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by the poor counting statistics, particularly in short dynamic frames, and by the low spatial resolution of the detection system, resulting in partial volume effects (PVEs). In this work, we present a fast and easy-to-implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least-squares iterative deconvolution approach applied to the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [11C]raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time activity curve in the striata region with an error below 3%, while it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly, it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectability. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
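
    A stripped-down cousin of the approach described above is a Landweber-style weighted least-squares deconvolution; the Gaussian PSF model and the omission of the paper's spatiotemporal regularization terms are simplifying assumptions for illustration only.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def wls_deconvolve(y, psf_sigma, w, n_iter=50, step=0.5):
          # y: observed image; w: per-pixel weights (e.g., inverse noise variance).
          x = y.copy()
          for _ in range(n_iter):
              residual = w * (y - gaussian_filter(x, psf_sigma))
              # A Gaussian blur is self-adjoint, so the adjoint is the same blur.
              x += step * gaussian_filter(residual, psf_sigma)
          return x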

  9. Statistical detection of geographic clusters of resistant Escherichia coli in a regional network with WHONET and SaTScan.

    PubMed

    Park, Rachel; O'Brien, Thomas F; Huang, Susan S; Baker, Meghan A; Yokoe, Deborah S; Kulldorff, Martin; Barrett, Craig; Swift, Jamie; Stelling, John

    2016-11-01

    While antimicrobial resistance threatens the prevention, treatment, and control of infectious diseases, systematic analysis of routine microbiology laboratory test results worldwide can alert us to new threats and promote timely response. This study explores statistical algorithms for recognizing geographic clustering of multi-resistant microbes within a healthcare network and monitoring the dissemination of new strains over time. Escherichia coli antimicrobial susceptibility data from a three-year period stored in WHONET were analyzed across ten facilities in a healthcare network utilizing SaTScan's spatial multinomial model with two models for defining geographic proximity. We explored geographic clustering of multi-resistance phenotypes within the network and changes in clustering over time. Geographic clusters identified with the latitude/longitude and non-parametric facility-grouping models were similar, while the latter offers greater flexibility and generalizability. Iterative application of the clustering algorithms suggested recognition of the initial appearance of invasive E. coli ST131 in the clinical database of a single hospital and its subsequent dissemination to others. Systematic analysis of routine antimicrobial susceptibility test results supports the recognition of geographic clustering of microbial phenotypic subpopulations with WHONET and SaTScan, and iterative application of these algorithms can detect a strain's initial appearance in, and dissemination across, a region, prompting early investigation, response, and containment measures.

  10. CT coronary angiography: impact of adapted statistical iterative reconstruction (ASIR) on coronary stenosis and plaque composition analysis.

    PubMed

    Fuchs, Tobias A; Fiechter, Michael; Gebhard, Cathérine; Stehli, Julia; Ghadri, Jelena R; Kazakauskaite, Egle; Herzog, Bernhard A; Husmann, Lars; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-03-01

    To assess the impact of adaptive statistical iterative reconstruction (ASIR) on coronary plaque volume and composition analysis as well as on stenosis quantification in high-definition coronary computed tomography angiography (CCTA). We included 50 plaques in 29 consecutive patients who were referred for the assessment of known or suspected coronary artery disease (CAD) with contrast-enhanced CCTA on a 64-slice high-definition CT scanner (Discovery HD 750, GE Healthcare). CCTA scans were reconstructed with standard filtered back projection (FBP) with no ASIR (0%) or with increasing contributions of ASIR, i.e. 20, 40, 60, 80 and 100% (no FBP). Plaque analysis (volume, components and stenosis degree) was performed using previously validated automated software. Mean values for minimal diameter and minimal area as well as degree of stenosis did not change significantly using different ASIR reconstructions. There was virtually no impact of reconstruction algorithms on mean plaque volume or plaque composition (e.g. soft, intermediate and calcified components). However, with increasing ASIR contribution, the percentage of plaque volume with attenuation between 401 and 500 HU decreased significantly (p < 0.05). Modern image reconstruction algorithms such as ASIR, which was developed for noise reduction in the latest high-resolution CCTA scans, can be used reliably without interfering with plaque analysis and stenosis severity assessment.

  11. On the Need for Multidimensional Stirling Simulations

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2005-01-01

    Given the cost and complication of simulating Stirling convertors, do we really need multidimensional modeling when one-dimensional capabilities exist? This paper provides a comprehensive description of when and why multidimensional simulation is needed.

  12. Beef quality grading using machine vision

    NASA Astrophysics Data System (ADS)

    Jeyamkondan, S.; Ray, N.; Kranzler, Glenn A.; Biju, Nisha

    2000-12-01

    A video image analysis system was developed to support automation of beef quality grading. Forty images of ribeye steaks were acquired. Fat and lean meat were differentiated using a fuzzy c-means clustering algorithm. Muscle longissimus dorsi (l.d.) was segmented from the ribeye using morphological operations. At the end of each iteration of erosion and dilation, a convex hull was fitted to the image and compactness was measured. The number of iterations was selected to yield the most compact l.d. Match between the l.d. muscle traced by an expert grader and that segmented by the program was 95.9%. Marbling and color features were extracted from the l.d. muscle and were used to build regression models to predict marbling and color scores. Quality grade was predicted using another regression model incorporating all features. Grades predicted by the model were statistically equivalent to the grades assigned by expert graders.
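
    The iterate-until-most-compact segmentation idea can be sketched with standard morphology tools; the opening-based loop (erosion then dilation with a small disk) and the area-to-convex-hull-area compactness measure are assumptions, since the abstract does not spell out the exact operations used.

      from skimage.morphology import binary_dilation, binary_erosion, convex_hull_image, disk

      def most_compact_mask(mask, max_iter=10):
          # mask: boolean image of the candidate muscle region.
          best, best_score = mask.copy(), 0.0
          current = mask.copy()
          for _ in range(max_iter):
              current = binary_dilation(binary_erosion(current, disk(3)), disk(3))
              hull = convex_hull_image(current)
              score = current.sum() / max(hull.sum(), 1)   # fraction of hull filled
              if score > best_score:
                  best, best_score = current.copy(), score
          return best, best_score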

  13. Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard.

    PubMed

    Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D

    2008-01-01

    The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.

  14. Polarimetric signatures of a coniferous forest canopy based on vector radiative transfer theory

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.; Amar, F.; Mougin, E.; Lopes, A.; Beaudoin, A.

    1992-01-01

    Complete polarization signatures of a coniferous forest canopy are studied by the iterative solution of the vector radiative transfer equations up to the second order. The forest canopy constituents (leaves, branches, stems, and trunk) are embedded in a multi-layered medium over a rough interface. The branches, stems and trunk scatterers are modeled as finite randomly oriented cylinders. The leaves are modeled as randomly oriented needles. For a plane wave exciting the canopy, the average Mueller matrix is formulated in terms of the iterative solution of the radiative transfer equations and used to determine the linearly polarized backscattering coefficients, the co-polarized and cross-polarized power returns, and the phase difference statistics. Numerical results are presented to investigate the effect of transmitting and receiving antenna configurations on the polarimetric signature of a pine forest. Comparison is made with measurements.

  15. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction

    PubMed Central

    Gregor, Jens; Fessler, Jeffrey A.

    2015-01-01

    Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
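
    For orientation, the classical SIRT update, before the paper's modifications, relaxation tuning, and ordered-subsets extension, looks roughly like the following sketch for an unweighted least-squares problem with a nonnegative system matrix:

      import numpy as np

      def sirt(A, b, n_iter=100):
          # Classical SIRT: x += C A^T R (b - A x), with R and C the inverse
          # row and column sums of A acting as a fixed preconditioner.
          R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
          C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x + C * (A.T @ (R * (b - A @ x)))
          return x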

  16. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation

    PubMed Central

    Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho

    2014-01-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
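
    The serial form of the MLEM update that the paper distributes over Spark/GraphX is compact; the dense system matrix below is a toy stand-in for the sparse projector used in practice.

      import numpy as np

      def mlem(A, y, n_iter=20):
          # Textbook MLEM: x <- x / (A^T 1) * A^T (y / (A x)).
          x = np.ones(A.shape[1])
          sensitivity = A.sum(axis=0)                  # A^T 1
          for _ in range(n_iter):
              ratio = y / np.maximum(A @ x, 1e-12)     # guard against division by zero
              x = x / np.maximum(sensitivity, 1e-12) * (A.T @ ratio)
          return x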

  17. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation.

    PubMed

    Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho

    2014-11-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting.

  18. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    NASA Astrophysics Data System (ADS)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner and L-moments of noise patches were calculated for the comparison.
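
    Sample L-moments of the kind measured here can be computed from probability-weighted moments; the sketch below implements the standard unbiased estimators of the first four L-moments.

      import numpy as np

      def sample_lmoments(values):
          x = np.sort(np.asarray(values, dtype=float))
          n = len(x)
          i = np.arange(n)
          # Unbiased probability-weighted moments b0..b3.
          b0 = x.mean()
          b1 = np.sum(i * x) / (n * (n - 1))
          b2 = np.sum(i * (i - 1) * x) / (n * (n - 1) * (n - 2))
          b3 = np.sum(i * (i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
          l1 = b0                                    # location
          l2 = 2 * b1 - b0                           # scale
          l3 = 6 * b2 - 6 * b1 + b0                  # unscaled skewness analogue
          l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0      # unscaled kurtosis analogue
          return l1, l2, l3, l4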

  19. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  20. Multidimensional Scaling in the Poincare Disk

    DTIC Science & Technology

    2011-05-01

    Multidimensional scaling (MDS) is a class of projective ... plane. Our construction is based on an approximate hyperbolic line search and exemplifies some of the particulars that need to be addressed when ...

  1. Identification of key regulators of pancreatic cancer progression through multidimensional systems-level analysis.

    PubMed

    Rajamani, Deepa; Bhasin, Manoj K

    2016-05-03

    Pancreatic cancer is an aggressive cancer with dismal prognosis, urgently necessitating better biomarkers to improve therapeutic options and early diagnosis. Traditional approaches to biomarker detection that consider only one aspect of the biological continuum, such as gene expression alone, are limited in scope and lack robustness in identifying the key regulators of the disease. We have adopted a multidimensional approach involving the cross-talk between the omics spaces to identify key regulators of disease progression. Multidimensional domain-specific disease signatures were obtained using rank-based meta-analysis of individual omics profiles (mRNA, miRNA, DNA methylation) related to pancreatic ductal adenocarcinoma (PDAC). These domain-specific PDAC signatures were integrated to identify genes that were affected across multiple dimensions of omics space in PDAC (genes under multiple regulatory controls, GMCs). To further pin down the regulators of PDAC pathophysiology, a systems-level network was generated from knowledge-based interaction information applied to the above-identified GMCs. Key regulators were identified from the GMC network based on network statistics, and their functional importance was validated using gene set enrichment analysis and survival analysis. Rank-based meta-analysis identified 5391 genes, 109 miRNAs and 2081 methylation sites significantly differentially expressed in PDAC (false discovery rate ≤ 0.05). Bimodal integration of meta-analysis signatures revealed 1150 and 715 genes regulated by miRNAs and methylation, respectively. Further analysis identified 189 altered genes that are commonly regulated by miRNA and methylation, and hence considered GMCs. Systems-level analysis of the scale-free GMC network identified eight potential key regulator hubs, namely E2F3, HMGA2, RASA1, IRS1, NUAK1, ACTN1, SKI and DLL1, associated with important pathways driving cancer progression. Survival analysis on individual key regulators revealed that higher expression of IRS1 and DLL1 and lower expression of HMGA2, ACTN1 and SKI were associated with better survival probabilities. It is evident from the results that our hierarchical systems-level multidimensional analysis approach has been successful in isolating the converging regulatory modules and associated key regulatory molecules that are potential biomarkers for pancreatic cancer progression.
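
    Hub identification from network statistics, the last step described above, can be illustrated with networkx on a toy graph; the edges below are invented for illustration and are not the paper's GMC interactions.

      import networkx as nx

      # Toy graph over a few of the reported key regulators (edges invented).
      G = nx.Graph([("E2F3", "HMGA2"), ("E2F3", "RASA1"), ("HMGA2", "IRS1"),
                    ("RASA1", "NUAK1"), ("IRS1", "DLL1"), ("ACTN1", "SKI"),
                    ("SKI", "E2F3"), ("DLL1", "E2F3")])

      degree = dict(G.degree())
      betweenness = nx.betweenness_centrality(G)
      # Nodes scoring high on degree and betweenness are the hub candidates.
      hubs = sorted(G.nodes, key=lambda n: (degree[n], betweenness[n]), reverse=True)
      print(hubs[:3])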

  2. Scaling Laws for the Multidimensional Burgers Equation with Quadratic External Potential

    NASA Astrophysics Data System (ADS)

    Leonenko, N. N.; Ruiz-Medina, M. D.

    2006-07-01

    The reordering of the multidimensional exponential quadratic operator in coordinate-momentum space (see X. Wang, C.H. Oh and L.C. Kwek (1998). J. Phys. A.: Math. Gen. 31:4329-4336) is applied to derive an explicit formulation of the solution to the multidimensional heat equation with quadratic external potential and random initial conditions. The solution to the multidimensional Burgers equation with quadratic external potential under Gaussian strongly dependent scenarios is also obtained via the Hopf-Cole transformation. The limiting distributions of scaling solutions to the multidimensional heat and Burgers equations with quadratic external potential are then obtained under such scenarios.
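
    For reference, the classical potential-free form of the Hopf-Cole transformation invoked above reads as follows (the paper's version carries additional terms arising from the quadratic external potential):

      \[
        u(x,t) = -2\mu \,\nabla \ln \theta(x,t),
        \qquad
        \frac{\partial \theta}{\partial t} = \mu\, \Delta \theta
        \;\;\Longrightarrow\;\;
        \frac{\partial u}{\partial t} + (u \cdot \nabla)\, u = \mu\, \Delta u .
      \]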

  3. Dual Energy CT (DECT) Monochromatic Imaging: Added Value of Adaptive Statistical Iterative Reconstructions (ASIR) in Portal Venography.

    PubMed

    Zhao, Liqin; Winklhofer, Sebastian; Jiang, Rong; Wang, Xinlian; He, Wen

    2016-01-01

    To investigate the effect of adaptive statistical iterative reconstruction (ASIR) on image quality in portal venography by dual energy CT (DECT) imaging. DECT scans of 45 cirrhotic patients obtained in the portal venous phase were analyzed. Monochromatic images at 70 keV were reconstructed with the following 4 ASIR percentages: 0%, 30%, 50%, and 70%. The image noise (IN; standard deviation, SD) of the portal vein (PV), the contrast-to-noise ratio (CNR), the subjective score for the sharpness of PV boundaries, and the diagnostic acceptability (DA) were obtained. The IN, CNR, and the subjective scores were compared among the four ASIR groups. The IN (in HU) of PV (10.05±3.14, 9.23±3.05, 8.44±2.95 and 7.83±2.90) decreased and CNR values of PV (8.04±3.32, 8.95±3.63, 9.80±4.12 and 10.74±4.73) increased with the increase in ASIR percentage (0%, 30%, 50%, and 70%, respectively), and were statistically different for the 4 ASIR groups (p<0.05). The subjective scores showed that the sharpness of portal vein boundaries (3.13±0.59, 2.82±0.44, 2.73±0.54 and 2.07±0.54) decreased with higher ASIR percentages (p<0.05). The subjective diagnostic acceptability was highest at 30% ASIR (p<0.05). 30% ASIR addition in DECT portal venography could improve the 70 keV monochromatic image quality.
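
    The image-quality metrics compared above reduce to simple ROI statistics; one common operationalization (the abstract does not give the exact formula used) is:

      import numpy as np

      def contrast_to_noise_ratio(roi_vessel, roi_background):
          # CNR = (mean signal - mean background) / background noise (SD).
          return (np.mean(roi_vessel) - np.mean(roi_background)) / np.std(roi_background, ddof=1)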

  4. Dual Energy CT (DECT) Monochromatic Imaging: Added Value of Adaptive Statistical Iterative Reconstructions (ASIR) in Portal Venography

    PubMed Central

    Winklhofer, Sebastian; Jiang, Rong; Wang, Xinlian; He, Wen

    2016-01-01

    Objective To investigate the effect of adaptive statistical iterative reconstruction (ASIR) on image quality in portal venography by dual energy CT (DECT) imaging. Materials and Methods DECT scans of 45 cirrhotic patients obtained in the portal venous phase were analyzed. Monochromatic images at 70 keV were reconstructed with the following 4 ASIR percentages: 0%, 30%, 50%, and 70%. The image noise (IN; standard deviation, SD) of the portal vein (PV), the contrast-to-noise ratio (CNR), the subjective score for the sharpness of PV boundaries, and the diagnostic acceptability (DA) were obtained. The IN, CNR, and the subjective scores were compared among the four ASIR groups. Results The IN (in HU) of PV (10.05±3.14, 9.23±3.05, 8.44±2.95 and 7.83±2.90) decreased and CNR values of PV (8.04±3.32, 8.95±3.63, 9.80±4.12 and 10.74±4.73) increased with the increase in ASIR percentage (0%, 30%, 50%, and 70%, respectively), and were statistically different for the 4 ASIR groups (p<0.05). The subjective scores showed that the sharpness of portal vein boundaries (3.13±0.59, 2.82±0.44, 2.73±0.54 and 2.07±0.54) decreased with higher ASIR percentages (p<0.05). The subjective diagnostic acceptability was highest at 30% ASIR (p<0.05). Conclusions 30% ASIR addition in DECT portal venography could improve the 70 keV monochromatic image quality. PMID:27315158

  5. Undersampling strategies for compressed sensing accelerated MR spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Vidya Shankar, Rohini; Hu, Houchun Harry; Bikkamane Jayadev, Nutandev; Chang, John C.; Kodibagkar, Vikram D.

    2017-03-01

    Compressed sensing (CS) can accelerate magnetic resonance spectroscopic imaging (MRSI), facilitating its widespread clinical integration. The objective of this study was to assess the effect of different undersampling strategies on CS-MRSI reconstruction quality. Phantom data were acquired on a Philips 3 T Ingenia scanner. Four types of undersampling masks, corresponding to each strategy, namely low resolution, variable density, iterative design, and a priori, were simulated in Matlab and retrospectively applied to the test 1X MRSI data to generate undersampled datasets corresponding to 2X-5X and 7X accelerations for each type of mask. Reconstruction parameters were kept the same in each case (all masks and accelerations) to ensure that any resulting differences can be attributed to the type of mask being employed. The reconstructed datasets from each mask were statistically compared with the reference 1X data and assessed using metrics such as the root mean square error and metabolite ratios. Simulation results indicate that both the a priori and variable density undersampling masks maintain high fidelity with the 1X data up to five-fold acceleration. The low-resolution mask reconstructions showed statistically significant differences from the 1X data, with the reconstruction failing at 3X, while the iterative design reconstructions maintained fidelity with the 1X data up to 4X acceleration. In summary, a pilot study was conducted to identify an optimal sampling mask in CS-MRSI. Simulation results demonstrate that the a priori and variable density masks can provide statistically similar results to the fully sampled reference. Future work would involve implementing these two masks prospectively on a clinical scanner.
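
    One of the four strategies, the variable-density mask, is easy to sketch; the polynomial density and its exponent below are illustrative choices, not the study's actual mask design.

      import numpy as np

      def variable_density_mask(shape, accel, alpha=3.0, seed=0):
          rng = np.random.default_rng(seed)
          ny, nx = shape
          yy, xx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx),
                               indexing="ij")
          r = np.sqrt(yy**2 + xx**2) / np.sqrt(2)      # normalized k-space radius
          prob = (1 - r) ** alpha                      # denser sampling near the center
          prob *= (ny * nx / accel) / prob.sum()       # scale to the target acceleration
          return rng.random(shape) < np.minimum(prob, 1.0)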

  6. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were derived. Their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions were also obtained.

  7. A New Time-varying Concept of Risk in a Changing Climate.

    PubMed

    Sarhadi, Ali; Ausín, María Concepción; Wiper, Michael P

    2016-10-20

    In a changing climate arising from anthropogenic global warming, the nature of extreme climatic events is changing over time. Existing analytical stationary-based risk methods, however, assume multi-dimensional extreme climate phenomena will not significantly vary over time. To strengthen the reliability of infrastructure designs and the management of water systems in the changing environment, multidimensional stationary risk studies should be replaced with a new adaptive perspective. The results of a comparison indicate that current multi-dimensional stationary risk frameworks are no longer applicable to projecting the changing behaviour of multi-dimensional extreme climate processes. Using static stationary-based multivariate risk methods may lead to undesirable consequences in designing water system infrastructures. The static stationary concept should be replaced with a flexible multi-dimensional time-varying risk framework. The present study introduces a new multi-dimensional time-varying risk concept to be incorporated in updating infrastructure design strategies under changing environments arising from human-induced climate change. The proposed generalized time-varying risk concept can be applied for all stochastic multi-dimensional systems that are under the influence of changing environments.

  8. Operating Characteristics in DIII-D ELM-Suppressed RMP H-modes with ITER Similar Shapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, T E; Fenstermacher, M E; Jakubowski, M

    2008-10-13

    Fast energy transients, incident on the DIII-D divertors due to Type-I edge localized modes (ELMs), are eliminated using small dc currents in a simple set of non-axisymmetric coils that produce edge resonant magnetic perturbations (RMP). In ITER similar shaped (ISS) plasmas, with electron pedestal collisionalities matched to those expected in ITER, a sharp resonant window in the safety factor at the 95 percent normalized poloidal flux surface is observed for ELM suppression at q95 = 3.57 with a minimum width Δq95 of ±0.05. The size of this resonant window has been increased by a factor of 4 in ISS plasmas by increasing the magnitude of the current in an n=3 coil set along with the current in a separate n=1 coil set. The resonant ELM-suppression window is highly reproducible for a given plasma shape, coil configuration and coil current, but can vary with other operating conditions such as βN. Isolated resonant windows have also been found at other q95 values when using different RMP coil configurations. For example, when the I-coil is operated in an n=3 up-down asymmetric configuration rather than an up-down symmetric configuration, a resonant window is found near q95 = 7.4. A Fourier analysis of the applied vacuum magnetic field demonstrates a statistical correlation between the Chirikov island overlap parameter and ELM suppression. These results have been used as a guide for RMP coil design studies in various ITER operating scenarios.

  9. Spatial and contrast resolution of ultralow dose dentomaxillofacial CT imaging using iterative reconstruction technology

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2017-01-01

    Objectives: The objective of this study was to determine how iterative reconstruction technology (IRT) influences contrast and spatial resolution in ultralow-dose dentomaxillofacial CT imaging. Methods: A polymethyl methacrylate phantom with various inserts was scanned using a reference protocol (RP) at CT dose index volume 36.56 mGy, a sinus protocol at 18.28 mGy and ultralow-dose protocols (LD) at 4.17 mGy, 2.36 mGy, 0.99 mGy and 0.53 mGy. All data sets were reconstructed using filtered back projection (FBP) and the following IRTs: adaptive statistical iterative reconstruction (ASIR) at two strengths (ASIR-50, ASIR-100) and model-based iterative reconstruction (MBIR). Inserts containing line-pair patterns and contrast-detail patterns for three different materials were scored by three observers. Observer agreement was analyzed using Cohen's kappa, and differences in performance between the protocols and reconstructions were analyzed with Dunn's test at α = 0.05. Results: Interobserver agreement was acceptable, with a mean kappa value of 0.59. Compared with the RP using FBP, similar scores were achieved at 2.36 mGy using MBIR. MBIR reconstructions showed the highest noise suppression as well as good contrast even at the lowest doses. Overall, ASIR reconstructions did not outperform FBP. Conclusions: LD protocols with MBIR at a dose reduction of >90% may show no significant differences in spatial and contrast resolution compared with an RP and FBP. Ultralow-dose CT and IRT should be further explored in clinical studies. PMID:28059562

  10. The cancellous bone multiscale morphology-elasticity relationship.

    PubMed

    Agić, Ante; Nikolić, Vasilije; Mijović, Budimir

    2006-06-01

    The effective-property relations of cancellous bone are analysed across two scales: properties of a representative volume element at the microscale, and a statistical measure of trabecular trajectory orientation at the mesoscale. Anisotropy of the microstructure is described by a fabric tensor measure, with a trajectory orientation tensor as the bridging-scale connection. The scattered measured data (elastic modulus, trajectory orientation, apparent density) from compression tests are fitted by a stochastic interpolation procedure. The engineering constants of the elasticity tensor are estimated by a least-squares fitting procedure in multidimensional space using the Nelder-Mead simplex. The multiaxial failure surface in strain space is constructed and interpolated by a modified super-ellipsoid.
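
    The Nelder-Mead least-squares estimation mentioned above can be sketched with scipy; the power-law model and the numbers below are placeholders, not the paper's data.

      import numpy as np
      from scipy.optimize import minimize

      rho = np.array([0.20, 0.35, 0.50, 0.65, 0.80])        # apparent density (placeholder)
      E = np.array([150.0, 520.0, 1250.0, 2400.0, 4100.0])  # elastic modulus, MPa (placeholder)

      def sse(params):
          a, b = params
          return np.sum((E - a * rho**b) ** 2)              # assumed power law E = a * rho^b

      result = minimize(sse, x0=[1000.0, 2.0], method="Nelder-Mead")
      print(result.x)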

  11. Modular Spectral Inference Framework Applied to Young Stars and Brown Dwarfs

    NASA Technical Reports Server (NTRS)

    Gully-Santiago, Michael A.; Marley, Mark S.

    2017-01-01

    In practice, synthetic spectral models are imperfect, causing inaccurate estimates of stellar parameters. Using forward modeling and statistical inference, we derive accurate stellar parameters for a given observed spectrum by emulating a grid of precomputed spectra to track uncertainties. For brown dwarfs, spectral inference builds on synthetic spectral models (Marley et al. 1996, 2014); the newest grid spans a massive multi-dimensional parameter space and, applied to IGRINS spectra, improves atmospheric models ahead of JWST. When applied to young stars (~10 Myr) with large starspots, the spots can be measured spectroscopically, especially in the near-IR with IGRINS.

  12. A Non Local Electron Heat Transport Model for Multi-Dimensional Fluid Codes

    NASA Astrophysics Data System (ADS)

    Schurtz, Guy

    2000-10-01

    Apparent inhibition of thermal heat flow is one of the most ancient problems in computational Inertial Fusion, and flux-limited Spitzer-Harm conduction has been a mainstay in multi-dimensional hydrodynamic codes for more than 25 years. Theoretical investigation of the problem indicates that heat transport in laser-produced plasmas has to be considered a non local process. Various authors contributed to the non local theory and proposed convolution formulas designed for practical implementation in one-dimensional fluid codes. Though the theory, confirmed by kinetic calculations, actually predicts a reduced heat flux, it fails to explain the very small limiters required in two-dimensional simulations. Fokker-Planck simulations by Epperlein, Rickard and Bell [PRL 61, 2453 (1988)] demonstrated that non local effects could lead to a strong reduction of heat flow in two dimensions, even in situations where a one-dimensional analysis suggests that the heat flow is nearly classical. We developed at CEA/DAM a non local electron heat transport model suitable for implementation in our two-dimensional radiation hydrodynamic code FCI2. This model may be envisioned as the first step of an iterative solution of the Fokker-Planck equations; it takes the mathematical form of multigroup diffusion equations, the solution of which yields both the heat flux and the departure of the electron distribution function from the Maxwellian. Although direct implementation of the model is straightforward, formal solutions of it can be expressed in convolution form, exhibiting a three-dimensional tensor propagator. Reduction to one dimension retrieves the original formula of Luciani, Mora and Virmont [PRL 51, 1664 (1983)]. Intense magnetic fields may be generated by thermal effects in laser targets; these fields, as well as non local effects, will inhibit electron conduction. We present simulations where both effects are taken into account and briefly discuss the coupling strategy between them.

  13. Multidimensional Knowledge Structures.

    ERIC Educational Resources Information Center

    Schuh, Kathy L.

    Multidimensional knowledge structures, described from a constructivist perspective and aligned with the "Mind as Rhizome" metaphor, provide support for constructivist learning strategies. This qualitative study was conducted to seek empirical support for a description of multidimensional knowledge structures, focusing on the…

  14. Multidimensional quantum entanglement with large-scale integrated optics.

    PubMed

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G

    2018-04-20

    The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  15. Prospective ECG-Triggered Coronary CT Angiography: Clinical Value of Noise-Based Tube Current Reduction Method with Iterative Reconstruction

    PubMed Central

    Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng

    2013-01-01

    Objectives To evaluate the clinical value of a noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods We performed a prospective randomized study evaluating 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109) or with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired to 5 = excellent, with 3–5 defined as diagnostic). Image noise and signal intensity were measured; signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, chi-square test, Kruskal-Wallis test and multivariable linear regression. Results Image noise was maintained at the target value of 35 HU with a small interquartile range for Group 2 (35.00–35.03 HU) and Group 3 (34.99–35.02 HU), while it ranged from 28.73 to 37.87 HU for Group 1. All images in the three groups were acceptable for diagnosis. Relative reductions of 20% and 51% in effective dose were achieved for Group 2 (2.9 mSv) and Group 3 (1.8 mSv) compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with a 26% reduction in effective dose. Conclusion The noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality. Meanwhile, effective dose can be reduced by more than 50%. PMID:23741444

  16. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
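
    The off-axis density discussed above is the standard modified Rician; a numerically stable evaluation using the exponentially scaled Bessel function looks like this sketch (Ic and Is denote the halo and coherent intensities):

      import numpy as np
      from scipy.special import i0e

      def modified_rician_pdf(I, Ic, Is):
          # p(I) = (1/Ic) exp(-(I + Is)/Ic) I0(2 sqrt(I Is)/Ic), written with
          # the scaled Bessel function i0e(x) = exp(-x) I0(x) to avoid overflow.
          arg = 2.0 * np.sqrt(I * Is) / Ic
          return np.exp(arg - (I + Is) / Ic) * i0e(arg) / Ic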

  17. Reliability and Validity of the Multidimensional Scale of Perceived Social Support (MSPSS): Thai Version.

    PubMed

    Wongpakaran, Tinakon; Wongpakaran, Nahathai; Ruktrakul, Ruk

    2011-01-01

    This study examines the Thai version of the Multidimensional Scale of Perceived Social Support (MSPSS) for its psychometric properties. In total, 462 participants were recruited (310 medical students from Chiang Mai University and 152 psychiatric patients), who completed the Thai version of the MSPSS, the State Trait Anxiety Inventory (STAI), the Rosenberg Self-Esteem Scale (RSES) and the Thai Depression Inventory (TDI). Test-retest reliability was assessed over a four-week period. Factor analysis produced three-factor solutions for both the patient (PG) and student groups (SG), and overall the model demonstrated adequate fit indices. The mean total score and the sub-scale scores for the SG were statistically higher than those in the PG, except for 'Significant Others'. The internal consistency of the scale was good, with a Cronbach's alpha of 0.91 for the SG and 0.87 for the PG. After the four-week test-retest reliability exercise, the intra-class correlation coefficient (ICC) was found to be 0.84. The Thai MSPSS was found to have a negative correlation with the STAI and the TDI, but was positively correlated with the RSES. The Thai MSPSS is a reliable and valid instrument to use.

  18. Observing Galaxy Mergers in Simulations

    NASA Astrophysics Data System (ADS)

    Snyder, Gregory

    2018-01-01

    I will describe results on mergers and morphology of distant galaxies. By mock-observing 3D cosmological simulations, we aim to contrast theory with data, design better diagnostics of physical processes, and examine unexpected signatures of galaxy formation. Recently, we conducted mock surveys of the Illustris Simulations to learn how mergers would appear in deep HST and JWST surveys. With this approach, we reconciled merger rates estimated using observed close galaxy pairs with intrinsic merger rates predicted by theory. This implies that the merger-pair observability time is probably shorter in the early universe, and therefore that major mergers are more common than implied by the simplest arguments. Further, we show that disturbance-based diagnostics of late-stage mergers can be improved significantly by combining multi-dimensional image information with simulated merger identifications to train automated classifiers. We then apply these classifiers to real measurements from the CANDELS fields, recovering a merger fraction increasing with redshift in broad agreement with pair fractions and simulations, and with statistical errors smaller by a factor of two than classical morphology estimators. This emphasizes the importance of using robust training sets, including cosmological simulations and multidimensional data, for interpreting observed processes in galaxy evolution.

  19. A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.

    PubMed

    Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John

    2016-09-08

    The purpose of this study was to evaluate the performance of the third generation of the model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations, for comparison with images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved a more isotropic noise behavior with fewer image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user a choice of balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user-selectable options and in general improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low dose levels. © 2016 The Authors.
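
    The quantity at the center of this study, the 2D NPS, is estimated from an ensemble of noise-only ROIs with a standard periodogram average; a minimal sketch, with pixel size in mm (so the NPS comes out in HU²·mm²):

      import numpy as np

      def nps_2d(noise_rois, pixel_size):
          # noise_rois: (n_rois, L, L) stack of noise-only ROIs.
          n, L, _ = noise_rois.shape
          rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)  # de-mean
          power = np.abs(np.fft.fft2(rois)) ** 2
          # Standard estimator: NPS = (dx * dy / (Nx * Ny)) * <|DFT|^2>.
          return np.fft.fftshift(power.mean(axis=0)) * pixel_size**2 / (L * L)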

  20. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted for TRIPOLI-4® assessment on fusion applications. Experimental analyses (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In the previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present the MCAM (FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extended benchmark has been performed on the estimation of neutron flux, nuclear heating in the shielding blankets and tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies fall mainly within the Monte Carlo codes' statistical errors.

  1. Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction

    PubMed Central

    Jian, Y; Planeta, B; Carson, R E

    2016-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254

  2. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Planeta, B.; Carson, R. E.

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.

  3. PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering

    NASA Astrophysics Data System (ADS)

    Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.

    2016-02-01

    Positron emission tomography (PET) projection data, or sinograms, contain poor statistics and randomness that produce noisy PET images. In order to improve the PET image, we propose an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered back-projection (FBP), ordered-subset expectation maximization (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using simulated NCAT phantom PET sinograms as generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR) and edge preservation capability. Further analysis of the achieved improvement is also carried out specifically for the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering, in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration, and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
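
    One plausible reading of a 3D mean-median sinogram filter is a median pass to reject outliers while preserving edges, followed by a mean pass to smooth flat regions; the paper's exact kernel design is not specified in the abstract, so treat this as an illustrative stand-in.

      import numpy as np
      from scipy.ndimage import median_filter, uniform_filter

      def mean_median_filter_3d(sinogram, size=3):
          # sinogram: 3D array (e.g., radial bins x angles x planes).
          med = median_filter(sinogram, size=size)   # edge-preserving outlier rejection
          return uniform_filter(med, size=size)      # smoothing of flat regions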

  4. Turbulence Enhancement by Fractal Square Grids: Effects of the Number of Fractal Scales

    NASA Astrophysics Data System (ADS)

    Omilion, Alexis; Ibrahim, Mounir; Zhang, Wei

    2017-11-01

    Fractal square grids offer a unique solution for passive flow control as they can produce wakes with a distinct turbulence intensity peak and a prolonged turbulence decay region at the expense of only minimal pressure drop. While previous studies have solidified this characteristic of fractal square grids, how the number of scales (or fractal iterations N) affects turbulence production and decay of the induced wake is still not well understood. The focus of this research is to determine the relationship between the fractal iteration N and the turbulence produced in the wake flow using well-controlled water-tunnel experiments. Particle Image Velocimetry (PIV) is used to measure the instantaneous velocity fields downstream of four different fractal grids with an increasing number of scales (N = 1, 2, 3, and 4) and a conventional single-scale grid. By comparing the turbulent scales and statistics of the wake, we are able to determine how each iteration affects the peak turbulence intensity and the production/decay of turbulence from the grid. In light of the ability of these fractal grids to increase turbulence intensity with low pressure drop, this work can potentially benefit a wide variety of applications where energy-efficient mixing or convective heat transfer is a key process.
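
    The grid comparison rests on standard wake statistics; a small example of the turbulence-intensity field computed from a stack of instantaneous PIV velocity frames (the array layout is an assumption):

```python
import numpy as np

def turbulence_intensity(u_frames):
    """u_frames: array (n_frames, ny, nx) of instantaneous streamwise
    velocity from PIV. Returns the local turbulence intensity
    Ti = u_rms / |U_mean|, the statistic used to compare grids."""
    U_mean = u_frames.mean(axis=0)   # time-averaged velocity field
    u_rms = u_frames.std(axis=0)     # rms of the fluctuations
    return u_rms / np.maximum(np.abs(U_mean), 1e-12)
```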

  5. Self-transcendence, nurse-patient interaction and the outcome of multidimensional well-being in cognitively intact nursing home patients.

    PubMed

    Haugan, Gørill; Hanssen, Brith; Moksnes, Unni K

    2013-12-01

    The aim of this study was to investigate the associations between age, gender, self-transcendence, nurse-patient interaction and multidimensional well-being as the outcome among cognitively intact nursing home patients. Self-transcendence is considered to be a vital resource of well-being in vulnerable populations and at the end of life. Moreover, the quality of care and the nurse-patient interaction are found to influence self-transcendence and well-being in nursing home patients. A cross-sectional design employing the Self-Transcendence Scale, the Nurse-Patient Interaction Scale, the FACT-G Quality of Life and the FACIT-Sp Spiritual Well-Being questionnaires was adopted. A sample of 202 cognitively intact nursing home patients from 44 nursing homes in central Norway was selected. A previously documented two-factor construct of self-transcendence was applied. The statistical analyses were carried out by means of independent sample t-tests, correlation and regression analyses. Multiple linear regression analyses revealed significant relationships between interpersonal self-transcendence and social, functional and spiritual well-being, whereas intrapersonal self-transcendence was significantly related to emotional, social, functional and spiritual well-being. Nurse-patient interaction was related to physical, emotional and functional well-being. Age and gender were not significant predictors of well-being, except for functional and spiritual well-being, where women scored higher than men. Nurse-patient interaction and self-transcendence are vital resources for promoting well-being physically, emotionally, functionally, socially and spiritually among cognitively intact nursing home patients. Nurse-patient interaction signifies vital and ultimate nursing qualities promoting self-transcendence and multidimensional well-being. These findings are important for clinical nursing practice aiming to increase patients' well-being. © 2012 The Authors. Scandinavian Journal of Caring Sciences © 2012 Nordic College of Caring Science.

  6. On the importance of variable soil depth and process representation in the modeling of shallow landslide initiation

    NASA Astrophysics Data System (ADS)

    Fatichi, S.; Burlando, P.; Anagnostopoulos, G.

    2014-12-01

    Sub-surface hydrology has a dominant role in the initiation of rainfall-induced landslides, since changes in the soil water potential affect soil shear strength and thus apparent cohesion. Especially on steep slopes and in shallow soils, loss of shear strength can lead to failure even in unsaturated conditions. A process-based model, HYDROlisthisis, characterized by high resolution in space and time, is developed to investigate the interactions between surface and subsurface hydrology and shallow landslide initiation. Specifically, 3D variably saturated flow conditions, including soil hydraulic hysteresis and preferential flow, are simulated for the subsurface flow, coupled with a surface runoff routine. Evapotranspiration and specific root water uptake are taken into account for continuous simulations of soil water content during storm and inter-storm periods. The geotechnical component of the model is based on a multidimensional limit equilibrium analysis, which takes into account the basic principles of unsaturated soil mechanics. The model is applied to a small catchment in Switzerland historically prone to rainfall-triggered landslides. A series of numerical simulations were carried out with various boundary conditions (soil depths) and using hydrological and geotechnical components of different complexity. Specifically, the sensitivity to the inclusion of preferential flow and soil hydraulic hysteresis was tested, together with the replacement of the infinite slope assumption by a multi-dimensional limit equilibrium analysis. The effect of the different model components on model performance was assessed using accuracy statistics and the Receiver Operating Characteristic (ROC) curve. The results show that boundary conditions play a crucial role in the model performance and that the introduced hydrological (preferential flow and soil hydraulic hysteresis) and geotechnical components (multidimensional limit equilibrium analysis) considerably improve predictive capabilities in the presented case study.

  7. Validation of the partner version of the multidimensional vaginal penetration disorder questionnaire: A tool for clinical assessment of lifelong vaginismus in a sample of Iranian population.

    PubMed

    Molaeinezhad, Mitra; Khoei, Effat Merghati; Salehi, Mehrdad; Yousfy, Alireza; Roudsari, Robab Latifnejad

    2014-01-01

    The role of spousal response in a woman's experience of pain during vaginal penetration attempts is believed to be an important factor; however, studies are rather limited in this area. The aim of this study was to develop and investigate the psychometric indexes of the partner version of a multidimensional vaginal penetration disorder questionnaire (PV-MVPDQ), so that clinical assessment of spousal psychosexual reactions to vaginismus by specialists will be easier. A mixed-methods sequential exploratory design was used, in which findings from a thematic qualitative study of 20 unconsummated couples, followed by an extensive literature review, were used to develop the PV-MVPDQ. In a cross-sectional design, a consecutive sample of 214 men whose wives suffered from lifelong vaginismus (LLV), diagnosed according to Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision (DSM-IV-TR) criteria, completed the questionnaire and additional questions regarding their demographic and sexual history. Validation measures and reliability were assessed by exploratory factor analysis (EFA) and Cronbach's alpha coefficient using SPSS version 16 (SPSS Inc., IBM Corporation, Armonk, USA). After EFA, the PV-MVPDQ emerged with 40 items and 7 dimensions: helplessness, sexual information, vicious cycle of penetration, hypervigilance and solicitousness, catastrophic cognitions, sexual and marital adjustment, and optimism. Subscales of the PV-MVPDQ showed good reliability (0.71-0.85) and the results of test-retest were satisfactory. The present study shows that the PV-MVPDQ is a multidimensional, valid and reliable self-report questionnaire for assessment of cognitions and sexual and marital relations related to vaginal penetration in spouses of women with LLV. It may assist specialists in making clinical judgments and planning appropriate clinical management.

  8. Validation of the partner version of the multidimensional vaginal penetration disorder questionnaire: A tool for clinical assessment of lifelong vaginismus in a sample of Iranian population

    PubMed Central

    Molaeinezhad, Mitra; Khoei, Effat Merghati; Salehi, Mehrdad; Yousfy, Alireza; Roudsari, Robab Latifnejad

    2014-01-01

    Background: The role of spousal response in a woman's experience of pain during vaginal penetration attempts is believed to be an important factor; however, studies are rather limited in this area. The aim of this study was to develop and investigate the psychometric indexes of the partner version of a multidimensional vaginal penetration disorder questionnaire (PV-MVPDQ), so that clinical assessment of spousal psychosexual reactions to vaginismus by specialists will be easier. Materials and Methods: A mixed-methods sequential exploratory design was used, in which findings from a thematic qualitative study of 20 unconsummated couples, followed by an extensive literature review, were used to develop the PV-MVPDQ. In a cross-sectional design, a consecutive sample of 214 men whose wives suffered from lifelong vaginismus (LLV), diagnosed according to Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision (DSM-IV-TR) criteria, completed the questionnaire and additional questions regarding their demographic and sexual history. Validation measures and reliability were assessed by exploratory factor analysis (EFA) and Cronbach's alpha coefficient using SPSS version 16 (SPSS Inc., IBM Corporation, Armonk, USA). Results: After EFA, the PV-MVPDQ emerged with 40 items and 7 dimensions: helplessness, sexual information, vicious cycle of penetration, hypervigilance and solicitousness, catastrophic cognitions, sexual and marital adjustment, and optimism. Subscales of the PV-MVPDQ showed good reliability (0.71-0.85) and the results of test-retest were satisfactory. Conclusion: The present study shows that the PV-MVPDQ is a multidimensional, valid and reliable self-report questionnaire for assessment of cognitions and sexual and marital relations related to vaginal penetration in spouses of women with LLV. It may assist specialists in making clinical judgments and planning appropriate clinical management. PMID:25540787
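
    The reliability figures quoted in these two records are Cronbach's alpha values; a compact sketch of that computation on a matrix of item scores (the input layout is a hypothetical convention):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scores for one
    subscale. Classical Cronbach's alpha: k/(k-1) * (1 - sum of
    item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```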

  9. Multidimensional Perfectionism and the Self

    ERIC Educational Resources Information Center

    Ward, Andrew M.; Ashby, Jeffrey S.

    2008-01-01

    This study examined multidimensional perfectionism and self-development. Two hundred seventy-one undergraduates completed a measure of multidimensional perfectionism and two Kohutian measures designed to measure aspects of self-development including social connectedness, social assurance, goal instability (idealization), and grandiosity. The…

  10. Chemical space visualization: transforming multidimensional chemical spaces into similarity-based molecular networks.

    PubMed

    de la Vega de León, Antonio; Bajorath, Jürgen

    2016-09-01

    The concept of chemical space is of fundamental relevance for medicinal chemistry and chemical informatics. Multidimensional chemical space representations are coordinate-based. Chemical space networks (CSNs) have been introduced as a coordinate-free representation. A computational approach is presented for the transformation of multidimensional chemical space into CSNs. The design of transformation CSNs (TRANS-CSNs) is based upon a similarity function that directly reflects distance relationships in original multidimensional space. TRANS-CSNs provide an immediate visualization of coordinate-based chemical space and do not require the use of dimensionality reduction techniques. At low network density, TRANS-CSNs are readily interpretable and make it possible to evaluate structure-activity relationship information originating from multidimensional chemical space.
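
    A toy version of the network construction step: nodes are compounds and edges join pairs whose similarity clears a threshold, which controls the network density discussed above. The inverse-distance similarity used here is an assumption; the paper defines its own TRANS-CSN similarity function reflecting distance relationships in the original multidimensional space:

```python
import networkx as nx
import numpy as np

def build_csn(coords, threshold=0.65):
    """Sketch of a chemical space network from coordinate-based
    descriptors. coords: (n_compounds, n_dims). Similarity is an
    assumed monotone map of Euclidean distance into (0, 1]."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sim = 1.0 / (1.0 + d)
    g = nx.Graph()
    g.add_nodes_from(range(len(coords)))
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if sim[i, j] >= threshold:       # raise threshold -> sparser, more readable network
                g.add_edge(i, j, weight=sim[i, j])
    return g
```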

  11. Brownian motion properties of optoelectronic random bit generators based on laser chaos.

    PubMed

    Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge

    2016-07-11

    The nondeterministic property of the optoelectronic random bit generator (RBG) based on laser chaos is experimentally analyzed from two aspects: the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, and moreover that they can pass the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together, these give mathematically provable evidence that the ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.
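
    The law-of-the-iterated-logarithm check can be illustrated by mapping bits to a ±1 random walk and measuring how often the walk escapes the LIL envelope; this is a simplified stand-in for the paper's analysis, not its actual test procedure:

```python
import numpy as np

def lil_exceedance(bits):
    """Map bits to a +/-1 walk S_n and report the fraction of steps
    where |S_n| exceeds the LIL envelope sqrt(2 n log log n); for a
    good generator this fraction should be very small."""
    steps = 2 * np.asarray(bits, dtype=float) - 1
    s = np.cumsum(steps)
    n = np.arange(1, len(s) + 1)
    valid = n >= 3                             # log log n needs n >= 3
    bound = np.sqrt(2 * n[valid] * np.log(np.log(n[valid])))
    return np.mean(np.abs(s[valid]) > bound)

print(lil_exceedance(np.random.default_rng(0).integers(0, 2, 1_000_000)))
```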

  12. Statistical Significance of Optical Map Alignments

    PubMed Central

    Sarkar, Deepayan; Goldstein, Steve; Schwartz, David C.

    2012-01-01

    The Optical Mapping System constructs ordered restriction maps spanning entire genomes through the assembly and analysis of large datasets comprising individually analyzed genomic DNA molecules. Such restriction maps uniquely reveal mammalian genome structure and variation, but also raise computational and statistical questions beyond those that have been solved in the analysis of smaller, microbial genomes. We address the problem of how to filter maps that align poorly to a reference genome. We obtain map-specific thresholds that control errors and improve iterative assembly. We also show how an optimal self-alignment score provides an accurate approximation to the probability of alignment, which is useful in applications seeking to identify structural genomic abnormalities. PMID:22506568

  13. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases of the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations were written to augment the program simulation. The program codes generate tables of k associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
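
    For the one-sided case, the exact tolerance factor follows directly from the noncentral t relation mentioned above; a sketch using SciPy (the report's own codes predate such libraries, so this is a modern illustration, not the report's implementation):

```python
from math import sqrt

from scipy.stats import nct, norm

def k_one_sided(n, p, conf):
    """Exact one-sided normal tolerance factor k: with confidence
    `conf`, at least a proportion p of the population lies below
    xbar + k*s for a sample of size n. Uses the noncentral
    t-distribution with noncentrality z_p * sqrt(n)."""
    delta = norm.ppf(p) * sqrt(n)
    return nct.ppf(conf, df=n - 1, nc=delta) / sqrt(n)

print(round(k_one_sided(10, 0.95, 0.95), 3))  # classical tables give about 2.911
```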

  14. Using missing ordinal patterns to detect nonlinearity in time series data.

    PubMed

    Kulp, Christopher W; Zunino, Luciano; Osborne, Thomas; Zawadzki, Brianna

    2017-08-01

    The number of missing ordinal patterns (NMP) is the number of ordinal patterns that do not appear in a series after it has been symbolized using the Bandt and Pompe methodology. In this paper, the NMP is demonstrated as a test for nonlinearity using a surrogate framework in order to see if the NMP for a series is statistically different from the NMP of iterative amplitude adjusted Fourier transform (IAAFT) surrogates. It is found that the NMP works well as a test statistic for nonlinearity, even in the cases of very short time series. Both model and experimental time series are used to demonstrate the efficacy of the NMP as a test for nonlinearity.
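
    Counting missing ordinal patterns is short to implement; a sketch with embedding dimension d and unit delay (the IAAFT surrogate comparison itself is omitted):

```python
from itertools import permutations

import numpy as np

def missing_patterns(x, d=4):
    """Number of missing ordinal patterns (NMP) after Bandt-Pompe
    symbolization: slide a window of length d over the series,
    record the rank pattern of each window, and count the d!
    patterns never observed."""
    x = np.asarray(x)
    seen = {tuple(np.argsort(x[i:i + d]).tolist())
            for i in range(len(x) - d + 1)}
    return sum(1 for p in permutations(range(d)) if p not in seen)

# A long stochastic series should leave few patterns missing:
print(missing_patterns(np.random.default_rng(0).normal(size=10_000)))
```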

  15. Statistical Deviations From the Theoretical Only-SBU Model to Estimate MCU Rates in SRAMs

    NASA Astrophysics Data System (ADS)

    Franco, Francisco J.; Clemente, Juan Antonio; Baylac, Maud; Rey, Solenne; Villa, Francesca; Mecha, Hortensia; Agapito, Juan A.; Puchner, Helmut; Hubert, Guillaume; Velazco, Raoul

    2017-08-01

    This paper addresses a well-known problem that occurs when memories are exposed to radiation: determining whether a bit flip is isolated or belongs to a multiple event. As it is unusual to know the physical layout of the memory, this paper proposes to evaluate the statistical properties of the sets of corrupted addresses and to compare the results with a mathematical prediction model in which all of the events are single bit upsets. A set of rules, easy to implement in common programming languages, can be iteratively applied if anomalies are observed, thus yielding a classification of errors much closer to reality (more than 80% accuracy in our experiments).

  16. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
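
    A minimal version of the iterative proportional fitting step described here, fitting a two-way table to fixed margins (the minimal sufficient statistics of the independence log-linear model); strictly positive cells are assumed for simplicity:

```python
import numpy as np

def ipf(table, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Iterative proportional fitting: alternately rescale rows and
    columns of a two-way table until its margins match the target
    row and column sums."""
    t = np.asarray(table, dtype=float).copy()
    for _ in range(max_iter):
        t *= (np.asarray(row_targets) / t.sum(axis=1))[:, None]
        t *= (np.asarray(col_targets) / t.sum(axis=0))[None, :]
        if np.allclose(t.sum(axis=1), row_targets, atol=tol):
            break
    return t
```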

  17. The closure approximation in the hierarchy equations.

    NASA Technical Reports Server (NTRS)

    Adomian, G.

    1971-01-01

    The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given, and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.

  18. Speckle evolution with multiple steps of least-squares phase removal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Mingzhou; Dainty, Chris; Roux, Filippus S.

    2011-08-15

    We study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both vortex density and scintillation index. Statistical results show that almost all the optical vortices can be removed from a speckle field, which finally decays into a quasiplane wave after such an iterative process.

  19. Statistical Engineering in Air Traffic Management Research

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  20. Multidimensional poverty and catastrophic health spending in the mountainous regions of Myanmar, Nepal and India.

    PubMed

    Mohanty, Sanjay K; Agrawal, Nand Kishor; Mahapatra, Bidhubhusan; Choudhury, Dhrupad; Tuladhar, Sabarnee; Holmgren, E Valdemar

    2017-01-18

    The economic burden on households due to out-of-pocket expenditure (OOPE) is large in many Asian countries. Though studies suggest increasing household poverty due to high OOPE in developing countries, studies on the association of multidimensional poverty with household health spending are limited. This paper tests the hypothesis that the multidimensionally poor are more likely to incur catastrophic health spending, cutting across countries. Data from the Poverty and Vulnerability Assessment (PVA) Survey carried out by the International Center for Integrated Mountain Development (ICIMOD) were used in the analyses. The PVA survey was a comprehensive household survey that covered the mountainous regions of India, Nepal and Myanmar. A total of 2647 households in India, 2310 households in Nepal and 4290 households in Myanmar were covered by the PVA survey. Poverty is measured in a multidimensional framework covering the dimensions of education, income, and energy, water and sanitation, using the Alkire and Foster method. Health shock is measured using the frequency of illness, family sickness and death of any family member in a reference period of one year. Catastrophic health expenditure is defined as health spending exceeding 40% of the household's capacity to pay. Results suggest that about three-fifths of the population in Myanmar, two-fifths in Nepal and one-third in India are multidimensionally poor. About 47% of the multidimensionally poor in India had incurred catastrophic health spending, compared to 35% of the multidimensionally non-poor, and the pattern was similar in both Nepal and Myanmar. The odds of incurring catastrophic health spending were 56% higher among the multidimensionally poor than among the multidimensionally non-poor [95% CI: 1.35-1.76]. While health shocks to households are consistently significant predictors of catastrophic health spending across countries of residence, the educational attainment of the head of the household is not significant. The multidimensionally poor in the poorer regions are more likely to face health shocks and less likely to be able to afford professional health services. Increasing government spending on health and increasing households' access to health insurance can reduce catastrophic health spending and multidimensional poverty.
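
    A sketch of the Alkire-Foster identification step used to flag multidimensionally poor households; the 0/1 deprivation matrix, the equal weights, and the k = 1/3 cutoff are illustrative assumptions (the paper uses its own dimensions of education, income, and energy, water and sanitation):

```python
import numpy as np

def alkire_foster_poor(deprivations, weights, k=1/3):
    """Alkire-Foster identification: `deprivations` is a 0/1 matrix
    (households x indicators) and `weights` sum to one. A household
    is multidimensionally poor when its weighted deprivation score
    reaches the cutoff k."""
    score = np.asarray(deprivations, float) @ np.asarray(weights, float)
    return score >= k

dep = np.array([[1, 1, 0], [0, 1, 0], [1, 1, 1]])   # hypothetical households
poor = alkire_foster_poor(dep, weights=[1/3, 1/3, 1/3])
print(poor, "headcount ratio:", poor.mean())
```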

  1. An introduction to multidimensional measurement using Rasch models.

    PubMed

    Briggs, Derek C; Wilson, Mark

    2003-01-01

    The act of constructing a measure requires a number of important assumptions. Principal among these assumptions is that the construct is unidimensional. In practice there are many instances when the assumption of unidimensionality does not hold, and where the application of a multidimensional measurement model is both technically appropriate and substantively advantageous. In this paper we illustrate the usefulness of a multidimensional approach to measurement with the Multidimensional Random Coefficient Multinomial Logit (MRCML) model, an extension of the unidimensional Rasch model. An empirical example is taken from a collection of embedded assessments administered to 541 students enrolled in middle school science classes with a hands-on science curriculum. Student achievement on these assessments is multidimensional in nature, but can also be treated as consecutive unidimensional estimates or, as is most common, as a composite unidimensional estimate. Structural parameters are estimated for each model using ConQuest, and model fit is compared. Student achievement in science is also compared across models. The multidimensional approach has the best fit to the data, and provides more reliable estimates of student achievement than the consecutive unidimensional approach. Finally, at an interpretational level, the multidimensional approach may well provide richer information to the classroom teacher about the nature of student achievement.
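
    For reference, the unidimensional Rasch response probability that the MRCML model generalizes (in MRCML, ability becomes a vector scored against item-specific dimension loadings):

```python
import numpy as np

def rasch_prob(theta, b):
    """Unidimensional Rasch model: probability of a correct response
    for ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

print(rasch_prob(0.5, -0.2))  # ability above difficulty -> p > 0.5
```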

  2. [The application of the multidimensional statistical methods in the evaluation of the influence of atmospheric pollution on the population's health].

    PubMed

    Surzhikov, V D; Surzhikov, D V

    2014-01-01

    The search for and measurement of causal relationships between exposure to air pollution and the health status of the population are based on systems analysis and risk assessment, with the aim of improving the quality of research. To this end, modern statistical methods are applied, including tests of independence, principal component analysis and discriminant function analysis. The analysis separated four main components from the full set of atmospheric pollutants: for diseases of the circulatory system, the main principal component is associated with concentrations of suspended solids, nitrogen dioxide, carbon monoxide and hydrogen fluoride; for respiratory diseases, the main principal component is closely associated with suspended solids, sulfur dioxide, nitrogen dioxide and charcoal black. The discriminant function was shown to be usable as a measure of the level of air pollution.
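
    A compact PCA sketch of the kind applied here: standardize the observations-by-pollutants matrix, take the SVD, and keep four components (the matrix layout is an assumed convention):

```python
import numpy as np

def principal_components(X, n_components=4):
    """PCA via SVD of the standardized data matrix
    (observations x pollutants). Returns the component loadings
    (pollutants x components) and the component scores."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    loadings = Vt[:n_components].T
    scores = Z @ loadings
    return loadings, scores
```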

  3. Incorporating spatial context into statistical classification of multidimensional image data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.

    1981-01-01

    Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.

  4. Efficient global fiber tracking on multi-dimensional diffusion direction maps

    NASA Astrophysics Data System (ADS)

    Klein, Jan; Köhler, Benjamin; Hahn, Horst K.

    2012-02-01

    Global fiber tracking algorithms have recently been proposed which were able to compute results of unprecedented quality. They account for avoiding accumulation errors by a global optimization process at the cost of a high computation time of several hours or even days. In this paper, we introduce a novel global fiber tracking algorithm which, for the first time, globally optimizes the underlying diffusion direction map obtained from DTI or HARDI data, instead of single fiber segments. As a consequence, the number of iterations in the optimization process can drastically be reduced by about three orders of magnitude. Furthermore, in contrast to all previous algorithms, the density of the tracked fibers can be adjusted after the optimization within a few seconds. We evaluated our method for diffusion-weighted images obtained from software phantoms, healthy volunteers, and tumor patients. We show that difficult fiber bundles, e.g., the visual pathways or tracts for different motor functions can be determined and separated in an excellent quality. Furthermore, crossing and kissing bundles are correctly resolved. On current standard hardware, a dense fiber tracking result of a whole brain can be determined in less than half an hour which is a strong improvement compared to previous work.

  5. Core Competencies for Pain Management: Results of an Interprofessional Consensus Summit

    PubMed Central

    Fishman, Scott M; Young, Heather M; Lucas Arwood, Ellyn; Chou, Roger; Herr, Keela; Murinson, Beth B; Watt-Watson, Judy; Carr, Daniel B; Gordon, Debra B; Stevens, Bonnie J; Bakerjian, Debra; Ballantyne, Jane C; Courtenay, Molly; Djukic, Maja; Koebner, Ian J; Mongoven, Jennifer M; Paice, Judith A; Prasad, Ravi; Singh, Naileshni; Sluka, Kathleen A; St Marie, Barbara; Strassels, Scott A

    2013-01-01

    Objective The objective of this project was to develop core competencies in pain assessment and management for prelicensure health professional education. Such core pain competencies common to all prelicensure health professionals have not been previously reported. Methods An interprofessional executive committee led a consensus-building process to develop the core competencies. An in-depth literature review was conducted followed by engagement of an interprofessional Competency Advisory Committee to critique competencies through an iterative process. A 2-day summit was held so that consensus could be reached. Results The consensus-derived competencies were categorized within four domains: multidimensional nature of pain, pain assessment and measurement, management of pain, and context of pain management. These domains address the fundamental concepts and complexity of pain; how pain is observed and assessed; collaborative approaches to treatment options; and application of competencies across the life span in the context of various settings, populations, and care team models. A set of values and guiding principles are embedded within each domain. Conclusions These competencies can serve as a foundation for developing, defining, and revising curricula and as a resource for the creation of learning activities across health professions designed to advance care that effectively responds to pain. PMID:23577878

  6. Core competencies for pain management: results of an interprofessional consensus summit.

    PubMed

    Fishman, Scott M; Young, Heather M; Lucas Arwood, Ellyn; Chou, Roger; Herr, Keela; Murinson, Beth B; Watt-Watson, Judy; Carr, Daniel B; Gordon, Debra B; Stevens, Bonnie J; Bakerjian, Debra; Ballantyne, Jane C; Courtenay, Molly; Djukic, Maja; Koebner, Ian J; Mongoven, Jennifer M; Paice, Judith A; Prasad, Ravi; Singh, Naileshni; Sluka, Kathleen A; St Marie, Barbara; Strassels, Scott A

    2013-07-01

    The objective of this project was to develop core competencies in pain assessment and management for prelicensure health professional education. Such core pain competencies common to all prelicensure health professionals have not been previously reported. An interprofessional executive committee led a consensus-building process to develop the core competencies. An in-depth literature review was conducted followed by engagement of an interprofessional Competency Advisory Committee to critique competencies through an iterative process. A 2-day summit was held so that consensus could be reached. The consensus-derived competencies were categorized within four domains: multidimensional nature of pain, pain assessment and measurement, management of pain, and context of pain management. These domains address the fundamental concepts and complexity of pain; how pain is observed and assessed; collaborative approaches to treatment options; and application of competencies across the life span in the context of various settings, populations, and care team models. A set of values and guiding principles are embedded within each domain. These competencies can serve as a foundation for developing, defining, and revising curricula and as a resource for the creation of learning activities across health professions designed to advance care that effectively responds to pain. Wiley Periodicals, Inc.

  7. Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.

    PubMed

    Zhao, Dongfang; Yang, Li

    2009-01-01

    Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step which defines neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded to a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because input data stream may be under-sampled or skewed from time to time, building connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.
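
    For orientation, a batch Isomap sketch (kNN graph, all-pair geodesics, classical MDS by double centering); the paper's contribution is the incremental update of the graph, the shortest distances, and the MDS solution, which is not shown here, and a connected neighborhood graph is assumed:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap_embed(X, n_neighbors=8, n_components=2):
    """Batch Isomap: build a kNN distance graph, take all-pair
    geodesic (shortest-path) distances, then classical MDS via
    double centering of the squared-distance matrix."""
    g = kneighbors_graph(X, n_neighbors, mode="distance")
    D = shortest_path(g, method="D", directed=False)   # assumes graph is connected
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                        # Gram matrix from geodesics
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```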

  8. Conservative supra-characteristics method for splitting the hyperbolic systems of gasdynamics for real and perfect gases

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.

    1982-01-01

    A conservative flux difference splitting is presented for the hyperbolic systems of gasdynamics. The stable, robust method is suitable for wide application in a variety of schemes, explicit or implicit, iterative or direct, for marching in either time or space. The splitting is modeled on the local quasi-one-dimensional characteristics system for multi-dimensional flow, similar to Chakravarthy's nonconservative split coefficient matrix method; but, as the result of maintaining global conservation, the method is able to capture sharp shocks correctly. The embedded characteristics formulation is cast in a primitive variable, the volumetric internal energy (rather than the pressure), that is effective for treating real as well as perfect gases. Finally, the relationship of the splitting to characteristics boundary conditions is discussed, and the associated conservative matrix formulation for a computed blown wall boundary condition is developed as an example. The theoretical development employs and extends the notion of Roe of constructing stable upwind difference formulae by sending split, simple, one-sided flux difference pieces to appropriate mesh sites. The developments are also believed to have the potential for aiding in the analysis of both existing and new conservative difference schemes.

  9. Systematic Evaluation of Non-Uniform Sampling Parameters in the Targeted Analysis of Urine Metabolites by 1H,1H 2D NMR Spectroscopy.

    PubMed

    Schlippenbach, Trixi von; Oefner, Peter J; Gronwald, Wolfram

    2018-03-09

    Non-uniform sampling (NUS) allows the accelerated acquisition of multidimensional NMR spectra. The aim of this contribution was the systematic evaluation of the impact of various quantitative NUS parameters on the accuracy and precision of 2D NMR measurements of urinary metabolites. Urine aliquots spiked with varying concentrations (15.6-500.0 µM) of tryptophan, tyrosine, glutamine, glutamic acid, lactic acid, and threonine, which can only be resolved fully by 2D NMR, were used to assess the influence of the sampling scheme, reconstruction algorithm, amount of omitted data points, and seed value on the quantitative performance of NUS in 1H,1H-TOCSY and 1H,1H-COSY45 NMR spectroscopy. Sinusoidal Poisson-gap sampling and a compressed sensing approach employing the iterative re-weighted least squares method for spectral reconstruction allowed a 50% reduction in measurement time while maintaining sufficient quantitative accuracy and precision for both types of homonuclear 2D NMR spectroscopy. Together with other advances in instrument design, such as state-of-the-art cryogenic probes, use of 2D NMR spectroscopy in large biomedical cohort studies seems feasible.
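
    A sketch of a sinusoidal Poisson-gap schedule generator in the spirit of Hyberts et al.; the density-tuning loop and the sinusoidal modulation used below are assumptions, not the published algorithm verbatim:

```python
import numpy as np

def poisson_gap_schedule(n_total, n_keep, seed=0):
    """Sinusoidal Poisson-gap sampling sketch: gaps between sampled
    increments follow a Poisson law whose mean is modulated by a
    quarter-sine so sampling stays denser early in the FID. The
    outer loop retunes the mean gap toward the requested n_keep."""
    rng = np.random.default_rng(seed)
    lam = n_total / n_keep - 1                 # initial mean gap
    points = []
    for _ in range(200):
        points, i = [], 0
        while i < n_total:
            points.append(i)
            gap = rng.poisson(lam * np.sin((i + 0.5) / n_total * np.pi / 2))
            i += 1 + gap
        if len(points) == n_keep:
            break
        lam *= len(points) / n_keep            # too many points -> widen gaps
    return np.array(points)

print(len(poisson_gap_schedule(256, 128)))     # ~50% of increments retained
```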

  10. Development of a conceptual model of cancer caregiver health literacy.

    PubMed

    Yuen, E Y N; Dodson, S; Batterham, R W; Knight, T; Chirgwin, J; Livingston, P M

    2016-03-01

    Caregivers play a vital role in caring for people diagnosed with cancer. However, little is understood about caregivers' capacity to find, understand, appraise and use information to improve health outcomes. The study aimed to develop a conceptual model that describes the elements of cancer caregiver health literacy. Six concept mapping workshops were conducted with 13 caregivers, 13 people with cancer and 11 healthcare providers/policymakers. An iterative, mixed methods approach was used to analyse and synthesise workshop data and to generate the conceptual model. Six major themes and 17 subthemes were identified from 279 statements generated by participants during concept mapping workshops. Major themes included: access to information, understanding of information, relationship with healthcare providers, relationship with the care recipient, managing challenges of caregiving and support systems. The study extends conceptualisations of health literacy by identifying factors specific to caregiving within the cancer context. The findings demonstrate that caregiver health literacy is multidimensional, includes a broad range of individual and interpersonal elements, and is influenced by broader healthcare system and community factors. These results provide guidance for the development of: caregiver health literacy measurement tools; strategies for improving health service delivery, and; interventions to improve caregiver health literacy. © 2015 John Wiley & Sons Ltd.

  11. An extended harmonic balance method based on incremental nonlinear control parameters

    NASA Astrophysics Data System (ADS)

    Khodaparast, Hamed Haddad; Madinei, Hadi; Friswell, Michael I.; Adhikari, Sondipon; Coggon, Simon; Cooper, Jonathan E.

    2017-02-01

    A new formulation for calculating the steady-state responses of multiple-degree-of-freedom (MDOF) non-linear dynamic systems due to harmonic excitation is developed. This is aimed at solving multi-dimensional nonlinear systems using linear equations. Nonlinearity is parameterised by a set of 'non-linear control parameters' such that the dynamic system is effectively linear for zero values of these parameters and nonlinearity increases with increasing values of these parameters. Two sets of linear equations which are formed from a first-order truncated Taylor series expansion are developed. The first set of linear equations provides the summation of sensitivities of linear system responses with respect to non-linear control parameters and the second set are recursive equations that use the previous responses to update the sensitivities. The obtained sensitivities of steady-state responses are then used to calculate the steady state responses of non-linear dynamic systems in an iterative process. The application and verification of the method are illustrated using a non-linear Micro-Electro-Mechanical System (MEMS) subject to a base harmonic excitation. The non-linear control parameters in these examples are the DC voltages that are applied to the electrodes of the MEMS devices.

  12. Proceedings: Sisal `93

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J.T.

    1993-10-01

    This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; Compiling technique based on dataflow analysis for functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multiple dimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
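
    A toy demonstration of the conditional-PDF idea using SciPy's generic Gaussian KDE as a stand-in; fastKDE itself chooses the kernel shape and bandwidth objectively and runs orders of magnitude faster, but the construction p(y|x) = p(x,y)/p(x) is the same:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic bivariate sample with a known linear relationship.
rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(scale=0.5, size=100_000)

joint = gaussian_kde(np.vstack([x, y]))  # estimate of p(x, y)
marg_x = gaussian_kde(x)                 # estimate of p(x)

def conditional_pdf(y_val, x_val):
    """Estimate p(y|x) = p(x, y) / p(x) at a single point."""
    return joint([x_val, y_val])[0] / marg_x(x_val)[0]

print(conditional_pdf(0.25, 0.5))        # should peak near y = 0.5 * x
```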

  14. Development and validation of the multidimensional vaginal penetration disorder questionnaire (MVPDQ) for assessment of lifelong vaginismus in a sample of Iranian women

    PubMed Central

    Molaeinezhad, Mitra; Roudsari, Robab Latifnejad; Yousefy, Alireza; Salehi, Mehrdad; Khoei, Effat Merghati

    2014-01-01

    Background: Vaginismus is considered one of the most common female psychosexual dysfunctions. Although the importance of using a multidisciplinary approach for assessment of vaginal penetration disorder is emphasized, the paucity of instruments for this purpose is clear. We designed a study to develop and investigate the psychometric properties of a multidimensional vaginal penetration disorder questionnaire (MVPDQ), thereby assisting specialists in the clinical assessment of women with lifelong vaginismus (LLV). Materials and Methods: The MVPDQ was developed using the findings from a thematic qualitative study conducted with 20 unconsummated couples in a former study, followed by an extensive literature review. Then, in a cross-sectional design, a consecutive sample of 214 women who were diagnosed with LLV based on Diagnostic and Statistical Manual of Mental Disorders (DSM)-IV-TR criteria completed the MVPDQ and additional questions regarding their demographic and sexual history. Validation measures and reliability were tested by exploratory factor analysis and Cronbach's alpha coefficient via Statistical Package for the Social Sciences (SPSS) version 16. Results: After exploratory factor analysis, the MVPDQ emerged with 72 items and 9 dimensions: catastrophic cognitions and tightening, helplessness, marital adjustment, hypervigilance, avoidance, penetration motivation, sexual information, genital incompatibility, and optimism. Subscales of the MVPDQ showed good reliability, varying between 0.70 and 0.87, and the results of test-retest were satisfactory. Conclusion: The present study shows that the MVPDQ is a valid and reliable self-report questionnaire for clinical assessment of women complaining of LLV. This instrument may assist specialists in making a clinical judgment and planning appropriately for clinical management. PMID:25097607

  15. Diagnostic accuracy of 256-row multidetector CT coronary angiography with prospective ECG-gating combined with fourth-generation iterative reconstruction algorithm in the assessment of coronary artery bypass: evaluation of dose reduction and image quality.

    PubMed

    Ippolito, Davide; Fior, Davide; Franzesi, Cammillo Talei; Riva, Luca; Casiraghi, Alessandra; Sironi, Sandro

    2017-12-01

    Effective radiation dose in coronary CT angiography (CTCA) for coronary artery bypass graft (CABG) evaluation is remarkably high because of long scan lengths. Prospective electrocardiographic gating with iterative reconstruction can reduce effective radiation dose. The aim was to evaluate the diagnostic performance of a low-kV CT angiography protocol with prospective ECG-gating and a fourth-generation iterative reconstruction (IR) algorithm in the follow-up of CABG patients, compared with a standard retrospective protocol. Seventy-four non-obese patients with known coronary disease treated with artery bypass grafting were prospectively enrolled. All the patients underwent 256-MDCT (Brilliance iCT, Philips) CTCA using a low-dose protocol (100 kV; 800 mAs; rotation time: 0.275 s) combined with prospective ECG-triggered acquisition and a fourth-generation IR technique (iDose4; Philips); the full length of each bypass graft was included in the evaluation. A control group of 42 similar patients was evaluated with a standard retrospective ECG-gated CTCA protocol (100 kV; 800 mAs). On both CT examinations, ROIs were placed to calculate the standard deviation of pixel values and intra-vessel density. Diagnostic quality was also evaluated using a 4-point quality scale. Despite the statistically significant reduction of radiation dose, evaluated with DLP (study group mean DLP: 274 mGy cm; control group mean DLP: 1224 mGy cm; P value < 0.001), no statistical differences were found between the PGA group and the RGH group regarding intra-vessel density absolute values and SNR. Qualitative analysis, performed by two radiologists in double-blind fashion, did not reveal any significant difference in the diagnostic quality of the two groups. The development of high-speed MDCT scanners combined with modern IR allows an accurate evaluation of CABG with prospective ECG-gating protocols in a single breath hold, obtaining a significant reduction in radiation dose.

  16. Validity of linear measurements of the jaws using ultralow-dose MDCT and the iterative techniques of ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig

    2016-10-01

    To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95 % confidence interval limits being within the range of ±1.15 mm. A nearly 97.5 % reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.
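
    Agreement between the reference-dose and test-protocol measurements is summarized here with Bland-Altman statistics; a minimal sketch of that computation:

```python
import numpy as np

def bland_altman(ref, test):
    """Bland-Altman agreement statistics: mean difference (bias) and
    95% limits of agreement (bias +/- 1.96 SD of the differences)
    between paired reference and test measurements."""
    diff = np.asarray(test, float) - np.asarray(ref, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```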

  17. Radiation dose reduction in soft tissue neck CT using adaptive statistical iterative reconstruction (ASIR).

    PubMed

    Vachha, Behroze; Brodoefel, Harald; Wilcox, Carol; Hackney, David B; Moonis, Gul

    2013-12-01

    To compare objective and subjective image quality in neck CT images acquired at different tube current-time products (275 mAs and 340 mAs) and reconstructed with filtered-back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR). HIPAA-compliant study with IRB approval and waiver of informed consent. 66 consecutive patients were randomly assigned to undergo contrast-enhanced neck CT at a standard tube-current-time-product (340 mAs; n = 33) or reduced tube-current-time-product (275 mAs, n = 33). Data sets were reconstructed with FBP and 2 levels (30%, 40%) of ASIR-FBP blending at 340 mAs and 275 mAs. Two neuroradiologists assessed subjective image quality in a blinded and randomized manner. Volume CT dose index (CTDIvol), dose-length-product (DLP), effective dose, and objective image noise were recorded. Signal-to-noise ratio (SNR) was computed as mean attenuation in a region of interest in the sternocleidomastoid muscle divided by image noise. Compared with FBP, ASIR resulted in a reduction of image noise at both 340 mAs and 275 mAs. Reduction of tube current from 340 mAs to 275 mAs resulted in an increase in mean objective image noise (p=0.02) and a decrease in SNR (p = 0.03) when images were reconstructed with FBP. However, when the 275 mAs images were reconstructed using ASIR, the mean objective image noise and SNR were similar to those of the standard 340 mAs CT images reconstructed with FBP (p>0.05). Subjective image noise was ranked by both raters as either average or less-than-average irrespective of the tube current and iterative reconstruction technique. Adapting ASIR into neck CT protocols reduced effective dose by 17% without compromising image quality. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. The Tunneling Method for Global Optimization in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Groenen, Patrick J. F.; Heiser, Willem J.

    1996-01-01

    A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
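
    The objective function in question is STRESS; a raw-STRESS sketch for a configuration matrix and a dissimilarity matrix (the squared-error form is assumed here; the paper also treats general Minkowski distances):

```python
import numpy as np

def stress(config, delta):
    """Raw STRESS of an MDS configuration: sum of squared differences
    between configuration distances and the dissimilarities delta.
    The tunneling step searches for a different configuration with
    this same STRESS value before descending further."""
    d = np.linalg.norm(config[:, None, :] - config[None, :, :], axis=-1)
    iu = np.triu_indices(len(config), k=1)    # each pair counted once
    return np.sum((d[iu] - np.asarray(delta)[iu]) ** 2)
```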

  19. Multidimensional Poverty and Health Status as a Predictor of Chronic Income Poverty.

    PubMed

    Callander, Emily J; Schofield, Deborah J

    2015-12-01

    Longitudinal analysis of Waves 5 to 10 of the nationally representative Household, Income and Labour Dynamics in Australia dataset was undertaken to assess whether multidimensional poverty status can predict chronic income poverty. Of those who were multidimensionally poor (low income plus poor health, or poor health and insufficient educational attainment) in 2007, and those who were in income poverty only (no other forms of disadvantage) in 2007, a greater proportion of those in multidimensional poverty continued to be in income poverty for the subsequent 5 years through to 2012. People who were multidimensionally poor in 2007 had 2.17 times the odds of being in income poverty each year through to 2012 compared with those who were in income poverty only in 2007 (95% CI: 1.23-3.83). Multidimensional poverty measures are a useful tool for policymakers to identify target populations for policies aiming to improve equity and reduce chronic disadvantage. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Multidimensional upwind hydrodynamics on unstructured meshes using graphics processing units - I. Two-dimensional uniform meshes

    NASA Astrophysics Data System (ADS)

    Paardekooper, S.-J.

    2017-08-01

    We present a new method for numerical hydrodynamics which uses a multidimensional generalization of the Roe solver and operates on an unstructured triangular mesh. The main advantage over traditional methods based on Riemann solvers, which commonly use one-dimensional flux estimates as building blocks for a multidimensional integration, is its inherently multidimensional nature, and as a consequence its ability to recognize multidimensional stationary states that are not hydrostatic. A second novelty is the focus on graphics processing units (GPUs). By tailoring the algorithms specifically to GPUs, we are able to get speedups of 100-250 compared to a desktop machine. We compare the multidimensional upwind scheme to a traditional, dimensionally split implementation of the Roe solver on several test problems, and we find that the new method significantly outperforms the Roe solver in almost all cases. This comes with increased computational costs per time-step, which makes the new method approximately a factor of 2 slower than a dimensionally split scheme acting on a structured grid.
