Center-of-Mass Tomography and Wigner Function for Multimode Photon States
NASA Astrophysics Data System (ADS)
Dudinets, Ivan V.; Man'ko, Vladimir I.
2018-06-01
Tomographic probability representation of multimode electromagnetic field states in the scheme of center-of-mass tomography is reviewed. Both the connection of the field-state Wigner function and observable Weyl symbols with the center-of-mass tomograms and the connection of the Grönewold kernel with the center-of-mass tomographic kernel, which determines the noncommutative product of the tomograms, are obtained. The dual center-of-mass tomograms of the photon states are constructed and the dual tomographic kernel is obtained. The models of other generalized center-of-mass tomographies are discussed. The example of two-mode even and odd Schrödinger cat states is presented in detail.
Representation of photon limited data in emission tomography using origin ensembles
NASA Astrophysics Data System (ADS)
Sitek, A.
2008-06-01
Representation and reconstruction of data obtained by emission tomography scanners are challenging due to high noise levels in the data. Typically, images obtained using tomographic measurements are represented using grids. In this work, we define images as sets of origins of events detected during tomographic measurements; we call these origin ensembles (OEs). A state in the ensemble is characterized by a vector of 3N parameters Y, where the parameters are the coordinates of origins of detected events in a three-dimensional space and N is the number of detected events. The 3N-dimensional probability density function (PDF) for that ensemble is derived, and we present an algorithm for OE image estimation from tomographic measurements. A displayable image (e.g. a grid-based image) is derived from the OE formulation by calculating ensemble expectations based on the PDF using the Markov chain Monte Carlo method. The approach was applied to computer-simulated 3D list-mode positron emission tomography data. The reconstruction errors for a simulated 10 000 000 event acquisition ranged from 0.1 to 34.8%, depending on object size and sampling density. The method was also applied to experimental data and the results of the OE method were consistent with those obtained by a standard maximum-likelihood approach. The method is a new approach to representation and reconstruction of data obtained by photon-limited emission tomography measurements.
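The core of the OE approach, a Markov chain over the event-origin coordinates whose ensemble expectation yields a displayable image, can be illustrated with a toy 1D sketch. The acceptance rule below is a generic Metropolis step, and the clustering-rewarding density `log_density` is an assumed simplification, not the PDF actually derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: N detected events, each constrained to a 1D "line of response"
# (here an interval [lo, hi]) along which its true origin must lie.
N = 200
lo = rng.uniform(0.0, 0.5, N)
hi = lo + rng.uniform(0.3, 0.5, N)

n_bins = 20
edges = np.linspace(0.0, 1.0, n_bins + 1)

def log_density(origins):
    """Illustrative OE-style log-density: origin configurations that cluster
    into the same bin are favoured (assumed, simplified form)."""
    counts, _ = np.histogram(origins, bins=edges)
    nz = counts[counts > 0]
    return float(np.sum(nz * np.log(nz)))

# Initialise each origin uniformly on its own interval.
origins = rng.uniform(lo, hi)
logp = log_density(origins)
expectation = np.zeros(n_bins)
n_sweeps, burn_in = 500, 100

for sweep in range(n_sweeps):
    for i in range(N):                       # one Metropolis step per event
        proposal = origins.copy()
        proposal[i] = rng.uniform(lo[i], hi[i])
        logp_new = log_density(proposal)
        if np.log(rng.uniform()) < logp_new - logp:
            origins, logp = proposal, logp_new
    if sweep >= burn_in:                     # accumulate a displayable image
        counts, _ = np.histogram(origins, bins=edges)
        expectation += counts

expectation /= (n_sweeps - burn_in)
print("ensemble-expectation image:", np.round(expectation, 1))
```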
Acoustic representation of tomographic data
NASA Astrophysics Data System (ADS)
Wampler, Cheryl; Zahrt, John D.; Hotchkiss, Robert S.; Zahrt, Rebecca; Kust, Mark
1993-04-01
Tomographic data and tomographic reconstructions are naturally periodic in the angle of rotation of the turntable and the polar angle of the coordinates in the object, respectively. Similarly, acoustic waves are periodic and have amplitude and wavelength as free parameters that can be fit to another representation. Work has been in progress for some time in bringing the acoustic senses to bear on large data sets rather than just the visual sense. We will provide several different acoustic representations of both raw data and density maps. Rather than graphical portrayal of the data and reconstructions, you will be presented with various 'tone poems.'
Stereo-tomography in triangulated models
NASA Astrophysics Data System (ADS)
Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai
2018-04-01
Stereo-tomography is a distinctive tomographic method. It is capable of estimating the scatterer position, the local dip of the scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Differing from previous work that incorporates various regularization techniques into the cost function of stereo-tomography, we think extending stereo-tomography to a triangulated model is the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of stereo-tomographic data components with respect to model components for the slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on the ray perturbation theory for interfaces. A sloth model representation is sparser than the conventional B-spline model representation. A sparser model representation leads to a smaller-scale stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving linear equations, a faster convergence rate and a lower requirement on the amount of data. Moreover, a quantitative representation of the interface strengthens the relationships among different model components, which makes the cross regularizations among these model components, such as node coordinates, scatterer coordinates and scattering angles, etc., more straightforward and easier to implement. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of stereo-tomography in the triangulated model. This provides a solid theoretical foundation for future real-data applications.
Forest representation of vessels in cone-beam computed tomographic angiography.
Chen, Zikuan; Ning, Ruola
2005-01-01
Cone-beam computed tomographic angiography (CBCTA) provides a fast three-dimensional (3D) vascular imaging modality, aiming at digitally representing the spatial vascular structure in an angiographic volume. Due to the finite coverage of the cone-beam scan, as well as the volume cropping in volumetric image processing, an angiographic volume may fail to contain a whole vascular tree, but rather consist of a multitude of vessel segments or subtrees. As such, it is convenient to represent this multitude of components as a forest. The vessel tracking issue then becomes component characterization/identification in the forest. The forest representation brings several conveniences for vessel tracking: (1) to sort and count the vessels in an angiographic volume, for example, according to spatial occupancy and skeleton pathlength; (2) to single out a vessel and perform in situ 3D measurement and 3D visualization in the support space; (3) to delineate individual vessels from the original angiographic volume; and (4) to cull the forest by getting rid of non-vessels and small vessels. A 3D skeletonization is used to generate component skeletons. For tree construction from skeletons, we suggest a pathlength-based procedure, which lifts the restrictions of unit-width skeleton and root determination. We experimentally demonstrate the forest representation of a dog's carotid arteries in a CBCTA system. In principle, the forest representation is useful for managing vessels in both 2D angiographic images and 3D angiographic volumes.
Tomographic imaging using Poissonian detector data
Aspelmeier, Timo; Ebel, Gernot; Hoeschen, Christoph
2013-10-15
An image reconstruction method for reconstructing a tomographic image (f.sub.j) of a region of investigation within an object (1), comprises the steps of providing detector data (y.sub.i) comprising Poisson random values measured at an i-th of a plurality of different positions, e.g. i=(k,l) with pixel index k on a detector device and angular index l referring to both the angular position (.alpha..sub.l) and the rotation radius (r.sub.l) of the detector device (10) relative to the object (1), providing a predetermined system matrix A.sub.ij assigning a j-th voxel of the object (1) to the i-th detector data (y.sub.i), and reconstructing the tomographic image (f.sub.j) based on the detector data (y.sub.i), said reconstructing step including a procedure of minimizing a functional F(f) depending on the detector data (y.sub.i) and the system matrix A.sub.ij and additionally including a sparse or compressive representation of the object (1) in an orthobasis T, wherein the tomographic image (f.sub.j) represents the global minimum of the functional F(f). Furthermore, an imaging method and an imaging device using the image reconstruction method are described.
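A minimal sketch of the kind of functional described above, a Poisson log-likelihood term plus a sparsity penalty on the coefficients in an orthobasis T, minimized here with plain projected gradient descent. The matrices, the identity basis and the step size are illustrative assumptions, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pix, n_meas = 16, 40
A = rng.uniform(0.0, 1.0, (n_meas, n_pix))            # system matrix A_ij
f_true = np.zeros(n_pix); f_true[5:8] = 10.0          # object sparse in the chosen basis
y = rng.poisson(A @ f_true)                            # Poisson detector data y_i

# Orthobasis T: here simply the identity (i.e. the object itself is sparse).
T = np.eye(n_pix)
lam, step = 0.5, 1e-3

def F(f):
    """Functional: negative Poisson log-likelihood + l1 sparsity term."""
    Af = A @ f + 1e-12
    return float(np.sum(Af - y * np.log(Af)) + lam * np.sum(np.abs(T @ f)))

f = np.ones(n_pix)
for it in range(5000):
    Af = A @ f + 1e-12
    grad = A.T @ (1.0 - y / Af) + lam * T.T @ np.sign(T @ f)
    f = np.maximum(f - step * grad, 0.0)               # keep the image non-negative

print("F(f) =", round(F(f), 2), " reconstruction:", np.round(f, 1))
```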
X-ray tomographic image magnification process, system and apparatus therefor
Kinney, J.H.; Bonse, U.K.; Johnson, Q.C.; Nichols, M.C.; Saroyan, R.A.; Massey, W.N.; Nusshardt, R.
1993-09-14
A computerized three-dimensional x-ray tomographic microscopy system is disclosed, comprising: (a) source means for providing a source of parallel x-ray beams, (b) staging means for staging and sequentially rotating a sample to be positioned in the path of the (c) x-ray image magnifier means positioned in the path of the beams downstream from the sample, (d) detecting means for detecting the beams after being passed through and magnified by the image magnifier means, and (e) computing means for analyzing values received from the detecting means, and converting the values into three-dimensional representations. Also disclosed is a process for magnifying an x-ray image, and apparatus therefor. 25 figures.
X-ray tomographic image magnification process, system and apparatus therefor
Kinney, John H.; Bonse, Ulrich K.; Johnson, Quintin C.; Nichols, Monte C.; Saroyan, Ralph A.; Massey, Warren N.; Nusshardt, Rudolph
1993-01-01
A computerized three-dimensional x-ray tomographic microscopy system is disclosed, comprising: a) source means for providing a source of parallel x-ray beams, b) staging means for staging and sequentially rotating a sample to be positioned in the path of the c) x-ray image magnifier means positioned in the path of the beams downstream from the sample, d) detecting means for detecting the beams after being passed through and magnified by the image magnifier means, and e) computing means for analyzing values received from the detecting means, and converting the values into three-dimensional representations. Also disclosed is a process for magnifying an x-ray image, and apparatus therefor.
Tomographic PIV: particles versus blobs
NASA Astrophysics Data System (ADS)
Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien
2014-08-01
We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built with specific samples of the system's point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance; it therefore favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular in the case of seeding densities greater than 0.06 particles per pixel and of PSFs characterized by a standard deviation larger than 0.8 pixels.
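For reference, a minimal MART update of the kind mentioned above, applied to a small synthetic weight matrix. The particle-based PSF sampling that distinguishes the proposed model from the classical blob representation is not reproduced; the matrix W, the relaxation parameter and the exponent normalization are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

n_vox, n_pix = 50, 120
W = rng.uniform(0.0, 1.0, (n_pix, n_vox))             # weights (PSF samples in the paper)
E_true = np.zeros(n_vox)
E_true[rng.choice(n_vox, 5, replace=False)] = 1.0      # a few bright "particles"
I = W @ E_true                                         # recorded pixel intensities

E = np.ones(n_vox)                                     # uniform initial volume
mu = 1.0                                               # MART relaxation parameter
for it in range(20):
    for j in range(n_pix):                             # multiplicative row-by-row update
        proj = W[j] @ E
        if I[j] > 0 and proj > 0:
            E *= (I[j] / proj) ** (mu * W[j] / W[j].max())
        elif I[j] == 0:
            E[W[j] > 0] = 0.0                          # zero pixels kill the voxels they cross

print("voxels above threshold after MART:", int(np.sum(E > 0.1)))
```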
NASA Astrophysics Data System (ADS)
Kostencka, Julianna; Kozacki, Tomasz; Hennelly, Bryan; Sheridan, John T.
2017-06-01
Holographic tomography (HT) allows noninvasive, quantitative, 3D imaging of transparent microobjects, such as living biological cells and fiber optics elements. The technique is based on acquisition of multiple scattered fields for various sample perspectives using digital holographic microscopy. Then, the captured data is processed with one of the tomographic reconstruction algorithms, which enables 3D reconstruction of refractive index distribution. In our recent works we addressed the issue of spatially variant accuracy of the HT reconstructions, which results from the insufficient model of diffraction that is applied in the widely-used tomographic reconstruction algorithms based on the Rytov approximation. In the present study, we continue investigating the spatially variant properties of HT imaging; however, we now focus on the limited spatial size of holograms as a source of this problem. Using the Wigner distribution representation and the Ewald sphere approach, we show that the limited size of the holograms results in a decreased quality of tomographic imaging in off-center regions of the HT reconstructions. This is because the finite detector extent becomes a limiting aperture that prohibits acquisition of full information about diffracted fields coming from the out-of-focus structures of a sample. The incompleteness of the data results in an effective truncation of the tomographic transfer function for the out-of-center regions of the tomographic image. In this paper, the described effect is quantitatively characterized for three types of tomographic systems: 1) the configuration with object rotation, 2) scanning of the illumination direction, and 3) the hybrid HT solution combining both previous approaches.
Robust statistical reconstruction for charged particle tomography
Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W
2013-10-08
Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and to determine a substantially maximum likelihood estimate of object volume scattering density using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence of and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.
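The expectation-maximization scheme referred to can be illustrated with the textbook ML-EM multiplicative update for a linear Poisson model. The actual patent works on muon scattering-density profiles through a multiple-scattering model, so the generic matrix A and data y below are assumptions that only show the flavour of the iteration.

```python
import numpy as np

rng = np.random.default_rng(3)

n_vox, n_meas = 30, 90
A = rng.uniform(0.0, 1.0, (n_meas, n_vox))    # forward model (assumed, generic)
x_true = rng.uniform(0.5, 2.0, n_vox)         # "scattering density" per voxel
y = rng.poisson(A @ x_true)                   # tomographic measurements

x = np.ones(n_vox)                            # ML-EM iteration
sens = A.sum(axis=0)                          # sensitivity (column sums)
for it in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sens                 # multiplicative EM update

print("relative error:", round(np.linalg.norm(x - x_true) / np.linalg.norm(x_true), 3))
```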
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Z; Terry, N; Hubbard, S S
2013-02-12
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability distribution functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
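A much-reduced sketch of the sampling step described above: permittivity values at a few pilot points are drawn from a prior, pushed through a trivial straight-ray-like forward model, and weighted by a Gaussian likelihood of the travel-time misfit. The MRE prior derivation, the quasi-Monte Carlo design, the SGSim field construction and the curved-ray solver are all replaced by stand-ins, so every quantity below is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pilot, n_rays, n_samples = 4, 12, 5000
G = rng.uniform(0.0, 1.0, (n_rays, n_pilot))           # ray-path lengths (stand-in forward model)
perm_true = np.array([5.0, 9.0, 7.0, 6.0])             # dielectric permittivity at pilot points
c0 = 0.3                                                # speed of light in vacuum, m/ns
t_obs = G @ (np.sqrt(perm_true) / c0)                   # travel times: length * sqrt(eps)/c0
t_obs += rng.normal(0.0, 1.0, n_rays)                   # measurement noise (ns)
sigma = 1.0

# Prior samples at each pilot point (in the paper these come from MRE / straight-ray results).
samples = rng.uniform(4.0, 10.0, (n_samples, n_pilot))

# Gaussian likelihood of the travel-time misfit for every prior sample.
t_pred = (np.sqrt(samples) / c0) @ G.T
logw = -0.5 * np.sum(((t_pred - t_obs) / sigma) ** 2, axis=1)
w = np.exp(logw - logw.max())
w /= w.sum()

post_mean = w @ samples                                 # posterior mean estimate
print("posterior mean permittivity:", np.round(post_mean, 2))
print("true permittivity          :", perm_true)
```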
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
Metric on the space of quantum states from relative entropy. Tomographic reconstruction
NASA Astrophysics Data System (ADS)
Man'ko, Vladimir I.; Marmo, Giuseppe; Ventriglia, Franco; Vitale, Patrizia
2017-08-01
In the framework of quantum information geometry, we derive, from quantum relative Tsallis entropy, a family of quantum metrics on the space of full rank, N level quantum states, by means of a suitably defined coordinate free differential calculus. The cases N=2, N=3 are discussed in detail and notable limits are analyzed. The radial limit procedure has been used to recover quantum metrics for lower rank states, such as pure states. By using the tomographic picture of quantum mechanics we have obtained the Fisher-Rao metric for the space of quantum tomograms and derived a reconstruction formula of the quantum metric of density states out of the tomographic one. A new inequality obtained for probabilities of three spin-1/2 projections in three perpendicular directions is proposed to be checked in experiments with superconducting circuits.
Estimating crustal heterogeneity from double-difference tomography
Got, J.-L.; Monteiller, V.; Virieux, J.; Okubo, P.
2006-01-01
Seismic velocity parameters in limited, but heterogeneous volumes can be inferred using a double-difference tomographic algorithm, but to obtain meaningful results accuracy must be maintained at every step of the computation. MONTEILLER et al. (2005) have devised a double-difference tomographic algorithm that takes full advantage of the accuracy of cross-spectral time-delays of large correlated event sets. This algorithm performs an accurate computation of theoretical travel-time delays in heterogeneous media and applies a suitable inversion scheme based on optimization theory. When applied to Kilauea Volcano, in Hawaii, the double-difference tomography approach shows significant and coherent changes to the velocity model in the well-resolved volumes beneath the Kilauea caldera and the upper east rift. In this paper, we first compare the results obtained using MONTEILLER et al.'s algorithm with those obtained using the classic travel-time tomographic approach. Then, we evaluate the effect of using data series of different accuracies, such as handpicked arrival-time differences ("picking differences"), on the results produced by double-difference tomographic algorithms. We show that picking differences have a non-Gaussian probability density function (pdf). Using a hyperbolic secant pdf instead of a Gaussian pdf allows improvement of the double-difference tomographic result when using picking difference data. We complete our study by investigating the use of spatially discontinuous time-delay data. © Birkhäuser Verlag, Basel, 2006.
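The practical difference between a Gaussian and a hyperbolic-secant error model can be sketched directly: in an iteratively reweighted least-squares setting, the sech pdf downweights large picking-difference residuals instead of letting them dominate. The weight expressions below follow from the standard negative log-likelihoods of each pdf; the scale parameter and the residual values are illustrative.

```python
import numpy as np

def gaussian_weight(r, s=1.0):
    """IRLS weight for a Gaussian pdf: constant, so large residuals
    (outliers) keep their full influence on the solution."""
    return np.ones_like(r) / s**2

def sech_weight(r, s=1.0):
    """IRLS weight for a hyperbolic-secant pdf, psi(r)/r = tanh(r/s)/(r/s)/s^2:
    close to 1/s^2 for small residuals but decaying for large ones,
    which is what makes the inversion robust to occasional bad picks."""
    x = np.where(np.abs(r) < 1e-12, 1e-12, r / s)
    return np.tanh(x) / x / s**2

residuals = np.array([0.01, 0.05, 0.1, 0.5, 2.0])   # time-delay misfits (toy values)
print("Gaussian weights:", np.round(gaussian_weight(residuals), 3))
print("sech weights    :", np.round(sech_weight(residuals), 3))
```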
The Process of Probability Problem Solving: Use of External Visual Representations
ERIC Educational Resources Information Center
Zahner, Doris; Corter, James E.
2010-01-01
We investigate the role of external inscriptions, particularly those of a spatial or visual nature, in the solution of probability word problems. We define a taxonomy of external visual representations used in probability problem solving that includes "pictures," "spatial reorganization of the given information," "outcome listings," "contingency…
Prol, Fabricio S; Camargo, Paulo O; Muella, Marcio T A H
2017-01-01
The incomplete geometrical coverage of the Global Navigation Satellite System (GNSS) makes the ionospheric tomographic system an ill-conditioned problem for ionospheric imaging. In order to detect the principal limitations of the ill-conditioned tomographic solutions, numerical simulations of the ionosphere are under constant investigation. In this paper, we show an investigation of the accuracy of the Algebraic Reconstruction Technique (ART) and Multiplicative ART (MART) for performing tomographic reconstruction of Chapman profiles using a simulated optimum scenario of GNSS signals tracked by ground-based receivers. Chapman functions were used to represent the ionospheric morphology and a set of analyses was conducted to assess ART and MART performance for estimating the Total Electron Content (TEC) and the parameters that describe the Chapman function. The results showed that MART performed better in the reconstruction of the electron density peak and ART gave a better representation for estimating TEC and the shape of the ionosphere. Since we used an optimum scenario of the GNSS signals, the analyses indicate the intrinsic problems that may occur when using ART and MART to recover valuable information for many applications in Telecommunication, Spatial Geodesy and Space Weather.
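The Chapman profile used to build such a synthetic ionosphere, and the vertical TEC obtained by integrating it along a ray, can be written in a few lines; the layer parameters below (peak density, peak height, scale height) are illustrative values, not those of the study.

```python
import numpy as np

def chapman(h_km, nm=1.0e12, hm=350.0, H=60.0):
    """Chapman electron-density profile (el/m^3).
    nm: peak density, hm: peak height (km), H: scale height (km)."""
    z = (h_km - hm) / H
    return nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# Vertical TEC: integrate the profile from 100 to 1000 km (1 TECU = 1e16 el/m^2).
h = np.linspace(100.0, 1000.0, 2000)
ne = chapman(h)
dh_m = np.diff(h * 1.0e3)
vtec = np.sum(0.5 * (ne[1:] + ne[:-1]) * dh_m) / 1.0e16   # trapezoidal integration
print(f"vertical TEC of the model layer: {vtec:.1f} TECU")
```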
Baskerville, Jerry Ray; Herrick, John
2012-02-01
This study focuses on clinically assigned prospective estimated pretest probability and pretest perception of legal risk as independent variables in the ordering of multidetector computed tomographic (MDCT) head scans. Our primary aim is to measure the association between pretest probability of a significant finding and pretest perception of legal risk. Secondarily, we measure the percentage of MDCT scans that physicians would not order if there was no legal risk. This study is a prospective, cross-sectional, descriptive analysis of patients 18 years and older for whom emergency medicine physicians ordered a head MDCT. We collected a sample of 138 patients subjected to head MDCT scans. The prevalence of a significant finding in our population was 6%, yet the pretest probability expectation of a significant finding was 33%. The legal risk presumed was even more dramatic at 54%. These data support the hypothesis that physicians presume the legal risk to be significantly higher than the risk of a significant finding. A total of 21 patients (15%; 95% confidence interval, ±5.9%) would not have been subjected to MDCT if there was no legal risk. Physicians overestimated the probability that the computed tomographic scan would yield a significant result and indicated an even greater perceived medicolegal risk if the scan was not obtained. Physician test-ordering behavior is complex, and our study queries pertinent aspects of MDCT testing. The magnification of legal risk vs the pretest probability of a significant finding is demonstrated. Physicians significantly overestimated pretest probability of a significant finding on head MDCT scans and presumed legal risk. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Beitzel, Brian D.; Staley, Richard K.; DuBois, Nelson F.
2011-01-01
Previous research has cast doubt on the efficacy of utilizing external representations as an aid to solving word problems. The present study replicates previous findings that concrete representations hinder college students' ability to solve probability word problems, and extends those findings to apply to a multimedia instructional context. Our…
NASA Astrophysics Data System (ADS)
Huh, C.; Bolch, W. E.
2003-10-01
Two classes of anatomic models currently exist for use in both radiation protection and radiation dose reconstruction: stylized mathematical models and tomographic voxel models. The former utilize 3D surface equations to represent internal organ structure and external body shape, while the latter are based on segmented CT or MR images of a single individual. While tomographic models are clearly more anthropomorphic than stylized models, a given model's characterization as being anthropometric is dependent upon the reference human to which the model is compared. In the present study, data on total body mass, standing/sitting heights and body mass index are collected and reviewed for the US population covering the time interval from 1971 to 2000. These same anthropometric parameters are then assembled for the ORNL series of stylized models, the GSF series of tomographic models (Golem, Helga, Donna, etc), the adult male Zubal tomographic model and the UF newborn tomographic model. The stylized ORNL models of the adult male and female are found to be fairly representative of present-day average US males and females, respectively, in terms of both standing and sitting heights for ages between 20 and 60-80 years. While the ORNL adult male model provides a reasonably close match to the total body mass of the average US 21-year-old male (within ~5%), present-day 40-year-old males have an average total body mass that is ~16% higher. For radiation protection purposes, the use of the larger 73.7 kg adult ORNL stylized hermaphrodite model provides a much closer representation of average present-day US females at ages ranging from 20 to 70 years. In terms of the adult tomographic models from the GSF series, only Donna (40-year-old F) closely matches her age-matched US counterpart in terms of average body mass. Regarding standing heights, the better matches to US age-correlated averages belong to Irene (32-year-old F) for the females and Golem (38-year-old M) for the males. Both Helga (27-year-old F) and Donna, however, provide good matches to average US sitting heights for adult females, while Golem and Otoko (male of unknown age) yield sitting heights that are slightly below US adult male averages. Finally, Helga is seen as the only GSF tomographic female model that yields a body mass index in line with her average US female counterpart at age 26. In terms of dose reconstruction activities, however, all current tomographic voxel models are valuable assets in attempting to cover the broad distribution of individual anthropometric parameters representative of the current US population. It is highly recommended that similar attempts to create a broad library of tomographic models be initiated in the United States and elsewhere to complement and extend the limited number of tomographic models presently available for these efforts.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
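One widely used probability-to-possibility transformation (often attributed to Dubois and Prade) assigns to each outcome the total probability of all outcomes that are not more probable than itself. The small sketch below applies that mapping to a discrete basic-event distribution; it is given purely to illustrate the kind of transformation the article discusses, not the specific hybrid framework applied there.

```python
import numpy as np

def prob_to_poss(p):
    """Probability -> possibility transformation:
    pi_i = sum of probabilities p_j with p_j <= p_i
    (ties handled with the conservative '<=' convention)."""
    p = np.asarray(p, dtype=float)
    return np.array([p[p <= pi].sum() for pi in p])

p = np.array([0.5, 0.3, 0.15, 0.05])          # discrete probabilities of basic events
pi = prob_to_poss(p)
print("probabilities :", p)
print("possibilities :", pi)                  # -> [1.0, 0.5, 0.2, 0.05]
```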
Structural Features of Algebraic Quantum Notations
ERIC Educational Resources Information Center
Gire, Elizabeth; Price, Edward
2015-01-01
The formalism of quantum mechanics includes a rich collection of representations for describing quantum systems, including functions, graphs, matrices, histograms of probabilities, and Dirac notation. The varied features of these representations affect how computations are performed. For example, identifying probabilities of measurement outcomes…
From Tomography to Material Properties of Thermal Protection Systems
NASA Technical Reports Server (NTRS)
Mansour, Nagi N.; Panerai, Francesco; Ferguson, Joseph C.; Borner, Arnaud; Barnhardt, Michael; Wright, Michael
2017-01-01
A NASA Ames Research Center (ARC) effort, under the Entry Systems Modeling (ESM) project, aims at developing micro-tomography (micro-CT) experiments and simulations for studying materials used in hypersonic entry systems. X-ray micro-tomography allows for non-destructive 3D imaging of a material's micro-structure at the sub-micron scale, providing fiber-scale representations of porous thermal protection system (TPS) materials. The technique has also allowed for in-situ experiments that can resolve response phenomena under realistic environmental conditions such as high temperature, mechanical loads, and oxidizing atmospheres. Simulation tools have been developed at the NASA Ames Research Center to determine material properties and material response from the high-fidelity tomographic representations of the porous materials, with the goal of informing macroscopic TPS response models and guiding future TPS design.
Moores, L; Kline, J; Portillo, A K; Resano, S; Vicente, A; Arrieta, P; Corres, J; Tapson, V; Yusen, R D; Jiménez, D
2016-01-01
ESSENTIALS: When the pretest probability of pulmonary embolism (PE) is high, the sensitivity of computed tomography (CT) is unclear. We investigated the sensitivity of multidetector CT among 134 patients with a high probability of PE. A normal CT alone may not safely exclude PE in patients with a high clinical pretest probability. In patients with no clear alternative diagnosis after CTPA, further testing should be strongly considered. Whether patients with a negative multidetector computed tomographic pulmonary angiography (CTPA) result and a high clinical pretest probability of pulmonary embolism (PE) should be further investigated is controversial. This was a prospective investigation of the sensitivity of multidetector CTPA among patients with a priori clinical assessment of a high probability of PE according to the Wells criteria. Among patients with a negative CTPA result, the diagnosis of PE required at least one of the following conditions: ventilation/perfusion lung scan showing a high probability of PE in a patient with no history of PE, abnormal findings on venous ultrasonography in a patient without previous deep vein thrombosis at that site, or the occurrence of venous thromboembolism (VTE) in a 3-month follow-up period after anticoagulation was withheld because of a negative multidetector CTPA result. We identified 498 patients with a priori clinical assessment of a high probability of PE and a completed CTPA study. CTPA excluded PE in 134 patients; in these patients, the pooled incidence of VTE was 5.2% (seven of 134 patients; 95% confidence interval [CI] 1.5-9.0). Five patients had VTEs that were confirmed by an additional imaging test despite a negative CTPA result (five of 48 patients; 10.4%; 95% CI 1.8-19.1), and two patients had objectively confirmed VTEs that occurred during clinical follow-up of at least 3 months (two of 86 patients; 2.3%; 95% CI 0-5.5). None of the patients had a fatal PE during follow-up. A normal multidetector CTPA result alone may not safely exclude PE in patients with a high clinical pretest probability. © 2015 International Society on Thrombosis and Haemostasis.
Electron tomography and 3D molecular simulations of platinum nanocrystals
NASA Astrophysics Data System (ADS)
Florea, Ileana; Demortière, Arnaud; Petit, Christophe; Bulou, Hervé; Hirlimann, Charles; Ersen, Ovidiu
2012-07-01
This work reports on the morphology of individual platinum nanocrystals with sizes of about 5 nm. By using the electron tomography technique that gives 3D spatial selectivity, access to quantitative information in the real space was obtained. The morphology of individual nanoparticles was characterized using HAADF-STEM tomography and it was shown to be close to a truncated octahedron. Using molecular dynamics simulations, this geometrical shape was found to be the one minimizing the nanocrystal energy. Starting from the tomographic reconstruction, 3D crystallographic representations of the studied Pt nanocrystals were obtained at the nanometer scale, allowing the quantification of the relative amount of the crystallographic facets present on the particle surface. Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr30990d
Design of an activity landscape view taking compound-based feature probabilities into account.
Zhang, Bijun; Vogt, Martin; Bajorath, Jürgen
2014-09-01
Activity landscapes (ALs) of compound data sets are rationalized as graphical representations that integrate similarity and potency relationships between active compounds. ALs enable the visualization of structure-activity relationship (SAR) information and are thus computational tools of interest for medicinal chemistry. For AL generation, similarity and potency relationships are typically evaluated in a pairwise manner and major AL features are assessed at the level of compound pairs. In this study, we add a conditional probability formalism to AL design that makes it possible to quantify the probability of individual compounds to contribute to characteristic AL features. Making this information graphically accessible in a molecular network-based AL representation is shown to further increase AL information content and helps to quickly focus on SAR-informative compound subsets. This feature probability-based AL variant extends the current spectrum of AL representations for medicinal chemistry applications.
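A toy illustration of a compound-level feature probability: for each compound, estimate the probability that a randomly chosen partner in the data set forms an "activity cliff" pair with it (high similarity, large potency difference). The similarity values, thresholds and the cliff definition below are assumptions made for the sketch, not the published conditional probability formalism.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 30
potency = rng.uniform(4.0, 9.0, n)             # pKi-like potencies
sim = rng.uniform(0.0, 1.0, (n, n))            # pairwise similarities (stand-in fingerprints)
sim = (sim + sim.T) / 2.0
np.fill_diagonal(sim, 1.0)

sim_thr, pot_thr = 0.7, 2.0                    # cliff definition (assumed thresholds)

cliff_prob = np.zeros(n)
for i in range(n):
    partners = [j for j in range(n) if j != i]
    cliffs = [j for j in partners
              if sim[i, j] >= sim_thr and abs(potency[i] - potency[j]) >= pot_thr]
    cliff_prob[i] = len(cliffs) / len(partners)    # per-compound feature probability

print("most cliff-prone compounds:", np.argsort(cliff_prob)[::-1][:5])
```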
Tomographic measurement of joint photon statistics of the twin-beam quantum state
Vasilyev; Choi; Kumar; D'Ariano
2000-03-13
We report the first measurement of the joint photon-number probability distribution for a two-mode quantum state created by a nondegenerate optical parametric amplifier. The measured distributions exhibit up to 1.9 dB of quantum correlation between the signal and idler photon numbers, whereas the marginal distributions are thermal as expected for parametric fluorescence.
Joint measurement of complementary observables in moment tomography
NASA Astrophysics Data System (ADS)
Teo, Yong Siah; Müller, Christian R.; Jeong, Hyunseok; Hradil, Zdeněk; Řeháček, Jaroslav; Sánchez-Soto, Luis L.
Wigner and Husimi quasi-distributions, owing to their functional regularity, give the two archetypal and equivalent representations of all observable-parameters in continuous-variable quantum information. Balanced homodyning (HOM) and heterodyning (HET) that correspond to their associated sampling procedures, on the other hand, fare very differently concerning their state or parameter reconstruction accuracies. We present a general theory of a now-known fact that HET can be tomographically more powerful than balanced homodyning to many interesting classes of single-mode quantum states, and discuss the treatment for two-mode sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Shirong; Davis, Michael J.; Skodje, Rex T.
2015-11-12
The sensitivity of kinetic observables is analyzed using a newly developed sum over histories representation of chemical kinetics. In the sum over histories representation, the concentrations of the chemical species are decomposed into the sum of probabilities for chemical pathways that follow molecules from reactants to products or intermediates. Unlike static flux methods for reaction path analysis, the sum over histories approach includes the explicit time dependence of the pathway probabilities. Using the sum over histories representation, the sensitivity of an observable with respect to a kinetic parameter such as a rate coefficient is then analyzed in terms of how that parameter affects the chemical pathway probabilities. The method is illustrated for species concentration target functions in H2 combustion where the rate coefficients are allowed to vary over their associated uncertainty ranges. It is found that large sensitivities are often associated with rate-limiting steps along important chemical pathways or with reactions that control the branching of reactive flux.
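For a first-order chain A → B → C the time-dependent pathway probabilities have closed forms, which illustrates what the sum-over-histories decomposition tracks: at time t a molecule that started in A has either stayed in A, followed the single-step history A→B and stopped there, or completed the history A→B→C. The rate coefficients are arbitrary illustrative values, not those of the H2 mechanism.

```python
import numpy as np

k1, k2 = 2.0, 1.0                     # rate coefficients for A->B and B->C (illustrative)
t = np.linspace(0.0, 5.0, 6)

# Probability of each history for a molecule initially in A (requires k1 != k2):
p_A   = np.exp(-k1 * t)                                        # never reacted
p_AB  = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))   # reached B, still in B
p_ABC = 1.0 - p_A - p_AB                                       # completed A->B->C

for ti, a, ab, abc in zip(t, p_A, p_AB, p_ABC):
    # The three pathway probabilities sum to 1 and reproduce the species
    # concentrations [A], [B], [C] of the kinetic model at each time.
    print(f"t={ti:3.1f}  P(A)={a:.3f}  P(A->B)={ab:.3f}  P(A->B->C)={abc:.3f}")
```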
NASA Astrophysics Data System (ADS)
Postpischl, L.; Morelli, A.; Danecek, P.
2009-04-01
Formats used to represent (and distribute) tomographic earth models differ considerably and are rarely self-consistent. In fact, each earth scientist, or research group, uses specific conventions to encode the various parameterizations used to describe, e.g., seismic wave speed or density in three dimensions, and complete information is often found in related documents or publications (if available at all) only. As a consequence, use of various tomographic models from different authors requires considerable effort, is more cumbersome than it should be and prevents widespread exchange and circulation within the community. We propose a format, based on modern web standards, able to represent different (grid-based) model parameterizations within the same simple text-based environment, easy to write, to parse, and to visualise. The aim is the creation of self-describing data-structures, both human and machine readable, that are automatically recognised by general-purpose software agents, and easily imported in the scientific programming environment. We think that the adoption of such a representation as a standard for the exchange and distribution of earth models can greatly ease their usage and enhance their circulation, both among fellow seismologists and among a broader non-specialist community. The proposed solution uses semantic web technologies, fully fitting the current trends in data accessibility. It is based on JSON (JavaScript Object Notation), a plain-text, human-readable lightweight computer data interchange format, which adopts a hierarchical name-value model for representing simple data structures and associative arrays (called objects). Our implementation allows integration of large datasets with metadata (authors, affiliations, bibliographic references, units of measure etc.) into a single resource. It is equally suited to represent other geo-referenced volumetric quantities — beyond tomographic models — as well as (structured and unstructured) computational meshes. This approach can exploit the capabilities of the web browser as a computing platform: a series of in-page quick tools for comparative analysis between models will be presented, as well as visualisation techniques for tomographic layers in Google Maps and Google Earth. We are working on tools for conversion into common scientific format like netCDF, to allow easy visualisation in GEON-IDV or gmt.
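A minimal sketch of what such a self-describing, grid-based model record could look like, written here as a Python dictionary and serialized with the standard json module. The key names (metadata, grid, values, etc.) are invented for illustration and are not the schema actually proposed by the authors.

```python
import json

# Hypothetical self-describing tomographic-model record (key names are
# illustrative only; the authors' actual schema may differ).
model = {
    "metadata": {
        "name": "toy_mantle_model",
        "authors": ["A. Seismologist"],
        "reference": "doi:10.0000/example",
        "parameter": "dVs/Vs",
        "units": "percent",
    },
    "grid": {
        "type": "regular_lonlat_depth",
        "longitudes_deg": [0.0, 10.0, 20.0],
        "latitudes_deg": [0.0, 10.0],
        "depths_km": [100.0, 200.0],
    },
    # values[i_depth][i_lat][i_lon], layer by layer
    "values": [
        [[0.5, 0.4, 0.1], [0.2, -0.1, -0.3]],
        [[0.8, 0.6, 0.2], [0.1, 0.0, -0.2]],
    ],
}

text = json.dumps(model, indent=2)      # plain-text, human- and machine-readable
parsed = json.loads(text)               # any JSON-aware agent can re-import it
print(parsed["metadata"]["parameter"], parsed["grid"]["depths_km"])
```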
New Possibilities of Positron-Emission Tomography
NASA Astrophysics Data System (ADS)
Volobuev, A. N.
2018-01-01
The reasons for the emergence of the angular distribution of photons generated as a result of annihilation of an electron and a positron in a positron-emission tomograph are investigated. It is shown that the angular distribution of the radiation intensity (i.e., the probability of photon emission at different angles) is a consequence of the Doppler effect in the center-of-mass reference system of the electron and the positron. In the reference frame attached to the electron, the angular distribution of the number of emitted photons does not exist but is replaced by the Doppler shift of the frequency of photons. The results obtained in this study make it possible to extend the potentialities of the positron-emission tomograph in the diagnostics of diseases and to obtain additional mechanical characteristics of human tissues, such as density and viscosity.
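The size of the effect can be estimated with the usual non-collinearity relation: in the laboratory frame the two annihilation photons deviate from back-to-back emission by roughly Δθ ≈ p_t / (m_e c), where p_t is the transverse momentum of the annihilating pair. The numbers below are a rough back-of-the-envelope illustration under that small-angle assumption, not values taken from the article.

```python
import numpy as np

M_E_C = 511.0                  # electron rest momentum scale m_e*c in keV/c

def deviation_deg(p_t_kev):
    """Approximate deviation of the two annihilation photons from
    back-to-back emission, delta_theta ~ p_t / (m_e c) (small angle)."""
    return np.degrees(p_t_kev / M_E_C)

for p_t in (1.0, 3.0, 5.0):    # transverse pair momentum in keV/c (illustrative)
    print(f"p_t = {p_t:.0f} keV/c  ->  deviation ~ {deviation_deg(p_t):.2f} deg")
```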
Hogg, Melanie M.; Courtney, D. Mark; Miller, Chadwick D.; Jones, Alan E.; Smithline, Howard A
2012-01-01
Background: Increasing the threshold to define a positive D-dimer could reduce unnecessary computed tomographic pulmonary angiography (CTPA) for suspected PE but might increase rates of missed PE and missed pneumonia, the most common nonthromboembolic diagnosis seen on CTPA. Objective: Measure the effect of doubling the standard D-dimer threshold for "PE unlikely" Revised Geneva (RGS) or Wells' scores on the exclusion rate, frequency and size of missed PE and missed pneumonia. Methods: Patients evaluated for suspected PE with 64-channel CTPA were prospectively enrolled from EDs and inpatient units of four hospitals. Pretest probability data were collected in real time and the D-dimer was measured in a central laboratory. The criterion standard was CTPA interpretation by two independent radiologists combined with clinical outcome at 30 days. Results: Of 678 patients enrolled, 126 (19%) were PE+ and 93 (14%) had pneumonia. Use of either Wells ≤4 or RGS ≤6 produced similar results. For example, with RGS ≤6 and the standard threshold (<500 ng/mL), D-dimer was negative in 110/678 (16%), 4/110 were PE+ (posterior probability 3.8%), and 9/110 (8.2%) had pneumonia. With RGS ≤6 and a threshold <1000 ng/mL, D-dimer was negative in 208/678 (31%) and 11/208 (5.3%) were PE+, but 10/11 missed PEs were subsegmental, and none had concomitant DVT. Pneumonia was found in 12/208 (5.4%) with RGS ≤6 and D-dimer <1000 ng/mL. Conclusions: Doubling the threshold for a positive D-dimer with a PE-unlikely pretest probability could reduce CTPA scanning with a slightly increased risk of missed isolated subsegmental PE, and no increase in the rate of missed pneumonia. PMID:22284935
Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation
Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467
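A compact, self-contained sketch of sparse-representation denoising in the SBSDI spirit: a noisy signal is approximated by a few atoms of a fixed dictionary via orthogonal matching pursuit. The DCT dictionary and the hand-picked sparsity level are assumptions for the sketch; the actual SBSDI method learns its dictionaries from previously collected clinical datasets and also interpolates missing samples.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 64
# Dictionary: orthonormal DCT-II atoms (columns), a stand-in for a learned dictionary.
k = np.arange(n)
D = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
D /= np.linalg.norm(D, axis=0)

clean = 2.0 * D[:, 3] + 1.0 * D[:, 10]          # signal sparse in the dictionary
noisy = clean + rng.normal(0.0, 0.3, n)         # additive noise (toy stand-in for speckle)

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: greedily pick atoms, refit by least squares."""
    residual, support = y.copy(), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

denoised = D @ omp(D, noisy, n_atoms=2)
print("noise  RMS:", round(np.sqrt(np.mean((noisy - clean) ** 2)), 3))
print("output RMS:", round(np.sqrt(np.mean((denoised - clean) ** 2)), 3))
```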
Tomographic iterative reconstruction of a passive scalar in a 3D turbulent flow
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Kylling, Arve; Cassiani, Massimo; Solveig Dinger, Anne; Stebel, Kerstin; Schmidbauer, Norbert; Stohl, Andreas
2017-04-01
Turbulence in stable planetary boundary layers often encountered in high latitudes influences the exchange fluxes of heat, momentum, water vapor and greenhouse gases between the Earth's surface and the atmosphere. In climate and meteorological models, such effects of turbulence need to be parameterized, ultimately based on experimental data. A novel experimental approach is being developed within the COMTESSA project in order to study turbulence statistics at high resolution. Using controlled tracer releases, high-resolution camera images and estimates of the background radiation, different tomographic algorithms can be applied in order to obtain time series of 3D representations of the scalar dispersion. In this preliminary work, using synthetic data, we investigate different reconstruction algorithms with emphasis on algebraic methods. We study the dependence of the reconstruction quality on the discretization resolution and the geometry of the experimental device in both the 2D and 3D cases. We assess the computational aspects of the iterative algorithms, focusing on the phenomenon of semi-convergence and applying a variety of stopping rules. We discuss different strategies for error reduction and regularization of the ill-posed problem.
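Semi-convergence and the use of a stopping rule can be demonstrated on a small noisy system: a Kaczmarz-type ART iteration keeps reducing the data misfit, but the error with respect to the true scalar field eventually grows again once the iteration starts fitting noise, and the discrepancy principle stops it near that point. All dimensions, noise levels and the relaxation choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

n_vox, n_rays, sigma = 40, 80, 0.05
A = rng.uniform(0.0, 1.0, (n_rays, n_vox))                # path-length matrix (toy)
x_true = np.clip(rng.normal(1.0, 0.3, n_vox), 0.0, None)  # passive-scalar field
b = A @ x_true + rng.normal(0.0, sigma, n_rays)           # noisy integrated "images"

tau = 1.02
target = tau * sigma * np.sqrt(n_rays)                     # discrepancy-principle threshold

x = np.zeros(n_vox)
for sweep in range(200):
    for i in range(n_rays):                                # one Kaczmarz/ART sweep
        ai = A[i]
        x += (b[i] - ai @ x) / (ai @ ai) * ai
        x = np.maximum(x, 0.0)                             # non-negativity of the scalar
    misfit = np.linalg.norm(A @ x - b)
    err = np.linalg.norm(x - x_true)
    if misfit <= target:                                   # stop before fitting the noise
        print(f"stopped after sweep {sweep}: misfit={misfit:.3f}, error={err:.3f}")
        break
else:
    print("discrepancy level not reached within 200 sweeps")
```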
Deep brain stimulation abolishes slowing of reactions to unlikely stimuli.
Antoniades, Chrystalina A; Bogacz, Rafal; Kennard, Christopher; FitzGerald, James J; Aziz, Tipu; Green, Alexander L
2014-08-13
The cortico-basal-ganglia circuit plays a critical role in decision making on the basis of probabilistic information. Computational models have suggested how this circuit could compute the probabilities of actions being appropriate according to Bayes' theorem. These models predict that the subthalamic nucleus (STN) provides feedback that normalizes the neural representation of probabilities, such that if the probability of one action increases, the probabilities of all other available actions decrease. Here we report the results of an experiment testing a prediction of this theory that disrupting information processing in the STN with deep brain stimulation should abolish the normalization of the neural representation of probabilities. In our experiment, we asked patients with Parkinson's disease to saccade to a target that could appear in one of two locations, and the probability of the target appearing in each location was periodically changed. When the stimulator was switched off, the target probability affected the reaction times (RT) of patients in a similar way to healthy participants. Specifically, the RTs were shorter for more probable targets and, importantly, they were longer for the unlikely targets. When the stimulator was switched on, the patients were still faster for more probable targets, but critically they did not increase RTs as the target was becoming less likely. This pattern of results is consistent with the prediction of the model that the patients on DBS no longer normalized their neural representation of prior probabilities. We discuss alternative explanations for the data in the context of other published results. Copyright © 2014 the authors 0270-6474/14/3410844-09$15.00/0.
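The normalization that the model attributes to the STN is essentially Bayes' rule applied across the available actions: raising the probability of one target necessarily lowers the others. The tiny sketch below computes normalized probabilities for a two-target saccade task and converts them into notional reaction times through an assumed, hypothetical log-probability mapping; neither the mapping nor the numbers come from the study.

```python
import numpy as np

def normalized_probability(prior, likelihood):
    """Bayes' rule across the available actions: the normalization means that
    increasing one action's probability decreases all the others."""
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

def reaction_time_ms(p, base=250.0, gain=80.0):
    """Assumed mapping: RT shortens with the log-probability of the target."""
    return base - gain * np.log(p)

prior = np.array([0.8, 0.2])            # target appears left (likely) or right (unlikely)
likelihood = np.array([1.0, 1.0])       # before stimulus evidence arrives

p = normalized_probability(prior, likelihood)
print("prior-driven RTs (ms):", np.round(reaction_time_ms(p), 1))
# With the stimulator off, the long RT for the unlikely target reflects its low
# normalized probability; the model predicts that DBS disrupts this normalization.
```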
The Cognitive Substrate of Subjective Probability
ERIC Educational Resources Information Center
Nilsson, Hakan; Olsson, Henrik; Juslin, Peter
2005-01-01
The prominent cognitive theories of probability judgment were primarily developed to explain cognitive biases rather than to account for the cognitive processes in probability judgment. In this article the authors compare 3 major theories of the processes and representations in probability judgment: the representativeness heuristic, implemented as…
Radar Imaging Using The Wigner-Ville Distribution
NASA Astrophysics Data System (ADS)
Boashash, Boualem; Kenny, Owen P.; Whitehouse, Harper J.
1989-12-01
The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. This paper first discusses the radar equation in terms of the time-frequency representation of the signal received from a radar system. It then presents a method of tomographic reconstruction for time-frequency images to estimate the scattering function of the aircraft. An optical architecture is then discussed for the real-time implementation of the analysis method based on the WVD.
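A compact discrete Wigner-Ville distribution of an analytic signal, of the kind used to form time-frequency images of chirp-like radar returns. The chirp parameters are arbitrary, and this is only a numerical sketch of the distribution itself, not of the tomographic imaging scheme or the optical architecture.

```python
import numpy as np
from scipy.signal import hilbert

fs, n = 256, 256
t = np.arange(n) / fs
x = np.cos(2 * np.pi * (20 * t + 40 * t ** 2))     # linear chirp, ~20 -> 100 Hz (illustrative)
z = hilbert(x)                                      # analytic signal suppresses negative-frequency terms

def wigner_ville(z):
    """Discrete Wigner-Ville distribution.
    Rows: time samples; columns: frequency bins spanning 0 .. fs/2."""
    n = len(z)
    W = np.zeros((n, n))
    for ti in range(n):
        taumax = min(ti, n - 1 - ti, n // 2 - 1)
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = z[ti + tau] * np.conj(z[ti - tau])   # instantaneous autocorrelation
        W[ti] = np.real(np.fft.fft(kernel))
    return W

W = wigner_ville(z)
ridge_bins = W.argmax(axis=1)[32::32]               # ridge of the distribution (skip edges)
print("instantaneous-frequency ridge (Hz):", ridge_bins * fs / (2 * len(z)))
```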
Uncovering Mental Representations with Markov Chain Monte Carlo
ERIC Educational Resources Information Center
Sanborn, Adam N.; Griffiths, Thomas L.; Shiffrin, Richard M.
2010-01-01
A key challenge for cognitive psychology is the investigation of mental representations, such as object categories, subjective probabilities, choice utilities, and memory traces. In many cases, these representations can be expressed as a non-negative function defined over a set of objects. We present a behavioral method for estimating these…
Representation of complex probabilities and complex Gibbs sampling
NASA Astrophysics Data System (ADS)
Salcedo, Lorenzo Luis
2018-03-01
Complex weights appear in Physics which are beyond a straightforward importance sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold reproducing the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.
NASA Technical Reports Server (NTRS)
Mcdade, Ian C.
1991-01-01
Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for solving the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+ 3914 A rocket photometer measurements which were made in a tomographic spin scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914 A volume emission rates recovered from the inversion of the rocket data compare very well with the distributions that were inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements that were made during the flight. Three pre-prints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
Effects of long-term representations on free recall of unrelated words
Katkov, Mikhail; Romani, Sandro
2015-01-01
Human memory stores vast amounts of information. Yet recalling this information is often challenging when specific cues are lacking. Here we consider an associative model of retrieval where each recalled item triggers the recall of the next item based on the similarity between their long-term neuronal representations. The model predicts that different items stored in memory have different probabilities of being recalled, depending on the size of their representation. Moreover, items with high recall probability tend to be recalled earlier and suppress other items. We performed an analysis of a large data set on free recall and found a highly specific pattern of statistical dependencies predicted by the model, in particular negative correlations between the number of words recalled and their average recall probability. Taken together, experimental and modeling results presented here reveal complex interactions between memory items during recall that severely constrain recall capacity. PMID:25593296
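A toy simulation of the associative retrieval rule described above: each word has a "representation size" drawn at random, recall jumps from the current item to the item with the largest overlap, and recall terminates when the next jump returns to an already-recalled item. The specific overlap structure (size-weighted random similarities) is an assumed simplification used only to reproduce the qualitative predictions, namely limited recall and a bias toward items with larger representations.

```python
import numpy as np

rng = np.random.default_rng(8)

n_words, n_trials = 16, 1000
n_recalled, size_recalled, size_missed = [], [], []

for trial in range(n_trials):
    size = rng.lognormal(0.0, 0.5, n_words)            # "representation size" per word
    # Symmetric overlap: larger representations overlap more (assumed form), plus noise.
    noise = rng.uniform(0.5, 1.0, (n_words, n_words))
    overlap = np.sqrt(np.outer(size, size)) * (noise + noise.T) / 2.0
    np.fill_diagonal(overlap, -np.inf)

    current = int(np.argmax(size))                      # recall starts from the largest item
    recalled = {current}
    while True:
        nxt = int(np.argmax(overlap[current]))          # jump to the most similar item
        if nxt in recalled:                             # entering a cycle terminates recall
            break
        recalled.add(nxt)
        current = nxt

    n_recalled.append(len(recalled))
    size_recalled.extend(size[list(recalled)])
    size_missed.extend(size[[i for i in range(n_words) if i not in recalled]])

print(f"mean words recalled out of {n_words}: {np.mean(n_recalled):.2f}")
print(f"mean size of recalled words: {np.mean(size_recalled):.2f}  "
      f"missed words: {np.mean(size_missed):.2f}")
```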
Lederer, Kristina; Ludewig, Eberhard; Hechinger, Harald; Parry, Andrew T; Lamb, Christopher R; Kneissl, Sibylle
2015-07-01
To identify computed tomographic (CT) signs that could be used to differentiate inflammatory from neoplastic orbital conditions in small animals. Fifty-two animals (25 cats, 21 dogs, 4 rabbits, and 2 rodents). Case-control study in which CT images of animals with a histopathologic diagnosis of inflammatory (n = 11) or neoplastic (n = 31) orbital conditions, or of normal control animals (n = 10), were reviewed independently by five observers without knowledge of the history or diagnosis. Observers recorded their observations regarding specific anatomical structures within the orbit using an itemized form containing the following characteristics: definitely normal; probably normal; equivocal; probably abnormal; and definitely abnormal. Results were statistically analyzed using Fleiss' kappa and logistic regression analyses. The overall level of agreement between observers about the presence or absence of abnormal CT signs in animals with orbital disease was poor to moderate, but was highest for observations concerning orbital bones (κ = 0.62) and involvement of the posterior segment (κ = 0.52). Significant associations between abnormalities and diagnosis were found for four structures: abnormalities affecting orbital bones (odds ratio [OR], 1.7) and anterior ocular structures (OR, 1.5) were predictive of neoplasia, while abnormalities affecting extraconal fat (OR, 1.7) and skin (OR, 1.4) were predictive of inflammatory conditions. Orbital CT is an imaging test with high specificity. Fat stranding, a CT sign not previously emphasized in veterinary medicine, was significantly associated with inflammatory conditions. Low observer agreement probably reflects the limited resolution of CT for small orbital structures. © 2014 American College of Veterinary Ophthalmologists.
The possible social representations of astronomy by students from integrated high school
NASA Astrophysics Data System (ADS)
Barbosa, J. I. L.; Voelzke, M. R.
2017-12-01
In this paper, we present the possible Social Representations that students of the Integrated High School of the Federal Institute of Alagoas (IFAL) hold about the inducer term Astronomy, and we identify how these representations were probably elaborated. For this purpose, the Theory of Social Representations, following Moscovici (2010), is used.
NASA Astrophysics Data System (ADS)
Di Pietra, V.; Donadio, E.; Picchi, D.; Sambuelli, L.; Spanò, A.
2017-02-01
The paper presents the workflow and results of an ultrasonic 3D investigation and a 3D survey aimed at assessing the internal integrity of an ancient sculpture. The work highlights the capability of methods devoted to the 3D geometry acquisition of small objects when applied to diagnosis performed by geophysical investigation. In particular, two methods widely applied to the modelling of small objects are considered and compared: digital photogrammetry with the Structure from Motion (SFM) technique and hand-held 3D scanners. The study also aims to enhance the final graphical representation of the tomographic results and to subject the obtained results to a quantitative analysis. The survey is applied to the Egyptian naophorous statue of Amenmes and Reshpu, which dates to the reign of Ramses II (1279-1213 BC) or later and is now preserved in the Civic Archaeological Museum in Bologna. In order to evaluate the internal persistence of fractures and visible damage, a 3D Ultrasonic Tomographic Imaging (UTI) test was performed and a multi-sensor survey (image and range based) was conducted, so that the locations of the source and receiver points could be evaluated as accurately as possible. The presented test allowed evaluation of the material characteristics, porosity and degradation state of the statue, which particularly affect its lower part. More generally, the project demonstrated how solutions coming from the field of 3D modelling of Cultural Heritage allow the application of 3D ultrasonic tomography to objects with complex shapes, in addition to improving the representation of the obtained results.
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
The Efficacy of Using Diagrams When Solving Probability Word Problems in College
ERIC Educational Resources Information Center
Beitzel, Brian D.; Staley, Richard K.
2015-01-01
Previous experiments have shown a deleterious effect of visual representations on college students' ability to solve total- and joint-probability word problems. The present experiments used conditional-probability problems, known to be more difficult than total- and joint-probability problems. The diagram group was instructed in how to use tree…
Probability Issues in without Replacement Sampling
ERIC Educational Resources Information Center
Joarder, A. H.; Al-Sabah, W. S.
2007-01-01
Sampling without replacement is an important aspect in teaching conditional probabilities in elementary statistics courses. Different methods proposed in different texts for calculating probabilities of events in this context are reviewed and their relative merits and limitations in applications are pinpointed. An alternative representation of…
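As a small illustration of the kind of calculation at issue, two equivalent representations of a without-replacement probability are sketched below in Python; the urn composition is illustrative, not taken from the article.

```python
from math import comb

# P(both draws are red) when sampling 2 balls without replacement
# from an urn containing 5 red and 3 blue balls (illustrative numbers)
r, b = 5, 3

# sequential (conditional-probability) representation
p_sequential = (r / (r + b)) * ((r - 1) / (r + b - 1))

# counting (hypergeometric) representation
p_counting = comb(r, 2) * comb(b, 0) / comb(r + b, 2)

print(p_sequential, p_counting)   # both equal 5/14, about 0.357
```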
Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm
NASA Astrophysics Data System (ADS)
Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.
2003-10-01
We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of the model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.
Representation of Odds in Terms of Frequencies Reduces Probability Discounting
ERIC Educational Resources Information Center
Yi, Richard; Bickel, Warren K.
2005-01-01
In studies of probability discounting, the reduction in the value of an outcome as a result of its degree of uncertainty is calculated. Decision making studies suggest two issues with probability that may play a role in data obtained in probability discounting studies. The first issue involves the reduction of risk aversion via subdivision of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
Pancreatic changes in cystic fibrosis: CT and sonographic appearances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daneman, A.; Gaskin, K.; Martin, D.J.
1983-10-01
The computed tomographic (CT) and sonographic appearances of the late stages of pancreatic damage in three patients with cystic fibrosis are illustrated. All three had severe exocrine pancreatic insufficiency with steatorrhea. In two patients CT revealed complete fatty replacement of the entire pancreas. In the third, increased echogenicity of the pancreas on sonography and the inhomogeneous attenuation on CT were interpreted as being the result of a combination of fibrosis, fatty replacement, calcification, and probable cyst formation.
Interference in the classical probabilistic model and its representation in complex Hilbert space
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei Yu.
2005-10-01
The notion of a context (complex of physical conditions, that is to say: specification of the measurement setup) is basic in this paper. We show that the main structures of quantum theory (interference of probabilities, Born's rule, complex probabilistic amplitudes, Hilbert state space, representation of observables by operators) are present already in a latent form in the classical Kolmogorov probability model. However, this model should be considered as a calculus of contextual probabilities. In our approach it is forbidden to consider abstract context-independent probabilities: "first context and only then probability". We construct the representation of the general contextual probabilistic dynamics in the complex Hilbert space. Thus the dynamics of the wave function (in particular, Schrödinger's dynamics) can be considered as Hilbert space projections of a realistic dynamics in a "prespace". The basic condition for representing the prespace dynamics is the law of statistical conservation of energy, i.e., conservation of probabilities. In general the Hilbert space projection of the "prespace" dynamics can be nonlinear and even irreversible (but it is always unitary). Methods developed in this paper can be applied not only to quantum mechanics, but also to classical statistical mechanics. The main quantum-like structures (e.g., interference of probabilities) might be found in some models of classical statistical mechanics. Quantum-like probabilistic behavior can be demonstrated by biological systems. In particular, it was recently found in some psychological experiments.
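For orientation, a standard textbook form of the contextual "interference of probabilities" referred to above is sketched below; the notation is ours and not necessarily that of the paper.

```latex
% Contextual (quantum-like) modification of the law of total probability:
% the classical sum acquires an interference term with a context-dependent
% phase theta.
\[
  P(b \mid C) \;=\; P(a_1)\,P(b \mid a_1) \;+\; P(a_2)\,P(b \mid a_2)
  \;+\; 2\cos\theta\,\sqrt{P(a_1)\,P(b \mid a_1)\,P(a_2)\,P(b \mid a_2)} .
\]
% For cos(theta) = 0 the classical law of total probability is recovered;
% writing each term as a squared complex amplitude reproduces Born's rule
% in a two-dimensional complex Hilbert space.
```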
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
Intensity-enhanced MART for tomographic PIV
NASA Astrophysics Data System (ADS)
Wang, HongPing; Gao, Qi; Wei, RunJie; Wang, JinJun
2016-05-01
A novel technique to shrink the elongated particles and suppress the ghost particles in particle reconstruction for tomographic particle image velocimetry is presented. This method, named intensity-enhanced multiplicative algebraic reconstruction technique (IntE-MART), utilizes an inverse diffusion function and an intensity suppressing factor to improve the quality of particle reconstruction and consequently the precision of the velocimetry. A numerical assessment of vortex ring motion with and without image noise is performed to evaluate the new algorithm in terms of reconstruction, particle elongation and velocimetry. The simulation is performed at seven different seeding densities. The comparison of spatial filter MART and IntE-MART on the probability density function of particle peak intensity suggests that one of the local minima of the distribution can be used to separate the ghosts from actual particles. Thus, ghost removal based on IntE-MART is also introduced. To verify the application of IntE-MART, a real flat-plate turbulent boundary layer experiment is performed. The result indicates that ghost reduction can increase the accuracy of the RMS of the velocity field.
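For reference, a minimal Python sketch of the plain multiplicative ART (MART) update that IntE-MART builds on; the weighting matrix is a generic stand-in, and the paper's inverse-diffusion and intensity-suppression steps are not reproduced here.

```python
import numpy as np

def mart(W, I, n_iter=5, mu=1.0):
    """Plain MART: voxel intensities E are updated multiplicatively so that
    their projections W @ E approach the recorded pixel intensities I.
    W[i, j] is the weight of voxel j along line of sight i."""
    E = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            proj = W[i] @ E
            if proj <= 0:
                continue
            # the exponent mu * W[i, j] damps corrections for voxels that
            # contribute weakly to this line of sight
            E *= (I[i] / proj) ** (mu * W[i])
    return E
```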
3D reconstruction of the magnetic vector potential using model based iterative reconstruction.
Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc
2017-11-01
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...
2017-07-03
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstruction, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
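As a rough illustration of the MAP formulation summarized in these abstracts, the sketch below minimizes a quadratic data-fit term plus a quadratic smoothness prior by plain gradient descent; the operators, prior and step size are illustrative stand-ins, not the paper's forward model or optimizer.

```python
import numpy as np

def mbir_map(A, y, beta=0.1, sigma2=1.0, n_iter=500, step=1e-3):
    """Toy MAP estimate: minimize
        0.5 / sigma2 * ||y - A x||^2  +  beta * ||D x||^2,
    where D penalizes differences between neighbouring unknowns
    (a simple quadratic prior standing in for the paper's prior model)."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)          # first-difference operator
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = (A.T @ (A @ x - y)) / sigma2 + 2.0 * beta * (D.T @ (D @ x))
        x -= step * grad                    # fixed step size, illustrative only
    return x
```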
The Existence of Smooth Densities for the Prediction, Filtering and Smoothing Problems
1990-12-20
128-139. [14] With D. Colwell and P.E. Kopp, Martingale representation and hedging policies, Stochastic Processes and Applications (accepted). Snippet from "Martingale Representation and Hedging Policies" by David B. Colwell, Robert J. Elliott and P. Ekkehard Kopp (Department of Statistics and Applied Probability): "... is determined by elementary methods in the Markov situation. Applications to hedging portfolios in finance are described."
Representation of analysis results involving aleatory and epistemic uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis
2008-08-01
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
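A minimal sketch of the nested sampling that produces such a family of CDFs, assuming a toy response function and illustrative input distributions; the outer loop samples the epistemically uncertain quantity, the inner loop samples the aleatory one.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(x, theta):
    # stand-in system model; the real analysis model would go here
    return theta * x + x ** 2

n_epistemic, n_aleatory = 50, 1000
family = []
for _ in range(n_epistemic):
    theta = rng.uniform(0.5, 2.0)           # epistemic: fixed but poorly known
    x = rng.normal(0.0, 1.0, n_aleatory)    # aleatory: inherent randomness
    family.append(np.sort(response(x, theta)))   # support of one empirical CDF

# pointwise envelope of the family of CDFs over a common grid
grid = np.linspace(-2.0, 8.0, 200)
cdfs = np.array([np.searchsorted(s, grid) / n_aleatory for s in family])
lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)
```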
Analysis of tomographic mineralogical data using YaDiV—Overview and practical case study
NASA Astrophysics Data System (ADS)
Friese, Karl-Ingo; Cichy, Sarah B.; Wolter, Franz-Erich; Botcharnikov, Roman E.
2013-07-01
We introduce the 3D-segmentation and -visualization software YaDiV to the mineralogical application of rock texture analysis. YaDiV was originally designed to process medical DICOM datasets. But due to software advancements and additional plugins, this open-source software can now be easily used for the fast quantitative morphological characterization of geological objects from tomographic datasets. In this paper, we give a summary of YaDiV's features and demonstrate the advantages of 3D-stereographic visualization and the accuracy of 3D-segmentation for the analysis of geological samples. For this purpose, we present a virtual and a real use case (here: experimentally crystallized and vesiculated magmatic rocks, corresponding to the composition of the 1991-1995 Unzen eruption, Japan). Especially the spatial representation of structures in YaDiV allows an immediate, intuitive understanding of the 3D-structures, which may not become clear by looking only at 2D-images. We compare our results of object number density calculations with the established classical stereological 3D-correction methods for 2D-images and show that it was possible to achieve a significantly higher quality and accuracy. The methods described in this paper are not dependent on the nature of the object. The fact that YaDiV is open-source, and that users with programming skills can create new plugins themselves, may allow this platform to become applicable to a variety of geological scenarios, from the analysis of textures in tiny rock samples to the interpretation of global geophysical data, as long as the data are provided in tomographic form.
The Collaborative Seismic Earth Model Project
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Herwaarden, D. P.; Afanasiev, M.
2017-12-01
We present the first generation of the Collaborative Seismic Earth Model (CSEM). This effort is intended to address grand challenges in tomography that currently inhibit imaging the Earth's interior across the seismically accessible scales: [1] For decades to come, computational resources will remain insufficient for the exploitation of the full observable seismic bandwidth. [2] With the manpower of individual research groups, only small fractions of available waveform data can be incorporated into seismic tomographies. [3] The limited incorporation of prior knowledge on 3D structure leads to slow progress and inefficient use of resources. The CSEM is a multi-scale model of global 3D Earth structure that evolves continuously through successive regional refinements. Taking the current state of the CSEM as initial model, these refinements are contributed by external collaborators, and used to advance the CSEM to the next state. This mode of operation allows the CSEM [1] to harness the distributed human and computing power of the community, [2] to make consistent use of prior knowledge, and [3] to combine different tomographic techniques, needed to cover the seismic data bandwidth. Furthermore, the CSEM has the potential to serve as a unified and accessible representation of tomographic Earth models. Generation 1 comprises around 15 regional tomographic refinements, computed with full-waveform inversion. These include continental-scale mantle models of North America, Australasia, Europe and the South Atlantic, as well as detailed regional models of the crust beneath the Iberian Peninsula and western Turkey. A global-scale full-waveform inversion ensures that regional refinements are consistent with whole-Earth structure. This first generation will serve as the basis for further automation and methodological improvements concerning validation and uncertainty quantification.
NASA Astrophysics Data System (ADS)
Heublein, Marion; Alshawaf, Fadwa; Zhu, Xiao Xiang; Hinz, Stefan
2016-04-01
An accurate knowledge of the 3D distribution of water vapor in the atmosphere is a key element for weather forecasting and climate research. On the other hand, as water vapor causes a delay in the microwave signal propagation within the atmosphere, a precise determination of water vapor is required for accurate positioning and deformation monitoring using Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). However, due to its high variability in time and space, the atmospheric water vapor distribution is difficult to model. Since GNSS meteorology was introduced about twenty years ago, it has increasingly been used as a geodetic technique to generate maps of 2D Precipitable Water Vapor (PWV). Moreover, several approaches for 3D tomographic water vapor reconstruction from GNSS-based estimates using the simple least squares adjustment were presented. In this poster, we present an innovative and sophisticated Compressive Sensing (CS) concept for sparsity-driven tomographic reconstruction of 3D atmospheric wet refractivity fields using data from GNSS and InSAR. The 2D zenith wet delay (ZWD) estimates are obtained by a combination of point-wise estimates of the wet delay using GNSS observations and partial InSAR wet delay maps. These ZWD estimates are aggregated to derive realistic wet delay input data of 100 points as if corresponding to 100 GNSS sites within an area of 100 km × 100 km in the test region of the Upper Rhine Graben. The made-up ZWD values can be mapped into different elevation and azimuth angles. Using the Cosine transform, a sparse representation of the wet refractivity field is obtained. In contrast to existing tomographic approaches, we exploit sparsity as a prior for the regularization of the underdetermined inverse system. The new aspects of this work include both the combination of GNSS and InSAR data for water vapor tomography and the sophisticated CS estimation. The accuracy of the estimated 3D water vapor field is determined by comparing slant integrated wet delays computed from the estimated wet refractivities with real GNSS wet delay estimates. This comparison is performed along different elevation and azimuth angles.
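A toy sketch of sparsity-driven reconstruction in the spirit described above: a field assumed sparse in a cosine (DCT) basis is recovered from a few linear measurements with an iterative soft-thresholding (ISTA) solver. The observation matrix, problem sizes and regularization weight are illustrative and unrelated to the actual GNSS/InSAR geometry.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis; columns are the basis vectors."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    psi = np.cos(np.pi * (i + 0.5) * k / n)
    psi[0] *= 1.0 / np.sqrt(2.0)
    return (psi * np.sqrt(2.0 / n)).T

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for  min 0.5*||y - A c||^2 + lam*||c||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        c = c - (A.T @ (A @ c - y)) / L
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)
    return c

# a refractivity-like field sparse in the DCT domain, observed through a few
# integrated (slant-delay-like) linear measurements G
n, m = 64, 20
Psi = dct_basis(n)
G = np.random.default_rng(1).normal(size=(m, n))    # toy observation geometry
c_true = np.zeros(n)
c_true[[2, 7, 15]] = [3.0, -2.0, 1.0]
y = G @ (Psi @ c_true)
field_hat = Psi @ ista(G @ Psi, y)
print(np.linalg.norm(field_hat - Psi @ c_true))     # reconstruction error
```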
Seeing the Forest when Entry Is Unlikely: Probability and the Mental Representation of Events
ERIC Educational Resources Information Center
Wakslak, Cheryl J.; Trope, Yaacov; Liberman, Nira; Alony, Rotem
2006-01-01
Conceptualizing probability as psychological distance, the authors draw on construal level theory (Y. Trope & N. Liberman, 2003) to propose that decreasing an event's probability leads individuals to represent the event by its central, abstract, general features (high-level construal) rather than by its peripheral, concrete, specific features…
Zhang, Hang; Maloney, Laurence T.
2012-01-01
In decision from experience, the source of probability information affects how probability is distorted in the decision task. Understanding how and why probability is distorted is a key issue in understanding the peculiar character of experience-based decision. We consider how probability information is used not just in decision-making but also in a wide variety of cognitive, perceptual, and motor tasks. Very similar patterns of distortion of probability/frequency information have been found in visual frequency estimation, frequency estimation based on memory, signal detection theory, and in the use of probability information in decision-making under risk and uncertainty. We show that distortion of probability in all cases is well captured as linear transformations of the log odds of frequency and/or probability, a model with a slope parameter, and an intercept parameter. We then consider how task and experience influence these two parameters and the resulting distortion of probability. We review how the probability distortions change in systematic ways with task and report three experiments on frequency distortion where the distortions change systematically in the same task. We found that the slope of frequency distortions decreases with the sample size, which is echoed by findings in decision from experience. We review previous models of the representation of uncertainty and find that none can account for the empirical findings. PMID:22294978
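A small Python sketch of the linear-in-log-odds distortion family described above, with hypothetical slope and crossover parameters.

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

def distorted_probability(p, slope, crossover=0.5):
    """Linear-in-log-odds distortion: subjective log odds are a linear
    function of objective log odds, with a slope parameter and a crossover
    point where subjective and objective probability coincide."""
    lo = slope * log_odds(p) + (1.0 - slope) * log_odds(crossover)
    return 1.0 / (1.0 + np.exp(-lo))

p = np.linspace(0.01, 0.99, 9)
# slope < 1 gives the familiar inverse-S pattern: small probabilities are
# overestimated and large probabilities underestimated
print(distorted_probability(p, slope=0.6))
```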
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured through just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach makes it possible to implement dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
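For orientation, a minimal Python sketch of the original spectral representation (OSR) that the hybrid approach starts from; the reduction to two elementary random variables via random functions is not reproduced, and the spectrum below is a toy choice rather than a turbulence-wind spectrum.

```python
import numpy as np

def simulate_stationary_process(S, w_max, n_freq, t, rng):
    """Original spectral representation: superpose cosines whose amplitudes
    follow the one-sided power spectral density S(w) and whose phases are
    independent and uniform on [0, 2*pi)."""
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw
    phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    amp = np.sqrt(2.0 * S(w) * dw)
    return np.sum(amp[:, None] * np.cos(w[:, None] * t[None, :] + phi[:, None]),
                  axis=0)

S = lambda w: 1.0 / (1.0 + w ** 2)           # toy one-sided spectrum
t = np.linspace(0.0, 60.0, 2048)
x = simulate_stationary_process(S, w_max=10.0, n_freq=512, t=t,
                                rng=np.random.default_rng(2))
```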
Object-based attention: strength of object representation and attentional guidance.
Shomstein, Sarah; Behrmann, Marlene
2008-01-01
Two or more features belonging to a single object are identified more quickly and more accurately than are features belonging to different objects--a finding attributed to sensory enhancement of all features belonging to an attended or selected object. However, several recent studies have suggested that this "single-object advantage" may be a product of probabilistic and configural strategic prioritizations rather than of object-based perceptual enhancement per se, challenging the underlying mechanism that is thought to give rise to object-based attention. In the present article, we further explore constraints on the mechanisms of object-based selection by examining the contribution of the strength of object representations to the single-object advantage. We manipulated factors such as exposure duration (i.e., preview time) and salience of configuration (i.e., objects). Varying preview time changes the magnitude of the object-based effect, so that if there is ample time to establish an object representation (i.e., preview time of 1,000 msec), then both probability and configuration (i.e., objects) guide attentional selection. If, however, insufficient time is provided to establish a robust object-based representation, then only probabilities guide attentional selection. Interestingly, at a short preview time of 200 msec, when the two objects were sufficiently different from each other (i.e., different colors), both configuration and probability guided attention selection. These results suggest that object-based effects can be explained both in terms of strength of object representations (established at longer exposure durations and by pictorial cues) and probabilistic contingencies in the visual environment.
Kim, Jong Gyu
2012-01-01
Background During the planning of a thoracodorsal artery perforator (TDAP) free flap, preoperative multidetector-row computed tomographic (MDCT) angiography is valuable for predicting the locations of perforators. However, CT-based perforator mapping of the thoracodorsal artery is not easy because of its small diameter. Thus, we evaluated 1-mm-thick MDCT images in multiple planes to search for reliable perforators accurately. Methods Between July 2010 and October 2011, 19 consecutive patients (13 males, 6 females) who underwent MDCT prior to TDAP free flap operations were enrolled in this study. Patients ranged in age from 10 to 75 years (mean, 39.3 years). MDCT images were acquired at a thickness of 1 mm in the axial, coronal, and sagittal planes. Results The thoracodorsal artery perforators were detected in all 19 cases. The reliable perforators originating from the descending branch were found in 14 cases, of which 6 had transverse branches. The former were well identified in the coronal view, and the latter in the axial view. The location of the most reliable perforators on MDCT images corresponded well with the surgical findings. Conclusions Though MDCT has been widely used in performing the abdominal perforator free flap for detecting reliable perforating vessels, it is not popular in the TDAP free flap. The results of this study suggest that multiple planes of MDCT may increase the probability of detecting the most reliable perforators, along with decreasing the probability of missing available vessels. PMID:22872839
Dimensional Representation and Gradient Boosting for Seismic Event Classification
NASA Astrophysics Data System (ADS)
Semmelmayer, F. C.; Kappedal, R. D.; Magana-Zook, S. A.
2017-12-01
In this research, we conducted experiments with representational structures on 5009 seismic signals with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We also applied a gradient boosted classifier. While perfect classification was not attained (our best model achieved approximately 88%), some cases demonstrate that many events can be filtered out as having a very high probability of being explosions or earthquakes, diminishing subject-matter experts' (SME) workload for first-stage analysis. It is our hope that these methods can be refined, further increasing the classification probability.
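A hedged sketch of applying a gradient boosted classifier to per-event feature vectors with scikit-learn; the feature matrix, labels and confidence thresholds below are synthetic stand-ins, not the representational structures used in the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# placeholder data: one row of derived waveform features per event,
# label 1 = explosion, 0 = earthquake (synthetic stand-ins)
rng = np.random.default_rng(3)
X = rng.normal(size=(5009, 40))
y = rng.integers(0, 2, size=5009)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                 max_depth=3).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
# events classified with very high confidence either way could be removed
# from the analysts' first-pass queue
confident = (proba > 0.95) | (proba < 0.05)
print(clf.score(X_te, y_te), confident.mean())
```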
Automation Activities that Support C2 Agility to Mitigate Type 7 Risks
2014-06-01
[Slide excerpt: example rare events ("on business trip", "space ship runs into space junk") and the question of their probabilities in a 45-year career time frame; representation of events in a form the information system understands; State-Space Diagram; Common Agility Space (CAS); a simple C2 organization representation.]
Two methods of Haustral fold detection from computed tomographic virtual colonoscopy images
NASA Astrophysics Data System (ADS)
Chowdhury, Ananda S.; Tan, Sovira; Yao, Jianhua; Linguraru, Marius G.; Summers, Ronald M.
2009-02-01
Virtual colonoscopy (VC) has gained popularity as a new colon diagnostic method over the last decade. VC is a new, less invasive alternative to the usually practiced optical colonoscopy for colorectal polyp and cancer screening; colorectal cancer is the second major cause of cancer-related deaths in industrialized nations. Haustral (colonic) folds serve as important landmarks for virtual endoscopic navigation in the existing computer-aided-diagnosis (CAD) system. In this paper, we propose and compare two different methods of haustral fold detection from volumetric computed tomographic virtual colonoscopy images. The colon lumen is segmented from the input using modified region growing and fuzzy connectedness. The first method for fold detection uses a level set that evolves on a mesh representation of the colon surface. The colon surface is obtained from the segmented colon lumen using the Marching Cubes algorithm. The second method for fold detection, based on a combination of heat diffusion and the fuzzy c-means algorithm, is employed on the segmented colon volume. Folds obtained on the colon volume using this method are then transferred to the corresponding colon surface. After experimentation with different datasets, the results are found to be promising. The results also demonstrate that the first method has a tendency toward slight under-segmentation, while the second method tends to slightly over-segment the folds.
Automatic and strategic effects in the guidance of attention by working memory representations
Carlisle, Nancy B.; Woodman, Geoffrey F.
2010-01-01
Theories of visual attention suggest that working memory representations automatically guide attention toward memory-matching objects. Some empirical tests of this prediction have produced results consistent with working memory automatically guiding attention. However, others have shown that individuals can strategically control whether working memory representations guide visual attention. Previous studies have not independently measured automatic and strategic contributions to the interactions between working memory and attention. In this study, we used a classic manipulation of the probability of valid, neutral, and invalid cues to tease apart the nature of such interactions. This framework utilizes measures of reaction time (RT) to quantify the costs and benefits of attending to memory-matching items and infer the relative magnitudes of automatic and strategic effects. We found both costs and benefits even when the memory-matching item was no more likely to be the target than other items, indicating an automatic component of attentional guidance. However, the costs and benefits essentially doubled as the probability of a trial with a valid cue increased from 20% to 80%, demonstrating a potent strategic effect. We also show that the instructions given to participants led to a significant change in guidance distinct from the actual probability of events during the experiment. Together, these findings demonstrate that the influence of working memory representations on attention is driven by both automatic and strategic interactions. PMID:20643386
Automatic and strategic effects in the guidance of attention by working memory representations.
Carlisle, Nancy B; Woodman, Geoffrey F
2011-06-01
Theories of visual attention suggest that working memory representations automatically guide attention toward memory-matching objects. Some empirical tests of this prediction have produced results consistent with working memory automatically guiding attention. However, others have shown that individuals can strategically control whether working memory representations guide visual attention. Previous studies have not independently measured automatic and strategic contributions to the interactions between working memory and attention. In this study, we used a classic manipulation of the probability of valid, neutral, and invalid cues to tease apart the nature of such interactions. This framework utilizes measures of reaction time (RT) to quantify the costs and benefits of attending to memory-matching items and infer the relative magnitudes of automatic and strategic effects. We found both costs and benefits even when the memory-matching item was no more likely to be the target than other items, indicating an automatic component of attentional guidance. However, the costs and benefits essentially doubled as the probability of a trial with a valid cue increased from 20% to 80%, demonstrating a potent strategic effect. We also show that the instructions given to participants led to a significant change in guidance distinct from the actual probability of events during the experiment. Together, these findings demonstrate that the influence of working memory representations on attention is driven by both automatic and strategic interactions. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Atkinson, Callum; Coudert, Sebastien; Foucaut, Jean-Marc; Stanislas, Michel; Soria, Julio
2011-04-01
To investigate the accuracy of tomographic particle image velocimetry (Tomo-PIV) for turbulent boundary layer measurements, a series of synthetic image-based simulations and practical experiments are performed on a high Reynolds number turbulent boundary layer at Reθ = 7,800. Two different approaches to Tomo-PIV are examined using a full-volume slab measurement and a thin-volume "fat" light sheet approach. Tomographic reconstruction is performed using both the standard MART technique and the more efficient MLOS-SMART approach, showing a 10-time increase in processing speed. Random and bias errors are quantified under the influence of the near-wall velocity gradient, reconstruction method, ghost particles, seeding density and volume thickness, using synthetic images. Experimental Tomo-PIV results are compared with hot-wire measurements and errors are examined in terms of the measured mean and fluctuating profiles, probability density functions of the fluctuations, distributions of fluctuating divergence through the volume and velocity power spectra. Velocity gradients have a large effect on errors near the wall and also increase the errors associated with ghost particles, which convect at mean velocities through the volume thickness. Tomo-PIV provides accurate experimental measurements at low wave numbers; however, reconstruction introduces high noise levels that reduces the effective spatial resolution. A thinner volume is shown to provide a higher measurement accuracy at the expense of the measurement domain, albeit still at a lower effective spatial resolution than planar and Stereo-PIV.
Kozłowski, T; Cybulska, M; Błaszczyk, B; Krajewska, M; Jeśman, C
2014-10-01
Results of morphological and tomographic (CT) studies of the skull found in the crypt of the Silesian Piasts in the St. Jadwiga church in Brzeg (Silesia, Poland) are presented and discussed here. The established date of burial of a probably 20-30-year-old male was the 16th-17th century. The analyzed skull showed premature obliteration of the major skull sutures. This resulted in braincase deformation, similar to the forms found in oxycephaly and microcephaly. Tomographic analysis revealed gross pathology. Signs of increased intracranial pressure, basilar invagination and hypoplasia of the occipital bone were observed. These results suggested the occurrence of the very rare Arnold-Chiari syndrome. Lesions found in the sella turcica indicated the development of a pituitary macroadenoma, which resulted in the occurrence of discrete features of acromegaly in the facial bones. The studied skull was characterized by a significantly smaller size of the neurocranium (horizontal circumference 471 mm, cranial capacity ∼ 1080 ml) and strongly expressed brachycephaly (cranial index = 86.3), while its height remained within the range for non-deformed skulls. A narrow face, high eye sockets and prognathism were also observed. Signs of alveolar process hypertrophy with rotation and displacement of the teeth were noted. The skull showed significant morphological differences compared to both normal and other pathological skulls, such as those with pituitary gigantism, scaphocephaly and microcephaly. Copyright © 2014 Elsevier GmbH. All rights reserved.
Ancient representation of Meige's syndrome in the Moche culture in the pre-Columbian Peru.
Martinez-Castrillo, Juan Carlos; Mariscal, Ana; Garcia-Ruiz, Pedro
2010-03-15
The Moches were a pre-Columbian culture from Peru who had a fine ceramic technique and used to represent diseases. One example is the pottery presented here, which represents a man with a probable Meige's syndrome and may be the first artistic representation of this disease.
State-wide monitoring based on probability survey designs requires a spatially explicit representation of all streams and rivers of interest within a state, i.e., a sample frame. The sample frame should be the best available map representation of the resource. Many stream progr...
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Li, Hechao
An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing its performance. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms like the filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART) require a huge number of tomographic projections and a segmentation process before microstructural quantification can be conducted. This can be quite time-consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information, in the form of spatial correlation functions, from limited x-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability of an arbitrary point in the material system belonging to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure that enables one to accurately reconstruct material microstructure from a small number of x-ray tomographic projections (e.g., 20 - 40) is presented. Moreover, a stochastic procedure for multi-modal data fusion is proposed, where both X-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm is shown to be able to integrate the complementary data in an effective optimization procedure, which indicates its high efficiency in using limited structural information. Finally, the accuracy of the stochastic reconstruction procedure using limited X-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of the energy landscape associated with different numbers of projections. The ground-state degeneracy of a microstructure is found to decrease with an increasing number of projections, which indicates a higher probability that the reconstructed configurations match the actual microstructure. The roughness of the energy landscape can also provide information about the complexity and convergence behavior of the reconstruction for given microstructures and projection numbers.
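A minimal sketch of extracting a two-point correlation function from a phase-probability map via FFT autocorrelation, assuming periodic boundaries; the map below is a toy uncorrelated two-phase medium, not reconstructed tomography data.

```python
import numpy as np

def two_point_correlation(prob_map):
    """S2 along the x-axis from a 2D phase-probability map: the probability
    that two points separated by r both belong to the phase of interest,
    estimated via FFT autocorrelation with periodic boundaries."""
    f = np.fft.fftn(prob_map)
    auto = np.real(np.fft.ifftn(f * np.conj(f))) / prob_map.size
    return auto[0, :]              # correlation along one axis

rng = np.random.default_rng(4)
pmap = (rng.random((128, 128)) < 0.3).astype(float)   # volume fraction 0.3
s2 = two_point_correlation(pmap)
print(s2[0], s2[5])    # about 0.30 at r = 0 and 0.30**2 = 0.09 for r > 0
```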
NASA Astrophysics Data System (ADS)
Zheng, Guoyan
2007-03-01
Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities that are used for the visualization of the surgical action. In the so-called CT-based systems or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or through intraoperatively digitized anatomy landmarks, photorealistic rendering of the surgical action has been identified to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel as the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into a likelihood probability and a MRF into the prior probability. Considering the MAP as an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
REPRESENTATIONS OF WEAK AND STRONG INTEGRALS IN BANACH SPACES
Brooks, James K.
1969-01-01
We establish a representation of the Gelfand-Pettis (weak) integral in terms of unconditionally convergent series. Moreover, absolute convergence of the series is a necessary and sufficient condition in order that the weak integral coincide with the Bochner integral. Two applications of the representation are given. The first is a simplified proof of the countable additivity and absolute continuity of the indefinite weak integral. The second application is to probability theory; we characterize the conditional expectation of a weakly integrable function. PMID:16591755
Application of wavefield compressive sensing in surface wave tomography
NASA Astrophysics Data System (ADS)
Zhan, Zhongwen; Li, Qingyang; Huang, Jianping
2018-06-01
Dense arrays allow sampling of the seismic wavefield without significant aliasing, and surface wave tomography has benefitted from exploiting wavefield coherence among neighbouring stations. However, explicit or implicit assumptions about the wavefield, irregular station spacing and noise still limit the applicability and resolution of current surface wave methods. Here, we propose to apply the theory of compressive sensing (CS) to seek a sparse representation of the surface wavefield using a plane-wave basis. We then reconstruct the continuous surface wavefield on a dense regular grid before applying any tomographic methods. Synthetic tests demonstrate that wavefield CS improves the robustness and resolution of Helmholtz tomography and wavefield gradiometry, especially when traditional approaches have difficulties due to sub-Nyquist sampling or complexities in the wavefield.
ERIC Educational Resources Information Center
Abrahamson, Dor
2006-01-01
This snapshot introduces a computer-based representation and activity that enables students to simultaneously "see" the combinatorial space of a stochastic device (e.g., dice, spinner, coins) and its outcome distribution. The author argues that the "ambiguous" representation fosters student insight into probability. [Snapshots are subject to peer…
Caglar, Çagatay; Gul, Adem; Batur, Muhammed; Yasar, Tekin
2017-01-01
To compare the sensitivity and specificity of Moorfields regression analysis (MRA) and the glaucoma probability score (GPS) between healthy and glaucomatous eyes with the Heidelberg Retinal Tomograph 3 (HRT-3). The study included 120 eyes of 75 glaucoma patients and 138 eyes of 73 normal subjects, for a total of 258 eyes of 148 individuals. All measurements were performed with the HRT-3. Diagnostic test criteria (sensitivity, specificity, etc.) were used to evaluate how efficiently the GPS and MRA algorithms in the HRT-3 discriminated between the glaucoma and control groups. The GPS showed 88% sensitivity and 66% specificity, whereas MRA had 71.5% sensitivity and 82.5% specificity. There was 71% agreement between the final results of MRA and GPS in the glaucoma group. Excluding borderline patients from both analyses resulted in 91.6% agreement. In the control group the level of agreement between MRA and GPS was 64% including borderline patients and 84.1% after excluding borderline patients. The accuracy rate was 92% for MRA and 91% for GPS in the glaucoma group excluding borderline patients; the difference was not statistically significant. In both cases, agreement was higher between MRA and GPS in the glaucoma group. We found that both sensitivity and specificity increased with disc size for MRA, while the sensitivity increased and specificity decreased with larger disc sizes for GPS. HRT is able to quantify and clearly reveal structural changes in the ONH and RNFL in glaucoma.
Jindal, Shveta; Dada, Tanuj; Sreenivas, V; Gupta, Viney; Sihota, Ramanjit; Panda, Anita
2010-01-01
Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfield’s regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted k) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119 – 0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a higher negative likelihood ratio (0.25 vs. 0.44).The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was a poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. The disc size should be taken into consideration when interpreting the results of HRT, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs. PMID:20952832
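The likelihood ratios quoted above follow directly from the reported sensitivities and specificities; a quick arithmetic check in Python, with the values taken from the abstract:

```python
def lr_pos(sens, spec):
    return sens / (1.0 - spec)

def lr_neg(sens, spec):
    return (1.0 - sens) / spec

print(lr_pos(0.5714, 0.98))     # MRA, least specific criterion: about 28.6
print(lr_neg(0.8163, 0.7347))   # GPS, most specific criterion: about 0.25
```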
Bots, Michiel L.; Selvarajah, Sharmini; Kappelle, L. Jaap; Abdul Aziz, Zariah; Sidek, Norsima Nazifah; Vaartjes, Ilonca
2016-01-01
Background A shortage of computed tomographic (CT) machines in low and middle income countries often results in delayed CT imaging for patients suspected of a stroke. Yet, time constraint is one of the most important aspects for patients with an ischemic stroke to benefit from thrombolytic therapy. We set out to assess whether application of the Siriraj Stroke Score is able to assist physicians in prioritizing patients with a high probability of having an ischemic stroke for urgent CT imaging. Methods From the Malaysian National Neurology Registry, we selected patients aged 18 years and over with clinical features suggesting of a stroke, who arrived in the hospital 4.5 hours or less from ictus. The prioritization of receiving CT imaging was left to the discretion of the treating physician. We applied the Siriraj Stroke Score to all patients, refitted the score and defined a cut-off value to best distinguish an ischemic stroke from a hemorrhagic stroke. Results Of the 2176 patients included, 73% had an ischemic stroke. Only 33% of the ischemic stroke patients had CT imaging within 4.5 hours. The median door-to-scan time for these patients was 4 hours (IQR: 1;16). With the recalibrated score, it would have been possible to prioritize 95% (95% CI: 94%–96%) of patients with an ischemic stroke for urgent CT imaging. Conclusions In settings where CT imaging capacity is limited, we propose the use of the Siriraj Stroke Score to prioritize patients with a probable ischemic stroke for urgent CT imaging. PMID:27768752
Deacon, D; Nousak, J M; Pilotti, M; Ritter, W; Yang, C M
1998-07-01
Global and feature-specific probabilities of auditory stimuli were manipulated to determine their effects on the mismatch negativity (MMN) of the human event-related potential. The question of interest was whether the automatic comparison of stimuli indexed by the MMN was performed on representations of individual stimulus features or on gestalt representations of their combined attributes. The design of the study was such that both feature and gestalt representations could have been available to the comparator mechanism generating the MMN. The data were consistent with the interpretation that the MMN was generated following an analysis of stimulus features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
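As a rough sketch of what two-phase (nested) Monte Carlo propagation looks like for an uncertain attacker payoff, assume a hypothetical payoff model in which the mean payoff is only known to lie in an interval (epistemic uncertainty) while the realized payoff is noisy (aleatory uncertainty); none of the numbers below come from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Epistemic uncertainty: the attacker's mean payoff is only known to lie in
# an interval (hypothetical bounds, not from the paper).
mean_low, mean_high = 2.0, 5.0

outer_expectations = []
for _ in range(200):                              # outer loop: epistemic draws
    mu = rng.uniform(mean_low, mean_high)
    payoffs = rng.normal(mu, 1.0, size=1000)      # inner loop: aleatory draws
    outer_expectations.append(payoffs.mean())

# Interval of expected payoffs induced by the epistemic uncertainty.
print(min(outer_expectations), max(outer_expectations))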
Emotion and decision-making: affect-driven belief systems in anxiety and depression.
Paulus, Martin P; Yu, Angela J
2012-09-01
Emotion processing and decision-making are integral aspects of daily life. However, our understanding of the interaction between these constructs is limited. In this review, we summarize theoretical approaches that link emotion and decision-making, and focus on research with anxious or depressed individuals to show how emotions can interfere with decision-making. We integrate the emotional framework based on valence and arousal with a Bayesian approach to decision-making in terms of probability and value processing. We discuss how studies of individuals with emotional dysfunctions provide evidence that alterations of decision-making can be viewed in terms of altered probability and value computation. We argue that the probabilistic representation of belief states in the context of partially observable Markov decision processes provides a useful approach to examine alterations in probability and value representation in individuals with anxiety and depression, and outline the broader implications of this approach. Copyright © 2012. Published by Elsevier Ltd.
Emotion and decision-making: affect-driven belief systems in anxiety and depression
Paulus, Martin P.; Yu, Angela J.
2012-01-01
Emotion processing and decision-making are integral aspects of daily life. However, our understanding of the interaction between these constructs is limited. In this review, we summarize theoretical approaches to the link between emotion and decision-making, and focus on research with anxious or depressed individuals that reveals how emotions can interfere with decision-making. We integrate the emotional framework based on valence and arousal with a Bayesian approach to decision-making in terms of probability and value processing. We then discuss how studies of individuals with emotional dysfunctions provide evidence that alterations of decision-making can be viewed in terms of altered probability and value computation. We argue that the probabilistic representation of belief states in the context of partially observable Markov decision processes provides a useful approach to examine alterations in probability and value representation in individuals with anxiety and depression and outline the broader implications of this approach. PMID:22898207
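The belief-state machinery invoked in the two abstracts above has a compact core: after taking an action and receiving an observation, the belief over hidden states is re-weighted by the observation likelihood applied to the transition-propagated prior. A generic sketch with toy matrices, not a model from the review:

import numpy as np

def belief_update(belief, T, O, action, obs):
    """POMDP belief update: new belief over s' is proportional to
    O[action][s', obs] * sum_s T[action][s, s'] * belief[s]."""
    predicted = T[action].T @ belief                 # transition-propagated prior
    unnormalized = O[action][:, obs] * predicted     # weight by observation likelihood
    return unnormalized / unnormalized.sum()

# Toy problem: two hidden states, one action, two possible observations.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
O = np.array([[[0.7, 0.3],
               [0.2, 0.8]]])
b0 = np.array([0.5, 0.5])
print(belief_update(b0, T, O, action=0, obs=1))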
van Lamsweerde, Amanda E; Beck, Melissa R
2015-12-01
In this study, we investigated whether the ability to learn probability information is affected by the type of representation held in visual working memory. Across 4 experiments, participants detected changes to displays of coloured shapes. While participants detected changes in 1 dimension (e.g., colour), a feature from a second, nonchanging dimension (e.g., shape) predicted which object was most likely to change. In Experiments 1 and 3, items could be grouped by similarity in the changing dimension across items (e.g., colours and shapes were repeated in the display), while in Experiments 2 and 4 items could not be grouped by similarity (all features were unique). Probability information from the predictive dimension was learned and used to increase performance, but only when all of the features within a display were unique (Experiments 2 and 4). When it was possible to group by feature similarity in the changing dimension (e.g., 2 blue objects appeared within an array), participants were unable to learn probability information and use it to improve performance (Experiments 1 and 3). The results suggest that probability information can be learned in a dimension that is not explicitly task-relevant, but only when the probability information is represented with the changing dimension in visual working memory. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
A unified data representation theory for network visualization, ordering and coarse-graining
Kovács, István A.; Mizsei, Réka; Csermely, Péter
2015-01-01
Representation of large data sets became a key question of many scientific disciplines in the last decade. Several approaches for network visualization, data ordering and coarse-graining accomplished this goal. However, there was no underlying theoretical framework linking these problems. Here we show an elegant, information theoretic data representation approach as a unified solution of network visualization, data ordering and coarse-graining. The optimal representation is the hardest to distinguish from the original data matrix, measured by the relative entropy. The representation of network nodes as probability distributions provides an efficient visualization method and, in one dimension, an ordering of network nodes and edges. Coarse-grained representations of the input network enable both efficient data compression and hierarchical visualization to achieve high quality representations of larger data sets. Our unified data representation theory will help the analysis of extensive data sets, by revealing the large-scale structure of complex networks in a comprehensible form. PMID:26348923
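The optimality criterion described above, the representation that is "hardest to distinguish from the original data matrix, measured by the relative entropy", can be made concrete with a small sketch that scores a candidate representation by the Kullback-Leibler divergence between the normalized original matrix and the normalized representation; the matrices below are toy inputs, not the authors' algorithm.

import numpy as np

def relative_entropy(P, Q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between two nonnegative matrices."""
    p = P / P.sum()
    q = Q / Q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

original = np.array([[4., 1., 0.],
                     [1., 3., 1.],
                     [0., 1., 5.]])
candidate = np.array([[3., 2., 0.],
                      [2., 2., 1.],
                      [0., 1., 5.]])
# Lower divergence = representation that is harder to distinguish from the data.
print(relative_entropy(original, candidate))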
NASA Astrophysics Data System (ADS)
Tsukahara, M.; Mitrovic, S.; Gajdosik, V.; Margaritondo, G.; Pournin, L.; Ramaioli, M.; Sage, D.; Hwu, Y.; Unser, M.; Liebling, Th. M.
2008-06-01
We describe an approach for exploring microscopic properties of granular media that couples x-ray microtomography and distinct-element-method (DEM) simulations through image analysis. We illustrate it via the study of the intriguing phenomenon of instant arching in an hourglass (in our case a cylinder filled with a polydisperse mixture of glass beads that has a small circular shutter in the bottom). X-ray tomography provides three-dimensional snapshots of the microscopic conditions of the system both prior to opening the shutter, and thereafter, once jamming is completed. The process time in between is bridged using DEM simulation, which settles to positions in remarkably good agreement with the x-ray images. Specifically designed image analysis procedures accurately extract the geometrical information, i.e., the positions and sizes of the beads, from the raw x-ray tomographs, and compress the data representation from initially 5 gigabytes to a few tens of kilobytes per tomograph. The scope of the approach is explored through a sensitivity analysis to input data perturbations in both bead sizes and positions. We establish that accuracy of size—much more than position—estimates is critical, thus explaining the difficulty in considering a mixture of beads of different sizes. We further point to limits in the replication ability of granular flows away from equilibrium; i.e., the difficulty of numerically reproducing chaotic motion.
Secondary School Students' Reasoning about Conditional Probability, Samples, and Sampling Procedures
ERIC Educational Resources Information Center
Prodromou, Theodosia
2016-01-01
In the Australian mathematics curriculum, Year 12 students (aged 16-17) are asked to solve conditional probability problems that involve the representation of the problem situation with two-way tables or three-dimensional diagrams and consider sampling procedures that result in different correct answers. In a small exploratory study, we…
Attention as Inference: Selection Is Probabilistic; Responses Are All-or-None Samples
ERIC Educational Resources Information Center
Vul, Edward; Hanus, Deborah; Kanwisher, Nancy
2009-01-01
Theories of probabilistic cognition postulate that internal representations are made up of multiple simultaneously held hypotheses, each with its own probability of being correct (henceforth, "probability distributions"). However, subjects make discrete responses and report the phenomenal contents of their mind to be all-or-none states rather than…
Striatal activity is modulated by target probability.
Hon, Nicholas
2017-06-14
Target probability has well-known neural effects. In the brain, target probability is known to affect frontal activity, with lower probability targets producing more prefrontal activation than those that occur with higher probability. Although the effect of target probability on cortical activity is well specified, its effect on subcortical structures such as the striatum is less well understood. Here, I examined this issue and found that the striatum was highly responsive to target probability. This is consistent with its hypothesized role in the gating of salient information into higher-order task representations. The current data are interpreted in light of the fact that different components of the striatum are sensitive to different types of task-relevant information.
A description of discrete internal representation schemes for visual pattern discrimination.
Foster, D H
1980-01-01
A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.
Hattori, Masasi
2016-12-01
This paper presents a new theory of syllogistic reasoning. The proposed model assumes there are probabilistic representations of given signature situations. Instead of conducting an exhaustive search, the model constructs an individual-based "logical" mental representation that expresses the most probable state of affairs, and derives a necessary conclusion that is not inconsistent with the model using heuristics based on informativeness. The model is a unification of previous influential models. Its descriptive validity has been evaluated against existing empirical data and two new experiments, and by qualitative analyses based on previous empirical findings, all of which supported the theory. The model's behavior is also consistent with findings in other areas, including working memory capacity. The results indicate that people assume the probabilities of all target events mentioned in a syllogism to be almost equal, which suggests links between syllogistic reasoning and other areas of cognition. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
A quantum probability framework for human probabilistic inference.
Trueblood, Jennifer S; Yearsley, James M; Pothos, Emmanuel M
2017-09-01
There is considerable variety in human inference (e.g., a doctor inferring the presence of a disease, a juror inferring the guilt of a defendant, or someone inferring future weight loss based on diet and exercise). As such, people display a wide range of behaviors when making inference judgments. Sometimes, people's judgments appear Bayesian (i.e., normative), but in other cases, judgments deviate from the normative prescription of classical probability theory. How can we combine both Bayesian and non-Bayesian influences in a principled way? We propose a unified explanation of human inference using quantum probability theory. In our approach, we postulate a hierarchy of mental representations, from 'fully' quantum to 'fully' classical, which could be adopted in different situations. In our hierarchy of models, moving from the lowest level to the highest involves changing assumptions about compatibility (i.e., how joint events are represented). Using results from 3 experiments, we show that our modeling approach explains 5 key phenomena in human inference including order effects, reciprocity (i.e., the inverse fallacy), memorylessness, violations of the Markov condition, and antidiscounting. As far as we are aware, no existing theory or model can explain all 5 phenomena. We also explore transitions in our hierarchy, examining how representations change from more quantum to more classical. We show that classical representations provide a better account of data as individuals gain familiarity with a task. We also show that representations vary between individuals, in a way that relates to a simple measure of cognitive style, the Cognitive Reflection Test. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
GPS water vapour tomography: preliminary results from the ESCOMPTE field experiment
NASA Astrophysics Data System (ADS)
Champollion, C.; Masson, F.; Bouin, M.-N.; Walpersdorf, A.; Doerflinger, E.; Bock, O.; Van Baelen, J.
2005-03-01
Water vapour plays a major role in atmospheric processes but remains difficult to quantify due to its high variability in time and space and the sparse set of available measurements. The GPS has proved its capacity to measure the integrated water vapour at zenith with the same accuracy as other methods. Recent studies show that it is possible to quantify the integrated water vapour in the line of sight of the GPS satellite. These observations can be used to study the 3D heterogeneity of the troposphere using tomographic techniques. We develop three-dimensional tomographic software to model the three-dimensional distribution of the tropospheric water vapour from GPS data. First, the tomographic software is validated by simulations based on the realistic ESCOMPTE GPS network configuration. Without a priori information, the absolute value of water vapour is less resolved as opposed to relative horizontal variations. During the ESCOMPTE field experiment, a dense network of 17 dual frequency GPS receivers was operated for 2 weeks within a 20×20-km area around Marseille (southern France). The network extends from sea level to the top of the Etoile chain (~700 m high). Optimal results have been obtained with time windows of 30-min intervals and input data evaluation every 15 min. The optimal grid for the ESCOMPTE geometrical configuration has a horizontal step size of 0.05°×0.05° and 500 m vertical step size. Second, we have compared the results of real data inversions with independent observations. Three inversions have been compared to three successive radiosonde launches and shown to be consistent. A good resolution compared to the a priori information is obtained up to heights of 3000 m. A humidity spike at 4000-m altitude remains unresolved. The reason is probably that the signal is spread homogeneously over the whole network and that such a feature is not resolvable by tomographic techniques. The results of our pure GPS inversion show a correlation with meteorological phenomena. Our measurements could be related to the land-sea breeze. Undoubtedly, tomography has some interesting potential for water vapour cycle studies at small temporal and spatial scales.
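Numerically, the heart of such GPS water vapour tomography is a linear inverse problem: each slant observation is modeled as the sum over voxels of ray path length times wet refractivity, d = G m, and the underdetermined system is stabilized with damping or a priori information. The sketch below is a purely illustrative damped least-squares inversion with made-up geometry, not the authors' software.

import numpy as np

def damped_least_squares(G, d, damping=1.0):
    """Solve min ||G m - d||^2 + damping * ||m||^2 for voxel refractivities m."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ d)

# Toy geometry: 3 slant observations crossing 4 voxels (path lengths, made up).
G = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.2, 1.0, 0.8, 0.0],
              [0.0, 0.3, 1.0, 0.7]])
d = np.array([3.1, 4.0, 3.6])          # slant wet delays (arbitrary units)
print(damped_least_squares(G, d, damping=0.1))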
Eriksson, Mats E.; Terfelt, Fredrik
2012-01-01
The Cambrian ‘Orsten’ fauna comprises exceptionally preserved and phosphatised microscopic arthropods. The external morphology of these fossils is well known, but their internal soft-tissue anatomy has remained virtually unknown. Here, we report the first non-biomineralised tissues from a juvenile polymerid trilobite, represented by digestive structures, glands, and connective strands harboured in a hypostome from the Swedish ‘Orsten’ fauna. Synchrotron-radiation X-ray tomographic microscopy enabled three-dimensional internal recordings at sub-micrometre resolution. The specimen provides the first unambiguous evidence for a J-shaped anterior gut and the presence of a crop with a constricted alimentary tract in the Trilobita. Moreover, the gut is Y-shaped in cross section, probably due to a collapsed lumen of that shape, another feature which has not previously been observed in trilobites. The combination of anatomical features suggests that the trilobite hypostome is functionally analogous to the labrum of euarthropods and that it was a sophisticated element closely integrated with the digestive system. This study also briefly addresses the preservational bias of the ‘Orsten’ fauna, particularly the near-absence of polymerid trilobites, and the taphonomy of the soft-tissue-harbouring hypostome. PMID:22558180
Eriksson, Mats E; Terfelt, Fredrik
2012-01-01
The Cambrian 'Orsten' fauna comprises exceptionally preserved and phosphatised microscopic arthropods. The external morphology of these fossils is well known, but their internal soft-tissue anatomy has remained virtually unknown. Here, we report the first non-biomineralised tissues from a juvenile polymerid trilobite, represented by digestive structures, glands, and connective strands harboured in a hypostome from the Swedish 'Orsten' fauna. Synchrotron-radiation X-ray tomographic microscopy enabled three-dimensional internal recordings at sub-micrometre resolution. The specimen provides the first unambiguous evidence for a J-shaped anterior gut and the presence of a crop with a constricted alimentary tract in the Trilobita. Moreover, the gut is Y-shaped in cross section, probably due to a collapsed lumen of that shape, another feature which has not previously been observed in trilobites. The combination of anatomical features suggests that the trilobite hypostome is functionally analogous to the labrum of euarthropods and that it was a sophisticated element closely integrated with the digestive system. This study also briefly addresses the preservational bias of the 'Orsten' fauna, particularly the near-absence of polymerid trilobites, and the taphonomy of the soft-tissue-harbouring hypostome.
Data analysis in emission tomography using emission-count posteriors
NASA Astrophysics Data System (ADS)
Sitek, Arkadiusz
2012-11-01
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
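Once posterior samples of the per-voxel emission counts are available (for example from the Markov chain Monte Carlo procedure of the origin-ensemble formulation), the minimum-mean-square-error estimate described above is simply the posterior mean count, and dividing by voxel sensitivity and acquisition time converts it to an activity estimate. A schematic sketch, with hypothetical posterior draws standing in for real MCMC output:

import numpy as np

def activity_from_emission_counts(count_samples, sensitivity, acq_time):
    """MMSE activity estimate from posterior samples of per-voxel emission counts.

    count_samples: (n_samples, n_voxels) posterior draws of emission counts
    sensitivity:   (n_voxels,) detection sensitivity of each voxel
    acq_time:      acquisition time
    """
    mmse_counts = count_samples.mean(axis=0)        # posterior mean = MMSE estimate
    return mmse_counts / (sensitivity * acq_time)   # emission counts -> activity

rng = np.random.default_rng(1)
samples = rng.poisson(lam=[50, 5, 20], size=(1000, 3))   # hypothetical draws
print(activity_from_emission_counts(samples,
                                    sensitivity=np.array([0.3, 0.3, 0.2]),
                                    acq_time=60.0))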
Numerical Representations and Intuitions of Probabilities at 12 Months
ERIC Educational Resources Information Center
Téglás, Erno; Ibanez-Lillo, Alexandra; Costa, Albert; Bonatti, Luca L.
2015-01-01
Recent research shows that preverbal infants can reason about single-case probabilities without relying on observed frequencies, adapting their predictions to relevant dynamic parameters of the situation (Téglás, Vul, Girotto, Gonzalez, Tenenbaum & Bonatti, [Téglás, E., 2011]; Téglás, Girotto, Gonzalez & Bonatti, [Téglás, E., 2007]). Here…
ERIC Educational Resources Information Center
Kaplan, Danielle E.; Wu, Erin Chia-ling
2006-01-01
Our research suggests static and animated graphics can lead to more animated thinking and more correct problem solving in computer-based probability learning. Pilot software modules were developed for graduate online statistics courses and representation research. A study with novice graduate student statisticians compared problem solving in five…
Maximum parsimony, substitution model, and probability phylogenetic trees.
Weng, J F; Thomas, D A; Mareels, I
2011-01-01
The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides computed by the differences in the investigated nucleotide sequences. However, the MP method is often criticized as it only counts the substitutions observable at the current time and all the unobservable substitutions that really occurred in the evolutionary history are omitted. In order to take into account the unobservable substitutions, some substitution models have been established and they are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees and the reconstructed trees in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
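The Maximum Parsimony criterion the paper builds on, the minimum number of substitutions a fixed tree requires to explain the observed sequences, is classically computed site by site with the Fitch algorithm. The sketch below shows only that basic count for a toy topology; the probability representation and the substitution model of the paper are not reproduced.

def fitch_count(tree, leaf_states):
    """Minimum number of substitutions for one site on a rooted binary tree (Fitch).

    tree:        nested tuples of leaf names, e.g. (('A', 'B'), ('C', 'D'))
    leaf_states: dict mapping leaf name -> nucleotide at this site
    """
    def visit(node):
        if isinstance(node, str):                    # leaf node
            return {leaf_states[node]}, 0
        (s1, c1), (s2, c2) = visit(node[0]), visit(node[1])
        if s1 & s2:
            return s1 & s2, c1 + c2                  # intersection: no new substitution
        return s1 | s2, c1 + c2 + 1                  # union: one substitution added

    return visit(tree)[1]

tree = (('human', 'chimp'), ('mouse', 'rat'))
site = {'human': 'A', 'chimp': 'A', 'mouse': 'G', 'rat': 'T'}
print(fitch_count(tree, site))   # 2 substitutions for this site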
NASA Astrophysics Data System (ADS)
Wan, Kuiyuan; Xia, Shaohong; Cao, Jinghe; Sun, Jinlong; Xu, Huilong
2017-04-01
We present a 2-D seismic tomographic image of the crustal structure along the OBS2012 profile, which delineates the Moho morphology and magmatic features of the northeastern South China Sea margin. The image was created by forward modeling (RayInvr) and traveltime tomographic inversion (Tomo2D). Overall, the continental crust thins seaward from 27 km to 21 km within the continental shelf across the Zhu I Depression and Dongsha Rise, with slight local thickening beneath the Dongsha Rise accompanying the increase in the Moho depth. The Dongsha Rise is also characterized by a 4-7 km thick high-velocity layer (HVL) (~7.0-7.6 km/s) in the lower crust and exhibits a relatively high velocity (~5.5-6.4 km/s) in the upper crust with a velocity gradient lower than those of the Zhu I Depression and Tainan Basin. Across the continental slope and continent-ocean transition (COT), which contain the Tainan Basin, the crust sharply thins from 20 km to 10 km seaward and a 2-3 km thick HVL is imaged in the lower crust. We observed that volcanoes are located only within the COT, but none exist in the continental shelf; the Dongsha Rise exhibits a high magnetic anomaly zone and different geochemical characteristics from the COT. Based on those observations, we conclude that the HVL underlying the COT is probably extension-related, resulting from decompression melting in the Cenozoic, whereas the HVL beneath the Dongsha Rise is probably arc-related and associated with the subduction of the paleo-Pacific plate. These findings are inconsistent with those of some previous studies.
NASA Technical Reports Server (NTRS)
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining-useful-life prediction as a stochastic process, and how this relates to uncertainty representation and management and to the role of prognostics in decision-making. A distinction between two interpretations of the estimated remaining-useful-life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics is used in making critical decisions.
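One common way to make the "remaining useful life as a stochastic process" point concrete is to track a degradation state with a Kalman filter and then propagate the filtered state forward by Monte Carlo until a failure threshold is crossed, so that the remaining useful life is read off as a distribution rather than a single number. The sketch below is a generic linear-degradation toy with invented numbers; it is not the electronics model used in the article.

import numpy as np

rng = np.random.default_rng(2)

# Linear degradation state [level, rate]; measurements observe the level only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition over one time step
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x = np.array([0.0, 0.1])                 # state estimate [level, rate]
P = np.eye(2)                            # estimate covariance
for z in [0.2, 0.3, 0.5, 0.55, 0.7]:     # hypothetical health-indicator readings
    x = F @ x                            # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                  # update with the new reading
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

# Monte Carlo propagation to a failure threshold yields an RUL distribution.
threshold, horizon, ruls = 2.0, 500, []
for _ in range(500):
    xi = rng.multivariate_normal(x, P)
    for t in range(1, horizon + 1):
        xi = F @ xi + rng.multivariate_normal(np.zeros(2), Q)
        if xi[0] >= threshold:
            ruls.append(t)
            break
print(np.percentile(ruls, [5, 50, 95]))  # RUL quantiles, not a point estimate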
Miedl, Stephan F; Peters, Jan; Büchel, Christian
2012-02-01
Context: The neural basis of excessive delay discounting and reduced risk sensitivity of pathological gamblers, with a particular focus on subjective neural reward representations, has not been previously examined. Objective: To examine how pathological gamblers represent subjective reward value at a neural level and how this is affected by gambling severity. Design: Model-based functional magnetic resonance imaging study with patients and control subjects. Setting: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf. Participants: Participants were recruited from the local community by advertisement and through self-help groups. A sample of 16 pathological gamblers (according to the DSM-IV definition) was matched by age, sex, smoking status, income, educational level, and handedness to 16 healthy controls. Results: Pathological gamblers showed increased discounting of delayed rewards and a trend toward decreased discounting of probabilistic rewards compared with matched controls. At the neural level, a significant group × condition interaction indicated that reward representations in the gamblers were modulated in a condition-specific manner, such that they exhibited increased (delay discounting) and decreased (probability discounting) neural value correlations in the reward system. In addition, throughout the reward system, neuronal value signals for delayed rewards were negatively correlated with gambling severity. Conclusions: The results extend previous reports of a generally hypoactive reward system in pathological gamblers by showing that, even when subjective reward valuation is accounted for, gamblers still show altered reward representations. Furthermore, results point toward a gradual degradation of mesolimbic reward representations for delayed rewards during the course of pathological gambling.
Quantum probability ranking principle for ligand-based virtual screening.
Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal
2017-04-01
Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. The tools of virtual screening are widely exploited to enhance the cost effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we developed a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criteria draw an analogy between a physical quantum experiment and the process of ranking molecular structures by their 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criteria in LBVS employs quantum concepts at three levels: first, at the representation level, a new framework of molecular representation is developed by connecting molecular compounds with a mathematical quantum space; second, the similarity between chemical libraries and reference structures is estimated with a quantum-based similarity searching method; finally, the molecules are ranked using the QPRP approach. Simulated virtual screening experiments with MDL drug data report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.
Quantum probability ranking principle for ligand-based virtual screening
NASA Astrophysics Data System (ADS)
Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal
2017-04-01
Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. The tools of virtual screening are widely exploited to enhance the cost effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we developed a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criteria draw an analogy between a physical quantum experiment and the process of ranking molecular structures by their 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criteria in LBVS employs quantum concepts at three levels: first, at the representation level, a new framework of molecular representation is developed by connecting molecular compounds with a mathematical quantum space; second, the similarity between chemical libraries and reference structures is estimated with a quantum-based similarity searching method; finally, the molecules are ranked using the QPRP approach. Simulated virtual screening experiments with MDL drug data report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.
Tomographic diagnostics of nonthermal plasmas
NASA Astrophysics Data System (ADS)
Denisova, Natalia
2009-10-01
In the previous work [1], we discussed a "technology" of the tomographic method and the relations between tomographic diagnostics in thermal (equilibrium) and nonthermal (nonequilibrium) plasma sources. The conclusion was made that tomographic reconstruction in thermal plasma sources is a standard procedure at present, which can provide much useful information on the plasma structure and its evolution in time, while tomographic reconstruction of nonthermal plasma has great potential to contribute to understanding the fundamental problem of substance behavior in strongly nonequilibrium conditions. Using medical terminology, one could say that tomographic diagnostics of equilibrium plasma sources studies their "anatomic" structure, while reconstruction of nonequilibrium plasma is similar to a "physiological" examination: it is directed to study the physical mechanisms and processes. The present work is focused on nonthermal plasma research. The tomographic diagnostics is directed to study spatial structures formed in gas discharge plasmas under the influence of electrical and gravitational fields. The ways of plasma "self-organization" in changing and extreme conditions are analyzed. The analysis has been made using some examples from our practical tomographic diagnostics of nonthermal plasma sources, such as low-pressure capacitive and inductive discharges. [1] Denisova N. Plasma diagnostics using computed tomography method. IEEE Trans. Plasma Sci. 2009;37(4):502.
Costa, Tadeu Lessa da; Oliveira, Denize Cristina de; Formozo, Gláucia Alexandre
2015-02-01
This descriptive qualitative study had the following objectives: identify the content and structure of social representations of quality of life and AIDS for persons living with the disease and analyze the structural relations between such representations. The sample included 103 persons with HIV in a municipality (county) in northern Rio de Janeiro State, Brazil. The methodology used free and hierarchical recall of words for the inductive terms "AIDS" and "quality of life for persons with AIDS", with analysis by the EVOC software. The probable core representation of AIDS was identified as: prejudice, treatment, family, and medications, with the same components identified for quality of life, plus healthy diet and work. We thus elaborated the hypothesis of joint, coordinated representational interaction, fitting the representations together, with implications for the symbolic grasp and quality of life for persons living with HIV. The findings provide backing for collective and individual health approaches to improve quality of life in this group.
Sekine, J; Irie, A; Dotsu, H; Inokuchi, T
2000-10-01
This report describes a case of bilateral pneumothorax with extensive subcutaneous emphysema in a 45-year-old man that occurred during surgery to extract the left lower third molar, performed with the use of an air turbine dental handpiece. Computed tomographic scanning showed severe subcutaneous emphysema extending bilaterally from the cervicofacial region and the deep anatomic spaces (including the pterygomandibular, parapharyngeal, retropharyngeal, and deep temporal spaces) to the anterior wall of the chest. Furthermore, bilateral pneumothorax and pneumomediastinum were present. In our patient, air dissection was probably caused by pressurized air being forced through the operating site into the surrounding connective tissue.
Tensor-based Dictionary Learning for Dynamic Tomographic Reconstruction
Tan, Shengqi; Zhang, Yanbo; Wang, Ge; Mou, Xuanqin; Cao, Guohua; Wu, Zhifang; Yu, Hengyong
2015-01-01
In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from few-view projections. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, the nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples including a sheep lung perfusion study and a dynamic mouse cardiac imaging study demonstrate that the proposed approach outperforms the vectorized dictionary-based CT reconstruction in the case of few-view reconstruction. PMID:25779991
EPR oximetry in three spatial dimensions using sparse spin distribution
NASA Astrophysics Data System (ADS)
Som, Subhojit; Potter, Lee C.; Ahmad, Rizwan; Vikram, Deepti S.; Kuppusamy, Periannan
2008-08-01
A method is presented to use continuous wave electron paramagnetic resonance imaging for rapid measurement of oxygen partial pressure in three spatial dimensions. A particulate paramagnetic probe is employed to create a sparse distribution of spins in a volume of interest. Information encoding location and spectral linewidth is collected by varying the spatial orientation and strength of an applied magnetic gradient field. Data processing exploits the spatial sparseness of spins to detect voxels with nonzero spin and to estimate the spectral linewidth for those voxels. The parsimonious representation of spin locations and linewidths permits an order of magnitude reduction in data acquisition time, compared to four-dimensional tomographic reconstruction using traditional spectral-spatial imaging. The proposed oximetry method is experimentally demonstrated for a lithium octa-n-butoxy naphthalocyanine (LiNc-BuO) probe using an L-band EPR spectrometer.
A Multi-Camera System for Bioluminescence Tomography in Preclinical Oncology Research
Lewis, Matthew A.; Richer, Edmond; Slavine, Nikolai V.; Kodibagkar, Vikram D.; Soesbe, Todd C.; Antich, Peter P.; Mason, Ralph P.
2013-01-01
Bioluminescent imaging (BLI) of cells expressing luciferase is a valuable noninvasive technique for investigating molecular events and tumor dynamics in the living animal. Current usage is often limited to planar imaging, but tomographic imaging can enhance the usefulness of this technique in quantitative biomedical studies by allowing accurate determination of tumor size and attribution of the emitted light to a specific organ or tissue. Bioluminescence tomography based on a single camera with source rotation or mirrors to provide additional views has previously been reported. We report here in vivo studies using a novel approach with multiple rotating cameras that, when combined with image reconstruction software, provides the desired representation of point source metastases and other small lesions. Comparison with MRI validated the ability to detect lung tumor colonization in mouse lung. PMID:26824926
Bittencourt, Marcio Sommer; Hulten, Edward; Polonsky, Tamar S; Hoffman, Udo; Nasir, Khurram; Abbara, Suhny; Di Carli, Marcelo; Blankstein, Ron
2016-07-19
The most appropriate score for evaluating the pretest probability of obstructive coronary artery disease (CAD) is unknown. We sought to compare the Diamond-Forrester (DF) score with the 2 CAD consortium scores recently recommended by the European Society of Cardiology. We included 2274 consecutive patients (age, 56±13 years; 57% male) without prior CAD referred for coronary computed tomographic angiography. Computed tomographic angiography findings were used to determine the presence or absence of obstructive CAD (≥50% stenosis). We compared the DF score with the 2 CAD consortium scores with respect to their ability to predict obstructive CAD and the potential implications of these scores on the downstream use of testing for CAD, as recommended by current guidelines. The DF score did not satisfactorily fit the data and resulted in a significant overestimation of the prevalence of obstructive CAD (P<0.001); the CAD consortium basic score had no significant lack of fitness; and the CAD consortium clinical score provided adequate goodness of fit (P=0.39). The DF score had a lower discrimination for obstructive CAD, with an area under the receiver-operating characteristics curve of 0.713 versus 0.752 and 0.791 for the CAD consortium models (P<0.001 for both). Consequently, the use of the DF score was associated with fewer individuals being categorized as requiring no additional testing (8.3%) compared with the CAD consortium models (24.6% and 30.0%; P<0.001). The proportion of individuals with a high pretest probability was 18% with the DF and only 1.1% with the CAD consortium scores (P<0.001). Conclusions: Among contemporary patients referred for noninvasive testing, the DF risk score overestimates the risk of obstructive CAD. On the other hand, the CAD consortium scores offered improved goodness of fit and discrimination; thus, their use could decrease the need for noninvasive or invasive testing while increasing the yield of such tests. © 2016 American Heart Association, Inc.
The analytical design of spectral measurements for multispectral remote sensor systems
NASA Technical Reports Server (NTRS)
Wiersma, D. J.; Landgrebe, D. A. (Principal Investigator)
1979-01-01
The author has identified the following significant results. In order to choose a design which will be optimal for the largest class of remote sensing problems, a method was developed which attempted to represent the spectral response function from a scene as accurately as possible. The performance of the overall recognition system was studied relative to the accuracy of the spectral representation. The spectral representation was only one of a set of five interrelated parameter categories which also included the spatial representation parameter, the signal to noise ratio, ancillary data, and information classes. The spectral response functions observed from a stratum were modeled as a stochastic process with a Gaussian probability measure. The criterion for spectral representation was defined by the minimum expected mean-square error.
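A minimal way to illustrate the "accuracy of the spectral representation" criterion under the Gaussian stochastic-process model mentioned above is to expand sampled spectral response curves in a truncated Karhunen-Loeve (principal component) basis and measure the mean-square error of the truncation. This is only a sketch of the criterion with synthetic curves, not the design procedure of the report.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic ensemble of spectral response functions (rows = observations).
wavelengths = np.linspace(0.4, 2.4, 50)
X = np.exp(-(wavelengths - 1.0) ** 2) + 0.1 * rng.standard_normal((200, 50))

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)   # Karhunen-Loeve basis

def truncation_mse(k):
    """Mean-square error of representing the spectra with k basis functions."""
    approx = (U[:, :k] * s[:k]) @ Vt[:k] + mean
    return float(np.mean((X - approx) ** 2))

for k in (2, 5, 10):
    print(k, truncation_mse(k))    # error shrinks as the representation grows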
Master equations and the theory of stochastic path integrals
NASA Astrophysics Data System (ADS)
Weber, Markus F.; Frey, Erwin
2017-04-01
This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a ‘generating functional’, which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a ‘forward’ and a ‘backward’ path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.
Master equations and the theory of stochastic path integrals.
Weber, Markus F; Frey, Erwin
2017-04-01
This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a 'generating functional', which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a 'forward' and a 'backward' path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.
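A convenient way to connect this formalism to the "elementary chemical reactions" used as examples is to sample trajectories of a master equation directly with Gillespie's stochastic simulation algorithm. The birth-death process below (production at rate k, degradation at rate gamma per molecule) is a standard textbook toy, not an example taken from the review.

import numpy as np

def gillespie_birth_death(k=10.0, gamma=0.1, n0=0, t_max=100.0, seed=0):
    """One trajectory of dP(n,t)/dt = k[P(n-1,t)-P(n,t)] + gamma[(n+1)P(n+1,t)-nP(n,t)]."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_max:
        birth, death = k, gamma * n            # reaction propensities
        total = birth + death
        t += rng.exponential(1.0 / total)      # waiting time to the next event
        n += 1 if rng.random() < birth / total else -1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
print(counts[-1], "vs. stationary mean k/gamma =", 10.0 / 0.1)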
Quantum-like Modeling of Cognition
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2015-09-01
This paper begins with a historical review of the mutual influence of physics and psychology, from Freud's invention of psychic energy inspired by von Boltzmann's thermodynamics to the enrichment quantum physics gained from the side of psychology through the notion of complementarity (the invention of Niels Bohr, who was inspired by William James); we also consider the resonance of the correspondence between Wolfgang Pauli and Carl Jung in both physics and psychology. Then we turn to the problem of developing mathematical models for laws of thought, starting with Boolean logic and progressing towards the foundations of classical probability theory. Interestingly, the laws of classical logic and probability are routinely violated not only by quantum statistical phenomena but by cognitive phenomena as well. This is yet another common feature between quantum physics and psychology. In particular, cognitive data can exhibit a kind of probabilistic interference effect. This similarity with quantum physics convinced a multi-disciplinary group of scientists (physicists, psychologists, economists, sociologists) to apply the mathematical apparatus of quantum mechanics to the modeling of cognition. We illustrate this activity by considering a few concrete phenomena: the order and disjunction effects, recognition of ambiguous figures, and categorization-decision making. In Appendix 1 we briefly present essentials of the theory of contextual probability and a method of representation of contextual probabilities by complex probability amplitudes (solution of the "inverse Born's problem") based on a quantum-like representation algorithm (QLRA).
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, where the IV probability is defined by a pair of set-theoretic functions which satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated, and more intelligent strategies for making decisions are entailed. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. Then the method is applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on the global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
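The "combination of multiple bodies of evidence" step can be illustrated with the classical Dempster rule over basic probability assignments on subsets of the class set; the interval-valued formulation of the paper is more general, so the sketch below is only the conventional special case, with hypothetical masses for two sources.

from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence from two sources about classes {forest, water, urban}.
theta = frozenset({'forest', 'water', 'urban'})
m1 = {frozenset({'forest'}): 0.6, frozenset({'forest', 'water'}): 0.3, theta: 0.1}
m2 = {frozenset({'water'}): 0.5, theta: 0.5}
print(dempster_combine(m1, m2))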
FUNSTAT and statistical image representations
NASA Technical Reports Server (NTRS)
Parzen, E.
1983-01-01
General ideas of functional statistical inference for the analysis of one and two samples, univariate and bivariate, are outlined. The ONESAM program is applied to analyze the univariate probability distributions of multi-spectral image data.
Ghuman, Marcus; Bates, Ngaire; Moore, Helen
2012-06-08
To review local CT colonography (CTC) data with regard to demographics and both colonic and extracolonic findings, and to improve performance by identifying any deficiencies that need to be addressed, in relation to a literature review of the current status of CTC. A retrospective observational analysis was conducted of all patients undergoing CTC over the 3-year period from 9 August 2007 to 12 August 2010 (n=302) at a single site: Greenlane Hospital (ADHB outpatients). In total, 12 of the 302 patients (4%) were found to have cancer, 24 (8%) polyps, and 111 (37%) diverticular disease. 21 patients (7%) were referred on for optical colonoscopy following their CTC, and 34 patients (11%) had follow-up recommendations resulting from extracolonic findings, including 24 recommendations for further imaging. A trend towards under-representation of both Māori and Pacific Island groups undergoing CTC, and over-representation of Asians, was identified. This study has reported on the experience of CT colonography at Greenlane Hospital over a 3-year period. It has provided important local data on rates of detection of colonic pathology. Māori and Pacific Islanders need encouragement from primary health practitioners to present for bowel examination.
Bastien, Olivier; Ortet, Philippe; Roy, Sylvaine; Maréchal, Eric
2005-03-10
Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic reconstruction. We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionary consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology, provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.
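The Z-scores underlying the CSHP come from the usual Monte Carlo recipe: score the real pair-wise alignment, re-score one sequence against many shuffled versions of the other, and express the real score in units of the shuffled-score spread. A schematic sketch with a placeholder scoring function (a genuine pair-wise alignment scorer such as Smith-Waterman would be substituted):

import random
import statistics

def zscore(seq_a, seq_b, score_fn, n_shuffles=100, seed=0):
    """Z-score of a pair-wise alignment score against scores of shuffled sequences."""
    rng = random.Random(seed)
    real = score_fn(seq_a, seq_b)
    shuffled_scores = []
    for _ in range(n_shuffles):
        letters = list(seq_b)
        rng.shuffle(letters)
        shuffled_scores.append(score_fn(seq_a, "".join(letters)))
    mu = statistics.mean(shuffled_scores)
    sigma = statistics.stdev(shuffled_scores)
    return (real - mu) / sigma

# Placeholder scorer (count of identical positions) so the sketch is runnable;
# it stands in for a real pair-wise alignment score.
toy_score = lambda a, b: sum(x == y for x, y in zip(a, b))
print(zscore("MKVLAAGT", "MKILSAGT", toy_score))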
Tomographic Imaging of a Forested Area By Airborne Multi-Baseline P-Band SAR.
Frey, Othmar; Morsdorf, Felix; Meier, Erich
2008-09-24
In recent years, various attempts have been undertaken to obtain information about the structure of forested areas from multi-baseline synthetic aperture radar data. Tomographic processing of such data has been demonstrated for airborne L-band data, but the quality of the focused tomographic images is limited by several factors. In particular, the common Fourier-based focusing methods are susceptible to irregular and sparse sampling, two problems that are unavoidable in the case of multi-pass, multi-baseline SAR data acquired by an airborne system. In this paper, a tomographic focusing method based on the time-domain back-projection algorithm is proposed, which maintains the geometric relationship between the original sensor positions and the imaged target and is therefore able to cope with irregular sampling without introducing any approximations with respect to the geometry. The tomographic focusing quality is assessed by analysing the impulse response of simulated point targets and an in-scene corner reflector. In particular, several tomographic slices of a volume representing a forested area are given. The respective P-band tomographic data set, consisting of eleven flight tracks, was acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR).
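In essence, the time-domain back-projection approach focuses the tomographic (elevation) dimension by coherently summing, for each voxel, the already focused track signals after compensating the exact sensor-to-voxel range, which is why irregular track spacing poses no geometric problem. The single-voxel sketch below uses invented track positions and an ideal point target, not the E-SAR configuration.

import numpy as np

def backproject_voxel(track_positions, track_signals, voxel, wavelength):
    """Coherent sum of per-track contributions for one voxel.

    track_positions: (K, 3) sensor positions, one per flight track
    track_signals:   length-K complex samples, assumed already range-compressed,
                     azimuth-focused and interpolated to the voxel's range
    """
    ranges = np.linalg.norm(track_positions - voxel, axis=1)
    phase = np.exp(1j * 4.0 * np.pi * ranges / wavelength)   # two-way propagation phase
    return np.sum(track_signals * np.conj(phase))

rng = np.random.default_rng(4)
# Eleven tracks with slightly irregular vertical baselines (made-up geometry).
tracks = np.column_stack([np.zeros(11), np.zeros(11),
                          3000.0 + 20.0 * np.arange(11) + rng.normal(0.0, 2.0, 11)])
voxel = np.array([0.0, 5000.0, 15.0])
wavelength = 0.69                                   # roughly P-band
signals = np.exp(1j * 4.0 * np.pi * np.linalg.norm(tracks - voxel, axis=1) / wavelength)
print(abs(backproject_voxel(tracks, signals, voxel, wavelength)))   # ~11, coherent peak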
Tracing the footsteps of Sherlock Holmes: cognitive representations of hypothesis testing.
Van Wallendael, L R; Hastie, R
1990-05-01
A well-documented phenomenon in opinion-revision literature is subjects' failure to revise probability estimates for an exhaustive set of mutually exclusive hypotheses in a complementary manner. However, prior research has not addressed the question of whether such behavior simply represents a misunderstanding of mathematical rules, or whether it is a consequence of a cognitive representation of hypotheses that is at odds with the Bayesian notion of a set relationship. Two alternatives to the Bayesian representation, a belief system (Shafer, 1976) and a system of independent hypotheses, were proposed, and three experiments were conducted to examine cognitive representations of hypothesis sets in the testing of multiple competing hypotheses. Subjects were given brief murder mysteries to solve and allowed to request various types of information about the suspects; after having received each new piece of information, subjects rated each suspect's probability of being the murderer. Presence and timing of suspect eliminations were varied in the first two experiments; the final experiment involved the varying of percentages of clues that referred to more than one suspect (for example, all of the female suspects). The noncomplementarity of opinion revisions remained a strong phenomenon in all conditions. Information-search data refuted the idea that subjects represented hypotheses as a Bayesian set; further study of the independent hypotheses theory and Shaferian belief functions as descriptive models is encouraged.
Forced to remember: when memory is biased by salient information.
Santangelo, Valerio
2015-04-15
Recent decades have seen rapidly growing attempts to understand the key factors involved in the internal memory representation of the external world. Visual salience has been found to provide a major contribution in predicting the probability for an item/object embedded in a complex setting (i.e., a natural scene) to be encoded and then remembered later on. Here I review the existing literature highlighting the impact of perceptual-related salience (based on low-level sensory features) and semantics-related salience (based on high-level knowledge) on short-term memory representation, along with the neural mechanisms underpinning the interplay between these factors. The available evidence reveals that both perceptual- and semantics-related factors affect attention selection mechanisms during the encoding of natural scenes. By biasing the internal memory representation, both perceptual and semantic factors increase the probability of remembering high-saliency items to the detriment of low-saliency ones. The available evidence also highlights an interplay between these factors, with a reduced impact of perceptual-related salience in biasing memory representation as a function of the increasing availability of semantics-related salient information. The neural mechanisms underpinning this interplay involve the activation of different portions of the frontoparietal attention control network. Ventral regions support the assignment of selection/encoding priorities based on high-level semantics, while the involvement of dorsal regions reflects priority assignment based on low-level sensory features. Copyright © 2015 Elsevier B.V. All rights reserved.
X-ray computed tomography for virtually unrolling damaged papyri
NASA Astrophysics Data System (ADS)
Allegra, Dario; Ciliberto, Enrico; Ciliberto, Paolo; Petrillo, Giuseppe; Stanco, Filippo; Trombatore, Claudia
2016-03-01
The regular format for ancient works of literature was the papyrus roll. Recently, many efforts have been made to perform virtual restoration of this archeological artifact. The case of ancient rolled papyri is indeed intriguing: old papyri are substrates of very important historical information, the use of papyrus probably dating back to the Pre-Dynastic Period. Papyrus degradation is often so severe that physical unrolling is sometimes absolutely impossible. In this paper, the authors describe their effort in establishing a new virtual restoration methodology based on software manipulation of X-ray tomographic images. A realistic model, obtained by painting a hieroglyph inscription of Thutmosis III on a papyrus substrate made by the original method described by Plinius the Elder, with pigments and binders compatible with Egyptian use (ochers with natural glue), was made for the X-ray investigation. A GE Optima 660 64-slice scanner was used to obtain a stack of tomographic slices of the rolled model. Each slice appears as a spiral. The intensity variations along the cross-section result from ink on the papyrus. The files were processed with original software written in the MATLAB high-level language, and the final result was quite similar to a radiograph of the physically unrolled sheet.
Lau, S F; Wolschrijn, C F; Hazewinkel, H A W; Siebelt, M; Voorhout, G
2013-09-01
Medial coronoid disease (MCD) encompasses lesions of the entire medial coronoid process (MCP), both of the articular cartilage and the subchondral bone. To detect the earliest signs of MCD, radiography and computed tomography were used to monitor the development of MCD in 14 Labrador retrievers, from 6 to 7 weeks of age until euthanasia. The definitive diagnosis of MCD was based on necropsy and micro-computed tomography findings. The frequency of MCD in the dogs studied was 50%. Radiographic findings did not provide evidence of MCD, ulnar subtrochlear sclerosis or blunting of the cranial edge of the MCP. Computed tomography was more sensitive (30.8%) than radiography (0%) in detecting early MCD, with the earliest signs detectable at 14 weeks of age. A combination of the necropsy and micro-computed tomography findings of the MCP showed that MCD was manifested as a lesion of only the subchondral bone in dogs <18 weeks of age. In all dogs (affected and unaffected), there was close contact between the base of the MCP and the proximal radial head in the congruent joints. Computed tomography and micro-computed tomography findings indicated that the lesions of MCD probably originated at the base of the MCP. Copyright © 2013 Elsevier Ltd. All rights reserved.
Offodile, Anaeze C; Chatterjee, Abhishek; Vallejo, Sergio; Fisher, Carla S; Tchou, Julia C; Guo, Lifei
2015-04-01
Computed tomographic angiography is a diagnostic tool increasingly used for preoperative vascular mapping in abdomen-based perforator flap breast reconstruction. This study compared the use of computed tomographic angiography and the conventional practice of Doppler ultrasonography only in postmastectomy reconstruction using a cost-utility model. Following a comprehensive literature review, a decision analytic model was created using the three most clinically relevant health outcomes in free autologous breast reconstruction with computed tomographic angiography versus Doppler ultrasonography only. Cost and utility estimates for each health outcome were used to derive the quality-adjusted life-years and incremental cost-utility ratio. One-way sensitivity analysis was performed to scrutinize the robustness of the authors' results. Six studies and 782 patients were identified. Cost-utility analysis revealed a baseline cost savings of $3179 and a gain in quality-adjusted life-years of 0.25. This yielded an incremental cost-utility ratio of -$12,716, implying a dominant choice favoring preoperative computed tomographic angiography. Sensitivity analysis revealed that computed tomographic angiography was costlier when the operative time difference between the two techniques was less than 21.3 minutes. However, the clinical advantage of computed tomographic angiography over Doppler ultrasonography only showed that computed tomographic angiography would still remain the cost-effective option even if it offered no additional operating time advantage. The authors' results show that computed tomographic angiography is a cost-effective technology for identifying lower abdominal perforators for autologous breast reconstruction. Although the perfect study would be a randomized controlled trial of the two approaches with true cost accrual, the authors' results represent the best available evidence.
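As a minimal arithmetic check, the figures reported above combine into the incremental cost-utility ratio as follows; the variable names are illustrative and the numbers are simply those quoted in the abstract.

```python
# Minimal sketch of the incremental cost-utility arithmetic reported above.
# The figures are taken from the abstract; variable names are illustrative.

delta_cost = -3179.0   # baseline cost savings of CTA vs. Doppler-only (negative = cheaper)
delta_qaly = 0.25      # gain in quality-adjusted life-years with CTA

icur = delta_cost / delta_qaly  # incremental cost-utility ratio ($ per QALY)
print(f"ICUR = ${icur:,.0f} per QALY")  # -> ICUR = $-12,716 per QALY (dominant: cheaper and better)
```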
Tentori, Katya; Chater, Nick; Crupi, Vincenzo
2016-04-01
Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability. Copyright © 2015 Cognitive Science Society, Inc.
Probability shapes perceptual precision: A study in orientation estimation.
Jabar, Syaheed B; Anderson, Britt
2015-12-01
Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention." (c) 2015 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. Through representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
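For readers unfamiliar with the technique, the sketch below shows a bootstrap particle filter on a one-dimensional toy system: particles are propagated through assumed nonlinear dynamics, reweighted by a Gaussian likelihood, and resampled. The dynamics, noise levels, and measurements are invented stand-ins, not the orbital model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(x, dt=1.0):
    # toy nonlinear dynamics standing in for orbit propagation (illustrative only)
    return x + dt * np.sin(x) + rng.normal(0.0, 0.05, size=x.shape)

def likelihood(z, x, sigma=0.2):
    # Gaussian measurement model
    return np.exp(-0.5 * ((z - x) / sigma) ** 2)

n_particles = 5000
particles = rng.normal(1.0, 0.5, n_particles)         # samples approximating the initial PDF
weights = np.full(n_particles, 1.0 / n_particles)

for z in [1.2, 1.5, 1.9]:                             # hypothetical measurements
    particles = propagate(particles)                  # Monte Carlo propagation of the full PDF
    weights *= likelihood(z, particles)
    weights /= weights.sum()
    # systematic resampling to avoid weight degeneracy
    positions = (np.arange(n_particles) + rng.random()) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n_particles - 1)
    particles = particles[idx]
    weights.fill(1.0 / n_particles)

print("posterior mean:", particles.mean(), "posterior std:", particles.std())
```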
Development and neurophysiology of mentalizing.
Frith, Uta; Frith, Christopher D
2003-01-01
The mentalizing (theory of mind) system of the brain is probably in operation from ca. 18 months of age, allowing implicit attribution of intentions and other mental states. Between the ages of 4 and 6 years explicit mentalizing becomes possible, and from this age children are able to explain the misleading reasons that have given rise to a false belief. Neuroimaging studies of mentalizing have so far only been carried out in adults. They reveal a system with three components consistently activated during both implicit and explicit mentalizing tasks: medial prefrontal cortex (MPFC), temporal poles and posterior superior temporal sulcus (STS). The functions of these components can be elucidated, to some extent, from their role in other tasks used in neuroimaging studies. Thus, the MPFC region is probably the basis of the decoupling mechanism that distinguishes mental state representations from physical state representations; the STS region is probably the basis of the detection of agency, and the temporal poles might be involved in access to social knowledge in the form of scripts. The activation of these components in concert appears to be critical to mentalizing. PMID:12689373
Fife organizes synaptic vesicles and calcium channels for high-probability neurotransmitter release
Rao, Monica; Ukken, Fiona
2017-01-01
The strength of synaptic connections varies significantly and is a key determinant of communication within neural circuits. Mechanistic insight into presynaptic factors that establish and modulate neurotransmitter release properties is crucial to understanding synapse strength, circuit function, and neural plasticity. We previously identified Drosophila Piccolo-RIM-related Fife, which regulates neurotransmission and motor behavior through an unknown mechanism. Here, we demonstrate that Fife localizes and interacts with RIM at the active zone cytomatrix to promote neurotransmitter release. Loss of Fife results in the severe disruption of active zone cytomatrix architecture and molecular organization. Through electron tomographic and electrophysiological studies, we find a decrease in the accumulation of release-ready synaptic vesicles and their release probability caused by impaired coupling to Ca2+ channels. Finally, we find that Fife is essential for the homeostatic modulation of neurotransmission. We propose that Fife organizes active zones to create synaptic vesicle release sites within nanometer distance of Ca2+ channel clusters for reliable and modifiable neurotransmitter release. PMID:27998991
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Dalton, Kevin Patrick
2015-01-01
Purpose: The purpose of this study was to examine the influence of phonotactic probability on sublexical (phonological) and lexical representations in 3-year-olds who had a history of being late talkers in comparison with their peers with typical language development. Method: Ten 3-year-olds who were late talkers and 10 age-matched typically…
NASA Astrophysics Data System (ADS)
Youssof, Mohammad; Yuan, Xiaohui; Tilmann, Frederik; Heit, Benjamin; Weber, Michael; Jokat, Wilfried; Geissler, Wolfram; Laske, Gabi; Eken, Tuna; Lushetile, Bufelo
2015-04-01
We present a 3D high-resolution seismic model of the southwestern Africa region from teleseismic tomographic inversion of the P- and S-wave data recorded by the amphibious WALPASS network. We used 40 temporary stations in southwestern Africa with records for a period of 2 years (the OBS operated for 1 year), between November 2010 and November 2012. The array covers a surface area of approximately 600 by 1200 km and is located at the intersection of the Walvis Ridge, the continental margin of northern Namibia, and extends into the Congo craton. Major questions that need to be understood are related to the impact of asthenosphere-lithosphere interaction (plume-related features) on the continental areas and the evolution of the continent-ocean transition that followed the break-up of Gondwana. This process is supposed to leave its imprint as a distinct seismic signature in the upper mantle. Utilizing 3D sensitivity kernels, we invert traveltime residuals to image velocity perturbations in the upper mantle down to 1000 km depth. To test the robustness of our tomographic image we employed various resolution tests which allow us to evaluate the extent of smearing effects and help define the optimum inversion parameters (i.e., damping and smoothness) used during the regularization of the inversion process. The resolution assessment procedure also includes a detailed investigation of the effect of the crustal corrections on the final images, which strongly influenced the resolution for the mantle structures. We present detailed tomographic images of the oceanic and continental lithosphere beneath the study area. The fast lithospheric keel of the Congo Craton reaches a depth of ~250 km. Relatively low velocity perturbations have been imaged within the orogenic Damara Belt down to a depth of ~150 km, probably related to surficial suture zones and the presence of fertile material. A shallower depth extent of the lithospheric plate of ~100 km was observed beneath the ocean, consistent with plate-cooling models. In addition to tomographic images, the seismic anisotropy measurements within the upper mantle inferred from teleseismic shear waves indicate a predominant NE-SW orientation for most of the land stations. Current results indicate no evidence for a consistent signature of a fossil plume.
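The inversion step described above can be illustrated, in a highly simplified form, by a linearized travel-time tomography with damping (Tikhonov) regularization; the matrix of ray lengths, the anomaly, and the damping value below are synthetic stand-ins, and the study's 3D sensitivity kernels and smoothness constraints are not reproduced.

```python
import numpy as np

# Illustrative linearized travel-time tomography: d = G m, solved with damping.
rng = np.random.default_rng(1)
n_rays, n_cells = 200, 50
G = rng.random((n_rays, n_cells))              # ray path lengths through model cells (synthetic)
m_true = np.zeros(n_cells)
m_true[20:25] = 0.02                           # slowness perturbation anomaly
d = G @ m_true + rng.normal(0, 1e-3, n_rays)   # travel-time residuals with noise

damping = 0.5
A = np.vstack([G, damping * np.eye(n_cells)])  # augmented system for damped least squares
b = np.concatenate([d, np.zeros(n_cells)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("recovered anomaly mean:", m_est[20:25].mean())
```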
Exploring the Structure of Spatial Representations
Madl, Tamas; Franklin, Stan; Chen, Ke; Trappl, Robert; Montaldi, Daniela
2016-01-01
It has been suggested that the map-like representations that support human spatial memory are fragmented into sub-maps with local reference frames, rather than being unitary and global. However, the principles underlying the structure of these ‘cognitive maps’ are not well understood. We propose that the structure of the representations of navigation space arises from clustering within individual psychological spaces, i.e. from a process that groups together objects that are close in these spaces. Building on the ideas of representational geometry and similarity-based representations in cognitive science, we formulate methods for learning dissimilarity functions (metrics) characterizing participants’ psychological spaces. We show that these learned metrics, together with a probabilistic model of clustering based on the Bayesian cognition paradigm, allow prediction of participants’ cognitive map structures in advance. Apart from insights into spatial representation learning in human cognition, these methods could facilitate novel computational tools capable of using human-like spatial concepts. We also compare several features influencing spatial memory structure, including spatial distance, visual similarity and functional similarity, and report strong correlations between these dimensions and the grouping probability in participants’ spatial representations, providing further support for clustering in spatial memory. PMID:27347681
Tomographic Reconstruction from a Few Views: A Multi-Marginal Optimal Transport Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abraham, I., E-mail: isabelle.abraham@cea.fr; Abraham, R., E-mail: romain.abraham@univ-orleans.fr; Bergounioux, M., E-mail: maitine.bergounioux@univ-orleans.fr
2017-02-15
In this article, we focus on tomographic reconstruction. The problem is to determine the shape of the interior interface using a tomographic approach while very few X-ray radiographs are performed. We use a multi-marginal optimal transport approach. Preliminary numerical results are presented.
NASA Astrophysics Data System (ADS)
Borgherini, M.; Garbin, E.
2011-06-01
Eight centuries of the history of art and of Padua's scientific and technological culture deposited on the stones and frescoes of its Palace of Law ("Palazzo della Ragione") make this great work of urban architecture a part of the city's collective identity. This "palimpsest", legible only to a restricted circle of specialists, should be accessible to a wider public interested in understanding this object, a symbol of local culture. The project planned for interactive exploration on the web is a series of digital models, employing tomographic-endoscopic visualizations and, in the future, multi-resolution images. The various models devised allow the visitor to superimpose the Palace's current conditions on the various transformations undergone over the centuries. Similarly, comparisons can be made between the astrological fresco cycle and maps of the heavens, cosmological hypotheses, ancient and contemporary astrological treatises, and the related exchange of knowledge between the Orient and the Occident.
Computer-aided detection of initial polyp candidates with level set-based adaptive convolution
NASA Astrophysics Data System (ADS)
Zhu, Hongbin; Duan, Chaijie; Liang, Zhengrong
2009-02-01
In order to eliminate or weaken the interference between different topological structures on the colon wall, adaptive and normalized convolution methods were used to compute the first and second order spatial derivatives of computed tomographic colonography images, which is the beginning of various geometric analyses. However, the performance of such methods greatly depends on the single-layer representation of the colon wall, which is called the starting layer (SL) in the following text. In this paper, we introduce a level set-based adaptive convolution (LSAC) method to compute the spatial derivatives, in which the level set method is employed to determine a more reasonable SL. The LSAC was applied to a computer-aided detection (CAD) scheme to detect the initial polyp candidates, and experiments showed that it benefits the CAD scheme in both the detection sensitivity and specificity as compared to our previous work.
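As a point of reference for the derivative computation mentioned above, the sketch below applies plain (non-adaptive) Gaussian derivative filters to a synthetic volume; the paper's LSAC method instead adapts the convolution around a level set-derived starting layer, which is not shown here.

```python
import numpy as np
from scipy import ndimage

# Plain Gaussian derivative filtering as a baseline for first- and second-order
# spatial derivatives; the volume below is a synthetic stand-in for CT colonography data.
volume = np.random.default_rng(2).random((64, 64, 64))
sigma = 1.5

d_dz = ndimage.gaussian_filter(volume, sigma=sigma, order=(1, 0, 0))    # first derivative along axis 0
d2_dz2 = ndimage.gaussian_filter(volume, sigma=sigma, order=(2, 0, 0))  # second derivative along axis 0

print(d_dz.shape, d2_dz2.shape)
```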
Rolland, N; Larson, D J; Geiser, B P; Duguay, S; Vurpillot, F; Blavette, D
2015-12-01
An analytical model describing the field evaporation dynamics of a tip made of a thin layer deposited on a substrate is presented in this paper. The difference in evaporation field between the materials is taken into account in this approach in which the tip shape is modeled at a mesoscopic scale. It was found that the non-existence of sharp edge on the surface is a sufficient condition to derive the morphological evolution during successive evaporation of the layers. This modeling gives an instantaneous and smooth analytical representation of the surface that shows good agreement with finite difference simulations results, and a specific regime of evaporation was highlighted when the substrate is a low evaporation field phase. In addition, the model makes it possible to calculate theoretically the tip analyzed volume, potentially opening up new horizons for atom probe tomographic reconstruction. Copyright © 2015 Elsevier B.V. All rights reserved.
Shrink-wrapped isosurface from cross sectional images
Choi, Y. K.; Hahn, J. K.
2010-01-01
This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross sectional images. Differently from the novel Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density point (isopoint) first. After building a coarse initial mesh approximating the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since it is possible to utilize lots of additional isopoints during the surface reconstruction process by extending the adjacency definition, theoretically the resulting surface can be better in quality than that of the MC algorithm. According to experiments, it proved to be very robust and efficient for isosurface reconstruction from cross sectional images. PMID:20703361
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Rui; Singh, Sudhanshu S.; Chawla, Nikhilesh
2016-08-15
We present a robust method for automating removal of “segregation artifacts” in segmented tomographic images of three-dimensional heterogeneous microstructures. The objective of this method is to accurately identify and separate discrete features in composite materials where limitations in imaging resolution lead to spurious connections near close contacts. The method utilizes betweenness centrality, a measure of the importance of a node in the connectivity of a graph network, to identify voxels that create artificial bridges between otherwise distinct geometric features. To facilitate automation of the algorithm, we develop a relative centrality metric to allow for the selection of a threshold criterion that is not sensitive to inclusion size or shape. As a demonstration of the effectiveness of the algorithm, we report on the segmentation of a 3D reconstruction of a SiC particle reinforced aluminum alloy, imaged by X-ray synchrotron tomography.
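A toy illustration of the centrality idea is sketched below: a graph with two densely connected features joined by a single spurious bridge node, flagged by its betweenness centrality. The voxel-graph construction and the exact relative-centrality normalization of the paper are not reproduced; the threshold and normalization here are assumptions.

```python
import networkx as nx

# Two small "features" connected by a spurious bridging node (voxel 6).
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 0),        # feature A (densely connected)
                  (3, 4), (4, 5), (5, 3),        # feature B
                  (2, 6), (6, 3)])               # voxel 6 artificially bridges A and B

bc = nx.betweenness_centrality(G)
rel = {n: v / max(bc.values()) for n, v in bc.items()}  # simple relative metric (assumption)
bridges = [n for n, v in rel.items() if v > 0.8]         # assumed threshold
print("candidate bridging voxels:", bridges)             # voxel 6 (and its neighbors) stand out
```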
New signal processing technique for density profile reconstruction using reflectometry.
Clairet, F; Ricaud, B; Briolle, F; Heuraux, S; Bottereau, C
2011-08-01
Reflectometry profile measurement requires an accurate determination of the plasma reflected signal. Along with a good resolution and a high signal-to-noise ratio of the phase measurement, adequate data analysis is required. A new data processing method based on a time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For the standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10^16 m^-1. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection processes due to the proximity of the plasma.
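A plain short-time Fourier spectrogram, as sketched below on a synthetic chirp-like signal, gives a basic time-frequency picture of such data; the tomographic time-frequency representation used in the paper is more elaborate and is not reproduced here.

```python
import numpy as np
from scipy import signal

# Basic time-frequency representation of a synthetic chirp plus noise.
fs = 1e6                                   # hypothetical sampling rate
t = np.arange(0, 0.01, 1 / fs)
x = signal.chirp(t, f0=1e3, f1=2e5, t1=t[-1]) \
    + 0.1 * np.random.default_rng(3).standard_normal(t.size)

f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=192)
print(Sxx.shape)  # (frequencies, time segments): ridges can be inspected to isolate components
```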
An evaluation of the effectiveness of adaptive histogram equalization for contrast enhancement.
Zimmerman, J B; Pizer, S M; Staab, E V; Perry, J R; McCartney, W; Brenton, B C
1988-01-01
Adaptive histogram equalization (AHE) and intensity windowing have been compared using psychophysical observer studies. Experienced radiologists were shown clinical CT (computerized tomographic) images of the chest. Into some of the images, appropriate artificial lesions were introduced; the physicians were then shown the images processed with both AHE and intensity windowing. They were asked to assess the probability that a given image contained the artificial lesion, and their accuracy was measured. The results of these experiments show that for this particular diagnostic task, there was no significant difference in the ability of the two methods to depict luminance contrast; thus, further evaluation of AHE using controlled clinical trials is indicated.
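The two enhancement methods compared above can be tried side by side on a synthetic image, as in the sketch below; skimage's CLAHE implementation is used as a stand-in for the AHE variant of the study, and the window limits are hypothetical.

```python
import numpy as np
from skimage import exposure

rng = np.random.default_rng(4)
img = rng.normal(0.5, 0.15, (256, 256)).clip(0, 1)   # stand-in for a CT slice scaled to [0, 1]

lo, hi = 0.4, 0.7                                     # hypothetical intensity window
windowed = np.clip((img - lo) / (hi - lo), 0, 1)      # simple intensity windowing

ahe = exposure.equalize_adapthist(img, clip_limit=0.03)  # adaptive (contrast-limited) equalization
print(windowed.mean(), ahe.mean())
```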
Rudebeck, Peter H; Saunders, Richard C; Lundgren, Dawn A; Murray, Elisabeth A
2017-08-30
Advantageous foraging choices benefit from an estimation of two aspects of a resource's value: its current desirability and availability. Both orbitofrontal and ventrolateral prefrontal areas contribute to updating these valuations, but their precise roles remain unclear. To explore their specializations, we trained macaque monkeys on two tasks: one required updating representations of a predicted outcome's desirability, as adjusted by selective satiation, and the other required updating representations of an outcome's availability, as indexed by its probability. We evaluated performance on both tasks in three groups of monkeys: unoperated controls and those with selective, fiber-sparing lesions of either the OFC or VLPFC. Representations that depend on the VLPFC but not the OFC play a necessary role in choices based on outcome availability; in contrast, representations that depend on the OFC but not the VLPFC play a necessary role in choices based on outcome desirability. Copyright © 2017 Elsevier Inc. All rights reserved.
Prevention of Unwanted Free-Declaration of Static Obstacles in Probability Occupancy Grids
NASA Astrophysics Data System (ADS)
Krause, Stefan; Scholz, M.; Hohmann, R.
2017-10-01
Obstacle detection and avoidance are major research fields in unmanned aviation. Map-based obstacle detection approaches often use discrete world representations such as probabilistic grid maps to fuse incremental environment data from different views or sensors to build a comprehensive representation. The integration of continuous measurements into a discrete representation can result in rounding errors which, in turn, lead to differences between the artificial model and the real environment. The cause of these deviations is a low spatial resolution of the world representation in comparison to the sensor data used. Differences between artificial representations which are used for path planning or obstacle avoidance and the real world can lead to unexpected behavior up to collisions with unmapped obstacles. This paper presents three approaches to the treatment of errors that can occur during the integration of continuous laser measurements into the discrete probabilistic grid. Further, the quality of the error prevention and the processing performance are evaluated with real sensor data.
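A minimal log-odds occupancy-grid update for a single range beam, as sketched below, shows where discretizing a continuous measurement can produce the kind of rounding effects discussed above; the grid resolution, sensor probabilities, and the simple sub-cell stepping scheme are illustrative choices, not the paper's algorithms.

```python
import numpy as np

res = 0.5                                   # cell size [m]; coarse relative to the sensor
grid = np.zeros((40, 40))                   # log-odds, 0 = unknown (p = 0.5)
l_occ, l_free = np.log(0.7 / 0.3), np.log(0.3 / 0.7)

def integrate_beam(grid, origin, angle, rng_m):
    # step along the beam in sub-cell increments and mark traversed cells as free
    n_steps = int(rng_m / (res * 0.5))
    for k in range(n_steps):
        d = k * res * 0.5
        cx = int((origin[0] + d * np.cos(angle)) / res)
        cy = int((origin[1] + d * np.sin(angle)) / res)
        grid[cx, cy] += l_free              # note: discretization may touch cells near the hit
    hx = int((origin[0] + rng_m * np.cos(angle)) / res)
    hy = int((origin[1] + rng_m * np.sin(angle)) / res)
    grid[hx, hy] += l_occ                   # the hit cell is observed occupied

integrate_beam(grid, origin=(10.0, 10.0), angle=0.3, rng_m=6.2)
prob = 1.0 - 1.0 / (1.0 + np.exp(grid))     # convert log-odds back to occupancy probability
print(prob.max(), prob.min())
```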
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
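A generic Monte Carlo illustration of a delayed-failure WL/SL calculation is sketched below. The precursor-time distributions, the delay times, and the event taken to define loss of assured safety (a strong link failing before any weak link) are all assumptions made for this sketch, not the report's definitions or data.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# assumed precursor-time distributions (Weibull shapes/scales chosen arbitrarily)
wl_precursor = 5.0 * rng.weibull(2.0, size=(n, 2))   # two weak links
sl_precursor = 8.0 * rng.weibull(2.0, size=(n, 2))   # two strong links

wl_fail = wl_precursor + 0.5                         # constant precursor-to-failure delays (assumed)
sl_fail = sl_precursor + 1.0

# assumed failure event: some strong link fails before any weak link has failed
ploas_est = np.mean(sl_fail.min(axis=1) < wl_fail.min(axis=1))
print(f"estimated PLOAS ~ {ploas_est:.4f}")
```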
Fuzzy rationality and parameter elicitation in decision analysis
NASA Astrophysics Data System (ADS)
Nikolova, Natalia D.; Tenekedjiev, Kiril I.
2010-07-01
It is widely recognised by decision analysts that real decision-makers always make estimates in an interval form. An overview of techniques to find an optimal alternative among alternatives with imprecise and interval probabilities is presented. Scalarisation methods are outlined as most appropriate. A proper continuation of such techniques is fuzzy rational (FR) decision analysis. A detailed representation of the elicitation process influenced by fuzzy rationality is given. The interval character of probabilities leads to the introduction of ribbon functions, whose general form and special cases are compared with p-boxes. As demonstrated, approximation of utilities in FR decision analysis does not depend on the probabilities, but the approximation of probabilities is dependent on preferences.
NASA Astrophysics Data System (ADS)
Eliaš, Peter; Frič, Roman
2017-12-01
Categorical approach to probability leads to better understanding of basic notions and constructions in generalized (fuzzy, operational, quantum) probability, where observables—dual notions to generalized random variables (statistical maps)—play a major role. First, to avoid inconsistencies, we introduce three categories L, S, and P, the objects and morphisms of which correspond to basic notions of fuzzy probability theory and operational probability theory, and describe their relationships. To illustrate the advantages of categorical approach, we show that two categorical constructions involving observables (related to the representation of generalized random variables via products, or smearing of sharp observables, respectively) can be described as factorizing a morphism into composition of two morphisms having desired properties. We close with a remark concerning products.
Display And Analysis Of Tomographic Volumetric Images Utilizing A Vari-Focal Mirror
NASA Astrophysics Data System (ADS)
Harris, L. D.; Camp, J. J.
1984-10-01
A system for the three-dimensional (3-D) display and analysis of stacks of tomographic images is described. The device utilizes the principle of a variable focal (vari-focal) length optical element in the form of an aluminized membrane stretched over a loudspeaker to generate a virtual 3-D image which is a visible representation of a 3-D array of image elements (voxels). The system displays 500,000 voxels per mirror cycle in a 3-D raster which appears continuous and demonstrates no distracting artifacts. The display is bright enough so that portions of the image can be dimmed without compromising the number of shades of gray. For x-ray CT, a displayed volume image looks like a 3-D radiograph which appears to be in the space directly behind the mirror. The viewer sees new views by moving his/her head from side to side or up and down. The system facilitates a variety of operator interactive functions which allow the user to point at objects within the image, control the orientation and location of brightened oblique planes within the volume, numerically dissect away selected image regions, and control intensity window levels. Photographs of example volume images displayed on the system illustrate, to the degree possible in a flat picture, the nature of displayed images and the capabilities of the system. Preliminary application of the display device to the analysis of volume reconstructions obtained from the Dynamic Spatial Reconstructor indicates significant utility of the system in selecting oblique sections and gaining an appreciation of the shape and dimensions of complex organ systems.
Allnutt, Thomas F.; McClanahan, Timothy R.; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J. M.; Tianarisoa, Tantely F.; Watson, Reg; Kremen, Claire
2012-01-01
The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method was drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss than the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the “strict protection” class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals. PMID:22359534
Bai, Shirong; Skodje, Rex T
2017-08-17
A new approach is presented for simulating the time-evolution of chemically reactive systems. This method provides an alternative to conventional modeling of mass-action kinetics that involves solving differential equations for the species concentrations. The method presented here avoids the need to solve the rate equations by switching to a representation based on chemical pathways. In the Sum Over Histories Representation (or SOHR) method, any time-dependent kinetic observable, such as concentration, is written as a linear combination of probabilities for chemical pathways leading to a desired outcome. In this work, an iterative method is introduced that allows the time-dependent pathway probabilities to be generated from a knowledge of the elementary rate coefficients, thus avoiding the pitfalls involved in solving the differential equations of kinetics. The method is successfully applied to the model Lotka-Volterra system and to a realistic H 2 combustion model.
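For contrast, the conventional mass-action treatment that SOHR avoids can be written down directly for the Lotka-Volterra model mentioned above; the rate coefficients below are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Conventional mass-action kinetics: integrate the rate equations for species concentrations.
k1, k2, k3 = 1.0, 0.5, 0.8   # arbitrary illustrative rate coefficients

def rates(t, y):
    x1, x2 = y
    return [k1 * x1 - k2 * x1 * x2,        # prey-like species
            k2 * x1 * x2 - k3 * x2]        # predator-like species

sol = solve_ivp(rates, t_span=(0.0, 20.0), y0=[2.0, 1.0], dense_output=True)
print(sol.y[:, -1])                        # concentrations at the final time
```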
System and method for generating motion corrected tomographic images
Gleason, Shaun S [Knoxville, TN; Goddard, Jr., James S.
2012-05-01
A method and related system for generating motion corrected tomographic images includes the steps of illuminating a region of interest (ROI) to be imaged, the ROI being part of an unrestrained live subject and having at least three spaced apart optical markers thereon. Simultaneous images of the markers are acquired from different angles by a first and a second camera. Motion data comprising the 3D position and orientation of the markers relative to an initial reference position are then calculated. Motion corrected tomographic data are then obtained from the ROI using the motion data, and motion corrected tomographic images are generated therefrom.
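The motion-data step can be illustrated with a standard Kabsch/SVD rigid-body fit from three marker positions, as sketched below; the stereo triangulation that produces the 3D coordinates, and the marker coordinates themselves, are hypothetical.

```python
import numpy as np

# Estimate the rigid 3D motion (rotation + translation) of markers between a reference
# frame and a later frame via the Kabsch/SVD method (illustrative marker positions).
ref = np.array([[0.0, 0.0, 0.0],
                [10.0, 0.0, 0.0],
                [0.0, 8.0, 0.0]])            # marker positions at the initial reference pose

def rigid_transform(p_ref, p_cur):
    c_ref, c_cur = p_ref.mean(axis=0), p_cur.mean(axis=0)
    H = (p_ref - c_ref).T @ (p_cur - c_cur)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_cur - R @ c_ref
    return R, t

# hypothetical later frame: markers rotated about z and shifted
theta = np.deg2rad(5.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
cur = ref @ Rz.T + np.array([1.0, -0.5, 0.2])

R_est, t_est = rigid_transform(ref, cur)
print(np.allclose(R_est, Rz), t_est)         # recovered motion would be used to correct the data
```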
Use of the Wigner representation in scattering problems
NASA Technical Reports Server (NTRS)
Bemler, E. A.
1975-01-01
The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed and results used in applications published elsewhere are derived. The form of the integral equation for scattering as well as its multiple scattering expansion in this representation are derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte-Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering by a bound two-body problem is treated. Simple expressions for single and double scattering contributions to total and differential cross-sections as well as for all necessary shadow corrections are obtained.
Ambient Noise Interferometry and Surface Wave Array Tomography: Promises and Problems
NASA Astrophysics Data System (ADS)
van der Hilst, R. D.; Yao, H.; de Hoop, M. V.; Campman, X.; Solna, K.
2008-12-01
In the late 1990s most seismologists would have frowned at the possibility of doing high-resolution surface wave tomography with noise instead of with signal associated with ballistic source-receiver propagation. Some may still do, but surface wave tomography with Green's functions estimated through ambient noise interferometry ('sourceless tomography') has transformed from a curiosity into one of the (almost) standard tools for analysis of data from dense seismograph arrays. Indeed, spectacular applications of ambient noise surface wave tomography have recently been published. For example, application to data from arrays in SE Tibet revealed structures in the crust beneath the Tibetan plateau that could not be resolved by traditional tomography (Yao et al., GJI, 2006, 2008). While the approach is conceptually simple, in application the proverbial devil is in the detail. Full reconstruction of the Green's function requires that the wavefields used are diffusive and that ambient noise energy is evenly distributed in the spatial dimensions of interest. In the field, these conditions are not usually met, and (frequency dependent) non-uniformity of the noise sources may lead to incomplete reconstruction of the Green's function. Furthermore, ambient noise distributions can be time-dependent, and seasonal variations have been documented. Naive use of empirical Green's functions may produce (unknown) bias in the tomographic models. The degrading effect on EGFs of the directionality of noise distribution poses particular challenges for applications beyond isotropic surface wave inversions, such as inversions for (azimuthal) anisotropy and attempts to use higher modes (or body waves). Incomplete Green's function reconstruction can (probably) not be prevented, but it may be possible to reduce the problem and - at least - understand the degree of incomplete reconstruction and prevent it from degrading the tomographic model. We will present examples of Rayleigh wave inversions and discuss strategies to mitigate effects of incomplete Green's function reconstruction on tomographic images.
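The basic interferometric idea can be illustrated with a toy cross-correlation of two synthetic noise records sharing a delayed common component, as below; a perfectly diffuse noise field is assumed, which, as discussed above, is rarely the case for real data.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(6)
fs, n = 100, 200_000                   # 100 Hz sampling, ~33 minutes of synthetic noise
lag_samples = 250                      # hypothetical 2.5 s propagation time between stations

source = rng.standard_normal(n)                                       # shared noise field
sta_a = source + 0.5 * rng.standard_normal(n)
sta_b = np.roll(source, lag_samples) + 0.5 * rng.standard_normal(n)   # delayed copy plus local noise

xcorr = signal.correlate(sta_a, sta_b, mode="full", method="fft")
lags = np.arange(-n + 1, n)
peak_lag = lags[np.argmax(np.abs(xcorr))]
print(f"recovered delay: {peak_lag / fs:.2f} s")   # about -2.5 s (sign depends on correlation convention)
```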
NASA Astrophysics Data System (ADS)
Díaz, D.; Maksymowicz, A.; Vargas, G.; Vera, E.; Contreras-Reyes, E.; Rebolledo, S.
2014-01-01
The crustal-scale west-vergent San Ramón thrust fault system at the foot of the main Andean Cordillera in central Chile is a geologically active structure with Quaternary manifestations of complex surface rupture along fault segments in the eastern border of Santiago city. From the comparison of geophysical and geological observations, we assessed the subsurface structure pattern affecting sedimentary cover and rock-substratum topography across fault scarps, which is critical for evaluating structural models and the associated seismic hazard along this kind of fault. We performed seismic profiles with an average length of 250 m, using an array of twenty-four geophones (GEODE), and 25 shots per profile, supporting high-resolution seismic tomography for interpreting impedance changes associated with deformed sedimentary cover. The recorded traveltime refractions and reflections were jointly inverted by using a 2-D tomographic approach, which resulted in variations across the scarp axis in both velocities and reflections interpreted as the sedimentary cover-rock substratum topography. Seismic anisotropy observed from tomographic profiles is consistent with sediment deformation triggered by west-vergent thrust tectonics along the fault. Electrical soundings crossing two fault scarps supported subsurface resistivity tomographic profiles, which revealed systematic differences between lower resistivity values in the hanging wall with respect to the footwall of the geological structure, clearly limited by well-defined east-dipping resistivity boundaries. The latter can be interpreted in terms of structurally driven fluid-content change between the hanging wall and the footwall of a permeability boundary associated with the San Ramón fault. The overall results are consistent with a west-vergent thrust structure dipping ∼55° E at subsurface levels in piedmont sediments, with local complexities probably associated with fault surface rupture propagation, fault-splay and fault segment transfer zones.
Optical tomographic memories: algorithms for the efficient information readout
NASA Astrophysics Data System (ADS)
Pantelic, Dejan V.
1990-07-01
Tomographic algorithms are modified in order to reconstruct the information previously stored by focusing laser radiation in a volume of photosensitive media. A priori information about the position of bits of information is used. 1. THE PRINCIPLES OF TOMOGRAPHIC MEMORIES Tomographic principles can be used to store and reconstruct the information artificially stored in a bulk of a photosensitive medium [1]. The information is stored by changing some characteristics of a memory material (e.g. refractive index). Radiation from the two independent light sources (e.g. lasers) is focused inside the memory material. In this way the intensity of the light is above the threshold only in the localized point where the light rays intersect. By scanning the material the information can be stored in binary or n-ary format. When the information is stored it can be read by tomographic methods. However the situation is quite different from the classical tomographic problem. Here a lot of a priori information is present regarding the positions of the bits of information, the profile representing a single bit and the mode of operation (binary or n-ary). 2. ALGORITHMS FOR THE READOUT OF THE TOMOGRAPHIC MEMORIES A priori information enables efficient reconstruction of the memory contents. In this paper a few methods for the information readout together with the simulation results will be presented. Special attention will be given to the noise considerations. Two different
Multiscale 3-D shape representation and segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2007-04-01
This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.
Coriton, Bruno; Frank, Jonathan H.
2016-02-16
In turbulent flows, the interaction between vorticity, ω, and strain rate, s, is considered a primary mechanism for the transfer of energy from large to small scales through vortex stretching. The ω-s coupling in turbulent jet flames is investigated using tomographic particle image velocimetry (TPIV). TPIV provides a direct measurement of the three-dimensional velocity field from which ω and s are determined. The effects of combustion and mean shear on the ω-s interaction are investigated in turbulent partially premixed methane/air jet flames with high and low probabilities of localized extinction as well as in a non-reacting isothermal air jet with Reynolds number of approximately 13,000. Results show that combustion causes structures of high vorticity and strain rate to agglomerate in highly correlated, elongated layers that span the height of the probe volume. In the non-reacting jet, these structures have a more varied morphology, greater fragmentation, and are not as well correlated. The enhanced spatiotemporal correlation of vorticity and strain rate in the stable flame results in stronger ω-s interaction characterized by increased enstrophy and strain-rate production rates via vortex stretching and straining, respectively. The probability of preferential local alignment between ω and the eigenvector of the intermediate principal strain rate, s_2, which is intrinsic to the ω-s coupling in turbulent flows, is larger in the flames and increases with the flame stability. The larger mean shear in the flame imposes a preferential orientation of ω and s_2 tangential to the shear layer. The extensive and compressive principal strain rates, s_1 and s_3, respectively, are preferentially oriented at approximately 45° with respect to the jet axis. As a result, the production rates of strain and vorticity tend to be dominated by instances in which ω is parallel to the s_1-s_2 plane and orthogonal to s_3.
Özarslan, Evren; Koay, Cheng Guan; Shepherd, Timothy M; Komlosh, Michal E; İrfanoğlu, M Okan; Pierpaoli, Carlo; Basser, Peter J
2013-09-01
Diffusion-weighted magnetic resonance (MR) signals reflect information about underlying tissue microstructure and cytoarchitecture. We propose a quantitative, efficient, and robust mathematical and physical framework for representing diffusion-weighted MR imaging (MRI) data obtained in "q-space," and the corresponding "mean apparent propagator (MAP)" describing molecular displacements in "r-space." We also define and map novel quantitative descriptors of diffusion that can be computed robustly using this MAP-MRI framework. We describe efficient analytical representation of the three-dimensional q-space MR signal in a series expansion of basis functions that accurately describes diffusion in many complex geometries. The lowest order term in this expansion contains a diffusion tensor that characterizes the Gaussian displacement distribution, equivalent to diffusion tensor MRI (DTI). Inclusion of higher order terms enables the reconstruction of the true average propagator whose projection onto the unit "displacement" sphere provides an orientational distribution function (ODF) that contains only the orientational dependence of the diffusion process. The representation characterizes novel features of diffusion anisotropy and the non-Gaussian character of the three-dimensional diffusion process. Other important measures this representation provides include the return-to-the-origin probability (RTOP), and its variants for diffusion in one- and two-dimensions-the return-to-the-plane probability (RTPP), and the return-to-the-axis probability (RTAP), respectively. These zero net displacement probabilities measure the mean compartment (pore) volume and cross-sectional area in distributions of isolated pores irrespective of the pore shape. MAP-MRI represents a new comprehensive framework to model the three-dimensional q-space signal and transform it into diffusion propagators. Experiments on an excised marmoset brain specimen demonstrate that MAP-MRI provides several novel, quantifiable parameters that capture previously obscured intrinsic features of nervous tissue microstructure. This should prove helpful for investigating the functional organization of normal and pathologic nervous tissue. Copyright © 2013 Elsevier Inc. All rights reserved.
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
Existing sparse-representation-based visual trackers tend to be time consuming and to lack robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. Firstly, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. Thus, the trained ELM classification function is able to remove most of the candidate samples related to background contents efficiently, thereby reducing the total computational cost of the subsequent sparse representation. Secondly, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities of being the target) of samples under the ELM classification function are used to construct a new manifold-learning constraint term in the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used for deriving the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be evaluated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
A tool for simulating collision probabilities of animals with marine renewable energy devices.
Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise
2017-01-01
The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details as well as potential issues and limitations in the application of the resulting probability distributions are highlighted.
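A toy version of such a transient, three-dimensional encounter simulation is sketched below. The geometry (a spherical "device" sweeping a circular path and a drifting spherical "animal") and every parameter value are invented for illustration and stand in for the full shape and motion models described in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def collision_probability(n_trials=20_000, dt=0.1, t_max=60.0,
                              device_radius=2.0, animal_radius=0.5,
                              kite_center=np.array([0.0, 0.0, -15.0]),
                              kite_orbit=8.0, kite_period=20.0,
                              current_speed=1.5):
        """Crude Monte Carlo estimate of an encounter probability (illustrative only).

        The device is reduced to a sphere whose centre sweeps a circle (a stand-in
        for a tethered kite trajectory); the animal is a sphere drifting with the
        current at a random lateral offset and depth. A real assessment would use
        the full transient 3-D shapes and behavioural motion models.
        """
        t = np.arange(0.0, t_max, dt)
        ang = 2 * np.pi * t / kite_period
        # device centre over time: a vertical circle across the current
        device = kite_center + np.stack([np.zeros_like(t),
                                         kite_orbit * np.cos(ang),
                                         kite_orbit * np.sin(ang)], axis=1)
        hits = 0
        for _ in range(n_trials):
            y0 = rng.uniform(-20, 20)            # lateral offset (m)
            z0 = rng.uniform(-30, 0)             # depth (m)
            x0 = -current_speed * t_max / 2      # start upstream of the device
            animal = np.stack([x0 + current_speed * t,
                               np.full_like(t, y0),
                               np.full_like(t, z0)], axis=1)
            dmin = np.min(np.linalg.norm(animal - device, axis=1))
            hits += dmin < (device_radius + animal_radius)
        return hits / n_trials

    print(f"encounter probability ~ {collision_probability():.4f}")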
Sell, Kathleen; Enzmann, Frieder; Kersten, Michael; Spangenberg, Erik
2013-01-02
We combined a noninvasive tomographic imaging technique with an invasive open-system core-flooding experiment and compared the results of the pre- and postflooded states of an experimental sandstone core sample from an ongoing field trial for carbon dioxide geosequestration. For the experiment, a rock core sample of 80 mL volume was taken from the Stuttgart Formation storage domain, a saline sandstone aquifer at 629 m depth, at the CCS research pilot plant Ketzin, Germany. Supercritical carbon dioxide and synthetic brine were injected under in situ reservoir p/T-conditions at an average flow rate of 0.1 mL/min for 256 h. X-ray computed microtomographic imaging was carried out before and after the core-flooding experiment at a spatial voxel resolution of 27 μm. No significant changes in microstructure, including porosity and pore-size distribution, were found at the tomographic imaging resolution, except for an increase in the heterogeneity of the clay-mineral distribution deposited in the pores. The digitized rock data were used as direct real-microstructure input to the GeoDict software package to simulate Navier-Stokes flow with a lattice Boltzmann equation solver. This procedure yielded 3D pressure and flow velocity fields, and revealed that the migration of clay particles decreased the permeability tensor, probably due to clogging of pore openings.
Heidelberg Retina Tomograph 3 machine learning classifiers for glaucoma detection
Townsend, K A; Wollstein, G; Danks, D; Sung, K R; Ishikawa, H; Kagemann, L; Gabriele, M L; Schuman, J S
2010-01-01
Aims: To assess performance of classifiers trained on Heidelberg Retina Tomograph 3 (HRT3) parameters for discriminating between healthy and glaucomatous eyes. Methods: Classifiers were trained using HRT3 parameters from 60 healthy subjects and 140 glaucomatous subjects. The classifiers were trained on all 95 variables and on smaller sets created with backward elimination. Seven types of classifiers, including Support Vector Machines with radial basis (SVM-radial) and Recursive Partitioning and Regression Trees (RPART), were trained on the parameters. The area under the ROC curve (AUC) was calculated for classifiers, individual parameters and HRT3 glaucoma probability scores (GPS). Classifier AUCs and leave-one-out accuracy were compared with the highest individual parameter and GPS AUCs and accuracies. Results: The highest AUC and accuracy for an individual parameter were 0.848 and 0.79, for vertical cup/disc ratio (vC/D). For GPS, global GPS performed best with AUC 0.829 and accuracy 0.78. SVM-radial with all parameters showed significant improvement over global GPS and vC/D with AUC 0.916 and accuracy 0.85. RPART with all parameters provided significant improvement over global GPS with AUC 0.899 and significant improvement over global GPS and vC/D with accuracy 0.875. Conclusions: Machine learning classifiers of HRT3 data provide significant enhancement over current methods for detection of glaucoma. PMID:18523087
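The best-performing configuration above (an SVM with a radial-basis kernel evaluated by leave-one-out cross-validation) can be reproduced generically along the following lines; the sketch uses scikit-learn and synthetic stand-in data, since the HRT3 parameter set itself is not available here.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score, accuracy_score

    # synthetic stand-in for 95 HRT3 parameters of 60 healthy / 140 glaucomatous eyes
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (60, 95)),
                   rng.normal(0.4, 1.0, (140, 95))])
    y = np.repeat([0, 1], [60, 140])

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]

    print("LOO AUC     :", round(roc_auc_score(y, proba), 3))
    print("LOO accuracy:", round(accuracy_score(y, proba > 0.5), 3))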
New developments in multimodal clinical multiphoton tomography
NASA Astrophysics Data System (ADS)
König, Karsten
2011-03-01
80 years ago, the PhD student Maria Goeppert predicted two-photon effects in her thesis in Goettingen, Germany. It took 30 years to prove her theory, and another three decades to realize the first two-photon microscope. With the beginning of this millennium, the first clinical multiphoton tomographs started operation in research institutions, hospitals, and in the cosmetic industry. The multiphoton tomograph MPTflex™ with its miniaturized flexible scan head became the Prism Award 2010 winner in the category Life Sciences. Multiphoton tomographs, with their superior submicron spatial resolution, can be upgraded to 5D imaging tools by adding spectral time-correlated single photon counting units. Furthermore, multimodal hybrid tomographs provide chemical fingerprinting and fast wide-field imaging. The world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph in spring 2010. In particular, nonfluorescent lipids and water as well as mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, and melanin as well as SHG-active collagen have been imaged in patients with dermatological disorders. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution imaging tools such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and in several European countries for early diagnosis of skin cancer (malignant melanoma), optimization of treatment strategies (wound healing, dermatitis), and cosmetic research, including long-term biosafety tests of ZnO sunscreen nanoparticles and the measurement of the stimulated biosynthesis of collagen by anti-ageing products.
Rose, Michael; Rubal, Bernard; Hulten, Edward; Slim, Jennifer N; Steel, Kevin; Furgerson, James L; Villines, Todd C
2014-01-01
Background: The correlation between normal cardiac chamber linear dimensions measured during retrospective coronary computed tomographic angiography as compared to transthoracic echocardiography using the American Society of Echocardiography guidelines is not well established. Methods: We performed a review from January 2005 to July 2011 to identify subjects with retrospective electrocardiogram-gated coronary computed tomographic angiography scans for chest pain and transthoracic echocardiography with normal cardiac structures performed within 90 days. Dimensions were manually calculated in both imaging modalities in accordance with the American Society of Echocardiography published guidelines. Left ventricular ejection fraction was calculated on echocardiography manually using the Simpson’s formula and by coronary computed tomographic angiography using the end-systolic and end-diastolic volumes. Results: We reviewed 532 studies, rejected 412 and had 120 cases for review with a median time between studies of 7 days (interquartile range (IQR25,75) = 0–22 days) with no correlation between the measurements made by coronary computed tomographic angiography and transthoracic echocardiography using Bland–Altman analysis. We generated coronary computed tomographic angiography cardiac dimension reference ranges for both genders for our population. Conclusion: Our findings represent a step towards generating cardiac chamber dimensions’ reference ranges for coronary computed tomographic angiography as compared to transthoracic echocardiography in patients with normal cardiac morphology and function using the American Society of Echocardiography guideline measurements that are commonly used by cardiologists. PMID:26770706
Neurons That Update Representations of the Future.
Seriès, Peggy
2018-06-11
A recent article shows that the brain automatically estimates the probabilities of possible future actions before it has even received all the information necessary to decide what to do next. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
Ho, Lam Si Tung; Xu, Jason; Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A
2018-03-01
Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
High-order statistics of Weber local descriptors for image representation.
Han, Xian-Hua; Chen, Yen-Wei; Xu, Gang
2015-06-01
Highly discriminant visual features play a key role in different image classification applications. This study aims to realize a method for extracting highly-discriminant features from images by exploring a robust local descriptor inspired by Weber's law. The investigated local descriptor is based on the fact that human perception for distinguishing a pattern depends not only on the absolute intensity of the stimulus but also on the relative variance of the stimulus. Therefore, we firstly transform the original stimulus (the images in our study) into a differential excitation-domain according to Weber's law, and then explore a local patch, called micro-Texton, in the transformed domain as Weber local descriptor (WLD). Furthermore, we propose to employ a parametric probability process to model the Weber local descriptors, and extract the higher-order statistics to the model parameters for image representation. The proposed strategy can adaptively characterize the WLD space using generative probability model, and then learn the parameters for better fitting the training space, which would lead to more discriminant representation for images. In order to validate the efficiency of the proposed strategy, we apply three different image classification applications including texture, food images and HEp-2 cell pattern recognition, which validates that our proposed strategy has advantages over the state-of-the-art approaches.
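The differential-excitation component of the Weber local descriptor mentioned above has a compact form, ξ = arctan(Σ_i (x_i − x_c)/x_c) over a local neighbourhood; a minimal sketch (NumPy/SciPy, our own variable names, the orientation component omitted) is:

    import numpy as np
    from scipy.ndimage import convolve

    def differential_excitation(image, eps=1e-6):
        """Weber differential excitation xi = arctan(sum_i (x_i - x_c) / x_c)
        over the 8-neighbourhood of every pixel (the first WLD component)."""
        image = image.astype(float)
        kernel = np.array([[1.0,  1.0, 1.0],
                           [1.0, -8.0, 1.0],
                           [1.0,  1.0, 1.0]])     # sum of neighbours minus 8 * centre
        v = convolve(image, kernel, mode="reflect")
        return np.arctan(v / (image + eps))

    # toy usage on a random 'texture' patch
    rng = np.random.default_rng(0)
    xi = differential_excitation(rng.integers(1, 256, (64, 64)))
    hist, _ = np.histogram(xi, bins=8, range=(-np.pi / 2, np.pi / 2), density=True)
    print(hist)   # a coarse differential-excitation histogram for the patch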
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
Confidence as Bayesian Probability: From Neural Origins to Behavior.
Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F
2015-10-07
Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
Barbour, Randall L.; Barbour, San-Lian S.
2018-01-01
Summary: In this report we introduce a weak-model approach for examination of the intrinsic time-varying properties of the hemoglobin signal, with the aim of advancing the application of functional near infrared spectroscopy (fNIRS) for the detection of breast cancer, among other potential uses. The developed methodology integrates concepts from stochastic network theory with known modulatory features of the vascular bed, and in doing so provides access to a previously unrecognized dense feature space that is shown to have promising diagnostic potential. Notable features of the methodology include access to this information solely from measures acquired in the resting state, and analysis of these by treating the various components of the hemoglobin (Hb) signal as a co-varying interacting system. Approach: The principal data-transform kernel projects Hb state-space trajectories onto a coordinate system that constitutes a finite-state representation of covariations among the principal elements of the Hb signal (i.e., its oxygenated (ΔoxyHb) and deoxygenated (ΔdeoxyHb) forms and the associated dependent quantities: total hemoglobin (ΔtotalHb = ΔoxyHb + ΔdeoxyHb), hemoglobin oxygen saturation (ΔHbO2Sat = 100Δ(oxyHb/totalHb)), and tissue-hemoglobin oxygen exchange (ΔHbO2Exc = ΔdeoxyHb − ΔoxyHb)). The resulting ten-state representation treats the evolution of this signal as a one-space, spatiotemporal network that undergoes transitions from one state to another. States of the network are defined by the algebraic signs of the amplitudes of the time-varying components of the Hb signal relative to their temporal mean values. This assignment produces several classes of coefficient arrays, most with a dimension of 10×10. Biological motivation: Motivating our approach is the understanding that effector mechanisms that modulate blood delivery to tissue operate on macroscopic scales, in a spatially and temporally varying manner. Also recognized is that this behavior is sensitive to nonlinear actions of these effectors, which include the binding properties of hemoglobin. Accessible phenomenology includes measures of the kinetics and probabilities of network dynamics, which we treat as surrogates for the actions of feedback mechanisms that modulate tissue-vascular coupling. Findings: Qualitative and quantitative features of this space, and their potential to serve as markers of disease, have been explored by examining continuous-wave fNIRS 3D tomographic time series obtained from the breasts of women who do and do not have breast cancer. Inspection of the coefficient arrays reveals that they are governed predominantly by first-order rate processes, and that each array class exhibits preferred structure that is mainly independent of the others. Discussed are strategies that may serve to extend evaluation of the accessible feature space and how the character of this information holds potential for development of novel clinical and preclinical uses. PMID:29883456
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
Robust GNSS and InSAR tomography of neutrospheric refractivity using a Compressive Sensing approach
NASA Astrophysics Data System (ADS)
Heublein, Marion; Alshawaf, Fadwa; Zhu, Xiao Xiang; Hinz, Stefan
2017-04-01
Motivation: An accurate knowledge of the 3D distribution of water vapor in the atmosphere is a key element for weather forecasting and climate research. In addition, a precise determination of water vapor is also required for accurate positioning and deformation monitoring using Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). Several approaches for 3D tomographic water vapor reconstruction from GNSS-based Slant Wet Delay (SWD) estimates using the least squares (LSQ) adjustment exist. However, the tomographic system is in general ill-conditioned and its solution is unstable. Therefore, additional information or constraints need to be added in order to regularize the system. Goal of this work: In this work, we analyze the potential of Compressive Sensing (CS) for robustly reconstructing neutrospheric refractivity from GNSS SWD estimates. Moreover, the benefit of adding InSAR SWD estimates into the tomographic system is studied. Approach: A sparse representation of the refractivity field is obtained using a dictionary composed of Discrete Cosine Transforms (DCT) in longitude and latitude direction and of an Euler transform in height direction. This sparsity of the signal can be used as a prior for regularization and the CS inversion is solved by minimizing the number of non-zero entries of the sparse solution in the DCT-Euler domain. No other regularization constraints or prior knowledge is applied. The tomographic reconstruction relies on total SWD estimates from GNSS Precise Point Positioning (PPP) and Persistent Scatterer (PS) InSAR. On the one hand, GNSS PPP SWD estimates are included into the system of equations. On the other hand, 2D ZWD maps are obtained by a combination of point-wise estimates of the wet delay using GNSS observations and partial InSAR wet delay maps. These ZWD estimates are aggregated to derive realistic wet delay input data at given points as if corresponding to GNSS sites within the study area. The made-up ZWD values can be mapped into different elevation and azimuth angles. Moreover, using the same observation geometry as in the case of the GNSS and InSAR data, a synthetic set of SWD values was generated based on WRF simulations. Results: The CS approach shows particular strength in the case of a small number of SWD estimates. When compared to LSQ, the sparse reconstruction is much more robust. In the case of a low density of GNSS sites, adding InSAR SWD estimates improves the reconstruction accuracy for both LSQ and CS. Based on a synthetic SWD dataset generated using WRF simulations of wet refractivity, the CS based solution of the tomographic system is validated. In the vertical direction, the refractivity distribution deduced from GNSS and InSAR SWD estimates is compared to a tropospheric humidity data set provided by EUMETSAT consisting of daily mean values of specific humidity given on six pressure levels between 1000 hPa and 200 hPa. Study area: The Upper Rhine Graben (URG) characterized by negligible surface deformations is chosen as study area. A network of seven permanent GNSS receivers is used for this study, and a total number of 17 SAR images, acquired by ENVISAT ASAR is available.
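As a schematic of the sparsity-promoting inversion described above, the toy sketch below recovers a 1-D field that is sparse in a DCT dictionary from a handful of linear measurements, using greedy orthogonal matching pursuit as a stand-in for the actual CS solver and observation geometry of the study; all sizes, the random observation matrix, and the coefficient values are purely illustrative.

    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)

    n = 128                                        # toy 1-D refractivity profile length
    Psi = idct(np.eye(n), norm="ortho", axis=0)    # DCT synthesis dictionary (columns = atoms)

    # a field that is sparse in the DCT domain (3 active cosine modes)
    coef = np.zeros(n)
    coef[[2, 7, 19]] = [3.0, -1.5, 0.8]
    field = Psi @ coef

    m = 30                                         # far fewer 'slant delay' measurements than unknowns
    A = rng.standard_normal((m, n))                # toy observation operator (stand-in for ray sums)
    y = A @ field

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(A @ Psi, y)
    recon = Psi @ omp.coef_

    print("relative error:", np.linalg.norm(recon - field) / np.linalg.norm(field))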
Semantic image segmentation with fused CNN features
NASA Astrophysics Data System (ADS)
Geng, Hui-qiang; Zhang, Hua; Xue, Yan-bing; Zhou, Mian; Xu, Guang-ping; Gao, Zan
2017-09-01
Semantic image segmentation is a task to predict a category label for every image pixel. The key challenge of it is to design a strong feature representation. In this paper, we fuse the hierarchical convolutional neural network (CNN) features and the region-based features as the feature representation. The hierarchical features contain more global information, while the region-based features contain more local information. The combination of these two kinds of features significantly enhances the feature representation. Then the fused features are used to train a softmax classifier to produce per-pixel label assignment probability. And a fully connected conditional random field (CRF) is used as a post-processing method to improve the labeling consistency. We conduct experiments on SIFT flow dataset. The pixel accuracy and class accuracy are 84.4% and 34.86%, respectively.
NASA Astrophysics Data System (ADS)
Thampi, Smitha V.; Yamamoto, Mamoru
2010-03-01
A chain of newly designed GNU (GNU is not UNIX) Radio Beacon Receivers (GRBR) has recently been established over Japan, primarily for tomographic imaging of the ionosphere over this region. Receivers installed at Shionomisaki (33.45°N, 135.8°E), Shigaraki (34.8°N, 136.1°E), and Fukui (36°N, 136°E) continuously track low earth orbiting satellites (LEOS), mainly OSCAR, Cosmos, and FORMOSAT-3/COSMIC, to obtain simultaneous total electron content (TEC) data from these three locations, which are then used for the tomographic reconstruction of ionospheric electron densities. This is the first GRBR network established for TEC observations, and the first beacon-based tomographic imaging in Japanese longitudes. The first tomographic images revealed the temporal evolution of all of the major features in the ionospheric electron density distribution over Japan. A comparison of the tomographically reconstructed electron densities with the foF2 data from Kokubunji (35°N, 139°E) revealed that there was good agreement between the datasets. These first results show the potential of GRBR and its network for making continuous, unattended ionospheric TEC measurements and for tomographic imaging of the ionosphere.
Stanley, W.D.; Benz, H.M.; Walters, M.A.; Villasenor, A.; Rodriguez, B.D.
1998-01-01
In order to study magmatism and geothermal systems in The Geysers-Clear Lake region, we developed a detailed three-dimensional tomographic velocity model based on local earthquakes. This high-resolution model resolves the velocity structure of the crust in the region to depths of approximately 12 km. The most significant velocity contrasts in The Geysers-Clear Lake region occur in the steam production area, where high velocities are associated with a Quaternary granitic pluton, and in the Mount Hannah region, where low velocities occur in a 5-km-thick section of Mesozoic argillites. In addition, a more regional tomographic model was developed using traveltimes from earthquakes covering most of northern California. This regional model sampled the whole crust, but at a lower resolution than the local model. The regional model outlines low velocities at depths of 8-12 km in The Geysers-Clear Lake area, which extend eastward to the Coast Range thrust. These low velocities are inferred to be related to unmetamorphosed Mesozoic sedimentary rocks. In addition, the regional velocity model indicates high velocities in the lower crust beneath the Clear Lake volcanic field, which we interpret to be associated with mafic underplating. No large silicic magma chamber is noted in either the local or regional tomographic models. A three-dimensional gravity model also has been developed in the area of the tomographic imaging. Our gravity model demonstrates that all density contrasts can be accounted for in the upper 5-7 km of the crust. Two-dimensional magnetotelluric models of data from a regional, east-west profile indicate high resistivities associated with the granitic pluton in The Geysers production area and low resistivities in the low-velocity section of Mesozoic argillites near Mount Hannah. No indication of midcrustal magma bodies is present in the magnetotelluric data. On the basis of heat flow and geologic evidence, Holocene intrusive activity is thought to have occurred near the Northwest Geysers, Mount Hannah, Sulphur Bank Mine, and perhaps other areas. The geophysical data provide no conclusive evidence for such activity, but the detailed velocity model is suggestive of intrusive activity near Mount Hannah similar to that in the 'felsite' of The Geysers production area. The geophysical models, seismicity patterns, distribution of volcanic vents, heat flow, and other data indicate that small, young intrusive bodies that were injected along a northeast trend from The Geysers to Clear Lake probably control the thermal regime.
NASA Astrophysics Data System (ADS)
Boxx, I.; Carter, C. D.; Meier, W.
2014-08-01
Tomographic particle image velocimetry (tomographic-PIV) is a recently developed measurement technique used to acquire volumetric velocity field data in liquid and gaseous flows. The technique relies on line-of-sight reconstruction of the rays between a 3D particle distribution and a multi-camera imaging system. In a turbulent flame, however, index-of-refraction variations resulting from local heat-release may inhibit reconstruction and thereby render the technique infeasible. The objective of this study was to test the efficacy of tomographic-PIV in a turbulent flame. An additional goal was to determine the feasibility of acquiring usable tomographic-PIV measurements in a turbulent flame at multi-kHz acquisition rates with current-generation laser and camera technology. To this end, a setup consisting of four complementary metal oxide semiconductor cameras and a dual-cavity Nd:YAG laser was implemented to test the technique in a lifted turbulent jet flame. While the cameras were capable of kHz-rate image acquisition, the laser operated at a pulse repetition rate of only 10 Hz. However, use of this laser allowed exploration of the required pulse energy and thus power for a kHz-rate system. The imaged region was 29 × 28 × 2.7 mm in size. The tomographic reconstruction of the 3D particle distributions was accomplished using the multiplicative algebraic reconstruction technique. The results indicate that volumetric velocimetry via tomographic-PIV is feasible with pulse energies of 25 mJ, which is within the capability of current-generation kHz-rate diode-pumped solid-state lasers.
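The multiplicative algebraic reconstruction technique (MART) named above updates voxel intensities multiplicatively, one pixel equation at a time; a bare-bones sketch on a tiny synthetic system (our own notation, with the relaxation factor and system size chosen arbitrarily) is:

    import numpy as np

    def mart(W, I, n_iter=20, mu=1.0, eps=1e-12):
        """Multiplicative algebraic reconstruction technique (MART).

        W : (n_pixels, n_voxels) weighting matrix, W[i, j] = contribution of
        voxel j to pixel i; I : recorded pixel intensities. Starting from a
        uniform volume, voxel intensities E are updated pixel by pixel:
            E_j <- E_j * (I_i / (W_i . E)) ** (mu * W_ij)
        """
        E = np.ones(W.shape[1])
        for _ in range(n_iter):
            for i in range(W.shape[0]):
                ratio = (I[i] + eps) / (W[i] @ E + eps)
                E *= ratio ** (mu * W[i])
        return E

    # tiny synthetic check: 3 voxels observed by 4 pixels
    rng = np.random.default_rng(0)
    W = rng.uniform(0.0, 1.0, (4, 3))
    E_true = np.array([0.2, 1.0, 0.5])
    print(np.round(mart(W, W @ E_true), 3))   # should approach E_true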
Large-scale tomographic particle image velocimetry using helium-filled soap bubbles
NASA Astrophysics Data System (ADS)
Kühn, Matthias; Ehrenfried, Klaus; Bosbach, Johannes; Wagner, Claus
2011-04-01
To measure large-scale flow structures in air, a tomographic particle image velocimetry (tomographic PIV) system for measurement volumes of the order of one cubic metre is developed, which employs helium-filled soap bubbles (HFSBs) as tracer particles. The technique has several specific characteristics compared to most conventional tomographic PIV systems, which are usually applied to small measurement volumes. One of them is spot lights on the HFSB tracers, which slightly change their position, when the direction of observation is altered. Further issues are the large particle to voxel ratio and the short focal length of the used camera lenses, which result in a noticeable variation of the magnification factor in volume depth direction. Taking the specific characteristics of the HFSBs into account, the feasibility of our large-scale tomographic PIV system is demonstrated by showing that the calibration errors can be reduced down to 0.1 pixels as required. Further, an accurate and fast implementation of the multiplicative algebraic reconstruction technique, which calculates the weighting coefficients when needed instead of storing them, is discussed. The tomographic PIV system is applied to measure forced convection in a convection cell at a Reynolds number of 530 based on the inlet channel height and the mean inlet velocity. The size of the measurement volume and the interrogation volumes amount to 750 mm × 450 mm × 165 mm and 48 mm × 48 mm × 24 mm, respectively. Validation of the tomographic PIV technique employing HFSBs is further provided by comparing profiles of the mean velocity and of the root mean square velocity fluctuations to respective planar PIV data.
Vanderperren, K; Bergman, H J; Spoormakers, T J P; Pille, F; Duchateau, L; Puchalski, S M; Saunders, J H
2014-07-01
Lysis of the axial aspect of equine proximal sesamoid bones (PSBs) is a rare condition reported to have septic or traumatic origins. Limited information exists regarding imaging of nonseptic axial osteitis of a PSB. To report the clinical, radiographic, ultrasonographic, computed tomographic and intra-arterial contrast-enhanced computed tomographic abnormalities in horses with axial nonseptic osteitis of a PSB. Retrospective clinical study. Eighteen horses diagnosed with nonseptic osteitis of the axial border of a PSB between 2007 and 2012 were reviewed retrospectively. Case details, clinical examination, radiographic, ultrasonographic, computed tomographic and intra-arterial/intra-articular contrast-enhanced computed tomographic features were recorded, when available. Radiographic, ultrasonographic and computed tomographic evaluations of the fetlock region had been performed on 18, 15 and 9 horses, respectively. The effect of the degree of lysis on the grade and duration of lameness was determined. All horses had chronic unilateral lameness, 4 with forelimb and 14 with hindlimb signs. On radiographs, lysis was identified in both PSBs in 14 horses, one PSB in 3 horses and in one horse no lysis was identified. The degree of osteolysis was variable. Ultrasonography identified variably sized irregularities of the bone surface and alteration in echogenicity of the palmar/plantar ligament (PL). All horses undergoing computed tomographic examination (n = 9) had biaxial lysis. The lesions were significantly longer and deeper on computed tomographic images compared with radiographic images. Intra-arterial contrast-enhanced computed tomography may reveal moderate to marked contrast enhancement of the PL. There was no significant effect of the degree of lysis on the grade and duration of lameness. Lesions of nonseptic axial osteitis of a PSB can be identified using a combination of radiography and ultrasonography. Computed tomography provides additional information regarding the extent of the pathology. © 2013 EVJ Ltd.
First tomographic observations of gravity waves by the infrared limb imager GLORIA
NASA Astrophysics Data System (ADS)
Krisch, Isabell; Preusse, Peter; Ungermann, Jörn; Dörnbrack, Andreas; Eckermann, Stephen D.; Ern, Manfred; Friedl-Vallon, Felix; Kaufmann, Martin; Oelhaf, Hermann; Rapp, Markus; Strube, Cornelia; Riese, Martin
2017-12-01
Atmospheric gravity waves are a major cause of uncertainty in atmosphere general circulation models. This uncertainty affects regional climate projections and seasonal weather predictions. Improving the representation of gravity waves in general circulation models is therefore of primary interest. In this regard, measurements providing an accurate 3-D characterization of gravity waves are needed. Using the Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA), the first airborne implementation of a novel infrared limb imaging technique, a gravity wave event over Iceland was observed. An air volume disturbed by this gravity wave was investigated from different angles by encircling the volume with a closed flight pattern. Using a tomographic retrieval approach, the measurements of this air mass at different angles allowed for a 3-D reconstruction of the temperature and trace gas structure. The temperature measurements were used to derive gravity wave amplitudes, 3-D wave vectors, and direction-resolved momentum fluxes. These parameters facilitated the backtracing of the waves to their sources on the southern coast of Iceland. Two wave packets are distinguished, one stemming from the main mountain ridge in the south of Iceland and the other from the smaller mountains in the north. The total area-integrated fluxes of these two wave packets are determined. Forward ray tracing reveals that the waves propagate laterally more than 2000 km away from their source region. A comparison of a 3-D ray-tracing version to solely column-based propagation showed that lateral propagation can help the waves to avoid critical layers and propagate to higher altitudes. Thus, the implementation of oblique gravity wave propagation into general circulation models may improve their predictive skills.
Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, S.T.C.
The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.
Bjørndal, L; Carlsen, O; Thuesen, G; Darvann, T; Kreiborg, S
1999-01-01
The aim of this study was to perform a qualitative analysis of the relationship between the external and internal macromorphology of the root complex and to use fractal dimension analysis to determine the correlation between the shape of the outer surface of the root and the shape of the root canal. On the basis of X-ray computed transaxial microtomography, a qualitative and quantitative analysis of the external and internal macromorphology of the root complex in permanent maxillary molars was performed using well-defined macromorphological variables and fractal dimension analysis. Five maxillary molars were placed between a microfocus X-ray tube with a focal spot size of 0.07 mm and a Thomson-CSF image intensifier with a CCD camera, comprising the detector for the tomograph. Between 100 and 240 tomographic 2D slices were made of each tooth. Assembly of the slices into a 3D volume was carried out with subsequent median noise filtering. Segmentation into enamel, dentine and pulp space was achieved through thresholding followed by morphological filtering. Surface representations were then constructed. A useful visualization of the tooth was created by making the dental hard tissues transparent and the pulp chamber and root-canal system opaque. On this basis it became possible to assess the relationship between the external and internal macromorphology of the crown and root complex. There was strong agreement between the number, position and cross-section of the root canals and the number, position and degree of manifestation of the root complex macrostructures. Data from a fractal dimension analysis also showed a high correlation between the shape of the root canals and the corresponding roots. It is suggested that these types of 3D volumes constitute a platform for preclinical training in fundamental endodontic procedures.
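The fractal dimension analysis referred to above is typically done by box counting; a minimal sketch (a binary contour mask, a few box sizes, and a log-log fit, with all parameters illustrative) is:

    import numpy as np

    def box_counting_dimension(mask, scales=(1, 2, 4, 8, 16)):
        """Estimate the fractal (box-counting) dimension of a binary 2-D contour
        mask by counting occupied boxes at several box sizes and fitting
        log N against log(1/s)."""
        counts = []
        for s in scales:
            h = (mask.shape[0] // s) * s
            w = (mask.shape[1] // s) * s
            boxes = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
            counts.append(boxes.sum())
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
        return slope

    # toy example: the outline of a circle has dimension close to 1
    yy, xx = np.mgrid[0:256, 0:256]
    r = np.hypot(xx - 128, yy - 128)
    contour = np.abs(r - 80) < 1.0
    print(round(box_counting_dimension(contour), 2))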
Imaging and characterizing cells using tomography
Do, Myan; Isaacson, Samuel A.; McDermott, Gerry; Le Gros, Mark A.; Larabell, Carolyn A.
2015-01-01
We can learn much about cell function by imaging and quantifying sub-cellular structures, especially if this is done non-destructively without altering said structures. Soft x-ray tomography (SXT) is a high-resolution imaging technique for visualizing cells and their interior structure in 3D. A tomogram of the cell, reconstructed from a series of 2D projection images, can be easily segmented and analyzed. SXT has a very high specimen throughput compared to other high-resolution structure imaging modalities; for example, tomographic data for reconstructing an entire eukaryotic cell is acquired in a matter of minutes. SXT visualizes cells without the need for chemical fixation, dehydration, or staining of the specimen. As a result, the SXT reconstructions are close representations of cells in their native state. SXT is applicable to most cell types. The deep penetration of soft x-rays allows cells, even mammalian cells, to be imaged without being sectioned. Image contrast in SXT is generated by the differential attenuation of soft x-ray illumination as it passes through the specimen. Accordingly, each voxel in the tomographic reconstruction has a measured linear absorption coefficient (LAC) value. LAC values are quantitative and give rise to each sub-cellular component having a characteristic LAC profile, allowing organelles to be identified and segmented from the milieu of other cell contents. In this chapter, we describe the fundamentals of SXT imaging and how this technique can answer real world questions in the study of the nucleus. We also describe the development of correlative methods for the localization of specific molecules in a SXT reconstruction. The combination of fluorescence and SXT data acquired from the same specimen produces composite 3D images, rich with detailed information on the inner workings of cells. PMID:25602704
Surface wave tomography of the Ontong Java Plateau: Seismic probing of the largest igneous province
NASA Astrophysics Data System (ADS)
Richardson, William Philip
1998-12-01
Large igneous provinces (LIP), such as the gigantic Cretaceous oceanic plateaus, the Ontong-Java, the Manihiki and the Kerguelen, are part of a globally distributed diverse suite of massive crustal features considered to be episodic representations of mantle dynamics (Coffin and Eldholm, 1994). The Ontong Java Plateau in the central western Pacific is by far the largest (and presumably thickest) of these provinces and is believed to have been emplaced rapidly in the Aptian, ~122 Ma (Tarduno et al., 1991). From 1994 to 1996 four PASSCAL broadband seismic stations were deployed in an array north of the OJP. Analysis was conducted on vertical component broadband seismograms from events recorded on the Micronesian Seismic Experiment array between January 1994 and March 1996. The purpose of this experiment is to investigate the crustal and upper mantle structure of the Ontong Java Plateau (OJP) employing surface wave tomographic methods. Using the partitioned waveform inversion method (Nolet, 1990) and earthquakes with published Centroid Moment Tensor (Dziewonski et al., 1981) solutions, we produce waveform fits from source-to-receiver paths that primarily sample the OJP. From these waveform fits, linearized constraints on shear velocity suggest: (1) a massively thickened crust over the center of the OJP, greater than 35 km beneath central areas of the plateau and thinning off-center; (2) a pronounced low-velocity zone down to ~300 km depth, a robust result in agreement with recent geochemical predictions (Neal et al., 1997); (3) the probability of lateral heterogeneity across the OJP. Finally, by combining many single waveform inversions (van der Lee and Nolet, 1997b) a 3-D shear velocity model can be computed for the Ontong Java Plateau and the nearby Caroline Basin. New constraints on the crustal thickness (and hence the volume extruded) are presented, thereby adding to the understanding of the overall tectonic setting and possible emplacement mechanism of the structure.
Williams, Melonie; Hong, Sang W; Kang, Min-Suk; Carlisle, Nancy B; Woodman, Geoffrey F
2013-04-01
Recent research using change-detection tasks has shown that a directed-forgetting cue, indicating that a subset of the information stored in memory can be forgotten, significantly benefits the other information stored in visual working memory. How do these directed-forgetting cues aid the memory representations that are retained? We addressed this question in the present study by using a recall paradigm to measure the nature of the retained memory representations. Our results demonstrated that a directed-forgetting cue leads to higher-fidelity representations of the remaining items and a lower probability of dropping these representations from memory. Next, we showed that this is made possible by the to-be-forgotten item being expelled from visual working memory following the cue, allowing maintenance mechanisms to be focused on only the items that remain in visual working memory. Thus, the present findings show that cues to forget benefit the remaining information in visual working memory by fundamentally improving their quality relative to conditions in which just as many items are encoded but no cue is provided.
WFIRST: Principal Components Analysis of H4RG-10 Near-IR Detector Data Cubes
NASA Astrophysics Data System (ADS)
Rauscher, Bernard
2018-01-01
The Wide Field Infrared Survey Telescope’s (WFIRST) Wide Field Instrument (WFI) incorporates an array of eighteen Teledyne H4RG-10 near-IR detector arrays. Because WFIRST’s science investigations require controlling systematic uncertainties to state-of-the-art levels, we conducted principal components analysis (PCA) of some H4RG-10 test data obtained in the NASA Goddard Space Flight Center Detector Characterization Laboratory (DCL). The PCA indicates that the Legendre polynomials provide a nearly orthogonal representation of up-the-ramp sampled illuminated data cubes, and suggests other representations that may provide an even more compact representation of the data in some circumstances. We hypothesize that by using orthogonal representations, such as those described here, it may be possible to control systematic errors better than has been achieved before for NASA missions. We believe that these findings are probably applicable to other H4RG, H2RG, and H1RG based systems.
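A Legendre-polynomial representation of an up-the-ramp read sequence, of the kind suggested by the PCA above, can be computed directly with NumPy's Legendre utilities; the sketch below fits a made-up linear ramp with read noise rather than real H4RG-10 data.

    import numpy as np

    rng = np.random.default_rng(0)

    # toy up-the-ramp read sequence for one pixel: linear accumulation + read noise
    n_reads = 64
    t = np.linspace(-1.0, 1.0, n_reads)          # Legendre polynomials live on [-1, 1]
    ramp = 1000.0 + 350.0 * (t + 1.0) / 2.0 + rng.normal(0.0, 5.0, n_reads)

    # least-squares Legendre coefficients up to degree 3
    coef = np.polynomial.legendre.legfit(t, ramp, deg=3)
    recon = np.polynomial.legendre.legval(t, coef)

    print("coefficients:", np.round(coef, 2))
    print("rms residual:", round(np.std(ramp - recon), 2), "DN")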
DOE Office of Scientific and Technical Information (OSTI.GOV)
PELT, DANIEL
2017-04-21
Small Python package to compute tomographic reconstructions using a reconstruction method published in: Pelt, D.M., & De Andrade, V. (2017). Improved tomographic reconstruction of large-scale real-world data by filter optimization. Advanced Structural and Chemical Imaging 2: 17; and Pelt, D. M., & Batenburg, K. J. (2015). Accurately approximating algebraic tomographic reconstruction by filtered backprojection. In Proceedings of The 13th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (pp. 158-161).
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
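A bare-bones subset simulation of the kind employed above for the failure-probability calculations is sketched below (standard normal input space, fixed conditional level probability p0 = 0.1, and a simple random-walk Metropolis step for the conditional samples). It is a simplified stand-in for a production implementation, and the linear limit-state example is ours.

    import numpy as np

    rng = np.random.default_rng(0)

    def subset_simulation(g, dim, n=2000, p0=0.1, max_levels=10):
        """Bare-bones subset simulation in standard normal space.

        g : limit-state function, failure when g(x) <= 0. The failure probability
        is estimated as the product of conditional level probabilities; the
        conditional samples come from a symmetric random-walk Metropolis step
        restricted to the current intermediate failure domain.
        """
        x = rng.standard_normal((n, dim))
        gx = np.apply_along_axis(g, 1, x)
        p_f = 1.0
        for _ in range(max_levels):
            thresh = np.quantile(gx, p0)
            if thresh <= 0.0:                       # reached the true failure domain
                return p_f * np.mean(gx <= 0.0)
            p_f *= p0
            seeds = x[gx <= thresh]
            steps = int(np.ceil(n / len(seeds)))
            new_x, new_g = [], []
            for s in seeds:
                cur, cur_g = s.copy(), g(s)
                for _ in range(steps):
                    cand = cur + 0.8 * rng.standard_normal(dim)
                    # accept by the standard-normal density ratio, then by the subset
                    if rng.random() < np.exp(0.5 * (cur @ cur - cand @ cand)):
                        cg = g(cand)
                        if cg <= thresh:
                            cur, cur_g = cand, cg
                    new_x.append(cur.copy()); new_g.append(cur_g)
            x, gx = np.array(new_x)[:n], np.array(new_g)[:n]
        return p_f * np.mean(gx <= 0.0)

    # example: linear limit state g(x) = beta - x1, exact P_f = Phi(-beta) ~ 1.35e-3
    beta = 3.0
    print(subset_simulation(lambda x: beta - x[0], dim=2))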
MacRoy-Higgins, Michelle; Dalton, Kevin Patrick
2015-12-01
The purpose of this study was to examine the influence of phonotactic probability on sublexical (phonological) and lexical representations in 3-year-olds who had a history of being late talkers in comparison with their peers with typical language development. Ten 3-year-olds who were late talkers and 10 age-matched typically developing controls completed nonword repetition and fast mapping tasks; stimuli for both experimental procedures differed in phonotactic probability. Both participant groups repeated nonwords containing high phonotactic probability sequences more accurately than nonwords containing low phonotactic probability sequences. Participants with typical language showed an early advantage for fast mapping high phonotactic probability words; children who were late talkers required more exposures to the novel words to show the same advantage for fast mapping high phonotactic probability words. Children who were late talkers showed similar sensitivities to phonotactic probability in nonword repetition and word learning when compared with their peers with no history of language delay. However, word learning in children who were late talkers appeared to be slower when compared with their peers.
Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.
Chevallier, Maguelonne; Krauth, Werner
2007-11-01
We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L³ the sum of the cycle probabilities of length k > L² equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
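The Landsberg recursion mentioned above, Z_N = (1/N) Σ_{k=1}^{N} z(kβ) Z_{N−k} with Z_0 = 1, together with the cycle probabilities π_k = z(kβ) Z_{N−k}/(N Z_N), is straightforward to evaluate numerically; the sketch below does so for a toy single-particle spectrum (the units and parameter values are illustrative, not those of the paper).

    import numpy as np

    def single_particle_z(beta, n_max=30):
        """Single-particle partition function for a cube with periodic boundary
        conditions, with energies nx^2 + ny^2 + nz^2 in units where
        2*pi^2*hbar^2/(m*L^2) = 1 (purely illustrative units)."""
        n = np.arange(-n_max, n_max + 1)
        z1d = np.exp(-beta * n ** 2).sum()
        return z1d ** 3

    def cycle_probabilities(N, beta):
        """Landsberg recursion for the canonical partition function of N ideal
        bosons, Z_N = (1/N) * sum_k z(k*beta) * Z_{N-k}, and the resulting
        cycle probabilities pi_k = z(k*beta) * Z_{N-k} / (N * Z_N)."""
        z = np.array([single_particle_z(k * beta) for k in range(1, N + 1)])
        Z = np.zeros(N + 1)
        Z[0] = 1.0
        for n in range(1, N + 1):
            Z[n] = np.dot(z[:n], Z[n - 1::-1]) / n
        pi = z * Z[N - 1::-1] / (N * Z[N])
        return pi        # pi[k-1] is the probability that a particle sits on a k-cycle

    pi = cycle_probabilities(N=40, beta=0.05)
    print("sum of cycle probabilities:", round(pi.sum(), 6))   # = 1 by construction
    print("weight of long cycles (k > 20):", round(pi[20:].sum(), 4))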
Reduced Wiener Chaos representation of random fields via basis adaptation and projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu; Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089; Ghanem, Roger G., E-mail: ghanem@usc.edu
2017-07-15
A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
A Hilbert Space Representation of Generalized Observables and Measurement Processes in the ESR Model
NASA Astrophysics Data System (ADS)
Sozzo, Sandro; Garola, Claudio
2010-12-01
The extended semantic realism ( ESR) model recently worked out by one of the authors embodies the mathematical formalism of standard (Hilbert space) quantum mechanics in a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide here a Hilbert space representation of the generalized observables introduced by the ESR model that satisfy a simple physical condition, propose a generalization of the projection postulate, and suggest a possible mathematical description of the measurement process in terms of evolution of the compound system made up of the measured system and the measuring apparatus.
Reduced Wiener Chaos representation of random fields via basis adaptation and projection
NASA Astrophysics Data System (ADS)
Tsilifis, Panagiotis; Ghanem, Roger G.
2017-07-01
A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.
Miotto, Riccardo; Li, Li; Kidd, Brian A.; Dudley, Joel T.
2016-01-01
Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name “deep patient”. We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems. PMID:27185194
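One layer of the denoising-autoencoder stack described above can be written in a few lines of NumPy; the sketch below (tied weights, sigmoid units, masking noise, toy binary "patient" vectors) is only a schematic of the approach, not the authors' implementation, and every size and hyperparameter is made up for illustration. Stacking three such layers, each trained on the previous layer's codes, yields a deep-patient-style representation.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_dae_layer(X, n_hidden=64, noise=0.2, lr=0.5, epochs=100):
        """One denoising-autoencoder layer: corrupt the input with masking noise,
        encode with a sigmoid layer, decode with tied weights, and minimise the
        squared reconstruction error against the clean input by gradient descent."""
        n, d = X.shape
        W = rng.normal(0.0, 0.1, (d, n_hidden))
        b_h, b_o = np.zeros(n_hidden), np.zeros(d)
        sig = lambda a: 1.0 / (1.0 + np.exp(-a))
        for _ in range(epochs):
            X_noisy = X * (rng.random(X.shape) > noise)   # randomly mask entries
            H = sig(X_noisy @ W + b_h)                    # encoder
            R = sig(H @ W.T + b_o)                        # tied-weight decoder
            d_o = (R - X) * R * (1.0 - R)                 # output-layer delta
            d_h = (d_o @ W) * H * (1.0 - H)               # hidden-layer delta
            W -= lr * (X_noisy.T @ d_h + d_o.T @ H) / n   # encoder + decoder gradients
            b_h -= lr * d_h.sum(axis=0) / n
            b_o -= lr * d_o.sum(axis=0) / n
        return W, b_h

    # toy stand-in for binarised EHR descriptors: 500 'patients' x 200 features
    X = (rng.random((500, 200)) < 0.05).astype(float)
    W1, b1 = train_dae_layer(X)
    codes = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # layer-1 patient representation
    print(codes.shape)                             # feed this into the next layer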
Temporal and Motor Representation of Rhythm in Fronto-Parietal Cortical Areas: An fMRI Study
Konoike, Naho; Kotozaki, Yuka; Jeong, Hyeonjeong; Miyazaki, Atsuko; Sakaki, Kohei; Shinada, Takamitsu; Sugiura, Motoaki; Kawashima, Ryuta; Nakamura, Katsuki
2015-01-01
When sounds occur with temporally structured patterns, we can feel a rhythm. To memorize a rhythm, perception of its temporal patterns and organization of them into a hierarchically structured sequence are necessary. On the other hand, rhythm perception can often cause unintentional body movements. Thus, we hypothesized that rhythm information can be manifested in two different ways; temporal and motor representations. The motor representation depends on effectors, such as the finger or foot, whereas the temporal representation is effector-independent. We tested our hypothesis with a working memory paradigm to elucidate neuronal correlates of temporal or motor representation of rhythm and to reveal the neural networks associated with these representations. We measured brain activity by fMRI while participants memorized rhythms and reproduced them by tapping with the right finger, left finger, or foot, or by articulation. The right inferior frontal gyrus and the inferior parietal lobule exhibited significant effector-independent activations during encoding and retrieval of rhythm information, whereas the left inferior parietal lobule and supplementary motor area (SMA) showed effector-dependent activations during retrieval. These results suggest that temporal sequences of rhythm are probably represented in the right fronto-parietal network, whereas motor sequences of rhythm can be represented in the SMA-parietal network. PMID:26076024
NASA Astrophysics Data System (ADS)
Miotto, Riccardo; Li, Li; Kidd, Brian A.; Dudley, Joel T.
2016-05-01
Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name “deep patient”. We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.
A comparison of newborn stylized and tomographic models for dose assessment in paediatric radiology
NASA Astrophysics Data System (ADS)
Staton, R. J.; Pazik, F. D.; Nipper, J. C.; Williams, J. L.; Bolch, W. E.
2003-04-01
Establishment of organ doses from diagnostic and interventional examinations is a key component in quantifying the radiation risks from medical exposures and in formulating corresponding dose-reduction strategies. Radiation transport models of human anatomy provide a convenient method for simulating radiological examinations. At present, two classes of models exist: stylized mathematical models and tomographic voxel models. In the present study, organ dose comparisons are made for projection radiographs of both a stylized and a tomographic model of the newborn patient. Sixteen separate radiographs were simulated for each model at x-ray technique factors typical of newborn examinations: chest, abdomen, thorax and head views in the AP, PA, left LAT and right LAT projection orientations. For AP and PA radiographs of the torso (chest, abdomen and thorax views), the effective dose assessed for the tomographic model exceeds that for the stylized model, with per cent differences ranging from 19% (AP abdominal view) to 43% (AP chest view). In contrast, the effective dose for the stylized model exceeds that for the tomographic model for all eight lateral views including those of the head, with per cent differences ranging from 9% (LLAT chest view) to 51% (RLAT thorax view). While organ positioning differences do exist between the models, a major factor contributing to differences in effective dose is the models' exterior trunk shape. In the tomographic model, a more elliptical shape is seen, thus providing less tissue shielding for internal organs in the AP and PA directions, with correspondingly increased tissue shielding in the lateral directions. This observation is the opposite of that seen in comparisons of stylized and tomographic models of the adult.
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
A quantum probability explanation for violations of ‘rational’ decision theory
Pothos, Emmanuel M.; Busemeyer, Jerome R.
2009-01-01
Two experimental tasks in psychology, the two-stage gambling game and the Prisoner's Dilemma game, show that people violate the sure thing principle of decision theory. These paradoxical findings have resisted explanation by classical decision theory for over a decade. A quantum probability model, based on a Hilbert space representation and Schrödinger's equation, provides a simple and elegant explanation for this behaviour. The quantum model is compared with an equivalent Markov model and it is shown that the latter is unable to account for violations of the sure thing principle. Accordingly, it is argued that quantum probability provides a better framework for modelling human decision-making. PMID:19324743
Neutrino oscillation processes in a quantum-field-theoretical approach
NASA Astrophysics Data System (ADS)
Egorov, Vadim O.; Volobuev, Igor P.
2018-05-01
It is shown that neutrino oscillation processes can be consistently described in the framework of quantum field theory using only the plane wave states of the particles. Namely, the oscillating electron survival probabilities in experiments with neutrino detection by charged-current and neutral-current interactions are calculated in the quantum field-theoretical approach to neutrino oscillations based on a modification of the Feynman propagator in the momentum representation. The approach is most similar to the standard Feynman diagram technique. It is found that the oscillating distance-dependent probabilities of detecting an electron in experiments with neutrino detection by charged-current and neutral-current interactions exactly coincide with the corresponding probabilities calculated in the standard approach.
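For orientation, the distance-dependent probability that such calculations reproduce in the standard two-flavor vacuum case is the textbook expression (not quoted from the paper)

    P(\nu_e \to \nu_e) = 1 - \sin^2(2\theta)\,\sin^2\left(\frac{\Delta m^2 L}{4E}\right) \quad \text{(natural units)},

where θ is the mixing angle, Δm² the mass-squared splitting, L the source-detector distance and E the neutrino energy.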
Pedale, Tiziana; Santangelo, Valerio
2015-01-01
One of the most important issues in the study of cognition is to understand which are the factors determining internal representation of the external world. Previous literature has started to highlight the impact of low-level sensory features (indexed by saliency-maps) in driving attention selection, hence increasing the probability for objects presented in complex and natural scenes to be successfully encoded into working memory (WM) and then correctly remembered. Here we asked whether the probability of retrieving high-saliency objects modulates the overall contents of WM, by decreasing the probability of retrieving other, lower-saliency objects. We presented pictures of natural scenes for 4 s. After a retention period of 8 s, we asked participants to verbally report as many objects/details as possible of the previous scenes. We then computed how many times the objects located at either the peak of maximal or minimal saliency in the scene (as indexed by a saliency-map; Itti et al., 1998) were recollected by participants. Results showed that maximal-saliency objects were recollected more often and earlier in the stream of successfully reported items than minimal-saliency objects. This indicates that bottom-up sensory salience increases the recollection probability and facilitates the access to memory representation at retrieval, respectively. Moreover, recollection of the maximal- (but not the minimal-) saliency objects predicted the overall amount of successfully recollected objects: The higher the probability of having successfully reported the most-salient object in the scene, the lower the amount of recollected objects. These findings highlight that bottom-up sensory saliency modulates the current contents of WM during recollection of objects from natural scenes, most likely by reducing available resources to encode and then retrieve other (lower saliency) objects. PMID:25741266
2017-07-31
Report: High-Energy, High-Pulse-Rate Light Sources for Enhanced Time-Resolved Tomographic PIV of Unsteady & Turbulent Flows.
Image processing pipeline for synchrotron-radiation-based tomographic microscopy.
Hintermüller, C; Marone, F; Isenegger, A; Stampanoni, M
2010-07-01
With synchrotron-radiation-based tomographic microscopy, three-dimensional structures down to the micrometer level can be visualized. Tomographic data sets typically consist of 1000 to 1500 projections of 1024 x 1024 to 2048 x 2048 pixels and are acquired in 5-15 min. A processing pipeline has been developed to handle this large amount of data efficiently and to reconstruct the tomographic volume within a few minutes after the end of a scan. Just a few seconds after the raw data have been acquired, a selection of reconstructed slices is accessible through a web interface for preview and to fine-tune the reconstruction parameters. The same interface allows initiation and control of the reconstruction process on the computer cluster. By integrating all programs and tools required for tomographic reconstruction into the pipeline, the necessary user interaction is reduced to a minimum. The modularity of the pipeline allows functionality to be added for new scan protocols, such as an extended field of view, or for new physical signals, such as phase-contrast or dark-field imaging.
Tomographic imaging of OH laser-induced fluorescence in laminar and turbulent jet flames
NASA Astrophysics Data System (ADS)
Li, Tao; Pareja, Jhon; Fuest, Frederik; Schütte, Manuel; Zhou, Yihui; Dreizler, Andreas; Böhm, Benjamin
2018-01-01
In this paper a new approach for 3D flame structure diagnostics using tomographic laser-induced fluorescence (Tomo-LIF) of the OH radical was evaluated. The approach combined volumetric illumination with a multi-camera detection system of eight views. Single-shot measurements were performed in a methane/air premixed laminar flame and in a non-premixed turbulent methane jet flame. 3D OH fluorescence distributions in the flames were reconstructed using the simultaneous multiplicative algebraic reconstruction technique. The tomographic measurements were compared and validated against results of OH-PLIF in the laminar flame. The effects of the experimental setup of the detection system and the size of the volumetric illumination on the quality of the tomographic reconstructions were evaluated. Results revealed that the Tomo-LIF is suitable for volumetric reconstruction of flame structures with acceptable spatial resolution and uncertainty. It was found that the number of views and their angular orientation have a strong influence on the quality and accuracy of the tomographic reconstruction while the illumination volume thickness influences mainly the spatial resolution.
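A minimal numpy sketch of a simultaneous multiplicative ART iteration of the kind named above (a generic SMART-style update, not the authors' implementation; the weight matrix W mapping voxels to line-of-sight signals and the toy data are assumptions):

    import numpy as np

    def smart(W, g, n_iter=50, relax=1.0, eps=1e-12):
        """Simultaneous multiplicative ART: update all rays in each iteration."""
        f = np.ones(W.shape[1])                    # non-negative initial volume
        col_sum = W.sum(axis=0) + eps
        for _ in range(n_iter):
            ratio = g / (W @ f + eps)              # measured / modelled signal per ray
            # geometric-mean style multiplicative correction per voxel
            log_corr = (W.T @ np.log(ratio + eps)) / col_sum
            f *= np.exp(relax * log_corr)
        return f

    # toy problem: 4 voxels seen by 6 rays
    rng = np.random.default_rng(0)
    W = rng.random((6, 4))
    f_true = np.array([0.2, 1.0, 0.5, 0.0])
    g = W @ f_true
    print(smart(W, g).round(3))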
Wavelet compression of noisy tomographic images
NASA Astrophysics Data System (ADS)
Kappeler, Christian; Mueller, Stefan P.
1995-09-01
3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets. A typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore usually are evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we will refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies12 wavelets and compressed after filtering by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
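A small sketch of the transform-and-threshold step described above, using PyWavelets; the 'db6' filter, threshold rule and toy Poisson image are illustrative stand-ins (the paper's Daubechies-12 filter and PET-specific tuning are not reproduced):

    import numpy as np
    import pywt

    def wavelet_compress(img, wavelet="db6", level=4, keep_fraction=0.05):
        """Keep only the largest wavelet coefficients; return reconstruction and ratio."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep_fraction)   # hard threshold
        arr_t = pywt.threshold(arr, thresh, mode="hard")
        compression_factor = arr.size / max(np.count_nonzero(arr_t), 1)
        rec = pywt.waverec2(pywt.array_to_coeffs(arr_t, slices, output_format="wavedec2"),
                            wavelet)
        return rec, compression_factor

    # toy "noisy PET slice"
    rng = np.random.default_rng(1)
    img = rng.poisson(10.0, size=(128, 128)).astype(float)
    rec, cf = wavelet_compress(img)
    print("compression factor ~", round(cf, 1))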
Representation of cerebral bridging veins in infants by postmortem computed tomography.
Stein, Kirsten Marion; Ruf, Katharina; Ganten, Maria Katharina; Mattern, Rainer
2006-11-10
The postmortem diagnosis of shaken baby syndrome, a severe form of child abuse, may be difficult, especially when no other visible signs of significant trauma are obvious. An important finding in shaken baby syndrome is subdural haemorrhage, typically originating from ruptured cerebral bridging veins. Since these are difficult to detect at autopsy, we have developed a special postmortem computed tomographic (PMCT) method to demonstrate the intracranial vein system in infants. This method is minimally invasive and can be carried out conveniently and quickly on clinical computed tomography (CT) systems. Firstly, a precontrast CT is made of the infant's head, to document the original state. Secondly, contrast fluid is injected manually via fontanel puncture into the superior sagittal sinus, followed by a repeat CT scan. This allows the depiction of even very small vessels of the deep and superficial cerebral veins, especially the bridging veins, without damaging them. Ruptures appear as extravasation of contrast medium, which helps to locate them at autopsy and examine them histologically, whenever necessary.
Mueller coherency matrix method for contrast image in tissue polarimetry
NASA Astrophysics Data System (ADS)
Arce-Diego, J. L.; Fanjul-Vélez, F.; Samperio-García, D.; Pereda-Cubián, D.
2007-07-01
In this work, we propose the use of the Mueller coherency matrix of biological tissues in order to increase the information extracted from tissue images and thus their contrast. The method involves several coherency-matrix-based parameters, such as eigenvalue analysis, the entropy factor, polarization-component crosstalks, linear and circular polarization degrees, hermiticity or, when the depolarization of the tissue is sufficiently low, quaternion analysis. These parameters make the underlying information clearer and so increase image contrast, so that pathologies such as cancer could be detected at an earlier stage of development. The choice of parameter will depend on the specific pathological process under study. The Mueller coherency matrix method can be applied to a single tissue point, or it can be combined with a tomographic technique so as to obtain a 3D representation of polarization-contrast parameters in pathological tissues. The application of this analysis to specific diseases can lead to tissue burn-depth estimation or early cancer detection.
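A numpy sketch of two of the parameters mentioned above (coherency-matrix eigenvalues and an entropy factor), using one common convention for building the 4x4 Hermitian coherency matrix from a Mueller matrix; this is a generic illustration, not the authors' processing chain:

    import numpy as np

    PAULI = [np.eye(2, dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex),
             np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex)]

    def coherency_matrix(M):
        """H = (1/4) sum_ij M_ij (sigma_i kron sigma_j*), one common convention."""
        H = np.zeros((4, 4), dtype=complex)
        for i in range(4):
            for j in range(4):
                H += M[i, j] * np.kron(PAULI[i], PAULI[j].conj())
        return H / 4.0

    def entropy_factor(M):
        """Entropy of the normalized eigenvalue spectrum (0 = pure, 1 = full depolarizer)."""
        lam = np.linalg.eigvalsh(coherency_matrix(M)).clip(min=0)
        p = lam / lam.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum() / np.log(4))

    print(entropy_factor(np.eye(4)))                 # non-depolarizing element -> 0
    print(entropy_factor(np.diag([1., 0, 0, 0])))    # ideal depolarizer -> 1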
Pachajoa, Harry; Hernandez-Amaris, Maria F; Porras-Hurtado, Gloria Liliana; Rodriguez, Carlos A
2014-06-01
Craniofacial duplication or diprosopus is a very rare malformation that is present in approximately 0.4% of conjoined twins. Here is presented a case of craniofacial duplication in association with bilateral cleft lip/palate in both heads found in a ceramic representation from the early Chimú culture from Peru. A comparative analysis is made with a current case of a 28-week-old fetus with similar characteristics. After reviewing the medical literature on conjoined twins, very few reports of facial cleft in both twins were found, with no reports at all of bilateral cleft lip/palate. This ceramic crock is considered one of the first representations suggestive of craniofacial duplication, and probably the first reporting it in association with facial cleft.
Optimal joule heating of the subsurface
Berryman, James G.; Daily, William D.
1994-01-01
A method for simultaneously heating the subsurface and imaging the effects of the heating. This method combines the use of tomographic imaging (electrical resistance tomography or ERT) to image the electrical resistivity distribution underground, with joule heating by electrical currents injected into the ground. A potential distribution is established on a series of buried electrodes, resulting in energy deposition underground that is a function of the resistivity and injection current density. Measurement of the voltages and currents also permits a tomographic reconstruction of the resistivity distribution. Using this tomographic information, the current injection pattern on the driving electrodes can be adjusted to change the current density distribution and thus optimize the heating. As the heating changes conditions, the applied current pattern can be repeatedly adjusted (based on updated resistivity tomographs) to effect real-time control of the heating.
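For reference, the local heating rate that the injected current pattern controls is the standard Joule dissipation density (textbook physics, not quoted from the patent)

    q(\mathbf{r}) = \rho(\mathbf{r})\,\lvert\mathbf{J}(\mathbf{r})\rvert^{2} = \sigma(\mathbf{r})\,\lvert\nabla\phi(\mathbf{r})\rvert^{2},

so an updated ERT resistivity tomogram ρ(r) allows the injection currents to be rebalanced to steer where q is deposited.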
Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors
Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.
2009-01-01
In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
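A toy numpy sketch of the kind of estimation discussed above: a grid search for the 2D interaction position that maximizes a Poisson log-likelihood of the observed PMT signals, given assumed Gaussian mean detector response functions (all parameters and the geometry are made up for illustration):

    import numpy as np

    def mean_response(pos, pmt_xy, gain=200.0, sigma=8.0):
        """Assumed mean PMT signals for an interaction at `pos` (arbitrary units)."""
        d2 = ((pmt_xy - pos) ** 2).sum(axis=1)
        return gain * np.exp(-d2 / (2 * sigma ** 2)) + 1.0

    def ml_position(signals, pmt_xy, grid):
        """Return the grid point maximizing the Poisson log-likelihood of `signals`."""
        best, best_ll = None, -np.inf
        for pos in grid:
            lam = mean_response(pos, pmt_xy)
            ll = np.sum(signals * np.log(lam) - lam)   # Poisson log-likelihood (no factorial term)
            if ll > best_ll:
                best, best_ll = pos, ll
        return best

    pmt_xy = np.array([[0., 0.], [0., 30.], [30., 0.], [30., 30.]])
    grid = np.array([[x, y] for x in np.linspace(0, 30, 61) for y in np.linspace(0, 30, 61)])
    rng = np.random.default_rng(2)
    true_pos = np.array([11.0, 22.0])
    signals = rng.poisson(mean_response(true_pos, pmt_xy))
    print(ml_position(signals, pmt_xy, grid))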
Porro, Laura B; Witmer, Lawrence M; Barrett, Paul M
2015-01-01
Several skulls of the ornithischian dinosaur Lesothosaurus diagnosticus (Lower Jurassic, southern Africa) are known, but all are either incomplete, deformed, or incompletely prepared. This has hampered attempts to provide a comprehensive description of skull osteology in this crucial early dinosaurian taxon. Using visualization software, computed tomographic scans of the Lesothosaurus syntypes were digitally segmented to remove matrix, and identify and separate individual cranial and mandibular bones, revealing new anatomical details such as sutural morphology and the presence of several previously undescribed elements. Together with visual inspection of exposed skull bones, these CT data enable a complete description of skull anatomy in this taxon. Comparisons with our new data suggest that two specimens previously identified as Lesothosaurus sp. (MNHN LES 17 and MNHN LES 18) probably represent additional individuals of Lesothosaurus diagnosticus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, R.; Mondot, J.; Stankovski, Z.
1988-11-01
APOLLO II is a new, multigroup transport code under development at the Commissariat a l'Energie Atomique. The code has a modular structure and uses sophisticated software for data structuralization, dynamic memory management, data storage, and user macrolanguage. This paper gives an overview of the main methods used in the code for (a) multidimensional collision probability calculations, (b) leakage calculations, and (c) homogenization procedures. Numerical examples are given to demonstrate the potential of the modular structure of the code and the novel multilevel flat-flux representation used in the calculation of the collision probabilities.
Tomographic imaging of subducted lithosphere below northwest Pacific island arcs
Van Der Hilst, R.; Engdahl, R.; Spakman, W.; Nolet, G.
1991-01-01
The seismic tomography problem does not have a unique solution, and published tomographic images have been equivocal with regard to the deep structure of subducting slabs. An improved tomographic method, using a more realistic background Earth model and surface-reflected as well as direct seismic phases, shows that slabs beneath the Japan and Izu Bonin island arcs are deflected at the boundary between upper and lower mantle, whereas those beneath the northern Kuril and Mariana arcs sink into the lower mantle.
Business aspects of cardiovascular computed tomography: tackling the challenges.
Bateman, Timothy M
2008-01-01
The purpose of this article is to provide a comprehensive understanding of the business issues surrounding provision of dedicated cardiovascular computed tomographic imaging. Some of the challenges include high up-front costs, current low utilization relative to scanner capability, and inadequate payments. Cardiovascular computed tomographic imaging is a valuable clinical modality that should be offered by cardiovascular centers-of-excellence. With careful consideration of the business aspects, moderate-to-large size cardiology programs should be able to implement an economically viable cardiovascular computed tomographic service.
NASA Astrophysics Data System (ADS)
Karagiannis, Georgios
2017-03-01
This work led to a new method named 3D spectracoustic tomographic mapping imaging. Current and future work concerns the fabrication of a combined acoustic-microscopy transducer and infrared illumination probe permitting the simultaneous acquisition of the spectroscopic and tomographic information. This probe provides high-fidelity, precisely registered information from the combined modalities, termed spectracoustic information.
2016-04-28
Single-shot, volumetrically illuminated, three-dimensional, tomographic laser-induced-fluorescence imaging in a gaseous free jet
Halls, Benjamin R.
Single-shot, tomographic imaging of the three-dimensional concentration field is demonstrated in a turbulent gaseous free jet in co-flow...
Zhang, T; Godavarthi, C; Chaumet, P C; Maire, G; Giovannini, H; Talneau, A; Prada, C; Sentenac, A; Belkebir, K
2015-02-15
Tomographic diffractive microscopy is a marker-free optical digital imaging technique in which three-dimensional samples are reconstructed from a set of holograms recorded under different angles of incidence. We show experimentally that, by processing the holograms with singular value decomposition, it is possible to image objects in a noisy background that are invisible with classical wide-field microscopy and conventional tomographic reconstruction procedure. The targets can be further characterized with a selective quantitative inversion.
2016-01-01
We investigated whether intentional forgetting impacts only the likelihood of later retrieval from long-term memory or whether it also impacts the fidelity of those representations that are successfully retrieved. We accomplished this by combining an item-method directed forgetting task with a testing procedure and modeling approach inspired by the delayed-estimation paradigm used in the study of visual short-term memory (STM). Abstract or concrete colored images were each followed by a remember (R) or forget (F) instruction and sometimes by a visual probe requiring a speeded detection response (E1–E3). Memory was tested using an old–new (E1–E2) or remember-know-no (E3) recognition task followed by a continuous color judgment task (E2–E3); a final experiment included only the color judgment task (E4). Replicating the existing literature, more “old” or “remember” responses were made to R than F items and RTs to postinstruction visual probes were longer following F than R instructions. Color judgments were more accurate for successfully recognized or recollected R than F items (E2–E3); a mixture model confirmed a decrease to both the probability of retrieving the F items as well as the fidelity of the representation of those F items that were retrieved (E4). We conclude that intentional forgetting is an effortful process that not only reduces the likelihood of successfully encoding an item for later retrieval, but also produces an impoverished memory trace even when those items are retrieved; these findings draw a parallel between the control of memory representations within working and long-term memory. PMID:26709589
Fawcett, Jonathan M; Lawrence, Michael A; Taylor, Tracy L
2016-01-01
We investigated whether intentional forgetting impacts only the likelihood of later retrieval from long-term memory or whether it also impacts the fidelity of those representations that are successfully retrieved. We accomplished this by combining an item-method directed forgetting task with a testing procedure and modeling approach inspired by the delayed-estimation paradigm used in the study of visual short-term memory (STM). Abstract or concrete colored images were each followed by a remember (R) or forget (F) instruction and sometimes by a visual probe requiring a speeded detection response (E1-E3). Memory was tested using an old-new (E1-E2) or remember-know-no (E3) recognition task followed by a continuous color judgment task (E2-E3); a final experiment included only the color judgment task (E4). Replicating the existing literature, more "old" or "remember" responses were made to R than F items and RTs to postinstruction visual probes were longer following F than R instructions. Color judgments were more accurate for successfully recognized or recollected R than F items (E2-E3); a mixture model confirmed a decrease to both the probability of retrieving the F items as well as the fidelity of the representation of those F items that were retrieved (E4). We conclude that intentional forgetting is an effortful process that not only reduces the likelihood of successfully encoding an item for later retrieval, but also produces an impoverished memory trace even when those items are retrieved; these findings draw a parallel between the control of memory representations within working and long-term memory. (c) 2015 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Heit, B.; Yuan, X.; Bianchi, M.; Jakovlev, A.; Kumar, P.; Kay, S. M.; Sandvol, E. A.; Alonso, R.; Coira, B.; Comte, D.; Brown, L. D.; Kind, R.
2011-12-01
We present here the results obtained using data from our passive seismic array in the southern Puna plateau between 25°S and 28°S latitude in Argentina and Chile. First, we calculated P and S receiver functions in order to investigate the Moho thickness and other seismic discontinuities in the study area. The RF data show that the northern Puna plateau has a thicker crust and that the Moho topography is more irregular along strike. The seismic structure and thickness of the continental crust and the lithospheric mantle beneath the southern Puna plateau reveal that the LAB is deeper to the north of the array, suggesting lithospheric removal towards the south. We then performed a joint inversion of teleseismic and regional tomographic data in order to study the distribution of velocity anomalies that could help us better understand the evolution of the Andean elevated plateau and the role of lithosphere-asthenosphere interactions in this region. Low velocities are observed in correlation with young volcanic centers (e.g. Ojos del Salado, Cerro Blanco, Galan) and agree very well with the position of crustal lineaments in the region. This suggests a close relationship between magmatism and lithospheric structures at the crustal scale, coinciding with the presence of hot asthenospheric material at the base of the crust, probably induced by lithospheric foundering.
Polynomial chaos representation of databases on manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2017-04-15
Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution, which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
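A far simpler illustration than the manifold construction above: a one-dimensional probabilists'-Hermite PCE of Y = g(ξ), ξ ~ N(0,1), with coefficients estimated by Monte Carlo projection (the target function, order and sample sizes are arbitrary choices for the sketch):

    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He

    rng = np.random.default_rng(3)
    xi = rng.standard_normal(200_000)
    y = np.exp(0.3 * xi) + 0.5 * xi ** 2          # toy target random variable Y = g(xi)

    order = 6
    # c_k = E[Y He_k(xi)] / k!, using orthogonality of probabilists' Hermite polynomials
    coeffs = np.array([(y * He.hermeval(xi, np.eye(order + 1)[k])).mean() / math.factorial(k)
                       for k in range(order + 1)])

    # evaluate the PCE surrogate on fresh samples and compare the first two moments
    xi_new = rng.standard_normal(200_000)
    y_pce = He.hermeval(xi_new, coeffs)
    print("mean exact/PCE:", round(float(y.mean()), 3), round(float(y_pce.mean()), 3))
    print("std  exact/PCE:", round(float(y.std()), 3), round(float(y_pce.std()), 3))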
On the inequivalence of the CH and CHSH inequalities due to finite statistics
NASA Astrophysics Data System (ADS)
Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.
2017-06-01
Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated using a finite number of samples. Therefore the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain that phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabeling of parties, measurement settings and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling or having uniform marginals.
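For reference, the two variants discussed are usually written (standard textbook forms, not quoted from the paper)

    \text{CHSH:}\quad \lvert E(a,b) + E(a,b') + E(a',b) - E(a',b')\rvert \le 2,
    \text{CH:}\quad p(a,b) + p(a,b') + p(a',b) - p(a',b') - p_A(a) - p_B(b) \le 0,

and they are interchangeable only when the estimated outcome distributions satisfy the no-signaling constraints exactly.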
Mental Representation of Circuit Diagrams.
1984-10-15
transformer serves to change the voltage of an AC supply, that a particular combination of transistors acts as a flip-flop, and so forth. Fundamentally, this... statistically significant differences between skill levels, the size of the effect as a proportion of variability would probably not be very great. Thus
Douglas, Pamela S; Pontone, Gianluca; Hlatky, Mark A; Patel, Manesh R; Norgaard, Bjarne L; Byrne, Robert A; Curzen, Nick; Purcell, Ian; Gutberlet, Matthias; Rioufol, Gilles; Hink, Ulrich; Schuchlenz, Herwig Walter; Feuchtner, Gudrun; Gilard, Martine; Andreini, Daniele; Jensen, Jesper M; Hadamitzky, Martin; Chiswell, Karen; Cyr, Derek; Wilk, Alan; Wang, Furong; Rogers, Campbell; De Bruyne, Bernard
2015-12-14
In symptomatic patients with suspected coronary artery disease (CAD), computed tomographic angiography (CTA) improves patient selection for invasive coronary angiography (ICA) compared with functional testing. The impact of measuring fractional flow reserve by CTA (FFRCT) is unknown. At 11 sites, 584 patients with new onset chest pain were prospectively assigned to receive either usual testing (n = 287) or CTA/FFR(CT) (n = 297). Test interpretation and care decisions were made by the clinical care team. The primary endpoint was the percentage of those with planned ICA in whom no significant obstructive CAD (no stenosis ≥50% by core laboratory quantitative analysis or invasive FFR < 0.80) was found at ICA within 90 days. Secondary endpoints including death, myocardial infarction, and unplanned revascularization were independently and blindly adjudicated. Subjects averaged 61 ± 11 years of age, 40% were female, and the mean pre-test probability of obstructive CAD was 49 ± 17%. Among those with intended ICA (FFR(CT)-guided = 193; usual care = 187), no obstructive CAD was found at ICA in 24 (12%) in the CTA/FFR(CT) arm and 137 (73%) in the usual care arm (risk difference 61%, 95% confidence interval 53-69, P< 0.0001), with similar mean cumulative radiation exposure (9.9 vs. 9.4 mSv, P = 0.20). Invasive coronary angiography was cancelled in 61% after receiving CTA/FFR(CT) results. Among those with intended non-invasive testing, the rates of finding no obstructive CAD at ICA were 13% (CTA/FFR(CT)) and 6% (usual care; P = 0.95). Clinical event rates within 90 days were low in usual care and CTA/FFR(CT) arms. Computed tomographic angiography/fractional flow reserve by CTA was a feasible and safe alternative to ICA and was associated with a significantly lower rate of invasive angiography showing no obstructive CAD. © The Author 2015. Published by Oxford University Press on behalf of the European Society of Cardiology.
Optimal joule heating of the subsurface
Berryman, J.G.; Daily, W.D.
1994-07-05
A method for simultaneously heating the subsurface and imaging the effects of the heating is disclosed. This method combines the use of tomographic imaging (electrical resistance tomography or ERT) to image the electrical resistivity distribution underground, with joule heating by electrical currents injected into the ground. A potential distribution is established on a series of buried electrodes, resulting in energy deposition underground that is a function of the resistivity and injection current density. Measurement of the voltages and currents also permits a tomographic reconstruction of the resistivity distribution. Using this tomographic information, the current injection pattern on the driving electrodes can be adjusted to change the current density distribution and thus optimize the heating. As the heating changes conditions, the applied current pattern can be repeatedly adjusted (based on updated resistivity tomographs) to effect real-time control of the heating.
Wang, Dengjiang; Zhang, Weifang; Wang, Xiangyu; Sun, Bo
2016-01-01
This study presents a novel monitoring method for hole-edge corrosion damage in plate structures based on Lamb wave tomographic imaging techniques. An experimental procedure with a cross-hole layout using 16 piezoelectric transducers (PZTs) was designed. The A0 mode of the Lamb wave was selected, which is sensitive to thickness-loss damage. The iterative algebraic reconstruction technique (ART) method was used to locate and quantify the corrosion damage at the edge of the hole. Hydrofluoric acid with a concentration of 20% was used to corrode the specimen artificially. To estimate the effectiveness of the proposed method, the real corrosion damage was compared with the predicted corrosion damage based on the tomographic method. The results show that the Lamb-wave-based tomographic method can be used to monitor the hole-edge corrosion damage accurately. PMID:28774041
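A compact numpy sketch of an additive ART (Kaczmarz-type) iteration of the kind used above; the ray matrix, relaxation factor and "corrosion" vector are toy stand-ins rather than the paper's 16-PZT cross-hole geometry:

    import numpy as np

    def art(A, b, n_sweeps=30, relax=0.2):
        """Additive ART (Kaczmarz): cycle over rays, projecting onto each ray equation."""
        x = np.zeros(A.shape[1])
        row_norm2 = (A ** 2).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norm2[i] == 0:
                    continue
                r = b[i] - A[i] @ x
                x += relax * r / row_norm2[i] * A[i]
            x = np.clip(x, 0, None)        # thickness loss assumed non-negative
        return x

    rng = np.random.default_rng(4)
    A = rng.random((64, 100))              # 64 ray paths crossing a 10x10 pixel grid (toy)
    x_true = np.zeros(100); x_true[55] = 1.0   # localized "corrosion" pixel
    b = A @ x_true
    print(np.argmax(art(A, b)))            # should recover index 55 (approximately)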
Kuhl, Brice A.; Rissman, Jesse; Wagner, Anthony D.
2012-01-01
Successful encoding of episodic memories is thought to depend on contributions from prefrontal and temporal lobe structures. Neural processes that contribute to successful encoding have been extensively explored through univariate analyses of neuroimaging data that compare mean activity levels elicited during the encoding of events that are subsequently remembered vs. those subsequently forgotten. Here, we applied pattern classification to fMRI data to assess the degree to which distributed patterns of activity within prefrontal and temporal lobe structures elicited during the encoding of word-image pairs were diagnostic of the visual category (Face or Scene) of the encoded image. We then assessed whether representation of category information was predictive of subsequent memory. Classification analyses indicated that temporal lobe structures contained information robustly diagnostic of visual category. Information in prefrontal cortex was less diagnostic of visual category, but was nonetheless associated with highly reliable classifier-based evidence for category representation. Critically, trials associated with greater classifier-based estimates of category representation in temporal and prefrontal regions were associated with a higher probability of subsequent remembering. Finally, consideration of trial-by-trial variance in classifier-based measures of category representation revealed positive correlations between prefrontal and temporal lobe representations, with the strength of these correlations varying as a function of the category of image being encoded. Together, these results indicate that multi-voxel representations of encoded information can provide unique insights into how visual experiences are transformed into episodic memories. PMID:21925190
NASA Astrophysics Data System (ADS)
Tornai, Martin P.; Bowsher, James E.; Archer, Caryl N.; Peter, Jörg; Jaszczak, Ronald J.; MacDonald, Lawrence R.; Patt, Bradley E.; Iwanczyk, Jan S.
2003-01-01
A novel tomographic gantry was designed, built and initially evaluated for single photon emission imaging of metabolically active lesions in the pendant breast and near chest wall. Initial emission imaging measurements with breast lesions of various uptake ratios are presented. Methods: A prototype tomograph was constructed utilizing a compact gamma camera having a field-of-view of <13×13 cm² with arrays of 2×2×6 mm³ quantized NaI(Tl) scintillators coupled to position sensitive PMTs. The camera was mounted on a radially oriented support with 6 cm variable radius-of-rotation. This unit is further mounted on a goniometric cradle providing polar motion, and in turn mounted on an azimuthal rotation stage capable of indefinite vertical axis-of-rotation about the central rotation axis (RA). Initial measurements with isotopic Tc-99m (140 keV) to evaluate the system include acquisitions with various polar tilt angles about the RA. Tomographic measurements were made of a frequency and resolution cold-rod phantom filled with aqueous Tc-99m. Tomographic and planar measurements of 0.6 and 1.0 cm diameter fillable spheres in an available ~950 ml hemi-ellipsoidal (uncompressed) breast phantom attached to a life-size anthropomorphic torso phantom with lesion:breast-and-body:cardiac-and-liver activity concentration ratios of 11:1:19 were compared. Various photopeak energy windows from 10-30% widths were obtained, along with a 35% scatter window below a 15% photopeak window from the list-mode data. Projections with all photopeak window and camera tilt conditions were reconstructed with an ordered subsets expectation maximization (OSEM) algorithm capable of reconstructing arbitrary tomographic orbits. Results: As iteration number increased for the tomographically measured data at all polar angles, contrasts increased while signal-to-noise ratios (SNRs) decreased in the expected way with OSEM reconstruction. The rollover between contrast improvement and SNR degradation of the lesion occurred at two to three iterations. The reconstructed tomographic data yielded SNRs with or without scatter correction that were >9 times better than the planar scans. There was up to a factor of ~2.5 increase in total primary and scatter contamination in the photopeak window with increasing tilt angle from 15° to 45°, consistent with more direct line-of-sight of myocardial and liver activity with increased camera polar angle. Conclusion: This new, ultra-compact, dedicated tomographic imaging system has the potential of providing valuable, fully 3D functional information about small, otherwise indeterminate breast lesions as an adjunct to diagnostic mammography.
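A schematic numpy version of the OSEM update mentioned above (generic ordered-subsets EM for emission data; the system matrix, subset split and count levels are toy assumptions, and arbitrary-orbit system modeling is not included):

    import numpy as np

    def osem(A, counts, n_subsets=4, n_iter=5, eps=1e-12):
        """Ordered-subsets expectation maximization for emission tomography."""
        f = np.ones(A.shape[1])
        subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
        for _ in range(n_iter):
            for idx in subsets:
                As = A[idx]
                ratio = counts[idx] / (As @ f + eps)
                f *= (As.T @ ratio) / (As.sum(axis=0) + eps)
        return f

    rng = np.random.default_rng(5)
    A = rng.random((200, 64))                 # projection matrix (bins x voxels), toy
    f_true = rng.random(64) * 5
    counts = rng.poisson(A @ f_true)          # Poisson-distributed projection counts
    f_hat = osem(A, counts)
    print(float(np.corrcoef(f_hat, f_true)[0, 1]))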
Atwood, Robert C.; Bodey, Andrew J.; Price, Stephen W. T.; Basham, Mark; Drakopoulos, Michael
2015-01-01
Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an ‘orthogonal’ fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and ‘facility-independent’: it can run on standard cluster infrastructure at any institution. PMID:25939626
Protein sequence comparison based on K-string dictionary.
Yu, Chenglong; He, Rong L; Yau, Stephen S-T
2013-10-25
The current K-string-based protein sequence comparisons require large amounts of computer memory because the dimension of the protein vector representation grows exponentially with K. In this paper, we propose a novel concept, the "K-string dictionary", to solve this high-dimensional problem. It allows us to use a much lower dimensional K-string-based frequency or probability vector to represent a protein, and thus significantly reduce the computer memory requirements for their implementation. Furthermore, based on this new concept, we use Singular Value Decomposition to analyze real protein datasets, and the improved protein vector representation allows us to obtain accurate gene trees. © 2013.
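A small numpy sketch of the idea: build the dictionary from the K-strings actually observed in the dataset rather than all possible ones, form frequency vectors in that reduced space, and compare sequences after an SVD; the sequences, K and distance measure are made-up toys:

    import numpy as np
    from itertools import chain

    def kmers(seq, k):
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    def kstring_vectors(seqs, k=3):
        """Frequency vectors over the dictionary of k-strings observed in `seqs`."""
        dictionary = sorted(set(chain.from_iterable(kmers(s, k) for s in seqs)))
        index = {w: j for j, w in enumerate(dictionary)}
        X = np.zeros((len(seqs), len(dictionary)))
        for i, s in enumerate(seqs):
            for w in kmers(s, k):
                X[i, index[w]] += 1
            X[i] /= max(X[i].sum(), 1)         # frequencies / probabilities
        return X, dictionary

    seqs = ["MKTAYIAKQR", "MKTAYIAKQL", "GAVLIPFWMG"]
    X, D = kstring_vectors(seqs, k=3)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = U * s                                   # reduced representation
    dist = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    print(len(D), "dictionary k-strings")
    print(dist.round(3))                        # the first two sequences should be closest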
The Political Impact of the New Hispanic Second Generation
Logan, John R.; Oh, Sookhee; Darrah, Jennifer
2013-01-01
The rapid growth of the Hispanic population in the United States, particularly those of the second generation, who have automatic rights of citizenship, could be expected to result in increased influence and representation in politics for this group. We show that the effect of a sheer growth in numbers at the national level is diminished by several factors: low probabilities of naturalisation by Hispanic immigrants; non-participation in voting, especially by the US-born generations; and concentration of growth in Congressional Districts that already have Hispanic Representatives. It is a challenge for public policy to reduce the lag between population growth and political representation. PMID:24009469
A study of complex scaling transformation using the Wigner representation of wavefunctions.
Kaprálová-Ždánská, Petra Ruth
2011-05-28
The complex scaling operator exp(-θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO(2+) vibronic resonances. © 2011 American Institute of Physics.
Integrated stationary Ornstein-Uhlenbeck process, and double integral processes
NASA Astrophysics Data System (ADS)
Abundo, Mario; Pirozzi, Enrica
2018-03-01
We find a representation of the integral of the stationary Ornstein-Uhlenbeck (ISOU) process in terms of Brownian motion Bt; moreover, we show that, under certain conditions on the functions f and g , the double integral process (DIP) D(t) = ∫βt g(s) (∫αs f(u) dBu) ds can be thought as the integral of a suitable Gauss-Markov process. Some theoretical and application details are given, among them we provide a simulation formula based on that representation by which sample paths, probability densities and first passage times of the ISOU process are obtained; the first-passage times of the DIP are also studied.
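A plain Euler-Maruyama sketch (not the paper's representation-based simulation formula) for generating sample paths of a stationary OU process X and its time integral Y(t) = ∫₀ᵗ X(s) ds; the parameter values are illustrative:

    import numpy as np

    def simulate_isou(theta=1.0, sigma=0.5, mu=0.0, T=10.0, n=10_000, n_paths=5, seed=0):
        """Simulate X (stationary OU) and Y(t) = integral of X by Euler-Maruyama."""
        rng = np.random.default_rng(seed)
        dt = T / n
        # start X in its stationary law N(mu, sigma^2 / (2 theta))
        x = mu + sigma / np.sqrt(2 * theta) * rng.standard_normal(n_paths)
        y = np.zeros(n_paths)
        Y = np.empty((n + 1, n_paths)); Y[0] = y
        for k in range(n):
            y = y + x * dt
            x = x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
            Y[k + 1] = y
        return Y

    Y = simulate_isou()
    print("sample Var[Y(T)]:", float(Y[-1].var()))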
Tomographic inversion of satellite photometry. II
NASA Technical Reports Server (NTRS)
Solomon, S. C.; Hays, P. B.; Abreu, V. J.
1985-01-01
A method for combining nadir observations of emission features in the upper atmosphere with the result of a tomographic inversion of limb brightness measurements is presented. Simulated and actual results are provided, and error sensitivity is investigated.
Quantum-Like Models for Decision Making in Psychology and Cognitive Science
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei.
2009-02-01
We show that (in contrast to rather common opinion) the domain of applications of the mathematical formalism of quantum mechanics is not restricted to physics. This formalism can be applied to the description of various quantum-like (QL) information processing. In particular, the calculus of quantum (and more general QL) probabilities can be used to explain some paradoxical statistical data which was collected in psychology and cognitive science. The main lesson of our study is that one should sharply distinguish the mathematical apparatus of QM from QM as a physical theory. The domain of application of the mathematical apparatus is essentially wider than quantum physics. Keywords: quantum-like representation algorithm, formula of total probability, interference of probabilities, psychology, cognition, decision making.
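The QL modification of the formula of total probability mentioned above is usually written (paraphrasing the standard presentation, not quoting this paper)

    P(B) = P(A_1)P(B\mid A_1) + P(A_2)P(B\mid A_2) + 2\lambda\sqrt{P(A_1)P(B\mid A_1)\,P(A_2)P(B\mid A_2)}, \qquad \lambda = \cos\theta,

where λ = 0 recovers the classical law and a nonzero interference coefficient encodes the context effects observed in the data.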
Developmental dyscalculia is related to visuo-spatial memory and inhibition impairment☆
Szucs, Denes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence
2013-01-01
Developmental dyscalculia is thought to be a specific impairment of mathematics ability. Currently dominant cognitive neuroscience theories of developmental dyscalculia suggest that it originates from the impairment of the magnitude representation of the human brain, residing in the intraparietal sulcus, or from impaired connections between number symbols and the magnitude representation. However, behavioral research offers several alternative theories for developmental dyscalculia and neuro-imaging also suggests that impairments in developmental dyscalculia may be linked to disruptions of other functions of the intraparietal sulcus than the magnitude representation. Strikingly, the magnitude representation theory has never been explicitly contrasted with a range of alternatives in a systematic fashion. Here we have filled this gap by directly contrasting five alternative theories (magnitude representation, working memory, inhibition, attention and spatial processing) of developmental dyscalculia in 9–10-year-old primary school children. Participants were selected from a pool of 1004 children and took part in 16 tests and nine experiments. The dominant features of developmental dyscalculia are visuo-spatial working memory, visuo-spatial short-term memory and inhibitory function (interference suppression) impairment. We hypothesize that inhibition impairment is related to the disruption of central executive memory function. Potential problems of visuo-spatial processing and attentional function in developmental dyscalculia probably depend on short-term memory/working memory and inhibition impairments. The magnitude representation theory of developmental dyscalculia was not supported. PMID:23890692
Veloz, Tomas; Desjardins, Sylvie
2015-01-01
Quantum models of concept combinations have been successful in representing various experimental situations that cannot be accommodated by traditional models based on classical probability or fuzzy set theory. In many cases, the focus has been on producing a representation that fits experimental results to validate quantum models. However, these representations are not always consistent with the cognitive modeling principles. Moreover, some important issues related to the representation of concepts such as the dimensionality of the realization space, the uniqueness of solutions, and the compatibility of measurements, have been overlooked. In this paper, we provide a dimensional analysis of the realization space for the two-sector Fock space model for conjunction of concepts focusing on the first and second sectors separately. We then introduce various representation of concepts that arise from the use of unitary operators in the realization space. In these concrete representations, a pair of concepts and their combination are modeled by a single conceptual state, and by a collection of exemplar-dependent operators. Therefore, they are consistent with cognitive modeling principles. This framework not only provides a uniform approach to model an entire data set, but, because all measurement operators are expressed in the same basis, allows us to address the question of compatibility of measurements. In particular, we present evidence that it may be possible to predict non-commutative effects from partial measurements of conceptual combinations. PMID:26617556
Nondeterministic data base for computerized visual perception
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.
1976-01-01
A description is given of the knowledge representation data base in the perception subsystem of the Mars robot vehicle prototype. Two types of information are stored. The first is generic information that represents general rules that are conformed to by structures in the expected environments. The second kind of information is a specific description of a structure, i.e., the properties and relations of objects in the specific case being analyzed. The generic knowledge is represented so that it can be applied to extract and infer the description of specific structures. The generic model of the rules is substantially a Bayesian representation of the statistics of the environment, which means it is geared to representation of nondeterministic rules relating properties of, and relations between, objects. The description of a specific structure is also nondeterministic in the sense that all properties and relations may take a range of values with an associated probability distribution.
Visual long-term memory has the same limit on fidelity as visual working memory.
Brady, Timothy F; Konkle, Talia; Gill, Jonathan; Oliva, Aude; Alvarez, George A
2013-06-01
Visual long-term memory can store thousands of objects with surprising visual detail, but just how detailed are these representations, and how can one quantify this fidelity? Using the property of color as a case study, we estimated the precision of visual information in long-term memory, and compared this with the precision of the same information in working memory. Observers were shown real-world objects in random colors and were asked to recall the colors after a delay. We quantified two parameters of performance: the variability of internal representations of color (fidelity) and the probability of forgetting an object's color altogether. Surprisingly, the fidelity of color information in long-term memory was comparable to the asymptotic precision of working memory. These results suggest that long-term memory and working memory may be constrained by a common limit, such as a bound on the fidelity required to retrieve a memory representation.
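A minimal scipy sketch of the two-parameter model implied above — a probability that the color is in memory, a von Mises fidelity parameter for retrieved items, plus uniform guessing — fit by maximum likelihood to recall errors in radians; the data here are simulated, not the study's:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import vonmises

    def neg_log_lik(params, err):
        p_mem, kappa = params
        like = p_mem * vonmises.pdf(err, kappa) + (1 - p_mem) / (2 * np.pi)
        return -np.log(like).sum()

    def fit_mixture(err):
        res = minimize(neg_log_lik, x0=[0.7, 5.0], args=(err,),
                       bounds=[(1e-3, 1 - 1e-3), (1e-2, 200.0)])
        return res.x   # (probability in memory, fidelity kappa)

    rng = np.random.default_rng(6)
    n, p_true, kappa_true = 400, 0.8, 12.0
    remembered = rng.random(n) < p_true
    err = np.where(remembered,
                   vonmises.rvs(kappa_true, size=n, random_state=rng),
                   rng.uniform(-np.pi, np.pi, size=n))
    print(fit_mixture(err).round(2))   # should land near [0.80, 12.0]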
OCT despeckling via weighted nuclear norm constrained non-local low-rank representation
NASA Astrophysics Data System (ADS)
Tang, Chang; Zheng, Xiao; Cao, Lijuan
2017-10-01
As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method by using non-local, low-rank representation with weighted nuclear norm constraint. Unlike previous non-local low-rank representation based OCT despeckling methods, we first generate a guidance image to improve the non-local group patches selection quality, then a low-rank optimization model with a weighted nuclear norm constraint is formulated to process the selected group patches. The corrupted probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Note that each single patch might belong to several groups, hence different estimates of each patch are aggregated to obtain its final despeckled result. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.
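A numpy sketch of the core weighted nuclear-norm step described above: stack similar patches as columns of a matrix and shrink its singular values with weights inversely proportional to their magnitude (the weighting constant, the rank-1 toy patch group and the multiplicative noise are assumptions; guidance-image patch grouping and the pixel-wise corruption weights are omitted):

    import numpy as np

    def weighted_svt(Y, c=8.0, eps=1e-6):
        """Weighted singular value thresholding of a patch-group matrix Y."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        w = c / (s + eps)                  # larger singular values are shrunk less
        s_hat = np.maximum(s - w, 0.0)
        return (U * s_hat) @ Vt

    # toy group of 32 similar patches stacked as columns, with multiplicative noise
    rng = np.random.default_rng(7)
    clean = np.outer(np.linspace(0.0, 1.0, 64), np.ones(32))    # rank-1 "similar patches"
    noisy = clean * rng.gamma(shape=10.0, scale=0.1, size=clean.shape)
    den = weighted_svt(noisy)
    print("noisy error:", round(float(np.linalg.norm(noisy - clean)), 2),
          "  denoised error:", round(float(np.linalg.norm(den - clean)), 2))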
NASA Astrophysics Data System (ADS)
Petit, C.; Le Louarn, M.; Fusco, T.; Madec, P.-Y.
2011-09-01
Various tomographic control solutions have been proposed during the last decades to ensure efficient or even optimal closed-loop correction for tomographic Adaptive Optics (AO) concepts such as Laser Tomographic AO (LTAO) and Multi-Conjugate AO (MCAO). The optimal solution, based on the Linear Quadratic Gaussian (LQG) approach, as well as suboptimal but efficient solutions such as Pseudo-Open Loop Control (POLC), require multiple Matrix Vector Multiplications (MVMs). Whatever their respective performance, these control solutions thus exhibit a strong increase in on-line complexity, and their implementation may become difficult in demanding cases. Two such cases are of particular interest. First, the system Real-Time Computer (RTC) architecture and implementation may be derived from past or present solutions that do not support multiple MVMs. This is the case for the Adaptive Optics Facility, whose RTC architecture is derived from the SPARTA platform and inherits its simple single-MVM architecture, which does not accommodate LTAO control solutions, for instance. Second, for future systems such as Extremely Large Telescopes, the number of degrees of freedom is twenty to one hundred times larger than in present systems. Under these conditions, tomographic control solutions can hardly be used in their standard form, and optimized implementations must be considered. Single-MVM tomographic control solutions are a potential answer, and straightforward solutions such as Virtual Deformable Mirrors have already been proposed for LTAO, but with tuning issues. In this paper we investigate the possibility of deriving, from tomographic control solutions such as POLC or LQG, simplified control solutions that retain a single-MVM architecture and could thus be implemented on present systems or on future complex systems. We derive various solutions theoretically and analyze their respective performance on various systems through numerical simulation. We discuss the optimization of their performance and stability issues with respect to classical control solutions. We finally discuss off-line computation and implementation constraints.
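A tiny numpy illustration of the folding idea discussed above: precompute the product of the turbulence-to-DM projector P and the tomographic reconstructor R off-line, so the real-time path remains a single MVM (the matrices here are random stand-ins, not a calibrated AO model):

    import numpy as np

    rng = np.random.default_rng(8)
    n_slopes, n_layers, n_actuators = 2400, 900, 500
    R = rng.standard_normal((n_layers, n_slopes)) * 1e-3     # tomographic reconstructor (off-line)
    P = rng.standard_normal((n_actuators, n_layers)) * 1e-3  # projection onto the DM (off-line)

    M = P @ R                        # folded control matrix, computed off-line once

    s = rng.standard_normal(n_slopes)        # one frame of WFS slopes
    u_two_step = P @ (R @ s)                 # multi-MVM pipeline
    u_single = M @ s                         # single MVM in the real-time computer
    print(np.allclose(u_two_step, u_single))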
DOE Office of Scientific and Technical Information (OSTI.GOV)
EMAM, M; Eldib, A; Lin, M
2014-06-01
Purpose: An in-house Monte Carlo based treatment planning system (MC TPS) has been developed for modulated electron radiation therapy (MERT). Our preliminary MERT planning experience called for a more user-friendly graphical user interface. The current work aimed to design graphical windows and tools to facilitate the contouring and planning process. Methods: Our in-house GUI MC TPS is built on a set of EGS4 user codes, namely MCPLAN and MCBEAM, in addition to an in-house optimization code named MCOPTIM. The patient virtual phantom is constructed using the tomographic images in DICOM format exported from clinical treatment planning systems (TPS). Treatment target volumes and critical structures were usually contoured on the clinical TPS and then sent as a structure set file. In our GUI program we developed a visualization tool to allow the planner to visualize the DICOM images and delineate the various structures. We implemented an option in our code for automatic contouring of the patient body and lungs. We also created an interface window displaying a three-dimensional representation of the target and showing a graphical representation of the treatment beams. Results: The new GUI features helped streamline the planning process. The implemented contouring option eliminated the need for performing this step on the clinical TPS. The auto-detection option for contouring the outer patient body and lungs was tested on patient CTs and was shown to be accurate as compared to that of the clinical TPS. The three-dimensional representation of the target and the beams allows better selection of the gantry, collimator and couch angles. Conclusion: An in-house GUI program has been developed for more efficient MERT planning. The application of the aiding tools implemented in the program is time saving and gives better control of the planning process.
Assessing the resolution-dependent utility of tomograms for geostatistics
Day-Lewis, F. D.; Lane, J.W.
2004-01-01
Geophysical tomograms are used increasingly as auxiliary data for geostatistical modeling of aquifer and reservoir properties. The correlation between tomographic estimates and hydrogeologic properties is commonly based on laboratory measurements, co-located measurements at boreholes, or petrophysical models. The inferred correlation is assumed uniform throughout the interwell region; however, tomographic resolution varies spatially due to acquisition geometry, regularization, data error, and the physics underlying the geophysical measurements. Blurring and inversion artifacts are expected in regions traversed by few or only low-angle raypaths. In the context of radar traveltime tomography, we derive analytical models for (1) the variance of tomographic estimates, (2) the spatially variable correlation with a hydrologic parameter of interest, and (3) the spatial covariance of tomographic estimates. Synthetic examples demonstrate that tomograms of qualitative value may have limited utility for geostatistics; moreover, the imprint of regularization may preclude inference of meaningful spatial statistics from tomograms.
NASA Astrophysics Data System (ADS)
Flynn, Brendan P.; D'Souza, Alisha V.; Kanick, Stephen C.; Maytin, Edward; Hasan, Tayyaba; Pogue, Brian W.
2013-03-01
Aminolevulinic acid (ALA)-induced Protoporphyrin IX (PpIX)-based photodynamic therapy (PDT) is an effective treatment for skin cancers including basal cell carcinoma (BCC). Topically applied ALA promotes PpIX production preferentially in tumors, and many strategies have been developed to increase PpIX production; however, PpIX distribution and PDT treatment efficacy at depths > 1 mm are not fully understood. While surface imaging techniques provide useful diagnosis, dosimetry, and efficacy information for superficial tumors, these methods cannot interrogate deeper tumors to provide in situ insight into spatial PpIX distributions. We have developed an ultrasound-guided, white-light-informed, tomographic spectroscopy system for the spatial measurement of subsurface PpIX. Detailed imaging system specifications, methodology, and optical-phantom-based characterization will be presented separately. Here we evaluate preliminary in vivo results using both full tomographic reconstruction and plots of individual tomographic source-detector pair data against US images.
Students' Appreciation of Expectation and Variation as a Foundation for Statistical Understanding
ERIC Educational Resources Information Center
Watson, Jane M.; Callingham, Rosemary A.; Kelly, Ben A.
2007-01-01
This study presents the results of a partial credit Rasch analysis of in-depth interview data exploring statistical understanding of 73 school students in 6 contextual settings. The use of Rasch analysis allowed the exploration of a single underlying variable across contexts, which included probability sampling, representation of temperature…
30 CFR 44.27 - Consent findings and rules or orders.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS Hearings § 44.27 Consent findings and rules or orders. (a) General. At any time after a request..., representations of the parties, and probability of an agreement which will result in a just disposition of the... time granted for negotiations, the parties or their counsel may: (1) Submit the proposed agreement to...
The Role of Probability in Developing Learners' Models of Simulation Approaches to Inference
ERIC Educational Resources Information Center
Lee, Hollylynne S.; Doerr, Helen M.; Tran, Dung; Lovett, Jennifer N.
2016-01-01
Repeated sampling approaches to inference that rely on simulations have recently gained prominence in statistics education, and probabilistic concepts are at the core of this approach. In this approach, learners need to develop a mapping among the problem situation, a physical enactment, computer representations, and the underlying randomization…
Shooting Free Throws, Probability, and the Golden Ratio
ERIC Educational Resources Information Center
Goodman, Terry
2010-01-01
Part of the power of algebra is that it provides students with tools that they can use to model a variety of problems and applications. Such modeling requires them to understand patterns and choose from a variety of representations--numeric, graphical, symbolic--to construct a model that accurately reflects the relationships found in the original…
Path integrals and large deviations in stochastic hybrid systems.
Bressloff, Paul C; Newby, Jay M
2014-04-01
We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.
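For orientation, the generic weak-noise large-deviation structure the abstract refers to can be summarized as follows (a hedged, schematic form in standard notation, not the paper's exact action functional for stochastic hybrid systems).

```latex
% Schematic weak-noise large-deviation structure (hedged, generic notation;
% not the paper's exact functional):
P[x(\cdot)] \asymp \exp\!\Big(-\tfrac{1}{\epsilon}\,S[x]\Big), \qquad
S[x] = \int_0^T \big( p\,\dot{x} - H(x,p) \big)\,\mathrm{d}t,
\qquad
\Phi(x) = \inf_{x(0)=x_s,\;x(T)=x} S[x], \qquad
\langle \tau \rangle \sim e^{\Phi(x^{*})/\epsilon}.
```

Here $\Phi$ is the quasipotential obtained by minimizing the action over paths from the metastable state $x_s$, and the last relation is the associated Arrhenius-like estimate of the mean first passage time.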
Two conditions for equivalence of 0-norm solution and 1-norm solution in sparse representation.
Li, Yuanqing; Amari, Shun-Ichi
2010-07-01
In sparse representation, two important sparse solutions, the 0-norm and 1-norm solutions, have been receiving much attention. The 0-norm solution is the sparsest; however, it is not easy to obtain. Although the 1-norm solution may not be the sparsest, it can be easily obtained by linear programming. In many cases, the 0-norm solution can be obtained by finding the 1-norm solution. Many discussions exist on the equivalence of the two sparse solutions. This paper analyzes two conditions for the equivalence of the two sparse solutions. The first condition is necessary and sufficient, but difficult to verify. The second is necessary but not sufficient; however, it is easy to verify. In this paper, we analyze the second condition within the stochastic framework and propose a variant. We then prove that the equivalence of the two sparse solutions holds with high probability under the variant of the second condition. Furthermore, in the limit case where the 0-norm solution is extremely sparse, the second condition is also a sufficient condition with probability 1.
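A minimal sketch of the 1-norm (basis pursuit) solution obtained by linear programming, as mentioned above; the problem sizes, the random Gaussian matrix and the use of scipy.optimize.linprog are my own choices, not from the cited paper.

```python
# Hedged illustration: recover a sparse vector via the 1-norm solution,
# posed as a linear program (min ||x||_1 s.t. Ax = b, with x = u - v).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 40, 100, 4                       # measurements, unknowns, sparsity
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

c = np.ones(2 * m)                         # objective: sum(u) + sum(v)
A_eq = np.hstack([A, -A])                  # equality constraint A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * m), method="highs")
x_l1 = res.x[:m] - res.x[m:]

print("max abs error vs sparsest solution:", np.abs(x_l1 - x_true).max())
```

In this well-conditioned regime the 1-norm solution typically coincides with the 0-norm solution, which is exactly the equivalence the paper studies.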
Estimation of the Thermal Process in the Honeycomb Panel by a Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gusev, S. A.; Nikolaev, V. N.
2018-01-01
A new Monte Carlo method for estimating the thermal state of heat insulation containing honeycomb panels is proposed in the paper. The heat transfer in the honeycomb panel is described by a boundary value problem for a parabolic equation with a discontinuous diffusion coefficient and boundary conditions of the third kind. To obtain an approximate solution, it is proposed to smooth the diffusion coefficient. The resulting problem is then solved on the basis of the probabilistic representation, i.e., the expectation of a functional of the diffusion process corresponding to the boundary value problem. Solving the problem thus reduces to numerical statistical modelling of a large number of trajectories of the diffusion process corresponding to the parabolic problem. The Euler method was used earlier for this problem, but it requires a large computational effort. In this paper the method is modified by using a combination of the Euler method and the random walk on moving spheres method. The new approach allows us to significantly reduce the computational cost.
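A toy illustration of the probabilistic representation mentioned above (my own construction, not the paper's scheme): the solution at a point is estimated as the expectation of a functional of a diffusion whose coefficient is the smoothed one. A plain Euler scheme is used here, whereas the paper combines Euler steps with the random walk on moving spheres, and boundary conditions of the third kind are ignored in this sketch.

```python
# Hedged toy Feynman-Kac estimate: u(x,t) = E[phi(X_t)] for
# u_t = 0.5*sigma(x)^2 * u_xx with dX = sigma(X) dW (boundaries ignored).
import numpy as np

def sigma(x, eps=0.05):
    # discontinuous coefficient (1 inside |x|<1, 0.2 outside), smoothed with tanh
    return 0.2 + 0.8 * 0.5 * (1.0 + np.tanh((1.0 - np.abs(x)) / eps))

def phi(x):
    return np.exp(-x**2)              # toy initial temperature profile

def u_estimate(x0, t, n_paths=20000, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(int(t / dt)):
        x += sigma(x) * np.sqrt(dt) * rng.standard_normal(n_paths)  # Euler step
    return phi(x).mean()              # Monte Carlo expectation of the functional

print(u_estimate(0.0, t=0.5))
```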
Discriminative Bayesian Dictionary Learning for Classification.
Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal
2016-12-01
We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and object and scene-category classification, using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
The extraction and integration framework: a two-process account of statistical learning.
Thiessen, Erik D; Kronstein, Alexandra T; Hufnagle, Daniel G
2013-07-01
The term statistical learning in infancy research originally referred to sensitivity to transitional probabilities. Subsequent research has demonstrated that statistical learning contributes to infant development in a wide array of domains. The range of statistical learning phenomena necessitates a broader view of the processes underlying statistical learning. Learners are sensitive to a much wider range of statistical information than the conditional relations indexed by transitional probabilities, including distributional and cue-based statistics. We propose a novel framework that unifies learning about all of these kinds of statistical structure. From our perspective, learning about conditional relations outputs discrete representations (such as words). Integration across these discrete representations yields sensitivity to cues and distributional information. To achieve sensitivity to all of these kinds of statistical structure, our framework combines processes that extract segments of the input with processes that compare across these extracted items. In this framework, the items extracted from the input serve as exemplars in long-term memory. The similarity structure of those exemplars in long-term memory leads to the discovery of cues and categorical structure, which guides subsequent extraction. The extraction and integration framework provides a way to explain sensitivity to both conditional statistical structure (such as transitional probabilities) and distributional statistical structure (such as item frequency and variability), and also a framework for thinking about how these different aspects of statistical learning influence each other. 2013 APA, all rights reserved
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form c x^k (1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Continuity equation for probability as a requirement of inference over paths
NASA Astrophysics Data System (ADS)
González, Diego; Díaz, Daniela; Davis, Sergio
2016-09-01
Local conservation of probability, expressed as the continuity equation, is a central feature of non-equilibrium Statistical Mechanics. In the existing literature, the continuity equation is always motivated by heuristic arguments with no derivation from first principles. In this work we show that the continuity equation is a logical consequence of the laws of probability and the application of the formalism of inference over paths for dynamical systems. That is, the simple postulate that a system moves continuously through time following paths implies the continuity equation. The translation from the language of dynamical paths to the usual representation in terms of probability densities of states is performed by means of an identity derived from Bayes' theorem. The formalism presented here is valid independently of the nature of the system studied: it is applicable to physical systems and also to more abstract dynamics such as financial indicators or population dynamics in ecology, among others.
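For reference, the continuity equation in question takes its standard form (a textbook statement, independent of the paper's derivation):

```latex
\frac{\partial \rho(\mathbf{x},t)}{\partial t}
  + \nabla \cdot \big( \rho(\mathbf{x},t)\, \mathbf{v}(\mathbf{x},t) \big) = 0,
```

where ρ is the probability density over states and v is the local drift (velocity) field.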
Locality, reflection, and wave-particle duality
NASA Astrophysics Data System (ADS)
Mugur-Schächter, Mioara
1987-08-01
Bell's theorem is believed to establish that the quantum mechanical predictions do not generally admit a causal representation compatible with Einstein's principle of separability, thereby proving incompatibility between quantum mechanics and relativity. This interpretation is contested via two convergent approaches which lead to a sharp distinction between quantum nonseparability and violation of Einstein's theory of relativity. In a first approach we explicate from the quantum mechanical formalism a concept of "reflected dependence." Founded on this concept, we produce a causal representation of the quantum mechanical probability measure involved in Bell's proof, which is clearly separable in Einstein's sense, i.e., it does not involve supraluminal velocities, and nevertheless is "nonlocal" in Bell's sense. So Bell locality and Einstein separability are distinct qualifications, and Bell nonlocality (or Bell nonseparability) and Einstein separability are not incompatible. It is then proved explicitly that with respect to the mentioned representation Bell's derivation does not hold. So Bell's derivation does not establish that any Einstein-separable representation is incompatible with quantum mechanics. This first, negative, conclusion is a syntactic fact. The characteristics of the representation and of the reasoning involved in the mentioned counterexample to the usual interpretation of Bell's theorem suggest that the representation used, notwithstanding its ability to bring forth the specified syntactic fact, is not factually true. Factual truth and syntactic properties also have to be radically distinguished in their turn. So, in a second approach, starting from de Broglie's initial relativistic model of a microsystem, a deeper, factually acceptable representation is constructed. The analyses leading to this second representation show that quantum mechanics does indeed involve basically a certain sort of nonseparability, called here de Broglie-Bohr quantum nonseparability. But the de Broglie-Bohr quantum nonseparability is shown to stem directly from the relativistic character of the considerations which led Louis de Broglie to the fundamental relation p = h/λ, thereby being essentially consistent with relativity. As to Einstein separability, it appears to be a still insufficiently specified concept of which a future, improved specification will probably be explicitly harmonizable with the de Broglie-Bohr quantum nonseparability. The ensemble of the conclusions obtained here brings forth a new concept of causality, a concept of folded, zigzag, reflexive causality, with respect to which the type of causality conceived of up to now appears as a particular case of outstretched, one-way causality. The reflexive causality is found compatible with the results of Aspect's experiment, and it suggests new experiments. Considered globally, the conclusions obtained in the present work might convert the conceptual situation created by Bell's proof into a process of unification of quantum mechanics and relativity.
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
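A hedged sketch of the idea above: apply ICA to the outputs of a small filter bank so that the product of marginals better approximates the joint density of the responses. The Gabor-like filter bank, the random "texture", and scikit-learn's FastICA are my substitutions; the paper uses a steerable filter space.

```python
# Illustrative only: ICA over multi-orientation filter responses.
import numpy as np
from scipy.signal import fftconvolve
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
texture = rng.standard_normal((128, 128))          # stand-in for a texture image

def gabor(theta, freq=0.2, size=15, sig=3.0):
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sig**2)) * np.cos(2 * np.pi * freq * xr)

filters = [gabor(t) for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
responses = np.stack(
    [fftconvolve(texture, f, mode="valid").ravel() for f in filters], axis=1
)

ica = FastICA(n_components=4, random_state=0)
independent_responses = ica.fit_transform(responses)   # maximally independent marginals
print(independent_responses.shape)
```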
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
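A minimal sketch of fitting a POD-style response curve (probability of detection versus concentration), assuming a logistic form in log concentration and synthetic detection counts; neither the functional form nor the data are taken from the cited validation model.

```python
# Hedged illustration: POD as a continuous function of concentration.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.04, 0.2, 1.0, 5.0, 25.0])      # spiking levels (arbitrary units)
n_trials = np.array([60, 60, 60, 60, 60])          # replicates per level
n_detected = np.array([3, 14, 38, 55, 60])         # positive results per level (synthetic)

def pod(logc, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * logc)))   # logistic in log concentration

p_hat = n_detected / n_trials
params, _ = curve_fit(pod, np.log(conc), p_hat, p0=(0.0, 1.0))
print("estimated POD at c = 1.0:", pod(np.log(1.0), *params))
```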
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density: here, word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
Losing track of time through delayed body representations.
Fritz, Thomas H; Steixner, Agnes; Boettger, Joachim; Villringer, Arno
2015-01-01
The ability to keep track of time is perceived as crucial in most human societies. However, to lose track of time may also serve an important social role, associated with recreational purpose. To this end a number of social technologies are employed, some of which may relate to a manipulation of time perception through a modulation of body representation. Here, we investigated an influence of real-time or delayed videos of own-body representations on time perception in an experimental setup with virtual mirrors. Seventy participants were asked to either stay in the installation until they thought that a defined time (90 s) had passed, or they were encouraged to stay in the installation as long as they wanted and after exiting were asked to estimate the duration of their stay. Results show that a modulation of body representation by time-delayed representations of the mirror-video displays influenced time perception. Furthermore, these time-delayed conditions were associated with a greater sense of arousal and intoxication. We suggest that feeding in references to the immediate past into working memory could be the underlying mental mechanism mediating the observed modulation of time perception. We argue that such an influence on time perception would probably not only be achieved visually, but might also work with acoustic references to the immediate past (e.g., with music).
NASA Astrophysics Data System (ADS)
Hengl, Tomislav
2015-04-01
Efficiency of spatial sampling largely determines success of model building. This is especially important for geostatistical mapping where an initial sampling plan should provide a good representation or coverage of both geographical (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which are produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how to quantify these 'representation' problems and how to incorporate this knowledge into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R) and which simultaneously determines (effective) inclusion probabilities as an average between the kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with the accessibility analysis (cost of field survey are usually function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs. The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas e.g. for which no previous spatial prediction model exists. The presentation includes data processing demos with standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.
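A hedged Python analogue of the idea behind 'spsample.prob' (the actual function lives in the R/GSIF ecosystem): average a geographic kernel-density score with a feature-space density score to obtain an effective inclusion probability per point. Gaussian KDEs stand in for spatstat and MaxEnt, and all data here are synthetic.

```python
# Illustrative only: combined geographic and feature-space inclusion probabilities.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(2, 50))                # sample locations (2 x n)
covariates = rng.standard_normal((3, 50))             # covariate values at those locations

geo_density = gaussian_kde(xy)(xy)                    # geographical spreading of points
feat_density = gaussian_kde(covariates)(covariates)   # feature-space spreading of points

def norm01(v):
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

# Average of the two normalized densities, mimicking the 'iprob' output
iprob = 0.5 * (norm01(geo_density) + norm01(feat_density))
print(iprob.round(2))
```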
Imaging the fetal central nervous system
De Keersmaecker, B.; Claus, F.; De Catte, L.
2011-01-01
The low prevalence of fetal central nervous system anomalies results in a restricted level of exposure and limited experience for most of the obstetricians involved in prenatal ultrasound. Sonographic guidelines for screening the fetal brain in a systematic way will probably increase the detection rate and enhance a correct referral to a tertiary care center, offering the patient a multidisciplinary approach of the condition. This paper aims to elaborate on prenatal sonographic and magnetic resonance imaging (MRI) diagnosis and outcome of various central nervous system malformations. Detailed neurosonographic investigation has become available through high resolution vaginal ultrasound probes and the development of a variety of 3D ultrasound modalities e.g. ultrasound tomographic imaging. In addition, fetal MRI is particularly helpful in the detection of gyration and neurulation anomalies and disorders of the gray and white matter. PMID:24753859
TOmographic Remote Observer of Ionospheric Disturbances
2007-11-15
…ionosphere. The proposed spacecraft was an evolutionary design from the USUSat, Combat Sentinel, and USUSat II programs, whose histories are shown in Figure 1. The primary science instrument, TOROID, for TOmographic Remote Observer of Ionospheric Disturbances, is a photometer for measuring the…
Optical tomograph optimized for tumor detection inside highly absorbent organs
NASA Astrophysics Data System (ADS)
Boutet, Jérôme; Koenig, Anne; Hervé, Lionel; Berger, Michel; Dinten, Jean-Marc; Josserand, Véronique; Coll, Jean-Luc
2011-05-01
This paper presents a tomograph for small animal fluorescence imaging. The compact and cost-effective system described in this article was designed to address the problem of tumor detection inside highly absorbent heterogeneous organs, such as lungs. To validate the tomograph's ability to detect cancerous nodules inside lungs, in vivo tumor growth was studied on seven cancerous mice bearing murine mammary tumors marked with Alexa Fluor 700. They were successively imaged 10, 12, and 14 days after the primary tumor implantation. The fluorescence maps were compared over this time period. As expected, the reconstructed fluorescence increases with the tumor growth stage.
Lamb wave tomographic imaging system for aircraft structural health assessment
NASA Astrophysics Data System (ADS)
Schwarz, Willi G.; Read, Michael E.; Kremer, Matthew J.; Hinders, Mark K.; Smith, Barry T.
1999-01-01
A tomographic imaging system using ultrasonic Lamb waves for the nondestructive inspection of aircraft components such as wings and fuselage is being developed. The computer-based system provides large-area inspection capability by electronically scanning an array of transducers that can be easily attached to flat and curved surfaces without moving parts. Images of the inspected area are produced in near real time employing a tomographic reconstruction method adapted from seismological applications. Changes in material properties caused by structural flaws such as disbonds, corrosion, and fatigue cracks can be effectively detected and characterized utilizing this fast NDE technique.
Sodankylä ionospheric tomography dataset 2003-2014
NASA Astrophysics Data System (ADS)
Norberg, J.; Roininen, L.; Kero, A.; Raita, T.; Ulich, T.; Markkanen, M.; Juusola, L.; Kauristie, K.
2015-12-01
Sodankylä Geophysical Observatory has been operating a tomographic receiver network and collecting the produced data since 2003. The collected dataset consists of phase difference curves measured from Russian COSMOS dual-frequency (150/400 MHz) low-Earth-orbit satellite signals, and of tomographic electron density reconstructions obtained from these measurements. In this study, vertical total electron content (VTEC) values are integrated from the reconstructed electron densities to provide a qualitative and quantitative analysis validating the long-term performance of the tomographic system. During the observation period, 2003-2014, there were three to five operational stations in the Fenno-Scandinavian sector. Altogether the analysis comprises around 66 000 overflights, but to ensure the quality of the reconstructions, the examination is limited to cases with descending (north to south) overflights and maximum elevation over 60°. These constraints limit the number of overflights to around 10 000. Based on this dataset, one solar cycle of ionospheric vertical total electron content estimates is constructed. The measurements are compared against the International Reference Ionosphere (IRI-2012) model, the F10.7 solar flux index and sunspot number data. Qualitatively the tomographic VTEC estimates correspond to the reference data very well, but the IRI-2012 model values are on average 40% higher than the tomographic results.
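A minimal sketch of the VTEC integration step described above: integrating an electron-density profile over altitude and expressing the result in TEC units. The Chapman-layer profile and grid are placeholders, not Sodankylä reconstructions.

```python
# Hedged illustration: vertical TEC from a reconstructed electron-density profile.
import numpy as np

alt_km = np.linspace(80.0, 1000.0, 400)                   # altitude grid (km)
z = (alt_km - 300.0) / 60.0                               # reduced height (peak 300 km, scale 60 km)
ne = 5e11 * np.exp(0.5 * (1.0 - z - np.exp(-z)))          # Chapman-layer electron density (m^-3)

alt_m = alt_km * 1e3
vtec = np.sum(0.5 * (ne[1:] + ne[:-1]) * np.diff(alt_m))  # trapezoidal integration (electrons/m^2)
print("VTEC = %.1f TECU" % (vtec / 1e16))                 # 1 TECU = 1e16 electrons/m^2
```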
Characteristics of a Two-Dimensional Hydrogenlike Atom
NASA Astrophysics Data System (ADS)
Skobelev, V. V.
2018-06-01
Using the customary and well-known representation of the radiation probability of a hydrogen-like atom in the three-dimensional case, a general expression for the probability of single-photon emission of a two-dimensional atom has been obtained, along with an expression for the particular case of the transition from the first excited state to the ground state, in the latter case in comparison with the corresponding expressions for the three-dimensional and one-dimensional atoms. Arguments are presented in support of the claim that this method of calculation gives a value of the probability identical to the value given by exact methods of QED extended to the subspace {0, 1, 2}. Relativistic corrections of order (Zα)^4 to the usual Schrödinger value of the energy (of order (Zα)^2) are also discussed.
Tomographic Image Compression Using Multidimensional Transforms.
ERIC Educational Resources Information Center
Villasenor, John D.
1994-01-01
Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)
Crustal and mantle velocity models of southern Tibet from finite frequency tomography
NASA Astrophysics Data System (ADS)
Liang, Xiaofeng; Shen, Yang; Chen, Yongshun John; Ren, Yong
2011-02-01
Using traveltimes of teleseismic body waves recorded by several temporary local seismic arrays, we carried out finite-frequency tomographic inversions to image the three-dimensional velocity structure beneath southern Tibet to examine the roles of the upper mantle in the formation of the Tibetan Plateau. The results reveal a region of relatively high P and S wave velocity anomalies extending from the uppermost mantle to at least 200 km depth beneath the Higher Himalaya. We interpret this high-velocity anomaly as the underthrusting Indian mantle lithosphere. There is a strong low P and S wave velocity anomaly that extends from the lower crust to at least 200 km depth beneath the Yadong-Gulu rift, suggesting that rifting in southern Tibet is probably a process that involves the entire lithosphere. Intermediate-depth earthquakes in southern Tibet are located at the top of an anomalous feature in the mantle with a low Vp, a high Vs, and a low Vp/Vs ratio. One possible explanation for this unusual velocity anomaly is the ongoing granulite-eclogite transformation. Together with the compressional stress from the collision, eclogitization and the associated negative buoyancy force offer a plausible mechanism that causes the subduction of the Indian mantle lithosphere beneath the Higher Himalaya. Our tomographic model and the observation of north-dipping lineations in the upper mantle suggest that the Indian mantle lithosphere has been broken laterally in the direction perpendicular to the convergence beneath the north-south trending rifts and subducted in a progressive, piecewise and subparallel fashion with the current one beneath the Higher Himalaya.
ERIC Educational Resources Information Center
Clinton, Virginia; Morsanyi, Kinga; Alibali, Martha W.; Nathan, Mitchell J.
2016-01-01
Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly…
ERIC Educational Resources Information Center
Myers, Beth Ann
2016-01-01
To create a more competitive and creative engineering workforce, breakthroughs in how we attract and educate more diverse engineers are mandated. Despite a programmatic focus on increasing the representation of women and minorities in engineering during the last few decades, no single solution has been identified, and one is probably not realistic. But…
30 CFR 44.27 - Consent findings and rules or orders.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., representations of the parties, and probability of an agreement which will result in a just disposition of the... STANDARDS Hearings § 44.27 Consent findings and rules or orders. (a) General. At any time after a request for hearing is filed in accordance with § 44.14, a reasonable opportunity may be afforded to permit...
ERIC Educational Resources Information Center
Urban-Woldron, Hildegard
2012-01-01
The recent implementation of technology in the classroom is probably one of the most challenging innovations that many teachers are having to confront today. Teachers have to develop a knowledge base that goes beyond technology proficiency, into learning about how technology, for example, can be used for various forms of representations of subject…
Developmental dyscalculia is related to visuo-spatial memory and inhibition impairment.
Szucs, Denes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence
2013-01-01
Developmental dyscalculia is thought to be a specific impairment of mathematics ability. Currently dominant cognitive neuroscience theories of developmental dyscalculia suggest that it originates from the impairment of the magnitude representation of the human brain, residing in the intraparietal sulcus, or from impaired connections between number symbols and the magnitude representation. However, behavioral research offers several alternative theories for developmental dyscalculia and neuro-imaging also suggests that impairments in developmental dyscalculia may be linked to disruptions of other functions of the intraparietal sulcus than the magnitude representation. Strikingly, the magnitude representation theory has never been explicitly contrasted with a range of alternatives in a systematic fashion. Here we have filled this gap by directly contrasting five alternative theories (magnitude representation, working memory, inhibition, attention and spatial processing) of developmental dyscalculia in 9-10-year-old primary school children. Participants were selected from a pool of 1004 children and took part in 16 tests and nine experiments. The dominant features of developmental dyscalculia are visuo-spatial working memory, visuo-spatial short-term memory and inhibitory function (interference suppression) impairment. We hypothesize that inhibition impairment is related to the disruption of central executive memory function. Potential problems of visuo-spatial processing and attentional function in developmental dyscalculia probably depend on short-term memory/working memory and inhibition impairments. The magnitude representation theory of developmental dyscalculia was not supported. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.
Normal versus High Tension Glaucoma: A Comparison of Functional and Structural Defects
Thonginnetra, Oraorn; Greenstein, Vivienne C.; Chu, David; Liebmann, Jeffrey M.; Ritch, Robert; Hood, Donald C.
2009-01-01
Purpose To compare visual field defects obtained with both multifocal visual evoked potential (mfVEP) and Humphrey visual field (HVF) techniques to topographic optic disc measurements in patients with normal tension glaucoma (NTG) and high tension glaucoma (HTG). Methods We studied 32 patients with NTG and 32 with HTG. All patients had reliable 24-2 HVFs with a mean deviation (MD) of −10 dB or better, a glaucomatous optic disc and an abnormal HVF in at least one eye. Multifocal VEPs were obtained from each eye and probability plots created. The mfVEP and HVF probability plots were divided into a central 10-degree (radius) and an outer arcuate subfield in both superior and inferior hemifields. Cluster analyses and counts of abnormal points were performed in each subfield. Optic disc images were obtained with the Heidelberg Retina Tomograph III (HRT III). Eleven stereometric parameters were calculated. Moorfields regression analysis (MRA) and the glaucoma probability score (GPS) were performed. Results There were no significant differences in MD and PSD values between NTG and HTG eyes. However, NTG eyes had a higher percentage of abnormal test points and clusters of abnormal points in the central subfields on both mfVEP and HVF than HTG eyes. For HRT III, there were no significant differences in the 11 stereometric parameters or in the MRA and GPS analyses of the optic disc images. Conclusions The visual field data suggest more localized and central defects for NTG than HTG. PMID:19223786
Levesque, Barrett G; Cipriano, Lauren E; Chang, Steven L; Lee, Keane K; Owens, Douglas K; Garber, Alan M
2010-03-01
The cost effectiveness of alternative approaches to the diagnosis of small-bowel Crohn's disease is unknown. This study evaluates whether computed tomographic enterography (CTE) is a cost-effective alternative to small-bowel follow-through (SBFT) and whether capsule endoscopy is a cost-effective third test in patients in whom a high suspicion of disease remains after 2 previous negative tests. A decision-analytic model was developed to compare the lifetime costs and benefits of each diagnostic strategy. Patients were considered with low (20%) and high (75%) pretest probability of small-bowel Crohn's disease. Effectiveness was measured in quality-adjusted life-years (QALYs) gained. Parameter assumptions were tested with sensitivity analyses. With a moderate to high pretest probability of small-bowel Crohn's disease, and a higher likelihood of isolated jejunal disease, follow-up evaluation with CTE has an incremental cost-effectiveness ratio of less than $54,000/QALY-gained compared with SBFT. The addition of capsule endoscopy after ileocolonoscopy and negative CTE or SBFT costs greater than $500,000 per QALY-gained in all scenarios. Results were not sensitive to costs of tests or complications but were sensitive to test accuracies. The cost effectiveness of strategies depends critically on the pretest probability of Crohn's disease and if the terminal ileum is examined at ileocolonoscopy. CTE is a cost-effective alternative to SBFT in patients with moderate to high suspicion of small-bowel Crohn's disease. The addition of capsule endoscopy as a third test is not a cost-effective third test, even in patients with high pretest probability of disease. Copyright 2010 AGA Institute. Published by Elsevier Inc. All rights reserved.
The Mathematics of Four or More N-Localizers for Stereotactic Neurosurgery.
Brown, Russell A
2015-10-13
The mathematics that were originally developed for the N-localizer apply to three N-localizers that produce three sets of fiducials in a tomographic image. Some applications of the N-localizer use four N-localizers that produce four sets of fiducials; however, the mathematics that apply to three sets of fiducials do not apply to four sets of fiducials. This article presents mathematics that apply to four or more sets of fiducials that all lie within one planar tomographic image. In addition, these mathematics are extended to apply to four or more fiducials that do not all lie within one planar tomographic image, as may be the case with magnetic resonance (MR) imaging where a volume is imaged instead of a series of planar tomographic images. Whether applied to a planar image or a volume image, the mathematics of four or more N-localizers provide a statistical measure of the quality of the image data that may be influenced by factors, such as the nonlinear distortion of MR images.
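A hedged illustration of the statistical idea referenced above (not Brown's exact equations): with four or more corresponding fiducial points, the image-to-frame mapping becomes an overdetermined linear problem, and solving it by least squares yields both the transformation and a residual that can serve as a quality measure for the image data. The affine model, point values, and noise level are all assumptions for demonstration.

```python
# Illustrative only: least-squares image-to-frame mapping from >= 4 fiducials.
import numpy as np

rng = np.random.default_rng(0)
frame_pts = rng.uniform(-50, 50, size=(5, 3))          # >= 4 fiducials in frame coordinates (mm)

# Simulated imaging: an unknown affine map plus a little noise/distortion
A_true = np.array([[0.9, 0.1, 0.0], [-0.1, 0.9, 0.05], [0.02, 0.0, 1.1]])
t_true = np.array([5.0, -3.0, 12.0])
image_pts = frame_pts @ A_true.T + t_true + rng.normal(0, 0.2, size=frame_pts.shape)

# Solve image -> frame: [image | 1] @ M = frame, least squares over all fiducials
X = np.hstack([image_pts, np.ones((len(image_pts), 1))])
M, res, *_ = np.linalg.lstsq(X, frame_pts, rcond=None)

rms = np.sqrt(np.mean((X @ M - frame_pts) ** 2))       # residual as a quality measure
print("RMS fiducial residual (mm):", round(float(rms), 3))
```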
TomoBank: a tomographic data repository for computational x-ray science
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; ...
2018-02-08
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology made sub-second and multi-energy tomographic data collection possible [1], but also increased the demand to develop new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Data Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists, and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
High-resolution multimodal clinical multiphoton tomography of skin
NASA Astrophysics Data System (ADS)
König, Karsten
2011-03-01
This review focuses on multimodal multiphoton tomography based on near-infrared femtosecond lasers. Clinical multiphoton tomographs for 3D high-resolution in vivo imaging were placed on the market several years ago. The second generation of this Prism-Award-winning high-tech skin imaging tool (MPTflex) was introduced in 2010. The same year, the world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph. In particular, non-fluorescent lipids and water, as well as mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, and melanin, and SHG-active collagen, have been imaged with submicron resolution in patients suffering from psoriasis. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution wide-field systems such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and several European countries for early diagnosis of skin cancer, optimization of treatment strategies, and cosmetic research including long-term testing of sunscreen nanoparticles as well as anti-aging products.
Nuclear medicine in clinical neurology: an update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oldendorf, W.H.
1981-01-01
Isotope scanning using technetium-99m pertechnetate has fallen into disuse since the advent of x-ray computerized tomography. Regional brain blood flow studies have been pursued on a research basis. Increased regional blood flow during focal seizure activity has been demonstrated and is of use in localizing such foci. Cisternography as a predictive tool in normal pressure hydrocephalus is falling into disuse. Positron tomographic scanning is a potent research tool that can demonstrate both regional glycolysis and blood flow. Unfortunately, it is extremely expensive and complex to apply in a clinical setting. With support from the National Institutes of Health, seven extramural centers have been funded to develop positron tomographic capabilities, and they will greatly advance our knowledge of stroke pathophysiology, seizure disorders, brain tumors, and various degenerative diseases. Nuclear magnetic resonance imaging is a potentially valuable tool since it creates tomographic images representing the distribution of brain water. No tissue ionization is produced, and images comparable to second-generation computerized tomographic scans are already being produced in humans.
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations, which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
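A hedged toy version of the idea above: precompute the MAP inverse operator off-line, sparsify it, and apply it on-line with a single sparse matrix-vector product. Matrix source coding and the SMT are replaced here by naive thresholding, and the forward model is random, so this only illustrates the concept.

```python
# Illustrative only: noniterative MAP via a precomputed, sparsified inverse.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n_meas, n_vox = 300, 400
A = rng.standard_normal((n_meas, n_vox))            # toy forward model
lam = 10.0                                          # regularization weight
R = np.eye(n_vox)                                   # quadratic prior (identity for simplicity)

# Off-line: dense MAP inverse  H = (A^T A + lam R)^-1 A^T
H = np.linalg.solve(A.T @ A + lam * R, A.T)

# Crude "coding": zero small entries and store the operator sparsely
thresh = 0.05 * np.abs(H).max()
H_sparse = sparse.csr_matrix(np.where(np.abs(H) >= thresh, H, 0.0))

# On-line: one sparse MVM per reconstruction
y = A @ rng.standard_normal(n_vox) + 0.01 * rng.standard_normal(n_meas)
x_map = H_sparse @ y
print("stored fraction of entries:", H_sparse.nnz / H.size)
```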
Moosavi Tayebi, Rohollah; Wirza, Rahmita; Sulaiman, Puteri S B; Dimon, Mohd Zamrin; Khalid, Fatimah; Al-Surmi, Aqeel; Mazaheri, Samaneh
2015-04-22
Computerized tomographic angiography (3D data representing the coronary arteries) and X-ray angiography (2D X-ray image sequences providing information about coronary arteries and their stenosis) are standard and popular assessment tools utilized for medical diagnosis of coronary artery diseases. At present, the results of both modalities are individually analyzed by specialists and it is difficult for them to mentally connect the details of these two techniques. The aim of this work is to assist medical diagnosis by providing specialists with the relationship between computerized tomographic angiography and X-ray angiography. In this study, coronary arteries from two modalities are registered in order to create a 3D reconstruction of the stenosis position. The proposed method starts with coronary artery segmentation and labeling for both modalities. Then, stenosis and relevant labeled artery in X-ray angiography image are marked by a specialist. Proper control points for the marked artery in both modalities are automatically detected and normalized. Then, a geometrical transformation function is computed using these control points. Finally, this function is utilized to register the marked artery from the X-ray angiography image on the computerized tomographic angiography and get the 3D position of the stenosis lesion. The result is a 3D informative model consisting of stenosis and coronary arteries' information from the X-ray angiography and computerized tomographic angiography modalities. The results of the proposed method for coronary artery segmentation, labeling and 3D reconstruction are evaluated and validated on the dataset containing both modalities. The advantage of this method is to aid specialists to determine a visual relationship between the correspondent coronary arteries from two modalities and also set up a connection between stenosis points from an X-ray angiography along with their 3D positions on the coronary arteries from computerized tomographic angiography. Moreover, another benefit of this work is that the medical acquisition standards remain unchanged, which means that no calibration in the acquisition devices is required. It can be applied on most computerized tomographic angiography and angiography devices.
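A minimal sketch of the registration step described above: estimate a geometric transformation from matched control points on the labeled artery and use it to map a stenosis marked in the 2D X-ray image into the CTA frame. Real systems model the projection geometry carefully; here a simple affine fit with made-up control points stands in for the paper's transformation function.

```python
# Illustrative only: least-squares transformation from matched control points.
import numpy as np

# Matched control points: (u, v) in the X-ray image and (x, y, z) on the CTA centerline
xray_pts = np.array([[10., 15.], [40., 18.], [70., 30.], [95., 55.], [120., 80.]])
cta_pts = np.array([[ 5., 12.,  3.], [22., 14.,  8.], [40., 22., 15.],
                    [55., 38., 24.], [70., 58., 33.]])

# Fit T (3x3) mapping [u, v, 1] -> [x, y, z] in the least-squares sense
U = np.hstack([xray_pts, np.ones((len(xray_pts), 1))])
T, *_ = np.linalg.lstsq(U, cta_pts, rcond=None)

stenosis_2d = np.array([[62., 27.]])                    # stenosis marked in the X-ray image
stenosis_3d = np.hstack([stenosis_2d, [[1.0]]]) @ T     # estimated 3D position in CTA coordinates
print(stenosis_3d)
```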
Tomographic findings of acute pulmonary toxoplasmosis in immunocompetent patients.
de Souza Giassi, Karina; Costa, Andre Nathan; Apanavicius, Andre; Teixeira, Fernando Bin; Fernandes, Caio Julio Cesar; Helito, Alfredo Salim; Kairalla, Ronaldo Adib
2014-11-25
Toxoplasmosis is one of the most common human zoonoses, and is generally benign in most individuals. Pulmonary involvement is common in immunocompromised subjects but very rare in immunocompetent individuals, and there are scarce reports of tomographic findings in the literature. The aim of the study is to describe three immunocompetent patients diagnosed with acute pulmonary toxoplasmosis and their respective thoracic tomographic findings. Acute toxoplasmosis was diagnosed according to the results of serological tests suggestive of recent primary infection and the absence of an alternative etiology. From 2009 to 2013, three patients were diagnosed with acute respiratory failure secondary to acute toxoplasmosis. The patients were two women and one man, aged 38, 56 and 36 years. Similarly, they presented with a two-week febrile illness and progressive dyspnea before admission. Laboratory tests demonstrated lymphocytosis, slight changes in liver enzymes and high inflammatory markers. Tomographic findings were bilateral smooth septal and peribronchovascular thickening (100%), ground-glass opacities (100%), atelectasis (33%), random nodules (33%), lymph node enlargement (33%) and pleural effusion (66%). All the patients improved after treatment, and complete resolution of the tomographic findings was found at follow-up. These cases provide a unique description of the presentation and evolution of pulmonary tomographic manifestations of toxoplasmosis in immunocompetent patients. Toxoplasma pneumonia manifests with fever, dyspnea and a non-productive cough that may result in respiratory failure. In animal models, the changes were described as interstitial pneumonitis with focal infiltrates of neutrophils that can finally evolve into a pattern of diffuse alveolar damage with focal necrosis. The tomographic findings are characterized by ground-glass opacities and smooth septal and marked peribronchovascular thickening, and may mimic pulmonary congestion, lymphangitis, atypical pneumonia and pneumocystosis. This is the largest series of CT findings of acute toxoplasmosis in immunocompetent hosts, and the diagnosis should be considered in patients who present with acute respiratory failure in the context of a subacute febrile illness with bilateral and diffuse interstitial infiltrates and marked peribronchovascular thickening. If promptly treated, pulmonary toxoplasmosis can result in complete clinical and radiological recovery in immunocompetent hosts.
Mason, Robert A; Just, Marcel Adam
2015-05-01
Incremental instruction on the workings of a set of mechanical systems induced a progression of changes in the neural representations of the systems. The neural representations of four mechanical systems were assessed before, during, and after three phases of incremental instruction (which first provided information about the system components, then provided partial causal information, and finally provided full functional information). In 14 participants, the neural representations of four systems (a bathroom scale, a fire extinguisher, an automobile braking system, and a trumpet) were assessed using three recently developed techniques: (1) machine learning and classification of multi-voxel patterns; (2) localization of consistently responding voxels; and (3) representational similarity analysis (RSA). The neural representations of the systems progressed through four stages, or states, involving spatially and temporally distinct multi-voxel patterns: (1) initially, the representation was primarily visual (occipital cortex); (2) it subsequently included a large parietal component; (3) it eventually became cortically diverse (frontal, parietal, temporal, and medial frontal regions); and (4) at the end, it demonstrated a strong frontal cortex weighting (frontal and motor regions). At each stage of knowledge, it was possible for a classifier to identify which one of four mechanical systems a participant was thinking about, based on their brain activation patterns. The progression of representational states was suggestive of progressive stages of learning: (1) encoding information from the display; (2) mental animation, possibly involving imagining the components moving; (3) generating causal hypotheses associated with mental animation; and finally (4) determining how a person (probably oneself) would interact with the system. This interpretation yields an initial, cortically-grounded, theory of learning of physical systems that potentially can be related to cognitive learning theories by suggesting links between cortical representations, stages of learning, and the understanding of simple systems. Copyright © 2015 Elsevier Inc. All rights reserved.
Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2011-12-01
Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock probabilities? The FMT representation allows us to generalize the models typically used for this purpose (e.g., marked point process models, such as ETAS), which will again be necessary in operational earthquake forecasting. To quantify aftershock probabilities, we compare mainshock FMTs with the first and second spatial moments of weighted aftershock hypocenters. We will describe applications of these results to the Uniform California Earthquake Rupture Forecast, version 3, which is now under development by the Working Group on California Earthquake Probabilities.
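For readers unfamiliar with the second-moment formalism invoked above, the standard quantities (in the spirit of McGuire et al., 2002; notation schematic and hedged, not the abstract's exact definitions) are:

```latex
% With f(\mathbf{r},t) the normalized scalar space-time source function:
\hat{\mathbf{r}} = \int f\,\mathbf{r}\,\mathrm{d}V\,\mathrm{d}t, \qquad
\hat{t} = \int f\,t\,\mathrm{d}V\,\mathrm{d}t, \qquad
\hat{\mu}^{(2,0)} = \int f\,(\mathbf{r}-\hat{\mathbf{r}})(\mathbf{r}-\hat{\mathbf{r}})^{T}\,\mathrm{d}V\,\mathrm{d}t, \qquad
\hat{\mu}^{(0,2)} = \int f\,(t-\hat{t})^{2}\,\mathrm{d}V\,\mathrm{d}t .
% Characteristic rupture length, duration, and the directivity ratio:
L_c = 2\sqrt{\lambda_{\max}\!\big(\hat{\mu}^{(2,0)}\big)}, \qquad
\tau_c = 2\sqrt{\hat{\mu}^{(0,2)}}, \qquad
\mathrm{dir} = \frac{|\hat{\mathbf{v}}_0|\,\tau_c}{L_c},
\quad \hat{\mathbf{v}}_0 = \hat{\mu}^{(1,1)} / \hat{\mu}^{(0,2)} .
```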
Derenzo, Stephen E.; Budinger, Thomas F.
1984-01-01
In brief, the invention is a tomograph modified to be in a clamshell configuration so that the ring or rings may be moved to multiple sampling positions. The tomograph includes an array of detectors arranged in successive adjacent relative locations along a closed curve in a first position in a selected plane, and means for securing the detectors in the relative locations in a first sampling position. The securing means is movable in the plane in two sections and pivotable at one point. The U.S. Government has rights in this invention pursuant to Contract No. W-7405-ENG-48 between the U.S. Department of Energy and the University of California.
Nyquist, Jonathan E.; Toran, Laura; Fang, Allison C.; Ryan, Robert J.; Rosenberry, Donald O.
2010-01-01
Characterization of the hyporheic zone is of critical importance for understanding stream ecology, contaminant transport, and groundwater‐surface water interaction. A salt water tracer test was used to probe the hyporheic zone of a recently re‐engineered portion of Crabby Creek, a stream located near Philadelphia, PA. The tracer solution was tracked through a 13.5 meter segment of the stream using both a network of 25 wells sampled every 5–15 minutes and time‐lapse electrical resistivity tomographs collected every 11 minutes for six hours, with additional tomographs collected every 100 minutes for an additional 16 hours. The comparison of tracer monitoring methods is of keen interest because tracer tests are one of the few techniques available for characterizing this dynamic zone, and logistically it is far easier to collect resistivity tomographs than to install and monitor a dense network of wells. Our results show that resistivity monitoring captured the essential shape of the breakthrough curve and may indicate portions of the stream where the tracer lingered in the hyporheic zone. Time‐lapse resistivity measurements, however, represent time averages over the period required to collect a tomographic data set, and spatial averages over a volume larger than captured by a well sample. Smoothing by the resistivity data inversion algorithm further blurs the resulting tomograph; consequently resistivity monitoring underestimates the degree of fine‐scale heterogeneity in the hyporheic zone.
Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics
NASA Astrophysics Data System (ADS)
Yu, Tao; Cai, Weiwei; Liu, Yingzheng
2018-04-01
Optical tomography has recently attracted a surge of research effort due to progress in both imaging concepts and sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide an unprecedented opportunity for diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from previous reconstructions and build efficient neural networks to serve as a surrogate model to rapidly predict the reconstructions. The extreme learning machine is used here as an example for demonstration purposes, simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed in the context of tomographic absorption spectroscopy, we expect it to also be useful for other high-speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence, which have been demonstrated for combustion diagnostics.
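A minimal sketch of an extreme learning machine surrogate of the kind described above: hidden-layer weights are drawn at random and only the output weights are fit by least squares. The array sizes and the use of raw projection data as inputs are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def train_elm(X, Y, n_hidden=200, seed=0):
    """Extreme learning machine: random hidden layer, least-squares output weights.

    X : (n_samples, n_proj)  projection (sinogram) data
    Y : (n_samples, n_pix)   corresponding reconstructions used as training targets
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical sizes: 1000 training pairs, 64 projection values, 32x32 field
rng = np.random.default_rng(2)
Xtr, Ytr = rng.normal(size=(1000, 64)), rng.normal(size=(1000, 1024))
W, b, beta = train_elm(Xtr, Ytr)
recon = elm_predict(rng.normal(size=(1, 64)), W, b, beta)
```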
Field-portable lensfree tomographic microscope†
Isikman, Serhan O.; Bishara, Waheb; Sikora, Uzair; Yaglidere, Oguzhan; Yeah, John; Ozcan, Aydogan
2011-01-01
We present a field-portable lensfree tomographic microscope, which can achieve sectional imaging of a large volume (~20 mm3) on a chip with an axial resolution of <7 μm. In this compact tomographic imaging platform (weighing only ~110 grams), 24 light-emitting diodes (LEDs) that are each butt-coupled to a fibre-optic waveguide are controlled through a cost-effective micro-processor to sequentially illuminate the sample from different angles to record lensfree holograms of the sample that is placed on the top of a digital sensor array. In order to generate pixel super-resolved (SR) lensfree holograms and hence digitally improve the achievable lateral resolution, multiple sub-pixel shifted holograms are recorded at each illumination angle by electromagnetically actuating the fibre-optic waveguides using compact coils and magnets. These SR projection holograms obtained over an angular range of ~50° are rapidly reconstructed to yield projection images of the sample, which can then be back-projected to compute tomograms of the objects on the sensor-chip. The performance of this compact and light-weight lensfree tomographic microscope is validated by imaging micro-beads of different dimensions as well as a Hymenolepis nana egg, which is an infectious parasitic flatworm. Achieving a decent three-dimensional spatial resolution, this field-portable on-chip optical tomographic microscope might provide a useful toolset for telemedicine and high-throughput imaging applications in resource-poor settings. PMID:21573311
Compressive sensing reconstruction of 3D wet refractivity based on GNSS and InSAR observations
NASA Astrophysics Data System (ADS)
Heublein, Marion; Alshawaf, Fadwa; Erdnüß, Bastian; Zhu, Xiao Xiang; Hinz, Stefan
2018-06-01
In this work, the reconstruction quality of an approach for neutrospheric water vapor tomography based on Slant Wet Delays (SWDs) obtained from Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR) is investigated. The novelties of this approach are (1) the use of both absolute GNSS and absolute InSAR SWDs for tomography and (2) the solution of the tomographic system by means of compressive sensing (CS). The tomographic reconstruction is performed based on (i) a synthetic SWD dataset generated using wet refractivity information from the Weather Research and Forecasting (WRF) model and (ii) a real dataset using GNSS and InSAR SWDs. Thus, the validation of the achieved results focuses (i) on a comparison of the refractivity estimates with the input WRF refractivities and (ii) on radiosonde profiles. In the case of the synthetic dataset, the results show that the CS approach yields a more accurate and more precise solution than least squares (LSQ). In addition, the benefit of adding synthetic InSAR SWDs into the tomographic system is analyzed. When applying CS, adding synthetic InSAR SWDs into the tomographic system improves the solution both in magnitude and in scatter. When solving the tomographic system by means of LSQ, no clear behavior is observed. In the case of the real dataset, the estimated refractivities of both methodologies show a consistent behavior although the LSQ and CS solution strategies differ.
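The abstract does not specify the CS solver; the sketch below uses iterative soft-thresholding (ISTA) on an L1-regularized least-squares problem as a generic stand-in for a compressive-sensing reconstruction, with a hypothetical ray-geometry matrix and sparsity assumed directly in the unknown coefficients.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1,
    a simple stand-in for a compressive-sensing tomographic solver."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Hypothetical system: 200 SWD observations, 500 unknown coefficients, 20 active
rng = np.random.default_rng(3)
A = rng.normal(size=(200, 500))            # ray-geometry / dictionary matrix
x_true = np.zeros(500)
x_true[rng.choice(500, 20, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=200)
x_hat = ista(A, y, lam=0.5)
```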
NASA Astrophysics Data System (ADS)
Hart, V. P.; Taylor, M. J.; Doyle, T. E.; Zhao, Y.; Pautet, P.-D.; Carruth, B. L.; Rusch, D. W.; Russell, J. M.
2018-01-01
This research presents the first application of tomographic techniques for investigating gravity wave structures in polar mesospheric clouds (PMCs) imaged by the Cloud Imaging and Particle Size instrument on the NASA AIM satellite. Albedo data comprising consecutive PMC scenes were used to tomographically reconstruct a 3-D layer using the Partially Constrained Algebraic Reconstruction Technique algorithm and a previously developed "fanning" technique. For this pilot study, a large region (760 × 148 km) of the PMC layer (altitude 83 km) was sampled with a 2 km horizontal resolution, and an intensity weighted centroid technique was developed to create novel 2-D surface maps, characterizing the individual gravity waves as well as their altitude variability. Spectral analysis of seven selected wave events observed during the Northern Hemisphere 2007 PMC season exhibited dominant horizontal wavelengths of 60-90 km, consistent with previous studies. These tomographic analyses have enabled a broad range of new investigations. For example, a clear spatial anticorrelation was observed between the PMC albedo and wave-induced altitude changes, with higher-albedo structures aligning well with wave troughs, while low-intensity regions aligned with wave crests. This result appears to be consistent with current theories of PMC development in the mesopause region. This new tomographic imaging technique also provides valuable wave amplitude information enabling further mesospheric gravity wave investigations, including quantitative analysis of their hemispheric and interannual characteristics and variations.
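A minimal sketch of an intensity-weighted centroid computation that collapses a reconstructed 3-D layer into a 2-D altitude surface map, in the spirit of the technique described above; the array shapes and altitude range are illustrative, not those of the CIPS data.

```python
import numpy as np

def centroid_surface(volume, altitudes):
    """Intensity-weighted centroid altitude for each horizontal pixel column.

    volume    : (nz, ny, nx) reconstructed albedo/brightness layer
    altitudes : (nz,) altitude of each vertical sample, e.g. in km
    """
    weights = volume.sum(axis=0)                        # total brightness per column
    z = np.tensordot(altitudes, volume, axes=(0, 0))    # sum over z of altitude * intensity
    with np.errstate(invalid="ignore", divide="ignore"):
        surface = np.where(weights > 0, z / weights, np.nan)
    return surface                                       # (ny, nx) 2-D altitude map

# Hypothetical reconstruction: 21 altitude samples spanning 80-86 km
vol = np.random.default_rng(4).random((21, 74, 380))
alt = np.linspace(80.0, 86.0, 21)
surf = centroid_surface(vol, alt)
```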
GNSS-ISR data fusion: General framework with application to the high-latitude ionosphere
NASA Astrophysics Data System (ADS)
Semeter, Joshua; Hirsch, Michael; Lind, Frank; Coster, Anthea; Erickson, Philip; Pankratius, Victor
2016-03-01
A mathematical framework is presented for the fusion of electron density measured by incoherent scatter radar (ISR) and total electron content (TEC) measured using global navigation satellite systems (GNSS). Both measurements are treated as projections of an unknown density field (for GNSS-TEC the projection is tomographic; for ISR the projection is a weighted average over a local spatial region) and discrete inverse theory is applied to obtain a higher fidelity representation of the field than could be obtained from either modality individually. The specific implementation explored herein uses the interpolated ISR density field as initial guess to the combined inverse problem, which is subsequently solved using maximum entropy regularization. Simulations involving a dense meridional network of GNSS receivers near the Poker Flat ISR demonstrate the potential of this approach to resolve sub-beam structure in ISR measurements. Several future directions are outlined, including (1) data fusion using lower level (lag product) ISR data, (2) consideration of the different temporal sampling rates, (3) application of physics-based regularization, (4) consideration of nonoptimal observing geometries, and (5) use of an ISR simulation framework for optimal experiment design.
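As a rough illustration of this kind of data fusion, the sketch below stacks tomographic TEC projections and local ISR beam averages into one linear system and damps the solution toward the interpolated ISR field. Tikhonov damping is used here as a simple stand-in for the maximum-entropy regularization described above, and all matrix names and sizes are hypothetical.

```python
import numpy as np

def fuse(A_tec, y_tec, A_isr, y_isr, x0, alpha=1.0):
    """Joint linear inversion of GNSS-TEC line integrals and ISR beam averages,
    damped toward the interpolated ISR density field x0 (Tikhonov stand-in)."""
    A = np.vstack([A_tec, A_isr])
    y = np.concatenate([y_tec, y_isr])
    n = A.shape[1]
    lhs = A.T @ A + alpha * np.eye(n)
    rhs = A.T @ y + alpha * x0
    return np.linalg.solve(lhs, rhs)

# Hypothetical: 300 TEC rays + 30 ISR beam averages over a 40 x 25 density grid
rng = np.random.default_rng(5)
n = 40 * 25
A_tec, A_isr = rng.random((300, n)), rng.random((30, n))
x_true = rng.random(n)
x0 = x_true + 0.1 * rng.normal(size=n)     # interpolated ISR field as initial guess
x_hat = fuse(A_tec, A_tec @ x_true, A_isr, A_isr @ x_true, x0)
```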
NASA Astrophysics Data System (ADS)
Cirilo-Lombardo, Diego Julio
2009-04-01
The physical meaning of the particularly simple non-degenerate supermetric, introduced in the previous part by the authors, is elucidated, and the possible connection with processes of topological origin in high energy physics is analyzed and discussed. A new possible mechanism for the localization of the fields in a particular sector of the supermanifold is proposed, and the similarities and differences with a 5-dimensional warped model are shown. The relation with gauge theories of supergravity based on the OSP(1/4) group is explicitly given and the possible original action is presented. We also show that in this non-degenerate super-model the physical states, in contrast with the basic states, are observables and can be interpreted as tomographic projections or generalized representations of operators belonging to the metaplectic group Mp(2). The advantage of geometrical formulations based on non-degenerate supermanifolds over degenerate ones is pointed out, and the description and analysis of some interesting aspects of the simplest Riemannian superspaces are presented from the point of view of the possible vacuum solutions.
Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography
Sidky, Emil Y.; Kraemer, David N.; Roth, Erin G.; Ullberg, Christer; Reiser, Ingrid S.; Pan, Xiaochuan
2014-01-01
One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden on IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data. PMID:25685824
Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution
NASA Astrophysics Data System (ADS)
Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike
2011-04-01
Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: firstly, the reduction of the dose distribution to a histogram results in the loss of spatial information, and secondly, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We used a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assessed its predictive power using data from the MRC RT01 trial (ISCTRN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extracted simple rules identifying 3D dose patterns related to a particularly low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs, and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.
A Riemannian framework for orientation distribution function computing.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2009-01-01
Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory and it has been successfully applied for tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy of order 1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on the ODF field is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e. Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2005-01-01
A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.
Geodesic Monte Carlo on Embedded Manifolds
Byrne, Simon; Girolami, Mark
2013-01-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
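As an illustration of the geodesic flows this method relies on, the sketch below advances a point on the unit hypersphere along a great-circle geodesic. It is one ingredient of a geodesic Monte Carlo proposal, not the full sampler, and the dimensions and step size are arbitrary.

```python
import numpy as np

def sphere_geodesic(x, v, t):
    """Advance a point x on the unit hypersphere along the geodesic with
    initial tangent velocity v (v . x = 0) for time t."""
    a = np.linalg.norm(v)
    if a == 0:
        return x, v
    x_new = x * np.cos(a * t) + (v / a) * np.sin(a * t)
    v_new = -x * a * np.sin(a * t) + v * np.cos(a * t)
    return x_new, v_new

# One geodesic step on S^2 (a building block of a geodesic MCMC proposal)
rng = np.random.default_rng(6)
x = np.array([0.0, 0.0, 1.0])
v = rng.normal(size=3)
v -= (v @ x) * x                               # project velocity onto the tangent space
x1, v1 = sphere_geodesic(x, v, 0.3)
print(np.linalg.norm(x1))                      # stays on the sphere (close to 1)
```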
NASA Astrophysics Data System (ADS)
Perversi, Eleonora; Regazzini, Eugenio
2015-05-01
For a general inelastic Kac-like equation recently proposed, this paper studies the long-time behaviour of its probability-valued solution. In particular, the paper provides necessary and sufficient conditions for the initial datum in order that the corresponding solution converges to equilibrium. The proofs rest on the general CLT for independent summands applied to a suitable Skorokhod representation of the original solution evaluated at an increasing and divergent sequence of times. It turns out that, roughly speaking, the initial datum must belong to the standard domain of attraction of a stable law, while the equilibrium is presentable as a mixture of stable laws.
Application Of Iterative Reconstruction Techniques To Conventional Circular Tomography
NASA Astrophysics Data System (ADS)
Ghosh Roy, D. N.; Kruger, R. A.; Yih, B. C.; Del Rio, S. P.; Power, R. L.
1985-06-01
Two "point-by-point" iteration procedures, namely, Iterative Least Square Technique (ILST) and Simultaneous Iterative Reconstructive Technique (SIRT) were applied to classical circular tomographic reconstruction. The technique of tomosynthetic DSA was used in forming the tomographic images. Reconstructions of a dog's renal and neck anatomy are presented.
21 CFR 892.1740 - Tomographic x-ray system.
Code of Federal Regulations, 2013 CFR
2013-04-01
Title 21 (Food and Drugs), Volume 8, revised 2013-04-01. Section 892.1740, Tomographic x-ray system. Food and Drug Administration, Department of Health and Human Services. This generic type of device may include signal analysis and display equipment, patient and equipment...
ECAT: A New Computerized Tomographic Imaging System for Positron-Emitting Radiopharmaceuticals
DOE R&D Accomplishments Database
Phelps, M. E.; Hoffman, E. J.; Huang, S. C.; Kuhl, D. E.
1977-01-01
The ECAT was designed and developed as a complete computerized positron radionuclide imaging system capable of providing high contrast, high resolution, quantitative images in 2 dimensional and tomographic formats. Flexibility, in its various image mode options, allows it to be used for a wide variety of imaging problems.
Goal-Directed Movement Enhances Body Representation Updating
Wen, Wen; Muramatsu, Katsutoshi; Hamasaki, Shunsuke; An, Qi; Yamakawa, Hiroshi; Tamura, Yusuke; Yamashita, Atsushi; Asama, Hajime
2016-01-01
Body representation refers to perception, memory, and cognition related to the body and is updated continuously by sensory input. The present study examined the influence of goals on body representation updating with two experiments of the rubber hand paradigm. In the experiments, participants moved their hidden left hands forward and backward either in response to instruction to touch a virtual object or without any specific goal, while a virtual left hand was presented 250 mm above the real hand and moved in synchrony with the real hand. Participants then provided information concerning the perceived heights of their real left hands and rated their sense of agency and ownership of the virtual hand. Results of Experiment 1 showed that when participants moved their hands with the goal of touching a virtual object and received feedback indicating goal attainment, the perceived positions of their real hands shifted more toward that of the virtual hand relative to that in the condition without a goal, indicating that their body representations underwent greater modification. Furthermore, results of Experiment 2 showed that the effect of goal-directed movement occurred in the active condition, in which participants moved their own hands, but did not occur in the passive condition, in which participants’ hands were moved by the experimenter. Therefore, we concluded that the sense of agency probably contributed to the updating of body representation involving goal-directed movement. PMID:27445766
Contingency bias in probability judgement may arise from ambiguity regarding additional causes.
Mitchell, Chris J; Griffiths, Oren; More, Pranjal; Lovibond, Peter F
2013-09-01
In laboratory contingency learning tasks, people usually give accurate estimates of the degree of contingency between a cue and an outcome. However, if they are asked to estimate the probability of the outcome in the presence of the cue, they tend to be biased by the probability of the outcome in the absence of the cue. This bias is often attributed to an automatic contingency detection mechanism, which is said to act via an excitatory associative link to activate the outcome representation at the time of testing. We conducted 3 experiments to test alternative accounts of contingency bias. Participants were exposed to the same outcome probability in the presence of the cue, but different outcome probabilities in the absence of the cue. Phrasing the test question in terms of frequency rather than probability and clarifying the test instructions reduced but did not eliminate contingency bias. However, removal of ambiguity regarding the presence of additional causes during the test phase did eliminate contingency bias. We conclude that contingency bias may be due to ambiguity in the test question, and therefore it does not require postulation of a separate associative link-based mechanism.
Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna
Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
Value of freedom to choose encoded by the human brain
Fujiwara, Juri; Usui, Nobuo; Park, Soyoung Q.; Williams, Tony; Iijima, Toshio; Taira, Masato; Tsutsui, Ken-Ichiro
2013-01-01
Humans and animals value the opportunity to choose by preferring alternatives that offer more rather than fewer choices. This preference for choice may arise not only from an increased probability of obtaining preferred outcomes but also from the freedom it provides. We used human neuroimaging to investigate the neural basis of the preference for choice as well as for the items that could be chosen. In each trial, participants chose between two options, a monetary amount option and a “choice option.” The latter consisted of a number that corresponded to the number of everyday items participants would subsequently be able to choose from. We found that the opportunity to choose from a larger number of items was equivalent to greater amounts of money, indicating that participants valued having more choice; moreover, participants varied in the degree to which they valued having the opportunity to choose, with some valuing it more than the increased probability of obtaining preferred items. Neural activations in the mid striatum increased with the value of the opportunity to choose. The same region also coded the value of the items. Conversely, activation in the dorsolateral striatum was not related to the value of the items but was elevated when participants were offered more choices, particularly in those participants who overvalued the opportunity to choose. These data suggest a functional dissociation of value representations within the striatum, with general representations in mid striatum and specific representations of the value of freedom provided by the opportunity to choose in dorsolateral striatum. PMID:23864380
Multivariate Density Estimation and Remote Sensing
NASA Technical Reports Server (NTRS)
Scott, D. W.
1983-01-01
Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.
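A minimal sketch of a nonparametric multivariate density estimate of the kind discussed above, using a Gaussian kernel density estimator on hypothetical two-band remote-sensing samples; the bands, sample counts, and grid are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical two-band remote-sensing samples (e.g. radiance in two channels)
rng = np.random.default_rng(8)
samples = rng.multivariate_normal([0.3, 0.7], [[0.02, 0.01], [0.01, 0.03]], size=2000)

kde = gaussian_kde(samples.T)              # nonparametric multivariate density estimate
grid_x, grid_y = np.mgrid[0:1:80j, 0:1:80j]
density = kde(np.vstack([grid_x.ravel(), grid_y.ravel()])).reshape(80, 80)
```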
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
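A hedged sketch of how a Gaussian position-error covariance feeds a collision-probability estimate: a Monte Carlo integration over the 2-D encounter plane with a hypothetical miss distance, covariance, and combined hard-body radius. This is a generic formulation for illustration, not the validation or correction method of the presentation.

```python
import numpy as np

def collision_probability(mu, cov, hard_body_radius, n=200_000, seed=9):
    """Monte Carlo estimate of collision probability in the 2-D encounter plane,
    assuming the relative-position error is Gaussian with mean mu and covariance cov."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=n)
    return np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius)

# Hypothetical encounter: 50 m miss distance, anisotropic 1-sigma of 120 m x 40 m
p = collision_probability(mu=[50.0, 0.0],
                          cov=[[120.0**2, 0.0], [0.0, 40.0**2]],
                          hard_body_radius=20.0)
print(f"Pc ~ {p:.2e}")
```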
Use of vectors in sequence analysis.
Ishikawa, T; Yamamoto, K; Yoshikura, H
1987-10-01
Applications of the vector diagram, a new type of representation of protein structure, in homology search of various proteins including oncogene products are presented. The method takes account of various kinds of information concerning the properties of amino acids, such as Chou and Fasman's probability data. The method can detect conformational similarities of proteins which may not be detected by the conventional programs.
A model of adaptive decision-making from representation of information environment by quantum fields
NASA Astrophysics Data System (ADS)
Bagarello, F.; Haven, E.; Khrennikov, A.
2017-10-01
We present the mathematical model of decision-making (DM) of agents acting in a complex and uncertain environment (combining a huge variety of economic, financial, behavioural and geopolitical factors). To describe the interaction of agents with it, we apply the formalism of quantum field theory (QFT). Quantum fields are of a purely informational nature. The QFT model can be treated as a far relative of the expected utility theory, where the role of utility is played by adaptivity to an environment (bath). However, this sort of utility-adaptivity cannot be represented simply as a numerical function. The operator representation in Hilbert space is used and adaptivity is described as in quantum dynamics. We are especially interested in stabilization of solutions for sufficiently large time. The outputs of this stabilization process, probabilities for possible choices, are treated in the framework of classical DM. To connect classical and quantum DM, we appeal to Quantum Bayesianism. We demonstrate the quantum-like interference effect in DM, which is exhibited as a violation of the formula of total probability, and hence the classical Bayesian inference scheme. This article is part of the themed issue `Second quantum revolution: foundational questions'.
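A small numerical illustration of how a quantum-like interference term violates the classical formula of total probability; the probabilities and the relative phase below are arbitrary and chosen only for demonstration.

```python
import math

# Classical (Bayesian) prediction from the law of total probability
p_B, p_A_given_B, p_A_given_notB = 0.5, 0.6, 0.4
p_classical = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)   # = 0.5

# A quantum-like model adds an interference term 2*cos(theta)*sqrt(product of terms)
theta = 2.0                                                     # hypothetical relative phase
interference = 2 * math.cos(theta) * math.sqrt(
    p_A_given_B * p_B * p_A_given_notB * (1 - p_B))
p_quantum_like = p_classical + interference

# The two values differ, i.e. the formula of total probability is violated
print(p_classical, round(p_quantum_like, 3))
```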
Experience-Based Probabilities Modulate Expectations in a Gender-Coded Artificial Language
Öttl, Anton; Behne, Dawn M.
2016-01-01
The current study combines artificial language learning with visual world eyetracking to investigate acquisition of representations associating spoken words and visual referents using morphologically complex pseudowords. Pseudowords were constructed to consistently encode referential gender by means of suffixation for a set of imaginary figures that could be either male or female. During training, the frequency of exposure to pseudowords and their imaginary figure referents were manipulated such that a given word and its referent would be more likely to occur in either the masculine form or the feminine form, or both forms would be equally likely. Results show that these experience-based probabilities affect the formation of new representations to the extent that participants were faster at recognizing a referent whose gender was consistent with the induced expectation than a referent whose gender was inconsistent with this expectation. Disambiguating gender information available from the suffix did not mask the induced expectations. Eyetracking data provide additional evidence that such expectations surface during online lexical processing. Taken together, these findings indicate that experience-based information is accessible during the earliest stages of processing, and are consistent with the view that language comprehension depends on the activation of perceptual memory traces. PMID:27602009
Thermal structure of the Panama Basin by analysis of seismic attenuation
NASA Astrophysics Data System (ADS)
Vargas, Carlos A.; Pulido, José E.; Hobbs, Richard W.
2018-04-01
Using recordings of earthquakes on Oceanic Bottom Seismographs and onshore stations on the coastal margins of Colombia, Panama, and Ecuador, we estimate attenuation parameters in the upper lithosphere of the Panama Basin. The tomographic images of the derived coda-Q values are correlated with estimates of Curie Point Depth and measured and theoretical heat flow. Our study reveals three tectonic domains where magmatic/hydrothermal activity or lateral variations of the lithologic composition in the upper lithosphere can account for the modeled thermal structure and the anelasticity. We find that the Costa Rica Ridge and the Panama Fracture Zone are significant tectonic features probably related to thermal anomalies detected in the study area. We interpret a large and deep intrinsic attenuation anomaly as related to the heat source at the Costa Rica Ridge and show how interactions with regional fault systems cause contrasting attenuation anomalies.
Uncertainty loops in travel-time tomography from nonlinear wave physics.
Galetti, Erica; Curtis, Andrew; Meles, Giovanni Angelo; Baptie, Brian
2015-04-10
Estimating image uncertainty is fundamental to guiding the interpretation of geoscientific tomographic maps. We reveal novel uncertainty topologies (loops) which indicate that while the speeds of both low- and high-velocity anomalies may be well constrained, their locations tend to remain uncertain. The effect is widespread: loops dominate around a third of United Kingdom Love wave tomographic uncertainties, changing the nature of interpretation of the observed anomalies. Loops exist due to 2nd and higher order aspects of wave physics; hence, although such structures must exist in many tomographic studies in the physical sciences and medicine, they are unobservable using standard linearized methods. Higher order methods might fruitfully be adopted.
Tomographic diffractive microscopy with a wavefront sensor.
Ruan, Y; Bon, P; Mudry, E; Maire, G; Chaumet, P C; Giovannini, H; Belkebir, K; Talneau, A; Wattellier, B; Monneret, S; Sentenac, A
2012-05-15
Tomographic diffractive microscopy is a recent imaging technique that reconstructs quantitatively the three-dimensional permittivity map of a sample with a resolution better than that of conventional wide-field microscopy. Its main drawbacks lie in the complexity of the setup and in the slowness of the image recording as both the amplitude and the phase of the field scattered by the sample need to be measured for hundreds of successive illumination angles. In this Letter, we show that, using a wavefront sensor, tomographic diffractive microscopy can be implemented easily on a conventional microscope. Moreover, the number of illuminations can be dramatically decreased if a constrained reconstruction algorithm is used to recover the sample map of permittivity.
5D-intravital tomography as a novel tool for non-invasive in-vivo analysis of human skin
NASA Astrophysics Data System (ADS)
König, Karsten; Weinigel, Martin; Breunig, Hans G.; Gregory, Axel; Fischer, Peter; Kellner-Höfer, Marcel; Bückle, Rainer; Schwarz, Martin; Riemann, Iris; Stracke, Frank; Huck, Volker; Gorzelanny, Christian; Schneider, Stefan W.
2010-02-01
Some years ago, CE-marked clinical multiphoton systems for 3D imaging of human skin with subcellular resolution have been launched. These tomographs provide optical biopsies with submicron resolution based on two-photon excited autofluorescence (NAD(P)H, flavoproteins, keratin, elastin, melanin, porphyrins) and second harmonic generation by collagen. The 3D tomograph was now transferred into a 5D imaging system by the additional detection of the emission spectrum and the fluorescence lifetime based on spatially and spectrally resolved time-resolved single photon counting. The novel 5D intravital tomograph (5D-IVT) was employed for the early detection of atopic dermatitis and the analysis of treatment effects.
NASA Technical Reports Server (NTRS)
Yin, L. I.; Trombka, J. I.; Bielefeld, M. J.; Seltzer, S. M.
1984-01-01
The results of two computer simulations demonstrate the feasibility of using the nonoverlapping redundant array (NORA) to form three-dimensional images of objects with X-rays. Pinholes admit the X-rays to nonoverlapping points on a detector. The object is reconstructed in the analog mode by optical correlation and in the digital mode by tomographic computations. Trials were run with a stick-figure pyramid and extended objects with out-of-focus backgrounds. Substitution of spherical optical lenses for the pinholes increased the light transmission sufficiently that objects could be easily viewed in a dark room. Out-of-focus aberrations in tomographic reconstruction could be eliminated using Chang's (1976) algorithm.
Tomographic phase microscopy: principles and applications in bioimaging [Invited
Jin, Di; Zhou, Renjie; Yaqoob, Zahid; So, Peter T. C.
2017-01-01
Tomographic phase microscopy (TPM) is an emerging optical microscopic technique for bioimaging. TPM uses digital holographic measurements of complex scattered fields to reconstruct three-dimensional refractive index (RI) maps of cells with diffraction-limited resolution by solving inverse scattering problems. In this paper, we review the developments of TPM from the fundamental physics to its applications in bioimaging. We first provide a comprehensive description of the tomographic reconstruction physical models used in TPM. The RI map reconstruction algorithms and various regularization methods are discussed. Selected TPM applications for cellular imaging, particularly in hematology, are reviewed. Finally, we examine the limitations of current TPM systems, propose future solutions, and envision promising directions in biomedical research. PMID:29386746
Neuner-Jehle, S; Wegwarth, O; Steurer, J
2008-06-11
Communication about risk, e.g. cardiovascular risk, is a central task of physicians in their daily practice. In this paper we summarize the different methods of risk communication published in the literature. The different methods and their particular advantages and shortcomings are described and some recommendations are formulated. The most significant are: verbal qualifiers such as "your risk for a cardiovascular event is moderate" are imprecise and difficult to interpret; information about risk in numerical form is more comprehensible when delivered as natural frequencies rather than percentages; and pictorial representations contribute to better understanding. Probably no single mode of representation is the most effective way to convey information about risk; a combination of methods is likely preferable.
A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews
NASA Astrophysics Data System (ADS)
Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi
2018-03-01
Multi-label sentiment classification of customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and multi-output model based on a bi-directional recurrent neural network, which considers both the semantic and the lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then incorporated via attention over the output of a softmax activation on the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels as features with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves breakthrough improvements on a customer reviews dataset.
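A minimal single-input sketch (in PyTorch) of a Bi-GRU encoder with attention pooling and a multi-label sigmoid head. The hierarchical multi-input structure and the part-of-speech branch of the proposed model are omitted, and all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    """Bi-GRU sentence encoder with soft-attention pooling and a multi-label head."""
    def __init__(self, vocab_size=10_000, emb_dim=128, hidden=64, n_labels=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        h, _ = self.gru(self.emb(tokens))            # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1) # attention over time steps
        sentence = (weights * h).sum(dim=1)          # weighted sentence representation
        return torch.sigmoid(self.out(sentence))     # per-label probabilities

model = BiGRUAttention()
probs = model(torch.randint(0, 10_000, (4, 20)))     # 4 reviews, 20 tokens each
```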
Palliative care and the intensive care nurses: feelings that endure.
Silveira, Natyele Rippel; Nascimento, Eliane Regina Pereira do; Rosa, Luciana Martins da; Jung, Walnice; Martins, Sabrina Regina; Fontes, Moisés Dos Santos
2016-01-01
The objective was to know the feelings of nurses regarding palliative care in adult intensive care units. This was a qualitative study, which adopted the theoretical framework of Social Representations, carried out with 30 nurses of the state of Santa Catarina recruited by snowball sampling. Data were collected through semi-structured interviews conducted from April to August 2015, and organized and analyzed through the Collective Subject Discourse. The results showed that the central ideas are related to feelings of comfort, frustration, insecurity and anguish, in addition to the feeling that professional training and performance are focused on cure. The social representations of nurses regarding the feelings related to palliative care are characterized mainly by negative feelings, probably as a consequence of the context in which care is provided.
NASA Astrophysics Data System (ADS)
Lee, Choonsik; Lee, Choonik; Lee, Jai-Ki
2006-11-01
Distributions of radiation absorbed dose within human anatomy have been estimated through Monte Carlo radiation transport techniques implemented for two different classes of computational anthropomorphic phantoms: (1) mathematical equation-based stylized phantoms and (2) tomographic image-based voxel phantoms. Voxel phantoms constructed from tomographic images of real human anatomy have been actively developed since the late 1980s to overcome the anatomical approximations necessary with stylized phantoms, which themselves have been utilized since the mid 1960s. However, revisions of stylized phantoms have also been pursued in parallel to the development of voxel phantoms since voxel phantoms (1) are initially restricted to the individual-specific anatomy of the person originally imaged, (2) must be restructured on an organ-by-organ basis to conform to reference individual anatomy and (3) cannot easily represent very fine anatomical structures and tissue layers that are thinner than the voxel dimensions of the overall phantom. Although efforts have been made to improve the anatomic realism of stylized phantoms, most of these efforts have been limited to attempts to alter internal organ structures. Aside from the internal organs, the exterior shapes, and especially the arm structures, of stylized phantoms are also far from realistic descriptions of human anatomy, and may cause dosimetry errors in the calculation of organ-absorbed doses for external irradiation scenarios. The present study was intended to highlight the need to revise the existing arm structure within stylized phantoms by comparing organ doses of stylized adult phantoms with those from three adult voxel phantoms in the lateral photon irradiation geometry. The representative stylized phantom, the adult phantom of the Oak Ridge National Laboratory (ORNL) series and two adult male voxel phantoms, KTMAN-2 and VOXTISS8, were employed for Monte Carlo dose calculation, and data from another voxel phantom, VIP-Man, were obtained from literature sources. The absorbed doses for lungs, oesophagus, liver and kidneys that could be affected by arm structures in the lateral irradiation geometry were obtained for both classes of phantoms in lateral monoenergetic photon irradiation geometries. As expected, those organs in the ORNL phantoms received apparently higher absorbed doses than those in the voxel phantoms. The overestimation is mainly attributed to the relatively poor representation of the arm structure in the ORNL phantom in which the arm bones are embedded within the regions describing the phantom's torso. The results of this study suggest that the overestimation of organ doses, due to unrealistic arm representation, should be taken into account when stylized phantoms are employed for equivalent or effective dose estimates, especially in the case of an irradiation scenario with dominating lateral exposure. For such a reason, the stylized phantom arm structure definition should be revised in order to obtain more realistic evaluations.
Mittleman, D M; Hunsche, S; Boivin, L; Nuss, M C
1997-06-15
We demonstrate tomographic T-ray imaging, using the timing information present in terahertz (THz) pulses in a reflection geometry. THz pulses are reflected from refractive-index discontinuities inside an object, and the time delays of these pulses are used to determine the positions of the discontinuities along the propagation direction. In this fashion a tomographic image can be constructed.
Computed tomographic findings of cerebral fat embolism following multiple bone fractures.
Law, Huong Ling; Wong, Siong Lung; Tan, Suzet
2013-02-01
Fat embolism to the lungs and brain is an uncommon complication following fractures. Few reports with descriptions of computed tomographic (CT) findings of emboli to the brain or cerebral fat embolism are available. We report a case of cerebral fat embolism following multiple skeletal fractures and present its CT findings here.
Quantum-tomographic cryptography with a semiconductor single-photon source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaszlikowski, D.; Yang, L.J.; Yong, L.S.
2005-09-15
We analyze the security of so-called quantum-tomographic cryptography with the source producing entangled photons via an experimental scheme proposed by Fattal et al. [Phys. Rev. Lett. 92, 37903 (2004)]. We determine the range of the experimental parameters for which the protocol is secure against the most general incoherent attacks.
Current developments in clinical multiphoton tomography
NASA Astrophysics Data System (ADS)
König, Karsten; Weinigel, Martin; Breunig, Hans Georg; Gregory, Axel; Fischer, Peter; Kellner-Höfer, Marcel; Bückle, Rainer
2010-02-01
Two-photon microscopy has been introduced in 1990 [1]. 13 years later, CE-marked clinical multiphoton systems for 3D imaging of human skin with subcellular resolution have been launched by the JenLab company with the tomograph DermaInspectTM. In 2010, the second generation of clinical multiphoton tomographs was introduced. The novel mobile multiphoton tomograph MPTflexTM, equipped with a flexible articulated optical arm, provides increased flexibility and accessibility, especially for clinical and cosmetic examinations. The multiphoton excitation of fluorescent biomolecules like NAD(P)H, flavins, porphyrins, elastin, and melanin as well as the second harmonic generation of collagen is induced by picojoule femtosecond laser pulses from a tunable turn-key near-infrared laser system. The ability for rapid high-quality image acquisition, the user-friendly operation of the system, and the compact and flexible design qualify this system to be used for melanoma detection, diagnostics of dermatological disorders, cosmetic research, and skin aging measurements as well as in situ drug monitoring and animal research. So far, more than 1,000 patients and volunteers have been investigated with the multiphoton tomographs in Europe, Asia, and Australia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceglio, N.M.; George, E.V.; Brooks, K.M.
The first successful demonstration of high-resolution tomographic imaging of a laboratory plasma using coded imaging techniques is reported. Zone plate coded imaging (ZPCI) has been used to image the x-ray emission from laser-compressed DT-filled microballoons. The zone plate camera viewed an x-ray spectral window extending from below 2 keV to above 6 keV. It exhibited a resolution of approximately 8 μm, a magnification factor of approximately 13, and subtended a radiation collection solid angle at the target of approximately 10⁻² sr. X-ray images using ZPCI were compared with those taken using a grazing-incidence reflection x-ray microscope. The agreement was excellent. In addition, the zone plate camera produced tomographic images. The nominal tomographic resolution was approximately 75 μm. This allowed three-dimensional viewing of target emission from a single shot in planar "slices". In addition to its tomographic capability, the great advantage of the coded imaging technique lies in its applicability to hard (greater than 10 keV) x-ray and charged particle imaging. Experiments involving coded imaging of the suprathermal x-ray and high-energy alpha particle emission from laser-compressed microballoon targets are discussed.
Tomography and the Herglotz-Wiechert inverse formulation
NASA Astrophysics Data System (ADS)
Nowack, Robert L.
1990-04-01
In this paper, linearized tomography and the Herglotz-Wiechert inverse formulation are compared. Tomographic inversions for 2-D or 3-D velocity structure use line integrals along rays and can be written in terms of Radon transforms. For radially concentric structures, Radon transforms are shown to reduce to Abel transforms. Therefore, for straight ray paths, the Abel transform of travel-time is a tomographic algorithm specialized to a one-dimensional radially concentric medium. The Herglotz-Wiechert formulation uses seismic travel-time data to invert for one-dimensional earth structure and is derived using exact ray trajectories by applying an Abel transform. This is of historical interest since it would imply that a specialized tomographic-like algorithm has been used in seismology since the early part of the century (see Herglotz, 1907; Wiechert, 1910). Numerical examples are performed comparing the Herglotz-Wiechert algorithm and linearized tomography along straight rays. Since the Herglotz-Wiechert algorithm is applicable under specific conditions, (the absence of low velocity zones) to non-straight ray paths, the association with tomography may prove to be useful in assessing the uniqueness of tomographic results generalized to curved ray geometries.
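A minimal numerical sketch of the forward Abel transform that links a radially concentric profile to its straight-ray line integrals, as discussed above; the grid sizes and the test profile are arbitrary, and the integrable square-root singularity at r = y is handled only crudely on the discrete grid.

```python
import numpy as np

def abel_transform(f, r, y):
    """Forward Abel transform F(y) = 2 * integral_{y}^{R} f(r) r / sqrt(r^2 - y^2) dr
    of a radially symmetric profile f(r), by simple trapezoidal quadrature."""
    F = np.zeros_like(y)
    for i, yi in enumerate(y):
        mask = r > yi                       # exclude the singular point r = y
        rr, ff = r[mask], f[mask]
        integrand = 2.0 * ff * rr / np.sqrt(rr ** 2 - yi ** 2)
        F[i] = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rr))
    return F

# Hypothetical radially concentric slowness perturbation inside radius 1
r = np.linspace(0.0, 1.0, 2001)
f = np.exp(-((r - 0.5) / 0.1) ** 2)          # slowness anomaly versus radius
y = np.linspace(0.0, 0.95, 50)               # straight-ray impact parameters
projections = abel_transform(f, r, y)        # line integrals along straight chords
```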
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper conducts a comprehensive study between the single-camera light-field particle image velocimetry (LF-PIV) and the multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between these two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field camera Tomo-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that the single LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires the use of overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. Experiments confirm that given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with those of multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
Diaz, Alejandro A; Estépar, Raul San José; Washko, George R
2016-01-01
Computed tomographic measures of central airway morphology have been used in clinical, epidemiologic, and genetic investigation as an inference of the presence and severity of small-airway disease in smokers. Although several association studies have brought us to believe that these computed tomographic measures reflect airway remodeling, a careful review of such data and more recent evidence may reveal underappreciated complexity to these measures and limitations that prompt us to question that belief. This Perspective offers a review of seminal papers and alternative explanations of their data in the light of more recent evidence. The relationships between airway morphology and lung function are observed in subjects who never smoked, implying that native airway structure indeed contributes to lung function; computed tomographic measures of central airways such as wall area, lumen area, and total bronchial area are smaller in smokers with chronic obstructive pulmonary disease versus those without chronic obstructive pulmonary disease; and the airways are smaller as disease severity increases. The observations suggest that (1) native airway morphology likely contributes to the relationships between computed tomographic measures of airways and lung function; and (2) the presence of smaller airways in those with chronic obstructive pulmonary disease versus those without chronic obstructive pulmonary disease as well as their decrease with disease severity suggests that smokers with chronic obstructive pulmonary disease may simply have smaller airways to begin with, which put them at greater risk for the development of smoking-related disease.
Hierarchical multimodal tomographic x-ray imaging at a superbend
NASA Astrophysics Data System (ADS)
Stampanoni, M.; Marone, F.; Mikuljan, G.; Jefimovs, K.; Trtik, P.; Vila-Comamala, J.; David, C.; Abela, R.
2008-08-01
Over the last decade, synchrotron-based X-ray tomographic microscopy has established itself as a fundamental tool for non-invasive, quantitative investigations of a broad variety of samples, with applications ranging from space research and materials science to biology and medicine. Thanks to the brilliance of modern third-generation sources, voxel sizes in the micrometer range are routinely achieved by the major X-ray microtomography devices around the world, while the isotropic 100 nm barrier is reached and surpassed by only a few instruments. The beamline for TOmographic Microscopy and Coherent rAdiology experiments (TOMCAT) of the Swiss Light Source at the Paul Scherrer Institut operates a multimodal endstation which offers tomographic capabilities in the micrometer range in absorption contrast as well as in phase contrast. Recently, the beamline has been equipped with a full-field, hard X-ray microscope with a theoretical pixel size down to 30 nm and a field of view of 50 microns. The nanoscope performs well at X-ray energies between 8 and 12 keV, selected from the white beam of a 2.9 T superbend by a [Ru/C]100 fixed-exit multilayer monochromator. In this work we illustrate the experimental setup dedicated to the nanoscope, in particular the ad-hoc designed X-ray optics needed to produce a homogeneous, square illumination of the sample imaging plane as well as the magnifying zone plate. Tomographic reconstructions at 60 nm voxel size are shown and discussed.
Field-portable lensfree tomographic microscope.
Isikman, Serhan O; Bishara, Waheb; Sikora, Uzair; Yaglidere, Oguzhan; Yeah, John; Ozcan, Aydogan
2011-07-07
We present a field-portable lensfree tomographic microscope, which can achieve sectional imaging of a large volume (∼20 mm³) on a chip with an axial resolution of <7 μm. In this compact tomographic imaging platform (weighing only ∼110 grams), 24 light-emitting diodes (LEDs) that are each butt-coupled to a fibre-optic waveguide are controlled through a cost-effective micro-processor to sequentially illuminate the sample from different angles, recording lensfree holograms of the sample that is placed on top of a digital sensor array. In order to generate pixel super-resolved (SR) lensfree holograms and hence digitally improve the achievable lateral resolution, multiple sub-pixel shifted holograms are recorded at each illumination angle by electromagnetically actuating the fibre-optic waveguides using compact coils and magnets. These SR projection holograms, obtained over an angular range of ±50°, are rapidly reconstructed to yield projection images of the sample, which can then be back-projected to compute tomograms of the objects on the sensor chip. The performance of this compact and light-weight lensfree tomographic microscope is validated by imaging micro-beads of different dimensions as well as a Hymenolepis nana egg, which is an infectious parasitic flatworm. Achieving a decent three-dimensional spatial resolution, this field-portable on-chip optical tomographic microscope might provide a useful toolset for telemedicine and high-throughput imaging applications in resource-poor settings. This journal is © The Royal Society of Chemistry 2011
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kramar, M.; Lin, H.; Tomczyk, S., E-mail: kramar@cua.edu, E-mail: lin@ifa.hawaii.edu, E-mail: tomczyk@ucar.edu
We present the first direct “observation” of the global-scale, 3D coronal magnetic fields of Carrington Rotation (CR) Cycle 2112 using vector tomographic inversion techniques. The vector tomographic inversion uses measurements of the Fe xiii 10747 Å Hanle effect polarization signals by the Coronal Multichannel Polarimeter (CoMP) and 3D coronal density and temperature derived from scalar tomographic inversion of Solar Terrestrial Relations Observatory (STEREO)/Extreme Ultraviolet Imager (EUVI) coronal emission lines (CELs) intensity images as inputs to derive a coronal magnetic field model that best reproduces the observed polarization signals. While independent verifications of the vector tomography results cannot be performed, we compared the tomography inverted coronal magnetic fields with those constructed by magnetohydrodynamic (MHD) simulations based on observed photospheric magnetic fields of CR 2112 and 2113. We found that the MHD model for CR 2112 is qualitatively consistent with the tomography inverted result for most of the reconstruction domain except for several regions. Particularly, for one of the most noticeable regions, we found that the MHD simulation for CR 2113 predicted a model that more closely resembles the vector tomography inverted magnetic fields. In another case, our tomographic reconstruction predicted an open magnetic field at a region where a coronal hole can be seen directly from a STEREO-B/EUVI image. We discuss the utilities and limitations of the tomographic inversion technique, and present ideas for future developments.
Relative arrival-time upper-mantle tomography and the elusive background mean
NASA Astrophysics Data System (ADS)
Bastow, Ian D.
2012-08-01
The interpretation of seismic tomographic images of upper-mantle seismic wave speed structure is often a matter of considerable debate because the observations can usually be explained by a range of hypotheses, including variable temperature, composition, anisotropy, and the presence of partial melt. An additional problem, often overlooked in tomographic studies using relative as opposed to absolute arrival times, is the issue of the resulting velocity model's zero mean. In shield areas, for example, relative arrival-time analysis strips off a background mean velocity structure that is markedly fast compared to the global average. Conversely, in active areas, the background mean is often markedly slow compared to the global average. Appreciation of this issue is vital when interpreting seismic tomographic images: 'high' and 'low' velocity anomalies should not necessarily be interpreted, respectively, as 'fast' and 'slow' compared to 'normal mantle'. This issue has been discussed in the seismological literature in detail over the years, yet subsequent tomography studies have still fallen into the trap of misinterpreting their velocity models. I highlight here some recent examples of this and provide a simple strategy to address the problem using constraints from a recent global tomographic model, and insights from catalogues of absolute travel-time anomalies. Consultation of such absolute measures of seismic wave speed should be routine during regional tomographic studies, if only for the benefit of the broader Earth Science community, who readily follow the red = hot and slow, blue = cold and fast rule of thumb when interpreting the images for themselves.
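As a toy illustration of the zero-mean issue described above (my own sketch with invented numbers, not taken from the paper): demeaning absolute travel-time residuals makes a uniformly fast or slow background invisible to a relative arrival-time inversion.

```python
import numpy as np

# Hypothetical absolute travel-time residuals (s) at five stations, all early
# because the region sits on a uniformly fast (shield-like) upper mantle.
absolute_residuals = np.array([-1.9, -2.1, -1.7, -2.3, -2.0])

# Relative arrival-time analysis removes the network mean ...
relative_residuals = absolute_residuals - absolute_residuals.mean()

# ... so the inversion only "sees" the +/-0.3 s scatter about the mean;
# the ~-2 s background (the fast-shield signature) is stripped off and
# cannot appear in the resulting velocity model.
print(absolute_residuals.mean())   # approx. -2.0 s, invisible to the model
print(relative_residuals)          # what a relative-time tomography actually fits
```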
Reasoning and choice in the Monty Hall Dilemma (MHD): implications for improving Bayesian reasoning
Tubau, Elisabet; Aguilar-Lleyda, David; Johnson, Eric D.
2015-01-01
The Monty Hall Dilemma (MHD) is a two-step decision problem involving counterintuitive conditional probabilities. The first choice is made among three equally probable options, whereas the second choice takes place after the elimination of one of the non-selected options which does not hide the prize. Differing from most Bayesian problems, statistical information in the MHD has to be inferred, either by learning outcome probabilities or by reasoning from the presented sequence of events. This often leads to suboptimal decisions and erroneous probability judgments. Specifically, decision makers commonly develop a wrong intuition that final probabilities are equally distributed, together with a preference for their first choice. Several studies have shown that repeated practice enhances sensitivity to the different reward probabilities, but does not facilitate correct Bayesian reasoning. However, modest improvements in probability judgments have been observed after guided explanations. To explain these dissociations, the present review focuses on two types of causes producing the observed biases: emotion-based choice biases and cognitive limitations in understanding probabilistic information. Among the latter, we identify a crucial cause for the universal difficulty in overcoming the equiprobability illusion: incomplete representation of prior and conditional probabilities. We conclude that repeated practice and/or high incentives can be effective for overcoming choice biases, but promoting an adequate partitioning of possibilities seems to be necessary for overcoming cognitive illusions and improving Bayesian reasoning. PMID:25873906
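A quick Monte Carlo check of the counterintuitive MHD probabilities discussed above (an illustrative sketch, not code from the reviewed studies): staying wins about one third of the time, switching about two thirds.

```python
import random

def play_monty_hall(switch: bool, n_trials: int = 100_000) -> float:
    """Simulate the Monty Hall Dilemma and return the win rate."""
    wins = 0
    for _ in range(n_trials):
        prize = random.randrange(3)          # door hiding the prize
        first_choice = random.randrange(3)   # contestant's initial pick
        # Host opens a non-chosen, non-prize door (which goat door he picks
        # does not affect the stay/switch win rates).
        host_opens = next(d for d in range(3) if d != first_choice and d != prize)
        if switch:
            # Switch to the remaining unopened door.
            final_choice = next(d for d in range(3)
                                if d != first_choice and d != host_opens)
        else:
            final_choice = first_choice
        wins += (final_choice == prize)
    return wins / n_trials

print("stay  :", play_monty_hall(switch=False))   # ~0.33
print("switch:", play_monty_hall(switch=True))    # ~0.67
```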
NASA Astrophysics Data System (ADS)
De Landro, Grazia; Gammaldi, Sergio; Serlenga, Vincenzo; Amoroso, Ortensia; Russo, Guido; Festa, Gaetano; D'Auria, Luca; Bruno, Pier Paolo; Gresse, Marceau; Vandemeulebrouck, Jean; Zollo, Aldo
2017-04-01
Seismic tomography can be used to image the spatial variation of rock properties within complex geological media such as volcanoes. Solfatara is a volcano located within the still active Campi Flegrei caldera, characterized by periodic episodes of extended, low-rate ground subsidence and uplift called bradyseism, accompanied by intense seismic and geochemical activity. In particular, Solfatara exhibits diffuse degassing of impressive magnitude, which underlines the relevance of fluid and heat transport at the crater and prompted further research to improve the understanding of the hydrothermal system feeding the surface phenomenon. In this line, an active seismic experiment, Repeated Induced Earthquake and Noise (RICEN) (EU Project MEDSUV), was carried out between September 2013 and November 2014 to provide time-varying high-resolution images of the structure of Solfatara. In this study we used the datasets provided by two different acquisition geometries: (a) a 2D array covering an area of 90 x 115 m², sampled by a regular grid of 240 vertical sensors deployed at the crater surface; (b) two orthogonal 1D seismic arrays deployed along NE-SW and NW-SE directions crossing the 400 m crater surface, each sampled with a regular line of 240 receivers and 116 shots. We present 2D and 3D tomographic high-resolution P-wave velocity images obtained using two different tomographic methods adopting a multiscale strategy. The 3D image of the shallow (30-35 m) central part of the Solfatara crater is obtained through iterative, linearized tomographic inversion of the P-wave first-arrival times. The 2D P-wave velocity sections (60-70 m) are obtained using a non-linear travel-time tomography method based on the evaluation of the a posteriori probability density with a Bayesian approach. The retrieved 3D images, integrated with a resistivity section and with temperature and CO2 flux measurements, define the following characteristics: (1) a depth-dependent P-wave velocity layer down to 14 m, with Vp < 700 m/s, typical of poorly consolidated tephra and affected by CO2 degassing; (2) an intermediate layer, deepening towards the mineralized liquid-saturated area (Fangaia), interpreted as permeable deposits saturated with condensed water; (3) a deep, confined high-velocity anomaly associated with a CO2 reservoir. With the 2D profiles we can image down to around 70 m depth: the first 30 m are characterized by features and velocities comparable to those of the 3D images, while deeper, between 40 and 60 m depth, two low-velocity anomalies were found that probably indicate a preferential pathway for fluid degassing. These features are the expression of an area located between the Fangaia, which is water saturated and replenished from deep aquifers, and the main fumaroles, which are the surface expression of a deep rising CO2 flux. Thus, changes in the outgassing rate greatly affect the shallow hydrothermal system, which can be used as a near-surface "mirror" of fluid migration processes occurring at greater depths.
Downscaling Smooth Tomographic Models: Separating Intrinsic and Apparent Anisotropy
NASA Astrophysics Data System (ADS)
Bodin, Thomas; Capdeville, Yann; Romanowicz, Barbara
2016-04-01
In recent years, a number of tomographic models based on full waveform inversion have been published. Due to computational constraints, the fitted waveforms are low-pass filtered, which results in an inability to map features smaller than half the shortest wavelength. However, these tomographic images are not a simple spatial average of the true model, but rather an effective, apparent, or equivalent model that provides a similar 'long-wave' data fit. For example, it can be shown that a series of horizontal isotropic layers will be seen by a 'long wave' as a smooth anisotropic medium. In this way, the observed anisotropy in tomographic models is a combination of intrinsic anisotropy produced by lattice-preferred orientation (LPO) of minerals, and apparent anisotropy resulting from the inability to map discontinuities. Interpretations of observed anisotropy (e.g. in terms of mantle flow) therefore require the separation of its intrinsic and apparent components. The "up-scaling" relations that link elastic properties of a rapidly varying medium to elastic properties of the effective medium as seen by long waves are strongly non-linear and their inverse highly non-unique. That is, a smooth homogenized effective model is equivalent to a large number of models with discontinuities. In the 1D case, Capdeville et al (GJI, 2013) recently showed that a tomographic model which results from the inversion of low-pass filtered waveforms is a homogenized model, i.e. the same as the model computed by upscaling the true model. Here we propose a stochastic method to sample the ensemble of layered models equivalent to a given tomographic profile. We use a transdimensional formulation where the number of layers is variable. Furthermore, each layer may be either isotropic (1 parameter) or intrinsically anisotropic (2 parameters). The parsimonious character of the Bayesian inversion gives preference to models with the fewest parameters (i.e. the fewest layers and the maximum number of isotropic layers). The non-uniqueness of the problem can be addressed by adding high frequency data such as receiver functions, which are able to map first-order discontinuities. We show with synthetic tests that this method enables us to distinguish between intrinsic and apparent anisotropy in tomographic models, as layers with intrinsic anisotropy are only present when required by the data. A real data example is presented based on the latest global model produced at Berkeley.
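The upscaling idea referenced above can be illustrated with the classical Backus (1962) long-wavelength average, in which a stack of thin isotropic layers is seen by long waves as an effective vertically transverse isotropic (VTI) medium. The sketch below is a generic textbook version of that average, not the non-linear homogenization operator of Capdeville et al.; all layer values are invented for illustration.

```python
import numpy as np

def backus_vti(thickness, vp, vs, rho):
    """Thickness-weighted Backus average of thin isotropic layers.

    Returns the effective VTI stiffnesses (C11, C13, C33, C44, C66) in Pa,
    i.e. the long-wavelength medium that a smooth tomographic model recovers.
    """
    thickness, vp, vs, rho = map(np.asarray, (thickness, vp, vs, rho))
    w = thickness / thickness.sum()              # volume (thickness) weights
    mu = rho * vs**2                             # Lame parameters per layer
    lam = rho * vp**2 - 2.0 * mu
    avg = lambda x: np.sum(w * x)                # <.> = thickness-weighted mean

    c33 = 1.0 / avg(1.0 / (lam + 2 * mu))
    c44 = 1.0 / avg(1.0 / mu)
    c66 = avg(mu)
    c13 = avg(lam / (lam + 2 * mu)) * c33
    c11 = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + avg(lam / (lam + 2 * mu))**2 * c33
    return c11, c13, c33, c44, c66

# Two alternating isotropic layers (hypothetical values): the effective medium
# is anisotropic (c11 != c33) even though every individual layer is isotropic.
c11, c13, c33, c44, c66 = backus_vti(
    thickness=np.array([1.0, 1.0]),
    vp=np.array([5800.0, 6400.0]),        # m/s
    vs=np.array([3200.0, 3700.0]),        # m/s
    rho=np.array([2700.0, 2900.0]))       # kg/m^3
print("apparent anisotropy (c11 - c33)/c33 =", (c11 - c33) / c33)
```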
NASA Astrophysics Data System (ADS)
Deyhle, Hans; Schmidli, Fredy; Krastl, Gabriel; Müller, Bert
2010-09-01
Direct composite fillings belong to the widespread tooth restoration techniques in dental medicine. The procedure consists of successive steps, which include etching of the prepared tooth surface, bonding, and placement of composite in incrementally built-up layers. The durability and lifespan of the composite inlays strongly depend on the accurate completion of the individual steps, to be realized also by students in dental medicine. Improper handling or nonconformity in the bonding procedure often leads to air enclosures (bubbles) as well as to significant gaps between the composite layers or at the margins of the restoration. Traditionally, one analyzes the quality of the restoration by cutting the tooth in an arbitrarily selected plane and inspecting this plane by conventional optical microscopy. Although the precision of this established method is satisfactory, it is restricted to the selected two-dimensional plane. Rather simple micro computed tomography (μCT) systems, such as the SkyScan 1174™, allow for the non-destructive three-dimensional imaging of restored teeth ex vivo and for virtually cutting the tomographic data in any desired direction, offering a powerful tool for inspecting the restored tooth with micrometer resolution before cutting and thus also for selecting a two-dimensional plane with potential defects. In order to study the influence of the individual steps on the resulting tooth restoration, direct composite fillings were placed in MOD cavities of extracted teeth. After etching, an adhesive was applied in half of the specimens. From the tomographic datasets, it becomes clear that gaps occur more frequently when bonding is omitted. The visualization of air enclosures makes it possible to determine the probability of finding a micrometer-sized defect using an arbitrarily selected cutting plane for inspection.
NASA Astrophysics Data System (ADS)
Koulakov, I.; Bohm, M.; Asch, G.; Lühr, B.-G.; Manzanares, A.; Brotopuspito, K. S.; Fauzi, Pak; Purbawinata, M. A.; Puspito, N. T.; Ratdomopurbo, A.; Kopp, H.; Rabbel, W.; Shevkunova, E.
2007-08-01
Here we present the results of local source tomographic inversion beneath central Java. The data set was collected by a temporary seismic network. More than 100 stations were operated for almost half a year. About 13,000 P and S arrival times from 292 events were used to obtain three-dimensional (3-D) Vp, Vs, and Vp/Vs models of the crust and the mantle wedge beneath central Java. Source location and determination of the 3-D velocity models were performed simultaneously based on a new iterative tomographic algorithm, LOTOS-06. Final event locations clearly image the shape of the subduction zone beneath central Java. The dipping angle of the slab increases gradually from almost horizontal to about 70°. A double seismic zone is observed in the slab between 80 and 150 km depth. The most striking feature of the resulting P and S models is a pronounced low-velocity anomaly in the crust, just north of the volcanic arc (Merapi-Lawu anomaly (MLA)). An algorithm for estimation of the amplitude value, which is presented in the paper, shows that the difference between the fore arc and MLA velocities at a depth of 10 km reaches 30% and 36% in P and S models, respectively. The value of the Vp/Vs ratio inside the MLA is more than 1.9. This shows a probable high content of fluids and partial melts within the crust. In the upper mantle we observe an inclined low-velocity anomaly which links the cluster of seismicity at 100 km depth with MLA. This anomaly might reflect ascending paths of fluids released from the slab. The reliability of all these patterns was tested thoroughly.
Chow, Benjamin J W; Freeman, Michael R; Bowen, James M; Levin, Leslie; Hopkins, Robert B; Provost, Yves; Tarride, Jean-Eric; Dennie, Carole; Cohen, Eric A; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan R; Paul, Narinder; Parker, John D; O'Reilly, Daria J; Xie, Feng; Goeree, Ron
2011-06-13
Computed tomographic coronary angiography (CTCA) has gained clinical acceptance for the detection of obstructive coronary artery disease. Although single-center studies have demonstrated excellent accuracy, multicenter studies have yielded variable results. The true diagnostic accuracy of CTCA in the "real world" remains uncertain. We conducted a field evaluation comparing multidetector CTCA with invasive CA (ICA) to understand CTCA's diagnostic accuracy in a real-world setting. A multicenter cohort study of patients awaiting ICA was conducted between September 2006 and June 2009. All patients had either a low or an intermediate pretest probability for coronary artery disease and underwent CTCA and ICA within 10 days. The results of CTCA and ICA were interpreted visually by local expert observers who were blinded to all clinical data and imaging results. Using a patient-based analysis (diameter stenosis ≥50%) of 169 patients, the sensitivity, specificity, positive predictive value, and negative predictive value were 81.3% (95% confidence interval [CI], 71.0%-89.1%), 93.3% (95% CI, 85.9%-97.5%), 91.6% (95% CI, 82.5%-96.8%), and 84.7% (95% CI, 76.0%-91.2%), respectively; the area under the receiver operating characteristic curve was 0.873. The diagnostic accuracy varied across centers (P < .001), with a sensitivity, specificity, positive predictive value, and negative predictive value ranging from 50.0% to 93.2%, 92.0% to 100%, 84.6% to 100%, and 42.9% to 94.7%, respectively. Compared with ICA, CTCA appears to have good accuracy; however, there was variability in diagnostic accuracy across centers. Factors affecting institutional variability need to be better understood before CTCA is universally adopted. Additional real-world evaluations are needed to fully understand the impact of CTCA on clinical care. clinicaltrials.gov Identifier: NCT00371891.
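To make the reported metrics concrete, here is a small sketch (my own illustration with a made-up 2x2 table, not the study data) of how sensitivity, specificity, predictive values and exact Clopper-Pearson 95% confidence intervals of the kind quoted above are computed.

```python
from scipy.stats import beta

def exact_ci(k, n, level=0.95):
    """Clopper-Pearson exact confidence interval for a binomial proportion k/n."""
    alpha = 1.0 - level
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Hypothetical patient-based 2x2 table (CTCA vs. invasive angiography):
tp, fn = 65, 15    # diseased patients: CTCA positive / CTCA negative
fp, tn = 6, 83     # non-diseased patients: CTCA positive / CTCA negative

metrics = {
    "sensitivity": (tp, tp + fn),
    "specificity": (tn, tn + fp),
    "PPV":         (tp, tp + fp),
    "NPV":         (tn, tn + fn),
}
for name, (k, n) in metrics.items():
    lo, hi = exact_ci(k, n)
    print(f"{name}: {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```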
KiDS-450: cosmological parameter constraints from tomographic weak gravitational lensing
NASA Astrophysics Data System (ADS)
Hildebrandt, H.; Viola, M.; Heymans, C.; Joudaki, S.; Kuijken, K.; Blake, C.; Erben, T.; Joachimi, B.; Klaes, D.; Miller, L.; Morrison, C. B.; Nakajima, R.; Verdoes Kleijn, G.; Amon, A.; Choi, A.; Covone, G.; de Jong, J. T. A.; Dvornik, A.; Fenech Conti, I.; Grado, A.; Harnois-Déraps, J.; Herbonnet, R.; Hoekstra, H.; Köhlinger, F.; McFarland, J.; Mead, A.; Merten, J.; Napolitano, N.; Peacock, J. A.; Radovich, M.; Schneider, P.; Simon, P.; Valentijn, E. A.; van den Busch, J. L.; van Uitert, E.; Van Waerbeke, L.
2017-02-01
We present cosmological parameter constraints from a tomographic weak gravitational lensing analysis of ˜450 deg2 of imaging data from the Kilo Degree Survey (KiDS). For a flat Λ cold dark matter (ΛCDM) cosmology with a prior on H0 that encompasses the most recent direct measurements, we find S_8≡ σ _8√{Ω _m/0.3}=0.745± 0.039. This result is in good agreement with other low-redshift probes of large-scale structure, including recent cosmic shear results, along with pre-Planck cosmic microwave background constraints. A 2.3σ tension in S8 and `substantial discordance' in the full parameter space is found with respect to the Planck 2015 results. We use shear measurements for nearly 15 million galaxies, determined with a new improved `self-calibrating' version of lensFIT validated using an extensive suite of image simulations. Four-band ugri photometric redshifts are calibrated directly with deep spectroscopic surveys. The redshift calibration is confirmed using two independent techniques based on angular cross-correlations and the properties of the photometric redshift probability distributions. Our covariance matrix is determined using an analytical approach, verified numerically with large mock galaxy catalogues. We account for uncertainties in the modelling of intrinsic galaxy alignments and the impact of baryon feedback on the shape of the non-linear matter power spectrum, in addition to the small residual uncertainties in the shear and redshift calibration. The cosmology analysis was performed blind. Our high-level data products, including shear correlation functions, covariance matrices, redshift distributions, and Monte Carlo Markov chains are available at http://kids.strw.leidenuniv.nl.
Synthesis of generalized surface plasmon beams
NASA Astrophysics Data System (ADS)
Martinez-Niconoff, G.; Munoz-Lopez, J.; Martinez-Vara, P.
2009-08-01
Surface plasmon modes can be considered as the analogue of plane waves for homogeneous media. The extension to partially coherent surface plasmon beams is obtained by means of the incoherent superposition of the interference between surface plasmon modes, whose profile is controlled by associating a probability density function with the structural parameters implicit in their representation. We show computational simulations for cosine, Bessel, Gaussian and dark hollow surface plasmon beams.
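A minimal numerical sketch of the construction described above (my own illustration with assumed parameters, not the authors' simulations): the intensity of a partially coherent "cosine" beam is built by averaging the interference pattern of two plasmon modes over a probability density assigned to their structural parameter, here the transverse spatial frequency.

```python
import numpy as np

# Transverse coordinate (arbitrary units) across the metal surface.
x = np.linspace(-20.0, 20.0, 1001)

rng = np.random.default_rng(0)
n_realizations = 5000

# Structural parameter: transverse spatial frequency of the interfering modes,
# drawn from a Gaussian probability density (assumed values).
k_mean, k_sigma = 1.0, 0.15
intensity = np.zeros_like(x)
for _ in range(n_realizations):
    k = rng.normal(k_mean, k_sigma)
    field = np.cos(k * x)            # coherent interference of two plane-like modes
    intensity += np.abs(field)**2    # incoherent superposition: add intensities

intensity /= n_realizations
# The averaged profile loses fringe visibility away from x = 0, i.e. the beam is
# only partially coherent; a delta-like density would recover the pure cosine mode.
print(intensity[::200])
```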
Classical-Quantum Correspondence by Means of Probability Densities
NASA Technical Reports Server (NTRS)
Vegas, Gabino Torres; Morales-Guzman, J. D.
1996-01-01
Within the frame of the recently introduced phase-space representation of non-relativistic quantum mechanics, we propose a Lagrangian from which the phase-space Schrödinger equation can be derived. From that Lagrangian, the associated conservation equations, according to Noether's theorem, are obtained. This shows that one can analyze quantum systems completely in phase space, as is done in coordinate space, without additional complications.
NASA Astrophysics Data System (ADS)
Yatsishina, E. B.; Kovalchuk, M. V.; Loshak, M. D.; Vasilyev, S. V.; Vasilieva, O. A.; Dyuzheva, O. P.; Pojidaev, V. M.; Ushakov, V. L.
2018-05-01
Nine ancient Egyptian mummies (dated preliminarily to the period from the 1st millennium BCE to the first centuries CE) from the collection of the State Pushkin Museum of Fine Arts have been studied at the National Research Centre "Kurchatov Institute" (NRC KI) on the basis of its complex of NBICS technologies. Tomographic scanning is performed using a magnetic resonance tomograph (3 T) and a hybrid positron emission tomography/computed tomography (PET-CT) scanner. Three-dimensional reconstructions of the mummies and their anthropological measurements are carried out. Some medical conclusions are drawn based on the tomographic data. In addition, the embalming composition and tissue of one of the mummies are preliminarily analyzed.
System for plotting subsoil structure and method therefor
NASA Technical Reports Server (NTRS)
Narasimhan, K. Y.; Nathan, R.; Parthasarathy, S. P. (Inventor)
1980-01-01
Data for use in producing a tomograph of subsoil structure between boreholes are derived by placing spaced geophones in one borehole, or on the Earth's surface if desired, and by producing a sequence of shots at spaced-apart locations in the other borehole. The signals detected by each of the geophones from the various shots are processed either on a time-of-arrival basis, or on the basis of signal amplitude, to provide information on the characteristics of a large number of incremental areas between the boreholes. Such information is usable to produce a tomograph of the subsoil structure between the boreholes. By processing signals of relatively high frequencies, e.g., up to 100 Hz, and by closely spacing the geophones, a high resolution tomograph can be produced.
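The kind of processing described in this abstract can be sketched as a straight-ray, travel-time crosshole tomography: each shot-geophone pair contributes a line integral of slowness over the grid of incremental areas, and the resulting linear system is solved for a slowness image. The grid size, ray geometry, anomaly and damping below are assumptions for illustration only, not parameters from the patent.

```python
import numpy as np

def ray_matrix(sources, receivers, nx, nz, extent, n_samp=400):
    """Approximate straight-ray path lengths through an nx-by-nz grid of cells.

    Each row holds, for one source-receiver pair, the length of that ray inside
    every cell (meters), so that G @ slowness = travel time.
    """
    x0, x1, z0, z1 = extent
    dx, dz = (x1 - x0) / nx, (z1 - z0) / nz
    G = np.zeros((len(sources), nx * nz))
    for i, (s, r) in enumerate(zip(sources, receivers)):
        pts = np.linspace(s, r, n_samp)                  # sample points along the ray
        seg = np.linalg.norm(np.asarray(r) - np.asarray(s)) / n_samp
        ix = np.clip(((pts[:, 0] - x0) / dx).astype(int), 0, nx - 1)
        iz = np.clip(((pts[:, 1] - z0) / dz).astype(int), 0, nz - 1)
        np.add.at(G[i], iz * nx + ix, seg)               # accumulate length per cell
    return G

# Hypothetical crosshole geometry: shots in one borehole, geophones in the other.
depths = np.linspace(5.0, 95.0, 10)
sources   = [(0.0, z) for z in depths]                   # borehole at x = 0 m
receivers = [(50.0, z) for z in depths]                  # borehole at x = 50 m
pairs_s, pairs_r = zip(*[(s, r) for s in sources for r in receivers])

G = ray_matrix(pairs_s, pairs_r, nx=10, nz=10, extent=(0, 50, 0, 100))
true_slowness = np.full(100, 1 / 2000.0)                 # 2000 m/s background
true_slowness[44:47] = 1 / 1200.0                        # slow anomaly between the holes
t_obs = G @ true_slowness                                # synthetic first-arrival times

# Damped least-squares inversion for the slowness of every incremental area.
damping = 1e-3
A = np.vstack([G, damping * np.eye(G.shape[1])])
b = np.concatenate([t_obs, damping * np.full(100, 1 / 2000.0)])
slowness_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated velocity range (m/s):", 1 / slowness_est.max(), "-", 1 / slowness_est.min())
```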
Data-processing strategies for nano-tomography with elemental specification
NASA Astrophysics Data System (ADS)
Liu, Yijin; Cats, Korneel H.; Nelson Weker, Johanna; Andrews, Joy C.; Weckhuysen, Bert M.; Pianetta, Piero
2013-10-01
Combining the energy tunability provided by synchrotron X-ray sources with transmission X-ray microscopy, the morphology of materials can be resolved in 3D at spatial resolution down to 30 nm with elemental/chemical specification. In order to study the energy dependence of the absorption coefficient over the investigated volume, the tomographic reconstruction and image registration (before and/or after the tomographic reconstruction) are critical. We show in this paper the comparison of two different data processing strategies and conclude that the signal to noise ratio (S/N) in the final result can be improved via performing tomographic reconstruction prior to the evaluation of energy dependence. Our result echoes the dose fractionation theorem, and is particularly helpful when the element of interest has low concentration.
Couple Graph Based Label Propagation Method for Hyperspectral Remote Sensing Data Classification
NASA Astrophysics Data System (ADS)
Wang, X. P.; Hu, Y.; Chen, J.
2018-04-01
Graph-based semi-supervised classification methods are widely used for hyperspectral image classification. We present a couple-graph-based label propagation method, which contains both an adjacency graph and a similarity graph. We propose to construct the similarity graph by using the similarity probability, which exploits the label similarity among examples. The adjacency graph is built with a common manifold learning method, which has effectively improved the classification accuracy of hyperspectral data. The experiments indicate that the couple graph Laplacian, which unites both the adjacency graph and the similarity graph, produces better classification results than other manifold-learning-based and sparse-representation-based graph Laplacians in the label propagation framework.
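A minimal sketch of label propagation on a coupled graph (my own simplified reading of the idea, with assumed weights and parameters, not the authors' implementation): two affinity matrices are blended into one graph, and labels are then propagated with the standard normalized-Laplacian iteration of Zhou et al.

```python
import numpy as np

def rbf_affinity(X, sigma):
    """Gaussian (RBF) affinity matrix with zeroed diagonal."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def propagate_labels(W, y, alpha=0.9, n_iter=200):
    """Standard label propagation: F <- alpha * S F + (1 - alpha) * Y."""
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d))                 # D^-1/2 W D^-1/2
    n_classes = y.max() + 1
    Y = np.zeros((len(y), n_classes))
    Y[np.arange(len(y))[y >= 0], y[y >= 0]] = 1.0   # -1 marks unlabeled samples
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(1)

# Toy "spectral" data: two classes with only one labeled sample each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 5)), rng.normal(1, 0.3, (30, 5))])
y = np.full(60, -1); y[0], y[30] = 0, 1

# Coupled graph: blend an adjacency-style affinity (wide kernel) with a
# "similarity" affinity (narrow kernel standing in for the similarity-probability graph).
W_adj, W_sim = rbf_affinity(X, sigma=1.0), rbf_affinity(X, sigma=0.3)
beta = 0.5                                          # assumed coupling weight
W = beta * W_adj + (1.0 - beta) * W_sim
print("predicted labels:", propagate_labels(W, y))
```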
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
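For orientation, the classical special case that the time-changed model generalizes has a closed form: for an arithmetic Brownian motion with drift μ and volatility σ started a distance a > 0 below the barrier, the first-passage time density is the inverse Gaussian density (a standard textbook result; the notation is mine, not the paper's).

```latex
% First-passage density of X_t = \mu t + \sigma B_t to the level a > 0.
% The time-changed model replaces calendar time t by a stochastic clock,
% which is why a numerical Volterra-equation approach is needed there.
f_{\tau_a}(t) \;=\; \frac{a}{\sigma\sqrt{2\pi t^{3}}}\,
  \exp\!\left(-\,\frac{(a-\mu t)^{2}}{2\sigma^{2} t}\right),
\qquad t>0 .
```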
DOE Office of Scientific and Technical Information (OSTI.GOV)
Youngren, M.A.
1989-11-01
An analytic probability model of tactical nuclear warfare in the theater is presented in this paper. The model addresses major problems associated with representing nuclear warfare in the theater. Current theater representations of a potential nuclear battlefield are developed in the context of low-resolution, theater-level models or scenarios. These models or scenarios provide insufficient resolution in time and space for modeling a nuclear exchange. The model presented in this paper handles the spatial uncertainty in potentially targeted unit locations by proposing two-dimensional multivariate probability models for the actual and perceived locations of units subordinate to the major (division-level) units represented in theater scenarios. The temporal uncertainty in the activities of interest represented in our theater-level Force Evaluation Model (FORCEM) is handled through probability models of the acquisition and movement of potential nuclear target units.
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, the benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
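A compact sketch of the fitting-and-selection workflow described above (the paper provides R code; this Python analogue with simulated depth data is only illustrative): fit several candidate probability density functions by maximum likelihood and rank them with AIC.

```python
import numpy as np
from scipy import stats

# Simulated "depth used" observations (meters); a stand-in for field data.
rng = np.random.default_rng(42)
depths = rng.gamma(shape=3.0, scale=0.25, size=300)

candidates = {
    "gamma":     stats.gamma,
    "lognormal": stats.lognorm,
    "normal":    stats.norm,
    "weibull":   stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(depths)                       # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(depths, *params))
    aic = 2 * len(params) - 2 * loglik              # Akaike information criterion
    results[name] = (aic, params)

for name, (aic, _) in sorted(results.items(), key=lambda kv: kv[1][0]):
    print(f"{name:10s} AIC = {aic:8.1f}")
best = min(results, key=lambda k: results[k][0])
print("selected HSC curve family:", best)
# The fitted PDF, rescaled to a 0-1 suitability index, then serves as the HSC curve.
```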
Representing and computing regular languages on massively parallel networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M.I.; O'Sullivan, J.A.; Boysam, B.
1991-01-01
This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.
Beck, Valerie M; Hollingworth, Andrew
2017-02-01
The content of visual working memory (VWM) guides attention, but whether this interaction is limited to a single VWM representation or functional for multiple VWM representations is under debate. To test this issue, we developed a gaze-contingent search paradigm to directly manipulate selection history and examine the competition between multiple cue-matching saccade target objects. Participants first saw a dual-color cue followed by two pairs of colored objects presented sequentially. For each pair, participants selectively fixated an object that matched one of the cued colors. Critically, for the second pair, the cued color from the first pair was presented either with a new distractor color or with the second cued color. In the latter case, if two cued colors in VWM interact with selection simultaneously, we expected the second cued color object to generate substantial competition for selection, even though the first cued color was used to guide attention in the immediately previous pair. Indeed, in the second pair, selection probability of the first cued color was substantially reduced in the presence of the second cued color. This competition between cue-matching objects provides strong evidence that both VWM representations interacted simultaneously with selection. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Multisensory decisions provide support for probabilistic number representations.
Kanitscheider, Ingmar; Brown, Amanda; Pouget, Alexandre; Churchland, Anne K
2015-06-01
A large body of evidence suggests that an approximate number sense allows humans to estimate numerosity in sensory scenes. This ability is widely observed in humans, including those without formal mathematical training. Despite this, many outstanding questions remain about the nature of the numerosity representation in the brain. Specifically, it is not known whether approximate numbers are represented as scalar estimates of numerosity or, alternatively, as probability distributions over numerosity. In the present study, we used a multisensory decision task to distinguish these possibilities. We trained human subjects to decide whether a test stimulus had a larger or smaller numerosity compared with a fixed reference. Depending on the trial, the numerosity was presented as either a sequence of visual flashes or a sequence of auditory tones, or both. To test for a probabilistic representation, we varied the reliability of the stimulus by adding noise to the visual stimuli. In accordance with a probabilistic representation, we observed a significant improvement in multisensory compared with unisensory trials. Furthermore, a trial-by-trial analysis revealed that although individual subjects showed strategic differences in how they leveraged auditory and visual information, all subjects exploited the reliability of unisensory cues. An alternative, nonprobabilistic model, in which subjects combined cues without regard for reliability, was not able to account for these trial-by-trial choices. These findings provide evidence that the brain relies on a probabilistic representation for numerosity decisions. Copyright © 2015 the American Physiological Society.
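The probabilistic (reliability-weighted) cue-combination benchmark that such multisensory experiments are usually compared against can be written compactly; this is the standard ideal-observer expression, added here for context rather than taken from the paper.

```latex
% Reliability-weighted fusion of an auditory and a visual numerosity estimate.
% Each cue is weighted by its inverse variance (its reliability), and the
% combined estimate is never less reliable than the better single cue.
\hat{n}_{\mathrm{av}} = w_a \hat{n}_a + w_v \hat{n}_v,
\qquad
w_a = \frac{1/\sigma_a^{2}}{1/\sigma_a^{2} + 1/\sigma_v^{2}},\quad
w_v = 1 - w_a,
\qquad
\sigma_{\mathrm{av}}^{2} = \frac{\sigma_a^{2}\,\sigma_v^{2}}{\sigma_a^{2} + \sigma_v^{2}} .
```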
A multiresolution inversion for imaging the ionosphere
NASA Astrophysics Data System (ADS)
Yin, Ping; Zheng, Ya-Nan; Mitchell, Cathryn N.; Li, Bo
2017-06-01
Ionospheric tomography has been widely employed in imaging large-scale ionospheric structures at both quiet and storm times. However, the tomographic algorithms to date have not been very effective in imaging medium- and small-scale ionospheric structures due to limitations of uneven ground-based data distributions and of the algorithms themselves. Further, the effect of the density and quantity of Global Navigation Satellite Systems data that could help improve the tomographic results for a given algorithm remains unclear in much of the literature. In this paper, a new multipass tomographic algorithm is proposed to conduct the inversion using intensive ground GPS observation data and is demonstrated over the U.S. West Coast during the period of 16-18 March 2015, which includes an ionospheric storm period. The characteristics of the multipass inversion algorithm are analyzed by comparing tomographic results with independent ionosonde data and Center for Orbit Determination in Europe total electron content estimates. Then, several ground data sets with different data distributions are grouped from the same data source in order to investigate the impact of the density of ground stations on ionospheric tomography results. Finally, it is concluded that the multipass inversion approach offers an improvement. The ground data density can affect tomographic results but only offers improvements up to a density of around one receiver every 150 to 200 km. When only GPS satellites are tracked there is no clear advantage in increasing the density of receivers beyond this level, although this may change if multiple constellations are monitored from each receiving station in the future.
Computed tomographic contrast tenography of the digital flexor tendon sheath of the equine hindlimb.
Agass, Rachel; Dixon, Jonathon; Fraser, Barny
2018-05-01
Pre-surgical investigation of digital flexor tendon sheath pathology remains challenging with current standard imaging techniques. The aim of this prospective, anatomical, pilot study was to describe the anatomy of the equine hind limb digital flexor tendon sheath using a combination of computed tomography (CT) and computed tomographic contrast tenography in clinically normal cadaver limbs. Ten pairs of hind limbs with no external abnormalities were examined from the level of the tarsometatarsal joint distally. Limbs initially underwent non-contrast CT examination using 120 kVp, 300 mAs, and 1.5 mm slice thickness. Sixty millilitres of ioversol iodinated contrast media and saline (final concentration 100 mg/ml) were injected using a basilar sesamoidean approach. The computed tomographic contrast tenography examination was then repeated, before dissection of the specimens to compare gross and imaging findings. The combined CT and computed tomographic contrast tenography examinations provided excellent anatomical detail of intra-thecal structures. The borders of the superficial and deep digital flexor tendons, and the manica flexoria were consistently identifiable in all limbs. Detailed anatomy including that of the mesotenons, two of which are previously undescribed, and the plantar annular ligament were also consistently identifiable. Dissection of all 10 pairs of limbs revealed there to be no pathology, in accordance with the imaging findings. In conclusion, the combination of CT and computed tomographic contrast tenography may be useful adjunctive diagnostic techniques to define digital flexor tendon sheath pathology prior to surgical exploration in horses. © 2017 American College of Veterinary Radiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakajima, K.; Bunko, H.; Tada, A.
1984-01-01
Phase analysis has been applied to Wolff-Parkinson-White syndrome (WPW) to detect the site of the accessory conduction pathway (ACP); however, planar phase analysis has limitations in estimating the precise location of the ACP. In this study, the authors applied phase analysis to gated blood pool tomography. Twelve patients with WPW who underwent epicardial mapping and surgical division of the ACP were studied by both gated emission computed tomography (GECT) and a routine gated blood pool study (GBPS). The GBPS was performed with Tc-99m red blood cells in multiple projections: modified left anterior oblique, right anterior oblique and/or left lateral views. In GECT, short-axis and horizontal and vertical long-axis blood pool images were reconstructed. Phase analysis was performed using the fundamental frequency of the Fourier transform in both GECT and GBPS images, and abnormal initial contractions on both the planar and tomographic phase analysis were compared with the location of the surgically confirmed ACPs. In planar phase analysis, an abnormal initial phase was identified in 7 out of 12 (58%) patients, while in tomographic phase analysis, the localization of the ACP was predicted in 11 out of 12 (92%) patients. Tomographic phase analysis was superior to planar phase imaging in 8 out of 12 patients for estimating the location of the ACP. Phase analysis by GECT avoids the overlap of blood pools in the cardiac chambers and has the advantage of identifying the propagation of phase three-dimensionally. Tomographic phase analysis is a good adjunctive method for estimating the site of the ACP in patients with WPW.
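First-harmonic phase analysis of the kind used above can be sketched in a few lines (an illustrative toy, not the clinical software): for every voxel's time-activity curve over the gated cardiac cycle, the phase of the fundamental Fourier component gives the timing of contraction, so an abnormally early region flags the pre-excitation site.

```python
import numpy as np

def first_harmonic_phase(counts):
    """Phase (degrees) of the fundamental Fourier component of a gated
    time-activity curve; axis 0 is the gated frame index."""
    spectrum = np.fft.fft(counts, axis=0)
    return np.degrees(np.angle(spectrum[1]))   # first harmonic = fundamental frequency

# Toy gated blood-pool data: 16 frames of a 4x4 voxel slice whose counts vary
# sinusoidally over the cardiac cycle, with one voxel contracting early.
n_frames = 16
t = np.arange(n_frames) / n_frames
phase_map_true = np.zeros((4, 4))
phase_map_true[1, 2] = -60.0                   # hypothetical early (pre-excited) region
counts = 100 + 20 * np.cos(2 * np.pi * t[:, None, None]
                           + np.radians(phase_map_true)[None, :, :])

phase_map = first_harmonic_phase(counts)
print(np.round(phase_map - phase_map.mean(), 1))   # the early voxel stands out
```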
NASA Astrophysics Data System (ADS)
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
The aim is to provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis, such as the standard deviation and the bias, are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte Carlo techniques, the uncertainties of the dose estimates for single-channel and multichannel algorithms are estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
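The Monte Carlo idea referenced above can be sketched generically (my own toy single-channel example with an assumed calibration curve and made-up parameter values, not the paper's multi-stage model): pixel readings and calibration parameters are repeatedly perturbed according to their assumed distributions, and the spread of the resulting dose values approximates the probability density of the dose estimate.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples = 50_000

# Assumed single-channel calibration: dose = a * netOD + b * netOD**c,
# with hypothetical parameter values and uncertainties.
a_mc = rng.normal(10.0, 0.3, n_samples)     # Gy per unit net optical density
b_mc = rng.normal(35.0, 1.5, n_samples)
c_mc = rng.normal(2.5, 0.05, n_samples)

# Assumed scanner reading: a net optical density with measurement noise.
net_od_mc = rng.normal(0.28, 0.006, n_samples)

dose_mc = a_mc * net_od_mc + b_mc * net_od_mc ** c_mc

# Numerical representation of the dose probability density and its summaries.
mean_dose = dose_mc.mean()
std_dose = dose_mc.std(ddof=1)
p2_5, p97_5 = np.percentile(dose_mc, [2.5, 97.5])
print(f"dose = {mean_dose:.2f} Gy, std = {std_dose:.2f} Gy, "
      f"95% interval = [{p2_5:.2f}, {p97_5:.2f}] Gy")
```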
A representation of an NTCP function for local complication mechanisms
NASA Astrophysics Data System (ADS)
Alber, M.; Nüsslin, F.
2001-02-01
A mathematical formalism was tailored for the description of mechanisms complicating radiation therapy with a predominantly local component. The functional representation of an NTCP function was developed based on the notion that it has to be robust against population averages in order to be applicable to experimental data. The model was required to be invariant under scaling operations of the dose and the irradiated volume. The NTCP function was derived from the model assumptions that the complication is a consequence of local tissue damage and that the probability of local damage in a small reference volume is independent of the neighbouring volumes. The performance of the model was demonstrated with an animal model which has been published previously (Powers et al 1998 Radiother. Oncol. 46 297-306).
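A generic member of the family of NTCP functions built from independent local damage, consistent with the assumptions listed above, can be written as follows (this is the standard local-damage construction, not necessarily the exact functional form derived in the paper): with p(D_i) the damage probability of subvolume i receiving dose D_i and v_i its relative volume,

```latex
% NTCP from independent local damage in small reference volumes:
% the complication occurs unless every subvolume escapes damage.
\mathrm{NTCP}(\{D_i\}) \;=\; 1 - \prod_{i} \bigl[1 - p(D_i)\bigr]^{v_i}
\;=\; 1 - \exp\!\Bigl(\sum_{i} v_i \,\ln\!\bigl[1 - p(D_i)\bigr]\Bigr).
```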
Quantum-Like Representation of Non-Bayesian Inference
NASA Astrophysics Data System (ADS)
Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.
2013-01-01
This research is related to the problem of "irrational decision making or inference" that has been discussed in cognitive psychology. There are experimental studies whose statistical data cannot be described by classical probability theory, and the process of decision making generating these data cannot be reduced to classical Bayesian inference. For this problem, a number of quantum-like cognitive models of decision making have been proposed. Our previous work represented the classical Bayesian inference in a natural way within the framework of quantum mechanics. By using this representation, in this paper, we try to discuss the non-Bayesian (irrational) inference that is biased by effects like quantum interference. Further, we describe the "psychological factor" disturbing "rationality" as an "environment" correlating with the "main system" of usual Bayesian inference.
Shape-driven 3D segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2006-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.
Sparsity Aware Adaptive Radar Sensor Imaging in Complex Scattering Environments
2015-06-15
[Report fragment] ... while meeting the requirement on the peak-to-average power ratio. Third, we study the impact of waveform encoding on nonlinear electromagnetic tomographic imaging. Associated publications include: "Time Domain Electromagnetic Tomography Using Propagation and Backpropagation Method", IEEE International Conference on Image Processing; and Yuanwei Jin, Chengdon Dong, Enyue Lu, "Waveform Encoding for Nonlinear Electromagnetic Tomographic Imaging", IEEE Global [conference name truncated in source].
An Analysis for Capital Expenditure Decisions at a Naval Regional Medical Center.
1981-12-01
Rankings of proposed capital equipment purchases, by Service and by the Equipment Review Committee:
Service: 1. Portable defibrillator and cardioscope; 2. ECG cart; 3. Gas system sterilizer; 4. Automated blood cell counter; 5. Computed tomographic scanner.
Equipment Review Committee: 1. Computed tomographic scanner; 2. Automated blood cell counter; 3. Gas system sterilizer; 4. Portable defibrillator and cardioscope; 5. ECG cart.
... (dictating and automated typing) systems. e. Filing equipment. f. Automatic data processing equipment, including data communications equipment. g
Analysis of 21-cm tomographic data
NASA Astrophysics Data System (ADS)
Mellema, Garrelt; Giri, Sambit; Ghara, Raghuna
2018-05-01
The future SKA1-Low radio telescope will be powerful enough to produce tomographic images of the 21-cm signal from the Epoch of Reionization. Here we address how to identify ionized regions in such data sets, taking into account the resolution and noise levels associated with SKA1-Low. We describe three methods of which one, superpixel oversegmentation, consistently performs best.
1986-03-10
[Report fragment] ... and P. Frangos, "Inverse Scattering for Dielectric Media", Annual OSA Meeting, Washington, D.C., Oct. 1985. Invited Presentations: 1. N. Farhat, "Tomographic ... Optical Computing", DARPA Briefing, April 1985. Theses: P. V. Frangos, "The Electromagnetic ..." [remainder truncated in source].
Volume Segmentation and Ghost Particles
NASA Astrophysics Data System (ADS)
Ziskin, Isaac; Adrian, Ronald
2011-11-01
Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.
Creating three-dimensional tooth models from tomographic images.
Lima da Silva, Isaac Newton; Barbosa, Gustavo Frainer; Soares, Rodrigo Borowski Grecco; Beltrao, Maria Cecilia Gomes; Spohr, Ana Maria; Mota, Eduardo Golcalves; Oshima, Hugo Mitsuo Silva; Burnett, Luiz Henrique
2008-01-01
The use of Finite Element Analysis (FEA) is becoming very frequent in dentistry. However, most of the three-dimensional tooth models presented in the literature are limited in terms of geometry. Discrepancies in shape and dimensions can lead to wrong results. Sharp cusps and faceted contours can produce stress concentrations that are inconsistent with reality. The aim of this study was to process tomographic images in order to develop an advanced three-dimensional reconstruction of the anatomy of a molar tooth and to integrate the resulting solid with commercially available CAD/CAE software. Computed tomographic images were obtained from 0.5 mm thick slices of a mandibular molar and transferred to commercial CAD software. Once the point cloud data had been generated, these points were processed to obtain the solid model of the tooth with Pro/Engineer software. The obtained tooth model showed very accurate shape and dimensions, as it was obtained from real tooth data with an error of 0.0 to -0.8 mm. The methodology presented was efficient for creating a biomodel of a tooth from tomographic images that realistically represented its anatomy.
Experimental demonstration of laser tomographic adaptive optics on a 30-meter telescope at 800 nm
NASA Astrophysics Data System (ADS)
Ammons, S., Mark; Johnson, Luke; Kupke, Renate; Gavel, Donald T.; Max, Claire E.
2010-07-01
A critical goal in the next decade is to develop techniques that will extend Adaptive Optics correction to visible wavelengths on Extremely Large Telescopes (ELTs). We demonstrate in the laboratory the highly accurate atmospheric tomography necessary to defeat the cone effect on ELTs, an essential milestone on the path to this capability. We simulate a high-order Laser Tomographic AO System for a 30-meter telescope with the LTAO/MOAO testbed at UCSC. Eight Sodium Laser Guide Stars (LGSs) are sensed by 99x99 Shack-Hartmann wavefront sensors over 75". The AO system is diffraction-limited at a science wavelength of 800 nm (S ~ 6-9%) over a field of regard of 20" diameter. Open-loop WFS systematic error is observed to be proportional to the total input atmospheric disturbance and is nearly the dominant error budget term (81 nm RMS), exceeded only by tomographic wavefront estimation error (92 nm RMS). The total residual wavefront error for this experiment is comparable to that expected for wide-field tomographic adaptive optics systems of similar wavefront sensor order and LGS constellation geometry planned for Extremely Large Telescopes.
Kubo, S; Nakata, H; Sugauchi, Y; Yokota, N; Yoshimine, T
2000-05-01
The preoperative localization of superficial intracranial lesions is often necessary for accurate burr hole placement or craniotomy siting. It is not always easy, however, to localize the lesions over the scalp working only from computed tomographic images. We developed a simple method for such localization using a laser pointer during the preoperative computed tomographic examination. The angle of incidence, extending from a point on the scalp to the center of the computed tomographic image, is measured by the software included with the scanner. In the gantry, at the same angle as on the image, a laser is beamed from a handmade projector onto the patient's scalp toward the center of the gantry. The point illuminated on the patient's head corresponds to that on the image. The device and the method are described in detail herein. We applied this technique to mark the area for the craniotomy before surgery in five patients with superficial brain tumors. At the time of surgery, it was confirmed that the tumors were circumscribed precisely. The technique is easy to perform and useful in the preoperative planning for a craniotomy. In addition, the device is easily constructed and inexpensive.
Optical coherence tomography using images of hair structure and dyes penetrating into the hair.
Tsugita, Tetsuya; Iwai, Toshiaki
2014-11-01
Hair dyes are commonly evaluated by the appearance of the hair after dyeing. However, this approach cannot simultaneously assess how deep the dye has penetrated into hair. For simultaneous assessment of the appearance and the interior of hair, we developed a visible-range red, green, and blue (RGB) (three primary colors)-optical coherence tomography (OCT) using an RGB LED light source. We then evaluated a phantom model based on the assumption that the sample's absorbability in the vertical direction affects the tomographic imaging. Consistent with theory, our device showed higher resolution than conventional OCT with far-red light. In the experiment on the phantom model, we confirmed that the tomographic imaging is affected by absorbability unique to the sample. Furthermore, we verified that permeability can be estimated from this tomographic image. We also identified for the first time the relationship between penetration of the dye into hair and characteristics of wavelength by tomographic imaging of dyed hair. We successfully simultaneously assessed the appearance of dyed hair and inward penetration of the dye without preparing hair sections. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
TomoBank: a tomographic data repository for computational x-ray science
NASA Astrophysics Data System (ADS)
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.; Joost Batenburg, K.; Ludwig, Wolfgang; Mancini, Lucia; Marone, Federica; Mokso, Rajmund; Pelt, Daniël M.; Sijbers, Jan; Rivers, Mark
2018-03-01
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible (Gibbs et al 2015 Sci. Rep. 5 11824), but have also increased the demand to develop new reconstruction methods able to handle in situ (Pelt and Batenburg 2013 IEEE Trans. Image Process. 22 5238-51) and dynamic systems (Mohan et al 2015 IEEE Trans. Comput. Imaging 1 96-111) that can be quickly incorporated in beamline production software (Gürsoy et al 2014 J. Synchrotron Radiat. 21 1188-93). The x-ray tomography data bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
Using artificial neural networks (ANN) for open-loop tomography
NASA Astrophysics Data System (ADS)
Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus
2011-09-01
The next generation of adaptive optics (AO) systems requires tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis reference sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least-squares-type matrix multiplication method (MVM) in simulation and find that the tomographic error is similar to that of the MVM method. In changing conditions the tomographic error increases for MVM but remains constant with the ANN model, and no large matrix inversions are required.
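As a rough illustration of the idea (not the authors' network architecture or training data), the sketch below trains a small multilayer perceptron to map off-axis wavefront-sensor slope vectors to on-axis slopes and compares it with a least-squares MVM reconstructor; all dimensions and data are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training set: off-axis WFS slopes (inputs) -> on-axis slopes (targets).
rng = np.random.default_rng(1)
n_train, n_in, n_out = 5000, 288, 72                 # illustrative dimensions only
X = rng.normal(size=(n_train, n_in))
M_true = rng.normal(size=(n_in, n_out)) / np.sqrt(n_in)
Y = X @ M_true + 0.05 * rng.normal(size=(n_train, n_out))

# Baseline: least-squares matrix-vector-multiply (MVM) reconstructor.
M_mvm, *_ = np.linalg.lstsq(X, Y, rcond=None)

# ANN reconstructor: a small multilayer perceptron trained on the same data.
ann = MLPRegressor(hidden_layer_sizes=(150,), max_iter=300, random_state=0)
ann.fit(X, Y)

# Compare mean squared reconstruction error on held-out inputs.
X_test = rng.normal(size=(100, n_in))
Y_test = X_test @ M_true
err_mvm = np.mean((X_test @ M_mvm - Y_test) ** 2)
err_ann = np.mean((ann.predict(X_test) - Y_test) ** 2)
```

In a real MOAO setting the training set would be built from measured or simulated turbulence profiles, which is what allows the network to track changing conditions without explicit profile input.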
NASA Astrophysics Data System (ADS)
Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo
2016-11-01
We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
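A minimal sketch of the string-averaging incremental subgradient idea, assuming a simple non-smooth objective f(x) = Σ_i |a_i·x − b_i| and a constant step size (the paper's actual objective, step-size rules, and convergence safeguards are not reproduced here):

```python
import numpy as np

def string_averaging_ism(A, b, strings, x0, step=1e-3, n_iter=100):
    """String-averaging incremental subgradient method for f(x) = sum_i |a_i . x - b_i|.
    `strings` is a list of index sequences partitioning the rows of A."""
    x = x0.copy()
    for _ in range(n_iter):
        endpoints = []
        for s in strings:                 # each string is processed independently
            y = x.copy()
            for i in s:                   # incremental subgradient steps along the string
                r = A[i] @ y - b[i]
                y -= step * np.sign(r) * A[i]
            endpoints.append(y)
        x = np.mean(endpoints, axis=0)    # average the string end-points
    return x

# Toy usage with a consistent linear system.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
x_true = rng.normal(size=50)
b = A @ x_true
strings = np.array_split(np.arange(200), 4)   # four strings, could be run in parallel
x_hat = string_averaging_ism(A, b, strings, np.zeros(50))
```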
Development and Plasticity of Cortical Processing Architectures
NASA Astrophysics Data System (ADS)
Singer, Wolf
1995-11-01
One of the basic functions of the cerebral cortex is the analysis and representation of relations among the components of sensory and motor patterns. It is proposed that the cortex applies two complementary strategies to cope with the combinatorial problem posed by the astronomical number of possible relations: (i) the analysis and representation of frequently occurring, behaviorally relevant relations by groups of cells with fixed but broadly tuned response properties; and (ii) the dynamic association of these cells into functionally coherent assemblies. Feedforward connections and reciprocal associative connections, respectively, are thought to underlie these two operations. The architectures of both types of connections are susceptible to experience-dependent modifications during development, but they become fixed in the adult. As development proceeds, feedforward connections also appear to lose much of their functional plasticity, whereas the synapses of the associative connections retain a high susceptibility to use-dependent modifications. The reduced plasticity of feedforward connections is probably responsible for the invariance of cognitive categories acquired early in development. The persistent adaptivity of reciprocal connections is a likely substrate for the ability to generate representations for new perceptual objects and motor patterns throughout life.
[An image of Saint Ottilia with reading stones].
Daxecker, F; Broucek, A
1995-01-01
Reading stones to facilitate reading in cases of presbyopia are mentioned in the literature, for example in the works of the Middle High German poet Albrecht and of Konrad of Würzburg. Most representations of the abbess, Saint Ottilia, show her holding a book with a pair of eyes in her hands. A gothic altarpiece (1485-1490), kept in the museum of the Premonstratensian Canons of Wilten in Innsbruck, Tyrol, shows a triune representation of St. Anne, the mother of the Virgin, with Mary and Jesus and St. Ursula with her companions. St. Ottilia is depicted on the edge of the painting. Two lenses, one on either side of the open book in her hand, magnify the letters underneath. As the two lenses are not held together by bows or similar devices, they are probably a rare representation of reading stones. The altar showing scenes of the life of St. Mary and St. Ursula was painted by Ludwig Konraiter. A panel on the same altar, depicting the death of the Virgin, shows an apostle with rivet spectacles.
Universal Racah matrices and adjoint knot polynomials: Arborescent knots
NASA Astrophysics Data System (ADS)
Mironov, A.; Morozov, A.
2016-04-01
By now it is well established that the quantum dimensions of descendants of the adjoint representation can be described in a universal form, independent of a particular family of simple Lie algebras. The Rosso-Jones formula then implies a universal description of the adjoint knot polynomials for torus knots, which in particular unifies the HOMFLY (SU(N)) and Kauffman (SO(N)) polynomials. For E8 the adjoint representation is also fundamental. We suggest to extend the universality from the dimensions to the Racah matrices and this immediately produces a unified description of the adjoint knot polynomials for all arborescent (double-fat) knots, including twist, 2-bridge and pretzel. Technically we develop together the universality and the "eigenvalue conjecture", which expresses the Racah and mixing matrices through the eigenvalues of the quantum R-matrix, and for dealing with the adjoint polynomials one has to extend it to the previously unknown 6 × 6 case. The adjoint polynomials do not distinguish between mutants and therefore are not very efficient in knot theory, however, universal polynomials in higher representations can probably be better in this respect.
A Representation for Gaining Insight into Clinical Decision Models
Jimison, Holly B.
1988-01-01
For many medical domains uncertainty and patient preferences are important components of decision making. Decision theory is useful as a representation for such medical models in computer decision aids, but the methodology has typically had poor performance in the areas of explanation and user interface. The additional representation of probabilities and utilities as random variables serves to provide a framework for graphical and text insight into complicated decision models. The approach allows for efficient customization of a generic model that describes the general patient population of interest to a patient-specific model. Monte Carlo simulation is used to calculate the expected value of information and sensitivity for each model variable, thus providing a metric for deciding what to emphasize in the graphics and text summary. The computer-generated explanation includes variables that are sensitive with respect to the decision or that deviate significantly from what is typically observed. These techniques serve to keep the assessment and explanation of the patient's decision model concise, allowing the user to focus on the most important aspects for that patient.
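As an illustration of how Monte Carlo simulation can score a model variable's importance, the sketch below computes an expected value of perfect information (EVPI) for one uncertain probability in a made-up two-option decision; the variables, utilities, and distributions are hypothetical and not taken from the paper's models.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical decision: "treat" vs. "wait", with an uncertain probability of
# complication and fixed, illustrative utilities.
p_complication = rng.beta(4, 16, size=n)        # Monte Carlo draws for one model variable
u_treat = 0.85 * np.ones(n)                     # utility of treating (assumed constant)
u_wait = 1.0 - 0.6 * p_complication             # utility of waiting depends on the draw

# EVPI for this variable: value of deciding after the uncertainty resolves
# minus value of deciding now with only the expectation in hand.
ev_now = max(u_treat.mean(), u_wait.mean())
ev_perfect = np.maximum(u_treat, u_wait).mean()
evpi = ev_perfect - ev_now                      # larger EVPI -> emphasize this variable
```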
Crescentini, Cristiano; Aglioti, Salvatore M; Fabbro, Franco; Urgesi, Cosimo
2014-05-01
Religiousness and spirituality (RS) are two ubiquitous aspects of human experience typically considered impervious to scientific investigation. Nevertheless, associations between RS and frontoparietal neural activity have been recently reported. However, much less is known about whether such activity is causally involved in modulating RS or just epiphenomenal to them. Here we combined two-pulse (10 Hz) Transcranial Magnetic Stimulation (TMS) with a novel, ad-hoc developed RS-related, Implicit Association Test (IAT) to investigate whether implicit RS representations, although supposedly rather stable, can be rapidly modified by a virtual lesion of inferior parietal lobe (IPL) and dorsolateral prefrontal cortex (DLPFC). A self-esteem (SE) IAT, focused on self-concepts nonrelated to RS representations, was developed as control. A specific increase of RS followed inhibition of IPL demonstrating its causative role in inducing fast plastic changes of religiousness/spirituality. In contrast, DLPFC inhibition had more widespread effects probably reflecting a general role in the acquisition or maintenance of task-rules or in controlling the expression of self-related representations not specific to RS. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lanusse, F.; Rassat, A.; Starck, J.-L.
2015-06-01
Context. Upcoming spectroscopic galaxy surveys are extremely promising to help in addressing the major challenges of cosmology, in particular in understanding the nature of the dark universe. The strength of these surveys, naturally described in spherical geometry, comes from their unprecedented depth and width, but an optimal extraction of their three-dimensional information is of utmost importance to best constrain the properties of the dark universe. Aims: Although there is theoretical motivation and novel tools to explore these surveys using the 3D spherical Fourier-Bessel (SFB) power spectrum of galaxy number counts Cℓ(k,k'), most survey optimisations and forecasts are based on the tomographic spherical harmonics power spectrum C(ij)_ℓ. The goal of this paper is to perform a new investigation of the information that can be extracted from these two analyses in the context of planned stage IV wide-field galaxy surveys. Methods: We compared tomographic and 3D SFB techniques by comparing the forecast cosmological parameter constraints obtained from a Fisher analysis. The comparison was made possible by careful and coherent treatment of non-linear scales in the two analyses, which makes this study the first to compare 3D SFB and tomographic constraints on an equal footing. Nuisance parameters related to a scale- and redshift-dependent galaxy bias were also included in the computation of the 3D SFB and tomographic power spectra for the first time. Results: Tomographic and 3D SFB methods can recover similar constraints in the absence of systematics. This requires choosing an optimal number of redshift bins for the tomographic analysis, which we computed to be N = 26 for zmed ≃ 0.4, N = 30 for zmed ≃ 1.0, and N = 42 for zmed ≃ 1.7. When marginalising over nuisance parameters related to the galaxy bias, the forecast 3D SFB constraints are less affected by this source of systematics than the tomographic constraints. In addition, the rate of increase of the figure of merit as a function of median redshift is higher for the 3D SFB method than for the 2D tomographic method. Conclusions: Constraints from the 3D SFB analysis are less sensitive to unavoidable systematics stemming from a redshift- and scale-dependent galaxy bias. Even for surveys that are optimised with tomography in mind, a 3D SFB analysis is more powerful. In addition, for survey optimisation, the figure of merit for the 3D SFB method increases more rapidly with redshift, especially at higher redshifts, suggesting that the 3D SFB method should be preferred for designing and analysing future wide-field spectroscopic surveys. CosmicPy, the Python package developed for this paper, is freely available at https://cosmicpy.github.io. Appendices are available in electronic form at http://www.aanda.org
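A hedged sketch of the figure-of-merit computation referred to above, using one common convention (inverse square root of the determinant of the marginalized 2x2 parameter covariance); the toy Fisher matrix is purely illustrative and the paper's exact convention may differ.

```python
import numpy as np

def figure_of_merit(fisher, i, j):
    """Figure of merit for parameters i and j from a Fisher matrix:
    the inverse area (up to a constant) of their marginalized confidence ellipse."""
    cov = np.linalg.inv(fisher)            # marginalize by inverting the full Fisher matrix
    sub = cov[np.ix_([i, j], [i, j])]      # 2x2 marginalized covariance
    return 1.0 / np.sqrt(np.linalg.det(sub))

# Toy Fisher matrix for four parameters (numbers are placeholders).
F = np.diag([400.0, 250.0, 90.0, 60.0]) + 5.0
fom = figure_of_merit(F, 2, 3)
```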
Manfredi; Feix
2000-10-01
The properties of an alternative definition of quantum entropy, based on Wigner functions, are discussed. Such a definition emerges naturally from the Wigner representation of quantum mechanics, and can easily quantify the amount of entanglement of a quantum state. It is shown that smoothing of the Wigner function induces an increase in entropy. This fact is used to derive some simple rules to construct positive-definite probability distributions which are also admissible Wigner functions.
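One way to make these statements concrete (hedged: the standard formulas below are consistent with the abstract but may not match the paper's exact definitions) is a linear, Wigner-function-based entropy together with Gaussian smoothing:

$$ S = 1 - (2\pi\hbar)^N \int W^2(q,p)\,\mathrm{d}q\,\mathrm{d}p , $$

$$ \widetilde{W}(q,p) = \frac{1}{2\pi\sigma_q\sigma_p}\int W(q',p')\, \exp\!\left[-\frac{(q-q')^2}{2\sigma_q^2}-\frac{(p-p')^2}{2\sigma_p^2}\right]\mathrm{d}q'\,\mathrm{d}p' . $$

For a pure state $\int W^2 = (2\pi\hbar)^{-N}$, so $S = 0$; smoothing reduces $\int W^2$ and therefore increases $S$. For a single mode with $\sigma_q\sigma_p \ge \hbar/2$, the smoothed function $\widetilde{W}$ is non-negative (the Husimi function when the bound is saturated), which illustrates the route to positive-definite distributions mentioned above.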
NASA Technical Reports Server (NTRS)
Watson, Clifford
2010-01-01
Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by relative likelihood and severity of the residual risk. These matrices present a quick-look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes, while adding a new, third dimension, shown as the Z-axis, and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training.) 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall-pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting, gives a visual confirmation of the relationship between causes and their controls.
NASA Technical Reports Server (NTRS)
Watson, Clifford C.
2011-01-01
Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by relative likelihood and severity of the residual risk. These matrices present a quick-look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes, while adding a new, third dimension, shown as the Z-axis, and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training.) 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall-pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting gives a visual confirmation of the relationship between causes and their controls.
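As a purely illustrative sketch of how the three axes could be combined in a spreadsheet-style calculation, the snippet below assigns ordinal scores to likelihood, severity, and level of control and multiplies them into a single "tall-pole" index; the numeric scales and the product form are assumptions for illustration, not the report's actual scoring scheme.

```python
# Hypothetical ordinal scales for the three axes of the hazard model.
LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4, "frequent": 5}
SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
# Level of Control follows the hazard-reduction precedence: design elimination
# scores best (1), protective clothing and equipment worst (6).
LEVEL_OF_CONTROL = {
    "eliminate by design": 1, "substitute materials": 2, "safety devices": 3,
    "caution and warning devices": 4, "administrative controls": 5,
    "protective clothing and equipment": 6,
}

def hazard_score(likelihood, severity, control):
    """Composite score; higher values flag the least-well-controlled hazards."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity] * LEVEL_OF_CONTROL[control]

score = hazard_score("occasional", "critical", "administrative controls")
```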
EDNA: Expert fault digraph analysis using CLIPS
NASA Technical Reports Server (NTRS)
Dixit, Vishweshwar V.
1990-01-01
Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find. Available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in vast research effort and analytical techniques. The tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a digraph (cyclic) into trees (CLP, LP) is a viable approach to blend the advantages of the representations. Neither the digraphs nor the trees provide the ability to handle heuristic knowledge. An expert system, to capture the engineering knowledge, is essential. We propose an approach here, namely, expert network analysis. We combine the digraph representation and tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge. Mixed analysis, with only some nodes carrying probabilities, is possible. The tool provides a graphics interface for input, query, and update. With the combined approach it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
Women among First Authors in Japanese Cardiovascular Journal.
Fujii, Tomoko; Matsuyama, Tasuku; Takeuchi, Jiro; Hara, Masahiko; Kitamura, Tetsuhisa; Yamauchi-Takihara, Keiko
2018-03-30
The representation of women in Japanese academic medicine has not been well documented. We aimed to assess trends related to the proportion of female first authors in Japanese cardiovascular journals. We reviewed original research articles in 6 journals published in English by Japanese societies between 2006 and 2015 related to cardiovascular fields. We conducted a multivariable logistic regression analysis to assess the factors associated with the gender of first authors and plotted the trend of predicted probability for female first authors over the study period. Of 7,005 original articles, 1,330 (19.0%) had female first authors. Affiliations located in Japan (adjusted odds ratio [aOR], 0.76; 95% confidence interval [CI], 0.71-0.81), concurrent first and corresponding authors (aOR, 0.69; 95% CI, 0.64-0.74), and the total number of listed authors (aOR, 0.97; 95% CI, 0.95-0.99) were negatively associated with female first authors. The adjusted probability of a female first author increased from 13% to 20% on average between 2006 and 2009, but the increase reached a plateau after 2010. Female first authors of original research articles published in Japanese cardiovascular journals increased over the examined decade. However, the representation of women is still low and has plateaued in recent years. A gender gap in authorship for Japanese cardiovascular journals remains.
Zhang, Qin
2015-07-01
Probabilistic graphical models (PGMs) such as Bayesian network (BN) have been widely applied in uncertain causality representation and probabilistic reasoning. Dynamic uncertain causality graph (DUCG) is a newly presented model of PGMs, which can be applied to fault diagnosis of large and complex industrial systems, disease diagnosis, and so on. The basic methodology of DUCG has been previously presented, in which only the directed acyclic graph (DAG) was addressed. However, the mathematical meaning of DUCG was not discussed. In this paper, the DUCG with directed cyclic graphs (DCGs) is addressed. In contrast, BN does not allow DCGs, as otherwise the conditional independence will not be satisfied. The inference algorithm for the DUCG with DCGs is presented, which not only extends the capabilities of DUCG from DAGs to DCGs but also enables users to decompose a large and complex DUCG into a set of small, simple sub-DUCGs, so that a large and complex knowledge base can be easily constructed, understood, and maintained. The basic mathematical definition of a complete DUCG with or without DCGs is proved to be a joint probability distribution (JPD) over a set of random variables. The incomplete DUCG as a part of a complete DUCG may represent a part of JPD. Examples are provided to illustrate the methodology.
A fuzzy Bayesian network approach to quantify the human behaviour during an evacuation
NASA Astrophysics Data System (ADS)
Ramli, Nurulhuda; Ghani, Noraida Abdul; Ahmad, Nazihah
2016-06-01
Bayesian Network (BN) has been regarded as a successful representation of the inter-relationships of factors affecting human behavior during an emergency. This paper is an extension of earlier work on quantifying the variables involved in the BN model of human behavior during an evacuation using a well-known direct probability elicitation technique. To overcome judgment bias and reduce the expert's burden in providing precise probability values, a new approach for the elicitation technique is required. This study proposes a new fuzzy BN approach for quantifying human behavior during an evacuation. Three major phases of methodology are involved, namely (1) development of a qualitative model representing human factors during an evacuation, (2) quantification of the BN model using fuzzy probabilities, and (3) inference and interpretation of the BN result. A case study of three inter-dependent human evacuation factors, namely danger assessment ability, information about the threat, and stressful conditions, is used to illustrate the application of the proposed method. This approach will serve as an alternative to the conventional probability elicitation technique in understanding human behavior during an evacuation.
Bekrater-Bodmann, Robin; Schredl, Michael; Diers, Martin; Reinhard, Iris; Foell, Jens; Trojan, Jörg; Fuchs, Xaver; Flor, Herta
2015-01-01
The experience of post-amputation pain such as phantom limb pain (PLP) and residual limb pain (RLP), is a common consequence of limb amputation, and its presence has negative effects on a person's well-being. The continuity hypothesis of dreams suggests that the presence of such aversive experiences in the waking state should be reflected in dream content, with the recalled body representation reflecting a cognitive proxy of negative impact. In the present study, we epidemiologically assessed the presence of post-amputation pain and other amputation-related information as well as recalled body representation in dreams in a sample of 3,234 unilateral limb amputees. Data on the site and time of amputation, residual limb length, prosthesis use, lifetime prevalence of mental disorders, presence of post-amputation pain, and presence of non-painful phantom phenomena were included in logistic regression analyses using recalled body representation in dreams (impaired, intact, no memory) as dependent variable. The effects of age, sex, and frequency of dream recall were controlled for. About 22% of the subjects indicated that they were not able to remember their body representation in dreams, another 24% of the amputees recalled themselves as always intact, and only a minority of less than 3% recalled themselves as always impaired. Almost 35% of the amputees dreamed of themselves in a mixed fashion. We found that lower-limb amputation as well as the presence of PLP and RLP was positively associated with the recall of an impaired body representation in dreams. The presence of non-painful phantom phenomena, however, had no influence. These results complement previous findings and indicate complex interactions of physical body appearance and mental body representation, probably modulated by distress in the waking state. The findings are discussed against the background of alterations in cognitive processes after amputation and hypotheses suggesting an innate body model.
The best of both Reps—Diabatized Gaussians on adiabatic surfaces
NASA Astrophysics Data System (ADS)
Meek, Garrett A.; Levine, Benjamin G.
2016-11-01
When simulating nonadiabatic molecular dynamics, choosing an electronic representation requires consideration of well-known trade-offs. The uniqueness and spatially local couplings of the adiabatic representation come at the expense of an electronic wave function that changes discontinuously with nuclear motion and associated singularities in the nonadiabatic coupling matrix elements. The quasi-diabatic representation offers a smoothly varying wave function and finite couplings, but identification of a globally well-behaved quasi-diabatic representation is a system-specific challenge. In this work, we introduce the diabatized Gaussians on adiabatic surfaces (DGAS) approximation, a variant of the ab initio multiple spawning (AIMS) method that preserves the advantages of both electronic representations while avoiding their respective pitfalls. The DGAS wave function is expanded in a basis of vibronic functions that are continuous in both electronic and nuclear coordinates, but potentially discontinuous in time. Because the time-dependent Schrödinger equation contains only first-order derivatives with respect to time, singularities in the second-derivative nonadiabatic coupling terms (i.e., diagonal Born-Oppenheimer correction; DBOC) at conical intersections are rigorously absent, though singular time-derivative couplings remain. Interpolation of the electronic wave function allows the accurate prediction of population transfer probabilities even in the presence of the remaining singularities. We compare DGAS calculations of the dynamics of photoexcited ethene to AIMS calculations performed in the adiabatic representation, including the DBOC. The 28 fs excited state lifetime observed in DGAS simulations is considerably shorter than the 50 fs lifetime observed in the adiabatic simulations. The slower decay in the adiabatic representation is attributable to the large, repulsive DBOC in the neighborhood of conical intersections. These repulsive DBOC terms are artifacts of the discontinuities in the individual adiabatic vibronic basis functions and therefore cannot reflect the behavior of the exact molecular wave function, which must be continuous.
Tomographic Image Reconstruction Using an Interpolation Method for Tree Decay Detection
Hailin Feng; Guanghui Li; Sheng Fu; Xiping Wang
2014-01-01
Stress wave velocity has traditionally been regarded as an indicator of the extent of damage inside wood. This paper aimed to detect internal decay of urban trees by reconstructing a tomographic image of the cross section of a tree trunk. A grid model covering the cross-sectional area of a tree trunk was defined with some assumptions. Stress wave data were processed...
Characterization of Flow Dynamics and Reduced-Order Description of Experimental Two-Phase Pipe Flow
NASA Astrophysics Data System (ADS)
Viggiano, Bianca; SkjæRaasen, Olaf; Tutkun, Murat; Cal, Raul Bayoan
2017-11-01
Multiphase pipe flow is investigated using proper orthogonal decomposition for tomographic X-ray data, where holdup, cross-sectional phase distributions and phase interface characteristics are obtained. Instantaneous phase fractions of dispersed flow and slug flow are analyzed and a reduced-order dynamical description is generated. The dispersed flow displays coherent structures in the first few modes near the horizontal center of the pipe, representing the liquid-liquid interface location, while the slug flow case shows coherent structures that correspond to the cyclical formation and breakup of the slug in the first 10 modes. The reconstruction of the fields indicates that the main features are captured in low-order dynamical descriptions utilizing less than 1% of the full-order model. POD temporal coefficients a1, a2 and a3 show interdependence for the slug flow case. The coefficients also describe the phase fraction holdup as a function of time for both dispersed and slug flow. These flows are highly relevant to petroleum transport pipelines, hydroelectric power and heat exchanger tubes, to name a few. The mathematical representations obtained via proper orthogonal decomposition will deepen the understanding of fundamental multiphase flow characteristics.
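A minimal snapshot-POD sketch in Python, assuming the tomographic phase-fraction fields have been flattened into a snapshot matrix (the data here are synthetic placeholders, not the experimental X-ray reconstructions):

```python
import numpy as np

# Snapshot matrix: each row is one instantaneous cross-sectional phase-fraction
# map, flattened to a vector (synthetic placeholder data).
rng = np.random.default_rng(3)
n_time, n_pix = 400, 64 * 64
snapshots = rng.normal(size=(n_time, n_pix))

mean_field = snapshots.mean(axis=0)
fluct = snapshots - mean_field                  # fluctuations about the mean field

# Economy SVD: rows of Vt are spatial POD modes, U*S are temporal coefficients a_k(t).
U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = S**2 / np.sum(S**2)                    # modal energy fractions
a = U * S                                       # a[:, k] = a_k(t)

# Low-order reconstruction using the first r modes (the reduced-order description).
r = 10
recon = mean_field + a[:, :r] @ Vt[:r, :]
```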
Jini service to reconstruct tomographic data
NASA Astrophysics Data System (ADS)
Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.
2002-06-01
A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, two classes of reconstruction methods are mainly in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for routine clinical studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's Intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
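For orientation, the snippet below contrasts the two reconstruction families mentioned in the abstract on a standard phantom, using scikit-image's radon/iradon for FBP and SART as a stand-in iterative method (the service's own iterative algorithm is not public, so this is only a generic sketch):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, iradon_sart, rescale

# Simulate projections of a standard phantom.
image = rescale(shepp_logan_phantom(), 0.5)
angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=angles)

# Filtered back projection (default ramp filter).
fbp = iradon(sinogram, theta=angles)

# A generic iterative method (SART), run for two iterations as a stand-in.
sart = iradon_sart(sinogram, theta=angles)
sart = iradon_sart(sinogram, theta=angles, image=sart)
```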
Effect of fringe-artifact correction on sub-tomogram averaging from Zernike phase-plate cryo-TEM
Kishchenko, Gregory P.; Danev, Radostin; Fisher, Rebecca; He, Jie; Hsieh, Chyongere; Marko, Michael; Sui, Haixin
2015-01-01
Zernike phase-plate (ZPP) imaging greatly increases contrast in cryo-electron microscopy; however, fringe artifacts appear in the images. A computational de-fringing method has been proposed, but it has not been widely employed, perhaps because the importance of de-fringing has not been clearly demonstrated. For testing purposes, we employed Zernike phase-plate imaging in a cryo-electron tomographic study of radial-spoke complexes attached to microtubule doublets. We found that the contrast enhancement by ZPP imaging made nonlinear denoising insensitive to the filtering parameters, such that simple low-frequency band-pass filtering made the same improvement in map quality. We employed sub-tomogram averaging, which compensates for the effect of the “missing wedge” and considerably improves map quality. We found that fringes (caused by the abrupt cut-on of the central hole in the phase plate) can lead to incorrect representation of a structure that is well-known from the literature. The expected structure was restored by amplitude scaling, as proposed in the literature. Our results show that de-fringing is an important part of image-processing for cryo-electron tomography of macromolecular complexes with ZPP imaging. PMID:26210582
Computed tomographic images using tube source of x rays: interior properties of the material
NASA Astrophysics Data System (ADS)
Rao, Donepudi V.; Takeda, Tohoru; Itai, Yuji; Seltzer, S. M.; Hubbell, John H.; Zeniya, Tsutomu; Akatsuka, Takao; Cesareo, Roberto; Brunetti, Antonio; Gigante, Giovanni E.
2002-01-01
An image intensifier based computed tomography scanner and a tube source of x-rays are used to obtain images of small objects, plastics, wood and soft materials in order to examine the interior properties of the material. A new method is developed to estimate the degree of monochromacy, total solid angle, efficiency and geometrical effects of the measuring system, and the way to produce monoenergetic radiation. The flux emitted by the x-ray tube is filtered using appropriate filters at the chosen optimum energy; reasonable monochromacy is achieved and the images are acceptably distinct. Much attention has been focused on the imaging of small objects of weakly attenuating materials at the optimum energy. At the optimum energy it is possible to calculate the three-dimensional representation of inner and outer surfaces of the object. The image contrast between soft materials could be significantly enhanced by optimal selection of the energy of the x-rays by Monte Carlo methods. The imaging system is compact and reasonably economical, offers good contrast resolution, simple operation and routine availability, and explores the use of optimized tomography for various applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pablant, N. A.; Bell, R. E.; Bitter, M.
2014-11-15
Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows for unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.
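The sketch below illustrates the general constrained-inversion idea on a toy problem: recover a non-negative flux-surface emissivity profile from simulated line-integrated signals using SciPy's bounded least-squares solver. The geometry matrix, the knot profile, and the use of a trust-region solver (rather than the paper's modified Levenberg-Marquardt with condition avoidance) are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
n_knots, n_chords = 20, 30
rho = np.linspace(0.0, 1.0, n_knots)                        # flux-surface coordinate knots
L = rng.uniform(0.0, 1.0, size=(n_chords, n_knots))         # hypothetical path-length weights
e_true = np.exp(-4.0 * rho**2)                              # peaked emissivity profile
signal = L @ e_true + 0.01 * rng.normal(size=n_chords)      # noisy line-integrated signals

def residual(e):
    """Mismatch between modeled line integrals and measured signals."""
    return L @ e - signal

# Bounds stand in for the physical constraints (here: non-negative emissivity).
fit = least_squares(residual, x0=np.ones(n_knots), bounds=(0.0, np.inf), method="trf")
e_hat = fit.x
```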
Enhanced Combined Tomography and Biomechanics Data for Distinguishing Forme Fruste Keratoconus.
Luz, Allan; Lopes, Bernardo; Hallahan, Katie M; Valbon, Bruno; Ramos, Isaac; Faria-Correia, Fernando; Schor, Paulo; Dupps, William J; Ambrósio, Renato
2016-07-01
To evaluate the performance of the Ocular Response Analyzer (ORA) (Reichert Ophthalmic Instruments, Depew, NY) variables and Pentacam HR (Oculus Optikgeräte GmbH, Wetzlar, Germany) tomographic parameters in differentiating forme fruste keratoconus (FFKC) from normal corneas, and to assess a combined biomechanical and tomographic parameter to improve outcomes. Seventy-six eyes of 76 normal patients and 21 eyes of 21 patients with FFKC were included in the study. Fifteen variables were derived from exported ORA signals to characterize putative indicators of biomechanical behavior and 37 ORA waveform parameters were tested. Sixteen tomographic parameters from Pentacam HR were tested. Logistic regression was used to produce a combined biomechanical and tomography linear model. Differences between groups were assessed by the Mann-Whitney U test. The area under the receiver operating characteristics curve (AUROC) was used to compare diagnostic performance. No statistically significant differences were found in age, thinnest point, central corneal thickness, and maximum keratometry between groups. Twenty-one parameters showed significant differences between the FFKC and control groups. Among the ORA waveform measurements, the best parameters were those related to the area under the first peak, p1area1 (AUROC, 0.717 ± 0.065). Among the investigator ORA variables, a measure incorporating the pressure-deformation relationship of the entire response cycle was the best predictor (hysteresis loop area, AUROC, 0.688 ± 0.068). Among tomographic parameters, Belin/Ambrósio display showed the highest predictive value (AUROC, 0.91 ± 0.057). A combination of parameters showed the best result (AUROC, 0.953 ± 0.024) outperforming individual parameters. Tomographic and biomechanical parameters demonstrated the ability to differentiate FFKC from normal eyes. A combination of both types of information further improved predictive value. [J Refract Surg. 2016;32(7):479-485.]. Copyright 2016, SLACK Incorporated.
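A hedged sketch of the comparison logic (individual-parameter AUROC versus a combined logistic-regression score), with synthetic stand-in data rather than the study's measurements; note the combined score is evaluated in-sample here, which is optimistic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 97
y = (rng.uniform(size=n) < 0.22).astype(int)      # 1 = FFKC-like, 0 = normal (synthetic)
biomech = 0.8 * y + rng.normal(size=n)            # hypothetical biomechanical index
tomo = 1.5 * y + rng.normal(size=n)               # hypothetical tomographic index

# AUROC of each parameter alone.
auc_biomech = roc_auc_score(y, biomech)
auc_tomo = roc_auc_score(y, tomo)

# AUROC of a combined logistic-regression score.
X = np.column_stack([biomech, tomo])
combined = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
auc_combined = roc_auc_score(y, combined)
```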
Sporns, Peter B; Schwake, Michael; Schmidt, Rene; Kemmling, André; Minnerup, Jens; Schwindt, Wolfram; Cnyrim, Christian; Zoubi, Tarek; Heindel, Walter; Niederstadt, Thomas; Hanning, Uta
2017-01-01
Significant early hematoma growth in patients with intracerebral hemorrhage is an independent predictor of poor functional outcome. Recently, the novel blend sign (BS) has been introduced as a new imaging sign for predicting hematoma growth in noncontrast computed tomography. Another parameter predicting increasing hematoma size is the well-established spot sign (SS) visible in computed tomographic angiography. We, therefore, aimed to clarify the association between established SS and novel BS and their values predicting a secondary neurological deterioration. Retrospective study inclusion criteria were (1) spontaneous intracerebral hemorrhage confirmed on noncontrast computed tomography and (2) noncontrast computed tomography and computed tomographic angiography performed on admission within 6 hours after onset of symptoms. We defined a binary outcome (secondary neurological deterioration versus no secondary deterioration). As secondary neurological deterioration, we defined (1) early hemicraniectomy under standardized criteria or (2) secondary decrease of Glasgow Coma Scale of >3 points, both within the first 48 hours after symptom onset. Of 182 patients with spontaneous intracerebral hemorrhage, 37 (20.3%) presented with BS and 39 (21.4%) with SS. Of the 81 patients with secondary deterioration, 31 (38.3%) had BS and SS on admission. Multivariable logistic regression analysis identified hematoma volume (odds ratio, 1.07 per mL; P≤0.001), intraventricular hemorrhage (odds ratio, 3.08; P=0.008), and the presence of BS (odds ratio, 11.47; P≤0.001) as independent predictors of neurological deterioration. The BS, which is obtainable in noncontrast computed tomography, shows a high correlation with the computed tomographic angiography SS and is a reliable predictor of secondary neurological deterioration after spontaneous intracerebral hemorrhage. © 2016 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Moeck, Jonas P.; Bourgouin, Jean-François; Durox, Daniel; Schuller, Thierry; Candel, Sébastien
2013-04-01
Swirl flows with vortex breakdown are widely used in industrial combustion systems for flame stabilization. This type of flow is known to sustain a hydrodynamic instability with a rotating helical structure, one common manifestation of it being the precessing vortex core. The role of this unsteady flow mode in combustion is not well understood, and its interaction with combustion instabilities and flame stabilization remains unclear. It is therefore important to assess the structure of the perturbation in the flame that is induced by this helical mode. Based on principles of tomographic reconstruction, a method is presented to determine the 3-D distribution of the heat release rate perturbation associated with the helical mode. Since this flow instability is rotating, a phase-resolved sequence of projection images of light emitted from the flame is identical to the Radon transform of the light intensity distribution in the combustor volume and thus can be used for tomographic reconstruction. This is achieved with one stationary camera only, a vast reduction in experimental and hardware requirements compared to a multi-camera setup or camera repositioning, which is typically required for tomographic reconstruction. Different approaches to extract the coherent part of the oscillation from the images are discussed. Two novel tomographic reconstruction algorithms specifically tailored to the structure of the heat release rate perturbations related to the helical mode are derived. The reconstruction techniques are first applied to an artificial field to illustrate the accuracy. High-speed imaging data acquired in a turbulent swirl-stabilized combustor setup with strong helical mode oscillations are then used to reconstruct the 3-D structure of the associated perturbation in the flame.
Li, Qiao; Gao, Xinyi; Yao, Zhenwei; Feng, Xiaoyuan; He, Huijin; Xue, Jing; Gao, Peiyi; Yang, Lumeng; Cheng, Xin; Chen, Weijian; Yang, Yunjun
2017-09-01
Permeability surface (PS) on computed tomographic perfusion reflects blood-brain barrier permeability and is related to hemorrhagic transformation (HT). HT of deep middle cerebral artery (MCA) territory can occur after recanalization of proximal large-vessel occlusion. We aimed to determine the relationship between HT and PS of deep MCA territory. We retrospectively reviewed 70 consecutive acute ischemic stroke patients presenting with occlusion of the distal internal carotid artery or M1 segment of the MCA. All patients underwent computed tomographic perfusion within 6 hours after symptom onset. Computed tomographic perfusion data were postprocessed to generate maps of different perfusion parameters. Risk factors were identified for increased deep MCA territory PS. Receiver operating characteristic curve analysis was performed to calculate the optimal PS threshold to predict HT of deep MCA territory. Increased PS was associated with HT of deep MCA territory. After adjustments for age, sex, onset time to computed tomographic perfusion, and baseline National Institutes of Health Stroke Scale, poor collateral status (odds ratio, 7.8; 95% confidence interval, 1.67-37.14; P =0.009) and proximal MCA-M1 occlusion (odds ratio, 4.12; 95% confidence interval, 1.03-16.52; P =0.045) were independently associated with increased deep MCA territory PS. Relative PS most accurately predicted HT of deep MCA territory (area under curve, 0.94; optimal threshold, 2.89). Increased PS can predict HT of deep MCA territory after recanalization therapy for cerebral proximal large-vessel occlusion. Proximal MCA-M1 complete occlusion and distal internal carotid artery occlusion in conjunction with poor collaterals elevate deep MCA territory PS. © 2017 American Heart Association, Inc.
Pablant, N. A.; Bell, R. E.; Bitter, M.; ...
2014-08-08
Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows for unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.
Mobile visual object identification: from SIFT-BoF-RANSAC to Sketchprint
NASA Astrophysics Data System (ADS)
Voloshynovskiy, Sviatoslav; Diephuis, Maurits; Holotyak, Taras
2015-03-01
Mobile object identification based on visual features finds many applications in the interaction with physical objects and in security. Discriminative and robust content representation plays a central role in object and content identification. Complex post-processing methods are used to compress descriptors and their geometrical information, aggregate them into more compact and discriminative representations and finally re-rank the results based on the similarity geometries of descriptors. Unfortunately, most of the existing descriptors are not very robust and discriminative once applied to varied content such as real images, text or noise-like microstructures, in addition to requiring at least 500-1,000 descriptors per image for reliable identification. At the same time, the geometric re-ranking procedures are still too complex to be applied to the numerous candidates obtained from the feature-similarity-based search alone. This restricts the list of candidates to fewer than 1,000, which obviously causes a higher probability of miss. In addition, the security and privacy of content representation has become a hot research topic in the multimedia and security communities. In this paper, we introduce a new framework for non-local content representation based on SketchPrint descriptors. It extends the properties of local descriptors to a more informative and discriminative, yet geometrically invariant content representation. In particular, it allows images to be compactly represented by 100 SketchPrint descriptors without being fully dependent on re-ranking methods. We consider several use cases, applying SketchPrint descriptors to natural images, text documents, packages and micro-structures, and compare them with traditional local descriptors.
Quantum mechanics on phase space: The hydrogen atom and its Wigner functions
NASA Astrophysics Data System (ADS)
Campos, P.; Martins, M. G. R.; Fernandes, M. C. B.; Vianna, J. D. M.
2018-03-01
Symplectic quantum mechanics (SQM) considers a non-commutative algebra of functions on a phase space Γ and an associated Hilbert space HΓ, to construct a unitary representation for the Galilei group. From this unitary representation the Schrödinger equation is rewritten in phase space variables and the Wigner function can be derived without the use of the Liouville-von Neumann equation. In this article the Coulomb potential in three dimensions (3D) is resolved completely by using the phase space Schrödinger equation. The Kustaanheimo-Stiefel (KS) transformation is applied and the Coulomb and harmonic oscillator potentials are connected. In this context we determine the energy levels, the probability amplitude in phase space and the corresponding Wigner quasi-distribution functions of the 3D hydrogen atom described by the Schrödinger equation in phase space.
Shape-Driven 3D Segmentation Using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2013-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details. PMID:17354875
Phase operator problem and macroscopic extension of quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozawa, M.
1997-06-01
To find the Hermitian phase operator of a single-mode electromagnetic field in quantum mechanics, the Schrödinger representation is extended to a larger Hilbert space augmented by states with infinite excitation by nonstandard analysis. The Hermitian phase operator is shown to exist on the extended Hilbert space. This operator is naturally considered as the controversial limit of the approximate phase operators on finite dimensional spaces proposed by Pegg and Barnett. The spectral measure of this operator is a Naimark extension of the optimal probability operator-valued measure for the phase parameter found by Helstrom. Eventually, the two promising approaches to the statistics of the phase in quantum mechanics are synthesized by means of the Hermitian phase operator in the macroscopic extension of the Schrödinger representation. © 1997 Academic Press, Inc.
The Tomographic Ionized-Carbon Mapping Experiment (TIME) CII Imaging Spectrometer
NASA Astrophysics Data System (ADS)
Staniszewski, Z.; Bock, J. J.; Bradford, C. M.; Brevik, J.; Cooray, A.; Gong, Y.; Hailey-Dunsheath, S.; O'Brient, R.; Santos, M.; Shirokoff, E.; Silva, M.; Zemcov, M.
2014-09-01
The Tomographic Ionized-Carbon Mapping Experiment (TIME) and TIME-Pilot are proposed imaging spectrometers to measure reionization and large scale structure at redshifts 5-9. We seek to exploit the 158 μm rest-frame emission of [CII], which becomes measurable at 200-300 GHz at reionization redshifts. Here we describe the scientific motivation, give an overview of the proposed instrument, and highlight key technological developments underway to enable these measurements.
Tomographic Validation of the AWSoM Model of the Inner Corona During Solar Minima
NASA Astrophysics Data System (ADS)
Manchester, W.; Vásquez, A. M.; Lloveras, D. G.; Mac Cormack, C.; Nuevo, F.; Lopez-Fuentes, M.; Frazin, R. A.; van der Holst, B.; Landi, E.; Gombosi, T. I.
2017-12-01
Continuous improvement of MHD three-dimensional (3D) models of the global solar corona, such as the Alfven Wave Solar Model (AWSoM) of the Space Weather Modeling Framework (SWMF), requires testing their ability to reproduce observational constraints at a global scale. To that end, solar rotational tomography based on EUV image time-series can be used to reconstruct the 3D distribution of the electron density and temperature in the inner solar corona (r < 1.25 Rsun). The tomographic results, combined with a global coronal magnetic model, can further provide constraints on the energy input flux required at the coronal base to maintain stable structures. In this work, tomographic reconstructions are used to validate steady-state 3D MHD simulations of the inner corona using the latest version of the AWSoM model. We perform the study for selected rotations representative of solar minimum conditions, when the global structure of the corona is more axisymmetric. We analyse in particular the ability of the MHD simulation to match the tomographic results across the boundary region between the equatorial streamer belt and the surrounding coronal holes. The region is of particular interest as the plasma flow from that zone is thought to be related to the origin of the slow component of the solar wind.
Isaacson, Brandon; Kutz, Joe Walter; Mendelsohn, Dianne; Roland, Peter S
2009-04-01
To demonstrate the use of computed tomographic (CT) venography in selecting a surgical approach for cholesterol granulomas. Retrospective case review. Tertiary referral center. Three patients presented with symptomatic petrous apex cholesterol granulomas with extensive bone erosion involving the jugular fossa. Computed tomographic venography was performed on each patient before selecting a surgical approach for drainage. Localization of the jugular bulb in relation to the petrous carotid artery and basal turn of the cochlea was ascertained in each subject. Three patients with large symptomatic cholesterol granulomas were identified. Conventional CT demonstrated extensive bone erosion involving the jugular fossa in each patient. The location of the jugular bulb and its proximity to the petrous carotid artery and basal turn of the cochlea could not be determined with conventional temporal bone CT and magnetic resonance imaging. Computed tomographic venography provided the exact location of the jugular bulb in all 3 patients. The favorable position of the jugular bulb in all 3 cases permitted drainage of these lesions using an infracochlear approach. Computed tomographic venography provided invaluable information in 3 patients with large symptomatic cholesterol granulomas. All 3 patients were previously thought to be unsuitable candidates for an infracochlear or infralabyrinthine approach because of the unknown location of the jugular bulb.
SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring
NASA Astrophysics Data System (ADS)
Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.
2015-12-01
We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even in the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative new features, including the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion based regularization scheme implemented between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.
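To make the inversion step concrete, the following is a minimal sketch, not the authors' code, of a Richardson-Lucy iteration for a linear emission model y ≈ A x with Poisson-distributed counts; the system matrix A, the measured radiances y, and the place where a pseudo-diffusion regularization step would go are placeholders.

import numpy as np

def richardson_lucy(A, y, n_iter=50, eps=1e-12):
    """Richardson-Lucy iteration for y ~ Poisson(A @ x), with A >= 0 and y >= 0."""
    n_vox = A.shape[1]
    x = np.full(n_vox, y.sum() / max(A.sum(), eps))   # flat, flux-preserving start
    norm = A.sum(axis=0) + eps                         # column sums, i.e. A^T 1
    for _ in range(n_iter):
        pred = A @ x + eps                             # forward projection
        x *= (A.T @ (y / pred)) / norm                 # multiplicative RL update
        # a smoothing (pseudo-diffusion) step could be applied to x here between iterations
    return x

# toy usage with a random non-negative system matrix
rng = np.random.default_rng(0)
A = rng.random((200, 50))
x_true = rng.random(50)
y = rng.poisson(A @ x_true)
x_hat = richardson_lucy(A, y)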
Sodankylä ionospheric tomography data set 2003-2014
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Roininen, Lassi; Kero, Antti; Raita, Tero; Ulich, Thomas; Markkanen, Markku; Juusola, Liisa; Kauristie, Kirsti
2016-07-01
Sodankylä Geophysical Observatory has been operating a receiver network for ionospheric tomography and collecting the produced data since 2003. The collected data set consists of phase difference curves measured from COSMOS navigation satellites from the Russian Parus network (Wood and Perry, 1980) and tomographic electron density reconstructions obtained from these measurements. In this study vertical total electron content (VTEC) values are integrated from the reconstructed electron densities to make a qualitative and quantitative analysis to validate the long-term performance of the tomographic system. During the observation period, 2003-2014, there were three to five operational stations in the Fennoscandia sector. Altogether the analysis consists of around 66 000 overflights, but to ensure the quality of the reconstructions, the examination is limited to cases with descending (north to south) overflights and maximum elevation over 60°. These constraints limit the number of overflights to around 10 000. Based on this data set, one solar cycle of ionospheric VTEC estimates is constructed. The measurements are compared against the International Reference Ionosphere (IRI)-2012 model, F10.7 solar flux index and sunspot number data. Qualitatively the tomographic VTEC estimate corresponds to reference data very well, but the IRI-2012 model results are on average 40 % higher than the tomographic results.
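As a small illustration of the VTEC integration mentioned above, the sketch below integrates a synthetic Chapman-type electron density profile vertically and expresses the result in TEC units; the profile parameters are invented and the real data set involves 2D tomographic reconstructions rather than a single profile.

import numpy as np

# synthetic Chapman-like electron density profile (m^-3) on an altitude grid (km)
alt_km = np.linspace(80.0, 1000.0, 500)
z = (alt_km - 300.0) / 60.0                         # assumed peak height and scale height
ne = 5e11 * np.exp(0.5 * (1.0 - z - np.exp(-z)))    # Chapman layer shape

# VTEC = integral of Ne dh; convert km to m, then express in TEC units (1 TECU = 1e16 el/m^2)
vtec_el_m2 = np.trapz(ne, alt_km * 1e3)
print(f"VTEC = {vtec_el_m2 / 1e16:.1f} TECU")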
Probabilistic images (PBIS): A concise image representation technique for multiple parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, L.C.; Yeh, S.H.; Chen, Z.
1984-01-01
Based on m parametric images (PIs) derived from a dynamic series (DS), each pixel of DS is regarded as an m-dimensional vector. Given one set of normal samples (pixels) N and another of abnormal samples A, probability density functions (pdfs) of both sets are estimated. Any unknown sample is classified into N or A by calculating the probability of its being in the abnormal set using Bayes' theorem. Instead of estimating the multivariate pdfs, a distance ratio transformation is introduced to map the m-dimensional sample space to one dimensional Euclidean space. Consequently, the image that localizes the regional abnormalities is characterized by the probability of being abnormal. This leads to the new representation scheme of PBIs. Tc-99m HIDA study for detecting intrahepatic lithiasis (IL) was chosen as an example of constructing PBI from 3 parameters derived from DS and such a PBI was compared with those 3 PIs, namely, retention ratio image (RRI), peak time image (TNMAX) and excretion mean transit time image (EMTT). 32 normal subjects and 20 patients with proved IL were collected and analyzed. The resultant sensitivity and specificity of PBI were 97% and 98% respectively. They were superior to those of any of the 3 PIs: RRI (94/97), TMAX (86/88) and EMTT (94/97). Furthermore, the contrast of PBI was much better than that of any other image. This new image formation technique, based on multiple parameters, shows the functional abnormalities in a structural way. Its good contrast makes the interpretation easy. This technique is powerful compared to the existing parametric image method.
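A schematic sketch of the per-pixel Bayesian step described above, with a one-dimensional feature standing in for the distance-ratio transform and Gaussian class densities estimated from normal and abnormal training pixels; all names and numbers are illustrative, not taken from the study.

import numpy as np
from scipy.stats import norm

# training features after a 1-D distance-ratio transform (illustrative values)
d_normal = np.array([0.8, 1.0, 1.1, 0.9, 1.05])
d_abnormal = np.array([2.1, 2.5, 1.9, 2.3, 2.0])

# Gaussian pdf estimates for each class and equal priors
pdf_n = norm(d_normal.mean(), d_normal.std(ddof=1))
pdf_a = norm(d_abnormal.mean(), d_abnormal.std(ddof=1))
prior_a = prior_n = 0.5

def prob_abnormal(d):
    """Posterior P(abnormal | d) by Bayes' theorem; this would be the PBI pixel value."""
    num = pdf_a.pdf(d) * prior_a
    return num / (num + pdf_n.pdf(d) * prior_n)

pixels = np.array([0.95, 1.7, 2.4])
print(prob_abnormal(pixels))   # one probability of abnormality per pixel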
Loading Rate Effects on the One-Dimensional Compressibility of Four Partially Saturated Soils
1986-12-01
representations are referred to as constitutive models. Numerous constitutive models incorporating loading rate effects have been developed (Baladi and Rohani...) ...and probably more indicative of the true values of applied pressure and average strain produced during the test. A technique developed by Baladi and... ..."Sand," Technical Report No. AFWL-TR-66-146, Air Force Weapons Laboratory, Kirtland Air Force Base, New Mexico, June 1967. 4. Baladi, George Y., and...
A controlled genetic algorithm by fuzzy logic and belief functions for job-shop scheduling.
Hajri, S; Liouane, N; Hammadi, S; Borne, P
2000-01-01
Most scheduling problems are highly complex combinatorial problems. However, stochastic methods such as genetic algorithm yield good solutions. In this paper, we present a controlled genetic algorithm (CGA) based on fuzzy logic and belief functions to solve job-shop scheduling problems. For better performance, we propose an efficient representational scheme, heuristic rules for creating the initial population, and a new methodology for mixing and computing genetic operator probabilities.
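The controlled GA in the paper uses fuzzy logic and belief functions to steer the search; the sketch below is only a generic stand-in that shows the idea of adapting crossover and mutation probabilities from recent progress, applied to a toy permutation-scheduling objective. It is neither the authors' CGA nor a job-shop encoding.

import random

jobs = [4, 2, 7, 3, 6, 1, 5]                      # toy processing times

def total_flowtime(perm):
    """Sum of completion times on a single machine (a toy scheduling objective)."""
    t = total = 0
    for j in perm:
        t += jobs[j]
        total += t
    return total

def order_crossover(a, b):
    i, k = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:k] = a[i:k]
    rest = [g for g in b if g not in child]
    return [g if g is not None else rest.pop(0) for g in child]

def swap_mutation(p):
    i, k = random.sample(range(len(p)), 2)
    p[i], p[k] = p[k], p[i]
    return p

pop = [random.sample(range(len(jobs)), len(jobs)) for _ in range(30)]
pc, pm = 0.8, 0.2                                  # operator probabilities to be adapted
best = min(map(total_flowtime, pop))
for _ in range(100):
    new_pop = []
    for _ in range(len(pop)):
        a, b = random.sample(pop, 2)
        child = order_crossover(a, b) if random.random() < pc else min(a, b, key=total_flowtime)[:]
        if random.random() < pm:
            child = swap_mutation(child)
        new_pop.append(child)
    pop = new_pop
    cur = min(map(total_flowtime, pop))
    # crude controller: reward improvement, raise mutation when the search stagnates
    if cur < best:
        best, pc, pm = cur, min(0.95, pc + 0.02), max(0.05, pm - 0.02)
    else:
        pc, pm = max(0.5, pc - 0.02), min(0.5, pm + 0.02)
print(best, pc, pm)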
Local Renyi entropic profiles of DNA sequences.
Vinga, Susana; Almeida, Jonas S
2007-10-16
In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition therein explored was based on the Rényi entropy of probability density estimation (pdf) using the Parzen's window method and applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. The new methodology enables two results. On the one hand it shows that the entropic profiles are directly related with the statistical significance of motifs, allowing the study of under and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures.
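A compact sketch of the two ingredients named above: Chaos Game Representation coordinates for a DNA string and an order-2 Rényi entropy estimate from a Gaussian Parzen density. The closed form used for H2 is the standard pairwise-kernel expression; the sequence and kernel width are arbitrary illustrations, not the authors' settings.

import numpy as np

corners = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}

def cgr(seq):
    """Chaos Game Representation: iterate the midpoint towards the corner of each base."""
    pts, p = [], np.array([0.5, 0.5])
    for base in seq:
        p = 0.5 * (p + np.array(corners[base]))
        pts.append(p.copy())
    return np.array(pts)

def renyi2_entropy(points, sigma=0.05):
    """Order-2 Renyi entropy of a 2-D Gaussian Parzen estimate:
    H2 = -log( (1/N^2) sum_ij G(x_i - x_j; 2*sigma^2*I) )."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    var = 2.0 * sigma ** 2
    g = np.exp(-d2 / (2.0 * var)) / (2.0 * np.pi * var)
    return -np.log(g.mean())

pts = cgr("ACGTACGTTTGACAGGTACCAGT")
print(renyi2_entropy(pts))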
NASA Astrophysics Data System (ADS)
Mazoyer, M.; Roehrig, R.; Nuissier, O.; Duffourg, F.; Somot, S.
2017-12-01
Most regional climate models (RCSMs) face difficulties in representing a reasonable precipitation probability density function in the Mediterranean area and especially over land. Small amounts of rain are too frequent, preventing any realistic representation of droughts or heat waves, while the intensity of heavy precipitating events is underestimated and not well located by most state-of-the-art RCSMs using parameterized convection (resolution from 10 to 50 km). Convective parameterization is a key point for the representation of such events and recently, the new physics implemented in the CNRM-RCSM has been shown to remarkably improve it, even at a 50-km scale. The present study seeks to further analyse the representation of heavy precipitating events by this new version of CNRM-RCSM using a process-oriented approach. We focus on one particular event in the south-east of France, over the Cévennes. Two hindcast experiments with the CNRM-RCSM (12 and 50 km) are performed and compared with a simulation based on the convection-permitting model Meso-NH, which makes use of a very similar setup as the CNRM-RCSM hindcasts. The role of small-scale features of the regional topography and its interaction with the impinging large-scale flow in triggering the convective event are investigated. This study provides guidance in the ongoing implementation and use of a specific parameterization dedicated to account for subgrid-scale orography in the triggering and closure conditions of the CNRM-RCSM convection scheme.
Self-gravity, self-consistency, and self-organization in geodynamics and geochemistry
NASA Astrophysics Data System (ADS)
Anderson, Don L.
The results of seismology and geochemistry for mantle structure are widely believed to be discordant, the former favoring whole-mantle convection and the latter favoring layered convection with a boundary near 650 km. However, a different view arises from recognizing effects usually ignored in the construction of these models, including physical plausibility and dimensionality. Self-compression and expansion affect material properties that are important in all aspects of mantle geochemistry and dynamics, including the interpretation of tomographic images. Pressure compresses a solid and changes physical properties that depend on volume and does so in a highly nonlinear way. Intrinsic, anelastic, compositional, and crystal structure effects control seismic velocities; temperature is not the only parameter, even though tomographic images are often treated as temperature maps. Shear velocity is not a good proxy for density, temperature, and composition or for other elastic constants. Scaling concepts are important in mantle dynamics, equations of state, and wherever it is necessary to extend laboratory experiments to the parameter range of the Earth's mantle. Simple volume-scaling relations that permit extrapolation of laboratory experiments, in a thermodynamically self-consistent way, to deep mantle conditions include the quasiharmonic approximation but not the Boussinesq formalisms. Whereas slabs, plates, and the upper thermal boundary layer of the mantle have characteristic thicknesses of hundreds of kilometers and lifetimes on the order of 100 million years, volume-scaling predicts values an order of magnitude higher for deep-mantle thermal boundary layers. This implies that deep-mantle features are sluggish and ancient. Irreversible chemical stratification is consistent with these results; plausible temperature variations in the deep mantle cause density variations that are smaller than the probable density contrasts across chemical interfaces created by accretional differentiation and magmatic processes. Deep-mantle features may be convectively isolated from upper-mantle processes. Plate tectonics and surface geochemical cycles appear to be entirely restricted to the upper ˜1,000 km. The 650-km discontinuity is mainly an isochemical phase change but major-element chemical boundaries may occur at other depths. Recycling laminates the upper mantle and also makes it statistically heterogeneous, in agreement with high-frequency scattering studies. In contrast to standard geochemical models and recent modifications, the deeper layers need not be accessible to surface volcanoes. There is no conflict between geophysical and geochemical data, but a physical basis for standard geochemical and geodynamic mantle models, including the two-layer and whole-mantle versions, and qualitative tomographic interpretations has been lacking.
A Methodology to Separate and Analyze a Seismic Wide Angle Profile
NASA Astrophysics Data System (ADS)
Weinzierl, Wolfgang; Kopp, Heidrun
2010-05-01
General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach to defining an a priori space in a tomographic study and retrieve the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to set up and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone reveals the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis which proves to be capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential in the scope of any interpretation. From this study we infer several conclusions concerning the interpretation of the tomographic approach. By calculating global as well as singular misfits of velocities we are able to map different geological units along a profile. Comparing velocity distributions with the result of a tomographic inversion along the profile we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis by a simple solution to an inverse problem and subsequent resolution of structural heterogeneities through a divergence analysis is a new and simple way of defining a priori space and estimating the a posteriori mean and covariance in singular and general form. The major advantage of a Monte Carlo based approach in our case study is the obtained knowledge of velocity depth distributions. Certainly the decision of where to extract velocity information on the profile for setting up a Monte Carlo ensemble limits the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity depth distributions. Using the wide angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract some key features and properties from the inversion results by including information concerning the confidence level of results.
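The Monte Carlo exploration of the a priori space described above can be caricatured by a toy Metropolis sampler over a three-layer velocity model with a Gaussian traveltime misfit; the forward model, data, bounds, and step sizes below are purely illustrative and much simpler than a real refraction problem.

import numpy as np

rng = np.random.default_rng(1)
thick = np.array([2.0, 3.0, 5.0])                    # fixed layer thicknesses, km
v_true = np.array([3.0, 5.0, 6.5])                   # "unknown" layer velocities, km/s
offsets = np.linspace(5.0, 60.0, 12)                 # source-receiver offsets, km

def traveltimes(v):
    """Very crude refraction-style forward model (illustrative only)."""
    return offsets / v[-1] + 2.0 * np.sum(thick / v)

sigma = 0.05                                         # assumed data noise, s
data = traveltimes(v_true) + rng.normal(0.0, sigma, offsets.size)

def log_post(v):
    if np.any(v < 1.5) or np.any(v > 8.5) or np.any(np.diff(v) < 0):
        return -np.inf                               # uniform prior: bounded, increasing with depth
    r = data - traveltimes(v)
    return -0.5 * np.sum((r / sigma) ** 2)

v = np.array([2.5, 4.5, 6.0])
lp = log_post(v)
samples = []
for _ in range(20000):
    prop = v + rng.normal(0.0, 0.1, 3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:          # Metropolis acceptance
        v, lp = prop, lp_prop
    samples.append(v.copy())
post = np.array(samples[5000:])                      # discard burn-in
print(post.mean(axis=0), post.std(axis=0))           # posterior mean and spread per layer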
Thermal Aging of Oceanic Asthenosphere
NASA Astrophysics Data System (ADS)
Paulson, E.; Jordan, T. H.
2013-12-01
To investigate the depth extent of mantle thermal aging beneath ocean basins, we project 3D Voigt-averaged S-velocity variations from an ensemble of global tomographic models onto a 1x1 degree age-based regionalization and average over bins delineated by equal increments in the square-root of crustal age. From comparisons among the bin-averaged S-wave profiles, we estimate age-dependent convergence depths (minimum depths where the age variations become statistically insignificant) as well as S travel times from these depths to a shallow reference surface. Using recently published techniques (Jordan & Paulson, JGR, doi:10.1002/jgrb.50263, 2013), we account for the aleatory variability in the bin-averaged S-wave profiles using the angular correlation functions of the individual tomographic models, we correct the convergence depths for vertical-smearing bias using their radial correlation functions, and we account for epistemic uncertainties through Bayesian averaging over the tomographic model ensemble. From this probabilistic analysis, we can assert with 90% confidence that the age-correlated variations in Voigt-averaged S velocities persist to depths greater than 170 km; i.e., more than 100 km below the mean depth of the G discontinuity (~70 km). Moreover, the S travel time above the convergence depth decays almost linearly with the square-root of crustal age out to 200 Ma, consistent with a half-space cooling model. Given the strong evidence that the G discontinuity approximates the lithosphere-asthenosphere boundary (LAB) beneath ocean basins, we conclude that the upper (and probably weakest) part of the oceanic asthenosphere, like the oceanic lithosphere, participates in the cooling that forms the kinematic plates, or tectosphere. In other words, the thermal boundary layer of a mature oceanic plate appears to be more than twice the thickness of its mechanical boundary layer. We do not discount the possibility that small-scale convection creates heterogeneities in the oceanic upper mantle; however, the large-scale flow evidently advects these small-scale heterogeneities along with the plates, allowing the upper part of the asthenosphere to continue cooling with lithospheric age. The dominance of this large-scale horizontal flow may be related to the high stresses associated with its channelization in a thin (~100 km) asthenosphere, as well as the possible focusing of the subtectospheric strain in a low-viscosity channel immediately above the 410-km discontinuity. These speculations aside, the observed thermal aging of oceanic asthenosphere is inconsistent with a tenet of plate tectonics, the LAB hypothesis, which states that lithospheric plates are decoupled from deeper mantle flow by a shear zone in the upper part of the asthenosphere.
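For reference, the half-space cooling model invoked above takes the standard textbook form

T(z,t) = T_s + (T_m - T_s)\,\operatorname{erf}\!\left(\frac{z}{2\sqrt{\kappa t}}\right), \qquad \delta_{\mathrm{TBL}}(t) \propto \sqrt{\kappa t},

so isotherms, and hence the thermal-boundary-layer thickness, deepen with the square root of age, and a vertically integrated travel-time anomaly through such a layer grows roughly linearly with the square root of crustal age, which is the behaviour of the S travel times reported above. The formula is generic background, not a result of this study.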
Alexander, Erica S; Hankins, Carol A; Machan, Jason T; Healey, Terrance T; Dupuy, Damian E
2013-03-01
To retrospectively identify the incidence and probable risk factors for rib fractures after percutaneous radiofrequency ablation (RFA) and microwave ablation (MWA) of neoplasms in the lung and to identify complications related to these fractures. Institutional review board approval was obtained for this HIPAA-compliant retrospective study. Study population was 163 patients treated with MWA and/or RFA for 195 lung neoplasms between February 2004 and April 2010. Follow-up computed tomographic images of at least 3 months were retrospectively reviewed by board-certified radiologists to determine the presence of rib fractures. Generalized estimating equations were performed to assess the effect that patient demographics, tumor characteristics, treatment parameters, and ablation zone characteristics had on development of rib fractures. Kaplan-Meier curve was used to estimate patients' probability of rib fracture after ablation as a function of time. Clinical parameters (ie, pain in ribs or chest, organ damage caused by fractured rib) were evaluated for patients with confirmed fracture. Rib fractures in proximity to the ablation zone were found in 13.5% (22 of 163) of patients. Estimated probability of fracture was 9% at 1 year and 22% at 3 years. Women were more likely than were men to develop fracture after ablation (P = .041). Patients with tumors closer to the chest wall were more likely to develop fracture (P = .0009), as were patients with ablation zones that involved visceral pleura (P = .039). No patients with rib fractures that were apparently induced by RFA and MWA had organ injury or damage related to fracture, and 9.1% (2 of 22) of patients reported mild pain. Rib fractures were present in 13.5% of patients after percutaneous RFA and MWA of lung neoplasms. Patients who had ablations performed close to the chest wall should be monitored for rib fractures.
The probability of lava inundation at the proposed and existing Kulani prison sites
Kauahikaua, J.P.; Trusdell, F.A.; Heliker, C.C.
1998-01-01
The State of Hawai`i has proposed building a 2,300-bed medium-security prison about 10 km downslope from the existing Kulani medium-security correctional facility. The proposed and existing facilities lie on the northeast rift zone of Mauna Loa, which last erupted in 1984 in this same general area. We use the best available geologic mapping and dating with GIS software to estimate the average recurrence interval between lava flows that inundate these sites. Three different methods are used to adjust the number of flows exposed at the surface for those flows that are buried to allow a better representation of the recurrence interval. Probabilities are then computed, based on these recurrence intervals, assuming that the data match a Poisson distribution. The probability of lava inundation for the existing prison site is estimated to be 11-12% in the next 50 years. The probabilities of lava inundation for the proposed sites B and C are 2-3% and 1-2%, respectively, in the same period. The probabilities are based on estimated recurrence intervals for lava flows, which are approximately proportional to the area considered. The probability of having to evacuate the prison is certainly higher than the probability of lava entering the site. Maximum warning times between eruption and lava inundation of a site are estimated to be 24 hours for the existing prison site and 72 hours for proposed sites B and C. Evacuation plans should take these times into consideration.
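A worked illustration of the Poisson-based probability computation described above. The 50-year window matches the abstract, while the recurrence intervals are round numbers back-calculated only to reproduce probabilities of the quoted order; they are not values from the report.

import math

def p_inundation(recurrence_yr, window_yr=50.0):
    """P(at least one inundating lava flow in the window) for a Poisson process."""
    return 1.0 - math.exp(-window_yr / recurrence_yr)

for site, tau in [("existing site (illustrative)", 410.0),
                  ("proposed site B (illustrative)", 2000.0),
                  ("proposed site C (illustrative)", 3300.0)]:
    print(f"{site}: {100.0 * p_inundation(tau):.1f}% in 50 years")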
Magnetic resonance imaging of granular materials
NASA Astrophysics Data System (ADS)
Stannarius, Ralf
2017-05-01
Magnetic Resonance Imaging (MRI) has become one of the most important tools to screen humans in medicine; virtually every modern hospital is equipped with a Nuclear Magnetic Resonance (NMR) tomograph. The potential of NMR in 3D imaging tasks is by far greater, but there is only "a handful" of MRI studies of particulate matter. The method is expensive, time-consuming, and requires a deep understanding of pulse sequences, signal acquisition, and processing. We give a short introduction into the physical principles of this imaging technique, describe its advantages and limitations for the screening of granular matter, and present a number of examples of different application purposes, from the exploration of granular packing, via the detection of flow and particle diffusion, to real dynamic measurements. Probably, X-ray computed tomography is preferable in most applications, but fast imaging of single slices with modern MRI techniques is unmatched, and the additional opportunity to retrieve spatially resolved flow and diffusion profiles without particle tracking is a unique feature.
Lee, Jeongmi; Geng, Joy J
2017-02-01
The efficiency of finding an object in a crowded environment depends largely on the similarity of nontargets to the search target. Models of attention theorize that the similarity is determined by representations stored within an "attentional template" held in working memory. However, the degree to which the contents of the attentional template are individually unique and where those idiosyncratic representations are encoded in the brain are unknown. We investigated this problem using representational similarity analysis of human fMRI data to measure the common and idiosyncratic representations of famous face morphs during an identity categorization task; data from the categorization task were then used to predict performance on a separate identity search task. We hypothesized that the idiosyncratic categorical representations of the continuous face morphs would predict their distractability when searching for each target identity. The results identified that patterns of activation in the lateral prefrontal cortex (LPFC) as well as in face-selective areas in the ventral temporal cortex were highly correlated with the patterns of behavioral categorization of face morphs and search performance that were common across subjects. However, the individually unique components of the categorization behavior were reliably decoded only in right LPFC. Moreover, the neural pattern in right LPFC successfully predicted idiosyncratic variability in search performance, such that reaction times were longer when distractors had a higher probability of being categorized as the target identity. These results suggest that the prefrontal cortex encodes individually unique components of categorical representations that are also present in attentional templates for target search. Everyone's perception of the world is uniquely shaped by personal experiences and preferences. Using functional MRI, we show that individual differences in the categorization of face morphs between two identities could be decoded from the prefrontal cortex and the ventral temporal cortex. Moreover, the individually unique representations in prefrontal cortex predicted idiosyncratic variability in attentional performance when looking for each identity in the "crowd" of another morphed face in a separate search task. Our results reveal that the representation of task-related information in prefrontal cortex is individually unique and preserved across categorization and search performance. This demonstrates the possibility of predicting individual behaviors across tasks with patterns of brain activity. Copyright © 2017 the authors 0270-6474/17/371257-12$15.00/0.
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
Alpha-cluster preformation factor within cluster-formation model for odd-A and odd-odd heavy nuclei
NASA Astrophysics Data System (ADS)
Saleh Ahmed, Saad M.
2017-06-01
The alpha-cluster probability that represents the preformation of the alpha particle in alpha-decay nuclei was determined for odd-A and odd-odd heavy nuclei with a high-intensity alpha-decay mode, 82 < Z < 114, 111 < N < 174. This probability was calculated using the energy-dependent formula derived from the formulation of clusterisation states representation (CSR) and the hypothesised cluster-formation model (CFM), as in our previous work. Our previous successful determination of phenomenological values of alpha-cluster preformation factors for even-even nuclei motivated us to expand the work to cover other types of nuclei. The formation energy of the interior alpha cluster had to be derived for the different nuclear systems, taking the unpaired-nucleon effect into account. The results yield phenomenological values of the alpha preformation probability and reflect the unpaired-nucleon effect as well as the magic and sub-magic shell effects in nuclei. These results and their analysis are useful for future work on the calculation of alpha-decay constants and the development of the underlying theory.
Bayen, Ute J.; Kuhlmann, Beatrice G.
2010-01-01
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source guessing probabilities to the perceived contingency between sources and item types. When they do not have a representation of a contingency, they base their guesses on prior schematic knowledge. The authors provide support for this account in two experiments with sources presenting information that was expected for one source and somewhat unexpected for another. Schema-relevant information about the sources was provided at the time of encoding. When contingency perception was impeded by dividing attention, participants showed schema-based guessing (Experiment 1). Manipulating source-item contingency also affected guessing (Experiment 2). When this contingency was schema-inconsistent, it superseded schema-based expectations and led to schema-inconsistent guessing. PMID:21603251
LaBudde, Robert A; Harnly, James M
2012-01-01
A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.
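A small sketch of the basic POI statistic with a Wilson score confidence interval; the counts are invented and the Wilson interval is used here simply as one standard way to summarize binary identification results, not necessarily the interval prescribed in the report.

import math

def poi_wilson(identified, replicates, z=1.96):
    """Probability of identification and its Wilson 95% confidence interval."""
    p = identified / replicates
    denom = 1.0 + z ** 2 / replicates
    centre = (p + z ** 2 / (2 * replicates)) / denom
    half = z * math.sqrt(p * (1 - p) / replicates + z ** 2 / (4 * replicates ** 2)) / denom
    return p, max(0.0, centre - half), min(1.0, centre + half)

# e.g. 11 of 12 replicates of the target material identified
print(poi_wilson(11, 12))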
Tomographic imaging of transparent biological samples using the pyramid phase microscope
Iglesias, Ignacio
2016-01-01
We show how a pyramid phase microscope can be used to obtain tomographic information of the spatial variation of refractive index in biological samples using the Radon transform. A method that uses the information provided by the phase microscope for axial and lateral repositioning of the sample when it rotates is also described. Its application to the reconstruction of mouse embryos in the blastocyst stage is demonstrated. PMID:27570696
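A minimal sketch of the projection and reconstruction step that underlies the approach above, using scikit-image's Radon transform routines on a synthetic slice; the pyramid-microscope phase retrieval and the repositioning procedure from the paper are not reproduced, and the phantom is invented.

import numpy as np
from skimage.transform import radon, iradon

# synthetic refractive-index-like slice: an off-centre Gaussian blob, zero outside the circle
n = 128
yy, xx = np.mgrid[:n, :n]
slice_true = np.exp(-((xx - 70.0) ** 2 + (yy - 60.0) ** 2) / (2.0 * 12.0 ** 2))
r2 = (xx - n / 2) ** 2 + (yy - n / 2) ** 2
slice_true[r2 > (n / 2 - 2) ** 2] = 0.0               # keep support inside the reconstruction circle

theta = np.linspace(0.0, 180.0, 180, endpoint=False)  # rotation angles, degrees
sinogram = radon(slice_true, theta=theta)              # line-integral projections
slice_rec = iradon(sinogram, theta=theta)              # filtered back-projection

print(float(np.abs(slice_rec - slice_true).mean()))    # crude reconstruction error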
Full-wave Moment Tensor and Tomographic Inversions Based on 3D Strain Green Tensor
2010-01-31
propagation in three-dimensional (3D) earth, linearizes the inverse problem by iteratively updating the earth model, and provides an accurate way to... self-consistent FD-SGT databases constructed from finite-difference simulations of wave propagation in full-wave tomographic models can be used to... determine the moment tensors within minutes after a seismic event, making it possible for real-time monitoring using 3D models.
Medical ultrasonic tomographic system
NASA Technical Reports Server (NTRS)
Heyser, R. C.; Lecroissette, D. H.; Nathan, R.; Wilson, R. L.
1977-01-01
An electro-mechanical scanning assembly was designed and fabricated for the purpose of generating an ultrasound tomogram. A low-cost modality was demonstrated in which analog instrumentation methods formed a tomogram on photographic film. Successful tomogram reconstructions were obtained on in vitro test objects by using the attenuation of the first-path ultrasound signal as it passed through the test object. The nearly half-century-old tomographic methods of X-ray analysis were verified as being useful for ultrasound imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelt, Daniël M.; Gürsoy, Dogˇa; Palenstijn, Willem Jan
2016-04-28
The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.
Tomographic capabilities of the new GEM based SXR diagnostic of WEST
NASA Astrophysics Data System (ADS)
Jardin, A.; Mazon, D.; O'Mullane, M.; Mlynar, J.; Loffelmann, V.; Imrisek, M.; Chernyshova, M.; Czarski, T.; Kasprowicz, G.; Wojenski, A.; Bourdelle, C.; Malard, P.
2016-07-01
The tokamak WEST (Tungsten Environment in Steady-State Tokamak) will start operating by the end of 2016 as a test bed for the ITER divertor components in long pulse operation. In this context, radiative cooling by heavy impurities like tungsten (W) in the Soft X-ray (SXR) range [0.1 keV; 20 keV] is a critical issue for plasma core performance. Thus reliable tools are required to monitor the local impurity density and avoid W accumulation. The WEST SXR diagnostic will be equipped with two new GEM (Gas Electron Multiplier) based poloidal cameras allowing 2D tomographic reconstructions to be performed in tunable energy bands. In this paper the tomographic capabilities of the Minimum Fisher Information (MFI) algorithm developed for Tore Supra and upgraded for WEST are investigated, in particular through a set of emissivity phantoms and the standard WEST scenario, including reconstruction errors, the influence of noise, and computational time.
Image Reconstruction is a New Frontier of Machine Learning.
Wang, Ge; Ye, Jong Chu; Mueller, Klaus; Fessler, Jeffrey A
2018-06-01
Over the past several years, machine learning, or more generally artificial intelligence, has generated overwhelming research interest and attracted unprecedented public attention. As tomographic imaging researchers, we share the excitement from our imaging perspective [item 1) in the Appendix], and organized this special issue dedicated to the theme of "Machine learning for image reconstruction." This special issue is a sister issue of the special issue published in May 2016 of this journal with the theme "Deep learning in medical imaging" [item 2) in the Appendix]. While the previous special issue targeted medical image processing/analysis, this special issue focuses on data-driven tomographic reconstruction. These two special issues are highly complementary, since image reconstruction and image analysis are two of the main pillars for medical imaging. Together we cover the whole workflow of medical imaging: from tomographic raw data/features to reconstructed images and then extracted diagnostic features/readings.
Single-shot ultrafast tomographic imaging by spectral multiplexing
NASA Astrophysics Data System (ADS)
Matlis, N. H.; Axley, A.; Leemans, W. P.
2012-10-01
Computed tomography has profoundly impacted science, medicine and technology by using projection measurements scanned over multiple angles to permit cross-sectional imaging of an object. The application of computed tomography to moving or dynamically varying objects, however, has been limited by the temporal resolution of the technique, which is set by the time required to complete the scan. For objects that vary on ultrafast timescales, traditional scanning methods are not an option. Here we present a non-scanning method capable of resolving structure on femtosecond timescales by using spectral multiplexing of a single laser beam to perform tomographic imaging over a continuous range of angles simultaneously. We use this technique to demonstrate the first single-shot ultrafast computed tomography reconstructions and obtain previously inaccessible structure and position information for laser-induced plasma filaments. This development enables real-time tomographic imaging for ultrafast science, and offers a potential solution to the challenging problem of imaging through scattering surfaces.
Lynch, Rod; Pitson, Graham; Ball, David; Claude, Line; Sarrut, David
2013-01-01
To develop a reproducible definition for each mediastinal lymph node station based on the new TNM classification for lung cancer. This paper proposes an atlas using the new international lymph node map used in the seventh edition of the TNM classification for lung cancer. Four radiation oncologists and 1 diagnostic radiologist were involved in the project to put forward a reproducible radiologic description for the lung lymph node stations. The International Association for the Study of Lung Cancer lymph node definitions for stations 1 to 11 have been described and illustrated on axial computed tomographic scan images using a certified radiotherapy planning system. This atlas will assist both diagnostic radiologists and radiation oncologists in accurately defining the lymph node stations on computed tomographic scan in patients diagnosed with lung cancer. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Two-dimensional tomographic terahertz imaging by homodyne self-mixing.
Mohr, Till; Breuer, Stefan; Giuliani, G; Elsäßer, Wolfgang
2015-10-19
We realize a compact two-dimensional tomographic terahertz imaging experiment involving only one photoconductive antenna (PCA) simultaneously serving as a transmitter and receiver of the terahertz radiation. A hollow-core Teflon cylinder filled with α-Lactose monohydrate powder is studied at two terahertz frequencies, far away from and at a specific absorption line of the powder. This sample is placed between the antenna and a chopper wheel, which serves as back reflector of the terahertz radiation into the PCA. Amplitude and phase information of the continuous-wave (CW) terahertz radiation are extracted from the measured homodyne self-mixing (HSM) signal after interaction with the cylinder. The influence of refraction is studied by modeling the set-up utilizing ZEMAX and is discussed by means of the measured 1D projections. The tomographic reconstruction using the Simultaneous Algebraic Reconstruction Technique (SART) allows both the object geometry and the α-Lactose filling to be identified.
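For readers unfamiliar with SART, a minimal sketch of the algebraic reconstruction step on a synthetic sinogram, using scikit-image's implementation; the terahertz amplitude/phase handling and the refraction modelling discussed in the paper are not represented, and the annular phantom is invented.

import numpy as np
from skimage.transform import radon, iradon_sart

# hollow-cylinder-like phantom: an annulus of higher absorption
n = 128
yy, xx = np.mgrid[:n, :n]
r = np.hypot(xx - n / 2, yy - n / 2)
phantom = ((r > 25) & (r < 40)).astype(float)

theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(phantom, theta=theta)                    # simulated projections

recon = iradon_sart(sinogram, theta=theta)                # one SART sweep
recon = iradon_sart(sinogram, theta=theta, image=recon)   # a second sweep refines the estimate
print(float(np.abs(recon - phantom).mean()))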
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Carlo, Francesco; Gürsoy, Doğa; Ching, Daniel J.
There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology made sub-second and multi-energy tomographic data collection possible [1], but also increased the demand to develop new reconstruction methods able to handle in-situ [2] and dynamic systems [3] that can be quickly incorporated in beamline production software [4]. The X-ray Tomography Data Bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
High-efficiency tomographic reconstruction of quantum states by quantum nondemolition measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J. S.; Centre for Quantum Technologies and Department of Physics, National University of Singapore, 3 Science Drive 2, Singapore 117542; Wei, L. F.
We propose a high-efficiency scheme to tomographically reconstruct an unknown quantum state by using a series of quantum nondemolition (QND) measurements. The proposed QND measurements of the qubits are implemented by probing the stationary transmissions through a driven dispersively coupled resonator. It is shown that only one kind of QND measurement is sufficient to determine all the diagonal elements of the density matrix of the detected quantum state. The remaining nondiagonal elements can be similarly determined by transferring them to the diagonal locations after a series of unitary operations. Compared with the tomographic reconstructions based on the usual destructive projective measurements (wherein one such measurement can determine only one diagonal element of the density matrix), the present reconstructive approach exhibits significantly high efficiency. Specifically, our generic proposal is demonstrated by the experimental circuit quantum electrodynamics systems with a few Josephson charge qubits.
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N^3) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations in order to achieve acceptable images, thereby making the use of these techniques impractical for high-resolution images. Techniques have been developed in the literature in order to reduce the computational cost of the (back)projection operator to O(N^2 log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N^2 log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.
NASA Astrophysics Data System (ADS)
Dudak, J.; Zemlicka, J.; Krejci, F.; Karch, J.; Patzelt, M.; Zach, P.; Sykora, V.; Mrzilkova, J.
2016-03-01
X-ray microradiography and microtomography are imaging techniques with increasing applicability in the field of biomedical and preclinical research. Application of the hybrid pixel detector Timepix makes it possible to obtain very high contrast for low-attenuating materials such as soft biological tissue. However, X-ray imaging of ex-vivo soft-tissue samples is a difficult task due to their structural instability. Ex-vivo biological tissue is prone to rapid drying-out, which causes undesired changes in sample size and shape and later produces artefacts in the tomographic reconstruction. In this work we present the optimization of our Timepix-equipped micro-CT system, aiming to keep soft-tissue samples in a stable condition. Thanks to the suggested approach, higher contrast of the tomographic reconstructions can be achieved, while large samples that require detector scanning can also be easily measured.
Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling
NASA Astrophysics Data System (ADS)
Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen
2010-04-01
OPTRA has developed an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize the design and build and detail system characterization and test of a prototype I-OP-FTIR instrument. System characterization includes radiometric performance and spectral resolution. Results from a series of tomographic reconstructions of sulfur hexafluoride plumes in a laboratory setting are also presented.
Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling
NASA Astrophysics Data System (ADS)
Rentz Dupuis, Julia; Mansur, David J.; Engel, James R.; Vaillancourt, Robert; Todd, Lori; Mottus, Kathleen
2008-04-01
OPTRA and University of North Carolina are developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach will be considered as a candidate referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize progress to date and overall system performance projections based on the instrument, spectroscopy, and tomographic reconstruction accuracy. We then present a preliminary optical design of the I-OP-FTIR.
NASA Technical Reports Server (NTRS)
Wu, Xiaoqing; Paden, John; Jezek, Ken; Rignot, Eric; Gim, Young
2013-01-01
We produced high-resolution bedmaps of several glaciers in western Greenland from IceBridge Mission sounding radar data using a tomographic sounding technique. The bedmaps cover 3 regions of western Greenland: the Russell, Umanaq and Jakobshavn glaciers. The covered areas are about 20x40 sq km for the Russell glaciers and 300x100 sq km, and 100x80 sq km for the Jakobshavn glaciers. The ground resolution is 50 meters and the average ice-thickness accuracy is 10 to 20 meters. There are some void areas within the swath of the tracks in the bedmaps where the ice thickness is not known. Tomographic observations of these void areas indicate that surface and shallow sub-surface pockets, likely filled with water, are highly reflective and greatly weaken the radar signal, reducing the energy reaching and reflected from the ice-sheet bottom.
NASA Astrophysics Data System (ADS)
Bourillot, Eric; Vitry, Pauline; Optasanu, Virgil; Plassard, Cédric; Lacroute, Yvon; Montessin, Tony; Lesniewska, Eric
A general challenge in metallic components is the need for materials research to improve the service lifetime of structural tanks or tubes subjected to harsh environments or to the storage medium for the products. One major problem is the formation of bubbles of the lightest chemical elements, or of other chemical associations, which can have a significant impact on the mechanical properties and structural stability of materials. The high migration mobility of these light chemical elements in solids presents a challenge for experimental characterization. Here, we present work relating to an original non-destructive tomographic technique with high spatial resolution, based on Scanning Microwave Microscopy (SMM), which is used to visualize the in-depth chemical composition of a solid solution of a light chemical element in a metal. The experiments showed the capacity of SMM for volume detection. Measurements performed at different frequencies give access to a tomographic study of the sample.
Assessment of crustal velocity models using seismic refraction and reflection tomography
NASA Astrophysics Data System (ADS)
Zelt, Colin A.; Sain, Kalachand; Naumenko, Julia V.; Sawyer, Dale S.
2003-06-01
Two tomographic methods for assessing velocity models obtained from wide-angle seismic traveltime data are presented through four case studies. The modelling/inversion of wide-angle traveltimes usually involves some aspects that are quite subjective. For example: (1) identifying and including later phases that are often difficult to pick within the seismic coda, (2) assigning specific layers to arrivals, (3) incorporating pre-conceived structure not specifically required by the data and (4) selecting a model parametrization. These steps are applied to maximize model constraint and minimize model non-uniqueness. However, these steps may cause the overall approach to appear ad hoc, and thereby diminish the credibility of the final model. The effect of these subjective choices can largely be addressed by estimating the minimum model structure required by the least subjective portion of the wide-angle data set: the first-arrival times. For data sets with Moho reflections, the tomographic velocity model can be used to invert the PmP times for a minimum-structure Moho. In this way, crustal velocity and Moho models can be obtained that require the least amount of subjective input, and the model structure that is required by the wide-angle data with a high degree of certainty can be differentiated from structure that is merely consistent with the data. The tomographic models are not intended to supersede the preferred models, since the latter model is typically better resolved and more interpretable. This form of tomographic assessment is intended to lend credibility to model features common to the tomographic and preferred models. Four case studies are presented in which a preferred model was derived using one or more of the subjective steps described above. This was followed by conventional first-arrival and reflection traveltime tomography using a finely gridded model parametrization to derive smooth, minimum-structure models. The case studies are from the SE Canadian Cordillera across the Rocky Mountain Trench, central India across the Narmada-Son lineament, the Iberia margin across the Galicia Bank, and the central Chilean margin across the Valparaiso Basin and a subducting seamount. These case studies span the range of modern wide-angle experiments and data sets in terms of shot-receiver spacing, marine and land acquisition, lateral heterogeneity of the study area, and availability of wide-angle reflections and coincident near-vertical reflection data. The results are surprising given the amount of structure in the smooth, tomographically derived models that is consistent with the more subjectively derived models. The results show that exploiting the complementary nature of the subjective and tomographic approaches is an effective strategy for the analysis of wide-angle traveltime data.
NASA Astrophysics Data System (ADS)
Staton, Robert J.
Of the various types of imaging modalities used in pediatric radiology, fluoroscopy and computed tomography (CT) have the highest associated radiation dose. While these examinations are commonly used for pediatric patients, little data exist on the magnitude of the organ and effective dose values for these procedures. Calculation of these dose values is necessary because of children's increased sensitivity to radiation and their long life expectancy over which radiation's latent effects may be expressed. In this study, a newborn tomographic phantom has been implemented in a radiation transport code to evaluate organ and effective doses for newborn patients in commonly performed fluoroscopy and CT examinations. Organ doses were evaluated for voiding cystourethrogram (VCUG) fluoroscopy studies of infant patients. Time-sequence analysis was performed for videotaped VCUG studies of five different patients. Organ dose values were then estimated for each patient through Monte Carlo (MC) simulations. The effective dose values of the VCUG examination for five patients ranged from 0.6 mSv to 3.2 mSv, with a mean of 1.8 +/- 0.9 mSv. Organ doses were also assessed for infant upper gastrointestinal (UGI) fluoroscopy exams. The effective dose values of the UGI examinations for five patients ranged from 1.05 mSv to 5.92 mSv, with a mean of 2.90 +/- 1.97 mSv. MC simulations of helical multislice CT (MSCT) exams were also completed using the newborn tomographic phantom and a stylized newborn phantom. The helical path of the source, the beam-shaping filter, the beam profile, and the patient table were all included in the MC simulations of the helical MSCT scanner. Organ and effective doses, and their dependence on scan parameters, were evaluated for newborn patients. For all CT scans, the effective dose was found to range from approximately 1 to 13 mSv, with the largest values occurring for CAP scans. Tube current modulation strategies to reduce patient dose were also evaluated for newborn patients. Overall, utilization of the newborn tomographic phantom in MC simulations has shown the need for and usefulness of pediatric tomographic phantoms. The newborn tomographic model has shown more versatility and realistic anatomical modeling when compared to the existing stylized newborn phantom. This work has provided important organ dose data for infant patients in common examinations in pediatric radiology.
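As a rough illustration of how organ doses obtained from Monte Carlo simulations are combined into an effective dose, the sketch below applies the standard tissue-weighted sum E = Σ_T w_T H_T. The organ list, dose values, and weighting factors are placeholder assumptions for illustration only, not values used in the study above.

```python
# Illustrative sketch: effective dose as a tissue-weighted sum of organ
# equivalent doses, E = sum_T w_T * H_T. All numbers below are placeholder
# values for demonstration, not data from the study.

organ_dose_mSv = {      # hypothetical organ equivalent doses (mSv)
    "stomach": 2.1,
    "colon": 1.7,
    "lungs": 0.4,
    "bladder": 3.0,
    "remainder": 0.8,
}

tissue_weight = {       # hypothetical tissue weighting factors (sum to 1)
    "stomach": 0.12,
    "colon": 0.12,
    "lungs": 0.12,
    "bladder": 0.04,
    "remainder": 0.60,
}

effective_dose = sum(tissue_weight[t] * organ_dose_mSv[t] for t in organ_dose_mSv)
print(f"Effective dose: {effective_dose:.2f} mSv")
```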
NASA Astrophysics Data System (ADS)
Yamagishi, Y.; Yanaka, H.; Tsuboi, S.
2009-12-01
We have developed a conversion tool, called the KML generator, that converts seismic tomography data into KML, and made it available on the web site (http://www.jamstec.go.jp/pacific21/google_earth). The KML generator enables us to display vertical and horizontal cross sections of a model on Google Earth in a three-dimensional manner, which is useful for understanding the Earth's interior. The previous generator accepts text files of grid-point data containing longitude, latitude, and seismic velocity anomaly, with one data file per depth. Metadata, such as the bibliographic reference, grid-point interval, and depth, are described in a separate information file. We did not allow users to upload their own tomographic models to the web application because there was no standard format for representing a tomographic model. Recently, the European seismology research project NERIES (Network of Research Infrastructures for European Seismology) has advocated that seismic tomography data should be standardized. They propose a new format based on JSON (JavaScript Object Notation), a widely used data-interchange format, as a standard for tomography. This format consists of two parts: metadata and grid-point data values. The JSON format is well suited to handling and analyzing tomographic models because its structure is fully defined by JavaScript objects, so the elements are directly accessible from a script; in addition, JSON libraries exist for several programming languages. The International Federation of Digital Seismograph Networks (FDSN) adopted this format as an FDSN standard for seismic tomographic models. This format might therefore be accepted not only by European seismologists but also as a world standard. We have therefore improved our KML generator for seismic tomography to also accept data files in JSON format, and improved the web application so that JSON-formatted data files can be uploaded. Users can convert any tomographic model data to KML. The KML obtained through the new generator should provide an arena for comparing various tomographic models and other geophysical observations on Google Earth, which may act as a common platform for geoscience browsing.
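A minimal sketch of the kind of conversion described is given below. It assumes a hypothetical JSON layout with a "metadata" part and a "grid" part of grid-point values; the field names and the KML structure are illustrative assumptions, not the generator's or the standard's actual schema.

```python
# Illustrative sketch: convert a hypothetical JSON tomographic model
# (metadata + grid-point values) into simple KML placemarks.
# The JSON field names are assumptions, not the actual standard format.
import json

example_json = """
{
  "metadata": {"reference": "example model", "depth_km": 100, "grid_deg": 5},
  "grid": [
    {"lon": 135.0, "lat": 35.0, "dvp_percent": -0.8},
    {"lon": 140.0, "lat": 35.0, "dvp_percent": 0.6}
  ]
}
"""

model = json.loads(example_json)
placemarks = []
for p in model["grid"]:
    placemarks.append(
        "  <Placemark>\n"
        f"    <name>dVp {p['dvp_percent']}%</name>\n"
        f"    <Point><coordinates>{p['lon']},{p['lat']},0</coordinates></Point>\n"
        "  </Placemark>"
    )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
    + "\n".join(placemarks)
    + "\n</Document>\n</kml>"
)
print(kml)
```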
Analysis of genomic sequences by Chaos Game Representation.
Almeida, J S; Carriço, J A; Maretzek, A; Noble, P A; Fletcher, M
2001-05-01
Chaos Game Representation (CGR) is an iterative mapping technique that processes sequences of units, such as nucleotides in a DNA sequence or amino acids in a protein, in order to find the coordinates for their position in a continuous space. This distribution of positions has two properties: it is unique, and the source sequence can be recovered from the coordinates such that distance between positions measures similarity between the corresponding sequences. The possibility of using the latter property to identify succession schemes has been entirely overlooked in previous studies, which raises the possibility that CGR may be upgraded from a mere representation technique to a sequence modeling tool. The distribution of positions in the CGR plane was shown to be a generalization of Markov chain probability tables that accommodates non-integer orders. Therefore, Markov models are particular cases of CGR models rather than the reverse, as currently accepted. In addition, the CGR generalization has both practical (computational efficiency) and fundamental (scale independence) advantages. These results are illustrated by using Escherichia coli K-12 as a test data-set, in particular, the genes thrA, thrB and thrC of the threonine operon.
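The iterative mapping itself is simple; a minimal sketch for a DNA sequence follows, with the four nucleotides assigned to the corners of the unit square and each new position placed halfway between the previous position and the corner of the current nucleotide. The corner assignment and starting point are conventional choices, not prescriptions from the paper.

```python
# Minimal Chaos Game Representation (CGR) sketch for a DNA sequence:
# each nucleotide pulls the current point halfway toward its corner of
# the unit square, producing one (x, y) coordinate per sequence position.

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_coordinates(sequence, start=(0.5, 0.5)):
    x, y = start
    points = []
    for base in sequence.upper():
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0   # move halfway toward the corner
        points.append((x, y))
    return points

print(cgr_coordinates("ATGCGC")[:3])
```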
Hybrid Histogram Descriptor: A Fusion Feature Representation for Image Retrieval.
Feng, Qinghe; Hao, Qiaohong; Chen, Yuqi; Yi, Yugen; Wei, Ying; Dai, Jiangyan
2018-06-15
Currently, visual sensors are becoming increasingly affordable and fashionable, accelerating the growth of image data. Image retrieval has attracted increasing interest due to applications in space exploration, industry, and biomedicine. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fusion feature representation called the hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor jointly comprises two histograms: a perceptually uniform histogram, extracted by exploiting the color and edge orientation information in perceptually uniform regions; and a motif co-occurrence histogram, acquired by calculating the probability of a pair of motif patterns. To evaluate the performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014 and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under the content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor achieves comparable performance but does not require any training process.
NASA Astrophysics Data System (ADS)
Cormann, Mirko; Caudano, Yves
2017-07-01
We express modular and weak values of observables of three- and higher-level quantum systems in their polar form. The Majorana representation of N-level systems in terms of symmetric states of N - 1 qubits provides us with a description on the Bloch sphere. With this geometric approach, we find that modular and weak values of observables of N-level quantum systems can be factored in N - 1 contributions. Their modulus is determined by the product of N - 1 ratios involving projection probabilities between qubits, while their argument is deduced from a sum of N - 1 solid angles on the Bloch sphere. These theoretical results allow us to study the geometric origin of the quantum phase discontinuity around singularities of weak values in three-level systems. We also analyze the three-box paradox (Aharonov and Vaidman 1991 J. Phys. A: Math. Gen. 24 2315-28) from the point of view of a bipartite quantum system. In the Majorana representation of this paradox, an observer comes to opposite conclusions about the entanglement state of the particles that were successfully pre- and postselected.
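For reference, the weak value being decomposed here is the standard pre- and postselected expression below, with |ψ⟩ the preselected and |φ⟩ the postselected state; its polar form (modulus and argument) is what the paper factors into N − 1 qubit contributions. This is the textbook definition, not a result specific to the paper.

```latex
A_{w} \;=\; \frac{\langle \varphi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \varphi \,|\, \psi \rangle}
      \;=\; |A_{w}|\, e^{\, i \arg A_{w}} .
```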
Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials
NASA Astrophysics Data System (ADS)
Malik, Pradeep; Swaminathan, A.
2010-11-01
In this work we consider certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of F distribution and are finite in number up to orthogonality. We generalize these polynomials for fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties like explicit representation in terms of hypergeometric functions, differential equations, recurrence relations are derived.
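For context, the Riemann-Liouville fractional integral of order α > 0, which underlies the fractional-order generalization considered here, is the standard operator

```latex
\left( I^{\alpha} f \right)(x) \;=\; \frac{1}{\Gamma(\alpha)} \int_{0}^{x} (x - t)^{\alpha - 1} f(t)\, \mathrm{d}t , \qquad \alpha > 0 .
```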
Wee, Natalie; Asplund, Christopher L; Chee, Michael W L
2013-06-01
Visual short-term memory (VSTM) is an important measure of information processing capacity and supports many higher-order cognitive processes. We examined how sleep deprivation (SD) and maintenance duration interact to influence the number and precision of items in VSTM, using an experimental design that limits the contribution of lapses at encoding. For each trial, participants attempted to maintain the location and color of three stimuli over a delay. After a retention interval of either 1 or 10 seconds, participants reported the color of the item at the cued location by selecting it on a color wheel. The probability of reporting the probed item, the precision of report, and the probability of reporting a nonprobed item were determined using a mixture-modeling analysis. Participants were studied twice in counterbalanced order, once after a night of normal sleep and once following a night of sleep deprivation. Setting: sleep laboratory. Participants: nineteen healthy college-age volunteers (seven females) with regular sleep patterns. Intervention: approximately 24 hours of total SD. SD selectively reduced the number of integrated representations that can be retrieved after a delay, while leaving the precision of object information in the stored representations intact. Delay interacted with SD to lower the rate of successful recall. Visual short-term memory is compromised during sleep deprivation, an effect compounded by delay. However, when memories are retrieved, they tend to be intact.
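Mixture-modeling analyses of color-wheel report errors are commonly formulated as a weighted combination of a target response distribution, a non-target (swap) component, and uniform guessing. The sketch below illustrates such a likelihood under the assumption of von Mises components; the parameter names and values are illustrative and may differ in detail from the authors' analysis.

```python
# Minimal sketch of a three-component mixture likelihood for color-wheel
# report errors (radians): target von Mises + non-target von Mises + uniform
# guessing. Parameter names and values are illustrative only.
import numpy as np

def von_mises_pdf(x, mu, kappa):
    return np.exp(kappa * np.cos(x - mu)) / (2.0 * np.pi * np.i0(kappa))

def mixture_pdf(error, nontarget_errors, p_target, p_nontarget, kappa):
    p_guess = 1.0 - p_target - p_nontarget
    target = p_target * von_mises_pdf(error, 0.0, kappa)
    nontarget = 0.0
    if len(nontarget_errors) > 0:
        nontarget = p_nontarget * np.mean(
            [von_mises_pdf(error, m, kappa) for m in nontarget_errors])
    guess = p_guess / (2.0 * np.pi)
    return target + nontarget + guess

# Example: error of 0.3 rad, two non-probed items at offsets 1.2 and -2.0 rad.
print(mixture_pdf(0.3, [1.2, -2.0], p_target=0.7, p_nontarget=0.1, kappa=8.0))
```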
The small low SNR target tracking using sparse representation information
NASA Astrophysics Data System (ADS)
Yin, Lifan; Zhang, Yiqun; Wang, Shuo; Sun, Chenggang
2017-11-01
Tracking small targets, such as missile warheads, from a remote distance is a difficult task since the targets appear as "points" that are similar to the sensor's noise points. As a result, traditional tracking algorithms only use the information contained in point measurements, such as position and intensity, to distinguish targets from noise points. In fact, because of photon diffusion, a small target is not a point on the focal plane array; it occupies an area larger than one sensor cell. So, if this geometric characteristic is taken into account as an additional dimension of information, it will help distinguish targets from noise points. In this paper, we use a method named sparse representation (SR) to depict the geometric information of the target intensity and define it as the SR information of the target. By modeling the intensity spread and solving for its SR coefficients, the SR information is represented through its likelihood function. Further, the SR information likelihood is incorporated into the conventional Probability Hypothesis Density (PHD) filter algorithm with point measurements. To illustrate the performance of the algorithm with and without the SR information, detection capability and estimation error were compared through simulation. Results demonstrate that the proposed method has higher estimation accuracy and a higher probability of detecting the target than the conventional algorithm without the SR information.
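Sparse representation coefficients of an intensity patch over a dictionary are typically obtained with a greedy or convex solver. The sketch below uses a generic orthogonal matching pursuit on synthetic data; it is only a stand-in for how SR coefficients might be computed and does not reproduce the paper's likelihood construction or its integration into the PHD filter.

```python
# Generic orthogonal matching pursuit (OMP) sketch: represent a measured
# intensity patch y as a sparse combination of dictionary atoms D.
# Synthetic data; a stand-in for the paper's SR coefficient computation.
import numpy as np

def omp(D, y, n_nonzero):
    residual, support = y.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))    # best-matching atom
        support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 40))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
y = 0.8 * D[:, 3] - 0.5 * D[:, 17]                    # patch built from 2 atoms
print(np.nonzero(omp(D, y, n_nonzero=2))[0])          # typically recovers 3, 17
```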
A hybrid probabilistic/spectral model of scalar mixing
NASA Astrophysics Data System (ADS)
Vaithianathan, T.; Collins, Lance
2002-11-01
In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure to the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent ``transfer'' while scalar exchanges between particles represent ``mixing.'' The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts the scalar gradient distribution (which is available in this representation) approaches log normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.
NASA Astrophysics Data System (ADS)
Tang, Yinan; Chen, Ping
2014-06-01
The sub-prime crisis in the U.S. reveals the limitation of diversification strategy based on mean-variance analysis. A regime switch and a turning point can be observed using a high moment representation and time-dependent transition probability. Up-down price movements are induced by interactions among agents, which can be described by the birth-death (BD) process. Financial instability is visible by dramatically increasing 3rd to 5th moments one-quarter before and during the crisis. The sudden rising high moments provide effective warning signals of a regime-switch or a coming crisis. The critical condition of a market breakdown can be identified from nonlinear stochastic dynamics. The master equation approach of population dynamics provides a unified theory of a calm and turbulent market.
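The warning-signal idea, sharp growth of the higher moments shortly before a regime switch, can be monitored with a rolling-window estimate of the 3rd to 5th standardized moments of a return series. The sketch below runs on synthetic data and is a generic illustration, not the authors' estimator or data.

```python
# Rolling 3rd-5th standardized moments of a return series as a crude
# instability indicator. Synthetic data; illustrative only.
import numpy as np

def rolling_moments(returns, window):
    out = []
    for i in range(window, len(returns) + 1):
        w = returns[i - window:i]
        z = (w - w.mean()) / w.std()
        out.append([np.mean(z ** k) for k in (3, 4, 5)])
    return np.array(out)    # columns: 3rd, 4th, and 5th standardized moments

rng = np.random.default_rng(1)
calm = rng.normal(0.0, 0.01, 400)
turbulent = rng.standard_t(df=3, size=100) * 0.03     # heavy-tailed episode
series = np.concatenate([calm, turbulent])
m = rolling_moments(series, window=60)
print("last-window 3rd-5th moments:", np.round(m[-1], 2))
```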
NASA Astrophysics Data System (ADS)
Lombardi, Olimpia; Fortin, Sebastian; Holik, Federico; López, Cristian
2017-04-01
Preface; Introduction; Part I. About the Concept of Information: 1. About the concept of information Sebastian Fortin and Olimpia Lombardi; 2. Representation, information, and theories of information Armond Duwell; 3. Information, communication, and manipulability Olimpia Lombardi and Cristian López; Part II. Information and quantum mechanics: 4. Quantum versus classical information Jeffrey Bub; 5. Quantum information and locality Dennis Dieks; 6. Pragmatic information in quantum mechanics Juan Roederer; 7. Interpretations of quantum theory: a map of madness Adán Cabello; Part III. Probability, Correlations, and Information: 8. On the tension between ontology and epistemology in quantum probabilities Amit Hagar; 9. Inferential versus dynamical conceptions of physics David Wallace; 10. Classical models for quantum information Federico Holik and Gustavo Martin Bosyk; 11. On the relative character of quantum correlations Guido Bellomo and Ángel Ricardo Plastino; Index.
Computer models of social processes: the case of migration.
Beshers, J M
1967-06-01
The demographic model is a program for representing births, deaths, migration, and social mobility as social processes in a non-stationary stochastic process (Markovian). Transition probabilities for each age group are stored and then retrieved at the next appearance of that age cohort. In this way new transition probabilities can be calculated as a function of the old transition probabilities and of two successive distribution vectors. Transition probabilities can be calculated to represent effects of the whole age-by-state distribution at any given time period, too. Such effects as saturation or queuing may be represented by a market mechanism; for example, migration between metropolitan areas can be represented as depending upon job supplies and labor markets. Within metropolitan areas, migration can be represented as invasion and succession processes with tipping points (acceleration curves), and the market device has been extended to represent this phenomenon. Thus, the demographic model makes possible the representation of alternative classes of models of demographic processes. With each class of model one can deduce implied time series (varying parameters within the class) and the output of the several classes can be compared to each other and to outside criteria, such as empirical time series.
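A minimal sketch of the basic bookkeeping described above, advancing a state distribution with a transition matrix and then recomputing the transition probabilities from the old probabilities and two successive distribution vectors, is given below. The feedback rule shown is a placeholder of my own, not Beshers' actual update function.

```python
# Minimal sketch of a cohort transition-matrix update: advance a state
# distribution with a Markov matrix, then adjust transition probabilities
# as a function of the old matrix and two successive distributions.
# The adjustment rule is a placeholder, not the model's actual mechanism.
import numpy as np

P = np.array([[0.90, 0.10],       # rows: origin state, cols: destination state
              [0.25, 0.75]])
x_old = np.array([0.6, 0.4])      # population shares across two areas

x_new = x_old @ P                 # one time step of the stochastic process

# Placeholder feedback: damp out-migration from a state whose share shrank
# (a crude stand-in for saturation / labor-market effects).
shrink = np.clip((x_old - x_new) / np.maximum(x_old, 1e-12), 0.0, 1.0)
P_next = P * (1.0 - 0.5 * shrink[:, None])
P_next[np.arange(2), np.arange(2)] += 1.0 - P_next.sum(axis=1)   # renormalize rows

print(x_new, "\n", P_next)
```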
A functional model of sensemaking in a neurocognitive architecture.
Lebiere, Christian; Pirolli, Peter; Thomson, Robert; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R
2013-01-01
Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment.
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
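A toy numeric illustration of the decomposition idea, under the simplifying assumption of independent subsystems with exponential transitions, is sketched below: the probability of a combined failure condition, including one that arises from non-failure (degraded) states of the subsystems, is composed from subsystem state probabilities without building the full combined Markov model. This is only a schematic, not the SHARPE closed-form machinery; rates and the failure condition are illustrative assumptions.

```python
# Toy decomposable-reliability sketch: two independent subsystems, each with
# competing exponential transitions from "full" to "failed" or "degraded".
# The combined failure probability is composed from subsystem state
# probabilities without solving the combined model. Rates are hypothetical.
import math

t = 100.0
lam_fail_a, lam_deg_a = 0.001, 0.004      # hypothetical rates for subsystem A
lam_fail_b, lam_deg_b = 0.002, 0.003      # hypothetical rates for subsystem B

def state_probs(lam_fail, lam_deg, t):
    # Competing exponential transitions out of "full": failed or degraded.
    p_full = math.exp(-(lam_fail + lam_deg) * t)
    p_left = 1.0 - p_full
    total = lam_fail + lam_deg
    return {"full": p_full,
            "failed": p_left * lam_fail / total,
            "degraded": p_left * lam_deg / total}

A = state_probs(lam_fail_a, lam_deg_a, t)
B = state_probs(lam_fail_b, lam_deg_b, t)

# Combined failure: either subsystem failed outright, OR both merely degraded
# (a failure state arising from non-failure states of the subsystems).
p_fail = 1.0 - (1.0 - A["failed"]) * (1.0 - B["failed"])
p_fail += A["degraded"] * B["degraded"]
print(f"P(system failure by t={t:.0f}) = {p_fail:.4f}")
```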
The known unknowns: neural representation of second-order uncertainty, and ambiguity
Bach, Dominik R.; Hulme, Oliver; Penny, William D.; Dolan, Raymond J.
2011-01-01
Predictions provided by action-outcome probabilities entail a degree of (first-order) uncertainty. However, these probabilities themselves can be imprecise and embody second-order uncertainty. Tracking second-order uncertainty is important for optimal decision making and reinforcement learning. Previous functional magnetic resonance imaging investigations of second-order uncertainty in humans have drawn on an economic concept of ambiguity, where action-outcome associations in a gamble are either known (unambiguous) or completely unknown (ambiguous). Here, we relaxed the constraints associated with a purely categorical concept of ambiguity and varied the second-order uncertainty of gambles continuously, quantified as entropy over second-order probabilities. We show that second-order uncertainty influences decisions in a pessimistic way by biasing second-order probabilities, and that second-order uncertainty is negatively correlated with posterior cingulate cortex activity. The category of ambiguous (compared to non-ambiguous) gambles also biased choice in a similar direction, but was associated with distinct activation of a posterior parietal cortical area; an activation that we show reflects a different computational mechanism. Our findings indicate that behavioural and neural responses to second-order uncertainty are distinct from those associated with ambiguity and may call for a reappraisal of previous data. PMID:21451019
Quantum stochastic walks on networks for decision-making.
Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo
2016-03-31
Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence shows promise of a more general quantum theory providing a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how the behavioral choice-probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce's response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectivist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process reveals to be also a measure of the process' degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrate human-like reasoning biases in stochastic models for decision-making.
Tomographic Neutron Imaging using SIRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregor, Jens; FINNEY, Charles E A; Toops, Todd J
2013-01-01
Neutron imaging is complementary to x-ray imaging in that materials such as water and plastic are highly attenuating while material such as metal is nearly transparent. We showcase tomographic imaging of a diesel particulate filter. Reconstruction is done using a modified version of SIRT called PSIRT. We expand on previous work and introduce Tikhonov regularization. We show that near-optimal relaxation can still be achieved. The algorithmic ideas apply to cone beam x-ray CT and other inverse problems.
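For orientation, a plain SIRT iteration with an optional Tikhonov-style damping term can be sketched as below on a tiny synthetic system; this is generic SIRT, not the modified PSIRT of the paper, and the data are synthetic.

```python
# Plain SIRT sketch with an optional ridge-style damping term, on a small
# synthetic system A x = b. Generic SIRT, not the paper's PSIRT variant.
import numpy as np

def sirt(A, b, n_iter=500, relax=1.0, tikhonov=0.0):
    row_sum = np.maximum(A.sum(axis=1), 1e-12)    # diagonal of R^-1
    col_sum = np.maximum(A.sum(axis=0), 1e-12)    # diagonal of C^-1
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = b - A @ x
        update = (A.T @ (residual / row_sum)) / col_sum
        x = x + relax * update - relax * tikhonov * x   # ridge-style damping
        x = np.clip(x, 0.0, None)                       # non-negativity
    return x

rng = np.random.default_rng(2)
x_true = np.array([0.0, 1.0, 2.0, 0.5])
A = rng.uniform(0.0, 1.0, size=(8, 4))
b = A @ x_true
print(np.round(sirt(A, b), 3))    # approaches x_true for this consistent system
```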
1987-01-01
…of Single Words Using Positron Emission Tomographic Measurements of Cerebral Blood Flow Change. Petersen, Steven E.; Fox, Peter T.; Posner, Michael I.; Raichle, Marcus E. McDonnell Center for Studies of Higher Brain Function. [Report cover-page fragment; the abstract begins: "Language is an essential characteristic of the human…"]
2015-05-18
…were common on presentation. Head computed tomographic scans most commonly found skull fracture (68.9% of patients), subdural hematoma (54.1%), and cerebral contusion (51.4%); subdural hematoma was the most common type of intracranial hemorrhage. Hypertonic saline…
Computer tomographic evaluation of digestive tract non-Hodgkin lymphomas.
Lupescu, Ioana G; Grasu, Mugur; Goldis, Gheorghe; Popa, Gelu; Gheorghe, Cristian; Vasilescu, Catalin; Moicean, Andreea; Herlea, Vlad; Georgescu, Serban A
2007-09-01
Computed tomographic (CT) study is crucial for defining the distribution, characteristics, and staging of primary gastrointestinal lymphomas. The presence of multifocal sites and of wall thickening with diffuse infiltration of the affected gastrointestinal (GI) segment, in association with regional adenopathies, orients the CT diagnosis toward primary GI lymphoma. In all cases of digestive tract non-Hodgkin lymphoma (NHL), the gold standard for diagnosis remains histological examination, which provides a tissue diagnosis and is performed preferably on transmural biopsy material.
NASA Astrophysics Data System (ADS)
Kunitsyn, V.; Nesterov, I.; Andreeva, E.; Zelenyi, L.; Veselov, M.; Galperin, Y.; Buchner, J.
A satellite radiotomography method for electron density distributions was recently proposed for a closely-spaced multi-spacecraft group of high-altitude satellites to study the physics of the reconnection process. The original idea of the ROY project is to use a constellation of spacecraft (one main satellite and several subsatellites) to carry out closely-spaced multipoint measurements and 2D tomographic reconstruction of electron density in the space between the main satellite and the subsatellites. The distances between the satellites were chosen to vary from dozens to a few hundred kilometers. The easiest data interpretation is achieved when the subsatellites are placed along the plasma streamline: whenever a plasma density irregularity moves between the main satellite and the subsatellites, it is scanned in different directions, and the 2D plasma distribution can be obtained from these projections. In general, however, the subsatellites are not placed exactly along the plasma streamline. The method of plasma velocity determination relative to multi-spacecraft systems is considered, and the possibilities of 3D tomographic imaging using multi-spacecraft systems are analyzed. The modeling has shown that an efficient scheme for 3D tomographic imaging would be to place the spacecraft in different planes so that the angle between the planes is no more than ten degrees. Work is supported by INTAS PROJECT 2000-465.
NASA Technical Reports Server (NTRS)
Succi, G. P.
1983-01-01
The techniques of helicopter rotor noise prediction attempt to describe precisely the details of the noise field and remove the empiricisms and restrictions inherent in previous methods. These techniques require detailed inputs of the rotor geometry, operating conditions, and blade surface pressure distribution. The Farassat noise prediction technique was studied, and high-speed helicopter noise prediction using more detailed representations of the thickness and loading noise sources was investigated. These predictions were based on the measured blade surface pressures on an AH-1G rotor and compared to the measured sound field. Although refinements in the representation of the thickness and loading noise sources improve the calculation, there are still discrepancies between the measured and predicted sound fields. Analysis of the blade surface pressure data indicates shocks on the blades, which are probably responsible for these discrepancies.
Non-adiabatic molecular dynamics with complex quantum trajectories. I. The diabatic representation.
Zamstein, Noa; Tannor, David J
2012-12-14
We extend a recently developed quantum trajectory method [Y. Goldfarb, I. Degani, and D. J. Tannor, J. Chem. Phys. 125, 231103 (2006)] to treat non-adiabatic transitions. Each trajectory evolves on a single surface according to Newton's laws with complex positions and momenta. The transfer of amplitude between surfaces stems naturally from the equations of motion, without the need for surface hopping. In this paper we derive the equations of motion and show results in the diabatic representation, which is rarely used in trajectory methods for calculating non-adiabatic dynamics. We apply our method to the first two benchmark models introduced by Tully [J. Chem. Phys. 93, 1061 (1990)]. Besides giving the probability branching ratios between the surfaces, the method also allows the reconstruction of the time-dependent wavepacket. Our results are in quantitative agreement with converged quantum mechanical calculations.
Mariani Canova, Giordana
2008-01-01
The paper intends to demonstrate the influence that scientific doctrines, mostly Pietro d'Abano's astrological and medical studies, had on Giotto's painting at the Cappella degli Scrovegni in Padova and on his lost astrological cycle in the Sala della Ragione. It is emphasized that in no other painting did Giotto display as much intellectualism as in the Cappella degli Scrovegni. There we can note the importance of the physical representation of the sky and stars, and the figures' particular physiognomic characterization, referable to Pietro d'Abano's theories as presented in his astrological treatises and in his Compilatio Physionomiae. Even the exceptional botanical realism displayed in the representation of plants can probably be referred to Pietro d'Abano's scientific teaching. A hypothetical reconstruction of Giotto's astrological cycle in the Sala della Ragione, according to Ptolemy's theories and Pietro d'Abano's physiognomics, is also proposed.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
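The key operation, applying a transfer function to a voxel-neighborhood pdf rather than to a single filtered value, amounts to taking the expectation of the transfer function under that pdf. The sketch below illustrates this on a 1D intensity histogram with a hypothetical transfer function; it is a deliberate simplification of the paper's 4D Gaussian-mixture representation.

```python
# Minimal sketch: apply a transfer function to a per-voxel intensity pdf by
# taking the expectation of the transfer function under the pdf, instead of
# classifying a single down-sampled value. 1D simplification, synthetic data.
import numpy as np

bins = np.linspace(0.0, 1.0, 64)                     # intensity range
pdf = np.exp(-0.5 * ((bins - 0.3) / 0.05) ** 2)      # bimodal neighborhood pdf
pdf += 0.5 * np.exp(-0.5 * ((bins - 0.8) / 0.04) ** 2)
pdf /= pdf.sum()

def transfer_function(i):
    # Hypothetical TF: opacity ramps up for bright intensities only.
    return np.clip((i - 0.6) / 0.2, 0.0, 1.0)

expected_opacity = np.sum(pdf * transfer_function(bins))
naive_opacity = transfer_function(np.sum(pdf * bins))   # TF of the mean intensity
print(f"pdf-based: {expected_opacity:.3f}  vs  mean-value: {naive_opacity:.3f}")
```

The pdf-based value still registers the bright mode of the neighborhood, whereas classifying the mean intensity misses it, which is the consistency argument made above.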
Homogeneous quantum electrodynamic turbulence
NASA Technical Reports Server (NTRS)
Shebalin, John V.
1992-01-01
The electromagnetic field equations and Dirac equations for oppositely charged wave functions are numerically time-integrated using a spatial Fourier method. The numerical approach used, a spectral transform technique, is based on a continuum representation of physical space. The coupled classical field equations contain a dimensionless parameter which sets the strength of the nonlinear interaction (as the parameter increases, interaction volume decreases). For a parameter value of unity, highly nonlinear behavior in the time-evolution of an individual wave function, analogous to ideal fluid turbulence, is observed. In the truncated Fourier representation which is numerically implemented here, the quantum turbulence is homogeneous but anisotropic and manifests itself in the nonlinear evolution of equilibrium modal spatial spectra for the probability density of each particle and also for the electromagnetic energy density. The results show that nonlinearly interacting fermionic wave functions quickly approach a multi-mode, dynamic equilibrium state, and that this state can be determined by numerical means.
Immediate tool incorporation processes determine human motor planning with tools
Ganesh, G.; Yoshioka, T.; Osu, R.; Ikegami, T.
2014-01-01
Human dexterity with tools is believed to stem from our ability to incorporate and use tools as parts of our body. However, tool incorporation, evident as extensions in our body representation and peri-personal space, has been observed predominantly after extended tool exposure and does not explain our immediate motor behaviours when we change tools. Here we utilize two novel experiments to elucidate the presence of additional immediate tool incorporation effects that determine motor planning with tools. Interestingly, tools were observed to immediately induce a trial-by-trial, tool-length-dependent shortening of the perceived limb lengths, opposite to the elongations observed after extended tool use. Our results thus show that tools have a dual effect on our body representation: an immediate shortening that critically affects motor planning with a new tool, and a slow elongation, probably a consequence of skill-related changes in sensory-motor mappings with repeated use of the tool. PMID:25077612
New feature extraction method for classification of agricultural products from x-ray images
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.
1999-01-01
Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.
Cultural Consensus Theory: Aggregating Continuous Responses in a Finite Interval
NASA Astrophysics Data System (ADS)
Batchelder, William H.; Strashny, Alex; Romney, A. Kimball
Cultural consensus theory (CCT) consists of cognitive models for aggregating responses of "informants" to test items about some domain of their shared cultural knowledge. This paper develops a CCT model for items requiring bounded numerical responses, e.g. probability estimates, confidence judgments, or similarity judgments. The model assumes that each item generates a latent random representation in each informant, with mean equal to the consensus answer and variance depending jointly on the informant and the location of the consensus answer. The manifest responses may reflect biases of the informants. Markov Chain Monte Carlo (MCMC) methods were used to estimate the model, and simulation studies validated the approach. The model was applied to an existing cross-cultural dataset involving native Japanese and English speakers judging the similarity of emotion terms. The results sharpened earlier studies that showed that both cultures appear to have very similar cognitive representations of emotion terms.
On the nonlocal predictions of quantum optics
NASA Technical Reports Server (NTRS)
Marshall, Trevor W.; Santos, Emilio; Vidiella-Barranco, Antonio
1994-01-01
We give a definition of locality in quantum optics based upon Bell's work, and show that locality has been violated in no experiment performed up to now. We argue that the interpretation of the Wigner function as a probability density gives a very attractive local realistic picture of quantum optics provided that this function is nonnegative. We conjecture that this is the case for all states which can be realized in the laboratory. In particular, we believe that the usual representation of 'single photon states' by a Fock state of the Hilbert space is not correct and that a more physical, although less simple mathematically, representation involves density matrices. We study in some detail the experiment showing anticorrelation after a beam splitter and prove that it naturally involves a positive Wigner function. Our (quantum) predictions for this experiment disagree with the ones reported in the literature.
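For reference, the Wigner function discussed here is the standard phase-space representation of a pure state ψ; its interpretation as a probability density requires exactly the non-negativity that the authors conjecture for laboratory-realizable states.

```latex
W(q, p) \;=\; \frac{1}{\pi \hbar} \int_{-\infty}^{\infty} \psi^{*}(q + y)\, \psi(q - y)\, e^{\, 2 i p y / \hbar}\, \mathrm{d}y .
```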
Low excitatory innervation balances high intrinsic excitability of immature dentate neurons
Dieni, Cristina V.; Panichi, Roberto; Aimone, James B.; Kuo, Chay T.; Wadiche, Jacques I.; Overstreet-Wadiche, Linda
2016-01-01
Persistent neurogenesis in the dentate gyrus produces immature neurons with high intrinsic excitability and low levels of inhibition that are predicted to be more broadly responsive to afferent activity than mature neurons. Mounting evidence suggests that these immature neurons are necessary for generating distinct neural representations of similar contexts, but it is unclear how broadly responsive neurons help distinguish between similar patterns of afferent activity. Here we show that stimulation of the entorhinal cortex in mouse brain slices paradoxically generates spiking of mature neurons in the absence of immature neuron spiking. Immature neurons with high intrinsic excitability fail to spike due to insufficient excitatory drive that results from low innervation rather than silent synapses or low release probability. Our results suggest that low synaptic connectivity prevents immature neurons from responding broadly to cortical activity, potentially enabling excitable immature neurons to contribute to sparse and orthogonal dentate representations. PMID:27095423
Looking sharp: Becoming a search template boosts precision and stability in visual working memory.
Rajsic, Jason; Ouslis, Natasha E; Wilson, Daryl E; Pratt, Jay
2017-08-01
Visual working memory (VWM) plays a central role in visual cognition, and current work suggests that there is a special state in VWM for items that are the goal of visual searches. However, whether the quality of memory for target templates differs from memory for other items in VWM is currently unknown. In this study, we measured the precision and stability of memory for search templates and accessory items to determine whether search templates receive representational priority in VWM. Memory for search templates exhibited increased precision and probability of recall, whereas accessory items were remembered less often. Additionally, while memory for Templates showed benefits when instances of the Template appeared in search, this benefit was not consistently observed for Accessory items when they appeared in search. Our results show that becoming a search template can substantially affect the quality of a representation in VWM.
Using the Logarithm of Odds to Define a Vector Space on Probabilistic Atlases
Pohl, Kilian M.; Fisher, John; Bouix, Sylvain; Shenton, Martha; McCarley, Robert W.; Grimson, W. Eric L.; Kikinis, Ron; Wells, William M.
2007-01-01
The Logarithm of the Odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for non-convex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. PMID:17698403
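The core mapping can be illustrated directly for a binary label: probabilities go to log-odds (the vector space), addition of log-odds corresponds to a normalized product of probabilities, and scalar multiplication to a normalized power. The sketch below shows this identity numerically; it is a simplification of the paper's multi-label formulation, and the probability values are arbitrary examples.

```python
# Small sketch of the LogOdds idea for a binary label: map probabilities to
# log-odds, operate in that vector space, and map back. Addition in log-odds
# space corresponds to a normalized product of probabilities.
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

p_atlas_a = np.array([0.2, 0.5, 0.9])    # P(voxel in structure), atlas A
p_atlas_b = np.array([0.4, 0.6, 0.7])    # same voxels, atlas B

combined = expit(logit(p_atlas_a) + logit(p_atlas_b))    # vector addition

# Equivalent probabilistic view: normalized product of the two probabilities.
prod = p_atlas_a * p_atlas_b
check = prod / (prod + (1.0 - p_atlas_a) * (1.0 - p_atlas_b))
print(np.allclose(combined, check))      # True

half = expit(0.5 * logit(p_atlas_a))     # scalar multiplication pulls
print(np.round(half, 3))                 #   probabilities toward 0.5
```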
Jaffray, D A; Drake, D G; Moreau, M; Martinez, A A; Wong, J W
1999-10-01
Dose escalation in conformal radiation therapy requires accurate field placement. Electronic portal imaging devices are used to verify field placement but are limited by the low subject contrast of bony anatomy at megavoltage (MV) energies, the large imaging dose, and the small size of the radiation fields. In this article, we describe the in-house modification of a medical linear accelerator to provide radiographic and tomographic localization of bone and soft-tissue targets in the reference frame of the accelerator. This system separates the verification of beam delivery (machine settings, field shaping) from patient and target localization. A kilovoltage (kV) x-ray source is mounted on the drum assembly of an Elekta SL-20 medical linear accelerator, maintaining the same isocenter as the treatment beam with the central axis at 90 degrees to the treatment beam axis. The x-ray tube is powered by a high-frequency generator and can be retracted to the drum-face. Two CCD-based fluoroscopic imaging systems are mounted on the accelerator to collect MV and kV radiographic images. The system is also capable of cone-beam tomographic imaging at both MV and kV energies. The gain stages of the two imaging systems have been modeled to assess imaging performance. The contrast-resolution of the kV and MV systems was measured using a contrast-detail (C-D) phantom. The dosimetric advantage of using the kV imaging system over the MV system for the detection of bone-like objects is quantified for a specific imaging geometry using a C-D phantom. Accurate guidance of the treatment beam requires registration of the imaging and treatment coordinate systems. The mechanical characteristics of the treatment and imaging gantries are examined to determine a localizing precision assuming an unambiguous object. MV and kV radiographs of patients receiving radiation therapy are acquired to demonstrate the radiographic performance of the system. The tomographic performance is demonstrated on phantoms using both the MV and the kV imaging system, and the visibility of soft-tissue targets is assessed. Characterization of the gains in the two systems demonstrates that the MV system is x-ray quantum noise-limited at very low spatial frequencies; this is not the case for the kV system. The estimates of gain used in the model are validated by measurements of the total gain in each system. Contrast-detail measurements demonstrate that the MV system is capable of detecting subject contrasts of less than 0.1% (at 6 and 18 MV). A comparison of the kV and MV contrast-detail performance indicates that equivalent bony object detection can be achieved with the kV system at significantly lower doses (factors of 40 and 90 lower than for 6 and 18 MV, respectively). The tomographic performance of the system is promising; soft-tissue visibility is demonstrated at relatively low imaging doses (3 cGy) using four laboratory rats. We have integrated a kV radiographic and tomographic imaging system with a medical linear accelerator to allow localization of bone and soft-tissue structures in the reference frame of the accelerator. Modeling and experiments have demonstrated the feasibility of acquiring high-quality radiographic and tomographic images at acceptable imaging doses. Full integration of the kV and MV imaging systems with the treatment machine will allow on-line radiographic and tomographic guidance of field placement.