Tomographic Processing of Synthetic Aperture Radar Signals for Enhanced Resolution
1989-11-01
to image 3 larger scenes, this problem becomes more important. A byproduct of this investigation is a duality theorem which is a generalization of the ... well-known Projection-Slice Theorem. The second problem proposed is that of imaging a rapidly-spinning object, for example in inverse SAR mode ... slices is absent. There is a possible connection of the word to the Projection-Slice Theorem, but, as seen in Chapter 4, even this is absent in the
Fan beam image reconstruction with generalized Fourier slice theorem.
Zhao, Shuangren; Yang, Kang; Yang, Kevin
2014-01-01
For parallel-beam geometry, Fourier reconstruction works via the Fourier slice theorem (also called the central slice theorem or projection-slice theorem). For the fan-beam situation, the Fourier slice theorem can be extended to a generalized Fourier slice theorem (GFST) for fan-beam image reconstruction. We briefly introduced this method in a conference publication; this paper reintroduces the GFST method for fan-beam geometry in detail. The GFST method can be described as follows: the Fourier plane is filled by adding up the contributions from all fan-beam projections individually; thereby the values in the Fourier plane are calculated directly on Cartesian coordinates, avoiding the interpolation from polar to Cartesian coordinates in the Fourier domain; an inverse fast Fourier transform is then applied to the image in the Fourier plane, yielding a reconstructed image in the spatial domain. Reconstructed images are compared between the GFST method and the filtered backprojection (FBP) method. The major differences between the GFST and FBP methods are: (1) the interpolation is applied to different data sets. The GFST method interpolates the projection data, whereas the FBP method interpolates the filtered projection data. (2) The filtering is done in different places. The GFST filters in the Fourier domain, whereas the FBP method applies the ramp filter to the projections. The resolution of the ramp filter varies with location, but the filter in the Fourier domain gives a resolution that is invariant with location. One advantage of the GFST method over the FBP method is that, in the short-scan situation, an exact solution can be obtained with the GFST method but not with the FBP method. The calculations of both the GFST and the FBP methods are O(N
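A minimal numerical sketch (Python/NumPy, toy phantom; not code from the paper) of the parallel-beam Fourier slice theorem that the GFST generalizes: the 1D FFT of a parallel projection equals the corresponding central slice of the object's 2D FFT.

```python
import numpy as np

# Toy 2D object; values are arbitrary.
rng = np.random.default_rng(0)
img = rng.random((64, 64))

proj = img.sum(axis=0)            # parallel projection along axis 0
slice_1d = np.fft.fft(proj)       # 1D FFT of the projection
central = np.fft.fft2(img)[0, :]  # zero-frequency row of the 2D FFT

# The theorem says these agree (here for the axis-aligned direction;
# an arbitrary angle corresponds to a rotated slice).
print(np.allclose(slice_1d, central))  # True
```

The same identity, read in reverse, is what lets Fourier methods fill the Fourier plane from projection data and then reconstruct with a single inverse FFT.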
Generalized Fourier slice theorem for cone-beam image reconstruction.
Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang
2015-01-01
Cone-beam reconstruction theory was developed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Pierre Grangeat in 1990. The Fourier slice theorem was proposed by Bracewell in 1956 and leads to the Fourier image reconstruction method for parallel-beam geometry. The Fourier slice theorem was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above-mentioned cone-beam image reconstruction theory and the Fourier slice theory for fan-beam geometry, the Fourier slice theorem in cone-beam geometry was proposed by Zhao in 1995 in a short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem that the value at the origin of Fourier space takes the indeterminate form 0/0 has been overcome; the 0/0 type of limit is properly handled. As examples, implementation results for the single-circle and two-perpendicular-circle source orbits are shown. In the cone-beam reconstruction, if an interpolation process is considered, the number of calculations for the generalized Fourier slice theorem algorithm is
Projection-slice theorem based 2D-3D registration
NASA Astrophysics Data System (ADS)
van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.
2007-03-01
In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
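A small sketch (Python/NumPy, synthetic image; not the authors' code) of the key fact behind a translation-invariant similarity measure on Fourier transforms: a circular shift changes only the phase of the spectrum, never its magnitude.

```python
import numpy as np

# Stand-in for a projection image; values are arbitrary.
rng = np.random.default_rng(5)
proj = rng.random((64, 64))
moved = np.roll(proj, (7, -3), axis=(0, 1))  # arbitrary circular translation

# Fourier magnitudes are identical under translation, so a similarity
# measure built on them is insensitive to in-plane shifts.
mag = np.abs(np.fft.fft2(proj))
mag_moved = np.abs(np.fft.fft2(moved))
print(np.allclose(mag, mag_moved))  # True
```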
The derivative-free Fourier shell identity for photoacoustics.
Baddour, Natalie
2016-01-01
In X-ray tomography, the Fourier slice theorem provides a relationship between the Fourier components of the object being imaged and the measured projection data. The Fourier slice theorem is the basis for X-ray Fourier-based tomographic inversion techniques. A similar relationship, referred to as the 'Fourier shell identity', has been previously derived for photoacoustic applications. However, this identity relates the pressure wavefield data function and its normal derivative measured on an arbitrary enclosing aperture to the three-dimensional Fourier transform of the enclosed object evaluated on a sphere. Since the normal derivative of pressure is not normally measured, the applicability of the formulation is limited in this form. In this paper, alternative derivations of the Fourier shell identity in 1D, 2D polar and 3D spherical polar coordinates are presented. The presented formulations do not require the normal derivative of pressure, making the formulas directly adaptable for Fourier-based absorber reconstructions.
1986-09-01
necessary to define "canonical" parameterizations. Examples of proposed parameterizations are Munge N ... of a slice of the surface oriented along the vector CT on the surface is given by (A4.24). It is clear from the above expression that when a slice
SAGA: A project to automate the management of software production systems
NASA Technical Reports Server (NTRS)
Campbell, R. H.
1983-01-01
The current work in progress for the SAGA project is described. The highlights of this research are: a parser-independent SAGA editor; a design for the screen-editing facilities of the editor; delivery to NASA of release 1 of Olorin, the SAGA parser generator; personal workstation environment research; release 1 of the SAGA symbol table manager; delta generation in SAGA; requirements for a proof management system; documentation for and testing of the Cyber Pascal make prototype; a prototype Cyber-based slicing facility; a June 1984 demonstration plan; SAGA utility programs; a summary of UNIX software engineering support; and a theorem prover review.
Pohit, M; Sharma, J
2015-05-10
Image recognition in the presence of both rotation and translation is a longstanding problem in correlation pattern recognition. Use of the log-polar transform gives a solution to this problem, but at the cost of losing vital phase information from the image. The main objective of this paper is to develop an algorithm, based on the Fourier slice theorem, for measuring the simultaneous rotation and translation of an object in a 2D plane. The algorithm is applicable for any arbitrary object shift over a full 180° rotation.
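A minimal sketch (Python/NumPy; not the paper's algorithm) of the translation half of the problem: phase correlation, the standard Fourier-domain tool such shift-and-rotation methods build on, recovers a pure circular shift exactly. The image and shift values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
shift = (3, 5)
moved = np.roll(ref, shift, axis=(0, 1))  # moved[n] = ref[n - shift]

# Normalized cross-power spectrum: a shift leaves only a phase ramp,
# whose inverse FFT is a delta peak at the shift itself.
F_ref, F_mov = np.fft.fft2(ref), np.fft.fft2(moved)
R = F_mov * np.conj(F_ref)
R /= np.abs(R)
corr = np.fft.ifft2(R).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(tuple(int(v) for v in peak))  # (3, 5)
```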
Constant mean curvature slicings of Kantowski-Sachs spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinzle, J. Mark
2011-04-15
We investigate existence, uniqueness, and the asymptotic properties of constant mean curvature (CMC) slicings in vacuum Kantowski-Sachs spacetimes with positive cosmological constant. Since these spacetimes violate the strong energy condition, most of the general theorems on CMC slicings do not apply. Although there are in fact Kantowski-Sachs spacetimes with a unique CMC foliation or CMC time function, we prove that there also exist Kantowski-Sachs spacetimes with an arbitrary number of (families of) CMC slicings. The properties of these slicings are analyzed in some detail.
Riemannian and Lorentzian flow-cut theorems
NASA Astrophysics Data System (ADS)
Headrick, Matthew; Hubeny, Veronika E.
2018-05-01
We prove several geometric theorems using tools from the theory of convex optimization. In the Riemannian setting, we prove the max flow-min cut (MFMC) theorem for boundary regions, applied recently to develop a ‘bit-thread’ interpretation of holographic entanglement entropies. We also prove various properties of the max flow and min cut, including respective nesting properties. In the Lorentzian setting, we prove the analogous MFMC theorem, which states that the volume of a maximal slice equals the flux of a minimal flow, where a flow is defined as a divergenceless timelike vector field with norm at least 1. This theorem includes as a special case a continuum version of Dilworth’s theorem from the theory of partially ordered sets. We include a brief review of the necessary tools from the theory of convex optimization, in particular Lagrangian duality and convex relaxation.
Analytical properties of time-of-flight PET data.
Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M
2008-06-07
We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the 'bow-tie' property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data.
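A toy version (Python/NumPy, single view, invented sizes; not the authors' code) of the data model stated in this abstract: each TOF bin is a line integral weighted by a spatially invariant kernel in the time-of-flight variable. With a normalized kernel, collapsing the TOF bins recovers the ordinary non-TOF projection.

```python
import numpy as np

# Gaussian TOF kernel, normalized so its discrete sum is 1.
t = np.arange(-8, 9)
kernel = np.exp(-0.5 * (t / 3.0) ** 2)
kernel /= kernel.sum()

rng = np.random.default_rng(3)
img = rng.random((32, 32))  # toy activity image

# TOF "sinogram" for one view: convolve the activity along each line
# with the TOF kernel (full convolution keeps all the mass).
tof_data = np.array([np.convolve(img[:, x], kernel) for x in range(32)])

non_tof = tof_data.sum(axis=1)  # collapse the TOF bins
plain = img.sum(axis=0)         # ordinary line-integral projection
print(np.allclose(non_tof, plain))  # True
```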
Analytical properties of time-of-flight PET data
NASA Astrophysics Data System (ADS)
Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M.
2008-06-01
We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the 'bow-tie' property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data.
Analytical Properties of Time-of-Flight PET Data
Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M.
2015-01-01
We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the “bow-tie” property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data. PMID:18460746
An Integrated Environment for Efficient Formal Design and Verification
NASA Technical Reports Server (NTRS)
1998-01-01
The general goal of this project was to improve the practicality of formal methods by combining techniques from model checking and theorem proving. At the time the project was proposed, the model checking and theorem proving communities were applying different tools to similar problems, but there was not much cross-fertilization. This project involved a group from SRI that had substantial experience in the development and application of theorem-proving technology, and a group at Stanford that specialized in model checking techniques. Now, over five years after the proposal was submitted, there are many research groups working on combining theorem-proving and model checking techniques, and much more communication between the model checking and theorem proving research communities. This project contributed significantly to this research trend. The research work under this project covered a variety of topics: new theory and algorithms; prototype tools; verification methodology; and applications to problems in particular domains.
The Variation Theorem Applied to H-2+: A Simple Quantum Chemistry Computer Project
ERIC Educational Resources Information Center
Robiette, Alan G.
1975-01-01
Describes a student project which requires limited knowledge of Fortran and only minimal computing resources. The results illustrate such important principles of quantum mechanics as the variation theorem and the virial theorem. Presents sample calculations and the subprogram for energy calculations. (GS)
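A small numerical illustration (Python; the H2+ integrals of the actual project are not reproduced here) of the variation theorem in the same spirit: for the 1D harmonic oscillator with hbar = m = omega = 1, a Gaussian trial function exp(-a x^2/2) gives E(a) = a/4 + 1/(4a), which never dips below the exact ground-state energy 1/2 and touches it at a = 1.

```python
import numpy as np

# Scan the variational parameter; E(a) = <T> + <V> for the Gaussian trial.
a = np.linspace(0.2, 5.0, 200)
E = a / 4.0 + 1.0 / (4.0 * a)

# Variation theorem: every trial energy bounds the true energy from above.
print(E.min() >= 0.5)               # True (equality only at a = 1)
print(abs(E.min() - 0.5) < 1e-3)    # grid minimum sits essentially at 1/2
```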
Cooperation Among Theorem Provers
NASA Technical Reports Server (NTRS)
Waldinger, Richard J.
1998-01-01
In many years of research, a number of powerful theorem-proving systems have arisen with differing capabilities and strengths. Resolution theorem provers (such as Kestrel's KITP or SRI's SNARK) deal with first-order logic with equality but not the principle of mathematical induction. The Boyer-Moore theorem prover excels at proof by induction but cannot deal with full first-order logic. Both are highly automated but cannot accept user guidance easily. The purpose of this project, and the companion project at Kestrel, has been to use the category-theoretic notion of logic morphism to combine systems with different logics and languages.
Discovering the Theorem of Pythagoras
NASA Technical Reports Server (NTRS)
Lattanzio, Robert (Editor)
1988-01-01
In this 'Project Mathematics!' series, sponsored by the California Institute of Technology, Pythagoras' theorem a^2 + b^2 = c^2 is discussed and the history behind this theorem is explained. Through live film footage and computer animation, applications in real life are presented and the significance of and uses for this theorem are put into practice.
High Performance GPU-Based Fourier Volume Rendering.
Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr
2015-01-01
Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N^2 log N) time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms that are O(N^3) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive, capable platform that can deliver enormous raw computational power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together by taking advantage of executing the rendering pipeline entirely on recent GPU architectures.
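A CPU-side sketch (Python/NumPy, toy volume; not the paper's CUDA pipeline) of the projection-slice step at the heart of FVR: an attenuation-only projection of a volume is the inverse 2D FFT of the central slice of its 3D FFT, which is why a view costs O(N^2 log N) instead of O(N^3).

```python
import numpy as np

rng = np.random.default_rng(2)
vol = rng.random((32, 32, 32))  # toy 3D volume

direct = vol.sum(axis=0)                    # spatial-domain projection
central_slice = np.fft.fftn(vol)[0, :, :]   # zero-frequency plane of 3D FFT
rendered = np.fft.ifft2(central_slice).real # FVR-style projection

print(np.allclose(direct, rendered))  # True
```

In a full renderer the 3D FFT is precomputed once, and each new viewing direction only requires extracting (and interpolating) one slice plus a 2D inverse FFT.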
Modified Polar-Format Software for Processing SAR Data
NASA Technical Reports Server (NTRS)
Chen, Curtis
2003-01-01
HMPF is a computer program that implements a modified polar-format algorithm for processing data from spaceborne synthetic-aperture radar (SAR) systems. Unlike prior polar-format processing algorithms, this algorithm is based on the assumption that the radar signal wavefronts are spherical rather than planar. The algorithm provides for resampling of SAR pulse data from slant range to radial distance from the center of a reference sphere that is nominally the local Earth surface. Then, invoking the projection-slice theorem, the resampled pulse data are Fourier-transformed over radial distance, arranged in the wavenumber domain according to the acquisition geometry, resampled to a Cartesian grid, and inverse-Fourier-transformed. The result of this process is the focused SAR image. HMPF, and perhaps other programs that implement variants of the algorithm, may give better accuracy than do prior algorithms for processing strip-map SAR data from high altitudes and may give better phase preservation relative to prior polar-format algorithms for processing spotlight-mode SAR data.
NASA Astrophysics Data System (ADS)
Fine, Dana S.; Sawin, Stephen
2017-01-01
Feynman's time-slicing construction approximates the path integral by a product, determined by a partition of a finite time interval, of approximate propagators. This paper formulates general conditions to impose on a short-time approximation to the propagator in a general class of imaginary-time quantum mechanics on a Riemannian manifold which ensure that these products converge. The limit defines a path integral which agrees pointwise with the heat kernel for a generalized Laplacian. The result is a rigorous construction of the propagator for supersymmetric quantum mechanics, with potential, as a path integral. Further, the class of Laplacians includes the square of the twisted Dirac operator, which corresponds to an extension of N = 1/2 supersymmetric quantum mechanics. General results on the rate of convergence of the approximate path integrals suffice in this case to derive the local version of the Atiyah-Singer index theorem.
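A finite-dimensional numerical analogue (Python/NumPy; a Lie-Trotter product on a lattice, not the paper's manifold construction) of the time-slicing idea: approximate the heat semigroup exp(-tH) for H = -Δ + V by a product of short-time factors, and watch the product converge as the partition refines.

```python
import numpy as np

n, t = 16, 1.0
# Periodic 1D lattice Laplacian and an arbitrary diagonal potential.
lap = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0
V = np.diag(np.linspace(0.0, 1.0, n))
H = -lap + V

def expm_sym(A):
    # Matrix exponential of a symmetric matrix via eigendecomposition.
    w, U = np.linalg.eigh(A)
    return (U * np.exp(w)) @ U.T

exact = expm_sym(-t * H)

def trotter(steps):
    # Product of short-time propagators: exp(dt*lap) exp(-dt*V), repeated.
    dt = t / steps
    factor = expm_sym(dt * lap) @ expm_sym(-dt * V)
    return np.linalg.matrix_power(factor, steps)

err = [np.abs(trotter(k) - exact).max() for k in (4, 16, 64)]
print(err[0] > err[1] > err[2])  # finer slicing, smaller error: True
```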
Development and Demonstration of an Ada Test Generation System
NASA Technical Reports Server (NTRS)
1996-01-01
In this project we have built a prototype system that performs Feasible Path Analysis on Ada programs: given a description of a set of control flow paths through a procedure and a predicate at a program point, feasible path analysis determines whether there is input data which causes execution to flow down some path in the collection, reaching the point so that the predicate is true. Feasible path analysis can be applied to program testing, program slicing, array bounds checking, and other forms of anomaly checking. FPA is central to most applications of program analysis. But, because this problem is formally unsolvable, syntactic approximations are used in its place. For example, in dead-code analysis the problem is to determine if there are any input values which cause execution to reach a specified program point. Instead, an approximation to this problem is computed: determine whether there is a control flow path from the start of the program to the point. This syntactic approximation is efficiently computable and conservative: if there is no such path the program point is clearly unreachable, but if there is such a path, the analysis is inconclusive, and the code is assumed to be live. Such conservative analysis too often yields unsatisfactory results because the approximation is too weak. As another example, consider data flow analysis. A du-pair is a pair of program points such that the first point is a definition of a variable and the second point a use, and for which there exists a definition-free path from the definition to the use. The sharper, semantic definition of a du-pair requires that there be a feasible definition-free path from the definition to the use. A compiler using du-pairs for detecting dead variables may miss optimizations by not considering feasibility. Similarly, a program analyzer computing program slices to merge parallel versions may report conflicts where none exist.
In the context of software testing, feasibility analysis plays an important role in identifying testing requirements which are infeasible. This is especially true for data flow testing and modified condition/decision coverage. Our system uses in an essential way symbolic analysis and theorem proving technology, and we believe this work represents one of the few successful uses of a theorem prover working in a completely automatic fashion to solve a problem of practical interest. We believe this work anticipates an important trend away from purely syntactic methods for program analysis toward semantic methods based on symbolic processing and inference technology. Other results demonstrating the practical use of automatic inference are being reported in hardware verification, although there are significant differences between the hardware work and ours. However, what is common and important is that general-purpose theorem provers are being integrated with more special-purpose decision procedures to solve problems in analysis and verification. We are pursuing commercial opportunities for this work, and will use and extend the work in other projects we are engaged in. Ultimately we would like to rework the system to analyze C, C++, or Java as a key step toward commercialization.
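A minimal brute-force illustration (Python; the project itself used symbolic analysis and automatic theorem proving, not enumeration) of what feasibility analysis decides: a path is feasible iff some input satisfies all of its branch conditions. The path conditions and input domain below are invented for the example.

```python
from itertools import product

def feasible(path_conditions, domain):
    """Return a witness input satisfying every condition, or None."""
    for x, y in product(domain, repeat=2):
        if all(cond(x, y) for cond in path_conditions):
            return (x, y)
    return None

# Path 1: semantically reachable.
p1 = [lambda x, y: x > 3, lambda x, y: x + y < 10]
# Path 2: syntactically present but infeasible -- the guarded code is dead,
# which a purely syntactic (control-flow-only) analysis cannot detect.
p2 = [lambda x, y: x > 3, lambda x, y: x < 2]

dom = range(-5, 6)
print(feasible(p1, dom) is not None)  # True: a witness exists
print(feasible(p2, dom))              # None: no input reaches this path
```

Symbolic methods replace the enumeration with a satisfiability query over path constraints, which scales to real input domains.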
Cooperation Among Theorem Provers
NASA Technical Reports Server (NTRS)
Waldinger, Richard J.
1998-01-01
This is a final report, which supports NASA's PECSEE (Persistent Cognizant Software Engineering Environment) effort and complements the Kestrel Institute project "Inference System Integration via Logic Morphism". The ultimate purpose of the project is to develop a superior logical inference mechanism by combining the diverse abilities of multiple cooperating theorem provers. In many years of research, a number of powerful theorem-proving systems have arisen with differing capabilities and strengths. Resolution theorem provers (such as Kestrel's KITP or SRI's SNARK) deal with first-order logic with equality but not the principle of mathematical induction. The Boyer-Moore theorem prover excels at proof by induction but cannot deal with full first-order logic. Both are highly automated but cannot accept user guidance easily. The PVS system (from SRI) is only automatic within decidable theories, but it has well-designed interactive capabilities; furthermore, it includes higher-order logic, not just first-order logic. The NuPRL system from Cornell University and the STeP system from Stanford University have facilities for constructive logic and temporal logic, respectively; both are interactive. It is often suggested - for example, in the anonymous "QED Manifesto" - that we should pool the resources of all these theorem provers into a single system, so that the strengths of one can compensate for the weaknesses of others, and so that effort will not be duplicated. However, there is no straightforward way of doing this, because each system relies on its own language and logic for its success. Thus, SNARK uses ordinary first-order logic with equality, PVS uses higher-order logic, and NuPRL uses constructive logic. The purpose of this project, and the companion project at Kestrel, has been to use the category-theoretic notion of logic morphism to combine systems with different logics and languages. Kestrel's SPECWARE system has been the vehicle for the implementation.
3D Imaging with Holographic Tomography
NASA Astrophysics Data System (ADS)
Sheppard, Colin J. R.; Kou, Shan Shan
2010-04-01
There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the X-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From the point of view of Fourier optics and information transfer, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography, by scanning the illumination in one direction only, takes on a form that we might call a 'peanut', compared to the case of object rotation, where a diabolo is formed, the peanut exhibiting significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions, the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case.
This time, we obtain a similar peanut, but without the line singularity.
NASA Technical Reports Server (NTRS)
Nerren, B. H.
1977-01-01
The electrophoresis of six columns was accomplished on the Apollo-Soyuz Test Project. After separation, these columns were frozen in orbit and were returned for ground-based analyses. One major goal of the MA-011 experiment was the assessment of the separation achieved in orbit by slicing these frozen columns. The slicing of the frozen columns required a new device. The development of that device is described.
From statistical proofs of the Kochen-Specker theorem to noise-robust noncontextuality inequalities
NASA Astrophysics Data System (ADS)
Kunjwal, Ravi; Spekkens, Robert W.
2018-05-01
The Kochen-Specker theorem rules out models of quantum theory wherein projective measurements are assigned outcomes deterministically and independently of context. This notion of noncontextuality is not applicable to experimental measurements because these are never free of noise and thus never truly projective. For nonprojective measurements, therefore, one must drop the requirement that an outcome be assigned deterministically in the model and merely require that it be assigned a distribution over outcomes in a manner that is context-independent. By demanding context independence in the representation of preparations as well, one obtains a generalized principle of noncontextuality that also supports a quantum no-go theorem. Several recent works have shown how to derive inequalities on experimental data which, if violated, demonstrate the impossibility of finding a generalized-noncontextual model of this data. That is, these inequalities do not presume quantum theory and, in particular, they make sense without requiring an operational analog of the quantum notion of projectiveness. We here describe a technique for deriving such inequalities starting from arbitrary proofs of the Kochen-Specker theorem. It extends significantly previous techniques, which worked only for logical proofs, based on sets of projective measurements that fail to admit of any deterministic noncontextual assignment, to the case of statistical proofs, which are based on sets of projective measurements that do admit of some deterministic noncontextual assignments, but not enough to explain the quantum statistics.
Light deflection and Gauss-Bonnet theorem: definition of total deflection angle and its applications
NASA Astrophysics Data System (ADS)
Arakida, Hideyoshi
2018-05-01
In this paper, we re-examine the light deflection in the Schwarzschild and the Schwarzschild-de Sitter spacetime. First, supposing a static and spherically symmetric spacetime, we propose the definition of the total deflection angle α of the light ray by constructing a quadrilateral Σ^4 on the optical reference geometry M^opt determined by the optical metric ḡ_{ij}. On the basis of the definition of the total deflection angle α and the Gauss-Bonnet theorem, we derive two formulas to calculate the total deflection angle α: (1) the angular formula, which uses four angles determined on the optical reference geometry M^opt or the curved (r, φ) subspace M^sub, a slice of constant time t; and (2) the integral formula on the optical reference geometry M^opt, which is the area integral of the Gaussian curvature K over the quadrilateral Σ^4 plus the line integral of the geodesic curvature κ_g along the curve C_Γ. As the curve C_Γ, we introduce the unperturbed reference line, which is the null geodesic Γ on the background spacetime, such as the Minkowski or the de Sitter spacetime, obtained by projecting Γ vertically onto the curved (r, φ) subspace M^sub. We demonstrate that the two formulas give the same total deflection angle α for the Schwarzschild and the Schwarzschild-de Sitter spacetime. In particular, in the Schwarzschild case, the result coincides with Epstein-Shapiro's formula when the source S and the receiver R of the light ray are located at infinity. In addition, in the Schwarzschild-de Sitter case, there appear terms of order O(Λm) in addition to the Schwarzschild-like part, while terms of order O(Λ) disappear.
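For orientation, the asymptotically flat special case that the paper's quadrilateral construction generalizes is the standard Gauss-Bonnet argument of Gibbons and Werner (a sketch of the well-known result, not this paper's derivation): applying the Gauss-Bonnet theorem to a region D_∞ of the optical geometry bounded by the light ray and an arc at infinity gives

```latex
\iint_{D_\infty} K \,\mathrm{d}S
  + \oint_{\partial D_\infty} \kappa_g \,\mathrm{d}\ell
  + \sum_i \theta_i
  = 2\pi \chi(D_\infty)
\qquad\Longrightarrow\qquad
\alpha = -\iint_{D_\infty} K \,\mathrm{d}S ,
% with \chi(D_\infty) = 1 and \kappa_g = 0 along the geodesic ray;
% for Schwarzschild, K \simeq -2GM/(c^2 r^3) at leading order on the
% optical geometry, which reproduces \alpha \simeq 4GM/(c^2 b).
```

The paper's two formulas extend this picture to finite source/receiver distances and to the Schwarzschild-de Sitter case.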
NASA Technical Reports Server (NTRS)
Raphael, B.; Fikes, R.; Waldinger, R.
1973-01-01
The results are summarised of a project aimed at the design and implementation of computer languages to aid in expressing problem solving procedures in several areas of artificial intelligence including automatic programming, theorem proving, and robot planning. The principal results of the project were the design and implementation of two complete systems, QA4 and QLISP, and their preliminary experimental use. The various applications of both QA4 and QLISP are given.
NASA Astrophysics Data System (ADS)
Puong, Sylvie; Patoureaux, Fanny; Iordache, Razvan; Bouchevreau, Xavier; Muller, Serge
2007-03-01
In this paper, we present the development of dual-energy Contrast-Enhanced Digital Breast Tomosynthesis (CEDBT). A method to produce background clutter-free slices from a set of low- and high-energy projections is introduced, along with a scheme for the determination of the optimal low- and high-energy techniques. Our approach consists of a dual-energy recombination of the projections, with an algorithm that has proven its performance in Contrast-Enhanced Digital Mammography (CEDM), followed by an iterative volume reconstruction. The aim is to eliminate the anatomical background clutter and to reconstruct slices where the gray level is proportional to the local iodine volumetric concentration. Optimization of the low- and high-energy techniques is performed by minimizing the total glandular dose to reach a target iodine Signal Difference to Noise Ratio (SDNR) in the slices. In this study, we proved that this optimization could be done on the projections, by consideration of the SDNR in the projections instead of the SDNR in the slices, and verified this with phantom measurements. We also discuss some limitations of dual-energy CEDBT, due to the restricted angular range for the projection views and to the presence of scattered radiation. Experiments on textured phantoms with iodine inserts were conducted to assess the performance of dual-energy CEDBT. Texture contrast was nearly completely removed and the iodine signal was enhanced in the slices.
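A toy sketch (Python/NumPy; the attenuation coefficients and weight are invented, not calibrated values, and this is a generic dual-energy recombination rather than the authors' algorithm) of why a weighted subtraction of low- and high-energy log-projections cancels the tissue background and leaves a signal proportional to iodine content.

```python
import numpy as np

# Hypothetical two-material attenuation coefficients (made-up numbers).
mu_t_lo, mu_t_hi = 0.80, 0.50  # tissue at low / high energy
mu_i_lo, mu_i_hi = 2.00, 1.60  # iodine at low / high energy

rng = np.random.default_rng(4)
tissue = rng.random((32, 32))   # cluttered background thickness map
iodine = np.zeros((32, 32))
iodine[10:16, 10:16] = 0.2      # small iodine insert

# Log-projections (-log of transmitted intensity) in a linear model.
p_lo = mu_t_lo * tissue + mu_i_lo * iodine
p_hi = mu_t_hi * tissue + mu_i_hi * iodine

# Weight chosen so the tissue term cancels exactly.
w = mu_t_hi / mu_t_lo
de = p_hi - w * p_lo  # background-free, proportional to iodine thickness

print(np.allclose(de, (mu_i_hi - w * mu_i_lo) * iodine))  # True
```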
Secondary Mathematics Education in the Soviet Union, an Individual Study Project.
1982-05-14
Pythagoras and other well-known congruence theorems on angles and triangles. Concepts of set theory are developed in relation to the topics studied. Grades 6...geometry (areas, volumes, etc.). Geometric topics include: use of the ruler, protractor, and compasses in geometric constructions; Theorem of
Volumetric three-dimensional display system with rasterization hardware
NASA Astrophysics Data System (ADS)
Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua
2001-06-01
An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line-drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery composed of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3-D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.
Abrishami, V; Bilbao-Castro, J R; Vargas, J; Marabini, R; Carazo, J M; Sorzano, C O S
2015-10-01
We describe a fast and accurate method for the reconstruction of macromolecular complexes from a set of projections. Direct Fourier inversion (in which the Fourier Slice Theorem plays a central role) is a solution for dealing with this inverse problem. Unfortunately, in the single-particle field the set of projections provides a non-equidistantly sampled version of the macromolecule's Fourier transform, and, therefore, a direct Fourier inversion may not be an optimal solution. In this paper, we introduce a gridding-based direct Fourier method for three-dimensional reconstruction that uses a weighting technique to compute a uniformly sampled Fourier transform. Moreover, the contrast transfer function of the microscope, which is a limiting factor in pursuing a high resolution reconstruction, is corrected by the algorithm. Parallelization of this algorithm, both on threads and on multiple CPUs, makes the process of three-dimensional reconstruction even faster. The experimental results show that our proposed gridding-based direct Fourier reconstruction is slightly more accurate than similar existing methods and presents a lower computational complexity both in terms of time and memory, thereby allowing its use on larger volumes. The algorithm is fully implemented in the open-source Xmipp package and is downloadable from http://xmipp.cnb.csic.es. Copyright © 2015 Elsevier B.V. All rights reserved.
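The Fourier Slice Theorem that underpins direct Fourier inversion is easy to verify numerically. The sketch below is illustrative only and unrelated to the Xmipp implementation: it checks that the 1-D FFT of a zero-degree projection equals the central (zero-frequency) row of the object's 2-D FFT.

```python
import numpy as np

# A small 2-D test object (random, so the check is non-trivial).
rng = np.random.default_rng(1)
img = rng.random((64, 64))

# Projection at angle 0: integrate (sum) along the y-axis.
proj = img.sum(axis=0)

# Fourier Slice Theorem: the 1-D FFT of that projection equals the
# central slice (k_y = 0 row) of the 2-D FFT of the object.
slice_1d = np.fft.fft(proj)
central_row = np.fft.fft2(img)[0, :]

print(np.allclose(slice_1d, central_row))  # True
```

Each projection thus fills one central line of Fourier space; the gridding step in the paper deals with resampling those non-equidistant lines onto a Cartesian grid before the inverse FFT.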
NASA Technical Reports Server (NTRS)
Goldman, H.; Wolf, M.
1978-01-01
The significant economic data for the current production multiblade wafering and inner diameter slicing processes were tabulated and compared to data on the experimental and projected multiblade slurry, STC ID diamond coated blade, multiwire slurry and crystal systems fixed abrasive multiwire slicing methods. Cost calculations were performed for current production processes and for 1982 and 1986 projected wafering techniques.
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
NASA Technical Reports Server (NTRS)
Holden, S. C.
1976-01-01
Multiblade slurry sawing is used to slice 10 cm diameter silicon ingots into wafers 0.024 cm thick using 0.050 cm of silicon per slice (0.026 cm kerf loss). Total slicing time is less than twenty hours, and 143 slices are produced simultaneously. Productivity (slice area per hour per blade) is shown as a function of blade load and thickness, and abrasive size. Finer abrasive slurries cause a reduction in slice productivity, and thin blades cause a reduction of wafer accuracy. Sawing-induced surface damage is found to extend 18 microns into the wafer.
Answering Junior Ant's "Why" for Pythagoras' Theorem
ERIC Educational Resources Information Center
Pask, Colin
2002-01-01
A seemingly simple question in a cartoon about Pythagoras' Theorem is shown to lead to questions about the nature of mathematical proof and the profound relationship between mathematics and science. It is suggested that an analysis of the issues involved could provide a good vehicle for classroom discussions or projects for senior students.…
Student Research Project: Goursat's Other Theorem
ERIC Educational Resources Information Center
Petrillo, Joseph
2009-01-01
In an elementary undergraduate abstract algebra or group theory course, a student is introduced to a variety of methods for constructing and deconstructing groups. What seems to be missing from contemporary texts and syllabi is a theorem, first proved by Edouard Jean-Baptiste Goursat (1858-1936) in 1889, which completely describes the subgroups of…
Simultaneous Generalizations of the Theorems of Ceva and Menelaus for Field Planes
ERIC Educational Resources Information Center
Houston, Kelly B.; Powers, Robert C.
2009-01-01
In 1992, Klamkin and Liu proved a very general result in the Extended Euclidean Plane that contains the theorems of Ceva and Menelaus as special cases. In this article, we extend the Klamkin and Liu result to projective planes "PG"(2, F) where F is a field. (Contains 2 figures.)
NASA Astrophysics Data System (ADS)
Han, Minah; Jang, Hanjoo; Baek, Jongduk
2018-03-01
We investigate lesion detectability and its trends for different noise structures in single-slice and multislice CBCT images with anatomical background noise. Anatomical background noise is modeled using a power law spectrum of breast anatomy. A spherical signal with a 2 mm diameter is used for modeling a lesion. CT projection data are acquired by forward projection and reconstructed by the Feldkamp-Davis-Kress algorithm. To generate different noise structures, two types of reconstruction filters (Hanning and Ram-Lak weighted ramp filters) are used in the reconstruction, and the transverse and longitudinal planes of the reconstructed volume are used for detectability evaluation. To evaluate single-slice images, the central slice, which contains the maximum signal energy, is used. To evaluate multislice images, the central nine slices are used. Detectability is evaluated using human and model observer studies. For the model observer, a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels is used. For all noise structures, detectability by a human observer is higher for multislice images than single-slice images, and the degree of detectability increase in multislice images depends on the noise structure. Variation in detectability for different noise structures is reduced in multislice images, but detectability trends are not much different between single-slice and multislice images. The CHO with D-DOG channels predicts detectability by a human observer well for both single-slice and multislice images.
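A common recipe for the power-law anatomical background used in studies of this kind is to filter white noise in the Fourier domain so that its power spectrum falls off as 1/f^beta, with beta = 3 for breast anatomy. The function below is a hedged sketch of that recipe, not the authors' code; the random-phase shortcut and the unit-variance normalization are illustrative choices.

```python
import numpy as np

def power_law_background(n, beta=3.0, seed=0):
    """White noise filtered to a 1/f^beta power spectrum (clutter model)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fx))     # radial spatial frequency
    f[0, 0] = f[0, 1]                      # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)         # amplitude = sqrt(power)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.fft.ifft2(amplitude * phase).real   # real-part shortcut
    return (field - field.mean()) / field.std()    # zero mean, unit variance

bg = power_law_background(128)   # one 128 x 128 breast-like background patch
```

Patches like this serve as the "anatomical noise" into which the spherical lesion signal is embedded before forward projection.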
Jini service to reconstruct tomographic data
NASA Astrophysics Data System (ADS)
Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.
2002-06-01
A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and Single Photon Emission Computed Tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, mainly two reconstruction methods are in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for routine clinical studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's Intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
NASA Technical Reports Server (NTRS)
Schmid, F.
1981-01-01
The crystallinity of large HEM silicon ingots as a function of heat flow conditions is investigated. A balanced heat flow at the bottom of the ingot restricts spurious nucleation to the edge of the melted-back seed in contact with the crucible. Homogeneous resistivity distribution over all the ingot has been achieved. The positioning of diamonds electroplated on wirepacks used to slice silicon crystals is considered. The electroplating of diamonds on only the cutting edge is described and the improved slicing performance of these wires evaluated. An economic analysis of value-added costs of HEM ingot casting and band saw sectioning indicates the projected add-on cost of HEM is well below the 1986 allocation.
Fourier crosstalk analysis of multislice and cone-beam helical CT
NASA Astrophysics Data System (ADS)
La Riviere, Patrick J.
2004-05-01
Multi-slice helical CT scanners allow for much faster scanning and better x-ray utilization than do their single-slice predecessors, but they engender considerably more complicated data sampling patterns due to the interlacing of the samples from different rows as the patient is translated. Characterizing and optimizing this sampling is challenging because the conebeam geometry of such scanners means that the projections measured by each detector row are at least slightly oblique, making it difficult to apply standard multidimensional sampling analyses. In this study, we seek to apply a more general framework for analyzing sampled imaging systems known as Fourier crosstalk analysis. Our purpose in this preliminary work is to compare the information content of the data acquired in three different scanner geometries and operating conditions with ostensibly equivalent volume coverage and average longitudinal sampling interval: a single-slice scanner operating at pitch 1, a four-slice scanner operating at pitch 3 and a 15-slice scanner operating at pitch 15. We find that moving from a single-slice to a multi-slice geometry introduces longitudinal crosstalk characteristic of the longitudinal sampling interval of each individual detector row, and not of the overall interlaced sampling pattern. This is attributed to data inconsistencies caused by the obliqueness of the projections in a multi-slice/conebeam configuration. However, these preliminary results suggest that the significance of this additional crosstalk actually decreases as the number of detector rows increases.
Transient quantum fluctuation theorems and generalized measurements
NASA Astrophysics Data System (ADS)
Prasanna Venkatesh, B.; Watanabe, Gentaro; Talkner, Peter
2014-01-01
The transient quantum fluctuation theorems of Crooks and Jarzynski restrict and relate the statistics of work performed in forward and backward forcing protocols. So far, these theorems have been obtained under the assumption that the work is determined by two projective energy measurements, one at the end, and the other one at the beginning of each run of the protocol. We found that one can replace these two projective measurements only by special error-free generalized energy measurements with pairs of tailored, protocol-dependent post-measurement states that satisfy detailed balance-like relations. For other generalized measurements, the Crooks relation is typically not satisfied. For the validity of the Jarzynski equality, it is sufficient that the first energy measurements are error-free and the post-measurement states form a complete orthonormal set of elements in the Hilbert space of the considered system. Additionally, the effects of the second energy measurements must have unit trace. We illustrate our results by an example of a two-level system for different generalized measurements.
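As a baseline for the projective-measurement case discussed above, the Jarzynski equality under the two-projective-energy-measurement (TPM) scheme can be checked numerically. The sketch below uses an illustrative two-level system and a sudden quench (identity time evolution); the Hamiltonian parameters and inverse temperature are assumed values, not taken from the paper.

```python
import numpy as np

# Two-level system, sudden quench between non-commuting Hamiltonians.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H0 = 1.0 * sz                    # initial Hamiltonian (assumed)
H1 = 0.7 * sz + 0.5 * sx         # final Hamiltonian (assumed)
beta = 0.8                       # inverse temperature (assumed)

E0, V0 = np.linalg.eigh(H0)
E1, V1 = np.linalg.eigh(H1)
Z0, Z1 = np.exp(-beta * E0).sum(), np.exp(-beta * E1).sum()

# TPM scheme with a sudden quench (unitary = identity):
# transition probability P(i -> j) = |<f_j | i>|^2,
# initial thermal occupation p_i = exp(-beta * E0_i) / Z0.
overlap = np.abs(V1.conj().T @ V0) ** 2    # overlap[j, i]
p_i = np.exp(-beta * E0) / Z0
avg_exp_w = sum(p_i[i] * overlap[j, i] * np.exp(-beta * (E1[j] - E0[i]))
                for i in range(2) for j in range(2))

# Jarzynski equality: <exp(-beta W)> = Z1 / Z0 = exp(-beta * Delta F).
print(np.isclose(avg_exp_w, Z1 / Z0))  # True
```

The paper's question is what happens when the two projective measurements in this scheme are replaced by generalized measurements; the identity above is the reference point those generalized schemes must reproduce.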
Generalizing the Iterative Proportional Fitting Procedure.
1980-04-01
Csiszár gives conditions under which P (R) exists (it is always unique) and develops a geometry of I-divergence by using an analogue of Pythagoras' Theorem. As our goal is to study maximum likelihood estimation in contingency tables, we turn briefly to the problem of estimating a multinomial...invoke a result of Csiszár (due originally to Kullback (1959)), giving the form of the density of the I-projection. Csiszár's Theorem 3.1, which we
Integrating interface slicing into software engineering processes
NASA Technical Reports Server (NTRS)
Beck, Jon
1993-01-01
Interface slicing is a tool which was developed to facilitate software engineering. As previously presented, it was described in terms of its techniques and mechanisms. The integration of interface slicing into specific software engineering activities is considered by discussing a number of potential applications of interface slicing. The applications discussed specifically address the problems, issues, or concerns raised in a previous project. Because a complete interface slicer is still under development, these applications must be phrased in future tenses. Nonetheless, the interface slicing techniques which were presented can be implemented using current compiler and static analysis technology. Whether implemented as a standalone tool or as a module in an integrated development or reverse engineering environment, they require analysis no more complex than that required for current system development environments. By contrast, conventional slicing is a methodology which, while showing much promise and intuitive appeal, has yet to be fully implemented in a production language environment despite 12 years of development.
Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding
NASA Astrophysics Data System (ADS)
Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito
2015-02-01
Optical transfer functions (OTFs) on various directional spatial frequency axes for a cubic phase mask (CPM) with circular and square apertures are investigated. Although the OTF has no zero points, for a circular aperture it comes very close to zero at low frequencies on the diagonal axis, which results in degradation of the restored images. The reason for the close-to-zero values in the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture with a CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent de-blurred images over a large depth of field.
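The circular-versus-square aperture comparison can be explored with a simple pupil-function calculation: apply the cubic phase to the pupil, take the PSF as the squared magnitude of its Fourier transform, and take the OTF as the Fourier transform of the PSF. This is an illustrative sketch, not the authors' optical model; the cubic coefficient alpha and the sampling are assumed values.

```python
import numpy as np

n = 256
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
alpha = 30.0                                   # cubic coefficient (assumed)

def mtf(aperture):
    """|OTF| of an aperture carrying a cubic phase mask, normalized at DC."""
    pupil = aperture * np.exp(1j * alpha * (X ** 3 + Y ** 3))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    otf = np.abs(np.fft.fft2(psf))             # OTF = FT of the PSF
    return otf / otf[0, 0]

mtf_circ = mtf((X ** 2 + Y ** 2 <= 1.0).astype(float))  # circular aperture
mtf_sq = mtf(np.ones((n, n)))                            # square aperture

# Inspect the low-frequency response on the diagonal spatial-frequency axis.
print(mtf_sq[4, 4], mtf_circ[4, 4])
```

Sweeping the index along `[k, k]` (diagonal axis) versus `[k, 0]` (horizontal axis) for each aperture is the kind of directional comparison the abstract describes.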
Image reconstruction of x-ray tomography by using image J platform
NASA Astrophysics Data System (ADS)
Zain, R. M.; Razali, A. M.; Salleh, K. A. M.; Yahya, R.
2017-01-01
A tomogram is a technical term for a CT image. It is also called a slice because it corresponds to what the object being scanned would look like if it were sliced open along a plane. A CT slice corresponds to a certain thickness of the object being scanned. So, while a typical digital image is composed of pixels, a CT slice image is composed of voxels (volume elements). In the case of x-ray tomography, similar to x-ray radiography, the quantity being imaged is the distribution of the attenuation coefficient μ(x) within the object of interest. The difference lies only in the technique used to produce the tomogram. An x-ray radiograph can be produced straightforwardly after exposure to x-rays, while the tomographic image is produced by combining radiographic images from every projection angle. A number of image reconstruction methods that convert x-ray attenuation data into a tomographic image have been produced by researchers. In this work, a Ramp filter in "filtered back projection" has been applied. The linear data acquired at each angular orientation are convolved with a specially designed filter and then back projected across a pixel field at the same angle. This paper describes the steps of using the ImageJ software to produce an image reconstruction of x-ray tomography.
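The ramp-filter-plus-backprojection procedure described above can be sketched in a few lines of NumPy/SciPy. This is a simplified stand-in for the ImageJ workflow, with a crude rotate-and-sum forward projector used only to generate test data.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles):
    """Crude parallel-beam forward projector: rotate, then sum columns."""
    return np.array([rotate(img, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def fbp(sinogram, angles):
    """Filtered back projection with a ramp filter in the Fourier domain."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                       # |f| ramp filter
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1).real
    recon = np.zeros((n, n))
    for a, proj in zip(angles, filtered):
        smear = np.tile(proj, (n, 1))                      # back project...
        recon += rotate(smear, a, reshape=False, order=1)  # ...along angle a
    return recon * np.pi / (2 * len(angles))

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                  # simple square test object
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
recon = fbp(radon(phantom, angles), angles)  # peak sits inside the square
```

Convolving with the filter in the spatial domain, as the abstract describes, is mathematically equivalent to this per-projection multiplication in the Fourier domain.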
On the Chern-Gauss-Bonnet theorem for the noncommutative 4-sphere
NASA Astrophysics Data System (ADS)
Arnlind, Joakim; Wilson, Mitsuru
2017-01-01
We construct a differential calculus over the noncommutative 4-sphere in the framework of pseudo-Riemannian calculi, and show that for every metric in a conformal class of perturbations of the round metric, there exists a unique metric and torsion-free connection. Furthermore, we find a localization of the projective module corresponding to the space of vector fields, which allows us to formulate a Chern-Gauss-Bonnet type theorem for the noncommutative 4-sphere.
Generalized entropy production fluctuation theorems for quantum systems
NASA Astrophysics Data System (ADS)
Rana, Shubhashis; Lahiri, Sourabh; Jayannavar, A. M.
2013-02-01
Based on a trajectory-dependent path probability formalism in state space, we derive generalized entropy production fluctuation relations for a quantum system in the presence of measurement and feedback. We have obtained these results for three different cases: (i) the system evolving in isolation from its surroundings; (ii) the system weakly coupled to a heat bath; and (iii) the system in contact with a reservoir, using the quantum Crooks fluctuation theorem. In case (iii), we build on the treatment carried out in [H. T. Quan and H. Dong, arxiv/cond-mat: 0812.4955], where a quantum trajectory has been defined as a sequence of alternating work and heat steps. The obtained entropy production fluctuation theorems retain the same form as in the classical case. The second-law inequality of thermodynamics gets modified in the presence of information. These fluctuation theorems are robust against intermediate measurements of any observable performed with von Neumann projective measurements as well as weak or positive operator valued measurements.
Generalized fluctuation-dissipation theorem as a test of the Markovianity of a system
NASA Astrophysics Data System (ADS)
Willareth, Lucian; Sokolov, Igor M.; Roichman, Yael; Lindner, Benjamin
2017-04-01
We study how well a generalized fluctuation-dissipation theorem (GFDT) is suited to test whether a stochastic system is not Markovian. To this end, we simulate a stochastic non-equilibrium model of the mechanosensory hair bundle from the inner ear organ and analyze its spontaneous activity and response to external stimulation. We demonstrate that this two-dimensional Markovian system indeed obeys the GFDT, as long as i) the averaging ensemble is sufficiently large and ii) finite-size effects in estimating the conjugated variable and its susceptibility can be neglected. Furthermore, we test the GFDT also by looking only at a one-dimensional projection of the system, the experimentally accessible position variable. This reduced system is certainly non-Markovian and the GFDT is somewhat violated but not as drastically as for the equilibrium fluctuation-dissipation theorem. We explore suitable measures to quantify the violation of the theorem and demonstrate that for a set of limited experimental data it might be difficult to decide whether the system is Markovian or not.
2008-03-01
computational version of the CASIE architecture serves to demonstrate the functionality of our primary theories. However, implementation of several other...following facts. First, based on Theorem 3 and Theorem 5, the objective function is non-increasing under updating rule (6); second, by the criteria for...reassignment in updating rule (7), it is trivial to show that the objective function is non-increasing under updating rule (7). A Unified View to Graph
NASA Astrophysics Data System (ADS)
Gliouez, Souhir; Hachicha, Skander; Nasroui, Ikbel
We characterize the support projection of a state evolving under the action of a quantum Markov semigroup with unbounded generator represented in the generalized GKSL form, and prove a quantum version of the classical Lévy-Austin-Ornstein theorem.
NASA Astrophysics Data System (ADS)
Demirkaya, Omer
2001-07-01
This study investigates the efficacy of filtering two-dimensional (2D) projection images of Computed Tomography (CT) by nonlinear diffusion filtration to remove the statistical noise prior to reconstruction. The projection images of the Shepp-Logan head phantom were degraded by Gaussian noise. The variance of the Gaussian distribution was adaptively changed depending on the intensity at a given pixel in the projection image. The corrupted projection images were then filtered using the nonlinear anisotropic diffusion filter. The filtered projections as well as the original noisy projections were reconstructed using filtered backprojection (FBP) with a Ram-Lak filter and/or Hanning window. The ensemble variance was computed for each pixel on a slice. The nonlinear filtering of projection images improved the SNR substantially, on the order of fourfold, in these synthetic images. The comparison of intensity profiles across a cross-sectional slice indicated that the filtering did not result in any significant loss of image resolution.
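The classic nonlinear anisotropic diffusion filter of this kind is the Perona-Malik scheme: the conductance shrinks where gradients are large, so edges are preserved while noise in flat regions is smoothed away. The sketch below is a generic implementation with assumed parameters (and periodic borders for brevity), not necessarily the exact filter used in the study.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik diffusion: conductance g = exp(-(grad/kappa)^2) is small
    across strong edges, so they survive while flat-region noise diffuses.
    Uses periodic borders (np.roll) for brevity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four nearest neighbours.
        diffs = (np.roll(u, 1, axis=0) - u, np.roll(u, -1, axis=0) - u,
                 np.roll(u, 1, axis=1) - u, np.roll(u, -1, axis=1) - u)
        u = u + step * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u

rng = np.random.default_rng(2)
noisy = np.zeros((64, 64))
noisy[:, 32:] = 1.0                               # step edge at column 32
noisy += 0.05 * rng.standard_normal(noisy.shape)  # additive Gaussian noise
smooth = anisotropic_diffusion(noisy)             # noise down, edge intact
```

Applying such a filter to each projection before FBP, rather than to the reconstructed slice, is the strategy evaluated in the abstract.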
Impacts of simultaneous multislice acquisition on sensitivity and specificity in fMRI.
Risk, Benjamin B; Kociuba, Mary C; Rowe, Daniel B
2018-05-15
Simultaneous multislice (SMS) imaging can be used to decrease the time between acquisition of fMRI volumes, which can increase sensitivity by facilitating the removal of higher-frequency artifacts and boosting effective sample size. The technique requires an additional processing step in which the slices are separated, or unaliased, to recover the whole brain volume. However, this may result in signal "leakage" between aliased locations, i.e., slice "leakage," and lead to spurious activation (decreased specificity). SMS can also lead to noise amplification, which can reduce the benefits of decreased repetition time. In this study, we evaluate the original slice-GRAPPA (no leak block) reconstruction algorithm and acceleration factor (AF = 8) used in the fMRI data in the young adult Human Connectome Project (HCP). We also evaluate split slice-GRAPPA (leak block), which can reduce slice leakage. We use simulations to disentangle higher test statistics into true positives (sensitivity) and false positives (decreased specificity). Slice leakage was greatly decreased by split slice-GRAPPA. Noise amplification was decreased by using moderate acceleration factors (AF = 4). We examined slice leakage in unprocessed fMRI motor task data from the HCP. When data were smoothed, we found evidence of slice leakage in some, but not all, subjects. We also found evidence of SMS noise amplification in unprocessed task and processed resting-state HCP data. Copyright © 2018 Elsevier Inc. All rights reserved.
Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms
NASA Astrophysics Data System (ADS)
Webber, Richard L.; Hemler, Paul F.; Lavery, John E.
2000-04-01
This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced by using the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value was computed instead of the minimum value, and (5) the same type of algorithm except that the median value was computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) were also performed to determine whether their subjective observations correlated with homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.
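The difference between conventional back projection (a per-pixel average) and the nonlinear minimum/maximum/median variants can be illustrated with a toy shift-and-add model. Everything here (geometry, parallax, feature positions) is hypothetical and chosen only to show why the minimum rule suppresses out-of-plane ghosting.

```python
import numpy as np

def tomo_slice(projections, shifts, reducer):
    """One tomosynthesis slice: shift each projection by its plane-dependent
    offset, then combine per pixel with `reducer` (np.mean reproduces
    conventional back projection; np.min, np.max, np.median give the
    nonlinear variants)."""
    aligned = [np.roll(p, s, axis=1) for p, s in zip(projections, shifts)]
    return reducer(np.stack(aligned), axis=0)

# Toy geometry (hypothetical): an in-plane feature projects to the same
# column in every view; an out-of-plane feature moves with the view angle.
views = [-2, -1, 0, 1, 2]
projs = []
for v in views:
    p = np.zeros((1, 32))
    p[0, 10] = 1.0            # in-plane feature (no parallax)
    p[0, 20 + 2 * v] = 1.0    # out-of-plane feature (2 px parallax per view)
    projs.append(p)

# Reconstruct the plane of the first feature (zero shifts align it).
mean_slice = tomo_slice(projs, [0] * len(views), np.mean)
min_slice = tomo_slice(projs, [0] * len(views), np.min)

# The minimum rule keeps the in-plane feature but rejects the ghost:
print(mean_slice[0, 10], mean_slice[0, 20])  # 1.0 0.2
print(min_slice[0, 10], min_slice[0, 20])    # 1.0 0.0
```

The averaging rule leaves a 0.2-amplitude ghost of the out-of-plane feature in every column it visits, while the minimum rule zeroes it out, consistent with the minimization method's strong showing in the study.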
Formal verification of an avionics microprocessor
NASA Technical Reports Server (NTRS)
Srivas, Mandayam, K.; Miller, Steven P.
1995-01-01
Formal specification combined with mechanical verification is a promising approach for achieving the extremely high levels of assurance required of safety-critical digital systems. However, many questions remain regarding their use in practice: Can these techniques scale up to industrial systems, where are they likely to be useful, and how should industry go about incorporating them into practice? This report discusses a project undertaken to answer some of these questions, the formal verification of the AAMP5 microprocessor. This project consisted of formally specifying in the PVS language a Rockwell proprietary microprocessor at both the instruction-set and register-transfer levels and using the PVS theorem prover to show that the microcode correctly implemented the instruction-level specification for a representative subset of instructions. Notable aspects of this project include the use of a formal specification language by practicing hardware and software engineers, the integration of traditional inspections with formal specifications, and the use of a mechanical theorem prover to verify a portion of a commercial, pipelined microprocessor that was not explicitly designed for formal verification.
Lonchamp, Etienne; Dupont, Jean-Luc; Beekenkamp, Huguette; Poulain, Bernard; Bossu, Jean-Louis
2006-01-01
Thin acute slices and dissociated cell cultures taken from different parts of the brain have been widely used to examine the function of the nervous system, neuron-specific interactions, and neuronal development (specifically, neurobiology, neuropharmacology, and neurotoxicology studies). Here, we focus on an alternative in vitro model: brain-slice cultures in roller tubes, initially introduced by Beat Gähwiler for studies with rats, that we have recently adapted for studies of mouse cerebellum. Cultured cerebellar slices afford many of the advantages of dissociated cultures of neurons and thin acute slices. Organotypic slice cultures were established from newborn or 10-15-day-old mice. After 3-4 weeks in culture, the slices flattened to form a cell monolayer. The main types of cerebellar neurons could be identified with immunostaining techniques, while their electrophysiological properties could be easily characterized with the patch-clamp recording technique. When slices were taken from newborn mice and cultured for 3 weeks, aspects of the cerebellar development were displayed. A functional neuronal network was established despite the absence of mossy and climbing fibers, which are the two excitatory afferent projections to the cerebellum. When slices were made from 10-15-day-old mice, which are at a developmental stage when cerebellum organization is almost established, the structure and neuronal pathways were intact after 3-4 weeks in culture. These unique characteristics make organotypic slice cultures of mouse cerebellar cortex a valuable model for analyzing the consequences of gene mutations that profoundly alter neuronal function and compromise postnatal survival.
Twistor interpretation of slice regular functions
NASA Astrophysics Data System (ADS)
Altavilla, Amedeo
2018-01-01
Given a slice regular function f : Ω ⊂ H → H, with Ω ∩ R ≠ ∅, it is possible to lift it to surfaces in the twistor space CP3 of S4 ≃ H ∪ { ∞ } (see Gentili et al., 2014). In this paper we show that the same result is true if one removes the hypothesis Ω ∩ R ≠ ∅ on the domain of the function f. Moreover we find that if a surface S ⊂CP3 contains the image of the twistor lift of a slice regular function, then S has to be ruled by lines. Starting from these results we find all the projective classes of algebraic surfaces up to degree 3 in CP3 that contain the lift of a slice regular function. In addition we extend and further explore the so-called twistor transform, that is a curve in Gr2(C4) which, given a slice regular function, returns the arrangement of lines whose lift carries on. With the explicit expression of the twistor lift and of the twistor transform of a slice regular function we exhibit the set of slice regular functions whose twistor transform describes a rational line inside Gr2(C4) , showing the role of slice regular functions not defined on R. At the end we study the twistor lift of a particular slice regular function not defined over the reals. This example shows the effectiveness of our approach and opens some questions.
NASA Astrophysics Data System (ADS)
Han, Minah; Baek, Jongduk
2017-03-01
We investigate the location-dependent lesion detectability of cone beam computed tomography images for different background types (i.e., uniform and anatomical), image planes (i.e., transverse and longitudinal), and slice thicknesses. Anatomical backgrounds are generated using a power law spectrum of breast anatomy, 1/f³. A spherical object with a 5 mm diameter is used as the signal. CT projection data are acquired by the forward projection of uniform and anatomical backgrounds with and without the signal. Then, the projection data are reconstructed using the FDK algorithm. Detectability is evaluated by a channelized Hotelling observer with dense difference-of-Gaussian channels. For the uniform background, off-centered images yield higher detectability than iso-centered images for the transverse plane, while for the longitudinal plane, the detectability of iso-centered and off-centered images is similar. For the anatomical background, off-centered images yield higher detectability for the transverse plane, while iso-centered images yield higher detectability for the longitudinal plane when the slice thickness is smaller than 1.9 mm. The optimal slice thickness is 3.8 mm for all tasks, and the transverse plane at the off-center (iso-center and off-center) produces the highest detectability for the uniform (anatomical) background.
NASA Technical Reports Server (NTRS)
Bensalem, Saddek; Ganesh, Vijay; Lakhnech, Yassine; Munoz, Cesar; Owre, Sam; Ruess, Harald; Rushby, John; Rusu, Vlad; Saiedi, Hassen; Shankar, N.
2000-01-01
To become practical for assurance, automated formal methods must be made more scalable, automatic, and cost-effective. Such an increase in scope, scale, automation, and utility can be derived from an emphasis on a systematic separation of concerns during verification. SAL (Symbolic Analysis Laboratory) attempts to address these issues. It is a framework for combining different tools to calculate properties of concurrent systems. The heart of SAL is a language, developed in collaboration with Stanford, Berkeley, and Verimag, for specifying concurrent systems in a compositional way. Our instantiation of the SAL framework augments PVS with tools for abstraction, invariant generation, program analysis (such as slicing), theorem proving, and model checking to separate concerns as well as calculate properties (i.e., perform symbolic analysis) of concurrent systems. We describe the motivation, the language, the tools, their integration in SAL/PVS, and some preliminary experience of their use.
NASA Astrophysics Data System (ADS)
Park, Subok; Zhang, George Z.; Zeng, Rongping; Myers, Kyle J.
2014-03-01
A task-based assessment of image quality for digital breast tomosynthesis (DBT) can be done in either the projected or reconstructed data space. As the choice of observer models and feature selection methods can vary depending on the type of task and the data statistics, we previously investigated the performance of two channelized-Hotelling observer models in conjunction with 2D Laguerre-Gauss (LG) and two implementations of partial least squares (PLS) channels, along with that of the Hotelling observer, in binary detection tasks involving DBT projections. The difference in these observers lies in how the spatial correlation in DBT angular projections is incorporated in the observer's strategy to perform the given task. In the current work, we extend our method to the reconstructed data space of DBT. We investigate how various model observers, including the aforementioned, compare for performing the binary detection of a spherical signal embedded in structured breast phantoms with the use of DBT slices reconstructed via filtered back projection. We explore how well the model observers incorporate the spatial correlation between different numbers of reconstructed DBT slices while varying the number of projections. For this, relatively small and large scan angles (24° and 96°) are used for comparison. Our results indicate that 1) given a particular scan angle, the number of projections needed to achieve the best performance for each observer is similar across all observer/channel combinations, i.e., Np = 25 for scan angle 96° and Np = 13 for scan angle 24°, and 2) given these sufficient numbers of projections, the number of slices for each observer to achieve the best performance differs depending on the channel/observer types, which is more pronounced in the narrow scan angle case.
Boolean approach to dichotomic quantum measurement theories
NASA Astrophysics Data System (ADS)
Nagata, K.; Nakamura, T.; Batle, J.; Abdalla, S.; Farouk, A.
2017-02-01
Recently, a new measurement theory based on truth values was proposed by Nagata and Nakamura [Int. J. Theor. Phys. 55, 3616 (2016)], that is, a theory where the results of measurements are either 0 or 1. The standard measurement theory accepts a hidden variable model for a single Pauli observable. Hence, we can introduce a classical probability space for the measurement theory in this particular case. Additionally, we discuss in the present contribution the fact that projective measurement theories (the results of which are either +1 or -1) imply the Bell, Kochen, and Specker (BKS) paradox for a single Pauli observable. To justify our assertion, we present the BKS theorem in almost all the two-dimensional states by using a projective measurement theory. As an example, we present the BKS theorem in two-dimensions with white noise. Our discussion provides new insight into the quantum measurement problem by using this measurement theory based on the truth values.
Generalized energy measurements and modified transient quantum fluctuation theorems
NASA Astrophysics Data System (ADS)
Watanabe, Gentaro; Venkatesh, B. Prasanna; Talkner, Peter
2014-05-01
Determining the work which is supplied to a system by an external agent provides a crucial step in any experimental realization of transient fluctuation relations. This, however, poses a problem for quantum systems, where the standard procedure requires the projective measurement of energy at the beginning and the end of the protocol. Unfortunately, projective measurements, which are preferable from the point of view of theory, seem to be difficult to implement experimentally. We demonstrate that, when using a particular type of generalized energy measurements, the resulting work statistics is simply related to that of projective measurements. This relation between the two work statistics entails the existence of modified transient fluctuation relations. The modifications are exclusively determined by the errors incurred in the generalized energy measurements. They are universal in the sense that they do not depend on the force protocol. Particularly simple expressions for the modified Crooks relation and Jarzynski equality are found for Gaussian energy measurements. These can be obtained by a sequence of sufficiently many generalized measurements which need not be Gaussian. In accordance with the central limit theorem, this leads to an effective error reduction in the individual measurements and even yields a projective measurement in the limit of infinite repetitions.
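The role of the two projective energy measurements can be made concrete with a small numerical check. The sketch below, with illustrative parameters not taken from the paper, verifies the Jarzynski equality ⟨e^(−βW)⟩ = Z_f/Z_i for a sudden quench of a qubit, where work is defined by ideal projective measurements at the beginning and the end of the protocol:

```python
import numpy as np

# Hedged sketch: numerical check of the Jarzynski equality for a sudden
# quench of a qubit, using ideal projective energy measurements at the
# start and end of the protocol. All parameters are illustrative.
beta = 1.3
Ei = np.array([-1.0, 1.0])                 # eigenvalues of the initial Hamiltonian
Ef = np.array([-1.5, 1.5])                 # eigenvalues of the final Hamiltonian
theta = 0.7                                # quench rotates the eigenbasis
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # columns: final eigenvectors

Zi = np.exp(-beta * Ei).sum()
Zf = np.exp(-beta * Ef).sum()

# Two-point projective-measurement work statistics:
# P(n, m) = p_n |<m_f|n_i>|^2, with work W = Ef[m] - Ei[n]
p_init = np.exp(-beta * Ei) / Zi
overlap = np.abs(U.T) ** 2                 # overlap[m, n] = |<m_f|n_i>|^2
avg_exp_work = sum(p_init[n] * overlap[m, n] * np.exp(-beta * (Ef[m] - Ei[n]))
                   for n in range(2) for m in range(2))

assert np.isclose(avg_exp_work, Zf / Zi)   # <e^{-beta W}> = Zf / Zi holds exactly
```

For the sudden quench the equality holds exactly, term by term; a generalized (noisy) energy measurement would modify the right-hand side as described in the abstract.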
Weak convergence of a projection algorithm for variational inequalities in a Banach space
NASA Astrophysics Data System (ADS)
Iiduka, Hideaki; Takahashi, Wataru
2008-03-01
Let C be a nonempty, closed convex subset of a Banach space E. In this paper, motivated by Alber [Ya.I. Alber, Metric and generalized projection operators in Banach spaces: Properties and applications, in: A.G. Kartsatos (Ed.), Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, in: Lecture Notes Pure Appl. Math., vol. 178, Dekker, New York, 1996, pp. 15-50], we introduce the following iterative scheme for finding a solution of the variational inequality problem for an inverse-strongly-monotone operator A in a Banach space: x₁ = x ∈ C and x_{n+1} = Π_C J⁻¹(Jx_n − λ_n Ax_n) for every n ≥ 1, where Π_C is the generalized projection from E onto C, J is the duality mapping from E into E*, and {λ_n} is a sequence of positive real numbers. Then we show a weak convergence theorem (Theorem 3.1). Finally, using this result, we consider the convex minimization problem, the complementarity problem, and the problem of finding a point u ∈ E satisfying 0 = Au.
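In a Hilbert space the duality mapping J is the identity and the generalized projection Π_C reduces to the metric projection, so the scheme x_{n+1} = Π_C J⁻¹(Jx_n − λ_n Ax_n) becomes the classical projected-gradient iteration. A minimal sketch, with an example operator and constraint set chosen purely for illustration:

```python
import numpy as np

# Hedged sketch: in a Hilbert space, J = identity and Pi_C is the metric
# projection, so the paper's scheme reduces to projected gradient.
# The operator A(x) = x - b is inverse-strongly monotone with alpha = 1.

def proj_unit_ball(x):
    """Metric projection onto C = closed unit ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

b = np.array([2.0, 1.0, -2.0])
A = lambda x: x - b

x = np.zeros(3)                  # x_1 in C
for n in range(200):
    lam = 0.5                    # constant step in (0, 2*alpha)
    x = proj_unit_ball(x - lam * A(x))

# The VI solution <Ax*, y - x*> >= 0 for all y in C is the projection
# of b onto C, here b / ||b||.
assert np.allclose(x, b / np.linalg.norm(b), atol=1e-6)
```

The Banach-space machinery (the duality map and Π_C) is exactly what replaces these Hilbert-space simplifications in the paper's Theorem 3.1.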
Regularity and Tresse's theorem for geometric structures
NASA Astrophysics Data System (ADS)
Sarkisyan, R. A.; Shandra, I. G.
2008-04-01
For any non-special bundle P → X of geometric structures we prove that the k-jet space J^k of this bundle, with an appropriate k, contains an open dense domain U_k on which Tresse's theorem holds. For every s ≥ k we prove that the pre-image π_{k,s}^{-1}(U_k) of U_k under the natural projection π_{k,s}: J^s → J^k consists of regular points. (A point of J^s is said to be regular if the orbits of the group of diffeomorphisms induced from X have locally constant dimension in a neighbourhood of this point.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinterbichler, Kurt; Joyce, Austin; Khoury, Justin, E-mail: kurt.hinterbichler@case.edu, E-mail: austin.joyce@columbia.edu, E-mail: jkhoury@sas.upenn.edu
We investigate the symmetry structure of inflation in 2+1 dimensions. In particular, we show that the asymptotic symmetries of three-dimensional de Sitter space are in one-to-one correspondence with cosmological adiabatic modes for the curvature perturbation. In 2+1 dimensions, the asymptotic symmetry algebra is infinite-dimensional, given by two copies of the Virasoro algebra, and can be traced to the conformal symmetries of the two-dimensional spatial slices of de Sitter. We study the consequences of this infinite-dimensional symmetry for inflationary correlation functions, finding new soft theorems that hold only in 2+1 dimensions. Expanding the correlation functions as a power series in the soft momentum q, these relations constrain the traceless part of the tensorial coefficient at each order in q in terms of a lower-point function. As a check, we verify that the O(q²) identity is satisfied by inflationary correlation functions in the limit of small sound speed.
NASA Astrophysics Data System (ADS)
Rogatko, Marek
1998-08-01
Using the ADM formulation of the Einstein-Maxwell axion-dilaton gravity we derive the formulas for the variation of mass and other asymptotic conserved quantities in the theory under consideration. Generalizing this kind of reasoning to the initial data for the manifold with an interior boundary we get the generalized first law of black hole mechanics. We consider an asymptotically flat solution to the Einstein-Maxwell axion-dilaton gravity describing a black hole with a Killing vector field timelike at infinity, the horizon of which comprises a bifurcate Killing horizon with a bifurcate surface. Supposing that the Killing vector field is asymptotically orthogonal to the static hypersurface with boundary S and a compact interior, we find that the solution is static in the exterior world, when the timelike vector field is normal to the horizon and has vanishing electric and axion-electric fields on static slices.
Information technologies for taking into account risks in business development programme
NASA Astrophysics Data System (ADS)
Kalach, A. V.; Khasianov, R. R.; Rossikhina, L. V.; Zybin, D. G.; Melnik, A. A.
2018-05-01
The paper describes information technologies for taking risks into account in a business development programme, which rely on an algorithm for the assessment of programme project risks and an algorithm for forming the programme with constrained financing of high-risk projects taken into account. A lower-bound estimation method is suggested for subsets of solutions. The corresponding theorem and lemma and their proofs are given.
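The abstract gives no algorithmic detail, but lower-bound estimates over subsets of solutions are the classic ingredient of branch-and-bound search. A hypothetical toy version, with invented project data: select projects minimising total risk under a budget and a required return, pruning any partial selection whose already-committed risk exceeds the incumbent:

```python
# Hedged sketch: branch-and-bound with a lower-bound estimate, on a toy
# project-selection problem of our own choosing (not from the paper).
# Since risks are non-negative, the risk already committed by a partial
# selection is a valid lower bound on any of its completions.
projects = [  # (cost, return, risk) -- illustrative numbers
    (4, 10, 0.30), (3, 6, 0.10), (5, 12, 0.45), (2, 5, 0.15),
]
BUDGET, REQUIRED_RETURN = 9, 15

best = {"risk": float("inf"), "choice": None}

def branch(i, cost, ret, risk, chosen):
    if risk >= best["risk"]:        # lower bound on any completion: prune
        return
    if i == len(projects):
        if ret >= REQUIRED_RETURN:  # feasible leaf: update incumbent
            best["risk"], best["choice"] = risk, tuple(chosen)
        return
    c, r, q = projects[i]
    if cost + c <= BUDGET:          # branch 1: take project i
        branch(i + 1, cost + c, ret + r, risk + q, chosen + [i])
    branch(i + 1, cost, ret, risk, chosen)   # branch 2: skip project i

branch(0, 0, 0, 0.0, [])
print(best)
```

With these numbers the optimum is projects 0 and 1 (cost 7, return 16, risk 0.40); the pruning rule never discards it because the bound is valid.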
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favazza, C; Yu, L; Leng, S
2015-06-15
Purpose: To investigate using multiple CT image slices from a single acquisition as independent training images for a channelized Hotelling observer (CHO) model to reduce the number of repeated scans for CHO-based CT image quality assessment. Methods: We applied a previously validated CHO model to detect low contrast disk objects formed from cross-sectional images of three epoxy-resin-based rods (diameters: 3, 5, and 9 mm; length: ∼5 cm). The rods were submerged in a 35 × 25 cm² iodine-doped water-filled phantom, yielding −15 HU object contrast. The phantom was scanned 100 times with and without the rods present. Scan and reconstruction parameters include: 5 mm slice thickness at 0.5 mm intervals, 120 kV, 480 Quality Reference mAs, and a 128-slice scanner. The CHO's detectability index was evaluated as a function of factors related to incorporating multi-slice image data: object misalignment along the z-axis, inter-slice pixel correlation, and number of unique slice locations. In each case, the CHO training set was fixed to 100 images. Results: Artificially shifting the object's center position by as much as 3 pixels in any direction relative to the Gabor channel filters had insignificant impact on object detectability. An inter-slice pixel correlation of >∼0.2 yielded positive bias in the model's performance. Incorporating multi-slice image data yielded slight negative bias in detectability with increasing number of slices, likely due to physical variations in the objects. However, inclusion of image data from up to 5 slice locations yielded detectability indices within measurement error of the single slice value. Conclusion: For the investigated model and task, incorporating image data from 5 different slice locations at intervals of at least 5 mm into the CHO model yielded detectability indices within measurement error of the single slice value. Consequently, this methodology would result in a 5-fold reduction in the number of image acquisitions.
This project was supported by National Institutes of Health grants R01 EB017095 and U01 EB017185 from the National Institute of Biomedical Imaging and Bioengineering.
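The detectability index at the heart of this study can be sketched in a few lines. The toy example below uses synthetic white-noise images, a Gaussian blob standing in for the low-contrast signal, and simple Gaussian channels standing in for the abstract's Gabor channels; d′ is computed from the channelized mean difference and the pooled channel covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch of a channelized Hotelling observer (CHO) detectability
# index. Channels, signal, and noise model are all illustrative stand-ins.
N, n_img = 32, 200
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r2 = xx ** 2 + yy ** 2
signal = 0.8 * np.exp(-r2 / (2 * 3.0 ** 2))          # low-contrast blob
channels = np.stack([np.exp(-r2 / (2 * s ** 2)) for s in (2, 4, 8)])
U = channels.reshape(3, -1).T                        # (pixels, channels)

def channelize(imgs):
    return imgs.reshape(len(imgs), -1) @ U           # project onto channels

absent  = channelize(rng.normal(0, 1, (n_img, N, N)))
present = channelize(rng.normal(0, 1, (n_img, N, N)) + signal)

dv = present.mean(0) - absent.mean(0)                # mean channel difference
S = 0.5 * (np.cov(absent.T) + np.cov(present.T))     # pooled channel covariance
d_prime = float(np.sqrt(dv @ np.linalg.solve(S, dv)))
print(f"CHO detectability d' = {d_prime:.2f}")
```

The study's question, in these terms, is whether the rows of `absent`/`present` may come from neighbouring slices of one scan rather than from repeated scans without biasing d′.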
Sinogram restoration for ultra-low-dose x-ray multi-slice helical CT by nonparametric regression
NASA Astrophysics Data System (ADS)
Jiang, Lu; Siddiqui, Khan; Zhu, Bin; Tao, Yang; Siegel, Eliot
2007-03-01
During the last decade, x-ray computed tomography (CT) has been applied to screen large asymptomatic smoking and nonsmoking populations for early lung cancer detection. Because a larger population will be involved in such screening exams, more and more attention has been paid to studying low-dose, and even ultra-low-dose, x-ray CT. However, reducing CT radiation exposure increases the noise level in the sinogram, thereby degrading the quality of reconstructed CT images and causing more streak artifacts near the apices of the lung. Thus, how to reduce the noise levels and streak artifacts in low-dose CT images has become a meaningful topic. Since multi-slice helical CT has replaced conventional stop-and-shoot CT in many clinical applications, this research mainly focused on the noise reduction issue in multi-slice helical CT. The experimental data were provided by a Siemens SOMATOM Sensation 16-slice helical CT scanner. They included both conventional CT data acquired under a 120 kVp, 119 mA protocol and ultra-low-dose CT data acquired under a 120 kVp, 10 mA protocol. All other settings were the same as those of conventional CT. In this paper, a nonparametric smoothing method with thin-plate smoothing splines and a roughness penalty is proposed to restore the ultra-low-dose CT raw data. Each projection frame was first divided into blocks, and the 2D data in each block were then fitted to a thin-plate smoothing spline surface by minimizing a roughness-penalized least squares objective function. By doing so, the noise in each ultra-low-dose CT projection was reduced by leveraging the information contained not only within each individual projection profile but also among nearby profiles. Finally, the restored ultra-low-dose projection data were fed into the standard filtered back projection (FBP) algorithm to reconstruct CT images.
The reconstruction results, as well as a comparison between the proposed approach and the traditional method, are given in the results and discussion section and show the effectiveness of the proposed thin-plate-based nonparametric regression method.
NASA Astrophysics Data System (ADS)
Katsura, Hosho; Koma, Tohru
2018-03-01
We study a wide class of topological free-fermion systems on a hypercubic lattice in spatial dimensions d ≥ 1. When the Fermi level lies in a spectral gap or a mobility gap, the topological properties, e.g., the integral quantization of the topological invariant, are protected by certain symmetries of the Hamiltonian against disorder. This generic feature is characterized by a generalized index theorem which is a noncommutative analog of the Atiyah-Singer index theorem. The noncommutative index defined in terms of a pair of projections gives a precise formula for the topological invariant in each symmetry class in any dimension (d ≥ 1). Under the assumption on the nonvanishing spectral or mobility gap, we prove that the index formula reproduces Bott periodicity and all of the possible values of topological invariants in the classification table of topological insulators and superconductors. We also prove that the indices are robust against perturbations that do not break the symmetry of the unperturbed Hamiltonian.
Optimized 3D stitching algorithm for whole body SPECT based on transition error minimization (TEM)
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-02-01
Standard Single Photon Emission Computed Tomography (SPECT) has a limited field of view (FOV) and cannot provide a 3D image of the entire body in a single acquisition. To produce a 3D whole body SPECT image, two to five overlapped SPECT FOVs from head to foot are acquired and assembled using image stitching. Most commercial software from medical imaging manufacturers applies a direct mid-slice stitching method to avoid blurring or ghosting from 3D image blending. Due to intensity changes across the middle slice of overlapped images, direct mid-slice stitching often produces visible seams in the coronal and sagittal views and in maximum intensity projections (MIP). In this study, we proposed an optimized algorithm to reduce the visibility of stitching edges. The new algorithm computes, based on transition error minimization (TEM), a 3D stitching interface between two overlapped 3D SPECT images. To test the suggested algorithm, four studies of 2-FOV whole body SPECT were used, covering two different reconstruction methods (filtered back projection (FBP) and ordered subset expectation maximization (OSEM)) as well as two different radiopharmaceuticals (Tc-99m MDP for bone metastases and I-131 MIBG for neuroblastoma tumors). Relative transition errors of whole body SPECT stitched with the mid-slice method and with the TEM-based algorithm were measured for objective evaluation. Preliminary experiments showed that the new algorithm reduced the visibility of the stitching interface in the coronal, sagittal, and MIP views. Average relative transition errors were reduced from 56.7% with mid-slice stitching to 11.7% with TEM-based stitching. The proposed algorithm also avoids blurring artifacts by preserving the noise properties of the original SPECT images.
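The core idea can be sketched compactly: instead of cutting both volumes at the fixed middle slice of the overlap, cut each transverse position at the axial slice where the two acquisitions agree best, so the seam follows a low-error 3D interface. A hedged toy version on synthetic data (the intensity model and dimensions are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hedged sketch of transition-error-minimised (TEM) stitching for the
# overlap region of two SPECT FOVs, compared with naive mid-slice cutting.
nz, ny, nx = 10, 8, 8
base = rng.uniform(50, 100, (nz, ny, nx))
A = base * 1.10                  # FOV 1: intensity scaled high in overlap
B = base * 0.90                  # FOV 2: intensity scaled low in overlap
B[7:] = A[7:]                    # deep slices happen to agree between FOVs

err = np.abs(A - B)              # transition error per voxel
cut = err.argmin(axis=0)         # stitching interface z(x, y)

# Compose the output: FOV 1 above the interface, FOV 2 at and below it.
stitched = np.where(np.arange(nz)[:, None, None] < cut[None], A, B)

seam_err = err[cut, np.arange(ny)[:, None], np.arange(nx)[None, :]]
mid_err = err[nz // 2]           # error paid by naive mid-slice stitching
assert seam_err.mean() < mid_err.mean()
```

A practical implementation would also regularise `cut` so the interface stays smooth, but the pruning of transition error is the essence of the reported 56.7% to 11.7% improvement.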
NASA Technical Reports Server (NTRS)
Schmid, F.; Khattak, C. P.
1979-01-01
Several 20 cm diameter silicon ingots, up to 6.3 kg, were cast with good crystallinity. The graphite heat zone can be purified by heating it to high temperatures in vacuum; this is important for reducing costs and for the purification of large parts. Electroplated wires with 45 um synthetic diamonds and 30 um natural diamonds showed good cutting efficiency and lifetime. During slicing of a 10 cm x 10 cm workpiece, jerky motion occurred in the feed and rocking mechanisms. This problem was corrected, and modifications were made to reduce the weight of the bladehead by 50%.
Mathematical and physical meaning of the Bell inequalities
NASA Astrophysics Data System (ADS)
Santos, Emilio
2016-09-01
It is shown that the Bell inequalities are closely related to the triangle inequalities involving distance functions amongst pairs of random variables with values {0,1}. A hidden variables model may be defined as a mapping between a set of quantum projection operators and a set of random variables. The model is noncontextual if there is a joint probability distribution. The Bell inequalities are necessary conditions for its existence. The inequalities are most relevant when measurements are performed at space-like separation, thus showing a conflict between quantum mechanics and local realism (Bell's theorem). The relations of the Bell inequalities with contextuality, the Kochen-Specker theorem, and quantum entanglement are briefly discussed.
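The triangle-inequality view is easy to verify numerically: for {0,1}-valued random variables with a joint distribution, d(a,b) = P(a ≠ b) is a pseudometric, so the triangle inequality, a Bell-type inequality, holds automatically. A small check over randomly generated joint distributions (the sampling scheme is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hedged sketch: whenever a joint distribution exists (a noncontextual
# hidden-variable model), d(a,b) = P(a != b) obeys the triangle
# inequality d(a,c) <= d(a,b) + d(b,c). This is pointwise: if a != c,
# then b must differ from at least one of them.
def dist(samples, i, j):
    return np.mean(samples[:, i] != samples[:, j])

for _ in range(100):
    # arbitrary "hidden variable": any joint distribution over {0,1}^3
    p = rng.dirichlet(np.ones(8))
    outcomes = np.array([[b >> k & 1 for k in range(3)] for b in range(8)])
    s = outcomes[rng.choice(8, size=5000, p=p)]
    d_ab, d_bc, d_ac = dist(s, 0, 1), dist(s, 1, 2), dist(s, 0, 2)
    assert d_ac <= d_ab + d_bc + 1e-12      # triangle (Bell-type) inequality
```

Quantum predictions for space-like separated measurements can violate the analogous inequality precisely because no joint distribution over all outcomes exists, which is the content of Bell's theorem as framed in the abstract.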
An iterative reconstruction method for high-pitch helical luggage CT
NASA Astrophysics Data System (ADS)
Xue, Hui; Zhang, Li; Chen, Zhiqiang; Jin, Xin
2012-10-01
X-ray luggage CT is widely used in airports and railway stations for the purpose of detecting contraband and dangerous goods that may pose a threat to public safety, playing an important role in homeland security. An X-ray luggage CT system usually scans in a helical trajectory with a high pitch to achieve a high passing speed of the luggage. The disadvantage of high pitch is that conventional filtered back-projection (FBP) requires a very large slice thickness, leading to poor axial resolution and helical artifacts. Especially when severe data inconsistencies are present in the z-direction, as at the ends of a scanned object, the partial volume effect leads to inaccurate values and may cause wrong identifications. In this paper, an iterative reconstruction method is developed to improve the image quality and accuracy for a large-spacing multi-detector high-pitch helical luggage CT system. In this method, the slice thickness is set to be much smaller than the pitch. Each slice then involves projection data collected in a rather small angular range, making this an ill-conditioned limited-angle problem. First, a low-resolution reconstruction is employed to obtain images, which are used as prior images in the following process. Then iterative reconstruction is performed to obtain high-resolution images. This method enables a high volume coverage speed and a thin reconstruction slice for helical luggage CT. We validate this method with data collected on a commercial X-ray luggage CT system.
NASA Astrophysics Data System (ADS)
Timberg, P.; Dustler, M.; Petersson, H.; Tingberg, A.; Zackrisson, S.
2015-03-01
Purpose: To investigate detection performance for calcification clusters in reconstructed digital breast tomosynthesis (DBT) slices at different dose levels using a Super Resolution and Statistical Artifact Reduction (SRSAR) reconstruction method. Method: Simulated calcifications with irregular profiles (0.2 mm diameter) were combined to form clusters that were added to projection images (1-3 per abnormal image) acquired on a DBT system (Mammomat Inspiration, Siemens). The projection images were dose-reduced by software to form 35 abnormal cases and 25 normal cases as if acquired at the 100%, 75% and 50% dose levels (AGD of approximately 1.6 mGy for a 53 mm standard breast, measured according to EUREF v0.15). A standard FBP and an SRSAR reconstruction method (utilizing IRIS (iterative reconstruction filters) and outlier detection using Maximum-Intensity Projections and Average-Intensity Projections) were used to reconstruct single central slices to be used in a free-response task (60 images per observer and dose level). Six observers participated, and their task was to detect the clusters and assign a confidence rating in randomly presented images from the whole image set (balanced by dose level). Each trial was separated by one week to reduce possible memory bias. The outcome was analyzed for statistical differences using Jackknifed Alternative Free-response Receiver Operating Characteristics. Results: The results indicate that it is possible to reduce the dose by 50% with SRSAR without jeopardizing cluster detection. Conclusions: The detection performance for clusters can be maintained at a lower dose level by using SRSAR reconstruction.
NASA Technical Reports Server (NTRS)
Holden, S. C.; Fleming, J. R.
1978-01-01
Fabrication of a prototype large capacity multiple blade slurry saw is considered. Design of the bladehead which will tension up to 1000 blades, and cut a 45 cm long silicon ingot as large as 12 cm in diameter is given. The large blade tensioning force of 270,000 kg is applied through two bolts acting on a pair of scissor toggles, significantly reducing operator set-up time. Tests with an upside-down cutting technique resulted in 100% wafering yields and the highest wafer accuracy yet experienced with MS slicing. Variations in oil and abrasives resulted only in degraded slicing results. A technique of continuous abrasive slurry separation to remove silicon debris is described.
Generalized quantum no-go theorems of pure states
NASA Astrophysics Data System (ADS)
Li, Hui-Ran; Luo, Ming-Xing; Lai, Hong
2018-07-01
Various results on the no-cloning theorem, no-deleting theorem and no-superposing theorem in quantum mechanics have been proved using the superposition principle and the linearity of quantum operations. In this paper, we investigate general transformations forbidden by quantum mechanics in order to unify these theorems. First, we prove that no useful information can be created from an unknown pure state which is randomly chosen from a Hilbert space according to the Haar measure. Second, we propose a unified no-go theorem based on a generalized no-superposing result. The new theorem includes the no-cloning theorem, no-anticloning theorem, no-partial-erasure theorem, no-splitting theorem, no-superposing theorem and no-encoding theorem as special cases. Moreover, it implies various new results. Third, we extend the new theorem into another form that includes the no-deleting theorem as a special case.
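The linearity argument behind these no-go theorems can be demonstrated in a few lines: a candidate "cloning" map ψ ↦ ψ ⊗ ψ is quadratic, so it cannot agree with any linear operation on superpositions. A minimal sketch with standard basis states (the states are arbitrary examples):

```python
import numpy as np

# Hedged sketch: why linearity forbids universal cloning. The map
# psi -> psi (tensor) psi is quadratic, so no linear (unitary) machine
# that clones |0> and |1> can also clone their superposition.
def clone(psi):
    return np.kron(psi, psi)

e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])
plus = (e0 + e1) / np.sqrt(2)

# A linear machine cloning e0 and e1 sends |+> to (|00> + |11>)/sqrt(2),
# an entangled state, not to the product |+>|+> that true cloning needs.
linear_output = (clone(e0) + clone(e1)) / np.sqrt(2)
assert not np.allclose(linear_output, clone(plus))
```

The unified theorem in the abstract generalizes exactly this tension between linearity and a desired nonlinear target map to anticloning, partial erasure, splitting, superposing and encoding.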
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D; Neylon, J; Dou, T
Purpose: A recently proposed 4D-CT protocol uses deformable registration of free-breathing fast-helical CT scans to generate a breathing motion model. In order to allow accurate registration, free-breathing images are required to be free of doubling-artifacts, which arise when tissue motion is greater than scan speed. This work identifies the minimum scanner parameters required to successfully generate free-breathing fast-helical scans without doubling-artifacts. Methods: 10 patients were imaged under free breathing conditions 25 times in alternating directions with a 64-slice CT scanner using a low dose fast helical protocol. A high temporal resolution (0.1s) 4D-CT was generated using a patient specific motion model and patient breathing waveforms, and used as the input for a scanner simulation. Forward projections were calculated using helical cone-beam geometry (800 projections per rotation) and a GPU accelerated reconstruction algorithm was implemented. Various CT scanner detector widths and rotation times were simulated, and verified using a motion phantom. Doubling-artifacts were quantified in patient images using structural similarity maps to determine the similarity between axial slices. Results: Increasing amounts of doubling-artifacts were observed with increasing rotation times > 0.2s for 16×1mm slice scan geometry. No significant increase in doubling artifacts was observed for 64×1mm slice scan geometry up to 1.0s rotation time, although blurring artifacts were observed >0.6s. Using a 16×1mm slice scan geometry, a rotation time of less than 0.3s (53mm/s scan speed) would be required to produce images of similar quality to a 64×1mm slice scan geometry. Conclusion: The current generation of 16 slice CT scanners, which are present in most Radiation Oncology departments, are not capable of generating free-breathing sorting-artifact-free images in the majority of patients.
The next generation of CT scanners should be capable of at least 53mm/s scan speed in order to use a fast-helical 4D-CT protocol to generate a motion-artifact free 4D-CT. NIH R01CA096679.
Slicing Method for curved façade and window extraction from point clouds
NASA Astrophysics Data System (ADS)
Iman Zolanvari, S. M.; Laefer, Debra F.
2016-09-01
Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of a building are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally-efficient method for extracting overall façade and window boundary points and reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through the building. This is done along the façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part by using a one-dimensional projection to accelerate processing. Slice counts were optimised at 14.3 slices per vertical metre of building and 25 slices per horizontal metre, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings, and nearly 100% on simple ones. Furthermore, computational times were less than 3 sec for data sets of up to 2.6 million points, while similar existing approaches required more than 16 hr for such datasets.
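The one-dimensional projection idea can be illustrated on synthetic data: once a façade slice is aligned with its principal axes, projecting its points onto one axis gives a density profile in which window openings appear as low-count gaps. A hedged sketch, with an invented 10 m wall carrying openings at 2-3 m and 6-7 m:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hedged sketch of the 1D-projection step: openings in one horizontal
# slice of a facade show up as low-density gaps in the point histogram.
# The wall geometry below is synthetic, not from the paper.
wall_x = rng.uniform(0, 10, 20000)
is_opening = ((wall_x > 2) & (wall_x < 3)) | ((wall_x > 6) & (wall_x < 7))
wall_x = wall_x[~is_opening]             # the scanner gets no returns in openings

counts, edges = np.histogram(wall_x, bins=100, range=(0, 10))
gap = counts < 0.2 * np.median(counts)   # bins with almost no returns
gap_x = 0.5 * (edges[:-1] + edges[1:])[gap]

# Recovered opening extents should match the two simulated windows
assert 1.9 < gap_x.min() < 2.2 and 6.8 < gap_x.max() < 7.1
```

Because each slice reduces to a 1D histogram rather than a 2D segmentation, the per-slice cost is tiny, which is consistent with the sub-3-second processing times reported for millions of points.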
Finite slice analysis (FINA) of sliced and velocity mapped images on a Cartesian grid
NASA Astrophysics Data System (ADS)
Thompson, J. O. F.; Amarasinghe, C.; Foley, C. D.; Rombes, N.; Gao, Z.; Vogels, S. N.; van de Meerakker, S. Y. T.; Suits, A. G.
2017-08-01
Although time-sliced imaging yields improved signal-to-noise and resolution compared with unsliced velocity mapped ion images, for finite slice widths as encountered in real experiments there is a loss of resolution and recovered intensities for the slow fragments. Recently, we reported a new approach that permits correction of these effects for an arbitrarily sliced distribution of a 3D charged particle cloud. This finite slice analysis (FinA) method utilizes basis functions that model the out-of-plane contribution of a given velocity component to the image for sequential subtraction in a spherical polar coordinate system. However, the original approach suffers from a slow processing time due to the weighting procedure needed to accurately model the out-of-plane projection of an anisotropic angular distribution. To overcome this issue we present a variant of the method in which the FinA approach is performed in a cylindrical coordinate system (Cartesian in the image plane) rather than a spherical polar coordinate system. Dubbed C-FinA, we show how this method is applied in much the same manner. We compare this variant to the polar FinA method and find that the processing time (of a 510 × 510 pixel image) in its most extreme case improves by a factor of 100. We also show that although the resulting velocity resolution is not quite as high as the polar version, this new approach shows superior resolution for fine structure in the differential cross sections. We demonstrate the method on a range of experimental and synthetic data at different effective slice widths.
ERIC Educational Resources Information Center
Kirker, Sara Schmickle
2009-01-01
In this article, the author describes an art project for second-grade students based on American Regionalist Grant Wood's most famous painting, "American Gothic," which was modeled by his sister, Nan, and his dentist. This well-loved painting depicting a hard-working farmer and his daughter standing in front of their farmhouse is the project's…
Scatter measurement and correction method for cone-beam CT based on single grating scan
NASA Astrophysics Data System (ADS)
Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua
2017-06-01
In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and determined. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid an additional scan, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
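The angle-interpolation step lends itself to a small sketch. This is illustrative: the function names and the per-pixel linear interpolation are my assumptions about how such an interpolation could be done, not the paper's exact procedure. Scatter images measured at coarse angular intervals (e.g. every 30 deg) are interpolated to every projection angle and subtracted from the projections.

```python
import numpy as np

def interpolate_scatter(scatter_by_angle, all_angles):
    """Linearly interpolate sparse scatter measurements to every
    projection angle, pixel by pixel.

    scatter_by_angle: dict {angle_deg: 2-D scatter image} measured at
                      coarse intervals.
    all_angles: projection angles needing a scatter estimate.
    """
    meas_angles = np.array(sorted(scatter_by_angle))
    stack = np.stack([scatter_by_angle[a] for a in meas_angles])  # (k, H, W)
    out = {}
    for a in all_angles:
        i = np.clip(np.searchsorted(meas_angles, a), 1, len(meas_angles) - 1)
        a0, a1 = meas_angles[i - 1], meas_angles[i]
        # clamp the weight so angles outside the measured range are
        # held at the nearest measurement rather than extrapolated
        w = 0.0 if a1 == a0 else np.clip((a - a0) / (a1 - a0), 0.0, 1.0)
        out[a] = (1 - w) * stack[i - 1] + w * stack[i]
    return out

def correct(projection, scatter):
    """Scatter-corrected projection, clipped at zero counts."""
    return np.clip(projection - scatter, 0, None)
```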
Hutzler, Michael; Fromherz, Peter
2004-04-01
Probing projections between brain areas and their modulation by synaptic potentiation requires dense arrays of contacts for noninvasive electrical stimulation and recording. Semiconductor technology is able to provide planar arrays with high spatial resolution to be used with planar neuronal structures such as organotypic brain slices. To address basic methodical issues we developed a silicon chip with simple arrays of insulated capacitors and field-effect transistors for stimulation of neuronal activity and recording of evoked field potentials. Brain slices from rat hippocampus were cultured on that substrate. We achieved local stimulation of the CA3 region by applying defined voltage pulses to the chip capacitors. Recording of resulting local field potentials in the CA1 region was accomplished with transistors. The relationship between stimulation and recording was rationalized by a sheet conductor model. By combining a row of capacitors with a row of transistors we determined a simple stimulus-response matrix from CA3 to CA1. Possible contributions of inhomogeneities of synaptic projection, of tissue structure and of neuroelectronic interfacing were considered. The study provides the basis for a development of semiconductor chips with high spatial resolution that are required for long-term studies of topographic mapping.
Direct approach for the fluctuation-dissipation theorem under nonequilibrium steady-state conditions
NASA Astrophysics Data System (ADS)
Komori, Kentaro; Enomoto, Yutaro; Takeda, Hiroki; Michimura, Yuta; Somiya, Kentaro; Ando, Masaki; Ballmer, Stefan W.
2018-05-01
The test mass suspensions of cryogenic gravitational-wave detectors such as the KAGRA project are tasked with extracting the heat deposited on the optics. These suspensions have a nonuniform temperature, requiring the calculation of thermal noise in nonequilibrium conditions. While it is not possible to describe the whole suspension system with one temperature, the local temperature at every point in the system is still well defined. We therefore generalize the application of the fluctuation-dissipation theorem to mechanical systems, pioneered by Saulson and Levin, to nonequilibrium conditions in which a temperature can only be defined locally. The result is intuitive in the sense that the thermal noise in the observed degree of freedom is given by averaging the temperature field, weighted by the dissipation density associated with that particular degree of freedom. After proving this theorem, we apply the result to examples of increasing complexity: a simple spring, the bending of a pendulum suspension fiber, and a model of the KAGRA cryogenic suspension. We conclude by outlining the application to nonequilibrium thermoelastic noise.
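In Levin's direct approach, one applies an oscillatory generalized force F_0 cos(ωt) to the observed degree of freedom and computes the cycle-averaged dissipated power; the abstract's result replaces the single temperature by a dissipation-weighted average over the local temperature field. The notation below is my own and the prefactors should be checked against the paper:

```latex
% Equilibrium (Levin): W_diss is the cycle-averaged dissipated power
S_x(\omega) = \frac{8 k_B T}{\omega^2}\,\frac{W_{\mathrm{diss}}}{F_0^2}

% Nonequilibrium generalization: local temperature T(\vec r) weighted by the
% dissipation density w_diss(\vec r), with W_diss = \int w_{\mathrm{diss}}\,dV
S_x(\omega) = \frac{8 k_B}{\omega^2 F_0^2}\int T(\vec r)\, w_{\mathrm{diss}}(\vec r)\, dV
```

When T(r) is uniform, the integral reduces to T W_diss and the equilibrium formula is recovered, which is the consistency check the abstract's statement implies.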
Illuminating the Mathematics of Lamp Shades
ERIC Educational Resources Information Center
Matthews, Michael E.; Gross, Greg
2008-01-01
The problem of creating lamp shades to specific design parameters allows rich and interesting explorations in the mathematics of circles and triangles. This interactive project helps students build their spatial reasoning and is especially appropriate during a unit on either the Pythagorean theorem or similar triangles. (Contains 7 figures and 1…
Maxwell Equations and the Redundant Gauge Degree of Freedom
ERIC Educational Resources Information Center
Wong, Chun Wa
2009-01-01
On transformation to the Fourier space (k, ω), the partial differential Maxwell equations simplify to algebraic equations, and the Helmholtz theorem of vector calculus reduces to vector algebraic projections. Maxwell equations and their solutions can then be separated readily into longitudinal and transverse components relative to the…
Searching Algorithm Using Bayesian Updates
ERIC Educational Resources Information Center
Caudle, Kyle
2010-01-01
In late October 1967, the USS Scorpion was lost at sea, somewhere between the Azores and Norfolk Virginia. Dr. Craven of the U.S. Navy's Special Projects Division is credited with using Bayesian Search Theory to locate the submarine. Bayesian Search Theory is a straightforward and interesting application of Bayes' theorem which involves searching…
On a Game of Large-Scale Projects Competition
NASA Astrophysics Data System (ADS)
Nikonov, Oleg I.; Medvedeva, Marina A.
2009-09-01
The paper is devoted to game-theoretical control problems motivated by economic decision-making situations arising in the realization of large-scale projects, such as designing and putting into operation new gas or oil pipelines. A non-cooperative two-player game is considered with payoff functions of a special type, for which standard existence theorems and algorithms for searching for Nash equilibrium solutions are not applicable. The paper is based on and develops the results obtained in [1]-[5].
Cosmic voids and void lensing in the Dark Energy Survey science verification data
Sánchez, C.; Clampitt, J.; Kovacs, A.; ...
2016-10-26
Galaxies and their dark matter halos populate a complicated filamentary network around large, nearly empty regions known as cosmic voids. Cosmic voids are usually identified in spectroscopic galaxy surveys, where 3D information about the large-scale structure of the Universe is available. Although an increasing amount of photometric data is being produced, its potential for void studies is limited since photometric redshifts induce line-of-sight position errors of ~50 Mpc/h or more that can render many voids undetectable. In this paper we present a new void finder designed for photometric surveys, validate it using simulations, and apply it to the high-quality photo-z redMaGiC galaxy sample of the Dark Energy Survey Science Verification (DES-SV) data. The algorithm works by projecting galaxies into 2D slices and finding voids in the smoothed 2D galaxy density field of the slice. Fixing the line-of-sight size of the slices to be at least twice the photo-z scatter, the number of voids found in these projected slices of simulated spectroscopic and photometric galaxy catalogs is within 20% for all transverse void sizes, and indistinguishable for the largest voids of radius ~70 Mpc/h and larger. The positions, radii, and projected galaxy profiles of photometric voids also accurately match the spectroscopic void sample. Applying the algorithm to the DES-SV data in the redshift range 0.2 < z < 0.8, we identify 87 voids with comoving radii spanning the range 18-120 Mpc/h, and carry out a stacked weak lensing measurement. With a significance of 4.4σ, the lensing measurement confirms the voids are truly underdense in the matter field and hence not a product of Poisson noise, tracer density effects or systematics in the data. It also demonstrates, for the first time in real data, the viability of void lensing studies in photometric surveys.
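The core of the 2D-slice void-finding step (bin galaxies in a slice, smooth the density contrast, flag underdense regions) can be sketched in a few lines. This is a toy version: the grid size, smoothing scale, and density threshold below are illustrative choices of mine, not the DES pipeline's parameters, and a real finder would grow voids around density minima rather than just thresholding pixels.

```python
import numpy as np

def find_voids_2d(x, y, box, n_bins=64, smooth_sigma=2.0, delta_max=-0.3):
    """Toy 2-D slice void finder: bin galaxies, smooth the density
    contrast, and report pixels below a density-contrast threshold."""
    H, _, _ = np.histogram2d(x, y, bins=n_bins, range=[[0, box], [0, box]])
    delta = H / H.mean() - 1.0                       # density contrast
    # separable Gaussian smoothing (numpy-only, kernel in pixel units)
    r = int(3 * smooth_sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (t / smooth_sigma) ** 2)
    k /= k.sum()
    sm = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, delta)
    sm = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, sm)
    return sm < delta_max                            # candidate void pixels
```

Run on a uniform mock catalog with an empty disc carved out, the mask lights up inside the disc and stays dark in the surrounding field.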
Fabrication of corner cube array retro-reflective structure with DLP-based 3D printing technology
NASA Astrophysics Data System (ADS)
Riahi, Mohammadreza
2016-06-01
In this article, the fabrication of a corner cube array retro-reflective structure is presented using DLP-based 3D printing technology. In this additive manufacturing technology, a pattern of a corner cube array is designed in a computer and sliced with specific software. The image of each slice is then projected from the bottom side of a reservoir containing UV-curable resin, utilizing a DLP video projector. The projected area is cured and attached to a base plate. This process is repeated until the entire part is made. The best orientation of the printing process and the effect of layer thickness on the surface finish of the cubes have been investigated. Thermal reflow surface finishing and replication by soft molding are also presented in this article.
A novel slice preparation to study medullary oromotor and autonomic circuits in vitro
Nasse, Jason S.
2014-01-01
Background: The medulla is capable of controlling and modulating ingestive behavior and gastrointestinal function. These two functions, which are critical to maintaining homeostasis, are governed by an interconnected group of nuclei dispersed throughout the medulla. As such, in vitro experiments to study the neurophysiologic details of these connections have been limited by spatial constraints of conventional slice preparations. New method: This study demonstrates a novel method of sectioning the medulla so that sensory, integrative, and motor nuclei that innervate the gastrointestinal tract and the oral cavity remain intact. Results: Immunohistochemical staining against choline-acetyl-transferase and dopamine-β-hydroxylase demonstrated that within a 450 μm block of tissue we are able to capture sensory, integrative and motor nuclei that are critical to oromotor and gastrointestinal function. Within-slice tracing shows that axonal projections from the NST to the reticular formation and from the reticular formation to the hypoglossal motor nucleus (mXII) persist. Live-cell calcium imaging of the slice demonstrates that stimulation of either the rostral or caudal NST activates neurons throughout the NST, as well as the reticular formation and mXII. Comparison with existing methods: This new method of sectioning captures a majority of the nuclei that are active when ingesting a meal. Traditional planes of section, i.e. coronal, horizontal or sagittal, contain only a limited portion of the substrate. Conclusions: Our results demonstrate that both anatomical and physiologic connections of oral and visceral sensory nuclei that project to integrative and motor nuclei remain intact with this new plane of section. PMID:25196216
A novel slice preparation to study medullary oromotor and autonomic circuits in vitro.
Nasse, Jason S
2014-11-30
The medulla is capable of controlling and modulating ingestive behavior and gastrointestinal function. These two functions, which are critical to maintaining homeostasis, are governed by an interconnected group of nuclei dispersed throughout the medulla. As such, in vitro experiments to study the neurophysiologic details of these connections have been limited by spatial constraints of conventional slice preparations. This study demonstrates a novel method of sectioning the medulla so that sensory, integrative, and motor nuclei that innervate the gastrointestinal tract and the oral cavity remain intact. Immunohistochemical staining against choline-acetyl-transferase and dopamine-β-hydroxylase demonstrated that within a 450 μm block of tissue we are able to capture sensory, integrative and motor nuclei that are critical to oromotor and gastrointestinal function. Within-slice tracing shows that axonal projections from the NST to the reticular formation and from the reticular formation to the hypoglossal motor nucleus (mXII) persist. Live-cell calcium imaging of the slice demonstrates that stimulation of either the rostral or caudal NST activates neurons throughout the NST, as well as the reticular formation and mXII. This new method of sectioning captures a majority of the nuclei that are active when ingesting a meal. Traditional planes of section, i.e. coronal, horizontal or sagittal, contain only a limited portion of the substrate. Our results demonstrate that both anatomical and physiologic connections of oral and visceral sensory nuclei that project to integrative and motor nuclei remain intact with this new plane of section. Published by Elsevier B.V.
Banking for the future: an Australian experience in brain banking.
Sarris, M; Garrick, T M; Sheedy, D; Harper, C G
2002-06-01
The New South Wales (NSW) Tissue Resource Centre (TRC) has been set up to provide Australian and international researchers with fixed and frozen brain tissue from cases that are well characterised, both clinically and pathologically, for projects related to neuropsychiatric and alcohol-related disorders. A daily review of the Department of Forensic Medicine provides initial information regarding a potential collection. If the case adheres to the strict inclusion criteria, the pathologist performing the postmortem examination is approached regarding retention of the brain tissue. The next of kin of the deceased is then contacted requesting permission to retain the brain for medical research. Cases are also obtained through donor programmes, where donors are assessed and consent to donate their brain during life. Once the brain is removed at autopsy, it is photographed and weighed, its volume is determined, and the brainstem and cerebellum are removed. The two hemispheres are divided: one hemisphere is fresh frozen and one fixed (randomised). Prior to freezing, the hemisphere is sliced into 1-cm coronal slices and a set of critical-area blocks is taken. All frozen tissues are kept bagged at -80 degrees C. The other hemisphere is fixed in 15% buffered formalin for 2 weeks, embedded in agar and sliced at 3-mm intervals in the coronal plane. Tissue blocks from these slices are used for neuropathological analysis to exclude any other pathology. The TRC currently has 230 cases of both fixed and frozen material that has proven useful in a range of techniques in many research projects. These techniques include quantitative analyses of brain regions using neuropathological, neurochemical, neuropharmacological and gene expression assays.
Filtrations on Springer fiber cohomology and Kostka polynomials
NASA Astrophysics Data System (ADS)
Bellamy, Gwyn; Schedler, Travis
2018-03-01
We prove a conjecture which expresses the bigraded Poisson-de Rham homology of the nilpotent cone of a semisimple Lie algebra in terms of the generalized (one-variable) Kostka polynomials, via a formula suggested by Lusztig. This allows us to construct a canonical family of filtrations on the flag variety cohomology, and hence on irreducible representations of the Weyl group, whose Hilbert series are given by the generalized Kostka polynomials. We deduce consequences for the cohomology of all Springer fibers. In particular, this computes the grading on the zeroth Poisson homology of all classical finite W-algebras, as well as the filtration on the zeroth Hochschild homology of all quantum finite W-algebras, and we generalize to all homology degrees. As a consequence, we deduce a conjecture of Proudfoot on symplectic duality, relating in type A the Poisson homology of Slodowy slices to the intersection cohomology of nilpotent orbit closures. In the last section, we give an analogue of our main theorem in the setting of mirabolic D-modules.
Principal Component Analysis: Resources for an Essential Application of Linear Algebra
ERIC Educational Resources Information Center
Pankavich, Stephen; Swanson, Rebecca
2015-01-01
Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
A Geometric Puzzle That Leads To Fibonacci Sequences.
ERIC Educational Resources Information Center
Rulf, Benjamin
1998-01-01
Illustrates how mathematicians work and do mathematical research through the use of a puzzle. Demonstrates how general rules, then theorems develop from special cases. This approach may be used as a research project in high school classrooms or math club settings with the teacher helping to formulate questions, set goals, and avoid becoming…
Making almost commuting matrices commute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hastings, Matthew B
Suppose two Hermitian matrices A, B almost commute (‖[A, B]‖ ≤ δ). Are they close to a commuting pair of Hermitian matrices, A′, B′, with ‖A − A′‖, ‖B − B′‖ ≤ ε? A theorem of H. Lin shows that this is uniformly true, in that for every ε > 0 there exists a δ > 0, independent of the size N of the matrices, for which almost commuting implies being close to a commuting pair. However, this theorem does not specify how δ depends on ε. We give uniform bounds relating δ and ε. The proof is constructive, giving an explicit algorithm to construct A′ and B′. We provide tighter bounds in the case of block tridiagonal and tridiagonal matrices. Within the context of quantum measurement, this implies an algorithm to construct a basis in which we can make a projective measurement that approximately measures two approximately commuting operators simultaneously. Finally, we comment briefly on the case of approximately measuring three or more approximately commuting operators using POVMs (positive operator-valued measures) instead of projective measurements.
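A naive construction makes the question concrete, though it is emphatically not Lin's theorem or the paper's algorithm: diagonalize A and keep only the diagonal of B in A's eigenbasis. The resulting pair commutes exactly, and when the perturbation away from a commuting pair is small the correction to B is small too; the hard content of the theorem is that a uniform, dimension-independent bound exists, which this sketch does not provide.

```python
import numpy as np

def naive_commuting_pair(A, B):
    """Given Hermitian A, B, return (A, B') with [A, B'] = 0 exactly,
    where B' keeps only the diagonal of B in A's eigenbasis.

    Illustration only: assumes A has a well-separated (nondegenerate)
    spectrum, and the error ||B - B'|| is NOT uniformly controlled by
    ||[A, B]|| as in the paper's constructive bounds.
    """
    w, U = np.linalg.eigh(A)                   # A = U diag(w) U^*
    B_in_basis = U.conj().T @ B @ U
    B_prime = U @ np.diag(np.diag(B_in_basis).real) @ U.conj().T
    return A, B_prime
```

Applied to a commuting pair plus a tiny Hermitian perturbation, the output commutes to machine precision while staying close to the input B.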
NASA Astrophysics Data System (ADS)
Zhao, Yumin
1997-07-01
By the techniques of the Wick theorem for coupled clusters, the no-energy-weighted electromagnetic sum-rule calculations are presented in the sdg neutron-proton interacting boson model, the nuclear pair shell model and the fermion-dynamical symmetry model. The project was supported by the Development Project Foundation of China, the National Natural Science Foundation of China, the Doctoral Education Fund of the National Education Committee, and the Fundamental Research Fund of Southeast University.
Dose fractionation theorem in 3-D reconstruction (tomography)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glaeser, R.M.
It is commonly assumed that the large number of projections required for single-axis tomography precludes its application to most beam-labile specimens. However, Hegerl and Hoppe have pointed out that the total dose required to achieve statistical significance for each voxel of a computed 3-D reconstruction is the same as that required to obtain a single 2-D image of that isolated voxel, at the same level of statistical significance. Thus a statistically significant 3-D image can be computed from statistically insignificant projections, as long as the total dose distributed among these projections is high enough that it would have resulted in a statistically significant projection, if applied to only one image. We have tested this critical theorem by simulating the tomographic reconstruction of a realistic 3-D model created from an electron micrograph. The simulations verify the basic conclusions of the theorem under conditions of high absorption, signal-dependent noise, varying specimen contrast and missing angular range. Furthermore, the simulations demonstrate that individual projections in the series of fractionated-dose images can be aligned by cross-correlation, because they contain significant information derived from the summation of features from different depths in the structure. This latter information is generally not useful for structural interpretation prior to 3-D reconstruction, owing to the complexity of most specimens investigated by single-axis tomography. These results, in combination with dose estimates for imaging single voxels and measurements of radiation damage in the electron microscope, demonstrate that it is feasible to use single-axis tomography with soft X-ray microscopy of frozen-hydrated specimens.
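The statistical core of the dose-fractionation theorem is a property of Poisson counting statistics and can be checked numerically: splitting a total dose over N exposures and summing the noisy counts yields the same per-voxel statistics (and hence the same SNR) as a single full-dose exposure. The dose and fraction count below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Distribute a total expected dose `lam` (counts per voxel) over N
# exposures, then sum; compare with one full-dose exposure.
lam, N, trials = 50.0, 25, 200_000
single = rng.poisson(lam, trials)                     # full dose at once
frac = rng.poisson(lam / N, (trials, N)).sum(axis=1)  # dose split N ways

# A sum of N independent Poisson(lam/N) variables is Poisson(lam), so
# both images share the same mean, variance, and therefore SNR per
# voxel -- the statistical content of the Hegerl-Hoppe argument.
```

What the theorem adds beyond this identity is that the reconstruction step preserves the significance, so the fractionated projections need not be individually interpretable.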
Evolution in Stage-Structured Populations
Barfield, Michael; Holt, Robert D.; Gomulkiewicz, Richard
2016-01-01
For many organisms, stage is a better predictor of demographic rates than age. Yet no general theoretical framework exists for understanding or predicting evolution in stage-structured populations. Here, we provide a general modeling approach that can be used to predict evolution and demography of stage-structured populations. This advances our ability to understand evolution in stage-structured populations to a level previously available only for populations structured by age. We use this framework to provide the first rigorous proof that Lande’s theorem, which relates adaptive evolution to population growth, applies to stage-classified populations, assuming only normality and that evolution is slow relative to population dynamics. We extend this theorem to allow for different means or variances among stages. Our next major result is the formulation of Price’s theorem, a fundamental law of evolution, for stage-structured populations. In addition, we use data from Trillium grandiflorum to demonstrate how our models can be applied to a real-world population and thereby show their practical potential to generate accurate projections of evolutionary and population dynamics. Finally, we use our framework to compare rates of evolution in age- versus stage-structured populations, which shows how our methods can yield biological insights about evolution in stage-structured populations. PMID:21460563
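The demographic backbone of such models is the stage-classified (Lefkovitch) projection matrix, whose dominant eigenvalue gives the asymptotic growth rate and whose dominant eigenvector gives the stable stage distribution. The matrix below is invented for illustration and is not the Trillium grandiflorum data; stages are a hypothetical seedling/juvenile/adult split.

```python
import numpy as np

# Stage-structured projection matrix (columns: seedling, juvenile, adult).
A = np.array([
    [0.0, 0.0, 5.0],   # fecundity: adults produce seedlings
    [0.3, 0.4, 0.0],   # seedling survival; juveniles remaining juvenile
    [0.0, 0.3, 0.9],   # maturation to adult; adult survival
])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
lam = eigvals[i].real                       # asymptotic growth rate
stable_stage = np.abs(eigvecs[:, i].real)   # Perron vector is sign-definite
stable_stage /= stable_stage.sum()          # stable stage distribution
```

Iterating n(t+1) = A n(t) from any nonnegative start converges in direction to `stable_stage` while total numbers grow by a factor `lam` per step, which is the demographic quantity Lande's theorem couples to adaptive evolution.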
NASA Astrophysics Data System (ADS)
Luo, Shunlong; Li, Nan; Cao, Xuelian
2009-05-01
The no-broadcasting theorem, first established by Barnum [Phys. Rev. Lett. 76, 2818 (1996)], states that a set of quantum states can be broadcast if and only if it constitutes a commuting family. Quite recently, Piani [Phys. Rev. Lett. 100, 090502 (2008)] showed, by using an ingenious and sophisticated method, that the correlations in a single bipartite state can be locally broadcast if and only if the state is effectively a classical one (i.e., the correlations therein are classical). In this Brief Report, under the condition of nondegenerate spectrum, we provide an alternative and significantly simpler proof of the latter result based on the original no-broadcasting theorem and the monotonicity of the quantum relative entropy. This derivation motivates us to conjecture the equivalence between these two elegant yet formally different no-broadcasting theorems and indicates a subtle and fundamental issue concerning spectral degeneracy which also lies at the heart of the conflict between the von Neumann projection postulate and the Lüders ansatz for quantum measurements. This relation not only offers operational interpretations for commutativity and classicality but also illustrates the basic significance of noncommutativity in characterizing quantumness from the informational perspective.
The Great Emch Closure Theorem and a combinatorial proof of Poncelet's Theorem
NASA Astrophysics Data System (ADS)
Avksentyev, E. A.
2015-11-01
The relations between the classical closure theorems (Poncelet's, Steiner's, Emch's, and the zigzag theorems) and some of their generalizations are discussed. It is known that Emch's Theorem is the most general of these, while the others follow as special cases. A generalization of Emch's Theorem to pencils of circles is proved, which (by analogy with the Great Poncelet Theorem) can be called the Great Emch Theorem. It is shown that the Great Emch and Great Poncelet Theorems are equivalent and can be derived one from the other using elementary geometry, and also that both hold in the Lobachevsky plane as well. A new closure theorem is also obtained, in which the construction of closure is slightly more involved: closure occurs on a variable circle which is tangent to a fixed pair of circles. In conclusion, a combinatorial proof of Poncelet's Theorem is given, which deduces the closure principle for an arbitrary number of steps from the principle for three steps using combinatorics and number theory. Bibliography: 20 titles.
Statistical Inference and Simulation with StatKey
ERIC Educational Resources Information Center
Quinn, Anne
2016-01-01
While looking for an inexpensive technology package to help students in statistics classes, the author found StatKey, a free Web-based app. Not only is StatKey useful for students' year-end projects, but it is also valuable for helping students learn fundamental content such as the central limit theorem. Using StatKey, students can engage in…
Type Theory, Computation and Interactive Theorem Proving
2015-09-01
postdoc Cody Roux, to develop new methods of verifying real-valued inequalities automatically. They developed a prototype implementation in Python [8] (an...he has developed new heuristic, geometric methods of verifying real-valued inequalities. A Python-based implementation has performed surprisingly...express complex mathematical and computational assertions. In this project, Avigad and Harper developed type-theoretic algorithms and formalisms that
ERIC Educational Resources Information Center
Ekstrom, James
2001-01-01
Advocates using computer imaging technology to assist students in doing projects in which determining density is important. Students can study quantitative comparisons of masses, lengths, and widths using computer software. Includes figures displaying computer images of shells, yeast cultures, and the Aral Sea. (SAH)
Mapping the Universe: Slices and Bubbles.
ERIC Educational Resources Information Center
Geller, Margaret J.
1990-01-01
Map making is described in the context of extraterrestrial areas. An analogy to terrestrial map making is used to provide some background. The status of projects designed to map extraterrestrial areas is discussed, including problems unique to this science. (CW)
NASA Astrophysics Data System (ADS)
Woo, Sumin; Singh, Gyan Prakash; Oh, Jai-Ho; Lee, Kyoung-Min
2018-05-01
Seasonal changes in precipitation characteristics over India were projected using a high-resolution (40-km) atmospheric general circulation model (AGCM) for the near- (2010-2039), mid- (2040-2069), and far- (2070-2099) futures. For the model evaluation, we simulated an Atmospheric Model Intercomparison Project-type present-day climate using the AGCM with observed sea-surface temperature and sea-ice concentration. Based on this simulation, we have simulated the current climate from 1979 to 2009 and subsequently the future climate projection until 2100 using a CMCC-CM model from the Coupled Model Intercomparison Project phase 5 models under the RCP4.5 and RCP8.5 scenarios. Validation against various observed precipitation data indicates that the AGCM captured the high and low rain belts, as well as the onset and withdrawal of the monsoon, well in the present-day climate simulation. Future projections were performed for the above-mentioned time slices (near-, mid-, and far futures). The model projected an increase in summer precipitation from 7 to 18% under RCP4.5 and from 14 to 18% under RCP8.5 from the mid- to far futures. Projected summer precipitation for the different time slices depicts an increase over northwest India (NWI) and west-south peninsular India (SPI) and a reduction over northeast and north-central India. The model projected an eastward shift of the monsoon trough of around 2° longitude, and the expansion and intensification of the Mascarene High and Tibetan High seem to be associated with the projected precipitation changes. The model-projected extreme precipitation events show an increase (20-50%) in rainy days over NWI and SPI, and a significant increase of about 20-50% is noticed in heavy rain events over SPI during the far future.
A new scheme of general hybrid projective complete dislocated synchronization
NASA Astrophysics Data System (ADS)
Chu, Yan-dong; Chang, Ying-Xiang; An, Xin-lei; Yu, Jian-Ning; Zhang, Jian-Gang
2011-03-01
Based on the Lyapunov stability theorem, a new type of chaos synchronization, general hybrid projective complete dislocated synchronization (GHPCDS), is proposed under the framework of drive-response systems. The difference between GHPCDS and complete synchronization is that each state variable of the drive system synchronizes not with the corresponding state variable of the response system but with a different one, while evolving in time. GHPCDS includes complete dislocated synchronization, dislocated anti-synchronization and projective dislocated synchronization as special cases. As examples, the Lorenz chaotic system, Rössler chaotic system, hyperchaotic Chen system and hyperchaotic Lü system are discussed. Numerical simulations are given to show the effectiveness of these methods.
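A minimal numerical sketch of the dislocated projective idea, using an active controller of my own construction rather than the paper's exact scheme: the response state y is driven to M x, where M combines a permutation (the "dislocation") with a diagonal scaling (the "projective" factors). With the controller u = M f(x) − f(y) − K e, the error e = y − M x obeys de/dt = −K e and decays exponentially, which is the Lyapunov argument in its simplest form.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

P = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], float)  # dislocation (permutation)
alpha = np.diag([2.0, -1.0, 0.5])                       # projective scaling factors
M = alpha @ P
K = 5.0                                                 # controller gain

x = np.array([1.0, 1.0, 1.0])    # drive state
y = np.array([-3.0, 2.0, 7.0])   # response state
dt = 1e-3
for _ in range(20000):
    e = y - M @ x
    u = M @ lorenz(x) - lorenz(y) - K * e   # active control cancels nonlinearity
    x = x + dt * lorenz(x)                  # forward-Euler integration
    y = y + dt * (lorenz(y) + u)
err = np.linalg.norm(y - M @ x)             # should be tiny after 20 s
```

Because the controller cancels both vector fields exactly at each step, the discretized error shrinks by the factor (1 − K dt) per step regardless of the chaotic drive trajectory.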
LLNL Center of Excellence Work Items for Q9-Q10 period
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neely, J. R.
This work plan encompasses a slice of effort going on within the ASC program, and for projects utilizing COE vendor resources, describes work that will be performed by both LLNL staff and COE vendor staff collaboratively.
Brill-Noether theory for vector bundles on projective curves
NASA Astrophysics Data System (ADS)
Ballico, E.
1998-11-01
In this paper we will study the Brill-Noether theory of vector bundles on a smooth projective curve X. As usual in papers on this topic we are mainly interested in stable or at least semistable bundles. Let W^k_{r,d}(X) be the scheme of all stable vector bundles E on X with rank(E) = r, deg(E) = d and h^0(X, E) ≥ k+1. For a survey of the main known results, see the introduction of [6]. The referee has pointed out that the results in [6] were improved by V. Mercat in [14]; he proved that W^k_{r,d}(X) is non-empty for d < 2r if and only if k+1 ≤ r + (d−r)/g. If X has general moduli the more interesting existence theorem was proved in [19]. However, in this paper we are mainly interested in very special curves X, e.g. the hyperelliptic or the bielliptic curves. We work over an algebraically closed base field K. In Section 5 we will assume char(K) = 0. In Section 1 we will give some theorems of Clifford's type. In Section 2 we will construct several stable bundles with certain properties. Here the main tool is an operation (the +elementary transformation) which sends a vector bundle E on X to another vector bundle E′ with rank(E′) = rank(E) and deg(E′) = deg(E) + 1 (see Section 2 for its definition and its elementary properties). Using the +elementary transformations in Section 3 we will prove the following existence theorem, which covers the case of a 'small' number of sections.
An Iterative Procedure for Obtaining I-Projections onto the Intersection of Convex Sets.
1984-06-01
Dykstra, Department of Statistics and Actuarial Science, The University of Iowa, Iowa City, Iowa 52242. Technical Report #106, June 1984. …Theorem 2.1: …where the C_i are closed, convex sets of PD's and R ≥ 0 is a nonnegative vector such that there exists a T ∈ C where I(T|R) < ∞…
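The report concerns I-projections (minimizing the Kullback-Leibler divergence I(T|R) over an intersection of convex sets of distributions). The same correction-vector idea is easiest to see in its Euclidean analogue, shown below as an illustrative sketch of Dykstra's cyclic scheme (the sets and the example point are mine): plain alternating projections converge only to some point of the intersection, while carrying a per-set correction term makes the limit the actual nearest point.

```python
import numpy as np

def dykstra(x0, projections, n_iter=200):
    """Dykstra's cyclic projection scheme (Euclidean analogue of the
    report's iterative I-projection procedure).

    projections: list of functions, each projecting onto one closed
    convex set. The correction vectors ensure convergence to the true
    projection of x0 onto the intersection of all the sets.
    """
    x = np.asarray(x0, float).copy()
    corr = [np.zeros_like(x) for _ in projections]
    for _ in range(n_iter):
        for i, proj in enumerate(projections):
            z = x + corr[i]          # re-add this set's correction
            x_new = proj(z)
            corr[i] = z - x_new      # store what the projection removed
            x = x_new
    return x

proj_ball = lambda v: v / max(1.0, np.linalg.norm(v))  # unit Euclidean ball
proj_pos = lambda v: np.maximum(v, 0.0)                # nonnegative orthant

# Project (1.5, -0.5) onto {||x|| <= 1} intersected with {x >= 0}.
x_star = dykstra([1.5, -0.5], [proj_ball, proj_pos])
```

For this example the exact projection onto the intersection is (1, 0), which the iteration reaches to high accuracy within a few hundred cycles.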
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C; Han, M; Baek, J
Purpose: To investigate the detectability of a small target for different slice directions of a volumetric cone beam CT image and its impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. In this work, we compared the detectability of the small target for four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON1), Hanning-weighted ramp filter with Fourier interpolation (RECON2), ramp filter with linear interpolation (RECON3), and ramp filter with Fourier interpolation (RECON4). For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and the detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: The detection-SNR of coronal images varies for the different backprojection methods, while axial images have similar detection-SNRs. The detection-SNR² ratios of coronal to axial images in RECON1 and RECON2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small-target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON1 and RECON2. Conclusion: In this work, we investigated the slice-direction-dependent detectability of a volumetric cone beam CT image. RECON1 and RECON2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve detectability of a small target in a volumetric cone beam CT image.
This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201-14-1002) supervised by the NIPA (National IT Industry Promotion Agency). The authors declare that they have no conflict of interest in relation to the work in this abstract.
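The CHO detection-SNR computation described in the Methods can be sketched on synthetic data; the Gaussian channels and noise ensembles below are simple stand-ins for the D-DOG channels and reconstructed CBCT slices of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
side, n = 16, 200
idx = (side // 2) * side + side // 2                  # central pixel index
absent = rng.normal(size=(n, side * side))            # signal-absent patches
signal = np.zeros(side * side)
signal[idx] = 3.0                                     # tiny central "target"
present = rng.normal(size=(n, side * side)) + signal  # signal-present patches

# A few radially symmetric Gaussian channels (stand-in for D-DOG channels).
x = np.arange(side) - (side - 1) / 2
r2 = x[:, None] ** 2 + x[None, :] ** 2
channels = np.stack(
    [np.exp(-r2 / (2 * s ** 2)).ravel() for s in (1, 2, 4, 8)], axis=1)

va, vp = absent @ channels, present @ channels        # channel outputs
dv = vp.mean(axis=0) - va.mean(axis=0)                # mean channel difference
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))               # pooled channel covariance
snr = np.sqrt(dv @ np.linalg.solve(S, dv))            # CHO detection-SNR
```

The Hotelling template is applied in the low-dimensional channel space, which is what makes the covariance inversion tractable.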
Quantum fluctuation theorems and power measurements
NASA Astrophysics Data System (ADS)
Prasanna Venkatesh, B.; Watanabe, Gentaro; Talkner, Peter
2015-07-01
Work in the paradigm of the quantum fluctuation theorems of Crooks and Jarzynski is determined by projective measurements of energy at the beginning and end of the force protocol. In analogy to classical systems, we consider an alternative definition of work, obtained by integrating the results of repeated measurements of the instantaneous power during the force protocol. We observe that such a definition of work, in spite of taking account of the process dependence, has possible values and statistics different from the work determined by the conventional two-energy-measurement approach (TEMA). In the limit of many projective measurements of power, the system's dynamics is frozen in the power measurement basis due to the quantum Zeno effect, leading to statistics that depend only trivially on the force protocol. In general the Jarzynski relation is not satisfied, except for the case when the instantaneous power operator commutes with the total Hamiltonian at all times. We also consider properties of the joint statistics of the power-based work and the TEMA work in protocols where both values are determined. This allows us to quantify their correlations. Relaxing the projective measurement condition, weak continuous measurements of power are considered within the stochastic master equation formalism. Even in this scenario the power-based work statistics is in general not able to reproduce qualitative features of the TEMA work statistics.
Illustrating the Central Limit Theorem through Microsoft Excel Simulations
ERIC Educational Resources Information Center
Moen, David H.; Powell, John E.
2005-01-01
Using Microsoft Excel, several interactive, computerized learning modules are developed to demonstrate the Central Limit Theorem. These modules are used in the classroom to enhance the comprehension of this theorem. The Central Limit Theorem is a very important theorem in statistics, and yet because it is not intuitively obvious, statistics…
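A Python stand-in for such a simulation module (the abstract's modules are Excel-based; the sample size and repetition count below are arbitrary choices): averaging n independent draws concentrates the sample mean near the population mean, with spread shrinking like sigma/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 30, 10_000
# Population: uniform(0, 1), with mean 0.5 and standard deviation sqrt(1/12).
sample_means = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)

pop_sd = (1.0 / 12.0) ** 0.5
# The empirical mean of the sample means is close to 0.5, and their empirical
# standard deviation is close to pop_sd / sqrt(n), as the CLT predicts.
print(sample_means.mean(), sample_means.std(ddof=1))
```

Plotting a histogram of `sample_means` shows the familiar bell shape even though the underlying population is flat, which is the point the classroom modules illustrate.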
Unified quantum no-go theorems and transforming of quantum pure states in a restricted set
NASA Astrophysics Data System (ADS)
Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong; Wang, Xiaojun
2017-12-01
The linear superposition principle in quantum mechanics is essential for several no-go theorems such as the no-cloning theorem, the no-deleting theorem and the no-superposing theorem. In this paper, we investigate general quantum transformations forbidden or permitted by the superposition principle for various goals. First, we prove a no-encoding theorem that forbids linearly superposing an unknown pure state with a fixed pure state in a Hilbert space of finite dimension. The new theorem is further extended to the case where multiple copies of an unknown state serve as input. These generalized results of the no-encoding theorem include the no-cloning theorem, the no-deleting theorem and the no-superposing theorem as special cases. Second, we provide a unified scheme for presenting perfect and imperfect quantum tasks (cloning and deleting) in a one-shot manner. This scheme may lead to fruitful results that are completely characterized by the linear independence of the representative vectors of the input pure states. Upper bounds on the efficiency are also proved. Third, we generalize a recent scheme for superposing unknown states with a fixed overlap to new schemes in which multiple copies of an unknown state serve as input.
Spotting L3 slice in CT scans using deep convolutional network and transfer learning.
Belharbi, Soufiane; Chatelain, Clément; Hérault, Romain; Adam, Sébastien; Thureau, Sébastien; Chastan, Mathieu; Modzelewski, Romain
2017-08-01
In this article, we present a completely automated system for spotting a particular slice in a 3D Computed Tomography exam (CT scan). Our approach does not require any assumptions about which part of the patient's body is covered by the scan. It relies on an original machine learning regression approach. Our models are learned using transfer learning, exploiting deep architectures pre-trained on the ImageNet database, and therefore require very little annotation for training. The whole pipeline consists of three steps: i) conversion of the CT scan into a Maximum Intensity Projection (MIP) image, ii) prediction by a Convolutional Neural Network (CNN) applied in a sliding-window fashion over the MIP image, and iii) robust analysis of the prediction sequence to predict the height of the desired slice within the whole CT scan. Our approach is applied to the detection of the third lumbar vertebra (L3) slice, which has been found to be representative of whole-body composition. Our system is evaluated on a database collected in our clinical center, containing 642 CT scans from different patients. We obtained an average localization error of 1.91±2.69 slices (less than 5 mm) in an average time of less than 2.5 s per CT scan, allowing integration of the proposed system into daily clinical routines. Copyright © 2017 Elsevier Ltd. All rights reserved.
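Step i) of the pipeline, MIP conversion, amounts to a maximum-projection along one axis of the CT volume; a minimal sketch (the array shapes here are illustrative assumptions, not the paper's data):

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Maximum Intensity Projection: keep the brightest voxel along one axis.

    For a volume shaped (slices, rows, cols), axis=1 yields a frontal MIP
    with one image row per CT slice, suitable as input for a sliding-window
    predictor over slice heights."""
    return volume.max(axis=axis)

vol = np.zeros((50, 64, 64))
vol[30, 40, 10] = 1000.0      # one bright voxel (e.g. dense bone)
img = mip(vol)                # shape (50, 64); row 30 carries the voxel
```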
Formally Generating Adaptive Security Protocols
2013-03-01
User Interfaces for Theorem Provers, 2012. [9] Xiaoming Liu, Christoph Kreitz, Robbert van Renesse, Jason J. Hickey, Mark Hayden, Kenneth Birman, and...Constable, Mark Hayden, Jason Hickey, Christoph Kreitz, Robbert van Renesse, Ohad Rodeh, and Werner Vogels. The Horus and Ensemble projects: Accomplishments and limitations. In DARPA Information Survivability Conference and Exposition (DISCEX 2000), pages 149–161, Hilton Head, SC, 2000. IEEE
Inverse solutions for electrical impedance tomography based on conjugate gradients methods
NASA Astrophysics Data System (ADS)
Wang, M.
2002-01-01
A multistep inverse solution for the two-dimensional electric field distribution is developed. It addresses both the nonlinear dependence of the field distribution on its boundary condition and the divergence caused by errors from the ill-conditioned sensitivity matrix and by noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method, in which the change in mutual impedance is derived from the sensitivity theorem, together with a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
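The core mechanism, conjugate gradients with a deliberately small iteration count acting as regularization, can be sketched on a synthetic ill-conditioned system (the matrix below is a toy stand-in, not an EIT sensitivity matrix):

```python
import numpy as np

def cg(A, b, n_iter):
    """Plain conjugate gradients on A x = b (A symmetric positive definite).
    Stopping after a few iterations suppresses the noise-amplifying modes
    associated with the smallest singular values of an ill-conditioned A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy ill-conditioned "sensitivity" system, solved via normal equations.
rng = np.random.default_rng(1)
S = rng.normal(size=(40, 20)) @ np.diag(np.logspace(0, -6, 20))
x_true = rng.normal(size=20)
b = S @ x_true
A, rhs = S.T @ S, S.T @ b
x5 = cg(A, rhs, n_iter=5)     # early stopping as implicit regularization
```

Each CG step strictly decreases the quadratic energy 0.5·xᵀAx − rhsᵀx, which is why truncating the iteration gives a stable, partially regularized solution.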
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Jin; Yi Byongyong; Lasio, Giovanni
Kilovoltage x-ray projection images (kV images for brevity) are increasingly available in image guided radiotherapy (IGRT) for patient positioning. These images are two-dimensional (2D) projections of a three-dimensional (3D) object along the x-ray beam direction. Projecting a 3D object onto a plane may lead to ambiguities in the identification of anatomical structures and to poor contrast in kV images. Therefore, the use of kV images in IGRT is mainly limited to bony landmark alignments. This work proposes a novel subtraction technique that isolates a slice of interest (SOI) from a kV image with the assistance of a priori information from a previous CT scan. The method separates structural information within a preselected SOI by suppressing contributions to the unprocessed projection from out-of-SOI-plane structures. Up to a five-fold increase in the contrast-to-noise ratios (CNRs) was observed in selected regions of the isolated SOI, when compared to the original unprocessed kV image. The tomographic image via background subtraction (TIBS) technique aims to provide a quick snapshot of the slice of interest with greatly enhanced image contrast over conventional kV x-ray projections for fast and accurate image guidance of radiation therapy. With further refinements, TIBS could, in principle, provide real-time tumor localization using gantry-mounted x-ray imaging systems without the need for implanted markers.
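The contrast-to-noise ratio used above to quantify the SOI improvement can be sketched generically; the image, region masks, and contrast value below are illustrative, not the paper's data:

```python
import numpy as np

def cnr(img, roi_mask, bg_mask):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| / std(background)."""
    return abs(img[roi_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

rng = np.random.default_rng(3)
img = rng.normal(0.0, 1.0, size=(64, 64))      # background noise, std ~1
img[28:36, 28:36] += 5.0                       # a contrasting 8x8 structure

roi = np.zeros_like(img, dtype=bool); roi[28:36, 28:36] = True
bg = np.zeros_like(img, dtype=bool);  bg[:16, :16] = True
value = cnr(img, roi, bg)                      # roughly 5 for this toy image
```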
NASA Technical Reports Server (NTRS)
Thompson, Daniel
2004-01-01
Coming into the Combustion Branch of the Turbomachinery and Propulsion Systems Division, there was not any set project planned out for me to work on. This was understandable, considering I am only at my sophomore year in college. Also, my mentor was a division chief and it was expected that I would be passed down the line. It took about a week for me to be placed with somebody who could use me. My first project was to write a macro for TecPlot. Commonly, a person would have a 3D contour volume modeling something such as a combustion engine. This 3D volume needed to have slices extracted from it and made into 2D scientific plots with all of the appropriate axes and titles. This was very tedious to do by hand. My macro needed to automate the process. There was some education I needed before I could start, however. First, TecPlot ran on Unix and Linux, like a growing majority of scientific applications. I knew a little about Linux, but I would need to know more to use the software at hand. I took two classes at the Learning Center on Unix and am now comfortable with Linux and Unix. I already had taken Computer Science I and II, and had undergone the transformation from Computer Programmer to Procedural Epistemologist. I knew how to design efficient algorithms; I just needed to learn the macro language. After a little less than a week, I had learned the basics of the language. Like most languages, the best way to learn more of it was by using it. It was decided that it was best that I do the macro in layers, starting simple and adding features as I went. The macro started out slicing with respect to only one axis, and did not make 2D plots out of the slices. Instead, it lined them up inside the solid. Next, I allowed for more than one axis and placed each slice in a separate frame. After this, I added code that transformed each individual slice-frame into a scientific plot. I also made frames for composite volumes, which showed all of the slices in the same XYZ space.
I then designed an additional companion macro that exported each frame into its own image file. I then distributed the macros to a test group and am awaiting feedback. In the meantime, I am researching the possible applications of distributed computing on the National Combustor Code. Many of our Linux boxes are idle for most of the day. The department thinks that it would be wonderful if we could get all of these idle processors to work on a problem under the NCC code. The client software would have to be easily distributed, such as in screensaver format or as a program that only runs when the computer is not in use. This project proves to be an interesting challenge.
System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging Modalities
NASA Astrophysics Data System (ADS)
Guan, Huifeng
In the past decade many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, each faces one or more specific problems that prevent it from being effectively or efficiently employed. In this dissertation, four novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). System characteristics are analyzed or optimized reconstruction methods are proposed for these imaging modalities. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be more reliably reconstructed. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency content of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being over-smoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition.
It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, thus overcoming the disadvantage of the conventional approach, which is extremely sensitive to noise corruption. In the final part, we describe the modified filtered backprojection and iterative image reconstruction algorithms specifically developed for TBCT. Special parallelization strategies are designed to facilitate the use of GPU computing, demonstrating the capability of producing high-quality volumetric reconstructions at very high computational speed. For all the investigations mentioned above, both simulation and experimental studies have been conducted to demonstrate the feasibility and effectiveness of the proposed methodologies.
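The Fourier slice theorem invoked in the first part can be verified numerically in a few lines: the 1-D FFT of a parallel projection equals the central slice of the object's 2-D FFT. A small sketch for the axis-aligned case:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=(64, 64))         # an arbitrary 2-D "object"

proj = f.sum(axis=0)                  # parallel projection along the row axis
central_slice = np.fft.fft2(f)[0, :]  # the k_row = 0 slice of the 2-D spectrum

# Projection-slice theorem: the two agree to numerical precision.
match = np.allclose(np.fft.fft(proj), central_slice)
```

For an oblique projection angle the same identity holds along the correspondingly rotated line through the origin of the 2-D spectrum, which is what tomographic Fourier reconstruction exploits.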
WE-G-18A-03: Cone Artifacts Correction in Iterative Cone Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Folkerts, M; Jiang, S
Purpose: For iterative reconstruction (IR) in cone-beam CT (CBCT) imaging, data truncation along the superior-inferior (SI) direction causes severe cone artifacts in the reconstructed CBCT volume images. Not only does it reduce the effective SI coverage of the reconstructed volume, it also hinders the convergence of the IR algorithm. This is particularly a problem for regularization-based IR, where smoothing-type regularization operations tend to propagate the artifacts over a large area. Our purpose is to develop a practical cone artifacts correction solution. Methods: We found that it is the missing data residing in the truncated cone area that leads to inconsistency between the calculated forward projections and the measured projections. We overcome this problem by using FDK-type reconstruction to estimate the missing data and by designing weighting factors to compensate for the inconsistency caused by the missing data. We validate the proposed methods in our multi-GPU low-dose CBCT reconstruction system on multiple patients' datasets. Results: Compared to FDK reconstruction with full datasets, while IR is able to reconstruct CBCT images using a subset of the projection data, the severe cone artifacts degrade overall image quality. For a head-neck case under full-fan mode, 13 out of 80 slices are contaminated. It is even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices are affected, leading to inferior soft-tissue delineation. By applying the proposed method, the cone artifacts are effectively corrected, with the mean intensity difference decreased from ∼497 HU to ∼39 HU for the contaminated slices. Conclusion: A practical and effective solution for cone artifacts correction is proposed and validated in a CBCT IR algorithm. This study is supported in part by NIH (1R01CA154747-01)
NASA Astrophysics Data System (ADS)
Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su
2010-02-01
This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI image volumes. The presented model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of an MRI volume is located, slices potentially containing tumor are identified according to their symmetry, and an initial boundary of the tumor is determined, by watershed and morphological algorithms, in the slice where the tumor appears largest. Second, the level set method is applied to the initial boundary to drive the curve to evolve and stop at the appropriate tumor boundary. Last, the tumor boundary is projected slice by slice onto adjacent slices as initial boundaries through the volume for the whole tumor. The experimental results are compared with expert hand tracking and show relatively good agreement.
IMPLEMENTING A NOVEL CYCLIC CO2 FLOOD IN PALEOZOIC REEFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
James R. Wood; W. Quinlan; A. Wylie
2003-07-01
Recycled CO2 will be used in this demonstration project to produce bypassed oil from the Silurian Charlton 6 pinnacle reef (Otsego County) in the Michigan Basin. Contract negotiations by our industry partner to gain access to this CO2, which would otherwise be vented to the atmosphere, are near completion. A new method of subsurface characterization, log curve amplitude slicing, is being used to map facies distributions and reservoir properties in two reefs, the Belle River Mills and Chester 18 Fields. The Belle River Mills and Chester 18 fields are being used as type fields because they have excellent log-curve and core data coverage. Amplitude slicing of the normalized gamma ray curves is showing trends that may indicate significant heterogeneity and compartmentalization in these reservoirs. Digital and hard copy data continue to be compiled for the Niagaran reefs in the Michigan Basin. Technology transfer took place through technical presentations on the log curve amplitude slicing technique and a booth at the Midwest PTTC meeting.
A Decomposition Theorem for Finite Automata.
ERIC Educational Resources Information Center
Santa Coloma, Teresa L.; Tucci, Ralph P.
1990-01-01
Described is automata theory, which is a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)
Development of a slicer integral field unit for the existing optical imaging spectrograph FOCAS
NASA Astrophysics Data System (ADS)
Ozaki, Shinobu; Tanaka, Yoko; Hattori, Takashi; Mitsui, Kenji; Fukusima, Mitsuhiro; Okada, Norio; Obuchi, Yoshiyuki; Miyazaki, Satoshi; Yamashita, Takuya
2012-09-01
We are developing an integral field unit (IFU) with an image slicer for the existing optical imaging spectrograph, Faint Object Camera And Spectrograph (FOCAS), on the Subaru Telescope. The basic optical design has already been finished. The slice width is 0.4 arcsec, the slice number is 24, and the field of view is 13.5 × 9.6 arcsec. Sky spectra separated by about 3 arcmin from the object field can be obtained simultaneously, which allows precise background subtraction. The IFU will be installed as a mask plate and set by the mask exchanger mechanism of FOCAS. Slice mirrors, pupil mirrors and slit mirrors are all made of glass, and their mirror surfaces are fabricated by polishing. A multilayer dielectric reflective coating with high reflectivity (> 98%) is applied to each mirror surface. The slicer IFU consists of many mirrors that need to be aligned with high accuracy; for this alignment we will fabricate high-accuracy jigs and mirror holders. Some pupil mirrors need off-axis ellipsoidal surfaces to reduce aberration. We are conducting prototyping work on slice mirrors, an off-axis ellipsoidal surface, alignment jigs and a mirror support. In this paper, we introduce our project and present this prototyping work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fishman, S., E-mail: fishman@physics.technion.ac.il; Soffer, A., E-mail: soffer@math.rutgers.edu
2016-07-15
We employ the recently developed multi-time scale averaging method to study the large time behavior of slowly changing (in time) Hamiltonians. We treat some known cases in a new way, such as the Zener problem, and we give another proof of the adiabatic theorem in the gapless case. We prove a new uniform ergodic theorem for slowly changing unitary operators. This theorem is then used to derive the adiabatic theorem, do the scattering theory for such Hamiltonians, and prove some classical propagation estimates and asymptotic completeness.
Phase Tomography Reconstructed by 3D TIE in Hard X-ray Microscope
NASA Astrophysics Data System (ADS)
Yin, Gung-Chian; Chen, Fu-Rong; Pyun, Ahram; Je, Jung Ho; Hwu, Yeukuang; Liang, Keng S.
2007-01-01
X-ray phase tomography and phase imaging are promising ways of investigating low-Z materials. A polymer blend (PE/PS) sample was used to test the 3D phase retrieval method in the parallel-beam illuminated microscope. Because the polymer sample is thick, the phase retardation is strongly mixed and the image cannot be resolved when the 2D transport-of-intensity equation (TIE) is applied. In this study, we provide a different approach for solving the phase in three dimensions for a thick sample. Our method integrates the 3D TIE with the Fourier slice theorem to solve for the phase of a thick sample. In our experiment, eight defocus-series image data sets were recorded, covering the angular range of 0 to 180 degrees. Only three sets of image cubes were used in the 3D TIE equation to solve the phase tomography. The phase contrast of the polymer blend in 3D is clearly enhanced, and the two components of the polymer blend can be distinguished in the phase tomography.
The Non-Signalling theorem in generalizations of Bell's theorem
NASA Astrophysics Data System (ADS)
Walleczek, J.; Grössing, G.
2014-04-01
Does "epistemic non-signalling" ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theories to obey the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, of nonlocal hidden-variables theories (NHVTs) also, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible, depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself. 
We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the basis of an ontic, foundational interpretation of the non-signalling theorem. We here argue that the non-signalling theorem must instead be viewed as an epistemic, operational theorem i.e. one that refers exclusively to what epistemic agents can, or rather cannot, do. That is, we emphasize that the non-signalling theorem is a theorem about the operational inability of epistemic agents to signal information. In other words, as a proper principle, the non-signalling theorem may only be employed as an epistemic, phenomenological, or operational principle. Critically, our argument emphasizes that the non-signalling principle must not be used as an ontic principle about physical reality as such, i.e. as a theorem about the nature of physical reality independently of epistemic agents e.g. human observers. One major reason in favor of our conclusion is that any definition of signalling or of non-signalling invariably requires a reference to epistemic agents, and what these agents can actually measure and report. Otherwise, the non-signalling theorem would equal a general "no-influence" theorem. In conclusion, under the assumption that the non-signalling theorem is epistemic (i.e. "epistemic non-signalling"), the search for deterministic approaches to quantum mechanics, including NHVTs and an emergent quantum mechanics, continues to be a viable research program towards disclosing the foundations of physical reality at its smallest dimensions.
Point-vortex stability under the influence of an external periodic flow
NASA Astrophysics Data System (ADS)
Ortega, Rafael; Ortega, Víctor; Torres, Pedro J.
2018-05-01
We provide sufficient conditions for the stability of the particle advection around a fixed vortex in a two-dimensional ideal fluid under the action of a periodic background flow. The proof relies on the identification of closed invariant curves around the origin by means of Moser’s invariant curve theorem. Partially supported by Spanish MINECO and ERDF project MTM2014-52232-P.
Consistency of the adiabatic theorem.
Amin, M H S
2009-06-05
The adiabatic theorem provides the basis for the adiabatic model of quantum computation. Recently the conditions required for the adiabatic theorem to hold have become a subject of some controversy. Here we show that the reported violations of the adiabatic theorem all arise from resonant transitions between energy levels. In the absence of fast driven oscillations the traditional adiabatic theorem holds. Implications for adiabatic quantum computation are discussed.
Optimal no-go theorem on hidden-variable predictions of effect expectations
NASA Astrophysics Data System (ADS)
Blass, Andreas; Gurevich, Yuri
2018-03-01
No-go theorems prove that, under reasonable assumptions, classical hidden-variable theories cannot reproduce the predictions of quantum mechanics. Traditional no-go theorems proved that hidden-variable theories cannot predict correctly the values of observables. Recent expectation no-go theorems prove that hidden-variable theories cannot predict the expectations of observables. We prove the strongest expectation-focused no-go theorem to date. It is optimal in the sense that the natural weakenings of the assumptions and the natural strengthenings of the conclusion make the theorem fail. The literature on expectation no-go theorems strongly suggests that the expectation-focused approach is more general than the value-focused one. We establish that the expectation approach is not more general.
Helix Project Testbed - Towards the Self-Regenerative Incorruptible Enterprise
2011-09-14
...secure architectural skeleton. This skeleton couples a critical slice of the low-level hardware implementation with a microkernel in a way that allows information flow properties of the entire construction to be statically verified all the way...
Wenz, Holger; Maros, Máté E.; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O.; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas
2015-01-01
Objectives To prospectively intra-individually compare image quality of a 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to a sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. Methods 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. CTDIvol identical to initial 4-slice MDCT was applied. Data was reconstructed using filtered backward projection (FBP) and 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert-scale and objective image quality was assessed in white matter and nucleus caudatus with signal-to-noise ratios (SNR) being subsequently calculated. Results Subjective image quality of all spiral cCT datasets was rated significantly higher compared to the 4-slice MDCT sequential acquisitions (p<0.05). Mean SNR was significantly higher in all spiral compared to sequential cCT datasets with mean SNR improvement of 61.65% (Bonferroni-corrected p<0.0024). Subjective image quality improved with increasing IR levels. Conclusion Combination of 3rd-generation DSCT spiral cCT with an advanced model IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels. PMID:26288186
Maximum entropy PDF projection: A review
NASA Astrophysics Data System (ADS)
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
On adaptive modified projective synchronization of a supply chain management system
NASA Astrophysics Data System (ADS)
Tirandaz, Hamed
2017-12-01
In this paper, the synchronization problem of a chaotic supply chain management system is studied. A novel adaptive modified projective synchronization method is introduced to control the behaviour of the leader supply chain system by a follower chaotic system and to adjust the leader system parameters until the measurable errors of the system parameters converge to zero. The stability evaluation and convergence analysis are carried out via the Lyapunov stability theorem. The proposed synchronization and anti-synchronization techniques are studied for identical supply chain chaotic systems. Finally, some numerical simulations are presented to verify the effectiveness of the theoretical discussions.
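A toy numerical sketch of (non-adaptive) projective synchronization: a follower state is driven toward α times a chaotic Lorenz leader by a compensating controller, so the error obeys a Lyapunov-style exponential decay. This illustrates the synchronization goal only; it is not the paper's adaptive law or its supply chain model.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

alpha, k, dt, steps = 2.0, 5.0, 0.005, 4000  # scaling factor, gain, Euler step
leader = np.array([1.0, 1.0, 1.0])
follower = np.array([8.0, -3.0, 5.0])

err0 = np.linalg.norm(follower - alpha * leader)
for _ in range(steps):
    # The controller feeds forward the scaled leader dynamics and adds
    # proportional feedback, so the error e = follower - alpha*leader
    # obeys e' = -k e; V = |e|^2 is a Lyapunov function for it.
    u = alpha * lorenz(leader) + k * (alpha * leader - follower)
    leader = leader + dt * lorenz(leader)
    follower = follower + dt * u
err1 = np.linalg.norm(follower - alpha * leader)
print(err0, "->", err1)
```

Under this controller the discrete error contracts by a factor (1 − k·dt) per step, so the follower locks onto α times the leader trajectory despite the leader being chaotic.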
Using Pictures to Enhance Students' Understanding of Bayes' Theorem
ERIC Educational Resources Information Center
Trafimow, David
2011-01-01
Students often have difficulty understanding algebraic proofs of statistics theorems. However, it sometimes is possible to prove statistical theorems with pictures in which case students can gain understanding more easily. I provide examples for two versions of Bayes' theorem.
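Whatever the pedagogy, the computation itself is short. A sketch of the classic screening-test example of Bayes' theorem (the probabilities are illustrative numbers, not from the article):

```python
# Bayes' theorem:
# P(D|+) = P(+|D) P(D) / [ P(+|D) P(D) + P(+|~D) P(~D) ]
prior = 0.01         # P(disease)
sensitivity = 0.95   # P(positive | disease)
false_pos = 0.05     # P(positive | no disease)

evidence = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / evidence
print(f"P(disease | positive) = {posterior:.3f}")
```

Even with a 95%-sensitive test, the low prior keeps the posterior around 0.16, the kind of counterintuitive result that picture-based proofs aim to make visible.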
Lin, Ju; Li, Jie; Li, Xiaolei; Wang, Ning
2016-10-01
An acoustic reciprocity theorem is generalized, for a smoothly varying perturbed medium, to a hierarchy of reciprocity theorems including higher-order derivatives of acoustic fields. The standard reciprocity theorem is the first member of the hierarchy. It is shown that the conservation of higher-order interaction quantities is related closely to higher-order derivative distributions of perturbed media. Then integral reciprocity theorems are obtained by applying Gauss's divergence theorem, which give explicit integral representations connecting higher-order interactions and higher-order derivative distributions of perturbed media. Some possible applications to an inverse problem are also discussed.
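For reference, the standard reciprocity relation (the first member of the hierarchy described above) for two time-harmonic acoustic states A and B in the same medium can be written, in a common Helmholtz-equation convention ($\nabla^2 p + k^2 p = -q$), in local and integral form as:

```latex
\nabla \cdot \left( p_A \nabla p_B - p_B \nabla p_A \right) = p_B\, q_A - p_A\, q_B,
\qquad
\oint_{\partial V} \left( p_A \nabla p_B - p_B \nabla p_A \right) \cdot \hat{n}\, \mathrm{d}S
= \int_V \left( p_B\, q_A - p_A\, q_B \right) \mathrm{d}V .
```

The integral form follows from Gauss's divergence theorem, exactly the step the paper applies to its higher-order interaction quantities.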
SparseCT: interrupted-beam acquisition and sparse reconstruction for radiation dose reduction
NASA Astrophysics Data System (ADS)
Koesters, Thomas; Knoll, Florian; Sodickson, Aaron; Sodickson, Daniel K.; Otazo, Ricardo
2017-03-01
State-of-the-art low-dose CT methods reduce the x-ray tube current and use iterative reconstruction methods to denoise the resulting images. However, due to compromises between denoising and image quality, only moderate dose reductions up to 30-40% are accepted in clinical practice. An alternative approach is to reduce the number of x-ray projections and use compressed sensing to reconstruct the full-tube-current undersampled data. This idea was recognized in the early days of compressed sensing and proposals for CT dose reduction appeared soon afterwards. However, no practical means of undersampling has yet been demonstrated in the challenging environment of a rapidly rotating CT gantry. In this work, we propose a moving multislit collimator as a practical incoherent undersampling scheme for compressed sensing CT and evaluate its application for radiation dose reduction. The proposed collimator is composed of narrow slits and moves linearly along the slice dimension (z), to interrupt the incident beam in different slices for each x-ray tube angle (θ). The reduced projection dataset is then reconstructed using a sparse approach, where 3D image gradients are employed to enforce sparsity. The effects of the collimator slits on the beam profile were measured and represented as a continuous slice profile. SparseCT was tested using retrospective undersampling and compared against commercial current-reduction techniques on phantoms and in vivo studies. Initial results suggest that SparseCT may enable higher performance than current-reduction, particularly for high dose reduction factors.
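A toy sketch of the interleaved (θ, z) sampling pattern such a moving multislit collimator might produce; the slit period and the one-slice shift per projection angle are made-up parameters, not the authors' design values:

```python
import numpy as np

n_angles, n_slices, period = 360, 64, 4  # every 4th slice open per angle

# Slits shift by one slice per tube angle, so each z position is
# irradiated at a different, interleaved subset of angles.
mask = np.zeros((n_angles, n_slices), dtype=bool)
for a in range(n_angles):
    mask[a] = (np.arange(n_slices) + a) % period == 0

print("dose fraction:", mask.mean())          # 1/period of the full dose
print("angles seen by slice 0:", mask[:, 0].sum())
```

Every slice is probed at a distinct, rotating subset of tube angles, which is the kind of incoherent undersampling that compressed sensing reconstruction relies on.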
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Hands-On Whole Science. What Rots?
ERIC Educational Resources Information Center
Markle, Sandra
1991-01-01
Presents activities on the science of garbage to help elementary students learn to save the earth. A rotting experiment teaches students what happens to apple slices sealed in plastic or buried in damp soil. Other activities include reading stories on the subject and conducting classroom composting or toxic materials projects. (SM)
On the symmetry foundation of double soft theorems
NASA Astrophysics Data System (ADS)
Li, Zhi-Zhong; Lin, Hung-Hwa; Zhang, Shun-Qing
2017-12-01
Double-soft theorems, like their single-soft counterparts, arise from the underlying symmetry principles that constrain the interactions of massless particles. While single-soft theorems can be derived in a non-perturbative fashion by employing current algebras, recent attempts to extend such an approach to known double-soft theorems have met with difficulties. In this work, we trace the difficulty to two inequivalent expansion schemes, depending on whether the soft limit is taken asymmetrically or symmetrically, which we denote as type A and type B respectively. The soft behaviour for the type A scheme can simply be derived from single-soft theorems, and is thus non-perturbatively protected. For type B, the information of the four-point vertex is required to determine the corresponding soft theorems, which are therefore in general not protected. This argument readily extends to general multi-soft theorems. We also ask whether unitarity can be emergent from locality together with the two kinds of soft theorems, which has not been fully investigated before.
Chemical Equilibrium and Polynomial Equations: Beware of Roots.
ERIC Educational Resources Information Center
Smith, William R.; Missen, Ronald W.
1989-01-01
Describes two easily applied mathematical theorems, Budan's rule and Rolle's theorem, that in addition to Descartes's rule of signs and intermediate-value theorem, are useful in chemical equilibrium. Provides examples that illustrate the use of all four theorems. Discusses limitations of the polynomial equation representation of chemical…
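Descartes's rule of signs, the simplest of the four theorems named above, is easy to mechanize: the number of positive real roots of a polynomial is at most the number of sign changes in its coefficient sequence (and differs from it by an even number). A sketch:

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient list, ignoring zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

# x^2 - 3x + 2 = (x - 1)(x - 2): two sign changes, and indeed two positive roots.
print(sign_changes([1, -3, 2]))

# Substituting x -> -x bounds the negative roots: x^2 + 3x + 2 has no sign changes.
print(sign_changes([1, 3, 2]))
```

The same coefficient-scanning idea underlies Budan's rule, which counts sign changes of the derivative sequence at the endpoints of an interval.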
ERIC Educational Resources Information Center
Garcia, Stephan Ramon; Ross, William T.
2017-01-01
We hope to initiate a discussion about various methods for introducing Cauchy's Theorem. Although Cauchy's Theorem is the fundamental theorem upon which complex analysis is based, there is no "standard approach." The appropriate choice depends upon the prerequisites for the course and the level of rigor intended. Common methods include…
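Whatever proof one adopts, a numerical illustration of Cauchy's Theorem is cheap: integrating an analytic function around a closed contour gives zero, while 1/z (not analytic at the enclosed origin) gives 2πi. A sketch using a uniform parametrization of the unit circle:

```python
import numpy as np

n = 2000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = np.exp(1j * theta)            # points on the unit circle
dz = 1j * z * (2.0 * np.pi / n)   # dz = i e^{i theta} d(theta)

integral_analytic = np.sum(z**2 * dz)  # f(z) = z^2 is entire: integral ~ 0
integral_pole = np.sum(dz / z)         # f(z) = 1/z: integral ~ 2*pi*i

print(abs(integral_analytic))
print(integral_pole)
```

For periodic integrands this Riemann sum is extremely accurate, so both results match the theorem to near machine precision.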
Sherwood, John L; Amici, Mascia; Dargan, Sheila L; Culley, Georgia R; Fitzjohn, Stephen M; Jane, David E; Collingridge, Graham L; Lodge, David; Bortolotto, Zuner A
2012-09-01
Long-term potentiation (LTP) is a well-established experimental model used to investigate the synaptic basis of learning and memory. LTP at mossy fibre - CA3 synapses in the hippocampus is unusual because it is normally N-methyl-d-aspartate (NMDA) receptor-independent. Instead it seems that the trigger for mossy fibre LTP involves kainate receptors (KARs). Although it is generally accepted that pre-synaptic KARs play an essential role in frequency facilitation and LTP, their subunit composition remains a matter of significant controversy. We have reported previously that both frequency facilitation and LTP can be blocked by selective antagonism of GluK1 (formerly GluR5/Glu(K5))-containing KARs, but other groups have failed to reproduce this effect. Moreover, data from receptor knockout and mRNA expression studies argue against a major role of GluK1, supporting a more central role for GluK2 (formerly GluR6/Glu(K6)). A potential reason underlying the controversy in the pharmacological experiments may reside in differences in the preparations used. Here we show differences in pharmacological sensitivity of synaptic plasticity at mossy fibre - CA3 synapses depend critically on slice orientation. In transverse slices, LTP of fEPSPs was invariably resistant to GluK1-selective antagonists whereas in parasagittal slices LTP was consistently blocked by GluK1-selective antagonists. In addition, there were pronounced differences in the magnitude of frequency facilitation and the sensitivity to the mGlu2/3 receptor agonist DCG-IV. Using anterograde labelling of granule cells we show that slices of both orientations possess intact mossy fibres and both large and small presynaptic boutons. Transverse slices have denser fibre tracts but a smaller proportion of giant mossy fibre boutons. These results further demonstrate a considerable heterogeneity in the functional properties of the mossy fibre projection. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ge, Jun; Chan, Heang-Ping; Sahiner, Berkman; Zhang, Yiheng; Wei, Jun; Hadjiiski, Lubomir M.; Zhou, Chuan
2007-03-01
We are developing a computerized technique to reduce intra- and interplane ghosting artifacts caused by high-contrast objects such as dense microcalcifications (MCs) or metal markers on the reconstructed slices of digital tomosynthesis mammography (DTM). In this study, we designed a constrained iterative artifact reduction method based on a priori 3D information of individual MCs. We first segmented individual MCs on projection views (PVs) using an automated MC detection system. The centroid and the contrast profile of the individual MCs in the 3D breast volume were estimated from the backprojection of the segmented individual MCs on high-resolution (0.1 mm isotropic voxel size) reconstructed DTM slices. An isolated volume of interest (VOI) containing one or a few MCs is then modeled as a high-contrast object embedded in a locally homogeneous background. A shift-variant 3D impulse response matrix (IRM) of the projection-reconstruction (PR) system for the extracted VOI was calculated using the DTM geometry and the reconstruction algorithm. The PR system for this VOI is characterized by a system of linear equations. A constrained iterative method was used to solve these equations for the effective linear attenuation coefficients (eLACs) within the isolated VOI. Spatial and positivity constraints were used in this method. Finally, the intra- and interplane artifacts on the whole breast volume resulting from the MC were calculated using the corresponding impulse responses and subsequently subtracted from the original reconstructed slices. The performance of our artifact-reduction method was evaluated using a computer-simulated MC phantom, as well as phantom images and patient DTMs obtained with IRB approval. A GE prototype DTM system that acquires 21 PVs in 3° increments over a ±30° range was used for image acquisition in this study.
For the computer-simulated MC phantom, the eLACs can be estimated accurately, thus the interplane artifacts were effectively removed. For MCs in phantom and patient DTMs, our method reduced the artifacts but also created small over-corrected areas in some cases. Potential reasons for this may include: the simplified mathematical modeling of the forward projection process, and the amplified noise in the solution of the system of linear equations.
Early Vector Calculus: A Path through Multivariable Calculus
ERIC Educational Resources Information Center
Robertson, Robert L.
2013-01-01
The divergence theorem, Stokes' theorem, and Green's theorem appear near the end of calculus texts. These are important results, but many instructors struggle to reach them. We describe a pathway through a standard calculus text that allows instructors to emphasize these theorems. (Contains 2 figures.)
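A quick numerical check of Green's theorem that could accompany such a course: for F = (−y/2, x/2) the circulation around a closed curve equals the enclosed area, since the integrand of the double integral is identically 1. Around the unit circle both sides give π:

```python
import numpy as np

n = 10000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y = np.cos(t), np.sin(t)
dx = -np.sin(t) * (2.0 * np.pi / n)
dy = np.cos(t) * (2.0 * np.pi / n)

# Line integral of (-y/2) dx + (x/2) dy around the unit circle.
circulation = np.sum(-y / 2 * dx + x / 2 * dy)

# Green's theorem: this equals the area of the unit disk, pi.
print(circulation, np.pi)
```

Verifying the identity numerically first can motivate the theorem before the formal proof is attempted.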
ERIC Educational Resources Information Center
Russell, Alan R.
2004-01-01
Pick's theorem, like a lemon, can be put to use in a variety of ways. The theorem generally appears in the syllabus around the middle school level, and students have even used it to calculate the area of a state from its outline.
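Pick's theorem states A = I + B/2 − 1 for a simple lattice polygon, where I counts interior lattice points and B boundary lattice points. It can be checked directly on a small triangle, computing A by the shoelace formula, B from gcds along the edges, and I by brute-force counting:

```python
from math import gcd

# Lattice triangle with vertices (0,0), (4,0), (0,3).
verts = [(0, 0), (4, 0), (0, 3)]
edges = list(zip(verts, verts[1:] + verts[:1]))

# Shoelace formula for the area.
A = abs(sum(x0 * y1 - x1 * y0 for (x0, y0), (x1, y1) in edges)) / 2

# Boundary lattice points: gcd(|dx|, |dy|) per edge.
B = sum(gcd(abs(x1 - x0), abs(y1 - y0)) for (x0, y0), (x1, y1) in edges)

# Interior points counted directly: x > 0, y > 0, 3x + 4y < 12.
I = sum(1 for x in range(1, 4) for y in range(1, 3) if 3 * x + 4 * y < 12)

print(A, B, I, I + B / 2 - 1)  # Pick's theorem: A == I + B/2 - 1
```

The state-outline exercise mentioned above is the same computation scaled up: overlay a grid, count I and B, and read off the area.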
A characterization of linearly repetitive cut and project sets
NASA Astrophysics Data System (ADS)
Haynes, Alan; Koivusalo, Henna; Walton, James
2018-02-01
For the development of a mathematical theory which can be used to rigorously investigate physical properties of quasicrystals, it is necessary to understand regularity of patterns in special classes of aperiodic point sets in Euclidean space. In one dimension, prototypical mathematical models for quasicrystals are provided by Sturmian sequences and by point sets generated by substitution rules. Regularity properties of such sets are well understood, thanks mostly to well known results by Morse and Hedlund, and physicists have used this understanding to study one dimensional random Schrödinger operators and lattice gas models. A key fact which plays an important role in these problems is the existence of a subadditive ergodic theorem, which is guaranteed when the corresponding point set is linearly repetitive. In this paper we extend the one-dimensional model to cut and project sets, which generalize Sturmian sequences in higher dimensions, and which are frequently used in mathematical and physical literature as models for higher dimensional quasicrystals. By using a combination of algebraic, geometric, and dynamical techniques, together with input from higher dimensional Diophantine approximation, we give a complete characterization of all linearly repetitive cut and project sets with cubical windows. We also prove that these are precisely the collection of such sets which satisfy subadditive ergodic theorems. The results are explicit enough to allow us to apply them to known classical models, and to construct linearly repetitive cut and project sets in all pairs of dimensions and codimensions in which they exist. Research supported by EPSRC grants EP/L001462, EP/J00149X, EP/M023540. HK also gratefully acknowledges the support of the Osk. Huttunen foundation.
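The one-dimensional prototype mentioned above is easy to generate: the Sturmian sequence s_k = ⌊(k+1)α⌋ − ⌊kα⌋ with irrational α encodes a one-dimensional cut-and-project (Beatty-type) point set, and for α = 1/φ it yields the Fibonacci word. A sketch:

```python
from math import floor, sqrt

alpha = 2 / (1 + sqrt(5))  # 1/phi, the golden-ratio slope
n = 200
s = [floor((k + 1) * alpha) - floor(k * alpha) for k in range(n)]
word = "".join(map(str, s))
print(word[:21])

# Sturmian structure: only two letters occur, and since alpha > 1/2
# the letter 0 never appears twice in a row.
print(set(s), "00" in word)
```

Linear repetitivity of such sequences, the property characterized in the paper for higher-dimensional cut and project sets, says every factor of the word recurs within a window of length proportional to the factor's length.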
Generalized Optical Theorem Detection in Random and Complex Media
NASA Astrophysics Data System (ADS)
Tu, Jing
The problem of detecting changes of a medium or environment based on active, transmit-plus-receive wave sensor data is at the heart of many important applications including radar, surveillance, remote sensing, nondestructive testing, and cancer detection. This is a challenging problem because both the change or target and the surrounding background medium are in general unknown and can be quite complex. This Ph.D. dissertation presents a new wave physics-based approach for the detection of targets or changes in rather arbitrary backgrounds. The proposed methodology is rooted on a fundamental result of wave theory called the optical theorem, which gives real physical energy meaning to the statistics used for detection. This dissertation is composed of two main parts. The first part significantly expands the theory and understanding of the optical theorem for arbitrary probing fields and arbitrary media including nonreciprocal media, active media, as well as time-varying and nonlinear scatterers. The proposed formalism addresses both scalar and full vector electromagnetic fields. The second contribution of this dissertation is the application of the optical theorem to change detection with particular emphasis on random, complex, and active media, including single frequency probing fields and broadband probing fields. The first part of this work focuses on the generalization of the existing theoretical repertoire and interpretation of the scalar and electromagnetic optical theorem. Several fundamental generalizations of the optical theorem are developed. A new theory is developed for the optical theorem for scalar fields in nonhomogeneous media which can be bounded or unbounded. The bounded media context is essential for applications such as intrusion detection and surveillance in enclosed environments such as indoor facilities, caves, tunnels, as well as for nondestructive testing and communication systems based on wave-guiding structures. 
The developed scalar optical theorem theory applies to arbitrary lossless backgrounds and quite general probing fields including near fields which play a key role in super-resolution imaging. The derived formulation holds for arbitrary passive scatterers, which can be dissipative, as well as for the more general class of active scatterers which are composed of a (passive) scatterer component and an active, radiating (antenna) component. Furthermore, the generalization of the optical theorem to active scatterers is relevant to many applications such as surveillance of active targets including certain cloaks, invisible scatterers, and wireless communications. The latter developments have important military applications. The derived theoretical framework includes the familiar real power optical theorem describing power extinction due to both dissipation and scattering as well as a reactive optical theorem related to the reactive power changes. Meanwhile, the developed approach naturally leads to three optical theorem indicators or statistics, which can be used to detect changes or targets in unknown complex media. In addition, the optical theorem theory is generalized in the time domain so that it applies to arbitrary full vector fields, and arbitrary media including anisotropic media, nonreciprocal media, active media, as well as time-varying and nonlinear scatterers. The second component of this Ph.D. research program focuses on the application of the optical theorem to change detection. Three different forms of indicators or statistics are developed for change detection in unknown background media: a real power optical theorem detector, a reactive power optical theorem detector, and a total apparent power optical theorem detector. No prior knowledge is required of the background or the change or target. The performance of the three proposed optical theorem detectors is compared with the classical energy detector approach for change detection. 
The latter uses a mathematical or functional energy, while the optical theorem detectors are based on real physical energy. For reference, the optical theorem detectors are also compared with the matched filter approach, which (unlike the optical theorem detectors) assumes perfect target and medium information. The practical implementation of the optical theorem detectors is based, for certain random and complex media, on the exploitation of time-reversal focusing ideas developed over the past 20 years in electromagnetics and acoustics. In the final part of the dissertation, we also discuss the implementation of the optical theorem sensors for one-dimensional propagation systems such as transmission lines. We also present a new generalized likelihood ratio test for detection that exploits a prior data constraint based on the optical theorem. Finally, we address the practical implementation of the optical theorem sensors for optical imaging systems by means of holography. The latter is the first holographic implementation of the optical theorem for arbitrary scenes and targets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, S.C.
1976-12-27
The stability of tensioned blades used in multiblade sawing does not seem to be the limitation in cutting with thin blades. So far, 0.010 cm thick blades have been totally unsuccessful. Recently, 0.015 cm blades have proven successful in wafering, offering a 0.005 cm reduction in the silicon used per slice. The failure of thin blades is characterized as a possible result of blade misalignment or of the inherent uncontrollability of the loose abrasive multiblade process. Corrective procedures will be employed in the assembly of packages to eliminate one type of blade misalignment. Two ingots were sliced with the same batch of standard silicon carbide abrasive slurry to determine the useful lifetime of this expendable material. After 250 slices, the cutting efficiency had not degraded. Further tests will be continued to establish the maximum lifetime of both silicon carbide and boron carbide abrasive. Electron microscopy will be employed to evaluate the wear of abrasive particles in the failure of abrasive slurry. The surface damage of silicon wafers has been characterized as predominantly subsurface fracture. Damage with No. 600 SiC is between 10 and 15 microns into the wafer surface. This agrees well with previous investigations of damage from silicon carbide abrasive papers.
Feature tracking for automated volume of interest stabilization on 4D-OCT images
NASA Astrophysics Data System (ADS)
Laves, Max-Heinrich; Schoob, Andreas; Kahrs, Lüder A.; Pfeiffer, Tom; Huber, Robert; Ortmaier, Tobias
2017-03-01
A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to keep track of an object by continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not yet been addressed for 4D-OCT images. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes on maximum intensity projections (MIPs) of 4D-OCT images. The estimated VOI location is used to conveniently show the corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented tracking scheme for soft tissue motion provides the highest accuracy, with an error of under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables its use for sub-epithelial tracking of microvessels for image guidance.
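A maximum intensity projection and its thin-slab variant reduce to a max over one axis of the volume. A minimal sketch on a synthetic volume, where the slab bounds stand in for the tracked VOI location (the sizes and coordinates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((32, 64, 64))  # (z, y, x) OCT-like volume, background < 1
volume[10, 20, 30] = 5.0           # a bright scatterer inside the slab

full_mip = volume.max(axis=0)      # project along z over all slices

# Thin-slab MIP: restrict the projection to slices around the tracked VOI,
# suppressing bright structures elsewhere in the volume.
z_lo, z_hi = 8, 14
thin_mip = volume[z_lo:z_hi].max(axis=0)

print(full_mip.shape, thin_mip.max())
```

Updating z_lo and z_hi from the tracker's VOI estimate each frame is what keeps the thin-slab MIP centered on the moving structure.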
NASA Astrophysics Data System (ADS)
McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O' Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.
2017-06-01
Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.
de Hoogt, Ronald; Estrada, Marta F; Vidic, Suzana; Davies, Emma J; Osswald, Annika; Barbier, Michael; Santo, Vítor E; Gjerde, Kjersti; van Zoggel, Hanneke J A A; Blom, Sami; Dong, Meng; Närhi, Katja; Boghaert, Erwin; Brito, Catarina; Chong, Yolanda; Sommergruber, Wolfgang; van der Kuip, Heiko; van Weerden, Wytske M; Verschuren, Emmy W; Hickman, John; Graeser, Ralph
2017-11-21
Two-dimensional (2D) culture of cancer cells in vitro does not recapitulate the three-dimensional (3D) architecture, heterogeneity and complexity of human tumors. More representative models are required that better reflect key aspects of tumor biology. These are essential for studies of cancer biology and immunology, as well as for target validation and drug discovery. The Innovative Medicines Initiative (IMI) consortium PREDECT (www.predect.eu) characterized in vitro models of three solid tumor types with the goal of capturing elements of tumor complexity and heterogeneity. 2D cultures and 3D mono- and stromal co-cultures of increasing complexity, and precision-cut tumor slice models, were established. Robust protocols for the generation of these platforms are described. Tissue microarrays were prepared from all the models, permitting immunohistochemical analysis of individual cells and capturing heterogeneity. 3D cultures were also characterized using image analysis. Detailed step-by-step protocols, exemplary datasets from the 2D, 3D, and slice models, and refined analytical methods were established and are presented.
PMID:29160867
On the generation of a bubbly universe - A quantitative assessment of the CfA slice
NASA Technical Reports Server (NTRS)
Ostriker, J. P.; Strassler, M. J.
1989-01-01
A first attempt is made to calculate the properties of the matter distribution in a universe filled with overlapping bubbles produced by multiple explosions. Each spherical shell follows the cosmological Sedov-Taylor solution until it encounters another shell. Thereafter, mergers are allowed to occur in pairs on the basis of N-body results. At the final epoch, the matrix of overlapping shells is populated with 'galaxies' and the properties of slices through the numerically constructed cube compare well with CfA survey results for specified initial conditions. A statistic is found which measures the distance distribution from uniformly distributed points to the nearest galaxies on the projected plane, which appears to provide a good measure of the bubbly character of the galaxy distribution. In a quantitative analysis of the CfA 'slice of the universe', a very good match is found between simulation and the real data for final average bubble radii of (13.5 ± 1.5) h⁻¹ Mpc with formal filling factor 1.0-1.5 or actual filling factor of 65-80 percent.
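The void-sensitive statistic described above can be sketched in miniature: distances from uniformly scattered test points to the nearest "galaxy" grow when the galaxies are confined to thin shell walls, because a bubbly distribution leaves large empty regions. The 2D geometry, point counts, and shell radii below are purely illustrative assumptions, not the paper's simulation.

```python
import math
import random

random.seed(3)

def nearest_dists(test_pts, galaxies):
    # Distance from each test point to its nearest galaxy.
    return [min(math.dist(t, g) for g in galaxies) for t in test_pts]

# Uniformly distributed test points in the unit square.
tests = [(random.random(), random.random()) for _ in range(300)]

# Case 1: galaxies scattered uniformly.
uniform = [(random.random(), random.random()) for _ in range(200)]

# Case 2: "bubbly" galaxies on the rims of a few circles (shell walls).
centers = [(0.25, 0.25), (0.75, 0.3), (0.5, 0.75)]
bubbly = []
for _ in range(200):
    cx, cy = random.choice(centers)
    ang = random.uniform(0, 2 * math.pi)
    bubbly.append((cx + 0.2 * math.cos(ang), cy + 0.2 * math.sin(ang)))

mean_u = sum(nearest_dists(tests, uniform)) / len(tests)
mean_b = sum(nearest_dists(tests, bubbly)) / len(tests)
print(mean_u, mean_b)  # bubbly galaxies leave larger gaps
```

The same mean-distance comparison, applied to projected slices, is what makes the statistic sensitive to bubble walls rather than to overall density.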
NASA Astrophysics Data System (ADS)
Hoang, Thai M.; Pan, Rui; Ahn, Jonghoon; Bang, Jaehoon; Quan, H. T.; Li, Tongcang
2018-02-01
Nonequilibrium processes of small systems such as molecular machines are ubiquitous in biology, chemistry, and physics but are often challenging to comprehend. In the past two decades, several exact thermodynamic relations of nonequilibrium processes, collectively known as fluctuation theorems, have been discovered and provided critical insights. These fluctuation theorems are generalizations of the second law and can be unified by a differential fluctuation theorem. Here we perform the first experimental test of the differential fluctuation theorem using an optically levitated nanosphere in both underdamped and overdamped regimes and in both spatial and velocity spaces. We also test several theorems that can be obtained from it directly, including a generalized Jarzynski equality that is valid for arbitrary initial states, and the Hummer-Szabo relation. Our study experimentally verifies these fundamental theorems and initiates the experimental study of stochastic energetics with the instantaneous velocity measurement.
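The Jarzynski equality mentioned above can be illustrated numerically in its simplest classical form, ⟨exp(−βW)⟩ = exp(−βΔF). For a Gaussian work distribution (mean μ, standard deviation σ) the equality holds exactly when ΔF = μ − βσ²/2, so a Monte Carlo average can be checked against the closed form. The numbers below are illustrative assumptions, not the experiment's data.

```python
import math
import random

# Check <exp(-beta*W)> = exp(-beta*dF) for Gaussian work samples,
# where the equality fixes dF = mu - beta*sigma**2/2.
random.seed(0)
beta = 1.0
mu, sigma = 2.0, 1.0
dF = mu - beta * sigma**2 / 2          # = 1.5 for these assumed values

n = 200_000
works = [random.gauss(mu, sigma) for _ in range(n)]
lhs = sum(math.exp(-beta * w) for w in works) / n   # exponential work average
rhs = math.exp(-beta * dF)                          # free-energy side
print(lhs, rhs)  # the two sides should agree to within sampling error
```

Note that the exponential average is dominated by rare low-work samples, which is why experiments such as the one above need good statistics in the tails.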
Generalized virial theorem for massless electrons in graphene and other Dirac materials
NASA Astrophysics Data System (ADS)
Sokolik, A. A.; Zabolotskiy, A. D.; Lozovik, Yu. E.
2016-05-01
The virial theorem for a system of interacting electrons in a crystal, which is described within the framework of the tight-binding model, is derived. We show that, in the particular case of interacting massless electrons in graphene and other Dirac materials, the conventional virial theorem is violated. Starting from the tight-binding model, we derive the generalized virial theorem for Dirac electron systems, which contains an additional term associated with a momentum cutoff at the bottom of the energy band. Additionally, we derive the generalized virial theorem within the Dirac model using the minimization of the variational energy. The obtained theorem is illustrated by many-body calculations of the ground-state energy of an electron gas in graphene carried out in Hartree-Fock and self-consistent random-phase approximations. Experimental verification of the theorem in the case of graphene is discussed.
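As background, a hedged sketch of the scaling argument behind the conventional virial theorem that the abstract says fails for massless Dirac electrons (the cutoff term derived in the paper is not reproduced here):

```latex
% Standard dilation argument; not the paper's tight-binding derivation.
% For H = T + V with T homogeneous of degree k in momenta and V
% homogeneous of degree n in coordinates, the dilated trial states
% \psi_\lambda(\mathbf r) = \lambda^{d/2}\psi(\lambda\mathbf r) give
\[
E(\lambda) = \lambda^{k}\langle T\rangle + \lambda^{-n}\langle V\rangle,
\qquad
\left.\frac{dE}{d\lambda}\right|_{\lambda=1} = 0
\;\Longrightarrow\;
k\,\langle T\rangle = n\,\langle V\rangle .
\]
% Nonrelativistic Coulomb systems (k = 2, n = -1) recover
% 2<T> = -<V>.  Massless Dirac electrons with Coulomb interactions
% (k = 1, n = -1) would give <T> = -<V>; the band-bottom momentum
% cutoff found by the authors adds a further term to this relation.
```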
The geometric Mean Value Theorem
NASA Astrophysics Data System (ADS)
de Camargo, André Pierro
2018-05-01
In a previous article published in the American Mathematical Monthly, Tucker (Amer Math Monthly. 1997; 104(3): 231-240) severely criticized the Mean Value Theorem and, unfortunately, the majority of calculus textbooks also do not help to improve its reputation. The standard argument for proving it seems to be applying Rolle's theorem to a function like
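The content of the Mean Value Theorem is easy to exhibit numerically: for a differentiable f on [a, b] there is a point c with f′(c) equal to the secant slope. A minimal sketch, using the assumed example f(x) = x³ on [0, 2], where the guaranteed point is c = 2/√3:

```python
# Locate the Mean Value Theorem point c for f(x) = x**3 on [0, 2]:
# the secant slope is (f(2) - f(0)) / 2 = 4, so f'(c) = 3*c**2 = 4.
# Since f' is increasing here, bisection on f'(c) - slope finds c.
def f(x):
    return x**3

def fprime(x):
    return 3 * x**2

a, b = 0.0, 2.0
slope = (f(b) - f(a)) / (b - a)        # = 4.0

lo, hi = a, b
for _ in range(60):                    # bisection: fprime(lo) < slope < fprime(hi)
    mid = (lo + hi) / 2
    if fprime(mid) < slope:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
print(c)  # ~ 1.1547005, i.e. 2/sqrt(3)
```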
A note on generalized Weyl's theorem
NASA Astrophysics Data System (ADS)
Zguitti, H.
2006-04-01
We prove that if either T or T* has the single-valued extension property, then the spectral mapping theorem holds for the B-Weyl spectrum. If, moreover, T is isoloid and generalized Weyl's theorem holds for T, then generalized Weyl's theorem holds for f(T) for every function f analytic on a neighbourhood of the spectrum of T. An application is given for algebraically paranormal operators.
On the addition theorem of spherical functions
NASA Astrophysics Data System (ADS)
Shkodrov, V. G.
The addition theorem of spherical functions is expressed in two reference systems, viz., an inertial system and a system rigidly fixed to a planet. A generalized addition theorem of spherical functions and a particular addition theorem for the rigidly fixed system are derived. The results are applied to the theory of a planetary potential.
Takahara, Taro; Imai, Yutaka; Yamashita, Tomohiro; Yasuda, Seiei; Nasu, Seiji; Van Cauteren, Marc
2004-01-01
To examine a new way of body diffusion weighted imaging (DWI) using the short TI inversion recovery-echo planar imaging (STIR-EPI) sequence and free breathing scanning (diffusion weighted whole body imaging with background body signal suppression; DWIBS) to obtain three-dimensional displays. 1) Apparent contrast-to-noise ratios (AppCNR) between lymph nodes and surrounding fat tissue were compared in three types of DWI with and without breath-holding, with variable lengths of scan time and slice thickness. 2) The STIR-EPI sequence and spin echo-echo planar imaging (SE-EPI) sequence with chemical shift selective (CHESS) pulse were compared in terms of their degree of fat suppression. 3) Eleven patients with neck, chest, and abdominal malignancy were scanned with DWIBS for evaluation of feasibility. Whole body imaging was done in a later stage of the study using the peripheral vascular coil. The AppCNR of 8 mm slice thickness images reconstructed from 4 mm slice thickness source images obtained in a free breathing scan of 430 sec was much better than that of 9 mm slice thickness breath-hold scans obtained in 25 sec. High resolution multi-planar reformat (MPR) and maximum intensity projection (MIP) images could be made from the data set of 4 mm slice thickness images. Fat suppression was much better with the STIR-EPI sequence than with SE-EPI with CHESS pulse. The feasibility of DWIBS was shown in clinical scans of 11 patients. Whole body images were successfully obtained with adequate fat suppression. Three-dimensional DWIBS can be obtained with this technique, which may allow us to screen for malignancies in the whole body.
Berggren, Karl; Cederström, Björn; Lundqvist, Mats; Fredenberg, Erik
2018-02-01
Digital breast tomosynthesis (DBT) is an emerging tool for breast-cancer screening and diagnostics. The purpose of this study is to present a second-generation photon-counting slit-scanning DBT system and compare it to the first-generation system in terms of geometry and image quality. The study presents the first image-quality measurements on the second-generation system. The geometry of the new system is based on a combined rotational and linear motion, in contrast to a purely rotational scan motion in the first generation. In addition, the calibration routines have been updated. Image quality was measured in the center of the image field in terms of in-slice modulation transfer function (MTF), artifact spread function (ASF), and in-slice detective quantum efficiency (DQE). Images were acquired using a W/Al 29 kVp spectrum at 13 mAs with 2 mm Al additional filtration and reconstructed using simple back-projection. The in-slice 50% MTF was improved in the chest-mammilla direction, going from 3.2 to 3.5 lp/mm, and the zero-frequency DQE increased from 0.71 to 0.77. The MTF and ASF were otherwise found to be on par for the two systems. The new geometry is less curved, which reduces in-slice tomographic-angle variation and increases the maximum compression height, making the system accessible to a larger population. The improvements in MTF and DQE were attributed to the updated calibration procedures. We conclude that the second-generation system retains the key features of the photon-counting system while maintaining or improving image quality and increasing the maximum compression height. © 2017 American Association of Physicists in Medicine.
OBSERVING LYAPUNOV EXPONENTS OF INFINITE-DIMENSIONAL DYNAMICAL SYSTEMS
OTT, WILLIAM; RIVAS, MAURICIO A.; WEST, JAMES
2016-01-01
Can Lyapunov exponents of infinite-dimensional dynamical systems be observed by projecting the dynamics into ℝN using a ‘typical’ nonlinear projection map? We answer this question affirmatively by developing embedding theorems for compact invariant sets associated with C1 maps on Hilbert spaces. Examples of such discrete-time dynamical systems include time-T maps and Poincaré return maps generated by the solution semigroups of evolution partial differential equations. We make every effort to place hypotheses on the projected dynamics rather than on the underlying infinite-dimensional dynamical system. In so doing, we adopt an empirical approach and formulate checkable conditions under which a Lyapunov exponent computed from experimental data will be a Lyapunov exponent of the infinite-dimensional dynamical system under study (provided the nonlinear projection map producing the data is typical in the sense of prevalence). PMID:28066028
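The idea of computing a Lyapunov exponent from observed dynamics can be illustrated with a finite-dimensional toy stand-in (not the Hilbert-space setting of the paper): the logistic map at r = 4, whose exponent is known analytically to be ln 2, estimated by averaging log|f′(x)| along an orbit.

```python
import math

# Estimate the Lyapunov exponent of x -> 4*x*(1-x) from an orbit.
# The analytic value for this map is ln 2; the seed and orbit length
# below are arbitrary illustrative choices.
def logistic(x):
    return 4.0 * x * (1.0 - x)

def logistic_deriv(x):
    return 4.0 - 8.0 * x

x = 0.123
for _ in range(1000):          # discard the transient
    x = logistic(x)

n = 200_000
acc = 0.0
for _ in range(n):
    acc += math.log(abs(logistic_deriv(x)))  # local stretching rate
    x = logistic(x)
lyap = acc / n
print(lyap)  # ~ 0.693 = ln 2
```

The paper's question is whether such an estimate, computed from data produced by a typical nonlinear projection of an infinite-dimensional system, is a genuine exponent of the underlying system.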
ERIC Educational Resources Information Center
Longenecker, Herbert E., Jr.; Babb, Jeffry; Waguespack, Leslie J.; Janicki, Thomas N.; Feinstein, David
2015-01-01
The evolution of computing education spans a spectrum from "computer science" ("CS") grounded in the theory of computing, to "information systems" ("IS"), grounded in the organizational application of data processing. This paper reports on a project focusing on a particular slice of that spectrum commonly…
Decisions, Decisions, Decisions: What Determines the Path Taken in Lectures?
ERIC Educational Resources Information Center
Paterson, Judy; Thomas, Mike; Taylor, Steve
2011-01-01
A group of mathematicians and mathematics educators are collaborating in the fine-grained examination of selected "slices" of video recordings of lectures, drawing on Schoenfeld's Resources, Orientations and Goals framework of teaching-in-context. In the larger project, we are exploring ways in which this model can be extended to examine…
High-resolution brain SPECT imaging by combination of parallel and tilted detector heads.
Suzuki, Atsuro; Takeuchi, Wataru; Ishitsu, Takafumi; Morimoto, Yuichi; Kobashi, Keiji; Ueno, Yuichiro
2015-10-01
To improve the spatial resolution of brain single-photon emission computed tomography (SPECT), we propose a new brain SPECT system in which the detector heads are tilted towards the rotation axis so that they are closer to the brain. In addition, parallel detector heads are used to obtain the complete projection data set. We evaluated this parallel and tilted detector head system (PT-SPECT) in simulations. In the simulation study, the tilt angle of the detector heads relative to the axis was 45°. The distance from the collimator surface of the parallel detector heads to the axis was 130 mm. The distance from the collimator surface of the tilted detector heads to the origin on the axis was 110 mm. A CdTe semiconductor panel with a 1.4 mm detector pitch and a parallel-hole collimator were employed in both types of detector head. A line source phantom, cold-rod brain-shaped phantom, and cerebral blood flow phantom were evaluated. The projection data were generated by forward-projection of the phantom images using physics models, and Poisson noise at clinical levels was applied to the projection data. The ordered-subsets expectation maximization algorithm with physics models was used. We also evaluated conventional SPECT using four parallel detector heads for the sake of comparison. The evaluation of the line source phantom showed that the transaxial FWHM in the central slice for conventional SPECT ranged from 6.1 to 8.5 mm, while that for PT-SPECT ranged from 5.3 to 6.9 mm. The cold-rod brain-shaped phantom image showed that conventional SPECT could resolve rods down to 8 mm in diameter. By contrast, PT-SPECT could resolve rods down to 6 mm in diameter in upper slices of the cerebrum. The cerebral blood flow phantom image showed that the PT-SPECT system provided higher resolution at the thalamus and caudate nucleus as well as at the longitudinal fissure of the cerebrum compared with conventional SPECT. PT-SPECT provides improved image resolution not only at upper but also at central slices of the cerebrum.
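The expectation-maximization update underlying the ordered-subsets (OSEM) reconstruction mentioned above can be sketched on a toy problem. This is the plain MLEM iteration on an assumed 2×2 system matrix with noiseless data, not the authors' implementation (which adds physics models and subset ordering):

```python
# Toy MLEM: x <- x / sens * A^T (y / (A x)), the multiplicative EM
# update for Poisson emission data.  System matrix, true activity,
# and iteration count are illustrative assumptions.
def mlem(A, y, n_iter):
    m, n = len(A), len(A[0])
    x = [1.0] * n                                  # flat initial estimate
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # A^T 1
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] for i in range(m)]  # measured / estimated
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

A = [[0.8, 0.2],
     [0.3, 0.7]]
x_true = [2.0, 1.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]  # noiseless data

x = mlem(A, y, 5000)
print(x)  # approaches x_true for this invertible, noiseless toy system
```

OSEM accelerates this by applying the same update to ordered subsets of the projections rather than to all of them at once.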
Bertrand's theorem and virial theorem in fractional classical mechanics
NASA Astrophysics Data System (ADS)
Yu, Rui-Yan; Wang, Towe
2017-09-01
Fractional classical mechanics is the classical counterpart of fractional quantum mechanics. The central force problem in this theory is investigated. Bertrand's theorem is generalized, and the virial theorem is revisited, both in three spatial dimensions. In order to produce stable, closed, non-circular orbits, the inverse-square law and Hooke's law must be modified in fractional classical mechanics.
Guided Discovery of the Nine-Point Circle Theorem and Its Proof
ERIC Educational Resources Information Center
Buchbinder, Orly
2018-01-01
The nine-point circle theorem is one of the most beautiful and surprising theorems in Euclidean geometry. It establishes an existence of a circle passing through nine points, all of which are related to a single triangle. This paper describes a set of instructional activities that can help students discover the nine-point circle theorem through…
C formal verification with unix communication and concurrency
NASA Technical Reports Server (NTRS)
Hoover, Doug N.
1990-01-01
The results of a NASA SBIR project are presented in which CSP-Ariel, a verification system for C programs which use Unix system calls for concurrent programming, interprocess communication, and file input and output, was developed. This project builds on ORA's Ariel C verification system by using the system of Hoare's book, Communicating Sequential Processes, to model concurrency and communication. The system runs in ORA's Clio theorem proving environment. The use of CSP to model Unix concurrency and sketch the CSP semantics of a simple concurrent program is outlined. Plans for further development of CSP-Ariel are discussed. This paper is presented in viewgraph form.
Observability/Identifiability of Rigid Motion under Perspective Projection
1994-03-08
NASA Astrophysics Data System (ADS)
Choudhary, A.; Dimri, A. P.
2018-04-01
Precipitation is one of the important climatic indicators in the global climate system. Probable changes in monsoonal (June, July, August and September; hereafter JJAS) mean precipitation in the Himalayan region for three different greenhouse gas emission scenarios (i.e. representative concentration pathways or RCPs) and two future time slices (near and far) are estimated from a set of regional climate simulations performed under Coordinated Regional Climate Downscaling Experiment-South Asia (CORDEX-SA) project. For each of the CORDEX-SA simulations and their ensemble, projections of near future (2020-2049) and far future (2070-2099) precipitation climatology with respect to corresponding present climate (1970-2005) over Himalayan region are presented. The variability existing over each of the future time slices is compared with the present climate variability to determine the future changes in inter annual fluctuations of monsoonal mean precipitation. The long-term (1970-2099) trend (mm/day/year) of monsoonal mean precipitation spatially distributed as well as averaged over Himalayan region is analyzed to detect any change across twenty-first century as well as to assess model uncertainty in simulating the precipitation changes over this period. The altitudinal distribution of difference in trend of future precipitation from present climate existing over each of the time slices is also studied to understand any elevation dependency of change in precipitation pattern. Except for a part of the Hindu-Kush area in western Himalayan region which shows drier condition, the CORDEX-SA experiments project in general wetter/drier conditions in near future for western/eastern Himalayan region, a scenario which gets further intensified in far future. 
Although a gradually increasing precipitation trend is seen throughout the twenty-first century in the carbon-intensive scenarios, the distribution of the trend with elevation presents a very complex picture, with lower elevations showing a greater trend in the far future under RCP8.5 than higher elevations.
Protocol, pattern and paper: interactive stabilization of immunohistochemical knowledge.
Nederbragt, Hubertus
2010-12-01
This paper analyzes the investigation of the distribution of the protein tenascin-C in canine mammary tumors. The method involved immunohistochemistry of tissue slices, performed by the application of an antibody to tenascin-C that specifically can be made visible for microscopic inspection. The first phase of the project is the making of the protocol, the second the deduction of a pattern of tenascin-C distribution in tumors and the third the writing of a paper. Each of the phases is analyzed separately, using the concept of resistance and accommodation. My purpose is to show that in each phase of the process of producing knowledge, the scientist meets resistances which force him to accommodate by changing his conceptual, technical and methodological approaches. In reverse, the details of the non-human agent (protocol, pattern or paper) have to be accommodated to the wishes and expectations of the scientist. Through this interaction a situation of stability of knowledge is reached at the end of each phase. In the protocol phase, resistance is found in the antibody and tissue slices. In the phase of pattern deduction the resistance is in the pathological diagnosis of the tumors and the expectations and hypothesis with which the scientist had entered the project; in the criteria to be used for assigning the slices to a tenascin-C pattern; and in the responses of colleagues and supervisor. In the paper-writing phase the interaction is between the scientist and the scientific community which should take on board the knowledge from the research project. When stabilization of knowledge is obtained in one of the phases, the agents of resistance turn into allies in the next phase, giving support to accommodating the resistances in this later phase. 
Second, the stabilization of knowledge of the protocol is further enhanced when stabilization of the pattern is achieved; in addition, knowledge of the pattern is more definite when it has become stabilized and closed knowledge within the science community. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Aiken, Alexander
2001-01-01
The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for analysis software, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blume-Kohout, Robin J; Scholten, Travis L.
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
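The classical Wilks null behavior that the abstract contrasts with can be demonstrated in a setting without positivity constraints: for nested Gaussian models, twice the loglikelihood ratio under the null is χ²-distributed with degrees of freedom equal to the parameter difference. A minimal sketch (assumed toy model, known σ = 1, so 2ΔlogL = n·x̄² ~ χ²(1)):

```python
import random

# Simulate the Wilks null: data from N(0, 1); compare the fitted-mean
# model against the true mean mu = 0.  With sigma known, twice the
# loglikelihood ratio reduces to n * xbar**2, which should be chi2(1)
# (mean ~ 1).  Positivity constraints like rho >= 0 break this picture.
random.seed(1)
n, trials = 50, 4000
stats = []
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    stats.append(n * xbar**2)      # 2*log LR for this nested pair
mean_stat = sum(stats) / trials
print(mean_stat)  # ~ 1.0, the chi2(1) mean
```

On the boundary of quantum state space the statistic is pushed below this χ² prediction, which is the effect the metric-projected LAN replacement quantifies.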
Time-dependent entropy evolution in microscopic and macroscopic electromagnetic relaxation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker-Jarvis, James
This paper is a study of entropy and its evolution in the time and frequency domains upon application of electromagnetic fields to materials. An understanding of entropy and its evolution in electromagnetic interactions bridges the boundaries between electromagnetism and thermodynamics. The approach used here is a Liouville-based statistical-mechanical theory. I show that the microscopic entropy is reversible and the macroscopic entropy satisfies an H theorem. The spectral entropy development can be very useful for studying the frequency response of materials. Using a projection-operator based nonequilibrium entropy, different equations are derived for the entropy and entropy production and are applied to the polarization, magnetization, and macroscopic fields. I begin by proving an exact H theorem for the entropy, progress to application of time-dependent entropy in electromagnetics, and then apply the theory to relevant applications in electromagnetics. The paper concludes with a discussion of the relationship of the frequency-domain form of the entropy to the permittivity, permeability, and impedance.
Fuchsian triangle groups and Grothendieck dessins. Variations on a theme of Belyi
NASA Astrophysics Data System (ADS)
Cohen, Paula Beazley; Itzykson, Claude; Wolfart, Jürgen
1994-07-01
According to a theorem of Belyi, a smooth projective algebraic curve is defined over a number field if and only if there exists a non-constant element of its function field ramified only over 0, 1 and ∞. The existence of such a Belyi function is equivalent to that of a representation of the curve as a possibly compactified quotient space of the Poincaré upper half plane by a subgroup of finite index in a Fuchsian triangle group. On the other hand, Fuchsian triangle groups arise in many contexts, such as in the theory of hypergeometric functions and certain triangular billiard problems, which would appear at first sight to have no relation to the Galois problems that motivated the above discovery of Belyi. In this note we review several results related to Belyi's theorem and we develop certain aspects giving examples. For preliminary accounts, see the preprint [Wo1], the conference proceedings article [BauItz] and the Comptes Rendus note [CoWo2].
Characterization of Generalized Young Measures Generated by Symmetric Gradients
NASA Astrophysics Data System (ADS)
De Philippis, Guido; Rindler, Filip
2017-06-01
This work establishes a characterization theorem for (generalized) Young measures generated by symmetric derivatives of functions of bounded deformation (BD) in the spirit of the classical Kinderlehrer-Pedregal theorem. Our result places such Young measures in duality with symmetric-quasiconvex functions with linear growth. The "local" proof strategy combines blow-up arguments with the singular structure theorem in BD (the analogue of Alberti's rank-one theorem in BV), which was recently proved by the authors. As an application of our characterization theorem we show how an atomic part in a BD-Young measure can be split off in generating sequences.
The Poincaré-Hopf Theorem for line fields revisited
NASA Astrophysics Data System (ADS)
Crowley, Diarmuid; Grant, Mark
2017-07-01
A Poincaré-Hopf Theorem for line fields with point singularities on orientable surfaces can be found in Hopf's 1956 Lecture Notes on Differential Geometry. In 1955 Markus presented such a theorem in all dimensions, but Markus' statement only holds in even dimensions 2 k ≥ 4. In 1984 Jänich presented a Poincaré-Hopf theorem for line fields with more complicated singularities and focussed on the complexities arising in the generalized setting. In this expository note we review the Poincaré-Hopf Theorem for line fields with point singularities, presenting a careful proof which is valid in all dimensions.
Common fixed point theorems for maps under a contractive condition of integral type
NASA Astrophysics Data System (ADS)
Djoudi, A.; Merghadi, F.
2008-05-01
Two common fixed point theorems for mapping of complete metric space under a general contractive inequality of integral type and satisfying minimal commutativity conditions are proved. These results extend and improve several previous results, particularly Theorem 4 of Rhoades [B.E. Rhoades, Two fixed point theorems for mappings satisfying a general contractive condition of integral type, Int. J. Math. Math. Sci. 63 (2003) 4007-4013] and Theorem 4 of Sessa [S. Sessa, On a weak commutativity condition of mappings in fixed point considerations, Publ. Inst. Math. (Beograd) (N.S.) 32 (46) (1982) 149-153].
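The fixed-point machinery such theorems extend can be illustrated with the classical Banach/Picard iteration, a special case of the integral-type contractive conditions treated in the paper. A minimal sketch with the assumed example f(x) = cos(x), a contraction near its unique fixed point (the Dottie number):

```python
import math

# Picard iteration for a contraction: f(x) = cos(x) satisfies
# |f'(x)| <= sin(1) < 1 on [0, 1] after one step, so the iterates
# converge to the unique fixed point regardless of the start in [0, 1].
x = 0.0
for _ in range(200):
    x = math.cos(x)
print(x)  # ~ 0.739085, with cos(x) == x to machine precision
```

The integral-type condition in the paper replaces the pointwise contraction bound by an inequality on an integral of the distances, which is strictly more general.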
A Converse of the Mean Value Theorem Made Easy
ERIC Educational Resources Information Center
Mortici, Cristinel
2011-01-01
The aim of this article is to discuss some results about the converse mean value theorem stated by Tong and Braza [J. Tong and P. Braza, "A converse of the mean value theorem", Amer. Math. Monthly 104(10), (1997), pp. 939-942] and Almeida [R. Almeida, "An elementary proof of a converse mean-value theorem", Internat. J. Math. Ed. Sci. Tech. 39(8)…
Recurrence theorems: A unified account
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, David, E-mail: david.wallace@balliol.ox.ac.uk
I discuss classical and quantum recurrence theorems in a unified manner, treating both as generalisations of the fact that a system with a finite state space only has so many places to go. Along the way, I prove versions of the recurrence theorem applicable to dynamics on linear and metric spaces and make some comments about applications of the classical recurrence theorem in the foundations of statistical mechanics.
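The "only so many places to go" intuition can be made concrete: any map on a finite set is eventually periodic by pigeonhole, and an invertible map (a permutation) returns every initial state to itself. A minimal sketch with an arbitrary illustrative permutation:

```python
# Recurrence on a finite state space: iterate a permutation of
# {0, ..., 4} and count the steps until each state first returns.
# The permutation below is an arbitrary example (one 4-cycle and a
# fixed point), not tied to any particular physical system.
perm = [2, 0, 3, 1, 4]

def recurrence_time(start):
    x, steps = perm[start], 1
    while x != start:
        x = perm[x]
        steps += 1
    return steps

times = [recurrence_time(s) for s in range(5)]
print(times)  # each state recurs; cycle members share a return time
```

The recurrence theorems discussed above generalize exactly this finiteness argument to measure-preserving dynamics on linear and metric spaces.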
A variational theorem for creep with applications to plates and columns
NASA Technical Reports Server (NTRS)
Sanders, J Lyell, Jr; Mccomb, Harvey G , Jr; Schlechte, Floyd R
1958-01-01
A variational theorem is presented for a body undergoing creep. Solutions to problems of the creep behavior of plates, columns, beams, and shells can be obtained by means of the direct methods of the calculus of variations in conjunction with the stated theorem. The application of the theorem is illustrated for plates and columns by the solution of two sample problems.
ERIC Educational Resources Information Center
Gkioulekas, Eleftherios
2013-01-01
Many limits, typically taught as examples of applying the "squeeze" theorem, can be evaluated more easily using the proposed zero-bounded limit theorem. The theorem applies to functions defined as a product of a factor going to zero and a factor that remains bounded in some neighborhood of the limit. This technique is immensely useful…
Nawratil, Georg
2014-01-01
In 1898, Ernest Duporcq stated a famous theorem about rigid-body motions with spherical trajectories, without giving a rigorous proof. Today, this theorem is again of interest, as it is strongly connected with the topic of self-motions of planar Stewart–Gough platforms. We discuss Duporcq's theorem from this point of view and demonstrate that it is not correct. Moreover, we also present a revised version of this theorem. PMID:25540467
Naganawa, Shinji; Kanou, Mai; Ohashi, Toshio; Kuno, Kayao; Sone, Michihiko
2016-01-01
Purpose: To evaluate the feasibility of a simple estimation for the endolymphatic volume ratio (endolymph volume/total lymph volume = %ELvolume) from an area ratio obtained from only one slice (%EL1slice) or from three slices (%EL3slices). The %ELvolume, calculated from a time-consuming measurement on all magnetic resonance (MR) slices, was compared to the %EL1slice and the %EL3slices. Methods: In 40 ears of 20 patients with a clinical suspicion of endolymphatic hydrops, MR imaging was performed 4 hours after intravenous administration of a single dose of gadolinium-based contrast material (IV-SD-GBCM). Using previously reported HYDROPS2-Mi2 MR imaging, the %ELvolume values in the cochlea and the vestibule were measured separately by two observers. The correlations between the %EL1slice or the %EL3slices and the %ELvolume values were evaluated. Results: A strong linear correlation was observed between the %ELvolume and the %EL3slices or the %EL1slice in the cochlea. The Pearson correlation coefficient (r) was 0.968 (3 slices) and 0.965 (1 slice) for observer A, and 0.968 (3 slices) and 0.964 (1 slice) for observer B (P < 0.001, for all). A strong linear correlation was also observed between the %ELvolume and the %EL3slices or the %EL1slice in the vestibule. The Pearson correlation coefficient (r) was 0.980 (3 slices) and 0.953 (1 slice) for observer A, and 0.979 (3 slices) and 0.952 (1 slice) for observer B (P < 0.001, for all). The high intra-class correlation coefficients (0.991–0.997) between the endolymph volume ratios by two observers were observed in both the cochlea and the vestibule for values of the %ELvolume, the %EL3slices and the %EL1slice. Conclusion: The %ELvolume might be easily estimated from the %EL3slices or the %EL1slice. PMID:27001396
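The agreement analysis described above rests on Pearson correlation plus a simple linear relation between the slice-based estimate and the full-volume ratio; a minimal sketch on synthetic stand-in data (not the study's measurements):

```python
import numpy as np

# Hypothetical stand-in data: true endolymph volume ratios (%ELvolume) for
# 40 ears, and noisy single-slice area-ratio estimates (%EL1slice).
rng = np.random.default_rng(0)
el_volume = rng.uniform(5.0, 40.0, size=40)            # percent
el_1slice = el_volume + rng.normal(0.0, 2.0, size=40)  # slice-based estimate

# Pearson correlation between the one-slice estimate and the volume ratio.
r = np.corrcoef(el_1slice, el_volume)[0, 1]
print(f"Pearson r = {r:.3f}")

# A least-squares line lets %ELvolume be estimated from %EL1slice alone.
slope, intercept = np.polyfit(el_1slice, el_volume, 1)
print(f"%ELvolume ~= {slope:.2f} * %EL1slice {intercept:+.2f}")
```

A strong r on such data is what justifies replacing the time-consuming all-slice measurement with the one-slice estimate.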
Multi-wire slurry wafering demonstrations. [slicing silicon ingots for solar arrays
NASA Technical Reports Server (NTRS)
Chen, C. P.
1978-01-01
Ten slicing demonstrations on a multi-wire slurry saw, made to evaluate its silicon ingot wafering capabilities, reveal that the present sawing capabilities can provide usable wafer area from an ingot of 1.05 m²/kg (e.g., kerf width 0.135 mm and wafer thickness 0.265 mm). Satisfactory surface quality and excellent yield of silicon wafers were found. One drawback is that the add-on cost of producing wafers from this saw, as presently used, is considerably higher than that of other systems being developed for the low-cost silicon solar array (LSSA) project, primarily because the saw uses a large quantity of wire. The add-on cost can be significantly reduced by extending the wire life and/or by reuse of properly plated wire to restore the diameter.
Voronovskaja's theorem revisited
NASA Astrophysics Data System (ADS)
Tachev, Gancho T.
2008-07-01
We present a new quantitative variant of Voronovskaja's theorem for the Bernstein operator. This estimate improves the recent quantitative versions of Voronovskaja's theorem for certain Bernstein-type operators obtained by H. Gonska, P. Pitul and I. Rasa in 2006.
Random Walks on Cartesian Products of Certain Nonamenable Groups and Integer Lattices
NASA Astrophysics Data System (ADS)
Vishnepolsky, Rachel
A random walk on a discrete group satisfies a local limit theorem with power law exponent α if the return probabilities follow the asymptotic law P{return to starting point after n steps} ~ C ρ^n n^(−α). A group has a universal local limit theorem if all random walks on the group with finitely supported step distributions obey a local limit theorem with the same power law exponent. Given two groups that obey universal local limit theorems, it is not known whether their Cartesian product also has a universal local limit theorem. We settle the question affirmatively in one case, by considering a random walk on the Cartesian product of a nonamenable group whose Cayley graph is a tree and the integer lattice. As corollaries, we derive large deviations estimates and a central limit theorem.
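For the simplest concrete case, the simple symmetric walk on the integer lattice Z, the return probabilities can be computed exactly and compared against a power law with exponent α = 1/2 (and ρ = 1); a minimal sketch:

```python
import math

# Exact return probability of the simple symmetric random walk on Z:
# P(S_{2n} = 0) = C(2n, n) / 4^n, which satisfies a local limit theorem
# P(S_{2n} = 0) ~ (1 / sqrt(pi)) * n^(-1/2), i.e. power-law exponent 1/2.
def return_prob(n):
    # Integer arithmetic throughout to avoid float overflow for large n.
    return math.comb(2 * n, n) / 4 ** n

for n in (10, 100, 1000):
    exact = return_prob(n)
    asymptotic = 1.0 / math.sqrt(math.pi * n)
    print(f"n = {n:5d}   exact = {exact:.6f}   n^(-1/2) law = {asymptotic:.6f}")
```

The ratio of exact to asymptotic value tends to 1 as n grows, illustrating the local limit law for this walk.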
Waller, Niels
2018-01-01
Kristof's Theorem (Kristof, 1970) describes a matrix trace inequality that can be used to solve a wide class of least-squares optimization problems without calculus. Considering its generality, it is surprising that Kristof's Theorem is rarely used in statistics and psychometric applications. The underutilization of this method likely stems, in part, from the mathematical complexity of Kristof's (1964, 1970) writings. In this article, I describe the underlying logic of Kristof's Theorem in simple terms by reviewing four key mathematical ideas that are used in the theorem's proof. I then show how Kristof's Theorem can be used to provide novel derivations of two cognate models from statistics and psychometrics. This tutorial includes a glossary of technical terms and an online supplement with R (R Core Team, 2017) code to perform the calculations described in the text.
Radial q-space sampling for DSI
Baete, Steven H.; Yutzy, Stephen; Boada, Fernando E.
2015-01-01
Purpose: Diffusion Spectrum Imaging (DSI) has been shown to be an effective tool for non-invasively depicting the anatomical details of brain microstructure. Existing implementations of DSI sample the diffusion encoding space using a rectangular grid. Here we present a different implementation of DSI whereby a radially symmetric q-space sampling scheme for DSI (RDSI) is used to improve the angular resolution and accuracy of the reconstructed Orientation Distribution Functions (ODF). Methods: Q-space is sampled by acquiring several q-space samples along a number of radial lines. Each of these radial lines in q-space is analytically connected to a value of the ODF at the same angular location by the Fourier slice theorem. Results: Computer simulations and in vivo brain results demonstrate that RDSI correctly estimates the ODF when a moderately high b-value (4000 s/mm²) and number of q-space samples (236) are used. Conclusion: The nominal angular resolution of RDSI depends on the number of radial lines used in the sampling scheme, and only weakly on the maximum b-value. In addition, the radial analytical reconstruction reduces truncation artifacts which affect Cartesian reconstructions. Hence, a radial acquisition of q-space can be favorable for DSI. PMID:26363002
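The Fourier slice (projection-slice) theorem invoked here can be checked numerically in a few lines: the 1D FFT of a projection of an image equals the central slice, along the same direction, of the image's 2D FFT. A minimal sketch with a random test image:

```python
import numpy as np

# Random 64x64 test image standing in for a 2D object.
rng = np.random.default_rng(1)
img = rng.random((64, 64))

# Project along the rows (sum over y), then take the 1D Fourier transform.
projection = img.sum(axis=0)
ft_projection = np.fft.fft(projection)

# Central slice (ky = 0) of the 2D Fourier transform, same direction.
central_slice = np.fft.fft2(img)[0, :]

print(np.allclose(ft_projection, central_slice))  # True: the theorem holds
```

Rotating the projection direction traces out the other radial lines of the Fourier plane, which is the geometric basis of the radial q-space scheme described above.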
Double soft graviton theorems and Bondi-Metzner-Sachs symmetries
NASA Astrophysics Data System (ADS)
Anupam, A. H.; Kundu, Arpan; Ray, Krishnendu
2018-05-01
It is now well understood that Ward identities associated with the (extended) BMS algebra are equivalent to single soft graviton theorems. In this work, we show that if we consider nested Ward identities constructed out of two BMS charges, a class of double soft factorization theorems can be recovered. By making connections with earlier works in the literature, we argue that at the subleading order, these double soft graviton theorems are the so-called consecutive double soft graviton theorems. We also show how these nested Ward identities can be understood as Ward identities associated with BMS symmetries in scattering states defined around (non-Fock) vacua parametrized by supertranslations or superrotations.
Measurement of Hubble constant: non-Gaussian errors in HST Key Project data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Meghendra; Gupta, Shashikant; Pandey, Ashwini
2016-08-01
Assuming the Central Limit Theorem, experimental uncertainties in any data set are expected to follow the Gaussian distribution with zero mean. We propose an elegant method based on the Kolmogorov-Smirnov statistic to test this assumption, and apply it to the measurement of the Hubble constant, which determines the expansion rate of the Universe. The measurements were made using the Hubble Space Telescope. Our analysis shows that the uncertainties in the above measurement are non-Gaussian.
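The test can be sketched as follows (on synthetic data, not the HST Key Project measurements): compute the Kolmogorov-Smirnov distance between standardized residuals and the standard normal CDF; the distance grows when the errors are non-Gaussian.

```python
import math
import random

def normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(sample):
    """Kolmogorov-Smirnov distance between a sample and N(0, 1)."""
    xs = sorted(sample)
    n = len(xs)
    return max(
        max(abs((i + 1) / n - normal_cdf(x)), abs(i / n - normal_cdf(x)))
        for i, x in enumerate(xs)
    )

random.seed(42)
gaussian = [random.gauss(0.0, 1.0) for _ in range(500)]
skewed = [random.expovariate(1.0) - 1.0 for _ in range(500)]  # zero mean, non-Gaussian

print(f"KS distance, Gaussian errors: {ks_statistic(gaussian):.3f}")
print(f"KS distance, skewed errors:   {ks_statistic(skewed):.3f}")
```

In practice one compares the observed statistic with the Kolmogorov distribution's critical values to decide whether the Gaussian hypothesis can be rejected.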
On Generalizations of Cochran’s Theorem and Projection Matrices.
1980-08-01
"Definiteness of the Estimated Dispersion Matrix in a Multivariate Linear Model," F. Pukelsheim and George P. H. Styan, May 1978 (technical report). "…with applications to the analysis of covariance," Proc. Cambridge Philos. Soc., 30, pp. 178-191. Graybill, F. A. and Marsaglia, G. (1957). "Idempotent matrices and quadratic forms in the general linear hypothesis," Ann. Math. Statist., 28, pp. 678-686. Greub, W. (1975). Linear Algebra (4th ed.).
A fermionic de Finetti theorem
NASA Astrophysics Data System (ADS)
Krumnow, Christian; Zimborás, Zoltán; Eisert, Jens
2017-12-01
Quantum versions of de Finetti's theorem are powerful tools, yielding conceptually important insights into the security of key distribution protocols or tomography schemes and allowing one to bound the error made by mean-field approaches. Such theorems link the symmetry of a quantum state under the exchange of subsystems to negligible quantum correlations and are well understood and established in the context of distinguishable particles. In this work, we derive a de Finetti theorem for finite sized Majorana fermionic systems. It is shown, much reflecting the spirit of other quantum de Finetti theorems, that a state which is invariant under certain permutations of modes loses most of its anti-symmetric character and is locally well described by a mode separable state. We discuss the structure of the resulting mode separable states and establish in specific instances a quantitative link to the quality of the Hartree-Fock approximation of quantum systems. We hint at a link to generalized Pauli principles for one-body reduced density operators. Finally, building upon the obtained de Finetti theorem, we generalize and extend the applicability of Hudson's fermionic central limit theorem.
ERIC Educational Resources Information Center
Davis, Philip J.
1993-01-01
Argues for a mathematics education that interprets the word "theorem" in a sense that is wide enough to include the visual aspects of mathematical intuition and reasoning. Defines the term "visual theorems" and illustrates the concept using the Marigold of Theodorus. (Author/MDH)
Note on the theorems of Bjerknes and Crocco
NASA Technical Reports Server (NTRS)
Theodorsen, Theodore
1946-01-01
The theorems of Bjerknes and Crocco are of great interest in the theory of flow around airfoils at Mach numbers near and above unity. A brief note shows how both theorems are developed by short vector transformations.
NASA Technical Reports Server (NTRS)
Horrigan, D. J.; Horwitz, B. A.; Horowitz, J. M.
1997-01-01
Serotonergic fibers project to the hippocampus, a brain area previously shown to have distinctive changes in electroencephalographic (EEG) activity during entrance into and arousal from hibernation. The EEG activity is generated by pyramidal cells in both hibernating and nonhibernating species. Using the brain slice preparation, we characterized serotonergic responses of these CA1 pyramidal cells in euthermic, cold-acclimated, and hibernating Syrian hamsters. Stimulation of Schaffer collateral/commissural fibers evoked fast synaptic excitation of CA1 pyramidal cells, a response monitored by recording population spikes (the synchronous generation of action potentials). Neuromodulation by serotonin (5-HT) decreased population spike amplitude by 54% in cold-acclimated animals, 80% in hibernating hamsters, and 63% in euthermic animals. The depression was significantly greater in slices from hibernators than from cold-acclimated animals. In slices from euthermic animals, changes in extracellular K+ concentration between 2.5 and 5.0 mM did not significantly alter serotonergic responses. The 5-HT1A agonist 8-hydroxy-2-(di-n-propylamino)tetralin mimicked serotonergic inhibition in euthermic hamsters. Results show that 5-HT is a robust neuromodulator not only in euthermic animals but also in cold-acclimated and hibernating hamsters.
Analysis of non locality proofs in Quantum Mechanics
NASA Astrophysics Data System (ADS)
Nisticò, Giuseppe
2012-02-01
Two kinds of non-locality theorems in Quantum Mechanics are taken into account: the theorems based on the criterion of reality and the quite different theorem proposed by Stapp. In the present work the analyses of the theorem due to Greenberger, Horne, Shimony and Zeilinger, based on the criterion of reality, and of Stapp's argument are shown. The results of these analyses show that the alleged violations of locality cannot be considered definitive.
PYGMALION: A Creative Programming Environment
1975-06-01
Examples of Purely Iconic Reasoning: Pythagoras' original proof of the Pythagorean Theorem… Theorem Proving Machine. His program employed properties of the representation to guide the proof of theorems. His simple heuristic "Reject…" … the square of the hypotenuse. "Every proposition is presented as a self-contained fact relying on its own intrinsic evidence. Instead…
A Maximal Element Theorem in FWC-Spaces and Its Applications
Hu, Qingwen; Miao, Yulin
2014-01-01
A maximal element theorem is proved in finite weakly convex spaces (FWC-spaces, in short), which have no linear, convex, or topological structure. Using the maximal element theorem, we develop new existence theorems of solutions to the variational relation problem, generalized equilibrium problem, equilibrium problem with lower and upper bounds, and minimax problem in FWC-spaces. The results presented in this paper unify and extend some known results in the literature. PMID:24782672
Generalized Bloch theorem and topological characterization
NASA Astrophysics Data System (ADS)
Dobardžić, E.; Dimitrijević, M.; Milovanović, M. V.
2015-03-01
The Bloch theorem enables reduction of the eigenvalue problem of the single-particle Hamiltonian that commutes with the translational group. Based on a group theory analysis we present a generalization of the Bloch theorem that incorporates all additional symmetries of a crystal. The generalized Bloch theorem constrains the form of the Hamiltonian which becomes manifestly invariant under additional symmetries. In the case of isotropic interactions the generalized Bloch theorem gives a unique Hamiltonian. This Hamiltonian coincides with the Hamiltonian in the periodic gauge. In the case of anisotropic interactions the generalized Bloch theorem allows a family of Hamiltonians. Due to the continuity argument we expect that even in this case the Hamiltonian in the periodic gauge defines observables, such as Berry curvature, in the inverse space. For both cases we present examples and demonstrate that the average of the Berry curvatures of all possible Hamiltonians in the Bloch gauge is the Berry curvature in the periodic gauge.
Empson, R M; Heinemann, U
1995-05-01
1. The perforant path projection from layer III of the entorhinal cortex to CA1 of the hippocampus was studied within a hippocampal-entorhinal combined slice preparation. We prevented contamination from the other main hippocampal pathways by removal of CA3 and the dentate gyrus. 2. Initially the projection was mapped using field potential recordings that suggested an excitatory sink in stratum lacunosum moleculare with an associated source in stratum pyramidale. 3. However, recording intracellularly from CA1 cells, stimulation of the perforant path produced prominent fast GABAA and slow GABAB IPSPs often preceded by small EPSPs. In a small number of cells we observed EPSPs only. 4. CNQX blocked excitatory and inhibitory responses. This indicated the presence of an intervening excitatory synapse between the inhibitory interneurone and the pyramidal cell. 5. Focal bicuculline applications revealed that the major site of GABAA inhibitory input was to stratum radiatum of CA1. 6. The inhibition activated by the perforant path was very effective at reducing simultaneously activated Schaffer collateral mediated EPSPs and suprathreshold-stimulated action potentials. 7. Blockade of fast inhibition increased excitability and enhanced slow inhibition. Both increases relied upon the activation of NMDA receptors. 8. Perforant path inputs activated prominent and effective disynaptic inhibition of CA1 cells. This has significance for the output of hippocampal processing during normal behaviour and also under pathological conditions.
Revisiting Ramakrishnan's approach to relativity. [Velocity addition theorem uniqueness]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, K.K.; Shankara, T.S.
The conditions under which the velocity addition theorem (VAT) was formulated by Ramakrishnan gave rise to doubts about the uniqueness of the theorem. These conditions are rediscussed with reference to their algebraic and experimental implications. 9 references.
General Theorems about Homogeneous Ellipsoidal Inclusions
ERIC Educational Resources Information Center
Korringa, J.; And Others
1978-01-01
Mathematical theorems about the properties of ellipsoids are developed. Included are Poisson's theorem concerning the magnetization of a homogeneous body of ellipsoidal shape, the polarization of a dielectric, the transport of heat or electricity through an ellipsoid, and other problems. (BB)
A no-hair theorem for black holes in f(R) gravity
NASA Astrophysics Data System (ADS)
Cañate, Pedro
2018-01-01
In this work we present a no-hair theorem which discards the existence of four-dimensional asymptotically flat, static and spherically symmetric or stationary axisymmetric, non-trivial black holes in the frame of f(R) gravity under metric formalism. Here we show that our no-hair theorem also can discard asymptotic de Sitter stationary and axisymmetric non-trivial black holes. The novelty is that this no-hair theorem is built without resorting to known mapping between f(R) gravity and scalar–tensor theory. Thus, an advantage will be that our no-hair theorem applies as well to metric f(R) models that cannot be mapped to scalar–tensor theory.
Generalized Browder's and Weyl's theorems for Banach space operators
NASA Astrophysics Data System (ADS)
Curto, Raúl E.; Han, Young Min
2007-12-01
We find necessary and sufficient conditions for a Banach space operator T to satisfy the generalized Browder's theorem. We also prove that the spectral mapping theorem holds for the Drazin spectrum and for analytic functions on an open neighborhood of σ(T). As applications, we show that if T is algebraically M-hyponormal, or if T is algebraically paranormal, then the generalized Weyl's theorem holds for f(T), where f ∈ H(σ(T)), the space of functions analytic on an open neighborhood of σ(T). We also show that if T is reduced by each of its eigenspaces, then the generalized Browder's theorem holds for f(T) for each f ∈ H(σ(T)).
Lanchester-Type Models of Warfare. Volume II
1980-10-01
the so-called Perron-Frobenius theorem for nonnegative matrices that one can guarantee that (without any further assumptions about A and B) there always exists a vector of nonnegative values such that, for example, (7.18.6) holds. Before we state the Perron-Frobenius theorem for nonnegative… a proof of this important theorem). THEOREM (Perron [121] and Frobenius [60]): Let C ≥ 0 be an n × n matrix. Then, 1. C has a nonnegative real…
A remark on the energy conditions for Hawking's area theorem
NASA Astrophysics Data System (ADS)
Lesourd, Martin
2018-06-01
Hawking's area theorem is a fundamental result in black hole theory that is universally associated with the null energy condition. That this condition can be weakened is illustrated by the formulation of a strengthened version of the theorem based on an energy condition that allows for violations of the null energy condition. With the semi-classical context in mind, some brief remarks pertaining to the suitability of the area theorem and its energy condition are made.
Li, Rongjin; Zhang, Xiaotao; Dong, Huanli; Li, Qikai; Shuai, Zhigang; Hu, Wenping
2016-02-24
The equilibrium crystal shape and shape evolution of organic crystals are found to follow the Gibbs-Curie-Wulff theorem. Organic crystals are grown by the physical vapor transport technique and exhibit exactly the same shape as predicted by the Gibbs-Curie-Wulff theorem under optimal conditions. This accordance provides concrete proof for the theorem. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Note on a Sampling Theorem for Functions over the GF(q)^n Domain
NASA Astrophysics Data System (ADS)
Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi
In digital signal processing, the sampling theorem states that any real valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain. Then, the range of frequencies of ƒ can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function, and a sampling theorem for bandlimited functions over the Boolean domain has been obtained. Here, it is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (GF(2)^n domain) but also over the GF(q)^n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q), the number of levels often takes a value greater than two. However, the sampling theorem for bandlimited functions over the GF(q)^n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code. However, the relation between the parity check matrix of a linear code and any distinct error vectors has not been obtained, although it is necessary for understanding the meaning of the sampling theorem for bandlimited functions. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)^n domain. We also present a theorem for the relation between the parity check matrix of a linear code and any distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)^n domain and linear codes.
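The Fourier basis mentioned in the abstract can be made concrete for the Boolean case q = 2, where the basis functions are the characters χ_w(x) = (−1)^(w·x) over GF(2)^n; a small orthogonality check, the property that underlies reconstruction from samples:

```python
import itertools

n = 3
points = list(itertools.product((0, 1), repeat=n))  # all of GF(2)^3

def chi(w, x):
    """Fourier basis function (character) chi_w(x) = (-1)^(w . x) over GF(2)^n."""
    return (-1) ** sum(wi * xi for wi, xi in zip(w, x))

# <chi_w, chi_v> = sum_x chi_w(x) * chi_v(x) equals 2^n when w == v, else 0.
for w in points:
    for v in points:
        inner = sum(chi(w, x) * chi(v, x) for x in points)
        assert inner == (2 ** n if w == v else 0)

print("characters over GF(2)^3 form an orthogonal basis")
```

For general prime powers q the same construction uses q-th roots of unity in place of ±1; the sketch above covers only the Boolean case.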
High Resolution Spectrograph for the Hobby-Eberly Telescope
NASA Astrophysics Data System (ADS)
Tull, R. G.; MacQueen, P. J.; Good, J.; Epps, H. W.; HET HRS Team
1998-12-01
A fiber fed high-resolution spectrograph (HRS) is under construction for the Hobby-Eberly Telescope (HET). The primary resolving power originally specified, from astrophysical considerations, was R = 60,000 with a fiber of diameter at least 1 arc-second, with full spectral coverage limited only by the combined band-pass of the HET, the optical fiber, and the image detector. This was achieved in the final design with a high blaze angle R-4 echelle mosaic, white pupil design, image slicing, and a large area CCD mosaic illuminated by an eight element refractive camera. Two back-to-back, user selectable first-order diffraction gratings are employed for cross dispersion, to separate echelle spectral orders; the entire spectral range (420 - 1,000 nm) can be covered in as few as two exposures. Critical issues addressed in the design are cross dispersion and order spacing, sky subtraction, echelle and CCD selection, fiber optic feed and scrambling, and image or pupil slicing. In the final design meeting the requirements we exploited the large-area 4096 square CCD, image slicing, and the optical performance of the white-pupil design to acquire a range of 30,000 < R < 120,000 with fibers of diameter 2 and 3 arc-seconds, without sacrificing full spectral coverage. Design details will be presented. Limiting magnitude is projected to be about V = 19 (for S/N = 10) at the nominal R = 60,000 resolving power. The poster display will outline performance characteristics expected in relation to projected astrophysical research capabilities outlined by Sneden et al., in this conference. HRS is supported by generous grants from NSF, NASA, the State of Texas, and private philanthropy, with matching funds granted by the University of Texas and by McDonald Observatory.
Tang, Rendong; Dai, Jiapei
2014-01-01
The processing of neural information in neural circuits plays key roles in neural functions. Biophotons, also called ultra-weak photon emissions (UPE), may play potential roles in neural signal transmission, contributing to the understanding of the high functions of nervous system such as vision, learning and memory, cognition and consciousness. However, the experimental analysis of biophotonic activities (emissions) in neural circuits has been hampered due to technical limitations. Here by developing and optimizing an in vitro biophoton imaging method, we characterize the spatiotemporal biophotonic activities and transmission in mouse brain slices. We show that the long-lasting application of glutamate to coronal brain slices produces a gradual and significant increase of biophotonic activities and achieves the maximal effect within approximately 90 min, which then lasts for a relatively long time (>200 min). The initiation and/or maintenance of biophotonic activities by glutamate can be significantly blocked by oxygen and glucose deprivation, together with the application of a cytochrome c oxidase inhibitor (sodium azide), but only partly by an action potential inhibitor (TTX), an anesthetic (procaine), or the removal of intracellular and extracellular Ca2+. We also show that the detected biophotonic activities in the corpus callosum and thalamus in sagittal brain slices mostly originate from axons or axonal terminals of cortical projection neurons, and that the hyperphosphorylation of microtubule-associated protein tau leads to a significant decrease of biophotonic activities in these two areas. Furthermore, the application of glutamate in the hippocampal dentate gyrus results in increased biophotonic activities in its intrahippocampal projection areas. 
These results suggest that the glutamate-induced biophotonic activities reflect biophotonic transmission along the axons and in neural circuits, which may be a new mechanism for the processing of neural information. PMID:24454909
500 x 1 byte x 136 images. So each 500 bytes from this dataset represents one scan line of the slice image. For example, using PBM:
Get frame one: rawtopgm 256 256 < tomato.data > frame1
Get frames one to four into a single image: rawtopgm 256 1024 < tomato.data > frame1-4
Get frame two (skip…
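The same extraction can be sketched in Python (assuming, as in the rawtopgm examples, raw 8-bit 256 x 256 frames packed back to back; the file name tomato.data is taken from the fragment, and a small fake 4-frame file stands in for the real dataset):

```python
import numpy as np

# Frame geometry from the rawtopgm examples above (8-bit, 256 x 256).
frame_shape = (256, 256)
frame_bytes = frame_shape[0] * frame_shape[1]

# Write a small fake 4-frame dataset so this sketch is self-contained.
fake = (np.arange(4 * frame_bytes) % 256).astype(np.uint8)
fake.tofile("tomato.data")

# Read the raw bytes back and view them as a stack of slice images.
raw = np.fromfile("tomato.data", dtype=np.uint8)
frames = raw.reshape(-1, *frame_shape)

print(frames.shape)  # (4, 256, 256): frames[1] is "frame two"
```

Slicing the `frames` array replaces the byte-offset arithmetic of the shell commands.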
Service-Learning Integrated throughout a College of Engineering (SLICE)
ERIC Educational Resources Information Center
Duffy, John; Barrington, Linda; West, Cheryl; Heredia, Manuel; Barry, Carol
2011-01-01
In the fall of 2004 a college began a program to integrate service-learning (S-L) projects into required engineering courses throughout the curriculum, so that students would be exposed to S-L in at least one course in each of eight semesters. The ultimate goal is to graduate better engineers and more engaged citizens and to improve communities,…
ERIC Educational Resources Information Center
Marsh, Karen R.; Giffin, Bruce F.; Lowrie, Donald J., Jr.
2008-01-01
The purpose of this project was to develop Web-based learning modules that combine (1) animated 3D graphics; (2) 3D models that a student can manipulate independently; (3) passage of time in embryonic development; and (4) animated 2D graphics, including 2D cross-sections that represent different "slices" of the embryo, and animate in…
NASA Astrophysics Data System (ADS)
Mezey, Paul G.
2017-11-01
Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space" the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.
Generalized Dandelin’s Theorem
NASA Astrophysics Data System (ADS)
Kheyfets, A. L.
2017-11-01
The paper gives a geometric proof of the theorem which states that the plane section of a second-order surface of rotation (quadric of rotation, QR) forms a conic: an ellipse, a hyperbola or a parabola. The theorem supplements the well-known Dandelin's theorem, which gives the geometric proof only for a circular cone, and extends the proof to all QR, namely an ellipsoid, a hyperboloid, a paraboloid and a cylinder. That is why the considered theorem is known as the generalized Dandelin's theorem (GDT). The GDT proof is based on a relatively unknown generalized directrix definition (GDD) of conics. The work outlines the GDD proof for all types of conics as their necessary and sufficient condition. Based on the GDD, the author proves the GDT for all QR for an arbitrary position of the cutting plane. The graphical stereometric constructions necessary for the proof are given, and their implementation by 3D computer methods is considered. The article shows examples of the builds made in the AutoCAD package. The theorem is intended for the theoretical training course of elite student groups in architectural and construction specialties.
The B-field soft theorem and its unification with the graviton and dilaton
NASA Astrophysics Data System (ADS)
Di Vecchia, Paolo; Marotta, Raffaele; Mojaza, Matin
2017-10-01
In theories of Einstein gravity coupled with a dilaton and a two-form, a soft theorem for the two-form, known as the Kalb-Ramond B-field, has so far been missing. In this work we fill the gap, and in turn formulate a unified soft theorem valid for gravitons, dilatons and B-fields in any tree-level scattering amplitude involving the three massless states. The new soft theorem is fixed by means of on-shell gauge invariance and enters at the subleading order of the graviton's soft theorem. In contrast to the subsubleading soft behavior of gravitons and dilatons, we show that the soft behavior of B-fields at this order cannot be fully fixed by gauge invariance. Nevertheless, we show that it is possible to establish a gauge invariant decomposition of the amplitudes to any order in the soft expansion. We check explicitly the new soft theorem in the bosonic string and in Type II superstring theories, and furthermore demonstrate that, at the next order in the soft expansion, totally gauge invariant terms appear in both string theories which cannot be factorized into a soft theorem.
Low cost monocrystalline silicon sheet fabrication for solar cells by advanced ingot technology
NASA Technical Reports Server (NTRS)
Fiegl, G. F.; Bonora, A. C.
1980-01-01
The continuous liquid feed (CLF) Czochralski furnace and the enhanced I.D. slicing technology for the low-cost production of monocrystalline silicon sheets for solar cells are discussed. The incorporation of the CLF system is shown to improve the ingot production rate significantly. As demonstrated in actual runs, higher than average solidification rates (75 to 100 mm/hr for 150 mm <100> crystals) can be achieved when the system approaches steady-state conditions. The design characteristics of the CLF furnace are detailed, noting that it is capable of precise control of dopant impurity incorporation in the axial direction of the crystal. The crystal add-on cost is computed to be $11.88/sq m, assuming a projected 1986 conversion factor of 25 slices per cm and an 86% crystal growth yield.
Optimized image acquisition for breast tomosynthesis in projection and reconstruction space.
Chawla, Amarpreet S; Lo, Joseph Y; Baker, Jay A; Samei, Ehsan
2009-11-01
Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, the full potential of tomosynthesis has not yet been realized. This may be attributed to the dependency of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. Those include dose, number of angular projections, and the total angular span of those projections. In this study, the authors investigated the effects of these acquisition parameters on the overall diagnostic image quality of breast tomosynthesis in both the projection and reconstruction space. Five mastectomy specimens were imaged using a prototype tomosynthesis system. 25 angular projections of each specimen were acquired at 6.2 times the typical single-view clinical dose level. Images at lower dose levels were then simulated using a noise modification routine. Each projection image was supplemented with 84 simulated 3 mm 3D lesions embedded at the center of 84 nonoverlapping ROIs. The projection images were then reconstructed using a filtered backprojection algorithm at different combinations of acquisition parameters to investigate which of the many possible combinations maximizes the performance. Performance was evaluated in terms of a Laguerre-Gauss channelized Hotelling observer model-based measure of lesion detectability. The analysis was also performed without reconstruction by combining the model results from projection images using a Bayesian decision fusion algorithm. The effects of acquisition parameters on projection images and reconstructed slices were then compared to derive an optimization rule for tomosynthesis. The results indicated that projection images yield comparable but higher performance than reconstructed images. Both modes, however, offered similar trends: Performance improved with an increase in the total acquisition dose level and the angular span.
Using a constant dose level and angular span, the performance rolled off beyond a certain number of projections, indicating that simply increasing the number of projections in tomosynthesis may not necessarily improve its performance. The best performance for both projection images and tomosynthesis slices was obtained for 15-17 projections spanning an angular arc of approximately 45 degrees--the maximum tested in our study--and for an acquisition dose equal to single-view mammography. The optimization framework developed in this study is applicable to other reconstruction techniques and other multiprojection systems.
Abel's theorem in the noncommutative case
NASA Astrophysics Data System (ADS)
Leitenberger, Frank
2004-03-01
We define noncommutative binary forms. Using the typical representation of Hermite we prove the fundamental theorem of algebra and we derive a noncommutative Cardano formula for cubic forms. We define quantized elliptic and hyperelliptic differentials of the first kind. Following Abel we prove Abel's theorem.
Impossible colorings and Bell's theorem
NASA Astrophysics Data System (ADS)
Aravind, P. K.
1999-11-01
An argument due to Zimba and Penrose is generalized to show how all known non-coloring proofs of the Bell-Kochen-Specker (BKS) theorem can be converted into inequality-free proofs of Bell's nonlocality theorem. A compilation of many such inequality-free proofs is given.
ERIC Educational Resources Information Center
Parameswaran, Revathy
2009-01-01
This paper reports on an experiment studying twelfth grade students' understanding of Rolle's Theorem. In particular, we study the influence of different concept images that students employ when solving reasoning tasks related to Rolle's Theorem. We argue that students' "container schema" and "motion schema" allow for rich…
An Application of the Perron-Frobenius Theorem to a Damage Model Problem.
1985-04-01
An Application of the Perron-Frobenius Theorem to a Damage Model Problem. (University of Pittsburgh, Center for Multivariate Analysis; University of Sheffield, U.K.) Summary: Using the Perron-Frobenius theorem, it is established that if (X,Y) is a random vector of non-negative
1989-06-09
Theorem and the Perron-Frobenius Theorem in matrix theory. We use the Hahn-Banach theorem and do not use any fixed-point related concepts. ... Isac, G. Fixed point theorems on convex cones, generalized pseudo-contractive mappings and the complementarity problem ... In conditions (I) and (II), ∂f(x)° denotes the negative polar cone of ∂f(x). These conditions are respectively called "inward" and "outward". Indeed, when X is convex
Altürk, Ahmet
2016-01-01
Mean value theorems for both derivatives and integrals are very useful tools in mathematics. They can be used to obtain very important inequalities and to prove basic theorems of mathematical analysis. In this article, a semi-analytical method that is based on weighted mean-value theorem for obtaining solutions for a wide class of Fredholm integral equations of the second kind is introduced. Illustrative examples are provided to show the significant advantage of the proposed method over some existing techniques.
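As a generic illustration of the problem class treated above, the sketch below solves a Fredholm integral equation of the second kind, u(x) = f(x) + λ∫₀¹K(x,t)u(t)dt, for the special case of a degenerate (separable) kernel K(x,t) = xt. This is a standard reduction to one scalar unknown, not the article's weighted mean-value-theorem method; the kernel, f, and λ are illustrative choices.

```python
# Degenerate-kernel sketch for a Fredholm equation of the second kind:
#     u(x) = f(x) + lam * ∫_0^1 x*t*u(t) dt.
# With K(x,t) = x*t, the unknown reduces to c = ∫_0^1 t*u(t) dt, since
# u(x) = f(x) + lam*x*c, and c solves c = F/(1 - lam*m) with
# F = ∫ t*f(t) dt and m = ∫ t^2 dt.

def solve_degenerate(f, lam, n=10_000):
    h = 1.0 / n
    # midpoint-rule quadrature for F and m
    F = sum((i + 0.5) * h * f((i + 0.5) * h) for i in range(n)) * h
    m = sum(((i + 0.5) * h) ** 2 for i in range(n)) * h
    c = F / (1.0 - lam * m)
    return lambda x: f(x) + lam * x * c

f = lambda x: x
u = solve_degenerate(f, lam=0.5)
# for this choice the exact solution is u(x) = 6x/5
```

Substituting u back into the integral reproduces f(x), which is the usual correctness check for such solvers.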
Markov Property of the Conformal Field Theory Vacuum and the a Theorem.
Casini, Horacio; Testé, Eduardo; Torroba, Gonzalo
2017-06-30
We use strong subadditivity of entanglement entropy, Lorentz invariance, and the Markov property of the vacuum state of a conformal field theory to give a new proof of the irreversibility of the renormalization group in d=4 space-time dimensions (the a-theorem). This extends the proofs of the c- and F-theorems in dimensions d=2 and d=3 based on vacuum entanglement entropy, and gives a unified picture of all known irreversibility theorems in relativistic quantum field theory.
A Polarimetric Extension of the van Cittert-Zernike Theorem for Use with Microwave Interferometers
NASA Technical Reports Server (NTRS)
Piepmeier, J. R.; Simon, N. K.
2004-01-01
The van Cittert-Zernike theorem describes the Fourier-transform relationship between an extended source and its visibility function. Developments in classical optics texts use scalar field formulations for the theorem. Here, we develop a polarimetric extension to the van Cittert-Zernike theorem with applications to passive microwave Earth remote sensing. The development provides insight into the mechanics of two-dimensional interferometric imaging, particularly the effects of polarization basis differences between the scene and the observer.
Digital forensic osteology--possibilities in cooperation with the Virtopsy project.
Verhoff, Marcel A; Ramsthaler, Frank; Krähahn, Jonathan; Deml, Ulf; Gille, Ralf J; Grabherr, Silke; Thali, Michael J; Kreutz, Kerstin
2008-01-30
The present study was carried out to check whether classic osteometric parameters can be determined from the 3D reconstructions of MSCT (multislice computed tomography) scans acquired in the context of the Virtopsy project. To this end, four isolated and macerated skulls were examined by six examiners. First the skulls were conventionally (manually) measured using 32 internationally accepted linear measurements. Then the skulls were scanned by the use of MSCT with slice thicknesses of 1.25 mm and 0.63 mm, and the 33 measurements were virtually determined on the digital 3D reconstructions of the skulls. The results of the traditional and the digital measurements were compared for each examiner to figure out variations. Furthermore, several parameters were measured on the cranium and postcranium during an autopsy and compared to the values that had been measured on a 3D reconstruction from a previously acquired postmortem MSCT scan. The results indicate that equivalent osteometric values can be obtained from digital 3D reconstructions from MSCT scans using a slice thickness of 1.25 mm, and from conventional manual examinations. The measurements taken from a corpse during an autopsy could also be validated with the methods used for the digital 3D reconstructions in the context of the Virtopsy project. Future aims are the assessment and biostatistical evaluation in respect to sex, age and stature of all data sets stored in the Virtopsy project so far, as well as of future data sets. Furthermore, a definition of new parameters, only measurable with the aid of MSCT data would be conceivable.
Geometric correction method for 3d in-line X-ray phase contrast image reconstruction
2014-01-01
Background: Mechanical imperfection or misalignment of X-ray phase contrast imaging (XPCI) components causes projection data to be misplaced, and thus results in blurred computed tomography (CT) slice reconstructions or edge artifacts. The features of the biological microstructures under investigation are thereby destroyed, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods: To remove unexpected blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper by considering the primary geometric parameters, a rotation angle and a shift variant. Optimal geometric parameters are obtained by solving a maximization problem with an iterative two-step scheme that performs a composite geometric transformation followed by a linear regression process. After applying the geometric transformation with optimal parameters to the projection data, a standard filtered back-projection algorithm is used to reconstruct the CT slice images. Results: Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. The results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time, compared to existing correction methods. Conclusions: The proposed method provides an effective projection data correction scheme for in-line XPCI and significantly improves image quality by removing both blurring and edge artifacts at the same time. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768
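The correction step above fits a composite rotation-plus-shift transform to data. A standard way to recover such a rigid transform between two 2D point sets by least squares is sketched below; this is a textbook registration step, not the paper's iterative scheme on projection data, and all coordinates are invented.

```python
# Least-squares recovery of a composite rotation + shift: find theta and
# t = (tx, ty) such that Q ≈ R(theta) P + t, by centering both sets and
# fitting the angle from summed cross/dot products (2D closed form).
import math

def fit_rigid(P, Q):
    n = len(P)
    cx, cy = sum(p[0] for p in P) / n, sum(p[1] for p in P) / n
    dx, dy = sum(q[0] for q in Q) / n, sum(q[1] for q in Q) / n
    num = sum((p[0]-cx)*(q[1]-dy) - (p[1]-cy)*(q[0]-dx) for p, q in zip(P, Q))
    den = sum((p[0]-cx)*(q[0]-dx) + (p[1]-cy)*(q[1]-dy) for p, q in zip(P, Q))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # shift follows from matching the centroids: t = centroid(Q) - R*centroid(P)
    return theta, (dx - (c*cx - s*cy), dy - (s*cx + c*cy))

# Synthetic check: transform known points, then recover the parameters.
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
true_theta, true_t = 0.1, (0.5, -0.2)
c, s = math.cos(true_theta), math.sin(true_theta)
Q = [(c*x - s*y + true_t[0], s*x + c*y + true_t[1]) for x, y in P]
theta, t = fit_rigid(P, Q)
```

For noise-free correspondences the estimator is exact up to floating-point rounding, which makes it a convenient unit test for any correction pipeline of this kind.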
Towards the Formal Verification of a Distributed Real-Time Automotive System
NASA Technical Reports Server (NTRS)
Endres, Erik; Mueller, Christian; Shadrin, Andrey; Tverdyshev, Sergey
2010-01-01
We present the status of a project which aims at building, formally and pervasively verifying a distributed automotive system. The target system is a gate-level model which consists of several interconnected electronic control units with independent clocks. This model is verified against the specification as seen by a system programmer. The automotive system is implemented on several FPGA boards. The pervasive verification is carried out using combination of interactive theorem proving (Isabelle/HOL) and model checking (LTL).
Convex Relaxation For Hard Problem In Data Mining And Sensor Localization
2017-04-13
Drusvyatskiy, S.A. Vavasis, and H. Wolkowicz. Extreme point inequalities and geometry of the rank sparsity ball. Math. Program., 152(1-2, Ser. A) ... 521–544, 2015. [3] M.-H. Lin and H. Wolkowicz. Hiroshima's theorem and matrix norm inequalities. Acta Sci. Math. (Szeged), 81(1-2):45–53, 2015. [4] D ... 9867-4. [8] D. Drusvyatskiy, G. Li, and H. Wolkowicz. Alternating projections for ill-posed semidefinite feasibility problems. Math. Program., 2016
Sines and Cosines. Part 3 of 3
NASA Technical Reports Server (NTRS)
Apostol, Tom M. (Editor)
1994-01-01
In this 'Project Mathematics' series video, the addition formulas of sines and cosines are explained and their real-life applications are demonstrated. Both film footage and computer animation are used. Several mathematical concepts are discussed, including: Ptolemy's theorem concerning quadrilaterals; the difference between a central angle and an inscribed angle; sines and chord lengths; special angles; subtraction formulas; and an application to simple harmonic motion. A brief history of the city of Alexandria, its mathematicians, and their contributions to the field of mathematics is shown.
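Ptolemy's theorem, mentioned above, states that for a cyclic quadrilateral ABCD the product of the diagonals equals the sum of the products of opposite sides: AC·BD = AB·CD + AD·BC. A quick numerical check, with arbitrarily chosen points on the unit circle:

```python
# Numerical check of Ptolemy's theorem for a cyclic quadrilateral ABCD
# inscribed in the unit circle: AC*BD = AB*CD + AD*BC.
import math

def chord(a, b):
    """Distance between points at angles a and b on the unit circle."""
    return math.dist((math.cos(a), math.sin(a)), (math.cos(b), math.sin(b)))

A, B, C, D = 0.3, 1.1, 2.9, 5.0   # angles in increasing (cyclic) order
lhs = chord(A, C) * chord(B, D)                      # product of diagonals
rhs = chord(A, B) * chord(C, D) + chord(A, D) * chord(B, C)
```

The chord length between angles a and b equals 2·sin(|a − b|/2), which is the sines-and-chord-lengths connection the video draws.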
A Bibliography of Selected Publications: Project Air Force, 5th Edition
1989-05-01
R-3028-AF. A Dynamic Retention Model for Air Force Officers: Theory and Estimates. ... Dyna-METRIC's Demand and Pipeline Variability. M. J. Carrillo. ... R-3255-AF. Aircraft Airframe Cost Estimating Relationships: ... N-2283/1-AF ... (U). 1970-1985. N-2409-AF. Tanker Splitting Across the SIOP Bomber Force (U). R-3389-AF. Dyna-METRIC Version 4: Modeling Worldwide Logistics Support of
Nonlocal Quantum Information Transfer Without Superluminal Signalling and Communication
NASA Astrophysics Data System (ADS)
Walleczek, Jan; Grössing, Gerhard
2016-09-01
It is a frequent assumption that—via superluminal information transfers—superluminal signals capable of enabling communication are necessarily exchanged in any quantum theory that posits hidden superluminal influences. However, does the presence of hidden superluminal influences automatically imply superluminal signalling and communication? The non-signalling theorem mediates the apparent conflict between quantum mechanics and the theory of special relativity. However, as a `no-go' theorem there exist two opposing interpretations of the non-signalling constraint: foundational and operational. Concerning Bell's theorem, we argue that Bell employed both interpretations, and that he finally adopted the operational position which is associated often with ontological quantum theory, e.g., de Broglie-Bohm theory. This position we refer to as "effective non-signalling". By contrast, associated with orthodox quantum mechanics is the foundational position referred to here as "axiomatic non-signalling". In search of a decisive communication-theoretic criterion for differentiating between "axiomatic" and "effective" non-signalling, we employ the operational framework offered by Shannon's mathematical theory of communication, whereby we distinguish between Shannon signals and non-Shannon signals. We find that an effective non-signalling theorem represents two sub-theorems: (1) Non-transfer-control (NTC) theorem, and (2) Non-signification-control (NSC) theorem. Employing NTC and NSC theorems, we report that effective, instead of axiomatic, non-signalling is entirely sufficient for prohibiting nonlocal communication. Effective non-signalling prevents the instantaneous, i.e., superluminal, transfer of message-encoded information through the controlled use—by a sender-receiver pair—of informationally-correlated detection events, e.g., in EPR-type experiments.
An effective non-signalling theorem allows for nonlocal quantum information transfer yet—at the same time—effectively denies superluminal signalling and communication.
NASA Astrophysics Data System (ADS)
Sun, Jun-Wei; Shen, Yi; Zhang, Guo-Dong; Wang, Yan-Feng; Cui, Guang-Zhao
2013-04-01
According to the Lyapunov stability theorem, a new general hybrid projective complete dislocated synchronization scheme with non-derivative and derivative coupling based on parameter identification is proposed under the framework of drive-response systems. Every state variable of the response system equals the summation of the hybrid drive systems in the previous hybrid synchronization. However, every state variable of the drive system equals the summation of the hybrid response systems while evolving with time in our method. Complete synchronization, hybrid dislocated synchronization, projective synchronization, non-derivative and derivative coupling, and parameter identification are included as special cases. The Lorenz chaotic system, Rössler chaotic system, memristor chaotic oscillator system, and hyperchaotic Lü system are discussed to show the effectiveness of the proposed methods.
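A toy sketch of the drive-response framework invoked above: two Lorenz systems, with the response receiving full-state linear error feedback so that the synchronization error contracts. This only illustrates complete synchronization under a Lyapunov-style argument; it is not the paper's hybrid projective dislocated scheme, and the gain k and step size are assumptions chosen for stability of the explicit Euler integration.

```python
# Drive-response complete synchronization of two Lorenz systems via
# full-state linear error feedback: resp' = f(resp) - k*(resp - drive).
# For large enough k the error e = resp - drive decays to zero.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

drive = (1.0, 1.0, 1.0)
resp = (5.0, -3.0, 10.0)          # starts far from the drive system
k, dt = 50.0, 0.001               # coupling gain and Euler step (assumed)
for _ in range(20_000):           # integrate for 20 time units
    fd, fr = lorenz(drive), lorenz(resp)
    new_drive = tuple(d + dt * f for d, f in zip(drive, fd))
    new_resp = tuple(r + dt * (f - k * (r - d))
                     for r, f, d in zip(resp, fr, drive))
    drive, resp = new_drive, new_resp

sync_error = sum((r - d) ** 2 for r, d in zip(resp, drive)) ** 0.5
```

Projective synchronization would replace the error term with resp − α·drive for a scaling factor α; the complete case here is α = 1.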
On Euler's Theorem for Homogeneous Functions and Proofs Thereof.
ERIC Educational Resources Information Center
Tykodi, R. J.
1982-01-01
Euler's theorem for homogenous functions is useful when developing thermodynamic distinction between extensive and intensive variables of state and when deriving the Gibbs-Duhem relation. Discusses Euler's theorem and thermodynamic applications. Includes six-step instructional strategy for introducing the material to students. (Author/JN)
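Euler's theorem discussed above says that if f is homogeneous of degree n, i.e. f(tx, ty) = tⁿ f(x, y), then x·∂f/∂x + y·∂f/∂y = n·f. A quick numerical check with an illustrative degree-3 function (not taken from the article):

```python
# Numerical check of Euler's theorem for homogeneous functions:
# f(x, y) = x^3 + x*y^2 is homogeneous of degree 3, so
# x*f_x + y*f_y should equal 3*f at every point.

def f(x, y):
    return x**3 + x*y**2

def partial(g, x, y, wrt, h=1e-6):
    """Central-difference partial derivative of g at (x, y)."""
    if wrt == "x":
        return (g(x + h, y) - g(x - h, y)) / (2*h)
    return (g(x, y + h) - g(x, y - h)) / (2*h)

x, y, n = 1.7, -0.8, 3
euler_lhs = x * partial(f, x, y, "x") + y * partial(f, x, y, "y")
euler_rhs = n * f(x, y)
```

The thermodynamic use noted in the abstract is the degree-1 case: extensive state functions of extensive variables satisfy U = TS − pV + μN, from which the Gibbs-Duhem relation follows by differentiation.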
Ergodic theorem, ergodic theory, and statistical mechanics
Moore, Calvin C.
2015-01-01
This perspective highlights the mean ergodic theorem established by John von Neumann and the pointwise ergodic theorem established by George Birkhoff, proofs of which were published nearly simultaneously in PNAS in 1931 and 1932. These theorems were of great significance both in mathematics and in statistical mechanics. In statistical mechanics they provided a key insight into a 60-y-old fundamental problem of the subject—namely, the rationale for the hypothesis that time averages can be set equal to phase averages. The evolution of this problem is traced from the origins of statistical mechanics and Boltzmann's ergodic hypothesis to the Ehrenfests' quasi-ergodic hypothesis, and then to the ergodic theorems. We discuss communications between von Neumann and Birkhoff in the Fall of 1931 leading up to the publication of these papers and related issues of priority. These ergodic theorems initiated a new field of mathematical research called ergodic theory that has thrived ever since, and we discuss some recent developments in ergodic theory that are relevant for statistical mechanics. PMID:25691697
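The "time averages equal phase averages" statement above can be seen in the simplest ergodic system: the irrational rotation x → x + α (mod 1) on the unit interval. Along a single orbit, the Birkhoff time average of an observable converges to its phase-space integral. A minimal sketch (the observable and α are illustrative choices):

```python
# Birkhoff time average vs. phase average for the ergodic irrational
# rotation x -> x + alpha (mod 1). For f(x) = x the phase average is
# ∫_0^1 x dx = 1/2, and the orbit average converges to it.
import math

alpha = (math.sqrt(5) - 1) / 2    # irrational rotation number
f = lambda x: x                   # observable

x, total, N = 0.123, 0.0, 200_000
for _ in range(N):
    total += f(x)
    x = (x + alpha) % 1.0

time_average = total / N          # should approach 1/2
```

The rate of convergence here is governed by the equidistribution of the orbit (Weyl), which is particularly good for the golden-ratio rotation number.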
From Einstein's theorem to Bell's theorem: a history of quantum non-locality
NASA Astrophysics Data System (ADS)
Wiseman, H. M.
2006-04-01
In this Einstein Year of Physics it seems appropriate to look at an important aspect of Einstein's work that is often down-played: his contribution to the debate on the interpretation of quantum mechanics. Contrary to physics ‘folklore’, Bohr had no defence against Einstein's 1935 attack (the EPR paper) on the claimed completeness of orthodox quantum mechanics. I suggest that Einstein's argument, as stated most clearly in 1946, could justly be called Einstein's reality locality completeness theorem, since it proves that one of these three must be false. Einstein's instinct was that completeness of orthodox quantum mechanics was the falsehood, but he failed in his quest to find a more complete theory that respected reality and locality. Einstein's theorem, and possibly Einstein's failure, inspired John Bell in 1964 to prove his reality locality theorem. This strengthened Einstein's theorem (but showed the futility of his quest) by demonstrating that either reality or locality is a falsehood. This revealed the full non-locality of the quantum world for the first time.
The spectral theorem for quaternionic unbounded normal operators based on the S-spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpay, Daniel, E-mail: dany@math.bgu.ac.il; Kimsey, David P., E-mail: dpkimsey@gmail.com; Colombo, Fabrizio, E-mail: fabrizio.colombo@polimi.it
In this paper we prove the spectral theorem for quaternionic unbounded normal operators using the notion of S-spectrum. The proof technique consists of first establishing a spectral theorem for quaternionic bounded normal operators and then using a transformation which maps a quaternionic unbounded normal operator to a quaternionic bounded normal operator. With this paper we complete the foundation of spectral analysis of quaternionic operators. The S-spectrum has been introduced to define the quaternionic functional calculus, but it turns out to be the correct object also for the spectral theorem for quaternionic normal operators. The lack of a suitable notion of spectrum was a major obstruction to fully understanding the spectral theorem for quaternionic normal operators. A prime motivation for studying the spectral theorem for quaternionic unbounded normal operators is given by the subclass of unbounded anti-self-adjoint quaternionic operators, which play a crucial role in quaternionic quantum mechanics.
NASA Astrophysics Data System (ADS)
Capron, E.; Govin, A.; Feng, R.; Otto-Bliesner, B. L.; Wolff, E. W.
2017-07-01
The Last Interglacial (LIG, ∼129-116 thousand years ago, ka) represents an excellent case study to investigate the response of sensitive components of the Earth System and mechanisms of high-latitude amplification to a climate warmer than present-day. The Paleoclimate Model Intercomparison Project (Phase 4, hereafter referred to as PMIP4) and the Coupled Model Intercomparison Project (Phase 6, hereafter referred to as CMIP6) are coordinating the design of (1) a LIG Tier 1 equilibrium simulation to simulate the climate response at 127 ka, a time interval associated with a strong orbital forcing and greenhouse gas concentrations close to preindustrial levels and (2) associated Tier 2 sensitivity experiments to examine the role of the ocean, vegetation and dust feedbacks in modulating the response to this orbital forcing. Evaluating the capability of the CMIP6/PMIP4 models to reproduce the 127 ka polar and sub-polar climate will require appropriate data-based benchmarks which are currently missing. Based on a recent data synthesis that offers the first spatio-temporal representation of high-latitude (i.e. poleward of 40°N and 40°S) surface temperature evolution during the LIG, we produce a new 126-128 ka time slab, hereafter named the 127 ka time slice. This 127 ka time slice represents surface temperature anomalies relative to preindustrial and is associated with quantitative estimates of the uncertainties related to relative dating and surface temperature reconstruction methods. It illustrates warmer-than-preindustrial conditions in the high-latitude regions of both hemispheres. In particular, summer sea surface temperatures (SST) in the North Atlantic region were on average 1.1 °C (with a standard error of the mean of 0.7 °C) warmer relative to preindustrial and 1.8 °C (with a standard error of the mean of 0.8 °C) in the Southern Ocean.
In Antarctica, average 127 ka annual surface air temperature was 2.2 °C (with a standard error of the mean of 1.4 °C) warmer compared to preindustrial. We provide a critical evaluation of the latest LIG surface climate compilations that are available for evaluating LIG climate model experiments. We discuss in particular our new 127 ka time-slice in the context of existing LIG surface temperature time-slices. We also compare the 127 ka time slice with the ones published for the 125 and 130 ka time intervals and we discuss the potential and limits of a data-based time slice at 127 ka in the context of the upcoming coordinated modeling exercise. Finally we provide guidance on the use of the available LIG climate compilations for future model-data comparison exercises in the framework of the upcoming CMIP6/PMIP4 127 ka experiments. We do not recommend the use of LIG peak warmth-centered syntheses. Instead we promote the use of the most recent syntheses that are based on coherent chronologies between paleoclimatic records and provide spatio-temporal reconstruction of the LIG climate. In particular, we recommend using our new 127 ka data-based time slice in model-data comparison studies with a focus on the high-latitude climate.
Investigation of Vapor Cooling Enhancements for Applications on Large Cryogenic Systems
NASA Technical Reports Server (NTRS)
Ameen, Lauren; Zoeckler, Joseph
2017-01-01
The need to demonstrate and evaluate the effectiveness of heat interception methods on a relevant cryogenic propulsion stage at the system level has been identified. The Evolvable Cryogenics (eCryo) Structural Heat Intercept, Insulation and Vibration Evaluation Rig (SHIIVER) will be designed with vehicle-specific geometries (using the SLS Exploration Upper Stage (EUS) as guidance) and will be subjected to simulated space environments. One method of reducing structure-borne heat leak being investigated utilizes vapor-based heat interception. Vapor-based heat interception could potentially reduce heat leak into liquid hydrogen propulsion tanks, increasing potential mission length or payload capability. Due to the high number of unknowns associated with the heat transfer mechanism and the integration of vapor-based heat interception on a realistic large-scale skirt design, a sub-scale investigation was developed. This sub-project effort is known as the Small-scale Laboratory Investigation of Cooling Enhancements (SLICE). SLICE aims to study, design, and test multiple sub-scale attachment and flow configuration concepts for vapor-based heat interception of structural skirts. SLICE will focus on understanding the efficiency of the heat transfer mechanism to the boil-off hydrogen vapor by varying the fluid network designs and configurations. Various analyses were completed in MATLAB, Excel VBA, and COMSOL Multiphysics to understand the optimum flow pattern for heat transfer and fluid dynamics. Results from these analyses were used to design and fabricate test-article subsections of a large forward skirt with vapor cooling applied. SLICE testing is currently being performed to collect thermal-mechanical performance data on multiple skirt heat removal designs while varying the inlet vapor conditions necessary to intercept a specified amount of heat for a given system.
Initial results suggest that applying vapor cooling provides roughly a 50% reduction in conductive heat transmission along the skirt to the tank. The information obtained by SLICE will be used by the SHIIVER engineering team to design and implement vapor-based heat removal technology into the SHIIVER forward skirt hardware design.
Network analysis of the COSMOS galaxy field
NASA Astrophysics Data System (ADS)
de Regt, R.; Apunevych, S.; von Ferber, C.; Holovatch, Yu; Novosyadlyj, B.
2018-07-01
The galaxy data provided by COSMOS survey for 1°×1° field of sky are analysed by methods of complex networks. Three galaxy samples (slices) with redshifts ranging within intervals 0.88÷0.91, 0.91÷0.94, and 0.94÷0.97 are studied as two-dimensional projections for the spatial distributions of galaxies. We construct networks and calculate network measures for each sample, in order to analyse the network similarity of different samples, distinguish various topological environments, and find associations between galaxy properties (colour index and stellar mass) and their topological environments. Results indicate a high level of similarity between geometry and topology for different galaxy samples and no clear evidence of evolutionary trends in network measures. The distribution of local clustering coefficient C manifests three modes which allow for discrimination between stand-alone singlets and dumbbells (0 ≤ C ≤ 0.1), intermediately packed (0.1 < C < 0.9) and clique (0.9 ≤ C ≤ 1) like galaxies. Analysing astrophysical properties of galaxies (colour index and stellar masses), we show that distributions are similar in all slices, however weak evolutionary trends can also be seen across redshift slices. To specify different topological environments, we have extracted selections of galaxies from each sample according to different modes of C distribution. We have found statistically significant associations between evolutionary parameters of galaxies and selections of C: the distribution of stellar mass for galaxies with interim C differs from the corresponding distributions for stand-alone and clique galaxies, and this difference holds for all redshift slices. The colour index realizes somewhat different behaviour.
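The local clustering coefficient C used above to discriminate topological environments is, for a node v with k neighbours, the number of links among those neighbours divided by k(k−1)/2. A minimal sketch on toy graphs (not the COSMOS construction, which links galaxies by projected separation):

```python
# Local clustering coefficient C(v) on an undirected graph given as an
# adjacency dict {node: set_of_neighbours}. C = 1 for clique-like nodes,
# C = 0 for stand-alone singlets and dumbbells, intermediate otherwise.

def clustering(adj, v):
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0                  # singlet or dumbbell endpoint
    links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
    return links / (k * (k - 1) / 2)

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # clique-like: C = 1
path = {0: {1}, 1: {0, 2}, 2: {1}}             # open wedge: C = 0 at node 1
```

The three modes of the C distribution reported in the abstract (0 ≤ C ≤ 0.1, 0.1 < C < 0.9, 0.9 ≤ C ≤ 1) correspond to exactly these regimes of neighbourhood interconnection.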
Data-driven sampling method for building 3D anatomical models from serial histology
NASA Astrophysics Data System (ADS)
Salunke, Snehal Ulhas; Ablove, Tova; Danforth, Theresa; Tomaszewski, John; Doyle, Scott
2017-03-01
In this work, we investigate the effect of slice sampling on 3D models of tissue architecture using serial histopathology. We present a method for using a single fully-sectioned tissue block as pilot data, whereby we build a fully-realized 3D model and then determine the optimal set of slices needed to reconstruct the salient features of the model objects under biological investigation. In our work, we are interested in the 3D reconstruction of microvessel architecture in the trigone region between the vagina and the bladder. This region serves as a potential avenue for drug delivery to treat bladder infection. We collect and co-register 23 serial sections of CD31-stained tissue images (6 μm thick sections), from which four microvessels are selected for analysis. To build each model, we perform semi-automatic segmentation of the microvessels. Subsampled meshes are then created by removing slices from the stack, interpolating the missing data, and reconstructing the mesh. We calculate the Hausdorff distance between the full and subsampled meshes to determine the optimal sampling rate for the modeled structures. In our application, we found that a sampling rate of 50% (corresponding to just 12 slices) was sufficient to recreate the structure of the microvessels without significant deviation from the fully rendered mesh. This pipeline effectively minimizes the number of histopathology slides required for 3D model reconstruction, and can be utilized to either (1) reduce the overall costs of a project, or (2) enable additional analysis on the intermediate slides.
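A hedged sketch of the subsampling idea in this abstract (not the authors' pipeline; the point correspondence across slices and the toy geometry are assumptions): drop slices, rebuild them by linear interpolation, and score the reconstruction with the Hausdorff distance.

```python
from math import dist

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    def directed(P, Q):
        return max(min(dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

def subsample_and_interpolate(slices, step):
    """Keep every `step`-th slice; rebuild dropped slices by linear
    interpolation between the nearest kept neighbours (assumes each
    slice carries the same number of corresponding contour points)."""
    kept = set(range(0, len(slices), step))
    out = list(slices)
    for i in range(len(slices)):
        if i in kept:
            continue
        lo = max(k for k in kept if k < i)
        above = [k for k in kept if k > i]
        if not above:               # past the last kept slice: hold it
            out[i] = slices[lo]
            continue
        hi = min(above)
        t = (i - lo) / (hi - lo)
        out[i] = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                  for p, q in zip(slices[lo], slices[hi])]
    return out

# Toy "vessel": two contour points drifting linearly across 5 slices
full = [[(0.5 * i, 0.0), (0.5 * i, 1.0)] for i in range(5)]
recon = subsample_and_interpolate(full, 2)
err = max(hausdorff(full[i], recon[i]) for i in range(5))
print(err)  # 0.0: a linear drift is recovered exactly at 50% sampling
```

A structure that varies nonlinearly between kept slices would instead yield a nonzero Hausdorff error, which is exactly the quantity used to pick the sampling rate.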
Bring the Pythagorean Theorem "Full Circle"
ERIC Educational Resources Information Center
Benson, Christine C.; Malm, Cheryl G.
2011-01-01
Middle school mathematics generally explores applications of the Pythagorean theorem and lays the foundation for working with linear equations. The Grade 8 Curriculum Focal Points recommend that students "apply the Pythagorean theorem to find distances between points in the Cartesian coordinate plane to measure lengths and analyze polygons and…
Using Discovery in the Calculus Class
ERIC Educational Resources Information Center
Shilgalis, Thomas W.
1975-01-01
This article shows how two discoverable theorems from elementary calculus can be presented to students in a manner that assists them in making the generalizations themselves. The theorems are the mean value theorems for derivatives and for integrals. A conjecture is suggested by pictures and then refined. (Author/KM)
Three Lectures on Theorem-proving and Program Verification
NASA Technical Reports Server (NTRS)
Moore, J. S.
1983-01-01
Topics concerning theorem proving and program verification are discussed with particular emphasis on the Boyer/Moore theorem prover, and approaches to program verification such as the functional and interpreter methods and the inductive assertion approach. A history of the discipline and specific program examples are included.
Freezing effect on bread appearance evaluated by digital imaging
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.
1999-01-01
In marketing channels, bread is sometimes delivered in a frozen state for distribution. Changes occur in the physical dimensions, crumb grain, and appearance of slices. Ten loaves, twelve bread slices per loaf, were scanned for digital image analysis and then frozen in a commercial refrigerator. The bread slices were stored for four weeks, scanned again, permitted to thaw, and scanned a third time. Image features were extracted to determine the shape, size, and image texture of the slices. Different gray-level thresholds were set to detect changes that occurred in the crumb, and images were binarized at these settings. The number of pixels falling into these gray-level settings was determined for each slice. Image texture features of subimages of each slice were calculated to quantify slice crumb grain. The image features of slice size showed shrinking of bread slices as a result of freezing and storage, although the shape of the slices did not change markedly. Visible crumb texture changes occurred, and these changes were depicted by changes in image texture features. Image texture features showed that the slice crumb changed differently at the center of a slice compared to a peripheral area close to the crust. Image texture and slice features were sufficient for discrimination of slices before and after freezing and after thawing.
NASA Astrophysics Data System (ADS)
Ji, Ye; Liu, Ting; Min, Lequan
2008-05-01
Two constructive generalized chaos synchronization (GCS) theorems for bidirectional differential equations and discrete systems are introduced. Using the two theorems, one can construct new chaos systems to make the system variables be in GCS. Five examples are presented to illustrate the effectiveness of the theoretical results.
The Law of Cosines for an "n"-Dimensional Simplex
ERIC Educational Resources Information Center
Ding, Yiren
2008-01-01
Using the divergence theorem technique of L. Eifler and N.H. Rhee, "The n-dimensional Pythagorean Theorem via the Divergence Theorem" (to appear: Amer. Math. Monthly), we extend the law of cosines for a triangle in a plane to an "n"-dimensional simplex in an "n"-dimensional space.
When 95% Accurate Isn't: Exploring Bayes's Theorem
ERIC Educational Resources Information Center
CadwalladerOlsker, Todd D.
2011-01-01
Bayes's theorem is notorious for being a difficult topic to learn and to teach. Problems involving Bayes's theorem (either implicitly or explicitly) generally involve calculations based on two or more given probabilities and their complements. Further, a correct solution depends on students' ability to interpret the problem correctly. Most people…
Optimal Keno Strategies and the Central Limit Theorem
ERIC Educational Resources Information Center
Johnson, Roger W.
2006-01-01
For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…
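The exact (hypergeometric) and simulation-based calculations mentioned in this abstract can be sketched as follows (a minimal illustration, not the article's code; the 80-number, 20-draw Keno format is the standard casino layout and is assumed here):

```python
import math
import random

def p_catch(k, c, picks=20, total=80):
    """Exact hypergeometric probability of catching c of k chosen spots
    when the house draws `picks` numbers out of `total`."""
    return (math.comb(picks, c) * math.comb(total - picks, k - c)
            / math.comb(total, k))

def simulate(k, c, trials=20000, picks=20, total=80, seed=1):
    """Monte-Carlo estimate of the same probability."""
    rng = random.Random(seed)
    spots = set(range(k))  # which k numbers are marked is irrelevant by symmetry
    hits = 0
    for _ in range(trials):
        draw = rng.sample(range(total), picks)
        if sum(1 for d in draw if d in spots) == c:
            hits += 1
    return hits / trials

exact = p_catch(4, 2)    # catch exactly 2 of 4 marked spots
approx = simulate(4, 2)
print(exact, approx)
```

Comparing the two estimates per payout class is the basis for ranking Keno tickets by expected return.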
Computer Algebra Systems and Theorems on Real Roots of Polynomials
ERIC Educational Resources Information Center
Aidoo, Anthony Y.; Manthey, Joseph L.; Ward, Kim Y.
2010-01-01
A computer algebra system is used to derive a theorem on the existence of roots of a quadratic equation on any bounded real interval. This is extended to a cubic polynomial. We discuss how students could be led to derive and prove these theorems. (Contains 1 figure.)
Fluctuation theorem for Hamiltonian Systems: Le Chatelier's principle
NASA Astrophysics Data System (ADS)
Evans, Denis J.; Searles, Debra J.; Mittag, Emil
2001-05-01
For thermostated dissipative systems, the fluctuation theorem gives an analytical expression for the ratio of probabilities that the time-averaged entropy production in a finite system observed for a finite time takes on a specified value compared to the negative of that value. In the past, it has been generally thought that the presence of some thermostating mechanism was an essential component of any system that satisfies a fluctuation theorem. In the present paper, we point out that a fluctuation theorem can be derived for purely Hamiltonian systems, with or without applied dissipative fields.
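For reference, the fluctuation theorem discussed here is commonly written in the following form (a standard statement of the theorem, not taken from this abstract):

```latex
\frac{P(\bar{\Sigma}_t = A)}{P(\bar{\Sigma}_t = -A)} = e^{A t},
```

where \(\bar{\Sigma}_t\) denotes the entropy production averaged over the observation time \(t\), in units of Boltzmann's constant.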
Nambu-Goldstone theorem and spin-statistics theorem
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo
On December 19-21 in 2001, we organized a yearly workshop at Yukawa Institute for Theoretical Physics in Kyoto on the subject of "Fundamental Problems in Field Theory and their Implications". Prof. Yoichiro Nambu attended this workshop and explained a necessary modification of the Nambu-Goldstone theorem when applied to nonrelativistic systems. At the same workshop, I talked on a path integral formulation of the spin-statistics theorem. The present essay is on this memorable workshop, where I really enjoyed the discussions with Nambu, together with a short comment on the color freedom of quarks.
Counting Heron Triangles with Constraints
2013-01-25
Heron triangle is an integer, then b is even, say b = 2b1. By Pythagoras' theorem, a4 = h2 + 4b21, and since in a Heron triangle the heights are always… our first result, which follows an idea of [10, Theorem 2.3]. Theorem 4. Let a, b be two fixed integers, and let ab be factored as in (1). Then H(a, b… which we derive the result. Theorem 4 immediately offers us an interesting observation regarding a special class of fixed sides (a, b). Corollary 5. If
On Pythagoras Theorem for Products of Spectral Triples
NASA Astrophysics Data System (ADS)
D'Andrea, Francesco; Martinetti, Pierre
2013-05-01
We discuss a version of Pythagoras theorem in noncommutative geometry. The usual Pythagoras theorem can be formulated in terms of Connes' distance between pure states in the product of commutative spectral triples. We investigate the generalization to both non-pure states and arbitrary spectral triples. We show that Pythagoras theorem is replaced by Pythagoras inequalities, which we prove for the product of arbitrary (i.e., not necessarily commutative) spectral triples, assuming only a unitality condition. We show that these inequalities are optimal, and we provide non-unital counter-examples inspired by K-homology.
Which symmetry? Noether, Weyl, and conservation of electric charge
NASA Astrophysics Data System (ADS)
Brading, Katherine A.
In 1918, Emmy Noether published a (now famous) theorem establishing a general connection between continuous 'global' symmetries and conserved quantities. In fact, Noether's paper contains two theorems, and the second of these deals with 'local' symmetries; prima facie, this second theorem has nothing to do with conserved quantities. In the same year, Hermann Weyl independently made the first attempt to derive conservation of electric charge from a postulated gauge symmetry. In the light of Noether's work, it is puzzling that Weyl's argument uses local gauge symmetry. This paper explores the relationships between Weyl's work, Noether's two theorems, and the modern connection between gauge symmetry and conservation of electric charge. This includes showing that Weyl's connection is essentially an application of Noether's second theorem, with a novel twist.
NASA Technical Reports Server (NTRS)
Andrews, E. H., Jr.; Mackley, E. A.
1976-01-01
An aerodynamic engine inlet analysis was performed on the experimental results obtained at nominal Mach numbers of 5, 6, and 7 from the NASA Hypersonic Research Engine (HRE) Aerothermodynamic Integration Model (AIM). Incorporation on the AIM of the mixed-compression inlet design represented the final phase of an inlet development program of the HRE Project. The purpose of this analysis was to compare the AIM inlet experimental results with theoretical results. Experimental performance was based on measured surface pressures used in a one-dimensional force-momentum theorem. Results of the analysis indicate that surface static-pressure measurements agree reasonably well with theoretical predictions except in the regions where the theory predicts large pressure discontinuities. Experimental and theoretical results both based on the one-dimensional force-momentum theorem yielded inlet performance parameters as functions of Mach number that exhibited reasonable agreement. Previous predictions of inlet unstart that resulted from pressure disturbances created by fuel injection and combustion appeared to be pessimistic.
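For context, one common textbook form of the one-dimensional force-momentum (stream-thrust) bookkeeping used in such inlet analyses is sketched below; this is a standard formulation, not necessarily the report's exact one:

```latex
\mathcal{S} = \dot{m}\,V + p\,A, \qquad F_{\text{internal}} = \mathcal{S}_2 - \mathcal{S}_1,
```

where \(\dot{m}\) is the mass flow, \(V\) the one-dimensional velocity, \(p\) the static pressure, and \(A\) the flow area at a station; inlet performance follows from the change in stream thrust between the entrance (1) and exit (2) stations, with the wall-pressure integral supplying the measured force contribution.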
Behavior of the maximum likelihood in quantum state tomography
NASA Astrophysics Data System (ADS)
Scholten, Travis L.; Blume-Kohout, Robin
2018-02-01
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
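A minimal unconstrained illustration of the Wilks behavior referenced here (classical setting only; this is not the paper's quantum computation): testing mu = 0 against a free mean for unit-variance Gaussian data, the statistic 2 log LR reduces to n*xbar**2 and should follow a chi-squared distribution with one degree of freedom.

```python
import random
import statistics

def wilks_stat(n, rng):
    """2 * log-likelihood-ratio for H0: mu = 0 versus a free mean,
    for n draws from N(0, 1) with known unit variance.  The statistic
    reduces to n * xbar**2, asymptotically chi-squared(1) under H0."""
    xbar = statistics.fmean(rng.gauss(0.0, 1.0) for _ in range(n))
    return n * xbar * xbar

rng = random.Random(0)
stats = [wilks_stat(50, rng) for _ in range(4000)]
print(statistics.fmean(stats))  # close to 1, the chi-squared(1) mean
```

The paper's point is that the positivity constraint on density matrices breaks exactly this chi-squared null behavior, motivating the replacement theorem.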
Double-time correlation functions of two quantum operations in open systems
NASA Astrophysics Data System (ADS)
Ban, Masashi
2017-10-01
A double-time correlation function of two arbitrary quantum operations is studied for a nonstationary open quantum system which is in contact with a thermal reservoir. It includes a usual correlation function, a linear response function, and a weak value of an observable. Time evolution of the correlation function can be derived by means of the time-convolution and time-convolutionless projection operator techniques. For this purpose, a quasidensity operator accompanied by a fictitious field is introduced, which makes it possible to derive explicit formulas for calculating a double-time correlation function in the second-order approximation with respect to a system-reservoir interaction. The derived formula explicitly shows that the quantum regression theorem for calculating the double-time correlation function cannot be used if the thermal reservoir has a finite correlation time. Furthermore, the formula is applied to a pure dephasing process and a linear dissipative process. The quantum regression theorem and the Leggett-Garg inequality are investigated for an open two-level system. The results are compared with those obtained by exact calculation to examine whether the formula is a good approximation.
Choi, Jang-Hwan; Maier, Andreas; Keil, Andreas; Pal, Saikat; McWalter, Emily J; Beaupré, Gary S; Gold, Garry E; Fahrig, Rebecca
2014-06-01
A C-arm CT system has been shown to be capable of scanning a single cadaver leg under loaded conditions by virtue of its highly flexible acquisition trajectories. In Part I of this study, using the 4D XCAT-based numerical simulation, the authors predicted that the involuntary motion in the lower body of subjects in weight-bearing positions would seriously degrade image quality and the authors suggested three motion compensation methods by which the reconstructions could be corrected to provide diagnostic image quality. Here, the authors demonstrate that a flat-panel angiography system is appropriate for scanning both legs of subjects in vivo under weight-bearing conditions and further evaluate the three motion-correction algorithms using in vivo data. The geometry of a C-arm CT system for a horizontal scan trajectory was calibrated using the PDS-2 phantom. The authors acquired images of two healthy volunteers while lying supine on a table, standing, and squatting at several knee flexion angles. In order to identify the involuntary motion of the lower body, nine 1-mm-diameter tantalum fiducial markers were attached around the knee. The static mean marker position in 3D, a reference for motion compensation, was estimated by back-projecting detected markers in multiple projections using calibrated projection matrices and identifying the intersection points in 3D of the back-projected rays. Motion was corrected using three different methods (described in detail previously): (1) 2D projection shifting, (2) 2D deformable projection warping, and (3) 3D rigid body warping. For quantitative image quality analysis, SSIM indices for the three methods were compared using the supine data as a ground truth. A 2D Euclidean distance-based metric of subjects' motion ranged from 0.85 mm (±0.49 mm) to 3.82 mm (±2.91 mm) (corresponding to 2.76 to 12.41 pixels) resulting in severe motion artifacts in 3D reconstructions. 
Shifting in 2D, 2D warping, and 3D warping improved the SSIM in the central slice by 20.22%, 16.83%, and 25.77%, respectively, in the data with the largest motion among the five datasets (SCAN5); improvement in off-center slices was 18.94%, 29.14%, and 36.08%, respectively. The authors showed that C-arm CT control can be implemented for nonstandard horizontal trajectories, which enabled them to scan and successfully reconstruct both legs of volunteers in weight-bearing positions. As predicted using theoretical models, the proposed motion correction methods improved image quality by reducing motion artifacts in reconstructions; 3D warping performed better than the 2D methods, especially in off-center slices.
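The marker triangulation step described above (back-projecting detections and intersecting the rays in 3D) can be sketched as a least-squares intersection of lines. This is an illustrative reconstruction, not the authors' implementation, and the example rays are hypothetical:

```python
def intersect_rays(origins, directions):
    """Least-squares intersection of 3D rays p(t) = o + t*d.

    Solves sum_i (I - d_i d_i^T)(p - o_i) = 0 for p, the point that
    minimises the summed squared distances to all rays; directions are
    normalised internally.  Pure Python, 3x3 solve via Cramer's rule.
    """
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in zip(origins, directions):
        norm = sum(c * c for c in d) ** 0.5
        d = [c / norm for c in d]
        for r in range(3):
            for c in range(3):
                m = (1.0 if r == c else 0.0) - d[r] * d[c]
                A[r][c] += m
                b[r] += m * o[c]
    return _solve3(A, b)

def _solve3(A, b):
    """Solve the 3x3 system A x = b by Cramer's rule."""
    def det(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det(A)
    return [det([[b[r] if c == col else A[r][c] for c in range(3)]
                 for r in range(3)]) / D
            for col in range(3)]

# Three hypothetical rays that all pass through the point (1, 2, 3)
origins = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
directions = [(1, 2, 3), (0, 2, 3), (1, 0, 3)]
p = intersect_rays(origins, directions)
print(p)  # ~ [1.0, 2.0, 3.0]
```

With noisy real detections the rays no longer meet exactly, and the least-squares solution is the natural estimate of the static mean marker position used as the motion-compensation reference.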
NASA Technical Reports Server (NTRS)
Roberts, E. G.; Johnson, C. M.
1982-01-01
The economics and sensitivities of slicing large-diameter silicon ingots for photovoltaic applications were examined. Current economics and slicing add-on cost sensitivities are calculated using variable parameters for blade life, slicing yield, and slice cutting speed. It is indicated that cutting speed has the biggest impact on slicing add-on cost, followed by slicing yield, and then by blade life, whose impact diminishes as blade life increases.
NASA Astrophysics Data System (ADS)
Van Vliet, Carolyne M.
2012-11-01
Nonequilibrium processes require that the density operator of an interacting system with Hamiltonian H(t)=H0(t)+λV converges and produces entropy. Employing projection operators in the state space, the density operator is developed to all orders of perturbation and then resummed. In contrast to earlier treatments by Van Hove [Physica 21, 517 (1955)] and others [U. Fano, Rev. Mod. Phys. 29, 74 (1959); U. Fano, in Lectures on the Many-Body Problem, Vol. 2, edited by E. R. Caniello (Academic Press, New York, 1964); R. Zwanzig, in Lectures in Theoretical Physics, Vol. III, edited by W. E. Britten, B. W. Downs, and J. Downs (Wiley Interscience, New York, 1961), pp. 116-141; K. M. Van Vliet, J. Math. Phys. 19, 1345 (1978); K. M. Van Vliet, Can. J. Phys. 56, 1206 (1978)], closed expressions are obtained. From these we establish the time-reversal symmetry property P(γ,t|γ',t')=P̃(γ',t'|γ,t), where the tilde refers to the time-reversed protocol; also a nonstationary Markovian master equation is derived. Time-reversal symmetry is then applied to thermostatted systems yielding the Crooks-Tasaki fluctuation theorem (FT) and the quantum Jarzynski work-energy theorem, as well as the general entropy FT. The quantum mechanical concepts of work and entropy are discussed in detail. Finally, we present a nonequilibrium extension of Mazo's lemma of linear response theory, obtaining some applications via this alternate route.
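For reference, the classical statements of the two theorems named in this abstract are (standard forms, not derived from the abstract itself):

```latex
\frac{P_F(W)}{P_R(-W)} = e^{\beta (W - \Delta F)} \quad \text{(Crooks)},
\qquad
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F} \quad \text{(Jarzynski)},
```

where \(W\) is the work performed along the forward protocol, \(\Delta F\) the equilibrium free-energy difference, and \(\beta\) the inverse temperature; the Jarzynski equality follows from the Crooks relation by integration over \(W\).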
NASA Astrophysics Data System (ADS)
Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito
2017-10-01
The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.
Time Evolution of the Dynamical Variables of a Stochastic System.
ERIC Educational Resources Information Center
de la Pena, L.
1980-01-01
By using the method of moments, it is shown that several important and apparently unrelated theorems describing average properties of stochastic systems are in fact particular cases of a general law; this method is applied to generalize the virial theorem and the fluctuation-dissipation theorem to the time-dependent case. (Author/SK)
A Generalization of the Prime Number Theorem
ERIC Educational Resources Information Center
Bruckman, Paul S.
2008-01-01
In this article, the author begins with the prime number theorem (PNT), and then develops this into a more general theorem, of which many well-known number theoretic results are special cases, including PNT. He arrives at an asymptotic relation that allows the replacement of certain discrete sums involving primes into corresponding differentiable…
ERIC Educational Resources Information Center
Stupel, Moshe; Ben-Chaim, David
2013-01-01
Based on Steiner's fascinating theorem for trapezium, seven geometrical constructions using straight-edge alone are described. These constructions provide an excellent base for teaching theorems and the properties of geometrical shapes, as well as challenging thought and inspiring deeper insight into the world of geometry. In particular, this…
Leaning on Socrates to Derive the Pythagorean Theorem
ERIC Educational Resources Information Center
Percy, Andrew; Carr, Alistair
2010-01-01
The one theorem just about every student remembers from school is the theorem about the side lengths of a right angled triangle which Euclid attributed to Pythagoras when writing Proposition 47 of "The Elements". Usually first met in middle school, the student will be continually exposed throughout their mathematical education to the…
ERIC Educational Resources Information Center
Howell, Russell W.; Schrohe, Elmar
2017-01-01
Rouché's Theorem is a standard topic in undergraduate complex analysis. It is usually covered near the end of the course with applications relating to pure mathematics only (e.g., using it to produce an alternate proof of the Fundamental Theorem of Algebra). The "winding number" provides a geometric interpretation relating to the…
Geometry of the Adiabatic Theorem
ERIC Educational Resources Information Center
Lobo, Augusto Cesar; Ribeiro, Rafael Antunes; Ribeiro, Clyffe de Assis; Dieguez, Pedro Ruas
2012-01-01
We present a simple and pedagogical derivation of the quantum adiabatic theorem for two-level systems (a single qubit) based on geometrical structures of quantum mechanics developed by Anandan and Aharonov, among others. We have chosen to use only the minimum geometric structure needed for the understanding of the adiabatic theorem for this case.…
The Classical Version of Stokes' Theorem Revisited
ERIC Educational Resources Information Center
Markvorsen, Steen
2008-01-01
Using only fairly simple and elementary considerations--essentially from first year undergraduate mathematics--we show how the classical Stokes' theorem for any given surface and vector field in R[superscript 3] follows from an application of Gauss' divergence theorem to a suitable modification of the vector field in a tubular shell around the…
ERIC Educational Resources Information Center
Smith, Michael D.
2016-01-01
The Parity Theorem states that any permutation can be written as a product of transpositions, but no permutation can be written as a product of both an even number and an odd number of transpositions. Most proofs of the Parity Theorem take several pages of mathematical formalism to complete. This article presents an alternative but equivalent…
Visualizing the Central Limit Theorem through Simulation
ERIC Educational Resources Information Center
Ruggieri, Eric
2016-01-01
The Central Limit Theorem is one of the most important concepts taught in an introductory statistics course; however, it may be the least understood by students. Sure, students can plug numbers into a formula and solve problems, but conceptually, do they really understand what the Central Limit Theorem is saying? This paper describes a simulation…
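The sort of simulation this abstract advocates takes only a few lines (a generic illustration, not the paper's own materials): draw repeated samples from a skewed population and observe that the sample means cluster normally around the population mean.

```python
import math
import random
import statistics

random.seed(1)

def sample_means(draw, n, reps):
    """Means of `reps` independent samples of size n drawn by `draw`."""
    return [statistics.fmean(draw() for _ in range(n)) for _ in range(reps)]

# Exponential population: mean 1, standard deviation 1, heavily skewed.
means = sample_means(lambda: random.expovariate(1.0), n=50, reps=2000)

# CLT prediction: means ~ Normal(1, 1/sqrt(50)), despite the skewness.
print(statistics.fmean(means))   # close to 1
print(statistics.stdev(means))   # close to 1/sqrt(50), about 0.141
```

Plotting a histogram of `means` next to the raw exponential draws makes the contrast between the skewed population and the near-normal sampling distribution immediate.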
Shields, Richard K.; Dudley-Javoroski, Shauna; Boaldin, Kathryn M.; Corey, Trent A.; Fog, Daniel B.; Ruen, Jacquelyn M.
2012-01-01
Objectives To determine (1) the error attributable to external tibia-length measurements by using peripheral quantitative computed tomography (pQCT) and (2) the effect these errors have on scan location and tibia trabecular bone mineral density (BMD) after spinal cord injury (SCI). Design Blinded comparison and criterion standard in matched cohorts. Setting Primary care university hospital. Participants Eight able-bodied subjects underwent tibia length measurement. A separate cohort of 7 men with SCI and 7 able-bodied age-matched male controls underwent pQCT analysis. Interventions Not applicable. Main Outcome Measures The projected worst-case tibia-length-measurement error translated into a pQCT slice placement error of ±3 mm. We collected pQCT slices at the distal 4% tibia site, 3 mm proximal and 3 mm distal to that site, and then quantified the BMD error attributable to slice placement. Results Absolute BMD error was greater for able-bodied than for SCI subjects (5.87 mg/cm³ vs 4.5 mg/cm³). However, the percentage error in BMD was larger for SCI than able-bodied subjects (4.56% vs 2.23%). Conclusions During cross-sectional studies of various populations, BMD differences up to 5% may be attributable to variation in limb-length-measurement error. PMID:17023249
Virtual continuity of measurable functions and its applications
NASA Astrophysics Data System (ADS)
Vershik, A. M.; Zatitskii, P. B.; Petrov, F. V.
2014-12-01
A classical theorem of Luzin states that a measurable function of one real variable is `almost' continuous. For measurable functions of several variables the analogous statement (continuity on a product of sets having almost full measure) does not hold in general. The search for a correct analogue of Luzin's theorem leads to a notion of virtually continuous functions of several variables. This apparently new notion implicitly appears in the statements of embedding theorems and trace theorems for Sobolev spaces. In fact it reveals the nature of such theorems as statements about virtual continuity. The authors' results imply that under the conditions of Sobolev theorems there is a well-defined integration of a function with respect to a wide class of singular measures, including measures concentrated on submanifolds. The notion of virtual continuity is also used for the classification of measurable functions of several variables and in some questions on dynamical systems, the theory of polymorphisms, and bistochastic measures. In this paper the necessary definitions and properties of admissible metrics are recalled, several definitions of virtual continuity are given, and some applications are discussed. Bibliography: 24 titles.
NASA Technical Reports Server (NTRS)
Goldman, H.; Wolf, M.
1979-01-01
Analyses of slicing processes and junction formation processes are presented. A simple method for evaluating the relative economic merits of competing process options with respect to the cost of energy produced by the system is described. An energy consumption analysis was developed and applied to determine the energy consumption in the solar module fabrication process sequence, from the mining of the SiO2 to shipping. The analysis shows that current technology practice involves inordinate energy use in the purification step and large wastage of the invested energy through losses, particularly poor conversion in slicing, as well as inadequate yields throughout. The cell process energy expenditures already show a downward trend based on increased throughput rates. The large improvement, however, depends on the introduction of a more efficient purification process and of acceptable ribbon-growing techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koenig, Robert; Institute for Quantum Information, California Institute of Technology, Pasadena, California 91125; Mitchison, Graeme
In its most basic form, the finite quantum de Finetti theorem states that the reduced k-partite density operator of an n-partite symmetric state can be approximated by a convex combination of k-fold product states. Variations of this result include Renner's 'exponential' approximation by 'almost-product' states, a theorem which deals with certain triples of representations of the unitary group, and the result of D'Cruz et al. [e-print quant-ph/0606139; Phys. Rev. Lett. 98, 160406 (2007)] for infinite-dimensional systems. We show how these theorems follow from a single, general de Finetti theorem for representations of symmetry groups, each instance corresponding to a particular choice of symmetry group and representation of that group. This gives some insight into the nature of the set of approximating states and leads to some new results, including an exponential theorem for infinite-dimensional systems.
The Levy sections theorem revisited
NASA Astrophysics Data System (ADS)
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio
2007-06-01
This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns, and the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lens of the Levy sections theorem one can find common patterns in otherwise very different data sets.
Tutorial on Fourier space coverage for scattering experiments, with application to SAR
NASA Astrophysics Data System (ADS)
Deming, Ross W.
2010-04-01
The Fourier Diffraction Theorem relates the data measured during electromagnetic, optical, or acoustic scattering experiments to the spatial Fourier transform of the object under test. The theorem is well-known, but since it is based on integral equations and complicated mathematical expansions, the typical derivation may be difficult for the non-specialist. In this paper, the theorem is derived and presented using simple geometry, plus undergraduate-level physics and mathematics. For practitioners of synthetic aperture radar (SAR) imaging, the theorem is important to understand because it leads to a simple geometric and graphical understanding of image resolution and sampling requirements, and how they are affected by radar system parameters and experimental geometry. Also, the theorem can be used as a starting point for imaging algorithms and motion compensation methods. Several examples are given in this paper for realistic scenarios.
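In the short-wavelength limit the Fourier Diffraction Theorem reduces to the classical projection-slice theorem, which is easy to verify numerically (a generic NumPy check, not code from the paper): the 1-D FFT of a projection of an object equals the central slice of its 2-D FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
obj = rng.random((64, 64))          # arbitrary test "object"

projection = obj.sum(axis=0)        # project along the first axis

# Central slice (k_y = 0 row) of the object's 2-D Fourier transform.
central_slice = np.fft.fft2(obj)[0, :]

# Projection-slice theorem: the two agree to machine precision.
assert np.allclose(np.fft.fft(projection), central_slice)
```

Each radar look direction contributes one such slice (or, more precisely, an arc) to Fourier space, which is the geometric picture of SAR resolution the paper builds on.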
Moog, Daniel; Maier, Uwe G
2017-08-01
Is the spatial organization of membranes and compartments within cells subjected to any rules? Cellular compartmentation differs between prokaryotic and eukaryotic life, because it is present to a high degree only in eukaryotes. In 1964, Prof. Eberhard Schnepf formulated the compartmentation rule (Schnepf theorem), which posits that a biological membrane, the main physical structure responsible for cellular compartmentation, usually separates a plasmatic from a non-plasmatic phase. Here we review and re-investigate the Schnepf theorem by applying it to different cellular structures, from bacterial cells to eukaryotes with their organelles and compartments. In conclusion, we can confirm the general correctness of the Schnepf theorem, noting explicit exceptions only in special cases such as endosymbiosis and parasitism. © 2017 WILEY Periodicals, Inc.
Guided discovery of the nine-point circle theorem and its proof
NASA Astrophysics Data System (ADS)
Buchbinder, Orly
2018-01-01
The nine-point circle theorem is one of the most beautiful and surprising theorems in Euclidean geometry. It establishes an existence of a circle passing through nine points, all of which are related to a single triangle. This paper describes a set of instructional activities that can help students discover the nine-point circle theorem through investigation in a dynamic geometry environment, and consequently prove it using a method of guided discovery. The paper concludes with a variety of suggestions for the ways in which the whole set of activities can be implemented in geometry classrooms.
Kato type operators and Weyl's theorem
NASA Astrophysics Data System (ADS)
Duggal, B. P.; Djordjevic, S. V.; Kubrusly, Carlos
2005-09-01
A Banach space operator T satisfies Weyl's theorem if and only if T or T* has SVEP at all complex numbers λ in the complement of the Weyl spectrum of T and T is Kato type at all λ which are isolated eigenvalues of T of finite algebraic multiplicity. If T* (respectively, T) has SVEP and T is Kato type at all λ which are isolated eigenvalues of T of finite algebraic multiplicity (respectively, T is Kato type at all λ ∈ iso σ(T)), then T satisfies a-Weyl's theorem (respectively, T* satisfies a-Weyl's theorem).
Fluctuation theorem: A critical review
NASA Astrophysics Data System (ADS)
Malek Mansour, M.; Baras, F.
2017-10-01
The fluctuation theorem for entropy production is revisited in the framework of stochastic processes. The applicability of the fluctuation theorem to physico-chemical systems, and the resulting stochastic thermodynamics, is analyzed. Some unexpected limitations are highlighted in the context of jump Markov processes. We show that these limitations handicap the ability of the resulting stochastic thermodynamics to correctly describe the state of non-equilibrium systems in terms of the thermodynamic properties of the individual processes therein. Finally, we consider the case of diffusion processes and prove that the fluctuation theorem for entropy production becomes irrelevant at the stationary state in the case of one-variable systems.
The Cʳ dependence problem of eigenvalues of the Laplace operator on domains in the plane
NASA Astrophysics Data System (ADS)
Haddad, Julian; Montenegro, Marcos
2018-03-01
The Cʳ dependence problem of multiple Dirichlet eigenvalues on domains is discussed for elliptic operators by considering Cʳ⁺¹-smooth one-parameter families of C¹ perturbations of domains in Rⁿ. As applications of our main theorem (Theorem 1), we provide a fairly complete description of all eigenvalues of the Laplace operator on disks and squares in R² and also of its second eigenvalue on balls in Rⁿ for any n ≥ 3. The central tool used in our proof is a degenerate implicit function theorem on Banach spaces (Theorem 2) of independent interest.
Nambu-Goldstone theorem and spin-statistics theorem
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo
2016-05-01
On December 19-21 in 2001, we organized a yearly workshop at Yukawa Institute for Theoretical Physics in Kyoto on the subject of “Fundamental Problems in Field Theory and their Implications”. Prof. Yoichiro Nambu attended this workshop and explained a necessary modification of the Nambu-Goldstone theorem when applied to non-relativistic systems. At the same workshop, I talked on a path integral formulation of the spin-statistics theorem. The present essay is on this memorable workshop, where I really enjoyed the discussions with Nambu, together with a short comment on the color freedom of quarks.
Solving a Class of Spatial Reasoning Problems: Minimal-Cost Path Planning in the Cartesian Plane.
1987-06-01
as in Figure 72. By the Theorem of Pythagoras, the cost of going along (a,b,c) is greater than the...preceding lemmas to an indefinite number of boundary-crossing episodes is accomplished by the following theorems. Theorem 1 extends the result of Lemma 1... Theorem 1: Any two Snell's-law paths within a K-explored wedge defined by Snell's-law paths RL and R. do not intersect within the K-explored portion of
NASA Astrophysics Data System (ADS)
Mints, M. V.; Berzin, R. G.; Babayants, P. S.; Konilov, A. N.; Suleimanov, A. K.; Zamozhniaya, N. G.; Zlobin, V. L.
2003-04-01
The 1-EU and 4B CDP transects, acquired during 1998-2002 by "Spetsgeophyzica", together with previously developed CDP profiles, have crossed most of the main tectonic units of the eastern Fennoscandian Shield and the central part of the East European Platform. They provide seismic images of the Early Precambrian crust and upper mantle from the surface to about 80 km depth (25 s). The Neoarchaean granite-greenstone complexes of the Karelia craton along the 4B profile form a series of tectonic slices descending eastward, some of which can be traced to the Moho. The Palaeoproterozoic structures are represented by two main types: (1) volcano-sedimentary (VS) and (2) granulite-gneiss (GN) belts. The Pechenga-Varzuga VS belt has been identified as an overthrust-underthrust southward-dipping package. Tectonic slices formed by the Palaeoproterozoic VS belts, alternating with slices of the Neoarchaean granite-gneisses, form an imbricated crustal unit that extends along the eastern margin of the Neoarchaean Karelia craton. The slices dip steeply northeastward, flattening and partially juxtaposing at 20 km depth in the 1-EU cross-section. This level, which can be understood as the surface of the main detachment, ascends westward. Imbrication and the related thickening of the crust were caused by displacement of crustal slices in western and southwestern directions during the Palaeoproterozoic collision event. The Palaeoproterozoic Onega unit, comprising VS assemblages that originated in a rifted passive-margin setting, forms a northwestward-displaced thrust-nappe complex. It is considered to have initially belonged to the southern edge of the Svecofennian passive margin. The Lapland GN belt has been transected by the Polar and EGGI profiles. Both cross-sections demonstrate that it constitutes a thick composite crustal-scale tectonic slice.
According to geophysical data, the continuation of the Lapland GN belt beneath the platform cover of the East European Craton forms an extended arch-shaped system of belts approximately 2000 km long. In the vicinity of Moscow the thrust-nappe structure of these belts was recently recognized from reflection seismic profiling along the 1-EU profile. The work was developed within the framework of the MPR RF Program and the SVEKALAPKO project and was supported by the RFBR, grant No. 00-05-64241.
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M
2014-01-01
Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction. PMID:23318346
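The compensation idea, smoothly diffusing the intensity step at the FOV boundary into the uncovered region, can be illustrated with a toy one-dimensional sketch (our own simplification with hypothetical names, not the authors' reconstruction code):

```python
import numpy as np

def compensate_truncation_1d(row, fov_end, decay=0.3):
    """Toy 1-D analogue of truncated-projection compensation.

    Voxels at index >= fov_end lie outside the detector's field of
    view. The intensity step at the boundary is propagated into the
    uncovered region with an exponentially decaying weight, so the
    profile no longer jumps abruptly at the FOV edge."""
    out = np.asarray(row, dtype=float).copy()
    step = out[fov_end - 1] - out[fov_end]     # discontinuity at boundary
    k = np.arange(1, out.size - fov_end + 1)
    out[fov_end:] += step * np.exp(-decay * k)
    return out

row = np.array([5.0, 5.0, 5.0, 0.0, 0.0, 0.0])
smoothed = compensate_truncation_1d(row, fov_end=3)
# The jump at the boundary is now much smaller; values inside the FOV
# are untouched.
```

The actual method operates on 2-D projection views inside each SART iteration and estimates the background iteratively, but the boundary-diffusion principle is the same.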
Discovering Theorems in Abstract Algebra Using the Software "GAP"
ERIC Educational Resources Information Center
Blyth, Russell D.; Rainbolt, Julianne G.
2010-01-01
A traditional abstract algebra course typically consists of the professor stating and then proving a sequence of theorems. As an alternative to this classical structure, the students could be expected to discover some of the theorems even before they are motivated by classroom examples. This can be done by using a software system to explore a…
Bell's Theorem and Einstein's "Spooky Actions" from a Simple Thought Experiment
ERIC Educational Resources Information Center
Kuttner, Fred; Rosenblum, Bruce
2010-01-01
In 1964 John Bell proved a theorem allowing the experimental test of whether what Einstein derided as "spooky actions at a distance" actually exist. We will see that they "do". Bell's theorem can be displayed with a simple, nonmathematical thought experiment suitable for a physics course at "any" level. And a simple, semi-classical derivation of…
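The quantitative core behind the thought experiment fits in a few lines (a standard textbook CHSH calculation, not taken from the article): quantum mechanics predicts singlet correlations E(a,b) = -cos(a-b), which at well-chosen angles exceed the bound |S| ≤ 2 obeyed by any local hidden-variable theory.

```python
import math

def E(a, b):
    """Quantum correlation of spin measurements on a singlet pair."""
    return -math.cos(a - b)

# CHSH combination at the angles that maximise the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Local hidden variables require |S| <= 2; quantum mechanics gives
# 2*sqrt(2), which is what experiments observe.
print(abs(S))   # 2.828...
```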
Unique Factorization and the Fundamental Theorem of Arithmetic
ERIC Educational Resources Information Center
Sprows, David
2017-01-01
The fundamental theorem of arithmetic is one of those topics in mathematics that somehow "falls through the cracks" in a student's education. When asked to state this theorem, those few students who are willing to give it a try (most have no idea of its content) will say something like "every natural number can be broken down into a…
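The content students fumble, existence and uniqueness of the factorization, is easy to make concrete in code (a generic trial-division sketch, not from the article):

```python
from math import prod

def prime_factorization(n):
    """Factor n >= 2 into (prime, exponent) pairs by trial division."""
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            factors.append((d, e))
        d += 1
    if n > 1:
        factors.append((n, 1))
    return factors

# Existence: every n >= 2 factors. Uniqueness: the sorted list of
# prime powers is canonical, e.g. 360 = 2^3 * 3^2 * 5.
assert prime_factorization(360) == [(2, 3), (3, 2), (5, 1)]
```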
Viète's Formula and an Error Bound without Taylor's Theorem
ERIC Educational Resources Information Center
Boucher, Chris
2018-01-01
This note presents a derivation of Viète's classic product approximation of pi that relies on only the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
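Viète's product itself is a three-line computation (our own sketch; the note's error bound is not reproduced here), built from the nested radicals a₁ = √2, aₖ₊₁ = √(2 + aₖ) with 2/π = ∏ aₖ/2:

```python
import math

def viete_pi(terms):
    """Approximate pi with Viète's infinite product
    2/pi = (sqrt(2)/2) * (sqrt(2+sqrt(2))/2) * ..."""
    product, a = 1.0, 0.0
    for _ in range(terms):
        a = math.sqrt(2.0 + a)
        product *= a / 2.0
    return 2.0 / product

# Convergence is exponential: each extra factor cuts the error
# roughly by a factor of 4.
print(abs(viete_pi(20) - math.pi))
```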
A Physical Proof of the Pythagorean Theorem
ERIC Educational Resources Information Center
Treeby, David
2017-01-01
What proof of the Pythagorean theorem might appeal to a physics teacher? A proof that involved the notion of mass would surely be of interest. While various proofs of the Pythagorean theorem employ the circumcenter and incenter of a right-angled triangle, we are not aware of any proof that uses the triangle's center of mass. This note details one…
Quantum Field Theory on Spacetimes with a Compactly Generated Cauchy Horizon
NASA Astrophysics Data System (ADS)
Kay, Bernard S.; Radzikowski, Marek J.; Wald, Robert M.
1997-02-01
We prove two theorems which concern difficulties in the formulation of the quantum theory of a linear scalar field on a spacetime, (M,g_{ab}), with a compactly generated Cauchy horizon. These theorems demonstrate the breakdown of the theory at certain base points of the Cauchy horizon, which are defined as 'past terminal accumulation points' of the horizon generators. Thus, the theorems may be interpreted as giving support to Hawking's 'Chronology Protection Conjecture', according to which the laws of physics prevent one from manufacturing a 'time machine'. Specifically, we prove: Theorem 1. There is no extension to (M,g_{ab}) of the usual field algebra on the initial globally hyperbolic region which satisfies the condition of F-locality at any base point. In other words, any extension of the field algebra must, in any globally hyperbolic neighbourhood of any base point, differ from the algebra one would define on that neighbourhood according to the rules for globally hyperbolic spacetimes. Theorem 2. The two-point distribution for any Hadamard state defined on the initial globally hyperbolic region must (when extended to a distributional bisolution of the covariant Klein-Gordon equation on the full spacetime) be singular at every base point x in the sense that the difference between this two-point distribution and a local Hadamard distribution cannot be given by a bounded function in any neighbourhood (in M × M) of (x,x). In consequence of Theorem 2, quantities such as the renormalized expectation value of φ² or of the stress-energy tensor are necessarily ill-defined or singular at any base point. The proof of these theorems relies on the 'Propagation of Singularities' theorems of Duistermaat and Hörmander.
Enter the reverend: introduction to and application of Bayes' theorem in clinical ophthalmology.
Thomas, Ravi; Mengersen, Kerrie; Parikh, Rajul S; Walland, Mark J; Muliyil, Jayprakash
2011-12-01
Ophthalmic practice utilizes numerous diagnostic tests, some of which are used to screen for disease. Interpretation of test results and many clinical management issues are actually problems in inverse probability that can be solved using Bayes' theorem. Use two-by-two tables to understand Bayes' theorem and apply it to clinical examples. Specific examples of the utility of Bayes' theorem in diagnosis and management. Two-by-two tables are used to introduce concepts and understand the theorem. The application in interpretation of diagnostic tests is explained. Clinical examples demonstrate its potential use in making management decisions. Positive predictive value and conditional probability. The theorem demonstrates the futility of testing when prior probability of disease is low. Application to untreated ocular hypertension demonstrates that the estimate of glaucomatous optic neuropathy is similar to that obtained from the Ocular Hypertension Treatment Study. Similar calculations are used to predict the risk of acute angle closure in a primary angle closure suspect, the risk of pupillary block in a diabetic undergoing cataract surgery, and the probability that an observed decrease in intraocular pressure is due to the medication that has been started. The examples demonstrate how data required for management can at times be easily obtained from available information. Knowledge of Bayes' theorem helps in interpreting test results and supports the clinical teaching that testing for conditions with a low prevalence has a poor predictive value. In some clinical situations Bayes' theorem can be used to calculate vital data required for patient management. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
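The abstract's central lesson, that a positive test means little when prior probability is low, is one line of Bayes' theorem (a generic sketch; the numbers below are illustrative, not taken from the article):

```python
def ppv(sensitivity, specificity, prevalence):
    """P(disease | positive test) via Bayes' theorem."""
    tp = sensitivity * prevalence                  # true positives
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false positives
    return tp / (tp + fp)

# A 90%-sensitive, 90%-specific test for a condition with 1%
# prevalence: most positives are false positives.
print(ppv(0.9, 0.9, 0.01))   # about 0.083
```

The same two-by-two-table arithmetic underlies the clinical examples in the paper, from ocular hypertension to angle-closure risk.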
A shape-based statistical method to retrieve 2D TRUS-MR slice correspondence for prostate biopsy
NASA Astrophysics Data System (ADS)
Mitra, Jhimli; Srikantha, Abhilash; Sidibé, Désiré; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C.; Comet, Josep; Meriaudeau, Fabrice
2012-02-01
This paper presents a method based on shape context and statistical measures to match an interventional 2D Trans-Rectal Ultrasound (TRUS) slice acquired during prostate biopsy to a 2D Magnetic Resonance (MR) slice of a pre-acquired prostate volume. Accurate biopsy tissue sampling requires translation of the MR slice information onto the TRUS-guided biopsy slice. However, this translation or fusion requires knowledge of the spatial position of the TRUS slice, and this is only possible with the use of an electro-magnetic (EM) tracker attached to the TRUS probe. Since the use of an EM tracker is not common in clinical practice and 3D TRUS is not used during biopsy, we propose an analysis based on shape and information theory to come close enough to the actual MR slice, as validated by experts. The Bhattacharyya distance is used to find point correspondences between shape-context representations of the prostate contours. Thereafter, the Chi-square distance is used to find those MR slices in which the prostates closely match that of the TRUS slice. Normalized Mutual Information (NMI) values of the TRUS slice with each of the axial MR slices are computed after rigid alignment, and consecutively a strategic elimination based on a set of rules between the Chi-square distances and the NMI leads to the required MR slice. We validated our method on TRUS axial slices of 15 patients, of which 11 results matched at least one expert's validation and the remaining 4 are at most one slice away from the expert validations.
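The two distance measures driving the slice search can be sketched for generic discrete distributions (simple stand-ins for the paper's shape-context histograms; the function names are ours):

```python
import math

def bhattacharyya_distance(p, q):
    """-log of the Bhattacharyya coefficient between two distributions."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc)

def chi_square_distance(p, q):
    """Symmetric chi-square distance, as commonly used for histograms."""
    return 0.5 * sum((pi - qi) ** 2 / (pi + qi)
                     for pi, qi in zip(p, q) if pi + qi > 0)

same = [0.25, 0.25, 0.25, 0.25]
other = [0.70, 0.10, 0.10, 0.10]
# Identical histograms score 0; dissimilar ones score higher.
print(bhattacharyya_distance(same, same), chi_square_distance(same, other))
```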
Medanic, M; Gillette, M U
1992-05-01
1. The suprachiasmatic nucleus (SCN) of the hypothalamus is the primary pacemaker for circadian rhythms in mammals. The 24 h pacemaker is endogenous to the SCN and persists for multiple cycles in the suprachiasmatic brain slice. 2. While serotonin is not endogenous to the SCN, a major midbrain hypothalamic afferent pathway is serotonergic. Within this tract the dorsal raphe nucleus sends direct projections to the ventrolateral portions of the SCN. We investigated a possible regulatory role for serotonin in the mammalian circadian system by examining its effect, when applied at projection sites, on the circadian rhythm of neuronal activity in rat SCN in vitro. 3. Eight-week-old male rats from our inbred colony, housed on a 12 h light: 12 h dark schedule, were used. Hypothalamic brain slices containing the paired SCN were prepared in the day and maintained in glucose and bicarbonate-supplemented balanced salt solution for up to 53 h. 4. A 10(-11) ml drop of 10(-6) M-serotonin (5-hydroxytryptamine (5-HT) creatinine sulphate complex) in medium was applied to the ventrolateral portion of one of the SCN for 5 min on the first day in vitro. The effect of the treatment at each of seven time points across the circadian cycle was examined. The rhythm of spontaneous neuronal activity was recorded extracellularly on the second and third days in vitro. Phase shifts were determined by comparing the time-of-peak of neuronal activity in serotonin- vs. media-treated slices. 5. Application of serotonin during the subjective day induced significant advances in the phase of the electrical activity rhythm (n = 11). The most sensitive time of treatment was CT 7 (circadian time 7 is 7 h after 'lights on' in the animal colony), when a 7.0 +/- 0.1 h phase advance was observed (n = 3). This phase advance was perpetuated on day 3 in vitro without decrement. Serotonin treatment during the subjective night had no effect on the timing of the electrical activity rhythm (n = 9). 6. 
The specificity of the serotonin-induced phase change was assessed by treating slices in the same manner with a microdrop of the serotonergic agonists 5-carboxamidotryptamine, which targets the 5-HT1 class of receptors, or 8-hydroxy-dipropylaminotetralin (8-OH DPAT), which acts on the 5-HT1A receptor subtype. (ABSTRACT TRUNCATED AT 400 WORDS)
Malila, Jussi; McGraw, Robert; Laaksonen, Ari; ...
2015-01-07
Despite recent advances in monitoring nucleation from a vapor at close-to-molecular resolution, the identity of the critical cluster, forming the bottleneck for the nucleation process, remains elusive. During the past twenty years, the first nucleation theorem has often been used to extract the size of the critical cluster from nucleation rate measurements. However, derivations of the first nucleation theorem invoke certain questionable assumptions that may fail, e.g., in the case of atmospheric new particle formation, including the absence of subcritical cluster losses and heterogeneous nucleation on pre-existing nanoparticles. Here we extend the kinetic derivation of the first nucleation theorem to give a general framework that includes such processes, yielding sum rules connecting the size-dependent particle formation and loss rates to the corresponding loss-free nucleation rate and the apparent critical size from a naïve application of the first nucleation theorem that neglects them.
A new blackhole theorem and its applications to cosmology and astrophysics
NASA Astrophysics Data System (ADS)
Wang, Shouhong; Ma, Tian
2015-04-01
We shall present a blackhole theorem and a theorem on the structure of our Universe, proved in a recently published paper, based on (1) the Einstein general theory of relativity and (2) the cosmological principle that the universe is homogeneous and isotropic. These two theorems are rigorously proved using astrophysical dynamical models coupling fluid dynamics and general relativity, based on a symmetry-breaking principle. With the new blackhole theorem, we further demonstrate that supernova explosions and AGN jets, as well as many other astronomical phenomena, including recently reported ones, are due to combined relativistic, magnetic and thermal effects. The radial temperature gradient causes vertical Bénard-type convection cells, and the relativistic viscous force (via the electromagnetic, weak and strong interactions) gives rise to a huge explosive radial force near the Schwarzschild radius, leading, e.g., to supernova explosions and AGN jets.
Atiyah-Patodi-Singer index theorem for domain-wall fermion Dirac operator
NASA Astrophysics Data System (ADS)
Fukaya, Hidenori; Onogi, Tetsuya; Yamaguchi, Satoshi
2018-03-01
Recently, the Atiyah-Patodi-Singer (APS) index theorem has attracted attention for understanding physics on the surface of materials in topological phases. Although it is widely applied in physics, the mathematical set-up in the original APS index theorem is too abstract and general (allowing non-trivial metric and so on), and the connection between the APS boundary condition and the physical boundary condition on the surface of a topological material is unclear. For this reason, in contrast to the Atiyah-Singer index theorem, a derivation of the APS index theorem in physics language is still missing. In this talk, we attempt to reformulate the APS index in a "physicist-friendly" way, similar to the Fujikawa method on closed manifolds, for our familiar domain-wall fermion Dirac operator in flat Euclidean space. We find that the APS index is naturally embedded in the determinant of domain-wall fermions, representing the so-called anomaly descent equations.
NASA Astrophysics Data System (ADS)
Rau, Uwe; Brendel, Rolf
1998-12-01
It is shown that a recently described general relationship between the local collection efficiency of solar cells and the dark carrier concentration (reciprocity theorem) follows directly from the principle of detailed balance. We derive the relationship for situations where transport of charge carriers occurs between discrete states, as well as for situations where electronic transport is described in terms of continuous functions. Combining both situations allows us to extend the range of applicability of the reciprocity theorem to all types of solar cells, including, e.g., metal-insulator-semiconductor-type and electrochemical solar cells, as well as to include the impurity photovoltaic effect. We generalize the theorem further to situations where the occupation probability of electronic states is governed by Fermi-Dirac statistics instead of the Boltzmann statistics underlying preceding work. In such a situation the reciprocity theorem is restricted to small departures from equilibrium.
Dynamic relaxation of a levitated nanoparticle from a non-equilibrium steady state.
Gieseler, Jan; Quidant, Romain; Dellago, Christoph; Novotny, Lukas
2014-05-01
Fluctuation theorems are a generalization of thermodynamics on small scales and provide the tools to characterize the fluctuations of thermodynamic quantities in non-equilibrium nanoscale systems. They are particularly important for understanding irreversibility and the second law in fundamental chemical and biological processes that are actively driven, thus operating far from thermal equilibrium. Here, we apply the framework of fluctuation theorems to investigate the important case of a system relaxing from a non-equilibrium state towards equilibrium. Using a vacuum-trapped nanoparticle, we demonstrate experimentally the validity of a fluctuation theorem for the relative entropy change occurring during relaxation from a non-equilibrium steady state. The platform established here allows non-equilibrium fluctuation theorems to be studied experimentally for arbitrary steady states and can be extended to investigate quantum fluctuation theorems as well as systems that do not obey detailed balance.
Exploiting structure: Introduction and motivation
NASA Technical Reports Server (NTRS)
Xu, Zhong Ling
1994-01-01
This annual report summarizes the research activities performed from 26 Jun. 1993 to 28 Feb. 1994. We continued to investigate the robust stability of systems whose transfer functions or characteristic polynomials are affine multilinear functions of parameters. An approach that differs from 'Stability by Linear Process' and reduces the computational burden of checking the robust stability of a system with multilinear uncertainty was found for low-order (second- and third-order) cases. We proved a crucial theorem, the so-called Face Theorem. Previously, we had proven Kharitonov's Vertex Theorem and the Edge Theorem of Bartlett; the details of this proof are contained in the Appendix. The Face Theorem provides a tool to describe the boundary of the image of an affine multilinear function. For SPR design, we have developed some new results. The third objective for this period was to design a controller for IHM by the H-infinity optimization technique. The details are presented in the Appendix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, R. J., E-mail: rperkins@pppl.gov; Bellan, P. M.
Action integrals are often used to average a system over fast oscillations and obtain reduced dynamics. It is not surprising, then, that action integrals play a central role in the Hellmann-Feynman theorem of classical mechanics, which furnishes the values of certain quantities averaged over one period of rapid oscillation. This paper revisits the classical Hellmann-Feynman theorem, rederiving it in connection with an analogous theorem involving the time-averaged evolution of canonical coordinates. We then apply a modified version of the Hellmann-Feynman theorem to obtain a new result: the magnetic flux enclosed by one period of gyro-motion of a charged particle in a non-uniform magnetic field. These results further demonstrate the utility of the action integral in obtaining orbit-averaged quantities and the usefulness of this formalism in characterizing charged particle motion.
Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images
Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali
2015-01-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions, showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA-dependent RNA polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077
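The classical two-dimensional Fourier slice theorem that the paper extends to covariance functions can be verified numerically. A minimal sketch (illustrating only the classical theorem, not the paper's covariance extension): the 1D FFT of a projection equals a central slice of the 2D FFT.

```python
import numpy as np

# A random 2D "image" standing in for a projection-imaging target
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))

# Project along the first axis (sum over rows)
projection = img.sum(axis=0)

# Fourier slice theorem: the 1D FFT of the projection equals the
# central slice (zero frequency along the projected axis) of the 2D FFT
slice_1d = np.fft.fft(projection)
central_slice = np.fft.fft2(img)[0, :]

assert np.allclose(slice_1d, central_slice)
```

The same identity holds for any projection direction after rotating the image, which is what makes tomographic reconstruction from projections possible.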
Proof of a new area law in general relativity
NASA Astrophysics Data System (ADS)
Bousso, Raphael; Engelhardt, Netta
2015-08-01
A future holographic screen is a hypersurface of indefinite signature, foliated by marginally trapped surfaces with area A(r). We prove that A(r) grows strictly monotonically. Future holographic screens arise in gravitational collapse. Past holographic screens exist in our own Universe; they obey an analogous area law. Both exist more broadly than event horizons or dynamical horizons. Working within classical general relativity, we assume the null curvature condition and certain genericity conditions. We establish several nontrivial intermediate results. If a surface σ divides a Cauchy surface into two disjoint regions, then a null hypersurface N that contains σ splits the entire spacetime into two disjoint portions: the future-and-interior, K+, and the past-and-exterior, K-. If a family of surfaces σ(r) foliates a hypersurface, while flowing everywhere to the past or exterior, then the future-and-interior K+(r) grows monotonically under inclusion. If the surfaces σ(r) are marginally trapped, we prove that the evolution must be everywhere to the past or exterior, and the area theorem follows. A thermodynamic interpretation as a second law is suggested by the Bousso bound, which relates A(r) to the entropy on the null slices N(r) foliating the spacetime. In a companion letter, we summarize the proof and discuss further implications.
Preliminary frequency-domain analysis for the reconstructed spatial resolution of muon tomography
NASA Astrophysics Data System (ADS)
Yu, B.; Zhao, Z.; Wang, X.; Wang, Y.; Wu, D.; Zeng, Z.; Zeng, M.; Yi, H.; Luo, Z.; Yue, X.; Cheng, J.
2014-11-01
Muon tomography is an advanced technology for non-destructively detecting high atomic number materials. It exploits the multiple Coulomb scattering information of muons to reconstruct the scattering density image of the traversed object. Because of the statistics of muon scattering, the measurement error of the system and the incompleteness of the data, the reconstruction is always accompanied by a certain level of interference, which influences the reconstructed spatial resolution. While statistical noise can be reduced by extending the measuring time, system parameters determine the ultimate spatial resolution that a system can reach. In this paper, an effective frequency-domain model is proposed to analyze the reconstructed spatial resolution of muon tomography. The proposed method modifies the resolution analysis of conventional computed tomography (CT) to fit the different imaging mechanism of muon scattering tomography. The measured scattering information is described in the frequency domain, and a relationship between the measurements and the original image is proposed in the Fourier domain, named the "Muon Central Slice Theorem". Furthermore, a preliminary analytical expression for the ultimate reconstructed spatial resolution is derived, and simulations are performed for validation. While the method is able to predict the ultimate spatial resolution of a given system, it can also be utilized for the optimization of system design and construction.
Radial q-space sampling for DSI.
Baete, Steven H; Yutzy, Stephen; Boada, Fernando E
2016-09-01
Diffusion spectrum imaging (DSI) has been shown to be an effective tool for noninvasively depicting the anatomical details of brain microstructure. Existing implementations of DSI sample the diffusion encoding space using a rectangular grid. Here we present a different implementation of DSI in which a radially symmetric q-space sampling scheme is used to improve the angular resolution and accuracy of the reconstructed orientation distribution functions. Q-space is sampled by acquiring several q-space samples along a number of radial lines. Each of these radial lines in q-space is analytically connected to a value of the orientation distribution function at the same angular location by the Fourier slice theorem. Computer simulations and in vivo brain results demonstrate that radial diffusion spectrum imaging correctly estimates the orientation distribution functions when a moderately high b-value (4000 s/mm²) and number of q-space samples (236) are used. The nominal angular resolution of radial diffusion spectrum imaging depends on the number of radial lines used in the sampling scheme, and only weakly on the maximum b-value. In addition, the radial analytical reconstruction reduces truncation artifacts that affect Cartesian reconstructions. Hence, a radial acquisition of q-space can be favorable for DSI. Magn Reson Med 76:769-780, 2016. © 2015 Wiley Periodicals, Inc.
Mesenchymal stem cells support neuronal fiber growth in an organotypic brain slice co-culture model.
Sygnecka, Katja; Heider, Andreas; Scherf, Nico; Alt, Rüdiger; Franke, Heike; Heine, Claudia
2015-04-01
Mesenchymal stem cells (MSCs) have been identified as promising candidates for neuroregenerative cell therapies. However, the impact of different isolation procedures on the functional and regenerative characteristics of MSC populations has not been studied thoroughly. To quantify these differences, we directly compared classically isolated bulk bone marrow-derived MSCs (bulk BM-MSCs) to MSCs derived from the subpopulation Sca-1(+)Lin(-)CD45(-) (SL45-MSCs), isolated by fluorescence-activated cell sorting from bulk BM-cell suspensions. Both populations were analyzed with respect to functional readouts, that is, frequency of fibroblast colony-forming units (CFU-f), general morphology, and expression of stem cell markers. The SL45-MSC population is characterized by greater morphological homogeneity, higher CFU-f frequency, and significantly increased nestin expression compared with bulk BM-MSCs. We further quantified the potential of both cell populations to enhance neuronal fiber growth, using an ex vivo model of organotypic brain slice co-cultures of the mesocortical dopaminergic projection system. The MSC populations were cultivated underneath the slice co-cultures without direct contact using a transwell system. After cultivation, the fiber density in the border region between the two brain slices was quantified. While both populations significantly enhanced fiber outgrowth compared with controls, purified SL45-MSCs stimulated fiber growth to a larger degree. Subsequently, we analyzed the expression of different growth factors in both cell populations. The results show a significantly higher expression of brain-derived neurotrophic factor (BDNF) and basic fibroblast growth factor in the SL45-MSC population. Altogether, we conclude that MSC preparations enriched for primary MSCs promote neuronal regeneration and axonal regrowth more effectively than bulk BM-MSCs, an effect that may be mediated by higher BDNF secretion.
NASA Astrophysics Data System (ADS)
Wahi-Anwar, M. Wasil; Emaminejad, Nastaran; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael F.
2018-02-01
Quantitative imaging in lung cancer CT seeks to characterize nodules through quantitative features, usually from a region of interest delineating the nodule. The segmentation, however, can vary depending on segmentation approach and image quality, which can affect the extracted feature values. In this study, we utilize a fully-automated nodule segmentation method, to avoid reader-influenced inconsistencies, to explore the effects of varied dose levels and reconstruction parameters on segmentation. Raw projection CT images from a low-dose screening patient cohort (N=59) were reconstructed at multiple dose levels (100%, 50%, 25%, 10%), two slice thicknesses (1.0mm, 0.6mm), and a medium kernel. Fully-automated nodule detection and segmentation was then applied, from which 12 nodules were selected. The Dice similarity coefficient (DSC) was used to assess the similarity of the segmentation ROIs of the same nodule across different reconstruction and dose conditions. Nodules at 1.0mm slice thickness and dose levels of 25% and 50% resulted in DSC values greater than 0.85 when compared to 100% dose, with lower dose leading to a lower average and wider spread of DSC values. At 0.6mm, the increased bias and wider spread of DSC values from lowering dose were more pronounced. The effects of dose reduction on DSC for CAD-segmented nodules were similar in magnitude to reducing the slice thickness from 1.0mm to 0.6mm. In conclusion, variation of dose and slice thickness can result in very different segmentations because of noise and image quality. However, there exists some stability in segmentation overlap: even at 1.0mm slice thickness, an image at 25% of the low-dose scan's dose still results in segmentations similar to those seen in a full-dose scan.
Mishra, Anuj; Ehtuish, Ehtuish F
2006-06-01
To assess the renal vessel anatomy, compare the findings with the perioperative findings, determine the sensitivity of multislice computed tomography (CT) angiography in the work-up of live potential donors, and compare the results of the present study with the reported results using single slice CT, magnetic resonance imaging (MRI) and conventional angiography (CA). Retrospective analysis of the angiographic data of 118 prospective live related kidney donors was carried out from October 2004 to August 2005 at the National Organ Transplant Centre, Tripoli Central Hospital, Libya. All donors underwent renal angiography on a multislice (16-slice) CT scanner using 80 cc intravenous contrast with 1.25 mm slice thickness, followed by maximum intensity projection (MIP) and volume rendering technique (VRT) post-processing algorithms. The number of vessels, vessel bifurcation, vessel morphology and venous anatomy were analyzed, and the findings were compared with the surgical findings. Multislice spiral CT angiography (MSCTA) showed clear delineation of the main renal arteries in all donors with detailed vessel morphology. The study revealed 100% sensitivity in the detection of accessory renal vessels, with an overall incidence of 26.7%, the parahilar region being the most common location. The present study showed 100% sensitivity in the visualization and detection of main and accessory renal vessels. These results were comparable with conventional angiography, which has so far been considered the gold standard, and were found superior in specificity and accuracy to single slice CT (SSCT) and MR in the angiographic work-up of live renal donors. Due to improved detection of accessory vessels less than 2 mm in diameter, a higher incidence of aberrant vessels was seen on the right side, as has been suggested previously.
Moulin, Kevin; Croisille, Pierre; Feiweier, Thorsten; Delattre, Benedicte M A; Wei, Hongjiang; Robert, Benjamin; Beuf, Olivier; Viallon, Magalie
2016-07-01
In this study, we propose an efficient free-breathing strategy for rapid and improved cardiac diffusion-weighted imaging (DWI) acquisition using a single-shot spin-echo echo planar imaging (SE-EPI) sequence. A real-time slice-following technique during free-breathing was combined with a sliding acquisition-window strategy prior to Principal Component Analysis temporal Maximum Intensity Projection (PCAtMIP) postprocessing of in-plane co-registered diffusion-weighted images. This methodology was applied to 10 volunteers to quantify the performance of the motion correction technique and the reproducibility of the diffusion parameters. The slice-following technique offers a powerful head-foot respiratory motion management solution for SE-EPI cardiac DWI with the advantage of 100% duty-cycle scanning efficiency. The level of co-registration was further improved using nonrigid motion corrections and was evaluated with a co-registration index. The vascular fraction f and the diffusion coefficients D and D* were determined to be 0.122 ± 0.013, 1.41 ± 0.09 × 10⁻³ mm²/s, and 43.6 ± 9.2 × 10⁻³ mm²/s, respectively. From the multidirectional dataset, the measured mean diffusivity was 1.72 ± 0.09 × 10⁻³ mm²/s and the fractional anisotropy was 0.36 ± 0.02. The slice-following DWI SE-EPI sequence is a promising solution for clinical implementation, offering a robust and improved workflow for further evaluation of DWI in cardiology. Magn Reson Med 76:70-82, 2016. © 2015 Wiley Periodicals, Inc.
"The Most Famous Brain in the World" Performance and Pedagogy on an Amnesiac's Brain
ERIC Educational Resources Information Center
Sweaney, Katherine W.
2012-01-01
Project H.M. was just the sort of thing one might expect the Internet to latch onto: it was a live streaming video of a frozen human brain being slowly sliced apart. Users who clicked the link on Twitter or Facebook between the 2nd and 4th of December 2009 were immediately confronted with a close-up shot of the brain's interior, which was…
NASA Technical Reports Server (NTRS)
Lamarque, J.-F.; Shindell, D. T.; Naik, V.; Plummer, D.; Josse, B.; Righi, M.; Rumbold, S. T.; Schulz, M.; Skeie, R. B.; Strode, S.;
2013-01-01
The Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP) consists of a series of time slice experiments targeting the long-term changes in atmospheric composition between 1850 and 2100, with the goal of documenting composition changes and the associated radiative forcing. In this overview paper, we introduce the ACCMIP activity, the various simulations performed (with a requested set of 14) and the associated model output. The 16 ACCMIP models have a wide range of horizontal and vertical resolutions, vertical extents, chemistry schemes and interactions with radiation and clouds. While anthropogenic and biomass burning emissions were specified for all time slices in the ACCMIP protocol, it is found that the natural emissions are responsible for a significant range across models, mostly in the case of ozone precursors. The analysis of selected present-day climate diagnostics (precipitation, temperature, specific humidity and zonal wind) reveals biases consistent with state-of-the-art climate models. The model-to-model comparison of changes in temperature, specific humidity and zonal wind between 1850 and 2000 and between 2000 and 2100 indicates mostly consistent results. However, models that are clear outliers are different enough from the other models to significantly affect their simulation of atmospheric chemistry.
Jung, Eui-Hyun; Park, Yong-Jin
2008-01-01
In recent years, a few protocol bridge research projects have been announced that enable a seamless integration of Wireless Sensor Networks (WSNs) with the TCP/IP network. These studies have ensured transparent end-to-end communication between the two network sides in a node-centric manner. Researchers expect this integration to trigger the development of various application domains. However, prior research projects have not fully explored some essential features of WSNs, especially the reusability of sensing data and data-centric communication. To resolve these issues, we suggest a new protocol bridge system named TinyONet. In TinyONet, virtual sensors play the role of virtual counterparts of physical sensors and dynamically group to form a functional entity, the Slice. Instead of interacting directly with individual physical sensors, each sensor application uses its own WSN service provided by Slices. If a new kind of service is required in TinyONet, the corresponding function can be dynamically added at runtime. Besides data-centric communication, it also supports node-centric communication and synchronous access. In order to show the effectiveness of the system, we implemented TinyONet on an embedded Linux machine and evaluated it with several experimental scenarios. PMID:27873968
Diffusion Maps and Geometric Harmonics for Automatic Target Recognition (ATR). Volume 2. Appendices
2007-11-01
of the Perron-Frobenius theorem, it suffices to prove that the chain is irreducible and aperiodic. • The irreducibility is a mere consequence of the...of each atom; this is due to the linear programming constraint that the coefficients be nonnegative. 4. Chen et al. [20, 21] describe two algorithms for...projection of x onto the convex cone spanned by Ψ(t) with the origin at the apex; we provide details on computing x̃(t) in Section 4.1.3. Let x̃(t)H
Formal Modeling and Analysis of a Preliminary Small Aircraft Transportation System (SATS)Concept
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Gottliebsen, Hanne; Butler, Ricky; Kalvala, Sara
2004-01-01
New concepts for automating air traffic management functions at small non-towered airports raise serious safety issues associated with the software implementations and their underlying key algorithms. The criticality of such software systems necessitates that strong guarantees of safety be developed for them. In this paper we present a formal method for modeling and verifying such systems using the PVS theorem proving system. The method is demonstrated on a preliminary concept of operations for the Small Aircraft Transportation System (SATS) project at NASA Langley.
Slice regular functions of several Clifford variables
NASA Astrophysics Data System (ADS)
Ghiloni, R.; Perotti, A.
2012-11-01
We introduce a class of slice regular functions of several Clifford variables. Our approach to the definition of slice functions is based on the concept of stem functions of several variables and on the introduction, on real Clifford algebras, of a family of commuting complex structures. The class of slice regular functions includes, in particular, the family of (ordered) polynomials in several Clifford variables. We prove some basic properties of slice and slice regular functions and give examples to illustrate this function theory. In particular, we give integral representation formulas for slice regular functions and a Hartogs-type extension result.
Modular electronics packaging system
NASA Technical Reports Server (NTRS)
Hunter, Don J. (Inventor)
2001-01-01
A modular electronics packaging system includes multiple packaging slices that are mounted horizontally to a base structure. The slices interlock to provide added structural support. Each packaging slice includes a rigid and thermally conductive housing having four side walls that together form a cavity to house an electronic circuit. The cavity is enclosed on one end by an end wall, or web, that isolates the electronic circuit from the circuit in an adjacent packaging slice. The web also provides a thermal path between the electronic circuit and the base structure. Each slice also includes a mounting bracket that connects the packaging slice to the base structure. Four guide pins protrude from the slice into four corresponding receptacles in an adjacent slice. A locking element, such as a set screw, protrudes into each receptacle and interlocks with the corresponding guide pin. A conduit is formed in the slice to allow electrical connection to the electronic circuit.
Systematic Approaches to Experimentation: The Case of Pick's Theorem
ERIC Educational Resources Information Center
Papadopoulos, Ioannis; Iatridou, Maria
2010-01-01
In this paper two 10th graders having an accumulated experience on problem-solving ancillary to the concept of area confronted the task to find Pick's formula for a lattice polygon's area. The formula was omitted from the theorem in order for the students to read the theorem as a problem to be solved. Their working is examined and emphasis is…
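For reference, Pick's theorem (the formula deliberately withheld from the students in the study) states that a lattice polygon's area equals I + B/2 - 1, where I counts interior lattice points and B boundary lattice points. A quick numerical check against the shoelace formula, on an example triangle chosen here for illustration:

```python
from math import gcd

# Lattice triangle with vertices (0,0), (4,0), (0,3); true area is 6
verts = [(0, 0), (4, 0), (0, 3)]

def shoelace(p):
    # standard shoelace area formula
    s = 0
    for (x0, y0), (x1, y1) in zip(p, p[1:] + p[:1]):
        s += x0 * y1 - x1 * y0
    return abs(s) / 2

# Boundary lattice points: gcd(|dx|, |dy|) per edge
B = sum(gcd(abs(x1 - x0), abs(y1 - y0))
        for (x0, y0), (x1, y1) in zip(verts, verts[1:] + verts[:1]))

# Interior lattice points of this triangle (x > 0, y > 0, 3x + 4y < 12)
I = sum(1 for x in range(1, 4) for y in range(1, 3) if 3 * x + 4 * y < 12)

# Pick's theorem: A = I + B/2 - 1
assert shoelace(verts) == I + B / 2 - 1
```

Here B = 8 and I = 3, so Pick's formula gives 3 + 4 - 1 = 6, matching the shoelace area.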
Topology and the Lay of the Land: A Mathematician on the Topographer's Turf.
ERIC Educational Resources Information Center
Shubin, Mikhail
1992-01-01
Presents a proof of Euler's Theorem on polyhedra by relating the theorem to the field of modern topology, specifically to the topology of relief maps. An analogous theorem involving the features of mountain summits, basins, and passes on a terrain is proved and related to the faces, vertices, and edges on a convex polyhedron. (MDH)
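Euler's polyhedron formula V - E + F = 2, the theorem being proved, is easy to check computationally. A small sketch for the cube, deriving the vertex and edge sets from the unit-cube coordinates:

```python
from itertools import product

# Cube vertices: all 0/1 triples
verts = list(product((0, 1), repeat=3))

# Edges: vertex pairs differing in exactly one coordinate
edges = {(u, v) for u in verts for v in verts
         if u < v and sum(a != b for a, b in zip(u, v)) == 1}

F = 6  # the cube's six square faces

# Euler's formula: V - E + F = 2
assert len(verts) - len(edges) + F == 2
```

The same count holds for any convex polyhedron, which is what the topological proof in the article establishes.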
Weak Compactness and Control Measures in the Space of Unbounded Measures
Brooks, James K.; Dinculeanu, Nicolae
1972-01-01
We present a synthesis theorem for a family of locally equivalent measures defined on a ring of sets. This theorem is then used to exhibit a control measure for weakly compact sets of unbounded measures. In addition, the existence of a local control measure for locally strongly bounded vector measures is proved by means of the synthesis theorem. PMID:16591980
ERIC Educational Resources Information Center
Raychaudhuri, D.
2007-01-01
The focus of this paper is on student interpretation and usage of the existence and uniqueness theorems for first-order ordinary differential equations. The inherent structure of the theorems is made explicit by the introduction of a framework of layers concepts-conditions-connectives-conclusions, and we discuss the manners in which students'…
Erratum: Correction to: Information Transmission and Criticality in the Contact Process
NASA Astrophysics Data System (ADS)
Cassandro, M.; Galves, A.; Löcherbach, E.
2018-01-01
The original publication of the article unfortunately contained a mistake in the first sentence of Theorem 1 and in the second part of the proof of Theorem 1. The corrected statement of the theorem, as well as the corrected proof, are given below. The full text of the corrected version is available at http://arxiv.org/abs/1705.11150.
Optical theorem for acoustic non-diffracting beams and application to radiation force and torque
Zhang, Likun; Marston, Philip L.
2013-01-01
Acoustical and optical non-diffracting beams are potentially useful for manipulating particles and larger objects. An extended optical theorem for a non-diffracting beam was given recently in the context of acoustics. The theorem relates the extinction by an object to the scattering at the forward direction of the beam’s plane wave components. Here we use this theorem to examine the extinction cross section of a sphere centered on the axis of the beam, with a non-diffracting Bessel beam as an example. The results are applied to recover the axial radiation force and torque on the sphere by the Bessel beam. PMID:24049681
Republication of: A theorem on Petrov types
NASA Astrophysics Data System (ADS)
Goldberg, J. N.; Sachs, R. K.
2009-02-01
This is a republication of the paper “A Theorem on Petrov Types” by Goldberg and Sachs, Acta Phys. Pol. 22 (supplement), 13 (1962), in which they proved the Goldberg-Sachs theorem. The article has been selected for publication in the Golden Oldies series of General Relativity and Gravitation. Typographical errors of the original publication were corrected by the editor. The paper is accompanied by a Golden Oldie Editorial containing an editorial note written by Andrzej Krasiński and Maciej Przanowski and Goldberg’s brief autobiography. The editorial note explains some difficult parts of the proof of the theorem and discusses the influence of results of the paper on later research.
A general Kastler-Kalau-Walze type theorem for manifolds with boundary
NASA Astrophysics Data System (ADS)
Wang, Jian; Wang, Yong
2016-11-01
In this paper, we establish a general Kastler-Kalau-Walze type theorem for manifolds with boundary of any dimension, generalizing the results in [Y. Wang, Lower-dimensional volumes and Kastler-Kalau-Walze type theorem for manifolds with boundary, Commun. Theor. Phys. 54 (2010) 38-42]. This solves a problem posed by the referee of [J. Wang and Y. Wang, A Kastler-Kalau-Walze type theorem for five-dimensional manifolds with boundary, Int. J. Geom. Meth. Mod. Phys. 12(5) (2015), Article ID: 1550064, 34 pp.], giving a general expression for the lower-dimensional volumes in terms of the geometric data on the manifold.
NASA Technical Reports Server (NTRS)
Steiner, E.
1973-01-01
The use of the electrostatic Hellmann-Feynman theorem for the calculation of the leading term in the 1/R expansion of the force of interaction between two well-separated hydrogen atoms is discussed. Previous work has suggested that whereas this term is determined wholly by the first-order wavefunction when calculated by perturbation theory, the use of the Hellmann-Feynman theorem apparently requires the wavefunction through second order. It is shown how the two results may be reconciled and that the Hellmann-Feynman theorem may be reformulated in such a way that only the first-order wavefunction is required.
A Benes-like theorem for the shuffle-exchange graph
NASA Technical Reports Server (NTRS)
Schwabe, Eric J.
1992-01-01
One of the first theorems on permutation routing, proved by V. E. Beneš (1965), shows that given a set of source-destination pairs in an N-node butterfly network, with at most a constant number of sources or destinations in each column of the butterfly, there exists a set of paths of length O(log N) connecting each pair such that the total congestion is constant. An analogous theorem yielding constant-congestion paths for off-line routing in the shuffle-exchange graph is proved here. The necklaces of the shuffle-exchange graph play the same structural role as the columns of the butterfly in Beneš's theorem.
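The structures involved can be made concrete. A small sketch (illustrative only, not the paper's routing construction) of the shuffle-exchange graph, its connectivity, and its necklaces, for n = 4 bit labels:

```python
from collections import deque

# Shuffle-exchange graph on N = 2^n nodes, each node an n-bit label
n = 4
N = 1 << n

def shuffle(v):
    # shuffle edge: cyclic left rotation of the n bits
    return ((v << 1) | (v >> (n - 1))) & (N - 1)

def exchange(v):
    # exchange edge: flip the lowest bit
    return v ^ 1

def necklace(v):
    # canonical representative of v's cyclic-rotation class
    rots, w = [], v
    for _ in range(n):
        w = shuffle(w)
        rots.append(w)
    return min(rots)

# The graph is connected: BFS from node 0 reaches every node
seen, queue = {0}, deque([0])
while queue:
    v = queue.popleft()
    for u in (shuffle(v), exchange(v)):
        if u not in seen:
            seen.add(u)
            queue.append(u)
assert len(seen) == N

# For n = 4 there are 6 necklaces (rotation classes of 4-bit strings)
assert len({necklace(v) for v in range(N)}) == 6
```

The necklaces counted here are exactly the rotation classes that, per the abstract, take over the structural role of the butterfly's columns.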
Tree-manipulating systems and Church-Rosser theorems.
NASA Technical Reports Server (NTRS)
Rosen, B. K.
1973-01-01
Study of a broad class of tree-manipulating systems called subtree replacement systems. The use of this framework is illustrated by general theorems analogous to the Church-Rosser theorem and by applications of these theorems. Sufficient conditions are derived for the Church-Rosser property, and their applications to recursive definitions, the lambda calculus, and parallel programming are discussed. McCarthy's (1963) recursive calculus is extended by allowing a choice between call-by-value and call-by-name. It is shown that recursively defined functions are single-valued despite the nondeterminism of the evaluation algorithm. It is also shown that these functions solve their defining equations in a 'canonical' manner.
Quantum voting and violation of Arrow's impossibility theorem
NASA Astrophysics Data System (ADS)
Bao, Ning; Yunger Halpern, Nicole
2017-06-01
We propose a quantum voting system in the spirit of quantum games such as the quantum prisoner's dilemma. Our scheme enables a constitution to violate a quantum analog of Arrow's impossibility theorem. Arrow's theorem is a claim proved deductively in economics: Every (classical) constitution endowed with three innocuous-seeming properties is a dictatorship. We construct quantum analogs of constitutions, of the properties, and of Arrow's theorem. A quantum version of majority rule, we show, violates this quantum Arrow conjecture. Our voting system allows for tactical-voting strategies reliant on entanglement, interference, and superpositions. This contribution to quantum game theory helps elucidate how quantum phenomena can be harnessed for strategic advantage.
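The classical difficulty that Arrow's theorem formalizes can be illustrated by the Condorcet paradox: pairwise majority rule over three voters can be intransitive. A sketch of this classical (non-quantum) phenomenon:

```python
# Three voters with cyclic preferences over candidates A, B, C
# (each ballot is a ranking, best first)
ballots = [list("ABC"), list("BCA"), list("CAB")]

def majority_prefers(x, y):
    """True if a strict majority of ballots ranks x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# Pairwise majority is intransitive: A beats B, B beats C, yet C beats A
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

Each pairwise contest is won 2-1, so majority rule produces a cycle rather than a ranking; Arrow's theorem shows no classical constitution with the three innocuous-seeming properties escapes such pathologies short of dictatorship.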
Common fixed points in best approximation for Banach operator pairs with Ciric type I-contractions
NASA Astrophysics Data System (ADS)
Hussain, N.
2008-02-01
The common fixed point theorems, similar to those of Ciric [Lj.B. Ciric, On a common fixed point theorem of a Gregus type, Publ. Inst. Math. (Beograd) (N.S.) 49 (1991) 174-178; Lj.B. Ciric, On Diviccaro, Fisher and Sessa open questions, Arch. Math. (Brno) 29 (1993) 145-152; Lj.B. Ciric, On a generalization of Gregus fixed point theorem, Czechoslovak Math. J. 50 (2000) 449-458], Fisher and Sessa [B. Fisher, S. Sessa, On a fixed point theorem of Gregus, Internat. J. Math. Math. Sci. 9 (1986) 23-28], Jungck [G. Jungck, On a fixed point theorem of Fisher and Sessa, Internat. J. Math. Math. Sci. 13 (1990) 497-500] and Mukherjee and Verma [R.N. Mukherjee, V. Verma, A note on fixed point theorem of Gregus, Math. Japon. 33 (1988) 745-749], are proved for a Banach operator pair. As applications, common fixed point and approximation results for Banach operator pair satisfying Ciric type contractive conditions are obtained without the assumption of linearity or affinity of either T or I. Our results unify and generalize various known results to a more general class of noncommuting mappings.
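As background, the contraction-mapping principle that such common fixed point theorems generalize can be illustrated by the textbook iteration x → cos x (an illustration of the classical Banach setting only, not the Ciric-type contractive conditions above):

```python
from math import cos

# cos is a contraction near its fixed point (|cos'(x)| = |sin x| < 1),
# so repeated iteration converges to the unique solution of x = cos x
x = 1.0
for _ in range(100):
    x = cos(x)

# x is now the fixed point to high precision (x* ≈ 0.739085)
assert abs(x - cos(x)) < 1e-12
```

The theorems in the abstract extend this picture to pairs of noncommuting maps T and I without assuming linearity or affinity of either.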
Gillespie, Dirk
2014-11-01
Classical density functional theory (DFT) of fluids is a fast and efficient theory to compute the structure of the electrical double layer in the primitive model of ions where ions are modeled as charged, hard spheres in a background dielectric. While the hard-core repulsive component of this ion-ion interaction can be accurately computed using well-established DFTs, the electrostatic component is less accurate. Moreover, many electrostatic functionals fail to satisfy a basic theorem, the contact density theorem, that relates the bulk pressure, surface charge, and ion densities at their distances of closest approach for ions in equilibrium at a smooth, hard, planar wall. One popular electrostatic functional that fails to satisfy the contact density theorem is a perturbation approach developed by Kierlik and Rosinberg [Phys. Rev. A 44, 5025 (1991)PLRAAN1050-294710.1103/PhysRevA.44.5025] and Rosenfeld [J. Chem. Phys. 98, 8126 (1993)JCPSA60021-960610.1063/1.464569], where the full free-energy functional is Taylor-expanded around a bulk (homogeneous) reference fluid. Here, it is shown that this functional fails to satisfy the contact density theorem because it also fails to satisfy the known low-density limit. When the functional is corrected to satisfy this limit, a corrected bulk pressure is derived and it is shown that with this pressure both the contact density theorem and the Gibbs adsorption theorem are satisfied.
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and, on the other hand, it does not seem easy to prove central limit theorems for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound, like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast-decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast-decaying covariances like the massive Gaussian free field, and determinantal point processes with fast-decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest-neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
Slice profile effects in 2D slice-selective MRI of hyperpolarized nuclei.
Deppe, Martin H; Teh, Kevin; Parra-Robles, Juan; Lee, Kuan J; Wild, Jim M
2010-02-01
This work explores slice profile effects in 2D slice-selective gradient-echo MRI of hyperpolarized nuclei. Two different sequences were investigated: a Spoiled Gradient Echo sequence with variable flip angle (SPGR-VFA) and a balanced Steady-State Free Precession (SSFP) sequence. It is shown that in SPGR-VFA the distribution of flip angles across the slice present in any realistically shaped radiofrequency (RF) pulse leads to large excess signal from the slice edges in later RF views, which results in an undesired non-constant total transverse magnetization, potentially exceeding the initial value by almost 300% for the last RF pulse. A method to reduce this unwanted effect is demonstrated, based on dynamic scaling of the slice selection gradient. SSFP sequences with small to moderate flip angles (<40 degrees) are also shown to preserve the slice profile better than the most commonly used SPGR sequence with constant flip angle (SPGR-CFA). For higher flip angles, the slice profile in SSFP evolves in a manner similar to SPGR-CFA, with depletion of polarization in the center of the slice.
Project analysis and integration economic analyses summary
NASA Technical Reports Server (NTRS)
Macomber, H. L.
1986-01-01
An economic-analysis summary was presented for the manufacture of crystalline-silicon modules involving silicon ingot/sheet growth, slicing, cell manufacture, and module assembly. Economic analyses provided: useful quantitative aspects for complex decision-making to the Flat-plate Solar Array (FSA) Project; yardsticks for design and performance to industry; and demonstration of how to evaluate and understand the worth of research and development both to JPL and other government agencies and programs. It was concluded that future research and development funds for photovoltaics must be provided by the Federal Government because the solar industry today does not reap enough profits from its present-day sales of photovoltaic equipment.
ERIC Educational Resources Information Center
Moen, David H.; Powell, John E.
2008-01-01
Using Microsoft® Excel, several interactive, computerized learning modules are developed to illustrate the Central Limit Theorem's appropriateness for comparing the difference between the means of any two populations. These modules are used in the classroom to enhance the comprehension of this theorem as well as the concepts that provide the…
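The theorem the modules illustrate is easy to check outside Excel as well; below is a minimal seeded simulation (our sketch, not the modules themselves) showing that the difference between two sample means is approximately normal with the predicted mean and variance even when both parent populations are skewed.

```python
# Illustrative simulation (not the Excel modules): CLT for the difference
# between the means of two skewed (exponential) populations.
import random
import statistics

random.seed(42)
n, reps = 50, 2000
diffs = []
for _ in range(reps):
    x = [random.expovariate(1.0) for _ in range(n)]   # population mean 1, var 1
    y = [random.expovariate(0.5) for _ in range(n)]   # population mean 2, var 4
    diffs.append(statistics.mean(x) - statistics.mean(y))

# Theory: E[diff] = 1 - 2 = -1, Var[diff] = (1 + 4) / n = 0.1.
assert abs(statistics.mean(diffs) - (-1.0)) < 0.05
assert abs(statistics.stdev(diffs) - 0.1**0.5) < 0.05
```

A histogram of `diffs` would show the familiar bell shape despite the heavy skew of the exponential parents.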
Optimal Repairman Allocation Models
1976-03-01
state X under policy π. Then lim_{k→0} e(X,k) = 0. (3.1.1) Proof: The result is proven by induction on |C_0(X)|... following theorem. Theorem 3.1D. Under the conditions of Theorem 3.1A, define g^(1)(X); then lim_{k→0} e(X,k)...
ERIC Educational Resources Information Center
Wawro, Megan Jean
2011-01-01
In this study, I considered the development of mathematical meaning related to the Invertible Matrix Theorem (IMT) for both a classroom community and an individual student over time. In this particular linear algebra course, the IMT was a core theorem in that it connected many concepts fundamental to linear algebra through the notion of…
A Converse of Fermat's Little Theorem
ERIC Educational Resources Information Center
Bruckman, P. S.
2007-01-01
As the name of the paper implies, a converse of Fermat's Little Theorem (FLT) is stated and proved. FLT states the following: if p is any prime, and x any integer, then x[superscript p] [equivalent to] x (mod p). There is already a well-known converse of FLT, known as Lehmer's Theorem, which is as follows: if x is an integer coprime with m, such…
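FLT and the failure of its naive converse are easy to check numerically; the following small illustration (ours, not the paper's construction) verifies x^p ≡ x (mod p) for several primes and shows why the unqualified converse fails, using the Carmichael number 561.

```python
# Sketch: checking Fermat's Little Theorem and the failure of its naive converse.
def fermat_holds(p, bases=range(1, 50)):
    """True if x^p ≡ x (mod p) for every test base x."""
    return all(pow(x, p, p) == x % p for x in bases)

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

# FLT: every prime passes the test.
assert all(fermat_holds(p) for p in [2, 3, 5, 7, 11, 13])

# The naive converse is false: 561 = 3 * 11 * 17 is a Carmichael number,
# satisfying x^561 ≡ x (mod 561) for all x despite being composite.
assert fermat_holds(561) and not is_prime(561)
```

This is why a usable converse, such as Lehmer's Theorem, must impose extra conditions beyond the congruence itself.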
Bayes' Theorem: An Old Tool Applicable to Today's Classroom Measurement Needs. ERIC/AE Digest.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
This digest introduces ways of responding to the call for criterion-referenced information using Bayes' Theorem, a method that was coupled with criterion-referenced testing in the early 1970s (see R. Hambleton and M. Novick, 1973). To illustrate Bayes' Theorem, an example is given in which the goal is to classify an examinee as being a master or…
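The master/nonmaster decision the digest describes reduces to a one-line Bayes update on item responses; here is a toy sketch in which all probability values (prior, per-item correctness rates) are made-up illustrations, not numbers from the digest.

```python
# Hypothetical Bayes' Theorem classification of an examinee as master/nonmaster.
# p_correct_master and p_correct_nonmaster are assumed per-item success rates.
def posterior_master(prior, n_correct, n_items,
                     p_correct_master=0.8, p_correct_nonmaster=0.4):
    n_wrong = n_items - n_correct
    like_m = p_correct_master**n_correct * (1 - p_correct_master)**n_wrong
    like_n = p_correct_nonmaster**n_correct * (1 - p_correct_nonmaster)**n_wrong
    # Bayes' Theorem: P(master | data) ∝ P(data | master) * P(master).
    return prior * like_m / (prior * like_m + (1 - prior) * like_n)

# Eight of ten items correct strongly favors the "master" classification.
post = posterior_master(prior=0.5, n_correct=8, n_items=10)
assert post > 0.9
```

The examinee is classified as a master when the posterior exceeds a chosen decision threshold.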
CONTRIBUTIONS TO RATIONAL APPROXIMATION,
Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar's...linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev... Furthermore, a Weierstrass-type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)
NASA Astrophysics Data System (ADS)
Rerikh, K. V.
1998-02-01
Using classic results of algebraic geometry for birational mappings of the plane CP^2, we present a general approach to algebraic integrability of autonomous dynamical systems in C^2 with discrete time and of systems of two autonomous functional equations for meromorphic functions of one complex variable defined by birational maps in C^2. General theorems defining the invariant curves and the dynamics of a birational mapping, and a general theorem on necessary and sufficient conditions for integrability of birational plane mappings, are proved on the basis of a new idea: a decomposition of the orbit set of indeterminacy points of the direct map relative to the action of the inverse mapping. A general method of generating integrable mappings and their rational integrals (invariants) I is proposed. Numerical characteristics N_k of the intersections of the orbits Φ_n^{-k}O_i of the fundamental (indeterminacy) points O_i ∈ O ∩ S of the mapping Φ_n, where O = {O_i} is the set of indeterminacy points of Φ_n and S is the similar set for the invariant I, with the corresponding set O' ∩ S, where O' = {O'_i} is the set of indeterminacy points of the inverse mapping Φ_n^{-1}, are introduced. Using the proposed method we obtain all nine integrable multiparameter quadratic birational reversible mappings with zero fixed point and linear projective symmetry S = CΛC^{-1}, Λ = diag(±1), with rational invariants generated by invariant straight lines and conics. For the integrable mappings obtained, the relations of the numbers N_k to such numerical characteristics of discrete dynamical systems as the Arnold complexity and their integrability are established, and their Arnold complexities are determined. The main results are presented in Theorems 2-5, in Tables 1 and 2, and in Appendix A.
The topology of large-scale structure. VI - Slices of the universe
NASA Astrophysics Data System (ADS)
Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.
1992-03-01
Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cloud dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology like those of observed slices.
Norris, Joseph M; Kishikova, Lyudmila; Avadhanam, Venkata S; Koumellis, Panos; Francis, Ian S; Liu, Christopher S C
2015-08-01
To investigate the efficacy of 640-slice multidetector computed tomography (MDCT) for detecting osteo-odonto laminar resorption in the osteo-odonto-keratoprosthesis (OOKP) compared with the current standard 32-slice MDCT. Explanted OOKP laminae and bone-dentine fragments were scanned using 640-slice MDCT (Aquilion ONE; Toshiba) and 32-slice MDCT (LightSpeed Pro32; GE Healthcare). Pertinent comparisons including image quality, radiation dose, and scanning parameters were made. Benefits of 640-slice MDCT over 32-slice MDCT were shown. Key comparisons of 640-slice MDCT versus 32-slice MDCT included the following: percentage difference and correlation coefficient between radiological and anatomical measurements, 1.35% versus 3.67% and 0.9961 versus 0.9882, respectively; dose-length product, 63.50 versus 70.26; rotation time, 0.175 seconds versus 1.000 seconds; and detector coverage width, 16 cm versus 2 cm. Resorption of the osteo-odonto lamina after OOKP surgery can result in potentially sight-threatening complications, hence it warrants regular monitoring and timely intervention. MDCT remains the gold standard for radiological assessment of laminar resorption, which facilitates detection of subtle laminar changes earlier than the onset of clinical signs, thus indicating when preemptive measures can be taken. The 640-slice MDCT exhibits several advantages over traditional 32-slice MDCT. However, such benefits may not offset cost implications, except in rare cases, such as in young patients who might undergo years of radiation exposure.
Generalization of the Bogoliubov-Zubarev Theorem for Dynamic Pressure to the Case of Compressibility
NASA Astrophysics Data System (ADS)
Rudoi, Yu. G.
2018-01-01
We present the motivation, formulation, and modified proof of the Bogoliubov-Zubarev theorem connecting the pressure of a dynamical object with its energy within the framework of a classical description and obtain a generalization of this theorem to the case of dynamical compressibility. In both cases, we introduce the volume of the object into consideration using a singular addition to the Hamiltonian function of the physical object, which allows using the concept of the Bogoliubov quasiaverage explicitly already on a dynamical level of description. We also discuss the relation to the same result known as the Hellmann-Feynman theorem in the framework of the quantum description of a physical object.
Some constructions of biharmonic maps and Chen’s conjecture on biharmonic hypersurfaces
NASA Astrophysics Data System (ADS)
Ou, Ye-Lin
2012-04-01
We give several construction methods and use them to produce many examples of proper biharmonic maps including biharmonic tori of any dimension in Euclidean spheres (Theorem 2.2, Corollaries 2.3, 2.4 and 2.6), biharmonic maps between spheres (Theorem 2.9) and into spheres (Theorem 2.10) via orthogonal multiplications and eigenmaps. We also study biharmonic graphs of maps, derive the equation for a function whose graph is a biharmonic hypersurface in a Euclidean space, and give an equivalent formulation of Chen's conjecture on biharmonic hypersurfaces by using the biharmonic graph equation (Theorem 4.1) which paves a way for the analytic study of the conjecture.
Reciprocity relations in aerodynamics
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Spreiter, John R
1953-01-01
Reverse flow theorems in aerodynamics are shown to be based on the same general concepts involved in many reciprocity theorems in the physical sciences. Reciprocal theorems for both steady and unsteady motion are found as a logical consequence of this approach. No restrictions on wing plan form or flight Mach number are made beyond those required in linearized compressible-flow analysis. A number of examples are listed, including general integral theorems for lifting, rolling, and pitching wings and for wings in nonuniform downwash fields. Correspondence is also established between the buildup of circulation with time of a wing starting impulsively from rest and the buildup of lift of the same wing moving in the reverse direction into a sharp-edged gust.
Berezhkovskii, Alexander M; Bezrukov, Sergey M
2008-05-15
In this paper, we discuss the fluctuation theorem for channel-facilitated transport of solutes through a membrane separating two reservoirs. The transport is characterized by the probability, P_n(t), that n solute particles have been transported from one reservoir to the other in time t. The fluctuation theorem establishes a relation between P_n(t) and P_{-n}(t): the ratio P_n(t)/P_{-n}(t) is independent of time and equal to exp(nβA), where βA is the affinity measured in thermal energy units. We show that the same fluctuation theorem is true for both single- and multichannel transport of noninteracting particles and of particles which strongly repel each other.
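The time-independence of this ratio can be verified in a toy model; for a discrete-time biased random walk (our stand-in, not the paper's channel model), the ratio of the probabilities of net displacements +n and -n equals (p/q)^n at every time t, with p/q playing the role of exp(βA).

```python
# Minimal check of a fluctuation-theorem-style relation for a biased
# random walk with forward probability p and backward probability q = 1 - p.
from math import comb

def p_n(n, t, p):
    """Probability of net displacement n after t steps."""
    if (t + n) % 2 or abs(n) > t:
        return 0.0
    k = (t + n) // 2                       # number of forward steps
    return comb(t, k) * p**k * (1 - p)**(t - k)

p = 0.7
for t in (10, 20, 50):
    for n in (2, 4):
        ratio = p_n(n, t, p) / p_n(-n, t, p)
        # Ratio is (p/q)^n, independent of t.
        assert abs(ratio - (p / (1 - p))**n) < 1e-9
```

The cancellation of the binomial coefficients in the ratio is what makes it time-independent, mirroring the structure of the theorem in the abstract.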
One-range addition theorems for derivatives of Slater-type orbitals.
Guseinov, Israfil
2004-06-01
Using addition theorems for STOs introduced by the author with the help of complete orthonormal sets of psi(alpha)-ETOs (Guseinov II (2003) J Mol Model 9:190-194), where alpha=1, 0, -1, -2, ..., a large number of one-range addition theorems for first and second derivatives of STOs are established. These addition theorems are especially useful for computation of multicenter-multielectron integrals over STOs that arise in the Hartree-Fock-Roothaan approximation and also in the Hylleraas function method, which play a significant role for the study of electronic structure and electron-nuclei interaction properties of atoms, molecules, and solids. The relationships obtained are valid for arbitrary quantum numbers, screening constants and location of STOs.
Out-of-time-order fluctuation-dissipation theorem
NASA Astrophysics Data System (ADS)
Tsuji, Naoto; Shitara, Tomohiro; Ueda, Masahito
2018-01-01
We prove a generalized fluctuation-dissipation theorem for a certain class of out-of-time-ordered correlators (OTOCs) with a modified statistical average, which we call bipartite OTOCs, for general quantum systems in thermal equilibrium. The difference between the bipartite and physical OTOCs defined by the usual statistical average is quantified by a measure of quantum fluctuations known as the Wigner-Yanase skew information. Within this difference, the theorem describes a universal relation between chaotic behavior in quantum systems and a nonlinear-response function that involves a time-reversed process. We show that the theorem can be generalized to higher-order n -partite OTOCs as well as in the form of generalized covariance.
Some theorems and properties of multi-dimensional fractional Laplace transforms
NASA Astrophysics Data System (ADS)
Ahmood, Wasan Ajeel; Kiliçman, Adem
2016-06-01
The aim of this work is to study theorems and properties of the one-dimensional fractional Laplace transform, to generalize some of these properties to the multi-dimensional fractional Laplace transform, and to give the definition of the multi-dimensional fractional Laplace transform. This study covers the one-dimensional fractional Laplace transform for functions of a single independent variable, together with some important theorems and properties, and develops some of these properties for the multi-dimensional case. We also obtain a fractional Laplace inversion theorem after a short survey of fractional analysis based on the modified Riemann-Liouville derivative.
Thin slices of child personality: Perceptual, situational, and behavioral contributions.
Tackett, Jennifer L; Herzhoff, Kathrin; Kushner, Shauna C; Rule, Nicholas
2016-01-01
The present study examined whether thin-slice ratings of child personality serve as a resource-efficient and theoretically valid measurement of child personality traits. We extended theoretical work on the observability, perceptual accuracy, and situational consistency of childhood personality traits by examining intersource and interjudge agreement, cross-situational consistency, and convergent, divergent, and predictive validity of thin-slice ratings. Forty-five unacquainted independent coders rated 326 children's (ages 8-12) personality in 1 of 15 thin-slice behavioral scenarios (i.e., 3 raters per slice, for over 14,000 independent thin-slice ratings). Mothers, fathers, and children rated children's personality, psychopathology, and competence. We found robust evidence for correlations between thin-slice and mother/father ratings of child personality, within- and across-task consistency of thin-slice ratings, and convergent and divergent validity with psychopathology and competence. Surprisingly, thin-slice ratings were more consistent across situations in this child sample than previously found for adults. Taken together, these results suggest that thin slices are a valid and reliable measure to assess child personality, offering a useful method of measurement beyond questionnaires, helping to address novel questions of personality perception and consistency in childhood.
High-order multiband encoding in the heart.
Cunningham, Charles H; Wright, Graham A; Wood, Michael L
2002-10-01
Spatial encoding with multiband selective excitation (e.g., Hadamard encoding) has been restricted to a small number of slices because the RF pulse becomes unacceptably long when more than about eight slices are encoded. In this work, techniques to shorten multiband RF pulses, and thus allow larger numbers of slices, are investigated. A method for applying the techniques while retaining the capability of adaptive slice thickness is outlined. A tradeoff between slice thickness and pulse duration is shown. Simulations and experiments with the shortened pulses confirmed that motion-induced excitation profile blurring and phase accrual were reduced. The connection between gradient hardware limitations, slice thickness, and flow sensitivity is shown. Excitation profiles for encoding 32 contiguous slices of 1-mm thickness were measured experimentally, and the artifact resulting from errors in timing of RF pulse relative to gradient was investigated. A multiband technique for imaging 32 contiguous 2-mm slices, with adaptive slice thickness, was developed and demonstrated for coronary artery imaging in healthy subjects. With the ability to image high numbers of contiguous slices, using relatively short (1-2 ms) RF pulses, multiband encoding has been advanced further toward practical application.
A coupled mode formulation by reciprocity and a variational principle
NASA Technical Reports Server (NTRS)
Chuang, Shun-Lien
1987-01-01
A coupled mode formulation for parallel dielectric waveguides is presented via two methods: a reciprocity theorem and a variational principle. In the first method, a generalized reciprocity relation for two sets of field solutions satisfying Maxwell's equations and the boundary conditions in two different media, respectively, is derived. Based on the generalized reciprocity theorem, the coupled mode equations can then be formulated. The second method using a variational principle is also presented for a general waveguide system which can be lossy. The results of the variational principle can also be shown to be identical to those from the reciprocity theorem. The exact relations governing the 'conventional' and the new coupling coefficients are derived. It is shown analytically that the present formulation satisfies the reciprocity theorem and power conservation exactly, while the conventional theory violates the power conservation and reciprocity theorem by as much as 55 percent and the Hardy-Streifer (1985, 1986) theory by 0.033 percent, for example.
Abildtrup, Jens; Jensen, Frank; Dubgaard, Alex
2012-01-01
The Coase theorem depends on a number of assumptions, among others, perfect information about each other's payoff function, maximising behaviour and zero transaction costs. An important question is whether the Coase theorem holds for real market transactions when these assumptions are violated. This is the question examined in this paper. We consider the results of Danish waterworks' attempts to establish voluntary cultivation agreements with Danish farmers. A survey of these negotiations shows that the Coase theorem is not robust in the presence of imperfect information, non-maximising behaviour and transaction costs. Thus, negotiations between Danish waterworks and farmers may not be a suitable mechanism to achieve efficiency in the protection of groundwater quality due to violations of the assumptions of the Coase theorem. The use of standard schemes or government intervention (e.g. expropriation) may, under some conditions, be a more effective and cost efficient approach for the protection of vulnerable groundwater resources in Denmark.
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony J.; Munoz, Cesar A.
2014-01-01
Sturm's Theorem is a well-known result in real algebraic geometry that provides a function that computes the number of roots of a univariate polynomial in a semiopen interval. This paper presents a formalization of this theorem in the PVS theorem prover, as well as a decision procedure that checks whether a polynomial is always positive, nonnegative, nonzero, negative, or nonpositive on any input interval. The soundness and completeness of the decision procedure is proven in PVS. The procedure and its correctness properties enable the implementation of a PVS strategy for automatically proving existential and universal univariate polynomial inequalities. Since the decision procedure is formally verified in PVS, the soundness of the strategy depends solely on the internal logic of PVS rather than on an external oracle. The procedure itself uses a combination of Sturm's Theorem, an interval bisection procedure, and the fact that a polynomial with exactly one root in a bounded interval is always nonnegative on that interval if and only if it is nonnegative at both endpoints.
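The root-counting core of Sturm's Theorem can be sketched outside PVS; the rough, unverified Python version below (our illustration, not the formalization described in the abstract) builds the Sturm sequence and counts real roots in a half-open interval as the difference in sign changes at the endpoints.

```python
# Sketch of Sturm's Theorem: count real roots of a polynomial in (a, b].
# Coefficients are stored low-to-high; float tolerances are ad hoc.
def poly_div_rem(num, den):
    """Remainder of polynomial division; coefficients low-to-high."""
    num = num[:]
    while len(num) >= len(den) and any(num):
        factor = num[-1] / den[-1]
        shift = len(num) - len(den)
        for i, c in enumerate(den):
            num[i + shift] -= factor * c
        while num and abs(num[-1]) < 1e-12:
            num.pop()
    return num or [0.0]

def sturm_sequence(p):
    dp = [i * c for i, c in enumerate(p)][1:]          # derivative of p
    seq = [p, dp]
    while len(seq[-1]) > 1 or seq[-1][0]:
        rem = [-c for c in poly_div_rem(seq[-2], seq[-1])]
        if all(abs(c) < 1e-12 for c in rem):
            break
        seq.append(rem)
    return seq

def sign_changes(seq, x):
    vals = [sum(c * x**i for i, c in enumerate(p)) for p in seq]
    signs = [v for v in vals if abs(v) > 1e-12]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def count_roots(p, a, b):
    seq = sturm_sequence(p)
    return sign_changes(seq, a) - sign_changes(seq, b)

# p(x) = (x - 1)(x + 2)x = x^3 + x^2 - 2x has roots -2, 0, 1.
assert count_roots([0.0, -2.0, 1.0, 1.0], -3.0, 3.0) == 3
assert count_roots([0.0, -2.0, 1.0, 1.0], 0.5, 3.0) == 1
```

Combined with bisection and endpoint checks, this is the kind of computation the verified PVS procedure makes trustworthy.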
Hyperpolarized 13C MR Markers of Renal Tumor Aggressiveness
2014-10-01
as a biomarker of tumor aggressiveness in a MR-compatible 3D cell and tissue culture bioreactor" to be presented at the ISMRM Workshop on Magnetic... Cell Carcinoma, Hyperpolarized 13C MR, sub-renal capsule, patient-derived tissue slice cultures, bioreactor. 3. OVERALL PROJECT SUMMARY: Aim...grade from high-grade RCCs using human TSCs cultured in a bioreactor. Aim 2: Identify HP 13C metabolic markers that discriminate low-grade from
Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images
NASA Astrophysics Data System (ADS)
Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.
2008-03-01
Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
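The thresholding step described above reduces to a voxel count over the reconstructed volume; here is a simplified sketch with a toy volume, where the `percent_density` function, the threshold value, and the mask are our illustrative assumptions, not the authors' pipeline.

```python
# Illustrative percent-density computation by thresholding a 3D volume.
# Assumes background and pectoral muscle are already excluded via the mask.
import numpy as np

def percent_density(volume, threshold, breast_mask):
    """PD = dense voxels / breast voxels * 100, inside the breast mask."""
    dense = (volume >= threshold) & breast_mask
    return 100.0 * dense.sum() / breast_mask.sum()

# Toy volume: 4 "slices" of 10 x 10 voxels, half of the masked voxels dense.
vol = np.zeros((4, 10, 10))
vol[:, :, :5] = 1.0                       # "dense" half of each slice
mask = np.ones_like(vol, dtype=bool)
pd = percent_density(vol, threshold=0.5, breast_mask=mask)
assert abs(pd - 50.0) < 1e-9
```

In the study, the threshold is chosen manually on a few slices and then applied to the whole reconstructed DBT volume; the counting step itself is as above.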
NASA Astrophysics Data System (ADS)
Stock, Michala K.; Stull, Kyra E.; Garvin, Heather M.; Klales, Alexandra R.
2016-10-01
Forensic anthropologists are routinely asked to estimate a biological profile (i.e., age, sex, ancestry and stature) from a set of unidentified remains. In contrast to the abundance of collections and techniques associated with adult skeletons, there is a paucity of modern, documented subadult skeletal material, which limits the creation and validation of appropriate forensic standards. Many are forced to use antiquated methods derived from small sample sizes, which given documented secular changes in the growth and development of children, are not appropriate for application in the medico-legal setting. Therefore, the aim of this project is to use multi-slice computed tomography (MSCT) data from a large, diverse sample of modern subadults to develop new methods to estimate subadult age and sex for practical forensic applications. The research sample will consist of over 1,500 full-body MSCT scans of modern subadult individuals (aged birth to 20 years) obtained from two U.S. medical examiner's offices. Statistical analysis of epiphyseal union scores, long bone osteometrics, and os coxae landmark data will be used to develop modern subadult age and sex estimation standards. This project will result in a database of information gathered from the MSCT scans, as well as the creation of modern, statistically rigorous standards for skeletal age and sex estimation in subadults. Furthermore, the research and methods developed in this project will be applicable to dry bone specimens, MSCT scans, and radiographic images, thus providing both tools and continued access to data for forensic practitioners in a variety of settings.
Some functional limit theorems for compound Cox processes
NASA Astrophysics Data System (ADS)
Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.
2016-06-01
An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
Quantum Mechanics, Can It Be Consistent with Locality?
NASA Astrophysics Data System (ADS)
Nisticò, Giuseppe; Sestito, Angela
2011-07-01
We single out an alternative, strict interpretation of the Einstein-Podolsky-Rosen criterion of reality, and identify the implied extensions of quantum correlations. Then we prove that the theorem of Bell, and the non-locality theorems without inequalities, fail if the new extensions are adopted. Therefore, these theorems can be interpreted as arguments against the wide interpretation of the criterion of reality rather than as a violation of locality.
2016-02-01
proof in mathematics. For example, consider the proof of the Pythagorean Theorem illustrated at http://www.cut-the-knot.org/pythagoras/ where 112...methods and tools have made significant progress in their ability to model software designs and prove correctness theorems about the systems modeled..."assumption criticality" or "theorem root set size" SITAPS detects potentially brittle verification cases. SITAPS provides tools and techniques that
Delaunay Refinement Mesh Generation
1997-05-18
edge is locally Delaunay; thus, by Lemma 3, every edge is Delaunay. Theorem 5: Let V be a set of three or more vertices in the plane that are not all… this document. Delaunay triangulations are valuable in part because they have the following optimality properties. Theorem 6: Among all triangulations of… no locally non-Delaunay edges. By Theorem 5, a triangulation with no locally non-Delaunay edges is the Delaunay triangulation. The property of max-min
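The locally Delaunay property invoked in the excerpt reduces to the classical incircle predicate: an edge shared by two triangles is locally Delaunay when each triangle's circumcircle excludes the opposite vertex. A minimal sketch (function names are ours, not from the report):

```python
def incircle(a, b, c, d):
    # Sign of the 3x3 determinant deciding whether d lies inside the
    # circumcircle of counter-clockwise triangle (a, b, c): > 0 means inside.
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax*ax + ay*ay) * (bx*cy - cx*by)
            - (bx*bx + by*by) * (ax*cy - cx*ay)
            + (cx*cx + cy*cy) * (ax*by - bx*ay))

def locally_delaunay(p, q, r, s):
    # Edge (p, q) shared by CCW triangle (p, q, r) and opposite vertex s
    # is locally Delaunay iff s is not strictly inside that circumcircle.
    return incircle(p, q, r, s) <= 0

# s just inside the circumcircle of the right triangle -> edge needs a flip.
flip_needed = not locally_delaunay((0, 0), (1, 0), (1, 1), (0.5, -0.2))
```

Delaunay refinement repeatedly flips edges failing this test; exact-arithmetic versions of the predicate are used in robust implementations.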
Development of a Dependency Theory Toolbox for Database Design.
1987-12-01
published algorithms and theorems, and hand simulating these algorithms can be a tedious and error-prone chore. Additionally, since the process of… to design and study relational databases exists in the form of published algorithms and theorems. However, hand simulating these algorithms can be a… published algorithms and theorems. Hand simulating these algorithms can be a tedious and error-prone chore. Therefore, a toolbox of algorithms and
Field Computation and Nonpropositional Knowledge.
1987-09-01
field computer. It is based on a generalization of Taylor's theorem to continuous-dimensional vector spaces… generalization of Taylor's theorem to continuous-dimensional vector spaces. A number of field computations are illustrated, including several transforma… paradigm. The "old" AI has been quite successful in performing a number of difficult tasks, such as theorem proving, chess playing, medical diagnosis and
Ignoring the Innocent: Non-combatants in Urban Operations and in Military Models and Simulations
2006-01-01
such a model yields is a sufficiency theorem; a single run does not provide any information on the robustness of such theorems. That is, given that… often formally resolvable via inspection, simple differentiation, the implicit function theorem, comparative statics, and so on. The only way to… Pythagoras, and Bactowars. For each, Grieger discusses model parameters, data collection, terrain, and other features. Grieger also discusses
Some functional limit theorems for compound Cox processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korolev, Victor Yu.; Institute of Informatics Problems FRC CSC RAS; Chertok, A. V.
2016-06-08
An improved version of the functional limit theorem is proved, establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on the convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
NASA Astrophysics Data System (ADS)
Fan, Hong-yi; Xu, Xue-xiang
2009-06-01
By virtue of the generalized Hellmann-Feynman theorem [H. Y. Fan and B. Z. Chen, Phys. Lett. A 203, 95 (1995)], we derive the mean energy of some interacting bosonic systems for some Hamiltonian models without proceeding with diagonalizing the Hamiltonians. Our work extends the field of applications of the Hellmann-Feynman theorem and may enrich the theory of quantum statistics.
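For context, the classical Hellmann-Feynman theorem that the cited generalization extends can be stated as follows: for a Hamiltonian H(λ) depending on a parameter λ, with normalized eigenstate |ψ(λ)⟩ and eigenvalue E(λ),

    ∂E(λ)/∂λ = ⟨ψ(λ)| ∂H(λ)/∂λ |ψ(λ)⟩,

so mean energies and expectation values follow from differentiation with respect to a parameter rather than from diagonalizing H, which is precisely the shortcut the abstract exploits.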
Reduction theorems for optimal unambiguous state discrimination of density matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raynal, Philippe; Luetkenhaus, Norbert; Enk, Steven J. van
2003-08-01
We present reduction theorems for the problem of optimal unambiguous state discrimination of two general density matrices. We show that this problem can be reduced to that of two density matrices that have the same rank n and are described in a Hilbert space of dimension 2n. We also show how to use the reduction theorems to discriminate unambiguously between N mixed states (N ≥ 2)
Proof of factorization using background field method of QCD
NASA Astrophysics Data System (ADS)
Nayak, Gouranga C.
2010-02-01
The factorization theorem plays a central role at high-energy colliders in the study of standard model and beyond-standard-model physics. The proof of the factorization theorem was given by Collins, Soper, and Sterman to all orders in perturbation theory using a diagrammatic approach. One might wonder if one can obtain the proof of the factorization theorem through symmetry considerations at the Lagrangian level. In this paper we provide such a proof.
Proof of factorization using background field method of QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nayak, Gouranga C.
The factorization theorem plays a central role at high-energy colliders in the study of standard model and beyond-standard-model physics. The proof of the factorization theorem was given by Collins, Soper, and Sterman to all orders in perturbation theory using a diagrammatic approach. One might wonder if one can obtain the proof of the factorization theorem through symmetry considerations at the Lagrangian level. In this paper we provide such a proof.
Formalization of the Integral Calculus in the PVS Theorem Prover
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
2004-01-01
The PVS theorem prover is a widely used formal verification tool for the analysis of safety-critical systems. The PVS prover, though fully equipped to support deduction in a very general logic framework, namely higher-order logic, must nevertheless be augmented with the definitions and associated theorems for every branch of mathematics and computer science that is used in a verification. This is a formidable task, ultimately requiring the contributions of researchers and developers all over the world. This paper reports on the formalization of the integral calculus in the PVS theorem prover. All of the basic definitions and theorems covered in a first course on integral calculus have been completed. The theory and proofs were based on Rosenlicht's classic text on real analysis and follow the traditional epsilon-delta method. The goal of this work was to provide a practical set of PVS theories that could be used for verification of hybrid systems that arise in air traffic management systems and other aerospace applications. All of the basic linearity, integrability, boundedness, and continuity properties of the integral calculus were proved. The work culminated in the proof of the Fundamental Theorem of Calculus. There is a brief discussion about why mechanically checked proofs are so much longer than standard mathematics textbook proofs.
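For reference, the culminating result of the formalization in its standard textbook form: if f is continuous on [a, b] and

    F(x) = ∫_a^x f(t) dt,

then F is differentiable on (a, b) with F′(x) = f(x), and consequently ∫_a^b f(t) dt = G(b) − G(a) for any antiderivative G of f. The mechanized proof must make explicit every epsilon-delta step that a textbook compresses, which is why such proofs run much longer.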
Model based LV-reconstruction in bi-plane x-ray angiography
NASA Astrophysics Data System (ADS)
Backfrieder, Werner; Carpella, Martin; Swoboda, Roland; Steinwender, Clemens; Gabriel, Christian; Leisch, Franz
2005-04-01
Interventional x-ray angiography is state of the art in the diagnosis and therapy of severe diseases of the cardiovascular system. Diagnosis is based on contrast-enhanced dynamic projection images of the left ventricle. A new model-based algorithm for three-dimensional reconstruction of the left ventricle from bi-planar angiograms was developed. Parametric superellipses are deformed until their projection profiles optimally fit measured ventricular projections. Deformation is controlled by a simplex optimization procedure. The resulting optimized parameter set builds the initial guess for neighboring slices. A three-dimensional surface model of the ventricle is built from stacked contours. The accuracy of the algorithm has been tested with mathematical phantom data and clinical data. Results show conformance with the provided projection data, and the high convergence speed makes the algorithm useful for clinical application. Fully three-dimensional reconstruction of the left ventricle has a high potential for improving clinical findings in interventional cardiology.
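A rough illustration of the model-fitting idea, not the authors' implementation: a coarse grid search stands in for the simplex optimizer, and a plain ellipse (the n = 2 superellipse) stands in for the full parametric model. All names and numbers are hypothetical.

```python
import math

def superellipse_residual(points, a, b, n=2.0):
    # Sum of squared deviations of |x/a|^n + |y/b|^n from 1 over the contour.
    return sum((abs(x/a)**n + abs(y/b)**n - 1.0)**2 for x, y in points)

def fit_superellipse(points, n=2.0, grid=None):
    # Toy stand-in for the simplex search: brute-force the semi-axes
    # (a, b) over a coarse grid and keep the best-fitting pair.
    grid = grid or [0.5 + 0.1*k for k in range(40)]
    return min(((a, b) for a in grid for b in grid),
               key=lambda ab: superellipse_residual(points, *ab, n))

# Synthetic "measured projection contour": an ellipse with a = 2.0, b = 1.5.
pts = [(2.0*math.cos(t), 1.5*math.sin(t)) for t in (0.1*k for k in range(63))]
a_hat, b_hat = fit_superellipse(pts)
```

In the paper's setting, the optimized parameter set for one slice would seed the search for the neighboring slice, which is why convergence is fast.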
The value prescription: relative value theorem as a call to action.
Alston, Greg L; Blizzard, Joseph C
2012-01-01
The Joint Commission of Pharmacy Practitioners Future Vision of Pharmacy Practice 2015 (2005) and Project Destiny (2008) clearly defined a vision for transforming community practice pharmacy from a culture of dispensing drugs to the provision of services. Several viable service offerings were identified. Pharmacy has not yet fully capitalized on these opportunities. Pharmacy must demonstrate value in providing these services to remain viable in the marketplace. Many pharmacists do not understand how value is created and lack sufficient marketing skills to position their practice for long-term success. The relative value theorem (RVT) describes in simple terms the key elements that drive purchase decisions and thus marketing decisions: (P+S)×PV=RV (P, price; S, service; PV, perceived value; RV, relative value). A consumer compares the P, extra S, and PV of the purchase against all potential uses of their scarce resources before deciding what to buy. Evidence suggests that understanding and applying the principles of RVT is a critical skill for pharmacy professionals in all practice settings to master if they plan to remain viable players in the health care marketplace of the future. Copyright © 2012 Elsevier Inc. All rights reserved.
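The relative value theorem quoted in the abstract is a one-line formula; a minimal sketch with purely illustrative scores (all numbers hypothetical):

```python
def relative_value(price_score, service_score, perceived_value):
    # RVT from the abstract: RV = (P + S) x PV.
    return (price_score + service_score) * perceived_value

# Two hypothetical offerings a consumer might compare.
pharmacy = relative_value(3, 4, 2)    # modest price score, strong service, high perceived value
mail_order = relative_value(5, 1, 1)  # cheap, little service, low perceived value
```

On these made-up scores the service-heavy offering wins (14 vs 6), which is the abstract's point: value, not price alone, drives the purchase decision.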
The Knaster-Kuratowski-Mazurkiewicz theorem and abstract convexities
NASA Astrophysics Data System (ADS)
Cain, George L., Jr.; González, Luis
2008-02-01
The Knaster-Kuratowski-Mazurkiewicz covering theorem (KKM), is the basic ingredient in the proofs of many so-called "intersection" theorems and related fixed point theorems (including the famous Brouwer fixed point theorem). The KKM theorem was extended from Rn to Hausdorff linear spaces by Ky Fan. There has subsequently been a plethora of attempts at extending the KKM type results to arbitrary topological spaces. Virtually all these involve the introduction of some sort of abstract convexity structure for a topological space, among others we could mention H-spaces and G-spaces. We have introduced a new abstract convexity structure that generalizes the concept of a metric space with a convex structure, introduced by E. Michael in [E. Michael, Convex structures and continuous selections, Canad. J. Math. 11 (1959) 556-575] and called a topological space endowed with this structure an M-space. In an article by Shie Park and Hoonjoo Kim [S. Park, H. Kim, Coincidence theorems for admissible multifunctions on generalized convex spaces, J. Math. Anal. Appl. 197 (1996) 173-187], the concepts of G-spaces and metric spaces with Michael's convex structure were mentioned together but no kind of relationship was shown. In this article, we prove that G-spaces and M-spaces are closely related. We also introduce here the concept of an L-space, which is inspired by the MC-spaces of J.V. Llinares [J.V. Llinares, Unified treatment of the problem of existence of maximal elements in binary relations: A characterization, J. Math. Econom. 29 (1998) 285-302], and establish relationships between the convexities of these spaces with the spaces previously mentioned.
NASA Astrophysics Data System (ADS)
Carozzi, T. D.; Woan, G.
2009-05-01
We derive a generalized van Cittert-Zernike (vC-Z) theorem for radio astronomy that is valid for partially polarized sources over an arbitrarily wide field of view (FoV). The classical vC-Z theorem is the theoretical foundation of radio astronomical interferometry, and its application is the basis of interferometric imaging. Existing generalized vC-Z theorems in radio astronomy assume, however, either paraxiality (narrow FoV) or scalar (unpolarized) sources. Our theorem uses neither of these assumptions, which are seldom fulfilled in practice in radio astronomy, and treats the full electromagnetic field. To handle wide, partially polarized fields, we extend the two-dimensional (2D) electric field (Jones vector) formalism of the standard `Measurement Equation' (ME) of radio astronomical interferometry to the full three-dimensional (3D) formalism developed in optical coherence theory. The resulting vC-Z theorem enables full-sky imaging in a single telescope pointing, and imaging based not only on standard dual-polarized interferometers (that measure 2D electric fields) but also electric tripoles and electromagnetic vector-sensor interferometers. We show that the standard 2D ME is easily obtained from our formalism in the case of dual-polarized antenna element interferometers. We also exploit an extended 2D ME to determine that dual-polarized interferometers can have polarimetric aberrations at the edges of a wide FoV. Our vC-Z theorem is particularly relevant to proposed, and recently developed, wide-FoV interferometers such as the Low Frequency Array (LOFAR) and the Square Kilometre Array (SKA), for which direction-dependent effects will be important.
Retrospective detection of interleaved slice acquisition parameters from fMRI data
Parker, David; Rotival, Georges; Laine, Andrew; Razlighi, Qolamreza R.
2015-01-01
To minimize slice excitation leakage to adjacent slices, interleaved slice acquisition is nowadays performed regularly in fMRI scanners. In interleaved slice acquisition, the number of slices skipped between two consecutive slice acquisitions is often referred to as the ‘interleave parameter’; the loss of this parameter can be catastrophic for the analysis of fMRI data. In this article we present a method to retrospectively detect the interleave parameter and the axis in which it is applied. Our method relies on the smoothness of the temporal-distance correlation function, which becomes disrupted along the axis on which interleaved slice acquisition is applied. We examined this method on simulated and real data in the presence of fMRI artifacts such as physiological noise, motion, etc. We also examined the reliability of this method in detecting different types of interleave parameters and demonstrated an accuracy of about 94% in more than 1000 real fMRI scans. PMID:26161244
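A toy illustration of the detection idea, not the authors' method: the ordering scheme, the signal model (similarity decaying with acquisition-time gap), and the scoring are simplified assumptions. The candidate interleave parameter under which consecutively *acquired* slices are most correlated is selected.

```python
import math, random

random.seed(0)

def interleave_order(n_slices, k):
    # Acquisition order for interleave parameter k: slices 0, k, 2k, ...,
    # then 1, 1+k, ... (one common interleaved scheme; scanners vary).
    order = []
    for start in range(k):
        order.extend(range(start, n_slices, k))
    return order

def pearson(u, v):
    mu, mv = sum(u)/len(u), sum(v)/len(v)
    num = sum((a-mu)*(b-mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a-mu)**2 for a in u) * sum((b-mv)**2 for b in v))
    return num / den

# Simulate slice time series whose similarity decays with the gap in
# acquisition time (a crude stand-in for the temporal-distance effect).
n_slices, n_vols, true_k = 12, 200, 3
acq_time = {s: t for t, s in enumerate(interleave_order(n_slices, true_k))}
base = [math.sin(0.2*i) + 0.5*math.sin(0.31*i + 1.0)
        for i in range(n_vols + n_slices)]
series = {s: [base[v + acq_time[s]] + random.gauss(0, 0.1)
              for v in range(n_vols)] for s in range(n_slices)}

def smoothness(k):
    # Mean correlation of consecutively acquired slices under candidate k;
    # the true interleave parameter should maximize it.
    order = interleave_order(n_slices, k)
    return sum(pearson(series[a], series[b])
               for a, b in zip(order, order[1:])) / (n_slices - 1)

detected_k = max(range(1, 5), key=smoothness)
```

Real fMRI data adds motion and physiological noise, which is why the paper reports accuracy (about 94%) rather than exact recovery.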
Loeffler, Ralf B; McCarville, M Beth; Wagstaff, Anne W; Smeltzer, Matthew P; Krafft, Axel J; Song, Ruitian; Hankins, Jane S; Hillenbrand, Claudia M
2017-01-01
Liver R2* values calculated from multi-gradient echo (mGRE) magnetic resonance images (MRI) are strongly correlated with hepatic iron concentration (HIC) as shown in several independently derived biopsy calibration studies. These calibrations were established for axial single-slice breath-hold imaging at the location of the portal vein. Scanning in multi-slice mode makes the exam more efficient, since whole-liver coverage can be achieved with two breath-holds and the optimal slice can be selected afterward. Navigator echoes remove the need for breath-holds and allow use in sedated patients. To evaluate if the existing biopsy calibrations can be applied to multi-slice and navigator-controlled mGRE imaging in children with hepatic iron overload, by testing if there is a bias-free correlation between single-slice R2* and multi-slice or multi-slice navigator-controlled R2*. This study included MRI data from 71 patients with transfusional iron overload, who received an MRI exam to estimate HIC using gradient echo sequences. Patient scans contained 2 or 3 of the following imaging methods used for analysis: single-slice images (n = 71), multi-slice images (n = 69) and navigator-controlled images (n = 17). Small and large blood-corrected regions of interest were selected on axial images of the liver to obtain R2* values for all data sets. Bland-Altman and linear regression analysis were used to compare R2* values from single-slice images to those of multi-slice images and navigator-controlled images. Bland-Altman analysis showed that all imaging method comparisons were strongly associated with each other and had high correlation coefficients (0.98 ≤ r ≤ 1.00) with P-values ≤0.0001. Linear regression yielded slopes that were close to 1. We found that navigator-gated or breath-held multi-slice R2* MRI for HIC determination measures R2* values comparable to the biopsy-validated single-slice, single breath-hold scan. 
We conclude that these three R2* methods can be interchangeably used in existing R2*-HIC calibrations.
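The Bland-Altman analysis used for this comparison can be sketched as follows; the paired R2* values are hypothetical, not from the study.

```python
import math

def bland_altman(x, y):
    # Bias (mean difference) and 95% limits of agreement between two
    # paired measurement series, e.g. single-slice vs multi-slice R2*.
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias)**2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96*sd, bias + 1.96*sd)

# Hypothetical paired R2* values (1/s) for six patients.
single = [55, 120, 310, 480, 205, 90]
multi  = [57, 118, 305, 486, 210, 92]
bias, (lo, hi) = bland_altman(single, multi)
```

A near-zero bias with narrow limits of agreement is what lets the authors conclude that the methods are interchangeable under the existing R2*-HIC calibrations.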
Xiong, Guoxiang; Metheny, Hannah; Johnson, Brian N.; Cohen, Akiva S.
2017-01-01
The hippocampus plays a critical role in learning and memory and higher cognitive functions, and its dysfunction has been implicated in various neuropathological disorders. Electrophysiological recording undertaken in live brain slices is one of the most powerful tools for investigating hippocampal cellular and network activities. The plane for cutting the slices determines which afferent and/or efferent connections are best preserved, and there are three commonly used slices: hippocampal-entorhinal cortex (HEC), coronal and transverse. All three slices have been widely used for studying the major afferent hippocampal pathways including the perforant path (PP), the mossy fibers (MFs) and the Schaffer collaterals (SCs). Surprisingly, there has never been a systematic investigation of the anatomical and functional consequences of slicing at a particular angle. In the present study, we focused on how well fiber pathways are preserved from the entorhinal cortex (EC) to the hippocampus, and within the hippocampus, in slices generated by sectioning at different angles. The postmortem neural tract tracer 1,1′-dioctadecyl-3,3,3′3′-tetramethylindocarbocyanine perchlorate (DiI) was used to label afferent fibers to hippocampal principal neurons in fixed slices or whole brains. Laser scanning confocal microscopy was adopted for imaging DiI-labeled axons and terminals. We demonstrated that PP fibers were well preserved in HEC slices, MFs in both HEC and transverse slices and SCs in all three types of slices. Correspondingly, field excitatory postsynaptic potentials (fEPSPs) could be consistently evoked in HEC slices when stimulating PP fibers and recorded in stratum lacunosum-moleculare (sl-m) of area CA1, and when stimulating the dentate granule cell layer (gcl) and recording in stratum lucidum (sl) of area CA3. The MF evoked fEPSPs could not be recorded in CA3 from coronal slices. 
In contrast to our DiI-tracing data demonstrating severely truncated PP fibers in coronal slices, fEPSPs could still be recorded in CA1 sl-m in this plane, suggesting that an additional afferent fiber pathway other than PP might be involved. The present study increases our understanding of which hippocampal pathways are best preserved in the three most common brain slice preparations, and will help investigators determine the appropriate slices to use for physiological studies depending on the subregion of interest. PMID:29201002
Yan, Qiao-Huan; Xu, Dian-Guo; Shen, Yan-Feng; Yuan, Ding-Ling; Bao, Jun-Hui; Li, Hai-Bin; Lv, Ying-Gang
2017-01-01
AIM To observe the effect of targeted therapy with 64-slice spiral computed tomography (CT) combined with cryoablation for liver cancer. METHODS A total of 124 patients (142 tumors) were enrolled into this study. According to the use of dual-slice spiral CT or 64-slice spiral CT as a guide technology, patients were divided into two groups: dual-slice group (n = 56, 65 tumors) and 64-slice group (n = 8, 77 tumors). All patients accepted and received targeted therapy via an argon-helium superconducting surgery system. The guided scan times of the two groups were recorded and compared. In the two groups, the lesion ice coverage for tumors ≥ 3 cm and < 3 cm in diameter was recorded, and the freezing effective rate was compared. Hepatic perfusion values [hepatic artery perfusion (HAP), portal vein perfusion (PVP), and the hepatic arterial perfusion index (HAPI)] of tumor tissues, adjacent tissues and normal liver tissues at preoperative and postoperative four weeks in the two groups were compared. Local tumor changes were recorded and efficiency was compared at four weeks post-operation. Adverse events were recorded and compared between the two groups, including fever, pain, frostbite, nausea, vomiting, pleural effusion and abdominal bleeding. RESULTS Guided scan times in the dual-slice group were longer than those in the 64-slice group (t = 11.445, P = 0.000). The freezing effective rate for tumors < 3 cm in diameter in the dual-slice group (81.58%) was lower than that in the 64-slice group (92.86%) (χ2 = 5.707, P = 0.017). The HAP and HAPI of tumor tissues were lower at four weeks post-treatment than at pre-treatment in both groups (all P < 0.05), and those in the 64-slice group were lower than those in the dual-slice group (all P < 0.05). HAP and PVP were lower and HAPI was higher in tumor adjacent tissues at post-treatment than at pre-treatment (all P < 0.05). 
Furthermore, the treatment effect and therapeutic efficacy in the dual-slice group were lower than in the 64-slice group at four weeks post-treatment (all P < 0.05). Moreover, pleural effusion and intraperitoneal hemorrhage occurred in patients in the dual-slice group, while no complications occurred in the 64-slice group (all P < 0.05). CONCLUSION 64-slice spiral CT applied with cryoablation in targeted therapy for liver cancer can achieve a safe and effective freezing treatment, so it is worth using. PMID:28652661
Yan, Qiao-Huan; Xu, Dian-Guo; Shen, Yan-Feng; Yuan, Ding-Ling; Bao, Jun-Hui; Li, Hai-Bin; Lv, Ying-Gang
2017-06-14
To observe the effect of targeted therapy with 64-slice spiral computed tomography (CT) combined with cryoablation for liver cancer. A total of 124 patients (142 tumors) were enrolled into this study. According to the use of dual-slice spiral CT or 64-slice spiral CT as a guide technology, patients were divided into two groups: dual-slice group (n = 56, 65 tumors) and 64-slice group (n = 8, 77 tumors). All patients accepted and received targeted therapy via an argon-helium superconducting surgery system. The guided scan times of the two groups were recorded and compared. In the two groups, the lesion ice coverage for tumors ≥ 3 cm and < 3 cm in diameter was recorded, and the freezing effective rate was compared. Hepatic perfusion values [hepatic artery perfusion (HAP), portal vein perfusion (PVP), and the hepatic arterial perfusion index (HAPI)] of tumor tissues, adjacent tissues and normal liver tissues at preoperative and postoperative four weeks in the two groups were compared. Local tumor changes were recorded and efficiency was compared at four weeks post-operation. Adverse events were recorded and compared between the two groups, including fever, pain, frostbite, nausea, vomiting, pleural effusion and abdominal bleeding. Guided scan times in the dual-slice group were longer than those in the 64-slice group (t = 11.445, P = 0.000). The freezing effective rate for tumors < 3 cm in diameter in the dual-slice group (81.58%) was lower than that in the 64-slice group (92.86%) (χ2 = 5.707, P = 0.017). The HAP and HAPI of tumor tissues were lower at four weeks post-treatment than at pre-treatment in both groups (all P < 0.05), and those in the 64-slice group were lower than those in the dual-slice group (all P < 0.05). HAP and PVP were lower and HAPI was higher in tumor adjacent tissues at post-treatment than at pre-treatment (all P < 0.05). 
Furthermore, the treatment effect and therapeutic efficacy in the dual-slice group were lower than in the 64-slice group at four weeks post-treatment (all P < 0.05). Moreover, pleural effusion and intraperitoneal hemorrhage occurred in patients in the dual-slice group, while no complications occurred in the 64-slice group (all P < 0.05). 64-slice spiral CT applied with cryoablation in targeted therapy for liver cancer can achieve a safe and effective freezing treatment, so it is worth using.
Infinite Set of Soft Theorems in Gauge-Gravity Theories as Ward-Takahashi Identities
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Shiu, Gary
2018-05-01
We show that the soft photon, gluon, and graviton theorems can be understood as the Ward-Takahashi identities of large gauge transformation, i.e., diffeomorphism that does not fall off at spatial infinity. We found infinitely many new identities which constrain the higher order soft behavior of the gauge bosons and gravitons in scattering amplitudes of gauge and gravity theories. Diagrammatic representations of these soft theorems are presented.
ERIC Educational Resources Information Center
Johansson, Adam Johannes
2013-01-01
Teaching the Jahn-Teller theorem offers several challenges. For many students, the first encounter comes in coordination chemistry, which can be difficult due to the already complicated nature of transition-metal complexes. Moreover, a deep understanding of the Jahn-Teller theorem requires that one is well acquainted with quantum mechanics and…
Research on Quantum Algorithms at the Institute for Quantum Information
2009-10-17
accuracy threshold theorem for the one-way quantum computer. Their proof is based on a novel scheme, in which a noisy cluster state in three spatial… detected. The proof applies to independent stochastic noise but (in contrast to proofs of the quantum accuracy threshold theorem based on concatenated… proved quantum threshold theorems for long-range correlated non-Markovian noise, for leakage faults, for the one-way quantum computer, for postselected
Deductive Synthesis of the Unification Algorithm,
1981-06-01
Deductive Synthesis of the Unification Algorithm. Zohar Manna, Richard Waldinger. Computer Science Department, Artificial Intelligence Center… theorem proving," Artificial Intelligence Journal, Vol. 9, No. 1, pp. 1-35. Boyer, R. S. and J. S. Moore [Jan. 1975], "Proving theorems about LISP… d'Intelligence Artificielle, U.E.R. de Luminy, Université d'Aix-Marseille II. Green, C. C. [May 1969], "Application of theorem proving to problem
NASA Astrophysics Data System (ADS)
Min, Lequan; Chen, Guanrong
This paper establishes some generalized synchronization (GS) theorems for a coupled discrete array of difference systems (CDADS) and a coupled continuous array of differential systems (CCADS). These constructive theorems provide general representations of GS in CDADS and CCADS. Based on these theorems, one can design GS-driven CDADS and CCADS via appropriate (invertible) transformations. As applications, the results are applied to autonomous and nonautonomous coupled Chen cellular neural network (CNN) CDADS and CCADS, discrete bidirectional Lorenz CNN CDADS, nonautonomous bidirectional Chua CNN CCADS, and nonautonomous bidirectional Chen CNN CDADS and CCADS, respectively. Extensive numerical simulations show their complex dynamic behaviors. These theorems provide new means for understanding the GS phenomena of complex discrete and continuously differentiable networks.
Fixed-point theorems for families of weakly non-expansive maps
NASA Astrophysics Data System (ADS)
Mai, Jie-Hua; Liu, Xin-He
2007-10-01
In this paper, we present some fixed-point theorems for families of weakly non-expansive maps under some relatively weaker and more general conditions. Our results generalize and improve several results due to Jungck [G. Jungck, Fixed points via a generalized local commutativity, Int. J. Math. Math. Sci. 25 (8) (2001) 497-507], Jachymski [J. Jachymski, A generalization of the theorem by Rhoades and Watson for contractive type mappings, Math. Japon. 38 (6) (1993) 1095-1102], Guo [C. Guo, An extension of fixed point theorem of Krasnoselski, Chinese J. Math. (P.O.C.) 21 (1) (1993) 13-20], Rhoades [B.E. Rhoades, A comparison of various definitions of contractive mappings, Trans. Amer. Math. Soc. 226 (1977) 257-290], and others.
Common Coupled Fixed Point Theorems for Two Hybrid Pairs of Mappings under φ-ψ Contraction
Handa, Amrish
2014-01-01
We introduce the concept of (EA) property and occasional w-compatibility for the hybrid pair F : X × X → 2^X and f : X → X. We also introduce the common (EA) property for two hybrid pairs F, G : X × X → 2^X and f, g : X → X. We establish some common coupled fixed point theorems for two hybrid pairs of mappings under φ-ψ contraction on noncomplete metric spaces. An example is also given to validate our results. We improve, extend and generalize several known results. The results of this paper generalize the common fixed point theorems for hybrid pairs of mappings and essentially contain fixed point theorems for hybrid pairs of mappings. PMID:27340688
Transactions of the Conference of Army Mathematicians (25th).
1980-01-01
hypothesis (see description of H in Theorem 1). It follows from (4.16) and (4.17) that … and, since the greatest eigenvalue of H is … Theorem 8.10 and Theorem 8.11. For these tables, use of (8.36) to get bounds for |a_m| is not possible. It will be noted that Theorems 8.10 and 8.11 give
Lindeberg theorem for Gibbs-Markov dynamics
NASA Astrophysics Data System (ADS)
Denker, Manfred; Senti, Samuel; Zhang, Xuan
2017-12-01
A dynamical array consists of a family of functions {f_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1} and a family of initial times {τ_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1}. For a dynamical system (X, T) we identify distributional limits for sums of the form s_n^{-1} Σ_{i=1}^{k_n} (f_{n,i} ∘ T^{τ_{n,i}} − a_{n,i}) for suitable (non-random) constants s_n > 0 and a_{n,i} ∈ R. We derive a Lindeberg-type central limit theorem for dynamical arrays. Applications include new central limit theorems for functions which are not locally Lipschitz continuous and central limit theorems for statistical functions of time series obtained from Gibbs-Markov systems. Our results, which hold for more general dynamics, are stated in the context of Gibbs-Markov dynamical systems for convenience.
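For orientation, the classical Lindeberg condition that such a theorem adapts to dynamical arrays reads, in standard probability notation (not quoted from the paper):

    (1/s_n²) Σ_{i=1}^{k_n} E[ (f_{n,i} − a_{n,i})² · 1{ |f_{n,i} − a_{n,i}| > ε s_n } ] → 0 for every ε > 0,

with expectations taken with respect to the invariant measure of the system; it rules out any single summand dominating the normalized sum, which is what forces the Gaussian limit.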
A reciprocal theorem for a mixture theory. [development of linearized theory of interacting media
NASA Technical Reports Server (NTRS)
Martin, C. J.; Lee, Y. M.
1972-01-01
A dynamic reciprocal theorem for a linearized theory of interacting media is developed. The constituents of the mixture are a linear elastic solid and a linearly viscous fluid. In addition to Steel's field equations, boundary conditions and inequalities on the material constants that have been shown by Atkin, Chadwick and Steel to be sufficient to guarantee uniqueness of solution to initial-boundary value problems are used. The elements of the theory are given and two different boundary value problems are considered. The reciprocal theorem is derived with the aid of the Laplace transform and the divergence theorem and this section is concluded with a discussion of the special cases which arise when one of the constituents of the mixture is absent.
Satorra, Albert; Neudecker, Heinz
2015-12-01
This paper develops a theorem that facilitates computing the degrees of freedom of Wald-type chi-square tests for moment restrictions when there is rank deficiency of key matrices involved in the definition of the test. An if and only if (iff) condition is developed for a simple rule of difference of ranks to be used when computing the desired degrees of freedom of the test. The theorem is developed exploiting basic tools of matrix algebra. The theorem is shown to play a key role in proving the asymptotic chi-squaredness of a goodness of fit test in moment structure analysis, and in finding the degrees of freedom of this chi-square statistic.
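The rule of difference of ranks can be illustrated with a small rank computation; the elimination routine is generic and the matrices are made up, not the paper's.

```python
def matrix_rank(rows, tol=1e-10):
    # Numerical rank via Gaussian elimination with partial pivoting.
    m = [list(r) for r in rows]
    nrows = len(m)
    ncols = len(m[0]) if m else 0
    rank, col = 0, 0
    while rank < nrows and col < ncols:
        pivot = max(range(rank, nrows), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < tol:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rank + 1, nrows):
            f = m[r][col] / m[rank][col]
            for c in range(col, ncols):
                m[r][c] -= f * m[rank][c]
        rank += 1
        col += 1
    return rank

# Rank-deficient example: degrees of freedom as a difference of ranks.
A = [[1, 0, 1], [0, 1, 1], [1, 1, 2]]  # rank 2: third row = row1 + row2
B = [[1, 1, 2]]                        # rank 1
dof = matrix_rank(A) - matrix_rank(B)  # 2 - 1 = 1
```

The paper's contribution is an iff condition guaranteeing that this simple subtraction gives the correct degrees of freedom even when the matrices involved are rank deficient.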
Portable Device Slices Thermoplastic Prepregs
NASA Technical Reports Server (NTRS)
Taylor, Beverly A.; Boston, Morton W.; Wilson, Maywood L.
1993-01-01
Prepreg slitter designed to slit various widths rapidly by use of slicing bar holding several blades, each capable of slicing strip of preset width in single pass. Produces material evenly sliced and does not contain jagged edges. Used for various applications in such batch processes involving composite materials as press molding and autoclaving, and in such continuous processes as pultrusion. Useful to all manufacturers of thermoplastic composites, and in slicing B-staged thermoset composites.
A z-gradient array for simultaneous multi-slice excitation with a single-band RF pulse.
Ertan, Koray; Taraghinia, Soheil; Sadeghi, Alireza; Atalar, Ergin
2018-07-01
Multi-slice radiofrequency (RF) pulses have higher specific absorption rates, more peak RF power, and longer pulse durations than single-slice RF pulses. Gradient field design techniques using a z-gradient array are investigated for exciting multiple slices with a single-band RF pulse. Two different field design methods are formulated to solve for the required current values of the gradient array elements for the given slice locations. The method requirements are specified, optimization problems are formulated for the minimum current norm and an analytical solution is provided. A 9-channel z-gradient coil array driven by independent, custom-designed gradient amplifiers is used to validate the theory. Performance measures such as normalized slice thickness error, gradient strength per unit norm current, power dissipation, and maximum amplitude of the magnetic field are provided for various slice locations and numbers of slices. Two and 3 slices are excited by a single-band RF pulse in simulations and phantom experiments. The possibility of multi-slice excitation with a single-band RF pulse using a z-gradient array is validated in simulations and phantom experiments. Magn Reson Med 80:400-412, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
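When the slice constraints are linear in the element currents, the minimum-current-norm problem has a standard closed-form least-norm solution via the pseudoinverse; a sketch under that assumption (the 3 × 9 sensitivity matrix and target values below are synthetic placeholders, not a real coil model):

```python
import numpy as np

# Each row of S maps the 9 coil-element currents to the z-gradient field
# at one prescribed slice location; values here are synthetic stand-ins.
rng = np.random.default_rng(0)
S = rng.normal(size=(3, 9))       # 3 slice constraints, 9 array elements
g = np.array([1.0, 1.0, 1.0])     # desired gradient strength at each slice

# Minimum-L2-norm currents satisfying S @ I = g:  I = S^+ g
I = np.linalg.pinv(S) @ g

print(np.allclose(S @ I, g))      # constraints satisfied exactly
```

Of the infinitely many current vectors meeting the three constraints, the pseudoinverse picks the one with smallest L2 norm, which is a proxy for minimizing amplifier demand and dissipation.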
NASA Astrophysics Data System (ADS)
Zhou, Xiong; Huang, Guohe; Wang, Xiuquan; Cheng, Guanhui
2018-02-01
In this study, dynamically-downscaled temperature and precipitation changes over Saskatchewan are developed through the Providing Regional Climates for Impacts Studies (PRECIS) model, which can resolve detailed features within GCM grids such as topography, clouds, and land use in Saskatchewan. The PRECIS model is employed to carry out ensemble simulations for projections of temperature and precipitation changes over Saskatchewan. Temperature and precipitation variables at 14 weather stations for the baseline period are first extracted from each model run. Ranges of simulated temperature and precipitation variables are then obtained by combining maximum and minimum values calculated from the five ensemble runs. The performance of the PRECIS ensemble simulations is evaluated by checking whether observations of current temperature at each weather station fall within the simulated range. Future climate projections are analyzed over three time slices (i.e., the 2030s, 2050s, and 2080s) to help understand the plausible changes in temperature and precipitation over Saskatchewan in response to global warming. The evaluation results show that the PRECIS ensemble simulations perform very well in terms of capturing the spatial patterns of temperature and precipitation variables. The results of future climate projections over the three time slices indicate an obvious warming trend from the 2030s to the 2050s and the 2080s over Saskatchewan. The projected changes of mean temperature over the whole Saskatchewan area are [0, 2] °C in the 2030s at the 10th percentile, [2, 5.5] °C in the 2050s at the 50th percentile, and [3, 10] °C in the 2080s at the 90th percentile. There are no significant changes in the spatial patterns of the projected total precipitation from the 2030s to the end of this century.
The minimum change of the projected total precipitation over the whole Province of Saskatchewan is most likely to be -1.3% in the 2030s and -0.2% in the 2050s, reaching -2.1% by the end of this century at the 50th percentile.
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector.
Schäfer, Dirk; Grass, Michael; van de Haar, Peter
2011-07-01
Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed for it earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and to evaluate the image quality of these methods against the existing state-of-the-art FBP methods. The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second uses the Katsevich-type differentiation involving two neighboring projections, followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts, inherent to circular BPF algorithms, along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-square deviations (RMSDs) from the voxelized phantom for different detector overlap settings, and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For wider overlap of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods.
The clinical case with a detector overlap of about 17 mm confirms these results. The BPF-type reconstructions with Katsevich differentiation are widely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve correct assessment of lesions in the entire field of view.
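For readers unfamiliar with the FBP family these methods extend, a toy parallel-beam sketch of ramp filtering followed by back-projection (deliberately simplified; the paper's off-center cone-beam geometry, redundancy weighting, and Katsevich-type differentiation are not modeled here):

```python
import numpy as np

# Toy parallel-beam FBP of a point object at the center of the field of
# view; sizes and geometry are arbitrary illustration choices.
n_det, n_ang = 65, 90
sino = np.zeros((n_ang, n_det))
sino[:, n_det // 2] = 1.0          # every view sees the point at detector center

# Ramp filter applied per projection in the Fourier domain
freqs = np.fft.fftfreq(n_det)
fsino = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * np.abs(freqs), axis=1))

# Back-project the filtered projections onto an image grid
img = np.zeros((n_det, n_det))
ys, xs = np.mgrid[:n_det, :n_det] - n_det // 2
for k, th in enumerate(np.linspace(0, np.pi, n_ang, endpoint=False)):
    t = xs * np.cos(th) + ys * np.sin(th)              # detector coordinate per pixel
    idx = np.clip(np.round(t).astype(int) + n_det // 2, 0, n_det - 1)
    img += fsino[k, idx]

print(np.unravel_index(img.argmax(), img.shape))        # peak at center pixel (32, 32)
```

The ramp filter sharpens each projection before smearing it back; the reconstruction peaks at the true point location. BPF methods reverse the order, back-projecting a derivative of the data and filtering afterwards along Hilbert lines in the image.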
Sampling limits for electron tomography with sparsity-exploiting reconstructions.
Jiang, Yi; Padgett, Elliot; Hovden, Robert; Muller, David A
2018-03-01
Electron tomography (ET) has become a standard technique for 3D characterization of materials at the nano-scale. Traditional reconstruction algorithms such as weighted back projection suffer from disruptive artifacts with insufficient projections. Popularized by compressed sensing, sparsity-exploiting algorithms have been applied to experimental ET data and show promise for improving reconstruction quality or reducing the total beam dose applied to a specimen. Nevertheless, theoretical bounds for these methods have been less explored in the context of ET applications. Here, we perform numerical simulations to investigate the performance of ℓ1-norm and total-variation (TV) minimization under various imaging conditions. From 36,100 different simulated structures, our results show that specimens with more complex structures generally require more projections for exact reconstruction. However, once sufficient data are acquired, dividing the beam dose over more projections provides no improvement, analogous to the traditional dose-fraction theorem. Moreover, a limited tilt range of ±75° or less can result in distorting artifacts in sparsity-exploiting reconstructions. The influence of optimization parameters on reconstructions is also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
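The flavor of sparsity-exploiting reconstruction can be illustrated with ℓ1-norm minimization by iterative soft-thresholding (ISTA) on a toy underdetermined system; this is a generic compressed-sensing sketch with made-up dimensions, not the paper's simulation setup or TV solver:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 40, 100, 4                      # fewer measurements than unknowns
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 3.0
b = A @ x_true                            # noiseless measurements

# ISTA: gradient step on 0.5*||Ax-b||^2, then soft-thresholding (l1 prox)
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant
x = np.zeros(n)
for _ in range(5000):
    x = x - step * (A.T @ (A @ x - b))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print(np.linalg.norm(x - x_true))         # small: near-exact recovery
```

With far fewer measurements than unknowns, exact recovery hinges on sparsity, which is the mechanism behind the "more complex structures need more projections" finding above.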
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, H; UT Southwestern Medical Center, Dallas, TX; Hilts, M
Purpose: To commission a multislice computed tomography (CT) scanner for fast and reliable readout of radiation therapy (RT) dose distributions using CT polymer gel dosimetry (PGD). Methods: Commissioning was performed for a 16-slice CT scanner using images acquired through a 1L cylinder filled with water. Additional images were collected using a single slice machine for comparison purposes. The variability in CT number associated with the anode heel effect was evaluated and used to define a new slice-by-slice background image subtraction technique. Image quality was assessed for the multislice system by comparing image noise and uniformity to that of the single slice machine. The consistency in CT number across slices acquired simultaneously using the multislice detector array was also evaluated. Finally, the variability in CT number due to increasing x-ray tube load was measured for the multislice scanner and compared to the tube load effects observed on the single slice machine. Results: Slice-by-slice background subtraction effectively removes the variability in CT number across images acquired simultaneously using the multislice scanner and is the recommended background subtraction method when using a multislice CT system. Image quality for the multislice machine was found to be comparable to that of the single slice scanner. Further study showed CT number was consistent across image slices acquired simultaneously using the multislice detector array for each detector configuration and slice thickness examined. In addition, the multislice system was found to eliminate variations in CT number due to increasing x-ray tube load and to reduce scanning time by a factor of 4 when compared to imaging a large volume using a single slice scanner. Conclusion: A multislice CT scanner has been commissioned for CT PGD, allowing images of an entire dose distribution to be acquired in a matter of minutes.
Funding support provided by the Natural Sciences and Engineering Research Council of Canada (NSERC)
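The recommended slice-by-slice background subtraction can be sketched in a few lines; the image stacks below are synthetic placeholders, with a made-up slice-dependent offset standing in for the anode heel effect:

```python
import numpy as np

# Hypothetical stacks: a background scan of the water-filled phantom and a
# post-irradiation scan, both shaped (slices, rows, cols). Subtracting the
# background slice-by-slice removes the per-slice CT-number offset rather
# than relying on a single global background image.
rng = np.random.default_rng(2)
shape = (16, 64, 64)
heel = np.linspace(-3.0, 3.0, shape[0])[:, None, None]   # slice-dependent offset
background = heel + rng.normal(0.0, 0.5, shape)
dose_signal = 20.0                                        # uniform dose response
post = dose_signal + heel + rng.normal(0.0, 0.5, shape)

corrected = post - background        # slice-by-slice subtraction

# The slice-dependent offset cancels: per-slice means agree across slices
per_slice = corrected.mean(axis=(1, 2))
print(per_slice.std())               # near zero
```

Because each slice is paired with its own background, any offset that varies along the slice axis cancels exactly, while a single global background image would leave the heel-effect gradient in the corrected data.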
A fourth gradient to overcome slice dependent phase effects of voxel-sized coils in planar arrays.
Bosshard, John C; Eigenbrodt, Edwin P; McDougall, Mary P; Wright, Steven M
2010-01-01
The signals from an array of densely spaced long and narrow receive coils for MRI are complicated when the voxel size is of comparable dimension to the coil size. The RF coil causes a phase gradient across each voxel, which is dependent on the distance from the coil, resulting in a slice dependent shift of k-space. A fourth gradient coil has been implemented and used with the system's gradient set to create a gradient field which varies with slice. The gradients are pulsed together to impart a slice dependent phase gradient to compensate for the slice dependent phase due to the RF coils. However, the non-linearity in the fourth gradient that creates the desired slice dependency also results in a through-slice phase ramp, which disturbs normal slice refocusing and leads to additional signal cancellation and a reduced field of view. This paper discusses the benefits and limitations of using a fourth gradient coil to compensate for the phase due to RF coils.
2009-06-01
Under orthographic projection, the measurement matrix is of rank 3. This is known as the rank theorem and enables the matrix Y to be factored into the product of two matrices (the Tomasi-Kanade factorization method). The vectors of an orthonormal basis satisfy the scalar-product relations (2, p. 239): i·i = j·j = k·k = 1 and i·j = i·k = j·k = 0. (2.1)
Twofold orthogonal weavings on cuboids
Kovács, Flórián
2016-01-01
Some closed polyhedral surfaces can be completely covered by two-way, twofold (rectangular) weaving of strands of constant width. In this paper, a construction for producing all possible geometries for such weavable cuboids is proposed: a theorem on spherical octahedra, on which all further theory is based, is proven first. The construction method of weavable cuboids itself relies on successive truncations of an initial tetrahedron and is also extended to cases of degenerate (unbounded) polyhedra. Arguments are mainly based on the plane geometry of the development of the respective polyhedra, in connection with some three-dimensional projective properties of the same. PMID:27118910
High-frequency electromagnetic scarring in three-dimensional axisymmetric convex cavities
Warne, Larry K.; Jorgenson, Roy E.
2016-04-13
Here, this article examines the localization of high-frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. When these orbits lead to unstable localized modes, they are known as scars. This article treats the case where the opposing sides, or mirrors, are convex. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as field point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation.
Mariano-Goulart, D; Fourcade, M; Bernon, J L; Rossi, M; Zanca, M
2003-01-01
Thanks to an experimental study based on simulated and physical phantoms, the propagation of the stochastic noise in slices reconstructed using the conjugate gradient algorithm has been analysed versus iterations. After a first increase corresponding to the reconstruction of the signal, the noise stabilises before increasing linearly with iterations. The level of the plateau as well as the slope of the subsequent linear increase depends on the noise in the projection data.
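A minimal conjugate gradient iteration of the kind whose noise propagation is analyzed above (a generic solver for a symmetric positive definite system, not a tomographic reconstruction code; the matrix here is a made-up SPD test matrix rather than a projector):

```python
import numpy as np

def conjugate_gradient(A, b, iters):
    """Plain conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)   # step length
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-12:
            break                    # converged
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p         # conjugate update of the direction
        r = r_new
    return x

rng = np.random.default_rng(3)
M = rng.normal(size=(30, 30))
A = M @ M.T + 30.0 * np.eye(30)      # well-conditioned SPD test matrix
b = rng.normal(size=30)
x = conjugate_gradient(A, b, 60)
print(np.allclose(A @ x, b))
```

In reconstruction, the signal components converge within the first iterations, after which further iterations mostly amplify noise, which is why stopping rules matter for CG-based tomography.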
Price estimates for the production of wafers from silicon ingots
NASA Technical Reports Server (NTRS)
Mokashi, A. R.
1982-01-01
The status of the inside-diameter (ID) sawing, multiblade sawing (MBS), and fixed-abrasive slicing technique (FAST) processes is discussed with respect to the estimated price each process adds to the price of the final photovoltaic module. The expected improvements in each process, based on knowledge of the current level of technology, are projected for the next two to five years, and the expected add-on prices in 1983 and 1986 are estimated.
Negotiating for more than a slice of the pie.
Blair, J D; Savage, G T; Whitehead, C I; Dymond, S B
1991-01-01
Negotiation is an important way for physician executives to manage conflict and to accomplish new projects. Because of the rapidly changing nature of the health care environment, as well as conflicts and politics within their organizations, managers need to effectively negotiate with a wide range of other parties. Managers should consider the relative importance of both the substantive and relationship outcomes of any potential negotiation. These two factors may guide the executive's selection of initial negotiation strategies.
Generalization of the Ehrenfest theorem to quantum systems with periodical boundary conditions
NASA Astrophysics Data System (ADS)
Sanin, Andrey L.; Bagmanov, Andrey T.
2005-04-01
A generalization of Ehrenfest's theorem is discussed. For this purpose, quantum systems with periodic boundary conditions are revisited. The relations for the time derivatives of the mean coordinate and momentum are derived anew. In comparison with Ehrenfest's theorem and its conventional quantities, additional local terms occur that are caused by the boundaries. Because of this, the new relations obtained can be called generalized. An example of using these relations is given.
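Schematically, the standard Ehrenfest relation for the mean coordinate acquires a boundary term on an interval [0, L]; a sketch of the kind of local term involved (the notation is assumed, not taken from the paper):

```latex
\frac{d\langle x\rangle}{dt}
  = \frac{\langle p\rangle}{m} - \bigl[\,x\, j(x,t)\,\bigr]_{0}^{L},
\qquad
j(x,t) = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\psi^{*}\,\frac{\partial \psi}{\partial x}\right),
```

which follows from the continuity equation ∂|ψ|²/∂t = −∂j/∂x and integration by parts; with periodic boundary conditions ψ(0, t) = ψ(L, t), the bracket reduces to −L j(0, t), a local term produced by the boundary that is absent from the conventional Ehrenfest relation.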
NASA Astrophysics Data System (ADS)
Bai, Yunru; Baleanu, Dumitru; Wu, Guo-Cheng
2018-06-01
We investigate a class of generalized differential optimization problems driven by the Caputo derivative. Existence of a weak Carathéodory solution is proved by using the Weierstrass existence theorem, a fixed point theorem, and the Filippov implicit function lemma. Then a numerical approximation algorithm is introduced, and a convergence theorem is established. Finally, a nonlinear programming problem constrained by the fractional differential equation is illustrated, and the results verify the validity of the algorithm.
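The Caputo derivative driving such problems is commonly discretized with the standard L1 scheme; a sketch of that discretization (a common textbook choice, not necessarily the paper's approximation algorithm):

```python
import math
import numpy as np

def caputo_l1(u, dt, alpha):
    """L1 approximation of the Caputo derivative of order 0 < alpha < 1
    for samples u[0..n] on a uniform grid, evaluated at the final time."""
    n = len(u) - 1
    # weights b_k = (k+1)^(1-alpha) - k^(1-alpha), k = 0..n-1
    b = (np.arange(1, n + 1) ** (1 - alpha)
         - np.arange(0, n) ** (1 - alpha))
    du = np.diff(u)[::-1]            # differences u_{n-k} - u_{n-k-1}
    return (dt ** -alpha / math.gamma(2 - alpha)) * np.sum(b * du)

# For u(t) = t the Caputo derivative is t^(1-alpha) / Gamma(2 - alpha),
# and the L1 scheme reproduces it exactly (the sum telescopes).
alpha, dt, n = 0.5, 0.01, 100
t = dt * np.arange(n + 1)
approx = caputo_l1(t, dt, alpha)
exact = (n * dt) ** (1 - alpha) / math.gamma(2 - alpha)
print(abs(approx - exact))           # ~ machine precision
```

The scheme is exact for linear functions and O(Δt^(2−α)) accurate in general, which is why it is a common building block for fractional-constraint solvers.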
Cosmological singularity theorems and splitting theorems for N-Bakry-Émery spacetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woolgar, Eric, E-mail: ewoolgar@ualberta.ca; Wylie, William, E-mail: wwylie@syr.edu
We study Lorentzian manifolds with a weight function such that the N-Bakry-Émery tensor is bounded below. Such spacetimes arise in the physics of scalar-tensor gravitation theories, including Brans-Dicke theory, theories with Kaluza-Klein dimensional reduction, and low-energy approximations to string theory. In the “pure Bakry-Émery” N = ∞ case with f uniformly bounded above and initial data suitably bounded, cosmological-type singularity theorems are known, as are splitting theorems which determine the geometry of timelike geodesically complete spacetimes for which the bound on the initial data is borderline violated. We extend these results in a number of ways. We are able to extend the singularity theorems to finite N-values N ∈ (n, ∞) and N ∈ (−∞, 1]. In the N ∈ (n, ∞) case, no bound on f is required, while for N ∈ (−∞, 1] and N = ∞, we are able to replace the boundedness of f by a weaker condition on the integral of f along future-inextendible timelike geodesics. The splitting theorems extend similarly, but when N = 1, the splitting is only that of a warped product for all cases considered. A similar limited loss of rigidity has been observed in a prior work on the N-Bakry-Émery curvature in Riemannian signature when N = 1 and appears to be a general feature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkatesan, R.C., E-mail: ravi@systemsresearchcorp.com; Plastino, A., E-mail: plastino@fisica.unlp.edu.ar
The (i) reciprocity relations for the relative Fisher information (RFI, hereafter) and (ii) a generalized RFI–Euler theorem are self-consistently derived from the Hellmann–Feynman theorem. These new reciprocity relations generalize the RFI–Euler theorem and constitute the basis for building up a mathematical Legendre transform structure (LTS, hereafter), akin to that of thermodynamics, that underlies the RFI scenario. This demonstrates the possibility of translating the entire mathematical structure of thermodynamics into a RFI-based theoretical framework. Virial theorems play a prominent role in this endeavor, as a Schrödinger-like equation can be associated to the RFI. Lagrange multipliers are determined invoking the RFI–LTS link and the quantum mechanical virial theorem. An appropriate ansatz allows for the inference of probability density functions (pdf’s, hereafter) and energy-eigenvalues of the above mentioned Schrödinger-like equation. The energy-eigenvalues obtained here via inference are benchmarked against established theoretical and numerical results. A principled theoretical basis to reconstruct the RFI-framework from the FIM framework is established. Numerical examples for exemplary cases are provided. - Highlights: • Legendre transform structure for the RFI is obtained with the Hellmann–Feynman theorem. • Inference of the energy-eigenvalues of the SWE-like equation for the RFI is accomplished. • Basis for reconstruction of the RFI framework from the FIM-case is established. • Substantial qualitative and quantitative distinctions with prior studies are discussed.
Anomaly manifestation of Lieb-Schultz-Mattis theorem and topological phases
NASA Astrophysics Data System (ADS)
Cho, Gil Young; Hsieh, Chang-Tse; Ryu, Shinsei
2017-11-01
The Lieb-Schultz-Mattis (LSM) theorem dictates that emergent low-energy states from a lattice model cannot be a trivial symmetric insulator if the filling per unit cell is not integral and if the lattice translation symmetry and particle number conservation are strictly imposed. In this paper, we compare the one-dimensional gapless states enforced by the LSM theorem and the boundaries of one-higher dimensional strong symmetry-protected topological (SPT) phases from the perspective of quantum anomalies. We first note that they can both be described by the same low-energy effective field theory with the same effective symmetry realizations on low-energy modes, wherein non-on-site lattice translation symmetry is encoded as if it were an internal symmetry. In spite of the identical form of the low-energy effective field theories, we show that the quantum anomalies of the theories play different roles in the two systems. In particular, we find that the chiral anomaly is equivalent to the LSM theorem, whereas there is another anomaly that is not related to the LSM theorem but is intrinsic to the SPT states. As an application, we extend the conventional LSM theorem to multiple-charge multiple-species problems and construct several exotic symmetric insulators. We also find that the (3+1)d chiral anomaly provides only perturbative stability of the gaplessness locally in the parameter space.
NASA Astrophysics Data System (ADS)
Sumin, M. I.
2015-06-01
A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.
Target recognition for ladar range image using slice image
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Wang, Liang
2015-12-01
A shape descriptor and a complete shape-based recognition system using slice images as geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by three-dimensional Hough transform and the corresponding mathematical transformation. The system consists of two processes, the model library construction and recognition. In the model library construction process, a series of range images are obtained after the model object is sampled at preset attitude angles. Then, all the range images are converted into slice images. The number of slice images is reduced by clustering analysis and finding a representation to reduce the size of the model library. In the recognition process, the slice image of the scene is compared with the slice image in the model library. The recognition results depend on the comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and comparison between the slice image representation method and moment invariants representation method is performed. The experimental results show that whether in conditions without noise or with ladar noise, the system has a high recognition rate and low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.
Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin
2012-01-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561
Mechanistic slumber vs. statistical insomnia: the early history of Boltzmann's H-theorem (1868-1877)
NASA Astrophysics Data System (ADS)
Badino, M.
2011-11-01
An intricate, long, and occasionally heated debate surrounds Boltzmann's H-theorem (1872) and his combinatorial interpretation of the second law (1877). After almost a century of devoted and knowledgeable scholarship, there is still no agreement as to whether Boltzmann changed his view of the second law after Loschmidt's 1876 reversibility argument or whether he had already been holding a probabilistic conception for some years at that point. In this paper, I argue that there was no abrupt statistical turn. In the first part, I discuss the development of Boltzmann's research from 1868 to the formulation of the H-theorem. This reconstruction shows that Boltzmann adopted a pluralistic strategy based on the interplay between a kinetic and a combinatorial approach. Moreover, it shows that the extensive use of asymptotic conditions allowed Boltzmann to bracket the problem of exceptions. In the second part I suggest that both Loschmidt's challenge and Boltzmann's response to it did not concern the H-theorem. The close relation between the theorem and the reversibility argument is a consequence of later investigations on the subject.
Special ergodic theorems and dynamical large deviations
NASA Astrophysics Data System (ADS)
Kleptsyn, Victor; Ryzhov, Dmitry; Minkov, Stanislav
2012-11-01
Let f : M → M be a self-map of a compact Riemannian manifold M, admitting a global SRB measure μ. For a continuous test function φ : M → R and a constant α > 0, consider the set K_{φ,α} of the initial points for which the Birkhoff time averages of the function φ differ from its μ-space average by at least α. As the measure μ is a global SRB one, the set K_{φ,α} should have zero Lebesgue measure. The special ergodic theorem, whenever it holds, claims that, moreover, this set has Hausdorff dimension less than the dimension of M. We prove that for Lipschitz maps, the special ergodic theorem follows from the dynamical large deviations principle. We also define and prove an analogous result for flows. Applying the theorems of Young and of Araújo and Pacifico, we conclude that the special ergodic theorem holds for transitive hyperbolic attractors of C2-diffeomorphisms, as well as for some other known classes of maps (including that of partially hyperbolic non-uniformly expanding maps) and flows.
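A quick numerical illustration of a Birkhoff time average converging to the space average, using the logistic map f(x) = 4x(1 − x) with its invariant (SRB) density 1/(π√(x(1 − x))) (a standard example, not one of the paper's systems; under that measure, the space average of φ(x) = x is 1/2):

```python
# Birkhoff time average of the observable phi(x) = x along one orbit of
# the logistic map; for Lebesgue-almost-every initial point it converges
# to the space average 1/2 under the invariant measure.
def birkhoff_average(x0, n):
    x, total = x0, 0.0
    for _ in range(n):
        total += x
        x = 4.0 * x * (1.0 - x)
    return total / n

avg = birkhoff_average(0.1234, 200_000)
print(avg)   # close to 0.5
```

The exceptional set K_{φ,α}, where the time average stays at least α away from 1/2, has zero Lebesgue measure; the special ergodic theorem sharpens this to a bound on its Hausdorff dimension.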
Heuristic analogy in Ars Conjectandi: From Archimedes' De Circuli Dimensione to Bernoulli's theorem.
Campos, Daniel G
2018-02-01
This article investigates the way in which Jacob Bernoulli proved the main mathematical theorem that undergirds his art of conjecturing: the theorem that founded, historically, the field of mathematical probability. It aims to contribute a perspective on the question of problem-solving methods in mathematics while also contributing to the comprehension of the historical development of mathematical probability. It argues that Bernoulli proved his theorem by a process of mathematical experimentation in which the central heuristic strategy was analogy. In this context, the analogy functioned as an experimental hypothesis. The article expounds, first, Bernoulli's reasoning for proving his theorem, describing it as a process of experimentation in which hypothesis-making is crucial. Next, it investigates the analogy between his reasoning and Archimedes' approximation of the value of π, by clarifying both Archimedes' own experimental approach to the said approximation and its heuristic influence on Bernoulli's problem-solving strategy. The discussion includes some general considerations about analogy as a heuristic technique for making experimental hypotheses in mathematics. Copyright © 2018 Elsevier Ltd. All rights reserved.
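Archimedes' approximation that Bernoulli drew on can be reproduced with the classical doubling recurrences for the semi-perimeters of circumscribed and inscribed regular polygons of the unit circle (a standard reconstruction of the method, starting from hexagons as Archimedes did):

```python
import math

# a_n, b_n: semi-perimeters of the circumscribed and inscribed regular
# n-gons of the unit circle; both converge to pi, squeezing it between.
a = 2.0 * math.sqrt(3.0)        # circumscribed hexagon
b = 3.0                         # inscribed hexagon
for _ in range(10):             # hexagon -> 6 * 2**10 = 6144 sides
    a = 2.0 * a * b / (a + b)   # harmonic mean gives the circumscribed 2n-gon
    b = math.sqrt(a * b)        # geometric mean gives the inscribed 2n-gon
print(b, math.pi, a)            # b < pi < a, with a tiny gap
```

Each doubling keeps π bracketed between the two semi-perimeters, which is the "squeeze" structure whose heuristic role in Bernoulli's proof the article analyzes.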
Foundations of a mathematical theory of darwinism.
Batty, Charles J K; Crewe, Paul; Grafen, Alan; Gratwick, Richard
2014-08-01
This paper pursues the 'formal darwinism' project of Grafen, whose aim is to construct formal links between dynamics of gene frequencies and optimization programmes, in very abstract settings with general implications for biologically relevant situations. A major outcome is the definition, within wide assumptions, of the ubiquitous but problematic concept of 'fitness'. This paper is the first to present the project for mathematicians. Within the framework of overlapping generations in discrete time and no social interactions, the current model shows links between fitness maximization and gene frequency change in a class-structured population, with individual-level uncertainty but no uncertainty in the class projection operator, where individuals are permitted to observe and condition their behaviour on arbitrary parts of the uncertainty. The results hold with arbitrary numbers of loci and alleles, arbitrary dominance and epistasis, and make no assumptions about linkage, linkage disequilibrium or mating system. An explicit derivation is given of Fisher's Fundamental Theorem of Natural Selection in its full generality.
Gonadal Steroids: Effects on Excitability of Hippocampal Pyramidal Cells
NASA Astrophysics Data System (ADS)
Teyler, Timothy J.; Vardaris, Richard M.; Lewis, Deborah; Rawitch, Allen B.
1980-08-01
Electrophysiological field potentials from hippocampal slices of rat brain show sex-linked differences in response to 1 × 10-10M concentrations of estradiol and testosterone added to the incubation medium. Slices from male rats show increased excitability to estradiol and not to testosterone. Slices from female rats are not affected by estradiol, but slices from female rats in diestrus show increased excitability in response to testosterone whereas slices from females in proestrus show decreased excitability.
The quantum theory of free automorphic fields
NASA Astrophysics Data System (ADS)
Banach, R.
1980-06-01
Heuristic spectral theory is developed for a symmetric operator on the universal covering space of a multiply connected static spacetime and is used to construct the quantum field theory of a multiplet of scalar fields in the customary sum-over-modes fashion. The non-local symmetries necessary to the theory are explicitly constructed, as are the projections on the field operators. The non-existence of a standard charge conjugation for certain types of representation is noted. Gauge transformations are used to give a simple and complete classification of automorphic field theories. The relationship between the unprojected and projected field algebras is clarified, and the implications for Fock space (vacuum degeneracy, etc.) are discussed, with earlier work being criticized. The analogy to black hole physics is pointed out, and the possible role of the Reeh-Schlieder theorems is speculated upon.
Physiological temperature during brain slicing enhances the quality of acute slice preparations
Huang, Shiwei; Uusisaari, Marylka Y.
2013-01-01
We demonstrate that brain dissection and slicing using solutions warmed to near-physiological temperature (~ +34°C), greatly enhance slice quality without affecting intrinsic electrophysiological properties of the neurons. Improved slice quality is seen not only when using young (<1 month), but also mature (>2.5 month) mice. This allows easy in vitro patch-clamp experimentation using adult deep cerebellar nuclear slices, which until now have been considered very difficult. As proof of the concept, we compare intrinsic properties of cerebellar nuclear neurons in juvenile (<1 month) and adult (up to 7 months) mice, and confirm that no significant developmental changes occur after the fourth postnatal week. The enhanced quality of brain slices from old animals facilitates experimentation on age-related disorders as well as optogenetic studies requiring long transfection periods. PMID:23630465
A Stochastic Version of the Noether Theorem
NASA Astrophysics Data System (ADS)
González Lezcano, Alfredo; Cabo Montes de Oca, Alejandro
2018-06-01
A stochastic version of the Noether theorem is derived for systems under the action of external random forces. The concept of a moment-generating functional is employed to describe the symmetry of the stochastic forces. The theorem is applied to two kinds of random covariant forces: one generated in an electrodynamic way, the other defined in the rest frame of the particle as a function of the proper time. For both, the conservation of the mean value of a random drift momentum is shown. The validity of the theorem makes clear that random systems can produce causal stochastic correlations between two far-away separated systems that had interacted in the past. In addition, possible connections of the discussion with Yves Couder's experimental results are noted.
Noether’s second theorem and Ward identities for gauge symmetries
Avery, Steven G.; Schwab, Burkhard U. W.
2016-02-04
Recently, a number of new Ward identities for large gauge transformations and large diffeomorphisms have been discovered. Some of the identities are reinterpretations of previously known statements, while some appear to be genuinely new. We present and use Noether's second theorem with the path integral as a powerful way of generating these kinds of Ward identities. We reintroduce Noether's second theorem and discuss how to work with the physical remnant of gauge symmetry in gauge-fixed systems. We illustrate our mechanism in Maxwell theory, Yang-Mills theory, p-form field theory, and Einstein-Hilbert gravity. We comment on multiple connections between Noether's second theorem and known results in the recent literature. Finally, our approach suggests a novel point of view with important physical consequences.
Donaldson, Theodore; Wollert, Richard
2008-06-01
Expert witnesses in sexually violent predator (SVP) cases often rely on actuarial instruments to make risk determinations. Many questions surround their use, however. Bayes's Theorem holds much promise for addressing these questions. Some experts nonetheless claim that Bayesian analyses are inadmissible in SVP cases because they are not accepted by the relevant scientific community. This position is illogical because Bayes's Theorem is simply a probabilistic restatement of the way that frequency data are combined to arrive at whatever recidivism rates are paired with each test score in an actuarial table. This article presents a mathematical proof and example validating this assertion. The advantages and implications of a logic model that combines Bayes's Theorem and the null hypothesis are also discussed.
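The claim that Bayes's Theorem is simply a probabilistic restatement of how frequency data yield the recidivism rate paired with a score can be checked numerically. A minimal sketch, using hypothetical frequency counts (the numbers are illustrative, not taken from the article), shows that the direct actuarial-table rate and the Bayesian posterior coincide:

```python
# Hypothetical frequency data for one actuarial score bin:
n_total = 1000          # all offenders in the sample
n_recid = 200           # total recidivists (base rate 0.20)
n_score = 100           # offenders with this particular score
n_score_recid = 35      # recidivists among those with this score

# Direct frequency estimate, as reported in an actuarial table
rate_direct = n_score_recid / n_score

# Bayes' theorem: P(recid | score)
#   = P(score | recid) * P(recid) / P(score)
p_recid = n_recid / n_total
p_score_given_recid = n_score_recid / n_recid
p_score = n_score / n_total
rate_bayes = p_score_given_recid * p_recid / p_score

# The two computations agree exactly (here, 0.35)
assert abs(rate_direct - rate_bayes) < 1e-12
```

Any choice of counts gives the same agreement, since both expressions reduce algebraically to n_score_recid / n_score.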
Sharp comparison theorems for the Klein-Gordon equation in d dimensions
NASA Astrophysics Data System (ADS)
Hall, Richard L.; Zorin, Petr
2016-06-01
We establish sharp (or 'refined') comparison theorems for the Klein-Gordon equation. We show that the condition $V_a \le V_b$, which leads to $E_a \le E_b$, can be replaced by the weaker assumption $U_a \le U_b$, which still implies the spectral ordering $E_a \le E_b$. In the simplest case, for $d = 1$, $U_i(x) = \int_0^x V_i(t)\,dt$, $i = a$ or $b$, and for $d > 1$, $U_i(r) = \int_0^r V_i(t)\,t^{d-1}\,dt$, $i = a$ or $b$. We also consider sharp comparison theorems in the presence of a scalar potential $S$ (a 'variable mass') in addition to the vector term $V$ (the time component of a four-vector). The theorems are illustrated by a variety of explicit detailed examples.
Logical errors on proving theorem
NASA Astrophysics Data System (ADS)
Sari, C. K.; Waluyo, M.; Ainur, C. M.; Darmaningsih, E. N.
2018-01-01
At the tertiary level, students in mathematics education departments attend abstract courses such as Introduction to Real Analysis, which require the ability to prove mathematical statements almost all the time. In fact, many students have not mastered this ability appropriately. In their Introduction to Real Analysis tests, even though they completed their proofs of theorems, they achieved unsatisfactory scores. They thought that they had succeeded, but their proofs were not valid. In this study, qualitative research was conducted to describe the logical errors that students made in proving the theorem of cluster points. The theorem was given to 54 students. Misconceptions appear to arise in understanding the definitions of cluster point, limit of a function, and limit of a sequence. The habit of using routine symbols might cause these misconceptions. Suggestions for dealing with this condition are described as well.
Mendes, Niele D; Fernandes, Artur; Almeida, Glaucia M; Santos, Luis E; Selles, Maria Clara; Lyra-Silva, Natalia; Machado, Carla M; Horta-Júnior, José A C; Louzada, Paulo R; De Felice, Fernanda G; Alvez-Leon, Soniza; Marcondes, Jorge; Assirati, João Alberto; Matias, Caio M; Klein, William L; Garcia-Cairasco, Norberto; Ferreira, Sergio T; Neder, Luciano; Sebollela, Adriano
2018-05-31
Slice cultures have been prepared from several organs. With respect to the brain, advantages of slice cultures over dissociated cell cultures include maintenance of the cytoarchitecture and neuronal connectivity. Slice cultures from adult human brain have been reported and constitute a promising method to study neurological diseases. Despite this potential, few studies have characterized in detail cell survival and function over time in short-term, free-floating cultures. We used tissue from adult human brain cortex from patients undergoing temporal lobectomy to prepare 200 μm-thick slices. Over the period in culture, we evaluated neuronal survival, histological modifications, and neurotransmitter release. The toxicity of Alzheimer's-associated Aβ oligomers (AβOs) to cultured slices was also analyzed. Neurons in human brain slices remain viable and neurochemically active for at least four days in vitro, which allowed detection of binding of AβOs. We further found that slices exposed to AβOs presented elevated levels of hyperphosphorylated Tau, a hallmark of Alzheimer's disease. Although slice cultures from adult human brain have been previously prepared, this is the first report to analyze cell viability and neuronal activity in short-term free-floating cultures as a function of days in vitro. Once surgical tissue is available, the current protocol is easy to perform and produces functional slices from adult human brain. These slice cultures may represent a preferred model for translational studies of neurodegenerative disorders when long-term culturing is not required, as in investigations of AβO neurotoxicity. Copyright © 2018 Elsevier B.V. All rights reserved.
Xia, Yan; Berger, Martin; Bauer, Sebastian; Hu, Shiyang; Aichert, Andre; Maier, Andreas
2017-01-01
We improve data extrapolation for truncated computed tomography (CT) projections by using Helgason-Ludwig (HL) consistency conditions that mathematically describe the overlap of information between projections. First, we theoretically derive a 2D Fourier representation of the HL consistency conditions from their original formulation (projection moment theorem), for both parallel-beam and fan-beam imaging geometry. The derivation result indicates that there is a zero energy region forming a double-wedge shape in 2D Fourier domain. This observation is also referred to as the Fourier property of a sinogram in the previous literature. The major benefit of this representation is that the consistency conditions can be efficiently evaluated via 2D fast Fourier transform (FFT). Then, we suggest a method that extrapolates the truncated projections with data from a uniform ellipse of which the parameters are determined by optimizing these consistency conditions. The forward projection of the optimized ellipse can be used to complete the truncation data. The proposed algorithm is evaluated using simulated data and reprojections of clinical data. Results show that the root mean square error (RMSE) is reduced substantially, compared to a state-of-the-art extrapolation method.
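The lowest-order Helgason-Ludwig condition is easy to demonstrate numerically: the total mass (zeroth moment) of every parallel-beam projection must equal the total mass of the object, independent of the projection angle. A minimal sketch, using axis sums as the 0° and 90° projections of an arbitrary phantom (the phantom itself is an assumption for illustration):

```python
import numpy as np

# Toy phantom (any nonnegative image will do)
rng = np.random.default_rng(0)
phantom = rng.random((64, 64))

# Parallel-beam projections at 0 and 90 degrees:
# line integrals reduce to sums along the image axes.
proj_0 = phantom.sum(axis=0)
proj_90 = phantom.sum(axis=1)

# Order-0 Helgason-Ludwig condition: the total mass of every
# projection equals the total mass of the object, independent
# of angle -- a necessary condition any valid (untruncated)
# sinogram must satisfy. Truncated projections violate it,
# which is what data extrapolation aims to restore.
assert np.isclose(proj_0.sum(), phantom.sum())
assert np.isclose(proj_90.sum(), proj_0.sum())
```

The higher-order moment conditions constrain the angular dependence of higher projection moments, which is what produces the double-wedge zero-energy region in the 2D Fourier domain described above.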
NASA Astrophysics Data System (ADS)
Ge, Zheng-Ming
2008-04-01
Necessary and sufficient conditions for the stability of a sleeping top, described by dynamic equations in six state variables (the Euler and Poisson equations), by a two-degree-of-freedom system (the Krylov equations), and by a one-degree-of-freedom system (the nutation angle equation), are obtained by the Lyapunov direct method, the Ge-Liu second instability theorem, an instability theorem, and the Ge-Yao-Chen partial region stability theorem, without using first-approximation theory at all.
Twelve years before the quantum no-cloning theorem
NASA Astrophysics Data System (ADS)
Ortigoso, Juan
2018-03-01
The celebrated quantum no-cloning theorem establishes the impossibility of making a perfect copy of an unknown quantum state. The discovery of this important theorem for the field of quantum information is currently dated 1982. I show here that an article published in 1970 [J. L. Park, Found. Phys. 1, 23-33 (1970)] contained an explicit mathematical proof of the impossibility of cloning quantum states. I analyze Park's demonstration in the light of published explanations concerning the genesis of the better-known papers on no-cloning.
Analytic solution and pulse area theorem for three-level atoms
NASA Astrophysics Data System (ADS)
Shchedrin, Gavriil; O'Brien, Chris; Rostovtsev, Yuri; Scully, Marlan O.
2015-12-01
We report an analytic solution for a three-level atom driven by arbitrary time-dependent electromagnetic pulses. In particular, we consider far-detuned driving pulses and show an excellent match between our analytic result and the numerical simulations. We use our solution to derive a pulse area theorem for three-level V and Λ systems without making the rotating wave approximation. Formulated as an energy conservation law, this pulse area theorem can be used to understand pulse propagation through three-level media.
A Pseudo-Reversing Theorem for Rotation and its Application to Orientation Theory
2012-03-01
approach to the task of constructing the appropriate course a ship must steer in order for the wind to appear to come from some given direction with some...axes, although the theorem doesn’t actually require such axes. The Pseudo-Reversing Theorem can often be invoked to give a different pedagogical basis to...of validity will quickly become obvious when it’s implemented on a computer. It does not seem to me that a great deal of pedagogical effort has found
Naval Research Logistics Quarterly. Volume 28. Number 1,
1981-03-01
doing we forfeit the contraction property and must base our analysis on other procedures. Duality theory and the Perron-Frobenius theorem are the main...and the Perron-Frobenius theorem (see Varga [16] or Seneta [14]). 2. NOTATION AND PRELIMINARY RESULTS Let x and y be two vectors. Write x > y ...(x). If P is a square matrix, p(P) will denote the spectral radius of P. If P > 0 and square, then the Perron-Frobenius theorem gives us that Px = p(P)x
Quantum Theory of Jaynes' Principle, Bayes' Theorem, and Information
NASA Astrophysics Data System (ADS)
Haken, Hermann
2014-12-01
After a reminder of Jaynes' maximum entropy principle and of my quantum theoretical extension, I consider two coupled quantum systems A,B and formulate a quantum version of Bayes' theorem. The application of Feynman's disentangling theorem allows me to calculate the conditional density matrix ρ (A|B) , if system A is an oscillator (or a set of them), linearly coupled to an arbitrary quantum system B. Expectation values can simply be calculated by means of the normalization factor of ρ (A|B) that is derived.
Advanced Wireless Integrated Navy Network
2005-03-01
transmitter and the receiver (do), the height of the setup above the floor can be estimated using Pythagoras' theorem. The destination's deck can also...single-unit resource model. Theorem 1 (RUA's Blocking Time) Under RUA with the single-unit resource model, a task T can be blocked for at most the...wait-free objects. Theorem 2 (Comparison of RUA's Sojourn Times) Under RUA, as the critical section of a task T becomes longer, the difference
Generalized virial theorem and pressure relation for a strongly correlated Fermi gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Shina
2008-12-15
For a two-component Fermi gas in the unitarity limit (i.e., with infinite scattering length), there is a well-known virial theorem, first shown by J.E. Thomas et al. A few people rederived this result, and extended it to few-body systems, but their results are all restricted to the unitarity limit. Here I show that there is a generalized virial theorem for FINITE scattering lengths. I also generalize an exact result concerning the pressure to the case of imbalanced populations.
Event Oriented Design and Adaptive Multiprocessing
1991-08-31
System 2.3 The Classification 2.4 Real-Time Systems 2.5 Non-Real-Time Systems 2.6 Common Characterizations of all Software Systems 2.7... Non-Optimal Guarantee Test Theorem 6.3.2 Chetto's Optimal Guarantee Test Theorem 6.3.3 Multistate Case: An Extended Guarantee Test Theorem...which subdivides all software systems according to the way in which they operate, such as interactive, non-interactive, real-time, etc. Having defined
NASA Technical Reports Server (NTRS)
Denney, Ewen; Power, John
2003-01-01
We introduce a hierarchical notion of formal proof, useful in the implementation of theorem provers, which we call hiproofs. Two alternative definitions are given, motivated by existing notations used in theorem-proving research. We define transformations between these two forms of hiproof, develop notions of underlying proof, and give a suitable definition of refinement in order to model incremental proof development. We show that our transformations preserve both underlying proofs and refinement. The relationship of our theory to existing theorem-proving systems is discussed, as is its future extension.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yong, E-mail: 83229994@qq.com; Ge, Hao, E-mail: haoge@pku.edu.cn; Xiong, Jie, E-mail: jiexiong@umac.mo
Fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics during the past two decades. There exist very few results for steady-state fluctuation theorem of sample entropy production rate in terms of large deviation principle for diffusion processes due to the technical difficulties. Here we give a proof for the steady-state fluctuation theorem of a diffusion process in magnetic fields, with explicit expressions of the free energy function and rate function. The proof is based on the Karhunen-Loève expansion of complex-valued Ornstein-Uhlenbeck process.
Towards a Property-Based Testing Environment With Applications to Security-Critical Software
1994-01-01
Figure 4 is a slice of the MINIX [Tan87] login program with respect to the setuid system call. The original program contains 337 lines, the slice only 20...demonstrating the effectiveness of slicing in this case. The mapping of the abstract concept of authentication to source code in the MINIX login...Slice of MINIX login with respect to setuid(). occurs. If no incorrect execution occurs, slices of the program are examined for their data flow coverage
NASA Technical Reports Server (NTRS)
Rosu, Grigore (Inventor); Chen, Feng (Inventor); Chen, Guo-fang; Wu, Yamei; Meredith, Patrick O. (Inventor)
2014-01-01
A program trace is obtained and events of the program trace are traversed. For each event identified in traversing the program trace, a trace slice of which the identified event is a part is identified based on the parameter instance of the identified event. For each trace slice of which the identified event is a part, the identified event is added to an end of a record of the trace slice. These parametric trace slices can be used in a variety of different manners, such as for monitoring, mining, and predicting.
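The slicing step described above can be sketched in a few lines. This minimal version assumes each event already carries a single fully ground parameter instance; real parametric trace slicing, as described here, also handles partial bindings and events with multiple parameters:

```python
from collections import defaultdict

def slice_trace(trace):
    """Group events of a parametric trace into per-parameter slices.

    Each event is a (event_name, parameter_instance) pair; every event
    is appended to the end of the record of the trace slice for its
    parameter instance, in traversal order.
    """
    slices = defaultdict(list)
    for name, param in trace:
        slices[param].append(name)
    return dict(slices)

# Toy trace over two hypothetical file handles
trace = [("open", "f1"), ("open", "f2"), ("write", "f1"),
         ("close", "f1"), ("close", "f2")]
print(slice_trace(trace))
# {'f1': ['open', 'write', 'close'], 'f2': ['open', 'close']}
```

Each resulting slice is an ordinary non-parametric trace, which is what makes the slices usable for monitoring, mining, and prediction as the abstract notes.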
Organotypic Slice Cultures for Studies of Postnatal Neurogenesis
Mosa, Adam J.; Wang, Sabrina; Tan, Yao Fang; Wojtowicz, J. Martin
2015-01-01
Here we describe a technique for studying hippocampal postnatal neurogenesis in the rodent brain using the organotypic slice culture technique. This method maintains the characteristic topographical morphology of the hippocampus while allowing direct application of pharmacological agents to the developing hippocampal dentate gyrus. Additionally, slice cultures can be maintained for up to 4 weeks and thus, allow one to study the maturation process of newborn granule neurons. Slice cultures allow for efficient pharmacological manipulation of hippocampal slices while excluding complex variables such as uncertainties related to the deep anatomic location of the hippocampus as well as the blood brain barrier. For these reasons, we sought to optimize organotypic slice cultures specifically for postnatal neurogenesis research. PMID:25867138
Acute Hippocampal Slice Preparation and Hippocampal Slice Cultures
Lein, Pamela J.; Barnhart, Christopher D.; Pessah, Isaac N.
2012-01-01
A major advantage of hippocampal slice preparations is that the cytoarchitecture and synaptic circuits of the hippocampus are largely retained. In neurotoxicology research, organotypic hippocampal slices have mostly been used as acute ex vivo preparations for investigating the effects of neurotoxic chemicals on synaptic function. More recently, hippocampal slice cultures, which can be maintained for several weeks to several months in vitro, have been employed to study how neurotoxic chemicals influence the structural and functional plasticity in hippocampal neurons. This chapter provides protocols for preparing hippocampal slices to be used acutely for electrophysiological measurements using glass microelectrodes or microelectrode arrays or to be cultured for morphometric assessments of individual neurons labeled using biolistics. PMID:21815062
Quantization of Chirikov Map and Quantum KAM Theorem.
NASA Astrophysics Data System (ADS)
Shi, Kang-Jie
KAM theorem is one of the most important theorems in classical nonlinear dynamics and chaos. To extend the KAM theorem to the regime of quantum mechanics, we first study the quantum Chirikov map, whose classical counterpart provides a good example of the KAM theorem. Under the resonance condition $2\pi\hbar = 1/N$, we obtain the eigenstates of the evolution operator of this system. We find that the wave functions in the coherent state representation (CSR) are very similar to the classical trajectories. In particular, some of these wave functions have a wall-like structure at the locations of classical KAM curves. We also find that a local average is necessary for a Wigner function to approach its classical limit in the phase space. We then study the general problem theoretically. Under conditions similar to those for establishing the classical KAM theorem, we obtain a quantum extension of the KAM theorem. By constructing successive unitary transformations, we can greatly reduce the perturbation part of a near-integrable Hamiltonian system in a region associated with a Diophantine number $W_o$. This reduction is restricted only by the magnitude of $\hbar$. We can summarize our results as follows: in the CSR of a nearly integrable quantum system, associated with a Diophantine number $W_o$, there is a band near the corresponding KAM torus of the classical limit of the system. In this band, a Gaussian wave packet moves quasi-periodically (and remains close to the KAM torus) for a long time, with possible diffusion in both the size and the shape of its wave packet. The upper bound of the tunnelling rate out of this band for the wave packet can be made much smaller than any given power of $\hbar$, if the original perturbation is sufficiently small (but independent of $\hbar$). When $\hbar \to 0$, we reproduce the classical KAM theorem. For most near-integrable systems the eigenstate wave function in the above band can either have a wall-like structure or have a vanishing amplitude.
These conclusions agree with the numerical results of the quantum Chirikov map.
Detection of MRI artifacts produced by intrinsic heart motion using a saliency model
NASA Astrophysics Data System (ADS)
Salguero, Jennifer; Velasco, Nelson; Romero, Eduardo
2017-11-01
Cardiac Magnetic Resonance (CMR) requires synchronization with the ECG to correct many types of noise. However, the complex motion of the heart frequently produces displaced slices that have to be either ignored or manually corrected, since ECG correction is useless in this case. This work presents a novel methodology that detects motion artifacts in CMR using a saliency method that highlights the region where the heart chambers are located. Once the Region of Interest (RoI) is set, its center of gravity is determined for the set of slices composing the volume. The deviation of the gravity center is an estimate of the coherence between the slices and is used to detect slices with significant displacement. Validation was performed with distorted real images in which a slice is artificially misaligned with respect to the rest of the set. The displaced slice is found with a recall of 84% and an F-score of 68%.
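The center-of-gravity coherence check can be sketched as follows. In this sketch the saliency-derived RoI is replaced by a plain intensity-weighted centroid per slice, and a fixed deviation threshold stands in for the paper's criterion; both are simplifying assumptions for illustration:

```python
import numpy as np

def find_displaced_slices(volume, threshold=2.0):
    """Flag slices whose in-plane centre of gravity deviates from the
    median centre across the stack (a simplified stand-in for the
    saliency-based RoI described in the abstract)."""
    centers = []
    for sl in volume:
        total = sl.sum()
        ys, xs = np.indices(sl.shape)
        centers.append((np.sum(ys * sl) / total,
                        np.sum(xs * sl) / total))
    centers = np.array(centers)
    median = np.median(centers, axis=0)
    deviation = np.linalg.norm(centers - median, axis=1)
    return np.where(deviation > threshold)[0]

# Synthetic stack: a bright square, artificially misaligned in slice 3
vol = np.zeros((5, 32, 32))
vol[:, 12:20, 12:20] = 1.0
vol[3] = np.roll(vol[3], 6, axis=1)   # shift slice 3 by 6 pixels
print(find_displaced_slices(vol))      # [3]
```

Using the median centre rather than the mean keeps a single badly displaced slice from dragging the reference point toward itself.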
Thin silicon-solar cell fabrication
NASA Technical Reports Server (NTRS)
Lindmayer, J.
1979-01-01
Flexible silicon slices of uniform thickness are fabricated by etching in sodium hydroxide solution. Maintaining uniform thickness across slices during processing (fabrication) is important for cell strength and resistance to damage in handling. Slices formed by this procedure have a reproducible surface with a fine orange-peel texture and are far superior to slices prepared by other methods.
Thick Slice and Thin Slice Teaching Evaluations
ERIC Educational Resources Information Center
Tom, Gail; Tong, Stephanie Tom; Hesse, Charles
2010-01-01
Student-based teaching evaluations are an integral component to institutions of higher education. Previous work on student-based teaching evaluations suggest that evaluations of instructors based upon "thin slice" 30-s video clips of them in the classroom correlate strongly with their end of the term "thick slice" student evaluations. This study's…
Comparing thin slices of verbal communication behavior of varying number and duration.
Carcone, April Idalski; Naar, Sylvie; Eggly, Susan; Foster, Tanina; Albrecht, Terrance L; Brogan, Kathryn E
2015-02-01
The aim of this study was to assess the accuracy of thin slices in characterizing the verbal communication behavior of counselors and patients engaged in Motivational Interviewing sessions, relative to fully coded sessions. Four thin-slice samples that varied in number (four versus six slices) and duration (one versus two minutes) were extracted from a previously coded dataset. In the parent study, an observational code scheme was used to characterize specific counselor and patient verbal communication behaviors. For the current study, we compared the frequency of communication codes and the correlations between the full dataset and each thin-slice sample. Both the proportion of communication codes and the strength of the correlation showed the highest degree of accuracy when a greater number (i.e., six versus four) and duration (i.e., two versus one minute) of slices were extracted. These results suggest that thin-slice sampling may be a useful and accurate strategy to reduce coding burden when coding specific verbal communication behaviors within clinical encounters. We suggest that researchers interested in using thin-slice sampling in their own work conduct preliminary research to determine the number and duration of thin slices required to accurately characterize the behaviors of interest. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
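The sampling scheme can be sketched as below: extract a given number of evenly spaced windows from a coded session and compare code proportions against the fully coded sequence. The windowing rule and code labels here are illustrative assumptions, not the authors' protocol:

```python
from collections import Counter

def thin_slice_proportions(codes, n_slices, slice_len):
    """Sample n_slices evenly spaced windows of slice_len codes each
    and return the proportion of each code within the sampled windows."""
    step = max(1, (len(codes) - slice_len) // max(1, n_slices - 1))
    sampled = []
    for i in range(n_slices):
        start = i * step
        sampled.extend(codes[start:start + slice_len])
    counts = Counter(sampled)
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

# Toy fully coded session with two hypothetical code labels
codes = ["counselor_Q", "patient_CT"] * 100
full = {c: codes.count(c) / len(codes) for c in set(codes)}
sampled = thin_slice_proportions(codes, n_slices=6, slice_len=10)
```

Comparing `sampled` against `full` (e.g. by absolute difference per code, or by correlating proportions across many sessions) is the kind of accuracy check the study performs with its four thin-slice samples.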
Non-enzymatic browning and flavour kinetics of vacuum dried onion slices
NASA Astrophysics Data System (ADS)
Mitra, Jayeeta; Shrivastava, Shanker L.; Rao, Pavuluri S.
2015-01-01
Onion slices were dehydrated under vacuum to produce good-quality dried ready-to-use onion slices. Colour development due to non-enzymatic browning and flavour loss in terms of thiosulphinate concentration were determined, along with moisture content and rehydration ratio. The kinetics of non-enzymatic browning and thiosulphinate loss during drying were analysed. Colour change due to non-enzymatic browning was found to be much lower in the case of vacuum-dried onion, and improved flavour retention was observed compared to hot-air-dried onion slices. The optical index values for non-enzymatic browning varied from 18.41 to 38.68 for untreated onion slices and from 16.73 to 36.51 for treated slices, whereas thiosulphinate concentration in the case of untreated onion slices was within the range of 2.96-3.92 μmol g-1 for the dried sample and 3.71-4.43 μmol g-1 for the treated onion slices. The rehydration ratio was also increased, which may be attributed to a better porous structure attained due to vacuum drying. The treatment applied was found to be very suitable for controlling non-enzymatic browning and flavour loss during drying, besides increasing the rehydration ratio. Hence, high-quality dried ready-to-use onion slices were prepared.
Ripple artifact reduction using slice overlap in slice encoding for metal artifact correction.
den Harder, J Chiel; van Yperen, Gert H; Blume, Ulrike A; Bos, Clemens
2015-01-01
Multispectral imaging (MSI) significantly reduces metal artifacts. Yet, especially in techniques that use gradient selection, such as slice encoding for metal artifact correction (SEMAC), a residual ripple artifact may be prominent. Here, an analysis is presented of the ripple artifact and of slice overlap as an approach to reduce the artifact. The ripple artifact was analyzed theoretically to clarify its cause. Slice overlap, conceptually similar to spectral bin overlap in multi-acquisition with variable resonances image combination (MAVRIC), was achieved by reducing the selection gradient and, thus, increasing the slice profile width. Time domain simulations and phantom experiments were performed to validate the analyses and proposed solution. Discontinuities between slices are aggravated by signal displacement in the frequency encoding direction in areas with deviating B0. Specifically, it was demonstrated that ripple artifacts appear only where B0 varies both in-plane and through-plane. Simulations and phantom studies of metal implants confirmed the efficacy of slice overlap to reduce the artifact. The ripple artifact is an important limitation of gradient selection based MSI techniques, and can be understood using the presented simulations. At a scan-time penalty, slice overlap effectively addressed the artifact, thereby improving image quality near metal implants. © 2014 Wiley Periodicals, Inc.
Abe, Takayuki
2013-03-01
To improve the slice profile of half radiofrequency (RF) pulse excitation and the image quality of ultrashort echo time (UTE) imaging by compensating for eddy current effects. A dedicated prescan was developed to measure the phase accumulation due to eddy currents induced by the slice-selective gradient. The prescan measures two one-dimensional excitation k-space profiles, which can be acquired with a readout gradient in the slice-selection direction by changing the polarity of the slice-selective gradient. The time shifts due to the phase accumulation in the excitation k-space were calculated, and the time shift was used to compensate the start time of the slice-selective gradient. The total prescan time was 6-15 s. The slice profile and the UTE image with half RF pulse excitation were acquired to evaluate slice selectivity and image quality. Improved slice selectivity was obtained. The simple method proposed in this paper can eliminate eddy current effects, and good UTE images were obtained. The slice profile of half RF pulse excitation and the image quality of UTE images were improved by using a dedicated prescan. This method has the potential to improve the image quality of clinical UTE imaging.
Directly reconstructing principal components of heterogeneous particles from cryo-EM images.
Tagare, Hemant D; Kucukelbir, Alp; Sigworth, Fred J; Wang, Hongwei; Rao, Murali
2015-08-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the posterior likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. Copyright © 2015 Elsevier Inc. All rights reserved.
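The classical Fourier slice theorem that the paper extends to covariance functions is easy to verify numerically: the 1D Fourier transform of a projection equals a central slice of the 2D Fourier transform. A minimal check on a synthetic image (no cryo-EM specifics assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))            # synthetic 2D "particle" image

proj = img.sum(axis=0)                # projection along one axis
central = np.fft.fft2(img)[0, :]      # central row of the 2D Fourier transform

# Fourier slice theorem: these two spectra coincide exactly.
match = np.allclose(np.fft.fft(proj), central)
```

The covariance extension in the paper applies the same identity to second-order statistics of many such projections taken at different orientations.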
Obatomi, D K; Blackburn, R O; Bach, P H
2001-10-01
The effects of dithiothreitol (DTT), a sulfhydryl-containing agent, and verapamil (VRP), a calcium channel blocker, as possible cytoprotectants against atractyloside-induced toxicity were characterized in rat kidney and liver slices in vitro using multiple markers of toxicity. Precision-cut slices (200 µm thick) were either incubated with atractyloside (2 mM) or initially preincubated with either DTT (5 mM) or VRP (100 µM) for 30 min followed by exposure to atractyloside (2 mM) for 3 h at 37 degrees C on a rocker platform rotated at approximately 3 rpm. All of the toxicity parameters were sensitive to exposure to atractyloside, but treatment with DTT or VRP alone did not provide any indication of damage to the tissues. Preincubation of slices with either DTT or VRP for 30 min provided total protection against the atractyloside-induced increase in LDH leakage in both kidney and liver slices. Increased induction of lipid peroxidation by atractyloside in liver slices was completely abolished by DTT and VRP. Both DTT and VRP provided partial protection against atractyloside-induced inhibition of gluconeogenesis in both kidney and liver slices. Atractyloside-induced ATP depletion in both kidney and liver slices was partially abolished by VRP but not DTT. The significant depletion of GSH in kidney slices by atractyloside was completely reversed by DTT only, while VRP alone reversed the same process in liver slices. Decreased MTT reductive capacity and the significant increase in ALT leakage caused by atractyloside in liver slices were partially reversed. Complete protection was achieved with both DTT and VRP against atractyloside-induced inhibition of PAH uptake in kidney slices. These findings suggest that both DTT and VRP exert cytoprotective effects against atractyloside-induced biochemical perturbation, effects that differ in liver and kidney.
The effect of these agents on atractyloside has provided us with a further understanding of the molecular mechanism of its action.
NASA Technical Reports Server (NTRS)
Lamarque, J.-F.; Dentener, F.; McConnell, J.; Ro, C.-U.; Shaw, M.; Vet, R.; Bergmann, D.; Cameron-Smith, P.; Doherty, R.; Faluvegi, G.;
2013-01-01
We present multi-model global datasets of nitrogen and sulfate deposition covering time periods from 1850 to 2100, calculated within the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP). The computed deposition fluxes are compared to surface wet deposition and ice-core measurements. We use a new dataset of wet deposition for 2000-2002 based on critical assessment of the quality of existing regional network data. We show that for present-day (year 2000 ACCMIP time-slice), the ACCMIP results perform similarly to previously published multi-model assessments. For this time slice, we find a multi-model mean deposition of 50 Tg(N) yr^-1 from nitrogen oxide emissions, 60 Tg(N) yr^-1 from ammonia emissions, and 83 Tg(S) yr^-1 from sulfur emissions. The analysis of changes between 1980 and 2000 indicates significant differences between model and measurements over the United States but less so over Europe. This difference points towards misrepresentation of 1980 NH3 emissions over North America. Based on ice-core records, the 1850 deposition fluxes agree well with Greenland ice cores but the change between 1850 and 2000 seems to be overestimated in the Northern Hemisphere for both nitrogen and sulfur species. Using the Representative Concentration Pathways to define the projected climate and atmospheric chemistry related emissions and concentrations, we find large regional nitrogen deposition increases in 2100 in Latin America, Africa and parts of Asia under some of the scenarios considered. Increases in South Asia are especially large, and are seen in all scenarios, with 2100 values more than double 2000 in some scenarios and reaching 1300 mg(N) m^-2 yr^-1 averaged over regional to continental scale regions in RCP 2.6 and 8.5, 30-50% larger than the values in any region currently (2000). The new ACCMIP deposition dataset provides novel, consistent and evaluated global gridded deposition fields for use in a wide range of climate and ecological studies.
Mueck, F G; Körner, M; Scherr, M K; Geyer, L L; Deak, Z; Linsenmaier, U; Reiser, M; Wirth, S
2012-03-01
To compare the image quality of dose-reduced 64-row abdominal CT reconstructed at different levels of adaptive statistical iterative reconstruction (ASIR) to full-dose baseline examinations reconstructed with filtered back-projection (FBP) in a clinical setting and upgrade situation. Abdominal baseline examinations (noise index NI = 29; LightSpeed VCT XT, GE) were intra-individually compared to follow-up studies on a CT with an ASIR option (NI = 43; Discovery HD750, GE), n = 42. Standard-kernel images were calculated with ASIR blendings of 0 - 100 % in slice and volume mode, respectively. Three experienced radiologists compared the image quality of these 567 sets to their corresponding full-dose baseline examination (-2: diagnostically inferior, -1: inferior, 0: equal, +1: superior, +2: diagnostically superior). Furthermore, a phantom was scanned. Statistical analysis used the Wilcoxon test, the Mann-Whitney U-test, and the intra-class correlation (ICC). The mean CTDIvol decreased from 19.7 ± 5.5 to 12.2 ± 4.7 mGy (p < 0.001). The ICC was 0.861. The total image quality of the dose-reduced ASIR studies was comparable to the baseline at ASIR 50 % in slice (p = 0.18) and ASIR 50 - 100 % in volume mode (p > 0.10). Volume mode performed 73 % slower than slice mode (p < 0.01). After the system upgrade, the vendor recommendation of ASIR 50 % in slice mode allowed for a dose reduction of 38 % in abdominal CT with comparable image quality and time expenditure. However, there is still further dose reduction potential for more complex reconstruction settings. © Georg Thieme Verlag KG Stuttgart · New York.
Fast automatic delineation of cardiac volume of interest in MSCT images
NASA Astrophysics Data System (ADS)
Lorenz, Cristian; Lessick, Jonathan; Lavi, Guy; Bulow, Thomas; Renisch, Steffen
2004-05-01
Computed Tomography Angiography (CTA) is an emerging modality for assessing cardiac anatomy. The delineation of the cardiac volume of interest (VOI) is a pre-processing step for subsequent visualization or image processing. It serves to suppress anatomic structures that are not in the primary focus of the cardiac application, such as the sternum, ribs, spinal column, descending aorta and pulmonary vasculature. These structures obliterate standard visualizations such as direct volume renderings or maximum intensity projections. In addition, the outcome and performance of post-processing steps such as ventricle suppression, coronary artery segmentation or the detection of the short and long axes of the heart can be improved. The structures that are part of the cardiac VOI (coronary arteries and veins, myocardium, ventricles and atria) differ tremendously in appearance. In addition, there is no clear image feature associated with the contour (or, better, cut-surface) distinguishing between the cardiac VOI and the surrounding tissue, making the automatic delineation of the cardiac VOI a difficult task. In a first step, the presented approach locates the chest wall and descending aorta in all image slices, giving a rough estimate of the location of the heart. In a second step, a Fourier-based active contour approach delineates the border of the cardiac VOI slice-wise. The algorithm has been evaluated on 41 multi-slice CT data-sets including cases with coronary stents and venous and arterial bypasses. The typical processing time amounts to 5-10 s on a 1 GHz P3 PC.
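A Fourier-based contour parameterization of the kind used for the VOI border represents a closed contour by a small number of Fourier coefficients. A self-contained sketch (the toy star-shaped contour and the number of retained coefficients are illustrative choices, not the paper's settings); because this particular contour is band-limited, the truncated representation reproduces it exactly:

```python
import numpy as np

# Closed contour as complex points z = x + iy.
n = 128
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = (10 + np.cos(3 * t)) * np.exp(1j * t)      # wavy closed contour

coeffs = np.fft.fft(z) / n
keep = 8                                        # low-order descriptors only
filt = np.zeros_like(coeffs)
filt[:keep] = coeffs[:keep]                     # positive frequencies 0..7
filt[-keep:] = coeffs[-keep:]                   # negative frequencies -8..-1
smooth = np.fft.ifft(filt * n)                  # band-limited contour

err = np.max(np.abs(smooth - z))
```

Optimizing such low-order coefficients, rather than every contour point, is what keeps a Fourier-parameterized active contour smooth and fast.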
Oscillation theorems for second order nonlinear forced differential equations.
Salhin, Ambarka A; Din, Ummul Khair Salma; Ahmad, Rokiah Rozita; Noorani, Mohd Salmi Md
2014-01-01
In this paper, a class of second order forced nonlinear differential equations is considered and several new oscillation theorems are obtained. Our results generalize and improve known results in the literature.
Multipinhole SPECT helical scan parameters and imaging volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang
Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is about half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
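The Nyquist-based reference values can be sketched as follows. This is a plausible reading of the abstract's description, not the authors' exact formulas: sample each direction at half the expected spatial resolution, taking the angular step as the angle that a half-resolution arc subtends on the field-of-view rim. The example numbers are hypothetical.

```python
import math

def helical_steps(resolution_mm, fov_diameter_mm):
    """Reference (Nyquist) step sizes for a helical scan -- a sketch.

    axial step   ~ resolution / 2
    angular step ~ angle (degrees) subtending resolution/2 on the FOV rim
    """
    axial_step = resolution_mm / 2.0
    angular_step = math.degrees((resolution_mm / 2.0) / (fov_diameter_mm / 2.0))
    return axial_step, angular_step

# Hypothetical system: 2 mm resolution, 30 mm transverse FOV.
axial, angular = helical_steps(2.0, 30.0)
```

Per the study's empirical finding, the best protocols used roughly half this axial step and about twice this angular step.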
Generalized Bezout's Theorem and its applications in coding theory
NASA Technical Reports Server (NTRS)
Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2(sup 3)) is also constructed.
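The classical Bezout bound that the paper generalizes states that plane curves of degrees m and n meet in at most m*n points. A toy numerical check with a cubic and a line (degrees 3 and 1), where the bound is attained:

```python
import numpy as np

# Curve y = x^3 - 2x (degree 3) intersected with the line y = x (degree 1):
# intersections are roots of x^3 - 2x - x = x^3 - 3x, at most 3*1 = 3 of them.
roots = np.roots([1.0, 0.0, -3.0, 0.0])
count = len(roots)   # 3 real intersections: -sqrt(3), 0, sqrt(3)
```

The generalized theorem in the paper sharpens this count into a tighter lower bound on distinct intersection points for a large class of curves, which is what drives the improved minimum-distance bounds for the codes.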
NASA Astrophysics Data System (ADS)
Popa, CL; Popa, V.
2016-11-01
This paper proposes a profiling method for the tool which generates the helical groove of the male rotor, a screw compressor component. The method is based on a complementary theorem of enveloping surfaces, the "Substitute Family Circles Method". The theorem of the family of substitute circles was applied using the AutoCAD graphics design environment. The frontal view of the male rotor was determined from the known transverse profile of the female rotor using this "Substitute Family Circles" theorem. The three-dimensional model of the rotor makes it possible to apply the same theorem, leading to the surface of revolution enveloping the helical surface. An application is also presented that determines the axial profile of the disk cutter, numerically and graphically, following the proposed algorithm.
Model Checking Failed Conjectures in Theorem Proving: A Case Study
NASA Technical Reports Server (NTRS)
Pike, Lee; Miner, Paul; Torres-Pomales, Wilfredo
2004-01-01
Interactive mechanical theorem proving can provide high assurance of correct design, but it can also be a slow iterative process. Much time is spent determining why a proof of a conjecture is not forthcoming. In some cases, the conjecture is false and in others, the attempted proof is insufficient. In this case study, we use the SAL family of model checkers to generate a concrete counterexample to an unproven conjecture specified in the mechanical theorem prover, PVS. The focus of our case study is the ROBUS Interactive Consistency Protocol. We combine the use of a mechanical theorem prover and a model checker to expose a subtle flaw in the protocol that occurs under a particular scenario of faults and processor states. Uncovering the flaw allows us to mend the protocol and complete its general verification in PVS.
Kochen-Specker theorem studied with neutron interferometer.
Hasegawa, Yuji; Durstberger-Rennhofer, Katharina; Sponar, Stephan; Rauch, Helmut
2011-04-01
The Kochen-Specker theorem shows the incompatibility of noncontextual hidden variable theories with quantum mechanics. Quantum contextuality is a more general concept than quantum non-locality which is quite well tested in experiments using Bell inequalities. Within neutron interferometry we performed an experimental test of the Kochen-Specker theorem with an inequality, which identifies quantum contextuality, by using spin-path entanglement of single neutrons. Here entanglement is achieved not between different particles, but between degrees of freedom of a single neutron, i.e., between spin and path degree of freedom. Appropriate combinations of the spin analysis and the position of the phase shifter allow an experimental verification of the violation of an inequality derived from the Kochen-Specker theorem. The observed violation 2.291±0.008≰1 clearly shows that quantum mechanical predictions cannot be reproduced by noncontextual hidden variable theories.
NASA Technical Reports Server (NTRS)
Bickford, Mark; Srivas, Mandayam
1991-01-01
Presented here is a formal specification and verification of a property of a quadruplicately redundant fault tolerant microprocessor system design. A complete listing of the formal specification of the system and the correctness theorems that were proved is given. The system performs the task of obtaining interactive consistency among the processors using a special instruction on the processors. The design is based on an algorithm proposed by Pease, Shostak, and Lamport. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. The verification was performed using a computer aided design verification tool, Spectool, and the theorem prover, Clio. A major contribution of the work is the demonstration of a significant fault tolerant hardware design that is mechanically verified by a theorem prover.
Gleason-Busch theorem for sequential measurements
NASA Astrophysics Data System (ADS)
Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah
2017-12-01
Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.
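The Kraus operator structure that the axioms lead to can be illustrated with a minimal numerical example: a complete set of Kraus operators, the Born rule probability, and the state update rule. The measurement chosen here (a projective measurement in the computational basis, applied to the state |+>) is a textbook special case, not taken from the paper:

```python
import numpy as np

# Kraus operators for a projective measurement onto |0> and |1>.
K0 = np.array([[1, 0], [0, 0]], dtype=complex)
K1 = np.array([[0, 0], [0, 1]], dtype=complex)
# Completeness: sum_i K_i^dagger K_i = I
complete = np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())                # density matrix |+><+|

p0 = np.trace(K0 @ rho @ K0.conj().T).real      # Born rule: 0.5
rho_post = K0 @ rho @ K0.conj().T / p0          # state update: |0><0|
```

A sequential measurement then simply applies the next Kraus operator to `rho_post`, which is exactly the structure the paper derives from Gleason-type assumptions.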
NASA Astrophysics Data System (ADS)
Wandres, Moritz; Pattiaratchi, Charitha; Hemer, Mark A.
2017-09-01
Incident wave energy flux is responsible for sediment transport and coastal erosion in wave-dominated regions such as the southwestern Australian (SWA) coastal zone. To evaluate future wave climates under increased greenhouse gas concentration scenarios, past studies have forced global wave simulations with wind data sourced from global climate model (GCM) simulations. However, due to the generally coarse spatial resolution of global climate and wave simulations, the effects of changing offshore wave conditions and sea level rise on the nearshore wave climate are still relatively unknown. To address this gap of knowledge, we investigated the projected SWA offshore, shelf, and nearshore wave climate under two potential future greenhouse gas concentration trajectories (representative concentration pathways RCP4.5 and RCP8.5). This was achieved by downscaling an ensemble of global wave simulations, forced with winds from GCMs participating in the Coupled Model Inter-comparison Project (CMIP5), into two regional domains, using the Simulating WAves Nearshore (SWAN) wave model. The wave climate is modeled for a historical 20-year time slice (1986-2005) and a projected future 20-year time-slice (2081-2100) for both scenarios. Furthermore, we compare these scenarios to the effects of considering sea-level rise (SLR) alone (stationary wave climate), and to the effects of combined SLR and projected wind-wave change. Results indicated that the SWA shelf and nearshore wave climate is more sensitive to changes in offshore mean wave direction than offshore wave heights. Nearshore, wave energy flux was projected to increase by ∼10% in exposed areas and decrease by ∼10% in sheltered areas under both climate scenarios due to a change in wave directions, compared to an overall increase of 2-4% in offshore wave heights. With SLR, the annual mean wave energy flux was projected to increase by up to 20% in shallow water (< 30 m) as a result of decreased wave dissipation. 
In winter months, the longshore wave energy flux, which is responsible for littoral drift, is expected to increase by up to 39% (62%) under the RCP4.5 (RCP8.5) greenhouse gas concentration pathway with SLR. The study highlights the importance of using high-resolution wave simulations to evaluate future regional wave climates, since the coastal wave climate is more responsive to changes in wave direction and sea level than offshore wave heights.
Fixed point theorems for generalized contractions in ordered metric spaces
NASA Astrophysics Data System (ADS)
O'Regan, Donal; Petrusel, Adrian
2008-05-01
The purpose of this paper is to present some fixed point results for self-generalized contractions in ordered metric spaces. Our results generalize and extend some recent results of A.C.M. Ran, M.C.B. Reurings [A.C.M. Ran, M.C.B. Reurings, A fixed point theorem in partially ordered sets and some applications to matrix equations, Proc. Amer. Math. Soc. 132 (2004) 1435-1443], J.J. Nieto, R. Rodríguez-López [J.J. Nieto, R. Rodríguez-López, Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations, Order 22 (2005) 223-239; J.J. Nieto, R. Rodríguez-López, Existence and uniqueness of fixed points in partially ordered sets and applications to ordinary differential equations, Acta Math. Sin. (Engl. Ser.) 23 (2007) 2205-2212], J.J. Nieto, R.L. Pouso, R. Rodríguez-López [J.J. Nieto, R.L. Pouso, R. Rodríguez-López, Fixed point theorems in ordered abstract sets, Proc. Amer. Math. Soc. 135 (2007) 2505-2517], A. Petrusel, I.A. Rus [A. Petrusel, I.A. Rus, Fixed point theorems in ordered L-spaces, Proc. Amer. Math. Soc. 134 (2006) 411-418] and R.P. Agarwal, M.A. El-Gebeily, D. O'Regan [R.P. Agarwal, M.A. El-Gebeily, D. O'Regan, Generalized contractions in partially ordered metric spaces, Appl. Anal., in press]. As applications, existence and uniqueness results for Fredholm and Volterra type integral equations are given.
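The constructive heart shared by these fixed point theorems is Picard iteration: repeatedly applying a contraction converges to its unique fixed point. A minimal example with f(x) = cos(x), a contraction near its fixed point (chosen for illustration; it is unrelated to the paper's integral-equation applications):

```python
import math

# Picard iteration x_{n+1} = f(x_n) for the contraction f = cos.
x = 1.0
for _ in range(200):
    x = math.cos(x)

# x now approximates the unique fixed point of cos (the Dottie number).
residual = abs(math.cos(x) - x)
```

The ordered-metric-space generalizations relax the global contraction requirement to comparable pairs of points, but the iterative scheme that produces the fixed point is the same.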
NASA Astrophysics Data System (ADS)
Rerikh, K. V.
A smooth reversible dynamical system (SRDS) and a system of nonlinear functional equations, defined by a certain rational quadratic Cremona mapping and arising from the static model of the dispersion approach in the theory of strong interactions (the Chew-Low equations for p-wave πN-scattering), are considered. This SRDS is split into 1- and 2-dimensional ones. An explicit Cremona transformation that completely determines the exact solution of the two-dimensional system is found. This solution depends on an odd function satisfying a nonlinear autonomous 3-point functional equation. Non-algebraic integrability of the SRDS under consideration is proved using the method of Poincaré normal forms and the Siegel theorem on biholomorphic linearization of a mapping at a non-resonant fixed point. The proof is based on the classical Feldman-Baker theorem on linear forms in logarithms of algebraic numbers, which, in turn, relies upon the solution of the 7th Hilbert problem by A.I. Gel'fond and T. Schneider and the powerful new methods of A. Baker in the theory of transcendental numbers. The general theorem, following from the Feldman-Baker theorem, on the applicability of the Siegel theorem to the set of eigenvalues λ ∈ C^n of a mapping at a non-resonant fixed point which belong to the algebraic number field A is formulated and proved. The main results are presented in Theorems 1-3, 5, 7, 8 and Remarks 3, 7.
2011-03-01
protocol. Unfortunately for this grant project, this approval has come too late to acquire human subjects. Nonetheless, the MMI Lab will continue to...Gaussian filter ) of 10X clinical activity concentration (0.36 µCi/mL) images acquired on Day 1 with (LEFT) VAOR, (CENTER) TPB and (RIGHT) PROJSINE...trajectories. (ROW 3) Coronal and (ROW 4) transverse slices (smoothed with a Gaussian filter ) showing the placement and size of the VOI used to
2015-12-27
demonstration vehicles. Test and measurement of fabricated structures will be conducted to experimentally quantify RF and optical performance. Measurement...the development of coupled RF and optical structures. Both the graduate student and the undergraduate student were trained in conducting precision...research conducted for this project. The journal paper citations are: 1. L. Chen, J. Nagy, and R. M. Reano, "Patterned ion-sliced lithium niobate for
[Design and accuracy analysis of upper slicing system of MSCT].
Jiang, Rongjian
2013-05-01
The upper slicing system is a main component of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis to improve the accuracy of imaging. The errors in slice thickness and ray center caused by the bearings, screw, and control system were analyzed and tested. The measured accumulated error is less than 1 µm, and the measured absolute error is less than 10 µm. Improving the accuracy of the upper slicing system contributes to the choice of appropriate treatment methods and to the success rate of treatment.
A survey of program slicing for software engineering
NASA Technical Reports Server (NTRS)
Beck, Jon
1993-01-01
This research concerns program slicing, which is used as a tool for program maintenance of software systems. Program slicing decreases the level of effort required to understand and maintain complex software systems. It was first designed as a debugging aid, but it has since been generalized into various tools and extended to include program comprehension, module cohesion estimation, requirements verification, dead code elimination, and maintenance of several software systems, including reverse engineering, parallelization, portability, and reuse component generation. This paper seeks to address and define terminology, theoretical concepts, program representation, different program graphs, developments in static slicing, dynamic slicing, and semantics and mathematical models. Applications for conventional slicing are presented, along with a prognosis of future work in this field.
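The core of static backward slicing, keeping only the statements on which a slicing criterion transitively depends, can be sketched for straight-line code. This is a deliberately minimal model (real slicers work on program dependence graphs and handle control flow); the toy program and its def/use sets are hypothetical:

```python
# Each statement is (variable defined, set of variables used).
stmts = [
    ("a", set()),          # 1: a = input()
    ("b", set()),          # 2: b = input()
    ("c", {"a"}),          # 3: c = a + 1
    ("d", {"b"}),          # 4: d = b * 2
    ("e", {"c", "a"}),     # 5: e = c + a
]

def backward_slice(stmts, criterion_var):
    """Return 1-based statement numbers affecting criterion_var."""
    needed, in_slice = {criterion_var}, []
    for i in range(len(stmts) - 1, -1, -1):     # walk backwards
        var, uses = stmts[i]
        if var in needed:                        # statement defines a needed var
            needed.discard(var)
            needed |= uses                       # its uses become needed
            in_slice.append(i + 1)
    return sorted(in_slice)
```

For example, the slice on `e` is statements 1, 3, 5, while the slice on `d` is statements 2, 4; statement 4 is dead code with respect to `e`, which is exactly the kind of fact slicing-based tools exploit.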
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johannsen, Tim; Psaltis, Dimitrios, E-mail: timj@physics.arizona.ed, E-mail: dpsaltis@email.arizona.ed
According to the no-hair theorem, an astrophysical black hole is uniquely described by only two quantities, the mass and the spin. In this series of papers, we investigate a framework for testing the no-hair theorem with observations of black holes in the electromagnetic spectrum. We formulate our approach in terms of a parametric spacetime which contains a quadrupole moment that is independent of both mass and spin. If the no-hair theorem is correct, then any deviation of the black hole quadrupole moment from its Kerr value has to be zero. We analyze in detail the properties of this quasi-Kerr spacetime that are critical to interpreting observations of black holes and demonstrate their dependence on the spin and quadrupole moment. In particular, we show that the location of the innermost stable circular orbit and the gravitational lensing experienced by photons are affected significantly at even modest deviations of the quadrupole moment from the value predicted by the no-hair theorem. We argue that observations of black hole images, of relativistically broadened iron lines, as well as of thermal X-ray spectra from accreting black holes will lead in the near future to an experimental test of the no-hair theorem.
Matching factorization theorems with an inverse-error weighting
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; Pisano, Cristian; Signori, Andrea
2018-06-01
We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell-Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins-Soper-Sterman subtraction scheme. It is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.
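The inverse-error-weighted average at the center of the method can be written down directly: each prediction is weighted by the inverse square of its own theory uncertainty, so whichever factorization theorem is more reliable in a given kinematical region dominates the matched result. A minimal sketch (the cross sections and uncertainties are made-up numbers, and the weight choice w_i = 1/Δ_i² is the standard inverse-variance form, not necessarily the paper's exact prescription):

```python
import numpy as np

sigma = np.array([10.0, 12.0])   # two theory predictions for the cross section
delta = np.array([0.5, 2.0])     # their power-correction uncertainties

w = 1.0 / delta**2               # inverse-error weights
matched = np.sum(w * sigma) / np.sum(w)
```

Here the first prediction carries the smaller uncertainty, so the matched value lies much closer to it, with no double-counting subtraction required.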
A Minimum Path Algorithm Among 3D-Polyhedral Objects
NASA Astrophysics Data System (ADS)
Yeltekin, Aysin
1989-03-01
In this work we introduce a minimum path theorem for the 3D case. We also develop an algorithm based on the theorem we prove. The algorithm will be implemented in a software package we develop using the C language. The theorem we introduce states that: "Given the initial point I, the final point F, and a set S of a finite number of static obstacles, an optimal path P from I to F, such that P ∩ S = ∅, is composed of straight line segments which are perpendicular to the edge segments of the objects." We prove the theorem and develop the following algorithm based on it to find the minimum path among 3D polyhedral objects. The algorithm generates the point Qi on edge ei such that at Qi one can find the line which is perpendicular to both the edge and the IF line. The algorithm iteratively provides a new set of initial points from Qi and explores all possible paths. Then the algorithm chooses the minimum path among the possible ones. The flowchart of the program as well as an examination of its numerical properties are included.
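Once the candidate bend points Qi and their collision-free connections are known, selecting the minimum path among the possibilities is a standard shortest-path search. A tiny Dijkstra sketch over a hypothetical candidate-point graph (node names and segment lengths are invented for illustration; the paper's own enumeration of Qi points is not reproduced):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path length over a weighted adjacency-list graph."""
    dist, pq = {start: 0.0}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Hypothetical candidate points: I -> {Q1, Q2} -> F, weights = segment lengths.
graph = {
    "I":  [("Q1", 2.0), ("Q2", 4.0)],
    "Q1": [("F", 3.0)],
    "Q2": [("F", 0.5)],
}
best = dijkstra(graph, "I", "F")
```

Here the longer first hop through Q2 still wins overall, illustrating why all candidate paths must be explored before choosing the minimum.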
Structure theorems and the dynamics of nitrogen catabolite repression in yeast
Boczko, Erik M.; Cooper, Terrance G.; Gedeon, Tomas; Mischaikow, Konstantin; Murdock, Deborah G.; Pratap, Siddharth; Wells, K. Sam
2005-01-01
By using current biological understanding, a conceptually simple, but mathematically complex, model is proposed for the dynamics of the gene circuit responsible for regulating nitrogen catabolite repression (NCR) in yeast. A variety of mathematical “structure” theorems are described that allow one to determine the asymptotic dynamics of complicated systems under very weak hypotheses. It is shown that these theorems apply to several subcircuits of the full NCR circuit, most importantly to the URE2–GLN3 subcircuit that is independent of the other constituents but governs the switching behavior of the full NCR circuit under changes in nitrogen source. Under hypotheses that are fully consistent with biological data, it is proven that the dynamics of this subcircuit is simple periodic behavior in synchrony with the cell cycle. Although the current mathematical structure theorems do not apply to the full NCR circuit, extensive simulations suggest that the dynamics is constrained in much the same way as that of the URE2–GLN3 subcircuit. This finding leads to the proposal that mathematicians study genetic circuits to find new geometries for which structure theorems may exist. PMID:15814615
NASA Astrophysics Data System (ADS)
Chen, Li
1999-09-01
Following a general definition of discrete curves, surfaces, and manifolds (Li Chen, 'Generalized discrete object tracking algorithms and implementations,' in Melter, Wu, and Latecki, eds., Vision Geometry VI, SPIE Vol. 3168, pp. 184-195, 1997), this paper focuses on the Jordan curve theorem in 2D discrete spaces. The Jordan curve theorem says that a simply closed curve separates a simply connected surface into two components. Based on the definition of discrete surfaces, we give three reasonable definitions of simply connected spaces. Theoretically, these three definitions should be equivalent. We have proved the Jordan curve theorem under the third definition of simply connected spaces. The Jordan theorem shows the relationship among an object, its boundary, and its outside area. In continuous space, the boundary of an mD manifold is an (m - 1)D manifold. A similar result applies to regular discrete manifolds. The concept of a new regular nD cell is developed based on the regular surface point in 2D, and on the well-composed objects in 2D and 3D given by Latecki (L. Latecki, '3D well-composed pictures,' in Melter, Wu, and Latecki, eds., Vision Geometry IV, SPIE Vol. 2573, pp. 196-203, 1995).
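The separation property the discrete Jordan curve theorem asserts can be seen concretely on a pixel grid. This is a minimal illustration, not Chen's construction: the grid, curve, and connectivity choice below are toy assumptions.

```python
# A 4-connected flood fill started outside a simple closed curve of
# pixels reaches only one of the two components the curve bounds,
# illustrating the discrete Jordan separation property.
from collections import deque

def flood_fill(grid, start):
    """Return the set of 4-connected cells reachable from start
    without crossing curve cells (value 1)."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

# A 5x5 grid with a closed square curve of 1s around the center cell.
grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
outside = flood_fill(grid, (0, 0))
# The center cell (2, 2) is not reached: the curve separates the
# background into an outside and an inside component.
```

The familiar subtlety in the discrete setting, which motivates the careful definitions in the abstract, is that the curve and its complement must use complementary connectivities (here an 8-connected curve against a 4-connected background) for the separation to hold.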
The Hawking-Penrose Singularity Theorem for C^{1,1}-Lorentzian Metrics
NASA Astrophysics Data System (ADS)
Graf, Melanie; Grant, James D. E.; Kunzinger, Michael; Steinbauer, Roland
2018-06-01
We show that the Hawking-Penrose singularity theorem, and the generalisation of this theorem due to Galloway and Senovilla, continue to hold for Lorentzian metrics that are of C^{1,1}-regularity. We formulate appropriate weak versions of the strong energy condition and genericity condition for C^{1,1}-metrics, and of C^0-trapped submanifolds. By regularisation, we show that, under these weak conditions, causal geodesics necessarily become non-maximising. This requires a detailed analysis of the matrix Riccati equation for the approximating metrics, which may be of independent interest.
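For orientation, the matrix Riccati equation referred to here is, in its standard textbook form (a sketch; the paper's conventions for signs and operators may differ), the evolution equation along a causal geodesic $\gamma$ for the shape operator $A$ of a geodesic congruence:

```latex
% Standard form of the matrix Riccati equation along a causal
% geodesic \gamma (sketch; conventions may differ from the paper):
\dot{A} + A^{2} + R = 0,
\qquad
R(X) = \mathrm{Riem}(X, \dot{\gamma})\,\dot{\gamma},
% Taking the trace yields the Raychaudhuri equation for the
% expansion \theta, the workhorse of singularity-theorem arguments.
```

Controlling the blow-up of solutions to this equation under the weakened energy and genericity conditions is what forces causal geodesics to stop maximising, as stated in the abstract.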
On the locality of the no hair conjecture and the measure of the universe
NASA Technical Reports Server (NTRS)
Pacher, Tibor; Stein-Schabes, Jaime A.
1988-01-01
The recently proposed proof by Jensen and Stein-Schabes of the No Hair Theorem for inhomogeneous spacetimes is analyzed, with special emphasis on the asymptotic behavior of the shear and curvature. It is concluded that the theorem only holds locally, and the minimum size a region must have in order to inflate is estimated. The assumptions used in the theorem are discussed in detail. The last section speculates about the possible measure of the set of spacetimes that would undergo inflation.
NASA Astrophysics Data System (ADS)
Galloway, Gregory J.; Senovilla, José M. M.
2010-08-01
Standard singularity theorems are proven in Lorentzian manifolds of arbitrary dimension n if they contain closed trapped submanifolds of arbitrary co-dimension. By using the mean curvature vector to characterize trapped submanifolds, a unification of the several possibilities for the boundary conditions in the traditional theorems, and their generalization to arbitrary co-dimension, is achieved. The classical convergence conditions must be replaced by a condition on sectional curvatures, or tidal forces, which reduces to the former in the cases of co-dimension 1, 2, or n.