Multi-axial interferometry: demonstration of deep nulling
NASA Astrophysics Data System (ADS)
Buisset, Christophe; Rejeaunier, Xavier; Rabbia, Yves; Ruilier, Cyril; Barillot, Marc; Lierstuen, Lars; Perdigués Armengol, Josep Maria
2017-11-01
The ESA-Darwin mission is devoted to the direct detection and spectroscopic characterization of Earth-like exoplanets. Starlight rejection is achieved by nulling interferometry from space, so as to make the faintly emitting planet in the star's neighborhood detectable. In that context, Alcatel Alenia Space has developed a nulling breadboard for ESA in order to demonstrate, in laboratory conditions, the rejection of an on-axis source. This device, the Multi Aperture Imaging Interferometer (MAII), demonstrated high rejection capability at a level relevant for exoplanets, in single-polarized and monochromatic conditions. In this paper we report on the new multi-axial configuration of MAII and summarize our latest nulling results.
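For a two-beam nuller like MAII, the achievable rejection is limited by residual phase and amplitude mismatch between the beams. A minimal numeric sketch of that relationship (the error values are illustrative assumptions, not MAII measurements):

```python
import numpy as np

def null_depth(phase_err_rad, amp_mismatch):
    """Null depth of a two-beam nuller: ratio of the destructive-port
    intensity to the constructive-port intensity, for monochromatic,
    single-polarization light."""
    a1, a2 = 1.0, 1.0 + amp_mismatch
    # Destructive combination with a residual phase error about pi.
    i_min = abs(a1 + a2 * np.exp(1j * (np.pi + phase_err_rad)))**2
    i_max = abs(a1 + a2)**2
    return i_min / i_max

# Illustrative: 1 mrad phase error, 0.1% amplitude mismatch.
print(null_depth(1e-3, 1e-3))   # ~5e-7: errors enter quadratically
```

For small errors the null depth scales as (phase error squared plus relative amplitude error squared) over four, which is why deep nulling demands milliradian-level phase control.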
A Discrete X-Ray Transform for Chromotomographic Hyperspectral Imaging
2013-03-21
…the Faculty, Department of Mathematics and Statistics, Graduate School of Engineering and Management, Air Force Institute of Technology, Air University…we are dealing with an operator with a gigantic null space; in the literature, this space is known as the cone of missing information. This means that we…reconstruct f from g we would still be faced with solving a linear system L∗Lf = L∗g where the null space of L∗L is gigantic. This means that in order to
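A gigantic null space of L∗L means many objects are consistent with the same data; a least-squares solver can only return one representative, such as the minimum-norm solution. A small toy illustration (the operator below is an assumption for demonstration, not the discrete X-ray transform):

```python
import numpy as np

# Toy wide operator: 2 measurements of a 4-dimensional object,
# so null(L) (and hence null(L^T L)) is 2-dimensional.
L = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

f_true = np.array([1.0, 2.0, 3.0, 4.0])
g = L @ f_true

# Minimum-norm solution of the normal equations L^T L f = L^T g.
f_min = np.linalg.pinv(L) @ g

# Any null-space vector added to f_min reproduces g exactly:
# the data cannot distinguish f_min from f_min + eta.
eta = np.array([1.0, 0.0, -1.0, 0.0])   # L @ eta == 0
assert np.allclose(L @ eta, 0)
assert np.allclose(L @ (f_min + eta), g)
print(f_min)   # [2. 3. 2. 3.] -- not f_true: the lost part is in the null space
```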
Compressed Sensing and Electron Microscopy
2010-01-01
…dimensional space IRn and so there is a lot of collapsing of information. For example, any vector η in the null space N = N(Φ) of Φ is mapped…assignment of the pixel intensity f̂P in the image. Thus, the pixel size is the same as the grid spacing h and we can (with only a slight abuse of notation)…offers a fresh view of signal/image acquisition and reconstruction.
Telescopes in Near Space: Balloon Exoplanet Nulling Interferometer (BigBENI)
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Petrone, Peter; Mallik, Udayan; Mauk, Robin
2012-01-01
A significant and often overlooked path to advancing both science and technology for direct imaging and spectroscopic characterization of exosolar planets is to fly "near space" missions, i.e., balloon-borne exosolar missions. A near-space balloon mission with two or more telescopes, coherently combined, is capable of achieving a subset of the mission science goals of a single large space telescope at a small fraction of the cost. Additionally, such an approach advances technologies toward flight readiness for space flight. Herein we discuss the feasibility of flying two 1.2 meter telescopes, with a baseline separation of 3.6 meters, operating in visible light, on a composite boom structure coupled to a modified visible nulling coronagraph, to achieve an inner working angle of 60 milli-arcseconds. We discuss the potential science return, atmospheric residuals at 135,000 feet, pointing control, and visible nulling, and evaluate the state-of-the-art of these technologies with regard to balloon missions.
Parallel Reconstruction Using Null Operations (PRUNO)
Zhang, Jian; Liu, Chunlei; Moseley, Michael E.
2011-01-01
A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated as linear-algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated using singular value decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve for missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than generalized autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, highly accelerated parallel imaging can be performed with decent image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
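The calibration idea behind this style of k-space method can be sketched with a toy example: build a calibration matrix from sliding windows of multi-coil data and take the SVD's near-zero right singular vectors as nulling kernels that annihilate consistent data. (A schematic illustration only; the synthetic coil model, sizes, and windowing are assumptions, and the conjugate-gradient completion step is omitted.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-coil k-space: each "coil" is a fixed linear mixture of a
# common signal, so local linear dependencies (nulling kernels) exist.
n_coils, n_samples, win = 4, 64, 5
signal = rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)
mix = rng.standard_normal((n_coils, 1))
kspace = mix * signal          # shape (n_coils, n_samples)

# Calibration matrix: sliding windows across all coils (the ACS region).
rows = [kspace[:, i:i + win].ravel() for i in range(n_samples - win + 1)]
A = np.array(rows)

# SVD calibration: right singular vectors with (near-)zero singular
# values give nulling kernels n satisfying A @ n ~ 0.
_, s, vh = np.linalg.svd(A, full_matrices=True)
rank = int(np.sum(s > 1e-10 * s[0]))
null_kernels = vh[rank:]

# Every kernel annihilates the consistent k-space data.
print(np.abs(A @ null_kernels.conj().T).max())   # ~ 0
```

In the full method, the requirement that these null operations also annihilate the unacquired samples becomes the linear system solved iteratively for the missing data.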
Analysis of nulling phase functions suitable to image plane coronagraphy
NASA Astrophysics Data System (ADS)
Hénault, François; Carlotti, Alexis; Vérinaud, Christophe
2016-07-01
Coronagraphy is a very efficient technique for identifying and characterizing extra-solar planets orbiting in the habitable zone of their parent star, especially in a space environment. An important family of coronagraphs is based on phase plates located at an intermediate image plane of the optical system, spreading the starlight outside the "Lyot" exit pupil plane of the instrument. In this communication we present a set of candidate phase functions generating a central null at the Lyot plane, and study how that null propagates to the image plane of the coronagraph. These functions include linear azimuthal phase ramps (the well-known optical vortex), azimuthally cosine-modulated phase profiles, and circular phase gratings. Numerical simulations of the expected null depth, inner working angle, sensitivity to pointing errors, effect of a central obscuration located at the pupil or image planes, and effective throughput including image mask and Lyot stop transmissions are presented and discussed. The preliminary conclusion is that azimuthal cosine functions appear to be an interesting alternative to the classical optical vortex of integer topological charge.
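The central null of these azimuthal phase functions follows from their vanishing azimuthal average. A quick numeric check for the vortex and the cosine-modulated profile (the Bessel-zero condition for the cosine case follows from the standard integral representation of J0; treat this as a sketch, not the paper's simulations):

```python
import numpy as np
from scipy.special import jn_zeros

theta = np.linspace(0, 2 * np.pi, 100000, endpoint=False)

# Optical vortex of integer charge l: the azimuthal average of
# exp(i*l*theta) vanishes, producing an on-axis (central) null.
for charge in (1, 2, 3):
    assert abs(np.mean(np.exp(1j * charge * theta))) < 1e-10

# Azimuthally cosine-modulated phase exp(i*a*cos(theta)): the
# azimuthal average equals J0(a), so setting the modulation depth
# to a zero of the Bessel function J0 also yields a central null.
a = jn_zeros(0, 1)[0]            # first zero of J0, ~2.405
avg = np.mean(np.exp(1j * a * np.cos(theta)))
print(abs(avg))                  # ~ 0
```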
System and Method for Null-Lens Wavefront Sensing
NASA Technical Reports Server (NTRS)
Hill, Peter C. (Inventor); Thompson, Patrick L. (Inventor); Aronstein, David L. (Inventor); Bolcar, Matthew R. (Inventor); Smith, Jeffrey S. (Inventor)
2015-01-01
A method of measuring aberrations in a null lens, including assembly and alignment aberrations. The null lens may be used for measuring aberrations in an aspheric optic. Light propagates from the aspheric optic location through the null lens while a detector is swept through the null-lens focal plane, and image data are collected at locations about said focal plane. Light propagation to the collection locations is simulated for each collected image. Null-lens aberrations may be extracted, e.g., by applying image-based wavefront sensing to the collected images and simulation results. Accounting for the null-lens aberrations improves accuracy in measuring the aspheric optic's aberrations.
NASA Technical Reports Server (NTRS)
Frey, B. J.; Barry, R. K.; Danchi, W. C.; Hyde, T. T.; Lee, K. Y.; Martino, A. J.; Zuray, M. S.
2006-01-01
The Fourier-Kelvin Stellar Interferometer (FKSI) is a mission concept for an imaging and nulling interferometer in the near to mid-infrared spectral region (3-8 microns), and will be a scientific and technological pathfinder for upcoming missions including TPF-I/DARWIN, SPECS, and SPIRIT. At NASA's Goddard Space Flight Center, we have constructed a symmetric Mach-Zehnder nulling testbed to demonstrate techniques and algorithms that can be used to establish and maintain the 10(exp 4) null depth that will be required for such a mission. Among the challenges inherent in such a system is the ability to acquire and track the null fringe to the desired depth for timescales on the order of hours in a laboratory environment. In addition, it is desirable to achieve this stability without using conventional dithering techniques. We describe recent testbed metrology and control system developments necessary to achieve these goals and present our preliminary results.
Visible Nulling Coronagraphy Testbed Development for Exoplanet Detection
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Woodruff, Robert A.; Vasudevan, Gopal; Thompson, Patrick; Chen, Andrew; Petrone, Peter; Booth, Andrew; Madison, Timothy; Bolcar, Matthew;
2010-01-01
Three of the recently completed NASA Astrophysics Strategic Mission Concept (ASMC) studies addressed the feasibility of using a Visible Nulling Coronagraph (VNC) as the prime instrument for exoplanet science. The VNC approach is one of the few that works with filled, segmented, and sparse or diluted-aperture telescope systems, and thus spans the space of potential ASMC exoplanet missions. NASA/Goddard Space Flight Center (GSFC) has a well-established effort to develop VNC technologies and has developed an incremental sequence of VNC testbeds to advance this approach and its associated technologies. Herein we report on the continued development of the vacuum Visible Nulling Coronagraph testbed (VNT). The VNT is an ultra-stable, vibration-isolated testbed that operates under high-bandwidth closed-loop control within a vacuum chamber. It will be used to achieve an incremental sequence of three visible-light nulling milestones at sequentially higher contrasts of 10(exp 8), 10(exp 9) and 10(exp 10) at an inner working angle of 2*lambda/D, ultimately culminating in spectrally broadband (>20%) high-contrast imaging. Each of the milestones, one per year, is traceable to one or more of the ASMC studies. The VNT uses a Mach-Zehnder nulling interferometer, modified with a "W" configuration to accommodate a hex-packed MEMS-based deformable mirror, a coherent fiber bundle, and achromatic phase shifters. We discuss the optical configuration, laboratory results, critical technologies, and the null sensing and control approach.
A Null Space Control of Two Wheels Driven Mobile Manipulator Using Passivity Theory
NASA Astrophysics Data System (ADS)
Shibata, Tsuyoshi; Murakami, Toshiyuki
This paper describes a control strategy for the null-space motion of a two-wheel-driven mobile manipulator. Recently, robots have come to be utilized in various industrial fields, and it is preferable for a robot manipulator to have multiple degrees of freedom of motion. Several studies of kinematics for null-space motion have been proposed; however, the stability analysis of null-space motion has not been sufficiently addressed. Furthermore, these approaches apply to stable systems but not to unstable systems. In this research, the base of the manipulator is a two-wheel-driven mobile robot; the combined system, called a two-wheel-driven mobile manipulator, is unstable. In the proposed approach, the null-space controller is designed using passivity-based stabilization: the controller is chosen so that the closed-loop robot dynamics satisfy passivity. The control strategy stabilizes the robot system through a workspace-observer-based approach together with null-space control, while keeping the end-effector position. The validity of the proposed approach is verified by simulations and experiments on the two-wheel-driven mobile manipulator.
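The kinematic notion of null-space motion, moving the redundant joints while the end-effector stays put, can be illustrated with the standard pseudoinverse projector (this is the generic kinematic construction, not the paper's passivity-based controller; the Jacobian and secondary velocity below are arbitrary examples):

```python
import numpy as np

# Resolved-rate control for a redundant manipulator: the task-space
# command is tracked while a secondary velocity is projected onto the
# null space of the Jacobian, so it does not disturb the end-effector.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])   # 2D task, 3 joints -> 1-dim null space
J_pinv = np.linalg.pinv(J)
N = np.eye(3) - J_pinv @ J        # null-space projector

xdot = np.array([0.1, -0.2])      # desired end-effector velocity
qdot0 = np.array([1.0, 1.0, 1.0]) # secondary objective (e.g. posture)

qdot = J_pinv @ xdot + N @ qdot0

# The null-space term produces zero end-effector motion:
print(J @ (N @ qdot0))            # ~ [0, 0]
assert np.allclose(J @ qdot, xdot)
```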
Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A
2015-09-21
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. 
The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
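The decomposition of an object into measurement and null components described above can be sketched for a finite, discrete operator (toy sizes; a real photon-counting system's operator has an infinite-dimensional null space that a finite matrix can only caricature):

```python
import numpy as np

rng = np.random.default_rng(1)

# A discrete stand-in for an imaging operator H mapping a 6-pixel
# object to 3 measurements.
H = rng.standard_normal((3, 6))
f = rng.standard_normal(6)

# Measurement component: the part of f visible to the system.
# Null component: invisible to the system (H @ f_null == 0).
H_pinv = np.linalg.pinv(H)
f_meas = H_pinv @ H @ f
f_null = f - f_meas

print(np.abs(H @ f_null).max())   # ~ 0: the null component is invisible
assert np.allclose(H @ f, H @ f_meas)
```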
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a nonlinear uncertainty analysis technique that is well-suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files, and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analysis. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analysis. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease-of-use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo on a synthetic groundwater model with hundreds of estimable parameters.
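The core null-space Monte Carlo step that pyNSMC automates can be sketched in a few lines (a schematic with a random toy Jacobian; pyNSMC itself wraps PEST/PEST++ files and pyEMU objects rather than raw arrays):

```python
import numpy as np

rng = np.random.default_rng(2)

# Realizations are drawn by adding random parameter perturbations
# projected onto the (approximate) null space of the Jacobian, so each
# realization fits the data about as well as the calibrated model.
n_obs, n_par = 5, 20
jac = rng.standard_normal((n_obs, n_par))   # sensitivity (Jacobian) matrix
par_cal = rng.standard_normal(n_par)        # calibrated parameters

# SVD: right singular vectors beyond the solution space span the null space.
_, s, vt = np.linalg.svd(jac, full_matrices=True)
n_sol = int(np.sum(s > 1e-10 * s[0]))
v_null = vt[n_sol:].T                        # (n_par, n_par - n_sol)

ensemble = []
for _ in range(100):
    delta = rng.standard_normal(n_par)
    real = par_cal + v_null @ (v_null.T @ delta)   # null-space projection
    ensemble.append(real)
ensemble = np.array(ensemble)

# Every realization reproduces the calibrated model outputs (to first order).
resid = jac @ (ensemble - par_cal).T
print(np.abs(resid).max())   # ~ 0
```

In practice a truncation threshold on the singular values, rather than machine precision, decides where the "solution space" ends, and realizations are then re-checked against the data.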
New Methods of Entanglement with Spatial Modes of Light
2014-02-01
…Poincare beam by state nulling…Poincare patterns measured by imaging polarimetry…perform imaging polarimetry. This entails taking six single-photon images, pixel by pixel, after the passage through six different polarization filters…state nulling [21,22] and by imaging polarimetry [24]. Figure 12 shows the result of state-nulling measurements in diagnosing the mode of a Poincare
Qualification of a Null Lens Using Image-Based Phase Retrieval
NASA Technical Reports Server (NTRS)
Bolcar, Matthew R.; Aronstein, David L.; Hill, Peter C.; Smith, J. Scott; Zielinski, Thomas P.
2012-01-01
In measuring the figure error of an aspheric optic using a null lens, the wavefront contribution from the null lens must be independently and accurately characterized in order to isolate the optical performance of the aspheric optic alone. Various techniques can be used to characterize such a null lens, including interferometry, profilometry, and image-based methods. Only image-based methods, such as phase retrieval, can measure the null-lens wavefront in situ - in single-pass, and at the same conjugates and in the same alignment state in which the null lens will ultimately be used - with no additional optical components. Due to the intended purpose of a null lens (e.g., to null a large aspheric wavefront with a near-equal-but-opposite spherical wavefront), characterizing a null-lens wavefront presents several challenges to image-based phase retrieval: large wavefront slopes and high-dynamic-range data decrease the capture range of phase-retrieval algorithms, increase the requirements on the fidelity of the forward model of the optical system, and make it difficult to extract diagnostic information (e.g., the system F/#) from the image data. In this paper, we present a study of these effects on phase-retrieval algorithms in the context of a null lens used in component development for the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission. Approaches for mitigation are also discussed.
Agarwal, Krishna; Macháň, Radek; Prasad, Dilip K
2018-03-21
Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations from all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in suppression of the corresponding fluorophores. This paper presents techniques to automatically debias localization microscopy and the multiple signal classification algorithm of these biases without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated through five datasets of in vitro and fixed-cell samples.
Comparison null imaging ellipsometry using polarization rotator
NASA Astrophysics Data System (ADS)
Park, Sungmo; Kim, Eunsung; Kim, Jiwon; An, Ilsin
2018-05-01
In this study, two-reflection imaging ellipsometry is carried out to compare the changes in polarization states between two samples. By using a polarization rotator, the parallel and perpendicular components of polarization are easily switched between the two samples being compared. This leads to an intensity image consisting of null and off-null points depending on the difference in optical characteristics between the two samples. This technique does not require any movement of optical elements for nulling and can be used to detect defects or surface contamination for quality control of samples.
Duggal, K L
2016-01-01
A new technique is used to study a family of time-dependent null horizons, called "Evolving Null Horizons" (ENHs), of generalized Robertson-Walker (GRW) space-time (M̄, ḡ) such that the metric ḡ satisfies a kinematic condition. This work is different from our earlier papers on the same issue, where we used (1 + n)-splitting space-time, but only some special subcases of GRW space-time admit this formalism. Also, in contrast to previous work, we have proved that each member of the ENHs is totally umbilical in (M̄, ḡ). Finally, we show that there exists an ENH which is always a null horizon evolving into a black hole event horizon, and suggest some open problems.
Space Interferometry Mission: Measuring the Universe
NASA Technical Reports Server (NTRS)
Marr, James; Dallas, Saterios; Laskin, Robert; Unwin, Stephen; Yu, Jeffrey
1991-01-01
The Space Interferometry Mission (SIM) will be the NASA Origins Program's first space based long baseline interferometric observatory. SIM will use a 10 m Michelson stellar interferometer to provide 4 microarcsecond precision absolute position measurements of stars down to 20th magnitude over its 5 yr. mission lifetime. SIM will also provide technology demonstrations of synthesis imaging and interferometric nulling. This paper describes the what, why and how of the SIM mission, including an overall mission and system description, science objectives, general description of how SIM makes its measurements, description of the design concepts now under consideration, operations concept, and supporting technology program.
The appearance, motion, and disappearance of three-dimensional magnetic null points
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, Nicholas A., E-mail: namurphy@cfa.harvard.edu; Parnell, Clare E.; Haynes, Andrew L.
2015-10-15
While theoretical models and simulations of magnetic reconnection often assume symmetry such that the magnetic null point when present is co-located with a flow stagnation point, the introduction of asymmetry typically leads to non-ideal flows across the null point. To understand this behavior, we present exact expressions for the motion of three-dimensional linear null points. The most general expression shows that linear null points move in the direction along which the magnetic field and its time derivative are antiparallel. Null point motion in resistive magnetohydrodynamics results from advection by the bulk plasma flow and resistive diffusion of the magnetic field, which allows non-ideal flows across topological boundaries. Null point motion is described intrinsically by parameters evaluated locally; however, global dynamics help set the local conditions at the null point. During a bifurcation of a degenerate null point into a null-null pair or the reverse, the instantaneous velocity of separation or convergence of the null-null pair will typically be infinite along the null space of the Jacobian matrix of the magnetic field, but with finite components in the directions orthogonal to the null space. Not all bifurcating null-null pairs are connected by a separator. Furthermore, except under special circumstances, there will not exist a straight line separator connecting a bifurcating null-null pair. The motion of separators cannot be described using solely local parameters because the identification of a particular field line as a separator may change as a result of non-ideal behavior elsewhere along the field line.
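The claim that null-point motion follows from locally evaluated parameters can be checked for the simplest case, a linear null advected at constant velocity (the Jacobian and velocity below are arbitrary examples, chosen trace-free so that div B = 0):

```python
import numpy as np

# Linear 3D null advected at constant velocity v: B(x, t) = Jm @ (x - v*t).
# Tracking the root of B at successive times recovers v.
Jm = np.array([[1.0, 0.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, -3.0]])   # trace-free -> div B = 0
v = np.array([0.1, -0.2, 0.3])

def null_position(t):
    # Solve Jm @ (x - v*t) = 0  ->  x = v*t  (Jm is nonsingular).
    return np.linalg.solve(Jm, Jm @ v * t)

dt = 1e-3
v_est = (null_position(dt) - null_position(0.0)) / dt
print(v_est)   # recovers [0.1, -0.2, 0.3]
```

At the null, the time derivative of B is -Jm @ v, consistent with the paper's statement that the null moves along the direction in which B and its time derivative are antiparallel.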
Altered Anterior Segment Biometric Parameters in Mice Deficient in SPARC.
Ho, Henrietta; Htoon, Hla M; Yam, Gary Hin-Fai; Toh, Li Zhen; Lwin, Nyein Chan; Chu, Stephanie; Lee, Ying Shi; Wong, Tina T; Seet, Li-Fong
2017-01-01
Secreted protein acidic and rich in cysteine (SPARC) and Hevin are structurally related matricellular proteins involved in extracellular matrix assembly. In this study, we compared the anterior chamber biometric parameters and iris collagen properties in SPARC-, Hevin- and SPARC-/Hevin-null with wild-type (WT) mice. The right eyes of 53 WT, 35 SPARC-, 56 Hevin-, and 63 SPARC-/Hevin-null mice were imaged using the RTVue-100 Fourier-domain optical coherence tomography system. The parameters measured were anterior chamber depth (ACD), trabecular-iris space area (TISA), angle opening distance (AOD), and pupil diameter. Biometric data were analyzed using analysis of covariance and adjusted for age, sex, and pupil diameter. Expression of Col1a1, Col8a1, and Col8a2 transcripts in the irises was measured by quantitative polymerase chain reaction. Collagen fibril thickness was evaluated by transmission electron microscopy. Mice that were SPARC- and SPARC-/Hevin-null had 1.28- and 1.25-fold deeper ACD, 1.45- and 1.53-fold larger TISA, as well as 1.42- and 1.51-fold wider AOD than WT, respectively. These measurements were not significantly different between SPARC- and SPARC-/Hevin-null mice. The SPARC-null iris expressed lower Col1a1, but higher Col8a1 and Col8a2 transcripts compared with WT. Collagen fibrils in the SPARC- and SPARC-/Hevin-null irises were 1.5- and 1.7-fold thinner than WT, respectively. The Hevin-null iris did not differ from WT in these collagen properties. SPARC-null mice have deeper anterior chamber as well as wider drainage angles compared with WT. Therefore, SPARC plays a key role in influencing the spatial organization of the anterior segment, potentially via modulation of collagen properties, while Hevin is not likely to be involved.
Extending the scanning angle of a phased array antenna by using a null-space medium.
Sun, Fei; He, Sailing
2014-10-30
By introducing a columnar null-space region as the reference space, we design a radome that can extend the scanning angle of a phased array antenna (PAA) by a predetermined relationship (e.g. a linear relationship between the incident angle and steered output angle can be achieved). After some approximation, we only need two homogeneous materials to construct the proposed radome layer by layer. This kind of medium is called a null-space medium, which has been studied and fabricated for realizing hyper-lenses and some other devices. Numerical simulations verify the performance of our radome.
Digital image profilers for detecting faint sources which have bright companions
NASA Technical Reports Server (NTRS)
Morris, Elena; Flint, Graham; Slavey, Robert
1992-01-01
For this program, an image profiling system was developed which offers the potential for detecting extremely faint optical sources that are located in close proximity to bright companions. The approach employed is novel in three respects. First, it does not require an optical system wherein extraordinary measures must be taken to minimize diffraction and scatter. Second, it does not require detectors possessing either extreme uniformity in sensitivity or extreme temporal stability. Finally, the system can readily be calibrated, or nulled, in space by testing against a single unresolved stellar source.
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
NASA Astrophysics Data System (ADS)
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed for low-frequency. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using discrete Helmholtz decomposition. We parallelize the forward code on multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions and we apply it to real data collected in Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Pade approximation through model order reduction and rational Krylov subspace. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared. 
We prove an "almost always lucky failure" theorem for the case of a right-hand side that depends analytically on frequency. The operator's null space is treated by decomposing the solution into a component lying in the null space and a component orthogonal to it.
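The null-space correction described above can be illustrated with a minimal, hypothetical sketch: on a periodic grid, the gradient fields span the null space of the curl, and a discrete Helmholtz decomposition (here via an FFT-based Poisson solve, rather than the hexahedral finite-element setting of the dissertation) removes that component:

```python
import numpy as np

def remove_gradient_part(E, L=1.0):
    """Project a periodic 3D vector field onto its divergence-free part.

    Gradient fields form the null space of the curl; removing them is a
    discrete Helmholtz decomposition.  phi solves the Poisson equation
    laplacian(phi) = div(E), and E - grad(phi) is divergence-free.
    E has shape (3, n, n, n).  Illustrative sketch, not the thesis code.
    """
    n = E.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    Eh = np.fft.fftn(E, axes=(1, 2, 3))
    div_h = 1j * (kx * Eh[0] + ky * Eh[1] + kz * Eh[2])  # Fourier divergence
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # avoid division by zero at the mean mode
    phi_h = -div_h / k2               # Poisson solve: -k^2 phi_h = div_h
    phi_h[0, 0, 0] = 0.0
    grad_h = np.stack([1j * kx * phi_h, 1j * ky * phi_h, 1j * kz * phi_h])
    return np.real(np.fft.ifftn(Eh - grad_h, axes=(1, 2, 3)))
```

Feeding the projector a pure gradient field should annihilate it, which is a quick sanity check of the decomposition.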
Infrared Imaging and Characterization of Exoplanets: Can we Detect Earth-Twins on a Budget?
NASA Technical Reports Server (NTRS)
Danchi, William
2010-01-01
During the past decade considerable progress has been made developing techniques that can be used to detect and characterize Earth twins in the mid-infrared (7-20 microns). The principal technique is called nulling interferometry, and it was invented by Bracewell in the late 1970s. The nulling technique is an interferometric equivalent of an optical coronagraph. At the present time most of the technological hurdles have been overcome for a space mission to be able to begin Phase A early in the next decade, and it is possible to detect and characterize Earth-twins on a mid-sized strategic mission budget ($600-800 million). I will review progress on this exciting method of planet detection in the context of recent work on the Exoplanet Community Forum and the US Decadal Survey (Astro2010), including biomarkers, technological progress, mission concepts, the theory of these instruments, and a comparison of the discovery space of this technique with others also under consideration.
Sodium imaging of the human knee using soft inversion recovery fluid attenuation.
Feldman, Rebecca E; Stobbe, Robert; Watts, Alexander; Beaulieu, Christian
2013-09-01
Sodium signal strength in MRI is low when compared with ¹H. Thus, image voxel volumes must be relatively large in order to produce a sufficient signal-to-noise ratio (SNR). The measurement of sodium in cartilage is hindered by conflation with signal from the adjacent fluid spaces. Inversion recovery can be used to null signal from fluid, but reduces SNR. The purpose of this work was to optimize inversion recovery sodium MRI to enhance cartilage SNR while nulling fluid. Sodium relaxation was first measured for knee cartilage (T1 = 21 ± 1 ms, T2*fast = 0.8 ± 0.2 ms, T2*slow = 19.7 ± 0.5 ms) and fluid (T1 = 48 ± 3 ms, T2* = 47 ± 4 ms) in nine healthy subjects at 4.7 T. The rapid relaxation of cartilage relative to fluid permits the use of a lengthened inversion pulse to preferentially invert the fluid components. Simulations of inversion pulse length were performed to yield a cartilage-SNR-enhancing combination of parameters that nulled fluid. The simulations were validated in a phantom and then in vivo. B0 inhomogeneity was measured and the effect of off-resonance during the soft inversion pulse was assessed with simulation. Soft inversion recovery yielded twice the SNR and much improved sodium images of cartilage in the human knee with little confounding signal from fluid. Copyright © 2013 Elsevier Inc. All rights reserved.
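The timing argument behind fluid nulling can be sketched under the simplifying assumption of an ideal (hard) inversion and mono-exponential T1 recovery; the paper's lengthened soft pulse modifies this picture, so the numbers below are only illustrative:

```python
import numpy as np

# After an ideal inversion, longitudinal magnetization recovers as
#   Mz(t) = M0 * (1 - 2*exp(-t / T1)),
# so a species is nulled at TI = T1 * ln(2).  Fluid (T1 ~ 48 ms) is then
# nulled while cartilage (T1 ~ 21 ms) has recovered most of its signal.

def inversion_null_time(t1):
    """TI that nulls a species with longitudinal relaxation time t1 (ms)."""
    return t1 * np.log(2.0)

def mz_after_inversion(t1, ti):
    """Longitudinal magnetization (fraction of M0) at time ti after inversion."""
    return 1.0 - 2.0 * np.exp(-ti / t1)

ti_fluid = inversion_null_time(48.0)              # ~33.3 ms nulls fluid
cartilage_signal = mz_after_inversion(21.0, ti_fluid)  # cartilage largely recovered
```

The soft-pulse optimization in the paper effectively exploits the same T1 separation while also discriminating on the T2* difference during the long pulse.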
Extending the scanning angle of a phased array antenna by using a null-space medium
Sun, Fei; He, Sailing
2014-01-01
By introducing a columnar null-space region as the reference space, we design a radome that can extend the scanning angle of a phased array antenna (PAA) by a predetermined relationship (e.g. a linear relationship between the incident angle and steered output angle can be achieved). After some approximation, we only need two homogeneous materials to construct the proposed radome layer by layer. This kind of medium is called a null-space medium, which has been studied and fabricated for realizing hyper-lenses and some other devices. Numerical simulations verify the performance of our radome.
Vacuum Nuller Testbed Performance, Characterization and Null Control
NASA Technical Reports Server (NTRS)
Lyon, R. G.; Clampin, M.; Petrone, P.; Mallik, U.; Madison, T.; Bolcar, M.; Noecker, C.; Kendrick, S.; Helmbrecht, M. A.
2011-01-01
The Visible Nulling Coronagraph (VNC) can detect and characterize exoplanets with filled, segmented and sparse aperture telescopes, thereby spanning the choice of future internal coronagraph exoplanet missions. NASA/Goddard Space Flight Center (GSFC) has developed a Vacuum Nuller Testbed (VNT) to advance this approach, and to assess and advance the technologies needed to realize a VNC as a flight instrument. The VNT is an ultra-stable testbed operating at 15 Hz in vacuum. It consists of a Mach-Zehnder nulling interferometer, modified with a "W" configuration to accommodate a hex-packed MEMS-based deformable mirror (DM), a coherent fiber bundle, and achromatic phase shifters. The two output channels are imaged with a vacuum photon-counting camera and a conventional camera. Error-sensing and feedback to the DM and delay line with control algorithms are implemented in a real-time architecture. The inherent advantage of the VNC is that it is its own interferometer and directly controls its errors by exploiting images from the bright and dark channels simultaneously. Conservation of energy requires that the total photon count be conserved independent of the VNC state. Thus the sensing and control bandwidth is limited by the target star's throughput, with the net effect that the higher bandwidth offloads stressing stability tolerances within the telescope. We report our recent progress with the VNT towards achieving an incremental sequence of contrast milestones of 10^8, 10^9 and 10^10, respectively, at inner working angles approaching 2λ/D. We discuss the optics, laboratory results, technologies, and null control, and present evidence that these milestones have been achieved.
NASA Technical Reports Server (NTRS)
Friedman, I.; Casas, R. E.
1982-01-01
The collimating mirror within the Fine Guidance Subsystem of the Space Telescope's Pointing Control System is aspherized in order to correct the pupil aberration. A null corrector is needed to test the collimating mirror in autocollimation. Triplet and doublet null corrector designs are subjected to tolerance sensitivity analyses, and the doublet design is chosen despite its more restricted tolerances because of its compactness and simplicity.
DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.
Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.
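The abstract's upper-bound idea is specific to its posterior tail statistic, but the general principle, that a small number of null simulations can still yield a valid (conservative) p-value, can be sketched with the standard Monte Carlo correction; the actual bound used in the paper differs:

```python
import numpy as np

def mc_p_value_bound(t_obs, t_null):
    """Conservative Monte Carlo p-value from simulations under the null.

    t_null: test statistics computed on datasets simulated under the null
    model.  The +1 corrections make this a valid p-value even for small
    simulation counts -- the same spirit as the abstract's upper bound,
    though not its exact construction.
    """
    t_null = np.asarray(t_null)
    return (1.0 + np.sum(t_null >= t_obs)) / (t_null.size + 1.0)
```

With only a handful of simulations the bound stays honest: an extreme observed statistic cannot drive the p-value below 1/(n+1).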
Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre
2012-01-01
Background Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). 
We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material.
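A minimal sketch of the surrogate idea, using classic Fourier phase randomization instead of the paper's dual-tree complex wavelet matching: the surrogate inherits the original image's power spectrum, and hence (by the Wiener-Khinchin theorem) its autocorrelation, while its phases come from a white-noise image:

```python
import numpy as np

def fourier_surrogate(img, rng=None):
    """Random surrogate of a 2D image preserving its power spectrum.

    Combines the amplitude spectrum of `img` with the phase spectrum of a
    white-noise image; because both spectra retain the Hermitian symmetry
    of real images, the inverse transform is real up to round-off.  This
    is a simpler stand-in for the wavelet-spectrum matching in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    amplitude = np.abs(np.fft.fft2(img))
    noise = rng.standard_normal(img.shape)
    phases = np.angle(np.fft.fft2(noise))        # phases of white noise
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phases)))
```

Repeating the call builds an ensemble of independent surrogates from which the null distribution of any association statistic can be tabulated.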
MAGNETIC NULL POINTS IN KINETIC SIMULATIONS OF SPACE PLASMAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olshevsky, Vyacheslav; Innocenti, Maria Elena; Cazzola, Emanuele
2016-03-01
We present a systematic attempt to study magnetic null points and the associated magnetic energy conversion in kinetic particle-in-cell simulations of various plasma configurations. We address three-dimensional simulations performed with the semi-implicit kinetic electromagnetic code iPic3D in different setups: variations of a Harris current sheet, dipolar and quadrupolar magnetospheres interacting with the solar wind, and a relaxing turbulent configuration with multiple null points. Spiral nulls are more likely created in space plasmas: in all our simulations except the lunar magnetic anomaly (LMA) and quadrupolar mini-magnetosphere cases, the number of spiral nulls prevails over the number of radial nulls by a factor of 3–9. We show that often magnetic nulls do not indicate the regions of intensive energy dissipation. Energy dissipation events caused by topological bifurcations at radial nulls are rather rare and short-lived. The so-called X-lines formed by the radial nulls in the Harris current sheet and LMA simulations are rather stable and do not exhibit any energy dissipation. Energy dissipation is more powerful in the vicinity of spiral nulls enclosed by magnetic flux ropes with strong currents at their axes (their cross sections resemble 2D magnetic islands). These null lines, reminiscent of Z-pinches, efficiently dissipate magnetic energy due to secondary instabilities such as the two-stream or kinking instability, accompanied by changes in magnetic topology. Current enhancements accompanied by spiral nulls may signal magnetic energy conversion sites in the observational data.
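The spiral/radial taxonomy used above can be sketched from the linearization of B at a null: since div B = 0, the eigenvalues of the Jacobian dB_i/dx_j sum to zero, and a complex-conjugate pair marks a spiral null (field lines winding around the spine). This hypothetical helper follows the usual classification and is not the iPic3D analysis code:

```python
import numpy as np

def classify_null(jacobian, tol=1e-12):
    """Classify a magnetic null from the Jacobian dB_i/dx_j at the null.

    div B = 0 forces the eigenvalues to sum to zero.  All-real eigenvalues
    give a "radial" (X-type) null; a complex-conjugate pair gives a
    "spiral" null.  Illustrative sketch only.
    """
    eig = np.linalg.eigvals(np.asarray(jacobian, dtype=float))
    return "spiral" if np.any(np.abs(eig.imag) > tol) else "radial"
```

In practice one locates grid cells where all three components of B change sign, interpolates to the null, and applies this classification to the local Jacobian.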
Simple Fourier optics formalism for high-angular-resolution systems and nulling interferometry.
Hénault, François
2010-03-01
Reviewed are various designs of advanced, multiaperture optical systems dedicated to high-angular-resolution imaging or to the detection of exoplanets by nulling interferometry. A simple Fourier optics formalism applicable to both imaging arrays and nulling interferometers is presented, allowing their basic theoretical relationships to be derived as convolution or cross-correlation products suitable for fast and accurate computation. Several unusual designs, such as a "superresolving telescope" utilizing a mosaicking observation procedure or a free-flying, axially recombined interferometer, are examined, and their performance in terms of imaging and nulling capacity is assessed. In all considered cases, it is found that the limiting parameter is the diameter of the individual telescopes. A final section devoted to nulling interferometry shows an apparent superiority of axial versus multiaxial recombining schemes. The entire study is valid only in the framework of first-order geometrical optics and scalar diffraction theory. Furthermore, it is assumed that all entrance subapertures are optically conjugated with their associated exit pupils.
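The Fourier-optics relationships the paper derives can be illustrated with a toy two-aperture pupil: under scalar Fraunhofer diffraction the point-spread function is |FT(pupil)|², and a π phase shift on one subaperture turns the constructive on-axis fringe into a null. The geometry and numbers below are invented for illustration:

```python
import numpy as np

def psf(pupil):
    """Point-spread function as |FT(pupil)|^2 (scalar Fraunhofer diffraction)."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    return np.abs(field) ** 2

# Two circular subapertures separated along x; a pi phase shift on one
# of them (the minus sign) moves the on-axis fringe from bright to null.
n, r, sep = 256, 12, 48
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
ap1 = ((x - sep // 2) ** 2 + y ** 2) < r ** 2
ap2 = ((x + sep // 2) ** 2 + y ** 2) < r ** 2
bright = psf(ap1.astype(complex) + ap2)   # constructive recombination
dark = psf(ap1.astype(complex) - ap2)     # nulled recombination
```

The same machinery extends to arbitrary array geometries, which is the computational convenience the formalism exploits.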
Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems
NASA Technical Reports Server (NTRS)
Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy
2007-01-01
High contrast imaging from space relies on a coronagraph to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10^-10 for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed-loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region of the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.
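A hypothetical, linearized sketch of the energy-minimization step: given a model Jacobian G from DM actuator strokes to the focal-plane field over the dark-hole pixels, and an estimate E of the current field there, the actuator update minimizes |E + G a|². This generic regularized least-squares stand-in is not the paper's exact algorithm:

```python
import numpy as np

def dark_hole_correction(G, E, reg=1e-6):
    """One linearized dark-hole energy-minimization step.

    G : complex (pixels x actuators) response matrix (model-based).
    E : complex estimated field at the dark-hole pixels.
    Returns real actuator strokes minimizing |E + G a|^2 with Tikhonov
    regularization; real and imaginary parts are stacked so that the
    solution is real-valued, as physical strokes must be.
    """
    Gr = np.vstack([G.real, G.imag])
    Er = np.concatenate([E.real, E.imag])
    GtG = Gr.T @ Gr + reg * np.eye(Gr.shape[1])   # regularized normal equations
    return np.linalg.solve(GtG, -Gr.T @ Er)
```

In the closed loop, the field estimate E itself comes from the DM-diversity reconstruction described in the abstract, and the step is iterated as the linearization is only locally valid.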
Einstein-Weyl spaces and third-order differential equations
NASA Astrophysics Data System (ADS)
Tod, K. P.
2000-08-01
The three-dimensional null-surface formalism of Tanimoto [M. Tanimoto, "On the null surface formalism," Report No. gr-qc/9703003 (1997)] and Forni et al. [Forni et al., "Null surfaces formation in 3D," J. Math Phys. (submitted)] are extended to describe Einstein-Weyl spaces, following Cartan [E. Cartan, "Les espaces généralisées et l'integration de certaines classes d'equations différentielles," C. R. Acad. Sci. 206, 1425-1429 (1938); "La geometria de las ecuaciones diferenciales de tercer order," Rev. Mat. Hispano-Am. 4, 1-31 (1941)]. In the resulting formalism, Einstein-Weyl spaces are obtained from a particular class of third-order differential equations. Some examples of the construction which include some new Einstein-Weyl spaces are given.
Null conformal Killing-Yano tensors and Birkhoff theorem
NASA Astrophysics Data System (ADS)
Ferrando, Joan Josep; Sáez, Juan Antonio
2016-04-01
We study the space-times admitting a null conformal Killing-Yano tensor whose divergence defines a Killing vector. We analyze the similarities and differences with the recently studied non null case (Ferrando and Sáez in Gen Relativ Gravit 47:1911, 2015). The results by Barnes concerning the Birkhoff theorem for the case of null orbits are analyzed and generalized.
Null-space and statistical significance of first-arrival traveltime inversion
NASA Astrophysics Data System (ADS)
Morozov, Igor B.
2004-03-01
The strong uncertainty inherent in the traveltime inversion of first arrivals from surface sources is usually removed by using a priori constraints or regularization. This leads to the null-space (data-independent model variability) being inadequately sampled, and consequently, model uncertainties may be underestimated in traditional (such as checkerboard) resolution tests. To measure the full null-space model uncertainties, we use unconstrained Monte Carlo inversion and examine the statistics of the resulting model ensembles. In an application to 1-D first-arrival traveltime inversion, the τ-p method is used to build a set of models that are equivalent to the IASP91 model within small, ~0.02 per cent, time deviations. The resulting velocity variances are much larger, ~2-3 per cent within the regions above the mantle discontinuities, and are interpreted as being due to the null-space. Depth-variant depth averaging is required for constraining the velocities within meaningful bounds, and the averaging scalelength could also be used as a measure of depth resolution. Velocity variances show structure-dependent, negative correlation with the depth-averaging scalelength. Neither the smoothest (Herglotz-Wiechert) nor the mean velocity-depth functions reproduce the discontinuities in the IASP91 model; however, the discontinuities can be identified by the increased null-space velocity (co-)variances. Although derived for a 1-D case, the above conclusions also relate to higher dimensions.
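The unconstrained Monte Carlo idea can be sketched in a toy 1-D setting (layered medium, vertical traveltimes, invented numbers, rather than the τ-p machinery and IASP91 model of the paper): accept random velocity models whose traveltime matches the reference within a small tolerance, then read the null-space uncertainty off the ensemble spread:

```python
import numpy as np

def null_space_ensemble(h, v0, tol, n_trials=20000, scale=0.05, rng=None):
    """Unconstrained Monte Carlo sampling of traveltime-equivalent models.

    Perturbs layer velocities at random and keeps every model whose total
    vertical traveltime t = sum(h_i / v_i) matches the reference within a
    fractional tolerance `tol`.  The spread of the accepted ensemble
    measures the null-space (data-independent) model variability.
    """
    rng = np.random.default_rng() if rng is None else rng
    t0 = np.sum(h / v0)
    kept = []
    for _ in range(n_trials):
        v = v0 * (1.0 + scale * rng.standard_normal(v0.size))
        if abs(np.sum(h / v) - t0) < tol * t0:
            kept.append(v)
    return np.array(kept)
```

Even with a very tight traveltime tolerance, the per-layer velocity spread of the accepted models remains large, which is the 0.02 per cent versus 2-3 per cent contrast reported in the abstract.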
The Hubble Space Telescope optical systems failure report
NASA Technical Reports Server (NTRS)
1990-01-01
The findings of the Hubble Space Telescope Optical Systems Board of Investigation are reported. The Board was formed to determine the cause of the flaw in the telescope, how it occurred, and why it was not detected before launch. The Board conducted its investigation to include interviews with personnel involved in the fabrication and test of the telescope, review of documentation, and analysis and test of the equipment used in the fabrication of the telescope's mirrors. The investigation proved that the primary mirror was made in the wrong shape (a 0.4-wave rms wavefront error at 632.8 nm). The primary mirror was manufactured by the Perkin-Elmer Corporation (Hughes Danbury Optical Systems, Inc.). The critical optics used as a template in shaping the mirror, the reflective null corrector (RNC), consisted of two small mirrors and a lens. This unit had been preserved by the manufacturer exactly as it was during the manufacture of the mirror. When the Board measured the RNC, the lens was incorrectly spaced from the mirrors. Calculations of the effect of such displacement on the primary mirror show that the measured amount, 1.3 mm, accounts in detail for the amount and character of the observed image blurring. No verification of the reflective null corrector's dimensions was carried out by Perkin-Elmer after the original assembly. There were, however, clear indications of the problem from auxiliary optical tests made at the time. A special optical unit called an inverse null corrector, designed to mimic the reflection from a perfect primary mirror, was built and used to align the apparatus; when so used, it clearly showed the error in the reflective null corrector. A second null corrector was used to measure the vertex radius of the finished primary mirror. It, too, clearly showed the error in the primary mirror. Both indicators of error were discounted at the time as being themselves flawed. 
The Perkin-Elmer plan for fabricating the primary mirror placed complete reliance on the reflective null corrector as the only test to be used in both manufacturing and verifying the mirror's surface with the required precision. This methodology should have alerted NASA management to the fragility of the process and the possibility of gross error. Such errors had been seen in other telescope programs, yet no independent tests were planned, although some simple tests to protect against major error were considered and rejected. During the critical time period, there was great concern about cost and schedule, which further inhibited consideration of independent tests.
Samardzic, Dejan; Thamburaj, Krishnamoorthy
2015-01-01
To report the brain imaging features on magnetic resonance imaging (MRI) in inadvertent intrathecal gadolinium administration. A 67-year-old female with gadolinium encephalopathy from inadvertent high-dose intrathecal gadolinium administration during an epidural steroid injection was studied with multisequence 3T MRI. T1-weighted imaging shows a pseudo-T2 appearance with diffusion of gadolinium into the brain parenchyma, olivary bodies, and membranous labyrinth. Nulling of cerebrospinal fluid (CSF) signal is absent on fluid-attenuated inversion recovery (FLAIR). Susceptibility-weighted imaging (SWI) demonstrates features similar to subarachnoid hemorrhage. CT may demonstrate a pseudo-cerebral edema pattern given the high attenuation characteristics of gadolinium. Intrathecal gadolinium demonstrates characteristic imaging features on MRI of the brain and may mimic subarachnoid hemorrhage on susceptibility-weighted imaging. Identifying high-dose gadolinium within the CSF spaces on MRI is essential to avoid diagnostic and therapeutic errors. Copyright © 2013 by the American Society of Neuroimaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olshevsky, Vyacheslav; Lapenta, Giovanni; Divin, Andrey
We use kinetic particle-in-cell and MHD simulations supported by an observational data set to investigate magnetic reconnection in clusters of null points in space plasma. The magnetic configuration under investigation is driven by fast adiabatic flux rope compression that dissipates almost half of the initial magnetic field energy. In this phase powerful currents are excited, producing secondary instabilities, and the system is brought into a state of "intermittent turbulence" within a few ion gyro-periods. Reconnection events are distributed all over the simulation domain and energy dissipation is rather volume-filling. Numerous spiral null points interconnected via their spines form null lines embedded into magnetic flux ropes; null point pairs demonstrate the signatures of torsional spine reconnection. However, energy dissipation mainly happens in the shear layers formed by adjacent flux ropes with oppositely directed currents. In these regions radial null pairs spontaneously emerge and vanish, associated with electron streams and small-scale current sheets. The number of spiral nulls in the simulation outweighs the number of radial nulls by a factor of 5–10, in accordance with Cluster observations in the Earth's magnetosheath. Twisted magnetic fields with embedded spiral null points might indicate the regions of major energy dissipation for future space missions such as the Magnetospheric Multiscale Mission.
Asymptotic symmetries of Rindler space at the horizon and null infinity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Hyeyoun
2010-08-15
We investigate the asymptotic symmetries of Rindler space at null infinity and at the event horizon using both systematic and ad hoc methods. We find that the approaches that yield infinite-dimensional asymptotic symmetry algebras in the case of anti-de Sitter and flat spaces only give a finite-dimensional algebra for Rindler space at null infinity. We calculate the charges corresponding to these symmetries and confirm that they are finite, conserved, and integrable, and that the algebra of charges gives a representation of the asymptotic symmetry algebra. We also use relaxed boundary conditions to find infinite-dimensional asymptotic symmetry algebras for Rindler space at null infinity and at the event horizon. We compute the charges corresponding to these symmetries and confirm that they are finite and integrable. We also determine sufficient conditions for the charges to be conserved on-shell, and for the charge algebra to give a representation of the asymptotic symmetry algebra. In all cases, we find that the central extension of the charge algebra is trivial.
Towards a laboratory breadboard for PEGASE, the DARWIN pathfinder
NASA Astrophysics Data System (ADS)
Cassaing, F.; Le Duigou, J.-M.; Sorrente, B.; Fleury, B.; Gorius, N.; Brachet, F.; Buisset, C.; Ollivier, M.; Hénault, F.; Mourard, D.; Rabbia, Y.; Delpech, M.; Guidotti, P.-Y.; Léger, A.; Barillot, M.; Rouan, D.; Rousset, G.
2017-11-01
PEGASE, a spaceborne mission proposed to CNES, is a 2-aperture interferometer for nulling and interferometric imaging. PEGASE is composed of 3 free-flying satellites (2 siderostats and 1 beam combiner) with baselines from 50 to 500 m. The goals of PEGASE are the spectroscopy of hot Jupiters (Pegasides) and brown dwarfs, the exploration of the inner part of protoplanetary disks, and the validation in real space conditions of nulling and visibility interferometry with formation flying. During a phase-0 study performed in 2005 at CNES, ONERA and in the laboratories, the critical subsystems of the optical payload were investigated and a preliminary system integration was performed. These subsystems are mostly the broadband (2.5-5 μm) nuller and the cophasing system (visible) dedicated to the real-time control of the OPD/tip/tilt inside the payload. A laboratory breadboard of the payload is under definition and should be built in 2007.
Deep Broad-Band Infrared Nulling Using A Single-Mode Fiber Beam Combiner and Baseline Rotation
NASA Technical Reports Server (NTRS)
Mennesson, Bertrand; Haguenauer, P.; Serabyn, E.; Liewer, K.
2006-01-01
The basic advantage of single-mode fibers for deep nulling applications resides in their spatial filtering ability, and has now long been known. However, as suggested more recently, a single-mode fiber can also be used for direct coherent recombination of spatially separated beams, i.e. in a 'multi-axial' nulling scheme. After the first successful demonstration of deep (<2e-6) visible LASER nulls using this technique (Haguenauer & Serabyn, Applied Optics 2006), we decided to work on an infrared extension for ground-based astronomical observations, e.g. using two or more off-axis sub-apertures of a large ground-based telescope. In preparation for such a system, we built and tested a laboratory infrared fiber nuller working in a wavelength regime where atmospheric turbulence can be efficiently corrected, over a pass band (approx. 1.5 to 1.8 micron) broad enough to provide reasonable sensitivity. In addition, since no snapshot images are readily accessible with a (single) fiber nuller, we also tested baseline rotation as an approach to detect off-axis companions while keeping a central null. This modulation technique is identical to the baseline rotation envisioned for the TPF-I space mission. Within this context, we report here on early laboratory results showing deep, stable, broad-band dual-polarization infrared nulls <5e-4 (currently limited by detector noise), and visible LASER nulls better than 3e-4 over a 360 degree rotation of the baseline. While further work will take place in the laboratory to achieve deeper stable broad-band nulls and to test off-axis source detection through rotation, the emphasis will be put on bringing such a system to a telescope as soon as possible. Detection capability at the 500:1 contrast ratio in the K band (2.2 microns) seems readily accessible within 50-100 mas of the optical axis, even with a first-generation system mounted on a >5 m AO-equipped telescope such as the Palomar Hale 200 inch, the Keck, Subaru or Gemini telescopes.
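The error budget behind such null depths can be sketched for an ideal two-beam nuller: with a phase error Δφ (radians) and fractional amplitude mismatch δa between the beams, the null depth is approximately (Δφ² + δa²)/4, so reaching 5e-4 requires matching phases to a few hundredths of a radian. A small sketch of this standard textbook relation (not taken from the paper):

```python
import numpy as np

def null_depth(dphi=0.0, damp=0.0):
    """Exact two-beam null depth for a phase error dphi (radians) and a
    fractional amplitude mismatch damp.

    Computed from the coherent sum of a unit field and a second field
    (1 + damp) * exp(i * (pi + dphi)); for small errors this reduces to
    the familiar approximation (dphi**2 + damp**2) / 4.
    """
    dark = abs(1.0 - (1.0 + damp) * np.exp(1j * dphi)) ** 2
    bright = abs(1.0 + (1.0 + damp) * np.exp(1j * dphi)) ** 2
    return dark / bright
```

The single-mode fiber's spatial filtering removes wavefront-shape errors, leaving only the piston (dphi) and coupling (damp) mismatches counted here.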
Charged-particle emission tomography
NASA Astrophysics Data System (ADS)
Ding, Yijun
Conventional charged-particle imaging techniques, such as autoradiography, provide only two-dimensional (2D) images of thin tissue slices. To get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick sections, thus increasing laboratory throughput and eliminating distortions due to registration. In CPET, molecules or cells of interest are labeled so that they emit charged particles without significant alteration of their biological function. Therefore, by imaging the source of the charged particles, one can gain information about the distribution of the molecules or cells of interest. Two special cases of CPET are beta emission tomography (BET) and alpha emission tomography (alphaET), where the charged particles employed are fast electrons and alpha particles, respectively. A crucial component of CPET is the charged-particle detector. Conventional charged-particle detectors are sensitive only to the 2D positions of the detected particles. We propose a new detector concept, which we call the particle-processing detector (PPD). A PPD measures attributes of each detected particle, including location, direction of propagation, and/or the energy deposited in the detector. Reconstruction algorithms for CPET are developed, and reconstruction results from simulated data are presented for both BET and alphaET. The results show that, in addition to position, direction and energy provide valuable information for 3D reconstruction in CPET. Several designs of particle-processing detectors are described. Experimental results for one detector are discussed.
With appropriate detector design and careful data analysis, it is possible to measure direction and energy, as well as position of each detected particle. The null functions of CPET with PPDs that measure different combinations of attributes are calculated through singular-value decomposition. In general, the more particle attributes are measured from each detection event, the smaller the null space of CPET is. In other words, the higher dimension the data space is, the more information about an object can be recovered from CPET.
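The null-space computation mentioned above is, in the linear discrete-to-discrete approximation, a singular-value decomposition of the imaging operator; a minimal generic sketch (not the CPET operator itself):

```python
import numpy as np

def null_space(A, rtol=1e-10):
    """Orthonormal basis of the (numerical) null space of an imaging
    operator A, via SVD.

    Right singular vectors whose singular values fall below rtol * s_max
    span the null space: object components in this span produce no data.
    Measuring more attributes per detection adds rows to A, which can
    only shrink this space -- the point made in the abstract.
    """
    _, s, Vh = np.linalg.svd(A)
    rank = int(np.sum(s > rtol * s[0]))
    return Vh[rank:].conj().T
```

The dimension of the returned basis directly quantifies how much of the object is invisible to a given detector configuration.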
The (2, 0) superalgebra, null M-branes and Hitchin's system
NASA Astrophysics Data System (ADS)
Kucharski, P.; Lambert, N.; Owen, M.
2017-10-01
We present an interacting system of equations with sixteen supersymmetries and an SO(2) × SO(6) R-symmetry where the fields depend on two space and one null dimensions that is derived from a representation of the six-dimensional (2, 0) superalgebra. The system can be viewed as two M5-branes compactified on S^1_- × T^2 or equivalently as M2-branes on R^+ × R^2, where ± refer to null directions. We show that for a particular choice of fields the dynamics can be reduced to motion on the moduli space of solutions to the Hitchin system. We argue that this provides a description of intersecting null M2-branes and is also related by U-duality to a DLCQ description of four-dimensional maximally supersymmetric Yang-Mills.
Testing the TPF Interferometry Approach before Launch
NASA Technical Reports Server (NTRS)
Serabyn, Eugene; Mennesson, Bertrand
2006-01-01
One way to directly detect nearby extra-solar planets is via their thermal infrared emission, and with this goal in mind, both NASA and ESA are investigating cryogenic infrared interferometers. Common to both agencies' approaches to faint off-axis source detection near bright stars is the use of a rotating nulling interferometer, such as the Terrestrial Planet Finder interferometer (TPF-I) or Darwin. In this approach, the central star is nulled, while the emission from off-axis sources is transmitted and modulated by the rotation of the off-axis fringes. Because of the high contrasts involved, and the novelty of the measurement technique, it is essential to gain experience with this technique before launch. Here we describe a simple ground-based experiment that can test the essential aspects of the TPF signal measurement and image reconstruction approaches by generating a rotating interferometric baseline within the pupil of a large single-aperture telescope. This approach can mimic potential space-based interferometric configurations, and allow the extraction of signals from off-axis sources using the same algorithms proposed for the space-based missions. This approach should thus allow for testing of the applicability of proposed signal extraction algorithms for the detection of single and multiple near-neighbor companions...
NASA Astrophysics Data System (ADS)
Defrère, D.; Absil, O.; den Hartog, R.; Hanot, C.; Stark, C.
2010-01-01
Context. Earth-sized planets around nearby stars are being detected for the first time by ground-based radial velocity and space-based transit surveys. This milestone is opening the path toward the definition of instruments able to directly detect the light from these planets, with the identification of bio-signatures as one of the main objectives. In that respect, both the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA) have identified nulling interferometry as one of the most promising techniques. The ability to study distant planets will however depend on the amount of exozodiacal dust in the habitable zone of the target stars. Aims: We assess the impact of exozodiacal clouds on the performance of an infrared nulling interferometer in the Emma X-array configuration. The first part of the study is dedicated to the effect of the disc brightness on the number of targets that can be surveyed and studied by spectroscopy during the mission lifetime. In the second part, we address the impact of asymmetric structures in the discs such as clumps and offset which can potentially mimic the planetary signal. Methods: We use the DarwinSIM software which was designed and validated to study the performance of space-based nulling interferometers. The software has been adapted to handle images of exozodiacal discs and to compute the corresponding demodulated signal. Results: For the nominal mission architecture with 2-m aperture telescopes, centrally symmetric exozodiacal dust discs about 100 times denser than the solar zodiacal cloud can be tolerated in order to survey at least 150 targets during the mission lifetime. Considering modeled resonant structures created by an Earth-like planet orbiting at 1 AU around a Sun-like star, we show that this tolerable dust density goes down to about 15 times the solar zodiacal density for face-on systems and decreases with the disc inclination. 
Conclusions: Whereas the disc brightness only affects the integration time, the presence of clumps or offset is more problematic and can hamper the planet detection. Based on the worst-case scenario for debris disc structures, the upper limit on the tolerable exozodiacal dust density is approximately 15 times the density of the solar zodiacal cloud. This gives the typical sensitivity that we will need to reach on exozodiacal discs in order to prepare the scientific programme of future Earth-like planet characterisation missions.
Centroids evaluation of the images obtained with the conical null-screen corneal topographer
NASA Astrophysics Data System (ADS)
Osorio-Infante, Arturo I.; Armengol-Cruz, Victor de Emanuel; Campos-García, Manuel; Cossio-Guerrero, Cesar; Marquez-Flores, Jorge; Díaz-Uribe, José Rufino
2016-09-01
In this work, we propose some algorithms to recover the centroids of the resultant image obtained by a conical null-screen based corneal topographer. With these algorithms, we obtain the regions of interest (ROIs) of the original image and, using an image-processing algorithm, we calculate the geometric centroid of each ROI. In order to improve our algorithm performance, we use different settings of null-screen targets, changing their size and number. We also improved the illumination system to avoid inhomogeneous zones in the corneal images. Finally, we report some corneal topographic measurements with the best setting we found.
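The centroid step described above can be sketched with standard tools: threshold the image, label each connected bright spot as an ROI, and take the geometric centroid of each labeled region. This is a minimal sketch assuming NumPy/SciPy and a synthetic image of square targets; the paper's actual segmentation and illumination handling are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def roi_centroids(image, threshold):
    """Threshold the image, label each connected bright spot (ROI), and
    return the geometric centroid (row, col) of every ROI."""
    mask = image > threshold
    labels, n = ndimage.label(mask)
    # Using the binary mask (not the intensities) weights every pixel in
    # an ROI equally, which yields the geometric centroid.
    return ndimage.center_of_mass(mask.astype(float), labels, range(1, n + 1))

# Synthetic null-screen-like image: two bright square targets.
img = np.zeros((50, 50))
img[10:13, 10:13] = 1.0   # geometric centroid at (11, 11)
img[30:35, 40:45] = 1.0   # geometric centroid at (32, 42)
cents = roi_centroids(img, 0.5)
```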
NASA Astrophysics Data System (ADS)
Aime, C.; Soummer, R.
This book reports the proceedings of the second Journées d'Imagerie à Très Haute Dynamique et Détection d'Exoplanètes (Days on High Contrast Imaging and Exoplanets Detection) that were held in Nice on October 6-10, 2003 with the joint efforts of the Collège de France, the Observatoire de la Côte d'Azur, the CNRS (Centre National de la Recherche Scientifique) and the Laboratoire Universitaire d'Astrophysique de Nice, which organized the meeting. The first Journées led to the publication of Volume 8, 2003 EAS Publications Series: Astronomy with High Contrast Imaging: From Planetary Systems to Active Galactic Nuclei, which collected 33 papers presented during the session of May 13-16, 2002. It covered a very large domain of research in high contrast imaging for exoplanet detection: astrophysical science (from protoplanetary disks to AGNs), instruments and techniques (from coronagraphy to nulling), and data processing. These Journées took place because of the need for a working session giving the participants enough time to explain their work and understand that of their colleagues. The second Journées took the form of an École thématique du CNRS. The courses were held in French, but the reports are in English. The present edition reports 29 courses and short presentations given on this occasion. The texts correspond to original presentations, and a few communications, too similar to those of 2002, were not reported here to avoid duplication. This makes the two books complementary. The general theme of the school was similar to that of the former meeting, with a marked teaching objective. The courses and presentations were also more centered on optics and instrumental techniques. The main idea was to study what we could call “exoplanetographs”, instruments using apodisation, coronagraphy, nulling or other techniques to directly record the light of an exoplanet. Fundamental aspects of signal processing were deferred to a third edition of the school. 
A very short explanation of how the reports are ordered is given here. The Journées of 2003 started with the delocalized lectures (delocalized means here “not in Paris”!) of the Collège de France, of Antoine Labeyrie who wrote a report on Removal of coronagraphy residues with an adaptive hologram. Three invited seminars follow: Olivier Guyon (Pupil remapping techniques), Daniel Rouan (Ultra-nulling interferometers), and Kjetil Dohlen (Phase masks in astronomy). An illustration from Daniel Rouan's talk on the properties of Prouhet-Thué-Morse series was also selected for the cover figure of this edition. These papers are followed by the courses and communications given during the 4 days of the school, in a slightly different order of their presentation. The first two days were on atmospheric turbulence and adaptive optics for coronagraphy, and also coronagraphic space projects. Steve Ridgway gives a general introduction to the problem (Astronomy with high contrast imaging). This is followed by a presentation on Fourier and Statistical Optics: Shaped and Apodized apertures (Claude Aime), The effect of a coronagraph on the statistics of adaptive optics pinned speckles (Claude Aime and Rémi Soummer). A general introduction to the problem of atmospheric turbulence is made by Julien Borgnino. A presentation of the Concordia site with emphasis on its advantages for high contrast imaging is given by Eric Fossat. Several presentations relative to numerical simulations of Adaptive Optics and coronagraphy follow: Marcel Carbillet (AO for very high contrast imaging), Lyu Abe and Anthony Boccaletti share two presentations on Numerical simulations for coronagraphy. These presentations are followed by reports on experiments: Sandrine Thomas (SAM-the SOAR adaptive module), Pierre Baudoz (Cryogenic IR test of the 4QPM coronagraph), Anthony Boccaletti (Coronagraphy with JWST in the thermal IR). 
Pierre Bourget (Hg-Mask Coronagraph) ends this part with a coronagraph using a mercury drop as a Lyot mask. The next session focused on nulling interferometry, and we gather here the corresponding contributions. Two complementary reports on theory and experiment of Bracewell interferometry were made by Yves Rabbia (Theoretical aspects of Darwin) and Marc Ollivier (Experimental aspects of Darwin). Olivier Absil gave a report on the ground based nulling interferometer experiment (Effects of atmospheric turbulence on GENIE) and Valérie Weber on MAII (Nulling interferometric breadboard). A comparison between nulling and different classes of coronagraphs was made by Olivier Guyon (Coronagraphy vs. nulling). A few prospective papers have been regrouped at the end of the book: Interferometric remapped array nulling (Lyu Abe), Multiple-stage apodized Lyot coronagraph (Claude Aime and Rémi Soummer), Piston sensor using dispersed speckles (Virginie Borkowski), Principle of a coaxial achromatic interfero coronagraph (Jean Gay), Coronagraphic imaging on the VLTI with VIDA (Olivier Lardière), and Phase contrast apodisation (Frantz Martinache). The last section regroups science aspects and results on sky, using high contrast imaging: Low mass companions searches using high dynamic range imaging (Jean-Luc Beuzit). The last paper by Claire Moutou (Ground-based direct imaging of exoplanets) can be read as a prospective conclusion of the Journées. C. Aime and R. Soummer
Optimization of White-Matter-Nulled Magnetization Prepared Rapid Gradient Echo (MP-RAGE) Imaging
Saranathan, Manojkumar; Tourdias, Thomas; Bayram, Ersin; Ghanouni, Pejman; Rutt, Brian K.
2014-01-01
Purpose To optimize the white-matter-nulled (WMn) Magnetization Prepared Rapid Gradient Echo (MP-RAGE) sequence at 7T, with comparisons to 3T. Methods Optimal parameters for maximising SNR efficiency were derived. The effect of flip angle and TR on image blurring was modeled using simulations and validated in vivo. A novel 2D-centric radial fan beam (RFB) k-space segmentation scheme was used to shorten scan times and improve parallel imaging. Healthy subjects as well as patients with multiple sclerosis and tremor were scanned using the optimized protocols. Results Inversion repetition times (TS) of 4.5s and 6s were found to yield the highest SNR efficiency for WMn MP-RAGE at 3T and 7T, respectively. Blurring was more sensitive to flip angle in WMn than in CSF-nulled (CSFn) MP-RAGE and relatively insensitive to TR for both regimes. The 2D RFB scheme had 19% and 47% higher thalamic SNR and SNR efficiency than the 1D centric scheme for WMn MP-RAGE. Compared to 3T, SNR and SNR efficiency were higher for the 7T WMn regime by 56% and 41% respectively. MS lesions in the cortex and thalamus as well as thalamic subnuclei in tremor patients were clearly delineated using WMn MP-RAGE. Conclusion Optimization and new view ordering enabled MP-RAGE imaging with 0.8–1 mm isotropic spatial resolution in scan times of 5 minutes with whole brain coverage. PMID:24889754
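The tissue-nulling idea behind WMn MP-RAGE can be sketched from the basic inversion-recovery signal equation. This is a simplified model (ideal 180° inversion, readout perturbation ignored), not the paper's full optimization: with complete recovery the null falls at TI = T1·ln 2, and a finite inversion-repetition time TS shifts the null earlier because the magnetization has not fully recovered before the next inversion.

```python
import math

def null_TI(T1, TS=None):
    """Inversion time that nulls tissue with relaxation time T1 (seconds).
    Full recovery:  Mz(TI) = 1 - 2*exp(-TI/T1) = 0  =>  TI = T1*ln(2).
    Finite TS:      Mz(TI) = 1 - 2*exp(-TI/T1) + exp(-TS/T1) = 0
                    =>  TI = T1*ln(2 / (1 + exp(-TS/T1))).
    Simplified model; readout effects and imperfect inversion ignored."""
    if TS is None:
        return T1 * math.log(2.0)
    return T1 * math.log(2.0 / (1.0 + math.exp(-TS / T1)))

# Illustrative values only: assuming white-matter T1 of roughly 1.1 s
# and the paper's 3T inversion repetition time TS = 4.5 s.
ti_ideal = null_TI(1.1)
ti_finite_TS = null_TI(1.1, TS=4.5)
```

Note that the finite-TS null time is always shorter than the ideal T1·ln 2, consistent with incomplete longitudinal recovery between inversions.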
Phased array ghost elimination.
Kellman, Peter; McVeigh, Elliot R
2006-05-01
Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. 
In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. Copyright (c) 2006 John Wiley & Sons, Ltd.
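The constrained combining at the heart of PAGE can be sketched as a SENSE-style unmixing problem. This is a minimal linear-algebra sketch under assumed inputs (a coil sensitivity matrix with one column for the true pixel location and one for its ghost, plus a noise covariance), not the paper's full pipeline: the unmixing row keeps unit gain on the true component while placing a null on the ghost.

```python
import numpy as np

def page_weights(S, R):
    """Phased-array combining weights with a ghost null.
    S : (ncoils, 2) complex matrix; columns are coil sensitivities at the
        true pixel location and at its ghost location (assumed known).
    R : (ncoils, ncoils) noise covariance.
    Returns w (ncoils,) such that w^H s_true = 1 and w^H s_ghost = 0,
    with SNR optimized subject to those constraints."""
    Rinv = np.linalg.inv(R)
    # SENSE-type unmixing: U @ S = I, so row 0 passes the true image
    # with unit gain and cancels the ghost component.
    U = np.linalg.inv(S.conj().T @ Rinv @ S) @ S.conj().T @ Rinv
    return U[0].conj()

# Toy 4-coil example with synthetic complex sensitivities.
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
R = np.eye(4)
w = page_weights(S, R)
```

In practice this is evaluated per pixel across the FOV, with the ghost location determined by the known N/2 shift.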
Stavisky, Sergey D; Kao, Jonathan C; Ryu, Stephen I; Shenoy, Krishna V
2017-07-05
Neural circuits must transform new inputs into outputs without prematurely affecting downstream circuits while still maintaining other ongoing communication with these targets. We investigated how this isolation is achieved in the motor cortex when macaques received visual feedback signaling a movement perturbation. To overcome limitations in estimating the mapping from cortex to arm movements, we also conducted brain-machine interface (BMI) experiments where we could definitively identify neural firing patterns as output-null or output-potent. This revealed that perturbation-evoked responses were initially restricted to output-null patterns that cancelled out at the neural population code readout and only later entered output-potent neural dimensions. This mechanism was facilitated by the circuit's large null space and its ability to strongly modulate output-potent dimensions when generating corrective movements. These results show that the nervous system can temporarily isolate portions of a circuit's activity from its downstream targets by restricting this activity to the circuit's output-null neural dimensions. Copyright © 2017 Elsevier Inc. All rights reserved.
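The output-null/output-potent decomposition described above can be sketched for a linear readout. This is an illustrative construction (toy readout matrix, hypothetical function name), not the authors' analysis code: the output-potent subspace is the row space of the readout matrix, its orthogonal complement is the output-null space, and activity confined to the null space leaves the readout unchanged.

```python
import numpy as np

def split_potent_null(M, activity):
    """Split a neural activity vector into output-potent and output-null
    parts relative to a linear readout y = M @ x.
    Output-potent = projection onto the row space of M; output-null =
    the remainder, which is invisible to the readout."""
    # Orthonormal basis for the row space via SVD
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    basis = Vt[s > 1e-10]                 # (rank, n_neurons)
    potent = basis.T @ (basis @ activity)
    null = activity - potent
    return potent, null

M = np.array([[1.0, 1.0, 0.0]])           # toy 1-output, 3-neuron readout
x = np.array([1.0, -1.0, 2.0])            # chosen so that M @ x = 0
potent, null = split_potent_null(M, x)
```

Here the perturbation-evoked pattern x drives no output at all, mirroring how early perturbation responses were restricted to output-null dimensions.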
Wavefront sensing in space: flight demonstration II of the PICTURE sounding rocket payload
NASA Astrophysics Data System (ADS)
Douglas, Ewan S.; Mendillo, Christopher B.; Cook, Timothy A.; Cahoy, Kerri L.; Chakrabarti, Supriya
2018-01-01
A NASA sounding rocket for high-contrast imaging with a visible nulling coronagraph, the Planet Imaging Concept Testbed Using a Rocket Experiment (PICTURE) payload, has made two suborbital attempts to observe the warm dust disk inferred around Epsilon Eridani. The first flight in 2011 demonstrated a 5 mas fine pointing system in space. The reduced flight data from the second launch, on November 25, 2015, presented herein, demonstrate active sensing of wavefront phase in space. Despite several anomalies in flight, post-facto reduction of the phase-stepping interferometer data provides insight into the wavefront sensing precision and the system stability for a portion of the pupil. These measurements show the actuation of a 32 × 32-actuator microelectromechanical system deformable mirror. The wavefront sensor reached a median precision of 1.4 nm per pixel, with 95% of samples between 0.8 and 12.0 nm per pixel. The median system stability, including telescope and coronagraph wavefront errors other than tip, tilt, and piston, was 3.6 nm per pixel, with 95% of samples between 1.2 and 23.7 nm per pixel.
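Phase-stepping wavefront sensing of the kind used here can be sketched with the standard four-step algorithm. This is a generic textbook reconstruction under an assumed fringe model, not the PICTURE reduction pipeline: four interferograms with 90° phase steps let the phase be recovered per pixel with an arctangent.

```python
import numpy as np

def four_step_phase(I0, I90, I180, I270):
    """Recover wavefront phase from four interferograms taken with
    0, 90, 180, 270 degree phase steps, assuming the fringe model
        I_k = A + B*cos(phi + step_k).
    Then I270 - I90 = 2B*sin(phi) and I0 - I180 = 2B*cos(phi), so
    phi = atan2(I270 - I90, I0 - I180)."""
    return np.arctan2(I270 - I90, I0 - I180)

# Synthetic pupil slice with a known phase map (radians)
phi = np.linspace(-1.2, 1.2, 64)
A, B = 1.0, 0.5
frames = [A + B * np.cos(phi + k) for k in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
recovered = four_step_phase(*frames)
```

The recovered phase is then what drives the deformable-mirror actuation in a closed wavefront-control loop.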
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Fang, Jing; Yuan, Jianping
2018-03-01
The existence of path-dependent dynamic singularities limits the volume of available workspace of a free-floating space robot and induces enormous joint velocities when such singularities are met. In order to overcome this limitation, this paper presents an optimal joint trajectory planning method using the forward kinematics equations of the free-floating space robot, while the joint motion laws are delineated by applying the concept of the reaction null space. Bézier curves, in conjunction with the null-space column vectors, are applied to describe the joint trajectories. Considering the forward kinematics equations of the free-floating space robot, the trajectory planning issue is consequently transferred to an optimization issue, where the control points used to construct the Bézier curve are the design variables. A constrained differential evolution (DE) scheme with a premature handling strategy is implemented to find the optimal solution of the design variables while specific objectives and imposed constraints are satisfied. Differing from traditional methods, we synthesize the null space and a specialized curve to provide a novel viewpoint for trajectory planning of free-floating space robots. Simulation results are presented for trajectory planning of a 7 degree-of-freedom (DOF) kinematically redundant manipulator mounted on a free-floating spacecraft and demonstrate the feasibility and effectiveness of the proposed method.
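The Bézier parameterization used for the joint trajectories can be sketched with the de Casteljau recurrence. This is a generic evaluation routine under toy values, not the paper's planner: the curve endpoints pin the start and end joint angles, while the inner control points are the free design variables the optimizer (DE, in the paper) would tune.

```python
import numpy as np

def bezier(control_points, t):
    """Evaluate a degree-n Bezier curve at parameter t in [0, 1] via the
    de Casteljau recurrence; control_points is an (n+1, dof) array.
    Repeated linear interpolation between neighbors collapses the
    control polygon to the curve point."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Toy 2-DOF joint trajectory: endpoints fixed, two free control points
cp = np.array([[0.0, 0.0], [0.2, 1.0], [0.8, -0.5], [1.0, 0.5]])
start = bezier(cp, 0.0)   # equals cp[0]
end = bezier(cp, 1.0)     # equals cp[-1]
```

Because the curve interpolates its endpoints, boundary joint angles are enforced by construction and only the interior control points enter the optimization.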
Gordon, J.A.; Freedman, B.R.; Zuskov, A.; Iozzo, R.V.; Birk, D.E.; Soslowsky, L.J.
2015-01-01
Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure-function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs, either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn−/−) and biglycan-null (Bgn−/−) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent; and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. PMID:25888014
Technology Advancement of the Visible Nulling Coronagraph
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Petrone, Peter; Thompson, Patrick; Bolcar, Matt; Madison, Timothy; Woodruff, Robert; Noecker, Charley; Kendrick, Steve
2010-01-01
The critical high contrast imaging technology for the Extrasolar Planetary Imaging Coronagraph (EPIC) mission concept is the visible nulling coronagraph (VNC). EPIC would be capable of imaging jovian planets, dust/debris disks, and potentially super-Earths, and would contribute to answering how bright the debris disks are for candidate stars. The contrast requirement for EPIC is 10^9 contrast at 125 milliarcseconds inner working angle. To advance the VNC technology NASA/Goddard Space Flight Center, in collaboration with Lockheed-Martin, previously developed a vacuum VNC testbed, and achieved narrowband and broadband suppression of the core of the Airy disk. Recently our group was awarded a NASA Technology Development for Exoplanet Missions contract to achieve two milestones: (i) 10^8 contrast in narrowband light, and (ii) 10^9 contrast in broader-band light; one milestone per year, and both at 2 λ/D inner working angle. These will be achieved with our 2nd generation testbed known as the visible nulling testbed (VNT). It contains a MEMS based hex-packed segmented deformable mirror known as the multiple mirror array (MMA) and a coherent fiber bundle, i.e. a spatial filter array (SFA). The MMA is in one interferometric arm and works to set the wavefront differences between the arms to zero. Each of the MMA segments is optically mapped to a single mode fiber of the SFA, and the SFA passively cleans the sub-aperture wavefront error, leaving only piston, tip and tilt error to be controlled. The piston degree of freedom on each segment is used to correct the wavefront errors, while the tip/tilt is used to simultaneously correct the amplitude errors. Thus the VNT controls both amplitude and wavefront errors with a single MMA in closed-loop in a vacuum tank at approximately 20 Hz. Herein we will discuss our ongoing progress with the VNT.
Li, Xia; Guo, Meifang; Su, Yongfu
2016-01-01
In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. The result is applied to the split common null point problem of maximal monotone operators in Banach spaces, and strong convergence theorems for finding a solution of the split common null point problem are derived. This iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.
Internet Pornography Use and Sexual Body Image in a Dutch Sample
Cranney, Stephen
2016-01-01
Objectives A commonly attributed cause of sexual body image dissatisfaction is pornography use. This relationship has received little verification. Methods The relationship between sexual body image dissatisfaction and Internet pornography use was tested using a large-N sample of Dutch respondents. Results/Conclusion Penis size dissatisfaction is associated with pornography use. The relationship between pornography use and breast size dissatisfaction is null. These results support prior speculation and self-reports about the relationship between pornography use and sexual body image among men. These results also support a prior null finding of the relationship between breast size satisfaction for women and pornography use. PMID:26918066
Balloon Exoplanet Nulling Interferometer (BENI)
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Woodruff, Robert A.; Vasudevan, Gopal; Ford, Holland; Petro, Larry; Herman, Jay; Rinehart, Stephen; Carpenter, Kenneth; Marzouk, Joe
2009-01-01
We evaluate the feasibility of using a balloon-borne nulling interferometer to detect and characterize exosolar planets and debris disks. The existing instrument consists of a 3-telescope Fizeau imaging interferometer with 3 fast steering mirrors and 3 delay lines operating at 800 Hz for closed-loop control of wavefront errors and fine pointing. A compact visible nulling interferometer is under development which, when coupled to the imaging interferometer, would in principle allow deep suppression of starlight. We have conducted atmospheric simulations of the environment above 100,000 feet and believe balloons are a feasible path forward towards detection and characterization of a limited set of exoplanets and their debris disks. Herein we will discuss the BENI instrument, the balloon environment and the feasibility of such a mission.
Ghost artifact cancellation using phased array processing.
Kellman, P; McVeigh, E R
2001-08-01
In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples.
Beatty, William S.; Webb, Elisabeth B.; Kesler, Dylan C.; Naylor, Luke W.; Raedeke, Andrew H.; Humburg, Dale D.; Coluccy, John M.; Soulliere, G.
2015-01-01
Bird conservation Joint Ventures are collaborative partnerships between public agencies and private organizations that facilitate habitat management to support waterfowl and other bird populations. A subset of Joint Ventures has developed energetic carrying capacity models (ECCs) to translate regional waterfowl population goals into habitat objectives during the non-breeding period. Energetic carrying capacity models consider food biomass, metabolism, and available habitat to estimate waterfowl carrying capacity within an area. To evaluate Joint Venture ECCs in the context of waterfowl space use, we monitored 33 female mallards (Anas platyrhynchos) and 55 female American black ducks (A. rubripes) using global positioning system satellite telemetry in the central and eastern United States. To quantify space use, we measured first-passage time (FPT: time required for an individual to transit across a circle of a given radius) at biologically relevant spatial scales for mallards (3.46 km) and American black ducks (2.30 km) during the non-breeding period, which included autumn migration, winter, and spring migration. We developed a series of models to predict FPT using Joint Venture ECCs and compared them to a biological null model that quantified habitat composition and a statistical null model, which included intercept and random terms. Energetic carrying capacity models predicted mallard space use more efficiently during autumn and spring migrations, but the statistical null was the top model for winter. For American black ducks, ECCs did not improve predictions of space use; the biological null was top ranked for winter and the statistical null was top ranked for spring migration. Thus, ECCs provided limited insight into predicting waterfowl space use during the non-breeding season. 
Refined estimates of spatial and temporal variation in food abundance, habitat conditions, and anthropogenic disturbance will likely improve ECCs and benefit conservation planners in linking non-breeding waterfowl habitat objectives with distribution and population parameters. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
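The first-passage time (FPT) metric used above can be sketched for a discrete GPS track. This is a simplified illustration under assumed inputs (a track of positions with timestamps, straight-line distances, and FPT taken as the time span between the first crossings of the circle forward and backward along the path); the authors' exact implementation may differ.

```python
import numpy as np

def first_passage_time(track, times, i, radius):
    """First-passage time at track point i: the time spanned between the
    last backward fix and the first forward fix that fall outside a
    circle of the given radius centered on track[i]."""
    d = np.linalg.norm(track - track[i], axis=1)
    out = np.nonzero(d > radius)[0]
    fwd = out[out > i]
    back = out[out < i]
    t_fwd = times[fwd[0]] if len(fwd) else times[-1]
    t_back = times[back[-1]] if len(back) else times[0]
    return t_fwd - t_back

# Toy track: straight line, 1 km per fix, 1 h per fix; radius 2.3 km
# mirrors the black duck scale from the abstract.
track = np.column_stack([np.arange(11.0), np.zeros(11)])
times = np.arange(11.0)
fpt = first_passage_time(track, times, 5, 2.3)
```

Large FPT values flag restricted, intensive use of an area (e.g. wintering sites); small values flag directed transit such as migration legs.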
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
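The null-space Monte Carlo split described above can be sketched with a linear-algebra toy model: the SVD of a linearized model Jacobian separates solution-space directions, which are held at their calibrated values, from null-space directions, which are sampled. The Jacobian, dimensions, and ensemble size below are invented for illustration and are not taken from the WIPP model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized model: J maps 10 parameters to 4 observations,
# so the null space of J has dimension 6.
J = rng.standard_normal((4, 10))          # Jacobian at the calibrated point
p_cal = rng.standard_normal(10)           # single calibrated parameter set

# SVD splits parameter space into a solution space (spanned by V1)
# and a null space (spanned by V0): J @ V0 ≈ 0.
U, s, Vt = np.linalg.svd(J)
r = np.sum(s > 1e-10 * s[0])              # numerical rank = solution-space dim
V1, V0 = Vt[:r].T, Vt[r:].T

# NSMC: keep the solution-space component of the calibration fixed and
# Monte Carlo sample the null-space component.
n_real = 100
xi = rng.standard_normal((V0.shape[1], n_real))
ensemble = (V1 @ (V1.T @ p_cal))[:, None] + V0 @ xi

# Every realization reproduces the calibrated fit to first order:
resid = J @ ensemble - (J @ p_cal)[:, None]
print(np.max(np.abs(resid)))              # tiny (machine precision)
```

Because the sampled directions lie in the null space of the Jacobian, each realization leaves the (linearized) fit to the data untouched, which is exactly why such ensembles can be biased toward the single calibrated field.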
Statistical modeling, detection, and segmentation of stains in digitized fabric images
NASA Astrophysics Data System (ADS)
Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.
2007-02-01
This paper describes a novel, automated computer vision system for the objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian mixture model (GMM). Stain detection is posed as a decision-theoretic problem in which the null hypothesis corresponds to the absence of a stain. The null hypothesis and the alternative hypothesis translate mathematically into a first-order GMM and a second-order GMM, respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the validity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup, and grape juice. The decision-theoretic part of the algorithm produced a correct detection (true positive) rate of 93% and a false alarm rate of 5% on this set of images.
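The detection step above can be illustrated with a small sketch: fit a one-component and a two-component Gaussian mixture to synthetic pixel intensities and keep the model with the shorter description length. Here scikit-learn's BIC score stands in for the paper's MDL statistic, and all data and thresholds are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic "pixel" intensities: a background cluster plus a stained cluster.
background = rng.normal(0.70, 0.05, size=(2000, 1))
stain      = rng.normal(0.40, 0.05, size=(300, 1))
pixels = np.vstack([background, stain])

# Null hypothesis: one Gaussian component (no stain).
# Alternative: two components (background + stain).
g1 = GaussianMixture(n_components=1, random_state=0).fit(pixels)
g2 = GaussianMixture(n_components=2, random_state=0).fit(pixels)

# BIC plays the role of the MDL test statistic here: the model with the
# shorter description length (lower score) wins.
stained = g2.bic(pixels) < g1.bic(pixels)
print("stain detected:", stained)

# Segmentation: threshold the EM posterior probability map for the
# component with the lower mean (the stain).
stain_comp = np.argmin(g2.means_.ravel())
prob_map = g2.predict_proba(pixels)[:, stain_comp]
print("stained pixels:", int(np.sum(prob_map > 0.5)))
```

The same two-step pattern (order selection, then posterior-map thresholding) is what the abstract describes, just on color-plane pixel vectors rather than scalar intensities.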
Twistor Geometry of Null Foliations in Complex Euclidean Space
NASA Astrophysics Data System (ADS)
Taghavi-Chabert, Arman
2017-01-01
We give a detailed account of the geometric correspondence between a smooth complex projective quadric hypersurface Q^n of dimension n ≥ 3, and its twistor space PT, defined to be the space of all linear subspaces of maximal dimension of Q^n. Viewing complex Euclidean space CE^n as a dense open subset of Q^n, we show how local foliations tangent to certain integrable holomorphic totally null distributions of maximal rank on CE^n can be constructed in terms of complex submanifolds of PT. The construction is illustrated by means of two examples, one involving conformal Killing spinors, the other, conformal Killing-Yano 2-forms. We focus on the odd-dimensional case, and we treat the even-dimensional case only tangentially for comparison.
A bio-image sensor for simultaneous detection of multi-neurotransmitters.
Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki
2018-03-01
We report here a new bio-image sensor for simultaneous detection of the spatial and temporal distribution of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 × 128 pixel array with read-out circuitry. Apyrase and acetylcholinesterase (AChE), as selective elements, are used to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor, and their blocking capability is demonstrated. The results are used to design the spacing among enzyme-immobilized pixels and the null H+ sensor so as to minimize undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+-diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. The proposed bio-image sensor opens the possibility of customizable monitoring of the activities of various neurochemicals by using different kinds of proton-consuming or proton-generating enzymes.
Final results of the PERSEE experiment
NASA Astrophysics Data System (ADS)
Le Duigou, J. M.; Lozi, J.; Cassaing, F.; Houairi, K.; Sorrente, B.; Montri, J.; Jacquinod, S.; Reess, J.-M.; Pham, L.; Lhome, E.; Buey, T.; Hénault, F.; Marcotto, A.; Girard, P.; Mauclert, N.; Barillot, M.; Coudé du Foresto, V.; Ollivier, M.
2012-07-01
The PERSEE breadboard, developed since 2005 by a consortium including CNES, IAS, LESIA, OCA, ONERA and TAS, is a nulling demonstrator that couples an infrared nulling interferometer with a formation-flying simulator able to introduce realistic disturbances into the set-up. The general idea is to prove that an adequate optical design can considerably relax the constraints applying at the spacecraft level of a future interferometric space mission like Darwin/TPF or one of its precursors. The breadboard is now fully operational, and the measurement sequences are managed from a remote control room using automatic procedures. A set of excellent results was obtained in 2011. The measured polychromatic nulling depth with unpolarized light is 8.8×10-6, stabilized at 9×10-8, in the 1.65-2.45 μm spectral band (37% bandwidth) during 100 s. This result was extended to a 7 h duration thanks to an automatic calibration process. The various contributors are identified, and the nulling budget is now well mastered. We also proved that harmonic disturbances in the 1-100 Hz range of up to several tens of nm RMS can be corrected very efficiently by linear quadratic Gaussian (LQG) control if sufficient flux is available. These results are important contributions to the feasibility of a future space-based nulling interferometer.
Final results of the PERSEE experiment
NASA Astrophysics Data System (ADS)
Le Duigou, J.-M.; Lozi, J.; Cassaing, F.; Houairi, K.; Sorrente, B.; Montri, J.; Jacquinod, S.; Réess, J.-M.; Pham, L.; Lhomé, E.; Buey, T.; Hénault, F.; Marcotto, A.; Girard, P.; Mauclert, N.; Barillot, M.; Coudé du Foresto, V.; Ollivier, M.
2017-11-01
The PERSEE breadboard, developed since 2006 by a consortium including CNES, IAS, LESIA, OCA, ONERA and TAS, is a nulling demonstrator that couples an infrared nulling interferometer with a formation-flying simulator able to introduce realistic disturbances into the set-up. The general idea is to prove that an adequate optical design can considerably relax the constraints applied at the spacecraft level of a future interferometric space mission like Darwin/TPF or one of its precursors. The breadboard is now fully operational, and the measurement sequences are managed from a remote control room using automatic procedures. A set of excellent results was obtained in 2011: the measured polychromatic nulling depth with unpolarized light is 8.8×10-6, stabilized at 9×10-8, in the 1.65-2.45 μm spectral band (37% bandwidth) during 100 s. This result was extended to a 7 h duration thanks to an automatic calibration process. The various contributors are identified, and the nulling budget is now well mastered. We also proved that harmonic disturbances in the 1-100 Hz range of up to several tens of nm RMS can be corrected very efficiently by linear quadratic Gaussian (LQG) control if sufficient flux is available. These results are important contributions to the feasibility of a future space-based nulling interferometer.
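For orientation, nulling depths of this order can be related to residual instrument errors via the standard two-beam error budget, N ≈ (δφ² + δa²)/4 for small phase error δφ and fractional amplitude mismatch δa. The numbers below are illustrative assumptions, not PERSEE's measured budget.

```python
import numpy as np

# Textbook two-beam null-depth estimate (assumed values, not PERSEE's):
lam = 2.0e-6                 # mid-band wavelength, m (1.65-2.45 um band)
opd_rms = 0.30e-9            # assumed residual OPD error, m
da = 1.0e-3                  # assumed fractional amplitude mismatch

dphi = 2 * np.pi * opd_rms / lam      # phase error in radians
N = (dphi**2 + da**2) / 4             # resulting null depth
print(f"estimated null depth: {N:.2e}")
```

Sub-nanometer OPD control and per-mil amplitude matching land in the 10^-7 to 10^-6 regime, which is why the stabilization figures quoted above are so demanding.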
Divertor with a third-order null of the poloidal field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryutov, D. D.; Umansky, M. V.
2013-09-15
A concept and preliminary feasibility analysis of a divertor with a third-order poloidal field null is presented. The third-order null is the point where not only the field itself but also its first and second spatial derivatives are zero. In this case, the separatrix near the null point has eight branches, and the number of strike points increases from two (as in the standard divertor) to six. It is shown that this magnetic configuration can be created by a proper adjustment of the currents in a set of three divertor coils. If the currents are somewhat different from the required values, the configuration becomes that of three closely spaced first-order nulls. An analytic approach, suitable for quick orientation in the problem, is used. Potential advantages and disadvantages of this configuration are briefly discussed.
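The geometry of a third-order null can be checked with a small symbolic example. In a slab model with flux ψ = Re((x + iy)⁴), a hypothetical field unrelated to the paper's divertor-coil configuration, the poloidal field (Bx, By) = (∂ψ/∂y, -∂ψ/∂x) and its first and second derivatives vanish at the origin, and the separatrix ψ = 0 has eight branches:

```python
import numpy as np
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

# Slab model of a third-order poloidal-field null (illustrative only):
# flux psi = Re(z**4); field (Bx, By) = (dpsi/dy, -dpsi/dx).
psi = sp.re(sp.expand(z**4))          # x**4 - 6*x**2*y**2 + y**4
Bx, By = sp.diff(psi, y), -sp.diff(psi, x)

# The field and its first and second spatial derivatives all vanish
# at the null point -- the defining property of a third-order null:
for B in (Bx, By):
    for i in range(3):
        for j in range(3 - i):
            assert sp.diff(B, x, i, y, j).subs({x: 0, y: 0}) == 0

# The separatrix psi = 0 crosses a small circle around the null eight
# times, i.e. it has eight branches:
f = sp.lambdify((x, y), psi, 'numpy')
th = np.linspace(0, 2 * np.pi, 1000, endpoint=False) + 1e-3
vals = f(np.cos(th), np.sin(th))
crossings = int(np.sum(np.sign(vals) != np.sign(np.roll(vals, 1))))
print(crossings, "separatrix branches")
```

With two branches enclosing the plasma, the remaining six branches intersect the divertor plates, matching the six strike points quoted in the abstract.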
NASA Astrophysics Data System (ADS)
Herbonnet, Ricardo; Buddendiek, Axel; Kuijken, Konrad
2017-03-01
Context. Current optical imaging surveys for cosmology cover large areas of sky. Exploiting the statistical power of these surveys for weak lensing measurements requires shape measurement methods with subpercent systematic errors. Aims: We introduce a new weak lensing shear measurement algorithm, shear nulling after PSF Gaussianisation (SNAPG), designed to avoid the noise biases that affect most other methods. Methods: SNAPG operates on images that have been convolved with a kernel that renders the point spread function (PSF) a circular Gaussian, and uses weighted second moments of the sources. The response of such second moments to a shear of the pre-seeing galaxy image can be predicted analytically, allowing us to construct a shear nulling scheme that finds the shear parameters for which the observed galaxies are consistent with an unsheared, isotropically oriented population of sources. The inverse of this nulling shear is then an estimate of the gravitational lensing shear. Results: We identify the uncertainty of the estimated centre of each galaxy as the source of noise bias, and incorporate an approximate estimate of the centroid covariance into the scheme. We test the method on extensive suites of simulated galaxies of increasing complexity, and find that it is capable of shear measurements with multiplicative bias below 0.5 percent.
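The weighted second moments at the core of such shear estimators can be sketched as follows. This is a generic moments-based ellipticity measurement on synthetic Gaussian sources, not SNAPG itself: no PSF Gaussianisation, centroid-covariance correction, or nulling step is included.

```python
import numpy as np

def weighted_moments(img, sigma_w):
    """Gaussian-weighted quadrupole moments about the image centre."""
    n = img.shape[0]
    yy, xx = np.mgrid[:n, :n] - (n - 1) / 2.0
    w = np.exp(-(xx**2 + yy**2) / (2 * sigma_w**2))
    f = img * w
    norm = f.sum()
    qxx = (f * xx * xx).sum() / norm
    qyy = (f * yy * yy).sum() / norm
    qxy = (f * xx * yy).sum() / norm
    return qxx, qyy, qxy

def gaussian_galaxy(n, sx, sy):
    """Elliptical Gaussian source with axis widths (sx, sy)."""
    yy, xx = np.mgrid[:n, :n] - (n - 1) / 2.0
    return np.exp(-(xx**2 / sx**2 + yy**2 / sy**2) / 2)

# A round source has zero ellipticity; an elongated one does not.
round_img = gaussian_galaxy(65, 4.0, 4.0)
elong_img = gaussian_galaxy(65, 5.0, 3.5)
for img in (round_img, elong_img):
    qxx, qyy, qxy = weighted_moments(img, sigma_w=6.0)
    e1 = (qxx - qyy) / (qxx + qyy)
    print(f"e1 = {e1:+.3f}")
```

A nulling scheme like the one in the abstract applies trial shears until ensemble statistics built from such moments are consistent with an isotropic population; the inverse of the nulling shear is then the lensing estimate.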
Lin, Hung-Yu; Flask, Chris A; Dale, Brian M; Duerk, Jeffrey L
2007-06-01
To investigate and evaluate a new rapid dark-blood vessel-wall imaging method using random bipolar gradients with a radial steady-state free precession (SSFP) acquisition in carotid applications. The carotid artery bifurcations of four asymptomatic volunteers (28-37 years old, mean age = 31 years) were included in this study. Dark-blood contrast was achieved through the use of random bipolar gradients applied prior to the signal acquisition of each radial projection in a balanced SSFP acquisition. The resulting phase variation for moving spins established significant destructive interference in the low-frequency region of k-space. This phase variation resulted in a net nulling of the signal from flowing spins, while the bipolar gradients had a minimal effect on the static spins. The net effect was that the regular SSFP signal amplitude (SA) in stationary tissues was preserved while dark-blood contrast was achieved for moving spins. In this implementation, application of the random bipolar gradient pulses along all three spatial directions nulled the signal from both in-plane and through-plane flow in phantom and in vivo studies. In vivo imaging trials confirmed that dark-blood contrast can be achieved with the radial random bipolar SSFP method, thereby substantially reversing the vessel-to-lumen contrast-to-noise ratio (CNR) of a conventional rectilinear SSFP "bright-blood" acquisition from bright blood to dark blood with only a modest increase in TR (approximately 4 msec) to accommodate the additional bipolar gradients. Overall, this sequence offers a simple and effective dark-blood contrast mechanism for high-SNR SSFP acquisitions in vessel wall imaging within a short acquisition time.
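The flow-nulling mechanism rests on the first gradient moment of a bipolar lobe pair: static spins accrue zero net phase while moving spins do not. Below is a textbook sketch of that phase calculation; the gradient amplitude, lobe duration, and velocity are arbitrary assumptions, not the paper's sequence parameters.

```python
import numpy as np

# Phase accrued under a bipolar gradient: phi = gamma * integral G(t) x(t) dt.
gamma = 2 * np.pi * 42.58e6        # proton gyromagnetic ratio, rad/s/T
G, tau = 10e-3, 1e-3               # assumed lobe amplitude (T/m) and duration (s)

dt = 1e-7
t = (np.arange(20000) + 0.5) * dt  # midpoint samples over [0, 2*tau]
Gt = np.where(t < tau, G, -G)      # +G lobe then -G lobe: zeroth moment nulled

def bipolar_phase(x0, v):
    """Accrued phase of a spin at x(t) = x0 + v*t (midpoint rule)."""
    return np.sum(gamma * Gt * (x0 + v * t)) * dt

print(f"static spin (v = 0):   {bipolar_phase(0.1, 0.0):+.4f} rad")
print(f"moving spin (v = 0.5): {bipolar_phase(0.1, 0.5):+.4f} rad")  # -gamma*G*v*tau**2
```

Static spins end with zero net phase, so their steady-state signal is preserved, while moving spins pick up a velocity-proportional phase; randomizing the gradient polarity from projection to projection scrambles these phases, producing the destructive interference in central k-space described above.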
Surprising structures hiding in Penrose’s future null infinity
NASA Astrophysics Data System (ADS)
Newman, Ezra T.
2017-07-01
Since the late 1950s, almost all discussions of asymptotically flat (Einstein-Maxwell) space-times have taken place in the context of Penrose's null infinity, I+. In addition, almost all calculations have used the Bondi coordinate and tetrad systems. Beginning with a known asymptotically flat solution to the Einstein-Maxwell equations, we show first that there are other natural coordinate systems near I+ (analogous to light-cones in flat space) that are based on (asymptotically) shear-free null geodesic congruences. Using these new coordinates and their associated tetrad, we define the complex dipole moment (the mass dipole plus i times the angular momentum) from the l = 1 harmonic coefficient of a component of the asymptotic Weyl tensor. Second, from this definition, from the Bianchi identities, and from the Bondi-Sachs mass and linear momentum, we show that there exists a large number of results (identifications and dynamics) identical to those of classical mechanics and electrodynamics. They include, among many others, P = Mv + ..., L = r × P, spin, Newton's second law with the rocket force term (Ṁv) and radiation reaction, angular momentum conservation, and others. All these relations take place in the rather mysterious H-space rather than in space-time. This leads to the enigmas: 'why do these well-known relations of classical mechanics take place in H-space?' and 'what is the physical meaning of H-space?'
Landua, John D.; Bu, Wen; Wei, Wei; Li, Fuhai; Wong, Stephen T.C.; Dickinson, Mary E.; Rosen, Jeffrey M.; Lewis, Michael T.
2014-01-01
Cancer stem cells (CSCs, or tumor-initiating cells) may be responsible for tumor formation in many types of cancer, including breast cancer. Using high-resolution imaging techniques, we analyzed the relationship between a Wnt-responsive, CSC-enriched population and the tumor vasculature using p53-null mouse mammary tumors transduced with a lentiviral Wnt signaling reporter. Consistent with their localization in the normal mammary gland, Wnt-responsive cells in tumors were enriched in the basal/myoepithelial population and generally located in close proximity to blood vessels. The Wnt-responsive CSCs did not colocalize with the hypoxia-inducible factor 1α-positive cells in these p53-null basal-like tumors. Average vessel diameter and vessel tortuosity were increased in p53-null mouse tumors, as well as in a human tumor xenograft as compared with the normal mammary gland. The combined strategy of monitoring the fluorescently labeled CSCs and vasculature using high-resolution imaging techniques provides a unique opportunity to study the CSC and its surrounding vasculature. PMID:24797826
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin
2017-12-01
Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, extended Yale B, and CMU PIE face databases show that the proposed SASC always yields the best face recognition accuracy; that is, SASC is more robust to face recognition under illumination variations than other shadow compensation approaches.
Modular Hamiltonians for deformed half-spaces and the averaged null energy condition
Faulkner, Thomas; Leigh, Robert G.; Parrikar, Onkar; ...
2016-09-08
We study modular Hamiltonians corresponding to the vacuum state for deformed half-spaces in relativistic quantum field theories on R^{1,d-1}. We show that in addition to the usual boost generator, there is a contribution to the modular Hamiltonian at first order in the shape deformation, proportional to the integral of the null components of the stress tensor along the Rindler horizon. We use this fact along with monotonicity of relative entropy to prove the averaged null energy condition in Minkowski space-time. This subsequently gives a new proof of the Hofman-Maldacena bounds on the parameters appearing in CFT three-point functions. Our main technical advance involves adapting newly developed perturbative methods for calculating entanglement entropy to the problem at hand. Our methods were recently used to prove certain results on the shape dependence of entanglement in CFTs and here we generalize these results to excited states and real time dynamics. Finally, we discuss the AdS/CFT counterpart of this result, making connection with the recently proposed gravitational dual for modular Hamiltonians in holographic theories.
Modular Hamiltonians for deformed half-spaces and the averaged null energy condition
NASA Astrophysics Data System (ADS)
Faulkner, Thomas; Leigh, Robert G.; Parrikar, Onkar; Wang, Huajia
2016-09-01
We study modular Hamiltonians corresponding to the vacuum state for deformed half-spaces in relativistic quantum field theories on R^{1,d-1}. We show that in addition to the usual boost generator, there is a contribution to the modular Hamiltonian at first order in the shape deformation, proportional to the integral of the null components of the stress tensor along the Rindler horizon. We use this fact along with monotonicity of relative entropy to prove the averaged null energy condition in Minkowski space-time. This subsequently gives a new proof of the Hofman-Maldacena bounds on the parameters appearing in CFT three-point functions. Our main technical advance involves adapting newly developed perturbative methods for calculating entanglement entropy to the problem at hand. These methods were recently used to prove certain results on the shape dependence of entanglement in CFTs and here we generalize these results to excited states and real time dynamics. We also discuss the AdS/CFT counterpart of this result, making connection with the recently proposed gravitational dual for modular Hamiltonians in holographic theories.
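Schematically, the two central statements of this abstract can be written as follows; the normalization and sign conventions are generic textbook choices and are not reproduced from the paper itself:

```latex
% Modular Hamiltonian of a half-space whose entangling cut is deformed to
% u = \zeta(y), to first order in the deformation (schematic):
K_{\zeta} \;=\; 2\pi \int \mathrm{d}^{d-2}y \int_{0}^{\infty} \mathrm{d}u\,
  \bigl[\, u - \zeta(y) \,\bigr]\, T_{uu}(u, y) \;+\; O(\zeta^{2})
% Averaged null energy condition along a complete null geodesic:
\int_{-\infty}^{+\infty} \mathrm{d}u \, \langle T_{uu} \rangle \;\ge\; 0
```

The ζ-independent piece is the usual boost generator; the first-order piece is the integral of the null stress-tensor component along the Rindler horizon referred to in the abstract.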
Complexity-reduced implementations of complete and null-space-based linear discriminant analysis.
Lu, Gui-Fu; Zheng, Wenming
2013-10-01
Dimensionality reduction has become an important data preprocessing step in many applications. Linear discriminant analysis (LDA) is one of the best-known dimensionality reduction methods. However, classical LDA cannot be used directly in the small sample size (SSS) problem, where the within-class scatter matrix is singular. Many generalized LDA methods have been reported to address the SSS problem. Among these methods, complete linear discriminant analysis (CLDA) and null-space-based LDA (NLDA) provide good performance, but the existing implementations of CLDA are computationally expensive. In this paper, we propose a new and fast implementation of CLDA that is theoretically equivalent to, but more efficient than, the existing implementations. Since CLDA is an extension of NLDA, our implementation of CLDA also provides a fast implementation of NLDA. Experiments on several real-world data sets demonstrate the effectiveness of the proposed CLDA and NLDA algorithms.
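A minimal sketch of the null-space LDA idea referenced above, assuming the standard formulation: project onto the null space of the within-class scatter, then maximize the between-class scatter there. The matrix sizes and data are invented, and none of the efficiency tricks of the paper are reproduced.

```python
import numpy as np

def nlda(X, y, tol=1e-10):
    """Null-space LDA sketch: directions in the null space of the
    within-class scatter Sw that maximize the between-class scatter Sb."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    # Null space of Sw: directions with no within-class variation.
    evals, evecs = np.linalg.eigh(Sw)
    Z = evecs[:, evals < tol * evals.max()]
    # Maximize between-class scatter inside that null space.
    bvals, bvecs = np.linalg.eigh(Z.T @ Sb @ Z)
    return Z @ bvecs[:, ::-1][:, :len(classes) - 1]

# Small-sample-size setting: 6 samples in 10 dimensions, 3 classes,
# so Sw is singular and classical LDA is not directly applicable.
rng = np.random.default_rng(2)
X = rng.standard_normal((6, 10))
y = np.array([0, 0, 1, 1, 2, 2])
W = nlda(X, y)
proj = X @ W
# In the Sw null space, samples of one class collapse to a single point:
print(np.max(np.abs(proj[0] - proj[1])))   # ~0
```

The collapse of each class to a point is exactly why the null space of Sw is attractive in the SSS regime: within-class variation is removed entirely while class separation is preserved.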
Modular Hamiltonians for deformed half-spaces and the averaged null energy condition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faulkner, Thomas; Leigh, Robert G.; Parrikar, Onkar
We study modular Hamiltonians corresponding to the vacuum state for deformed half-spaces in relativistic quantum field theories on R^{1,d-1}. We show that in addition to the usual boost generator, there is a contribution to the modular Hamiltonian at first order in the shape deformation, proportional to the integral of the null components of the stress tensor along the Rindler horizon. We use this fact along with monotonicity of relative entropy to prove the averaged null energy condition in Minkowski space-time. This subsequently gives a new proof of the Hofman-Maldacena bounds on the parameters appearing in CFT three-point functions. Our main technical advance involves adapting newly developed perturbative methods for calculating entanglement entropy to the problem at hand. Our methods were recently used to prove certain results on the shape dependence of entanglement in CFTs and here we generalize these results to excited states and real time dynamics. Finally, we discuss the AdS/CFT counterpart of this result, making connection with the recently proposed gravitational dual for modular Hamiltonians in holographic theories.
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced-space sequential quadratic programming (SQP) method with a harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null-space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The nonlinear equality constraints are thereby removed, resulting in a simple optimization problem subject only to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
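The null-space elimination of equality constraints can be sketched on a generic toy problem. This is the textbook decomposition, not the paper's condensed harmonic-balance system: any feasible point is x_p + Z z with A x_p = b and A Z = 0, so the search runs unconstrained in the reduced variable z.

```python
import numpy as np
from scipy.linalg import lstsq, null_space
from scipy.optimize import minimize

# Toy problem: min f(x) subject to A x = b, solved in the null space of A.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([1.0, 0.0])

x_p = lstsq(A, b)[0]          # particular solution of A x = b
Z = null_space(A)             # basis of the null space: A @ Z = 0

# Objective: squared distance to an (arbitrary) target point.
f = lambda x: np.sum((x - np.array([1.0, 2.0, 3.0]))**2)

# The constraints are eliminated: optimize only over the reduced variable z.
res = minimize(lambda zc: f(x_p + Z @ zc), np.zeros(Z.shape[1]))
x_star = x_p + Z @ res.x

print("constraint residual:", np.linalg.norm(A @ x_star - b))
print("x* =", np.round(x_star, 4))
```

Every iterate is feasible by construction, which is the property the abstract exploits: once the harmonic balance equations are absorbed into the parameterization, only bound constraints remain.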
Multi-Segment Radius Measurement Using an Absolute Distance Meter Through a Null Assembly
NASA Technical Reports Server (NTRS)
Merle, Cormic; Wick, Eric; Hayden, Joseph
2011-01-01
This system was one of the test methods considered for measuring the radius of curvature of one or more of the 18 segmented mirrors that form the 6.5 m diameter primary mirror (PM) of the James Webb Space Telescope (JWST). The assembled telescope will be tested at cryogenic temperatures in a 17-m diameter by 27-m high vacuum chamber at the Johnson Space Center. This system uses a Leica Absolute Distance Meter (ADM), at a wavelength of 780 nm, combined with beam-steering and beam-shaping optics to make a differential distance measurement between a ring mirror on the reflective null assembly and individual PM segments. The ADM is located inside the same Pressure-Tight Enclosure (PTE) that houses the test interferometer. The PTE maintains the ADM and interferometer at ambient temperature and pressure so that they are not directly exposed to the telescope's harsh cryogenic and vacuum environment. This system takes advantage of the existing achromatic objective and reflective null assembly used by the test interferometer to direct four ADM beamlets to four PM segments through an optical path that is coincident with the interferometer beam. A mask, positioned on a linear slide, contains an array of 1.25 mm diameter circular subapertures that map to each of the 18 PM segments as well as six positions around the ring mirror. A down-collimated 4 mm ADM beam simultaneously covers four adjacent PM segment beamlets and one ring mirror beamlet. The radius, or spacing, of all 18 segments can be measured with the addition of two orthogonally oriented scanning pentaprisms used to steer the ADM beam to any one of six different sub-aperture configurations at the plane of the ring mirror. The interferometer beam, at a wavelength of 687 nm, and the ADM beamlets, at a wavelength of 780 nm, pass through the objective and null so that the rays are normally incident on the parabolic PM surface.
After reflecting off the PM, both the ADM and interferometer beams return to their respective instruments on nearly the same path. A fifth beamlet, acting as a differential reference, reflects off a ring mirror attached to the objective and null and returns to the ADM. The spacings between the ring mirror, objective, and null are known through manufacturing tolerances as well as through an in situ null wavefront alignment of the interferometer test beam with a reflective hologram located near the caustic of the null. Since the total path length between the ring mirror and PM segments is highly deterministic, any ADM-measured departures from the predicted path length can be attributed to either spacing error or radius error in the PM. It is estimated that the path length measurement between the ring mirror and a PM segment is accurate to better than 100 μm. The unique features of this invention include the differential distance-measuring capability and its integration into an existing cryogenic- and vacuum-compatible interferometric optical test.
Imaging issues for interferometry with CGH null correctors
NASA Astrophysics Data System (ADS)
Burge, James H.; Zhao, Chunyu; Zhou, Ping
2010-07-01
Aspheric surfaces, such as telescope mirrors, are commonly measured using interferometry with computer generated hologram (CGH) null correctors. The interferometers can be made with high precision and low noise, and CGHs can control wavefront errors to accuracy approaching 1 nm for difficult aspheric surfaces. However, such optical systems are typically poorly suited for high performance imaging. The aspheric surface must be viewed through a CGH that was intentionally designed to introduce many hundreds of waves of aberration. The imaging aberrations create difficulties for the measurements by coupling both geometric and diffraction effects into the measurement. These issues are explored here, and we show how the use of larger holograms can mitigate these effects.
Recent New Ideas and Directions for Space-Based Nulling Interferometry
NASA Technical Reports Server (NTRS)
Serabyn, Eugene (Gene)
2004-01-01
This document is composed of two viewgraph presentations. The first is entitled "Recent New Ideas and Directions for Space-Based Nulling Interferometry." It reviews our understanding of interferometry compared to a year or so ago: (1) Simpler options identified, (2) A degree of flexibility is possible, allowing switching (or degradation) between some options, (3) Not necessary to define every component to the exclusion of all other possibilities and (4) MIR fibers are becoming a reality. The second, entitled "The Fiber Nuller," reviews the idea of Combining beams in a fiber instead of at a beamsplitter.
Configuration and Management of Wireless Sensor Networks
2005-12-01
monitor network status. B. CONCLUSIONS AND FUTURE WORK WSNs are an exciting and useful technology which will be used in various areas in the...int h = getSize().height; Image resizedImage = null; ImageFilter replicate = new ReplicateScaleFilter(w, h); ImageProducer prod = new
A Fourier dimensionality reduction model for big data interferometric imaging
NASA Astrophysics Data System (ADS)
Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves
2017-06-01
Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. 
MATLAB code implementing the proposed reduction method is available on GitHub.
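The singular-vector embedding described above can be sketched in a few lines of generic linear algebra; the operator below is a small random matrix standing in for the article's gridded radio-interferometric measurement operator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy measurement setup: many visibilities of a small "image".
m, n = 200, 64
Phi = rng.standard_normal((m, n))   # stand-in measurement operator
x = rng.standard_normal(n)          # image
y = Phi @ x                         # full data vector, length 200

# Embed the data into the span of the left singular vectors of Phi.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
R = U.T / s[:, None]                # reduction operator (whitened projection)
y_red = R @ y                       # reduced data, length min(m, n) = 64

# The reduced forward operator R @ Phi equals Vt, which has exactly the
# same null space as Phi: nothing Phi can distinguish is discarded.
print(y_red.shape, np.allclose(R @ Phi, Vt))
```

Because R @ Phi shares the null space of Phi, the compressed-sensing sampling properties invoked in the abstract carry over to the reduced data, while the data vector shrinks from the number of visibilities to (at most) the image size.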
Flight demonstration of a milliarcsecond pointing system for direct exoplanet imaging.
Mendillo, Christopher B; Chakrabarti, Supriya; Cook, Timothy A; Hicks, Brian A; Lane, Benjamin F
2012-10-10
We present flight results from the optical pointing control system onboard the Planetary Imaging Concept Testbed Using a Rocket Experiment (PICTURE) sounding rocket. PICTURE (NASA mission number: 36.225 UG) was launched on 8 October 2011, from White Sands Missile Range. It attempted to directly image the exozodiacal dust disk of ϵ Eridani (K2V, 3.22 pc) down to an inner radius of 1.5 AU using a visible nulling coronagraph. The rocket attitude control system (ACS) provided 627 milliarcsecond (mas) RMS body pointing (~2'' peak-to-valley). The PICTURE fine pointing system (FPS) successfully stabilized the telescope beam to 5.1 mas (0.02λ/D) RMS using an angle tracker camera and fast steering mirror. This level of pointing stability is comparable to that of the Hubble Space Telescope. We present the hardware design of the FPS, a description of the limiting noise sources and a power spectral density analysis of the FPS and rocket ACS in-flight performance.
Mani, Merry; Jacob, Mathews; Kelley, Douglas; Magnotta, Vincent
2017-01-01
Purpose: To introduce a novel method for the recovery of multi-shot diffusion-weighted (MS-DW) images from echo-planar imaging (EPI) acquisitions. Methods: Current EPI-based MS-DW reconstruction methods rely on the explicit estimation of the motion-induced phase maps to recover artifact-free images. In the new formulation, the k-space data of the artifact-free DWI are recovered using a structured low-rank matrix completion scheme, which does not require explicit estimation of the phase maps. The structured matrix is obtained as the lifting of the multi-shot data. The smooth phase modulations between shots manifest as null-space vectors of this matrix, which implies that the structured matrix is low-rank. The missing entries of the structured matrix are filled in using a nuclear-norm minimization algorithm subject to data consistency. The formulation enables the natural introduction of smoothness regularization, thus enabling implicit motion-compensated recovery of the MS-DW data. Results: Our experiments on in-vivo data show effective removal of artifacts arising from inter-shot motion using the proposed method. The method achieves better reconstruction than the conventional phase-based methods. Conclusion: We demonstrate the utility of the proposed method to effectively recover artifact-free images from Cartesian fully/under-sampled and partial-Fourier acquired data without the use of explicit phase estimates.
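The lifting and its rank deficiency can be demonstrated in a 1-D toy model in which the smooth inter-shot modulations are idealized as k-space kernels of small support. This is a generic annihilation-filter illustration, not the paper's reconstruction algorithm: shared image content forces an exact null-space vector built from the modulation kernels.

```python
import numpy as np

rng = np.random.default_rng(4)

def hankel(v, f):
    """(len(v)-f+1) x f Hankel (lifted) matrix of a 1-D k-space signal."""
    return np.stack([v[r:r + f] for r in range(len(v) - f + 1)])

# Toy multi-shot data: common content m, modulated per shot by a kernel
# of support k in k-space (idealizing a smooth phase modulation).
N, k = 64, 3
m = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # shared content
c1 = rng.standard_normal(k) + 1j * rng.standard_normal(k)  # shot-1 modulation
c2 = rng.standard_normal(k) + 1j * rng.standard_normal(k)  # shot-2 modulation
s1 = np.convolve(m, c1)   # shot-1 k-space data
s2 = np.convolve(m, c2)   # shot-2 k-space data

# Lifted structured matrix: concatenated per-shot Hankel blocks.
H = np.hstack([hankel(s1, k), hankel(s2, k)])

# s1*c2 = m*c1*c2 = s2*c1, so v = [rev(c2), -rev(c1)] annihilates H:
# the modulations are null-space vectors, hence H is rank deficient.
v = np.concatenate([c2[::-1], -c1[::-1]])
print("||H v|| =", np.linalg.norm(H @ v))
print("rank", np.linalg.matrix_rank(H, tol=1e-8), "of", H.shape[1], "columns")
```

In the actual method this rank deficiency is exploited in reverse: the missing k-space entries are completed by nuclear-norm minimization so that the lifted matrix stays low-rank, without ever estimating the phase maps explicitly.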
Compression of auditory space during forward self-motion.
Teramoto, Wataru; Sakamoto, Shuichi; Furune, Fumimasa; Gyoba, Jiro; Suzuki, Yôiti
2012-01-01
Spatial inputs from the auditory periphery can change with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and react appropriately to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. Participants were passively transported forward or backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds actually located in the traveling direction were perceived as being biased towards the null point. These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations driven by afferent signals from the vestibular system.
Integrated Optics Achromatic Nuller for Stellar Interferometry
NASA Technical Reports Server (NTRS)
Ksendzov, Alexander
2012-01-01
This innovation will replace a beam combiner, a phase shifter, and a mode conditioner, thus simplifying the system design and alignment and saving weight and space in future missions. This nuller is a dielectric-waveguide-based, four-port asymmetric coupler. Its nulling performance is based on the mode-sorting property of adiabatic asymmetric couplers, which is intrinsically achromatic. This nuller has been designed, and its performance modeled, in the 6.5-micrometer to 9.25-micrometer spectral interval (36% bandwidth). The calculated suppression of starlight for this 15-cm-long device is 10(exp -5) or better through the whole bandwidth. This is enough to satisfy the requirements of a flagship exoplanet-characterization mission. Nulling interferometry is an approach to starlight suppression that will allow the detection and spectral characterization of Earth-like exoplanets. Nulling interferometers separate the light originating from a dim planet from the bright starlight by placing the star at the bottom of a deep, destructive interference fringe, where the starlight is effectively cancelled, or nulled, thus allowing the faint off-axis light to be much more easily seen. This process is referred to as nulling of the starlight. Achromatic nulling technology is a critical component that provides the starlight suppression in interferometer-based observatories. Previously considered space-based interferometers were aimed at the approximately 6-to-20-micrometer spectral range. While this range contains the spectral features of many gases considered to be signatures of life, it also offers a better planet-to-star brightness ratio than shorter wavelengths. In the Integrated Optics Achromatic Nuller (IOAN) device, the two beams from the interferometer's collecting telescopes pass through the same focusing optic and are incident on the input of the nuller.
The ExtraSolar Planetary Imaging Coronagraph
NASA Astrophysics Data System (ADS)
Clampin, M.; Lyon, R.
2010-10-01
The Extrasolar Planetary Imaging Coronagraph (EPIC) is a 1.65-m telescope employing a visible nulling coronagraph (VNC) to deliver high-contrast images of extrasolar system architectures. EPIC will survey the architectures of exosolar systems and investigate the physical nature of planets in these solar systems. The VNC features an inner working angle of ≤2λ/D and offers an ideal balance between performance and feasibility of implementation without sacrificing science return. The VNC does not demand unrealistic thermal stability from its telescope optics, achieving its primary mirror surface figure requires no new technology, and pointing stability is within the state of the art. The EPIC mission will be launched into a drift-away orbit with a five-year mission lifetime.
High Contrast Vacuum Nuller Testbed (VNT) Contrast, Performance and Null Control
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Petrone, Peter; Mallik, Udayan; Madison, Timothy; Bolcar, Matthew R.
2012-01-01
Herein we report on our Visible Nulling Coronagraph (VNC) high-contrast result of 10(exp 9) contrast averaged over a focal plane region extending from 1-4 lambda/D with the Vacuum Nuller Testbed (VNT) in a vibration-isolated vacuum chamber. The VNC is a hybrid interferometric/coronagraphic approach for exoplanet science. It operates with high Lyot stop efficiency for filled, segmented, and sparse or diluted-aperture telescopes, thereby spanning the range of potential future NASA flight telescopes. NASA Goddard Space Flight Center (GSFC) has a well-established effort to develop the VNC and its technologies, and has developed an incremental sequence of VNC testbeds to advance this approach and its enabling technologies. These testbeds have enabled advancement of high-contrast, visible-light nulling interferometry to unprecedented levels. The VNC is based on a modified Mach-Zehnder nulling interferometer, with a "W" configuration to accommodate a hex-packed MEMS-based deformable mirror, a coherent fiber bundle, and achromatic phase shifters. We give an overview of the VNT and discuss the high-contrast laboratory results, the optical configuration, critical technologies, and null sensing and control.
Progress in four-beam nulling: results from the Terrestrial Planet Finder planet detection testbed
NASA Technical Reports Server (NTRS)
Martin, Stefan
2006-01-01
The Terrestrial Planet Finder Interferometer (TPF-I) is a large space telescope consisting of four 4-meter-diameter telescopes flying in formation together with a fifth beam-combiner spacecraft.
Influence of pinches on magnetic reconnection in turbulent space plasmas
NASA Astrophysics Data System (ADS)
Olshevsky, Vyacheslav; Lapenta, Giovanni; Markidis, Stefano; Divin, Andrey
A generally accepted scenario of magnetic reconnection in space plasmas is the breakage of magnetic field lines at X-points. In the laboratory, reconnection is widely studied in pinches, current channels embedded in twisted magnetic fields. No model of magnetic reconnection in space plasmas considers both null-points and pinches as peers. We have performed a particle-in-cell simulation of magnetic reconnection in a three-dimensional configuration where null-points are present initially and Z-pinches form during the simulation. The X-points are relatively stable, and no substantial energy dissipation is associated with them. On the contrary, turbulent magnetic reconnection in the pinches causes the magnetic energy to decay at a rate of approximately 1.5 percent per ion gyroperiod. Current channels and twisted magnetic fields are ubiquitous in turbulent space plasmas, so pinches can be responsible for the observed high magnetic reconnection rates.
A toy Penrose inequality and its proof
NASA Astrophysics Data System (ADS)
Bengtsson, Ingemar; Jakobsson, Emma
2016-12-01
We formulate and prove a toy version of the Penrose inequality. The formulation mimics the original Penrose inequality in which the scenario is the following: a shell of null dust collapses in Minkowski space and a marginally trapped surface forms on it. Through a series of arguments relying on established assumptions, an inequality relating the area of this surface to the total energy of the shell is formulated. Then a further reformulation turns the inequality into a statement relating the area and the outer null expansion of a class of surfaces in Minkowski space itself. The inequality has been proven to hold true in many special cases, but there is no proof in general. In the toy version here presented, an analogous inequality in (2 + 1)-dimensional anti-de Sitter space turns out to hold true.
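For orientation, the original inequality being mimicked can be stated compactly. The form below is the standard textbook statement for a marginally trapped surface, not a quotation from the paper:

```latex
% Standard Penrose inequality: a marginally trapped surface of area A in a
% spacetime of total mass M should satisfy
M \;\ge\; \sqrt{\frac{A}{16\pi}} .
% The toy version discussed above replaces M by the energy E of the
% collapsing null shell and works in (2+1)-dimensional anti-de Sitter space.
```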
NASA Technical Reports Server (NTRS)
Shao, Michael; Serabyn, Eugene; Levine, Bruce Martin; Beichman, Charles; Liu, Duncan; Martin, Stefan; Orton, Glen; Mennesson, Bertrand; Morgan, Rhonda; Velusamy, Thangasamy;
2003-01-01
This talk describes a new concept for visible-light direct detection of Earth-like extrasolar planets using a nulling coronagraph instrument behind a 4-m telescope in space. In the baseline design, a 4-beam nulling interferometer is synthesized from the telescope pupil, producing a very deep theta^4 null which is then filtered by a coherent array of single-mode fibers to suppress the residual scattered light. With perfect optics, the stellar leakage is less than 1e-11 of the starlight at the location of the planet. With diffraction-limited telescope optics (lambda/20), suppression of the starlight to 1e-10 is possible. The concept is described along with the key advantages over more traditional approaches such as apodized-aperture telescopes and Lyot-type coronagraphs.
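The quoted theta^4 behavior can be checked with a toy one-dimensional beam geometry (our own choice of amplitudes and positions, not the baseline design): when the beam amplitudes sum to zero and also have zero first moment, the combined field scales as theta^2, so the leaked intensity scales as theta^4.

```python
import numpy as np

# Four beams whose amplitudes sum to zero and have zero first moment,
# so the combined field ~ theta^2 and the leaked intensity ~ theta^4.
amps = np.array([1.0, -1.0, -1.0, 1.0])
pos = np.array([-3.0, -1.0, 1.0, 3.0])   # baseline positions, arbitrary units

def leaked_intensity(theta, k=1.0):
    field = np.sum(amps * np.exp(1j * k * pos * theta))
    return np.abs(field) ** 2

# Halving the off-axis angle should reduce the leak by ~2^4 = 16.
ratio = leaked_intensity(1e-3) / leaked_intensity(5e-4)
```

A star of finite angular size samples small theta, so a theta^4 null leaks far less stellar light than the theta^2 null of a simple two-beam interferometer.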
Comparison of Heat Flux Gages for High Enthalpy Flows - NASA Ames and IRS
NASA Technical Reports Server (NTRS)
Loehle, Stefan; Nawaz, Anuscheh; Herdrich, Georg; Fasoulas, Stefanos; Martinez, Edward; Raiche, George
2016-01-01
This article is a companion to a paper on heat flux measurements initiated under a Space Act Agreement in 2011. The current focus of this collaboration between the Institute of Space Systems (IRS) of the University of Stuttgart and NASA Ames Research Center is the comparison and refinement of diagnostic measurements. A first experimental campaign to test different heat flux gages in the NASA Interaction Heating Facility (IHF) and the Plasmawindkanaele (PWK) at IRS was established. This paper focuses on the results of the measurements conducted at IRS. The tested gages included a flat-face and a hemispherical probe head, a 4-inch hemispherical slug calorimeter, a null-point calorimeter from Ames, and a null-point calorimeter developed for this purpose at IRS. The Ames null-point calorimeter was unfortunately defective upon arrival. The measured heat fluxes agree fairly well with each other. The discrepancies can be attributed to signal-to-noise levels and the probe geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blume-Kohout, Robin J; Scholten, Travis L.
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
Wave field and evanescent waves produced by a sound beam incident on a simulated sediment
NASA Astrophysics Data System (ADS)
Osterhoudt, Curtis F.; Marston, Philip L.; Morse, Scot F.
2005-09-01
When a sound beam in water is incident on a sediment at a sufficiently small grazing angle, the resulting wave field in the sediment is complicated, even for the case of flat, fluidlike sediments. The wave field in the sediment for a sound beam from a simple, unshaded, finite transducer has an evanescent component and diffractive components. These components can interfere to produce a series of nulls outside the spatial region dominated by the evanescent wave field. This situation has been experimentally simulated by using a combination of previously described immiscible liquids [Osterhoudt et al., J. Acoust. Soc. Am. 117, 2483 (2005)]. The spacing between the observed nulls is similar to that seen in a wave-number-integration-based synthesis (using OASES) for a related problem. An analysis of a dephasing distance for evanescent and algebraically decaying components [T. J. Matula and P. L. Marston, J. Acoust. Soc. Am. 97, 1389-1398 (1995)] explains the spacing of the nulls. [Work supported by ONR.]
NASA Technical Reports Server (NTRS)
Levinton, Douglas B.; Cash, Webster C.; Gleason, Brian; Kaiser, Michael J.; Levine, Sara A.; Lo, Amy S.; Schindhelm, Eric; Shipley, Ann F.
2007-01-01
A new mission concept for the direct imaging of exo-solar planets called the New Worlds Observer (NWO) has been proposed. The concept involves flying a meter-class space telescope in formation with a newly-conceived, specially-shaped, deployable star-occulting shade several meters across at a separation of some tens of thousands of kilometers. The telescope would make its observations from behind the starshade in a volume of high suppression of incident irradiance from the star around which planets orbit. The required level of irradiance suppression created by the starshade for an efficacious mission is of order 0.1 to 10 parts per billion in broadband light. This paper discusses the experimental setup developed to accurately measure the suppression ratio of irradiance produced at the null position behind candidate starshade forms to these levels. It also presents results of broadband measurements which demonstrated suppression levels of just under 100 parts per billion in air using the Sun as a light source. Analytical modeling of spatial irradiance distributions surrounding the null are presented and compared with photographs of irradiance captured in situ behind candidate starshades.
The Quantum Focussing Conjecture and Quantum Null Energy Condition
NASA Astrophysics Data System (ADS)
Koeller, Jason
Evidence has been gathering over the decades that spacetime and gravity are best understood as emergent phenomena, especially in the context of a unified description of quantum mechanics and gravity. The Quantum Focussing Conjecture (QFC) and Quantum Null Energy Condition (QNEC) are two recently proposed relationships between entropy and geometry, and between energy and entropy, respectively, which further strengthen this idea. In this thesis, we study the QFC and the QNEC. We prove the QNEC in a variety of contexts, including free field theories on Killing horizons, holographic theories on Killing horizons, and more general curved spacetimes. We also consider the implications of the QFC and QNEC in asymptotically flat space, where they constrain the information content of gravitational radiation arriving at null infinity, and in AdS/CFT, where they are related to other semiclassical inequalities and properties of boundary-anchored extremal area surfaces. It is shown that the assumption of validity and vacuum-state saturation of the QNEC for regions of flat space defined by smooth cuts of null planes implies a local formula for the modular Hamiltonian of these regions. We also demonstrate that the QFC as originally conjectured can be violated in generic theories in d ≥ 5, which led the way to an improved formulation subsequently suggested by Stefan Leichenauer.
Forms of null Lagrangians in field theories of continuum mechanics
NASA Astrophysics Data System (ADS)
Kovalev, V. A.; Radaev, Yu. N.
2012-02-01
The divergence representation of a null Lagrangian that is regular in a star-shaped domain is used to obtain its general expression containing field gradients of order ≤ 1 in the case of spacetime of arbitrary dimension. It is shown that for a static three-component field in the three-dimensional space, a null Lagrangian can contain up to 15 independent elements in total. The general form of a null Lagrangian in the four-dimensional Minkowski spacetime is obtained (the number of physical field variables is assumed arbitrary). A complete theory of the null Lagrangian for the n-dimensional spacetime manifold (including the four-dimensional Minkowski spacetime as a special case) is given. Null Lagrangians are then used as a basis for solving an important variational problem of an integrating factor. This problem involves searching for factors that depend on the spacetime variables, field variables, and their gradients and, for a given system of partial differential equations, ensure the equality between the scalar product of a vector multiplier by the system vector and some divergence expression for arbitrary field variables and, hence, allow one to formulate a divergence conservation law on solutions to the system.
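The divergence representation underlying the abstract can be written compactly; the notation below (field variables u^k, total derivatives D_α) is ours, not taken from the paper:

```latex
% A null Lagrangian is one whose Euler-Lagrange operator vanishes
% identically; in a star-shaped domain this is equivalent to a
% divergence representation with P^\alpha depending on the spacetime
% variables and the fields:
\mathcal{L}\bigl(x^{\alpha}, u^{k}, u^{k}_{,\alpha}\bigr)
  = D_{\alpha}\, P^{\alpha}\bigl(x^{\beta}, u^{k}\bigr)
\quad\Longleftrightarrow\quad
\frac{\delta \mathcal{L}}{\delta u^{k}} \equiv 0 .
```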
The Cauchy problem for space-time monopole equations in Sobolev spaces
NASA Astrophysics Data System (ADS)
Huh, Hyungjin; Yim, Jihyun
2018-04-01
We consider the initial value problem of space-time monopole equations in one space dimension with initial data in the Sobolev space H^s. Observing null structures of the system, we prove local well-posedness in an almost critical space. Unconditional uniqueness and global existence are proved for s ≥ 0. Moreover, we show that the H^1 Sobolev norm grows at a rate of at most c exp(ct^2).
OBSERVATION OF MAGNETIC RECONNECTION AT A 3D NULL POINT ASSOCIATED WITH A SOLAR ERUPTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, J. Q.; Yang, K.; Cheng, X.
Magnetic nulls have long been recognized as special structures serving as preferential sites for magnetic reconnection (MR). However, direct observational studies of MR at null points are largely lacking. Here, we show observations of MR around a magnetic null associated with an eruption that resulted in an M1.7 flare and a coronal mass ejection. The Geostationary Operational Environmental Satellites X-ray profile of the flare exhibited two peaks, at ∼02:23 UT and ∼02:40 UT on 2012 November 8, respectively. Based on the imaging observations, we find that the first and also primary X-ray peak originated from MR in the current sheet (CS) underneath the erupting magnetic flux rope (MFR). On the other hand, the second and also weaker X-ray peak was caused by MR around a null point located above the pre-eruption MFR. The interaction of the null point and the erupting MFR can be described as a two-step process. During the first step, the erupting and fast-expanding MFR passed through the null point, resulting in a significant displacement of the magnetic field surrounding the null. During the second step, the displaced magnetic field started to move back, resulting in a converging inflow and subsequently the MR around the null. The null-point reconnection is a different process from the current sheet reconnection in this flare; the latter is the cause of the main peak of the flare, while the former is the cause of the secondary peak and the conspicuous high-lying cusp structure.
Fully achromatic nulling interferometer (FANI) for high SNR exoplanet characterization
NASA Astrophysics Data System (ADS)
Hénault, François
2015-09-01
Space-borne nulling interferometers have long been considered the best option for searching for and characterizing extrasolar planets located in the habitable zone of their parent stars. Solutions for achieving deep starlight extinction are now numerous and well demonstrated. However, they essentially aim at realizing an achromatic central null in order to extinguish the star. In this communication, a major improvement of the technique is described in which the achromatization process is extended to the entire fringe pattern. Higher signal-to-noise ratios (SNR) and an appreciable simplification of the detection system should therefore result. The basic principle of this Fully Achromatic Nulling Interferometer (FANI) consists in inserting dispersive elements along the arms of the interferometer. Herein this principle is explained and illustrated by a preliminary optical system design. The typical achievable performance and limitations are discussed, and some initial tolerance requirements are also provided.
A type N radiation field solution with Λ <0 in a curved space-time and closed time-like curves
NASA Astrophysics Data System (ADS)
Ahmed, Faizuddin
2018-05-01
An anti-de Sitter background four-dimensional type N solution of Einstein's field equations is presented. The matter-energy content, a pure radiation field, satisfies the null energy condition (NEC), and the metric is free from curvature divergence. In addition, the metric admits a non-expanding, non-twisting and shear-free geodesic null congruence which is not covariantly constant. The space-time admits closed time-like curves which appear after a certain instant of time in a causally well-behaved manner. Finally, the physical interpretation of the solution, based on the study of the equation of geodesic deviation, is analyzed.
NASA Astrophysics Data System (ADS)
Ryutov, D. D.; Soukhanovskii, V. A.
2015-11-01
The snowflake magnetic configuration is characterized by the presence of two closely spaced poloidal field nulls that create a characteristic hexagonal (reminiscent of a snowflake) separatrix structure. The magnetic field properties and the plasma behaviour in the snowflake are determined by the simultaneous action of both nulls, generating a wealth of interesting physics as well as providing a chance to improve divertor performance. Among the potential beneficial effects of this geometry are: an increased volume of low poloidal field around the null, an increased connection length, and heat-flux sharing between multiple divertor channels. The authors summarise experimental results obtained with the snowflake configuration on several tokamaks. Wherever possible, the relation to existing theoretical models is described.
Search for general relativistic effects in table-top displacement metrology
NASA Technical Reports Server (NTRS)
Halverson, Peter G.; Macdonald, Daniel R.; Diaz, Rosemary T.
2004-01-01
As displacement metrology accuracy improves, general relativistic effects will become noticeable. Metrology gauges developed for the Space Interferometry Mission were used to search for locally anisotropic space-time, with a null result at the 10(exp -10) level.
Aliotta, Eric; Moulin, Kévin; Ennis, Daniel B
2018-02-01
To design and evaluate eddy current-nulled convex optimized diffusion encoding (EN-CODE) gradient waveforms for efficient diffusion tensor imaging (DTI) that is free of eddy current-induced image distortions. The EN-CODE framework was used to generate diffusion-encoding waveforms that are eddy current-compensated. The EN-CODE DTI waveform was compared with the existing eddy current-nulled twice-refocused spin echo (TRSE) sequence as well as with monopolar (MONO) and non-eddy current-compensated CODE in terms of echo time (TE) and image distortions. Comparisons were made in simulations, phantom experiments, and neuroimaging in 10 healthy volunteers. The EN-CODE sequence achieved eddy current compensation with a significantly shorter TE than TRSE (78 versus 96 ms) and a slightly shorter TE than MONO (78 versus 80 ms). Intravoxel signal variance was lower in phantoms with EN-CODE than with MONO (13.6 ± 11.6 versus 37.4 ± 25.8) and not different from TRSE (15.1 ± 11.6), indicating good robustness to eddy current-induced image distortions. Mean fractional anisotropy values in brain edges were also significantly lower with EN-CODE than with MONO (0.16 ± 0.01 versus 0.24 ± 0.02, P < 1 × 10^-5) and not different from TRSE (0.16 ± 0.01 versus 0.16 ± 0.01, P = nonsignificant). The EN-CODE sequence eliminated eddy current-induced image distortions in DTI with a TE comparable to MONO and substantially shorter than TRSE. Magn Reson Med 79:663-672, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Digital shaded-relief map of Venezuela
Garrity, Christopher P.; Hackley, Paul C.; Urbani, Franco
2004-01-01
The Digital Shaded-Relief Map of Venezuela is a composite of more than 20 tiles of 90 meter (3 arc second) pixel resolution elevation data, captured during the Shuttle Radar Topography Mission (SRTM) in February 2000. The SRTM, a joint project between the National Geospatial-Intelligence Agency (NGA) and the National Aeronautics and Space Administration (NASA), provides the most accurate and comprehensive international digital elevation dataset ever assembled. The 10-day flight mission aboard the U.S. Space Shuttle Endeavour obtained elevation data for about 80% of the world's landmass at 3-5 meter pixel resolution through the use of synthetic aperture radar (SAR) technology. SAR is desirable because it acquires data along continuous swaths, maintaining data consistency across large areas, independent of cloud cover. Swaths were captured at an altitude of 230 km, and are approximately 225 km wide with varying lengths. Rendering of the shaded-relief image required editing of the raw elevation data to remove numerous holes and anomalously high and low values inherent in the dataset. Customized ArcInfo Arc Macro Language (AML) scripts were written to interpolate areas of null values and generalize irregular elevation spikes and wells. Coastlines and major water bodies used as a clipping mask were extracted from 1:500,000-scale geologic maps of Venezuela (Bellizzia and others, 1976). The shaded-relief image was rendered with an illumination azimuth of 315° and an altitude of 65°. A vertical exaggeration of 2X was applied to the image to enhance land-surface features. Image post-processing techniques were accomplished using conventional desktop imaging software.
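The null-interpolation and shaded-relief steps described above can be sketched generically. The code below is a NumPy stand-in for the customized AML scripts (the function names and the simple neighbour-mean hole fill are our assumptions), using the standard Lambertian hillshade with the map's stated parameters: azimuth 315°, altitude 65°, 2X vertical exaggeration, 90 m cells.

```python
import numpy as np

def fill_nulls(dem):
    """Replace null (NaN) cells with the mean of their valid 8-neighbours,
    iterating until no holes remain (crude stand-in for AML interpolation)."""
    dem = dem.copy()
    while np.isnan(dem).any():
        pad = np.pad(dem, 1, constant_values=np.nan)
        shifts = [pad[i:i + dem.shape[0], j:j + dem.shape[1]]
                  for i in range(3) for j in range(3) if (i, j) != (1, 1)]
        neigh = np.nanmean(np.stack(shifts), axis=0)
        holes = np.isnan(dem)
        dem[holes] = neigh[holes]
    return dem

def hillshade(dem, azimuth_deg=315.0, altitude_deg=65.0,
              cellsize=90.0, z_factor=2.0):
    """Standard Lambertian shaded relief for a gridded DEM."""
    zenith = np.radians(90.0 - altitude_deg)
    azimuth = np.radians(360.0 - azimuth_deg + 90.0)
    dzdy, dzdx = np.gradient(dem * z_factor, cellsize)  # axis 0 = rows = y
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    return (np.cos(zenith) * np.cos(slope)
            + np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))

# Tiny DEM with one hole: flat terrain should shade uniformly at cos(zenith).
dem = np.array([[100.0, 100.0, 100.0],
                [100.0, np.nan, 100.0],
                [100.0, 100.0, 100.0]])
shade = hillshade(fill_nulls(dem))
```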
Strehl ratio: a tool for optimizing optical nulls and singularities.
Hénault, François
2015-07-01
In this paper a set of radial and azimuthal phase functions are reviewed that have a null Strehl ratio, which is equivalent to generating a central extinction in the image plane of an optical system. The study is conducted in the framework of Fraunhofer scalar diffraction, and is oriented toward practical cases where optical nulls or singularities are produced by deformable mirrors or phase plates. The identified solutions reveal unexpected links with the zeros of type-J Bessel functions of integer order. They include linear azimuthal phase ramps giving birth to an optical vortex, azimuthally modulated phase functions, and circular phase gratings (CPGs). It is found in particular that the CPG radiometric efficiency could be significantly improved by the null Strehl ratio condition. Simple design rules for rescaling and combining the different phase functions are also defined. Finally, the described analytical solutions could also serve as starting points for an automated searching software tool.
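The null-Strehl condition is easy to verify numerically for the simplest case mentioned above, the linear azimuthal phase ramp: the Strehl ratio is the squared modulus of the pupil-averaged phasor, and exp(imθ) with nonzero integer m averages to zero over the disk. A minimal sketch (our own discretization, not the paper's software):

```python
import numpy as np

def strehl(phase_fn, n=512):
    """Strehl ratio |<exp(i*phi)>|^2 averaged over the unit pupil disk."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    r, th = np.hypot(X, Y), np.arctan2(Y, X)
    phasor = np.exp(1j * phase_fn(r, th))[r <= 1.0]
    return np.abs(phasor.mean()) ** 2

s_flat = strehl(lambda r, th: 0.0 * r)      # unaberrated pupil: S = 1
s_vortex = strehl(lambda r, th: 2.0 * th)   # charge-2 azimuthal ramp: S ~ 0
```

The same averaging recipe applies to the radial and modulated phase functions in the paper; only the integrand changes.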
Search for general relativistic effects in table-top displacement metrology
NASA Technical Reports Server (NTRS)
Halverson, Peter G.; Diaz, Rosemary T.; Macdonald, Daniel R.
2004-01-01
As displacement metrology accuracy improves, general relativistic effects will become noticeable. Metrology gauges developed for the Space Interferometry Mission were used to search for locally anisotropic space-time, with a null result at the 10(exp -10) level.
Radiation patterns of interfacial dipole antennas
NASA Technical Reports Server (NTRS)
Engheta, N.; Papas, C. H.; Elachi, C.
1982-01-01
The radiation pattern of an infinitesimal electric dipole is calculated for the case where the dipole is vertically located on the plane interface of two dielectric half spaces and for the case where the dipole is lying horizontally along the interface. For the vertical case, it is found that the radiation pattern has nulls at the interface and along the dipole axis. For the horizontal case, it is found that the pattern has a null at the interface; that the pattern in the upper half space, whose index of refraction is taken to be less than that of the lower half space, has a single lobe whose maximum is normal to the interface; and that in the lower half space, in the plane normal to the interface and containing the dipole, the pattern has three lobes, whereas in the plane normal to the interface and normally bisecting the dipole, the pattern has two maxima located symmetrically about a minimum. Interpretation of these results in terms of the Cerenkov effect is given.
High Contrast Vacuum Nuller Testbed (VNT) Contrast, Performance and Null Control
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Petrone, Peter; Mallik, Udayan; Madison, Timothy; Bolcar, Matthew R.
2012-01-01
Herein we report on our contrast assessment and the development, sensing, and control of the Vacuum Nuller Testbed to realize a Visible Nulling Coronagraph (VNC) for exoplanet detection and characterization. The VNC is one of the few approaches that works with filled, segmented, and sparse or diluted-aperture telescope systems. It thus spans a range of potential future NASA telescopes and could be flown as a separate instrument on such a future mission. NASA/Goddard Space Flight Center has an established effort to develop VNC technologies, and an incremental sequence of testbeds to advance this approach and its critical technologies. We discuss the development of the vacuum Visible Nulling Coronagraph testbed (VNT). The VNT is an ultra-stable, vibration-isolated testbed that operates under closed-loop control within a vacuum chamber. It will be used to achieve an incremental sequence of three visible-light nulling milestones with sequentially higher contrasts of 10(exp 8), 10(exp 9) and ideally 10(exp 10) at an inner working angle of 2 lambda/D. The VNT is based on a modified Mach-Zehnder nulling interferometer, with a "W" configuration to accommodate a hex-packed MEMS-based deformable mirror, a coherent fiber bundle and achromatic phase shifters. We discuss the laboratory results, optical configuration, critical technologies and the null sensing and control approach.
Goldsmith, Harry-Dean Kenchington; Cvetojevic, Nick; Ireland, Michael; Madden, Stephen
2017-02-20
Understanding exoplanet formation and finding potentially habitable exoplanets is vital to an enhanced understanding of the universe. The use of nulling interferometry to strongly attenuate the central star's light provides the opportunity to see objects closer to the star than ever before. Given that exoplanets are usually warm, the 4 µm Mid-Infrared region is advantageous for such observations. The key performance parameters for a nulling interferometer are the extinction ratio it can attain and how well that is maintained across the operational bandwidth. Both parameters depend on the design and fabrication accuracy of the subcomponents and their wavelength dependence. Via detailed simulation it is shown in this paper that a planar chalcogenide photonic chip, consisting of three highly fabrication tolerant multimode interference couplers, can exceed an extinction ratio of 60 dB in double nulling operation and up to 40 dB for a single nulling operation across a wavelength window of 3.9 to 4.2 µm. This provides a beam combiner with sufficient performance, in theory, to image exoplanets.
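As background to the extinction-ratio figures above, the null depth of an ideal two-beam combiner degrades quadratically with residual amplitude and phase mismatch: to leading order, depth ≈ (δa² + δφ²)/4. The quick check below uses a generic two-beam model, not a simulation of the chalcogenide chip itself:

```python
import numpy as np

def null_depth(delta_a, delta_phi):
    """Ratio of destructive- to constructive-port intensity for two beams
    with fractional amplitude mismatch delta_a and phase error delta_phi (rad)."""
    e1 = 1.0
    e2 = (1.0 + delta_a) * np.exp(1j * delta_phi)
    return abs(e1 - e2) ** 2 / abs(e1 + e2) ** 2

depth = null_depth(1e-3, 1e-3)           # ~ (1e-6 + 1e-6)/4 = 5e-7
extinction_db = -10 * np.log10(depth)    # ~ 63 dB
```

This is why a 60 dB extinction target forces both the amplitude balance and the phase (path-length) error of the couplers down to the few-parts-per-thousand level across the whole 3.9-4.2 µm window.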
Paranodal permeability in `myelin mutants'
Shroff, S.; Mierzwa, A.; Scherer, S.S.; Peles, E.; Arevalo, J.C.; Chao, M.V.; Rosenbluth, J.
2011-01-01
Fluorescent dextran tracers of varying sizes have been used to assess paranodal permeability in myelinated sciatic nerve fibers from control and three `myelin mutant' mice, Caspr-null, cst-null and shaking. We demonstrate that in all of these the paranode is permeable to small tracers (3kDa, 10kDa), which penetrate most fibers, and to larger tracers (40kDa, 70kDa), which penetrate far fewer fibers and move shorter distances over longer periods of time. Despite gross diminution in transverse bands in the Caspr-null and cst-null mice, the permeability of their paranodal junctions is equivalent to that in controls. Thus, deficiency of transverse bands in these mutants does not increase the permeability of their paranodal junctions to the dextrans we used, moving from the perinodal space through the paranode to the internodal periaxonal space. In addition, we show that the shaking mice, which have thinner myelin and shorter paranodes, show increased permeability to the same tracers despite the presence of transverse bands. We conclude that the extent of penetration of these tracers does not depend on the presence or absence of transverse bands but does depend on the length of the paranode and, in turn, on the length of `pathway 3', the helical extracellular pathway that passes through the paranode parallel to the lateral edge of the myelin sheath. PMID:21618613
Behavior of the maximum likelihood in quantum state tomography
NASA Astrophysics Data System (ADS)
Scholten, Travis L.; Blume-Kohout, Robin
2018-02-01
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
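The way a boundary constraint breaks the Wilks theorem can be seen in a one-parameter toy model; this is my own simplification for illustration, not the paper's metric-projected LAN machinery. Testing θ = 0 against θ ≥ 0 for Gaussian data, the constrained MLE is max(X̄, 0), so the loglikelihood ratio at the boundary is a 50/50 mixture of a point mass at zero and a χ²(1), rather than the plain χ²(1) the Wilks theorem would predict.

```python
import numpy as np

# Simulate the null distribution of the loglikelihood ratio for
# X_i ~ N(theta, 1) with the constraint theta >= 0.  The MLE is
# max(Xbar, 0), so lambda = n * max(Xbar, 0)**2.
rng = np.random.default_rng(0)
n, trials = 100, 20000
xbar = rng.normal(0.0, 1.0 / np.sqrt(n), size=trials)
llr = n * np.maximum(xbar, 0.0) ** 2
frac_zero = float(np.mean(llr == 0.0))   # ~0.5: the boundary point mass
mean_llr = float(llr.mean())             # ~0.5: half the chi2(1) mean of 1
```

Using the χ²(1) quantiles here would misstate the significance level, which is the one-dimensional analogue of why positivity of ρ demands a replacement null theory.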
Learning Efficient Spatial-Temporal Gait Features with Deep Learning for Human Identification.
Liu, Wu; Zhang, Cheng; Ma, Huadong; Li, Shuangqun
2018-02-06
The integration of the latest breakthroughs in bioinformatics technology on one side and artificial intelligence on the other enables remarkable advances in fields such as intelligent security, computational biology, and healthcare. Among these, biometrics-based automatic human identification is one of the most fundamental and significant research topics. Human gait, a biometric feature that can be captured remotely and is robust and secure, has gained significant attention for biometrics-based human identification. However, existing methods cannot handle well the indistinctive inter-class differences and large intra-class variations of human gait in real-world situations. In this paper, we develop efficient spatial-temporal gait features with deep learning for human identification. First, we propose a gait energy image (GEI) based Siamese neural network to automatically extract robust and discriminative spatial gait features. Furthermore, we exploit deep 3-dimensional convolutional networks to learn convolutional 3D (C3D) representations as temporal gait features. Finally, the GEI and C3D gait features are embedded into the null space by the Null Foley-Sammon Transform (NFST). In this new space, the spatial-temporal features are combined with distance metric learning to drive the similarity metric to be small for pairs of gaits from the same person and large for pairs from different persons. Experiments on the world's largest gait database show that our framework impressively outperforms state-of-the-art methods.
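The null-space embedding step can be sketched as follows. This is my own minimal version of a Null Foley-Sammon Transform, assuming the small-sample case (fewer samples than feature dimensions, as is typical for gait features) so that the within-class scatter matrix is singular; it is not the authors' implementation.

```python
import numpy as np

def nfst_directions(X, y):
    """Minimal Null Foley-Sammon Transform sketch: find projection
    directions in the null space of the within-class scatter (so every
    sample of a class maps to the same point) that still separate the
    class means.  Assumes n_samples < n_features."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    # Rows centered about their class mean span the range of the
    # within-class scatter Sw = Xc^T Xc.
    Xc = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    _, s, Vt = np.linalg.svd(Xc, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    N = Vt[rank:].T                      # orthonormal basis of null(Sw)
    # Between-class scatter restricted to the null space; keep its top
    # eigenvectors as the discriminant directions.
    M = np.vstack([X[y == c].mean(axis=0) - mu for c in classes])
    B = M @ N
    evals, evecs = np.linalg.eigh(B.T @ B)
    top = evecs[:, ::-1][:, :len(classes) - 1]
    return N @ top
```

In the resulting space, all training gaits of one person collapse to a single point while different persons remain separated, which is why it pairs naturally with the distance metric learning described above.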
Wildfire cluster detection using space-time scan statistics
NASA Astrophysics Data System (ADS)
Tonini, M.; Tuia, D.; Ratle, F.; Kanevski, M.
2009-04-01
The aim of the present study is to identify spatio-temporal clusters of fire sequences using space-time scan statistics. These statistical methods are specifically designed to detect clusters and assess their significance. Basically, scan statistics work by comparing the set of events occurring inside a scanning window (or a space-time cylinder for spatio-temporal data) with those that lie outside. Windows of increasing size scan the zone across space and time: the likelihood ratio is calculated for each window (comparing the ratio of observed to expected cases inside and outside), and the window with the maximum value is taken to be the most likely cluster, and so on. Under the null hypothesis of spatial and temporal randomness, these events are distributed according to a known discrete-state random process (Poisson or Bernoulli) whose parameters can be estimated. Given this assumption, it is possible to test whether or not the null hypothesis holds in a specific area. To deal with the fire data, the space-time permutation scan statistic was applied, since it does not require the explicit specification of the population at risk in each cylinder. The case study comprises daily fire detections in Florida from the Moderate Resolution Imaging Spectroradiometer (MODIS) active fire product during the period 2003-2006. As a result, statistically significant clusters have been identified. Performing the analyses over the entire study period, three of the five most likely clusters were identified in the forested areas in the north; the other two clusters cover a large zone in the south, corresponding to agricultural land and the prairies in the Everglades. Furthermore, the analyses were performed separately for the four years to assess whether the wildfires recur each year during the same period. It emerges that clusters of forest fires are more frequent in the hot seasons (spring and summer), while in the southern areas fires are present throughout the year. Analyzing the fire distribution to evaluate whether fires are statistically more frequent in certain areas and/or periods of the year can be useful to support fire management and to focus prevention measures.
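The per-window likelihood ratio at the heart of the scan can be written down compactly. The sketch below is a generic Kulldorff-style form for illustration, not the authors' code; `c` and `mu` are the observed and expected case counts inside one space-time cylinder.

```python
from math import log

def window_llr(c, mu, total):
    """Log-likelihood ratio for one scanning cylinder: c = observed
    cases inside, mu = expected cases inside under the null, total =
    all cases in the study region.  Windows without an excess (c <= mu)
    score zero."""
    if c <= mu:
        return 0.0
    out = total - c
    if out <= 0:
        return c * log(c / mu)
    return c * log(c / mu) + out * log(out / (total - mu))

# The most likely cluster is the cylinder maximizing this statistic; its
# significance is then assessed by Monte Carlo permutation of the event
# times, which is what makes the space-time permutation variant free of
# an explicit population at risk.
```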
Nulling Data Reduction and On-Sky Performance of the Large Binocular Telescope Interferometer
NASA Technical Reports Server (NTRS)
Defrere, D.; Hinz, P. M.; Mennesson, B.; Hoffman, W. F.; Millan-Gabet, R.; Skemer, A. J.; Bailey, V.; Danchi, W. C.; Downy, E. C.; Durney, O.;
2016-01-01
The Large Binocular Telescope Interferometer (LBTI) is a versatile instrument designed for high angular resolution and high-contrast infrared imaging (1.5-13 micrometers). In this paper, we focus on the mid-infrared (8-13 micrometers) nulling mode and present its theory of operation, data reduction, and on-sky performance as of the end of the commissioning phase in 2015 March. With an interferometric baseline of 14.4 m, the LBTI nuller is specifically tuned to resolve the habitable zone of nearby main-sequence stars, where warm exozodiacal dust emission peaks. Measuring the exozodi luminosity function of nearby main-sequence stars is a key milestone to prepare for future exo-Earth direct imaging instruments. Thanks to recent progress in wavefront control and phase stabilization, as well as in data reduction techniques, the LBTI demonstrated in 2015 February a calibrated null accuracy of 0.05% over a 3 hr long observing sequence on the bright nearby A3V star Beta Leo. This is equivalent to an exozodiacal disk density of 15-30 zodi for a Sun-like star located at 10 pc, depending on the adopted disk model. This result sets a new record for high-contrast mid-infrared interferometric imaging and opens a new window on the study of planetary systems.
Nulling Stabilization in the Presence of Perturbation
NASA Astrophysics Data System (ADS)
Houairi, K.; Cassaing, F.; Le Duigou, J. M.; Barillot, M.; Coudé du Foresto, V.; Hénault, F.; Jacquinod, S.; Ollivier, M.; Reess, J.-M.; Sorrente, B.
2007-07-01
Nulling interferometry is one of the most promising methods to study habitable extrasolar systems. In this context, several projects have been proposed, such as ALADDIN on the ground or DARWIN and PEGASE in space. A first step towards these missions will be performed with a laboratory breadboard, named PERSEE, built by a consortium including CNES, IAS, LESIA, OCA, ONERA and TAS. Its main goals are the demonstration of a polychromatic null with a 10^-4 rejection rate and a 10^-5 stability despite the introduction of realistic perturbations, the study of the interfaces with the formation-flying spacecraft, and the joint operation of the cophasing system with the nuller. The breadboard integration should end in 2009; PERSEE will then be open to proposals from the scientific community.
Causal structures in Gauss-Bonnet gravity
NASA Astrophysics Data System (ADS)
Izumi, Keisuke
2014-08-01
We analyze causal structures in Gauss-Bonnet gravity. It is known that Gauss-Bonnet gravity potentially has superluminal propagation of gravitons due to its noncanonical kinetic terms. In a theory with superluminal modes, an analysis of causality based on null curves makes no sense, and thus we need to analyze them in a different way. In this paper, using the method of characteristics, we analyze the causal structure in Gauss-Bonnet gravity. We find that, on a Killing horizon, gravitons can propagate in the null direction tangent to the Killing horizon. Therefore, a Killing horizon can be a causal edge as in the case of general relativity; i.e., a Killing horizon is the "event horizon" in the sense of causality. We also analyze causal structures on nonstationary solutions with (D-2)-dimensional maximal symmetry, including spherically symmetric and flat spaces. If the geometrical null energy condition, R_AB N^A N^B ≥ 0 for any null vector N^A, is satisfied, the radial velocity of gravitons must be less than or equal to that of light. However, if the geometrical null energy condition is violated, gravitons can propagate faster than light. Hence, on an evaporating black hole, where the geometrical null energy condition is expected not to hold, classical gravitons can escape from the "black hole" defined with null curves. That is, the causal structures become nontrivial. This may be one of the possible solutions for the information loss paradox of evaporating black holes.
Potential of balloon payloads for in flight validation of direct and nulling interferometry concepts
NASA Astrophysics Data System (ADS)
Demangeon, Olivier; Ollivier, Marc; Le Duigou, Jean-Michel; Cassaing, Frédéric; Coudé du Foresto, Vincent; Mourard, Denis; Kern, Pierre; Lam Trong, Tien; Evrard, Jean; Absil, Olivier; Defrere, Denis; Lopez, Bruno
2010-07-01
While the question of low-cost / low-science precursors is raised to validate the concepts of direct and nulling interferometry space missions, balloon payloads offer a real opportunity thanks to their relatively low cost and reduced development plan. Taking into account the flight capabilities of various balloon types, we propose in this paper several payload concepts together with their flight plans. We also discuss the pros and cons of each concept in terms of technological and science demonstration power.
Two-sample discrimination of Poisson means
NASA Technical Reports Server (NTRS)
Lampton, M.
1994-01-01
This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
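The conditional binomial test described above can be implemented in a few lines. This sketch follows the stated model directly with an explicit pmf sum, whereas the paper uses lookup tables for efficiency; the two-sided convention chosen here (summing all outcomes no more probable than the observed one) is one common choice.

```python
from math import comb

def binomial_partition_test(a, b, exp_a=1.0, exp_b=1.0):
    """Two-sample comparison of Poisson counts via the conditional
    binomial model: given N = a + b total events, the null hypothesis
    says count A is Binomial(N, f) with partition fraction
    f = exp_a / (exp_a + exp_b) set by the exposures.  Returns the
    two-sided p-value."""
    n = a + b
    f = exp_a / (exp_a + exp_b)
    pmf = [comb(n, k) * f**k * (1.0 - f) ** (n - k) for k in range(n + 1)]
    p_obs = pmf[a]
    # Sum the probability of every outcome no more likely than observed.
    return min(1.0, sum(p for p in pmf if p <= p_obs * (1.0 + 1e-12)))

# Example: A = 12 counts vs B = 3 counts with equal exposures gives a
# p-value of about 0.035, significant at alpha = 0.05.
```

Because the test conditions on the total N, it is exact even for very small counts in the signal, the background, or both, which is the regime emphasized in the abstract.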
Holographic complexity in Vaidya spacetimes. Part I
NASA Astrophysics Data System (ADS)
Chapman, Shira; Marrochio, Hugo; Myers, Robert C.
2018-06-01
We examine holographic complexity in time-dependent Vaidya spacetimes with both the complexity=volume (CV) and complexity=action (CA) proposals. We focus on the evolution of the holographic complexity for a thin shell of null fluid, which collapses into empty AdS space and forms a (one-sided) black hole. In order to apply the CA approach, we introduce an action principle for the null fluid which sources the Vaidya geometries, and we carefully examine the contribution of the null shell to the action. Further, we find that adding a particular counterterm on the null boundaries of the Wheeler-DeWitt patch is essential if the gravitational action is to properly describe the complexity of the boundary state. For both the CV proposal and the CA proposal (with the extra boundary counterterm), the late time limit of the growth rate of the holographic complexity for the one-sided black hole is precisely the same as that found for an eternal black hole.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth Sarkis
1987-01-01
The correspondence between robotic manipulators and single-gimbal Control Moment Gyro (CMG) systems was exploited to aid in the understanding and design of single-gimbal CMG steering laws. A test for null motion near a singular CMG configuration was derived which is able to distinguish between escapable and unescapable singular states. Detailed analysis of the Jacobian matrix null space was performed, and the results were used to develop and test a variety of single-gimbal CMG steering laws. Computer simulations showed that all existing singularity avoidance methods are unable to avoid elliptic internal singularities. A new null motion algorithm using the Moore-Penrose pseudoinverse, however, was shown by simulation to avoid elliptic-type singularities under certain conditions. The SR-inverse, with appropriate null motion, was proposed as a general approach to singularity avoidance because of its ability to avoid singularities through limited introduction of torque error. Simulation results confirmed the superior performance of this method compared to the other available and proposed pseudoinverse-based steering laws.
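The null-motion idea underlying these steering laws can be sketched generically. This is an illustrative decomposition of the gimbal rates, not the thesis' specific algorithm: the homogeneous term produces zero net torque, so it is free to reconfigure the gimbals away from singular states.

```python
import numpy as np

def cmg_rates(J, torque_cmd, z):
    """Pseudoinverse steering with added null motion.  J is the 3 x n
    CMG Jacobian mapping gimbal rates to output torque; torque_cmd is
    the commanded torque; z is an arbitrary n-vector shaped into null
    motion.  The rates split into a particular solution J^+ h and a
    homogeneous part (I - J^+ J) z lying in the null space of J."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    null_proj = np.eye(n) - J_pinv @ J   # projector onto null(J)
    return J_pinv @ torque_cmd + null_proj @ z
```

The SR-inverse variant mentioned above replaces J^+ with J^T (J J^T + λI)^(-1), trading a small, bounded torque error for well-behaved gimbal rates near singular configurations.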
A musculoskeletal shoulder model based on pseudo-inverse and null-space optimization.
Terrier, Alexandre; Aeberhard, Martin; Michellod, Yvan; Mullhaupt, Philippe; Gillet, Denis; Farron, Alain; Pioletti, Dominique P
2010-11-01
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles and the reaction at the glenohumeral joint, which was considered as a spherical joint. The muscle wrapping was considered around the humeral head, assumed spherical. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The prediction of muscle moment arms was consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
Vacuum Nuller Testbed (VNT) Performance, Characterization and Null Control: Progress Report
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Petrone, Peter; Mallik, Udayan; Madison, Timothy; Bolcar, Matthew R.; Noecker, M. Charley; Kendrick, Stephen; Helmbrecht, Michael
2011-01-01
Herein we report on the development, sensing and control, and our first results with the Vacuum Nuller Testbed to realize a Visible Nulling Coronagraph (VNC) for exoplanet coronagraphy. The VNC is one of the few approaches that works with filled, segmented and sparse or diluted-aperture telescope systems. It thus spans a range of potential future NASA telescopes and could be flown as a separate instrument on such a future mission. NASA/Goddard Space Flight Center (GSFC) has a well-established effort to develop VNC technologies, and has developed an incremental sequence of VNC testbeds to advance this approach and the enabling technologies associated with it. We discuss the continued development of the vacuum Visible Nulling Coronagraph testbed (VNT). The VNT is an ultra-stable, vibration-isolated testbed that operates under closed-loop control within a vacuum chamber. It will be used to achieve an incremental sequence of three visible-light nulling milestones with sequentially higher contrasts of 10^8, 10^9 and ideally 10^10 at an inner working angle of 2λ/D. The VNT is based on a modified Mach-Zehnder nulling interferometer, with a "W" configuration to accommodate a hex-packed MEMS-based deformable mirror, a coherent fiber bundle and achromatic phase shifters. We discuss the initial laboratory results, the optical configuration, critical technologies and the null sensing and control approach.
Measurement of steep aspheric surfaces using improved two-wavelength phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Zhang, Liqiong; Wang, Shaopu; Hu, Yao; Hao, Qun
2017-10-01
Optical components with aspheric surfaces can improve the imaging quality of optical systems, and also provide extra advantages such as lighter weight, smaller volume and simpler structure. In order to satisfy these performance requirements, the surface error of aspheric surfaces, especially high-departure aspheric surfaces, must be measured accurately and conveniently. The major obstacle of traditional null interferometry is that specific and complex null optics must be designed to fully compensate for the normal aberration of the aspheric surface under test. Non-null interferometry, which only partially compensates for the aspheric normal aberration, can test aspheric surfaces without specific null optics. In this work, a novel non-null test approach is described that measures the deviation between the aspheric surface and the best reference sphere using an improved two-wavelength phase-shifting interferometer. With the help of the calibration based on reverse iteration optimization, we can effectively remove the retrace error and thus improve the accuracy. Simulation results demonstrate that this method can measure aspheric surfaces with departures of tens of microns from the best reference sphere, which introduce approximately 500λ of wavefront aberration at the detector.
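Two-wavelength interferometry conventionally gains its extended unambiguous range from the synthetic (equivalent) wavelength. The abstract does not state the relation, so the following is standard background rather than the paper's method: phase differences measured at the two wavelengths behave like a single measurement at a much longer effective wavelength.

```python
def equivalent_wavelength(lambda1, lambda2):
    """Synthetic wavelength of a two-wavelength interferometer.  The
    difference of the phases measured at lambda1 and lambda2 is
    equivalent to a single-wavelength measurement at lambda_eq, which
    extends the unambiguous range needed to track the large departures
    of a steep asphere.  Units follow the inputs."""
    return lambda1 * lambda2 / abs(lambda1 - lambda2)

# e.g. combining 632.8 nm and 532 nm (hypothetical example wavelengths)
# gives lambda_eq of roughly 3.34 µm.
```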
Whistler mode refraction in highly nonuniform magnetic fields
NASA Astrophysics Data System (ADS)
Urrutia, J. M.; Stenzel, R.
2016-12-01
In a large laboratory plasma, the propagation of whistler modes is measured in highly nonuniform magnetic fields created by a current-carrying wire. Ray tracing is not applicable since the wavelength and the gradient scale length are comparable. The waves are excited with a loop antenna near the wire. The antenna launches an m=1 helicon mode in a uniform plasma. The total magnetic field consists of a weak uniform background field and a nearly circular field of a straight wire across the background field. A circular loop produces 3D null points and a 2D null line. The measured whistler wave propagation will be shown. It is relevant to whistler mode propagation in space plasmas near magnetic null points, small flux ropes, lunar crustal magnetic fields and active wave injection experiments.
Null geodesics and wave front singularities in the Gödel space-time
NASA Astrophysics Data System (ADS)
Kling, Thomas P.; Roebuck, Kevin; Grotzke, Eric
2018-01-01
We explore wave fronts of null geodesics in the Gödel metric emitted from point sources both at, and away from, the origin. For constant time wave fronts emitted by sources away from the origin, we find cusp ridges as well as blue sky metamorphoses where spatially disconnected portions of the wave front appear, connect to the main wave front, and then later break free and vanish. These blue sky metamorphoses in the constant time wave fronts highlight the non-causal features of the Gödel metric. We introduce a concept of physical distance along the null geodesics, and show that for wave fronts of constant physical distance, the reorganization of the points making up the wave front leads to the removal of cusp ridges.
Adaptive mesh refinement for characteristic grids
NASA Astrophysics Data System (ADS)
Thornburg, Jonathan
2011-05-01
I consider techniques for Berger-Oliger adaptive mesh refinement (AMR) when numerically solving partial differential equations with wave-like solutions, using characteristic (double-null) grids. Such AMR algorithms are naturally recursive, and the best-known past Berger-Oliger characteristic AMR algorithm, that of Pretorius and Lehner (J Comp Phys 198:10, 2004), recurses on individual "diamond" characteristic grid cells. This leads to the use of fine-grained memory management, with individual grid cells kept in two-dimensional linked lists at each refinement level. This complicates the implementation and adds overhead in both space and time. Here I describe a Berger-Oliger characteristic AMR algorithm which instead recurses on null slices. This algorithm is very similar to the usual Cauchy Berger-Oliger algorithm, and uses relatively coarse-grained memory management, allowing entire null slices to be stored in contiguous arrays in memory. The algorithm is very efficient in both space and time. I describe discretizations yielding both second and fourth order global accuracy. My code implementing the algorithm described here is included in the electronic supplementary materials accompanying this paper, and is freely available to other researchers under the terms of the GNU general public license.
Light propagation in the averaged universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagheri, Samae; Schwarz, Dominik J., E-mail: s_bagheri@physik.uni-bielefeld.de, E-mail: dschwarz@physik.uni-bielefeld.de
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
Antenna Beam Pattern Characteristics of HAPS User Terminal
NASA Astrophysics Data System (ADS)
Ku, Bon-Jun; Oh, Dae Sub; Kim, Nam; Ahn, Do-Seob
High Altitude Platform Stations (HAPS) have recently been considered as a green infrastructure to provide high-speed multimedia services. The critical issue for HAPS is frequency sharing with satellite systems. Regulating the antenna beam pattern using adaptive antenna schemes is one means of facilitating sharing with a space receiver for fixed satellite services on the uplink of a HAPS system operating in U bands. In this letter, we investigate the antenna beam pattern characteristics of HAPS user terminals for various values of the main-beam scan angle, null position angle, and null width.
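Placing a pattern null toward a protected satellite receiver while keeping the main beam on the platform can be illustrated with a textbook uniform-linear-array scheme. This is a hypothetical illustration of the general idea, not the terminal design studied in the letter.

```python
import numpy as np

def steer_with_null(n_elem, d, theta_main, theta_null):
    """Weights for a uniform linear array with the main beam steered to
    theta_main and a forced pattern null at theta_null.  d is the
    element spacing in wavelengths; angles are in radians."""
    k = 2.0 * np.pi * d * np.arange(n_elem)
    a_main = np.exp(1j * k * np.sin(theta_main))   # main-beam steering vector
    a_null = np.exp(1j * k * np.sin(theta_null))   # direction to protect
    # Project the conventional beamformer onto the subspace orthogonal
    # to the null-direction steering vector, zeroing the response there.
    w = a_main - a_null * (a_null.conj() @ a_main) / (a_null.conj() @ a_null)
    return w / np.linalg.norm(w)

def pattern(w, d, theta):
    """Array response magnitude of weights w toward angle theta."""
    k = 2.0 * np.pi * d * np.arange(len(w))
    return abs(w.conj() @ np.exp(1j * k * np.sin(theta)))
```

Widening the null or moving it across scan angles costs main-beam gain, which is the trade-off behind the beam pattern characteristics the letter investigates.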
Mid-Infrared Imaging of Exo-Earths: Impact of Exozodiacal Disk Structures
NASA Technical Reports Server (NTRS)
Defrere, Denis; Absil, O.; Stark, C.; den Hartog, R.; Danchi, W.
2011-01-01
The characterization of Earth-like extrasolar planets in the mid-infrared is a significant observational challenge that could be tackled by future space-based interferometers. The presence of large amounts of exozodiacal dust around nearby main-sequence stars represents, however, a potential hurdle to obtaining mid-infrared spectra of Earth-like planets. Whereas the disk brightness only affects the integration time, the emission of resonant dust structures mixes with the planet signal at the output of the interferometer and could jeopardize the spectroscopic analysis of an Earth-like planet. Fortunately, the high angular resolution provided by space-based interferometry is sufficient to spatially distinguish most of the extended exozodiacal emission from the planetary signal, and only the dust located near the planet significantly contributes to the noise level. Considering modeled resonant structures created by Earth-like planets, we address in this talk the role of exozodiacal dust in two different cases: the characterization of super-Earth planets with single space-based Bracewell interferometers (e.g., the FKSI mission) and the characterization of Earth-like planets with 4-telescope space-based nulling interferometers (e.g., the TPF-I and Darwin projects). In each case, we derive constraints on the disk parameters that can be tolerated without jeopardizing the detection of Earth-like planets.
Impaired olfaction in mice lacking aquaporin-4 water channels.
Lu, Daniel C; Zhang, Hua; Zador, Zsolt; Verkman, A S
2008-09-01
Aquaporin-4 (AQP4) is a water-selective transport protein expressed in glial cells throughout the central nervous system. AQP4 deletion in mice produces alterations in several neuroexcitation phenomena, including hearing, vision, epilepsy, and cortical spreading depression. Here, we report defective olfaction and electroolfactogram responses in AQP4-null mice. Immunofluorescence indicated strong AQP4 expression in supportive cells of the nasal olfactory epithelium. The olfactory epithelium in AQP4-null mice had identical appearance, but did not express AQP4, and had approximately 12-fold reduced osmotic water permeability. Behavioral analysis showed greatly impaired olfaction in AQP4-null mice, with latency times of 17 ± 0.7 vs. 55 ± 5 s in wild-type vs. AQP4-null mice in a buried food pellet test, which was confirmed using an olfactory maze test. Electroolfactogram voltage responses to multiple odorants were reduced in AQP4-null mice, with maximal responses to triethylamine of 0.80 ± 0.07 vs. 0.28 ± 0.03 mV. Similar olfaction and electroolfactogram defects were found in outbred (CD1) and inbred (C57/bl6) mouse genetic backgrounds. Our results establish AQP4 as a novel determinant of olfaction, the deficiency of which probably impairs extracellular space K+ buffering in the olfactory epithelium.
Multiwavelength Observations of the Candidate Disintegrating Sub-Mercury KIC 12557548b
NASA Astrophysics Data System (ADS)
Croll, Bryce; Rappaport, Saul; DeVore, John; Gilliland, Ronald L.; Crepp, Justin R.; Howard, Andrew W.; Star, Kimberly M.; Chiang, Eugene; Levine, Alan M.; Jenkins, Jon M.; Albert, Loic; Bonomo, Aldo S.; Fortney, Jonathan J.; Isaacson, Howard
2014-05-01
We present multiwavelength photometry, high angular resolution imaging, and radial velocities of the unique and confounding disintegrating low-mass planet candidate KIC 12557548b. Our high angular resolution imaging, which includes space-based Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) observations in the optical (~0.53 μm and ~0.77 μm), and ground-based Keck/NIRC2 observations in K' band (~2.12 μm), allows us to rule out background and foreground candidates at angular separations greater than 0.2'' that are bright enough to be responsible for the transits we associate with KIC 12557548. Our radial velocity limit from Keck/HIRES allows us to rule out bound, low-mass stellar companions (~0.2 M_⊙) to KIC 12557548 on orbits shorter than 10 yr, as well as placing an upper limit on the mass of the candidate planet of 1.2 Jupiter masses; therefore, the combination of our radial velocities, high angular resolution imaging, and photometry is able to rule out most false positive interpretations of the transits. Our precise multiwavelength photometry includes two simultaneous detections of the transit of KIC 12557548b using the Canada-France-Hawaii Telescope/Wide-field InfraRed Camera (CFHT/WIRCam) at 2.15 μm and the Kepler space telescope at 0.6 μm, as well as simultaneous null-detections of the transit by Kepler and HST/WFC3 at 1.4 μm. Our simultaneous HST/WFC3 and Kepler null-detections provide no evidence for radically different transit depths at these wavelengths. Our simultaneous CFHT/WIRCam detections in the near-infrared and with Kepler in the optical reveal very similar transit depths (the average ratio of the transit depths at ~2.15 μm compared with ~0.6 μm is 1.02 ± 0.20). This suggests that if the transits we observe are due to scattering from single-size particles streaming from the planet in a comet-like tail, then the particles must be ~0.5 μm in radius or larger, which would favor KIC 12557548b being a sub-Mercury rather than super-Mercury mass planet.
Based on observations obtained with WIRCam, a joint project of CFHT, Taiwan, Korea, Canada, and France, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The observatory was made possible by the generous financial support of the W.M. Keck Foundation.
Newton-Cartan gravity and torsion
NASA Astrophysics Data System (ADS)
Bergshoeff, Eric; Chatzistavrakidis, Athanasios; Romano, Luca; Rosseel, Jan
2017-10-01
We compare the gauging of the Bargmann algebra, for the case of arbitrary torsion, with the result that one obtains from a null-reduction of General Relativity. Whereas the two procedures lead to the same result for Newton-Cartan geometry with arbitrary torsion, the null-reduction of the Einstein equations necessarily leads to Newton-Cartan gravity with zero torsion. We show, for three space-time dimensions, how Newton-Cartan gravity with arbitrary torsion can be obtained by starting from a Schrödinger field theory with dynamical exponent z = 2 for a complex compensating scalar and next coupling this field theory to a z = 2 Schrödinger geometry with arbitrary torsion. The latter theory can be obtained from either a gauging of the Schrödinger algebra, for arbitrary torsion, or from a null-reduction of conformal gravity.
KiDS-i-800: comparing weak gravitational lensing measurements from same-sky surveys
NASA Astrophysics Data System (ADS)
Amon, A.; Heymans, C.; Klaes, D.; Erben, T.; Blake, C.; Hildebrandt, H.; Hoekstra, H.; Kuijken, K.; Miller, L.; Morrison, C. B.; Choi, A.; de Jong, J. T. A.; Glazebrook, K.; Irisarri, N.; Joachimi, B.; Joudaki, S.; Kannawadi, A.; Lidman, C.; Napolitano, N.; Parkinson, D.; Schneider, P.; van Uitert, E.; Viola, M.; Wolf, C.
2018-07-01
We present a weak gravitational lensing analysis of 815 deg2 of i-band imaging from the Kilo-Degree Survey (KiDS-i-800). In contrast to the deep r-band observations, which take priority during excellent seeing conditions and form the primary KiDS data set (KiDS-r-450), the complementary yet shallower KiDS-i-800 spans a wide range of observing conditions. The overlapping KiDS-i-800 and KiDS-r-450 imaging therefore provides a unique opportunity to assess the robustness of weak lensing measurements. In our analysis we introduce two new `null' tests. The `nulled' two-point shear correlation function uses a matched catalogue to show that the calibrated KiDS-i-800 and KiDS-r-450 shear measurements agree at the level of 1 ± 4 per cent. We use five galaxy lens samples to determine a `nulled' galaxy-galaxy lensing signal from the full KiDS-i-800 and KiDS-r-450 surveys and find that the measurements agree to 7 ± 5 per cent when the KiDS-i-800 source redshift distribution is calibrated using either spectroscopic redshifts, or the 30-band photometric redshifts from the COSMOS survey.
Tuckett, Andrea Z; Thornton, Raymond H; O'Reilly, Richard J; van den Brink, Marcel R M; Zakrzewski, Johannes L
2017-05-16
Even though hematopoietic stem cell transplantation can be curative in patients with severe combined immunodeficiency, there is a need for additional strategies boosting T cell immunity in individuals suffering from genetic disorders of lymphoid development. Here we show that image-guided intrathymic injection of hematopoietic stem and progenitor cells in NOD-scid IL2rγ null mice is feasible and facilitates the generation of functional T cells conferring protective immunity. Hematopoietic stem and progenitor cells were isolated from the bone marrow of healthy C57BL/6 mice (wild-type, Luciferase+, CD45.1+) and injected intravenously or intrathymically into both male and female, young or aged NOD-scid IL2rγ null recipients. The in vivo fate of injected cells was analyzed by bioluminescence imaging and flow cytometry of thymus- and spleen-derived T cell populations. In addition to T cell reconstitution, we evaluated mice for evidence of immune dysregulation based on diabetes development and graft-versus-host disease. T cell immunity following intrathymic injection of hematopoietic stem and progenitor cells in NOD-scid IL2rγ null mice was assessed in a B cell lymphoma model. Despite the small size of the thymic remnant in NOD-scid IL2rγ null mice, we were able to accomplish precise intrathymic delivery of hematopoietic stem and progenitor cells by ultrasound-guided injection. Thymic reconstitution following intrathymic injection of healthy allogeneic hematopoietic cells was most effective in young male recipients, indicating that even in the setting of severe immunodeficiency, sex and age are important variables for thymic function. Allogeneic T cells generated in intrathymically injected NOD-scid IL2rγ null mice displayed anti-lymphoma activity in vivo, but we found no evidence for severe auto/alloreactivity in T cell-producing NOD-scid IL2rγ null mice, suggesting that immune dysregulation is not a major concern.
Our findings suggest that intrathymic injection of donor hematopoietic stem and progenitor cells is a safe and effective strategy to establish protective T cell immunity in a mouse model of severe combined immunodeficiency.
Optical nulling apparatus and method for testing an optical surface
NASA Technical Reports Server (NTRS)
Olczak, Eugene (Inventor); Hannon, John J. (Inventor); Dey, Thomas W. (Inventor); Jensen, Arthur E. (Inventor)
2008-01-01
An optical nulling apparatus for testing an optical surface includes an aspheric mirror having a reflecting surface for imaging light near or onto the optical surface under test, where the aspheric mirror is configured to reduce spherical aberration of the optical surface under test. The apparatus includes a light source for emitting light toward the aspheric mirror, the light source longitudinally aligned with the aspheric mirror and the optical surface under test. The aspheric mirror is disposed between the light source and the optical surface under test, and the emitted light is reflected off the reflecting surface of the aspheric mirror and imaged near or onto the optical surface under test. An optical measuring device is disposed between the light source and the aspheric mirror, where light reflected from the optical surface under test enters the optical measuring device. An imaging mirror is disposed longitudinally between the light source and the aspheric mirror, and the imaging mirror is configured to again reflect light, which is first reflected from the reflecting surface of the aspheric mirror, onto the optical surface under test.
Inference of boundaries in causal sets
NASA Astrophysics Data System (ADS)
Cunningham, William J.
2018-05-01
We investigate the extrinsic geometry of causal sets in (1+1)-dimensional Minkowski spacetime. The properties of boundaries in an embedding space can be used not only to measure observables, but also to supplement the discrete action in the partition function via discretized Gibbons–Hawking–York boundary terms. We define several ways to represent a causal set using overlapping subsets, which then allows us to distinguish between null and non-null bounding hypersurfaces in an embedding space. We discuss algorithms to differentiate between different types of regions, consider when these distinctions are possible, and then apply the algorithms to several spacetime regions. Numerical results indicate the volumes of timelike boundaries can be measured to within 0.5% accuracy for flat boundaries and within 10% accuracy for highly curved boundaries for medium-sized causal sets with N = 2^14 spacetime elements.
The quantum null energy condition in curved space
NASA Astrophysics Data System (ADS)
Fu, Zicao; Koeller, Jason; Marolf, Donald
2017-11-01
The quantum null energy condition (QNEC) is a conjectured bound on components T_kk = T_ab k^a k^b of the stress tensor along a null vector k^a at a point p, in terms of a second k-derivative of the von Neumann entropy S on one side of a null congruence N through p generated by k^a. The conjecture has been established for super-renormalizable field theories at points p that lie on a bifurcate Killing horizon with null tangent k^a, and for large-N holographic theories on flat space. While the Koeller-Leichenauer holographic argument clearly yields an inequality for general (p, k^a), more conditions are generally required for this inequality to be a useful QNEC. For d ≤ 3 and arbitrary background metric, we show that the QNEC is naturally finite and independent of renormalization scheme when the expansion θ of N at the point p vanishes. This is consistent with the original QNEC conjecture, which required θ and the shear σ_ab to satisfy θ|_p = θ̇|_p = 0 and σ_ab|_p = 0. But for d = 4, 5 even more conditions are required: in particular, we also require the vanishing of additional derivatives and a dominant energy condition. In the above cases the holographic argument does indeed yield a finite QNEC, though for d ≥ 6 we argue these properties fail even for weakly isolated horizons (where all derivatives of θ and σ_ab vanish) that also satisfy a dominant energy condition. On the positive side, a corollary to our work is that, when coupled to Einstein-Hilbert gravity, d ≤ 3 holographic theories at large N satisfy the generalized second law (GSL) of thermodynamics at leading order in Newton's constant G. This is the first GSL proof which does not require the quantum fields to be perturbations to a Killing horizon.
Bi-dimensional null model analysis of presence-absence binary matrices.
Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J
2018-01-01
Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another often makes it advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially with respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size.
We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
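The Tuning Peg is described above as a modified swap procedure. For orientation, here is a minimal Python sketch of the unmodified swap null model, i.e., the fully constrained limit in which both row and column totals are preserved exactly; the function name and the number of attempted swaps are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def swap_randomize(matrix, n_swaps=10000, rng=None):
    """Classic 'swap' null model: shuffle a binary presence/absence
    matrix while preserving every row and column total exactly.

    Repeatedly pick two rows and two columns at random; if the 2x2
    submatrix is a checkerboard ([[1,0],[0,1]] or [[0,1],[1,0]]),
    flip it to the opposite checkerboard.
    """
    rng = rng or np.random.default_rng()
    m = matrix.copy()
    n_rows, n_cols = m.shape
    for _ in range(n_swaps):
        r = rng.choice(n_rows, size=2, replace=False)
        c = rng.choice(n_cols, size=2, replace=False)
        sub = m[np.ix_(r, c)]
        # A swap is only legal on a checkerboard: it rearranges the 1s
        # without changing any marginal total.
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_(r, c)] = 1 - sub
    return m
```

In the Tuning Peg, the acceptance rule is additionally relaxed by the two discrepancy parameters; the sketch above corresponds to both parameters at their most restrictive setting.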
Martelli, Saulo; Calvetti, Daniela; Somersalo, Erkki; Viceconti, Marco; Taddei, Fulvia
2013-08-09
Comparing the available electromyography (EMG) data and the related uncertainties with the space of muscle forces potentially driving the same motion can provide insights into understanding human motion in healthy and pathological neuromotor conditions. However, it is not clear how effective the available computational tools are at completely sampling the possible muscle forces. In this study, we compared the effectiveness of Metabolica and the Null-Space (NS) algorithm at generating a comprehensive spectrum of possible muscle forces for a representative motion frame. The hip force peak during a selected walking trial was identified using a lower-limb musculoskeletal model. The joint moments, the muscle lever arms, and the muscle force constraints extracted from the model constituted the indeterminate equilibrium equation at the joints. Two spectra, each containing 200,000 muscle force samples, were calculated using Metabolica and the NS algorithm. The full hip force range was calculated using optimization and compared with the hip force ranges derived from the Metabolica and NS spectra. The Metabolica spectrum spanned a much larger force range than the NS spectrum, reaching an 811 N difference for the gluteus maximus intermediate bundle. The Metabolica hip force range exhibited a 0.3-0.4 BW error on the upper and lower boundaries of the full hip force range (3.4-11.3 BW), whereas the full range was imposed in the NS spectrum. The results suggest that Metabolica is well suited to exhaustively sampling the spectrum of possible muscle recruitment strategies. Future studies will investigate the muscle force range in healthy and pathological neuromotor conditions. Copyright © 2013 Elsevier Ltd. All rights reserved.
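Neither Metabolica nor the Null-Space algorithm is specified in this abstract. As a generic illustration of the underlying sampling problem, the sketch below draws bounded muscle-force vectors satisfying an indeterminate equilibrium equation R f = τ by perturbing a particular solution along the null space of the moment-arm matrix R; the function name, the rejection-sampling strategy, and the perturbation scale are all assumptions, not either published algorithm:

```python
import numpy as np

def sample_muscle_forces(R, tau, f_max, n_samples=1000, rng=None):
    """Rejection-sample muscle forces f satisfying the indeterminate
    joint equilibrium R @ f = tau subject to 0 <= f <= f_max.

    A particular (minimum-norm) solution is perturbed along the null
    space of the moment-arm matrix R; infeasible samples are rejected.
    Note: this loops forever if the feasible set is empty.
    """
    rng = rng or np.random.default_rng()
    f0 = np.linalg.lstsq(R, tau, rcond=None)[0]   # particular solution
    _, s, vt = np.linalg.svd(R)
    tol = s.max() * max(R.shape) * np.finfo(float).eps
    rank = int(np.sum(s > tol))
    N = vt[rank:].T                               # null-space basis of R
    samples = []
    while len(samples) < n_samples:
        z = rng.normal(scale=f_max.mean(), size=N.shape[1])
        f = f0 + N @ z                            # still satisfies R @ f = tau
        if np.all(f >= 0.0) and np.all(f <= f_max):
            samples.append(f)
    return np.array(samples)
```

Every accepted sample produces the same joint moments, which is exactly the redundancy that makes the muscle recruitment problem indeterminate.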
2013-01-01
Background Metabolic alteration is one of the hallmarks of carcinogenesis. We aimed to identify metabolic biomarkers for the early detection of pancreatic cancer (PC) using the transgenic PTEN-null mouse model. Pancreas-specific deletion of PTEN in mice caused progressive premalignant lesions such as highly proliferative ductal metaplasia. We imaged the mitochondrial redox state of the pancreases of the transgenic mice at approximately eight months old using the redox scanner, i.e., the nicotinamide adenine dinucleotide/oxidized flavoproteins (NADH/Fp) fluorescence imager, at low temperature. Two different approaches, the global averaging of the redox indices without considering tissue heterogeneity along tissue depth and the univariate analysis of multi-section data using tissue depth as a covariate, were adopted for the statistical analysis of the multi-section imaging data. The standard deviations of the redox indices and the histogram analysis with Gaussian fit were used to determine the tissue heterogeneity. Results All methods show consistently that the PTEN-deficient pancreases (Pdx1-Cre;PTEN^lox/lox) were significantly more heterogeneous in their mitochondrial redox state compared to the controls (PTEN^lox/lox). Statistical analysis taking into account the variations of the redox state with tissue depth further shows that PTEN deletion significantly shifted the pancreatic tissue to an overall more oxidized state. Oxidization of the PTEN-null group was not seen when the imaging data were analyzed by global averaging without considering the variation of the redox indices along tissue depth, indicating the importance of taking tissue heterogeneity into account in the statistical analysis of multi-section imaging data.
Conclusions This study reveals a possible link between the mitochondrial redox state alteration of the pancreas and its malignant transformation and may be further developed for establishing potential metabolic biomarkers for the early diagnosis of pancreatic cancer. PMID:24252270
Pal-Ghosh, Sonali; Tadvalkar, Gauri; Stepp, Mary Ann
2017-10-01
To determine the impact of the loss of syndecan 1 (SDC1) on intraepithelial corneal nerves (ICNs) during homeostasis, aging, and in response to 1.5-mm trephine and debridement injury. Whole-mount corneas are used to quantify ICN density and thickness over time after birth and in response to injury in SDC1-null and wild-type (WT) mice. High-resolution three-dimensional imaging is used to visualize intraepithelial nerve terminals (INTs), axon fragments, and lysosomes in corneal epithelial cells using antibodies against growth associated protein 43 (GAP43), βIII tubulin, and LAMP1. Quantitative PCR was performed to quantify expression of SDC1, SDC2, SDC3, and SDC4 in corneal epithelial mRNA. Phagocytosis was assessed by quantifying internalization of fluorescently labeled 1-μm latex beads. Intraepithelial corneal nerves innervate the corneas of SDC1-null mice more slowly. At 8 weeks, ICN density is less but thickness is greater. Apically projecting intraepithelial nerve terminals and lysosome-associated membrane glycoprotein 1 (LAMP1) are also reduced in unwounded SDC1-null corneas. Quantitative PCR and immunofluorescence studies show that SDC3 expression and localization are increased in SDC1-null ICNs. Wild-type and SDC1-null corneas lose ICN density and thickness as they age. Recovery of axon density and thickness after trephine but not debridement wounds is slower in SDC1-null corneas compared with WT. Experiments assessing phagocytosis show reduced bead internalization by SDC1-null epithelial cells. Syndecan-1 deficiency alters ICN morphology and homeostasis during aging, reduces epithelial phagocytosis, and impairs reinnervation after trephine but not debridement injury. These data provide insight into the mechanisms used by sensory nerves to reinnervate after injury.
High variability impairs motor learning regardless of whether it affects task performance.
Cardis, Marco; Casadio, Maura; Ranganathan, Rajiv
2018-01-01
Motor variability plays an important role in motor learning, although the exact mechanisms of how variability affects learning are not well understood. Recent evidence suggests that motor variability may have different effects on learning in redundant tasks, depending on whether it is present in the task space (where it affects task performance) or in the null space (where it has no effect on task performance). We examined the effect of directly introducing null and task space variability using a manipulandum during the learning of a motor task. Participants learned a bimanual shuffleboard task for 2 days, where their goal was to slide a virtual puck as close as possible toward a target. Critically, the distance traveled by the puck was determined by the sum of the left- and right-hand velocities, which meant that there was redundancy in the task. Participants were divided into five groups, based on both the dimension in which the variability was introduced and the amount of variability that was introduced during training. Results showed that although all groups were able to reduce error with practice, learning was affected more by the amount of variability introduced rather than the dimension in which variability was introduced. Specifically, groups with higher movement variability during practice showed larger errors at the end of practice compared with groups that had low variability during learning. These results suggest that although introducing variability can increase exploration of new solutions, this may adversely affect the ability to retain the learned solution. NEW & NOTEWORTHY We examined the role of introducing variability during motor learning in a redundant task. The presence of redundancy allows variability to be introduced in different dimensions: the task space (where it affects task performance) or the null space (where it does not affect task performance). 
We found that introducing variability affected learning adversely, but the amount of variability was more critical than the dimension in which variability was introduced.
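In the bimanual shuffleboard task above, the outcome depends only on the sum of the two hand velocities, which makes the task/null decomposition explicit: in (v_L, v_R) space the task direction is (1, 1)/√2 and the null direction is (1, −1)/√2. A minimal sketch of that decomposition (variable names are illustrative, not taken from the study):

```python
import numpy as np

# Bimanual shuffleboard: the puck's travel depends only on the sum of
# the left- and right-hand velocities, so in (v_L, v_R) space the task
# direction is (1, 1)/sqrt(2) and the null direction is (1, -1)/sqrt(2);
# motion along the null direction leaves the outcome unchanged.
TASK_DIR = np.array([1.0, 1.0]) / np.sqrt(2.0)
NULL_DIR = np.array([1.0, -1.0]) / np.sqrt(2.0)

def decompose_variability(velocities):
    """Split trial-to-trial variance of hand velocities (array of
    shape (n_trials, 2)) into task-space and null-space components."""
    centered = velocities - velocities.mean(axis=0)
    task_var = float(np.var(centered @ TASK_DIR))
    null_var = float(np.var(centered @ NULL_DIR))
    return task_var, null_var
```

A set of trials that vary only along NULL_DIR would show zero task-space variance and identical puck distances on every trial.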
Optic-null space medium for cover-up cloaking without any negative refraction index materials
Sun, Fei; He, Sailing
2016-01-01
With the help of optic-null medium, we propose a new way to achieve invisibility by covering up the scattering without using any negative refraction index materials. Compared with previous methods to achieve invisibility, the function of our cloak is to cover up the scattering of the objects to be concealed by a background object of strong scattering. The concealed object can receive information from the outside world without being detected. Numerical simulations verify the performance of our cloak. The proposed method will be a great addition to existing invisibility technology. PMID:27383833
D=10 Chiral Tensionless Super p-BRANES
NASA Astrophysics Data System (ADS)
Bozhilov, P.
We consider a model for tensionless (null) super-p-branes with N chiral supersymmetries in ten-dimensional flat space-time. After establishing the symmetries of the action, we give the general solution of the classical equations of motion in a particular gauge. In the case of a null superstring (p=1) we find the general solution in an arbitrary gauge. Then, using a harmonic superspace approach, the initial algebra of first- and second-class constraints is converted into an algebra of Lorentz-covariant, BFV-irreducible, first-class constraints only. The corresponding BRST charge is as for a first rank dynamical system.
NASA Astrophysics Data System (ADS)
Barillot, M.; Barthelemy, E.; Bastard, L.; Broquin, J.-E.; Hawkins, G.; Kirschner, V.; Ménard, S.; Parent, G.; Poinsot, C.; Pradel, A.; Vigreux, C.; Zhang, S.; Zhang, X.
2017-11-01
The search for Earth-like exoplanets, orbiting in the habitable zone of stars other than our Sun and showing biological activity, is one of the most exciting and challenging quests of the present time. Nulling interferometry from space, in the thermal infrared, appears as a promising candidate technique for the task of directly observing extra-solar planets. It has been studied for about 10 years by ESA and NASA in the framework of the Darwin and TPF-I missions respectively [1]. Nevertheless, nulling interferometry in the thermal infrared remains a technological challenge at several levels. Among them, the development of the "modal filter" function is mandatory for filtering the wavefronts, consistent with the objective of rejecting the central star flux with an efficiency of about 10^5. Modal filtering [2] exploits the capability of single-mode waveguides to transmit a single amplitude function, eliminating virtually any perturbation of the interfering wavefronts and thus making very high rejection ratios possible. The modal filter may be based on single-mode Integrated Optics (IO) and/or Fiber Optics. In this paper, we focus on IO, and more specifically on the progress of the on-going "Integrated Optics" activity of the European Space Agency.
Local modular Hamiltonians from the quantum null energy condition
NASA Astrophysics Data System (ADS)
Koeller, Jason; Leichenauer, Stefan; Levine, Adam; Shahbazi-Moghaddam, Arvin
2018-03-01
The vacuum modular Hamiltonian K of the Rindler wedge in any relativistic quantum field theory is given by the boost generator. Here we investigate the modular Hamiltonian for more general half-spaces which are bounded by an arbitrary smooth cut of a null plane. We derive a formula for the second derivative of the modular Hamiltonian with respect to the coordinates of the cut, which schematically reads K'' = T_vv. This formula can be integrated twice to obtain a simple expression for the modular Hamiltonian. The result naturally generalizes the standard expression for the Rindler modular Hamiltonian to this larger class of regions. Our primary assumptions are the quantum null energy condition—an inequality between the second derivative of the von Neumann entropy of a region and the stress tensor—and its saturation in the vacuum for these regions. We discuss the validity of these assumptions in free theories and holographic theories to all orders in 1/N.
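Integrating the schematic relation K'' = T_vv twice along the null coordinate v, with the constants of integration fixed so that a flat cut reproduces the boost generator, gives an expression of the following form. This is a sketch of the structure described in the abstract; the 2π normalization and the measure conventions are assumptions, not taken from the text:

```latex
% Sketch: integrating the QNEC-saturation relation twice along the
% null coordinate v, for a cut v = V(y) of the null plane.
% The 2\pi normalization and measure are assumed conventions.
\partial_v^2 K = 2\pi\, T_{vv}
\quad\Longrightarrow\quad
K = 2\pi \int \! d^{d-2}y \int_{V(y)}^{\infty} \! dv \,\bigl(v - V(y)\bigr)\, T_{vv}(v, y) + \text{const}
```

For V(y) = 0 this reduces to the familiar boost-generator expression for the Rindler wedge.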
Bopp-Podolsky black holes and the no-hair theorem
NASA Astrophysics Data System (ADS)
Cuzinatto, R. R.; de Melo, C. A. M.; Medeiros, L. G.; Pimentel, B. M.; Pompeia, P. J.
2018-01-01
Bopp-Podolsky electrodynamics is generalized to curved space-times. The equations of motion are written for the case of static spherically symmetric black holes and their exterior solutions are analyzed using Bekenstein's method. It is shown that the solutions split up into two parts, namely a non-homogeneous (asymptotically massless) regime and a homogeneous (asymptotically massive) sector which is null outside the event horizon. In addition, in the simplest approach to Bopp-Podolsky black holes, the non-homogeneous solutions are found to be Maxwell's solutions leading to a Reissner-Nordström black hole. It is also demonstrated that the only exterior solution consistent with the weak and null energy conditions is the Maxwell one. Thus, in the light of the energy conditions, it is concluded that only Maxwell modes propagate outside the horizon and, therefore, the no-hair theorem is satisfied in the case of Bopp-Podolsky fields in spherically symmetric space-times.
Polarization-independent tunable spectral slicing filter in Ti:LiNbO3.
Rabelo, Renato C; Eknoyan, Ohannes; Taylor, Henry F
2011-02-01
A two-port polarization-independent tunable spectral slicing filter at the 1530 nm wavelength regime is presented. The design utilizes an asymmetric interferometer with a sparse index grating along its arms. The sparse grating makes it possible to select equally spaced frequency channels from an incident WDM signal and to place nulls between them to coincide with the signal comb frequency. The number of selected channels and nulls between them depends on the number of coupling regions used in the sparse grating. The free spectral range depends on the spacing between the coupling regions. The Z-transform method is used to synthesize the filter and determine the spectral response. The operation of a device with six coupling regions is demonstrated, and good agreement with theoretical predictions is obtained. A 3 dB bandwidth of ∼1 nm and thermal tuning over a range of ∼13 nm are measured.
A trait-based test for habitat filtering: Convex hull volume
Cornwell, W.K.; Schwilk, D.W.; Ackerly, D.D.
2006-01-01
Community assembly theory suggests that two processes affect the distribution of trait values within communities: competition and habitat filtering. Within a local community, competition leads to ecological differentiation of coexisting species, while habitat filtering reduces the spread of trait values, reflecting shared ecological tolerances. Many statistical tests for the effects of competition exist in the literature, but measures of habitat filtering are less well-developed. Here, we present convex hull volume, a construct from computational geometry, which provides an n-dimensional measure of the volume of trait space occupied by species in a community. Combined with ecological null models, this measure offers a useful test for habitat filtering. We use convex hull volume and a null model to analyze California woody-plant trait and community data. Our results show that observed plant communities occupy less trait space than expected from random assembly, a result consistent with habitat filtering. © 2006 by the Ecological Society of America.
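The hull-plus-null-model test can be sketched concretely: compute the convex hull volume of the observed community's trait values and compare it against hulls of equal-richness communities drawn at random from the regional species pool. The random-draw null model below is a simple stand-in; the authors' exact null model is not specified in this abstract:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume_test(community_traits, pool_traits, n_null=999, rng=None):
    """Habitat-filtering test: compare the convex hull volume of an
    observed community's trait values against hulls of equal-richness
    communities drawn at random from the regional species pool.

    Returns the observed volume and the fraction of null hulls whose
    volume is <= the observed one (small fractions suggest filtering).
    """
    rng = rng or np.random.default_rng()
    n_species = len(community_traits)
    observed = ConvexHull(community_traits).volume  # area in 2-D trait space
    null_volumes = np.empty(n_null)
    for i in range(n_null):
        idx = rng.choice(len(pool_traits), size=n_species, replace=False)
        null_volumes[i] = ConvexHull(pool_traits[idx]).volume
    return observed, float(np.mean(null_volumes <= observed))
```

A community whose traits cluster in a small region of the pool's trait space yields an observed hull far below the null distribution, the signature of habitat filtering.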
Facial recognition from volume-rendered magnetic resonance imaging data.
Prior, Fred W; Brunsden, Barry; Hildebolt, Charles; Nolan, Tracy S; Pringle, Michael; Vaishnavi, S Neil; Larson-Prior, Linda J
2009-01-01
Three-dimensional (3-D) reconstructions of computed tomography (CT) and magnetic resonance (MR) brain imaging studies are a routine component of both clinical practice and clinical and translational research. A side effect of such reconstructions is the creation of a potentially recognizable face. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule requires that individually identifiable health information may not be used for research unless identifiers that may be associated with the health information including "Full face photographic images and other comparable images ..." are removed (de-identification). Thus, a key question is: Are reconstructed facial images comparable to full-face photographs for the purpose of identification? To address this question, MR images were selected from existing research repositories and subjects were asked to pair an MR reconstruction with one of 40 photographs. The chance probability that an observer could match a photograph with its 3-D MR image was 1 in 40 (0.025), and we considered 4 successes out of 40 (4/40, 0.1) to indicate that a subject could identify persons' faces from their 3-D MR images. Forty percent of the subjects were able to successfully match photographs with MR images with success rates higher than the null hypothesis success rate. The Blyth-Still-Casella 95% confidence interval for the 40% success rate was 29%-52%, and the 40% success rate was significantly higher ( P < 0.001) than our null hypothesis success rate of 1 in 10 (0.10).
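The identification criterion above reduces to a simple binomial calculation: under pure guessing, each of the 40 matching trials succeeds with probability 1/40, and one can ask how often 4 or more successes would occur by chance. A minimal sketch with `scipy.stats.binom`, using the trial count and threshold quoted in the abstract:

```python
from scipy.stats import binom

n_trials, p_chance = 40, 1 / 40   # 40 photographs, 1-in-40 chance per trial
threshold = 4                     # >= 4 correct matches counted as identification

# Probability of reaching the identification threshold by pure guessing
p_by_chance = binom.sf(threshold - 1, n_trials, p_chance)
```

With these numbers the chance of crossing the threshold by guessing alone is below 2%, which is why repeated success rates above it indicate genuine recognizability.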
Evaluation of Multimodal Imaging Biomarkers of Prostate Cancer
2015-09-01
and PET images. Figure 2 highlights the dynamic uptake of TSPO as compared to muscle. Across 60 minutes the %ID/cc continues to increase which is...p53 double null mutant mouse model. Towards that end, we have successfully acquired anatomic MRI and PET data in orthotopic tumors within the Pten...castration resistant prostate cancer, MRI, PET, FDHT, image optimization
Lu, Yongbo; Kamel-El Sayed, Suzan A; Wang, Kun; Tiede-Lewis, LeAnn M; Grillo, Michael A; Veno, Patricia A; Dusevich, Vladimir; Phillips, Charlotte L; Bonewald, Lynda F; Dallas, Sarah L
2018-06-01
Type I collagen is the most abundant extracellular matrix protein in bone and other connective tissues and plays key roles in normal and pathological bone formation as well as in connective tissue disorders and fibrosis. Although much is known about the collagen biosynthetic pathway and its regulatory steps, the mechanisms by which it is assembled extracellularly are less clear. We have generated GFPtpz- and mCherry-tagged collagen fusion constructs for live imaging of type I collagen assembly by replacing the α2(I)-procollagen N-terminal propeptide with GFPtpz or mCherry. These novel imaging probes were stably transfected into MLO-A5 osteoblast-like cells and fibronectin-null mouse embryonic fibroblasts (FN-null-MEFs) and used for imaging type I collagen assembly dynamics and its dependence on fibronectin. Both fusion proteins co-precipitated with α1(I)-collagen and remained intracellular without ascorbate but were assembled into α1(I) collagen-containing extracellular fibrils in the presence of ascorbate. Immunogold-EM confirmed their ultrastructural localization in banded collagen fibrils. Live cell imaging in stably transfected MLO-A5 cells revealed the highly dynamic nature of collagen assembly and showed that during assembly the fibril networks are continually stretched and contracted due to the underlying cell motion. We also observed that cell-generated forces can physically reshape the collagen fibrils. Using co-cultures of mCherry- and GFPtpz-collagen expressing cells, we show that multiple cells contribute collagen to form collagen fiber bundles. Immuno-EM further showed that individual collagen fibrils can receive contributions of collagen from more than one cell. Live cell imaging in FN-null-MEFs expressing GFPtpz-collagen showed that collagen assembly was both dependent upon and dynamically integrated with fibronectin assembly.
These GFP-collagen fusion constructs provide a powerful tool for imaging collagen in living cells and have revealed novel and fundamental insights into the dynamic mechanisms for the extracellular assembly of collagen. © 2018 American Society for Bone and Mineral Research.
Slotnick, Scott D
2017-07-01
Analysis of functional magnetic resonance imaging (fMRI) data typically involves over one hundred thousand independent statistical tests; therefore, it is necessary to correct for multiple comparisons to control familywise error. In a recent paper, Eklund, Nichols, and Knutsson used resting-state fMRI data to evaluate commonly employed methods to correct for multiple comparisons and reported unacceptable rates of familywise error. Eklund et al.'s analysis was based on the assumption that resting-state fMRI data reflect null data; however, their 'null data' actually reflected default network activity that inflated familywise error. As such, Eklund et al.'s results provide no basis to question the validity of the thousands of published fMRI studies that have corrected for multiple comparisons or the commonly employed methods to correct for multiple comparisons.
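The correction problem at issue can be illustrated on simulated null data: with on the order of a hundred thousand truly null tests, uncorrected thresholding at p < 0.05 yields thousands of false positives, while familywise (Bonferroni) and false-discovery-rate (Benjamini-Hochberg) corrections rein them in. This is a generic voxel-wise sketch, not the cluster-based methods Eklund et al. evaluated:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tests = 100_000                     # order of the voxel count in an fMRI analysis
p_values = rng.uniform(size=n_tests)  # truly null tests give uniform p-values

alpha = 0.05
uncorrected = np.sum(p_values < alpha)   # thousands of false positives

# Bonferroni: familywise error control across all tests
bonferroni_hits = np.sum(p_values < alpha / n_tests)

# Benjamini-Hochberg step-up procedure: false discovery rate control
ranked = np.sort(p_values)
crit = alpha * np.arange(1, n_tests + 1) / n_tests
below = np.nonzero(ranked <= crit)[0]
bh_hits = 0 if below.size == 0 else int(below[-1]) + 1
```

On genuinely null data both corrected counts are almost always zero, which is the behavior a valid correction should show; the debate in the abstract is precisely whether resting-state data constitute such null data.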
The OmpL porin does not modulate redox potential in the periplasmic space of Escherichia coli.
Sardesai, Abhijit A; Genevaux, Pierre; Schwager, Françoise; Ang, Debbie; Georgopoulos, Costa
2003-04-01
The Escherichia coli DsbA protein is the major oxidative catalyst in the periplasm. Dartigalongue et al. (EMBO J., 19, 5980-5988, 2000) reported that null mutations in the ompL gene of E.coli fully suppress all phenotypes associated with dsbA mutants, i.e. sensitivity to the reducing agent dithiothreitol (DTT) and the antibiotic benzylpenicillin, lack of motility, reduced alkaline phosphatase activity and mucoidy. They showed that OmpL is a porin and hypothesized that ompL null mutations exert their suppressive effect by preventing efflux of a putative oxidizing-reducing compound into the medium. We have repeated these experiments using two different ompL null alleles in at least three different E.coli K-12 genetic backgrounds and have failed to reproduce any of the ompL suppressive effects noted above. Also, we show that, contrary to earlier results, ompL null mutations alone do not result in partial DTT sensitivity or partial motility, nor do they appreciably affect bacterial growth rates or block propagation of the male-specific bacteriophage M13. Thus, our findings clearly demonstrate that ompL plays no perceptible role in modulating redox potential in the periplasm of E.coli.
Short-range inverse-square law experiment in space
NASA Technical Reports Server (NTRS)
Strayer, D.; Paik, H. J.; Moody, M. V.
2002-01-01
The objective of ISLES (Inverse-Square Law Experiment in Space) is to perform a null test of Newton's law on the ISS with a resolution of one part in 10^5 at ranges from 100 μm to 1 mm. ISLES will be sensitive enough to detect axions with the strongest allowed coupling and to test the string-theory prediction with R ≈ 5 μm.
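Deviations from the inverse-square law are conventionally parameterized by a Yukawa term, V(r) = -(G m1 m2 / r)(1 + α e^(-r/λ)), for which the fractional force deviation falls off as α e^(-r/λ)(1 + r/λ). A small sketch of how quickly such a signal decays across the quoted 100 μm to 1 mm range (the Yukawa parameterization is the standard convention, not spelled out in the abstract):

```python
import numpy as np

def yukawa_force_deviation(r, alpha, lam):
    """Fractional force deviation from Newtonian gravity for
    V(r) = -(G m1 m2 / r) * (1 + alpha * exp(-r / lam))."""
    return alpha * np.exp(-r / lam) * (1.0 + r / lam)

lam = 5e-6                       # ~5 micrometre range of the string-theory prediction
for r in (20e-6, 100e-6, 1e-3):  # separations spanning the quoted test range (m)
    dev = yukawa_force_deviation(r, alpha=1.0, lam=lam)
    print(f"r = {r * 1e6:8.1f} um  ->  |dF/F| = {dev:.3e}")
```

The exponential suppression is why short-range experiments must bring test masses to within tens of micrometres of each other.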
Naked singularities in higher dimensional Vaidya space-times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, S. G.; Dadhich, Naresh
We investigate the end state of the gravitational collapse of a null fluid in higher-dimensional space-times. Both naked singularities and black holes are shown to be developing as the final outcome of the collapse. The naked singularity spectrum in a collapsing Vaidya region (4D) gets covered with the increase in dimensions and hence higher dimensions favor a black hole in comparison to a naked singularity. The cosmic censorship conjecture will be fully respected for a space of infinite dimension.
Using Riemannian geometry to obtain new results on Dikin and Karmarkar methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira, P.; Joao, X.; Piaui, T.
1994-12-31
We are motivated by a 1990 Karmarkar paper on Riemannian geometry and interior point methods. In this talk we show three results. (1) The Karmarkar direction can be derived from the Dikin one. This is obtained by constructing a certain Z(x) representation of the null space of the unitary simplex (e, x) = 1; the projective direction is then the image under Z(x) of the affine-scaling one, when it is restricted to that simplex. (2) Second-order information on Dikin and Karmarkar methods. We establish computable Hessians for each of the metrics corresponding to both directions, thus permitting the generation of "second order" methods. (3) Dikin and Karmarkar geodesic descent methods. For those directions, we make the theoretical Luenberger geodesic descent method computable, since we are able to give very accurate explicit expressions for the corresponding geodesics. Convergence results are given.
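Result (1) concerns directions that live in the null space of the constraints. As a minimal illustration (not the authors' Z(x) construction), the affine-scaling (Dikin) direction at an interior point x > 0 can be computed by projecting the scaled cost gradient onto the null space of the scaled constraint matrix; on the unitary simplex (e, x) = 1 this looks like:

```python
import numpy as np

def affine_scaling_direction(A, c, x):
    """Dikin's affine-scaling direction at an interior point x > 0 of
    {x : A x = b, x >= 0}: scale by X = diag(x), project onto null(A X),
    and map back, giving a feasible descent direction for c.x."""
    X = np.diag(x)
    AX = A @ X
    # Orthogonal projector onto the null space of the scaled constraints
    P = np.eye(len(x)) - AX.T @ np.linalg.solve(AX @ AX.T, AX)
    return -X @ P @ (X @ c)

A = np.ones((1, 3))                  # the unitary simplex constraint (e, x) = 1
c = np.array([1.0, 2.0, 3.0])
x = np.array([1 / 3, 1 / 3, 1 / 3])  # interior feasible point

d = affine_scaling_direction(A, c, x)
```

By construction A d = 0, so a short step along d preserves feasibility while decreasing the objective.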
Determination of the Charon/Pluto Mass Ratio from Center-of-Light Astrometry
NASA Technical Reports Server (NTRS)
Foust, Jeffrey A.; Elliot, J. L.; Olkin, Catherine B.; McDonald, Stephen W.; Dunham, Edward W.; Stone, Remington P. S.; McDonald, John S.; Stone, Ronald C.
1997-01-01
The Charon/Pluto mass ratio is a fundamental but poorly known parameter of the two-body system. Previous values for the mass ratio have ranged from 0.0837 plus or minus 0.0147 (Null et al., 1993, Astron. J. 105, 2319-2335) to 0.1566 plus or minus 0.0035 (Young et al., 1994, Icarus 108, 186-199). We report here a new determination of the Charon/Pluto mass ratio, using five sets of groundbased images taken at four sites in support of Pluto occultation predictions. Unlike the Null et al. and Young et al. determinations, where the centers of light for Pluto and Charon could be determined separately, this technique examines the motion of the center of light of the blended Pluto-Charon image. We compute the offsets of the observed center-of-light position of Pluto-Charon from the ephemeris position of the system and fit these offsets to a model of the Pluto-Charon system. The least-squares fits to the five data sets agree within their errors, and the weighted mean mass ratio is 0.117 plus or minus 0.006. The effects of errors in the Charon light fraction, semimajor axis, and ephemeris have been examined and are equal to only a small fraction of the formal error from the fit. This result is intermediate between those of Null et al. and Young et al. and matches a new value of 0.124 plus or minus 0.008 by Null and Owen (1996, Astron. J. 111, 1368-1381). The mass ratio and resulting individual masses and densities of Pluto and Charon are consistent with a collisional origin for the Pluto-Charon system.
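The quoted weighted mean is the standard inverse-variance combination of the per-data-set fits. The combination step itself is simple; the five values below are hypothetical stand-ins, not the paper's actual fits:

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its formal 1-sigma error."""
    w = 1.0 / sigmas ** 2
    return np.sum(w * values) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# Hypothetical per-data-set Charon/Pluto mass-ratio fits (value, 1-sigma error)
q = np.array([0.112, 0.121, 0.118, 0.109, 0.125])
s = np.array([0.015, 0.012, 0.014, 0.016, 0.013])

ratio, err = weighted_mean(q, s)
```

The combined error is always smaller than the best individual error, which is how five modest determinations yield a formal uncertainty of 0.006.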
Magnetic topological analysis of coronal bright points
NASA Astrophysics Data System (ADS)
Galsgaard, K.; Madjarska, M. S.; Moreno-Insertis, F.; Huang, Z.; Wiegelmann, T.
2017-10-01
Context. We report on the first of a series of studies on coronal bright points which investigate the physical mechanism that generates these phenomena. Aims: The aim of this paper is to understand the magnetic-field structure that hosts the bright points. Methods: We use longitudinal magnetograms taken by the Solar Optical Telescope with the Narrowband Filter Imager. For a single case, magnetograms from the Helioseismic and Magnetic Imager were added to the analysis. The longitudinal magnetic field component is used to derive the potential magnetic fields of the large regions around the bright points. A magneto-static field extrapolation method is tested to verify the accuracy of the potential field modelling. The three-dimensional magnetic fields are investigated for the presence of magnetic null points and their influence on the local magnetic domain. Results: In nine out of ten cases the bright point resides in areas where the coronal magnetic field contains an opposite polarity intrusion defining a magnetic null point above it. We find that X-ray bright points reside, in these nine cases, in a limited part of the projected fan-dome area, either fully inside the dome or expanding over a limited area below which typically a dominant flux concentration resides. The tenth bright point is located in a bipolar loop system without an overlying null point. Conclusions: All bright points in coronal holes and two out of three bright points in quiet Sun regions are seen to reside in regions containing a magnetic null point. As yet unidentified process(es) generate the bright points in specific regions of the fan-dome structure. The movies are available at http://www.aanda.org
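The topology described above, a dominant polarity with an opposite-polarity intrusion, generically produces a coronal null point. A toy potential-field model with two point sources (hypothetical strengths and positions, not derived from the observed magnetograms) shows how such a null can be located numerically as a minimum of |B|:

```python
import numpy as np

# Two magnetic point sources as a toy model of a dominant polarity with an
# opposite-polarity intrusion (hypothetical strengths and positions)
charges = [(+4.0, np.array([0.0, 0.0, 0.0])),
           (-1.0, np.array([1.0, 0.0, 0.0]))]

def B(r):
    """Potential (current-free) field of the point sources at position r."""
    b = np.zeros(3)
    for q, pos in charges:
        d = r - pos
        b += q * d / np.linalg.norm(d) ** 3
    return b

# For these strengths the null lies on the x-axis at x = 2, where the
# magnitudes 4/x^2 and 1/(x-1)^2 cancel; recover it by scanning |B|.
xs = np.linspace(1.2, 3.0, 181)
mags = [np.linalg.norm(B(np.array([x, 0.0, 0.0]))) for x in xs]
x_null = float(xs[int(np.argmin(mags))])
```

Real null-point searches apply the same idea to extrapolated fields on a 3-D grid, typically refining candidate cells with trilinear interpolation rather than a 1-D scan.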
Space-time slicing in Horndeski theories and its implications for non-singular bouncing solutions
NASA Astrophysics Data System (ADS)
Ijjas, Anna
2018-02-01
In this paper, we show how the proper choice of gauge is critical in analyzing the stability of non-singular cosmological bounce solutions based on Horndeski theories. We show that it is possible to construct non-singular cosmological bounce solutions with classically stable behavior for all modes with wavelengths above the Planck scale where: (a) the solution involves a stage of null-energy condition violation during which gravity is described by a modification of Einstein's general relativity; and (b) the solution reduces to Einstein gravity both before and after the null-energy condition violating stage. Similar considerations apply to galilean genesis scenarios.
Reducing the dimensions of acoustic devices using anti-acoustic-null media
NASA Astrophysics Data System (ADS)
Li, Borui; Sun, Fei; He, Sailing
2018-02-01
An anti-acoustic-null medium (anti-ANM), a special homogeneous medium with anisotropic mass density, is designed by transformation acoustics (TA). Anti-ANM can greatly compress acoustic space along the direction of its main axis, where the size compression ratio is extremely large. This special feature can be utilized to reduce the geometric dimensions of classic acoustic devices. For example, the height of a parabolic acoustic reflector can be greatly reduced. We also design a brass-air structure on the basis of the effective medium theory to materialize the anti-ANM in a broadband frequency range. Numerical simulations verify the performance of the proposed anti-ANM.
Liu, Yan; Mo, Lan; Goldfarb, David S.; Evan, Andrew P.; Liang, Fengxia; Khan, Saeed R.; Lieske, John C.
2010-01-01
Mammalian urine contains a range of macromolecule proteins that play critical roles in renal stone formation, among which Tamm-Horsfall protein (THP) is by far the most abundant. While THP is a potent inhibitor of crystal aggregation in vitro and its ablation in vivo predisposes one of the two existing mouse models to spontaneous intrarenal calcium crystallization, key controversies remain regarding the role of THP in nephrolithiasis. By carrying out a long-range follow-up of more than 250 THP-null mice and their wild-type controls, we demonstrate here that renal calcification is a highly consistent phenotype of the THP-null mice that is age and partially gene dosage dependent, but is gender and genetic background independent. Renal calcification in THP-null mice is progressive, and by 15 mo over 85% of all the THP-null mice develop spontaneous intrarenal crystals. The crystals consist primarily of calcium phosphate in the form of hydroxyapatite, are located more frequently in the interstitial space of the renal papillae than intratubularly, particularly in older animals, and lack accompanying inflammatory cell infiltration. The interstitial deposits of hydroxyapatite observed in THP-null mice bear strong resemblances to the renal crystals found in human kidneys bearing idiopathic calcium oxalate stones. Compared with 24-h urine from the wild-type mice, that of THP-null mice is supersaturated with brushite (calcium phosphate), a stone precursor, and has reduced urinary excretion of citrate, a stone inhibitor. While less frequent than renal calcinosis, renal pelvic and ureteral stones and hydronephrosis occur in the aged THP-null mice. These results provide direct in vivo evidence indicating that normal THP plays an important role in defending the urinary system against calcification and suggest that reduced expression and/or decreased function of THP could contribute to nephrolithiasis. PMID:20591941
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K
2018-02-01
In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging data as a surrogate for null functional magnetic resonance imaging (fMRI) datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.
NASA Astrophysics Data System (ADS)
Su, Yanfeng; Cai, Zhijian; Liu, Quan; Lu, Yifan; Guo, Peiliang; Shi, Lingyan; Wu, Jianhong
2018-04-01
In this paper, an autostereoscopic three-dimensional (3D) display system based on synthetic hologram reconstruction is proposed and implemented. The system uses a single phase-only spatial light modulator to load the synthetic hologram of the left and right stereo images, and the parallax angle between two reconstructed stereo images is enlarged by a grating to meet the split angle requirement of normal stereoscopic vision. To realize the crosstalk-free autostereoscopic 3D display with high light utilization efficiency, the groove parameters of the grating are specifically designed by the rigorous coupled-wave theory for suppressing the zero-order diffraction, and then the zero-order nulled grating is fabricated by the holographic lithography and the ion beam etching. Furthermore, the diffraction efficiency of the fabricated grating is measured under the illumination of a laser beam with a wavelength of 532 nm. Finally, the experimental verification system for the proposed autostereoscopic 3D display is presented. The experimental results prove that the proposed system is able to generate stereoscopic 3D images with good performances.
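In the thin-grating scalar approximation, a binary phase grating with a 50% duty cycle nulls the zero diffraction order when the groove imparts a π phase step, i.e. at an etch depth of d = λ/(2(n-1)) in transmission. The sketch below uses this simplified scalar model with a hypothetical fused-silica-like index; the actual design in the paper relied on rigorous coupled-wave theory, which the scalar model only approximates:

```python
import numpy as np

def zero_order_efficiency(duty, phase):
    """Scalar zero-order efficiency of a thin binary phase grating:
    the zero-order amplitude is duty * exp(i*phase) + (1 - duty)."""
    a0 = duty * np.exp(1j * phase) + (1.0 - duty)
    return float(np.abs(a0) ** 2)

wavelength = 532e-9   # laser wavelength used in the measurements
n_grating = 1.46      # hypothetical fused-silica-like refractive index

depth = wavelength / (2 * (n_grating - 1))             # pi phase-step depth
phase = 2 * np.pi * (n_grating - 1) * depth / wavelength

eta0 = zero_order_efficiency(0.5, phase)   # vanishes for a 50% duty cycle
```

Any deviation of duty cycle or depth from these values lets zero-order light leak through, which is why the groove parameters had to be designed and fabricated precisely.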
NASA Astrophysics Data System (ADS)
Mawet, D.; Ruane, G.; Xuan, W.; Echeverri, D.; Klimovich, N.; Randolph, M.; Fucik, J.; Wallace, J. K.; Wang, J.; Vasisht, G.; Dekany, R.; Mennesson, B.; Choquet, E.; Delorme, J.-R.; Serabyn, E.
2017-04-01
High-dispersion coronagraphy (HDC) optimally combines high-contrast imaging techniques such as adaptive optics/wavefront control plus coronagraphy to high spectral resolution spectroscopy. HDC is a critical pathway toward fully characterizing exoplanet atmospheres across a broad range of masses from giant gaseous planets down to Earth-like planets. In addition to determining the molecular composition of exoplanet atmospheres, HDC also enables Doppler mapping of atmosphere inhomogeneities (temperature, clouds, wind), as well as precise measurements of exoplanet rotational velocities. Here, we demonstrate an innovative concept for injecting the directly imaged planet light into a single-mode fiber, linking a high-contrast adaptively corrected coronagraph to a high-resolution spectrograph (diffraction-limited or not). Our laboratory demonstration includes three key milestones: close-to-theoretical injection efficiency, accurate pointing and tracking, and on-fiber coherent modulation and speckle nulling of spurious starlight signal coupling into the fiber. Using the extreme modal selectivity of single-mode fibers, we also demonstrated speckle suppression gains that outperform conventional image-based speckle nulling by at least two orders of magnitude.
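The injection-efficiency milestone above is governed by the overlap integral between the telescope beam and the fundamental fiber mode. Evaluated in the pupil plane, the coupling of a uniform, unobscured circular pupil into a Gaussian mode peaks near 81% at the optimal mode size. A numerical sketch of that idealized calculation (ignoring central obscuration and aberrations, which lower the real figure):

```python
import numpy as np

def coupling_efficiency(a, n_samples=20001):
    """Overlap integral of a uniform circular pupil (radius 1) with a
    Gaussian mode of 1/e field radius a, evaluated in the pupil plane."""
    r = np.linspace(0.0, 6.0, n_samples)
    dr = r[1] - r[0]
    pupil = (r <= 1.0).astype(float)
    mode = np.exp(-(r / a) ** 2)
    num = (pupil * mode * 2 * np.pi * r).sum() * dr
    den = ((pupil ** 2 * 2 * np.pi * r).sum() * dr
           * (mode ** 2 * 2 * np.pi * r).sum() * dr)
    return num ** 2 / den

# Scan the fiber-mode size for the best match to the telescope beam
a_grid = np.linspace(0.4, 2.0, 161)
best = max(coupling_efficiency(a) for a in a_grid)
```

The ~81% ceiling is the "close-to-theoretical" benchmark against which measured injection efficiencies are judged.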
Initiation of Coronal Mass Ejections by Tether-Cutting Reconnection
NASA Technical Reports Server (NTRS)
Moore, Ronald L.; Sterling, Alphonse C.; Falconer, David A.; Six, N. Frank (Technical Monitor)
2002-01-01
We present and interpret examples of the eruptive motion and flare brightening observed in the onset of magnetic explosions that produce coronal mass ejections. The observations are photospheric magnetograms and sequences of coronal and/or chromospheric images. In our examples, the explosion is apparently driven by the ejective eruption of a sigmoidal sheared-field flux rope from the core of an initially closed bipole. This eruption is initiated (triggered and unleashed) by reconnection located either (1) internally, low in the sheared core field, or (2) externally, at a magnetic null above the closed bipole. The internal reconnection is commonly called "tether-cutting" reconnection, and the external reconnection is commonly called "break-out" reconnection. We point out that break-out reconnection amounts to external tether cutting. In one example, the eruptive motion of the sheared core field starts several minutes prior to any detectable brightening in the coronal images. We suggest that in this case the eruption is triggered by internal tether-cutting reconnection that at first is too slow and/or too localized to produce detectable heating in the coronal images. This work is supported by NASA's Office of Space Science through its Solar & Heliospheric Physics Supporting Research & Technology program and its Sun-Earth Connection Guest Investigator program.
Measurement Via Optical Near-Nulling and Subaperture Stitching
NASA Technical Reports Server (NTRS)
Forbes, Greg; De Vries, Gary; Murphy, Paul; Brophy, Chris
2012-01-01
A subaperture stitching interferometer system provides near-nulling of a subaperture wavefront reflected from an object of interest over a portion of a surface of the object. A variable optical element located in the radiation path adjustably provides near-nulling to facilitate stitching of subaperture interferograms, creating an interferogram representative of the entire surface of interest. This enables testing of aspheric surfaces without null optics customized for each surface prescription. The surface shapes of objects such as lenses and other precision components are often measured with interferometry. However, interferometers have a limited capture range, and thus the test wavefront cannot be too different from the reference or the interference cannot be analyzed. Furthermore, the performance of the interferometer is usually best when the test and reference wavefronts are nearly identical (referred to as a null condition). Thus, it is necessary when performing such measurements to correct for known variations in shape to ensure that unintended variations are within the capture range of the interferometer and accurately measured. This invention is a system for near-nulling within a subaperture stitching interferometer, although in principle, the concept can be employed by wavefront measuring gauges other than interferometers. The system employs a light source for providing coherent radiation of a subaperture extent. An object of interest is placed to modify the radiation (e.g., to reflect or pass the radiation), and a variable optical element is located to interact with, and nearly null, the affected radiation. A detector or imaging device is situated to obtain interference patterns in the modified radiation. Multiple subaperture interferograms are taken and are stitched, or joined, to provide an interferogram representative of the entire surface of the object of interest.
The primary aspect of the invention is the use of adjustable corrective optics in the context of subaperture stitching near-nulling interferometry, wherein a complex surface is analyzed via multiple, separate, overlapping interferograms. For complex surfaces, the problem of managing the identification and placement of corrective optics becomes even more pronounced, to the extent that in most cases the null corrector optics are specific to the particular asphere prescription and no others (i.e. another asphere requires completely different null correction optics). In principle, the near-nulling technique does not require subaperture stitching at all. Building a near-null system that is practically useful relies on two key features: simplicity and universality. If the system is too complex, it will be difficult to calibrate and model its manufacturing errors, rendering it useless as a precision metrology tool and/or prohibitively expensive. If the system is not applicable to a wide range of test parts, then it does not provide significant value over conventional null-correction technology. Subaperture stitching enables simpler and more universal near-null systems to be effective, because a fraction of a surface is necessarily less complex than the whole surface (excepting the extreme case of a fractal surface description). The technique of near-nulling can significantly enhance aspheric subaperture stitching capability by allowing the interferometer to capture a wider range of aspheres. Moreover, subaperture stitching is essential to a truly effective near-nulling system, since looking at a fraction of the surface keeps the wavefront complexity within the capability of a relatively simple near-null apparatus. Furthermore, by reducing the subaperture size, the complexity of the measured wavefront can be reduced until it is within the capability of the near-null design.
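The stitching step itself can be sketched in one dimension: each subaperture measurement carries unknown piston and tilt terms, which are solved for by least squares over the overlap region before the maps are joined. The surface profile and per-subaperture offsets below are hypothetical:

```python
import numpy as np

# Hypothetical 1-D surface profile standing in for a full aperture map
x = np.linspace(0.0, 1.0, 200)
surface = 0.05 * np.sin(6 * x) + 0.02 * x ** 2

# Two overlapping subaperture measurements, each with its own unknown
# piston and tilt
i1, i2 = np.arange(0, 120), np.arange(80, 200)
m1 = surface[i1] + 0.30 + 0.10 * x[i1]
m2 = surface[i2] - 0.20 + 0.40 * x[i2]

# Least-squares piston/tilt that maps m2 onto m1 over the overlap region
ov = np.arange(80, 120)                       # overlap in full-grid indices
diff = m1[ov] - m2[ov - 80]
design = np.column_stack([np.ones(ov.size), x[ov]])
(piston, tilt), *_ = np.linalg.lstsq(design, diff, rcond=None)

m2_corr = m2 + piston + tilt * x[i2]

# Stitch: keep m1 alone, the corrected m2 alone, and average the overlap
stitched = np.empty_like(x)
stitched[:80] = m1[:80]
stitched[120:] = m2_corr[40:]
stitched[80:120] = 0.5 * (m1[80:120] + m2_corr[:40])
```

The stitched map reproduces the full surface up to a single global piston and tilt, which is all any interferometric test can determine anyway.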
NASA Technical Reports Server (NTRS)
Hamer, H. A.; Johnson, K. G.
1986-01-01
An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.
Interferometry in the Era of Very Large Telescopes
NASA Technical Reports Server (NTRS)
Barry, Richard K.
2010-01-01
Research in modern stellar interferometry has focused primarily on ground-based observatories, with very long baselines or large apertures, that have benefited from recent advances in fringe tracking, phase reconstruction, adaptive optics, guided optics, and modern detectors. As one example, a great deal of effort has been put into development of ground-based nulling interferometers. The nulling technique is the sparse aperture equivalent of conventional coronagraphy used in filled aperture telescopes. In this mode the stellar light itself is suppressed by a destructive fringe, effectively enhancing the contrast of the circumstellar material located near the star. Nulling interferometry has helped to advance our understanding of the astrophysics of many distant objects by providing the spatial resolution necessary to localize the various faint emission sources near bright objects. We illustrate the current capabilities of this technique by describing the first scientific results from the Keck Interferometer Nuller that combines the light from the two largest optical telescopes in the world including new, unpublished measurements of exozodiacal dust disks. We discuss prospects in the near future for interferometry in general, the capabilities of secondary masking interferometry on very large telescopes, and of nulling interferometry using outriggers on very large telescopes. We discuss future development of a simplified space-borne NIR nulling architecture, the Fourier-Kelvin Stellar Interferometer, capable of detecting and characterizing an Earth twin in the near future and how such a mission would benefit from the optical wavelength coverage offered by large, ground-based instruments.
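The nulling technique described above is easiest to see in the idealized two-element (Bracewell) case, where the interferometer's transmission on the sky is sin²(πBθ/λ): an on-axis star sits on the destructive fringe while a companion at θ = λ/(2B) falls on a constructive one. A sketch with hypothetical baseline and wavelength values, not the Keck Interferometer Nuller's parameters:

```python
import numpy as np

def nuller_transmission(theta, baseline, wavelength):
    """Idealized sky transmission of a two-element Bracewell nuller."""
    return np.sin(np.pi * baseline * theta / wavelength) ** 2

baseline, wavelength = 85.0, 10e-6   # hypothetical: 85 m baseline, 10 um light

star = nuller_transmission(0.0, baseline, wavelength)       # on-axis: rejected
theta_peak = wavelength / (2 * baseline)                    # first bright fringe
planet = nuller_transmission(theta_peak, baseline, wavelength)

mas = np.degrees(theta_peak) * 3.6e6   # peak position in milliarcseconds
```

With these numbers the first transmission peak lies at roughly 12 mas, illustrating how a long baseline places the constructive fringe on close-in circumstellar material while the star itself is suppressed.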
NASA Astrophysics Data System (ADS)
van der Avoort, Casper
2006-05-01
Optical long baseline stellar interferometry is an observational technique in astronomy that has existed for over a century but has truly flourished in recent decades. The undoubted value of stellar interferometry as a technique to measure stellar parameters beyond the classical resolution limit is increasingly spreading to the regime of synthesis imaging. With optical aperture synthesis imaging, the measurement of parameters is extended to the reconstruction of high resolution stellar images. A number of optical telescope arrays for synthesis imaging are operational on Earth, while space-based telescope arrays are being designed. For all imaging arrays, the combination of the light collected by the telescopes in the array can be performed in a number of ways. In this thesis, models are introduced for these methods of beam combination, and their effectiveness is compared in generating data to be used to reconstruct the image of a stellar object. One of these methods of beam combination is to be applied in a future space telescope. The European Space Agency is developing a mission that can valuably be extended with an imaging beam combiner. This mission is labeled Darwin, as its main goal is to provide information on the origin of life. The primary objective is the detection of planets around nearby stars - called exoplanets - and, more precisely, Earth-like exoplanets. This detection is based on a signal, rather than an image. With an imaging mode, designed as described in this thesis, Darwin can make images of, for example, the planetary system to which the detected exoplanet belongs or, as another example, of the dust disk around a star out of which planets form. Such images will greatly contribute to the understanding of the formation of our own planetary system and of how and when life became possible on Earth. The comparison of beam combination methods for interferometric imaging occupies most of the pages of this thesis.
Additional chapters treat related subjects: experimental work on beam combination optics, a description of a novel formalism for aberration retrieval, and experimental work on nulling interferometry. The chapters on interferometric imaging are organized in such a way that not only are the physical principles behind a stellar interferometer made clear, but these chapters also form the basis for the method of analysis applied to the interferometers (or rather, beam combination methods) under consideration. The imaging process in a stellar interferometer is treated as the inversion of a linear system of equations. Interferometric imaging is defined in this thesis as the reconstruction of a luminosity distribution function on the sky that, in angular measure, is larger than the diffraction-limited spot size, or point-spread function (PSF), of a single telescope in the array and that contains, again in angular measure, spatial structure much smaller than that PSF. This reconstruction has to be based on knowledge of the dimensions of the telescope array and the detector. The detector collects intensity data that is formed by observation of the polychromatic luminosity distribution on the sky and is degraded by the quantum nature of light and an imperfect electronic detection process. Therefore, the imaging study presented in this thesis can be regarded as a study of the signal characteristics of various interferometers while imaging a polychromatic wide-field stellar source. The collection of beam combination methods under consideration consists of four types. Among these are two well-known types, having either co-axially combined beams, as in the Michelson-Morley experiment that sought to detect the ether, or beams that follow optical paths as if an aperture mask were placed in front of a telescope, making the beams combine in the focus of that telescope, as suggested by Fizeau.
For separated apertures rather than an aperture mask, these optical paths are said to be homothetic. In short, these two types are addressed as the Michelson and the Homothetic type. The other two types are addressed as Densified and Staircase. The first is short for densified pupil imaging, an imaging technique very similar to the Homothetic type, except that the natural course of light after the aperture mask is altered; the combination of the beams of light is again in focus. The Staircase method is an alternative to the co-axial Michelson method and takes its name from the fact that a staircase-shaped mirror is placed in an intermediate focal plane after each telescope in the array, before the beams of light are combined co-axially. This addition allows stellar imaging as with the Michelson type, with the advantage of covering a large field of view. The details of these methods are discussed at length in this thesis, but introducing them at this point allows a short list of results, found by comparing them on equal imaging tasks. Homothetic imagers are best suited for covering a wide field of view, considering the information content of the interferometric signals these arrays produce. The large number of detectors does not seem to limit the imaging performance in the presence of noise, owing to the high ratio of coherent versus incoherent information in the detector signal. The imaging efficiency of a Michelson-type array is also high, although, considering only polychromatic wide-field imaging tasks, the ratio of coherent versus incoherent information in the detected signals is very low. This results in very long observation times being needed to produce images comparable to those obtained with a Homothetic array.
A detailed presentation of the characteristics of the detected signals in a co-axial Michelson array reveals that such signals, obtained by polychromatic observation of extended sources, have fringe envelope functions that do not allow Fourier spectroscopy to obtain high-resolution spectroscopic information about such a source. For the Densified case, it is found that this method can indeed provide an interferometric PSF that is more favorable than a homothetic PSF, but only for narrow-angle observations. For polychromatic wide-field observations, the Densified PSF is field-dependent, which the image reconstruction process can take into account. Wide-field imaging using the favorable properties of the Densified PSF can be performed by using special settings of the delay, or optical path length difference, between interferometer arms and including observations at several delay settings in the observation data. The Staircase method is the second-best method for the imaging task under consideration. The discontinuous nature of the staircase-shaped mirrors does not give rise to a discontinuous reconstructed luminosity distribution or to non-uniformly covered spatial frequencies. The intrinsic efficiency of the interferometric signal in this type of interferometer is worse than that of the other co-axial method, although the ratio of coherent versus incoherent signal in the data (the length of the fringe packet in one intensity trace) is nearly optimal. The inefficiency is more than compensated for by the very short observation time needed. Besides numerical studies of interferometer arrays, one interferometric imager was also studied experimentally. A homothetic imager was built, comprising three telescopes with fully separated beam relay optics. The pointing direction, the location, and the optical path length of two of the three beams are electronically controllable. The beams can be focused together to interfere via a beam combiner consisting of curved surfaces.
This set-up allows measurement of the accuracies to which certain optical elements must be positioned. Moreover, this set-up demonstrates that, without knowledge of the initial pointing directions, locations, and optical path lengths of the beams, the condition of homothesis can be attained based solely on information from the focal plane of the set-up. Further experiments show that the approximation of exact homothesis is limited by the optical quality of the beam combiner optics. In parallel with the experiments on homothesis, a study was performed to evaluate the use of the Extended Nijboer-Zernike (ENZ) formalism for the analysis of multiple-aperture optical systems. It is envisaged that an aberration retrieval algorithm, provided with the common focus of a homothetic array, can be used to detect misalignment of, or even aberrations in, the sub-apertures of the sparse synthetic aperture. The ENZ formalism is a powerful tool to describe the focal intensity profile in an optical imaging system that images a monochromatic point source through a pupil with a certain transmission profile and phase aberration function. Moreover, the formalism allows calculation of intensity profiles outside the best-focus plane. With the intensity information of several through-focus planes, enough information is available to reconstruct the pupil function. The formalism is described, including the reconstruction algorithm. Although very good results are obtained for general pupil functions, the results for synthetic pupil functions are not very promising. The detailed description of ENZ aberration retrieval reveals the origin of this breakdown of the retrieval process. Finally, a description of experiments on nulling interferometry is given, starting with the presentation of an experimental set-up for three-beam nulling.
A novel strategy for polychromatic nulling is treated here, with the goal of relieving the tight phase constraint on the spectra of the individual beams. In theory, this allows broadband nulling with a high rejection ratio without using achromatic phase shifters. The disappointing results led to an investigation of the spectra of the individual beams. The origin of the unsatisfactory rejection ratio was found in the spectral imbalance of the beams. Before branching off, the beams have equal spectra. Subsequently, passage through different optical elements with individually applied coatings, the per-beam power control, and finally the coupling into a single-mode fiber apparently alter the spectra in such a way that the theoretically achievable rejection ratio cannot be reached. The research described in this thesis provides starting points for research in several areas of interest related to aperture synthesis, as well as guidelines for the design of synthetic telescopes for imaging. As such, this research contributes to the improvement of instrumentation for observational astronomy, in particular for stellar interferometry. While nulling interferometry is the detection technique that allows a space telescope array such as ESA-Darwin to identify exoplanets, optical aperture synthesis imaging is the technique that can make images of the planetary systems to which these exoplanets belong. Moreover, many objects can be observed that represent earlier versions of our planetary system, our Sun, and even our galaxy, the Milky Way. Observing these objects might answer questions about the origins of the Earth itself and of life on it.
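The thesis's framing of interferometric imaging as the inversion of a linear system can be illustrated with a toy reconstruction. The sketch below illustrates only that framing, not the thesis's actual beam combination models: the system matrix, scene, noise level, and regularization weight are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: each detector sample g_i is a linear
# functional of the source luminosity distribution f (a flattened sky map).
n_pix, n_meas = 64, 200
A = rng.normal(size=(n_meas, n_pix))          # made-up system matrix
f_true = np.zeros(n_pix)
f_true[[10, 30, 31]] = [1.0, 0.5, 0.8]        # a sparse "stellar" scene

g = A @ f_true + 0.01 * rng.normal(size=n_meas)   # noisy detector data

# Tikhonov-regularized inversion: minimize ||A f - g||^2 + lam ||f||^2.
lam = 1e-2
f_rec = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ g)

print(float(np.max(np.abs(f_rec - f_true))))  # small reconstruction error
```

With more measurements than pixels and modest noise, the regularized normal equations recover the scene closely; the interesting questions in the thesis concern what happens when the array geometry makes A poorly conditioned.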
NASA Astrophysics Data System (ADS)
Hadaway, James B.; Wells, Conrad; Olczak, Gene; Waldman, Mark; Whitman, Tony; Cosentino, Joseph; Connolly, Mark; Chaney, David; Telfer, Randal
2016-07-01
The James Webb Space Telescope (JWST) primary mirror (PM) is 6.6 m in diameter and consists of 18 hexagonal segments, each 1.5 m point-to-point. Each segment has a six degree-of-freedom hexapod actuation system and a radius-of-curvature (RoC) actuation system. The full telescope will be tested at its cryogenic operating temperature at Johnson Space Center. This testing will include center-of-curvature measurements of the PM, using the Center-of-Curvature Optical Assembly (COCOA) and the Absolute Distance Meter Assembly (ADMA). The COCOA includes an interferometer, a reflective null, an interferometer-null calibration system, coarse and fine alignment systems, and two displacement measuring interferometer systems. A multiple-wavelength interferometer (MWIF) is used for alignment and phasing of the PM segments. The ADMA is used to measure, and set, the spacing between the PM and the focus of the COCOA null (i.e. the PM center-of-curvature) for determination of the RoC. The performance of these metrology systems was assessed during two cryogenic tests at JSC. This testing was performed using the JWST Pathfinder telescope, consisting mostly of engineering development and spare hardware. The Pathfinder PM consists of two spare segments. These tests provided the opportunity to assess how well the center-of-curvature optical metrology hardware, along with the software and procedures, performed using real JWST telescope hardware. This paper will describe the test setup, the testing performed, and the resulting metrology system performance. The knowledge gained and the lessons learned during this testing will be of great benefit to the accurate and efficient cryogenic testing of the JWST flight telescope.
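A multiple-wavelength interferometer resolves the phasing ambiguity that a single wavelength cannot. As a hedged illustration of the underlying idea (not the actual MWIF algorithm), the two-color synthetic-wavelength trick below recovers a piston step larger than half a wavelength; the laser lines and piston value are invented.

```python
import numpy as np

def piston_from_two_colors(phi1, phi2, lam1, lam2):
    """Recover a piston step from wrapped phases at two wavelengths.

    Unambiguous while |piston| < |Lambda|/2, where
    Lambda = lam1*lam2/(lam2 - lam1) is the synthetic wavelength.
    """
    Lam = lam1 * lam2 / (lam2 - lam1)            # signed synthetic wavelength
    dphi = np.angle(np.exp(1j * (phi1 - phi2)))  # wrap difference to (-pi, pi]
    return Lam * dphi / (2 * np.pi)

# Hypothetical laser lines (um) and a segment piston of 1.2 um,
# well beyond the +/- lam/2 range of either wavelength alone.
lam1, lam2 = 0.6328, 0.6119
piston = 1.2
phi1 = np.angle(np.exp(2j * np.pi * piston / lam1))  # wrapped single-color phase
phi2 = np.angle(np.exp(2j * np.pi * piston / lam2))

print(piston_from_two_colors(phi1, phi2, lam1, lam2))
```

Here the synthetic wavelength is about 18.5 um, so pistons of several microns are recovered without ambiguity, whereas either wavelength alone wraps every ~0.3 um of piston.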
The underlying pathway structure of biochemical reaction networks
Schilling, Christophe H.; Palsson, Bernhard O.
1998-01-01
Bioinformatics is yielding extensive, and in some cases complete, genetic and biochemical information about individual cell types and cellular processes, providing the composition of living cells and the molecular structure of their components. These components together perform integrated cellular functions that now need to be analyzed. In particular, the functional definition of biochemical pathways and their role in the context of the whole cell is lacking. In this study, we show how the mass balance constraints that govern the function of biochemical reaction networks lead to the translation of this problem into the realm of linear algebra. The functional capabilities of biochemical reaction networks, and thus the choices that cells can make, are reflected in the null space of their stoichiometric matrix. The null space is spanned by a finite number of basis vectors. We present an algorithm for the synthesis of a set of basis vectors for spanning the null space of the stoichiometric matrix, in which these basis vectors represent the underlying biochemical pathways that are fundamental to the corresponding biochemical reaction network. In other words, all possible flux distributions achievable by a defined set of biochemical reactions are represented by a linear combination of these basis pathways. These basis pathways thus represent the underlying pathway structure of the defined biochemical reaction network. This development is significant from a fundamental and conceptual standpoint because it yields a holistic definition of biochemical pathways in contrast to definitions that have arisen from the historical development of our knowledge about biochemical processes. Additionally, this new conceptual framework will be important in defining, characterizing, and studying biochemical pathways from the rapidly growing information on cellular function. PMID:9539712
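The null-space construction described in the abstract can be made concrete with a toy network. The sketch below assumes a hypothetical four-metabolite, six-reaction system (not one from the paper): steady state requires S v = 0, so admissible flux distributions v lie in the null space of the stoichiometric matrix S, here a two-dimensional space spanned by the pathways A→B→C and A→B→D.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical stoichiometric matrix S: rows are metabolites A-D,
# columns are reactions (uptake of A, A->B, B->C, B->D, export of C, export of D).
S = np.array([
    [ 1, -1,  0,  0,  0,  0],   # A
    [ 0,  1, -1, -1,  0,  0],   # B
    [ 0,  0,  1,  0, -1,  0],   # C
    [ 0,  0,  0,  1,  0, -1],   # D
])

# null_space returns an orthonormal basis of {v : S v = 0}; any
# steady-state flux distribution is a linear combination of its columns.
N = null_space(S)
print(N.shape)   # (6, 2): two independent basis pathways
```

The paper's algorithm produces basis vectors that are individually interpretable as pathways; `null_space` returns an orthonormal basis of the same subspace, which is equivalent up to a change of basis.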
DAMA confronts null searches in the effective theory of dark matter-nucleon interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catena, Riccardo; Ibarra, Alejandro; Wild, Sebastian
2016-05-17
We examine the dark matter interpretation of the modulation signal reported by the DAMA experiment from the perspective of effective field theories displaying Galilean invariance. We consider the most general effective coupling leading to the elastic scattering of a dark matter particle with spin 0 or 1/2 off a nucleon, and we analyze the compatibility of the DAMA signal with the null results from other direct detection experiments, as well as with the non-observation of a high-energy neutrino flux in the direction of the Sun from dark matter annihilation. To this end, we develop a novel semi-analytical approach for comparing experimental results in the high-dimensional parameter space of the non-relativistic effective theory. Assuming the standard halo model, we find a strong tension between the dark matter interpretation of the DAMA modulation signal and the null-result experiments. We also list possible ways out of this conclusion.
NASA Astrophysics Data System (ADS)
Masson, Sophie; Pariat, Étienne; Valori, Gherardo; Deng, Na; Liu, Chang; Wang, Haimin; Reid, Hamish
2017-08-01
Context. The dynamics of ultraviolet (UV) emissions during solar flares provides constraints on the physical mechanisms involved in the triggering and evolution of flares. In particular, it provides information on the location of the reconnection sites and the associated magnetic fluxes. In this respect, confined flares are far less understood than eruptive flares generating coronal mass ejections. Aims: We present a detailed study of the dynamics of a confined circular flare associated with three UV late phases in order to understand more precisely which topological elements are present and how they constrain the dynamics of the flare. Methods: We perform a non-linear force-free field extrapolation of the confined flare observed with the Helioseismic and Magnetic Imager (HMI) and Atmospheric Imaging Assembly (AIA) instruments on board the Solar Dynamics Observatory (SDO). From the 3D magnetic field we compute the squashing factor and analyse its distribution. Jointly, we analyse the AIA extreme-ultraviolet (EUV) light curves and images in order to identify the post-flare loops and their temporal and thermal evolution. By combining the two analyses we are able to propose a detailed scenario that explains the dynamics of the flare. Results: Our topological analysis shows that, in addition to a null-point topology with the fan separatrix, the spine lines, and the surrounding quasi-separatrix layer (QSL) halo (typical of a circular flare), a flux rope and its hyperbolic flux tube (HFT) are enclosed below the null. By comparing the magnetic field topology and the EUV post-flare loops we obtain an almost perfect match between the footpoints of the separatrices and the 1600 Å ribbons, and between the HFT field-line footpoints and bright spots observed inside the circular ribbons. We show, for the first time in a confined flare, that magnetic reconnection occurred initially at the HFT, below the flux rope.
Reconnection at the null point between the flux rope and the overlying field is initiated only in a second phase. In addition, we show that the EUV late phase observed after the main flare episode is caused by the cooling of loops of different lengths, all of which reconnected at the null point during the impulsive phase. Conclusions: Our analysis shows, in one example, that flux ropes are present in null-point topologies not only for eruptive and jet events but also for confined flares. This allows us to conjecture on the analogies between the conditions that govern the generation of jets, confined flares, and eruptive flares. A movie is available at http://www.aanda.org
On-Orbit Prospective Echocardiography on International Space Station Crew
NASA Technical Reports Server (NTRS)
Hamilton, Douglas R.; Sargsyan, Ashot E.; Martin, David S.; Garcia, Kathleen M.; Melton, Shannon L.; Feiveson, Alan; Dulchavsky, Scott A.
2010-01-01
Introduction: A prospective trial of echocardiography was conducted on six crewmembers onboard the International Space Station. The main objectives were to determine the efficacy of remotely guided tele-echocardiography, including just-in-time e-training methods, and to establish "space normal" echocardiographic values. Methods: Each crewmember operator (n=6) had 2 hours of preflight training. Baseline echocardiographic data were collected 55 to 167 days preflight. Similar equipment was used in each 60-minute in-flight session (mean microgravity exposure 114 days; range 34 to 190 days). On-orbit ultrasound operators used an e-learning system within 24 h of these sessions. Expert assistance was provided using ultrasound video downlink and two-way voice. Testing was repeated 5 to 16 days after landing. A separate ANOVA was used for each echocardiographic variable (n=33). Within each ANOVA, three tests were made: (a) the effect of mission phase (preflight, in-flight, postflight); (b) the effect of echo technician (two technicians independently analyzed the data); (c) the interaction between mission phase and technician. Results: Nine rejections of the null hypothesis (that neither mission phase nor technician had an effect) were discovered and considered for follow-up. Of these, six rejections were due to significant technician effects, not to space flight. Three rejections of the null hypothesis (aortic valve time-velocity integral, mitral E-wave velocity, and heart rate) were attributable to space flight but were determined not to be clinically significant. No rejections were due to the interaction between technician and space flight. Conclusion: No consistent, clinically significant effects of long-duration space flight were seen in the echocardiographic variables of the given group of subjects.
Long time existence from interior gluing
NASA Astrophysics Data System (ADS)
Chruściel, Piotr T.
2017-07-01
We prove completeness-to-the-future of null hypersurfaces emanating outwards from large spheres, in vacuum space-times evolving from general asymptotically flat data with well-defined energy-momentum. The proof uses scaling and a gluing construction to reduce the problem to Bieri’s stability theorem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callebaut, Nele; Gubser, Steven S.; Samberg, Andreas
We study segmented strings in flat space and in AdS3. In flat space, these well-known classical motions describe strings which at any instant of time are piecewise linear. In AdS3, the worldsheet is composed of faces, each of which is a region bounded by null geodesics in an AdS2 subspace of AdS3. The time evolution can be described by specifying the null geodesic motion of kinks in the string at which two segments are joined. The outcome of collisions of kinks on the worldsheet can be worked out essentially using considerations of causality. We study several examples of closed segmented strings in AdS3 and find an unexpected quasi-periodic behavior. We also work out a WKB analysis of quantum states of yo-yo strings in AdS5 and find a logarithmic term reminiscent of the logarithmic twist of string states on the leading Regge trajectory.
NASA Astrophysics Data System (ADS)
Douglas, Ewan Streets
This work explores remote sensing of planetary atmospheres and their circumstellar surroundings. The terrestrial ionosphere is a highly variable space plasma embedded in the thermosphere. Generated by solar radiation and predominantly composed of oxygen ions at high altitudes, the ionosphere is dynamically and chemically coupled to the neutral atmosphere. Variations in ionospheric plasma density impact radio astronomy and communications. Inverting observations of 83.4 nm photons resonantly scattered by singly ionized oxygen holds promise for remotely sensing the ionospheric plasma density. This hypothesis was tested by comparing 83.4 nm limb profiles recorded by the Remote Atmospheric and Ionospheric Detection System aboard the International Space Station to a forward model driven by coincident plasma densities measured independently via ground-based incoherent scatter radar. A comparison study of two separate radar overflights with different limb profile morphologies found agreement between the forward model and measured limb profiles. A new implementation of Chapman parameter retrieval via Markov chain Monte Carlo techniques quantifies the precision of the plasma densities inferred from 83.4 nm emission profiles. This first study demonstrates the utility of 83.4 nm emission for ionospheric remote sensing. Future visible and ultraviolet spectroscopy will characterize the composition of exoplanet atmospheres; therefore, the second study advances technologies for the direct imaging and spectroscopy of exoplanets. Such spectroscopy requires the development of new technologies to separate relatively dim exoplanet light from parent star light. High-contrast observations at short wavelengths require spaceborne telescopes to circumvent atmospheric aberrations. The Planet Imaging Concept Testbed Using a Rocket Experiment (PICTURE) team designed a suborbital sounding rocket payload to demonstrate visible light high-contrast imaging with a visible nulling coronagraph. 
Laboratory operations of the PICTURE coronagraph achieved the high-contrast imaging sensitivity necessary to test for the predicted warm circumstellar belt around Epsilon Eridani. Interferometric wavefront measurements of the calibration target Beta Orionis recorded during the second test flight in November 2015 demonstrate the first active wavefront sensing with a piezoelectric mirror stage and activation of a micromachined deformable mirror in space. These two studies advance our "close-to-home" knowledge of atmospheres and move exoplanetary studies closer to detailed measurements of atmospheres outside our solar system.
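The Chapman-parameter retrieval via Markov chain Monte Carlo mentioned in this abstract can be sketched with a minimal random-walk Metropolis sampler. Everything below (profile parameters, altitude grid, noise level, priors, and step sizes) is invented for illustration; this is not the RAIDS analysis code.

```python
import numpy as np

rng = np.random.default_rng(2)

def chapman(z, Nm, zm, H):
    """Idealized Chapman electron-density profile (overhead Sun)."""
    u = (z - zm) / H
    return Nm * np.exp(0.5 * (1.0 - u - np.exp(-u)))

# Synthetic "observed" profile with made-up true parameters and noise.
z = np.linspace(200.0, 600.0, 40)              # altitude grid, km
theta_true = np.array([1.0e6, 350.0, 60.0])    # Nm (cm^-3), zm (km), H (km)
sigma = 2.0e4
y = chapman(z, *theta_true) + rng.normal(0.0, sigma, z.size)

def log_post(theta):
    """Gaussian likelihood with flat priors inside loose bounds."""
    Nm, zm, H = theta
    if not (1e5 < Nm < 1e7 and 200.0 < zm < 600.0 and 10.0 < H < 200.0):
        return -np.inf
    r = y - chapman(z, Nm, zm, H)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis sampler.
theta = np.array([8.0e5, 300.0, 40.0])         # deliberately offset start
step = np.array([2.0e4, 2.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    proposal = theta + step * rng.normal(size=3)
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
        theta, lp = proposal, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]                 # discard burn-in

print(chain.mean(axis=0))
```

The spread of the post-burn-in chain around each parameter's posterior mean is the precision quantification referred to in the abstract; a production analysis would also check convergence diagnostics and tune the proposal scales.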
NASA Technical Reports Server (NTRS)
2010-01-01
Topics covered include: Software Tool Integrating Data Flow Diagrams and Petri Nets; Adaptive Nulling for Interferometric Detection of Planets; Reducing the Volume of NASA Earth-Science Data; Reception of Multiple Telemetry Signals via One Dish Antenna; Space-Qualified Traveling-Wave Tube; Smart Power Supply for Battery-Powered Systems; Parallel Processing of Broad-Band PPM Signals; Inexpensive Implementation of Many Strain Gauges; Constant-Differential-Pressure Two-Fluid Accumulator; Inflatable Tubular Structures Rigidized with Foams; Power Generator with Thermo-Differential Modules; Mechanical Extraction of Power From Ocean Currents and Tides; Nitrous Oxide/Paraffin Hybrid Rocket Engines; Optimized Li-Ion Electrolytes Containing Fluorinated Ester Co-Solvents; Probabilistic Multi-Factor Interaction Model for Complex Material Behavior; Foldable Instrumented Bits for Ultrasonic/Sonic Penetrators; Compact Rare Earth Emitter Hollow Cathode; High-Precision Shape Control of In-Space Deployable Large Membrane/Thin-Shell Reflectors; Rapid Active Sampling Package; Miniature Lightweight Ion Pump; Cryogenic Transport of High-Pressure-System Recharge Gas; Water-Vapor Raman Lidar System Reaches Higher Altitude; Compact Ku-Band T/R Module for High-Resolution Radar Imaging of Cold Land Processes; Wide-Field-of-View, High-Resolution, Stereoscopic Imager; Electrical Capacitance Volume Tomography with High-Contrast Dielectrics; Wavefront Control and Image Restoration with Less Computing; Polarization Imaging Apparatus; Stereoscopic Machine-Vision System Using Projected Circles; Metal Vapor Arcing Risk Assessment Tool; Performance Bounds on Two Concatenated, Interleaved Codes; Parameterizing Coefficients of a POD-Based Dynamical System; Confidence-Based Feature Acquisition; Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery; Universal Decoder for PPM of any Order; Algorithm for Stabilizing a POD-Based Dynamical System; Mission Reliability Estimation for Repairable Robot Teams; Processing AIRS Scientific Data Through Level 3; Web-Based Requesting and Scheduling Use of Facilities; AutoGen Version 5.0; Time-Tag Generation Script; PPM Receiver Implemented in Software; Tropospheric Emission Spectrometer Product File Readers; Reporting Differences Between Spacecraft Sequence Files; Coordinating "Execute" Data for ISS and Space Shuttle; Database for Safety-Oriented Tracking of Chemicals; Apparatus for Cold, Pressurized Biogeochemical Experiments; Growing B Lymphocytes in a Three-Dimensional Culture System; Tissue-like 3D Assemblies of Human Broncho-Epithelial Cells; Isolation of Resistance-Bearing Microorganisms; Oscillating Cell Culture Bioreactor; and Liquid Cooling/Warming Garment.
Thermal performance of a customized multilayer insulation (MLI)
NASA Technical Reports Server (NTRS)
Leonhard, K. E.
1976-01-01
The thermal performance of a LH2 tank on a shroudless vehicle was investigated. The 1.52 m (60 in) tank was insulated with two MLI blankets consisting of 18 double-aluminized Mylar radiation shields and 19 silk-net spacers. The temperature of outer space was simulated by using a cryoshroud maintained at near liquid hydrogen temperature. The heating effects of a payload were simulated by utilizing a thermal payload simulator (TPS) viewing the tank. The test program consisted of three major test categories: (1) null testing, (2) thermal performance testing of the tank-installed MLI system, and (3) thermal testing of a customized MLI configuration. TPS surface temperatures were maintained at near hydrogen temperature during the null test and at 289 K (520 R) during test categories 2 and 3. The heat flow rate through the tank-installed MLI at a tank/TPS spacing of 0.457 m was 1.204 watts with no MLI on the TPS and 0.059 watts through the customized MLI with three blankets on the TPS. When the tank/TPS spacing was reduced from 0.457 m to 0.152 m, the heat flow through the customized MLI increased by 10 percent.
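The scaling behind a stack of radiation shields can be sketched with the textbook radiation-only model: N floating shields between parallel boundaries, all surfaces with the same emissivity and no conduction through the spacers. This is an idealized estimate with an assumed emissivity, not a reconstruction of the reported test numbers.

```python
# Idealized radiation-only heat flux through N floating radiation shields
# between parallel gray surfaces of equal emissivity eps.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def mli_radiative_flux(T_hot, T_cold, n_shields, eps):
    """Heat flux (W/m^2); each added shield divides the flux further."""
    effective_emissivity = eps / (2.0 - eps)   # two gray surfaces facing each other
    return SIGMA * effective_emissivity * (T_hot**4 - T_cold**4) / (n_shields + 1)

# Boundary temperatures loosely inspired by the test (289 K warm side,
# ~21 K cold side); the emissivity 0.05 is an assumed value.
q_bare = mli_radiative_flux(289.0, 21.0, 0, 0.05)    # no shields
q_18 = mli_radiative_flux(289.0, 21.0, 18, 0.05)     # 18 shields, as in one blanket
print(q_bare, q_18)  # 18 shields cut the radiative flux by a factor of 19
```

In practice measured MLI performance is worse than this ideal because of conduction through spacers, seams, and penetrations, which is why programs like the one above rely on calorimetric testing.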
NASA Technical Reports Server (NTRS)
2004-01-01
Topics covered include: COTS MEMS Flow-Measurement Probes; Measurement of an Evaporating Drop on a Reflective Substrate; Airplane Ice Detector Based on a Microwave Transmission Line; Microwave/Sonic Apparatus Measures Flow and Density in Pipe; Reducing Errors by Use of Redundancy in Gravity Measurements; Membrane-Based Water Evaporator for a Space Suit; Compact Microscope Imaging System with Intelligent Controls; Chirped-Superlattice, Blocked-Intersubband QWIP; Charge-Dissipative Electrical Cables; Deep-Sea Video Cameras Without Pressure Housings; RFID and Memory Devices Fabricated Integrally on Substrates; Analyzing Dynamics of Cooperating Spacecraft; Spacecraft Attitude Maneuver Planning Using Genetic Algorithms; Forensic Analysis of Compromised Computers; Document Concurrence System; Managing an Archive of Images; MPT Prediction of Aircraft-Engine Fan Noise; Improving Control of Two Motor Controllers; Electro-deionization Using Micro-separated Bipolar Membranes; Safer Electrolytes for Lithium-Ion Cells; Rotating Reverse-Osmosis for Water Purification; Making Precise Resonators for Mesoscale Vibratory Gyroscopes; Robotic End Effectors for Hard-Rock Climbing; Improved Nutation Damper for a Spin-Stabilized Spacecraft; Exhaust Nozzle for a Multitube Detonative Combustion Engine; Arc-Second Pointer for Balloon-Borne Astronomical Instrument; Compact, Automated Centrifugal Slide-Staining System; Two-Armed, Mobile, Sensate Research Robot; Compensating for Effects of Humidity on Electronic Noses; Brush/Fin Thermal Interfaces; Multispectral Scanner for Monitoring Plants; Coding for Communication Channels with Dead-Time Constraints; System for Better Spacing of Airplanes En Route; Algorithm for Training a Recurrent Multilayer Perceptron; Orbiter Interface Unit and Early Communication System; White-Light Nulling Interferometers for Detecting Planets; and Development of Methodology for Programming Autonomous Agents.
Asymptotic Charges at Null Infinity in Any Dimension
NASA Astrophysics Data System (ADS)
Campoleoni, Andrea; Francia, Dario; Heissenberg, Carlo
2018-03-01
We analyse the conservation laws associated with large gauge transformations of massless fields in Minkowski space. Our aim is to highlight the interplay between boundary conditions and finiteness of the asymptotically conserved charges in any space-time dimension, both even and odd, greater than or equal to three. After discussing non-linear Yang-Mills theory and revisiting linearised gravity, our investigation extends to cover the infrared behaviour of bosonic massless quanta of any spin.
Conditioned Limit Theorems for Some Null Recurrent Markov Processes
1976-08-01
Chapter 1. Introduction. 1.1 Summary of Results. Let (V_k, k ≥ 0) be a discrete-time Markov process with state space E ⊂ (-∞, ∞) and let S be... We begin by stating our three basic assumptions: (1) (V_k, k ≥ 0) is a Markov process with state space E ⊂ (-∞, ∞); (ii)... 3. Conditioning on T > n. 3.1 Preliminary Results
Stereochronoscopy of the optic disc with stereoscopic cameras.
Schirmer, K E; Kratky, V
1980-09-01
Goldmann's chronoscopy, a form of time-based photogrammetry of the optic disc, can be accomplished by repeated simultaneous stereophotography. The variations in centration and image orientation are nulled by azimuthal rotation and fusion of two pairs of stereophotographs taken at separate times.
Impulsive spherical gravitational waves
NASA Astrophysics Data System (ADS)
Aliev, A. N.; Nutku, Y.
2001-03-01
Penrose's identification with warp provides the general framework for constructing the continuous form of impulsive gravitational wave metrics. We present the two-component spinor formalism for the derivation of the full family of impulsive spherical gravitational wave metrics which brings out the power in identification with warp and leads to the simplest derivation of exact solutions. These solutions of the Einstein vacuum field equations are obtained by cutting Minkowski space into two pieces along a null cone and re-identifying them with warp which is given by an arbitrary nonlinear holomorphic transformation. Using two-component spinor techniques we construct a new metric describing an impulsive spherical gravitational wave where the vertex of the null cone lies on a worldline with constant acceleration.
Ryutov, D. D.; Soukhanovskii, V. A.
2015-11-17
The snowflake magnetic configuration is characterized by the presence of two closely spaced poloidal field nulls that create a characteristic hexagonal (reminiscent of a snowflake) separatrix structure. The magnetic field properties and the plasma behaviour in the snowflake are determined by the simultaneous action of both nulls, thus generating a great deal of interesting physics as well as providing a chance for improving divertor performance. One of the most interesting effects of the snowflake geometry is the heat flux sharing between multiple divertor channels. The authors summarise experimental results obtained with the snowflake configuration on several tokamaks. Wherever possible, the relation to existing theoretical models is described. Divertor concepts utilizing the properties of a snowflake configuration are briefly discussed.
NASA Technical Reports Server (NTRS)
Zhang, William W.
2010-01-01
The International X-ray Observatory (IXO) is the next major space X-ray observatory, performing both imaging and spectroscopic studies of all kinds of objects in the Universe. It is a collaborative mission of the National Aeronautics and Space Administration of the United States, the European Space Agency, and the Japan Aerospace Exploration Agency. It is to be launched into a Sun-Earth L2 orbit in 2021. One of the most challenging aspects of the mission is the construction of a flight mirror assembly capable of focusing X-rays in the band of 0.1 to 40 keV with an angular resolution of better than 5 arc-seconds and with an effective collection area of more than 3 sq m. The mirror assembly will consist of approximately 15,000 parabolic and hyperbolic mirror segments, each of which is approximately 200 mm by 300 mm with a thickness of 0.4 mm. The manufacture and qualification of these mirror segments and their integration into the giant mirror assembly have been the objectives of a vigorous technology development program at NASA's Goddard Space Flight Center. Each of these mirror segments needs to be measured and qualified for both optical figure and mechanical dimensions. In this talk, I will describe the technology program with a particular emphasis on a measurement system we are developing to meet those requirements, including the use of coordinate measuring machines, Fizeau interferometers, and a custom-designed and custom-built null lens. This system is capable of measuring highly off-axis aspherical or cylindrical mirrors with repeatability, accuracy, and speed.
Interface extinction and subsurface peaking of the radiation pattern of a line source
NASA Technical Reports Server (NTRS)
Engheta, N.; Papas, C. H.; Elachi, C.
1981-01-01
The radiation pattern of a line source lying along the plane interface of two dielectric half-spaces is calculated. It is found that the pattern at the interface has a null (interface extinction); that the pattern in the upper half-space, whose index of refraction is taken to be less than that of the lower half-space, has a single lobe with a maximum normal to the interface; and that the pattern in the lower half-space (subsurface region) has two maxima (peaks) straddling a minimum symmetrically. Interpretations of these results in terms of ray optics, Oseen's extinction theorem, and the Cerenkov effect are given.
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; N'Diaye, Mamadou; Riggs, A. J. E.; Egron, Sylvain; Mazoyer, Johan; Pueyo, Laurent; Choquet, Elodie; Perrin, Marshall D.; Kasdin, Jeremy; Sauvage, Jean-François; Fusco, Thierry; Soummer, Rémi
2016-07-01
Segmented telescopes are a possible approach to enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to their central obstruction, support structures and segment gaps, makes high-contrast imaging very challenging. The High-contrast imager for Complex Aperture Telescopes (HiCAT) was designed to study and develop solutions for such telescope pupils using wavefront control and starlight suppression. The testbed design has the flexibility to enable studies with increasing complexity for telescope aperture geometries: starting with off-axis telescopes, then on-axis telescopes with central obstruction and support structures (e.g. the Wide Field Infrared Survey Telescope [WFIRST]), up to on-axis segmented telescopes, including various concepts for a Large UV, Optical, IR telescope (LUVOIR) such as the High Definition Space Telescope (HDST). We completed optical alignment in the summer of 2014 and a first deformable mirror was successfully integrated in the testbed, with a total wavefront error of 13 nm RMS over an 18 mm diameter circular pupil in open loop. HiCAT will also be provided with a segmented mirror conjugated with a shaped pupil representing the HDST configuration, to directly study wavefront control in the presence of segment gaps, central obstruction and spider. We recently applied a focal-plane wavefront control method combined with a classical Lyot coronagraph on HiCAT, and we found limitations on contrast performance due to vibration effects. In this communication, we analyze this instability and study its impact on the performance of wavefront control algorithms. We present our speckle-nulling code to control and correct for wavefront errors both in simulation and on the testbed. This routine is first tested in simulation, without instability, to validate our code.
We then add simulated vibrations to study the degradation of contrast performance in the presence of these effects.
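The principle behind speckle nulling can be illustrated at a single focal-plane pixel: phase-diversity probes recover the complex speckle field, after which the deformable mirror injects the opposite field. The sketch below is a toy one-pixel model with an assumed probe amplitude; it is not the HiCAT code.

```python
import numpy as np

def intensity(E_speckle, E_probe):
    """Detector intensity at one focal-plane pixel: |speckle + probe|^2."""
    return abs(E_speckle + E_probe) ** 2

def estimate_speckle(E_speckle, a=0.1):
    """Recover the complex speckle field from four phase-diversity probes."""
    I = [intensity(E_speckle, a * np.exp(1j * phi))
         for phi in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
    re = (I[0] - I[2]) / (4 * a)   # I(0) - I(pi)   = 4 a Re(E)
    im = (I[1] - I[3]) / (4 * a)   # I(pi/2) - I(3pi/2) = 4 a Im(E)
    return re + 1j * im

# unknown speckle field at one pixel; the probes "measure" it
E = 0.03 * np.exp(1j * 1.2)
E_hat = estimate_speckle(E)
# deformable-mirror correction: add the opposite field to null the speckle
residual = intensity(E - E_hat, 0.0)
```

In the noiseless toy model the recovery is exact and the corrected intensity vanishes; on a real testbed, photon noise and vibration-driven field instability of the kind discussed above limit the achievable null.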
Decomposition of the optical transfer function: wavefront coding imaging systems
NASA Astrophysics Data System (ADS)
Muyo, Gonzalo; Harvey, Andy R.
2005-10-01
We describe the mapping of the optical transfer function (OTF) of an incoherent imaging system into a geometrical representation. We show that for defocused traditional and wavefront-coded systems the OTF can be represented as a generalized Cornu spiral. This representation provides a physical insight into the way in which wavefront coding can increase the depth of field of an imaging system and permits analytical quantification of salient OTF parameters, such as the depth of focus, the location of nulls, and amplitude and phase modulation of the wavefront-coding OTF.
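The geometrical picture can be reproduced numerically: the OTF of an incoherent system is the normalized autocorrelation of the generalized pupil, and plotting its complex values traces the spiral. Below is a minimal 1-D sketch; the defocus and cubic-phase strengths are illustrative values, not parameters from the paper.

```python
import numpy as np

def otf_1d(phase, n=512):
    """1-D OTF as the normalized autocorrelation of the generalized pupil.

    `phase(x)` gives the pupil phase in radians on x in [-1, 1].
    """
    x = np.linspace(-1, 1, n)
    pupil = np.exp(1j * phase(x))          # unit-amplitude generalized pupil
    otf = np.correlate(pupil, pupil, mode="full")
    return otf / otf[n - 1]                # normalize so OTF(0) = 1

# defocused pupil: quadratic phase (6 rad of defocus, illustrative)
otf_defocus = otf_1d(lambda x: 6.0 * x ** 2)
# wavefront-coded pupil: cubic phase (illustrative strength)
otf_cubic = otf_1d(lambda x: 20.0 * x ** 3)
```

Plotting `otf_defocus` or `otf_cubic` in the complex plane exhibits the generalized Cornu spiral described above; the defocused OTF passes through nulls, while the cubic (wavefront-coded) phase redistributes the modulation.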
A novel auditory ossicles membrane and the development of conductive hearing loss in Dmp1-null mice.
Lv, Kun; Huang, Haiyang; Yi, Xing; Chertoff, Mark E; Li, Chaoyuan; Yuan, Baozhi; Hinton, Robert J; Feng, Jian Q
2017-10-01
Genetic mouse models are widely used for understanding human diseases but we know much less about the anatomical structure of the auditory ossicles in the mouse than we do about human ossicles. Furthermore, current studies have mainly focused on disease conditions such as osteomalacia and rickets in patients with hypophosphatemia rickets, although the reason that these patients develop late-onset hearing loss is unknown. In this study, we first analyzed Dmp1-lacZ knock-in auditory ossicles (in which the blue reporter is used to trace DMP1 expression in osteocytes) using X-gal staining and discovered a novel bony membrane surrounding the mouse malleus. This finding was further confirmed by 3-D micro-CT, X-ray, and alizarin red stained images. We speculate that this unique structure amplifies and facilitates sound wave transmissions in two ways: increasing the contact surface between the eardrum and malleus and accelerating the sound transmission due to its mineral content. Next, we documented a progressive deterioration in the Dmp1-null auditory ossicle structures using multiple imaging techniques. The auditory brainstem response test demonstrated a conductive hearing loss in the adult Dmp1-null mice. This finding may help to explain in part why patients with DMP1 mutations develop late-onset hearing loss, and supports the critical role of DMP1 in maintaining the integrity of the auditory ossicles and its bony membrane. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hilditch, David; Harms, Enno; Bugner, Marcus; Rüter, Hannes; Brügmann, Bernd
2018-03-01
A long-standing problem in numerical relativity is the satisfactory treatment of future null-infinity. We propose an approach for the evolution of hyperboloidal initial data in which the outer boundary of the computational domain is placed at infinity. The main idea is to apply the ‘dual foliation’ formalism in combination with hyperboloidal coordinates and the generalized harmonic gauge formulation. The strength of the present approach is that, following the ideas of Zenginoğlu, a hyperboloidal layer can be naturally attached to a central region using standard coordinates of numerical relativity applications. Employing a generalization of the standard hyperboloidal slices, developed by Calabrese et al, we find that all formally singular terms take a trivial limit as we head to null-infinity. A byproduct is a numerical approach for hyperboloidal evolution of nonlinear wave equations violating the null-condition. The height-function method, used often for fixed background spacetimes, is generalized in such a way that the slices can be dynamically ‘waggled’ to maintain the desired outgoing coordinate lightspeed precisely. This is achieved by dynamically solving the eikonal equation. As a first numerical test of the new approach we solve the 3D flat space scalar wave equation. The simulations, performed with the pseudospectral bamps code, show that outgoing waves are cleanly absorbed at null-infinity and that errors converge away rapidly as resolution is increased.
NASA Technical Reports Server (NTRS)
Barry, R. K.; Danchi, W. C.; Deming, L. D.; Richardson, L. J.; Kuchner, M. J.; Seager, S.; Frey, B. J.; Martino, A. J.; Lee, K. A.; Zuray, M.;
2006-01-01
The Fourier-Kelvin Stellar Interferometer (FKSI) is a mission concept for a spacecraft-borne nulling interferometer for high-resolution astronomy and the direct detection of exoplanets and assay of their environments and atmospheres. FKSI is a high angular resolution system operating in the near- to mid-infrared spectral region and is a scientific and technological pathfinder to the Darwin and Terrestrial Planet Finder (TPF) missions. The instrument is configured with an optical system consisting, depending on configuration, of two 0.5-1.0 m telescopes on a 12.5-20 m boom feeding a symmetric, dual Mach-Zehnder beam combiner. We report on progress on our nulling testbed, including the design of an optical pathlength null-tracking control system and the development of a testing regime for hollow-core fiber waveguides proposed for use in wavefront cleanup. We also report results of integrated simulation studies of the planet detection performance of FKSI, and results from an in-depth control system and residual optical pathlength jitter analysis.
Neuroimaging Research: from Null-Hypothesis Falsification to Out-Of-Sample Generalization
ERIC Educational Resources Information Center
Bzdok, Danilo; Varoquaux, Gaël; Thirion, Bertrand
2017-01-01
Brain-imaging technology has boosted the quantification of neurobiological phenomena underlying human mental operations and their disturbances. Since its inception, drawing inference on neurophysiological effects hinged on classical statistical methods, especially, the general linear model. The tens of thousands of variables per brain scan were…
Radar Imaging of Spheres in 3D using MUSIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, D H; Berryman, J G
2003-01-21
We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for one and two spheres. The performance for the three-sphere configurations is complicated by shadowing effects and the greater range of the third sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
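The processing chain described above (SVD of the response matrix, a ~1% noise threshold, evaluation of the MUSIC functional) can be sketched in a toy 1-D setting. Everything below is invented for illustration: the array geometry, wavelength, stand-off distance, and target positions; it is not the authors' electromagnetic code.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setup: 16-element linear array, two point scatterers on a 1-D line
sensors = np.linspace(0.0, 10.0, 16)
targets = [3.0, 7.5]
k = 2.0 * np.pi / 1.0                     # wavenumber (unit wavelength)

def steering(x):
    """Green's-function-like steering vector from point x to every sensor."""
    r = np.abs(sensors - x) + 0.5         # stand-off to avoid r = 0
    v = np.exp(1j * k * r) / r
    return v / np.linalg.norm(v)

# response matrix: rank-1 contribution per scatterer, plus measurement noise
K = sum(np.outer(steering(x), steering(x).conj()) for x in targets)
K += 1e-4 * (rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16)))

# SVD; singular values below ~1% of the largest span the noise subspace
U, s, _ = np.linalg.svd(K)
signal_dim = int(np.sum(s > 0.01 * s[0]))
noise = U[:, signal_dim:]

def music(x):
    """MUSIC functional: large where the steering vector is orthogonal
    to the noise subspace, i.e. at scatterer locations."""
    v = steering(x)
    return 1.0 / (np.linalg.norm(noise.conj().T @ v) ** 2 + 1e-12)

grid = np.linspace(0.0, 10.0, 401)
spectrum = np.array([music(x) for x in grid])
```

The spectrum peaks sharply at the two target positions; the normalization by the array beam pattern discussed above would be an additional division of `spectrum` by a direction-dependent weight.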
Targeting SRC Family Kinases and HSP90 in Lung Cancer
2016-12-01
inhalation of Adeno-Cre, followed by MRI imaging at regular intervals to detect tumor initiation and growth, followed by euthanasia and processing of...experimental endpoint. 10 mice were used per time point. Representative MRI data describing tumor volume (TV) are shown in Figure 1. Quantification of data is...dasatinib, we were able to make several conclusions. Figure 1. Representative MRI images from Nedd9wt or Nedd9 null Kras mutant mice, treated with
Global convergence in leaf respiration from estimates of thermal acclimation across time and space.
Vanderwel, Mark C; Slot, Martijn; Lichstein, Jeremy W; Reich, Peter B; Kattge, Jens; Atkin, Owen K; Bloomfield, Keith J; Tjoelker, Mark G; Kitajima, Kaoru
2015-09-01
Recent compilations of experimental and observational data have documented global temperature-dependent patterns of variation in leaf dark respiration (R), but it remains unclear whether local adjustments in respiration over time (through thermal acclimation) are consistent with the patterns in R found across geographical temperature gradients. We integrated results from two global empirical syntheses into a simple temperature-dependent respiration framework to compare the measured effects of respiration acclimation-over-time and variation-across-space to one another, and to a null model in which acclimation is ignored. Using these models, we projected the influence of thermal acclimation on: seasonal variation in R; spatial variation in mean annual R across a global temperature gradient; and future increases in R under climate change. The measured strength of acclimation-over-time produces differences in annual R across spatial temperature gradients that agree well with global variation-across-space. Our models further project that acclimation effects could potentially halve increases in R (compared with the null model) as the climate warms over the 21st Century. Convergence in global temperature-dependent patterns of R indicates that physiological adjustments arising from thermal acclimation are capable of explaining observed variation in leaf respiration at ambient growth temperatures across the globe. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
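The comparison between an acclimation model and the no-acclimation null can be sketched with a toy Q10 respiration framework. The Q10 value and acclimation slope below are illustrative, chosen only so the example reproduces the qualitative "roughly halved" warming response; they are not fitted values from the study.

```python
import math

Q10 = 2.0            # illustrative short-term temperature sensitivity
T_REF = 25.0         # reference temperature, deg C

def respiration(T, r_ref=1.0):
    """Instantaneous leaf dark respiration with a Q10 response."""
    return r_ref * Q10 ** ((T - T_REF) / 10.0)

def acclimated_r_ref(T_growth, slope=0.035):
    """Basal rate declines as growth temperature rises (thermal acclimation).

    The slope is illustrative, not a value from the syntheses.
    """
    return 1.0 * math.exp(-slope * (T_growth - T_REF))

T0, T1 = 25.0, 29.0   # a 4 deg C warming scenario
null_increase = respiration(T1) - respiration(T0)
accl_increase = (respiration(T1, acclimated_r_ref(T1))
                 - respiration(T0, acclimated_r_ref(T0)))
```

With these illustrative parameters the acclimated increase in R is a bit under half the null-model increase, mirroring the qualitative projection described above.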
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong cross-talk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
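The contrast between truncation (TSVD, as in the original nulling beamformer) and smooth singular-value reweighting (the idea behind NBSS) can be illustrated on a generic gain matrix. The shrinkage function below is a generic Tikhonov-style reweighting controlled by a tuning parameter, standing in for the paper's specific weighting; the matrix is random, not a real MEG leadfield.

```python
import numpy as np

def tsvd_filter(G, k):
    """Truncated SVD: keep only the k strongest components of gain matrix G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def reweighted_filter(G, lam):
    """Reweight singular values instead of truncating them.

    Components are shrunk smoothly by s -> s^3 / (s^2 + lam^2), so weak
    (heavily overlapping) components are suppressed but not discarded.
    Illustrative weighting; NBSS defines its own reweighting function.
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_new = s ** 3 / (s ** 2 + lam ** 2)
    return (U * s_new) @ Vt

rng = np.random.default_rng(1)
G = rng.standard_normal((32, 8))      # toy sensors-by-sources gain matrix
G_tsvd = tsvd_filter(G, 4)            # hard truncation discards 4 components
G_soft = reweighted_filter(G, lam=1.0)  # soft shrinkage keeps full rank
```

Hard truncation reduces the rank outright, discarding exactly the weak components that carry the separating power discussed above; the reweighted matrix keeps full rank while attenuating every component.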
The Four-Quadrant Phase-Mask Coronagraph. I. Principle
NASA Astrophysics Data System (ADS)
Rouan, D.; Riaud, P.; Boccaletti, A.; Clénet, Y.; Labeyrie, A.
2000-11-01
We describe a new type of coronagraph, based on the principle of a phase mask as proposed by Roddier and Roddier a few years ago but using an original mask design found by one of us (D. R.), a four-quadrant binary phase mask (0, π) covering the full field of view at the focal plane. The mutually destructive interferences of the coherent light from the main source produce a very efficient nulling. The computed rejection rate of this coronagraph appears to be very high since, when perfectly aligned and phase-error free, it could in principle reduce the total amount of light from the bright source by a factor of 10^8, corresponding to a gain of 20 mag in brightness at the location of the first Airy ring, relative to the Airy peak. In the real world the gain is of course reduced by a strong factor, but nulling still performs quite well, provided that the perturbation of the phase, for instance due to the Earth's atmosphere, is efficiently corrected by adaptive optics. We show from simulations that a detection at a contrast of 10 mag between a star and a faint companion is achievable in excellent conditions, while 8 mag appears routinely feasible. This coronagraph appears less sensitive to atmospheric turbulence and has a larger dynamic range than other recently proposed nulling techniques: the phase-mask coronagraph (by Roddier and Roddier) or the Achromatic Interfero-Coronagraph (by Gay and Rabbia). We present the principle of the four-quadrant coronagraph and results of a first series of simulations. We compare those results with theoretical performances of other devices. We briefly analyze the different limitations in space or ground-based observations, as well as the issue of manufacturing the device. We also discuss several ways to improve the detection of a faint companion around a bright object.
We conclude that, with respect to previous techniques, an instrument equipped with this coronagraph should have better performance and even enable the imaging of extrasolar giant planets at a young stage, when coupled with additional cleaning techniques.
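The quoted numbers can be checked directly: a rejection factor of 10^8 corresponds to 2.5 log10(10^8) = 20 mag, as stated above. The sketch below performs that conversion and adds the standard small-error null-depth rule of thumb for two-beam destructive interference; the latter is a generic nulling relation, not the leakage law specific to the four-quadrant mask.

```python
import math

def rejection_to_mag(rejection):
    """Convert an intensity rejection factor into a gain in magnitudes."""
    return 2.5 * math.log10(rejection)

# ideal four-quadrant mask: the factor 1e8 quoted in the text -> 20 mag
ideal_gain = rejection_to_mag(1e8)

def null_depth(phase_err_rad, amp_mismatch=0.0):
    """Residual leakage of a two-beam destructive null for small errors.

    Standard small-error expansion N ~ (phi^2 + eps^2) / 4, quoted as a
    generic nulling rule of thumb.
    """
    return (phase_err_rad ** 2 + amp_mismatch ** 2) / 4.0
```

For example, a 2 milliradian residual phase error alone already limits the null depth to about 1e-6, which is why the atmospheric phase must be corrected by adaptive optics before deep rejection is possible.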
Spatial autocorrelation in growth of undisturbed natural pine stands across Georgia
Raymond L. Czaplewski; Robin M. Reich; William A. Bechtold
1994-01-01
Moran's I statistic measures the spatial autocorrelation in a random variable measured at discrete locations in space. Permutation procedures test the null hypothesis that the observed Moran's I value is no greater than that expected by chance. The spatial autocorrelation of gross basal area increment is analyzed for undisturbed, naturally regenerated stands...
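Moran's I and its permutation test can be stated compactly. The sketch below uses an invented one-dimensional transect with simple adjacency weights, not the Georgia pine data; the "growth" values are a smooth synthetic signal.

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation for values at discrete locations."""
    z = values - values.mean()
    num = (weights * np.outer(z, z)).sum()
    return len(values) / weights.sum() * num / (z * z).sum()

def permutation_test(values, weights, n_perm=999, seed=0):
    """One-sided test of H0: observed I is no greater than expected by chance."""
    rng = np.random.default_rng(seed)
    observed = morans_i(values, weights)
    draws = [morans_i(rng.permutation(values), weights) for _ in range(n_perm)]
    p = (1 + sum(d >= observed for d in draws)) / (n_perm + 1)
    return observed, p

# toy transect: 20 plots in a row, adjacent plots are neighbours
n = 20
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

smooth = np.sin(np.linspace(0, np.pi, n))   # spatially structured "growth"
I_obs, p_val = permutation_test(smooth, W)
```

For the smooth synthetic signal the observed I is strongly positive and the permutation p-value is small, so the null hypothesis of no spatial autocorrelation is rejected.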
Davies, M H; Elias, E; Acharya, S; Cotton, W; Faulder, G C; Fryer, A A; Strange, R C
1993-01-01
Studies were carried out to test the hypothesis that the GSTM1 null phenotype at the mu (μ) class glutathione S-transferase 1 locus is associated with an increased predisposition to primary biliary cirrhosis. Starch gel electrophoresis was used to compare the prevalence of the GSTM1 null phenotype (GSTM1 0) in patients with end stage primary biliary cirrhosis and a group of controls without evidence of liver disease. The prevalence of the GSTM1 null phenotype in the primary biliary cirrhosis and control groups was similar: 39% and 45% respectively. In the primary biliary cirrhosis group all subjects were of the common GSTM1 0, GSTM1 A, GSTM1 B or GSTM1 A, B phenotypes, while in the controls one subject showed an isoform with an anodal mobility compatible with it being a product of the putative GSTM1*3 allele. As the GSTM1 phenotype might be changed by the disease process, the polymerase chain reaction was used to amplify the exon 4-exon 5 region of GSTM1 and show that in 13 control subjects and 11 patients with primary biliary cirrhosis, GSTM1 positive and negative genotypes were associated with corresponding GSTM1 expressing and non-expressing phenotypes respectively. The control subject with the GSTM1 3 phenotype showed a positive genotype. PMID:8491405
Tropomodulin 1 Constrains Fiber Cell Geometry during Elongation and Maturation in the Lens Cortex
Nowak, Roberta B.
2012-01-01
Lens fiber cells exhibit a high degree of hexagonal packing geometry, determined partly by tropomodulin 1 (Tmod1), which stabilizes the spectrin-actin network on lens fiber cell membranes. To ascertain whether Tmod1 is required during epithelial cell differentiation to fiber cells or during fiber cell elongation and maturation, the authors quantified the extent of fiber cell disorder in the Tmod1-null lens and determined locations of disorder by confocal microscopy and computational image analysis. First, nearest neighbor analysis of fiber cell geometry in Tmod1-null lenses showed that disorder is confined to focal patches. Second, differentiating epithelial cells at the equator aligned into ordered meridional rows in Tmod1-null lenses, with disordered patches first observed in elongating fiber cells. Third, as fiber cells were displaced inward in Tmod1-null lenses, total disordered area increased due to increased sizes (but not numbers) of individual disordered patches. The authors conclude that Tmod1 is required first to coordinate fiber cell shapes and interactions during tip migration and elongation and second to stabilize ordered fiber cell geometry during maturation in the lens cortex. An unstable spectrin-actin network without Tmod1 may result in imbalanced forces along membranes, leading to fiber cell rearrangements during elongation, followed by propagation of disorder as fiber cells mature. PMID:22473940
Spatial effects in real networks: Measures, null models, and applications
NASA Astrophysics Data System (ADS)
Ruzzenenti, Franco; Picciolo, Francesco; Basosi, Riccardo; Garlaschelli, Diego
2012-12-01
Spatially embedded networks are shaped by a combination of purely topological (space-independent) and space-dependent formation rules. While it is quite easy to artificially generate networks where the relative importance of these two factors can be varied arbitrarily, it is much more difficult to disentangle these two architectural effects in real networks. Here we propose a solution to this problem, by introducing global and local measures of spatial effects that, through a comparison with adequate null models, effectively filter out the spurious contribution of nonspatial constraints. Our filtering allows us to consistently compare different embedded networks or different historical snapshots of the same network. As a challenging application we analyze the World Trade Web, whose topology is known to depend on geographic distances but is also strongly determined by nonspatial constraints (degree sequence or gross domestic product). Remarkably, we are able to detect weak but significant spatial effects both locally and globally in the network, showing that our method succeeds in retrieving spatial information even when nonspatial factors dominate. We finally relate our results to the economic literature on gravity models and trade globalization.
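The filtering idea, comparing a spatial measure of the real network against a null model that preserves its nonspatial constraints, can be illustrated with total edge length as the spatial measure and degree-preserving double-edge swaps as the null model. Both choices are simplifications for illustration; the paper defines its own global and local measures and null models.

```python
import random

def edge_length(a, b, pos):
    ax, ay = pos[a]; bx, by = pos[b]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def spatial_cost(edges, pos):
    """Total Euclidean edge length: a crude global measure of spatial effects."""
    return sum(edge_length(a, b, pos) for a, b in edges)

def rewired_null(edges, n_swaps, rng):
    """Degree-preserving null model built from random double-edge swaps."""
    edges = [tuple(e) for e in edges]
    for _ in range(n_swaps):
        (a, b), (c, d) = rng.sample(edges, 2)
        forbidden = set(edges) | {(y, x) for x, y in edges}
        if len({a, b, c, d}) == 4 and (a, d) not in forbidden and (c, b) not in forbidden:
            edges.remove((a, b)); edges.remove((c, d))
            edges += [(a, d), (c, b)]
    return edges

rng = random.Random(0)
pos = {i: (rng.random(), rng.random()) for i in range(30)}

# spatially embedded network: link each node to its two nearest neighbours
edges = []
for i in pos:
    for j in sorted((j for j in pos if j != i),
                    key=lambda j: edge_length(i, j, pos))[:2]:
        if (i, j) not in edges and (j, i) not in edges:
            edges.append((i, j))

obs = spatial_cost(edges, pos)
null_costs = [spatial_cost(rewired_null(edges, 200, rng), pos)
              for _ in range(20)]
mean_null = sum(null_costs) / len(null_costs)
```

Because the rewired ensemble keeps the degree sequence but scrambles geography, the gap between `obs` and `mean_null` isolates the genuinely spatial part of the architecture, which is the spirit of the filtering described above.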
NASA Technical Reports Server (NTRS)
Hamer, H. A.; Johnson, K. G.; Young, J. W.
1985-01-01
An analysis is performed to compare decoupled and linear quadratic regulator (LQR) procedures for the control of a large, flexible space antenna. Control objectives involve: (1) commanding changes in the rigid-body modes, (2) nulling initial disturbances in the rigid-body modes, or (3) nulling initial disturbances in the first three flexible modes. Control is achieved with two three-axis control-moment gyros located on the antenna column. Results are presented to illustrate various effects on control requirements for the two procedures. These effects include errors in the initial estimates of state variables, variations in the type, number, and location of sensors, and deletions of state-variable estimates for certain flexible modes after control activation. The advantages of incorporating a time lag in the control feedback are also illustrated. In addition, the effects of inoperative-control situations are analyzed with regard to control requirements and resultant modal responses. Comparisons are included which show the effects of perfect state feedback with no residual modes (ideal case). Time-history responses are presented to illustrate the various effects on the control procedures.
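For reference, the LQR side of such a comparison reduces to solving a Riccati equation for the feedback gain. Below is a minimal discrete-time sketch for a single double-integrator (rigid-body) mode, nulling an initial disturbance as in control objective (2); it is not the paper's antenna model or control-moment-gyro actuator layout, and the weights are illustrative.

```python
import numpy as np

# double-integrator rigid-body mode, discretized with time step dt
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])
Q = np.eye(2)              # state weighting
R = np.array([[1.0]])      # control weighting

# solve the discrete algebraic Riccati equation by fixed-point iteration
P = np.eye(2)
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # LQR gain
    P = Q + A.T @ P @ (A - B @ K)

# null an initial rigid-body disturbance with state feedback u = -K x
x = np.array([1.0, 0.0])   # initial attitude error, zero rate
for _ in range(200):
    x = (A - B @ K) @ x
```

The closed-loop state decays toward zero, which is the LQR analogue of the disturbance-nulling objectives listed above; a decoupled design would instead assign the feedback mode by mode.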
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. 
With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
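The core NSMC step, adding the null-space component of random parameter fields to the calibrated parameters so that every realization still reproduces the observations, can be sketched for a toy linear model. The real method works with a linearized Jacobian around a calibrated nonlinear flow-and-transport model; the matrix and prediction functional below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy linear model y = J p with more parameters than observations
n_obs, n_par = 5, 12
J = rng.standard_normal((n_obs, n_par))
p_true = rng.standard_normal(n_par)
y_obs = J @ p_true

# "calibration": minimum-norm least-squares parameter estimate
p_cal = np.linalg.lstsq(J, y_obs, rcond=None)[0]

# projector onto the null space of J: I - V1 V1^T, V1 spanning the row space
_, _, Vt = np.linalg.svd(J)
V1 = Vt[:n_obs].T
proj_null = np.eye(n_par) - V1 @ V1.T

# NSMC: keep only the null-space component of each random parameter field,
# so every realization fits the calibration data exactly
fields = [p_cal + proj_null @ rng.standard_normal(n_par) for _ in range(200)]

w = rng.standard_normal(n_par)                  # a linear prediction w^T p
spread = np.std([w @ p for p in fields])        # predictive uncertainty
misfit = max(np.linalg.norm(J @ p - y_obs) for p in fields)
```

Every realization matches the data to machine precision, yet the prediction still varies across the ensemble: calibration constrains only the solution-space component of the parameters, which is exactly the source of predictive uncertainty the NSMC analysis quantifies.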
A search for chiral signatures on Mars.
Sparks, William B; Hough, James H; Bergeron, Louis E
2005-12-01
It is thought that the chiral molecules of living material can induce circular polarization in light at levels much higher than expected from abiotic processes. We therefore obtained high quality imaging circular polarimetry of the martian surface during the favorable opposition of 2003 to seek evidence of anomalous optical activity. We used two narrow-band filters covering 43% of the martian surface, 15% of it in-depth. With polarization noise levels <0.1% (4.3σ upper limits 0.2-0.3%) and a spatial resolution of 210 km, we did not find any regions of circular polarization. When data were averaged over the observed face of the planet, we did see a small non-zero circular polarization of ≈0.02%, which may be due to effects associated with the opposition configuration, though it is at the limit of the instrumental capability. Our observations covered only a small fraction of parameter space, so although we obtained a null result, we cannot exclude the presence of optical activity at other wavelengths, in other locations, or at higher spatial resolution.
Sanz-Martín, José M; Pacheco-Arjona, José Ramón; Bello-Rico, Víctor; Vargas, Walter A; Monod, Michel; Díaz-Mínguez, José M; Thon, Michael R; Sukno, Serenella A
2016-09-01
Colletotrichum graminicola causes maize anthracnose, an agronomically important disease with a worldwide distribution. We have identified a fungalysin metalloprotease (Cgfl) with a role in virulence. Transcriptional profiling experiments and live cell imaging show that Cgfl is specifically expressed during the biotrophic stage of infection. To determine whether Cgfl has a role in virulence, we obtained null mutants lacking Cgfl and performed pathogenicity and live microscopy assays. The appressorium morphology of the null mutants is normal, but they exhibit delayed development during the infection process on maize leaves and roots, showing that Cgfl has a role in virulence. In vitro chitinase activity assays of leaves infected with wild-type and null mutant strains show that, in the absence of Cgfl, maize leaves exhibit increased chitinase activity. Phylogenetic analyses show that Cgfl is highly conserved in fungi. Similarity searches, phylogenetic analysis and transcriptional profiling show that C. graminicola encodes two LysM domain-containing homologues of Ecp6, suggesting that this fungus employs both Cgfl-mediated and LysM protein-mediated strategies to control chitin signalling. © 2015 BSPP and John Wiley & Sons Ltd.
Approaches for Achieving Broadband Achromatic Phase Shifts for Visible Nulling Coronagraphy
NASA Technical Reports Server (NTRS)
Bolcar, Matthew R.; Lyon, Richard G.
2012-01-01
Visible nulling coronagraphy is one of the few approaches to the direct detection and characterization of Jovian and Terrestrial exoplanets that works with segmented-aperture telescopes. Jovian and Terrestrial planets require at least 10^-9 and 10^-10 image-plane contrasts, respectively, within the spectral bandpass, and thus require a nearly achromatic pi-phase difference between the arms of the interferometer. An achromatic pi-phase shift can be achieved by several techniques, including sequential angled thick glass plates of varying dispersive materials, distributed thin-film multilayer coatings, and techniques that leverage the polarization-dependent phase shift of total internal reflection. Herein we describe two such techniques: sequential thick glass plates and Fresnel rhomb prisms. A viable technique must achieve the achromatic phase shift while simultaneously minimizing the intensity difference, chromatic beam spread, and polarization variation between the arms. In this paper we describe the above techniques and report on efforts to design, model, fabricate, and align each, along with the trades associated with each technique that will lead to an implementation of the most promising one in Goddard's Visible Nulling Coronagraph (VNC).
NASA Tech Briefs, February 2011
NASA Technical Reports Server (NTRS)
2011-01-01
Topics covered include: Multi-Segment Radius Measurement Using an Absolute Distance Meter Through a Null Assembly; Fiber-Optic Magnetic-Field-Strength Measurement System for Lightning Detection; Photocatalytic Active Radiation Measurements and Use; Computer Generated Hologram System for Wavefront Measurement System Calibration; Non-Contact Thermal Properties Measurement with Low-Power Laser and IR Camera System; SpaceCube 2.0: An Advanced Hybrid Onboard Data Processor; CMOS Imager Has Better Cross-Talk and Full-Well Performance; High-Performance Wireless Telemetry; Telemetry-Based Ranging; JWST Wavefront Control Toolbox; Java Image I/O for VICAR, PDS, and ISIS; X-Band Acquisition Aid Software; Antimicrobial-Coated Granules for Disinfecting Water; Range 7 Scanner Integration with PaR Robot Scanning System; Methods of Antimicrobial Coating of Diverse Materials; High-Operating-Temperature Barrier Infrared Detector with Tailorable Cutoff Wavelength; A Model of Reduced Kinetics for Alkane Oxidation Using Constituents and Species for N-Heptane; Thermally Conductive Tape Based on Carbon Nanotube Arrays; Two Catalysts for Selective Oxidation of Contaminant Gases; Nanoscale Metal Oxide Semiconductors for Gas Sensing; Lightweight, Ultra-High-Temperature, CMC-Lined Carbon/Carbon Structures; Sample Acquisition and Handling System from a Remote Platform; Improved Rare-Earth Emitter Hollow Cathode; High-Temperature Smart Structures for Engine Noise Reduction and Performance Enhancement; Cryogenic Scan Mechanism for Fourier Transform Spectrometer; Piezoelectric Rotary Tube Motor; Thermoelectric Energy Conversion Technology for High-Altitude Airships; Combustor Computations for CO2-Neutral Aviation; Use of Dynamic Distortion to Predict and Alleviate Loss of Control; Cycle Time Reduction in Trapped Mercury Ion Atomic Frequency Standards; and A (201)Hg+ Comagnetometer for (199)Hg+ Trapped Ion Space Atomic Clocks.
Characterization of the Stabilized Test Bench of Nulling Interferometry PERSÉE
NASA Astrophysics Data System (ADS)
Lozi, Julien; Ollivier, M.; Cassaing, F.; Le Duigou, J.; CNES; Onera/Dota/HRA; IAS; LESIA; OCA; TAS
2013-01-01
There are two problems with the observation of exoplanets: the contrast between the planet and the star, and their very small separation. One technique that addresses both is nulling interferometry: two pupils are recombined to create a destructive interference on the star, and their baseline is adjusted to create a constructive interference on the planet. However, to ensure a sufficient extinction of the star, the optical path difference between the beams must be controlled at the nanometer level, and the pointing must be better than one hundredth of the Airy disk, despite external disturbances. To validate the critical points of such a space mission, a laboratory demonstrator, PERSÉE, was defined by a consortium led by the French space agency CNES, including IAS, LESIA, ONERA, OCA and Thales Alenia Space, and integrated at Paris Observatory. This bench simulates the entire space mission (interferometer and nanometric cophasing system). Its goal is to deliver and maintain an extinction of 10^-4, stable at better than 10^-5 over a few hours, in the presence of typical injected disturbances. My thesis work consisted of integrating the bench in successive stages and developing calibration procedures, which allowed me to characterize the critical elements separately before grouping them. After having implemented the control loops of the cophasing system, their precise analysis allowed me to reduce the residual OPD to 0.3 nm rms and the residual tip/tilt to 0.4% of the Airy disk, despite disturbances of tens of nanometers comprising several tens of vibrational frequencies between 1 and 100 Hz. This was achieved by the implementation of a linear quadratic Gaussian controller, parameterized by a preliminary measurement of the disturbance to be minimized. Thanks to these excellent results, I obtained on the band [1.65 - 2.45] µm a record null rate of 8.8x10^-6, stabilized at 9x10^-7 over a few hours, an order of magnitude better than the original specifications.
An extrapolation of these results to the case of a space mission shows that the expected performance is achievable if the available flux is sufficiently high. With telescopes of 40 cm and a control frequency around 100 Hz, stars brighter than magnitude 9 should be observable.
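As a rough check on these numbers, the null depth set by residual OPD alone can be estimated with the standard two-beam leakage formula N ≈ (πδ/λ)². The snippet below evaluates it at the 0.3 nm rms quoted above; it is a single-term estimate that ignores tip/tilt, intensity mismatch and polarization, with the band centre taken as an assumption:

```python
import numpy as np

def null_depth_from_opd(opd_rms, wavelength):
    """Small-phase-error leakage of a two-beam nuller: N ~ (pi*delta/lambda)^2."""
    return (np.pi * opd_rms / wavelength) ** 2

# 0.3 nm rms residual OPD at the 2.05 um centre of the [1.65 - 2.45] um band
N = null_depth_from_opd(0.3e-9, 2.05e-6)
print(f"{N:.2e}")
```

The result is of order 2x10^-7, below the 8.8x10^-6 measured null, suggesting (under these simplifying assumptions) that the measured null is not limited by OPD jitter alone.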
ESA to test the smartest technique for detecting extrasolar planets from the ground
NASA Astrophysics Data System (ADS)
2002-03-01
GENIE will use ESO's Very Large Telescopes (Credits: European Southern Observatory). This photo shows an aerial view of the observing platform on the top of Paranal mountain (from late 1999), with the four enclosures. Three 1.8-m VLTI Auxiliary Telescopes (ATs) and the paths of the light beams have been superposed on the photo. Also seen are some of the 30 'stations' where the ATs will be positioned for observations and from where the light beams from the telescopes can enter the Interferometric Tunnel below. The straight structures are supports for the rails on which the telescopes can move from one station to another. The Interferometric Laboratory (partly subterranean) is at the centre of the platform. How nulling interferometry works (Credits: ESA 2002/Medialab): In nulling interferometry, light from a distant star (red beams) hits each telescope, labelled T1 and T2, simultaneously. Before the resultant light beams are combined, the beam from one telescope is delayed by half a wavelength. This means that when the rays are brought together, peaks from one telescope line up with troughs from the other and so are cancelled out (represented by the straight red line), leaving no starlight. Light from a planet (blue beams), orbiting the star, enters the telescopes at an angle. This introduces a delay in the light reaching the second telescope. So, even after the half-wavelength change in one of the rays, when the beams are combined they are reinforced (represented by the large blue waves) rather than cancelled out. Nulling interferometry combines the signal from a number of different telescopes in such a way that the light from the central star is cancelled out, leaving the much fainter planet easier to see. This is possible because light is a wave with peaks and troughs. Usually when combining light from two or more telescopes, a technique called interferometry, the peaks are lined up with one another to boost the signal.
In nulling interferometry, however, the peaks are lined up with the troughs so they cancel out to nothing and the star disappears. Planets in orbit around the star show up, however, because they are offset from the central star and their light takes different paths through the telescope system. ESA and ESO will build a new instrument called GENIE (Ground-based European Nulling Interferometer Experiment) to perform nulling interferometry using ESO's Very Large Telescope (VLT), a collection of four 8-metre telescopes in Chile. It will be the biggest investigation of nulling interferometry to date. "It's being tested in the lab in a number of places but we can do more," says Malcolm Fridlund, project scientist for the Darwin mission at the European Space Research and Technology Centre, the Netherlands. "We intend to use the world's largest telescope and the world's largest interferometer to get very high resolution." Using GENIE to perfect this technique will provide invaluable information for engineers about how to build the 'hub' spacecraft of the Darwin flotilla. Scheduled for launch in the middle of the next decade Darwin is a collection of six space telescopes and two other spacecraft, which will together search for Earth-like planets around nearby stars. The hub will combine the light from the telescopes. "If you see the way of getting to Darwin as being outlined by a number of technological milestones this is one of the most important ones," says Malcolm Fridlund. Once up and running, GENIE will also provide a training ground for astronomers who will later use Darwin. For example, it will allow them to perfect their methods of interpreting Darwin data because, as well as the engineering tests, GENIE will be capable of real science. One of its greatest tasks will be to develop the target list of stars for Darwin to study. 
As recently discovered by ESA's Ulysses space probe, the signature of a planetary system is probably a ring of dust surrounding the central star. GENIE will be able to look for these dust rings and make sure that the dust is not so dense that it will mask the planets from view. GENIE will see failed stars, known as brown dwarfs, and, if the instrument performs to expectations, may also see some of the already-discovered giant planets. So far, these worlds have never been seen, only inferred to exist by the effect they have on their parent stars. From Earth, two things handicap nulling interferometry. Firstly, the atmosphere smears out the starlight so that its cancellation is a hundred times less effective than it will be in space. Secondly, planets are most easily seen using infrared wavelengths because they are warm. So, observing from the surface of Earth, itself a planet emitting infrared radiation, is like peering through fog. In space, these two problems disappear and Darwin will be able to see smaller, Earth-like worlds. "We have calculated that with Darwin we could see an 'Earth' if it were ten light-years away with a few hours of observation time. With the VLT, it would be impossible because of the atmosphere. Even if the atmosphere weren't there it would take 450 days because of the infrared background released by the Earth. So we have to go into space," says Fridlund. GENIE is expected to be on-line by 2006.
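The geometry described above, a half-wave delay cancelling the on-axis star while the off-axis planet survives, reduces to the classic Bracewell transmission pattern T(θ) = sin²(πBθ/λ). A minimal sketch with an illustrative baseline and wavelength (not GENIE's actual parameters):

```python
import numpy as np

def bracewell_transmission(theta, baseline, wavelength):
    """On-sky response of a two-telescope nuller with an internal pi
    phase shift: T = sin^2(pi * B * theta / lambda)."""
    return np.sin(np.pi * baseline * theta / wavelength) ** 2

B, lam = 47.0, 10e-6          # illustrative: 47 m baseline, 10 um (mid-IR)
star = bracewell_transmission(0.0, B, lam)             # on-axis star: nulled
planet_angle = lam / (2 * B)                           # first bright fringe
planet = bracewell_transmission(planet_angle, B, lam)  # fully transmitted
print(star, planet)
```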
Tonnessen-Murray, Crystal; Ungerleider, Nathan A; Rao, Sonia G; Wasylishen, Amanda R; Frey, Wesley D; Jackson, James G
2018-05-28
p53 is a transcription factor that regulates expression of genes involved in cell cycle arrest, senescence, and apoptosis. TP53 harbors mutations that inactivate its transcriptional activity in roughly 30% of breast cancers, and these tumors are much more likely to undergo a pathological complete response to chemotherapy. Thus, the gene expression program activated by wild-type p53 contributes to a poor response. We used an in vivo genetic model system to comprehensively define the p53- and p21-dependent genes and pathways modulated in tumors following doxorubicin treatment. We identified genes differentially expressed in spontaneous mammary tumors harvested from treated MMTV-Wnt1 mice that respond poorly (Trp53+/+) or favorably (Trp53-null) and those that lack the critical senescence/arrest p53 target gene Cdkn1a. Trp53 wild-type tumors differentially expressed nearly 10-fold more genes than Trp53-null tumors after treatment. Pathway analyses showed that genes involved in cell cycle, senescence, and inflammation were enriched in treated Trp53 wild-type tumors; however, no genes/pathways were identified that adequately explain the superior cell death/tumor regression observed in Trp53-null tumors. Cdkn1a-null tumors that retained arrest capacity (responded poorly) and those that proliferated (responded well) after treatment had remarkably different gene regulation. For instance, Cdkn1a-null tumors that arrested upregulated Cdkn2a (p16), suggesting an alternative, p21-independent route to arrest. Live animal imaging of longitudinal gene expression of a senescence/inflammation gene reporter in Trp53+/+ tumors showed induction during and after chemotherapy treatment, while tumors were arrested, but expression rapidly diminished immediately upon relapse. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Submarine harbor navigation using image data
NASA Astrophysics Data System (ADS)
Stubberud, Stephen C.; Kramer, Kathleen A.
2017-01-01
The process of ingress and egress of a United States Navy submarine is a human-intensive process that requires numerous individuals to monitor locations and watch for hazards. Sailors pass vocal information to the bridge, where it is processed manually. There is interest in using video imaging of the periscope view to provide navigation more automatically within harbors and other points of ingress and egress. In this paper, video-based navigation is examined as a target-tracking problem. While some image-processing methods claim to provide range information, the moving-platform problem and weather concerns, such as fog, reduce the effectiveness of these range estimates. The video-navigation problem then becomes an angle-only tracking problem. Angle-only tracking is known to be fraught with difficulties, because the unobservable space is not the null space. When a Kalman filter estimator is used to perform the tracking, significant errors can arise that could endanger the submarine. This work analyzes the performance of the Kalman filter when angle-only measurements are used to provide the target tracks. This paper addresses estimation unobservability and the minimal set of requirements needed to address it in this complex but real-world problem. Three major issues are addressed: knowledge of the navigation beacons'/landmarks' locations, the minimal number of these beacons needed to maintain the course, and the update rates of the angles of the landmarks as the periscope rotates and landmarks become obscured due to blockage and weather. The goal is to address the problem of navigation to and from the docks, while maintaining the traversing of the harbor channel based on maritime rules, relying solely on the image-based data. The minimal number of beacons will be considered. For this effort, the image correlation from frame to frame is assumed to be achieved perfectly. Variation in the update rates and the dropping of data due to rotation and obscuration are considered.
The analysis will be based on a simple straight-line channel harbor entry to the dock, similar to a submarine entering the submarine port in San Diego.
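The angle-only tracking setup can be illustrated with a single extended-Kalman-filter update against one bearing measurement to a known landmark. Everything below (state layout, beacon position, noise values) is a hypothetical sketch of the problem formulation, not the authors' implementation:

```python
import numpy as np

def ekf_bearing_update(x, P, z, beacon, R=np.deg2rad(0.5) ** 2):
    """One EKF update with a single bearing (angle-only) measurement
    to a landmark at a known position."""
    dx, dy = beacon[0] - x[0], beacon[1] - x[1]
    r2 = dx * dx + dy * dy
    h = np.arctan2(dy, dx)                         # predicted bearing
    H = np.array([[dy / r2, -dx / r2, 0.0, 0.0]])  # d(bearing)/d(state)
    y = np.arctan2(np.sin(z - h), np.cos(z - h))   # angle-wrapped innovation
    S = H @ P @ H.T + R
    K = P @ H.T / S                                # Kalman gain
    x_new = x + (K * y).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

x = np.array([0.0, 0.0, 1.0, 0.0])     # state: [x, y, vx, vy]
P = np.diag([100.0, 100.0, 1.0, 1.0])  # initial uncertainty
beacon = np.array([500.0, 200.0])      # hypothetical landmark position
z = np.arctan2(200.0 - 10.0, 500.0 - 5.0)   # bearing from true position (5, 10)
x, P = ekf_bearing_update(x, P, z, beacon)
```

Note that a single bearing constrains only one direction in the state space; recovering the full track requires multiple beacons or motion over time, which is exactly the observability question the paper studies.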
Hubble Space Telescope: SRM/QA observations and lessons learned
NASA Technical Reports Server (NTRS)
Rodney, George A.
1990-01-01
The Hubble Space Telescope (HST) Optical Systems Board of Investigation was established on July 2, 1990 to review, analyze, and evaluate the facts and circumstances regarding the manufacture, development, and testing of the HST Optical Telescope Assembly (OTA). Specifically, the board was tasked to ascertain what caused the spherical aberration and how it escaped notice until on-orbit operation. The error that caused the on-orbit spherical aberration in the primary mirror was traced to the assembly process of the Reflective Null Corrector, one of the three Null Correctors developed as special test equipment (STE) to measure and test the primary mirror. Therefore, the safety, reliability, maintainability, and quality assurance (SRM&QA) investigation covers the events and the overall product assurance environment during the manufacturing phase of the primary mirror and Null Correctors (from 1978 through 1981). The SRM&QA issues that were identified during the HST investigation are summarized. The crucial product assurance requirements (including nonconformance processing) for the HST are examined. The history of Quality Assurance (QA) practices at Perkin-Elmer (P-E) for the period under investigation are reviewed. The importance of the information management function is discussed relative to data retention/control issues. Metrology and other critical technical issues also are discussed. The SRM&QA lessons learned from the investigation are presented along with specific recommendations. Appendix A provides the MSFC SRM&QA report. Appendix B provides supplemental reference materials. Appendix C presents the findings of the independent optical consultants, Optical Research Associates (ORA). Appendix D provides further details of the fault-tree analysis portion of the investigation process.
Effects of the space environment on the health and safety of space workers
NASA Technical Reports Server (NTRS)
Hull, W. E.
1980-01-01
Large numbers of individuals are required to work in space to assemble and operate a Solar Power Satellite. The physiological and behavioral consequences for large groups of men and women who perform complex tasks in the vehicular or extravehicular environments over long periods of orbital stay time were considered. The most disturbing consequences of exposure to the null gravity environment found relate to: (1) a generalized cardiovascular deconditioning along with loss of a significant amount of body fluid volume; (2) loss of bone minerals and muscle mass; and (3) degraded performance of neural mechanisms which govern equilibrium and spatial orientation.
Effects of the space environment on the health and safety of space workers
NASA Astrophysics Data System (ADS)
Hull, W. E.
1980-07-01
Large numbers of individuals are required to work in space to assemble and operate a Solar Power Satellite. The physiological and behavioral consequences for large groups of men and women who perform complex tasks in the vehicular or extravehicular environments over long periods of orbital stay time were considered. The most disturbing consequences of exposure to the null gravity environment found relate to: (1) a generalized cardiovascular deconditioning along with loss of a significant amount of body fluid volume; (2) loss of bone minerals and muscle mass; and (3) degraded performance of neural mechanisms which govern equilibrium and spatial orientation.
Ion-trajectory analysis for micromotion minimization and the measurement of small forces
NASA Astrophysics Data System (ADS)
Gloger, Timm F.; Kaufmann, Peter; Kaufmann, Delia; Baig, M. Tanveer; Collath, Thomas; Johanning, Michael; Wunderlich, Christof
2015-10-01
For experiments with ions confined in a Paul trap, minimization of micromotion is often essential. In order to diagnose and compensate micromotion we have implemented a method that allows for finding the position of the radio-frequency (rf) null reliably and efficiently, in principle, without any variation of direct current (dc) voltages. We apply a trap modulation technique and focus-scanning imaging to extract three-dimensional ion positions for various rf drive powers and analyze the power dependence of the equilibrium position of the trapped ion. In contrast to commonly used methods, the search algorithm directly makes use of a physical effect as opposed to efficient numerical minimization in a high-dimensional parameter space. Using this method we achieve a compensation of the residual electric field that causes excess micromotion in the radial plane of a linear Paul trap down to 0.09 Vm-1 . Additionally, the precise position determination of a single harmonically trapped ion employed here can also be utilized for the detection of small forces. This is demonstrated by determining light pressure forces with a precision of 135 yN. As the method is based on imaging only, it can be applied to several ions simultaneously and is independent of laser direction and thus well suited to be used with, for example, surface-electrode traps.
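The power dependence exploited here can be sketched as follows: with a residual stray field, the ion's equilibrium position is displaced from the rf null by an amount inversely proportional to the drive power (the secular confinement stiffens with power), so a linear fit in 1/P recovers the null position. The numbers below are synthetic stand-ins for the focus-scanned image positions, not measured data:

```python
import numpy as np

# u(P) = u_null + c*E_stray / P : the stray-field displacement shrinks
# as the rf confinement stiffens with drive power P.
powers = np.array([0.5, 1.0, 1.5, 2.0, 3.0])   # rf drive power, arbitrary units
u_null_true, cE = 12.0, 4.0                    # assumed values for the sketch
positions = u_null_true + cE / powers          # synthetic ion positions
slope, intercept = np.polyfit(1.0 / powers, positions, 1)
print(intercept)   # recovered rf-null position
```

The intercept of the fit is the position the ion would occupy at infinite power, i.e. the rf null, and the slope is proportional to the stray field to be compensated.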
NOTE: Circular symmetry in topologically massive gravity
NASA Astrophysics Data System (ADS)
Deser, S.; Franklin, J.
2010-05-01
We re-derive, compactly, a topologically massive gravity (TMG) decoupling theorem: source-free TMG separates into its Einstein and Cotton sectors for spaces with a hypersurface-orthogonal Killing vector, here concretely for circular symmetry. We then generalize the theorem to include matter; surprisingly, the single Killing symmetry also forces conformal invariance, requiring the sources to be null.
Matrix theory interpretation of discrete light cone quantization string worldsheets
Grignani; Orland; Paniak; Semenoff
2000-10-16
We study the null compactification of type-IIA string perturbation theory at finite temperature. We prove a theorem about Riemann surfaces establishing that the moduli spaces of infinite-momentum-frame superstring worldsheets are identical to those of branched-cover instantons in the matrix-string model conjectured to describe M theory. This means that the identification of string degrees of freedom in the matrix model proposed by Dijkgraaf, Verlinde, and Verlinde is correct and that its natural generalization produces the moduli space of Riemann surfaces at all orders in the genus expansion.
Lahvis, Garet P; Pyzalski, Robert W; Glover, Edward; Pitot, Henry C; McElwee, Matthew K; Bradfield, Christopher A
2005-03-01
A developmental role for the Ahr locus has been indicated by the observation that mice harboring a null allele display a portocaval vascular shunt throughout life. To define the ontogeny and determine the identity of this shunt, we developed a visualization approach in which three-dimensional (3D) images of the developing liver vasculature are generated from serial sections. Applying this 3D visualization approach at multiple developmental times allowed us to demonstrate that the portocaval shunt observed in Ahr-null mice is the remnant of an embryonic structure and is not acquired after birth. We observed that the shunt is found in late-stage wild-type embryos but closes during the first 48 h of postnatal life. In contrast, the same structure fails to close in Ahr-null mice and remains open throughout adulthood. The ontogeny of this shunt, along with its 3D position, allowed us to conclude that this shunt is a patent developmental structure known as the ductus venosus (DV). Upon searching for a physiological cause of the patent DV, we observed that during the first 48 h, most major hepatic veins, such as the portal and umbilical veins, normally decrease in diameter but do not change in Ahr-null mice. This observation suggests that failure of the DV to close may be the consequence of increased blood pressure or a failure in vasoconstriction in the developing liver.
Analysis of recoverable current from one component of magnetic flux density in MREIT and MRCDI.
Park, Chunjae; Lee, Byung Il; Kwon, Oh In
2007-06-07
Magnetic resonance current density imaging (MRCDI) provides a current density image by measuring the induced magnetic flux density within the subject with a magnetic resonance imaging (MRI) scanner. Magnetic resonance electrical impedance tomography (MREIT) has been focused on extracting some useful information of the current density and conductivity distribution in the subject Omega using measured B(z), one component of the magnetic flux density B. In this paper, we analyze the map Tau from current density vector field J to one component of magnetic flux density B(z) without any assumption on the conductivity. The map Tau provides an orthogonal decomposition J = J(P) + J(N) of the current J where J(N) belongs to the null space of the map Tau. We explicitly describe the projected current density J(P) from measured B(z). Based on the decomposition, we prove that B(z) data due to one injection current guarantee a unique determination of the isotropic conductivity under assumptions that the current is two-dimensional and the conductivity value on the surface is known. For a two-dimensional dominating current case, the projected current density J(P) provides a good approximation of the true current J without accumulating noise effects. Numerical simulations show that J(P) from measured B(z) is quite similar to the target J. Biological tissue phantom experiments compare J(P) with the reconstructed J via the reconstructed isotropic conductivity using the harmonic B(z) algorithm.
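In a discrete setting the decomposition J = J(P) + J(N) is simply the orthogonal split induced by the row space and null space of the map. A minimal sketch, with a random matrix standing in for a discretized Tau:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete analogue of J = J(P) + J(N): T maps a current-density vector
# to one component of the magnetic flux density; J_P is the part of J
# recoverable from the data and J_N lies in the null space of T.
T = rng.standard_normal((20, 50))     # underdetermined: large null space
J = rng.standard_normal(50)
J_P = np.linalg.pinv(T) @ (T @ J)     # projection onto the row space of T
J_N = J - J_P
print(np.allclose(T @ J_N, 0))        # J_N is invisible to the data
print(np.isclose(J_P @ J_N, 0))       # the decomposition is orthogonal
```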
Alternative Derivations for the Poisson Integral Formula
ERIC Educational Resources Information Center
Chen, J. T.; Wu, C. S.
2006-01-01
Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM free of the concept of image point by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…
Light rays and the tidal gravitational pendulum
NASA Astrophysics Data System (ADS)
Farley, A. N. St J.
2018-05-01
Null geodesic deviation in classical general relativity is expressed in terms of a scalar function η(λ), defined as the invariant magnitude of the connecting vector between neighbouring light rays in a null geodesic congruence, projected onto a two-dimensional screen space orthogonal to the rays, where λ is an affine parameter along the rays. We demonstrate that η satisfies a harmonic oscillator-like equation with a λ-dependent frequency, which comprises terms accounting for local matter affecting the congruence and for tidal gravitational effects from distant matter or gravitational waves passing through the congruence, the latter represented by the amplitude of a complex Weyl driving term. Oscillating solutions for η imply the presence of conjugate or focal points along the rays. A polarisation angle β is introduced, comprising the orientation of the connecting vector on the screen space and the phase of the Weyl driving term. Interpreting β as the polarisation of a gravitational wave encountering the light rays, we consider linearly polarised waves in the first instance. A highly non-linear, second-order ordinary differential equation (the tidal pendulum equation) is then derived, so-called due to its analogy with the equation describing a non-linear, variable-length pendulum oscillating under gravity. The variable pendulum length is represented by the connecting-vector magnitude, whilst the acceleration due to gravity in the familiar pendulum formulation is effectively replaced by the amplitude of the Weyl driving term. A tidal-torque interpretation is also developed, where the torque is expressed as a coupling between the moment of inertia of the pendulum and the tidal gravitational field. Precessional effects are briefly discussed. A solution to the tidal pendulum equation in terms of familiar gravitational lensing variables is presented.
The potential emergence of chaos in general relativity is discussed in the context of circularly, elliptically or randomly polarised gravitational waves encountering the null congruence.
Optical testing of the LSST combined primary/tertiary mirror
NASA Astrophysics Data System (ADS)
Tuell, Michael T.; Martin, Hubert M.; Burge, James H.; Gressler, William J.; Zhao, Chunyu
2010-07-01
The Large Synoptic Survey Telescope (LSST) utilizes a three-mirror design in which the primary (M1) and tertiary (M3) mirrors are two concentric aspheric surfaces on one monolithic substrate. The substrate material is Ohara E6 borosilicate glass, in a honeycomb sandwich configuration, currently in production at The University of Arizona's Steward Observatory Mirror Lab. In addition to the normal requirements for smooth surfaces of the appropriate prescriptions, the alignment of the two surfaces must be accurately measured and controlled in the production lab. Both the pointing and centration of the two optical axes are important parameters, in addition to the axial spacing of the two vertices. This paper describes the basic metrology systems for each surface, with particular attention to the alignment of the two surfaces. These surfaces are aspheric enough to require null correctors for each wavefront. Both M1 and M3 are concave surfaces with both non-zero conic constants and higher-order terms (6th order for M1 and both 6th and 8th orders for M3). M1 is hyperboloidal and can utilize a standard Offner null corrector. M3 is an oblate ellipsoid, so has positive spherical aberration. We have chosen to place a phase-etched computer-generated hologram (CGH) between the mirror surface and the center-of-curvature (CoC), whereas the M1 null lens is beyond the CoC. One relatively new metrology tool is the laser tracker, which is relied upon to measure the alignment and spacings. A separate laser tracker system will be used to measure both surfaces during loose abrasive grinding and initial polishing.
The Effect of Magnetic Topology on the Escape of Flare Particles
NASA Technical Reports Server (NTRS)
Antiochos, S. K.; Masson, S.; DeVore, C. R.
2012-01-01
Magnetic reconnection in the solar atmosphere is believed to be the driver of most solar explosive phenomena. Therefore, the topology of the coronal magnetic field is central to understanding the solar drivers of space weather. Of particular importance to space weather are the impulsive Solar Energetic particles that are associated with some CME/eruptive flare events. Observationally, the magnetic configuration of active regions where solar eruptions originate appears to agree with the standard eruptive flare model. According to this model, particles accelerated at the flare reconnection site should remain trapped in the corona and the ejected plasmoid. However, flare-accelerated particles frequently reach the Earth long before the CME does. We present a model that may account for the injection of energetic particles onto open magnetic flux tubes connecting to the Earth. Our model is based on the well-known 2.5D breakout topology, which has a coronal null point (null line) and a four-flux system. A key new addition, however, is that we include an isothermal solar wind with open-flux regions. Depending on the location of the open flux with respect to the null point, we find that the flare reconnection can consist of two distinct phases. At first, the flare reconnection involves only closed field, but if the eruption occurs close to the open field, we find a second phase involving interchange reconnection between open and closed. We argue that this second reconnection episode is responsible for the injection of flare-accelerated particles into the interplanetary medium. We will report on our recent work toward understanding how flare particles escape to the heliosphere. This work uses high-resolution 2.5D MHD numerical simulations performed with the Adaptively Refined MHD Solver (ARMS).
USDA-ARS?s Scientific Manuscript database
Ozone uptake by plants leads to an increase in reactive oxygen species (ROS) in the intercellular space of leaves and induces signalling processes reported to involve the membrane-bound heterotrimeric G-protein complex. Therefore, potential G-protein-mediated response mechanisms to ozone were compar...
Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations
NASA Astrophysics Data System (ADS)
Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.
2018-07-01
Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalized waveform model; the Bayesian evidence for each of its 2^N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalized model for extreme-mass-ratio inspirals constructed on deformed black hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.
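The submodel-combination step described above can be sketched as follows, under the assumption of equal prior odds across submodels; the function name and toy evidences are illustrative, not taken from the paper's pipeline.

```python
import itertools
import math

def modified_gravity_odds(log_Z):
    """Posterior odds of modified gravity over relativity from the
    log-evidences of all 2^N combinatorial submodels, keyed by a
    tuple of booleans saying which deformation parameters are on.
    The all-off submodel is the relativity (null) model, and equal
    prior odds across submodels are assumed. Illustrative only."""
    null_key = tuple(False for _ in next(iter(log_Z)))
    alt = [z for key, z in log_Z.items() if key != null_key]
    m = max(alt)                               # log-sum-exp for stability
    log_Z_alt = m + math.log(sum(math.exp(z - m) for z in alt))
    return math.exp(log_Z_alt - math.log(len(alt)) - log_Z[null_key])

# Toy case: N = 2 deformation parameters -> 4 submodels
keys = list(itertools.product([False, True], repeat=2))
log_Z = {k: 0.0 for k in keys}                 # all equally supported
print(modified_gravity_odds(log_Z))            # prints 1.0
```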
NASA Astrophysics Data System (ADS)
Wiedermann, Marc; Donges, Jonathan F.; Kurths, Jürgen; Donner, Reik V.
2016-04-01
Networks with nodes embedded in a metric space have gained increasing interest in recent years. The effects of spatial embedding on the networks' structural characteristics, however, are rarely taken into account when studying their macroscopic properties. Here, we propose a hierarchy of null models to generate random surrogates from a given spatially embedded network that can preserve certain global and local statistics associated with the nodes' embedding in a metric space. Comparing the original network's and the resulting surrogates' global characteristics allows one to quantify to what extent these characteristics are already predetermined by the spatial embedding of the nodes and links. We apply our framework to various real-world spatial networks and show that the proposed models capture macroscopic properties of the networks under study much better than standard random network models that do not account for the nodes' spatial embedding. Depending on the actual performance of the proposed null models, the networks are categorized into different classes. Since many real-world complex networks are in fact spatial networks, the proposed approach is relevant for disentangling the underlying complex system structure from spatial embedding of nodes in many fields, ranging from social systems over infrastructure and neurophysiology to climatology.
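One rung of such a hierarchy of null models can be sketched as a surrogate generator that keeps node positions fixed and approximately preserves the distribution of link lengths. This is a simplified, hypothetical variant for illustration, not the authors' exact construction.

```python
import itertools
import math
import random

def surrogate_preserving_link_lengths(pos, edges, tol=0.1, seed=0):
    """Toy null model: keep node positions fixed and redraw each link
    as a random node pair whose distance matches the original link's
    length to within `tol`. A hypothetical simplification of one
    level of a hierarchy of spatially constrained null models."""
    rng = random.Random(seed)
    dist = lambda u, v: math.dist(pos[u], pos[v])
    pairs = list(itertools.combinations(pos, 2))
    new_edges = set()
    for u, v in edges:
        target = dist(u, v)
        candidates = [p for p in pairs
                      if abs(dist(*p) - target) <= tol and p not in new_edges]
        new_edges.add(rng.choice(candidates) if candidates else (u, v))
    return sorted(new_edges)

# Unit square (invented toy network): both surrogate links again have
# length ~1, but which unit-length pairs get linked is random.
pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
print(surrogate_preserving_link_lengths(pos, [(0, 1), (2, 3)]))
```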
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied as a way to impose constraints on the well-known ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators (bilinear mappings) as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization problem, and thus it can take advantage of solvers developed by the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference-rejection problems can be solved by c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
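The basic null-space idea behind such approaches can be illustrated with a toy example: if one source is known to be annihilated by a chosen operator (here a first-difference operator and a constant source, both invented for the example), an unmixing vector can be read off the null space of the operator applied to the mixtures. This is only a minimal sketch, not the paper's c-NCA algorithm.

```python
import numpy as np

# Toy illustration (invented data): two mixtures of two sources,
# one of which (a constant) lies in the null space of a chosen
# "signature" operator -- here the first-difference operator.
rng = np.random.default_rng(0)
T = 200
s1 = np.ones(T)                          # source annihilated by D
s2 = rng.standard_normal(T)              # interfering source
A = np.array([[1.0, 0.4], [0.6, 1.0]])   # hypothetical mixing matrix
X = A @ np.vstack([s1, s2])              # observed mixtures (2 x T)

D = np.eye(T - 1, T, k=1) - np.eye(T - 1, T)   # first differences
M = D @ X.T                              # want w with M @ w = 0

# The unmixing vector is the right-singular vector of M with the
# smallest singular value, i.e. a basis of its null space.
w = np.linalg.svd(M)[2][-1]
recovered = w @ X
print(np.std(recovered))                 # ~0: constant source isolated
```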
Lovelock vacua with a recurrent null vector field
NASA Astrophysics Data System (ADS)
Ortaggio, Marcello
2018-02-01
Vacuum solutions of Lovelock gravity in the presence of a recurrent null vector field (a subset of Kundt spacetimes) are studied. We first discuss the general field equations, which constrain both the base space and the profile functions. While choosing a "generic" base space puts stronger constraints on the profile, in special cases there also exist solutions containing arbitrary functions (at least for certain values of the coupling constants). These and other properties (such as the pp-waves subclass and the overlap with VSI, CSI and universal spacetimes) are subsequently analyzed in more detail in lower dimensions n = 5, 6 as well as for particular choices of the base manifold. The obtained solutions describe various classes of nonexpanding gravitational waves propagating, e.g., in Nariai-like backgrounds M_2 × Σ_{n-2}. An Appendix contains some results about general (i.e., not necessarily Kundt) Lovelock vacua of Riemann type III/N and of Weyl and traceless-Ricci type III/N. For example, it is pointed out that for theories admitting a triply degenerate maximally symmetric vacuum, all the (reduced) field equations are satisfied identically, giving rise to large classes of exact solutions.
On the accuracy of palaeopole estimations from magnetic field measurements
NASA Astrophysics Data System (ADS)
Vervelidou, F.; Lesur, V.; Morschhauser, A.; Grott, M.; Thomas, P.
2017-12-01
Various techniques have been proposed for palaeopole position estimation based on magnetic field measurements. Such estimates can offer insights into the rotational dynamics and the dynamo history of moons and terrestrial planets carrying a crustal magnetic field. Motivated by discrepancies in the estimated palaeopole positions among various studies of the Moon and Mars, we examine the limitations of magnetic field measurements as a source of information for palaeopole position studies. It is already known that magnetic field measurements can constrain neither the null space of the magnetization nor its full spectral content. However, the extent to which these limitations affect palaeopole estimates has not been previously investigated in a systematic way. In this study, by means of the vector spherical harmonics formalism, we show that inferring palaeopole positions from magnetic field measurements necessarily introduces, explicitly or implicitly, assumptions about both the null space and the full spectral content of the magnetization. Moreover, we demonstrate through synthetic tests that if these assumptions are inaccurate, then the resulting palaeopole position estimates are wrong. Based on this finding, we make suggestions that can allow future palaeopole studies to be conducted in a more constructive way.
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart: the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark FERET database show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), Eigenface, Fisherface, and complete LDA.
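A minimal sketch of the scatter-difference idea (not the authors' full MMSD implementation, which additionally exploits the within-class null space explicitly): discriminant vectors are taken as leading eigenvectors of S_b − c·S_w, so no scatter matrix is ever inverted.

```python
import numpy as np

def mmsd_directions(X, y, c=1.0, k=2):
    """Sketch of the scatter-difference criterion: return the top-k
    eigenvectors of S_b - c*S_w as discriminant directions, which
    sidesteps inverting a possibly singular within-class scatter."""
    mu = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for cls in np.unique(y):
        Xc = X[y == cls]
        d = (Xc.mean(axis=0) - mu)[:, None]
        Sb += len(Xc) * (d @ d.T)        # between-class scatter
        Zc = Xc - Xc.mean(axis=0)
        Sw += Zc.T @ Zc                  # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - c * Sw)
    return vecs[:, np.argsort(vals)[::-1][:k]]

# Two toy classes separated along the x-axis (invented data)
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
y = np.array([0, 0, 1, 1])
W = mmsd_directions(X, y)
print(W[:, 0])              # leading direction ~ (+-1, 0)
```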
Calculating observables in inhomogeneous cosmologies. Part I: general framework
NASA Astrophysics Data System (ADS)
Hellaby, Charles; Walters, Anthony
2018-02-01
We lay out a general framework for calculating the variation of a set of cosmological observables, down the past null cone of an arbitrarily placed observer, in a given arbitrary inhomogeneous metric. The observables include redshift, proper motions, area distance and redshift-space density. Of particular interest are observables that are zero in the spherically symmetric case, such as proper motions. The algorithm is based on the null geodesic equation and the geodesic deviation equation, and it is tailored to creating a practical numerical implementation. The algorithm provides a method for tracking which light rays connect moving objects to the observer at successive times. Our algorithm is applied to the particular case of the Szekeres metric. A numerical implementation has been created and some results will be presented in a subsequent paper. Future work will explore the range of possibilities.
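The kind of computation the framework performs can be sketched in a much simpler setting: integrating a radial null geodesic down the past light cone of a flat, matter-dominated FLRW model (a toy stand-in for the Szekeres metric) and reading off the redshift.

```python
def redshift_down_null_cone(r_obs, t0=1.0, n=100_000):
    """Integrate a radial null geodesic dt = -a(t) dr backwards from
    an observer at time t0 to comoving radius r_obs in a flat,
    matter-dominated FLRW model with a(t) = (t/t0)**(2/3), and
    return 1 + z = a(t0)/a(t_e). A toy stand-in for the Szekeres
    computation described in the abstract."""
    a = lambda t: (t / t0) ** (2.0 / 3.0)
    t, dr = t0, r_obs / n
    for _ in range(n):                 # simple Euler march down the cone
        t -= a(t) * dr
    return a(t0) / a(t)

# Analytic check in these units: r = 3*(1 - t_e**(1/3)) when t0 = 1,
# so r_obs = 1.5 gives t_e = 0.125 and 1 + z = 4 exactly.
print(redshift_down_null_cone(1.5))
```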
ISLES: Probing Extra Dimensions Using a Superconducting Accelerometer
NASA Technical Reports Server (NTRS)
Paik, Ho Jung; Moody, M. Vol; Prieto-Gortcheva, Violeta A.
2003-01-01
In string theories, extra dimensions must be compactified. The possibility that gravity can have large radii of compactification leads to a violation of the inverse square law at submillimeter distances. The objective of ISLES is to perform a null test of Newton's law in space with a resolution of one part in 10^5 or better at 100 microns. The experiment will be cooled to less than or equal to 2 K, which permits superconducting magnetic levitation of the test masses. To minimize Newtonian errors, ISLES employs a near-null source, a circular disk of large diameter-to-thickness ratio. Two test masses, also disk-shaped, are suspended on the two sides of the source mass at a nominal distance of 100 microns. The signal is detected by a superconducting differential accelerometer. A ground test apparatus is under construction.
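Violations of the inverse square law of this kind are conventionally parameterized by a Yukawa term, V(r) = −(G M m / r)(1 + α e^(−r/λ)). A quick sketch of the resulting fractional force deviation at the 100-micron test distance (the parameter values are illustrative, not claims about ISLES sensitivity targets):

```python
import math

def yukawa_force_ratio(r, alpha, lam):
    """Fractional deviation F/F_Newton - 1 of the force derived from
    the Yukawa-modified potential V(r) = -(G*M*m/r)*(1 + alpha*exp(-r/lam)).
    Differentiating V gives the extra factor (1 + r/lam)."""
    x = r / lam
    return alpha * math.exp(-x) * (1.0 + x)

# Illustrative numbers: an |alpha| = 1e-4 Yukawa signal with
# lam = 100 microns, probed at the 100-micron test spacing, sits
# above a 1-part-in-10^5 null-test resolution.
print(yukawa_force_ratio(100e-6, 1e-4, 100e-6))   # 2e-4/e ~ 7.4e-5
```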
SCExAO: First Results and On-Sky Performance
NASA Astrophysics Data System (ADS)
Currie, Thayne; Guyon, Olivier; Martinache, Frantz; Clergeon, Christophe; McElwain, Michael; Thalmann, Christian; Jovanovic, Nemanja; Singh, Garima; Kudo, Tomoyuki
2014-01-01
We present new on-sky results for the Subaru Coronagraphic Extreme Adaptive Optics imager (SCExAO) verifying and quantifying the contrast gain enabled by key components: the closed-loop coronagraphic low-order wavefront sensor (CLOWFS) and focal plane wavefront control (``speckle nulling''). SCExAO will soon be coupled with a high-order pyramid wavefront sensor, which will yield > 90% Strehl ratio and enable 10^6-10^7 contrast at small angular separations, allowing us to image gas giant planets at solar system scales. Upcoming instruments like VAMPIRES, FIRST, and CHARIS will expand SCExAO's science capabilities.
NASA Technical Reports Server (NTRS)
2012-01-01
Topics covered include: Mars Science Laboratory Drill; Ultra-Compact Motor Controller; A Reversible Thermally Driven Pump for Use in a Sub-Kelvin Magnetic Refrigerator; Shape Memory Composite Hybrid Hinge; Binding Causes of Printed Wiring Assemblies with Card-Loks; Coring Sample Acquisition Tool; Joining and Assembly of Bulk Metallic Glass Composites Through Capacitive Discharge; 670-GHz Schottky Diode-Based Subharmonic Mixer with CPW Circuits and 70-GHz IF; Self-Nulling Lock-in Detection Electronics for Capacitance Probe Electrometer; Discontinuous Mode Power Supply; Optimal Dynamic Sub-Threshold Technique for Extreme Low Power Consumption for VLSI; Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing; Blocking Filters with Enhanced Throughput for X-Ray Microcalorimetry; High-Thermal-Conductivity Fabrics; Imidazolium-Based Polymeric Materials as Alkaline Anion-Exchange Fuel Cell Membranes; Electrospun Nanofiber Coating of Fiber Materials: A Composite Toughening Approach; Experimental Modeling of Sterilization Effects for Atmospheric Entry Heating on Microorganisms; Saliva Preservative for Diagnostic Purposes; Hands-Free Transcranial Color Doppler Probe; Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer; LogScope; TraceContract; AIRS Maps from Space Processing Software; POSTMAN: Point of Sail Tacking for Maritime Autonomous Navigation; Space Operations Learning Center; OVERSMART Reporting Tool for Flow Computations Over Large Grid Systems; Large Eddy Simulation (LES) of Particle-Laden Temporal Mixing Layers; Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing; Iterative Transform Phase Diversity: An Image-Based Object and Wavefront Recovery; 3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer; Social Networking Adapted for Distributed Scientific Collaboration; General Methodology for Designing Spacecraft Trajectories;
Hemispherical Field-of-View Above-Water Surface Imager for Submarines; and Quantum-Well Infrared Photodetector (QWIP) Focal Plane Assembly.
NASA Astrophysics Data System (ADS)
Katayama, Soichiro
We consider the Cauchy problem for systems of nonlinear wave equations with multiple propagation speeds in three space dimensions. Under the null condition for such systems, the global existence of small amplitude solutions is known. In this paper, we will show that the global solution is asymptotically free in the energy sense, by obtaining the asymptotic pointwise behavior of the derivatives of the solution. Nonetheless we can also show that the pointwise behavior of the solution itself may be quite different from that of the free solution. In connection with the above results, a theorem is also developed to characterize asymptotically free solutions for wave equations in arbitrary space dimensions.
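For context, the null condition mentioned above restricts the quadratic part of the nonlinearity to the classical null forms; for a wave operator of speed c these are the following standard definitions (textbook forms, not reproduced from this paper):

```latex
% Null forms compatible with \Box_c = \partial_t^2 - c^2 \Delta
% (standard definitions; not taken from the paper itself):
Q_0^{c}(u,v) = \partial_t u \,\partial_t v - c^{2}\,\nabla u \cdot \nabla v,
\qquad
Q_{\alpha\beta}(u,v) = \partial_{\alpha} u \,\partial_{\beta} v
                     - \partial_{\beta} u \,\partial_{\alpha} v .
```

For multiple-speed systems, the condition is imposed on self-interactions of components sharing the same propagation speed; interactions between components of different speeds are already non-resonant.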
Ambiguous Tilt and Translation Motion Cues in Astronauts after Space Flight
NASA Technical Reports Server (NTRS)
Clement, G.; Harm, D. L.; Rupert, A. H.; Beaton, K. H.; Wood, S. J.
2008-01-01
Adaptive changes during space flight in how the brain integrates vestibular cues with visual, proprioceptive, and somatosensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions following transitions between gravity levels. This joint ESA-NASA pre- and post-flight experiment is designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances in astronauts following short-duration space flights. The first specific aim is to examine the effects of stimulus frequency on adaptive changes in eye movements and motion perception during independent tilt and translation motion profiles. Roll motion is provided by a variable radius centrifuge. Pitch motion is provided by NASA's Tilt-Translation Sled, in which the resultant gravitoinertial vector remains aligned with the body longitudinal axis during tilt motion (referred to as the Z-axis gravitoinertial or ZAG paradigm). We hypothesize that the adaptation of otolith-mediated responses to these stimuli will have specific frequency characteristics, being greatest in the mid-frequency range where there is a crossover of tilt and translation. The second specific aim is to employ a closed-loop nulling task in which subjects use a joystick to null out tilt motion disturbances on these two devices. The stimuli consist of random steps or sum-of-sinusoids stimuli, including the ZAG profiles on the Tilt-Translation Sled. We hypothesize that the ability to control tilt orientation will be compromised following space flight, with increased control errors corresponding to changes in self-motion perception. The third specific aim is to evaluate how sensory substitution aids can be used to improve manual control performance. During the closed-loop nulling task on both devices, small tactors placed around the torso vibrate according to the actual body tilt angle relative to gravity.
We hypothesize that performance on the closed-loop tilt control task will be improved with this tactile display feedback of tilt orientation. The current plans include testing on eight crewmembers following Space Shuttle missions or short stays onboard the International Space Station. Measurements are obtained pre-flight at L-120 (plus or minus 30), L-90 (plus or minus 30), and L-30 (plus or minus 10) days and post-flight at R+0, R+1, R+2 or 3, R+4 or 5, and R+8 days. Pre- and post-flight testing (from R+1 on) is performed in the Neuroscience Laboratory at the NASA Johnson Space Center on both the Tilt-Translation Device and a variable radius centrifuge. A second variable radius centrifuge, provided by DLR for another joint ESA-NASA project, has been installed at the Baseline Data Collection Facility at Kennedy Space Center to collect data immediately after landing. ZAG was initiated with STS-122/1E and the first post-flight testing will take place after STS-123/1JA landing.
Spaceborne Imaging Radar-C instrument
NASA Technical Reports Server (NTRS)
Huneycutt, Bryan L.
1993-01-01
The Spaceborne Imaging Radar-C is the next radar in the series of spaceborne radar experiments, which began with Seasat and continued with SIR-A and SIR-B. The SIR-C instrument has been designed to obtain simultaneous multifrequency and simultaneous multipolarization radar images from a low Earth orbit. It is a multiparameter imaging radar that will be flown during at least two different seasons. The instrument operates in the squint alignment mode, the extended aperture mode, the ScanSAR mode, and the interferometry mode. The instrument uses engineering techniques such as beam nulling for echo tracking, pulse repetition frequency hopping for Doppler centroid tracking, frequency-step chirp generation for radar parameter flexibility, block floating-point quantization for data-rate compression, and elevation beamwidth broadening for increasing the swath illumination.
Some controversial multiple testing problems in regulatory applications.
Hung, H M James; Wang, Sue-Jane
2009-01-01
Multiple testing problems in regulatory applications are often more challenging than the handling of a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary and secondary endpoints needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gatekeeping strategies often impose too many logical restrictions to make sense, particularly for the problems of testing multiple doses and multiple endpoints, testing a composite endpoint and its component endpoints, and testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinically relevant orderings or sets can be a viable alternative for resolving these logical problems, but it requires more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviate the conservatism resulting from this requirement is a group-sequential design strategy that starts with conservative sample size planning and then utilizes an alpha spending function to possibly reach the conclusion early.
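The logical rigidity of sequential gatekeeping that the abstract criticizes can be made concrete with a small sketch (hypothetical p-values; a fixed-sequence test at one-sided alpha = 0.025):

```python
def serial_gatekeeper(p_values, alpha=0.025):
    """Fixed-sequence (serial gatekeeping) sketch: hypotheses are
    tested in a prespecified order, each at the full level alpha,
    and testing stops at the first non-rejection. Later hypotheses
    can never be rescued, however small their p-values."""
    rejected = []
    for i, p in enumerate(p_values):
        if p <= alpha:
            rejected.append(i)
        else:
            break
    return rejected

# The second secondary endpoint (p = 0.004) is never tested because
# the first secondary endpoint (p = 0.20) failed the gate.
print(serial_gatekeeper([0.001, 0.20, 0.004]))   # [0]
```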
Detection of long nulls in PSR B1706-16, a pulsar with large timing irregularities
NASA Astrophysics Data System (ADS)
Naidu, Arun; Joshi, Bhal Chandra; Manoharan, P. K.; Krishnakumar, M. A.
2018-04-01
Single-pulse observations characterizing in detail the nulling behaviour of PSR B1706-16 are reported for the first time in this paper. Our regular long-duration monitoring of this pulsar reveals long nulls of 2-5 h with an overall nulling fraction of 31 ± 2 per cent. The pulsar shows two distinct phases of emission. It is usually in an active phase, characterized by pulsations interspersed with shorter nulls, with a nulling fraction of about 15 per cent, but it also rarely switches to an inactive phase, consisting of long nulls. The nulls in this pulsar are concurrent between 326.5 and 610 MHz. Profile mode changes accompanied by changes in fluctuation properties are seen in this pulsar, which switches from mode A before a null to mode B after the null. The distribution of null durations in this pulsar is bimodal. With its occasional long nulls, PSR B1706-16 joins the small group of intermediate nullers, which lie between the classical nullers and the intermittent pulsars. Similar to other intermediate nullers, PSR B1706-16 shows high timing noise, which could be due to its rare long nulls if one assumes that the slowdown rate during such nulls is different from that during the bursts.
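A crude version of a nulling-fraction estimate can be sketched as follows: classify as nulls the pulses whose on-pulse energies are consistent with the off-pulse (noise) distribution. This is a simplification of the histogram-subtraction methods actually used in pulsar work, and the simulated energies are invented.

```python
import numpy as np

def nulling_fraction(on_energy, off_energy, thresh=3.0):
    """Crude nulling-fraction estimate: count as nulls the pulses
    whose on-pulse energy is below mean + thresh*std of the
    off-pulse (noise-only) energies. Real analyses use a more
    careful histogram subtraction."""
    cut = off_energy.mean() + thresh * off_energy.std()
    return float(np.mean(on_energy < cut))

# Simulated single-pulse energies (invented numbers): 30 per cent
# of the pulses are nulls, i.e. indistinguishable from noise.
rng = np.random.default_rng(1)
off = rng.normal(0.0, 1.0, 5000)
on = np.concatenate([rng.normal(20.0, 2.0, 3500),   # emitting pulses
                     rng.normal(0.0, 1.0, 1500)])   # nulled pulses
print(nulling_fraction(on, off))                    # ~0.3
```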
Asymptotic symmetries and electromagnetic memory
NASA Astrophysics Data System (ADS)
Pasterski, Sabrina
2017-09-01
Recent investigations into asymptotic symmetries of gauge theory and gravity have illuminated connections between gauge field zero-mode sectors, the corresponding soft factors, and their classically observable counterparts, so-called "memories". Namely, low-frequency emissions in momentum space correspond to long-time integrations of the corresponding radiation in position space. Memory effect observables constructed in this manner are non-vanishing in typical scattering processes, which has implications for the asymptotic symmetry group. Here we complete this triad for the case of large U(1) gauge symmetries at null infinity. In particular, we show that the previously studied electromagnetic memory effect, whereby the passage of electromagnetic radiation produces a net velocity kick for test charges in a distant detector, is the position space observable corresponding to the Weinberg soft photon pole in momentum space scattering amplitudes.
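The velocity-kick observable described above can be written schematically, as an illustration rather than the paper's precise expression: for a test charge q of mass m in the distant detector,

```latex
\Delta v \;=\; \frac{q}{m}\int_{-\infty}^{+\infty} E \,\mathrm{d}t ,
```

i.e. a long-time integral of the radiative electric field in position space, matching the zero-frequency (soft photon) limit in momentum space.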
NASA Technical Reports Server (NTRS)
Berry, Richard; Rajagopa, J.; Danchi, W. C.; Allen, R. J.; Benford, D. J.; Deming, D.; Gezari, D. Y.; Kuchner, M.; Leisawitz, D. T.; Linfield, R.
2005-01-01
The Fourier-Kelvin Stellar Interferometer (FKSI) is a mission concept for an imaging and nulling interferometer for the near-infrared to mid-infrared spectral region (3-8 microns). FKSI is conceived as a scientific and technological pathfinder to TPF/DARWIN as well as SPIRIT, SPECS, and SAFIR. It will also be a high-angular-resolution system complementary to JWST. The scientific emphasis of the mission is on the evolution of protostellar systems, from just after the collapse of the precursor molecular cloud core, through the formation of the disk surrounding the protostar, the formation of planets in the disk, and eventual dispersal of the disk material. FKSI will also search for brown dwarfs and Jupiter-mass and smaller planets, and could also play a very powerful role in the investigation of the structure of active galactic nuclei and extragalactic star formation. We report additional studies of the imaging capabilities of FKSI with various configurations of two to five telescopes, studies of the capabilities of FKSI assuming an increase in long-wavelength response to 10 or 12 microns (depending on availability of detectors), and preliminary results from our nulling testbed.
Mathison, Megumi; Singh, Vivek P; Chiuchiolo, Maria J; Sanagasetti, Deepthi; Mao, Yun; Patel, Vivekkumar B; Yang, Jianchang; Kaminsky, Stephen M; Crystal, Ronald G; Rosengart, Todd K
2017-02-01
The reprogramming of cardiac fibroblasts into induced cardiomyocyte-like cells improves ventricular function in myocardial infarction models. Only integrating persistent expression vectors have thus far been used to induce reprogramming, potentially limiting its clinical applicability. We therefore tested the reprogramming potential of nonintegrating, acute expression adenoviral (Ad) vectors. Ad or lentivirus vectors encoding Gata4 (G), Mef2c (M), and Tbx5 (T) were validated in vitro. Sprague-Dawley rats then underwent coronary ligation and Ad-mediated administration of vascular endothelial growth factor to generate infarct prevascularization. Three weeks later, animals received Ad or lentivirus encoding G, M, or T (AdGMT or LentiGMT) or an equivalent dose of a null vector (n = 11, 10, and 10, respectively). Outcomes were analyzed by echocardiography, magnetic resonance imaging, and histology. Ad and lentivirus vectors provided equivalent G, M, and T expression in vitro. AdGMT and LentiGMT both likewise induced expression of the cardiomyocyte marker cardiac troponin T in approximately 6% of cardiac fibroblasts versus <1% cardiac troponin T expression in AdNull (adenoviral vector that does not encode a transgene)-treated cells. Infarcted myocardium that had been treated with AdGMT likewise demonstrated greater density of cells expressing the cardiomyocyte marker beta myosin heavy chain 7 compared with AdNull-treated animals. Echocardiography demonstrated that AdGMT and LentiGMT both increased ejection fraction compared with AdNull (AdGMT: 21% ± 3%, LentiGMT: 14% ± 5%, AdNull: -0.4% ± 2%; P < .05). Ad vectors are at least as effective as lentiviral vectors in inducing cardiac fibroblast transdifferentiation into induced cardiomyocyte-like cells and improving cardiac function in postinfarct rat hearts. Short-term expression Ad vectors may represent an important means to induce cardiac cellular reprogramming in humans. 
Quasi-isotropic VHF antenna array design study for the International Ultraviolet Explorer satellite
NASA Technical Reports Server (NTRS)
Raines, J. K.
1975-01-01
Results of a study to design a quasi-isotropic VHF antenna array for the IUE satellite are presented. A free-space configuration was obtained that has no nulls deeper than -6.4 dBi in each of two orthogonal polarizations. A computer program named SOAP, which analyzes the electromagnetic interaction between antennas and complicated conducting bodies such as satellites, was also developed.
The path towards high-contrast imaging with the VLTI: the Hi-5 project
NASA Astrophysics Data System (ADS)
Defrère, D.; Absil, O.; Berger, J.-P.; Boulet, T.; Danchi, W. C.; Ertel, S.; Gallenne, A.; Hénault, F.; Hinz, P.; Huby, E.; Ireland, M.; Kraus, S.; Labadie, L.; Le Bouquin, J.-B.; Martin, G.; Matter, A.; Mérand, A.; Mennesson, B.; Minardi, S.; Monnier, J. D.; Norris, B.; de Xivry, G. Orban; Pedretti, E.; Pott, J.-U.; Reggiani, M.; Serabyn, E.; Surdej, J.; Tristram, K. R. W.; Woillez, J.
2018-06-01
The development of high-contrast capabilities has long been recognized as one of the top priorities for the VLTI. As of today, the VLTI routinely achieves contrasts of a few 10^-3 in the near-infrared with PIONIER (H band) and GRAVITY (K band). Nulling interferometers in the northern hemisphere and non-redundant aperture masking experiments have, however, demonstrated that contrasts of at least a few 10^-4 are within reach using specific beam combination and data acquisition techniques. In this paper, we explore the possibility to reach similar or higher contrasts on the VLTI. After reviewing the state of the art in high-contrast infrared interferometry, we discuss key features that made the success of other high-contrast interferometric instruments (e.g., integrated optics, nulling, closure phase, and statistical data reduction) and address possible avenues to improve the contrast of the VLTI by at least one order of magnitude. In particular, we discuss the possibility to use integrated optics, proven in the near-infrared, in the thermal infrared (L and M bands, 3-5 μm), a sweet spot to image and characterize young extra-solar planetary systems. Finally, we address the science cases of a high-contrast VLTI imaging instrument and focus particularly on exoplanet science (young exoplanets, planet formation, and exozodiacal disks), stellar physics (fundamental parameters and multiplicity), and extragalactic astrophysics (active galactic nuclei and fundamental constants). Synergies and scientific preparation for other potential future instruments such as the Planet Formation Imager are also briefly discussed. This project is called Hi-5 for High-contrast Interferometry up to 5 μm.
Mougin, Olivier; Abdel-Fahim, Rasha; Dineen, Robert; Pitiot, Alain; Evangelou, Nikos; Gowland, Penny
2016-11-01
To present an improved three-dimensional (3D) interleaved phase-sensitive inversion recovery (PSIR) sequence including a concomitantly acquired new contrast, null point imaging (NPI), to help detect and classify abnormalities in cortical gray matter. The 3D gradient echo PSIR images were acquired at 0.6 mm isotropic resolution on 11 multiple sclerosis (MS) patients and 9 control subjects using a 7 Tesla (T) MRI scanner, and 2 MS patients at 3T. Cortical abnormalities were delineated on the NPI/PSIR data and later classified according to position in the cortex. The NPI helped detect cortical lesions within the cortical ribbon with increased, positive contrast compared with the PSIR. It also provided improved intrinsic delineation of the ribbon, increasing confidence in classifying the lesions' locations. The proposed PSIR facilitates the classification of cortical lesions by providing two T1-weighted 3D datasets with isotropic resolution, including the NPI showing cortical lesions with clear delineation of the gray/white matter boundary and minimal partial volume effects. Magn Reson Med 76:1512-1516, 2016. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
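The "null point" in null point imaging refers to the inversion time at which a tissue's longitudinal magnetization crosses zero. A sketch of the standard inversion-recovery nulling condition follows (textbook relations; the paper's interleaved PSIR timing is more involved):

```python
import math

def inversion_null_ti(t1, tr=None):
    """Inversion time that nulls longitudinal magnetization in an
    inversion-recovery experiment. With full recovery (TR >> T1),
    TI = T1*ln 2; for a finite TR, solving
    1 - 2*exp(-TI/T1) + exp(-TR/T1) = 0 gives the form below.
    A textbook sketch, not the paper's exact sequence timing."""
    if tr is None:
        return t1 * math.log(2.0)
    return t1 * math.log(2.0 / (1.0 + math.exp(-tr / t1)))

# Illustrative T1 of ~1200 ms (an assumed value, not from the paper)
print(inversion_null_ti(1200.0))          # ~832 ms
```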
Self-Nulling Lock-in Detection Electronics for Capacitance Probe Electrometer
NASA Technical Reports Server (NTRS)
Blaes, Brent R.; Schaefer, Rembrandt T.
2012-01-01
A multi-channel electrometer voltmeter that employs self-nulling lock-in detection electronics in conjunction with a mechanical resonator with noncontact voltage sensing electrodes has been developed for space-based measurement of an Internal Electrostatic Discharge Monitor (IESDM). The IESDM is new sensor technology targeted for integration into a Space Environmental Monitor (SEM) subsystem used for the characterization and monitoring of deep dielectric charging on spacecraft. Use of an AC-coupled lock-in amplifier with closed-loop sense-signal nulling via generation of an active guard-driving feedback voltage provides the resolution, accuracy, linearity and stability needed for long-term space-based measurement of the IESDM. This implementation relies on adjusting the feedback voltage to drive the sense current received from the resonator's variable-capacitance-probe voltage transducer to approximately zero, as limited by the signal-to-noise performance of the loop electronics. The magnitude of the sense current is proportional to the difference between the input voltage being measured and the feedback voltage, which matches the input voltage when the sense current is zero. High signal-to-noise ratio (SNR) is achieved by synchronous detection of the sense signal using the correlated reference signal derived from the oscillator circuit that drives the mechanical resonator. The magnitude of the feedback voltage, while the loop is in a settled state with essentially zero sense current, is an accurate estimate of the input voltage being measured. This technique has many beneficial attributes including immunity to drift, high linearity, high SNR from synchronous detection of a single-frequency carrier selected to avoid the potentially noisy 1/f low-frequency spectrum of the signal-chain electronics, and high accuracy provided through the benefits of a driven shield encasing the capacitance-probe transducer and guarded input triaxial lead-in.
Measurements obtained from a 2-channel prototype electrometer have demonstrated good accuracy (|error| < 0.2 V) and high stability. Twenty-four-hour tests have been performed with virtually no drift. Additionally, 5,500 repeated one-second measurements of a 100 V input were shown to be approximately normally distributed with a standard deviation of 140 mV.
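The nulling loop described above reduces to a simple feedback model: the sense signal is proportional to the difference between the input and feedback voltages, and an integrator drives that difference to zero. A toy sketch under those simplifications (the loop gain and step count are arbitrary illustrative values, not the instrument's):

```python
def self_nulling_measure(v_in, gain=0.2, steps=200):
    """Toy model of the self-nulling measurement: integrate the sense
    signal (proportional to v_in - v_fb) until the loop settles; the
    settled feedback voltage is the estimate of the input voltage."""
    v_fb = 0.0
    for _ in range(steps):
        sense = v_in - v_fb   # sense signal ~ residual voltage difference
        v_fb += gain * sense  # integrator step drives the sense signal to zero
    return v_fb

estimate = self_nulling_measure(100.0)  # settles at the 100 V input
```

Because the loop reports the feedback voltage rather than the raw sense signal, gain drift in the sense chain affects only settling time, not the settled reading.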
Figure and Ground in the Visual Cortex: V2 Combines Stereoscopic Cues with Gestalt Rules
Qiu, Fangtu T.; von der Heydt, Rüdiger
2006-01-01
Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring three-dimensional (3D) layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border-ownership coding). Here we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (gestalt factors). These are combined in single neurons so that the ‘near’ side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays gestalt factors influence the responses and can enhance or null the stereoscopic depth information. PMID:15996555
Astronomical Optical Interferometry. I. Methods and Instrumentation
NASA Astrophysics Data System (ADS)
Jankov, S.
2010-12-01
The previous decade has seen the completion of large interferometric projects including 8-10 m telescopes and 100 m class baselines. Modern computer and control technology has enabled the interferometric combination of light from separate telescopes also in the visible and infrared regimes. Imaging with milli-arcsecond (mas) resolution and astrometry with micro-arcsecond (μas) precision have thus become reality. Here, I review the methods and instrumentation corresponding to the current state of the field of astronomical optical interferometry. First, this review summarizes the development from the pioneering works of Fizeau and Michelson. Next, the fundamental observables are described, followed by a discussion of the basic design principles of modern interferometers. The basic interferometric techniques, such as speckle and aperture masking interferometry, aperture synthesis and nulling interferometry, are discussed as well. Using the experience of past and existing facilities to illustrate important points, I consider particularly the new generation of large interferometers that has recently been commissioned (most notably the CHARA, Keck, VLT and LBT Interferometers). Finally, I discuss the longer-term future of optical interferometry, including the possibilities of new large-scale ground-based projects and prospects for space interferometry.
Figure and ground in the visual cortex: v2 combines stereoscopic cues with gestalt rules.
Qiu, Fangtu T; von der Heydt, Rüdiger
2005-07-07
Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring 3D layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border ownership coding). Here, we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (Gestalt factors). These are combined in single neurons so that the "near" side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays, Gestalt factors influence the responses and can enhance or null the stereoscopic depth information.
Null hypersurface quantization, electromagnetic duality and asymptotic symmetries of Maxwell theory
NASA Astrophysics Data System (ADS)
Bhattacharyya, Arpan; Hung, Ling-Yan; Jiang, Yikun
2018-03-01
In this paper we introduce careful regularization into the quantization of Maxwell theory at asymptotic null infinity. This allows a systematic discussion of the commutators under various boundary conditions, and the application of Dirac brackets accordingly in a controlled manner. The method is most useful when we consider asymptotic charges that, like large gauge transformations, are not localized at the boundary u → ±∞. We show that our method reproduces the operator algebra in known cases, and that it can be applied to other space-time symmetry charges such as the BMS transformations. We also obtain the asymptotic form of the U(1) charge following from the electromagnetic duality in an explicitly EM-symmetric Schwarz-Sen type action. Using our regularization method, we demonstrate that the charge generates the expected transformation of a helicity operator. Our method promises applications in more generic theories.
Integrability in dipole-deformed N = 4 super Yang-Mills
NASA Astrophysics Data System (ADS)
Guica, Monica; Levkovich-Maslyuk, Fedor; Zarembo, Konstantin
2017-09-01
We study the null dipole deformation of N=4 super Yang-Mills theory, which is an example of a potentially solvable ‘dipole CFT’: a theory that is non-local along a null direction, has non-relativistic conformal invariance along the remaining ones, and is holographically dual to a Schrödinger space-time. We initiate the field-theoretical study of the spectrum in this model by using integrability inherited from the parent theory. The dipole deformation corresponds to a nondiagonal Drinfeld-Reshetikhin twist in the spin chain picture, which renders the traditional Bethe ansatz inapplicable from the very beginning. We use instead the Baxter equation supplemented with nontrivial asymptotics, which gives the full 1-loop spectrum in the sl(2) sector. We show that anomalous dimensions of long gauge theory operators perfectly match the string theory prediction, providing a quantitative test of Schrödinger holography. Dedicated to the memory of Petr Petrovich Kulish.
Adaptive jammer nulling in EHF communications satellites
NASA Astrophysics Data System (ADS)
Bhagwan, Jai; Kavanagh, Stephen; Yen, J. L.
A preliminary investigation is reviewed concerning adaptive null steering multibeam uplink receiving system concepts for future extremely high frequency communications satellites. Primary alternatives in the design of the uplink antenna, the multibeam adaptive nulling receiver, and the processing algorithm and optimization criterion are discussed. The alternatives are phased array, lens or reflector antennas, nulling at radio frequency or an intermediate frequency, wideband versus narrowband nulling, and various adaptive nulling algorithms. A primary determinant of the hardware complexity is the receiving system architecture, which is described for the alternative antenna and nulling concepts. The final concept chosen will be influenced by the nulling performance requirements, cost, and technological readiness.
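Among the adaptive nulling algorithms alluded to, the LMS loop is the easiest to illustrate. The sketch below is a generic two-element sidelobe canceller, not the satellite receiver architecture: the auxiliary-element weight adapts to minimize output power, which steers a null onto the strong jammer (half-wavelength spacing and the jammer/noise levels are assumptions for illustration):

```python
import numpy as np

def lms_null_depth(theta_j, n=4000, mu=1e-3, noise=0.01, seed=0):
    """Adapt the auxiliary-element weight of a two-element canceller
    with LMS to null a strong jammer arriving from angle theta_j
    (radians); returns mean output power before and after adaptation."""
    rng = np.random.default_rng(seed)
    def cnoise(scale):
        return scale * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    jam = cnoise(1.0)                             # jammer waveform
    shift = np.exp(1j * np.pi * np.sin(theta_j))  # phase for half-wavelength spacing
    x0 = jam + cnoise(noise)                      # main element
    x1 = jam * shift + cnoise(noise)              # auxiliary element
    w = 0j
    p = np.empty(n)
    for k in range(n):
        y = x0[k] - w * x1[k]          # canceller output: main minus weighted aux
        w += mu * y * np.conj(x1[k])   # LMS step toward minimum output power
        p[k] = abs(y) ** 2
    return p[:200].mean(), p[-200:].mean()

before, after = lms_null_depth(0.4)  # jammer power drops by orders of magnitude
```

The same power-minimization idea underlies the RF versus IF and wideband versus narrowband trade-offs discussed above; only the hardware point at which the weights are applied changes.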
Broken chiral symmetry on a null plane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beane, Silas R., E-mail: silas@physics.unh.edu
2013-10-15
On a null-plane (light-front), all effects of spontaneous chiral symmetry breaking are contained in the three Hamiltonians (dynamical Poincaré generators), while the vacuum state is a chiral invariant. This property is used to give a general proof of Goldstone's theorem on a null-plane. Focusing on null-plane QCD with N degenerate flavors of light quarks, the chiral-symmetry breaking Hamiltonians are obtained, and the role of vacuum condensates is clarified. In particular, the null-plane Gell-Mann–Oakes–Renner formula is derived, and a general prescription is given for mapping all chiral-symmetry breaking QCD condensates to chiral-symmetry conserving null-plane QCD condensates. The utility of the null-plane description lies in the operator algebra that mixes the null-plane Hamiltonians and the chiral symmetry charges. It is demonstrated that in a certain non-trivial limit, the null-plane operator algebra reduces to the symmetry group SU(2N) of the constituent quark model. Highlights: • A proof (the first) of Goldstone's theorem on a null-plane is given. • The puzzle of chiral-symmetry breaking condensates on a null-plane is solved. • The emergence of spin-flavor symmetries in null-plane QCD is demonstrated.
Fundus autofluorescence findings in a mouse model of retinal detachment.
Secondi, Roberta; Kong, Jian; Blonska, Anna M; Staurenghi, Giovanni; Sparrow, Janet R
2012-08-07
Fundus autofluorescence (fundus AF) changes were monitored in a mouse model of retinal detachment (RD). RD was induced by transscleral injection of hyaluronic acid (Healon) or sterile balanced salt solution (BSS) into the subretinal space of 4-5-day-old albino Abca4 null mutant and Abca4 wild-type mice. Images acquired by confocal scanning laser ophthalmoscopy (Spectralis HRA) were correlated with spectral domain optical coherence tomography (SD-OCT), infrared reflectance (IR), fluorescence spectroscopy, and histologic analysis. In the area of detached retina, multiple hyperreflective spots in IR images corresponded to punctate areas of intense autofluorescence visible in fundus AF mode. The puncta exhibited changes in fluorescence intensity with time. SD-OCT disclosed undulations of the neural retina and hyperreflectivity of the photoreceptor layer that likely corresponded to histologically visible photoreceptor cell rosettes. Fluorescence emission spectra generated using flat-mounted retina, and 488 and 561 nm excitation, were similar to those of RPE lipofuscin. With increased excitation wavelength, the emission maximum shifted towards longer wavelengths, a characteristic typical of fundus autofluorescence. In detached retinas, hyper-autofluorescent spots appeared to originate from photoreceptor outer segments that were arranged within retinal folds and rosettes. Consistent with this interpretation is the finding that the autofluorescence was spectroscopically similar to the bisretinoids that constitute RPE lipofuscin. Under the conditions of an RD, abnormal autofluorescence may arise from excessive production of bisretinoid by impaired photoreceptor cells.
Massless spinning particle and null-string on AdS_d: projective-space approach
NASA Astrophysics Data System (ADS)
Uvarov, D. V.
2018-07-01
The massless spinning particle and the tensionless string models on an AdS_d background in the projective-space realization are proposed as constrained Hamiltonian systems. Various forms of particle and string Lagrangians are derived and the classical mechanics is studied, including the Lax-type representation of the equations of motion. After that, the transition to the quantum theory is discussed. The analysis of potential anomalies in the tensionless string model necessitates the introduction of ghosts and a BRST charge. It is shown that the quantum BRST charge is nilpotent for any d if coordinate-momentum ordering for the phase-space bosonic variables, Weyl ordering for the fermions and cb() ordering for the ghosts is chosen, while conformal reparametrizations and space-time dilatations turn out to be anomalous for ordering in terms of positive and negative Fourier modes of the phase-space variables and ghosts.
Asymptotics with positive cosmological constant
NASA Astrophysics Data System (ADS)
Bonga, Beatrice; Ashtekar, Abhay; Kesavan, Aruna
2014-03-01
Since observations to date imply that our universe has a positive cosmological constant, one needs an extension of the theory of isolated systems and gravitational radiation in full general relativity from asymptotically flat to asymptotically de Sitter space-times. In current definitions, one mimics the boundary conditions used in the asymptotically AdS context to conclude that the asymptotic symmetry group is the de Sitter group. However, these conditions severely restrict radiation and in fact rule out non-zero flux of energy, momentum and angular momentum carried by gravitational waves. Therefore, these formulations of asymptotically de Sitter space-times are uninteresting beyond non-radiative spacetimes. The situation is compared and contrasted with conserved charges and fluxes at null infinity in asymptotically flat space-times.
SCExAO: First Results and On-Sky Performance
NASA Technical Reports Server (NTRS)
Currie, Thayne; Guyon, Olivier; Martinache, Frantz; Clergeon, Christophe; McElwain, Michael; Thalmann, Christian; Jovanovic, Nemanja; Singh, Garima; Kudo, Tomoyuki
2013-01-01
We present new on-sky results for the Subaru Coronagraphic Extreme Adaptive Optics imager (SCExAO) verifying and quantifying the contrast gain enabled by key components: the closed-loop coronagraphic low-order wavefront sensor (CLOWFS) and focal plane wavefront control ("speckle nulling"). SCExAO will soon be coupled with a high-order Pyramid wavefront sensor which will yield greater than 90% Strehl ratio and enable 10^6-10^7 contrast at small angular separations, allowing us to image gas giant planets at solar system scales. Upcoming instruments like VAMPIRES, FIRST, and CHARIS will expand SCExAO's science capabilities.
A Solution Space for a System of Null-State Partial Differential Equations: Part 2
NASA Astrophysics Data System (ADS)
Flores, Steven M.; Kleban, Peter
2015-01-01
This article is the second of four that completely and rigorously characterize a solution space for a homogeneous system of 2N + 3 linear partial differential equations in 2N variables that arises in conformal field theory (CFT) and multiple Schramm-Löwner evolution (SLE). The system comprises 2N null-state equations and three conformal Ward identities which govern CFT correlation functions of 2N one-leg boundary operators. In the first article (Flores and Kleban, Commun Math Phys, arXiv:1212.2301, 2012), we use methods of analysis and linear algebra to prove that the dimension of this solution space is at most C_N, with C_N the Nth Catalan number. The analysis of that article is complete except for the proof of a lemma that it invokes. The purpose of this article is to provide that proof. The lemma states that if every interval among (x_2, x_3), (x_3, x_4), …, (x_{2N-1}, x_{2N}) is a two-leg interval of F (defined in Flores and Kleban, Commun Math Phys, arXiv:1212.2301, 2012), then F vanishes. Proving this lemma by contradiction, we show that the existence of such a nonzero function implies the existence of a non-vanishing CFT two-point function involving primary operators with different conformal weights, an impossibility. This proof (which is rigorous in spite of our occasional reference to CFT) involves two different types of estimates: those that give the asymptotic behavior of F as the length of one interval vanishes, and those that give this behavior as the lengths of two intervals vanish simultaneously. We derive these estimates by using Green functions to rewrite certain null-state PDEs as integral equations, combining other null-state PDEs to obtain Schauder interior estimates, and then repeatedly integrating the integral equations with these estimates until we obtain optimal bounds. Estimates in which two interval lengths vanish simultaneously divide into two cases: two adjacent intervals and two non-adjacent intervals.
The analysis of the latter case is similar to that for one vanishing interval length. In contrast, the analysis of the former case is more complicated, involving a Green function that contains the Jacobi heat kernel as its essential ingredient.
Hill, Mary C.
2010-01-01
Doherty and Hunt (2009) present important ideas for first-order second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of assessing parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters; its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
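The CSS/PCC analysis advocated here can be sketched numerically from a model's Jacobian. A minimal illustration, assuming unit observation weights (CSS as the root-mean-square of parameter-scaled sensitivities, PCC from the linear parameter covariance matrix):

```python
import numpy as np

def css_and_pcc(jac, params):
    """Composite scaled sensitivities (CSS) and parameter correlation
    coefficients (PCC) from a Jacobian of observation sensitivities,
    assuming unit observation weights for simplicity."""
    jac = np.asarray(jac, dtype=float)
    scaled = jac * np.asarray(params, dtype=float)  # parameter-scaled sensitivities
    css = np.sqrt((scaled ** 2).mean(axis=0))       # RMS over observations
    cov = np.linalg.inv(jac.T @ jac)                # linear parameter covariance
    sd = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(sd, sd)                    # correlation matrix
    return css, pcc
```

A large CSS paired with a near-unit |PCC| entry flags a parameter that is individually sensitive yet not separately estimable, which is exactly the situation the joint CSS/PCC analysis is meant to expose.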
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Hao; Gu, Bao-Min; Wang, Yong-Qiang
The future gravitational wave (GW) observations of compact binaries and their possible electromagnetic counterparts may be used to probe the nature of extra dimensions. It is widely accepted that gravitons and photons are the only two completely confirmed objects that can travel along null geodesics in our four-dimensional space-time. However, if there exist extra dimensions and only GWs can propagate freely in the bulk, the causal propagations of GWs and electromagnetic waves (EMWs) are in general different. In this paper, we study null geodesics of GWs and EMWs in a five-dimensional anti-de Sitter space-time in the presence of the curvature of the universe. We show that for general cases the horizon radius of GWs is larger than that of EMWs within equal time. Taking the GW150914 event detected by the Advanced Laser Interferometer Gravitational-Wave Observatory and the X-ray event detected by the Fermi Gamma-ray Burst Monitor as an example, we study how the curvature k and the constant curvature radius l affect the horizon radii of GWs and EMWs in the de Sitter and Einstein-de Sitter models of the universe. This provides an alternative method for probing extra dimensions through future GW observations of compact binaries and their electromagnetic counterparts.
Fine Guidance Sensing for Coronagraphic Observatories
NASA Technical Reports Server (NTRS)
Brugarolas, Paul; Alexander, James W.; Trauger, John T.; Moody, Dwight C.
2011-01-01
Three options have been developed for Fine Guidance Sensing (FGS) for coronagraphic observatories using a Fine Guidance Camera within a coronagraphic instrument. Coronagraphic observatories require very fine precision pointing in order to image faint objects at very small distances from a target star. The Fine Guidance Camera measures the direction to the target star. The first option, referred to as Spot, was to collect all of the light reflected from a coronagraph occulter onto a focal plane, producing an Airy-type point spread function (PSF). This would allow almost all of the starlight from the central star to be used for centroiding. The second approach, referred to as Punctured Disk, collects the light that bypasses a central obscuration, producing a PSF with a punctured central disk. The final approach, referred to as Lyot, collects light after passing through the occulter at the Lyot stop. The study includes generation of representative images for each option by the science team, followed by an engineering evaluation of a centroiding or a photometric algorithm for each option. After the alignment of the coronagraph to the fine guidance system, a "nulling" point on the FGS focal plane is determined by calibration. This alignment is implemented by a fine alignment mechanism that is part of the fine guidance camera selection mirror. If the star images meet the modeling assumptions, and the star "centroid" can be driven to that nulling point, the contrast for the coronagraph will be maximized.
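The centroiding evaluated for each option can be illustrated with a plain intensity-weighted center of mass; the Gaussian spot below is a stand-in for the representative PSF images generated by the science team, not the actual data:

```python
import numpy as np

def centroid(image):
    """Intensity-weighted center of mass, a minimal stand-in for the
    centroiding algorithms evaluated for each FGS option."""
    image = np.asarray(image, dtype=float)
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return (ys * image).sum() / total, (xs * image).sum() / total

# Gaussian spot centered off-grid, mimicking a star image on the detector.
y, x = np.mgrid[0:32, 0:32]
psf = np.exp(-((x - 15.3) ** 2 + (y - 16.7) ** 2) / (2.0 * 2.0 ** 2))
cy, cx = centroid(psf)  # recovers the off-grid center to sub-pixel accuracy
```

In closed loop, the difference between this centroid and the calibrated nulling point would be the pointing error fed to the fine alignment mechanism.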
[Dilemma of the null hypothesis in experimental tests of ecological hypotheses].
Li, Ji
2016-06-01
Experimental testing is one of the major ways of testing ecological hypotheses, though there are many arguments over the role of the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis deduction model of Platt (1964) and concluded that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P)'s non-decisivity prevent a statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and alternative hypothesis H1′ (α′=1, β′=0) in ecological processes differ from those in classical physics, the ecological null hypothesis cannot be strictly tested experimentally either. These dilemmas of the null hypothesis could be relieved via the reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and the two-tailed test. However, statistical null hypothesis significance testing (NHST) should not be equated with the causal logical test of an ecological hypothesis. Hence, findings and conclusions about methodological studies and experimental tests based on NHST are not always logically reliable.
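Of the remedies listed, the two-tailed test is the simplest to make concrete: it treats both directions of deviation from the null symmetrically. A minimal sketch for a standard normal test statistic (a z-test is assumed purely for illustration):

```python
import math

def two_tailed_p(z):
    """Two-tailed p-value for a standard normal test statistic:
    p = P(|Z| >= |z|) = erfc(|z| / sqrt(2))."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def one_tailed_p(z):
    """Upper-tail p-value, for contrast with the two-tailed version."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

For the same statistic, the two-tailed p-value is twice the one-tailed value, so a directionally agnostic test is harder to pass at a fixed α, one reason it is offered above as a partial remedy.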
Ding, Shengli; Blue, Randal E.; Morgan, Douglas R.; Lund, Pauline K.
2015-01-01
Background: Activatable near-infrared fluorescent (NIRF) probes have been used for ex vivo and in vivo detection of intestinal tumors in animal models. We hypothesized that NIRF probes activatable by cathepsins or MMPs will detect and quantify dextran sulphate sodium (DSS) induced acute colonic inflammation in wild type (WT) mice or chronic colitis in IL-10 null mice ex vivo or in vivo. Methods: WT mice given DSS, water controls and IL-10 null mice with chronic colitis were administered probes by retro-orbital injection. The FMT2500 LX system imaged fresh and fixed intestine ex vivo and mice in vivo. Inflammation detected by probes was verified by histology and colitis scoring. NIRF signal intensity was quantified using 2D region of interest (ROI) analysis ex vivo or 3D ROI analysis in vivo. Results: Ex vivo, seven probes tested yielded significantly higher NIRF signals in colon of DSS treated mice versus controls. A subset of probes was tested in IL-10 null mice and yielded strong ex vivo signals. Ex vivo fluorescence signal with 680 series probes was preserved after formalin fixation. In DSS and IL-10 null models, ex vivo NIRF signal strongly and significantly correlated with colitis scores. In vivo, ProSense680, CatK680FAST and MMPsense680 yielded significantly higher NIRF signals in DSS treated mice than controls, but background was high in controls. Conclusion: Both cathepsin- and MMP-activated NIRF probes can detect and quantify colonic inflammation ex vivo. ProSense680 yielded the strongest signals in DSS colitis ex vivo and in vivo, but background remains a problem for in vivo quantification of colitis. PMID:24374874
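The 2D ROI quantification described above amounts to averaging pixel intensities inside a mask. A minimal illustration with hypothetical helper names (the study itself performed this step in the FMT2500 LX software):

```python
import numpy as np

def roi_mean_intensity(image, mask):
    """Mean fluorescence intensity inside a 2D region of interest."""
    image = np.asarray(image, dtype=float)
    return image[np.asarray(mask, dtype=bool)].mean()

def signal_to_background(image, roi_mask, bg_mask):
    """Ratio of ROI signal to a background ROI on the same image."""
    return roi_mean_intensity(image, roi_mask) / roi_mean_intensity(image, bg_mask)
```

Reporting a signal-to-background ratio rather than raw intensity is one way to handle the high control-mouse background noted in the in vivo results.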
Integrin α1β1 participates in Chondrocyte Transduction of Osmotic Stress.
Jablonski, Christina L.; Ferguson, Samuel; Pozzi, Ambra; Clark, Andrea L.
2014-01-01
Background/purpose: The goal of this study was to determine the role of the collagen binding receptor integrin α1β1 in regulating osmotically induced [Ca2+]i transients in chondrocytes. Methods: The [Ca2+]i transient response of chondrocytes to osmotic stress was measured using real-time confocal microscopy. Chondrocytes from wildtype and integrin α1-null mice were imaged ex vivo (in the cartilage of intact murine femora) and in vitro (isolated from the matrix, attached to glass coverslips). Immunocytochemistry was performed to detect the presence of the osmosensor, transient receptor potential vanilloid-4 (TRPV4), and the agonist GSK1016790A (GSK101) was used to test for its functionality on chondrocytes from wildtype and integrin α1-null mice. Results/interpretation: Deletion of the integrin α1 subunit inhibited the ability of chondrocytes to respond to a hypo-osmotic stress with [Ca2+]i transients ex vivo and in vitro. The percentage of chondrocytes responding ex vivo was smaller than in vitro and of the cells that responded, more single [Ca2+]i transients were observed ex vivo compared to in vitro. Immunocytochemistry confirmed the presence of TRPV4 on wildtype and integrin α1-null chondrocytes, however application of GSK101 revealed that TRPV4 could be activated on wildtype but not integrin α1-null chondrocytes. Integrin α1β1 is a key participant in chondrocyte transduction of a hypo-osmotic stress. Furthermore, the mechanism by which integrin α1β1 influences osmotransduction is independent of matrix binding, but likely dependent on the chondrocyte osmosensor TRPV4. PMID:24495803
Integrin α1β1 participates in chondrocyte transduction of osmotic stress.
Jablonski, Christina L; Ferguson, Samuel; Pozzi, Ambra; Clark, Andrea L
2014-02-28
The goal of this study was to determine the role of the collagen binding receptor integrin α1β1 in regulating osmotically induced [Ca(2+)]i transients in chondrocytes. The [Ca(2+)]i transient response of chondrocytes to osmotic stress was measured using real-time confocal microscopy. Chondrocytes from wildtype and integrin α1-null mice were imaged ex vivo (in the cartilage of intact murine femora) and in vitro (isolated from the matrix, attached to glass coverslips). Immunocytochemistry was performed to detect the presence of the osmosensor, transient receptor potential vanilloid-4 (TRPV4), and the agonist GSK1016790A (GSK101) was used to test for its functionality on chondrocytes from wildtype and integrin α1-null mice. Deletion of the integrin α1 subunit inhibited the ability of chondrocytes to respond to a hypo-osmotic stress with [Ca(2+)]i transients ex vivo and in vitro. The percentage of chondrocytes responding ex vivo was smaller than in vitro and of the cells that responded, more single [Ca(2+)]i transients were observed ex vivo compared to in vitro. Immunocytochemistry confirmed the presence of TRPV4 on wildtype and integrin α1-null chondrocytes, however application of GSK101 revealed that TRPV4 could be activated on wildtype but not integrin α1-null chondrocytes. Integrin α1β1 is a key participant in chondrocyte transduction of a hypo-osmotic stress. Furthermore, the mechanism by which integrin α1β1 influences osmotransduction is independent of matrix binding, but likely dependent on the chondrocyte osmosensor TRPV4. Copyright © 2014 Elsevier Inc. All rights reserved.
Minimal sufficient positive-operator valued measure on a separable Hilbert space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp
We introduce a concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have the equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM, a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling neglecting null sets. We apply these results to discrete POVMs and information conservation conditions proposed by the author.
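For background, a discrete POVM is a collection of positive semidefinite effects that sum to the identity. A small validity check for the finite-dimensional, discrete case only (the paper's results concern general POVMs on separable Hilbert spaces):

```python
import numpy as np

def is_povm(effects, tol=1e-9):
    """Check that a finite family of matrices forms a discrete POVM:
    each effect Hermitian and positive semidefinite, all summing to I."""
    effects = [np.asarray(e, dtype=complex) for e in effects]
    dim = effects[0].shape[0]
    if not np.allclose(sum(effects), np.eye(dim), atol=tol):
        return False                     # completeness fails
    for e in effects:
        if not np.allclose(e, e.conj().T, atol=tol):
            return False                 # not Hermitian
        if np.linalg.eigvalsh(e).min() < -tol:
            return False                 # not positive semidefinite
    return True
```

Relabeling in the discrete case means merging effects that give proportional outcome statistics; the minimal sufficient POVM of the abstract is the coarsest such relabeling that loses no information.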
On twisting type [N] ⊗ [N] Ricci flat complex spacetimes with two homothetic symmetries
NASA Astrophysics Data System (ADS)
Chudecki, Adam; Przanowski, Maciej
2018-04-01
In this article, HH spaces of type [N] ⊗ [N] with twisting congruence of null geodesics defined by the 4-fold undotted and dotted Penrose spinors are investigated. It is assumed that these spaces admit two homothetic symmetries. The general form of the homothetic vector fields is found. New coordinates are introduced, which enable us to reduce the HH system of partial differential equations to one ordinary differential equation (ODE) for one holomorphic function. In a special case this is a second-order ODE, and its general solution is explicitly given. In the generic case, one gets a rather involved fifth-order ODE.
Meterwavelength Single-pulse Polarimetric Emission Survey. III. The Phenomenon of Nulling in Pulsars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, Rahul; Mitra, Dipanjan; Melikidze, George I., E-mail: rahulbasu.astro@gmail.com
A detailed analysis of nulling was conducted for the pulsars studied in the Meterwavelength Single-pulse Polarimetric Emission Survey. We characterized nulling in 36 pulsars including 17 pulsars where the phenomenon was reported for the first time. The most dominant nulls lasted for a short duration, less than five periods. Longer duration nulls extending to hundreds of periods were also seen in some cases. A careful analysis showed the presence of periodicities in the transition from the null to the burst states in 11 pulsars. In our earlier work, fluctuation spectrum analysis showed multiple periodicities in 6 of these 11 pulsars. We demonstrate that the longer periodicity in each case was associated with nulling. The shorter periodicities usually originate from subpulse drifting. The nulling periodicities were more aligned with the periodic amplitude modulation, indicating a possible common origin for both. The most prevalent nulls last for a single period and can potentially be explained by random variations affecting the plasma processes in the pulsar magnetosphere. On the other hand, longer-duration nulls require changes in the pair-production processes, which need an external triggering mechanism. The presence of periodic nulling puts an added constraint on the triggering mechanism, which also needs to be periodic.
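The null/burst classification and periodicity search can be sketched as follows; the fixed energy threshold and the single Fourier peak are simplifications of the survey's actual fluctuation-spectrum analysis:

```python
import numpy as np

def null_fraction_and_period(pulse_energies, threshold):
    """Label each period null/burst by an on-pulse energy threshold,
    then take the strongest peak of the fluctuation (Fourier) spectrum
    of the null/burst sequence as the candidate nulling periodicity."""
    state = (np.asarray(pulse_energies, dtype=float) > threshold).astype(float)
    null_frac = 1.0 - state.mean()
    spec = np.abs(np.fft.rfft(state - state.mean())) ** 2
    freqs = np.fft.rfftfreq(state.size)
    k = spec[1:].argmax() + 1            # skip the zero-frequency bin
    return null_frac, 1.0 / freqs[k]

# Synthetic train: 8 burst periods then 2 nulls, repeated 20 times.
train = np.tile(np.array([1.0] * 8 + [0.0] * 2), 20)
nf, period = null_fraction_and_period(train, threshold=0.5)  # ~0.2 and ~10 periods
```

In practice the threshold is set from the observed bimodal pulse-energy distribution rather than fixed a priori, and a spectral peak shared with the amplitude modulation would hint at the common origin suggested above.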
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.; Doherty, J.
2011-12-01
Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. 
Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.
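The null-space/solution-space decomposition invoked above can be made concrete with a small toy example. This is a hypothetical NumPy sketch, unrelated to the Biscayne model itself: parameter combinations lying in the null space of the linearized model operator produce no change in the simulated observations, so no amount of conditioning data can constrain them.

```python
import numpy as np

# Illustrative sketch (not the authors' code): in linearized inversion,
# parameter combinations in the null space of the Jacobian J leave the
# simulated observations unchanged, so the data cannot constrain them.
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 5))  # 3 observations, 5 parameters: underdetermined

# SVD separates the solution space (right singular vectors with s > tol)
# from the null space (the remaining right singular vectors).
U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T          # columns span the null space of J

# Perturbing the parameters along a null-space direction changes nothing
# that the observations can "see":
p = rng.standard_normal(5)
p_perturbed = p + 10.0 * null_basis[:, 0]
print(np.allclose(J @ p, J @ p_perturbed))  # True
```

A denser parameterization enlarges the null space, which is why the authors can characterize its contribution to predictive uncertainty more fully.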
Design Architectures for Optically Multiplexed Imaging
2015-09-16
which single task is the highest priority task ∗ according to Equation 16. In essence, the task that is most often predicted to be of the...deployment (or a null deployment from inaction), our features consisted of pairwise relationships between each placed decoy and each missile. For each...decoy/missile pairing, we have features describing whether a decoy had been placed such that the missile would be successfully distracted by
Fast imaging of filaments in the X-point region of Alcator C-Mod
Terry, J. L.; Ballinger, S.; Brunner, D.; ...
2017-01-27
A rich variety of field-aligned fluctuations has been revealed using fast imaging of Dα emission from Alcator C-Mod's lower X-point region. Field-aligned filamentary fluctuations are observed along the inner divertor leg, within the Private-Flux-Zone (PFZ), in the Scrape-Off Layer (SOL) outside the outer divertor leg, and, under some conditions, at or above the X-point. The locations and dynamics of the filaments in these regions are strikingly complex in C-Mod. Changes in the filaments' generation appear to be ordered by plasma density and magnetic configuration. Filaments are not observed for plasmas with n/n_Greenwald ≲ 0.12, nor are they observed in Upper Single Null configurations. In a Lower Single Null with 0.12 ≲ n/n_Greenwald ≲ 0.45 and B×∇B directed down, filaments typically move up the inner divertor leg toward the X-point. Reversing the field direction results in the appearance of filaments outside of the outer divertor leg. With the divertor targets "detached", filaments inside the LCFS are seen. Lastly, these studies were motivated by observations of filaments in the X-point and PFZ regions in MAST, and comparisons with those observations are made.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernuzzi, Sebastiano; Nagar, Alessandro; Zenginoglu, Anil
2011-10-15
We compute and analyze the gravitational waveform emitted to future null infinity by a system of two black holes in the large-mass-ratio limit. We consider the transition from the quasiadiabatic inspiral to plunge, merger, and ringdown. The relative dynamics is driven by a leading-order-in-the-mass-ratio, 5PN-resummed, effective-one-body (EOB) analytic radiation reaction. To compute the waveforms, we solve the Regge-Wheeler-Zerilli equations in the time domain on a spacelike foliation, which coincides with the standard Schwarzschild foliation in the region including the motion of the small black hole, and is globally hyperboloidal, allowing us to include future null infinity in the computational domain by compactification. This method is called the hyperboloidal layer method, and is discussed here for the first time in a study of the gravitational radiation emitted by black hole binaries. We consider binaries characterized by five mass ratios, ν = 10^(-2), 10^(-3), 10^(-4), 10^(-5), 10^(-6), that are primary targets of space-based or third-generation gravitational wave detectors. We show significant phase differences between finite-radius and null-infinity waveforms. We test, in our context, the reliability of the extrapolation procedure routinely applied to numerical relativity waveforms. We present an updated calculation of the final and maximum gravitational recoil imparted to the merger remnant by the gravitational wave emission, v_kick^end/(cν^2) = 0.04474 ± 0.00007 and v_kick^max/(cν^2) = 0.05248 ± 0.00008. As a self-consistency test of the method, we show an excellent fractional agreement (even during the plunge) between the 5PN EOB-resummed mechanical angular momentum loss and the gravitational wave angular momentum flux computed at null infinity. New results concerning the radiation emitted from unstable circular orbits are also presented.
The high-accuracy waveforms computed here could be considered for the construction of template banks or for calibrating analytic models such as the effective-one-body model.
Tolerance analysis of null lenses using an end-use system performance criterion
NASA Astrophysics Data System (ADS)
Rodgers, J. Michael
2000-07-01
An effective method of assigning tolerances to a null lens is to determine the effects of null-lens fabrication and alignment errors on the end-use system itself, not simply the null lens. This paper describes a method to assign null-lens tolerances based on their effect on any performance parameter of the end-use system.
Gravitational collapse in Husain space-time for Brans-Dicke gravity theory with power-law potential
NASA Astrophysics Data System (ADS)
Rudra, Prabir; Biswas, Ritabrata; Debnath, Ujjal
2014-12-01
The motive of this work is to study gravitational collapse in Husain space-time in Brans-Dicke gravity theory. Among many scalar-tensor theories of gravity, Brans-Dicke is the simplest and the impact of it can be regulated by two parameters associated with it, namely, the Brans-Dicke parameter, ω, and the potential-scalar field dependency parameter n respectively. V. Husain's work on exact solution for null fluid collapse in 1996 has influenced many authors to follow his way to find the end-state of the homogeneous/inhomogeneous dust cloud. Vaidya's metric is used all over to follow the nature of future outgoing radial null geodesics. Detecting whether the central singularity is naked or wrapped by an event horizon, by the existence of future directed radial null geodesic emitted in past from the singularity is the basic objective. To point out the existence of positive trajectory tangent solution, both particular parametric cases (through tabular forms) and wide range contouring process have been applied. Precisely, perfect fluid's EoS satisfies a wide range of phenomena: from dust to exotic fluid like dark energy. We have used the EoS parameter k to determine the end state of collapse in different cosmological era. Our main target is to check low ω (more deviations from Einstein gravity-more Brans Dicke effect) and negative k zones. This particularly throws light on the nature of the end-state of collapse in accelerated expansion in Brans Dicke gravity. It is seen that for positive values of EoS parameter k, the collapse results in a black hole, whereas for negative values of k, naked singularity is the only outcome. It is also to be noted that "low ω" leads to the possibility of getting more naked singularities even for a non-accelerating universe.
Gravitational Collapse in Husain space-time for Brans-Dicke Gravity Theory with Power-law Potential.
NASA Astrophysics Data System (ADS)
Rudra, Prabir
2016-07-01
The motive of this work is to study gravitational collapse in Husain space-time in Brans-Dicke gravity theory. Among many scalar-tensor theories of gravity, Brans-Dicke is the simplest and the impact of it can be regulated by two parameters associated with it, namely, the Brans-Dicke parameter, ω, and the potential-scalar field dependency parameter 'n' respectively. V. Husain's work on exact solution for null fluid collapse in 1996 has influenced many authors to follow his way to find the end-state of the homogeneous/inhomogeneous dust cloud. Vaidya's metric is used all over to follow the nature of future outgoing radial null geodesics. Detecting whether the central singularity is naked or wrapped by an event horizon, by the existence of future directed radial null geodesic emitted in past from the singularity is the basic objective. To point out the existence of positive trajectory tangent solution, both particular parametric cases (through tabular forms) and wide range contouring process have been applied. Precisely, perfect fluid's equation of state satisfies a wide range of phenomena: from dust to exotic fluid like dark energy. We have used the equation of state parameter 'k' to determine the end state of collapse in different cosmological era. Our main target is to check low ω (more deviations from Einstein gravity-more Brans Dicke effect) and negative 'k' zones. This particularly throws light on the nature of the end-state of collapse in accelerated expansion in Brans Dicke gravity. It is seen that for positive values of EoS parameter 'k', the collapse results in a black hole, whereas for negative values of 'k', naked singularity is the only outcome. It is also to be noted that "low ω" leads to the possibility of getting more naked singularities even for a non-accelerating universe.
Coordinated control of a space manipulator tested by means of an air bearing free floating platform
NASA Astrophysics Data System (ADS)
Sabatini, Marco; Gasbarri, Paolo; Palmerini, Giovanni B.
2017-10-01
A typical approach studied for the guidance of next-generation space manipulators (satellites with robotic arms aimed at autonomously performing on-orbit operations) is to decouple the platform and the arm maneuvers, which are supposed to happen sequentially, mainly because of safety concerns. This control is implemented in this work as a two-stage Sequential control, where a first stage calls for the motion of the platform and the second stage calls for the motion of the manipulator. A second, novel strategy is proposed, considering the platform and the manipulator as a single multibody system subject to a Coordinated control, with the goal of approaching and grasping a target spacecraft. To this end, a region that the end effector can reach by means of the arm motion with limited reactions on the platform is identified (the so-called Reaction Null workspace). The Coordinated control algorithm performs a gain modulation (aimed at a balanced contribution of the platform and arm motion) as a function of the target position within this Reaction Null map. The result is a coordinated maneuver in which the end effector moves thanks to the platform motion, predominant in a first phase, and to the arm motion, predominant when the Reaction Null workspace is reached. In this way the collision avoidance and attitude over-control issues are automatically considered, without the need of splitting the mission into independent (and overall sub-optimal) segments. The guidance and control algorithms are first simulated by means of a multibody code, and subsequently tested in the lab by means of a free-floating platform equipped with a robotic arm, moving frictionless on a flat granite table thanks to air bearings and on-off thrusters; the results are discussed in terms of optimality of the fuel consumption and final accuracy.
NASA Technical Reports Server (NTRS)
Wood, S. J.; Clarke, A. H.; Rupert, A. H.; Harm, D. L.; Clement, G. R.
2009-01-01
Two joint ESA-NASA studies are examining changes in otolith-ocular reflexes and motion perception following short duration space flights, and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. METHODS. Data is currently being collected on astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation is utilized to elicit otolith reflexes in the lateral plane without concordant roll canal cues. Unilateral centrifugation (400 deg/s, 3.5 cm radius) stimulates one otolith positioned off-axis while the opposite side is centered over the axis of rotation. During this paradigm, roll-tilt perception is measured using a subjective visual vertical task and ocular counter-rolling is obtained using binocular video-oculography. During a second paradigm (216 deg/s, <20 cm radius), the effects of stimulus frequency (0.15 - 0.6 Hz) are examined on eye movements and motion perception. A closed-loop nulling task is also performed with and without vibrotactile display feedback of chair radial position. PRELIMINARY RESULTS. Data collection is currently ongoing. Results to date suggest there is a trend for perceived tilt and translation amplitudes to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. DISCUSSION. One result of this study will be to characterize the variability (gain, asymmetry) in both otolithocular responses and motion perception during variable radius centrifugation, and measure the time course of postflight recovery. 
This study will also address how adaptive changes in otolith-mediated reflexes correspond to one's ability to perform closed-loop nulling tasks following G-transitions, and whether manual control performance can be improved with vibrotactile feedback of orientation.
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Clarke, A. H.; Rupert, A. H.; Harm, D. L.; Clement, G. R.
2009-01-01
Two joint ESA-NASA studies are examining changes in otolith-ocular reflexes and motion perception following short duration space flights, and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. Data is currently being collected on astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation is utilized to elicit otolith reflexes in the lateral plane without concordant roll canal cues. Unilateral centrifugation (400 deg/s, 3.5 cm radius) stimulates one otolith positioned off-axis while the opposite side is centered over the axis of rotation. During this paradigm, roll-tilt perception is measured using a subjective visual vertical task and ocular counter-rolling is obtained using binocular video-oculography. During a second paradigm (216 deg/s, less than 20 cm radius), the effects of stimulus frequency (0.15 - 0.6 Hz) are examined on eye movements and motion perception. A closed-loop nulling task is also performed with and without vibrotactile display feedback of chair radial position. Data collection is currently ongoing. Results to date suggest there is a trend for perceived tilt and translation amplitudes to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. One result of this study will be to characterize the variability (gain, asymmetry) in both otolith-ocular responses and motion perception during variable radius centrifugation, and measure the time course of post-flight recovery. 
This study will also address how adaptive changes in otolith-mediated reflexes correspond to one's ability to perform closed-loop nulling tasks following G-transitions, and whether manual control performance can be improved with vibrotactile feedback of orientation.
Diffuse prior monotonic likelihood ratio test for evaluation of fused image quality measures.
Wei, Chuanming; Kaplan, Lance M; Burks, Stephen D; Blum, Rick S
2011-02-01
This paper introduces a novel method to score how well proposed fused image quality measures (FIQMs) indicate the effectiveness of humans to detect targets in fused imagery. The human detection performance is measured via human perception experiments. A good FIQM should relate to perception results in a monotonic fashion. The method computes a new diffuse prior monotonic likelihood ratio (DPMLR) to facilitate the comparison of the H1 hypothesis that the intrinsic human detection performance is related to the FIQM via a monotonic function against the null hypothesis that the detection and image quality relationship is random. The paper discusses many interesting properties of the DPMLR and demonstrates the effectiveness of the DPMLR test via Monte Carlo simulations. Finally, the DPMLR is used to score FIQMs with test cases considering over 35 scenes and various image fusion algorithms.
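The DPMLR itself is not reproduced here. As a simpler, hedged illustration of the underlying idea, a permutation test on Spearman's rank correlation also scores a monotonic quality-performance relationship against a null of random association; all names and data below are hypothetical.

```python
import numpy as np

# Hypothetical sketch, not the paper's DPMLR: a permutation test on
# Spearman's rank correlation tests a monotonic relation between an
# image-quality measure and detection performance against the null
# hypothesis that the pairing is random.
def spearman(x, y):
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def monotonic_pvalue(quality, detection, n_perm=2000, seed=1):
    rng = np.random.default_rng(seed)
    observed = spearman(quality, detection)
    perms = np.array([spearman(quality, rng.permutation(detection))
                      for _ in range(n_perm)])
    # one-sided: how often does a random pairing look at least as monotonic?
    return float(np.mean(perms >= observed))

# Synthetic example: detection rises monotonically (noisily) with quality.
rng = np.random.default_rng(0)
quality = np.sort(rng.uniform(0, 1, 30))
detection = quality**2 + 0.05 * rng.standard_normal(30)
p = monotonic_pvalue(quality, detection)
print(p < 0.05)  # a strongly monotonic pairing rejects the random null
```

Unlike this sketch, the DPMLR integrates over all monotonic functions under a diffuse prior rather than committing to a single rank statistic.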
Embedded importance watermarking for image verification in radiology
NASA Astrophysics Data System (ADS)
Osborne, Domininc; Rogers, D.; Sorell, M.; Abbott, Derek
2004-03-01
Digital medical images used in radiology are quite different to everyday continuous tone images. Radiology images require that all detailed diagnostic information can be extracted, which traditionally constrains digital medical images to be of large size and stored without loss of information. In order to transmit diagnostic images over a narrowband wireless communication link for remote diagnosis, lossy compression schemes must be used. This involves discarding detailed information and compressing the data, making it more susceptible to error. The loss of image detail and incidental degradation occurring during transmission have potential legal accountability issues, especially in the case of the null diagnosis of a tumor. The work proposed here investigates techniques for verifying the veracity of medical images - in particular, detailing the use of embedded watermarking as an objective means to ensure that important parts of the medical image can be verified. We present a result showing how embedded watermarking can be used to differentiate contextual from detailed information. The type of images that will be used include spiral hairline fractures and small tumors, which contain the essential diagnostic high spatial frequency information.
Predicting Cost and Schedule Growth for Military and Civil Space Systems
2008-03-01
the Shapiro-Wilk Test, and testing the residuals for constant variance using the Breusch-Pagan test. For logistic models, diagnostics include...the Breusch-Pagan Test. With this test, a p-value below 0.05 rejects the null hypothesis that the residuals have constant variance. Thus, similar...to the Shapiro-Wilk Test, because the optimal model will have constant variance of its residuals, this requires Breusch-Pagan p-values over 0.05
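As a hedged sketch of the diagnostic described in this snippet (not the report's own code), the Breusch-Pagan statistic can be computed by regressing squared residuals on the regressors; with a single regressor the LM statistic n·R² is χ²-distributed with 1 degree of freedom, so its p-value is available from the standard library's erfc.

```python
import numpy as np
from math import erfc, sqrt

# Hedged sketch (not the report's code): the Breusch-Pagan test regresses
# squared OLS residuals on the regressor; with one regressor, the LM
# statistic n*R^2 is chi-squared with 1 d.o.f., and a p-value over 0.05
# fails to reject the null of constant residual variance.
def breusch_pagan_p(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_sq = (y - X @ beta) ** 2
    gamma, *_ = np.linalg.lstsq(X, resid_sq, rcond=None)  # auxiliary regression
    ss_res = np.sum((resid_sq - X @ gamma) ** 2)
    ss_tot = np.sum((resid_sq - resid_sq.mean()) ** 2)
    lm = len(x) * (1.0 - ss_res / ss_tot)   # LM statistic = n * R^2
    return erfc(sqrt(max(lm, 0.0) / 2.0))   # chi2(1) survival function

rng = np.random.default_rng(2)
x = np.linspace(1.0, 10.0, 200)
y_const = 2 * x + rng.standard_normal(200)       # constant error variance
y_grow = 2 * x + x * rng.standard_normal(200)    # variance grows with x
p_hom = breusch_pagan_p(x, y_const)
p_het = breusch_pagan_p(x, y_grow)
print(p_het < 0.05)  # heteroskedastic residuals reject the null
```

On the homoskedastic data, p_hom typically lands well above 0.05, matching the report's acceptance criterion for a well-behaved model.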
Qu, Wei; Diwan, Bhalchandra A.; Liu, Jie; Goyer, Robert A.; Dawson, Tammy; Horton, John L.; Cherian, M. George; Waalkes, Michael P.
2002-01-01
Susceptibility to lead toxicity in MT-null mice and cells, lacking the major forms of the metallothionein (MT) gene, was compared to wild-type (WT) mice or cells. Male MT-null and WT mice received lead in the drinking water (0 to 4000 ppm) for 10 to 20 weeks. Lead did not alter body weight in any group. Unlike WT mice, lead-treated MT-null mice showed dose-related nephromegaly. In addition, after lead exposure renal function was significantly diminished in MT-null mice in comparison to WT mice. MT-null mice accumulated less renal lead than WT mice and did not form lead inclusion bodies, which were present in the kidneys of WT mice. In gene array analysis, renal glutathione S-transferases were up-regulated after lead in MT-null mice only. In vitro studies on fibroblast cell lines derived from MT-null and WT mice showed that MT-null cells were much more sensitive to lead cytotoxicity. MT-null cells accumulated less lead and formed no inclusion bodies. The MT-null phenotype seems to preclude lead-induced inclusion body formation and increases lead toxicity at the organ and cellular level despite reducing lead accumulation. This study reveals important roles for MT in chronic lead toxicity, lead accumulation, and inclusion body formation. PMID:11891201
Ambitwistor Strings in Four Dimensions
NASA Astrophysics Data System (ADS)
Geyer, Yvonne; Lipstein, Arthur E.; Mason, Lionel
2014-08-01
We develop ambitwistor string theories for four dimensions to obtain new formulas for tree-level gauge and gravity amplitudes with arbitrary amounts of supersymmetry. Ambitwistor space is the space of complex null geodesics in complexified Minkowski space, and in contrast to earlier ambitwistor strings, we use twistors rather than vectors to represent this space. Although superficially similar to the original twistor string theories of Witten, Berkovits, and Skinner, these theories differ in the assignment of world sheet spins of the fields, rely on both twistor and dual twistor representatives for the vertex operators, and use the ambitwistor procedure for calculating correlation functions. Our models are much more flexible, no longer requiring maximal supersymmetry, and the resulting formulas for amplitudes are simpler, having substantially reduced moduli. These are supported on the solutions to the scattering equations refined according to helicity and can be checked by comparison with corresponding formulas of Witten and of Cachazo and Skinner.
Dependence of hydrogen arcjet operation on electrode geometry
NASA Technical Reports Server (NTRS)
Pencil, Eric J.; Sankovic, John M.; Sarmiento, Charles J.; Hamley, John A.
1992-01-01
The dependence of 2 kW hydrogen arcjet performance on cathode-to-anode electrode spacing was evaluated at specific impulses of 900 and 1000 s. Less than 2 absolute percent change in efficiency was measured for the spacings tested, which did not repeat the 14 absolute percent variation reported in earlier work with similar electrode designs. A different nozzle configuration was used to quantify the variation in hydrogen arcjet performance over an extended range of electrode spacing. Electrode gap variation resulted in less than 3 absolute percent change in efficiency. These null results suggested that electrode spacing is decoupled from hydrogen arcjet performance over the ranges tested. The dependence of breakdown voltage on mass flow rate and electrode spacing agreed with Paschen curves for hydrogen. Preliminary characterization of the dependence of hydrogen arcjet ignition on rates of pulse repetition and pulse voltage rise was also included for comparison with previous results obtained using simulated hydrazine.
Dependence of hydrogen arcjet operation on electrode geometry
NASA Technical Reports Server (NTRS)
Pencil, Eric J.; Sankovic, John M.; Sarmiento, Charles J.; Hamley, John A.
1992-01-01
The dependence of 2 kW hydrogen arcjet performance on cathode to anode electrode spacing was evaluated at specific impulses of 900 and 1000 s. Less than 2 absolute percent change in efficiency was measured for the spacings tested which did not repeat the 14 absolute percent variation reported in earlier work with similar electrode designs. A different nozzle configuration was used to quantify the variation in hydrogen arcjet performance over an extended range of electrode spacing. Electrode gap variation resulted in less than 3 absolute percent change in efficiency. These null results suggested that electrode spacing is decoupled from hydrogen arcjet performance considerations over the ranges tested. Initial studies were conducted on hydrogen arcjet ignition. The dependence of breakdown voltage on mass flow rate and hydrogen arcjet ignition on rates of pulse repetition and pulse voltage rise were also included for comparison with previous results obtained using simulated hydrazine.
Adaptive Nulling for the Terrestrial Planet Finder Interferometer
NASA Technical Reports Server (NTRS)
Peters, Robert D.; Lay, Oliver P.; Jeganathan, Muthu; Hirai, Akiko
2006-01-01
A description of adaptive nulling for the Terrestrial Planet Finder Interferometer (TPF-I) is presented. The topics include: 1) Nulling in TPF-I; 2) Why Do Adaptive Nulling; 3) Parallel High-Order Compensator Design; 4) Phase and Amplitude Control; 5) Development Activities; 6) Requirements; 7) Simplified Experimental Setup; 8) Intensity Correction; and 9) Intensity Dispersion Stability. A short summary is also given on adaptive nulling for TPF-I.
Kroken, Abby R.; Chen, Camille K.; Evans, David J.; Yahr, Timothy L.
2018-01-01
Pseudomonas aeruginosa is internalized into multiple types of epithelial cell in vitro and in vivo and yet is often regarded as an exclusively extracellular pathogen. Paradoxically, ExoS, a type three secretion system (T3SS) effector, has antiphagocytic activities but is required for intracellular survival of P. aeruginosa and its occupation of bleb niches in epithelial cells. Here, we addressed mechanisms for this dichotomy using invasive (ExoS-expressing) P. aeruginosa and corresponding effector-null isogenic T3SS mutants, effector-null mutants of cytotoxic P. aeruginosa with and without ExoS transformation, antibiotic exclusion assays, and imaging using a T3SS-GFP reporter. Except for effector-null PA103, all strains were internalized while encoding ExoS. Intracellular bacteria showed T3SS activation that continued in replicating daughter cells. Correcting the fleQ mutation in effector-null PA103 promoted internalization by >10-fold with or without ExoS. Conversely, mutating fleQ in PAO1 reduced internalization by >10-fold, also with or without ExoS. Effector-null PA103 remained less well internalized than PAO1 matched for fleQ status, but only with ExoS expression, suggesting additional differences between these strains. Quantifying T3SS activation using GFP fluorescence and quantitative reverse transcription-PCR (qRT-PCR) showed that T3SS expression was hyperinducible for strain PA103ΔexoUT versus other isolates and was unrelated to fleQ status. These findings support the principle that P. aeruginosa is not exclusively an extracellular pathogen, with internalization influenced by the relative proportions of T3SS-positive and T3SS-negative bacteria in the population during host cell interaction. These data also challenge current thinking about T3SS effector delivery into host cells and suggest that T3SS bistability is an important consideration in studying P. aeruginosa pathogenesis. PMID:29717012
Haider, Husnain Kh; Jiang, Shujia; Idris, Niagara M; Ashraf, Muhammad
2008-11-21
We hypothesized that mesenchymal stem cells (MSCs) overexpressing insulin-like growth factor (IGF)-1 showed improved survival and engraftment in the infarcted heart and promoted stem cell recruitment through paracrine release of stromal cell-derived factor (SDF)-1α. Rat bone marrow-derived MSCs were used as nontransduced (NormMSCs) or transduced with an adenoviral null vector (NullMSCs) or a vector encoding IGF-1 (IGF-1MSCs). IGF-1MSCs secreted higher IGF-1 until 12 days of observation (P<0.001 versus NullMSCs). Molecular studies revealed activation of phosphoinositide 3-kinase, Akt, and Bcl-xL and inhibition of glycogen synthase kinase 3β, besides release of SDF-1α in parallel with IGF-1 expression in IGF-1MSCs. For in vivo studies, 70 μL of DMEM without cells (group 1) or containing 1.5×10^6 NullMSCs (group 2) or IGF-1MSCs (group 3) were implanted intramyocardially in a female rat model of permanent coronary artery occlusion. One week later, immunoblot on rat heart tissue (n=4 per group) showed elevated myocardial IGF-1 and phospho-Akt in group 3 and higher survival of IGF-1MSCs (P<0.06 versus NullMSCs) (n=6 per group). SDF-1α was increased in group 3 animal hearts (20-fold versus group 2), with massive mobilization and homing of c-kit+, MDR1+, CD31+, and CD34+ cells into the infarcted heart. Infarct size was significantly reduced in the cell-transplanted groups compared with the control. Confocal imaging after immunostaining for myosin heavy chain, actinin, connexin-43, and von Willebrand factor VIII showed extensive angiomyogenesis in the infarcted heart. Indices of left ventricular function, including ejection fraction and fractional shortening, were improved in group 3 as compared with group 1 (P<0.05). In conclusion, the strategy of IGF-1 transgene expression induced massive stem cell mobilization via SDF-1α signaling and culminated in extensive angiomyogenesis in the infarcted heart.
Implosive Collapse about Magnetic Null Points: A Quantitative Comparison between 2D and 3D Nulls
NASA Astrophysics Data System (ADS)
Thurgood, Jonathan O.; Pontin, David I.; McLaughlin, James A.
2018-03-01
Null collapse is an implosive process whereby MHD waves focus their energy in the vicinity of a null point, forming a current sheet and initiating magnetic reconnection. We consider, for the first time, the case of collapsing 3D magnetic null points in nonlinear, resistive MHD using numerical simulation, exploring key physical aspects of the system as well as performing a detailed parameter study. We find that within a particular plane containing the 3D null, the plasma and current density enhancements resulting from the collapse are quantitatively and qualitatively as per the 2D case in both the linear and nonlinear collapse regimes. However, the scaling with resistivity of the 3D reconnection rate—which is a global quantity—is found to be less favorable when the magnetic null point is more rotationally symmetric, due to the action of increased magnetic back-pressure. Furthermore, we find that, with increasing ambient plasma pressure, the collapse can be throttled, as is the case for 2D nulls. We discuss this pressure-limiting in the context of fast reconnection in the solar atmosphere and suggest mechanisms by which it may be overcome. We also discuss the implications of the results in the context of null collapse as a trigger mechanism of Oscillatory Reconnection, a time-dependent reconnection mechanism, and also within the wider subject of wave–null point interactions. We conclude that, in general, increasingly rotationally asymmetric nulls will be more favorable in terms of magnetic energy release via null collapse than their more symmetric counterparts.
Fermilab Education Office - Special Events for Students and Families
students and families. These include: Fermilab Outdoor Family Fair (K-12), Wonders of Science (2-7), Family Open House (3-12), and STEM Career Expo (9-12).
Conical twist fields and null polygonal Wilson loops
NASA Astrophysics Data System (ADS)
Castro-Alvaredo, Olalla A.; Doyon, Benjamin; Fioravanti, Davide
2018-06-01
Using an extension of the concept of twist field in QFT to space-time (external) symmetries, we study conical twist fields in two-dimensional integrable QFT. These create conical singularities of arbitrary excess angle. We show that, upon appropriate identification between the excess angle and the number of sheets, they have the same conformal dimension as branch-point twist fields commonly used to represent partition functions on Riemann surfaces, and that both fields have closely related form factors. However, we show that conical twist fields are truly different from branch-point twist fields. They generate different operator product expansions (short-distance expansions) and form factor expansions (large-distance expansions). In fact, we verify in free field theories, by re-summing form factors, that the conical twist fields' operator product expansions are correctly reproduced. We propose that conical twist fields are the correct fields in order to understand null polygonal Wilson loops/gluon scattering amplitudes of planar maximally supersymmetric Yang-Mills theory.
Bosten, J. M.; Beer, R. D.; MacLeod, D. I. A.
2015-01-01
To shed light on the perceptual basis of the color white, we measured settings of unique white in a dark surround. We find that settings reliably show more variability in an oblique (blue-yellow) direction in color space than along the cardinal axes of the cone-opponent mechanisms. This is against the idea that white perception arises at the null point of the cone-opponent mechanisms, but one alternative possibility is that it occurs through calibration to the visual environment. We found that the locus of maximum variability in settings lies close to the locus of natural daylights, suggesting that variability may result from uncertainty about the color of the illuminant. We tested this by manipulating uncertainty. First, we altered the extent to which the task was absolute (requiring knowledge of the illumination) or relative. We found no clear effect of this factor on the reduction in sensitivity in the blue-yellow direction. Second, we provided a white surround as a cue to the illumination or left the surround dark. Sensitivity was selectively worse in the blue-yellow direction when the surround was black than when it was white. Our results can be functionally related to the statistics of natural images, where a greater blue-yellow dispersion is characteristic of both reflectances (where anisotropy is weak) and illuminants (where it is very pronounced). Mechanistically, the results could suggest a neural signal responsive to deviations from the blue-yellow locus or an adaptively matched range of contrast response functions for signals that encode different directions in color space. PMID:26641948
A Gaussian Mixture Model for Nulling Pulsars
NASA Astrophysics Data System (ADS)
Kaplan, D. L.; Swiggum, J. K.; Fichtenbauer, T. D. J.; Vallisneri, M.
2018-03-01
The phenomenon of pulsar nulling—where pulsars occasionally turn off for one or more pulses—provides insight into pulsar-emission mechanisms and the processes by which pulsars turn off when they cross the “death line.” However, while ever more pulsars are found that exhibit nulling behavior, the statistical techniques used to measure nulling are biased, with limited utility and precision. In this paper, we introduce an improved algorithm, based on Gaussian mixture models, for measuring pulsar nulling behavior. We demonstrate this algorithm on a number of pulsars observed as part of a larger sample of nulling pulsars, and show that it performs considerably better than existing techniques, yielding better precision and no bias. We further validate our algorithm on simulated data. Our algorithm is widely applicable to a large number of pulsars even if they do not show obvious nulls. Moreover, it can be used to derive nulling probabilities for individual pulses, which can be used for in-depth studies.
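The mixture-model idea in the abstract above can be sketched in a few lines: fit a two-component Gaussian mixture to single-pulse intensities by EM, read the nulling fraction off the weight of the low-intensity component, and use the responsibilities as per-pulse null probabilities. This is a generic plain-numpy illustration on synthetic intensities, not the authors' pipeline:

```python
import numpy as np

def fit_two_gaussian_mixture(x, n_iter=200):
    """EM fit of a two-component 1-D Gaussian mixture.
    Returns (weights, means, stds, responsibility of the low-mean component)."""
    # crude initialisation: split around the quartiles
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sd = np.array([x.std(), x.std()]) + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each pulse
        pdf = (w / (sd * np.sqrt(2 * np.pi)) *
               np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return w, mu, sd, r[:, np.argmin(mu)]

# synthetic single-pulse intensities: 30% nulls near 0, 70% pulses near 5
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 700)])

w, mu, sd, p_null = fit_two_gaussian_mixture(x)
null_frac = w[np.argmin(mu)]
print(f"estimated nulling fraction: {null_frac:.2f}")
```

The responsibilities `p_null` play the role of the per-pulse nulling probabilities mentioned in the abstract; the component weight gives the overall nulling fraction.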
Modular Hamiltonians on the null plane and the Markov property of the vacuum state
NASA Astrophysics Data System (ADS)
Casini, Horacio; Testé, Eduardo; Torroba, Gonzalo
2017-09-01
We compute the modular Hamiltonians of regions having the future horizon lying on a null plane. For a CFT this is equivalent to regions with a boundary of arbitrary shape lying on the null cone. These Hamiltonians have a local expression on the horizon formed by integrals of the stress tensor. We prove this result in two different ways, and show that the modular Hamiltonians of these regions form an infinite-dimensional Lie algebra. The corresponding group of unitary transformations moves the fields on the null surface locally along the null generators, with arbitrary null-line-dependent velocities, but acts non-locally outside the null plane. We recover this result in greater generality using more abstract tools from algebraic quantum field theory. Finally, we show that modular Hamiltonians on the null surface satisfy a Markov property that leads to the saturation of the strong subadditivity inequality for the entropies and to the strong superadditivity of the relative entropy.
Watanabe, Hiroshi; Nomura, Yoshikazu; Kuribayashi, Ami; Kurabayashi, Tohru
2018-02-01
We aimed to evaluate the validity of the Radia diagnostic software used with the Safety and Efficacy of a New and Emerging Dental X-ray Modality (SEDENTEXCT) image quality (IQ) phantom in CT. The SEDENTEXCT IQ phantom and the Radia diagnostic software were employed. The phantom was scanned using one medical full-body CT and two dentomaxillofacial cone beam CTs. The obtained images were imported into the Radia software, and the spatial resolution outputs were evaluated. As a reference, the oversampling method was employed using our original wire phantom. The resultant modulation transfer function (MTF) curves were compared. The null hypothesis was that MTF curves generated by both methods would be in agreement. One-way analysis of variance tests were applied to the f50 and f10 values from the MTF curves. The f10 values were subjectively confirmed by observing the line pair modules. The Radia software reported the MTF curves on the xy-plane of the CT scans, but could not return f50 and f10 values on the z-axis. The null hypothesis concerning the reported MTF curves on the xy-plane was rejected. There were significant differences between the results of the Radia software and our reference method, except for the f10 values in CS9300. These findings were consistent with our line pair observations. We evaluated the validity of the Radia software with the SEDENTEXCT IQ phantom. The software's measurements were semi-automatic, albeit with problems, and differed statistically from our reference. We hope the manufacturer will overcome these limitations.
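The oversampling method referenced above yields an MTF curve from which f50 and f10 (the frequencies where the MTF falls to 0.5 and 0.1) are read off. A minimal sketch of that readout, assuming a synthetic Gaussian line-spread function as a stand-in for a measured wire profile:

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """MTF = normalised magnitude of the Fourier transform of the LSF.
    dx is the (oversampled) sample spacing in mm."""
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                              # normalise to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=dx)    # cycles/mm
    return freqs, mtf

def freq_at(level, freqs, mtf):
    """First frequency at which the MTF falls to `level`,
    with linear interpolation between samples."""
    idx = np.argmax(mtf < level)
    f0, f1 = freqs[idx - 1], freqs[idx]
    m0, m1 = mtf[idx - 1], mtf[idx]
    return f0 + (level - m0) * (f1 - f0) / (m1 - m0)

# synthetic oversampled line-spread function: Gaussian, sigma = 0.1 mm
dx = 0.005                         # 5 um effective sampling
x = np.arange(-2, 2, dx)
lsf = np.exp(-0.5 * (x / 0.1) ** 2)

freqs, mtf = mtf_from_lsf(lsf, dx)
f50 = freq_at(0.5, freqs, mtf)
f10 = freq_at(0.1, freqs, mtf)
print(f"f50 = {f50:.2f} cycles/mm, f10 = {f10:.2f} cycles/mm")
```

For a Gaussian LSF the MTF is exp(-2 π² σ² f²), so with σ = 0.1 mm the analytic values are f50 ≈ 1.87 and f10 ≈ 3.42 cycles/mm, which the numerical readout should reproduce.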
Fundus Autofluorescence Findings in a Mouse Model of Retinal Detachment
Secondi, Roberta; Kong, Jian; Blonska, Anna M.; Staurenghi, Giovanni; Sparrow, Janet R.
2012-01-01
Purpose. Fundus autofluorescence (fundus AF) changes were monitored in a mouse model of retinal detachment (RD). Methods. RD was induced by transscleral injection of hyaluronic acid (Healon) or sterile balanced salt solution (BSS) into the subretinal space of 4–5-day-old albino Abca4 null mutant and Abca4 wild-type mice. Images acquired by confocal scanning laser ophthalmoscopy (Spectralis HRA) were correlated with spectral domain optical coherence tomography (SD-OCT), infrared reflectance (IR), fluorescence spectroscopy, and histologic analysis. Results. In the area of detached retina, multiple hyperreflective spots in IR images corresponded to punctate areas of intense autofluorescence visible in fundus AF mode. The puncta exhibited changes in fluorescence intensity with time. SD-OCT disclosed undulations of the neural retina and hyperreflectivity of the photoreceptor layer that likely corresponded to histologically visible photoreceptor cell rosettes. Fluorescence emission spectra generated using flat-mounted retina, and 488 and 561 nm excitation, were similar to that of RPE lipofuscin. With increased excitation wavelength, the emission maximum shifted towards longer wavelengths, a characteristic typical of fundus autofluorescence. Conclusions. In detached retinas, hyper-autofluorescent spots appeared to originate from photoreceptor outer segments that were arranged within retinal folds and rosettes. Consistent with this interpretation is the finding that the autofluorescence was spectroscopically similar to the bisretinoids that constitute RPE lipofuscin. Under the conditions of a RD, abnormal autofluorescence may arise from excessive production of bisretinoid by impaired photoreceptor cells. PMID:22786896
O-space with high resolution readouts outperforms radial imaging.
Wang, Haifeng; Tam, Leo; Kopanoglu, Emre; Peters, Dana C; Constable, R Todd; Galiana, Gigi
2017-04-01
While O-Space imaging is well known to accelerate image acquisition beyond traditional Cartesian sampling, its advantages compared to undersampled radial imaging, the linear trajectory most akin to O-Space imaging, have not been detailed. In addition, previous studies have focused on ultrafast imaging with very high acceleration factors and relatively low resolution. The purpose of this work is to directly compare O-Space and radial imaging in their potential to deliver highly undersampled images of high resolution and minimal artifacts, as needed for diagnostic applications. We report that the greatest advantages to O-Space imaging are observed with extended data acquisition readouts. A sampling strategy that uses high resolution readouts is presented and applied to compare the potential of radial and O-Space sequences to generate high resolution images at high undersampling factors. Simulations and phantom studies were performed to investigate whether use of extended readout windows in O-Space imaging would increase k-space sampling and improve image quality, compared to radial imaging. Experimental O-Space images acquired with high resolution readouts show fewer artifacts and greater sharpness than radial imaging with equivalent scan parameters. Radial images taken with longer readouts show stronger undersampling artifacts, which can cause small or subtle image features to disappear. These features are preserved in a comparable O-Space image. High resolution O-Space imaging yields highly undersampled images of high resolution and minimal artifacts. The additional nonlinear gradient field improves image quality beyond conventional radial imaging. Copyright © 2016 Elsevier Inc. All rights reserved.
The Search for Regularity: Four Aspects of Scientific Discovery.
1984-09-01
explore the processes of scientific discovery. Our goal is not to explain historical details, though the history of science is fascinating and we will...chemical laws, as well as other laws from the history of science Table 1. BACON’s method viewed as search through a data space. Initial state: the null...discovery, then a deeper answer to the above questions is required. For instance, we know from the history of science that empirical laws eventuay
Spectral methods for the spin-2 equation near the cylinder at spatial infinity
NASA Astrophysics Data System (ADS)
Macedo, Rodrigo P.; Valiente Kroon, Juan A.
2018-06-01
We solve, numerically, the massless spin-2 equations, written in terms of a gauge based on the properties of conformal geodesics, in a neighbourhood of spatial infinity using spectral methods in both space and time. This strategy allows us to compute the solutions to these equations up to the critical sets where null infinity intersects spatial infinity. Moreover, we use the convergence rates of the numerical solutions to read off their regularity properties.
The cosmological constant and the energy of gravitational radiation
NASA Astrophysics Data System (ADS)
Chruściel, Piotr T.; Ifsits, Lukas
2016-06-01
We propose a definition of mass for characteristic hypersurfaces in asymptotically vacuum space-times with nonvanishing cosmological constant Λ ∈ R*, generalizing the definition of Trautman and Bondi for Λ = 0. We show that our definition reduces to some standard definitions in several situations. We establish a balance formula linking the characteristic mass and a suitably defined renormalized volume of the null hypersurface, generalizing the positivity identity proved by Chruściel and Paetz when Λ = 0.
Singular Perturbations and Time-Scale Methods in Control Theory: Survey 1976-1982.
1982-12-01
established in the 1960s, when they first became a means for simplified computation of optimal trajectories. It was soon recognized that singular...null-space of P(ao). The asymptotic values of the invariant zeros and associated invariant-zero directions as ε → 0 are the values computed from the... 7. WEAK COUPLING AND TIME SCALES The need for model simplification with a reduction (or distribution) of computational effort is
NASA Astrophysics Data System (ADS)
Shinzawa, Hideyuki; Mizukado, Junji
2018-03-01
Tensile deformation of a partially miscible blend of polymethyl methacrylate (PMMA) and polyethylene glycol (PEG) is studied by a rheo-optical near-infrared (NIR) characterization technique to probe deformation behavior during tensile testing. Sets of NIR spectra of the polymer samples were collected using an acousto-optic tunable filter (AOTF) NIR spectrometer coupled with a tensile testing machine as an excitation device. While deformations of the samples were readily captured as strain-dependent NIR spectra, the overall feature of the spectra was overwhelmed by the baseline fluctuation induced by the decrease in sample thickness and the consequent change in light scattering. Several pretreatment techniques, including multiplicative scatter correction (MSC) and null-space projection, were applied to the NIR spectra prior to determining the sequential order of the spectral intensity changes by two-dimensional (2D) correlation analysis. The comparison of MSC and null-space projection provided an interesting insight into the system, especially the deformation-induced variation of light scattering observed during tensile testing of the polymer sample. In addition, the sequential order determined with the 2D correlation spectra revealed that orientation of a specific part of the PMMA chain occurs before that of the others because of the interaction between the C=O group of PMMA and the terminal -OH group of PEG.
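Null-space projection as a spectral pretreatment can be illustrated generically: project each spectrum onto the orthogonal complement of a modelled nuisance subspace, so that baseline variation is removed exactly while band features survive. The constant-plus-linear baseline model below is a hypothetical stand-in for the paper's scattering model:

```python
import numpy as np

def null_space_projection(X, B):
    """Project each spectrum (row of X) onto the orthogonal complement
    of the column space of B (the modelled nuisance directions)."""
    # orthogonal projector onto span(B): P = B (B^T B)^-1 B^T
    P = B @ np.linalg.solve(B.T @ B, B.T)
    return X @ (np.eye(B.shape[0]) - P)

# synthetic "spectra": one real band at channel 40 plus varying baselines
n_chan = 101
chan = np.arange(n_chan)
band = np.exp(-0.5 * ((chan - 40) / 3.0) ** 2)
rng = np.random.default_rng(0)
offsets = rng.normal(0, 1, (20, 1))        # sample-to-sample offsets
slopes = rng.normal(0, 0.02, (20, 1))      # sample-to-sample linear tilts
X = band + offsets + slopes * chan

# nuisance subspace: constant + linear baseline vectors
B = np.column_stack([np.ones(n_chan), chan])
Xc = null_space_projection(X, B)
print("residual baseline spread:", np.std(Xc, axis=0).max())
```

Because the simulated offsets and tilts lie exactly in span(B), all corrected spectra collapse onto a single band shape; in practice the nuisance basis would be chosen from the measured scattering behavior.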
Local performance optimization for a class of redundant eight-degree-of-freedom manipulators
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1994-01-01
Local performance optimization for joint limit avoidance and manipulability maximization (singularity avoidance) is obtained by using the Jacobian matrix pseudoinverse and by projecting the gradient of an objective function into the Jacobian null space. Real-time redundancy optimization control is achieved for an eight-joint redundant manipulator having a three-axis spherical shoulder, a single elbow joint, and a four-axis spherical wrist. Symbolic solutions are used for both full-Jacobian and wrist-partitioned pseudoinverses, partitioned null-space projection matrices, and all objective function gradients. A kinematic limitation of this class of manipulators and the limitation's effect on redundancy resolution are discussed. Results obtained with graphical simulation are presented to demonstrate the effectiveness of local redundant manipulator performance optimization. Actual hardware experiments performed to verify the simulated results are also discussed. A major result is that the partitioned solution is desirable because of low computation requirements. The partitioned solution is suboptimal compared with the full solution because translational and rotational terms are optimized separately; however, the results show that the difference is not significant. Singularity analysis reveals that no algorithmic singularities exist for the partitioned solution. The partitioned and full solutions share the same physical manipulator singular conditions. When compared with the full solution, the partitioned solution is shown to be ill-conditioned in smaller neighborhoods of the shared singularities.
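The core of the gradient-projection scheme described above is the joint-rate law qdot = J⁺ ẋ + k (I − J⁺J) ∇H: the pseudoinverse term achieves the end-effector velocity, while the null-space term ascends an objective gradient without disturbing the task. A minimal numpy sketch with a toy 6×8 Jacobian; the joint-limit-avoidance gradient here is a hypothetical mid-range attractor, not the paper's symbolic formulation:

```python
import numpy as np

def redundancy_resolution(J, xdot, grad_h, k=1.0):
    """Joint rates achieving task velocity xdot while projecting the
    objective gradient grad_h into the Jacobian null space:
        qdot = J+ xdot + k (I - J+ J) grad_h
    """
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ xdot + k * N @ grad_h

# toy eight-joint arm with a 6-DOF task (two redundant DOF)
rng = np.random.default_rng(0)
J = rng.normal(size=(6, 8))
xdot = rng.normal(size=6)

# hypothetical joint-limit avoidance: push joints toward mid-range
q = rng.uniform(-1, 1, 8)
grad_h = -(q - np.zeros(8))

qdot = redundancy_resolution(J, xdot, grad_h)
# the null-space term does not disturb the end-effector task:
print(np.allclose(J @ qdot, xdot))   # True
```

Since J(I − J⁺J) = 0 for a full-row-rank Jacobian, the secondary objective is optimized "for free" in the remaining two degrees of freedom, which is exactly the local optimization the abstract describes.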
Classical Statistics and Statistical Learning in Imaging Neuroscience
Bzdok, Danilo
2017-01-01
Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using t-test and ANOVA. Throughout recent years, statistical learning methods enjoy increasing popularity especially for applications in rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It is retraced how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896
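The contrast drawn above, classical null-hypothesis testing versus cross-validated out-of-sample prediction, can be made concrete on one toy dataset. Both the two-sample t statistic and the nearest-centroid classifier below are illustrative stand-ins, not methods from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# two groups of "subjects" with a mean difference on one feature
a = rng.normal(0.0, 1.0, 50)
b = rng.normal(1.0, 1.0, 50)

# classical statistics: two-sample t statistic (equal-variance form)
sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
t = (b.mean() - a.mean()) / (sp * np.sqrt(2 / 50))
print(f"t = {t:.2f}")    # inference about the group-mean difference

# statistical learning: out-of-sample prediction accuracy, 5-fold CV
X = np.concatenate([a, b])
y = np.concatenate([np.zeros(50), np.ones(50)])
idx = rng.permutation(100)
accs = []
for fold in np.array_split(idx, 5):
    train = np.setdiff1d(idx, fold)
    # nearest-centroid classifier fit on the training fold only
    c0 = X[train][y[train] == 0].mean()
    c1 = X[train][y[train] == 1].mean()
    pred = (np.abs(X[fold] - c1) < np.abs(X[fold] - c0)).astype(float)
    accs.append((pred == y[fold]).mean())
print(f"cross-validated accuracy = {np.mean(accs):.2f}")
```

The t statistic answers an inferential question about the population mean difference; the cross-validated accuracy answers a predictive question about unseen individuals, and the two can disagree in the nuanced ways the paper discusses.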
Global Well-Posedness of the Incompressible Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Cai, Yuan; Lei, Zhen
2018-06-01
This paper studies the Cauchy problem of the incompressible magnetohydrodynamic systems with or without viscosity ν. Under the assumption that the initial velocity field and the displacement of the initial magnetic field from a non-zero constant are sufficiently small in certain weighted Sobolev spaces, the Cauchy problem is shown to be globally well-posed for all ν ≧ 0 and all space dimensions n ≧ 2. Such a result holds true uniformly in the nonnegative viscosity parameter. The proof is based on the inherent strong null structure of the systems introduced by Lei (Commun Pure Appl Math 69(11):2072-2106, 2016) and the ghost weight technique introduced by Alinhac (Invent Math 145(3):597-618, 2001).
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space, and the orthogonal complement of whose null space, are chosen among the ranges of generalized controllability and observability matrices. The reduced-order models match various combinations (chosen by the designer) of four types of parameters of the full-order system, associated with (1) low-frequency response, (2) high-frequency response, (3) low-frequency power spectral density, and (4) high-frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, offers extreme flexibility to embrace combinations of existing methods, and provides some new features.
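The oblique-projection construction can be sketched in its simplest moment-matching form: build the projector's range from controllability columns and the orthogonal complement of its null space from observability columns, so the reduced model reproduces the leading Markov parameters of the full system. A minimal illustration with generic matrices, not the paper's designer-chosen parameter combinations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                      # full order 6, reduced order 2
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

# oblique projection: range from controllability columns [B, AB],
# complementary direction from observability columns [C', A'C']
V = np.column_stack([B, A @ B])
W = np.column_stack([C.T, A.T @ C.T])

E = np.linalg.inv(W.T @ V)                 # Petrov-Galerkin normalisation
Ar = E @ W.T @ A @ V
Br = E @ W.T @ B
Cr = C @ V

# the reduced model matches the first 2 + 2 = 4 Markov parameters
for k in range(4):
    full = C @ np.linalg.matrix_power(A, k) @ B
    red = Cr @ np.linalg.matrix_power(Ar, k) @ Br
    print(k, np.allclose(full, red))
```

Swapping the controllability/observability columns for their frequency-weighted or covariance analogues gives the low/high-frequency response and power-spectral-density matches listed in the abstract.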
New Techniques for High-Contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline
NASA Technical Reports Server (NTRS)
Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Grady, C. A.;
2012-01-01
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the Strategic Exploration of Exoplanets and Disks (SEEDS) survey. We implement several new algorithms, including a method to centroid saturated images, a trimmed mean for combining an image sequence that reduces noise by up to approximately 20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI is freely available for download at www.github.com/t-brandt/acorns_-adi under a BSD license.
The Fourier-Kelvin Stellar Interferometer
NASA Astrophysics Data System (ADS)
Danchi, W. C.; Allen, R. J.; Benford, D. J.; Deming, D.; Gezari, D. Y.; Kuchner, M.; Leisawitz, D. T.; Linfield, R.; Millan-Gabet, R.; Monnier, J. D.; Mumma, M.; Mundy, L. G.; Noecker, C.; Rajagopal, J.; Seager, S.; Traub, W. A.
2003-10-01
The Fourier-Kelvin Stellar Interferometer (FKSI) is a mission concept for an imaging and nulling interferometer for the mid-infrared spectral region (5-28 microns). FKSI is conceived as a scientific and technological pathfinder to TPF/DARWIN as well as the NASA Vision Missions SAFIR and SPECS. It will also be a high angular resolution infrared space observatory complementary to JWST. The scientific emphasis of the mission is on detection and spectroscopy of the atmospheres of Extra-solar Giant Planets (EGPs), the search for Brown Dwarfs and other low mass stellar companions, and the evolution of protostellar systems. FKSI can observe these systems from just after the collapse of the precursor molecular cloud core, through the formation of the disk surrounding the protostar, the formation of planets in the disk, and eventual dispersal of the disk material. FKSI could also play a very powerful role in the investigation of the structure of active galactic nuclei and extra-galactic star formation. We present the major results of a set of detailed design studies for the FKSI mission that were performed as a method of understanding major trade-offs pertinent to schedule, cost, and risk in preparation for submission of a Discovery proposal.
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs a partial null lens (PNL) and a reverse iterative optimization reconstruction (ROR) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the rest arising from the non-null interferometer, obtained by the approach of error-storage subtraction. Experimental results show that, after the systematic error is removed from the test result, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.
Role of Plasmodium vivax Duffy-binding protein 1 in invasion of Duffy-null Africans
Gunalan, Karthigayan; Lo, Eugenia; Hostetler, Jessica B.; Yewhalaw, Delenasaw; Mu, Jianbing; Neafsey, Daniel E.; Yan, Guiyun; Miller, Louis H.
2016-01-01
The ability of the malaria parasite Plasmodium vivax to invade erythrocytes is dependent on the expression of the Duffy blood group antigen on erythrocytes. Consequently, Africans who are null for the Duffy antigen are not susceptible to P. vivax infections. Recently, P. vivax infections in Duffy-null Africans have been documented, raising the possibility that P. vivax, a virulent pathogen in other parts of the world, may expand malarial disease in Africa. P. vivax binds the Duffy blood group antigen through its Duffy-binding protein 1 (DBP1). To determine if mutations in DBP1 resulted in the ability of P. vivax to bind Duffy-null erythrocytes, we analyzed P. vivax parasites obtained from two Duffy-null individuals living in Ethiopia where Duffy-null and -positive Africans live side-by-side. We determined that, although the DBP1s from these parasites contained unique sequences, they failed to bind Duffy-null erythrocytes, indicating that mutations in DBP1 did not account for the ability of P. vivax to infect Duffy-null Africans. However, an unusual DNA expansion of DBP1 (three and eight copies) in the two Duffy-null P. vivax infections suggests that an expansion of DBP1 may have been selected to allow low-affinity binding to another receptor on Duffy-null erythrocytes. Indeed, we show that Salvador (Sal) I P. vivax infects Squirrel monkeys independently of DBP1 binding to Squirrel monkey erythrocytes. We conclude that P. vivax Sal I and perhaps P. vivax in Duffy-null patients may have adapted to use new ligand–receptor pairs for invasion. PMID:27190089
Uchimura, Tomoya; Hollander, Judith M; Nakamura, Daisy S; Liu, Zhiyi; Rosen, Clifford J; Georgakoudi, Irene; Zeng, Li
2017-10-01
Postnatal bone growth involves a dramatic increase in length and girth. Intriguingly, this period of growth is independent of growth hormone and the underlying mechanism is poorly understood. Recently, an IGF2 mutation was identified in humans with early postnatal growth restriction. Here, we show that IGF2 is essential for longitudinal and appositional murine postnatal bone development, which involves proper timing of chondrocyte maturation and perichondrial cell differentiation and survival. Importantly, the Igf2 null mouse model does not represent a simple delay of growth but instead uncoordinated growth plate development. Furthermore, biochemical and two-photon imaging analyses identified elevated and imbalanced glucose metabolism in the Igf2 null mouse. Attenuation of glycolysis rescued the mutant phenotype of premature cartilage maturation, thereby indicating that IGF2 controls bone growth by regulating glucose metabolism in chondrocytes. This work links glucose metabolism with cartilage development and provides insight into the fundamental understanding of human growth abnormalities. © 2017. Published by The Company of Biologists Ltd.
Compact solar UV burst triggered in a magnetic field with a fan-spine topology
NASA Astrophysics Data System (ADS)
Chitta, L. P.; Peter, H.; Young, P. R.; Huang, Y.-M.
2017-09-01
Context. Solar ultraviolet (UV) bursts are small-scale features that exhibit intermittent brightenings that are thought to be due to magnetic reconnection. They are observed abundantly in the chromosphere and transition region, in particular in active regions. Aims: We investigate in detail a UV burst related to a magnetic feature that is advected by the moat flow from a sunspot towards a pore. The moving feature is parasitic in that its magnetic polarity is opposite to that of the spot and the pore. This comparably simple photospheric magnetic field distribution allows for an unambiguous interpretation of the magnetic geometry leading to the onset of the observed UV burst. Methods: We used UV spectroscopic and slit-jaw observations from the Interface Region Imaging Spectrograph (IRIS) to identify and study chromospheric and transition region spectral signatures of said UV burst. To investigate the magnetic topology surrounding the UV burst, we used a two-hour-long time sequence of simultaneous line-of-sight magnetograms from the Helioseismic and Magnetic Imager (HMI) and performed data-driven 3D magnetic field extrapolations by means of a magnetofrictional relaxation technique. We can connect UV burst signatures to the overlying extreme UV (EUV) coronal loops observed by the Atmospheric Imaging Assembly (AIA). Results: The UV burst shows a variety of extremely broad line profiles indicating plasma flows in excess of ±200 km s-1 at times. The whole structure is divided into two spatially distinct zones of predominantly up- and downflows. The magnetic field extrapolations show a persistent fan-spine magnetic topology at the UV burst. The associated 3D magnetic null point exists at a height of about 500 km above the photosphere and evolves co-spatially with the observed UV burst. The EUV emission at the footpoints of coronal loops is correlated with the evolution of the underlying UV burst. 
Conclusions: The magnetic field around the null point is sheared by photospheric motions, triggering magnetic reconnection that ultimately powers the observed UV burst and energises the overlying coronal loops. The location of the null point suggests that the burst is triggered low in the solar chromosphere. Movies associated with Figs. 2 and 4 are available at http://www.aanda.org
Toroidally symmetric plasma vortex at tokamak divertor null point
Umansky, M. V.; Ryutov, D. D.
2016-03-09
Reduced MHD equations are used for studying toroidally symmetric plasma dynamics near the divertor null point. Numerical solution of these equations exhibits a plasma vortex localized at the null point, with the time-evolution defined by the interplay of the curvature drive, magnetic restoring force, and dissipation. Convective motion is easier to achieve for a second-order null (snowflake) divertor than for a regular x-point configuration, and the size of the convection zone in a snowflake configuration grows with plasma pressure at the null point. In conclusion, the trends in the simulations are consistent with tokamak experiments, which indicate the presence of enhanced transport at the null point.
Off-Axis Nulling Transfer Function Measurement: A First Assessment
NASA Technical Reports Server (NTRS)
Vedova, G. Dalla; Menut, J.-L.; Millour, F.; Petrov, R.; Cassaing, F.; Danchi, W. C.; Jacquinod, S.; Lhome, E.; Lopez, B.; Lozi, J.;
2013-01-01
We want to study a polychromatic inverse problem method with nulling interferometers to obtain information on the structures of the exozodiacal light. For this reason, during the first semester of 2013, thanks to the support of the PERSEE consortium, we launched a campaign of laboratory measurements with the nulling interferometric test bench PERSEE, operating with 9 spectral channels between the J and K bands. Our objective is to characterise the transfer function, i.e. the map of the null as a function of wavelength for an off-axis source, the null being optimised on the central source or on the source photocenter. We were able to reach on-axis null depths better than 10^-4. This work is part of a broader project aiming at creating a simulator of a nulling interferometer in which the typical noises of a real instrument are introduced. We present here our first results.
Loss of Vitamin D Receptor Produces Polyuria by Increasing Thirst
Kong, Juan; Zhang, Zhongyi; Li, Dongdong; Wong, Kari E.; Zhang, Yan; Szeto, Frances L.; Musch, Mark W.; Li, Yan Chun
2008-01-01
Vitamin D receptor (VDR)-null mice develop polyuria, but the underlying mechanism remains unknown. In this study, we investigated the relationship between vitamin D and homeostasis of water and electrolytes. VDR-null mice had polyuria, but the urine osmolarity was normal as a result of high salt excretion. The urinary responses to water restriction and to vasopressin were similar between wild-type and VDR-null mice, suggesting intact fluid-handling capacity in VDR-null mice. Compared with wild-type mice, however, renin and angiotensin II were dramatically upregulated in the kidney and brain of VDR-null mice, leading to a marked increase in water intake and salt appetite. Angiotensin II–mediated upregulation of intestinal NHE3 expression partially explained the increased salt absorption and excretion in VDR-null mice. In the brain of VDR-null mice, expression of c-Fos, which is known to associate with increased water intake, was increased in the hypothalamic paraventricular nucleus and the subfornical organ. Treatment with an angiotensin II type 1 receptor antagonist normalized water intake, urinary volume, and c-Fos expression in VDR-null mice. Furthermore, despite a salt-deficient diet to reduce intestinal salt absorption, VDR-null mice still maintained the increased water intake and urinary output. Together, these data indicate that the polyuria observed in VDR-null mice is not caused by impaired renal fluid handling or increased intestinal salt absorption but rather is the result of increased water intake induced by the increase in systemic and brain angiotensin II. PMID:18832438
Henseler, Helga; Smith, Joanna; Bowman, Adrian; Khambay, Balvinder S; Ju, Xiangyang; Ayoub, Ashraf; Ray, Arup K
2012-09-01
The latissimus dorsi muscle flap is a common method for breast reconstruction following mastectomy. This study aimed to assess the quality of this reconstruction using a three-dimensional (3D) imaging method. The null hypothesis was that there was no difference in volume between the reconstructed breast and the opposite side. The study was conducted in forty-four patients who had undergone immediate unilateral breast reconstruction with a latissimus dorsi muscle flap. The breasts were captured using the 3D imaging system, and ten landmarks were digitised on the 3D images. The volume of each breast was measured with the Breast Analysis Tool software, and breast symmetry was measured using Procrustes analysis. The impact of breast position, orientation, size and intrinsic shape on overall breast asymmetry was investigated. The null hypothesis was rejected: the reconstructed breast showed a significantly smaller volume than the opposite side (p < 0.0001; mean difference 176.8 cc, 95% CI 103.5 to 250.0). The shape and position of the reconstructed breast were the main contributors to the measured asymmetry score. 3D imaging was efficient in evaluating the outcome of breast surgery. The latissimus dorsi muscle flap on its own did not fully restore the breast volume and shape lost to complete mastectomy; modification of this method, or the selection of other or additional surgical techniques for breast reconstruction, should be considered. Asymmetry analysis through reflection and Procrustes matching was a useful method for objective shape analysis of the female breast and presents a new approach to breast shape assessment. Intrinsic breast shape and breast positioning were major components of postoperative breast asymmetry.
The reconstructed breast was smaller overall than the un-operated breast at a significant level when assessing the breast volume using the surface area. 3D imaging by multiple stereophotogrammetry was a useful tool for volume measurements, shape analysis and the evaluation of symmetry. Copyright © 2012 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Abnormal Mammary Development in 129:STAT1-Null Mice is Stroma-Dependent
Cardiff, Robert D.; Trott, Josephine F.; Hovey, Russell C.; Hubbard, Neil E.; Engelberg, Jesse A.; Tepper, Clifford G.; Willis, Brandon J.; Khan, Imran H.; Ravindran, Resmi K.; Chan, Szeman R.; Schreiber, Robert D.; Borowsky, Alexander D.
2015-01-01
Female 129:Stat1-null mice (129S6/SvEvTac-Stat1tm1Rds homozygous) uniquely develop estrogen-receptor (ER)-positive mammary tumors. Herein we report that the mammary glands (MG) of these mice have altered growth and development with abnormal terminal end buds alongside defective branching morphogenesis and ductal elongation. We also find that the 129:Stat1-null mammary fat pad (MFP) fails to sustain the growth of 129S6/SvEv wild-type and Stat1-null epithelium. These abnormalities are partially reversed by elevated serum progesterone and prolactin, whereas transplantation of wild-type bone marrow into 129:Stat1-null mice does not reverse the MG developmental defects. Medium conditioned by 129:Stat1-null epithelium-cleared MFP does not stimulate epithelial proliferation, whereas proliferation is stimulated by medium conditioned by epithelium-cleared MFP from either wild-type or 129:Stat1-null females having elevated progesterone and prolactin. Microarrays and multiplexed cytokine assays reveal that the MG of 129:Stat1-null mice have lower levels of growth factors that have been implicated in normal MG growth and development. Transplanted 129:Stat1-null tumors and their isolated cells also grow more slowly in 129:Stat1-null MG compared with wild-type recipient MG. These studies demonstrate that growth of normal and neoplastic 129:Stat1-null epithelium is dependent on the hormonal milieu and on factors from the mammary stroma, such as cytokines. While the individual or combined effects of these factors remain to be resolved, our data support the role of STAT1 in maintaining a tumor-suppressive MG microenvironment. PMID:26075897
Tennese, Alysa A; Wevrick, Rachel
2011-03-01
Hypothalamic dysfunction may underlie endocrine abnormalities in Prader-Willi syndrome (PWS), a genetic disorder that features GH deficiency, obesity, and infertility. One of the genes typically inactivated in PWS, MAGEL2, is highly expressed in the hypothalamus. Mice deficient for Magel2 are obese, with increased fat mass and decreased lean mass, and have a blunted circadian rhythm. Here, we demonstrate that Magel2-null mice have abnormalities of hypothalamic endocrine axes that recapitulate phenotypes in PWS. Magel2-null mice had elevated basal corticosterone levels, and although male Magel2-null mice had an intact corticosterone response to restraint and to insulin-induced hypoglycemia, female Magel2-null mice failed to respond to hypoglycemia with increased corticosterone. After insulin-induced hypoglycemia, Magel2-null mice of both sexes became more profoundly hypoglycemic, and female mice were slower to recover euglycemia, suggesting an impaired hypothalamic counterregulatory response. GH insufficiency can produce abnormal body composition, such as that seen in PWS and in Magel2-null mice. Male Magel2-null mice had Igf-I levels similar to control littermates. Female Magel2-null mice had low Igf-I levels and reduced GH release in response to stimulation with ghrelin. Female Magel2-null mice did respond to GHRH, suggesting that their GH deficiency has a hypothalamic rather than pituitary origin. Female Magel2-null mice also had higher serum adiponectin than expected considering their increased fat mass, and thyroid hormone (T4) levels were low. Together, these findings strongly suggest that loss of MAGEL2 contributes to endocrine dysfunction of hypothalamic origin in individuals with PWS.
Tsuchiya, Hiroyuki; da Costa, Kerry-Ann; Lee, Sangmin; Renga, Barbara; Jaeschke, Hartmut; Yang, Zhihong; Orena, Stephen J; Goedken, Michael J; Zhang, Yuxia; Kong, Bo; Lebofsky, Margitta; Rudraiah, Swetha; Smalling, Rana; Guo, Grace; Fiorucci, Stefano; Zeisel, Steven H; Wang, Li
2015-05-01
Hyperhomocysteinemia is often associated with liver and metabolic diseases. We studied nuclear receptors that mediate oscillatory control of homocysteine homeostasis in mice, using mice with disruptions in Nr0b2 (called small heterodimer partner [SHP]-null mice), betaine-homocysteine S-methyltransferase (Bhmt), or both genes (BHMT-null/SHP-null mice), along with mice with wild-type copies of these genes (controls). Hyperhomocysteinemia was induced by feeding mice alcohol (National Institute on Alcohol Abuse and Alcoholism binge model) or chow diets along with water containing 0.18% DL-homocysteine. Some mice were placed on diets containing cholic acid (1%) or cholestyramine (2%) or high-fat diets (60%). Serum and livers were collected during a 24-hour light-dark cycle and analyzed by RNA-seq, metabolomics, quantitative polymerase chain reaction, immunoblot, and chromatin immunoprecipitation assays. SHP-null mice had altered timing in expression of genes that regulate homocysteine metabolism compared with control mice. Oscillatory production of S-adenosylmethionine, betaine, choline, phosphocholine, glycerophosphocholine, cystathionine, cysteine, hydrogen sulfide, glutathione disulfide, and glutathione differed between SHP-null mice and control mice. SHP inhibited transcriptional activation of Bhmt and cystathionine γ-lyase by FOXA1. Expression of Bhmt and cystathionine γ-lyase was decreased when mice were fed cholic acid but increased when they were placed on diets containing cholestyramine or high-fat content. Diets containing ethanol or homocysteine induced hyperhomocysteinemia and glucose intolerance in control, but not SHP-null, mice. In BHMT-null and BHMT-null/SHP-null mice fed a control liquid diet, lipid vacuoles were observed in livers. Ethanol feeding induced accumulation of macrovesicular lipid vacuoles to the greatest extent in BHMT-null and BHMT-null/SHP-null mice. 
Disruption of Shp in mice alters timing of expression of genes that regulate homocysteine metabolism and the liver responses to ethanol and homocysteine. SHP inhibits the transcriptional activation of Bhmt and cystathionine γ-lyase by FOXA1. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Clement, Gilles; Wood, Scott J.
2010-01-01
This joint ESA-NASA study is examining changes in motion perception following Space Shuttle flights and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. METHODS. Data has been collected on 5 astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation (216 deg/s) combined with body translation (12-22 cm, peak-to-peak) is utilized to elicit roll-tilt perception (equivalent to 20 deg, peak-to-peak). A forward-backward moving sled (24-390 cm, peak-to-peak) with or without chair tilting in pitch is utilized to elicit pitch tilt perception (equivalent to 20 deg, peak-to-peak). These combinations are elicited at 0.15, 0.3, and 0.6 Hz for evaluating the effect of motion frequency on tilt-translation ambiguity. In both devices, a closed-loop nulling task is also performed during pseudorandom motion with and without vibrotactile feedback of tilt. All tests are performed in complete darkness. PRELIMINARY RESULTS. Data collection is currently ongoing. Results to date suggest there is a trend for translation motion perception to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. DISCUSSION. The results of this study indicate that post-flight recovery of motion perception and manual control performance is complete within 8 days following short-duration space missions. Vibrotactile feedback of tilt improves manual control performance both before and after flight.
Kumar, Ramiya; Mota, Linda C.; Litoff, Elizabeth J.; Rooney, John P.; Boswell, W. Tyler; Courter, Elliott; Henderson, Charles M.; Hernandez, Juan P.; Corton, J. Christopher; Moore, David D.; Baldwin, William S.
2017-01-01
Targeted mutant models are common in mechanistic toxicology experiments investigating the absorption, distribution, metabolism, or elimination (ADME) of chemicals from individuals. Key models include those for xenosensing transcription factors and cytochrome P450s (CYP). Here we investigated changes in transcript levels, protein expression, and steroid hydroxylation of several xenobiotic detoxifying CYPs in constitutive androstane receptor (CAR)-null and two CYP-null mouse models that have subfamily members regulated by CAR: the Cyp3a-null and a newly described Cyp2b9/10/13-null mouse model. Compensatory changes in CYP expression that occur in these models may also occur in polymorphic humans, or may complicate interpretation of ADME studies performed using these models. The loss of CAR causes significant changes in several CYPs, probably due to loss of CAR-mediated constitutive regulation of these CYPs. Expression and activity changes include significant repression of Cyp2a and Cyp2b members with corresponding drops in 6α- and 16β-testosterone hydroxylase activity. Further, the ratio of 6α-/15α-hydroxylase activity, a biomarker of sexual dimorphism in the liver, indicates masculinization of female CAR-null mice, suggesting a role for CAR in the regulation of sexually dimorphic liver CYP profiles. The loss of Cyp3a causes fewer changes than the loss of CAR. Nevertheless, there are compensatory changes including gender-specific increases in Cyp2a and Cyp2b. Cyp2a and Cyp2b were down-regulated in CAR-null mice, suggesting activation of CAR and potentially PXR following loss of the Cyp3a members. However, the loss of Cyp2b causes few changes in hepatic CYP transcript levels and almost no significant compensatory changes in protein expression or activity, with the possible exception of 6α-hydroxylase activity. This lack of a compensatory response in the Cyp2b9/10/13-null mice is probably due to low CYP2B hepatic expression, especially in male mice. 
Overall, compensatory and regulatory CYP changes followed the order CAR-null > Cyp3a-null > Cyp2b-null mice. PMID:28350814
Survival of glucose phosphate isomerase null somatic cells and germ cells in adult mouse chimaeras
Keighren, Margaret A.; Flockhart, Jean H.
2016-01-01
The mouse Gpi1 gene encodes the glycolytic enzyme glucose phosphate isomerase. Homozygous Gpi1−/− null mouse embryos die but a previous study showed that some homozygous Gpi1−/− null cells survived when combined with wild-type cells in fetal chimaeras. One adult female Gpi1−/−↔Gpi1c/c chimaera with functional Gpi1−/− null oocytes was also identified in a preliminary study. The aims were to characterise the survival of Gpi1−/− null cells in adult Gpi1−/−↔Gpi1c/c chimaeras and determine if Gpi1−/− null germ cells are functional. Analysis of adult Gpi1−/−↔Gpi1c/c chimaeras with pigment and a reiterated transgenic lineage marker showed that low numbers of homozygous Gpi1−/− null cells could survive in many tissues of adult chimaeras, including oocytes. Breeding experiments confirmed that Gpi1−/− null oocytes in one female Gpi1−/−↔Gpi1c/c chimaera were functional and provided preliminary evidence that one male putative Gpi1−/−↔Gpi1c/c chimaera produced functional spermatozoa from homozygous Gpi1−/− null germ cells. Although the male chimaera was almost certainly Gpi1−/−↔Gpi1c/c, this part of the study is considered preliminary because only blood was typed for GPI. Gpi1−/− null germ cells should survive in a chimaeric testis if they are supported by wild-type Sertoli cells. It is also feasible that spermatozoa could bypass a block at GPI, but not blocks at some later steps in glycolysis, by using fructose, rather than glucose, as the substrate for glycolysis. Although chimaera analysis proved inefficient for studying the fate of Gpi1−/− null germ cells, it successfully identified functional Gpi1−/− null oocytes and revealed that some Gpi1−/− null cells could survive in many adult tissues. PMID:27103217
An Evaluative Study of the Defense Mechanism Test
1990-07-01
massive increase in this effect. The use of a parallel test with more dissimilar stimuli is also not a practicable option, as in this case results would no...training" was practically null (0.07). In order to be sure that this result was not simply due to inappropriate application of Swedish weightings of the...the threatening image is a father figure, catching the boy masturbating (the violin being a phallic symbol). Unsuccessful candidates In the test have
Beaton, Kara H.; Huffman, W. Cary; Schubert, Michael C.
2015-01-01
Increased ocular positioning misalignments upon exposure to altered gravity levels (g-levels) have been strongly correlated with space motion sickness (SMS) severity, possibly due to underlying otolith asymmetries uncompensated in novel gravitational environments. We investigated vertical and torsional ocular positioning misalignments elicited by the 0 g and 1.8 g levels of parabolic flight and used these data to develop a computational model to describe how such misalignments might arise. Ocular misalignments were inferred through two perceptual nulling tasks: Vertical Alignment Nulling (VAN) and Torsional Alignment Nulling (TAN). All test subjects exhibited significant differences in ocular misalignments in the novel g-levels, which we postulate to be the result of healthy individuals with 1 g-tuned central compensatory mechanisms unadapted to the parabolic flight environment. Furthermore, the magnitude and direction of ocular misalignments in hypo-g and hyper-g, in comparison to 1 g, were nonlinear and nonmonotonic. Previous linear models of central compensation do not predict this. Here we show that a single model of the form a + b·g^ε, where a, b, and ε are the model parameters and g is the current g-level, accounts for both the vertical and torsional ocular misalignment data observed in flight. Furthering our understanding of oculomotor control is critical for the development of interventions that promote adaptation in spaceflight (e.g., countermeasures for novel g-level exposure) and terrestrial (e.g., rehabilitation protocols for vestibular pathology) environments. PMID:26082691
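The single power-law model quoted in the abstract can be evaluated directly. In this sketch the parameter values are hypothetical, chosen only so that the misalignment vanishes at 1 g (a = -b); they are not the fitted values from the study:

```python
import numpy as np

def ocular_misalignment(g, a, b, eps):
    """Misalignment model a + b * g**eps, with free parameters a, b, eps
    and g the current gravito-inertial level in g-units."""
    return a + b * np.power(g, eps)

# Hypothetical parameters: a = -b makes the 1 g misalignment zero,
# mimicking a subject fully compensated for Earth gravity.
a, b, eps = -0.8, 0.8, 0.5
for g in (0.0, 1.0, 1.8):
    print(f"g = {g:.1f}  misalignment = {ocular_misalignment(g, a, b, eps):+.3f}")
```

Because g enters through a power law, the response at 0 g and 1.8 g is asymmetric about the 1 g value, which is the kind of nonlinear, nonmonotonic behavior a linear compensation model cannot reproduce.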
NASA Technical Reports Server (NTRS)
2011-01-01
Topics covered include: 1) Method to Estimate the Dissolved Air Content in Hydraulic Fluid; 2) Method for Measuring Collimator-Pointing Sensitivity to Temperature Changes; 3) High-Temperature Thermometer Using Cr-Doped GdAlO3 Broadband Luminescence; 4)Metrology Arrangement for Measuring the Positions of Mirrors of a Submillimeter Telescope; 5) On-Wafer S-Parameter Measurements in the 325-508-GHz Band; 6) Reconfigurable Microwave Phase Delay Element for Frequency Reference and Phase-Shifter Applications; 7) High-Speed Isolation Board for Flight Hardware Testing; 8) High-Throughput, Adaptive FFT Architecture for FPGA-Based Spaceborne Data Processors; 9) 3D Orbit Visualization for Earth-Observing Missions; 10) MaROS: Web Visualization of Mars Orbiting and Landed Assets; 11) RAPID: Collaborative Commanding and Monitoring of Lunar Assets; 12) Image Segmentation, Registration, Compression, and Matching; 13) Image Calibration; 14) Rapid ISS Power Availability Simulator; 15) A Method of Strengthening Composite/Metal Joints; 16) Pre-Finishing of SiC for Optical Applications; 17) Optimization of Indium Bump Morphology for Improved Flip Chip Devices; 18) Measuring Moisture Levels in Graphite Epoxy Composite Sandwich Structures; 19) Marshall Convergent Spray Formulation Improvement for High Temperatures; 20) Real-Time Deposition Monitor for Ultrathin Conductive Films; 21) Optimized Li-Ion Electrolytes Containing Triphenyl Phosphate as a Flame-Retardant Additive; 22) Radiation-Resistant Hybrid Lotus Effect for Achieving Photoelectrocatalytic Self-Cleaning Anticontamination Coatings; 23) Improved, Low-Stress Economical Submerged Pipeline; 24) Optical Fiber Array Assemblies for Space Flight on the Lunar Reconnaissance Orbiter; 25) Local Leak Detection and Health Monitoring of Pressurized Tanks; 26) Dielectric Covered Planar Antennas at Submillimeter Wavelengths for Terahertz Imaging; 27) Automated Cryocooler Monitor and Control System; 28) Broadband Achromatic Phase Shifter for a 
Nulling Interferometer; 29) Super Dwarf Wheat for Growth in Confined Spaces; 30) Fine Guidance Sensing for Coronagraphic Observatories; 31) Single-Antenna Temperature- and Humidity-Sounding Microwave Receiver; 32) Multi-Wavelength, Multi-Beam, and Polarization-Sensitive Laser Transmitter for Surface Mapping; 33) Optical Communications Link to Airborne Transceiver; 34) Ascent Heating Thermal Analysis on Spacecraft Adaptor Fairings; 35) Entanglement in Self-Supervised Dynamics; 36) Prioritized LT Codes; 37) Fast Image Texture Classification Using Decision Trees; 38) Constraint Embedding Technique for Multibody System Dynamics; 39) Improved Systematic Pointing Error Model for the DSN Antennas; 40) Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks; 41) More-Accurate Model of Flows in Rocket Injectors; 42) In-Orbit Instrument-Pointing Calibration Using the Moon as a Target; 43) Reliability of Ceramic Column Grid Array Interconnect Packages Under Extreme Temperatures; 44) Six Degrees-of-Freedom Ascent Control for Small-Body Touch and Go; and 45) Optical-Path-Difference Linear Mechanism for the Panchromatic Fourier Transform Spectrometer.
Look again: effects of brain images and mind-brain dualism on lay evaluations of research.
Hook, Cayce J; Farah, Martha J
2013-09-01
Brain scans have frequently been credited with uniquely seductive and persuasive qualities, leading to claims that fMRI research receives a disproportionate share of public attention and funding. It has been suggested that functional brain images are fascinating because they contradict dualist beliefs regarding the relationship between the body and the mind. Although previous research has indicated that brain images can increase judgments of an article's scientific reasoning, the hypotheses that brain scans make research appear more interesting, surprising, or worthy of funding have not been tested. Neither has the relation between the allure of brain imaging and dualism. In the following three studies, laypersons rated both fictional research descriptions and real science news articles accompanied by brain scans, bar charts, or photographs. Across 988 participants, we found little evidence of neuroimaging's seductive allure or of its relation to self-professed dualistic beliefs. These results, taken together with other recent null findings, suggest that brain images are less powerful than has been argued.
Metzner, Rebecca; Schwestka-Polly, Rainer; Helms, Hans-Joachim; Wiechmann, Dirk
2015-06-20
Orthodontic protraction of mandibular molars without maxillary counterbalance extraction in cases of aplasia or extraction requires stable anchorage. Reinforcement may be achieved by using either temporary anchorage devices (TAD) or a fixed, functional appliance. The objective was to compare the clinical effectiveness of both methods by testing the null hypothesis of no significant difference in the velocity of space closure (in mm/month) between them. In addition, we set out to describe the quality of posterior space management and treatment-related factors, such as loss of anchorage (assessed in terms of proportions of gap closure by posterior protraction or anterior retraction), frequencies of incomplete space closure, and potential improvement in the sagittal canine relationship. Twenty-seven subjects (15 male/12 female) with a total of 36 sites treated with a lingual multi-bracket appliance were available for retrospective evaluation of the effects of anchorage reinforcement achieved with either a Herbst appliance (n(subjects) = 15; 7 both-sided/8 single-sided Herbst appliances; n(sites) = 22) or TADs (n(subjects) = 12; 2 both-sided; 10 single-sided; n(sites) = 14). Descriptive analysis was based on measurements using intra-oral photographs which were individually scaled to corresponding plaster casts and taken on insertion of anchorage mechanics (T1), following removal of anchorage mechanics (T2), and at the end of multi-bracket treatment (T3). The null hypothesis was rejected: the rate of mean molar protraction was significantly faster in the Herbst-reinforced group (0.51 mm/month) than in the TAD group (0.35 mm/month). While complete space closure by sheer protraction of posterior teeth was achieved in all Herbst-treated cases, space closure in the TAD group was achieved in 76.9% of subjects by sheer protraction of molars, and it was incomplete in 50% of cases (mean gap residue: 1 mm). 
Whilst there was a deterioration in the canine relationship towards Angle-Class II malocclusion in 57.14% of space closure sites in TAD-treated subjects (indicating a loss of anchorage), an improvement in canine occlusion was observed in 90.9% of Herbst-treated cases. Subjects requiring rapid space closure by molar protraction in combination with a correction of distal occlusion may benefit from using Herbst appliances for anterior segment anchorage reinforcement rather than TAD anchorage.
Bioprocessing feasibility analysis [thymic hormone bioassay and electrophoresis]
NASA Technical Reports Server (NTRS)
1978-01-01
The biology and pathophysiology of the thymus gland are discussed and a clinical procedure for thymic hormone assay is described. The separation of null lymphocytes from mouse spleens and the functional characteristics of the cells after storage and transportation were investigated to develop a clinical procedure for thymic hormone assay, and to determine whether a ground-based approach would provide the desired end-product in sufficient quantities, or whether the microgravity of space should be exploited for more economical preparation of the hormone.
A Geometric Framework for the Kinematics of Crystals With Defects
2006-02-01
which parallel transport preserves dot products of vectors, i.e. ∇G = 0. It is called the Levi-Civita connection [57] or the Riemannian connection...yielding a null covariant derivative of the metric tensor is called a metric connection. The Levi-Civita connection of (8) is metric. Note that in...tensor formed by inserting the Levi-Civita connection (8) into (10). A geometric space B0 with metric G having R_G = 0 is called flat. One may show
Loop-corrected Virasoro symmetry of 4D quantum gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, T.; Kapec, D.; Raclariu, A.
Recently a boundary energy-momentum tensor T_zz has been constructed from the soft graviton operator for any 4D quantum theory of gravity in asymptotically flat space. Up to an “anomaly” which is one-loop exact, T_zz generates a Virasoro action on the 2D celestial sphere at null infinity. Here we show by explicit construction that the effects of the IR divergent part of the anomaly can be eliminated by a one-loop renormalization that shifts T_zz.
ERIC Educational Resources Information Center
McCabe, Declan J.; Knight, Evelyn J.
2016-01-01
Since being introduced by Connor and Simberloff in response to Diamond's assembly rules, null model analysis has been a controversial tool in community ecology. Despite being commonly used in the primary literature, null model analysis has not featured prominently in general textbooks. Complexity of approaches along with difficulty in interpreting…
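The flavor of a null model analysis can be shown with a small permutation test: compute a co-occurrence statistic for an observed presence/absence matrix, then compare it with the same statistic for randomized communities. The toy matrix and the fixed-row-sums randomization below are illustrative choices, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def c_score(m):
    """Mean checkerboard score over species pairs for a presence/absence
    matrix (rows = species, columns = sites); higher = more segregation."""
    n = m.shape[0]
    scores = []
    for i in range(n):
        for j in range(i + 1, n):
            shared = int(np.sum(m[i] & m[j]))  # sites occupied by both
            scores.append((m[i].sum() - shared) * (m[j].sum() - shared))
    return float(np.mean(scores))

def null_matrix(m):
    """Fixed-row-sums null model: each species keeps its frequency but
    occupies a random set of sites."""
    out = np.zeros_like(m)
    for i, row in enumerate(m):
        sites = rng.choice(m.shape[1], size=int(row.sum()), replace=False)
        out[i, sites] = 1
    return out

# Toy community: 4 species on 6 sites
obs = np.array([[1, 1, 1, 0, 0, 0],
                [0, 0, 0, 1, 1, 1],
                [1, 0, 1, 0, 1, 0],
                [0, 1, 0, 1, 0, 1]])
null_scores = [c_score(null_matrix(obs)) for _ in range(1000)]
p = float(np.mean([s >= c_score(obs) for s in null_scores]))
print(f"observed C-score = {c_score(obs):.2f}, null p = {p:.3f}")
```

A small p-value would indicate that species co-occur less often than the null model predicts, which is the kind of pattern Diamond's assembly rules were invoked to explain; the controversy concerns how the randomization scheme itself shapes the conclusion.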
Alignment of optical system components using an ADM beam through a null assembly
NASA Technical Reports Server (NTRS)
Hayden, Joseph E. (Inventor); Olczak, Eugene G. (Inventor)
2010-01-01
A system for testing an optical surface includes a rangefinder configured to emit a light beam and a null assembly located between the rangefinder and the optical surface. The null assembly is configured to receive and to reflect the emitted light beam toward the optical surface. The light beam reflected from the null assembly is further reflected back from the optical surface toward the null assembly as a return light beam. The rangefinder is configured to measure a distance to the optical surface using the return light beam.
Interpreting null results from measurements with uncertain correlations: an info-gap approach.
Ben-Haim, Yakov
2011-01-01
Null events—not detecting a pernicious agent—are the basis for declaring the agent is absent. Repeated nulls strengthen confidence in the declaration. However, correlations between observations are difficult to assess in many situations and introduce uncertainty in interpreting repeated nulls. We quantify uncertain correlations using an info-gap model, which is an unbounded family of nested sets of possible probabilities. An info-gap model is nonprobabilistic and entails no assumption about a worst case. We then evaluate the robustness, to uncertain correlations, of estimates of the probability of a null event. This is then the basis for evaluating a nonprobabilistic robustness-based confidence interval for the probability of a null. © 2010 Society for Risk Analysis.
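The basic info-gap calculation can be sketched generically: pick an uncertainty model around the nominal probability estimate and ask how large the uncertainty horizon can grow before a performance requirement fails. The fractional-error uncertainty model below is a simple illustration, not the correlation model developed in the article:

```python
def robustness(p_nominal, p_critical):
    """Info-gap robustness for the fractional-error uncertainty model
    U(h) = {p : |p - p_nominal| <= h * p_nominal}: the largest horizon h
    at which the worst-case probability still satisfies p <= p_critical.
    Generic illustration only, not the article's correlation model."""
    if p_critical < p_nominal:
        return 0.0  # requirement fails even at the nominal estimate
    return p_critical / p_nominal - 1.0

# Nominal estimate of a false-null probability vs. a tolerance:
print(robustness(1e-4, 5e-4))
```

Larger robustness means the conclusion drawn from the repeated nulls tolerates more estimation error; a robustness of zero means the requirement fails even at the nominal estimate.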
Driver ASIC Environmental Testing and Performance Optimization for SpaceBased Active Mirrors
NASA Astrophysics Data System (ADS)
Mejia Prada, Camilo
Direct imaging of Earth-like planets requires techniques for light suppression, such as coronagraphs or nulling interferometers, in which deformable mirrors (DMs) are a principal component. On ground-based systems, DMs are used to correct for turbulence in the Earth's atmosphere in addition to static aberrations in the optics. For space-based observations, DMs are used to correct for static and quasi-static aberrations in the optical train. State-of-the-art, high-actuator-count deformable mirrors suffer from heavy, bulky external electronics in which electrical connections are made through thousands of wires. We are instead developing Application Specific Integrated Circuits (ASICs) capable of direct integration with the DM in a single small package. This integrated ASIC-DM is ideal for space missions, where it offers significant reductions in mass, power, and complexity, and performance compatible with high-contrast observations of exoplanets. We have successfully prototyped and tested a 32x32-format Switch-Mode (SM) ASIC which consumes only 2 mW of static power (total, not per actuator). A number of constraints were imposed on key parameters of this ASIC design, including sub-picoamp levels of leakage across turned-off switches and from switch to substrate, control resolution of 0.04 mV, satisfactory rise/fall times, and near-zero on-chip crosstalk over a useful range of operating temperatures. This driver ASIC technology is currently at TRL 4. This Supporting Technology proposal will further develop the ASIC technology to TRL 5 by carrying out environmental tests and further optimizing performance, with the end goal of making ASICs suitable for space-based deployment. The effort will be led by JPL, which has considerable expertise with DMs used in high-contrast imaging systems for exoplanet missions and in adaptive optics systems, and in the design of DM driver electronics. Microscale, which developed the prototype of the ASIC-DM, will continue its development.
We propose a three-part program to advance the device maturity. The effort will cover (1) radiation hardness, (2) thermal-vacuum environment tests, and (3) parameter performance optimization. We expect to implement the results in an optimized ASIC design for NASA's space applications, expanding the current state of the art into radiation-hardened electronics robust enough for a space environment. This effort will fill technology gaps listed in the Exoplanet Exploration Program Technology Plan 2017: "The challenge is believed to not be the mosaicking of 48×48 devices or 32×32 devices (to reach 128×128) but rather dealing with the enormous number of interconnects and their electronics." After the close of this effort, continued ASIC development is planned, leading to further improvement in parameters.
Abu-Amero, Khaled K; Al-Boudari, Olayan M; Mohamed, Gamal H; Dzimiri, Nduna
2006-01-01
Background The association of the deletion in GSTT1 and GSTM1 genes with coronary artery disease (CAD) among smokers is controversial. In addition, no such investigation has previously been conducted among Arabs. Methods We genotyped 1054 CAD patients and 762 controls for GSTT1 and GSTM1 deletion by multiplex polymerase chain reaction. Both CAD patients and controls were Saudi Arabs. Results In the control group (n = 762), 82.3% had the Twild Mwild genotype, 9% had the Twild Mnull, 2.4% had the Tnull Mwild and 6.3% had the Tnull Mnull genotype. Among the CAD group (n = 1054), 29.5% had the Twild Mwild genotype, 26.6% (p < .001) had the Twild Mnull, 8.3% (p < .001) had the Tnull Mwild and 35.6% (p < .001) had the Tnull Mnull genotype, indicating a significant association of the Twild Mnull, Tnull Mwild and Tnull Mnull genotypes with CAD. Univariate analysis also showed that smoking, age, hypercholesterolemia and hypertriglyceridemia, diabetes mellitus, family history of CAD, hypertension and obesity are all associated with CAD, whereas gender and myocardial infarction are not. Binary logistic regression for smoking and genotypes indicated that only the Mnull and Tnull genotypes interact with smoking. However, further subgroup analysis stratifying the data by smoking status suggested that genotype-smoking interactions have no effect on the development of CAD. Conclusion GSTT1 and GSTM1 null genotypes are risk factors for CAD independent of genotype-smoking interaction. PMID:16620396
Ghosh, Soma; Sur, Surojit; Yerram, Sashidhar R.; Rago, Carlo; Bhunia, Anil K.; Hossain, M. Zulfiquer; Paun, Bogdan C.; Ren, Yunzhao R.; Iacobuzio-Donahue, Christine A.; Azad, Nilofer A.; Kern, Scott E.
2014-01-01
Large-magnitude numerical distinctions (>10-fold) among drug responses of genetically contrasting cancers were crucial for guiding the development of some targeted therapies. Similar strategies brought epidemiological clues and prevention goals for genetic diseases. Such numerical guides, however, were incomplete or low magnitude for Fanconi anemia pathway (FANC) gene mutations relevant to cancer in FANC-mutation carriers (heterozygotes). We generated a four-gene FANC-null cancer panel, including the engineering of new PALB2/FANCN-null cancer cells by homologous recombination. A characteristic matching of FANCC-null, FANCG-null, BRCA2/FANCD1-null, and PALB2/FANCN-null phenotypes was confirmed by uniform tumor regression on single-dose cross-linker therapy in mice and by shared chemical hypersensitivities to various inter-strand cross-linking agents and γ-radiation in vitro. Some compounds, however, had contrasting magnitudes of sensitivity; a strikingly high (19- to 22-fold) hypersensitivity was seen among PALB2-null and BRCA2-null cells for the ethanol metabolite, acetaldehyde, associated with widespread chromosomal breakage at a concentration not producing breaks in parental cells. Because FANC-defective cancer cells can share or differ in their chemical sensitivities, patterns of selective hypersensitivity hold implications for the evolutionary understanding of this pathway. Clinical decisions for cancer-relevant prevention and management of FANC-mutation carriers could be modified by expanded studies of high-magnitude sensitivities. PMID:24200853
NASA Technical Reports Server (NTRS)
Goepfert, T. M.; McCarthy, M.; Kittrell, F. S.; Stephens, C.; Ullrich, R. L.; Brinkley, B. R.; Medina, D.
2000-01-01
Mammary epithelial cells from p53 null mice have been shown recently to exhibit an increased risk for tumor development. Hormonal stimulation markedly increased tumor development in p53 null mammary cells. Here we demonstrate that mammary tumors arising in p53 null mammary cells are highly aneuploid, with greater than 70% of the tumor cells containing altered chromosome number and a mean chromosome number of 56. Normal mammary cells of p53 null genotype and aged less than 14 wk do not exhibit aneuploidy in primary cell culture. Significantly, the hormone progesterone, but not estrogen, increases the incidence of aneuploidy in morphologically normal p53 null mammary epithelial cells. Such cells exhibited 40% aneuploidy and a mean chromosome number of 54. The increase in aneuploidy measured in p53 null tumor cells or hormonally stimulated normal p53 null cells was not accompanied by centrosome amplification. These results suggest that normal levels of progesterone can facilitate chromosomal instability in the absence of the tumor suppressor gene, p53. The results support the emerging hypothesis based both on human epidemiological and animal model studies that progesterone markedly enhances mammary tumorigenesis.
Memon, Mushtaq A.; Anway, Matthew D.; Covert, Trevor R.; Uzumcu, Mehmet; Skinner, Michael K.
2008-01-01
The roles of the transforming growth factor beta (TGFb) isoforms TGFb1, TGFb2 and TGFb3 in the regulation of embryonic gonadal development were investigated with the use of null-mutant (i.e., knockout) mice for each of the TGFb isoforms. Late embryonic gonadal development was investigated because homozygote TGFb null-mutant mice generally die around birth, with some embryonic loss as well. In the testis, the TGFb1 null-mutant mice had a decrease in the number of germ cells at birth, postnatal day 0 (P0), and the TGFb2 null-mutant mice had a decrease in the number of seminiferous cords at embryonic day 15 (E15). In the ovary, the TGFb2 null-mutant mice had an increase in the number of germ cells at P0. TGFb isoforms appear to have a role in gonadal development, but interactions between the isoforms are speculated to compensate in the different TGFb isoform null-mutant mice. PMID:18790002
Super-BMS3 algebras from {N}=2 flat supergravities
NASA Astrophysics Data System (ADS)
Lodato, Ivano; Merbis, Wout
2016-11-01
We consider two possible flat-space limits of three-dimensional {N}=(1, 1) AdS supergravity. They differ in how the supercharges are scaled with the AdS radius ℓ: the first (democratic) limit leads to the usual super-Poincaré theory, while a novel `twisted' theory of supergravity stems from the second (despotic) limit. We then propose boundary conditions such that the asymptotic symmetry algebras at null infinity correspond to supersymmetric extensions of the BMS algebras previously derived in connection with non- and ultra-relativistic limits of the {N}=(1, 1) Virasoro algebra in two dimensions. Finally, we study the supersymmetric energy bounds and find the explicit form of the asymptotic and global Killing spinors of supersymmetric solutions in both flat-space supergravity theories.
Misconceptions in recent papers on special relativity and absolute space theories
NASA Technical Reports Server (NTRS)
Torr, D. G.; Kolen, P.
1982-01-01
Several recent papers which purport to substantiate or negate arguments in favor of certain theories of absolute space have been based on fallacious principles. This paper discusses three related instances, indicating where misconceptions have arisen. It is established, contrary to popular belief, that the classical Lorentz ether theory accounts for all the experimental evidence which supports the special theory of relativity. It is demonstrated that the ether theory predicts the null results obtained from pulsar timing and Moessbauer experiments. It is concluded that a measurement of the one-way velocity of light has physical meaning within the context of the Lorentz theory, and it is argued that an adequately designed experiment to measure the one-way velocity of light should be attempted.
Querying databases of trajectories of differential equations 2: Index functions
NASA Technical Reports Server (NTRS)
Grossman, Robert
1991-01-01
Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in ℝ^N are stored in a database. Let η ⊂ ℝ^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. Data structures and indices for trajectories are defined, and algorithms are given to answer queries of the following forms. Query 1: Given a path η, determine whether η occurs as a subtrajectory of any trajectory γ in the database; if so, return the trajectory, otherwise return null. Query 2: Given a path η, return the trajectory γ from the database which minimizes the norm ‖η − γ‖.
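The two query types above can be sketched as follows. This is a hypothetical layout in which trajectories are stored as NumPy arrays of sampled points and the norm is the Euclidean norm on those samples; the paper's actual index structures are not reproduced here.

```python
import numpy as np

def find_subtrajectory(eta, database, tol=1e-6):
    """Query 1: return the first trajectory that contains eta as a
    subtrajectory (within tol under the Euclidean norm), else None."""
    m = len(eta)
    for gamma in database:
        # Slide eta along gamma and compare under the norm ||eta - gamma||.
        for start in range(len(gamma) - m + 1):
            if np.linalg.norm(eta - gamma[start:start + m]) <= tol:
                return gamma
    return None  # the "null" result

def closest_trajectory(eta, database):
    """Query 2: return the trajectory minimizing ||eta - gamma||
    (each trajectory is assumed at least as long as eta and is
    truncated to eta's length for the comparison)."""
    m = len(eta)
    return min(database, key=lambda g: np.linalg.norm(eta - g[:m]))
```

This brute-force scan is what an index function would accelerate; the point of the paper is to avoid comparing η against every window of every γ.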
Exoplanet Community Report on Direct Infrared Imaging of Exoplanets
NASA Technical Reports Server (NTRS)
Danchi, William C.; Lawson, Peter R.
2009-01-01
Direct infrared imaging and spectroscopy of exoplanets will allow for detailed characterization of the atmospheric constituents of more than 200 nearby Earth-like planets, more than is possible with any other method under consideration. A flagship mission based on larger passively cooled infrared telescopes and formation-flying technologies would have the highest angular resolution of any concept under consideration. The 2008 Exoplanet Forum committee on Direct Infrared Imaging of Exoplanets recommends: (1) a vigorous technology program including component development, integrated testbeds, and end-to-end modeling in the areas of formation flying and mid-infrared nulling; (2) a probe-scale mission based on a passively cooled, structurally connected interferometer to be started within the next two to five years, for exoplanetary system characterization that is not accessible from the ground, and which would provide transformative science and lay the engineering groundwork for the flagship mission with formation-flying elements. Such a mission would enable a complete exozodiacal dust survey (<1 solar system zodi) in the habitable zone of all nearby stars. This information will allow for a more efficient strategy of spectral characterization of Earth-sized planets for the flagship missions, and also will allow for optimization of the search strategy of an astrometric mission if such a mission were delayed for cost or technology reasons. (3) Both the flagship and probe missions should be pursued with international partners if possible. Fruitful collaboration with international partners on mission concepts and relevant technology should be continued. (4) Research and Analysis (R&A) should be supported for the development of preliminary science and mission designs. Ongoing efforts to characterize the typical level of exozodiacal light around Sun-like stars with ground-based nulling technology should be continued.
NASA Astrophysics Data System (ADS)
Wells, Conrad; Hadaway, James B.; Olczak, Gene; Cosentino, Joseph; Johnston, John D.; Whitman, Tony; Connolly, Mark; Chaney, David; Knight, J. Scott; Telfer, Randal
2016-07-01
The James Webb Space Telescope (JWST) Optical Telescope Element (OTE) consists of a 6.6 m clear aperture, 18-segment primary mirror in an all-reflective, three-mirror anastigmat operating at cryogenic temperatures. To verify performance of the primary mirror, a full-aperture center of curvature optical null test is performed under cryogenic conditions in Chamber A at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) using an instantaneous phase measuring interferometer. After phasing the mirrors during the JWST Pathfinder testing, the interferometer is utilized to characterize the mirror relative piston and tilt dynamics under different facility configurations. The correlation between the motions seen on detectors at the focal plane and the interferometer validates the use of the interferometer for dynamic investigations. The success of planned test hardware improvements will be characterized by the multi-wavelength interferometer (MWIF) at the Center of Curvature Optical Assembly (CoCOA).
Magnetic resonance separation imaging using a divided inversion recovery technique (DIRT).
Goldfarb, James W
2010-04-01
The divided inversion recovery technique is an MRI separation method based on tissue T1 relaxation differences. When tissue T1 relaxation times are longer than the time between inversion pulses in a segmented inversion recovery pulse sequence, longitudinal magnetization does not pass through the null point. Prior to additional inversion pulses, longitudinal magnetization may have an opposite polarity. Spatial displacement of tissues in inversion recovery balanced steady-state free-precession imaging has been shown to be due to this magnetization phase change resulting from incomplete magnetization recovery. In this paper, it is shown how this phase change can be used to provide image separation. A pulse sequence parameter, the time between inversion pulses (T180), can be adjusted to provide water-fat or fluid separation. Example water-fat and fluid separation images of the head, heart, and abdomen are presented. The water-fat separation performance was investigated by comparing image intensities in short-axis divided inversion recovery technique images of the heart. Fat, blood, and fluid signal was suppressed to the background noise level. Additionally, the separation performance was not affected by main magnetic field inhomogeneities.
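The nulling condition underlying this technique follows from the standard inversion-recovery signal equation. A simplified sketch, assuming ideal 180° pulses and fully relaxed magnetization before the first inversion (the sequence itself relies on incomplete recovery, so this only illustrates the polarity argument; the T1 values below are illustrative, not the paper's):

```python
import math

def mz_after_inversion(t, T1, M0=1.0):
    """Longitudinal magnetization t ms after an ideal 180-degree
    inversion of fully relaxed magnetization M0."""
    return M0 * (1.0 - 2.0 * math.exp(-t / T1))

def null_time(T1):
    """Mz crosses zero at t = T1 * ln(2)."""
    return T1 * math.log(2.0)

def polarity_at(T180, T1):
    """Sign of Mz when the next inversion arrives after T180 ms.
    Tissues with T1 > T180 / ln(2) have not yet passed the null
    point and carry the opposite polarity -- the phase difference
    DIRT exploits for separation."""
    return 1 if mz_after_inversion(T180, T1) >= 0 else -1
```

With, say, T180 = 300 ms, a short-T1 tissue (T1 ≈ 260 ms) has recovered past its null while a long-T1 fluid (T1 ≈ 1200 ms) has not, so the two acquire opposite signs.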
Magnetoacoustic Waves in a Stratified Atmosphere with a Magnetic Null Point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarr, Lucas A.; Linton, Mark; Leake, James, E-mail: lucas.tarr.ctr@nrl.navy.mil
2017-03-01
We perform nonlinear MHD simulations to study the propagation of magnetoacoustic waves from the photosphere to the low corona. We focus on a 2D system with a gravitationally stratified atmosphere and three photospheric concentrations of magnetic flux that produce a magnetic null point with a magnetic dome topology. We find that a single wavepacket introduced at the lower boundary splits into multiple secondary wavepackets. A portion of the packet refracts toward the null owing to the varying Alfvén speed. Waves incident on the equipartition contour surrounding the null, where the sound and Alfvén speeds coincide, partially transmit, reflect, and mode-convert between branches of the local dispersion relation. Approximately 15.5% of the wavepacket’s initial energy (E_input) converges on the null, mostly as a fast magnetoacoustic wave. Conversion is very efficient: 70% of the energy incident on the null is converted to slow modes propagating away from the null, 7% leaves as a fast wave, and the remaining 23% (0.036 E_input) is locally dissipated. The acoustic energy leaving the null is strongly concentrated along field lines near each of the null’s four separatrices. The portion of the wavepacket that refracts toward the null, and the amount of current accumulation, depends on the vertical and horizontal wavenumbers and the centroid position of the wavepacket as it crosses the photosphere. Regions that refract toward or away from the null do not simply coincide with regions of open versus closed magnetic field or regions of particular field orientation. We also model wavepacket propagation using a WKB method and find that it agrees qualitatively, though not quantitatively, with the results of the numerical simulation.
An improved null model for assessing the net effects of multiple stressors on communities.
Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D
2018-01-01
Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. 
Our findings suggest that current estimates of the prevalence of ecological surprises in communities based on community property null models are unreliable, and should be improved by aggregating the responses of individual species to the community level, as our compositional null model does. © 2017 John Wiley & Sons Ltd.
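The distinction between the two null models can be illustrated with a toy two-species community (hypothetical numbers; the compositional model here simply sums per-species additive effects and floors abundances at zero before aggregating):

```python
import numpy as np

def compositional_null(control, effect_a, effect_b):
    """Compositional null model: combine the stressors' per-species
    effects additively, floor abundances at zero, then aggregate."""
    predicted = np.clip(control + effect_a + effect_b, 0.0, None)
    return predicted.sum(), int((predicted > 0).sum())  # (biomass, richness)

def community_property_null(control, with_a, with_b):
    """Standard additive null model applied directly to total biomass."""
    base = control.sum()
    return base + (with_a.sum() - base) + (with_b.sum() - base)

# Hypothetical two-species community; both stressors eliminate species 1.
control = np.array([10.0, 10.0])
with_a  = np.array([ 0.0, 10.0])   # biomass under stressor A alone
with_b  = np.array([ 0.0, 10.0])   # biomass under stressor B alone
effect_a, effect_b = with_a - control, with_b - control

# The aggregate-level model predicts total biomass 0, because it double
# counts the loss of species 1; the compositional model predicts 10
# (species 2 is untouched and species 1 cannot fall below zero), which
# is the sensible additive baseline for classifying the observed effect.
```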
High frequency generation in the corona: Resonant cavities
NASA Astrophysics Data System (ADS)
Santamaria, I. C.; Van Doorsselaere, T.
2018-03-01
Aims: Null points are prominent magnetic field singularities at which the magnetic field strength decreases strongly over very small spatial scales. Around null points, predicted to be ubiquitous in the solar chromosphere and corona, the wave behavior changes considerably. Null points are also responsible for driving very energetic phenomena and for contributing to chromospheric and coronal heating. In previous works we demonstrated that slow magneto-acoustic shock waves generated in the chromosphere propagate through the null point, thereby producing a train of secondary shocks escaping along the field lines. A particular combination of the shock wave speeds generates waves at a frequency of 80 mHz. The present work aims to investigate this high-frequency region around a coronal null point to give a plausible explanation for wave generation at that particular frequency. Methods: We carried out a set of two-dimensional numerical simulations of wave propagation in the neighborhood of a null point located in the corona. We varied both the amplitude of the driver and the atmospheric properties to investigate the sensitivity of the high-frequency waves to these parameters. Results: We demonstrate that the wave frequency is sensitive to the atmospheric parameters in the corona, but is independent of the strength of the driver. Thus, the null point behaves as a resonant cavity, generating waves at specific frequencies that depend on the background equilibrium model. Moreover, we conclude that the high-frequency wave train generated at the null point is not necessarily a result of the interaction between the null point and a shock wave: it can also develop from the interaction between the null point and fast acoustic-like magneto-acoustic waves, that is, from interactions within the linear regime.
Kager, Leo; Bruce, Lesley J; Zeitlhofer, Petra; Flatt, Joanna F; Maia, Tabita M; Ribeiro, M Leticia; Fahrner, Bernhard; Fritsch, Gerhard; Boztug, Kaan; Haas, Oskar A
2017-03-01
We describe the second patient with anion exchanger 1/band 3 null phenotype (band 3 null VIENNA), which was caused by a novel nonsense mutation c.1430C>A (p.Ser477X) in exon 12 of SLC4A1. We also give an update on the previous band 3 null COIMBRA patient, thereby elucidating the physiological implications of total loss of AE1/band 3. Besides transfusion-dependent severe hemolytic anemia and complete distal renal tubular acidosis, dyserythropoiesis was identified in the band 3 null VIENNA patient, suggesting a role for band 3 in erythropoiesis. Moreover, we also report, for the first time, that long-term survival is possible in band 3 null patients. © 2016 Wiley Periodicals, Inc.
On the Penrose inequality along null hypersurfaces
NASA Astrophysics Data System (ADS)
Mars, Marc; Soria, Alberto
2016-06-01
The null Penrose inequality, i.e. the Penrose inequality in terms of the Bondi energy, is studied by introducing a functional on surfaces and studying its properties along a null hypersurface Ω extending to past null infinity. We prove a general Penrose-type inequality which involves the limit at infinity of the Hawking energy along a specific class of geodesic foliations called Geodesic Asymptotically Bondi (GAB), which are shown to always exist. Whenever this foliation approaches large spheres, this inequality becomes the null Penrose inequality and we recover the results of Ludvigsen-Vickers (1983 J. Phys. A: Math. Gen. 16 3349-53) and Bergqvist (1997 Class. Quantum Grav. 14 2577-83). By exploiting further properties of the functional along general geodesic foliations, we introduce an approach to the null Penrose inequality called the Renormalized Area Method and find a set of two conditions which imply the validity of the null Penrose inequality. One of the conditions involves a limit at infinity and the other a restriction on the spacetime curvature along the flow. We investigate their range of applicability in two particular but interesting cases, namely the shear-free and vacuum case, where the null Penrose inequality is known to hold from the results by Sauter (2008 PhD thesis, ETH Zürich), and the case of null shells propagating in the Minkowski spacetime. Finally, a general inequality bounding the area of the quasi-local black hole in terms of an asymptotic quantity intrinsic of Ω is derived.
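For orientation, the inequality being studied bounds the Bondi energy from below by the area of a cross-section S of the black hole horizon. This is the standard form of the statement, not the paper's refined versions involving the Hawking energy along GAB foliations:

```latex
E_{\mathrm{B}} \;\geq\; \sqrt{\frac{|S|}{16\pi}}
```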
Rane, Swati; Talati, Pratik; Donahue, Manus J; Heckers, Stephan
2016-06-01
Inflow-vascular space occupancy (iVASO) measures arterial cerebral blood volume (aCBV) using accurate blood water nulling (inversion time [TI]) when arterial blood reaches the capillary, i.e., at the arterial arrival time. This work assessed the reproducibility of iVASO measurements in the hippocampus and cortex at multiple TIs. The iVASO approach was implemented at multiple TIs in 10 healthy volunteers at 3 Tesla. aCBV values were measured at each TI in the left and right hippocampus, and the cortex. Reproducibility of aCBV measurements within scans (same day) and across sessions (different days) was assessed using the intraclass correlation coefficient (ICC). Overall hippocampal aCBV was significantly higher than cortical aCBV, likely due to higher gray matter volume. Hippocampal ICC values were high at short TIs (≤914 ms; intrascan values = 0.80-0.96, interscan values = 0.61-0.91). Cortically, high ICC values were observed at intermediate TIs of 914 (intra: 0.93, inter: 0.87) and 1034 ms (intra: 0.96, inter: 0.86). The ICC values were comparable to established contrast-based CBV measures. iVASO measurements are reproducible within and across sessions. TIs for iVASO measurements should be chosen carefully, taking into account heterogeneous arterial arrival times in different brain regions. Magn Reson Med 75:2379-2387, 2016. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haraldsdóttir, Hulda S.; Fleming, Ronan M. T.
Conserved moieties are groups of atoms that remain intact in all reactions of a metabolic network. Identification of conserved moieties gives insight into the structure and function of metabolic networks and facilitates metabolic modelling. All moiety conservation relations can be represented as nonnegative integer vectors in the left null space of the stoichiometric matrix corresponding to a biochemical network. Algorithms exist to compute such vectors based only on reaction stoichiometry but their computational complexity has limited their application to relatively small metabolic networks. Moreover, the vectors returned by existing algorithms do not, in general, represent conservation of a specific moiety with a defined atomic structure. Here, we show that identification of conserved moieties requires data on reaction atom mappings in addition to stoichiometry. We present a novel method to identify conserved moieties in metabolic networks by graph theoretical analysis of their underlying atom transition networks. Our method returns the exact group of atoms belonging to each conserved moiety as well as the corresponding vector in the left null space of the stoichiometric matrix. It can be implemented as a pipeline of polynomial time algorithms. Our implementation completes in under five minutes on a metabolic network with more than 4,000 mass balanced reactions. The scalability of the method enables extension of existing applications for moiety conservation relations to genome-scale metabolic networks. Finally, we also give examples of new applications made possible by elucidating the atomic structure of conserved moieties.
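The left-null-space computation at the core of this framing can be sketched for a hypothetical two-reaction linear pathway. Note that in general a rational null-space basis is not directly a set of nonnegative integer moiety vectors; extracting those is exactly the hard step the authors address with atom mappings. In this toy case the basis vector happens to be nonnegative already:

```python
from sympy import Matrix

# Hypothetical two-reaction pathway A -> B -> C.
# Rows = metabolites (A, B, C); columns = reactions (A->B, B->C).
S = Matrix([[-1,  0],
            [ 1, -1],
            [ 0,  1]])

# The left null space of S is the (right) null space of S^T; a vector l
# with l^T S = 0 encodes a quantity conserved by every reaction.
l = (S.T).nullspace()[0]
l = l / min(x for x in l if x != 0)   # smallest integer representative

# Here l = [1, 1, 1]^T: a single moiety (the whole molecule) is passed
# intact from A to B to C, so total A + B + C is conserved.
```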
General relativistic radiative transfer code in rotating black hole space-time: ARTIST
NASA Astrophysics Data System (ADS)
Takahashi, Rohta; Umemura, Masayuki
2017-02-01
We present a general relativistic radiative transfer code, ARTIST (Authentic Radiative Transfer In Space-Time), a perfectly causal scheme for pursuing the propagation of radiation with absorption and scattering around a Kerr black hole. The code explicitly solves the invariant radiation intensity along null geodesics in the Kerr-Schild coordinates, and therefore properly includes light bending, Doppler boosting, frame dragging, and gravitational redshifts. A notable aspect of ARTIST is that it conserves the radiative energy with high accuracy and is not subject to numerical diffusion, since the transfer is solved on long characteristics along null geodesics. We first solve the wavefront propagation around a Kerr black hole that was originally explored by Hanni. This demonstrates repeated wavefront collisions, light bending, and causal propagation of radiation at the speed of light. We show that the decay rate of the total energy of wavefronts near a black hole is determined solely by the black hole spin in late phases, in agreement with analytic expectations. As a result, ARTIST turns out to correctly solve the general relativistic radiation fields until late phases, as t ≈ 90M. We also explore the effects of absorption and scattering, and apply the code to a photon wall problem and an orbiting hotspot problem. All the simulations in this study are performed in the equatorial plane around a Kerr black hole. ARTIST is the first step toward realizing general relativistic radiation hydrodynamics.
Haraldsdóttir, Hulda S.; Fleming, Ronan M. T.
2016-01-01
Conserved moieties are groups of atoms that remain intact in all reactions of a metabolic network. Identification of conserved moieties gives insight into the structure and function of metabolic networks and facilitates metabolic modelling. All moiety conservation relations can be represented as nonnegative integer vectors in the left null space of the stoichiometric matrix corresponding to a biochemical network. Algorithms exist to compute such vectors based only on reaction stoichiometry but their computational complexity has limited their application to relatively small metabolic networks. Moreover, the vectors returned by existing algorithms do not, in general, represent conservation of a specific moiety with a defined atomic structure. Here, we show that identification of conserved moieties requires data on reaction atom mappings in addition to stoichiometry. We present a novel method to identify conserved moieties in metabolic networks by graph theoretical analysis of their underlying atom transition networks. Our method returns the exact group of atoms belonging to each conserved moiety as well as the corresponding vector in the left null space of the stoichiometric matrix. It can be implemented as a pipeline of polynomial time algorithms. Our implementation completes in under five minutes on a metabolic network with more than 4,000 mass balanced reactions. The scalability of the method enables extension of existing applications for moiety conservation relations to genome-scale metabolic networks. We also give examples of new applications made possible by elucidating the atomic structure of conserved moieties. PMID:27870845
Nichols, Matthew; Elustondo, Pia A; Warford, Jordan; Thirumaran, Aruloli; Pavlov, Evgeny V; Robertson, George S
2017-08-01
The effects of global mitochondrial calcium (Ca2+) uniporter (MCU) deficiency on hypoxic-ischemic (HI) brain injury, neuronal Ca2+ handling, bioenergetics and hypoxic preconditioning (HPC) were examined. Forebrain mitochondria isolated from global MCU nulls displayed markedly reduced Ca2+ uptake and Ca2+-induced opening of the membrane permeability transition pore. Despite evidence that these effects should be neuroprotective, global MCU nulls and wild-type (WT) mice suffered comparable HI brain damage. Energetic stress enhanced glycolysis and depressed Complex I activity in global MCU null, relative to WT, cortical neurons. HI reduced forebrain NADH levels more in global MCU nulls than in WT mice, suggesting that increased glycolytic consumption of NADH suppressed Complex I activity. Compared to WT neurons, pyruvate dehydrogenase (PDH) was hyper-phosphorylated in MCU nulls at several sites that lower the supply of substrates for the tricarboxylic acid cycle. Elevation of cytosolic Ca2+ with glutamate or ionomycin decreased PDH phosphorylation in MCU null neurons, suggesting the use of alternative mitochondrial Ca2+ transport. Under basal conditions, global MCU nulls showed similar increases in Ca2+ handling genes in the hippocampus as WT mice subjected to HPC. We propose that long-term adaptations, common to HPC, in global MCU nulls compromise resistance to HI brain injury and disrupt HPC.
Shocks and currents in stratified atmospheres with a magnetic null point
NASA Astrophysics Data System (ADS)
Tarr, Lucas A.; Linton, Mark
2017-08-01
We use the resistive MHD code LARE (Arber et al. 2001) to inject a compressive MHD wavepacket into a stratified atmosphere that has a single magnetic null point, as recently described in Tarr et al. (2017). The 2.5D simulation represents a slice through a small ephemeral region or area of plage. The strong gradients in field strength and connectivity related to the presence of the null produce substantially different dynamics compared to the more slowly varying fields typically used in simple sunspot models. The wave-null interaction produces a fast mode shock that collapses the null into a current sheet and generates a set of outward propagating (from the null) slow mode shocks confined to field lines near each separatrix. A combination of oscillatory reconnection and shock dissipation ultimately raises the plasma's internal energy at the null and along each separatrix by 25-50% above the background. The resulting pressure gradients must be balanced by Lorentz forces, so that the final state has contact discontinuities along each separatrix and a persistent current at the null. The simulation demonstrates that fast and slow mode waves localize currents to the topologically important locations of the field, just as their Alfvénic counterparts do, and also illustrates the necessity of treating waves and reconnection as coupled phenomena.
Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin
2018-07-01
Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploiting the sparsity of an image in k-space, the singular k-space model (SKM) is proposed in terms of the k-space functions of a singularizing operator. The singularizing operator is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on a physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and the classical zero-filling (ZF) methods regardless of the undersampling rates, the noise levels, and the image structures. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is an effective method for fast MRI reconstruction from undersampled k-space data. Graphical abstract: two real images and their sparsified counterparts produced by the singularizing operator.
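The zero-filling (ZF) baseline in the comparison above is easy to state concretely. The sketch below is a generic illustration with a synthetic random image and a random sampling mask, not the paper's data or its SKM model:

```python
import numpy as np

# Zero-filling (ZF) reconstruction: unmeasured k-space samples are set to
# zero before the inverse Fourier transform.
rng = np.random.default_rng(0)
image = rng.random((64, 64))            # synthetic stand-in for an MR image

kspace = np.fft.fft2(image)
mask = rng.random(image.shape) < 0.3    # retain ~30% of k-space samples
mask[0, 0] = True                       # always keep the DC sample
undersampled = np.where(mask, kspace, 0.0)

recon = np.abs(np.fft.ifft2(undersampled))
err = np.linalg.norm(recon - image) / np.linalg.norm(image)
print(f"relative error of zero-filled reconstruction: {err:.2f}")
```

The SKM method instead estimates the missing samples from the singular k-space model before the inverse transform, which is why it can outperform ZF at the same undersampling rate.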
Making High Accuracy Null Depth Measurements for the LBTI Exozodi Survey
NASA Technical Reports Server (NTRS)
Mennesson, Bertrand; Defrere, Denis; Nowak, Matthias; Hinz, Philip; Millan-Gabet, Rafael; Absil, Oliver; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William C.; Kennedy, Grant M.;
2016-01-01
The characterization of exozodiacal light emission is important both for understanding the evolution of planetary systems and for preparing future space missions that aim to characterize low-mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims to provide a ten-fold improvement over the current state of the art, measuring dust emission levels down to a typical accuracy of 12 zodis per star for a representative ensemble of 30+ high priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the target sample. Reaching a 1 sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e., visibilities, at the few-hundred-ppm uncertainty level. We discuss here the challenges posed by making such high-accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of about 500 ppm. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.
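The correspondence between visibility and null level invoked above follows from the standard two-beam relation between fringe visibility and null depth; the numbers below are illustrative, not LBTI measurements:

```python
# Standard two-beam nuller relation: N = (1 - V) / (1 + V),
# where V is the fringe visibility and N the achievable null depth.
def null_depth(visibility: float) -> float:
    return (1.0 - visibility) / (1.0 + visibility)

# A visibility of 0.999 (a 1e-3 deficit) corresponds to a ~5e-4 null,
# i.e. the few-hundred-ppm regime discussed in the abstract.
print(f"null depth at V = 0.999: {null_depth(0.999):.1e}")
```

A perfect visibility of 1 gives a null depth of zero, so small visibility errors translate almost linearly into null-depth errors near the null.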
Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour
Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad
2013-01-01
A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied to the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on the sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of current excitation weights and uniform interelement spacing. Compared to conventional hyper beamforming of a linear antenna array, real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyper beam of the same array can achieve reduction in sidelobe level (SLL) and the same or smaller first null beam width (FNBW), keeping the same value of the hyperbeam exponent. Further reductions of SLL and FNBW have been achieved by the proposed CAB algorithm. CAB finds a near-global optimal solution, unlike RGA, PSO, and DE in the present problem. The comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB. PMID:23970843
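The sum/difference construction can be sketched for a uniform linear array. The hyper beam below follows the commonly quoted form H = (|S|^u − |D|^u)^(1/u) with hyperbeam exponent u; the array size, spacing, and exponent are illustrative assumptions, and no optimization of excitations or spacing (the paper's actual contribution) is performed:

```python
import numpy as np

# Sum beam S, difference beam D, and hyper beam H = (|S|^u - |D|^u)^(1/u)
# for a 10-element uniform linear array with half-wavelength spacing.
N, d, u = 10, 0.5, 0.5
theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)
psi = 2 * np.pi * d * np.sin(theta)           # inter-element phase shift
n = np.arange(N) - (N - 1) / 2                # symmetric element positions

steer = np.exp(1j * np.outer(psi, n))
S = np.abs(steer.sum(axis=1))                 # sum pattern (uniform weights)
D = np.abs((np.sign(n) * steer).sum(axis=1))  # difference pattern
H = np.abs(S**u - D**u) ** (1 / u)            # hyper beam pattern

def first_null(P):
    """Angle of the first local minimum of P on the theta >= 0 side."""
    p, t = P[theta >= 0], theta[theta >= 0]
    i = np.argmax((p[1:-1] <= p[:-2]) & (p[1:-1] <= p[2:]))
    return t[i + 1]

# The hyper beam narrows the main lobe: its first null falls at roughly
# half the angle of the sum beam's first null for this configuration.
print(f"first null, sum beam:   {first_null(S):.3f} rad")
print(f"first null, hyper beam: {first_null(H):.3f} rad")
```

For the uniform 10-element array the sum beam's first null is at sin(θ) = 1/(N·d) = 0.2, and the u = 0.5 hyper beam moves it to about sin(θ) = 0.1, illustrating the FNBW reduction the abstract refers to.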
NASA Astrophysics Data System (ADS)
Wells, Conrad; Olczak, Gene; Merle, Cormic; Dey, Tom; Waldman, Mark; Whitman, Tony; Wick, Eric; Peer, Aaron
2010-08-01
The James Webb Space Telescope (JWST) Optical Telescope Element (OTE) consists of a 6.6 m clear aperture, all-reflective, three-mirror anastigmat. The 18-segment primary mirror (PM) presents unique and challenging assembly, integration, alignment and testing requirements. A full-aperture center of curvature optical test is performed in cryogenic vacuum conditions at the integrated observatory level to verify PM performance requirements. The Center of Curvature Optical Assembly (CoCOA), designed and built by ITT, satisfies the requirements for this test. The CoCOA contains a multi-wave interferometer, a patented reflective null lens, actuation for alignment, full in situ calibration capability, and coarse and fine alignment sensing systems, as well as a system for monitoring changes in the PM-to-CoCOA distance. Two wavefront calibration tests verify the low and mid/high spatial frequencies, overcoming the limitations of the standard null/hologram configuration in its ability to resolve mid and high spatial frequencies. This paper introduces the systems-level architecture and optical test layout for the CoCOA.
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
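Tests (i) and (iii) can be sketched for the simplest case of a single region with Poisson-distributed counts; all rates and counts below are invented for illustration:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Test (i): compare the observed earthquake count with the number predicted.
def n_test_pvalue(observed: int, predicted_rate: float) -> float:
    """P(N >= observed) if the forecast rate is the true Poisson rate."""
    return 1.0 - sum(poisson_pmf(k, predicted_rate) for k in range(observed))

# Test (iii): log-likelihood ratio of the forecast against a null-rate model.
def log_likelihood_ratio(observed: int, forecast: float, null: float) -> float:
    return (math.log(poisson_pmf(observed, forecast))
            - math.log(poisson_pmf(observed, null)))

p = n_test_pvalue(observed=12, predicted_rate=8.0)
llr = log_likelihood_ratio(observed=12, forecast=8.0, null=5.0)
print(f"P(N >= 12 | rate 8) = {p:.3f}, log-likelihood ratio = {llr:.2f}")
```

A full evaluation in the sense of the abstract would apply this per cell of a time-space-magnitude grid and sum log-likelihoods across cells.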
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umansky, M. V.; Ryutov, D. D.
Reduced MHD equations are used for studying toroidally symmetric plasma dynamics near the divertor null point. Numerical solution of these equations exhibits a plasma vortex localized at the null point with the time-evolution defined by interplay of the curvature drive, magnetic restoring force, and dissipation. Convective motion is easier to achieve for a second-order null (snowflake) divertor than for a regular x-point configuration, and the size of the convection zone in a snowflake configuration grows with plasma pressure at the null point. In conclusion, the trends in simulations are consistent with tokamak experiments which indicate the presence of enhanced transport at the null point.
What predicts the strength of simultaneous color contrast?
Ratnasingam, Sivalogeswaran; Anderson, Barton L.
2017-01-01
The perceived color of a uniform image patch depends not only on the spectral content of the light that reaches the eye but also on its context. One of the most extensively studied forms of context dependence is a simultaneous contrast display: a center-surround display containing a homogeneous target embedded in a homogeneous surround. A number of models have been proposed to account for the chromatic transformations of targets induced by such surrounds, but they were typically derived in the restricted context of experiments using achromatic targets with surrounds that varied along the cardinal axes of color space. There is currently no theoretical consensus that predicts which target color produces the largest perceived color difference for two arbitrarily chosen surround colors, or which surround gives the largest color induction for an arbitrarily chosen target. Here, we present a method for assessing simultaneous contrast that avoids some of the methodological issues that arise with nulling and matching experiments and diminishes the contribution of temporal adaptation. Observers were presented with pairs of center-surround patterns and ordered them from largest to smallest in perceived dissimilarity. We find that the perceived difference for two arbitrarily chosen surrounds is largest when the target falls on the line connecting the two surrounds in color space. We also find that the magnitude of induction is larger for larger differences between chromatic targets and surrounds of the same hue. Our results are consistent with the direction law (Ekroll & Faul, 2012b), and with a generalization of Kirschmann's fourth law, even for viewing conditions that do not favor temporal adaptation. PMID:28245494
Current progress on TPFI nulling architectures at Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Gappinger, Robert O.; Wallace, J. Kent; Bartos, Randall D.; Macdonald, Daniel R.; Brown, Kenneth A.
2005-01-01
Infrared interferometric nulling is a promising technology for exoplanet detection. Nulling research for the Terrestrial Planet Finder Interferometer has been exploring a variety of interferometer architectures at the Jet Propulsion Laboratory (JPL).
Cardiomyocyte-specific desmin rescue of desmin null cardiomyopathy excludes vascular involvement.
Weisleder, Noah; Soumaka, Elisavet; Abbasi, Shahrzad; Taegtmeyer, Heinrich; Capetanaki, Yassemi
2004-01-01
Mice deficient in desmin, the muscle-specific member of the intermediate filament gene family, display defects in all muscle types and particularly in the myocardium. Desmin null hearts develop cardiomyocyte hypertrophy and dilated cardiomyopathy (DCM) characterized by extensive myocyte cell death, calcific fibrosis and multiple ultrastructural defects. Several lines of evidence suggest impaired vascular function in desmin null animals. To determine whether altered capillary function or an intrinsic cardiomyocyte defect is responsible for desmin null DCM, transgenic mice were generated to restore desmin expression specifically to cardiomyocytes. Desmin rescue mice display a wild-type cardiac phenotype with no fibrosis or calcification in the myocardium and normalization of coronary flow. Cardiomyocyte ultrastructure is also restored to normal. Markers of hypertrophy upregulated in desmin null hearts return to wild-type levels in desmin rescue mice. Working hearts were perfused to assess coronary flow and cardiac power. Restoration of a wild-type cardiac phenotype in a desmin null background by expression of desmin specifically within cardiomyocytes indicates that defects in the desmin null heart are due to an intrinsic cardiomyocyte defect rather than compromised coronary circulation.
Generalized probabilistic scale space for image restoration.
Wong, Alexander; Mishra, Akshaya K
2010-10-01
A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.
Specifications for Managed Strings, Second Edition
2010-05-01
const char *cstr, const size_t maxsize, const char *charset); CMU/SEI-2010-TR-018, Runtime-Constraints: s shall not be a null pointer... The strcreate_m function creates a managed string, referenced by s, given a conventional string cstr (which may be null or empty). maxsize specifies the... characters to those in the null-terminated byte string cstr (which may be empty). If charset is a null pointer, no restricted character set is defined.
An omnibus test for the global null hypothesis.
Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja
2018-01-01
Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several ways to test the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g., the Bonferroni or Simes test). Usually, however, there is no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields impressive overall performance. The proposed method is implemented in an R package called omnibus.
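A cumulative-sum construction of this kind can be sketched as follows. This is an illustrative statistic in the spirit of the abstract, not the exact definition from the omnibus R package; the −log transform, the standardization by √k, and the Monte Carlo calibration are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def omnibus_stat(p):
    """Max standardized cumulative sum of -log of the sorted p-values."""
    z = -np.log(np.sort(p))          # smallest p-values contribute first
    return np.max(np.cumsum(z) / np.sqrt(np.arange(1, len(p) + 1)))

# Calibrate the rejection threshold by simulation under the global null
# (all m p-values uniform on [0, 1]).
m, n_sim = 20, 2000
null_stats = np.array([omnibus_stat(rng.uniform(size=m)) for _ in range(n_sim)])
crit = np.quantile(null_stats, 0.95)

# One false null among m hypotheses: a single very small p-value.
p_obs = rng.uniform(size=m)
p_obs[0] = 1e-6
print("reject global null:", omnibus_stat(p_obs) > crit)
```

Because the cumulative sum is examined at every prefix length k, the statistic stays sensitive both when a single p-value is extreme (small k) and when many p-values are moderately small (large k), which is the trade-off the abstract describes.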
Continuous development of current sheets near and away from magnetic nulls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Sanjay; Bhattacharyya, R.
2016-04-15
The presented computations compare the strength of current sheets which develop near and away from the magnetic nulls. To ensure the spontaneous generation of current sheets, the computations are performed congruently with Parker's magnetostatic theorem. The simulations evince current sheets near two dimensional and three dimensional magnetic nulls as well as away from them. An important finding of this work is in the demonstration of comparative scaling of peak current density with numerical resolution, for these different types of current sheets. The results document current sheets near two dimensional magnetic nulls to have larger strength while exhibiting a stronger scaling than the current sheets close to three dimensional magnetic nulls or away from any magnetic null. The comparative scaling points to a scenario where the magnetic topology near a developing current sheet is important for energetics of the subsequent reconnection.
Experimental evaluation of achromatic phase shifters for mid-infrared starlight suppression.
Gappinger, Robert O; Diaz, Rosemary T; Ksendzov, Alexander; Lawson, Peter R; Lay, Oliver P; Liewer, Kurt M; Loya, Frank M; Martin, Stefan R; Serabyn, Eugene; Wallace, James K
2009-02-10
Phase shifters are a key component of nulling interferometry, one of the potential routes to enabling the measurement of faint exoplanet spectra. Here, three different achromatic phase shifters are evaluated experimentally in the mid-infrared, where such nulling interferometers may someday operate. The methods evaluated include the use of dispersive glasses, a through-focus field inversion, and field reversals on reflection from antisymmetric flat-mirror periscopes. All three approaches yielded deep, broadband, mid-infrared nulls, but the deepest broadband nulls were obtained with the periscope architecture. In the periscope system, average null depths of 4×10⁻⁵ were obtained with a 25% bandwidth, and 2×10⁻⁵ with a 20% bandwidth, at a central wavelength of 9.5 μm. The best short-term nulls at 20% bandwidth were approximately 9×10⁻⁶, in line with error budget predictions and the limits of the current generation of hardware.
The Importance of Proving the Null
Gallistel, C. R.
2010-01-01
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? PMID:19348549
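The sensitivity analysis can be sketched for the simplest case of a normally distributed effect estimate, with a point null against a uniform alternative on [−L, L]; the observed mean and standard error below are invented for illustration:

```python
import math

# Observed mean effect and its standard error (illustrative numbers).
xbar, se = 0.05, 0.10

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def odds_for_null(L):
    """Bayes factor for H0 (effect = 0) vs H1 (effect ~ Uniform[-L, L])."""
    like_null = norm_pdf(xbar, 0.0, se)
    # Marginal likelihood under H1: average of N(xbar; mu, se^2) over mu.
    like_alt = (norm_cdf((L - xbar) / se) + norm_cdf((L + xbar) / se) - 1.0) / (2 * L)
    return like_null / like_alt

# Odds on the null as a function of the vagueness limit L: for these data
# they approach 1 from above as L -> 0, the signature described above.
for L in (0.05, 0.2, 1.0, 5.0):
    print(f"L = {L:4}: odds on the null = {odds_for_null(L):.2f}")
```

The run traces exactly the sensitivity curve the abstract recommends: the vaguer the alternative (larger L), the more the odds favor the null, and the limit as L → 0 reveals whether the data favor the null over every alternative.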
Light cone structure near null infinity of the Kerr metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai Shan; Shang Yu; Graduate School of Chinese Academy of Sciences, Beijing, 100080
2007-02-15
Motivated by our attempt to understand the question of angular momentum of a relativistic rotating source carried away by gravitational waves, in the asymptotic regime near future null infinity of the Kerr metric, a family of null hypersurfaces intersecting null infinity in shearfree (good) cuts is constructed by means of asymptotic expansion of the eikonal equation. The geometry of the null hypersurfaces as well as the asymptotic structure of the Kerr metric near null infinity are studied. To the lowest order in angular momentum, the Bondi-Sachs form of the Kerr metric is worked out. The Newman-Unti formalism is then further developed, with which the Newman-Penrose constants of the Kerr metric are computed and shown to be zero. Possible physical implications of the vanishing of the Newman-Penrose constants of the Kerr metric are also briefly discussed.
Mohanty, S; Jermyn, K A; Early, A; Kawata, T; Aubry, L; Ceccarelli, A; Schaap, P; Williams, J G; Firtel, R A
1999-08-01
Dd-STATa is a structural and functional homologue of the metazoan STAT (Signal Transducer and Activator of Transcription) proteins. We show that Dd-STATa null cells exhibit several distinct developmental phenotypes. The aggregation of Dd-STATa null cells is delayed and they chemotax slowly to a cyclic AMP source, suggesting a role for Dd-STATa in these early processes. In Dd-STATa null strains, slug-like structures are formed but they have an aberrant pattern of gene expression. In such slugs, ecmB/lacZ, a marker that is normally specific for cells on the stalk cell differentiation pathway, is expressed throughout the prestalk region. Stalk cell differentiation in Dictyostelium has been proposed to be under negative control, mediated by repressor elements present in the promoters of stalk cell-specific genes. Dd-STATa binds these repressor elements in vitro and the ectopic expression of ecmB/lacZ in the null strain provides in vivo evidence that Dd-STATa is the repressor protein that regulates commitment to stalk cell differentiation. Dd-STATa null cells display aberrant behavior in a monolayer assay wherein stalk cell differentiation is induced using the stalk cell morphogen DIF. The ecmB gene, a general marker for stalk cell differentiation, is greatly overinduced by DIF in Dd-STATa null cells. Also, Dd-STATa null cells are hypersensitive to DIF for expression of ST/lacZ, a marker for the earliest stages in the differentiation of one of the stalk cell sub-types. We suggest that both these manifestations of DIF hypersensitivity in the null strain result from the balance between activation and repression of the promoter elements being tipped in favor of activation when the repressor is absent. Paradoxically, although Dd-STATa null cells are hypersensitive to the inducing effects of DIF and readily form stalk cells in monolayer assay, the Dd-STATa null cells show little or no terminal stalk cell differentiation within the slug. 
Dd-STATa null slugs remain developmentally arrested for several days before forming very small spore masses supported by a column of apparently undifferentiated cells. Thus, complete stalk cell differentiation appears to require at least two events: a commitment step, whereby the repression exerted by Dd-STATa is lifted, and a second step that is blocked in a Dd-STATa null organism. This latter step may involve extracellular cAMP, a known repressor of stalk cell differentiation, because Dd-STATa null cells are abnormally sensitive to the inhibitory effects of extracellular cyclic AMP.
Mid-space-independent deformable image registration.
Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce
2017-05-15
Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric - that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images.
Arbour, J H; López-Fernández, H
2014-11-01
Morphological, lineage and ecological diversity can vary substantially even among closely related lineages. Factors that influence morphological diversification, especially in functionally relevant traits, can help to explain the modern distribution of disparity across phylogenies and communities. Multivariate axes of feeding functional morphology from 75 species of Neotropical cichlids and a stepwise-AIC algorithm were used to estimate the adaptive landscape of functional morphospace in Cichlinae. Adaptive landscape complexity and convergence, as well as the functional diversity of Cichlinae, were compared with expectations under null evolutionary models. Neotropical cichlid feeding function varied primarily between traits associated with ram feeding vs. suction feeding/biting and secondarily with oral jaw muscle size and pharyngeal crushing capacity. The number of changes in selective regimes and the amount of convergence between lineages was higher than expected under a null model of evolution, but convergence was not higher than expected under a similarly complex adaptive landscape. Functional disparity was compatible with an adaptive landscape model, whereas the distribution of evolutionary change through morphospace corresponded with a process of evolution towards a single adaptive peak. The continentally distributed Neotropical cichlids have evolved relatively rapidly towards a number of adaptive peaks in functional trait space. Selection in Cichlinae functional morphospace is more complex than expected under null evolutionary models. The complexity of selective constraints in feeding morphology has likely been a significant contributor to the diversity of feeding ecology in this clade. © 2014 European Society For Evolutionary Biology.
Formation of the Sun-aligned arc region and the void (polar slot) under the null-separator structure
NASA Astrophysics Data System (ADS)
Tanaka, T.; Obara, T.; Watanabe, M.; Fujita, S.; Ebihara, Y.; Kataoka, R.
2017-04-01
From the global magnetosphere-ionosphere coupling simulation, we examined the formation of the Sun-aligned arc region and the void (polar slot) under the northward interplanetary magnetic field (IMF) with negative By condition. In the magnetospheric null-separator structure, the separatrices generated from two null points and two separators divide the entire space into four types of magnetic region, i.e., the IMF, the northern open magnetic field, the southern open magnetic field, and the closed magnetic field. In the ionosphere, the Sun-aligned arc region and the void are reproduced in the distributions of simulated plasma pressure and field-aligned current. The outermost closed magnetic field lines on the boundary (separatrix) between the northern open magnetic field and the closed magnetic field are projected to the northern ionosphere at the boundary between the Sun-aligned arc region and the void, both on the morning and evening sides. The magnetic field lines at the plasma sheet inner edge are projected to the equatorward boundary of the oval. Therefore, the Sun-aligned arc region is on the closed magnetic field lines of the plasma sheet. In the plasma sheet, an inflated structure (bulge) is generated at the junction of the tilted plasma sheet in the far-to-middle tail and nontilted plasma sheet in the ring current region. In the Northern Hemisphere, the bulge is on the evening side wrapped by the outermost closed magnetic field lines that are connected to the northern evening ionosphere. This inflated structure (bulge) is associated with shear flows that cause the Sun-aligned arc.
An empirical model to forecast solar wind velocity through statistical modeling
NASA Astrophysics Data System (ADS)
Gao, Y.; Ridley, A. J.
2013-12-01
The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies proposed many empirical and semi-empirical models to forecast the solar wind velocity based either on historical observations (e.g., the persistence model) or on instantaneous observations of the Sun (e.g., the Wang-Sheeley-Arge model). In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performances of four models often used in the literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root mean square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply least-squares regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model into a 'general persistence model'. By comparing its performance against the four aforementioned models, it is found that the general persistence model outperforms the other four models within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecast of solar wind velocity and has the potential to be modified to arrive at better models.
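The blending step described above (a least-squares linear combination of simple forecasts) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's WIND analysis; the series and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def rmse(pred, obs):
    """Root mean square error, the skill metric used to rank the models."""
    return np.sqrt(np.mean((pred - obs) ** 2))


# Synthetic stand-ins for the three simple forecasts (hypothetical data):
# null model (climatological mean), persistence (last observed value),
# and one-solar-rotation-ago (value one rotation earlier).
n = 1000
obs = 400 + 50 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 20, n)
null_model = np.full(n, obs.mean())
persistence = np.roll(obs, 1)      # previous value as the next forecast
rotation_ago = np.roll(obs, 27)    # one synthetic "rotation" earlier

# 'General persistence model': least-squares weights for the linear blend.
X = np.column_stack([null_model, persistence, rotation_ago])
w, *_ = np.linalg.lstsq(X, obs, rcond=None)
blend = X @ w

# On the fitting data the blend cannot do worse than any single member,
# since each member is itself a special case of the linear combination.
for member in (null_model, persistence, rotation_ago):
    assert rmse(blend, obs) <= rmse(member, obs) + 1e-9
```

The final assertions show why combining can only help in-sample; out-of-sample skill, as in the paper, must be checked on held-out data.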
Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried
2008-01-27
In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek to find a significant difference in location parameters from zero, or from one for ratios thereof, for each variable. However, in some studies a significant deviation of the difference in locations from zero (or 1 in terms of the ratio) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the problems that are the reason for the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space in between these two limits has to be accepted as the null hypothesis. The first algorithm to be discussed uses a permutation algorithm, and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. Then the second procedure might be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.
[Predictive model based multimetric index of macroinvertebrates for river health assessment].
Chen, Kai; Yu, Hai Yan; Zhang, Ji Wei; Wang, Bei Xin; Chen, Qiu Wen
2017-06-18
Improving the stability of the index of biotic integrity (IBI, i.e., a multi-metric index, MMI) across temporal and spatial scales is one of the most important issues in water ecosystem integrity bioassessment and water environment management. Using datasets of field-based macroinvertebrate and physicochemical variables and GIS-based natural predictors (e.g., geomorphology and climate) and land use variables collected at 227 river sites from 2004 to 2011 across the Zhejiang Province, China, we used random forests (RF) to adjust the effects of natural variations at temporal and spatial scales on macroinvertebrate metrics. We then developed natural-variation-adjusted (predictive) and unadjusted (null) MMIs and compared performance between them. The core metrics selected for predictive and null MMIs were different from each other, and natural variations within core metrics in predictive MMI explained by RF models ranged between 11.4% and 61.2%. The predictive MMI was more precise and accurate, but less responsive and sensitive than the null MMI. The multivariate nearest-neighbor test determined that 9 test sites and 1 most degraded site were flagged outside of the environmental space of the reference site network. We found that the combination of predictive MMI developed by using a predictive model and the nearest-neighbor test performed best and decreased risks of inferring type I (designating a water body as being in poor biological condition, when it was actually in good condition) and type II (designating a water body as being in good biological condition, when it was actually in poor condition) errors. Our results provided an effective method to improve the stability and performance of the index of biotic integrity.
Perfluorocarbon Nanoparticles for Physiological and Molecular Imaging and Therapy
Chen, Junjie; Pan, Hua; Lanza, Gregory M.; Wickline, Samuel A.
2014-01-01
Herein we review the use of non-nephrotoxic perfluorocarbon nanoparticles (PFC NP) for noninvasive detection and therapy of kidney diseases, and provide a synopsis of other related literature pertinent to anticipated clinical application. Recent reports indicate that PFC NP allow quantitative mapping of kidney perfusion and oxygenation after ischemia-reperfusion injury with the use of a novel multi-nuclear 1H/19F magnetic resonance imaging (MRI) approach. Furthermore, when conjugated with targeting ligands, the functionalized PFC NP offer unique and quantitative capabilities for imaging inflammation in the kidney of atherosclerotic ApoE-null mice. Additionally, PFC NP can facilitate drug delivery for treatment of inflammation, thrombosis, and angiogenesis in selected conditions that are comorbidities of kidney failure. The excellent safety profile of PFC NP with respect to kidney injury positions these nanomedicine approaches as promising diagnostic and therapeutic candidates for treating and following acute and chronic kidney diseases. PMID:24206599
The origin of nulls, mode changes and timing noise in pulsars
NASA Astrophysics Data System (ADS)
Jones, P. B.
1982-09-01
A solvable polar cap model obtained previously has normal states, which may be associated with radio emission, and null states. The solutions cannot be time-independent; the neutron star surface temperature T and mean surface nuclear charge Z are both functions of time. The normal and null states, and the transitions between them, form closed cycles in the T-Z plane. Normal-null transitions can occur inside a fraction of the area of the neutron star surface intersected by open magnetic flux lines. The fraction increases with pulsar period and becomes unity when the pulsar nears extinction. Frequency noise, mode changes, and pulse nulls have a common explanation in the transitions.
NASA Astrophysics Data System (ADS)
Shi, Li; Wu, Lun; Chi, Guanghua; Liu, Yu
2016-10-01
Space and place are two fundamental concepts in geography. Geographical factors have long been known as drivers of many aspects of people's social networks. But whether and how space and place affect social networks differently are still unclear. The widespread use of location-aware devices provides a novel source for distinguishing the mechanisms of their impacts on social networks. Using mobile phone data, this paper explores the effects of space and place on social networks. From the perspective of space, we confirm the distance decay effect in social networks, based on a comparison between synthetic social ties generated by a null model and actual social ties derived from real-world data. From the perspective of place, we introduce several measures to evaluate interactions between individuals and inspect the trio relationship including distance, spatio-temporal co-occurrence, and social ties. We found that people's interaction is a more important factor than spatial proximity, indicating that the spatial factor has a stronger impact on social networks in place compared to that in space. Furthermore, we verify the hypothesis that interactions play an important role in strengthening friendships.
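The comparison above rests on a distance-decay null model for tie formation. A minimal sketch, assuming a hypothetical power-law decay of tie probability with distance (synthetic data; the exponent and distances are illustrative, not from the mobile phone dataset):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical distance-decay null model: the probability that two people
# are socially tied falls off as a power law of the distance between them.
alpha_true = 1.5
d = rng.uniform(1, 100, 200_000)          # pairwise distances (arbitrary units)
p_tie = (d / d.min()) ** (-alpha_true)    # decay, normalized so p <= 1
tied = rng.random(d.size) < p_tie         # synthetic "social ties"

# Recover the exponent: bin pairs by distance on a log grid, then regress
# log(tie rate) on log(distance).
bins = np.logspace(0, 2, 20)
idx = np.digitize(d, bins)
rates, centers = [], []
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() > 100 and tied[mask].mean() > 0:
        rates.append(tied[mask].mean())
        centers.append(d[mask].mean())
slope, _ = np.polyfit(np.log(centers), np.log(rates), 1)
# slope should be close to -alpha_true
```

Comparing the observed tie rate against ties simulated from such a null model is one way to isolate the spatial (distance) effect from place-based interaction effects.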
Overview of LBTI: A Multipurpose Facility for High Spatial Resolution Observations
NASA Technical Reports Server (NTRS)
Hinz, P. M.; Defrere, D.; Skemer, A.; Bailey, V.; Stone, J.; Spalding, E.; Vaz, A.; Pinna, E.; Puglisi, A.; Esposito, S.;
2016-01-01
The Large Binocular Telescope Interferometer (LBTI) is a high spatial resolution instrument developed for coherent imaging and nulling interferometry using the 14.4 m baseline of the 2x8.4 m LBT. The unique telescope design, comprising dual apertures on a common elevation-azimuth mount, enables a broad range of observing modes. The full system comprises dual adaptive optics systems, a near-infrared phasing camera, a 1-5 micrometer camera (called LMIRCam), and an 8-13 micrometer camera (called NOMIC). The key program for LBTI is the Hunt for Observable Signatures of Terrestrial planetary Systems (HOSTS), a survey using nulling interferometry to constrain the typical brightness from exozodiacal dust around nearby stars. Additional observations focus on the detection and characterization of giant planets in the thermal infrared, high spatial resolution imaging of complex scenes such as Jupiter's moon, Io, planets forming in transition disks, and the structure of active galactic nuclei (AGN). Several instrumental upgrades are currently underway to improve and expand the capabilities of LBTI. These include: improving the performance and limiting magnitude of the parallel adaptive optics systems; quadrupling the field of view of LMIRCam (increasing to 20"x20"); adding an integral field spectrometry mode; and implementing a new algorithm for path length correction that accounts for dispersion due to atmospheric water vapor. We present the current architecture and performance of LBTI, as well as an overview of the upgrades.
An ellipsometric approach towards the description of inhomogeneous polymer-based Langmuir layers
Rottke, Falko O; Schulz, Burkhard; Richau, Klaus; Kratz, Karl
2016-01-01
The applicability of nulling-based ellipsometric mapping as a complementary method next to Brewster angle microscopy (BAM) and imaging ellipsometry (IE) is presented for the characterization of ultrathin films at the air–water interface. First, the methodology is demonstrated for a vertically nonmoving Langmuir layer of star-shaped, 4-arm poly(ω-pentadecalactone) (PPDL-D4). Using nulling-based ellipsometric mapping, PPDL-D4-based inhomogeneously structured morphologies with a vertical dimension in the lower nm range could be mapped. In addition to the identification of these structures, the differentiation between a monolayer and bare water was possible. Second, the potential and limitations of this method were verified by applying it to more versatile Langmuir layers of telechelic poly[(rac-lactide)-co-glycolide]-diol (PLGA). All ellipsometric maps were converted into thickness maps by introduction of the refractive index that was derived from independent ellipsometric experiments, and the result was additionally evaluated in terms of the root mean square roughness, Rq. Thereby, a three-dimensional view into the layers was enabled and morphological inhomogeneity could be quantified. PMID:27826490
NASA Technical Reports Server (NTRS)
Wendel, Deirdre E.; Reiff, Patricia H.; Goldstein, Melvyn L.
2010-01-01
We simulate a northward IMF cusp reconnection event at the magnetopause using the OpenGGCM resistive MHD code. The ACE input data, solar wind parameters, and dipole tilt belong to a 2002 reconnection event observed by IMAGE and Cluster. Based on a fully three-dimensional skeleton of separators, nulls, and parallel electric fields, we show that magnetic draping, convection, and ionospheric field line tying play a role in producing a series of locally reconnecting nulls with flux ropes. The flux ropes lie in the cusp along the global separator line of symmetry. In 2D projection, the flux ropes have the appearance of a tearing mode with a series of 'x's' and 'o's', but bear a kind of 'guide field' that exists only within the magnetopause. The reconnecting field lines in the string of ropes involve the IMF and both open and closed Earth magnetic field lines. The observed magnetic geometry reproduces the findings of a superposed epoch impact parameter study derived from the Cluster magnetometer data for the same event. The observed geometry has repercussions for spacecraft observations of cusp reconnection and for the boundary conditions imposed on reconnection simulations.
Peripheral Frequency of CD4+ CD28− Cells in Acute Ischemic Stroke
Tuttolomondo, Antonino; Pecoraro, Rosaria; Casuccio, Alessandra; Di Raimondo, Domenico; Buttà, Carmelo; Clemente, Giuseppe; Corte, Vittoriano della; Guggino, Giuliana; Arnao, Valentina; Maida, Carlo; Simonetta, Irene; Maugeri, Rosario; Squatrito, Rosario; Pinto, Antonio
2015-01-01
Abstract CD4+ CD28− T cells, also called CD28 null cells, have been reported as increased in the clinical setting of acute coronary syndrome. Only 2 studies previously analyzed peripheral frequency of CD28 null cells in subjects with acute ischemic stroke but, to our knowledge, peripheral frequency of CD28 null cells in each TOAST subtype of ischemic stroke has never been evaluated. We hypothesized that CD4+ cells and, in particular, the CD28 null cell subset could show a different degree of peripheral percentage in subjects with acute ischemic stroke in relation to clinical subtype and severity of ischemic stroke. The aim of our study was to analyze peripheral frequency of CD28 null cells in subjects with acute ischemic stroke in relation to TOAST diagnostic subtype, and to evaluate their relationship with scores of clinical severity of acute ischemic stroke, and their predictive role in the diagnosis of acute ischemic stroke and diagnostic subtype. We enrolled 98 consecutive subjects admitted to our recruitment wards with a diagnosis of ischemic stroke. As controls we enrolled 66 hospitalized patients without a diagnosis of acute ischemic stroke. Peripheral frequency of CD4+ and CD28 null cells has been evaluated with a FACS Calibur flow cytometer. Subjects with acute ischemic stroke had a significantly higher peripheral frequency of CD4+ cells and CD28 null cells compared to control subjects without acute ischemic stroke. Subjects with cardioembolic stroke had a significantly higher peripheral frequency of CD4+ cells and CD28 null cells compared to subjects with other TOAST subtypes. We observed a significant relationship between CD28 null cell peripheral percentage and Scandinavian Stroke Scale and NIHSS scores. ROC curve analysis showed that CD28 null cell percentage may be useful to differentiate between stroke subtypes. These findings seem to suggest a possible role for a T-cell component also in the acute ischemic stroke clinical setting, showing a different peripheral frequency of CD28 null cells in relation to each TOAST subtype of stroke. PMID:25997053
NASA Astrophysics Data System (ADS)
Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Draayer, J. P.
2018-06-01
A simple and effective algebraic isospin projection procedure for constructing orthonormal basis vectors of irreducible representations of O(5) ⊃ O_T(3) ⊗ O_N(2) from those in the canonical O(5) ⊃ SU_Λ(2) ⊗ SU_I(2) basis is outlined. The expansion coefficients are components of null-space vectors of the projection matrix, which in general has four nonzero elements in each row. Explicit formulae for evaluating O_T(3)-reduced matrix elements of O(5) generators are derived.
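The computational core of such a procedure, finding orthonormal null-space vectors of a sparse projection matrix, can be sketched generically with an SVD. The matrix below is a toy stand-in, not the paper's O(5) projection matrix:

```python
import numpy as np


def null_space(A, tol=1e-10):
    """Orthonormal basis of the null space of A via SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = np.sum(s > tol)
    return vt[rank:].T  # columns span {v : A v = 0}


# Toy stand-in for a projection matrix with a few nonzero entries per row
# (the paper's matrix has at most four); this is NOT the O(5) matrix itself.
A = np.array([
    [1.0, -1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, -1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, -1.0],
])
N = null_space(A)

# Every null-space column is annihilated by A, and the columns are orthonormal,
# as required of expansion coefficients for an orthonormal basis.
assert np.allclose(A @ N, 0)
assert np.allclose(N.T @ N, np.eye(N.shape[1]))
```

SVD-based null spaces return orthonormal columns directly, which is why they are a natural fit when the null-space vectors themselves serve as expansion coefficients.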
2007-12-11
Implemented both carrier and code phase tracking loops for performance evaluation of a minimum-power beamforming algorithm and a null steering algorithm. (Fig. 5: schematic of a K-element antenna array spatial adaptive processor. Fig. 6: schematic of a K-element antenna array space-time adaptive processor.)
Study of Hind Limb Tissue Gas Phase Formation in Response to Suspended Adynamia and Hypokinesia
NASA Technical Reports Server (NTRS)
Butler, Bruce D.
1996-01-01
The purpose of this study was to investigate the hypothesis that reduced joint/muscle activity (hypokinesia) as well as reduced or null loading of limbs (adynamia) in gravity would result in reduced decompression-induced gas phase formation and symptoms of decompression sickness (DCS). Finding a correlation between the two phenomena would correspond to the proposed reduction in tissue gas phase formation in astronauts undergoing decompression during extravehicular activity (EVA) in microgravity. The observation may further explain the reported low incidence of DCS in space.
Multi-objective control for cooperative payload transport with rotorcraft UAVs.
Gimenez, Javier; Gandolfo, Daniel C; Salinas, Lucio R; Rosales, Claudio; Carelli, Ricardo
2018-06-01
A novel kinematic formation controller based on null-space theory is proposed to transport a cable-suspended payload with two rotorcraft UAVs, considering collision avoidance, wind perturbations, and proper distribution of the load weight. An accurate 6-DoF nonlinear dynamic model of a helicopter and models for flexible cables and payload are included to test the proposal in a realistic scenario. System stability is demonstrated using Lyapunov theory, and several simulation results show the good performance of the approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
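The null-space idea behind such controllers can be sketched in a few lines: a secondary task velocity is projected into the null space of the primary task Jacobian, so it cannot disturb the primary task. This is a generic illustration of the technique, not the paper's controller; the Jacobian and task velocities are hypothetical.

```python
import numpy as np

# Null-space-based behavioral control sketch (generic, not the paper's
# controller): the secondary task acts only in null(J1) of the primary task.
J1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])      # primary task: control x and y motion
v1_des = np.array([0.5, -0.2])        # desired primary task velocity
v2 = np.array([0.1, 0.3, 0.8])        # secondary task velocity (e.g. spacing)

J1_pinv = np.linalg.pinv(J1)
P_null = np.eye(3) - J1_pinv @ J1     # projector onto null(J1)
v = J1_pinv @ v1_des + P_null @ v2    # combined command

# The secondary contribution does not affect the primary task output.
assert np.allclose(J1 @ v, v1_des)
```

Task priorities are therefore enforced structurally: lower-priority behaviors (collision avoidance, load distribution) are filtered through the projector rather than traded off by gains.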
Self-calibrated correlation imaging with k-space variant correlation functions.
Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J
2018-03-01
Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimate of correlation functions. The presented work aims to demonstrate this framework can provide a speed gain over parallel imaging by estimating k-space variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging that relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging in which physiological motion is fast and non-periodic. With k-space variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real-time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Bayes factor and posterior probability: Complementary statistical evidence to p-value.
Lin, Ruitao; Yin, Guosheng
2015-09-01
As a convention, a p-value is often computed in hypothesis testing and compared with the nominal level of 0.05 to determine whether to reject the null hypothesis. Although the smaller the p-value, the more significant the statistical test, it is difficult to perceive the p-value in a probability scale and quantify it as the strength of the data against the null hypothesis. In contrast, the Bayesian posterior probability of the null hypothesis has an explicit interpretation of how strong the data support the null. We make a comparison of the p-value and the posterior probability by considering a recent clinical trial. The results show that even when we reject the null hypothesis, there is still a substantial probability (around 20%) that the null is true. Not only should we examine whether the data would have rarely occurred under the null hypothesis, but we also need to know whether the data would be rare under the alternative. As a result, the p-value only provides one side of the information, for which the Bayes factor and posterior probability may offer complementary evidence. Copyright © 2015 Elsevier Inc. All rights reserved.
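The p-value versus posterior-probability contrast can be made concrete for a simple normal-mean test. The numbers below are hypothetical (not the clinical trial analyzed in the paper): a point null H0: θ = 0 against H1 with a normal prior θ ~ N(0, τ²), so both marginal likelihoods are available in closed form.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical setup: sample mean xbar ~ N(theta, sigma^2/n).
n, sigma, tau = 100, 1.0, 1.0
xbar = 0.2                      # observed sample mean, so z = 2.0
se = sigma / np.sqrt(n)

# Frequentist two-sided p-value.
z = xbar / se
p_value = 2 * (1 - norm.cdf(abs(z)))

# Bayes factor BF01: marginal likelihood of xbar under H0 vs under H1.
m0 = norm.pdf(xbar, 0, se)                       # H0: theta = 0
m1 = norm.pdf(xbar, 0, np.sqrt(tau**2 + se**2))  # H1: theta ~ N(0, tau^2)
bf01 = m0 / m1

# Posterior probability of H0 with 50/50 prior odds.
post_h0 = bf01 / (1 + bf01)

# p < 0.05 "rejects" H0, yet the posterior still favors the null:
# p_value ~ 0.046 while post_h0 ~ 0.58 under these assumptions.
```

This mirrors the abstract's point: a p-value just below 0.05 can coexist with a posterior probability of the null well above one half, because the p-value ignores how rare the data would be under the alternative.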
Lyra’s cosmology of hybrid universe in Bianchi-V space-time
NASA Astrophysics Data System (ADS)
Yadav, Anil Kumar; Bhardwaj, Vinod Kumar
2018-06-01
In this paper we have searched for the existence of Lyra’s cosmology in a hybrid universe with minimal interaction between dark energy and normal matter using Bianchi-V space-time. To derive the exact solution, the average scale factor is taken as a = (t^n e^{kt})^{1/m}, which describes the hybrid nature of the scale factor and generates a model of the transitioning universe from the early deceleration phase to the present acceleration phase. The quintessence model makes the matter content of the derived universe remarkably able to satisfy the null, dominant and strong energy conditions. It has been found that the time varying displacement β(t) co-relates with the nature of the cosmological constant Λ(t). We also discuss some physical and geometrical features of the universe.
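The deceleration-to-acceleration transition of the hybrid scale factor a(t) = (t^n e^{kt})^{1/m} can be checked directly: H = ȧ/a = (n/t + k)/m, so the deceleration parameter is q = -1 - Ḣ/H² = -1 + mn/(n + kt)², positive at early times and tending to -1 at late times. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

# Deceleration parameter for the hybrid scale factor a(t) = (t^n e^{k t})^{1/m}:
# H = (n/t + k)/m  and  q = -1 - H'/H^2 = -1 + m*n/(n + k*t)^2.
# Illustrative parameters (assumed, not from the paper):
n_, k_, m_ = 1.0, 0.1, 2.0


def q(t):
    """Closed-form deceleration parameter."""
    return -1.0 + m_ * n_ / (n_ + k_ * t) ** 2


def q_numeric(t, h=1e-4):
    """Finite-difference cross-check: q = -a a'' / a'^2."""
    a = lambda s: (s**n_ * np.exp(k_ * s)) ** (1.0 / m_)
    da = (a(t + h) - a(t - h)) / (2 * h)
    dda = (a(t + h) - 2 * a(t) + a(t - h)) / h**2
    return -a(t) * dda / da**2


# Closed form agrees with the numerical derivative of the scale factor.
assert abs(q(5.0) - q_numeric(5.0)) < 1e-4

# Early deceleration (q > 0) transitions to late acceleration (q < 0).
assert q(0.5) > 0 and q(100.0) < 0
```

With these assumed values the transition happens when (n + kt)² = mn, i.e. when the exponential term starts to dominate the power-law term in the scale factor.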
NASA Astrophysics Data System (ADS)
Kuniyal, Ravi Shankar; Uniyal, Rashmi; Biswas, Anindya; Nandan, Hemwati; Purohit, K. D.
2018-06-01
We investigate the geodesic motion of massless test particles in the background of a noncommutative geometry-inspired Schwarzschild black hole. The behavior of effective potential is analyzed in the equatorial plane and the possible motions of massless particles (i.e. photons) for different values of impact parameter are discussed accordingly. We have also calculated the frequency shift of photons in this space-time. Further, the mass parameter of a noncommutative inspired Schwarzschild black hole is computed in terms of the measurable redshift of photons emitted by massive particles moving along circular geodesics in equatorial plane. The strength of gravitational fields of noncommutative geometry-inspired Schwarzschild black hole and usual Schwarzschild black hole in General Relativity is also compared.
Terrestrial Planet Finder Interferometer Technology Status and Plans
NASA Technical Reports Server (NTRS)
Lawson, Perter R.; Ahmed, A.; Gappinger, R. O.; Ksendzov, A.; Lay, O. P.; Martin, S. R.; Peters, R. D.; Scharf, D. P.; Wallace, J. K.; Ware, B.
2006-01-01
A viewgraph presentation on the technology status and plans for Terrestrial Planet Finder Interferometer is shown. The topics include: 1) The Navigator Program; 2) TPF-I Project Overview; 3) Project Organization; 4) Technology Plan for TPF-I; 5) TPF-I Testbeds; 6) Nulling Error Budget; 7) Nulling Testbeds; 8) Nulling Requirements; 9) Achromatic Nulling Testbed; 10) Single Mode Spatial Filter Technology; 11) Adaptive Nuller Testbed; 12) TPF-I: Planet Detection Testbed (PDT); 13) Planet Detection Testbed Phase Modulation Experiment; and 14) Formation Control Testbed.
Xu, Fen; Burk, David; Gao, Zhanguo; Yin, Jun; Zhang, Xia
2012-01-01
The histone deacetylase sirtuin 1 (SIRT1) inhibits adipocyte differentiation and suppresses inflammation by targeting the transcription factors peroxisome proliferator-activated receptor γ and nuclear factor κB. Although this suggests that adiposity and inflammation should be enhanced when SIRT1 activity is inactivated in the body, this hypothesis has not been tested in SIRT1 null (SIRT1−/−) mice. In this study, we addressed this issue by investigating the adipose tissue in SIRT1−/− mice. Compared with their wild-type littermates, SIRT1 null mice exhibited a significant reduction in body weight. In adipose tissue, the average size of adipocytes was smaller, the content of extracellular matrix was lower, adiponectin and leptin were expressed at 60% of normal level, and adipocyte differentiation was reduced. All of these changes were observed with a 50% reduction in capillary density that was determined using a three-dimensional imaging technique. Except for vascular endothelial growth factor, the expression of several angiogenic factors (Pdgf, Hgf, endothelin, apelin, and Tgf-β) was reduced by about 50%. Macrophage infiltration and inflammatory cytokine expression were 70% less in the adipose tissue of null mice and macrophage differentiation was significantly inhibited in SIRT1−/− mouse embryonic fibroblasts in vitro. In wild-type mice, macrophage deletion led to a reduction in vascular density. These data suggest that SIRT1 controls adipose tissue function through regulation of angiogenesis, whose deficiency is associated with macrophage malfunction in SIRT1−/− mice. The study supports the concept that inflammation regulates angiogenesis in the adipose tissue. PMID:22315447
NASA Astrophysics Data System (ADS)
Zheng, Jing-Yi; Boustany, Nada N.
2010-07-01
Optical scatter imaging is used to estimate organelle size distributions in immortalized baby mouse kidney cells treated with 0.4 μM staurosporine to induce apoptosis. The study comprises apoptosis competent iBMK cells (W2) expressing the proapoptotic proteins Bax/Bak, apoptosis resistant Bax/Bak null cells (D3), and W2 and D3 cells expressing yellow fluorescent protein (YFP) or YFP fused to the antiapoptotic protein Bcl-xL (YFP-Bcl-xL). YFP expression is diffuse within the transfected cells, while YFP-Bcl-xL is localized to the mitochondria. Our results show a significant increase in the mean subcellular particle size from approximately 1.1 to 1.4 μm in both Bax/Bak expressing and Bax/Bak null cells after 60 min of STS treatment compared to DMSO-treated control cells. This dynamic is blocked by overexpression of YFP-Bcl-xL in Bax/Bak expressing cells, but is less significantly inhibited by YFP-Bcl-xL in Bax/Bak null cells. Our data suggest that the increase in subcellular particle size at the onset of apoptosis is modulated by Bcl-xL in the presence of Bax/Bak, but it occurs upstream of the final commitment to programmed cell death. Mitochondrial localization of YFP-Bcl-xL and the finding that micron-sized particles give rise to the scattering signal further suggest that alterations in mitochondrial morphology may underlie the observed changes in light scattering.
Minimum spanning tree analysis of the human connectome.
van Dellen, Edwin; Sommer, Iris E; Bohlken, Marc M; Tewarie, Prejaas; Draaisma, Laurijn; Zalesky, Andrew; Di Biase, Maria; Brown, Jesse A; Douw, Linda; Otte, Willem M; Mandl, René C W; Stam, Cornelis J
2018-06-01
One of the challenges of brain network analysis is to directly compare network organization between subjects, irrespective of the number or strength of connections. In this study, we used minimum spanning tree (MST; a unique, acyclic subnetwork with a fixed number of connections) analysis to characterize the human brain network and to create an empirical reference network. Such a reference network could be used as a null model of connections that form the backbone structure of the human brain. We analyzed the MST in three diffusion-weighted imaging datasets of healthy adults. The MST of the group mean connectivity matrix was used as the empirical null model. The MST of individual subjects matched this reference MST for a mean of 58%-88% of connections, depending on the analysis pipeline. Hub nodes in the MST matched with previously reported locations of hub regions, including the so-called rich club nodes (a subset of high-degree, highly interconnected nodes). Although most brain network studies have focused primarily on cortical connections, cortical-subcortical connections were consistently present in the MST across subjects. Brain network efficiency was higher when these connections were included in the analysis, suggesting that these tracts may be utilized as major neural communication routes. Finally, we confirmed that MST characteristics index the effects of brain aging. We conclude that the MST provides an elegant and straightforward approach to analyze structural brain networks, and to test network topological features of individual subjects in comparison to empirical null models. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
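The backbone extraction underlying the study can be sketched in a few lines of Python. This is a minimal illustration, not the authors' pipeline: the toy connectivity matrix and helper names are assumptions, and strong connections are inverted into short distances so Kruskal's minimum spanning tree keeps the strongest edges.

```python
import itertools

def mst_edges(weights):
    """Kruskal's algorithm on a symmetric connectivity matrix.

    Strong connections are treated as short distances (1/weight), so the
    'minimum' spanning tree keeps the strongest backbone of the network,
    with a fixed number of edges (n - 1 for n nodes)."""
    n = len(weights)
    parent = list(range(n))

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    candidates = sorted(
        (1.0 / weights[i][j], i, j)
        for i, j in itertools.combinations(range(n), 2)
        if weights[i][j] > 0
    )
    tree = []
    for _, i, j in candidates:
        ri, rj = find(i), find(j)
        if ri != rj:  # adding this edge creates no cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree

def mst_overlap(t1, t2):
    """Fraction of edges shared by two MSTs (cf. the 58%-88% match statistic)."""
    s1, s2 = set(map(frozenset, t1)), set(map(frozenset, t2))
    return len(s1 & s2) / len(s1)

# Toy 4-node connectivity matrix; the strongest chain 0-1-2-3 forms the MST
conn = [[0, 5, 1, 1],
        [5, 0, 4, 1],
        [1, 4, 0, 3],
        [1, 1, 3, 0]]
backbone = mst_edges(conn)
```

Comparing `mst_overlap` between an individual's tree and a group-level reference tree gives the kind of subject-versus-null-model match fraction reported in the abstract.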
Surfactant-Associated Protein A Provides Critical Immunoprotection in Neonatal Mice▿
George, Caroline L. S.; Goss, Kelli L.; Meyerholz, David K.; Lamb, Fred S.; Snyder, Jeanne M.
2008-01-01
The collectins surfactant-associated protein A (SP-A) and SP-D are components of innate immunity that are present before birth. Both proteins bind pathogens and assist in clearing infection. The significance of SP-A and SP-D as components of the neonatal immune system has not been investigated. To determine the role of SP-A and SP-D in neonatal immunity, wild-type, SP-A null, and SP-D null mice were bred in a bacterium-laden environment (corn dust bedding) or in a semisterile environment (cellulose fiber bedding). When reared in the corn dust bedding, SP-A null pups had significant mortality (P < 0.001) compared to both wild-type and SP-D null pups exposed to the same environment. The mortality of the SP-A null pups was associated with significant gastrointestinal tract pathology but little lung pathology. Moribund SP-A null newborn mice exhibited Bacillus sp. and Enterococcus sp. peritonitis. When the mother or newborn produced SP-A, newborn survival was significantly improved (P < 0.05) compared to the results when there was a complete absence of SP-A in both the mother and the pup. Significant sources of SP-A likely to protect a newborn include the neonatal lung and gastrointestinal tract but not the lactating mammary tissue of the mother. Furthermore, exogenous SP-A delivered by mouth to newborn SP-A null pups with SP-A null mothers improved newborn survival in the corn dust environment. Therefore, a lack of SP-D did not affect newborn survival, while SP-A produced by either the mother or the pup or oral exogenous SP-A significantly reduced newborn mortality associated with environmentally induced infection in SP-A null newborns. PMID:17967856
P value and the theory of hypothesis testing: an explanation for new researchers.
Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël
2010-03-01
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
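The two ideas in the abstract can be sketched numerically: Fisher's p value as a tail probability under the null, and the Neyman-Pearson rule as a pre-chosen Type I error threshold. A minimal sketch assuming a standardized z statistic; the function name and numbers are illustrative.

```python
import math

def two_sided_p(z):
    """Fisher's p value: probability of a standardized effect at least as
    extreme as the observed z, presuming the null hypothesis is true."""
    return math.erfc(abs(z) / math.sqrt(2))

# Neyman-Pearson decision rule: fix the Type I error rate alpha in advance,
# then reject the null whenever the statistic falls in the critical region.
z_observed = 2.0   # illustrative standardized effect
alpha = 0.05       # chosen Type I error level
p = two_sided_p(z_observed)
reject_null = p < alpha
```

Note the distinction the abstract draws: `p` measures evidence against the null, while `reject_null` is a yes/no decision controlled by `alpha`; neither is the probability that the null hypothesis is true.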
Simón, Oihane; Williams, Trevor; Asensio, Aaron C.; Ros, Sarhay; Gaya, Andrea; Caballero, Primitivo; Possee, Robert D.
2008-01-01
The genome of Spodoptera frugiperda multiple nucleopolyhedrovirus (NPV) was inserted into a bacmid (Sfbac) and used to produce a mutant lacking open reading frame 29 (Sf29null). Sf29null bacmid DNA was able to generate an infection in S. frugiperda. Approximately six times less DNA was present in occlusion bodies (OBs) produced by the Sf29null bacmid in comparison to viruses containing this gene. This reduction in DNA content was consistent with fewer virus particles being packaged within Sf29null bacmid OBs, as determined by fractionation of dissolved polyhedra and comparison of occlusion-derived virus (ODV) infectivity in cell culture. DNA from Sfbac, Sf29null, or Sf29null-repair, in which the gene deletion had been repaired, was equally infectious when used to transfect S. frugiperda. All three viruses produced similar numbers of OBs, although those from Sf29null were 10-fold less infectious than viruses with the gene. Insects infected with Sf29null bacmid died ∼24 h later than positive controls, consistent with the reduced virus particle content of Sf29null OBs. Transcripts from Sf29 were detected in infected insects 12 h prior to those from the polyhedrin gene. Homologs to Sf29 were present in other group II NPVs, and similar sequences were present in entomopoxviruses. Analysis of the Sf29 predicted protein sequence revealed signal peptide and transmembrane domains, but the presence of 12 potential N-glycosylation sites suggests that it is not an ODV envelope protein. Other motifs, including zinc-binding and threonine-rich regions, suggest degradation and adhesion functions. We conclude that Sf29 is a viral factor that determines the number of ODVs occluded in each OB. PMID:18550678
Mota, Linda C.; Hernandez, Juan P.
2010-01-01
Constitutive androstane receptor (CAR) is activated by several chemicals and in turn regulates multiple detoxification genes. Our research demonstrates that parathion is one of the most potent, environmentally relevant CAR activators, with an EC50 of 1.43 μM. Therefore, animal studies were conducted to determine whether CAR was activated by parathion in vivo. Surprisingly, CAR-null mice, but not wild-type (WT) mice, showed significant parathion-induced toxicity. However, parathion did not induce Cyp2b expression, suggesting that parathion is not a CAR activator in vivo, presumably because of its short half-life. CAR expression is also associated with the expression of several drug-metabolizing cytochromes P450 (P450). CAR-null mice demonstrate lower expression of Cyp2b9, Cyp2b10, Cyp2c29, and Cyp3a11 primarily, but not exclusively, in males. Therefore, we incubated microsomes from untreated WT and CAR-null mice with parathion in the presence of esterase inhibitors to determine whether CAR-null mice show perturbed P450-mediated parathion metabolism compared with that in WT mice. The metabolism of parathion to paraoxon and p-nitrophenol (PNP) was reduced in CAR-null mice, with male CAR-null mice showing reduced production of both paraoxon and PNP, and female CAR-null mice showing reduced production of only PNP. Overall, the data indicate that CAR-null mice metabolize parathion more slowly than WT mice. These results provide a potential mechanism for the increased sensitivity to parathion, and potentially other chemicals, of individuals with lower CAR activity, such as newborns, due to decreased metabolic capacity. PMID:20573718
Current Structure and Nonideal Behavior at Magnetic Null Points in the Turbulent Magnetosheath
NASA Technical Reports Server (NTRS)
Wendel, D. E.; Adrian, M. L.
2013-01-01
The Poincaré index indicates that the Cluster spacecraft tetrahedron entraps a number of 3-D magnetic nulls during an encounter with the turbulent magnetosheath. Previous researchers have found evidence for reconnection at one of the many filamentary current layers observed by Cluster in this region. We find that many of the entrained nulls are also associated with strong currents. We dissect the current structure of a pair of spiral nulls that may be topologically connected. At both nulls, we find a strong current along the spine, accompanied by a somewhat more modest current perpendicular to the spine that tilts the fan toward the axis of the spine. The current along the fan is comparable to that along the spine. At least one of the nulls manifests a rotational flow pattern in the fan plane that is consistent with torsional spine reconnection as predicted by theory. These results emphasize the importance of examining the magnetic topology in interpreting the nature of currents and reconnection in 3-D turbulence.
Manfroi, Silvia; Scarcello, Antonio; Pagliaro, Pasqualepaolo
2015-10-01
Molecular genetic studies on Duffy blood group antigens have identified mutations underlying the rare FY*Null and FY*X alleles. FY*Null has a high frequency in Blacks, especially from sub-Saharan Africa, while its frequency is not defined in Caucasians. The FY*X allele, associated with the Fy(a-b+w) phenotype, has a frequency of 2-3.5% in Caucasian people, while it is absent in Blacks. During a project of extensive blood group genotyping in patients affected by hemoglobinopathies, we identified FY*X/FY*Null and FY*A/FY*Null genotypes in a Caucasian thalassemic family from Sardinia. We speculate on the frequency of the FY*X and FY*Null alleles in Caucasian and Black people; further, we focus on the association of the FY*X allele with weak Fyb antigen expression on red blood cells and its identification using high-sensitivity serological typing methods or genotyping. Copyright © 2015 Elsevier Ltd. All rights reserved.
Position sensor for a fuel injection element in an internal combustion engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulkerson, D.E.; Geske, M.L.
1987-08-18
This patent describes an electronic circuit for dynamically sensing and processing signals representative of changes in a magnetic field, the circuit comprising: means for sensing a change in a magnetic field external to the circuit and providing an output representative of the change; circuit means electronically coupled with the output of the sensing means for providing an output indicating the presence of the magnetic field change; and a nulling circuit coupled with the output of the sensing means and across the indicating circuit means for nulling the electronic circuit responsive to the sensing means output, to thereby avoid ambient magnetic fields, temperature, and process variations, and wherein the nulling circuit comprises a capacitor coupled to the output of the nulling circuit, means for charging and discharging the capacitor responsive to any imbalance in the input to the nulling circuit, and circuit means coupling the capacitor with the output of the sensing means for nulling any imbalance during the charging or discharging of the capacitor.
Naked singularity resolution in cylindrical collapse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurita, Yasunari; Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto, 606-8502; Nakao, Ken-ichi
In this paper, we study the gravitational collapse of null dust in cylindrically symmetric spacetime. The naked singularity necessarily forms at the symmetry axis. We consider the situation in which null dust is emitted again from the naked singularity formed by the collapsed null dust and investigate the backreaction of this emission on the naked singularity. We show a very peculiar but physically important case in which the same amount of null dust as that of the collapsed one is emitted from the naked singularity as soon as the ingoing null dust hits the symmetry axis and forms the naked singularity. In this case, although this naked singularity satisfies the strong curvature condition of Krolak (the limiting focusing condition), geodesics which hit the singularity can be extended uniquely across the singularity. Therefore, we may say that the collapsing null dust passes through the singularity formed by itself and then leaves for infinity. Finally, the singularity completely disappears and the flat spacetime remains.
Evaluation of null-point detection methods on simulation data
NASA Astrophysics Data System (ADS)
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as is the case for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
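The X-type versus O-type distinction used above can be illustrated by classifying a linear null from the Jacobian of the magnetic field; in 2-D the sign of its determinant decides the type. This is a didactic sketch (names and tolerance are assumptions, not the authors' detection code); the 3-D classification via the full ∇B eigenvalues is analogous.

```python
def classify_2d_null(dbx_dx, dbx_dy, dby_dx, dby_dy, tol=1e-9):
    """Classify a linear 2-D magnetic null from the Jacobian of B.

    div B = 0 makes the Jacobian traceless, so its eigenvalues come in a
    +/- pair: det < 0 gives a real pair (X-type null, hyperbolic field
    lines); det > 0 gives an imaginary pair (O-type null, circulating
    field lines, as in a pinch)."""
    assert abs(dbx_dx + dby_dy) < tol, "field is not divergence-free"
    det = dbx_dx * dby_dy - dbx_dy * dby_dx
    return "X" if det < 0 else "O"

x_type = classify_2d_null(0, 1, 1, 0)    # B = (y, x): hyperbolic null
o_type = classify_2d_null(0, -1, 1, 0)   # B = (-y, x): circulating null
```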
The importance of proving the null.
Gallistel, C R
2009-04-01
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is favored. A general solution is a sensitivity analysis: Compute the odds for or against the null as a function of the limit(s) on the vagueness of the alternative. If the odds on the null approach 1 from above as the hypothesized maximum size of the possible effect approaches 0, then the data favor the null over any vaguer alternative to it. The simple computations and the intuitive graphic representation of the analysis are illustrated by the analysis of diverse examples from the current literature. They pose 3 common experimental questions: (a) Are 2 means the same? (b) Is performance at chance? (c) Are factors additive? © 2009 APA, all rights reserved
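The sensitivity analysis described above can be sketched for question (b), "is performance at chance?", assuming binomial data and a uniform alternative prior over a window of half-width delta around chance. The function, data, and grid approximation are illustrative assumptions, not the article's own computations.

```python
def null_odds(k, n, delta, grid=4001):
    """Bayes factor for the null (p = 0.5) against an alternative that
    spreads its prior mass uniformly over [0.5 - delta, 0.5 + delta],
    for k successes in n binomial trials."""
    def lik(p):
        return p ** k * (1.0 - p) ** (n - k)

    lo = 0.5 - delta
    step = 2.0 * delta / (grid - 1)
    # Marginal likelihood under the uniform alternative = average likelihood
    marginal = sum(lik(lo + i * step) for i in range(grid)) / grid
    return lik(0.5) / marginal

# Sensitivity analysis: odds on the null as the alternative's vagueness shrinks
odds = {d: null_odds(52, 100, d) for d in (0.4, 0.2, 0.05, 0.01)}
```

As the abstract describes, the vaguer the alternative (larger delta), the more the null is favored; as delta shrinks toward 0, the odds approach 1.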
Jurkowska, Halina; Niewiadomski, Julie; Hirschberger, Lawrence L.; Roman, Heather B.; Mazor, Kevin M.; Liu, Xiaojing; Locasale, Jason W.; Park, Eunkyue
2016-01-01
The cysteine dioxygenase (Cdo1)-null and the cysteine sulfinic acid decarboxylase (Csad)-null mouse are not able to synthesize hypotaurine/taurine by the cysteine/cysteine sulfinate pathway and have very low tissue taurine levels. These mice provide excellent models for studying the effects of taurine on biological processes. Using these mouse models, we identified betaine:homocysteine methyltransferase (BHMT) as a protein whose in vivo expression is robustly regulated by taurine. BHMT levels are low in liver of both Cdo1-null and Csad-null mice, but are restored to wild-type levels by dietary taurine supplementation. A lack of BHMT activity was indicated by an increase in the hepatic betaine level. In contrast to observations in liver of Cdo1-null and Csad-null mice, BHMT was not affected by taurine supplementation of primary hepatocytes from these mice. Likewise, CSAD abundance was not affected by taurine supplementation of primary hepatocytes, although it was robustly upregulated in liver of Cdo1-null and Csad-null mice and lowered to wild-type levels by dietary taurine supplementation. The mechanism by which taurine status affects hepatic CSAD and BHMT expression appears to be complex and to require factors outside of hepatocytes. Within the liver, mRNA abundance for both CSAD and BHMT was upregulated in parallel with protein levels, indicating regulation of BHMT and CSAD mRNA synthesis or degradation. PMID:26481005
Ameloblast Modulation and Transport of Cl−, Na+, and K+ during Amelogenesis
Bronckers, A.L.J.J.; Lyaruu, D.; Jalali, R.; Medina, J.F.; Zandieh-Doulabi, B.; DenBesten, P.K.
2015-01-01
Ameloblasts express transmembrane proteins for transport of mineral ions and regulation of pH in the enamel space. Two major transporters recently identified in ameloblasts are the Na+K+-dependent calcium transporter NCKX4 and the Na+-dependent HPO42– (Pi) cotransporter NaPi-2b. To regulate pH, ameloblasts express anion exchanger 2 (Ae2a,b), chloride channel Cftr, and amelogenins that can bind protons. Exposure to fluoride or null mutation of Cftr, Ae2a,b, or Amelx each results in formation of hypomineralized enamel. We hypothesized that enamel hypomineralization associated with disturbed pH regulation results from reduced ion transport by NCKX4 and NaPi-2b. This was tested by correlation analyses among the levels of Ca, Pi, Cl, Na, and K in forming enamel of mice with null mutation of Cftr, Ae2a,b, and Amelx, according to quantitative x-ray electron probe microanalysis. Immunohistochemistry, polymerase chain reaction analysis, and Western blotting confirmed the presence of apical NaPi-2b and Nckx4 in maturation-stage ameloblasts. In wild-type mice, K levels in enamel were negatively correlated with Ca and Cl but less negatively or even positively in fluorotic enamel. Na did not correlate with P or Ca in enamel of wild-type mice but showed strong positive correlation in fluorotic and nonfluorotic Ae2a,b- and Cftr-null enamel. In hypomineralizing enamel of all models tested, 1) Cl− was strongly reduced; 2) K+ and Na+ accumulated (Na+ not in Amelx-null enamel); and 3) modulation was delayed or blocked. These results suggest that a Na+K+-dependent calcium transporter (likely NCKX4) and a Na+-dependent Pi transporter (potentially NaPi-2b) located in ruffle-ended ameloblasts operate in a coordinated way with the pH-regulating machinery to transport Ca2+, Pi, and bicarbonate into maturation-stage enamel. 
Acidification and/or associated physicochemical/electrochemical changes in ion levels in enamel fluid near the apical ameloblast membrane may reduce the transport activity of mineral transporters, which results in hypomineralization. PMID:26403673
Multiple-Star System Adaptive Vortex Coronagraphy Using a Liquid Crystal Light Valve
NASA Astrophysics Data System (ADS)
Aleksanyan, Artur; Kravets, Nina; Brasselet, Etienne
2017-05-01
We propose the development of a high-contrast imaging technique enabling the simultaneous and selective nulling of several light sources. This is done by realizing a reconfigurable multiple-vortex phase mask made of a liquid crystal thin film on which local topological features can be addressed electro-optically. The method is illustrated by reporting on a triple-star optical vortex coronagraphy laboratory demonstration, which can be easily extended to higher multiplicity. These results allow considering the direct observation and analysis of worlds with multiple suns and more complex extrasolar planetary systems.
Use of discrete chromatic space to tune the image tone in a color image mosaic
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li
2003-09-01
Color image processing is an important problem. The prevailing approach is to transform the RGB colour space into another colour space, such as HIS (hue, intensity, and saturation), YIQ, or LUV. In practice, however, processing a colour airborne image in a single colour space may not be valid, because the electromagnetic wave is physically altered in every wave band, while the colour image is perceived through psychological vision. It is therefore necessary to propose an approach that accords with both the physical transformation and psychological perception. We then analyse how to use the relevant colour spaces to process colour airborne photos and present an application that tunes the image tone in a colour airborne image mosaic. As a practical demonstration, a complete approach to performing the mosaic on colour airborne images by taking full advantage of the relevant colour spaces is discussed.
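Working in a luminance/chrominance space such as YIQ lets one tune image tone by adjusting only the luma channel, which is one concrete instance of the space transformations the abstract mentions. A minimal sketch using Python's standard colorsys module; the gain-based helper is an illustrative assumption, not the authors' method.

```python
import colorsys

def adjust_tone(rgb, gain):
    """Tune tone in a luminance/chrominance space rather than raw RGB:
    scale only the Y (luma) channel of YIQ, leaving the chromatic I and Q
    components alone, so the hue of a mosaic patch is preserved while
    its tone changes."""
    y, i, q = colorsys.rgb_to_yiq(*rgb)
    return colorsys.yiq_to_rgb(min(1.0, y * gain), i, q)

pixel = (0.8, 0.4, 0.2)           # an illustrative RGB sample
brighter = adjust_tone(pixel, 1.2)
```

Applying the same luma gain across overlapping images is one simple way to equalize tone along a mosaic seam without shifting colours.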
Imaging the Surfaces of Stars from Space
NASA Astrophysics Data System (ADS)
Carpenter, Kenneth; Rau, Gioia
2018-04-01
Imaging of stellar surfaces has been dominated to date by ground-based observations, but space-based facilities offer tremendous potential for extending the wavelength coverage and ultimately the resolution of such efforts. We review the imaging accomplished so far from space and then discuss exciting future prospects. The earliest attempts from space indirectly produced surface maps via the Doppler imaging technique, using UV spectra obtained with the International Ultraviolet Explorer (IUE). Later, the first direct UV images, of Mira and Betelgeuse, were obtained with the Hubble Space Telescope (HST) using the Faint Object Camera (FOC). We present this work and then investigate prospects for IR imaging with the James Webb Space Telescope (JWST). The real potential of space-based imaging of stellar surfaces, however, lies in the future, when large-baseline Fizeau interferometers, such as the UV-optical Stellar Imager (SI) Vision Mission, with a 30-element array and 500 m maximum baseline, are flown. We describe SI and its science goals, which include 0.1 milliarcsecond spectral imaging of stellar surfaces and the probing of internal structure and flows via asteroseismology.
Pronouns in Catalan: Information, Discourse and Strategy
ERIC Educational Resources Information Center
Mayol, Laia
2009-01-01
This thesis investigates the variation between null and overt pronouns in subject position in Catalan, a null subject language. I argue that null and overt subject pronouns are two resources that speakers efficiently deploy to signal their intended interpretation regarding antecedent choice or semantic meaning, and that communicative agents…
The Importance of Proving the Null
ERIC Educational Resources Information Center
Gallistel, C. R.
2009-01-01
Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is…
Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...
Complex Fuzzy Set-Valued Complex Fuzzy Measures and Their Properties
Ma, Shengquan; Li, Shenggang
2014-01-01
Let F*(K) be the set of all fuzzy complex numbers. In this paper some classical and measure-theoretical notions are extended to the case of complex fuzzy sets. They are fuzzy complex number-valued distance on F*(K), fuzzy complex number-valued measure on F*(K), and some related notions, such as null-additivity, pseudo-null-additivity, null-subtraction, pseudo-null-subtraction, autocontinuity from above, autocontinuity from below, and autocontinuity of the defined fuzzy complex number-valued measures. Properties of fuzzy complex number-valued measures are studied in detail. PMID:25093202
NASA Astrophysics Data System (ADS)
LaBombard, B.; Kuang, A. Q.; Brunner, D.; Faust, I.; Mumgaard, R.; Reinke, M. L.; Terry, J. L.; Howard, N.; Hughes, J. W.; Chilenski, M.; Lin, Y.; Marmar, E.; Rice, J. E.; Rodriguez-Fernandez, P.; Wallace, G.; Whyte, D. G.; Wolfe, S.; Wukitch, S.
2017-07-01
The impurity screening response of the high-field side (HFS) scrape-off layer (SOL) to localized nitrogen injection is investigated on Alcator C-Mod for magnetic equilibria spanning lower-single-null, double-null and upper-single-null configurations under otherwise identical plasma conditions. L-mode, EDA H-mode and I-mode discharges are investigated. HFS impurity screening is found to depend on magnetic flux balance and the direction of B × …
Applications of warped geometries: From cosmology to cold atoms
NASA Astrophysics Data System (ADS)
Brown, C. M.
This thesis describes several interrelated projects furthering the study of branes on warped geometries in string theory. First, we consider the non-perturbative interaction between D3 and D7 branes which stabilizes the overall volume in braneworld compactification scenarios. This interaction might offer stable nonsupersymmetric vacua which would naturally break supersymmetry if occupied by D3 branes. We derive the equations for the nonsupersymmetric vacua of the D3-brane and analyze them in the case of two particular 7-brane embeddings at the bottom of the warped deformed conifold. These geometries have negative dark energy. Stability of these models is possible but not generic. Further, we reevaluate brane/flux annihilation in a warped throat with one stabilized Kähler modulus. We find that depending on the relative size of various fluxes three things can occur: the decay process proceeds unhindered, the D3-branes are forbidden to decay classically, or the entire space decompactifies. Additionally, we show that the Kähler modulus receives a contribution from the collective 3-brane tension allowing significant changes in the compactified volume during the transition. Next, furthering the effort to describe cold atoms using AdS/CFT, we construct charged asymptotically Schrödinger black hole solutions of IIB supergravity. We begin by obtaining a closed-form expression for the null Melvin twist of many type IIB backgrounds and identify the resulting five-dimensional effective action. We use these results to demonstrate that the near-horizon physics and thermodynamics of asymptotically Schrödinger black holes obtained in this way are essentially inherited from their AdS progenitors, and verify that they admit zero-temperature extremal limits with AdS2 near-horizon geometries. Finally, in an effort to understand rotating nonrelativistic systems we use the null Melvin twist technology on a charged rotating AdS black hole and discover a type of Gödel space-time. We discuss how the dual field theory avoids the closed time-like curves which arise because of Bousso's holographic screen conjecture. This Gödel space-time is locally equivalent to a Schrödinger space-time that has been forced onto an S².
A Solution Space for a System of Null-State Partial Differential Equations: Part 1
NASA Astrophysics Data System (ADS)
Flores, Steven M.; Kleban, Peter
2015-01-01
This article is the first of four that completely and rigorously characterize a solution space for a homogeneous system of 2N + 3 linear partial differential equations (PDEs) in 2N variables that arises in conformal field theory (CFT) and multiple Schramm-Löwner evolution (SLE). In CFT, these are null-state equations and conformal Ward identities. They govern partition functions for the continuum limit of a statistical cluster or loop-gas model, such as percolation, or more generally the Potts models and O(n) models, at the statistical mechanical critical point. (SLE partition functions also satisfy these equations.) For such a lattice model in a polygon with its 2N sides exhibiting a free/fixed side-alternating boundary condition, this partition function is proportional to a CFT correlation function of one-leg corner operators inserted at the vertices w_i of the polygon. (Partition functions for "crossing events" in which clusters join the fixed sides of the polygon in some specified connectivity are linear combinations of such correlation functions.) When conformally mapped onto the upper half-plane, methods of CFT show that this correlation function satisfies the system of PDEs that we consider. In this first article, we use methods of analysis to prove that the dimension of this solution space is no more than C_N, the Nth Catalan number. While our motivations are based in CFT, our proofs are completely rigorous. This proof is contained entirely within this article, except for the proof of Lemma 14, which constitutes the second article (Flores and Kleban, in Commun Math Phys, arXiv:1404.0035, 2014). In the third article (Flores and Kleban, in Commun Math Phys, arXiv:1303.7182, 2013), we use the results of this article to prove that the solution space of this system of PDEs has dimension C_N and is spanned by solutions constructed with the CFT Coulomb gas (contour integral) formalism.
In the fourth article (Flores and Kleban, in Commun Math Phys, arXiv:1405.2747, 2014), we prove further CFT-related properties about these solutions, some useful for calculating cluster-crossing probabilities of critical lattice models in polygons.
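The Catalan-number bound above is simple to compute. As an illustrative aside (not part of the article itself), the standard recurrence C_0 = 1, C_{m+1} = sum_i C_i C_{m-i} gives the bound C_N on the solution-space dimension for each N:

```python
# Catalan numbers C_N via the convolution recurrence.
# The article proves the solution-space dimension is at most C_N.
def catalan(n):
    c = [1]  # C_0 = 1
    for m in range(n):
        # C_{m+1} = sum_{i=0}^{m} C_i * C_{m-i}
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c[n]

print([catalan(n) for n in range(1, 7)])  # → [1, 2, 5, 14, 42, 132]
```

So for N = 3 (a hexagon), the bound on the dimension is C_3 = 5.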
ON THE NATURE OF RECONNECTION AT A SOLAR CORONAL NULL POINT ABOVE A SEPARATRIX DOME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pontin, D. I.; Priest, E. R.; Galsgaard, K., E-mail: dpontin@maths.dundee.ac.uk
2013-09-10
Three-dimensional magnetic null points are ubiquitous in the solar corona and in any generic mixed-polarity magnetic field. We consider magnetic reconnection at an isolated coronal null point whose fan field lines form a dome structure. Using analytical and computational models, we demonstrate several features of spine-fan reconnection at such a null, including the fact that substantial magnetic flux transfer from one region of field line connectivity to another can occur. The flux transfer occurs across the current sheet that forms around the null point during spine-fan reconnection, and there is no separator present. Also, flipping of magnetic field lines takes place in a manner similar to that observed in quasi-separatrix layer or slip-running reconnection.
Are eikonal quasinormal modes linked to the unstable circular null geodesics?
NASA Astrophysics Data System (ADS)
Konoplya, R. A.; Stuchlík, Z.
2017-08-01
In Cardoso et al. [6] it was claimed that the quasinormal modes which any stationary, spherically symmetric, asymptotically flat black hole emits in the eikonal regime are determined by the parameters of the circular null geodesic: the real and imaginary parts of the quasinormal mode are multiples of the frequency and instability timescale of the circular null geodesic, respectively. We consider asymptotically flat black holes in Einstein-Lovelock theory, find analytical expressions for the gravitational quasinormal modes in the eikonal regime, and analyze the null geodesics. Comparison of the two phenomena shows that the expected link between null geodesics and quasinormal modes is violated in Einstein-Lovelock theory. Nevertheless, the correspondence holds in a number of other cases, and here we formulate its actual limits.
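For reference, the eikonal correspondence claimed in Cardoso et al. is usually written (our paraphrase, not quoted from the abstract) as

```latex
\omega_{\ell n} \;\simeq\; \Omega_c\,\ell \;-\; i\left(n + \tfrac{1}{2}\right)\lvert\lambda\rvert,
\qquad \ell \gg 1,
```

where \Omega_c is the angular frequency of the unstable circular null geodesic and \lambda its Lyapunov exponent (the inverse of the instability timescale); it is this relation that the Einstein-Lovelock analysis shows can fail.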
The data-driven null models for information dissemination tree in social networks
NASA Astrophysics Data System (ADS)
Zhang, Zhiwei; Wang, Zhenyu
2017-10-01
To detect relatedness and co-occurrence between users, as well as the distribution features of nodes along the spreading path of a social network, this paper explores topological characteristics of information dissemination trees (IDT), which can be employed indirectly to probe the laws of information dissemination within social networks. Three different null models of IDT are presented: the statistical-constrained 0-order IDT null model, the random-rewire-broken-edge 0-order IDT null model, and the random-rewire-broken-edge 2-order IDT null model. These null models first generate the corresponding randomized copy of an actual IDT; then the extended significance profile, developed by adding the cascade ratio of the information dissemination path, is exploited both to evaluate the degree correlation of the two nodes associated with an edge and to assess the cascade ratio of dissemination paths of different lengths. Empirical analysis of several Sina Weibo and Twitter IDTs indicates that the null models presented here perform well in terms of node degree correlation and dissemination-path cascade ratio, and thus better reveal the features of information dissemination and fit the behavior of real social networks.
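As a generic illustration of the edge-rewiring idea behind such null models (a sketch only; the paper's IDT-specific 0-order and 2-order constructions impose additional constraints), a degree-preserving double-edge swap can be written as:

```python
import random

def degree_sequence(edges):
    """Node -> degree map for an undirected edge list."""
    d = {}
    for a, b in edges:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d

def rewire(edges, n_swaps, seed=0):
    """Degree-preserving randomization: pick two edges (a, b), (c, d)
    and swap endpoints to (a, d), (c, b), rejecting swaps that would
    create self-loops or parallel edges."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    done = 0
    for _ in range(100 * n_swaps):  # attempt cap guarantees termination
        if done == n_swaps:
            break
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue  # would create a parallel edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges
```

The randomized copy keeps every node's degree while destroying higher-order correlations, which is exactly what makes it a useful baseline for significance profiles.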
Cobb, Laura K; Appel, Lawrence J; Franco, Manuel; Jones-Smith, Jessica C; Nur, Alana; Anderson, Cheryl A M
2015-07-01
To examine the relationship between local food environments and obesity and assess the quality of studies reviewed. Systematic keyword searches identified studies from US and Canada that assessed the relationship of obesity to local food environments. We applied a quality metric based on design, exposure and outcome measurement, and analysis. We identified 71 studies representing 65 cohorts. Overall, study quality was low; 60 studies were cross-sectional. Associations between food outlet availability and obesity were predominantly null. Among non-null associations, we saw a trend toward inverse associations between supermarket availability and obesity (22 negative, 4 positive, 67 null) and direct associations between fast food and obesity (29 positive, 6 negative, 71 null) in adults. We saw direct associations between fast food availability and obesity in lower income children (12 positive, 7 null). Indices including multiple food outlets were most consistently associated with obesity in adults (18 expected, 1 not expected, 17 null). Limiting to higher quality studies did not affect results. Despite the large number of studies, we found limited evidence for associations between local food environments and obesity. The predominantly null associations should be interpreted cautiously due to the low quality of available studies. © 2015 The Obesity Society.
Prum, Richard O
2010-11-01
The Fisher-inspired, arbitrary intersexual selection models of Lande (1981) and Kirkpatrick (1982), including both stable and unstable equilibrium conditions, provide the appropriate null model for the evolution of traits and preferences by intersexual selection. Like the Hardy–Weinberg equilibrium, the Lande–Kirkpatrick (LK) mechanism arises as an intrinsic consequence of genetic variation in trait and preference in the absence of other evolutionary forces. The LK mechanism is equivalent to other intersexual selection mechanisms in the absence of additional selection on preference and with additional trait-viability and preference-viability correlations equal to zero. The LK null model predicts the evolution of arbitrary display traits that are neither honest nor dishonest, indicate nothing other than mating availability, and lack any meaning or design other than their potential to correspond to mating preferences. The current standard for demonstrating an arbitrary trait is impossible to meet because it requires proof of the null hypothesis. The LK null model makes distinct predictions about the evolvability of traits and preferences. Examples of recent intersexual selection research document the confirmationist pitfalls of lacking a null model. Incorporation of the LK null into intersexual selection will contribute to serious examination of the extent to which natural selection on preferences shapes signals.
Viewing condition dependence of the gaze-evoked nystagmus in Arnold Chiari type 1 malformation.
Ghasia, Fatema F; Gulati, Deepak; Westbrook, Edward L; Shaikh, Aasef G
2014-04-15
Saccadic eye movements rapidly shift gaze to the target of interest. Once the eyes reach a given target, the brainstem ocular motor integrator utilizes feedback from various sources to assure steady gaze. One such source is the cerebellum, whose lesions can impair neural integration, leading to gaze-evoked nystagmus. Gaze-evoked nystagmus is characterized by drifts moving the eyes away from the target and a null position where the drifts are absent. The extent of impairment in neural integration for the two opposite eccentricities might determine the location of the null position. The eye-in-orbit position might also determine the location of the null. We report this phenomenon in a patient with Arnold Chiari type 1 malformation who had intermittent esotropia and horizontal gaze-evoked nystagmus with a shift in the null position. During binocular viewing, the null was shifted to the right. During monocular viewing, when the eye under cover drifted nasally (secondary to the esotropia), the null of the gaze-evoked nystagmus reorganized toward the center. We speculate that the output of the neural integrator is altered by the conflicting bilateral eye-in-orbit positions secondary to the strabismus, which could explain the reorganization of the location of the null position. Copyright © 2014 Elsevier B.V. All rights reserved.
Sachs' free data in real connection variables
NASA Astrophysics Data System (ADS)
De Paoli, Elena; Speziale, Simone
2017-11-01
We discuss the Hamiltonian dynamics of general relativity with real connection variables on a null foliation, and use the Newman-Penrose formalism to shed light on the geometric meaning of the various constraints. We identify the equivalent of Sachs' constraint-free initial data as projections of connection components related to null rotations, i.e. the translational part of the ISO(2) group stabilising the internal null direction soldered to the hypersurface. A pair of second-class constraints reduces these connection components to the shear of a null geodesic congruence, thus establishing equivalence with the second-order formalism, which we show in detail at the level of symplectic potentials. A special feature of the first-order formulation is that Sachs' propagating equations for the shear, away from the initial hypersurface, are turned into tertiary constraints; their role is to preserve the relation between connection and shear under retarded time evolution. The conversion of wave-like propagating equations into constraints is possible thanks to an algebraic Bianchi identity; the same one that allows one to describe the radiative data at future null infinity in terms of a shear of a (non-geodesic) asymptotic null vector field in the physical spacetime. Finally, we compute the modification to the spin coefficients and the null congruence in the presence of torsion.
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loève transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
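The null-space parameter components discussed above are directions in parameter space to which the calibration data are insensitive. A minimal sketch of extracting such a basis by exact elimination (illustrative only; the toy Jacobian below is not the paper's model):

```python
from fractions import Fraction

def null_space(J):
    """Basis for the null space of matrix J (rows = observations,
    columns = parameters) via exact Gauss-Jordan elimination."""
    m, n = len(J), len(J[0])
    A = [[Fraction(x) for x in row] for row in J]
    pivots, r = [], 0
    for c in range(n):
        # find a pivot row for column c
        p = next((i for i in range(r, m) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        A[r] = [x / A[r][c] for x in A[r]]          # normalize pivot row
        for i in range(m):
            if i != r and A[i][c] != 0:             # eliminate column c elsewhere
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:  # one basis vector per free (data-insensitive) parameter
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for row, c in zip(A, pivots):
            v[c] = -row[f]
        basis.append(v)
    return basis
```

For a rank-deficient Jacobian such as [[1, 2, 3], [2, 4, 6]], two of the three parameters lie in the null space: adjusting along either basis vector changes the parameters without changing the simulated observations at all.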
The distribution of genetic variance across phenotypic space and the response to selection.
Blows, Mark W; McGuigan, Katrina
2015-05-01
The role of adaptation in biological invasions will depend on the availability of genetic variation for traits under selection in the new environment. Although genetic variation is present for most traits in most populations, selection is expected to act on combinations of traits, not individual traits in isolation. The distribution of genetic variance across trait combinations can be characterized by the empirical spectral distribution of the genetic variance-covariance (G) matrix. Empirical spectral distributions of G from a range of trait types and taxa all exhibit a characteristic shape; some trait combinations have large levels of genetic variance, while others have very little genetic variance. In this study, we review what is known about the empirical spectral distribution of G and show how it predicts the response to selection across phenotypic space. In particular, trait combinations that form a nearly null genetic subspace with little genetic variance respond only inconsistently to selection. We go on to set out a framework for understanding how the empirical spectral distribution of G may differ from the random expectations that have been developed under random matrix theory (RMT). Using a data set containing a large number of gene expression traits, we illustrate how hypotheses concerning the distribution of multivariate genetic variance can be tested using RMT methods. We suggest that the relative alignment between novel selection pressures during invasion and the nearly null genetic subspace is likely to be an important component of the success or failure of invasion, and for the likelihood of rapid adaptation in small populations in general. © 2014 John Wiley & Sons Ltd.
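As an illustrative sketch (not from the study itself), the characteristic shape described above, with most genetic variance loaded on a few trait combinations and a nearly null remainder, is already visible in the eigenvalues of a 2x2 G matrix for two highly genetically correlated traits:

```python
import math

def eig2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]],
    largest first (closed form for the 2x2 case)."""
    mean = (a + c) / 2.0
    d = math.hypot((a - c) / 2.0, b)
    return mean + d, mean - d

# G for two traits with unit variances and genetic correlation 0.95:
# almost all variance loads on one trait combination (the sum);
# the orthogonal combination (the difference) is a nearly null direction.
g_max, g_min = eig2(1.0, 0.95, 1.0)
```

Here g_max = 1.95 and g_min = 0.05, so roughly 97% of the genetic variance lies along one axis; selection along the nearly null axis would respond weakly and inconsistently, as the review describes.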
Fabrication of the LSST monolithic primary-tertiary mirror
NASA Astrophysics Data System (ADS)
Tuell, Michael T.; Martin, Hubert M.; Burge, James H.; Ketelsen, Dean A.; Law, Kevin; Gressler, William J.; Zhao, Chunyu
2012-09-01
As previously reported (at the SPIE Astronomical Instrumentation conference of 2010 in San Diego [1]), the Large Synoptic Survey Telescope (LSST) utilizes a three-mirror design in which the primary (M1) and tertiary (M3) mirrors are two concentric aspheric surfaces on one monolithic substrate. The substrate material is Ohara E6 borosilicate glass, in a honeycomb sandwich configuration, currently in production at The University of Arizona’s Steward Observatory Mirror Lab. We will provide an update to the status of the mirrors and metrology systems, which have advanced from concepts to hardware in the past two years. In addition to the normal requirements for smooth surfaces of the appropriate prescriptions, the alignment of the two surfaces must be accurately measured and controlled in the production lab, reducing the degrees of freedom needed to be controlled in the telescope. The surface specification is described as a structure function, related to seeing in excellent conditions. Both the pointing and centration of the two optical axes are important parameters, in addition to the axial spacing of the two vertices. This paper details the manufacturing process and metrology systems for each surface, including the alignment of the two surfaces. M1 is a hyperboloid and can utilize a standard Offner null corrector, whereas M3 is an oblate ellipsoid, so it has positive spherical aberration. The null corrector is a phase-etched computer-generated hologram (CGH) between the mirror surface and the center-of-curvature. Laser trackers are relied upon to measure the alignment and spacing as well as rough-surface metrology during loose-abrasive grinding.
New Techniques for High-contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline
NASA Astrophysics Data System (ADS)
Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Goto, M.; Grady, C. A.; Guyon, O.; Hashimoto, J.; Hayano, Y.; Hayashi, M.; Hayashi, S.; Henning, T.; Hodapp, K. W.; Ishii, M.; Iye, M.; Janson, M.; Kandori, R.; Knapp, G. R.; Kudo, T.; Kusakabe, N.; Kuzuhara, M.; Kwon, J.; Matsuo, T.; Miyama, S.; Morino, J.-I.; Moro-Martín, A.; Nishimura, T.; Pyo, T.-S.; Serabyn, E.; Suto, H.; Suzuki, R.; Takami, M.; Takato, N.; Terada, H.; Thalmann, C.; Tomono, D.; Watanabe, M.; Wisniewski, J. P.; Yamada, T.; Takami, H.; Usuda, T.; Tamura, M.
2013-02-01
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the SEEDS survey. We implement several new algorithms, including a method to register saturated images, a trimmed mean for combining an image sequence that reduces noise by up to ~20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is written in python. It is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI requires minimal modification to reduce data from instruments other than HiCIAO. It is freely available for download at www.github.com/t-brandt/acorns-adi under a Berkeley Software Distribution (BSD) license. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
Co-Phasing the Large Binocular Telescope: Status and Performance of LBTI/PHASECam
NASA Technical Reports Server (NTRS)
Defrere, D.; Hinz, P.; Downey, E.; Ashby, D.; Bailey, V.; Brusa, G.; Christou, J.; Danchi, W. C.; Grenz, P.; Hill, J. M.;
2014-01-01
The Large Binocular Telescope Interferometer (LBTI) is a NASA-funded nulling and imaging instrument designed to coherently combine the two 8.4-m primary mirrors of the LBT for high-sensitivity, high-contrast, and high-resolution infrared imaging (1.5-13 micrometers). PHASECam is LBTI's near-infrared camera used to measure tip-tilt and phase variations between the two AO-corrected apertures and provide high-angular-resolution observations. We report on the status of the system and describe its on-sky performance measured during the first semester of 2014. With a spatial resolution equivalent to that of a 22.8-meter telescope and the light-gathering power of a single 11.8-meter mirror, the co-phased LBT can be considered a forerunner of the next-generation extremely large telescopes (ELTs).
Wavelet-space correlation imaging for high-speed MRI without motion monitoring or data segmentation.
Li, Yu; Wang, Hui; Tkach, Jean; Roach, David; Woods, Jason; Dumoulin, Charles
2015-12-01
This study aims to (i) develop a new high-speed MRI approach by implementing correlation imaging in wavelet-space, and (ii) demonstrate the ability of wavelet-space correlation imaging to image human anatomy with involuntary or physiological motion. Correlation imaging is a high-speed MRI framework in which image reconstruction relies on quantification of data correlation. The presented work integrates correlation imaging with a wavelet transform technique developed originally in the field of signal and image processing. This provides a new high-speed MRI approach to motion-free data collection without motion monitoring or data segmentation. The new approach, called "wavelet-space correlation imaging", is investigated in brain imaging with involuntary motion and chest imaging with free-breathing. Wavelet-space correlation imaging can exceed the speed limit of conventional parallel imaging methods. Using this approach with high acceleration factors (6 for brain MRI, 16 for cardiac MRI, and 8 for lung MRI), motion-free images can be generated in static brain MRI with involuntary motion and nonsegmented dynamic cardiac/lung MRI with free-breathing. Wavelet-space correlation imaging enables high-speed MRI in the presence of involuntary motion or physiological dynamics without motion monitoring or data segmentation. © 2014 Wiley Periodicals, Inc.
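The wavelet transform underlying this approach concentrates image content into a few large coefficients, which is what makes reconstruction from undersampled data tractable. As a minimal, generic sketch (a single-level Haar transform, not the pipeline actually used in the study):

```python
import math

def haar(x):
    """One level of the orthonormal Haar transform: pairwise averages
    (approximation) and differences (detail), each scaled by 1/sqrt(2).
    Input length must be even."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def ihaar(approx, detail):
    """Exact inverse of haar(): interleave reconstructed pairs."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x
```

Smooth signal regions produce near-zero detail coefficients, so most of the signal energy collapses into the approximation band; the transform is also perfectly invertible, which any reconstruction scheme built on it relies on.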
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to the uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on predictions of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model: the first consists of pumping/injection wells, and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which would otherwise lead to a poor quantification of predictive uncertainty.
Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it effectively supports management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical means of applying model predictive uncertainty methods in environmental management.
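The core NSMC idea, perturbing calibrated parameters only along directions that leave the fit to the calibration data unchanged, can be sketched with a toy linear model (illustrative values only; real applications use the model Jacobian and a re-calibration polish step to handle nonlinearity):

```python
import random

# Toy linear model: two observations constrain three parameters,
# leaving a one-dimensional null space spanned by v = (1, -1, 0).
J = [[1.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
p_cal = [2.0, 3.0, 4.0]   # calibrated parameter set
v = [1.0, -1.0, 0.0]      # null-space direction: J @ v = 0

def predict(p):
    """Simulated observations for parameter set p."""
    return [sum(j * x for j, x in zip(row, p)) for row in J]

g_obs = predict(p_cal)    # calibration targets reproduced by p_cal

# Build a calibration-constrained ensemble by adding random
# null-space perturbations to the calibrated parameters.
rng = random.Random(1)
ensemble = [[pi + alpha * vi for pi, vi in zip(p_cal, v)]
            for alpha in (rng.gauss(0.0, 1.0) for _ in range(1000))]
```

Every ensemble member reproduces the calibration data exactly in this linear case, yet the members differ in their parameters, so any prediction sensitive to the null-space direction will spread across the ensemble; that spread is the predictive uncertainty NSMC quantifies.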
NASA Astrophysics Data System (ADS)
Filatov, Alexei Vladimirovich
2002-09-01
Using electromagnetic forces to suspend rotating objects (rotors) without mechanical contact is often an appealing technical solution. Magnetic suspensions are typically required to have adequate load capacity and stiffness, and low rotational loss. Other desired features include low price, high reliability and manufacturability. With recent advances in permanent-magnet materials, the required forces can often be obtained by simply using the interaction between permanent magnets. While a magnetic bearing based entirely on permanent magnets could be expected to be inexpensive, reliable and easy to manufacture, a fundamental physical principle known as Earnshaw's theorem maintains that this type of suspension cannot be statically stable. Therefore, some other physical mechanisms must be included. One such mechanism employs the interaction between a conductor and a nonuniform magnetic field in relative motion. Its advantages include simplicity, reliability, wide range of operating temperature and system autonomy (no external wiring and power supplies are required). The disadvantages of the earlier embodiments were high rotational loss, low stiffness and load capacity. This dissertation proposes a novel type of magnetic bearing stabilized by the field-conductor interaction. One of the advantages of this bearing is that no electric field, E, develops in the conductor during the rotor rotation when the system is in no-load equilibrium. Because of this we refer to it as the Null-E Bearing. Null-E Bearings have potential for lower rotational loss and higher load capacity and stiffness than other bearings utilizing the field-conductor interaction. Their performance is highly insensitive to manufacturing inaccuracies. The Null-E Bearing in its basic form can be augmented with supplementary electronics to improve its performance. 
Depending on the degree of the electronics involvement, a variety of magnetic bearings can be developed ranging from a completely passive to an active magnetic bearing of a novel type. This dissertation contains theoretical analysis of the Null-E Bearing operation, including derivation of the stability conditions and estimation of some of the rotational losses. The validity of the theoretical conclusions has been demonstrated by building and testing a prototype in which non-contact suspension of a 3.2-kg rotor is achieved at spin speeds above 18 Hz.
Dai, Mei; Liou, Benjamin; Swope, Brittany; Wang, Xiaohong; Zhang, Wujuan; Inskeep, Venette; Grabowski, Gregory A; Sun, Ying; Pan, Dao
2016-01-01
To study the neuronal deficits in neuronopathic Gaucher Disease (nGD), the chronological behavioral profiles and the age of onset of brain abnormalities were characterized in a chronic nGD mouse model (9V/null). Progressive accumulation of glucosylceramide (GC) and glucosylsphingosine (GS) in the brain of 9V/null mice was observed as early as 6 and 3 months of age for GC and GS, respectively. Abnormal accumulation of α-synuclein was present in the 9V/null brain as detected by immunofluorescence and Western blot analysis. In a repeated open-field test, the 9V/null mice (9 months and older) displayed significantly less environmental habituation and spent more time exploring the open field than the age-matched WT group, indicating the onset of short-term spatial memory deficits. In the marble burying test, the 9V/null group had a shorter latency to initiate burying activity at 3 months of age, whereas the latency increased significantly at ≥12 months of age; 9V/null females buried significantly more marbles to completion than the WT group, suggesting an abnormal response to this instinctive behavior and abnormal activity in non-associative anxiety-like behavior. In the conditional fear test, only the 9V/null males exhibited a significant decrease in response to contextual fear, but both genders showed less response to auditory-cued fear compared to age- and gender-matched WT at 12 months of age. These results indicate hippocampus-related emotional memory defects. Abnormal gait emerged in 9V/null mice, with wider front-paw and hind-paw widths as well as longer stride, in a gender-dependent manner with different ages of onset. Significantly higher liver- and spleen-to-body weight ratios were detected in 9V/null mice with different ages of onset. These data provide temporal evaluation of neurobehavioral dysfunctions and brain pathology in 9V/null mice that can be used for experimental designs to evaluate novel therapies for nGD.
ElAlfy, Mohsen Saleh; Adly, Amira Abdel Moneam; Ebeid, Fatma Soliman ElSayed; Eissa, Deena Samir; Ismail, Eman Abdel Rahman; Mohammed, Yasser Hassan; Ahmed, Manar Elsayed; Saad, Aya Sayed
2018-06-20
Sickle cell disease (SCD) is associated with alterations in immune phenotypes. CD4+CD28null T lymphocytes have pro-inflammatory functions and are linked to vascular diseases. To assess the percentage of CD4+CD28null T lymphocytes, natural killer (NK) cells, and IFN-gamma levels, we compared 40 children and adolescents with SCD with 40 healthy controls and evaluated their relation to disease severity and response to therapy. Patients with SCD were studied at steady state, focusing on history of frequent vaso-occlusive crises, hydroxyurea therapy, and IFN-gamma levels. Analysis of CD4+CD28null T lymphocytes and NK cells was done by flow cytometry. Liver and cardiac iron overload were assessed. CD4+CD28null T lymphocytes, NK cells, and IFN-gamma levels were significantly higher in patients than controls. Patients with a history of frequent vaso-occlusive crises and those with vascular complications had a higher percentage of CD4+CD28null T lymphocytes and IFN-gamma, while levels were significantly lower among hydroxyurea-treated patients. CD4+CD28null T lymphocytes were positively correlated with transfusional iron input, while these cells and IFN-gamma were negatively correlated with cardiac T2* and duration of hydroxyurea therapy. NK cells were correlated with HbS and indirect bilirubin. The increased expression of CD4+CD28null T lymphocytes highlights their role in immune dysfunction and in the pathophysiology of SCD complications.
Influence of Choice of Null Network on Small-World Parameters of Structural Correlation Networks
Hosseini, S. M. Hadi; Kesler, Shelli R.
2013-01-01
In recent years, coordinated variations in brain morphology (e.g., volume, thickness) have been employed as a measure of structural association between brain regions to infer large-scale structural correlation networks. Recent evidence suggests that brain networks constructed in this manner are inherently more clustered than random networks of the same size and degree. Thus, null networks constructed by randomizing topology are not a good choice for benchmarking small-world parameters of these networks. In the present report, we investigated the influence of the choice of null networks on small-world parameters of gray matter correlation networks in healthy individuals and survivors of acute lymphoblastic leukemia. Three types of null networks were studied: 1) networks constructed by topology randomization (TOP), 2) networks matched to the distributional properties of the observed covariance matrix (HQS), and 3) networks generated from correlation of randomized input data (COR). The results revealed that the choice of null network not only influences the estimated small-world parameters but also influences the results of between-group differences in small-world parameters. In addition, at higher network densities, the choice of null network influences the direction of group differences in network measures. Our data suggest that the choice of null network is quite crucial for interpretation of group differences in small-world parameters of structural correlation networks. We argue that none of the available null models is perfect for estimation of small-world parameters for correlation networks and that the relative strengths and weaknesses of the selected model should be carefully considered with respect to the obtained network measures. PMID:23840672
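The benchmarking step described above can be sketched in Python. The following is a minimal illustration (not the authors' code): it estimates the small-world index sigma of an undirected network against density-matched Erdős–Rényi null networks, a simplified stand-in for the TOP/HQS/COR null models compared in the study; the function names and the choice of null model are assumptions.

```python
import numpy as np

def clustering(A):
    # Mean clustering coefficient of an undirected adjacency matrix.
    n = len(A)
    cs = []
    for i in range(n):
        nb = np.flatnonzero(A[i])
        k = len(nb)
        if k < 2:
            cs.append(0.0)
            continue
        # Edges among neighbors (counted twice) over possible pairs (twice).
        cs.append(A[np.ix_(nb, nb)].sum() / (k * (k - 1)))
    return float(np.mean(cs))

def path_length(A):
    # Mean shortest-path length via breadth-first search,
    # averaged over reachable ordered pairs.
    n = len(A)
    total, cnt = 0, 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        queue = [s]
        while queue:
            u = queue.pop(0)
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += dist[dist > 0].sum()
        cnt += int((dist > 0).sum())
    return total / cnt

def small_world_sigma(A, n_null=20, rng=None):
    # sigma = (C/C_null) / (L/L_null) against density-matched
    # Erdos-Renyi null networks (a simplified stand-in for TOP).
    rng = np.random.default_rng(rng)
    n = len(A)
    p = A.sum() / (n * (n - 1))  # edge density
    C, L = clustering(A), path_length(A)
    Cn, Ln = [], []
    for _ in range(n_null):
        R = (rng.random((n, n)) < p).astype(int)
        R = np.triu(R, 1)
        R = R + R.T  # symmetrize: undirected null network
        Cn.append(clustering(R))
        Ln.append(path_length(R))
    return (C / np.mean(Cn)) / (L / np.mean(Ln))
```

A ring lattice (high clustering, long paths) scores sigma well above 1 against such nulls, which is exactly the inflation the abstract warns about when topology-randomized nulls are applied to inherently clustered correlation networks.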
Quantum fluctuating geometries and the information paradox
NASA Astrophysics Data System (ADS)
Eyheralde, Rodrigo; Campiglia, Miguel; Gambini, Rodolfo; Pullin, Jorge
2017-12-01
We study Hawking radiation on the quantum space-time of a collapsing null shell. We use the geometric optics approximation as in Hawking’s original papers to treat the radiation. The quantum space-time is constructed by superposing the classical geometries associated with collapsing shells with uncertainty in their position and mass. We show that there are departures from thermality in the radiation even though we are not considering a back reaction. One recovers the usual profile for the Hawking radiation as a function of frequency in the limit where the space-time is classical. However, when quantum corrections are taken into account, the profile of the Hawking radiation as a function of time contains information about the initial state of the collapsing shell. More work will be needed to determine whether all the information can be recovered. The calculations show that non-trivial quantum effects can occur in regions of low curvature when horizons are involved, as is proposed in the firewall scenario, for instance.
Casimir energy in Kerr space-time
NASA Astrophysics Data System (ADS)
Sorge, F.
2014-10-01
We investigate the vacuum energy of a scalar massless field confined in a Casimir cavity moving in a circular equatorial orbit in the exact Kerr space-time geometry. We find that both the orbital motion of the cavity and the underlying space-time geometry conspire in lowering the absolute value of the (renormalized) Casimir energy ⟨ɛvac⟩ren , as measured by a comoving observer, with respect to whom the cavity is at rest. This, in turn, causes a weakening in the attractive force between the Casimir plates. In particular, we show that the vacuum energy density ⟨ɛvac⟩ren→0 when the orbital path of the Casimir cavity comes close to the corotating or counter-rotating circular null orbits (possibly geodesic) allowed by the Kerr geometry. Such an effect could be of some astrophysical interest on relevant orbits, such as the Kerr innermost stable circular orbits, being potentially related to particle confinement (as in some interquark models). The present work generalizes previous results obtained by several authors in the weak field approximation.
Does movement influence representations of time and space?
Loeffler, Jonna; Raab, Markus; Cañal-Bruland, Rouwen
2017-01-01
Embodied cognition posits that abstract conceptual knowledge such as mental representations of time and space are at least partially grounded in sensorimotor experiences. If true, then the execution of whole-body movements should result in modulations of temporal and spatial reference frames. To scrutinize this hypothesis, in two experiments participants either walked forward, backward or stood on a treadmill and responded either to an ambiguous temporal question (Experiment 1) or an ambiguous spatial question (Experiment 2) at the end of the walking manipulation. Results confirmed the ambiguousness of the questions in the control condition. Nevertheless, despite large power, walking forward or backward did not influence the answers or response times to the temporal (Experiment 1) or spatial (Experiment 2) question. A follow-up Experiment 3 indicated that this is also true for walking actively (or passively) in free space (as opposed to a treadmill). We explore possible reasons for the null-finding as concerns the modulation of temporal and spatial reference frames by movements and we critically discuss the methodological and theoretical implications. PMID:28376130
Wavelet-space Correlation Imaging for High-speed MRI without Motion Monitoring or Data Segmentation
Li, Yu; Wang, Hui; Tkach, Jean; Roach, David; Woods, Jason; Dumoulin, Charles
2014-01-01
Purpose This study aims to 1) develop a new high-speed MRI approach by implementing correlation imaging in wavelet-space, and 2) demonstrate the ability of wavelet-space correlation imaging to image human anatomy with involuntary or physiological motion. Methods Correlation imaging is a high-speed MRI framework in which image reconstruction relies on quantification of data correlation. The presented work integrates correlation imaging with a wavelet transform technique developed originally in the field of signal and image processing. This provides a new high-speed MRI approach to motion-free data collection without motion monitoring or data segmentation. The new approach, called “wavelet-space correlation imaging”, is investigated in brain imaging with involuntary motion and chest imaging with free-breathing. Results Wavelet-space correlation imaging can exceed the speed limit of conventional parallel imaging methods. Using this approach with high acceleration factors (6 for brain MRI, 16 for cardiac MRI and 8 for lung MRI), motion-free images can be generated in static brain MRI with involuntary motion and nonsegmented dynamic cardiac/lung MRI with free-breathing. Conclusion Wavelet-space correlation imaging enables high-speed MRI in the presence of involuntary motion or physiological dynamics without motion monitoring or data segmentation. PMID:25470230
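The wavelet transform at the core of the approach above can be illustrated with a single level of the orthonormal 1-D Haar transform. This is a toy stand-in: the wavelet family actually used and the correlation-imaging reconstruction itself are not reproduced here.

```python
import numpy as np

def haar_level(x):
    # One level of the orthonormal 1-D Haar wavelet transform:
    # pairwise sums (approximation) and differences (detail),
    # scaled so the transform preserves energy.
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_level_inv(a, d):
    # Exact inverse: reconstruct and interleave even/odd samples.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x
```

Smooth signals yield near-zero detail coefficients; that sparsity in wavelet space is what wavelet-domain MRI methods exploit when reconstructing from accelerated acquisitions.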
Visual and Plastic Arts in Teaching Literacy: Null Curricula?
ERIC Educational Resources Information Center
Wakeland, Robin Gay
2010-01-01
Visual and plastic arts in contemporary literacy instruction equal null curricula. Studies show that painting and sculpture facilitate teaching reading and writing (literacy), yet such pedagogy has not been formally adopted into the USA curriculum. An example of null curriculum can be found in late 19th- to early 20th-century education in the USA…
A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.
ERIC Educational Resources Information Center
Liu, Tung; Stone, Courtenay C.
1999-01-01
Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…
Targeted mutant models are common in mechanistic toxicology experiments investigating the absorption, metabolism, distribution, or elimination (ADME) of chemicals from individuals. Key models include those for xenosensing transcription factors and cytochrome P450s (CYP). Here we ...
Compensation for PKMζ in long-term potentiation and spatial long-term memory in mutant mice.
Tsokas, Panayiotis; Hsieh, Changchi; Yao, Yudong; Lesburguères, Edith; Wallace, Emma Jane Claire; Tcherepanov, Andrew; Jothianandan, Desingarao; Hartley, Benjamin Rush; Pan, Ling; Rivard, Bruno; Farese, Robert V; Sajan, Mini P; Bergold, Peter John; Hernández, Alejandro Iván; Cottrell, James E; Shouval, Harel Z; Fenton, André Antonio; Sacktor, Todd Charlton
2016-05-17
PKMζ is a persistently active PKC isoform proposed to maintain late-LTP and long-term memory. But late-LTP and memory are maintained without PKMζ in PKMζ-null mice. Two hypotheses can account for these findings. First, PKMζ is unimportant for LTP or memory. Second, PKMζ is essential for late-LTP and long-term memory in wild-type mice, and PKMζ-null mice recruit compensatory mechanisms. We find that whereas PKMζ persistently increases in LTP maintenance in wild-type mice, PKCι/λ, a gene-product closely related to PKMζ, persistently increases in LTP maintenance in PKMζ-null mice. Using a pharmacogenetic approach, we find PKMζ-antisense in hippocampus blocks late-LTP and spatial long-term memory in wild-type mice, but not in PKMζ-null mice without the target mRNA. Conversely, a PKCι/λ-antagonist disrupts late-LTP and spatial memory in PKMζ-null mice but not in wild-type mice. Thus, whereas PKMζ is essential for wild-type LTP and long-term memory, persistent PKCι/λ activation compensates for PKMζ loss in PKMζ-null mice.
Yoo, Min Heui; Kim, Tae-Youn; Yoon, Young Hee; Koh, Jae-Young
2016-01-01
To investigate the role of synaptic zinc in the ASD pathogenesis, we examined zinc transporter 3 (ZnT3) null mice. At 4–5 weeks of age, male but not female ZnT3 null mice exhibited autistic-like behaviors. Cortical volume and neurite density were significantly greater in male ZnT3 null mice than in WT mice. In male ZnT3 null mice, consistent with enhanced neurotrophic stimuli, the level of BDNF as well as activity of MMP-9 was increased. Consistent with known roles for MMPs in BDNF upregulation, 2.5-week treatment with minocycline, an MMP inhibitor, significantly attenuated BDNF levels as well as megalencephaly and autistic-like behaviors. Although the ZnT3 null state removed synaptic zinc, it rather increased free zinc in the cytosol of brain cells, which appeared to increase MMP-9 activity and BDNF levels. The present results suggest that zinc dyshomeostasis during the critical period of brain development may be a possible contributing mechanism for ASD. PMID:27352957
Modeling of "Stripe" Wave Phenomena Seen by the CHARM II and ACES Sounding Rockets
NASA Astrophysics Data System (ADS)
Dombrowski, M. P.; Labelle, J. W.
2010-12-01
Two recent sounding-rocket missions—CHARM II and ACES—have been launched from Poker Flat Research Range, carrying the Dartmouth High-Frequency Experiment (HFE) among their primary instruments. The HFE is a receiver system which effectively yields continuous (100% duty cycle) E-field waveform measurements up to 5 MHz. The CHARM II sounding rocket was launched at 9:49 UT on 15 February 2010 into a substorm, while the ACES mission consisted of two rockets, launched into quiet aurora at 9:49 and 9:50 UT on 29 January 2009. At approximately 350 km on CHARM II and the ACES High-Flyer, the HFE detected short (~2 s) bursts of broadband (200-500 kHz) noise with a 'stripe' pattern of nulls imposed on it. These nulls have 10 to 20 kHz width and spacing, and many show a regular, non-linear frequency-time relation. These events are different from the 'stripes' discussed by Samara and LaBelle [2006] and Colpitts et al. [2010], because of the density of the stripes, the non-linearity, and the appearance of being an absorptive rather than emissive phenomenon. These events are similar to 'stripe' features reported by Brittain et al. [1983] in the VLF range, explained as an interference pattern between a downward-traveling whistler-mode wave and its reflection off the bottom of the ionosphere. Following their analysis method, we modeled our stripes as higher-frequency interfering whistlers reflecting off of a density gradient. This model predicts the near-hyperbolic frequency-time curves and high density of the nulls, and therefore shows promise at explaining the new observations.
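The interference interpretation can be sketched numerically: a direct wave summed with a delayed, attenuated reflection produces a comb of spectral nulls whose spacing is set by the delay. The delay and reflection coefficient below are hypothetical illustration values, not parameters fitted to the rocket data.

```python
import numpy as np

def stripe_spectrum(freqs_hz, delay_s, refl=0.8):
    # Power spectrum of a direct wave plus a reflected copy delayed by
    # delay_s with amplitude ratio refl. Nulls occur near
    # f = (2n+1)/(2*delay_s) and are spaced 1/delay_s apart.
    return np.abs(1.0 + refl * np.exp(-2j * np.pi * freqs_hz * delay_s)) ** 2

# For example, a 15 kHz null spacing would correspond to a
# direct/reflected delay of roughly 67 microseconds.
```

A frequency-dependent delay (as for dispersive whistler-mode propagation) bends the otherwise evenly spaced nulls into the curved frequency-time tracks described in the abstract.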
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gascoyne, Andrew, E-mail: a.d.gascoyne@sheffield.ac.uk
2015-03-15
Using a full orbit test particle approach, we analyse the motion of a single proton in the vicinity of magnetic null point configurations which are solutions to the kinematic, steady state, resistive magnetohydrodynamics equations. We consider two magnetic configurations, namely, the sheared and torsional spine reconnection regimes [E. R. Priest and D. I. Pontin, Phys. Plasmas 16, 122101 (2009); P. Wyper and R. Jain, Phys. Plasmas 17, 092902 (2010)]; each produces an associated electric field and thus the possibility of accelerating charged particles to high energy levels, i.e., >MeV, as observed in solar flares [R. P. Lin, Space Sci. Rev. 124, 233 (2006)]. The particle's energy gain is strongly dependent on the location of injection and is characterised by the angle of approach β, with the optimum angle of approach β_opt defined as the value of β which produces the maximum energy gain. We examine the topological features of each regime and analyse the effect on the energy gain of the proton. We also calculate the complete Lyapunov spectrum for the considered dynamical systems in order to correctly quantify the chaotic nature of the particle orbits. We find that the sheared model is a good candidate for the acceleration of particles, and for increased shear, we expect a larger population to be accelerated to higher energy levels. In the strong electric field regime (E_0 = 1500 V/m), the torsional model produces chaotic particle orbits quantified by the calculation of multiple positive Lyapunov exponents in the spectrum, whereas the sheared model produces chaotic orbits only in the neighbourhood of the null point.
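The Lyapunov diagnostic used above can be illustrated with the standard two-trajectory (Benettin-style) estimate of the largest exponent. The sketch below applies it to the Lorenz system as a convenient stand-in, since the particle-orbit equations are not reproduced in the abstract; a positive exponent quantifies chaotic orbits in the same sense the authors use, and extending to a full orthonormalized set of perturbation vectors yields the complete spectrum.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Stand-in chaotic flow; the proton equations of motion would go here.
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    # Classical fourth-order Runge-Kutta integration step.
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(s0, dt=0.01, n_steps=30000, d0=1e-8, renorm=10):
    # Benettin method: evolve a reference and a perturbed trajectory,
    # periodically renormalizing their separation back to d0 while
    # accumulating the logarithmic growth rate.
    s = np.array(s0, float)
    for _ in range(1000):  # discard the transient
        s = rk4_step(lorenz, s, dt)
    sp = s + np.array([d0, 0.0, 0.0])
    log_sum, n_renorm = 0.0, 0
    for i in range(n_steps):
        s = rk4_step(lorenz, s, dt)
        sp = rk4_step(lorenz, sp, dt)
        if (i + 1) % renorm == 0:
            d = np.linalg.norm(sp - s)
            log_sum += np.log(d / d0)
            sp = s + (sp - s) * (d0 / d)
            n_renorm += 1
    return log_sum / (n_renorm * renorm * dt)
```

For the standard Lorenz parameters the estimate converges near the well-known value of about 0.9; the sign, not the exact value, is what distinguishes chaotic from regular orbits.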
Jeevanandham, Balaji; Kalyanpur, Tejas; Gupta, Prashant; Cherian, Mathew
2017-06-01
The aim of this study was to assess the usefulness of the newer three-dimensional (3D) T1 sampling perfection with application optimized contrast using different flip-angle evolutions (SPACE) and 3D-T2 fluid-attenuated inversion recovery (FLAIR) sequences in the evaluation of meningeal abnormalities. 78 patients who presented with high suspicion of meningeal abnormalities were evaluated using post-contrast 3D-T2-FLAIR, 3D-T1 magnetization-prepared rapid gradient-echo (MPRAGE) and 3D-T1-SPACE sequences. The images were evaluated independently by two radiologists for cortical gyral, sulcal space, basal cistern and dural enhancement. The diagnoses were confirmed by further investigations including histopathology. Post-contrast 3D-T1-SPACE and 3D-T2-FLAIR images yielded significantly more information than MPRAGE images (p < 0.05 for both SPACE and FLAIR images) in the detection of meningeal abnormalities. SPACE images best demonstrated abnormalities in dural and sulcal spaces, whereas FLAIR was useful for basal cistern enhancement. Both SPACE and FLAIR performed equally well in the detection of gyral enhancement. In all 10 patients where both SPACE and T2-FLAIR images failed to demonstrate any abnormality, further analysis was also negative. The 3D-T1-SPACE sequence best demonstrated abnormalities in dural and sulcal spaces, whereas FLAIR was useful for abnormalities in basal cisterns. Both SPACE and FLAIR performed equally well for the detection of gyral enhancement. Post-contrast SPACE and FLAIR sequences are superior to the MPRAGE sequence for the evaluation of meningeal abnormalities and, when used in combination, have the maximum sensitivity for leptomeningeal abnormalities. The negative-predictive value is nearly 100% when no leptomeningeal abnormality is detected on these sequences. Advances in knowledge: Post-contrast 3D-T1-SPACE and 3D-T2-FLAIR images are more useful than 3D-T1-MPRAGE images in the evaluation of meningeal abnormalities.
Reducing the uncertainty in the fidelity of seismic imaging results
NASA Astrophysics Data System (ADS)
Zhou, H. W.; Zou, Z.
2017-12-01
A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect in uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which defines the truthfulness of the imaged targets in terms of their resolution, position error and artifact. Key challenges to achieving the fidelity of seismic imaging include: (1) Difficulty to tell signal from artifact and noise; (2) Limitations in signal-to-noise ratio and seismic illumination; and (3) The multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they are in opposite directions of mapping between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, the practice in exploration seismology has long established a two-fold approach of seismic imaging: Using velocity modeling building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called imaging space, where the output reflection images of the subsurface are formed based on an imaging condition. 
A good example is the reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and the prediction based on estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.
Eisenhart lifts and symmetries of time-dependent systems
NASA Astrophysics Data System (ADS)
Cariglia, M.; Duval, C.; Gibbons, G. W.; Horváthy, P. A.
2016-10-01
Certain dissipative systems, such as Caldirola and Kannai's damped simple harmonic oscillator, may be modelled by time-dependent Lagrangian and hence time-dependent Hamiltonian systems with n degrees of freedom. In this paper we treat these systems, their projective and conformal symmetries as well as their quantisation from the point of view of the Eisenhart lift to a Bargmann spacetime in n + 2 dimensions, equipped with its covariantly constant null Killing vector field. Reparametrisation of the time variable corresponds to conformal rescalings of the Bargmann metric. We show how the Arnold map lifts to Bargmann spacetime. We contrast the greater generality of the Caldirola-Kannai approach with that of Arnold and Bateman. At the level of quantum mechanics, we are able to show how the relevant Schrödinger equation emerges naturally using the techniques of quantum field theory in curved spacetimes, since a covariantly constant null Killing vector field gives rise to a well-defined one-particle Hilbert space. Time-dependent Lagrangians arise naturally also in cosmology and give rise to the phenomenon of Hubble friction. We provide an account of this for Friedmann-Lemaître and Bianchi cosmologies and how it fits in with our previous discussion in the non-relativistic limit.
NASA Technical Reports Server (NTRS)
Jones, Charles; Waliser, Duane E.; Lau, K. M.; Stern, W.
2003-01-01
The Madden-Julian Oscillation (MJO) is known as the dominant mode of tropical intraseasonal variability and has an important role in the coupled atmosphere system. This study used twin numerical model experiments to investigate the influence of MJO activity on weather predictability in the midlatitudes of the Northern Hemisphere during boreal winter. The National Aeronautics and Space Administration (NASA) Goddard Laboratory for Atmospheres (GLA) general circulation model was first used in a 10-yr simulation with fixed climatological SSTs to generate a validation data set as well as to select initial conditions for active MJO periods and Null cases. Two perturbation numerical experiments were performed for the 75 cases selected [(4 MJO phases + Null phase) × 15 initial conditions in each]. For each alternative initial condition, the model was integrated for 90 days. Mean anomaly correlations in the midlatitudes of the Northern Hemisphere (20°N-60°N) and standardized root-mean-square errors were computed to validate the forecasts and the control run. The analyses of 500-hPa geopotential height, 200-hPa streamfunction and 850-hPa zonal wind component systematically show larger predictability during periods of active MJO as opposed to quiescent episodes of the oscillation.
Tressoldi, Patrizio E.
2011-01-01
Starting from the famous phrase “extraordinary claims require extraordinary evidence,” we will present the evidence supporting the concept that human visual perception may have non-local properties, in other words, that it may operate beyond the space and time constraints of sensory organs, in order to discuss which criteria can be used to define evidence as extraordinary. This evidence has been obtained from seven databases related to six different protocols used to test the reality and the functioning of non-local perception, analyzed using both a frequentist and a new Bayesian meta-analysis statistical procedure. According to the frequentist meta-analysis, the null hypothesis can be rejected for all six protocols, even if the effect sizes range from 0.007 to 0.28. According to the Bayesian meta-analysis, the Bayes factor provides strong evidence to support the alternative hypothesis (H1) over the null hypothesis (H0), but only for three out of the six protocols. We will discuss whether quantitative psychology can contribute to defining the criteria for the acceptance of new scientific ideas in order to avoid the inconclusive controversies between supporters and opponents. PMID:21713069
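The frequentist side of such a meta-analysis can be sketched with inverse-variance pooling. This is a generic fixed-effect illustration, not the authors' actual procedure; in particular, their Bayesian (Bayes factor) analysis is not reproduced here.

```python
import math

def fixed_effect_meta(effects, std_errors):
    # Inverse-variance weighted pooled effect size with a two-sided
    # p-value from the normal approximation; a significant z rejects
    # the null hypothesis H0 of zero effect.
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    z = pooled / se_pooled
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided: 2*(1 - Phi(|z|))
    return pooled, se_pooled, z, p
```

Pooling is how small per-study effects (such as the 0.007-0.28 range quoted above) can still reject H0 once many studies are combined, which is precisely why the abstract asks what should count as extraordinary evidence.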
Castilla-Ortega, Estela; Pavón, Francisco Javier; Sánchez-Marín, Laura; Estivill-Torrús, Guillermo; Pedraza, Carmen; Blanco, Eduardo; Suárez, Juan; Santín, Luis; Rodríguez de Fonseca, Fernando; Serrano, Antonia
2016-04-01
Lysophosphatidic acid species (LPA) are lipid bioactive signaling molecules that have been recently implicated in the modulation of emotional and motivational behaviors. The present study investigates the consequences of either genetic deletion or pharmacological blockade of lysophosphatidic acid receptor-1 (LPA1) in alcohol consumption. The experiments were performed in alcohol-drinking animals by using LPA1-null mice and administering the LPA1 receptor antagonist Ki16425 in both mice and rats. In the two-bottle free choice paradigm, the LPA1-null mice preferred the alcohol more than their wild-type counterparts. Whereas the male LPA1-null mice displayed this higher preference at all doses tested, the female LPA1-null mice only consumed more alcohol at 6% concentration. The male LPA1-null mice were then further characterized, showing a notably increased ethanol drinking after a deprivation period and a reduced sleep time after acute ethanol administration. In addition, LPA1-null mice were more anxious than the wild-type mice in the elevated plus maze test. For the pharmacological experiments, the acute administration of the antagonist Ki16425 consistently increased ethanol consumption in both wild-type mice and rats; while it did not modulate alcohol drinking in the LPA1-null mice and lacked intrinsic rewarding properties and locomotor effects in a conditioned place preference paradigm. In addition, LPA1-null mice exhibited a marked reduction on the expression of glutamate-transmission-related genes in the prefrontal cortex similar to those described in alcohol-exposed rodents. Results suggest a relevant role for the LPA/LPA1 signaling system in alcoholism. In addition, the LPA1-null mice emerge as a new model for genetic vulnerability to excessive alcohol drinking. The pharmacological manipulation of LPA1 receptor arises as a new target for the study and treatment of alcoholism. Copyright © 2015 Elsevier Ltd. All rights reserved.
On Nulling, Drifting, and Their Interactions in PSRs J1741-0840 and J1840-0840
NASA Astrophysics Data System (ADS)
Gajjar, V.; Yuan, J. P.; Yuen, R.; Wen, Z. G.; Liu, Z. Y.; Wang, N.
2017-12-01
We report a detailed investigation of the nulling and drifting behavior of two pulsars, PSRs J1741-0840 and J1840-0840, observed with the Giant Metrewave Radio Telescope at 625 MHz. PSR J1741-0840 was found to show a nulling fraction (NF) of around 30% ± 5%, while PSR J1840-0840 was shown to have an NF of around 50% ± 6%. We measured drifting behavior from different profile components in PSR J1840-0840 for the first time, with the leading component showing a drift periodicity of 13.5 ± 0.7 periods while the weak trailing component showed a drift periodicity of around 18 ± 1 periods. Such large nulling fractions hamper the accuracy of these quantities derived using standard Fourier techniques. A more accurate comparison was drawn from driftband slopes, measured after sub-pulse modeling. These measurements revealed interesting sporadic and irregular drifting behavior in both pulsars. We conclude that the previously reported different drifting periodicities in the trailing component of PSR J1741-0840 are likely due to the spread in these driftband slopes. We also find that both components of PSR J1840-0840 show similar driftband slopes within the uncertainties. A unique nulling-drifting interaction is identified in PSR J1840-0840 where, on most occasions, the pulsar tends to start nulling after what appears to be the end of a driftband. Similarly, when the pulsar switches back to an emission phase, on most occasions it starts at the beginning of a new driftband in both components. Such behaviors have not been detected in any other pulsars to our knowledge. We also found that PSR J1741-0840 seems to have no memory of its previous burst phase, while PSR J1840-0840 clearly exhibits memory of its previous state even after longer nulls for both components. We discuss possible explanations for these intriguing nulling-drifting interactions seen in both pulsars based on various pulsar nulling models.
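A crude version of the nulling-fraction estimate can be sketched as follows. This threshold classifier is an illustrative simplification of the histogram-comparison (Ritchings-type) approach normally used; the threshold rule and variable names are assumptions, not the authors' pipeline.

```python
import numpy as np

def nulling_fraction(on_energy, off_energy, n_sigma=3.0):
    # Classify a pulse as a null when its on-pulse energy is within
    # n_sigma standard deviations of the off-pulse (noise) mean.
    # Real analyses compare the full on/off energy histograms instead,
    # which also corrects for weak bursts misclassified as nulls.
    threshold = np.mean(off_energy) + n_sigma * np.std(off_energy)
    return float(np.mean(np.asarray(on_energy) < threshold))
```

The quoted uncertainties (e.g. 30% ± 5%) reflect the finite number of observed pulses and the overlap between the null and burst energy distributions.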
Rosales, Corina; Patel, Niket; Gillard, Baiba K.; Yelamanchili, Dedipya; Yang, Yaliu; Courtney, Harry S.; Santos, Raul D.; Gotto, Antonio M.; Pownall, Henry J.
2016-01-01
The reaction of Streptococcal serum opacity factor (SOF) against plasma high-density lipoproteins (HDL) produces a large cholesteryl ester-rich microemulsion (CERM), a smaller neo HDL that is apolipoprotein (apo) AI-poor, and lipid-free apo AI. SOF is active vs. both human and mouse plasma HDL. In vivo injection of SOF into mice reduces plasma cholesterol ~40% in 3 hours while forming the same products observed in vitro, but at different ratios. Previous studies supported the hypothesis that labile apo AI is required for the SOF reaction vs. HDL. Here we further tested that hypothesis by studies of SOF against HDL from apo AI-null mice. When injected into apo AI-null mice, SOF reduced plasma cholesterol ~35% in three hours. The reaction of SOF vs. apo AI-null HDL in vitro produced a CERM and neo HDL, but no lipid-free apo. Moreover, according to the rate of CERM formation, the extent and rate of the SOF reaction vs. apo AI-null mouse HDL was less than that against wild-type (WT) mouse HDL. Chaotropic perturbation studies using guanidine hydrochloride showed that apo AI-null HDL was more stable than WT HDL. Human apo AI added to apo AI-null HDL was quantitatively incorporated, giving reconstituted HDL. Both SOF and guanidine hydrochloride displaced apo AI from the reconstituted HDL. These results support the conclusion that apo AI-null HDL is more stable than WT HDL because it lacks apo AI, a labile protein that is readily displaced by physico-chemical and biochemical perturbations. Thus, apo AI-null HDL is less SOF-reactive than WT HDL. The properties of apo AI-null HDL can be partially restored to those of WT HDL by the spontaneous incorporation of human apo AI. It remains to be determined what other HDL functions are affected by apo AI deletion. PMID:25790332
Adaptive Nulling for Interferometric Detection of Planets
NASA Technical Reports Server (NTRS)
Lay, Oliver P.; Peters, Robert D.
2010-01-01
An adaptive-nulling method has been proposed to augment the nulling-optical-interferometry method of detection of Earth-like planets around distant stars. The method is intended to reduce the cost of building and aligning the highly precise optical components and assemblies needed for nulling. Typically, at the mid-infrared wavelengths used for detecting planets orbiting distant stars, a star is millions of times brighter than an Earth-sized planet. In order to directly detect the light from the planet, it is necessary to remove most of the light coming from the star. Nulling interferometry is one way to suppress the light from the star without appreciably suppressing the light from the planet. In nulling interferometry in its simplest form, one uses two nominally identical telescopes aimed in the same direction and separated laterally by a suitable distance. The light collected by the two telescopes is processed through optical trains and combined on a detector. The optical trains are designed such that the electric fields produced by an on-axis source (the star) are in anti-phase at the detector while the electric fields from the planet, which is slightly off-axis, combine in phase, so that the contrast ratio between the star and the planet is greatly decreased. If the electric fields from the star are exactly equal in amplitude and opposite in phase, then the star is effectively nulled out. Nulling is effective only if it is complete in the sense that it occurs simultaneously in both polarization states and at all wavelengths of interest.
The need to ensure complete nulling translates to extremely tight demands upon the design and fabrication of the complex optical trains: The two telescopes must be highly symmetric, the reflectivities of the many mirrors in the telescopes and other optics must be carefully tailored, the optical coatings must be extremely uniform, sources of contamination must be minimized, optical surfaces must be nearly ideal, and alignments must be extremely precise. Satisfaction of all of these requirements entails substantial cost.
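The two-beam combination described above can be quantified with the standard null-depth relation; the sketch below is a generic illustration (not the article's model) of how small amplitude and phase mismatches limit the achievable starlight rejection.

```python
import numpy as np

# Generic two-beam nulling sketch (illustration, not the article's model).
# The combiner subtracts field E2 from E1; the residual on-axis intensity
# relative to the constructive output measures the null depth.

def null_depth(a1, a2, dphi):
    """Leakage of the destructive output relative to the constructive one.

    a1, a2: beam amplitudes; dphi: phase error away from the ideal pi offset.
    """
    destructive = a1**2 + a2**2 - 2 * a1 * a2 * np.cos(dphi)
    constructive = a1**2 + a2**2 + 2 * a1 * a2 * np.cos(dphi)
    return destructive / constructive

# Perfectly matched beams null the star completely:
print(null_depth(1.0, 1.0, 0.0))          # 0.0

# A 1% amplitude mismatch plus 10 mrad of phase error already limits the
# null to roughly the (dphi**2 + (da/a)**2) / 4 level, ~5e-5 here:
print(null_depth(1.0, 1.01, 0.01))
```

The quadratic growth of the leakage with each error term is why the abstract's symmetry, coating-uniformity, and alignment requirements are so tight.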
Phase-space evolution of x-ray coherence in phase-sensitive imaging.
Wu, Xizeng; Liu, Hong
2008-08-01
X-ray coherence evolution in the imaging process plays a key role in x-ray phase-sensitive imaging. In this work we present a phase-space formulation for phase-sensitive imaging. The theory is reformulated in terms of the cross-spectral density and the associated Wigner distribution. The phase-space formulation enables an explicit and quantitative account of partial-coherence effects on phase-sensitive imaging. The presented formulas for the x-ray spectral density at the detector can be used for performing accurate phase retrieval and optimizing the phase-contrast visibility. The concept of a phase-space shearing length derived from this formulation clarifies the spatial-coherence requirement for phase-sensitive imaging with incoherent sources. The theory has been applied to x-ray Talbot interferometric imaging as well. The derived peak-coherence condition reveals new insights into three-grating Talbot-interferometric imaging and grating-based x-ray dark-field imaging.
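The phase-space quantities the abstract refers to have standard definitions in coherence theory. As a sketch (these are the textbook forms, not reproduced from the paper), the cross-spectral density W of the field E at a fixed frequency and its associated Wigner distribution B are:

```latex
% Textbook definitions (not reproduced from the paper): the cross-spectral
% density of a statistically stationary field E, and the Wigner distribution
% obtained from it by a Fourier transform over the separation coordinate s.
% B(x, u) behaves like a phase-space density over position x and spatial
% frequency u.
W(x_1, x_2) = \left\langle E^{*}(x_1)\, E(x_2) \right\rangle

B(x, u) = \int W\!\left(x - \tfrac{s}{2},\; x + \tfrac{s}{2}\right)
          e^{-i 2\pi u s}\, \mathrm{d}s
```

Free-space propagation shears this distribution in phase space, which is the picture behind the "phase-space shearing length" mentioned in the abstract.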
A theoretical formulation of the electrophysiological inverse problem on the sphere
NASA Astrophysics Data System (ADS)
Riera, Jorge J.; Valdés, Pedro A.; Tanabe, Kunio; Kawashima, Ryuta
2006-04-01
The construction of three-dimensional images of the primary current density (PCD) produced by neuronal activity is a problem of great current interest in the neuroimaging community, though it was initially formulated in the 1970s. There are even now enthusiastic debates about the authenticity of most of the inverse solutions proposed in the literature, in which low-resolution electrical tomography (LORETA) is a focus of attention. However, in our opinion, the capabilities and limitations of the electro- and magnetoencephalographic techniques to determine PCD configurations have not been extensively explored from a theoretical framework, even for simple volume-conductor models of the head. In this paper, the electrophysiological inverse problem for the spherical head model is cast in terms of the reproducing kernel Hilbert space (RKHS) formalism, which allows us to identify the null spaces of the implicated linear integral operators and also to define their representers. The PCD is described in terms of a continuous basis for the RKHS, which explicitly separates the harmonic and non-harmonic components. The RKHS concept permits us to bring LORETA into the scope of general smoothing-splines theory. A particular way of calculating the general smoothing splines is illustrated, avoiding a premature brute-force discretization. The Bayes information criterion is used to handle dissimilarities in the signal/noise ratios and physical dimensions of the measurement modalities, which could affect the estimation of the amount of smoothness required for that class of inverse solution to be well specified. In order to validate the proposed method, we estimated the 3D spherical smoothing splines from two data sets: electric potentials obtained from a skull phantom and magnetic fields recorded from subjects performing a human-face recognition experiment.
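The smoothing-splines estimate invoked in the abstract has a standard variational form. As a hedged sketch (generic symbols, not the paper's notation), with data g, forward operator L, and RKHS norm ||·||_H:

```latex
% Generic smoothing-splines functional (standard form, not the paper's
% notation): the estimated PCD minimizes a data-misfit term plus a
% roughness penalty in the RKHS norm, weighted by lambda.
\hat{f} = \arg\min_{f \in H} \; \sum_{i=1}^{n}
          \left( g_i - (L f)(r_i) \right)^2
          + \lambda \, \| f \|_{H}^{2}

% By the representer theorem, the minimizer is a finite combination of the
% representers \eta_i of the data functionals:
\hat{f} = \sum_{i=1}^{n} c_i \, \eta_i
```

In this reading, the abstract's use of the Bayes information criterion corresponds to choosing the smoothing weight λ from the data rather than fixing it a priori.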
NASA Astrophysics Data System (ADS)
Stahle, D.; Griffin, D.; Cleaveland, M.; Fye, F.; Meko, D.; Cayan, D.; Dettinger, M.; Redmond, K.
2007-05-01
A new network of 36 moisture-sensitive tree-ring chronologies has been developed in and near the drainage basins of the Sacramento and San Joaquin Rivers. The network is based entirely on blue oak (Quercus douglasii), a California endemic found from the lower forest border up into the mixed-conifer zone in the Coast Ranges, Sierra Nevada, and Cascades. These blue oak tree-ring chronologies are highly correlated with winter-spring precipitation totals, with Sacramento and San Joaquin streamflow, and with seasonal variations in salinity and null zone position in San Francisco Bay. The null zone is the non-tidal bottom-water location where density-driven salinity and river-driven freshwater currents balance (zero flow). It is the area of highest turbidity, water residence time, sediment accumulation, and net primary productivity in the estuary. Null zone position is measured as the distance from the Golden Gate to the 2 per mil bottom-water isohaline and is primarily controlled by discharge from the Sacramento and San Joaquin Rivers (and ultimately by winter-spring precipitation). The location of the null zone is an estuarine habitat indicator, a policy variable used for ecosystem management, and can have a major impact on biological resources in the San Francisco estuary. Precipitation-sensitive blue oak chronologies can be used to estimate null zone position based on the strong biogeophysical interaction among terrestrial, aquatic, and estuarine ecosystems, orchestrated by precipitation. The null zone reconstruction is 626 years long and provides a unique long-term perspective on the interannual-to-decadal variability of this important estuarine habitat indicator. Consecutive two-year (or longer) droughts allow the null zone to shrink into the confined upper reaches of Suisun Bay, causing a dramatic reduction in phytoplankton production and favoring colonization of the estuary by marine biota.
The reconstruction indicates an approximate 10-year recurrence interval between these consecutive two-year droughts and null zone maxima. Composite analyses of the Palmer drought index over North America indicate that the drought and wetness regimes associated with maxima and minima in reconstructed null zone position are largely restricted to the California sector. Composite analyses of the 20th-century global sea surface temperature (SST) field indicate that wet years over central California with good oak growth, high flows, and a seaward position of the null zone (minima) are associated with warm El Niño conditions and a "Pineapple Express" SST pattern. The composite SST pattern is not as strong during dry years with poor growth, low flows, and a landward position of the null zone (maxima), but the composite warm SST anomaly in the eastern North Pacific during maxima would be consistent with a persistent ridge and drought over western North America.
Behavior of Tachyon in String Cosmology Based on Gauged WZW Model
NASA Astrophysics Data System (ADS)
Lee, Sunggeun; Nam, Soonkeon
We investigate a string-theoretic cosmological model in the context of the gauged Wess-Zumino-Witten model. Our model is based on the product of a non-compact coset space and a spectator flat space, [SL(2, R)/U(1)]k × ℝ2. We extend the previously studied semiclassical treatment with infinite Kac-Moody level k to a finite one. In this case, the tachyon field appears in the effective action, and we solve the Einstein equation to determine the behavior of the tachyon as a function of time. We find that the tachyon field dominates over the dilaton field at early times. In particular, we consider the energy conditions of the matter fields, consisting of the dilaton and the tachyon, which affect the initial singularity. We find that not only the strong energy condition but also the null energy condition is violated.
Vector-averaged gravity does not alter acetylcholine receptor single channel properties
NASA Technical Reports Server (NTRS)
Reitstetter, R.; Gruener, R.
1994-01-01
To examine the physiological sensitivity of membrane receptors to altered gravity, we examined the single-channel properties of the acetylcholine receptor (AChR), in co-cultures of Xenopus myocytes and neurons, under vector-averaged gravity in the clinostat. This experimental paradigm produces an environment in which, from the cell's perspective, the gravitational vector is "nulled" by continuous averaging. In that respect, the clinostat simulates one aspect of space microgravity, where the gravity force is greatly reduced. After clinorotation, the AChR channel mean open time and conductance were statistically indistinguishable from control values but showed a rotation-dependent trend that suggests a process of cellular adaptation to clinorotation. These findings therefore suggest that AChR channel function may not be affected in the microgravity of space despite changes in the receptor's cellular organization.
Liu, Huiling; Xia, Bingbing; Yi, Dehui
2016-01-01
We propose a new feature extraction method for liver pathological images based on multispatial mapping and statistical properties. For liver pathological images with hematoxylin-eosin staining, the R and B channels better reflect the sensitivity of the images, while the entropy space and Local Binary Pattern (LBP) space better reflect their texture features. To obtain more comprehensive information, we map liver pathological images to the entropy space, LBP space, R space, and B space. The traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, so we propose an average-corrected HLAC feature. We calculate the statistical properties and the average gray value of the pathological image and then update each pixel value to the absolute value of the difference between the current pixel gray value and the average gray value, which is more sensitive to gray-value changes in pathological images. Lastly, the HLAC template is used to calculate the features of the updated image. The experimental results show that the improved multispatial-mapping features give better classification performance for liver cancer. PMID:27022407
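The average-correction step and a low-order version of the local autocorrelation features described above can be sketched as follows. This is a simplified illustration using only a few of the 3x3 HLAC displacements, not the authors' implementation or the full mask set.

```python
import numpy as np

def average_correct(img):
    """Average correction from the abstract: replace each pixel with the
    absolute difference between its gray value and the image mean."""
    return np.abs(img - img.mean())

def hlac_low_order(img):
    """0th-order plus a few 1st-order local autocorrelation sums.

    Full HLAC uses a fixed set of 3x3 mask patterns; the four displacements
    below are an illustrative subset, not the complete template set.
    """
    h, w = img.shape
    feats = [float(img.sum())]                # 0th order: sum of f(r)
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        # Overlapping windows so that a[i, j] = f(r + d) and b[i, j] = f(r):
        a = img[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
        b = img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
        feats.append(float((a * b).sum()))    # 1st order: sum of f(r)f(r+d)
    return np.array(feats)

gray = np.arange(9.0).reshape(3, 3)           # toy 3x3 "image"
corrected = average_correct(gray)             # mean is 4 -> |pixel - 4|
print(hlac_low_order(corrected))              # 5 features; first is 20.0
```

The correction makes the 0th-order feature a total-deviation statistic rather than a raw intensity sum, which is the sensitivity gain the abstract claims.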
Development and recent results from the Subaru coronagraphic extreme adaptive optics system
NASA Astrophysics Data System (ADS)
Jovanovic, N.; Guyon, O.; Martinache, F.; Clergeon, C.; Singh, G.; Kudo, T.; Newman, K.; Kuhn, J.; Serabyn, E.; Norris, B.; Tuthill, P.; Stewart, P.; Huby, E.; Perrin, G.; Lacour, S.; Vievard, S.; Murakami, N.; Fumika, O.; Minowa, Y.; Hayano, Y.; White, J.; Lai, O.; Marchis, F.; Duchene, G.; Kotani, T.; Woillez, J.
2014-07-01
The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument is one of a handful of extreme adaptive optics systems set to come online in 2014. The extreme adaptive optics correction is realized by a combination of precise wavefront sensing via a non-modulated pyramid wavefront sensor and a 2000-element deformable mirror. This system has recently begun on-sky commissioning and was operated in closed loop for several minutes at a time with a loop speed of 800 Hz, on ~150 modes. Further suppression of quasi-static speckles is possible via a process called "speckle nulling", which creates a dark hole in a portion of the frame, allowing an enhancement in contrast; this has been successfully tested on-sky. In addition to the wavefront correction, there is a suite of coronagraphs on board to null out the host star, including the phase-induced amplitude apodization (PIAA), the vector vortex, the 8-octant phase mask, the 4-quadrant phase mask, and shaped-pupil versions, which operate in the NIR (y-K bands). The PIAA and vector vortex will allow high-contrast imaging down to an angular separation of 1 λ/D, a factor of 3 closer in than other extreme AO systems. VAMPIRES and FIRST make use of the leftover visible light not used by the wavefront sensor. These modules are based on aperture-masking interferometry and allow sub-diffraction-limited imaging with moderate contrasts of ~100-1000:1. Both modules have undergone initial testing on-sky and are set to be fully commissioned by the end of 2014.
Assessment of facial golden proportions among young Japanese women.
Mizumoto, Yasushi; Deguchi, Toshio; Fong, Kelvin W C
2009-08-01
Facial proportions are of interest in orthodontics. The null hypothesis is that there is no difference in the golden proportions of soft-tissue facial balance between Japanese and white women. Facial proportions were assessed by examining photographs of 3 groups of Japanese women: group 1, 30 young adult patients with a skeletal Class I occlusion; group 2, 30 models; and group 3, 14 popular actresses. Photographic prints or slides were digitized for image analysis. Group 1 subjects had standardized photos taken as part of their treatment. Photos of the subjects in groups 2 and 3 were collected from magazines and other sources and were of varying sizes; therefore, the output image size was not considered. The range of measurement errors was 0.17% to 1.16%. ANOVA was selected because the data set was normally distributed with homogeneous variances. The subjects in the 3 groups showed good total facial proportions. The proportions of the face-height components in group 1 were similar to the golden proportion, indicating a longer lower-facial height and a shorter nose. Group 2 differed from the golden proportion, with a short lower-facial height. Group 3 had golden proportions in all 7 measurements. The proportion of the face width deviated from the golden proportion, indicating a small mouth or wide-set eyes in groups 1 and 2. The null hypothesis was supported for the group 3 actresses in the facial-height components. Some measurements in groups 1 and 2 deviated from the golden proportion (ratio).
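The golden proportion the study measures against is φ = (1 + √5)/2 ≈ 1.618. A tiny illustrative check follows; the 5% tolerance is my assumption for the sketch, not the study's criterion.

```python
# The "golden proportion" the study compares against is phi = (1+sqrt(5))/2.
# Illustrative helper (not from the paper): flag whether a measured facial
# ratio falls within a tolerance of phi. The 5% default is an assumption.
PHI = (1 + 5 ** 0.5) / 2             # ~1.6180339887

def matches_golden(ratio, tol=0.05):
    """True if |ratio - phi| / phi <= tol."""
    return abs(ratio - PHI) / PHI <= tol

print(round(PHI, 4))                 # 1.618
print(matches_golden(1.60))          # True: within 5% of phi
print(matches_golden(1.40))          # False
```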
Manna, Soumen K.; Patterson, Andrew D.; Yang, Qian; Krausz, Kristopher W.; Li, Henghong; Idle, Jeffrey R.; Fornace, Albert J.; Gonzalez, Frank J.
2010-01-01
Alcohol-induced liver disease (ALD) is a leading cause of non-accident-related deaths in the United States. Although liver damage caused by ALD is reversible when discovered at earlier stages, current risk-assessment tools are relatively nonspecific. Identification of an early, specific signature of ALD would aid in therapeutic intervention and recovery. In this study the metabolic changes associated with alcohol-induced liver disease were examined using the alcohol-fed male Ppara-null mouse as a model of ALD. Principal components analysis of the mass spectrometry-based urinary metabolic profile showed that alcohol-treated wild-type and Ppara-null mice could be distinguished from control animals without information on their history of alcohol consumption. The urinary excretion of ethyl sulfate, ethyl-β-D-glucuronide, 4-hydroxyphenylacetic acid, and 4-hydroxyphenylacetic acid sulfate was elevated, and that of 2-hydroxyphenylacetic acid, adipic acid, and pimelic acid was depleted, during alcohol treatment in both wild-type and Ppara-null mice, albeit to different extents. However, indole-3-lactic acid was exclusively elevated by alcohol exposure in Ppara-null mice. The elevation of indole-3-lactic acid is mechanistically related to the molecular events associated with the development of ALD in alcohol-treated Ppara-null mice. This study demonstrates the ability of a metabolomics approach to identify early, noninvasive biomarkers of ALD pathogenesis in the Ppara-null mouse model. PMID:20540569
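The unsupervised group separation described above can be sketched with principal components analysis computed via the singular value decomposition. The data below are synthetic stand-ins for metabolite intensities (my construction, not the study's measurements).

```python
import numpy as np

# PCA-by-SVD sketch of the kind of unsupervised separation the abstract
# describes. All data here are synthetic, not the study's measurements.
rng = np.random.default_rng(0)
controls = rng.normal(0.0, 0.3, size=(10, 50))  # 10 samples x 50 "metabolites"
treated = rng.normal(0.0, 0.3, size=(10, 50))
treated[:, :5] += 2.0        # a few metabolites elevated by "treatment"

X = np.vstack([controls, treated])
Xc = X - X.mean(axis=0)      # center each metabolite column
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S               # sample coordinates on the principal axes

# The treatment shift dominates the variance, so PC1 separates the groups:
pc1 = scores[:, 0]
print(pc1[:10].mean() * pc1[10:].mean() < 0)    # True: opposite-sign groups
```

This mirrors the abstract's point that the groups separate without using the treatment labels: the labels are only consulted afterward to interpret the axes.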