1974-09-07
ellipticity filter. The source waveforms are recreated by an inverse transform of those complex amplitudes associated with the same azimuth... terms of the three complex data points and the ellipticity. Having solved the equations for all frequency bins, the inverse transform of... transform of those complex amplitudes associated with Source 1, yielding the signal a(t). Similarly, take the inverse transform of all
NASA Astrophysics Data System (ADS)
Schäfer, M.; Groos, L.; Forbriger, T.; Bohlen, T.
2014-09-01
Full-waveform inversion (FWI) of shallow-seismic surface waves is able to reconstruct lateral variations of subsurface elastic properties. Line-source simulation for point-source data is required when applying algorithms of 2-D adjoint FWI to recorded shallow-seismic field data. The equivalent line-source response for point-source data can be obtained by convolving the waveforms with 1/√t (t: traveltime), which produces a phase shift of π/4. Subsequently an amplitude correction must be applied. In this work we recommend scaling the seismograms with √(2 r v_ph) at small receiver offsets r, where v_ph is the phase velocity, and gradually shifting to a 1/√t time-domain taper combined with a scaling of r√2 at larger receiver offsets. We call this the hybrid transformation, which is adapted to direct body and Rayleigh waves, and demonstrate its outstanding performance on a 2-D heterogeneous structure. The fit of the phases as well as the amplitudes for all shot locations and components (vertical and radial) is excellent with respect to the reference line-source data. An approach for 1-D media based on the Fourier-Bessel integral transformation generates strong artefacts for waves produced by 2-D structures. The theoretical background for both approaches is presented in a companion contribution. In the current contribution we study their performance when applied to waves propagating in a significantly 2-D-heterogeneous structure. We calculate synthetic seismograms for a 2-D structure for line sources as well as point sources. Line-source simulations obtained from the point-source seismograms through different approaches are then compared to the corresponding line-source reference waveforms. Although derived by approximation, the hybrid transformation performs excellently except for explicitly back-scattered waves. In reconstruction tests we further invert point-source synthetic seismograms by a 2-D FWI for subsurface structure and evaluate its ability to reproduce the original structural model in comparison to the inversion of line-source synthetic data. Even when no explicit correction is applied to the point-source waveforms prior to inversion, only moderate artefacts appear in the results. However, the overall performance, in terms of model reproduction and the ability to reproduce the original data of a 3-D simulation, is best when the inverted waveforms are obtained by the hybrid transformation.
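The two amplitude-correction branches described above can be sketched in a few lines of NumPy. This is only an illustrative reading of the abstract, not the authors' code: the crossover offset, the hard switch between branches (the paper shifts gradually), and the assumption that the π/4 phase shift has already been applied by the 1/√t convolution are all placeholders.

```python
import numpy as np

def hybrid_line_source_correction(u, t, r, v_ph, r_cross=20.0):
    """Approximate point-source to line-source amplitude correction (sketch).

    u       : 1-D array, point-source waveform, assumed already convolved with
              1/sqrt(t) so that the pi/4 phase shift has been applied
    t       : traveltimes (s) of the samples, t > 0
    r       : receiver offset (m)
    v_ph    : phase velocity (m/s) used for the near-offset scaling
    r_cross : offset (m) at which we switch branches -- a placeholder value;
              the paper blends gradually instead of switching abruptly
    """
    if r < r_cross:
        # near offsets: single scale factor sqrt(2 * r * v_ph)
        return np.sqrt(2.0 * r * v_ph) * u
    # larger offsets: 1/sqrt(t) time-domain taper plus r*sqrt(2) scaling
    return r * np.sqrt(2.0) * u / np.sqrt(t)
```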
Improving the local wavenumber method by automatic DEXP transformation
NASA Astrophysics Data System (ADS)
Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni
2014-12-01
In this paper we present a new method for source parameter estimation, based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to high-order local wavenumbers, as it overcomes their known instability caused by the use of high-order derivatives. The DEXP transformation has an important property when applied to the local wavenumber function: its scaling law is independent of the structural index. Thus, unlike the DEXP transformation applied directly to potential fields, the local wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be treated easily and correctly. The method was applied to synthetic and real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.
Yi, Qitao; Chen, Qiuwen; Hu, Liuming; Shi, Wenqing
2017-05-16
This research developed an innovative approach to reveal nitrogen sources, transformation, and transport in the large and complex river networks of the Taihu Lake basin using measurements of the dual stable isotopes of nitrate. The spatial patterns of δ15N corresponded to the urbanization level, and the nitrogen cycle was associated with the hydrological regime at the basin level. During the high-flow season of summer, nonpoint sources from fertilizer/soils and atmospheric deposition constituted the highest proportion of the total nitrogen load. The point sources from sewage/manure, with high ammonium concentrations and high δ15N and δ18O contents in the form of nitrate, accounted for the largest inputs among all sources during the low-flow season of winter. Hot-spot areas with heavy point-source pollution were identified, and the pollutant transport routes were revealed. Nitrification occurred widely during the warm seasons, with decreased δ18O values, whereas great potential for denitrification existed during the low-flow seasons of autumn and spring. The study showed that point-source reduction could have effects over the short term; however, long-term efforts to substantially control agricultural nonpoint sources are essential to alleviating eutrophication in the receiving lake. These findings clarify the relationship between point- and nonpoint-source control.
NASA Astrophysics Data System (ADS)
Yu, Le; Zhang, Dengrong; Holden, Eun-Jung
2008-07-01
Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with the varying illuminations and resolutions of the images, different perspectives, and local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting matching points using the scale-invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points detected by the Harris corner detector. The registration process first finds, in succession, tie-point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast searching. Tie-point pairs with large errors are pruned by an error-checking step. The input image is then rectified using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4, and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
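The coarse SIFT-plus-affine stage can be sketched with OpenCV. This is a generic illustration of that step, not the authors' implementation: the ratio-test threshold, matcher choice, and use of RANSAC are assumptions, and the fine piecewise-linear TIN stage is omitted.

```python
import cv2
import numpy as np

def coarse_register(input_img, reference_img):
    """Coarsely align input_img to reference_img via SIFT matches + affine model."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(input_img, None)
    k2, d2 = sift.detectAndCompute(reference_img, None)

    # ratio-test matching of descriptors (0.75 is a common heuristic, assumed here)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(d1, d2, k=2)
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    src = np.float32([k1[m.queryIdx].pt for m in good])
    dst = np.float32([k2[m.trainIdx].pt for m in good])

    # robustly estimate a 2-D affine transform from the matched points
    A, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = reference_img.shape[:2]
    return cv2.warpAffine(input_img, A, (w, h))
```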
Solution of the weighted symmetric similarity transformations based on quaternions
NASA Astrophysics Data System (ADS)
Mercan, H.; Akyilmaz, O.; Aydin, C.
2017-12-01
A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, in the frame of the errors-in-variables (EIV) model. The EIV model assumes that all variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both the target and the source points may have full covariance matrices. Therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters (scale, translations, and the quaternion, and hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimate of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate the 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zero (0) values assigned to the z-components of both target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed and the conventional weighted LS (WLS) methods are also presented.
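For orientation, here is a minimal NumPy sketch of the classical, unweighted quaternion-based least-squares similarity solution (a Horn-style closed form) that the weighted Gauss-Helmert/EIV algorithm above generalizes. Weighting, covariance propagation, and the EIV iteration are deliberately omitted, and all symbols are illustrative.

```python
import numpy as np

def similarity_from_quaternion(src, dst):
    """Closed-form LS similarity dst ~ s * R(q) @ src + t for matched 3-D point sets."""
    cs, cd = src.mean(0), dst.mean(0)
    A, B = src - cs, dst - cd

    # symmetric 4x4 matrix whose dominant eigenvector is the optimal quaternion
    S = A.T @ B
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],        S[2,0]-S[0,2],        S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0],        S[2,0]+S[0,2]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],        S[1,1]-S[0,0]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[2,0]+S[0,2],        S[1,2]+S[2,1],        S[2,2]-S[0,0]-S[1,1]],
    ])
    w, V = np.linalg.eigh(N)
    qw, qx, qy, qz = V[:, -1]            # quaternion (w, x, y, z) of the rotation
    R = np.array([
        [1-2*(qy*qy+qz*qz), 2*(qx*qy-qz*qw),   2*(qx*qz+qy*qw)],
        [2*(qx*qy+qz*qw),   1-2*(qx*qx+qz*qz), 2*(qy*qz-qx*qw)],
        [2*(qx*qz-qy*qw),   2*(qy*qz+qx*qw),   1-2*(qx*qx+qy*qy)],
    ])
    s = np.trace(R @ S) / np.trace(A.T @ A)   # least-squares scale
    t = cd - s * R @ cs
    return s, R, t
```

As noted in the abstract, a 2D problem can be handled by the same routine by padding both point sets with zero z-components.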
Sediment delivery to the Gulf of Alaska: source mechanisms along a glaciated transform margin
Dobson, M.R.; O'Leary, D.; Veart, M.
1998-01-01
Sediment delivery to the Gulf of Alaska occurs via four areally extensive deep-water fans, sourced from grounded tidewater glaciers. During periods of climatic cooling, glaciers cross a narrow shelf and discharge sediment down the continental slope. Because the coastal terrain is dominated by fjords and a narrow, high-relief Pacific watershed, deposition is dominated by channellized point-source fan accumulations, the volumes of which are primarily a function of climate. The sediment distribution is modified by a long-term tectonic translation of the Pacific plate to the north along the transform margin. As a result, the deep-water fans are gradually moved away from the climatically controlled point sources. Sets of abandoned channels record the effect of translation during the Plio-Pleistocene.
A weighted adjustment of a similarity transformation between two point sets containing errors
NASA Astrophysics Data System (ADS)
Marx, C.
2017-10-01
For an adjustment of a similarity transformation, it is often appropriate to consider that both the source and the target coordinates of the transformation are affected by errors. For the least squares adjustment of this problem, a direct solution is possible in the case of specific weighting schemes for the coordinates. Such a problem is considered in the present contribution, and a direct solution is derived in general form for the m-dimensional space. The applied weighting scheme allows (fully populated) point-wise weight matrices for the source and target coordinates; the two weight matrices have to be proportional to each other. Additionally, the solutions of two borderline cases of this weighting scheme are derived, which consider errors only in the source or only in the target coordinates. The investigated solution for the rotation matrix of the adjustment is independent of the scaling between the weight matrices of the source and the target coordinates. The mentioned borderline cases therefore have the same solution for the rotation matrix. The direct solution method is successfully tested on an example of a 3D similarity transformation using a comparison with an iterative solution based on the Gauß-Helmert model.
Illusion induced overlapped optics.
Zang, XiaoFei; Shi, Cheng; Li, Zhou; Chen, Lin; Cai, Bin; Zhu, YiMing; Zhu, HaiBin
2014-01-13
The traditional transformation-based cloak seems to be able only to hide objects by bending the incident electromagnetic waves around the hidden region. In this paper, we prove that invisible cloaks can be applied to realize overlapped optics. No matter how many in-phase point sources are located in the hidden region, all of them can overlap each other (this can be considered an illusion effect), leading to a perfect optical interference effect. In addition, a singular-parameter-independent cloak is also designed to obtain quasi-overlapped optics. Even more striking is that if N identical, separated, in-phase point sources are covered with the illusion media, the total power outside the transformation region is N²I₀ rather than NI₀ (I₀ is the power of a single point source and N is the number of point sources), which seems to violate the law of conservation of energy. A theoretical model based on the interference effect is proposed to interpret the total power of these two kinds of overlapped-optics effects. Our investigation may have wide applications in high-power coherent laser beams, multiple laser diodes, and so on.
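The N² scaling follows from the standard coherent-superposition argument (a one-line check consistent with the abstract, with E₀ and I₀ denoting the field and power of a single source): because the illusion media make the sources effectively coincident and in phase, their fields add before squaring,

```latex
E_{\mathrm{tot}} = \sum_{n=1}^{N} E_{0}\,e^{i\phi_{0}} = N E_{0}\,e^{i\phi_{0}},
\qquad
I_{\mathrm{tot}} \propto \left|E_{\mathrm{tot}}\right|^{2} = N^{2}\,|E_{0}|^{2} = N^{2} I_{0},
```

whereas N mutually incoherent (or well-separated) sources would simply add intensities, giving N I₀.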
Occurrence of Surface Water Contaminations: An Overview
NASA Astrophysics Data System (ADS)
Shahabudin, M. M.; Musa, S.
2018-04-01
Water is a part of our life and is needed by all organisms. Over time, growing human demands have degraded water quality. Surface water is contaminated in various ways, through point sources and non-point sources. A point source is a discharge that can be traced to a distinct origin, such as a drain or a factory, whereas non-point pollution always occurs as a mixture of pollutant elements. This paper reviews the occurrence of such contamination and the effects observed around us. Pollutants from natural or anthropogenic factors, such as nutrients, pathogens, and chemical elements, contribute to the contamination. Most of the effects of contaminated surface water relate to public health as well as to the environment.
Convex Hull Aided Registration Method (CHARM).
Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian
2017-09-01
Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
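The final thin-plate spline step can be sketched with SciPy's radial-basis interpolator. This is a generic TPS warp for illustration, not the authors' implementation; the control points and their correspondences are assumed to come from the earlier convex-hull matching and RANSAC stages.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(source_pts, ctrl_src, ctrl_dst):
    """Deform the whole source point set with a thin-plate spline defined by
    matched control points (ctrl_src -> ctrl_dst)."""
    warp = RBFInterpolator(ctrl_src, ctrl_dst, kernel='thin_plate_spline', smoothing=0.0)
    return warp(source_pts)

# toy usage with random 2-D data standing in for the matched feature points
rng = np.random.default_rng(0)
ctrl_src = rng.uniform(0, 1, (20, 2))
ctrl_dst = ctrl_src + 0.05 * rng.standard_normal((20, 2))   # slightly deformed targets
cloud = rng.uniform(0, 1, (500, 2))
warped = tps_warp(cloud, ctrl_src, ctrl_dst)
```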
Correcting the extended-source calibration for the Herschel-SPIRE Fourier-transform spectrometer
NASA Astrophysics Data System (ADS)
Valtchanov, I.; Hopwood, R.; Bendo, G.; Benson, C.; Conversi, L.; Fulton, T.; Griffin, M. J.; Joubaud, T.; Lim, T.; Lu, N.; Marchili, N.; Makiwa, G.; Meyer, R. A.; Naylor, D. A.; North, C.; Papageorgiou, A.; Pearson, C.; Polehampton, E. T.; Scott, J.; Schulz, B.; Spencer, L. D.; van der Wiel, M. H. D.; Wu, R.
2018-03-01
We describe an update to the Herschel-Spectral and Photometric Imaging Receiver (SPIRE) Fourier-transform spectrometer (FTS) calibration for extended sources, which incorporates a correction for the frequency-dependent far-field feedhorn efficiency, ηff. This significant correction affects all FTS extended-source calibrated spectra in sparse or mapping mode, regardless of the spectral resolution. Line fluxes and continuum levels are underestimated by factors of 1.3-2 in the spectrometer long wavelength band (447-1018 GHz; 671-294 μm) and 1.4-1.5 in the spectrometer short wavelength band (944-1568 GHz; 318-191 μm). The correction was implemented in the FTS pipeline version 14.1 and has also been described in the SPIRE Handbook since 2017 February. Studies based on extended-source calibrated spectra produced prior to this pipeline version should be critically reconsidered using the current products available in the Herschel Science Archive. Once the extended-source calibrated spectra are corrected for ηff, the synthetic photometry and the broad-band intensities from SPIRE photometer maps agree within 2-4 per cent, similar to the level of agreement between point-source calibrated spectra and photometry from point-source calibrated maps. The two calibration schemes for the FTS are now self-consistent: the conversion between the corrected extended-source and point-source calibrated spectra can be achieved with the beam solid angle and a gain correction that accounts for the diffraction loss.
Poisson denoising on the sphere: application to the Fermi gamma ray space telescope
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.
2010-07-01
The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high-energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere which is efficient on low-count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary, such as wavelets or curvelets, and then applying a VST to the coefficients in order to get almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is therefore proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask. The gaps have to be interpolated: an extension to inpainting is therefore proposed. The method, applied to simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
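MS-VSTS itself couples the VST with spherical wavelets; as a minimal illustration of the variance-stabilization idea alone, the classical Anscombe transform (a standard VST, not the MS-VSTS operator) maps Poisson counts to values of approximately unit variance and near-Gaussian distribution:

```python
import numpy as np

def anscombe(x):
    # classical VST for Poisson data: output variance ~ 1 for large-enough counts
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(1)
for lam in (2, 10, 50):
    counts = rng.poisson(lam, size=200_000)
    # raw variance is ~ lam, stabilized variance is close to 1
    print(lam, counts.var(), anscombe(counts).var())
```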
Launching and controlling Gaussian beams from point sources via planar transformation media
NASA Astrophysics Data System (ADS)
Odabasi, Hayrettin; Sainath, Kamalesh; Teixeira, Fernando L.
2018-02-01
Based on operations prescribed under the paradigm of complex transformation optics (CTO) [F. Teixeira and W. Chew, J. Electromagn. Waves Appl. 13, 665 (1999), 10.1163/156939399X01104; F. L. Teixeira and W. C. Chew, Int. J. Numer. Model. 13, 441 (2000), 10.1002/1099-1204(200009/10)13:5%3C441::AID-JNM376%3E3.0.CO;2-J; H. Odabasi, F. L. Teixeira, and W. C. Chew, J. Opt. Soc. Am. B 28, 1317 (2011), 10.1364/JOSAB.28.001317; B.-I. Popa and S. A. Cummer, Phys. Rev. A 84, 063837 (2011), 10.1103/PhysRevA.84.063837], it was recently shown in [G. Castaldi, S. Savoia, V. Galdi, A. Alù, and N. Engheta, Phys. Rev. Lett. 110, 173901 (2013), 10.1103/PhysRevLett.110.173901] that a complex source point (CSP) can be mimicked by parity-time (PT) transformation media. Such a coordinate transformation has mirror symmetry in its imaginary part and results in a balanced loss/gain metamaterial slab. A CSP produces a Gaussian beam and, consequently, a point source placed at the center of such a metamaterial slab produces a Gaussian beam propagating away from the slab. Here, we extend the CTO analysis to nonsymmetric complex coordinate transformations, as put forth in [S. Savoia, G. Castaldi, and V. Galdi, J. Opt. 18, 044027 (2016), 10.1088/2040-8978/18/4/044027], and verify that, by using simply a (homogeneous) doubly anisotropic gain-media metamaterial slab, one can still mimic a CSP and produce a Gaussian beam. In addition, we show that Gaussian-like beams can be produced by point sources placed outside the slab as well. By making use of the extra degrees of freedom (the real and imaginary parts of the coordinate transformation) provided by CTO, the near-zero requirement on the real part of the resulting constitutive parameters can be relaxed to facilitate potential realization of Gaussian-like beams. We illustrate how beam properties such as peak amplitude and waist location can be controlled by a proper choice of (complex-valued) CTO Jacobian elements. In particular, the beam waist location may be moved bidirectionally by allowing for negative entries in the Jacobian (equivalent to inducing negative refraction effects). These results are then interpreted in light of the ensuing CSP location.
The Unicellular State as a Point Source in a Quantum Biological System
Torday, John S.; Miller, William B.
2016-01-01
A point source is the central and most important point or place for any group of cohering phenomena. Evolutionary development presumes that biological processes are sequentially linked, but neither directed from, nor centralized within, any specific biologic structure or stage. However, such an epigenomic entity exists and its transforming effects can be understood through the obligatory recapitulation of all eukaryotic lifeforms through a zygotic unicellular phase. This requisite biological conjunction can now be properly assessed as the focal point of reconciliation between biology and quantum phenomena, illustrated by deconvoluting complex physiologic traits back to their unicellular origins. PMID:27240413
Beam steering performance of compressed Luneburg lens based on transformation optics
NASA Astrophysics Data System (ADS)
Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun
2018-06-01
In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.
Integer cosine transform for image compression
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Pollara, F.; Shahshahani, M.
1991-01-01
This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
NASA Astrophysics Data System (ADS)
D'Astous, Y.; Blanchard, M.
1982-05-01
In past years, the Journal has published a number of articles [1-5] devoted to the introduction of Fourier transform spectroscopy in the undergraduate labs. In most papers, the proposed experimental setup consists of a Michelson interferometer, a light source, a light detector, and a chart recorder. The student uses this setup to record an interferogram which is then Fourier transformed to obtain the spectrogram of the light source. Although attempts have been made to ease the task of performing the required Fourier transform [6], the use of computers and Cooley-Tukey's fast Fourier transform (FFT) algorithm [7] is by far the simplest method to use. However, to be able to use the FFT, one has to acquire a number of samples of the interferogram, a tedious job which should be kept to a minimum. (AIP)
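The interferogram-to-spectrum step itself is a one-line FFT once the samples exist. A hedged NumPy sketch with a synthetic two-line source (the sampling step, line positions, and Hann taper are illustration values, not the lab setup of the article):

```python
import numpy as np

dx = 0.2e-6                                 # optical path difference step (m)
x = np.arange(4096) * dx
sigma = np.array([1.0e6, 1.1e6])            # wavenumbers of two spectral lines (1/m)
interferogram = np.cos(2 * np.pi * np.outer(x, sigma)).sum(axis=1)

# taper to reduce leakage, then transform to the spectral domain
spectrum = np.abs(np.fft.rfft(interferogram * np.hanning(x.size)))
wavenumber = np.fft.rfftfreq(x.size, d=dx)  # spectral axis in 1/m
peak = wavenumber[np.argmax(spectrum)]      # lands near 1.0e6-1.1e6 1/m
```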
Perspex machine: V. Compilation of C programs
NASA Astrophysics Data System (ADS)
Spanner, Matthew P.; Anderson, James A. D. W.
2006-01-01
The perspex machine arose from the unification of the Turing machine with projective geometry. The original, constructive proof used four special, perspective transformations to implement the Turing machine in projective geometry. These four transformations are now generalised and applied in a compiler, implemented in Pop11, that converts a subset of the C programming language into perspexes. This is interesting both from a geometrical and a computational point of view. Geometrically, it is interesting that program source can be converted automatically to a sequence of perspective transformations and conditional jumps, though we find that the product of homogeneous transformations with normalisation can be non-associative. Computationally, it is interesting that program source can be compiled for a Reduced Instruction Set Computer (RISC), the perspex machine, that is a Single Instruction, Zero Exception (SIZE) computer.
NASA Astrophysics Data System (ADS)
Borowiec, N.
2013-12-01
Gathering information about the roof shapes of buildings is still a current issue. One of the many sources from which we can obtain information about buildings is airborne laser scanning. However, automatically extracting information about building roofs from a point cloud is still a complex task. The task can be performed with the help of additional information from other sources, or based on lidar data alone. This article describes how to detect building roofs from a point cloud only. Defining the shape of the roof is carried out in three steps. The first step is to find the location of the building, the second is the precise definition of the edges, and the third is the identification of the roof planes. The first step is based on grid analysis, and the next two steps are based on the Hough transformation. The Hough transformation is a method of detecting collinear points, so it is a perfect match for determining the lines that describe a roof. To properly determine the shape of the roof, the edges alone are not enough; it is also necessary to identify the roof planes. Thus, in this study the Hough transform also served as a tool for the detection of roof planes; the only difference is that the tool used in this case is three-dimensional.
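A minimal 2-D Hough accumulator for collinear points illustrates the voting idea; the 3-D, plane-detecting variant used in the study follows the same principle with one extra parameter. Grid sizes and the toy data are placeholders.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200):
    """Vote (x, y) points into a (theta, rho) accumulator; peaks correspond to lines."""
    x, y = points[:, 0], points[:, 1]
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(x, y).max()
    rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)

    acc = np.zeros((n_theta, n_rho), dtype=int)
    for i, th in enumerate(thetas):
        rho = x * np.cos(th) + y * np.sin(th)         # normal form of a line
        hist, _ = np.histogram(rho, bins=rho_edges)
        acc[i] += hist

    i, j = np.unravel_index(acc.argmax(), acc.shape)  # strongest line
    return thetas[i], 0.5 * (rho_edges[j] + rho_edges[j + 1])

# toy usage: noisy points along y = 0.5*x + 1
xs = np.linspace(0, 10, 100)
pts = np.column_stack([xs, 0.5 * xs + 1 + 0.01 * np.random.randn(100)])
theta, rho = hough_lines(pts)
```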
Fourier removal of stripe artifacts in IRAS images
NASA Technical Reports Server (NTRS)
Van Buren, Dave
1987-01-01
By working in the Fourier plane, approximate removal of stripe artifacts in IRAS images can be effected. The image of interest is smoothed and subtracted from the original, giving the high-spatial-frequency part. This 'filtered' image is then clipped to remove point sources and then Fourier transformed. Subtracting the Fourier components contributing to the stripes in this image from the Fourier transform of the original and transforming back to the image plane yields substantial removal of the stripes.
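A simplified NumPy sketch of the Fourier-plane idea: stripes oriented along one image axis concentrate their power in a narrow band of the 2-D transform, which can be suppressed before transforming back. The paper estimates the stripe components from the smoothed-and-clipped image rather than using the fixed notch mask assumed here, and the width parameters are placeholders.

```python
import numpy as np

def destripe(image, width=1, keep_dc=3):
    """Remove horizontal stripes by notching a narrow band of the 2-D FFT.

    Horizontal stripes vary only along the row (y) direction, so their power
    lies on the ky axis (kx = 0) of the Fourier plane.  `width` and `keep_dc`
    are illustration values, not the IRAS-specific choices of the paper.
    """
    F = np.fft.fftshift(np.fft.fft2(image))
    cy, cx = np.array(F.shape) // 2

    mask = np.ones(F.shape)
    mask[:, cx - width:cx + width + 1] = 0.0          # notch the stripe column
    mask[cy - keep_dc:cy + keep_dc + 1,
         cx - keep_dc:cx + keep_dc + 1] = 1.0         # keep the low-frequency core
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```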
Method to eliminate flux linkage DC component in load transformer for static transfer switch.
He, Yu; Mao, Chengxiong; Lu, Jiming; Wang, Dan; Tian, Bing
2014-01-01
Many industrial and commercial sensitive loads are subject to the voltage sags and interruptions. The static transfer switch (STS) based on the thyristors is applied to improve the power quality and reliability. However, the transfer will result in severe inrush current in the load transformer, because of the DC component in the magnetic flux generated in the transfer process. The inrush current which is always 2 ~ 30 p.u. can cause the disoperation of relay protective devices and bring potential damage to the transformer. The way to eliminate the DC component is to transfer the related phases when the residual flux linkage of the load transformer and the prospective flux linkage of the alternate source are equal. This paper analyzes how the flux linkage of each winding in the load transformer changes in the transfer process. Based on the residual flux linkage when the preferred source is completely disconnected, the method to calculate the proper time point to close each phase of the alternate source is developed. Simulation and laboratory experiments results are presented to show the effectiveness of the transfer method.
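In symbols (notation assumed for illustration, not taken from the paper): each winding's flux linkage is the time integral of its applied voltage, so the transfer avoids a DC flux offset, and hence inrush, only when the residual flux linkage of the load transformer equals the prospective flux linkage of the alternate source at the closing instant,

```latex
\lambda(t) = \lambda(t_0) + \int_{t_0}^{t} v(\tau)\,d\tau ,
\qquad
\Delta\lambda_{\mathrm{DC}} = \lambda_{\mathrm{res}} - \lambda_{\mathrm{alt}}(t_{\mathrm{close}}) = 0 .
```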
NASA Astrophysics Data System (ADS)
Kelly, Brandon C.; Hughes, Philip A.; Aller, Hugh D.; Aller, Margo F.
2003-07-01
We introduce an algorithm for applying a cross-wavelet transform to analysis of quasi-periodic variations in a time series and introduce significance tests for the technique. We apply a continuous wavelet transform and the cross-wavelet algorithm to the Pearson-Readhead VLBI survey sources using data obtained from the University of Michigan 26 m paraboloid at observing frequencies of 14.5, 8.0, and 4.8 GHz. Thirty of the 62 sources were chosen to have sufficient data for analysis, having at least 100 data points for a given time series. Of these 30 sources, a little more than half exhibited evidence for quasi-periodic behavior in at least one observing frequency, with a mean characteristic period of 2.4 yr and standard deviation of 1.3 yr. We find that out of the 30 sources, there were about four timescales for every 10 time series, and about half of those sources showing quasi-periodic behavior repeated the behavior in at least one other observing frequency.
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background: Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. It is therefore an important issue to improve image registration based on RPM. Methods: In our work, a consistent image registration approach based on point-set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source point set and the target point set, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into the point matching in order to improve image matching. Results: Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated. Again, our algorithm achieves lower registration errors for the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For the registration of lung slices and individual brain slices, large or small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions: The results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors of the forward and the reverse transformations between two images. PMID:25559889
A Direction Finding Method with A 3-D Array Based on Aperture Synthesis
NASA Astrophysics Data System (ADS)
Li, Shiwen; Chen, Liangbing; Gao, Zhaozhao; Ma, Wenfeng
2018-01-01
Direction finding for electronic warfare applications should provide as wide a field of view as possible, but the maximum unambiguous field of view of conventional direction finding methods is a hemisphere: they cannot distinguish the direction of arrival of signals coming from the back lobe of the array. In this paper, a full 3-D direction finding method based on aperture synthesis radiometry is proposed. The model of the direction finding system is illustrated, and the fundamentals are presented. The relationship between the outputs of the measurements of a 3-D array and the 3-D power distribution of the point sources can be represented by a 3-D Fourier transform, so the 3-D power distribution of the point sources can be reconstructed by an inverse 3-D Fourier transform. In order to display the 3-D power distribution of the point sources conveniently, the whole spherical distribution is represented by two 2-D circular distribution images, one for the upper hemisphere and the other for the lower hemisphere. A numerical simulation is designed and conducted to demonstrate the feasibility of the method. The results show that the method can correctly estimate an arbitrary direction of arrival of signals in 3-D space.
Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.
1981-01-01
To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat-earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1-deg-averaged surface free-air gravity anomalies or POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
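A toy NumPy sketch of the core equivalent-source step: point masses fitted to gravity observations by least squares, after which the same kind of design matrix evaluated at new locations yields continuations and other linear transformations. The monopole kernel, sign convention, and absence of weighting or regularization are simplifications for illustration, not the paper's formulation.

```python
import numpy as np

G = 6.674e-11   # gravitational constant (SI)

def design_matrix(obs_xyz, src_xyz):
    """Radial attraction per unit mass at each observation point from each point source."""
    d = obs_xyz[:, None, :] - src_xyz[None, :, :]          # (n_obs, n_src, 3)
    r = np.linalg.norm(d, axis=2)
    r_hat = obs_xyz / np.linalg.norm(obs_xyz, axis=1, keepdims=True)
    return -G * np.einsum('ijk,ik->ij', d, r_hat) / r**3

def equivalent_sources(obs_xyz, anomalies, src_xyz):
    A = design_matrix(obs_xyz, src_xyz)
    m, *_ = np.linalg.lstsq(A, anomalies, rcond=None)      # least-squares point masses
    return m   # reuse design matrices at new points for continuation, derivatives, etc.
```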
A program for handling map projections of small-scale geospatial raster data
Finn, Michael P.; Steinwand, Daniel R.; Trent, Jason R.; Buehler, Robert A.; Mattli, David M.; Yamamoto, Kristina H.
2012-01-01
Scientists routinely accomplish small-scale geospatial modeling using raster datasets of global extent. Such use often requires the projection of global raster datasets onto a map or the reprojection from a given map projection associated with a dataset. The distortion characteristics of these projection transformations can have significant effects on modeling results. Distortions associated with the reprojection of global data are generally greater than distortions associated with reprojections of larger-scale, localized areas. The accuracy of areas in projected raster datasets of global extent is dependent on spatial resolution. To address these problems of projection and the associated resampling that accompanies it, methods for framing the transformation space, direct point-to-point transformations rather than gridded transformation spaces, a solution to the wrap-around problem, and an approach to alternative resampling methods are presented. The implementations of these methods are provided in an open-source software package called MapImage (or mapIMG, for short), which is designed to function on a variety of computer architectures.
Bracken, Robert E.
2004-01-01
A subroutine (FFTDC2) coded in Fortran 77 is described, which performs a Fast Fourier Transform or Discrete Fourier Transform together with necessary conditioning steps of trend removal, extension, and windowing. The source code for the entire library of required subroutines is provided with the digital release of this report. But, there is only one required entry point, the subroutine call to FFTDC2; all the other subroutines are operationally transparent to the user. Complete instructions for use of FFTDC2.F (as well as for all the other subroutines) and some practical theoretical discussions are included as comments at the beginning of the source code. This subroutine is intended to be an efficient tool for the programmer in a variety of production-level signal-processing applications.
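For readers working outside Fortran, the same conditioning chain can be sketched in Python. This is an illustration of the steps the report describes (trend removal, extension, windowing, transform), not a port of FFTDC2; the window choice and zero-padding-as-extension are assumptions.

```python
import numpy as np
from scipy.signal import detrend, windows

def conditioned_fft(x, dt):
    """Detrend, taper, extend, and transform a real series (FFTDC2-style sketch)."""
    y = detrend(x, type='linear')             # remove the linear trend
    y = y * windows.hann(y.size)              # taper the ends to reduce leakage
    n = int(2 ** np.ceil(np.log2(y.size)))    # extend (zero-pad) to a power of two
    spectrum = np.fft.rfft(y, n=n)
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, spectrum
```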
Qualitative and semiquantitative Fourier transformation using a noncoherent system.
Rogers, G L
1979-09-15
A number of authors have pointed out that a system of zone plates combined with a diffuse source, transparent input, lens, and focusing screen will display on the output screen the Fourier transform of the input. Strictly speaking, the transform normally displayed is the cosine transform, and the bipolar output is superimposed on a dc gray level to give a positive-only intensity variation. By phase-shifting one zone plate the sine transform is obtained. Temporal modulation is possible. It is also possible to redesign the system to accept a diffusely reflecting input at the cost of introducing a phase gradient in the output. Results are given of the sine and cosine transforms of a small circular aperture. As expected, the sine transform is a uniform gray. Both transforms show unwanted artifacts beyond 0.1 rad off-axis. An analysis shows this is due to unwanted circularly symmetrical moire patterns between the zone plates.
NASA Astrophysics Data System (ADS)
Saracco, Ginette; Labazuy, Philippe; Moreau, Frédérique
2004-06-01
This study concerns the fluid flow circulation associated with magmatic intrusion during volcanic eruptions, investigated through electrical tomography. The objective is to localize and characterize the sources responsible for electrical disturbances during a time-evolution survey, between 1993 and 1999, of an active volcano, Piton de la Fournaise. We have applied a dipolar probability tomography and a multi-scale analysis to synthetic and experimental SP data. We show the advantage of the complex continuous wavelet transform, which allows directional information to be obtained from the phase without a priori information on the sources. In both cases, we point out a translation of potential sources through the upper depths, around specific faults or structural features, during the periods preceding a volcanic eruption. The set of parameters obtained (vertical and horizontal localization, multipolar degree and inclination) could be taken into account as criteria to define volcanic precursors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohd, Shukri; Holford, Karen M.; Pullin, Rhys
2014-02-12
Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter, with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and deltaT location. The results of the study show that the WTML method produces more accurate location results compared with the TOA and triple-point filtering location methods. The accuracy of the WTML approach is comparable with that of the deltaT location method but requires no initial acoustic calibration of the structure.
NASA Astrophysics Data System (ADS)
Kuhlman, K. L.; Neuman, S. P.
2006-12-01
Furman and Neuman (2003) proposed a Laplace Transform Analytic Element Method (LT-AEM) for transient groundwater flow. LT-AEM applies the traditionally steady-state AEM to the Laplace-transformed groundwater flow equation and back-transforms the resulting solution to the time domain using a Fourier-series numerical inverse Laplace transform method (de Hoog et al., 1982). We have extended the method so it can compute hydraulic head and flow velocity distributions due to any two-dimensional combination and arrangement of point, line, circular and elliptical area sinks and sources, nested circular or elliptical regions having different hydraulic properties, and areas of specified head, flux or initial condition. The strengths of all sinks and sources, and the specified head and flux values, can all vary in both space and time in an independent and arbitrary fashion. Initial conditions may vary from one area element to another. A solution is obtained by matching heads and normal fluxes along the boundary of each element. The effect which each element has on the total flow is expressed in terms of generalized Fourier series which converge rapidly (<20 terms) in most cases. As there are more matching points than unknown Fourier terms, the matching is accomplished in Laplace space using least squares. The method is illustrated by calculating the resulting transient head and flow velocities due to an arrangement of elements in both finite and infinite domains. The 2D LT-AEM elements already developed and implemented are currently being extended to solve the 3D groundwater flow equation.
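The LT-AEM elements themselves are beyond a short sketch, but the solve-in-Laplace-space-then-invert workflow can be illustrated for the simplest element, a continuous 2-D point (well) source in an infinite aquifer, whose Laplace-domain drawdown has the classic Theis form. mpmath's de Hoog-type inverter stands in for the Fourier-series implementation used in the paper, and all parameter values below are arbitrary illustration choices.

```python
import mpmath as mp

# Laplace-domain drawdown of a continuous point (well) source in a 2-D aquifer:
#   s_bar(p) = Q / (2*pi*T*p) * K0( r * sqrt(p*S/T) )
Q, T, S, r = 1.0e-3, 1.0e-4, 1.0e-4, 10.0     # pumping rate, transmissivity, storativity, radius

def s_bar(p):
    return Q / (2 * mp.pi * T * p) * mp.besselk(0, r * mp.sqrt(p * S / T))

# numerically invert back to the time domain (de Hoog-type algorithm in mpmath)
for t in (1e2, 1e3, 1e4):
    print(t, mp.invertlaplace(s_bar, t, method='dehoog'))
```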
Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations
NASA Technical Reports Server (NTRS)
Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.
2006-01-01
In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R-type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R-type kernels are now nearing maturity, the efficient handling of nearly hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the sub-triangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient, i.e., those tangent to the plane containing the source element.
Dong, Junzi; Colburn, H Steven; Sen, Kamal
2016-01-01
In multisource, "cocktail party" sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem.
NASA Astrophysics Data System (ADS)
Hu, Y.; Ji, Y.; Egbert, G. D.
2015-12-01
The fictitious time domain (FTD) method, based on the correspondence principle between wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems; the approach can reduce computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite-difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain, including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. The resistivity of the air layers is kept as low as possible, to compromise between efficiency (longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 to be sufficient. (3) A "modified" Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain to the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to obtain the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
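The ilFT's specifics are in the paper; the core idea, re-evaluating the DFT on a progressively narrower frequency grid around a spectral peak, can be sketched as follows. The grid size, number of iterations, and zoom factor are placeholder choices, not the authors' values.

```python
import numpy as np

def local_dft(x, dt, f_grid):
    """Evaluate the DFT of x explicitly on an arbitrary (local) frequency grid."""
    t = np.arange(x.size) * dt
    return np.exp(-2j * np.pi * np.outer(f_grid, t)) @ x

def iterative_peak(x, dt, n_iter=8, n_grid=64, zoom=8.0):
    """Iteratively zoom the frequency grid around the strongest spectral peak."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=dt)
    f0, half = freqs[np.abs(spectrum).argmax()], freqs[1]   # FFT peak and bin width
    for _ in range(n_iter):
        grid = np.linspace(f0 - half, f0 + half, n_grid)
        f0 = grid[np.abs(local_dft(x, dt, grid)).argmax()]
        half /= zoom                                         # narrow the search window
    return f0

# toy check: a 123.456 Hz tone sampled at 1 kHz is located far below the FFT bin width
fs, f_true = 1000.0, 123.456
x = np.cos(2 * np.pi * f_true * np.arange(2048) / fs)
print(iterative_peak(x, 1.0 / fs))
```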
NASA Astrophysics Data System (ADS)
Martin, Thomas B.; Drissen, Laurent; Melchior, Anne-Laure
2018-01-01
We present a detailed description of the wavelength, astrometric and photometric calibration plan for SITELLE, the imaging Fourier transform spectrometer attached to the Canada-France-Hawaii telescope, based on observations of a red (647-685 nm) data cube of the central region (11 arcmin × 11 arcmin) of M 31. The first application, presented in this paper is a radial-velocity catalogue (with uncertainties of ∼2-6 km s-1) of nearly 800 emission-line point-like sources, including ∼450 new discoveries. Most of the sources are likely planetary nebulae, although we also detect five novae (having erupted in the first eight months of 2016) and one new supernova remnant candidate.
Solid-state current transformer
NASA Technical Reports Server (NTRS)
Farnsworth, D. L. (Inventor)
1976-01-01
A signal transformation network is described which is uniquely characterized to exhibit a very low input impedance while maintaining a linear transfer characteristic when driven from a voltage source and quiescently biased in the low-microampere current range. In its simplest form, it consists of a tightly coupled two-transistor network in which a common-emitter input stage is interconnected directly with an emitter-follower stage to provide virtually 100 percent negative feedback to the base input of the common-emitter stage. Bias to the network is supplied via the common tie point of the common-emitter stage collector terminal and the emitter-follower base terminal by a regulated constant current source, and the output of the circuit is taken from the collector of the emitter-follower stage.
The excitation of long period seismic waves by a source spanning a structural discontinuity
NASA Astrophysics Data System (ADS)
Woodhouse, J. H.
Simple theoretical results are obtained for the excitation of seismic waves by an indigenous seismic source in the case that the source volume is intersected by a structural discontinuity. In the long-wavelength approximation, the seismic radiation is identical to that of a point source placed on one side of the discontinuity or of a different point source placed on the other side. The moment tensors of these two equivalent sources are related by a specific linear transformation and may differ appreciably both in magnitude and in geometry. Either of these sources could be obtained by linear inversion of seismic data, but the physical interpretation is more complicated than in the usual case. A source which involved no volume change would, for example, yield an isotropic component if, during inversion, it were assumed to lie on the wrong side of the discontinuity. The problem of determining the true moment tensor of the source is indeterminate unless further assumptions are made about the stress glut distribution; one way to resolve this indeterminacy is to assume proportionality between the integrated stress glut on each side of the discontinuity.
Furlong, Edward T.; Gray, James L.; Quanrud, David M.; Teske, Sondra S.; Werner, Stephen L.; Esposito, Kathleen; Marine, Jeremy; Ela, Wendell P.; Zaugg, Steven D.; Phillips, Patrick J.; Stinson, Beverley
2012-01-01
The ubiquitous presence of pharmaceuticals and other emerging contaminants, or trace organic compounds, in surface water has resulted in research and monitoring efforts to identify contaminant sources to surface waters and to better understand loadings from these sources. Wastewater treatment plant discharges have been identified as an important point source of trace organic compounds to surface water and understanding the transport and transformation of these contaminants through wastewater treatment process is essential to controlling their introduction to receiving waters.
Improved remote gaze estimation using corneal reflection-adaptive geometric transforms
NASA Astrophysics Data System (ADS)
Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea
2014-05-01
Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.
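For reference, a minimal OpenCV sketch of the four-CR homography-normalization step that the proposed adaptive transforms generalize: the pupil centre is mapped into the unit square spanned by the four corneal reflections. The coordinates are made-up illustration values, and the subsequent calibrated mapping from normalized coordinates to the point of gaze, as well as the paper's error correction, are omitted.

```python
import cv2
import numpy as np

def normalize_pupil(cr_pts, pupil_xy):
    """Map the pupil centre into the unit square spanned by the four corneal
    reflections (HN-style normalization, sketch only)."""
    unit_square = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])
    H = cv2.getPerspectiveTransform(np.float32(cr_pts), unit_square)
    p = cv2.perspectiveTransform(np.float32([[pupil_xy]]), H)
    return p[0, 0]   # normalized coordinates; a calibrated second mapping gives gaze

# toy usage with hypothetical image coordinates of the 4 CRs and the pupil centre
crs = [(310, 240), (330, 241), (331, 260), (309, 259)]
print(normalize_pupil(crs, (322, 250)))
```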
A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds
Poreba, Martyna; Goulette, François
2015-01-01
With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and yields reliable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589
Ghannam, K; El-Fadel, M
2013-02-01
This paper examines the relative source contribution to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 microm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NOx), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff Dispersion Modeling system to simulate individual source contributions under several spatial and temporal scales. As the contribution of a particular source to ground-level concentrations can be evaluated by simulating that single source's emissions or, alternatively, total emissions except that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., highway and quarries) extending over large areas dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from point sources are higher. Sensitivity analysis indicated that chemical transformations of NOx are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The current paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially as the incremental improvement in air quality associated with this common abatement strategy may not deliver the desired benefit in terms of lower exposure, despite costly emissions capping. The application of atmospheric dispersion models for source apportionment helps in identifying major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometry contribute to emissions, ground-level releases extending over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries at a significant investment in control equipment or process change, may yield minimal return on investment in terms of improvement in air quality at sensitive receptors.
WISEGAL. WISE for the Galactic Plane
NASA Astrophysics Data System (ADS)
Noriega-Crespo, Alberto
There is a genuine community effort to study the properties of the Milky Way on a global scale, such as its structure, its star formation and its interstellar medium, and to use this knowledge to create accurate templates for understanding the properties of extragalactic systems. A testimony to this effort is the set of multi-wavelength surveys of the Galactic Plane that have recently been carried out or are underway, from both the ground (e.g. IPHAS, ATLASGAL, JCMT Galactic Plane Survey) and space (GLIMPSE, MIPSGAL, HiGAL). Adding to this wealth of data is the recent release of approximately 57 percent of the whole sky by the Wide-field Infrared Survey Explorer (WISE) team of their high angular resolution and sensitive mid-IR (3.4, 4.6, 12 and 22 micron) images and point source catalogs, encompassing nearly three quarters of the Galactic Plane, including the less studied regions of the Outer Galaxy. The WISE Atlas Images are spectacular, but to take full advantage of them they need to be transformed from their default Data Number (DN) units into absolutely calibrated surface brightness units. Furthermore, to mitigate the contaminating effect of point sources on the extended/diffuse emission, we will remove them and create residual images. This processing will enable a wide range of science projects using the Atlas Images, where measuring the spectral energy distribution of the extended emission is crucial. In this project we propose to transform the W3 (12 micron) and W4 (22 micron) images of the Galactic Plane, in particular of the Outer Galaxy where WISE provides a unique data set, into background-calibrated, point-source-subtracted images using IRIS (DIRBE-calibrated IRAS data). This transformation will allow us to carry out research projects on massive star formation, the properties of dust in the diffuse ISM, the three-dimensional distribution of dust emission in the Galaxy and the mid/far-infrared properties of supernova remnants, among others, and to perform a detailed comparison between the characteristics (e.g. star formation rate, dust properties) of the Inner and Outer Galaxy. The background-calibrated, point-source-subtracted images will be released to the astronomical community to be fully exploited and used in many other science projects beyond those described in this proposal.
Optimization of the transition path of the head hardening with using the genetic algorithms
NASA Astrophysics Data System (ADS)
Wróbel, Joanna; Kulawik, Adam
2016-06-01
An automated method for choosing the transition path of the hardening head in the heat treatment process of a plane steel element is proposed in this communication. The method determines the points on the path of the moving heat source using genetic algorithms. The fitness function of the algorithm is determined on the basis of effective stresses and the yield point, which depends on the phase composition. The path of the hardening tool and also the area of the heat affected zone are determined on the basis of the obtained points. A numerical model of thermal phenomena, phase transformations in the solid state and mechanical phenomena for the hardening process is implemented in order to verify the presented method. The finite element method (FEM) is used for solving the heat transfer equation and obtaining the required temperature fields. The moving heat source is modeled with a Gaussian distribution and water cooling is also included. A macroscopic model based on the analysis of the CCT and CHT diagrams of the medium-carbon steel is used to determine the phase transformations in the solid state. The finite element method is also used for solving the equilibrium equations, giving the stress field. The thermal and structural strains are taken into account in the constitutive relations.
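As a rough illustration of how such a genetic search could be wired up, the sketch below evolves a small set of path offsets against a placeholder fitness; the stress surrogate and all numerical values are assumptions standing in for the FEM-based evaluation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, pop_size, n_gen = 8, 40, 100

def fitness(path):
    # Placeholder surrogate for the FEM evaluation used in the paper:
    # penalize (hypothetical) effective stress exceeding a constant yield value.
    sigma_eff = 200.0 + 50.0 * np.abs(np.diff(path, prepend=path[0]))
    sigma_yield = 230.0
    return -np.sum(np.maximum(sigma_eff - sigma_yield, 0.0))

pop = rng.uniform(0.0, 1.0, size=(pop_size, n_points))      # y-offsets of path points
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]       # truncation selection
    cut = rng.integers(1, n_points, size=pop_size // 2)
    kids = np.array([np.concatenate((parents[i % len(parents)][:c],
                                     parents[(i + 1) % len(parents)][c:]))
                     for i, c in enumerate(cut)])            # one-point crossover
    kids += rng.normal(0.0, 0.02, size=kids.shape)           # Gaussian mutation
    pop = np.vstack((parents, kids))

best = pop[np.argmax([fitness(ind) for ind in pop])]
```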
Techniques of noninvasive optical tomographic imaging
NASA Astrophysics Data System (ADS)
Rosen, Joseph; Abookasis, David; Gokhler, Mark
2006-01-01
Recently invented methods of optical tomographic imaging through scattering and absorbing media are presented. In one method, the three-dimensional structure of an object hidden between two biological tissues is recovered from many noisy speckle pictures obtained on the output of a multi-channeled optical imaging system. Objects are recovered from many speckled images observed by a digital camera through two stereoscopic microlens arrays. Each microlens in each array generates a speckle image of the object buried between the layers. In the computer each image is Fourier transformed jointly with an image of the speckled point-like source captured under the same conditions. A set of the squared magnitudes of the Fourier-transformed pictures is accumulated to form a single average picture. This final picture is again Fourier transformed, resulting in the three-dimensional reconstruction of the hidden object. In the other method, the effect of spatial longitudinal coherence is used for imaging through an absorbing layer with different thickness, or different index of refraction, along the layer. The technique is based on synthesis of multiple peak spatial degree of coherence. This degree of coherence enables us to scan simultaneously different sample points on different altitudes, and thus decreases the acquisition time. The same multi peak degree of coherence is also used for imaging through the absorbing layer. Our entire experiments are performed with a quasi-monochromatic light source. Therefore problems of dispersion and inhomogeneous absorption are avoided.
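One plausible reading of the speckle-averaging pipeline in the first method is sketched below; it is not the authors' code. The object record and the speckled point-source record are placed side by side (as in a joint-transform correlator), the squared magnitudes of the 2-D transforms are averaged, and the average is transformed again.

```python
import numpy as np

def reconstruct(speckle_images, reference_image):
    """Average the power spectra of joint object/reference records and
    inverse-transform the result. `speckle_images` is a list of 2-D arrays,
    `reference_image` the speckled record of the point-like source."""
    acc = None
    for img in speckle_images:
        joint = np.hstack((img, reference_image))   # joint object/reference record
        power = np.abs(np.fft.fft2(joint)) ** 2     # squared magnitude spectrum
        acc = power if acc is None else acc + power
    final = np.fft.fftshift(np.fft.ifft2(acc / len(speckle_images)))
    return np.abs(final)
```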
Obtaining the phase in the star test using genetic algorithms
NASA Astrophysics Data System (ADS)
Salazar Romero, Marcos A.; Vazquez-Montiel, Sergio; Cornejo-Rodriguez, Alejandro
2004-10-01
The star test is conceptually perhaps the most basic and simplest of all methods of testing image-forming optical systems; the irradiance distribution at the image of a point source (such as a star) is given by the point spread function (PSF). The PSF is very sensitive to aberrations. One way to quantify the PSF is to measure the irradiance distribution in the image of the source point. On the other hand, if we know the aberrations introduced by the optical system, then using diffraction theory we can calculate the PSF. In this work we propose a method to find the wavefront aberrations starting from the PSF, transforming the problem of fitting an aberration polynomial into an optimization problem solved with a genetic algorithm. We also show that this method is robust to noise introduced in the recording of the image. Results of the method are shown.
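The forward model implied here, computing a PSF from an assumed pupil aberration so that a genetic algorithm can match it to the measured irradiance, can be sketched as follows; the grid size, pupil radius and defocus-like phase are illustrative assumptions.

```python
import numpy as np

def psf_from_aberration(phase, pupil_radius_px=64, grid=256):
    """PSF of a circular pupil carrying the wavefront aberration `phase`
    (radians, on the same grid) as |FFT of the pupil function|^2."""
    y, x = np.mgrid[-grid // 2:grid // 2, -grid // 2:grid // 2]
    pupil = (x**2 + y**2 <= pupil_radius_px**2).astype(float)
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

# Example: a defocus-like quadratic phase across the pupil.
grid = 256
y, x = np.mgrid[-grid // 2:grid // 2, -grid // 2:grid // 2]
rho2 = (x**2 + y**2) / 64.0**2
psf = psf_from_aberration(2.0 * np.pi * 0.5 * rho2)
```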
Impacts of drought on the quality of surface water of the basin
NASA Astrophysics Data System (ADS)
Huang, B. B.; Yan, D. H.; Wang, H.; Cheng, B. F.; Cui, X. H.
2013-11-01
Under the background of climate change and human activities, both the frequency of droughts and the range of their impacts have increased. Droughts may give rise to a series of resource, environmental and ecological effects, i.e. water shortage, water quality deterioration and a decrease in the diversity of aquatic organisms. This paper first identifies the mechanism by which drought affects the surface water quality of the basin, and then systematically studies the laws of generation, transfer, transformation and degradation of pollutants during drought, finding that the stage of alternating droughts and floods is the critical period during which surface water quality is affected. Secondly, through indoor orthogonal experiments that take drought degree, rainfall intensity and rainfall duration as the main factors and cover various designed scenarios, the study inspects the effects of these factors on nitrogen loss in soil, the loss of non-point source pollution and the leaching rate of nitrogen under different alternating scenarios of drought and flood. It concludes that these factors are positively correlated with the loss of non-point source pollution and that, under alternating drought and flood, the loss of ammonium nitrogen and nitrate nitrogen in soil is exacerbated, which reveals the transfer and transformation mechanisms of non-point source pollution at a micro level. Finally, using data from the Nenjiang river basin, the paper assesses the impacts of drought on surface water quality at a macro level.
SIFT optimization and automation for matching images from multiple temporal sources
NASA Astrophysics Data System (ADS)
Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio
2017-05-01
Scale Invariant Feature Transformation (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time, which constitutes a first approach to the automation of mapping applications such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, and the resulting optimized values are corroborated using separate validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
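The abstract does not list the five parameters that were optimized; the sketch below simply shows how a candidate parameter set could be scored by the number of ratio-test matches using OpenCV's SIFT implementation, which is one plausible way to drive such an optimization.

```python
import cv2

def match_count(img1, img2, n_octave_layers=3, contrast_thr=0.02,
                edge_thr=15, sigma=1.8, ratio=0.75):
    """Score one candidate parameter set by the number of ratio-test matches.
    The parameter values here are arbitrary illustrations, not the paper's optima."""
    sift = cv2.SIFT_create(nOctaveLayers=n_octave_layers,
                           contrastThreshold=contrast_thr,
                           edgeThreshold=edge_thr, sigma=sigma)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]   # Lowe's ratio test
    return len(good)
```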
Uncertainty Propagation for Terrestrial Mobile Laser Scanner
NASA Astrophysics Data System (ADS)
Mezian, c.; Vallet, Bruno; Soheilian, Bahman; Paparoditis, Nicolas
2016-06-01
Laser scanners are used more and more in mobile mapping systems. They provide 3D point clouds that are used for object reconstruction and for registration of the system. For both of those applications, uncertainty analysis of the 3D points is of great interest but is rarely investigated in the literature. In this paper we present a complete pipeline that takes into account all the sources of uncertainty and allows us to compute a covariance matrix per 3D point. The sources of uncertainty are the laser scanner, the calibration of the scanner in relation to the vehicle, and the direct georeferencing system. We suppose that all the uncertainties follow a Gaussian distribution. The variances of the laser scanner measurements (two angles and one distance) are usually provided by the manufacturers. This is also the case for integrated direct georeferencing devices. Residuals of the calibration process were used to estimate the covariance matrix of the 6D transformation between the laser scanner and the vehicle frame. Knowing the variances of all sources of uncertainty, we applied the uncertainty propagation technique to compute the variance-covariance matrix of every obtained 3D point. Such an uncertainty analysis enables us to estimate the impact of different laser scanners and georeferencing devices on the quality of the obtained 3D points. The obtained uncertainty values are illustrated using error ellipsoids on different datasets.
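A minimal sketch of the per-point propagation step, covering only the scanner's own range/angle noise under one common spherical-coordinate convention, is given below; the calibration and georeferencing covariances described above would be added with the same first-order J Sigma J^T rule.

```python
import numpy as np

def point_covariance(r, theta, phi, var_r, var_theta, var_phi):
    """First-order propagation of range/angle variances to Cartesian
    coordinates of one point: C = J diag(var) J^T."""
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    # Assumed convention: x = r*ct*cp, y = r*ct*sp, z = r*st
    J = np.array([[ct * cp, -r * st * cp, -r * ct * sp],
                  [ct * sp, -r * st * sp,  r * ct * cp],
                  [st,       r * ct,       0.0]])
    return J @ np.diag([var_r, var_theta, var_phi]) @ J.T
```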
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. Then we use a fast midpoint-method algorithm to deduce the mathematical relationships between the target point and the parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the error propagation of the primitive input errors through the stereo system and the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
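For reference, a minimal midpoint-method triangulation (without the rotation-based error parametrization of the paper) looks like this; camera centres and ray directions are assumed known.

```python
import numpy as np

def midpoint_triangulate(c1, d1, c2, d2):
    """Midpoint method: closest point between the two viewing rays
    x = c1 + t1*d1 and x = c2 + t2*d2 (camera centres c, unit directions d)."""
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Toy check: two rays aimed at the point (0, 0, 5) from slightly offset centres.
d = np.array([0.1, 0.0, 5.0]); d /= np.linalg.norm(d)
e = np.array([-0.1, 0.0, 5.0]); e /= np.linalg.norm(e)
print(midpoint_triangulate(np.array([-0.1, 0.0, 0.0]), d,
                           np.array([ 0.1, 0.0, 0.0]), e))   # ~ [0, 0, 5]
```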
Determination of acoustical transfer functions using an impulse method
NASA Astrophysics Data System (ADS)
MacPherson, J.
1985-02-01
The Transfer Function of a system may be defined as the relationship of the output response to the input of a system. Whilst recent advances in digital processing systems have enabled Impulse Transfer Functions to be determined by computation of the Fast Fourier Transform, there has been little work done in applying these techniques to room acoustics. Acoustical Transfer Functions have been determined for auditoria, using an impulse method. The technique is based on the computation of the Fast Fourier Transform (FFT) of a non-ideal impulsive source, both at the source and at the receiver point. The Impulse Transfer Function (ITF) is obtained by dividing the FFT at the receiver position by the FFT of the source. This quantity is presented both as linear frequency scale plots and also as synthesized one-third octave band data. The technique enables a considerable quantity of data to be obtained from a small number of impulsive signals recorded in the field, thereby minimizing the time and effort required on site. As the characteristics of the source are taken into account in the calculation, the choice of impulsive source is non-critical. The digital analysis equipment required for the analysis is readily available commercially.
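The division described above reduces to a few lines of discrete Fourier analysis; the sketch below is a generic illustration (the guard term `eps` is an assumption, and the one-third octave synthesis step is omitted).

```python
import numpy as np

def impulse_transfer_function(source_rec, receiver_rec, fs, eps=1e-12):
    """ITF(f) = FFT(receiver recording) / FFT(source recording).
    Returns the frequency axis and the complex transfer function."""
    n = max(len(source_rec), len(receiver_rec))
    S = np.fft.rfft(source_rec, n)
    R = np.fft.rfft(receiver_rec, n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, R / (S + eps)
```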
NASA Astrophysics Data System (ADS)
Singh, Arvind; Singh, Upendra Kumar
2017-02-01
This paper deals with the application of continuous wavelet transform (CWT) and Euler deconvolution methods to estimate source depth using magnetic anomalies. These methods are utilized mainly to focus on the fundamental issue of mapping the major coal seam and locating tectonic lineaments. The main aim of the study is to locate and characterize the source of the magnetic field by transferring the data into an auxiliary space by CWT. The method has been tested on several synthetic source anomalies and finally applied to magnetic field data from the Jharia coalfield, India. Using the magnetic field data, the mean depths of the causative sources indicate differing lithospheric depths over the study region. It is also inferred that there are two faults, namely the northern boundary fault and the southern boundary fault, oriented in the northeastern and southeastern directions respectively. Moreover, the central part of the region is more faulted and folded than the other parts and has a sediment thickness of about 2.4 km. The methods give the mean depth of the causative sources without any a priori information, which can be used as an initial model in any inversion algorithm.
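For context, Euler deconvolution in such studies rests on the standard homogeneity equation (a textbook form, not quoted from this paper):

```latex
(x - x_0)\,\frac{\partial T}{\partial x} + (y - y_0)\,\frac{\partial T}{\partial y} + (z - z_0)\,\frac{\partial T}{\partial z} = N\,(B - T),
```

where (x_0, y_0, z_0) is the source position, T the observed field, B the regional background and N the structural index; writing this equation for a moving window of observations gives an overdetermined linear system for the source coordinates and background.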
Surface Imaging Skin Friction Instrument and Method
NASA Technical Reports Server (NTRS)
Brown, James L. (Inventor); Naughton, Jonathan W. (Inventor)
1999-01-01
A surface imaging skin friction instrument allows 2D resolution of a spatial image by a 2D Hilbert transform and a 2D inverse thin-oil-film solver, providing an innovation over prior-art single-point approaches. An incoherent, monochromatic light source can be used. The invention provides accurate, easy to use, economical measurement of larger regions of surface shear stress in a single test.
NASA Technical Reports Server (NTRS)
Boccio, Dona
2003-01-01
Terrorist suitcase nuclear devices typically using converted Soviet tactical nuclear warheads contain several kilograms of plutonium. This quantity of plutonium emits a significant number of gamma rays and neutrons as it undergoes radioactive decay. These gamma rays and neutrons normally penetrate ordinary matter to a significant distance. Unfortunately this penetrating quality of the radiation makes imaging with classical optics impractical. However, this radiation signature emitted by the nuclear source may be sufficient to be imaged from low-flying aerial platforms carrying Fourier imaging systems. The Fourier imaging system uses a pair of co-aligned absorption grids to measure a selected range of spatial frequencies from an object. These grids typically measure the spatial frequency in only one direction at a time. A grid pair that looks in all directions simultaneously would be an improvement over existing technology. A number of grid pairs governed by various parameters were investigated to solve this problem. By examining numerous configurations, it became apparent that an appropriate spiral pattern could be made to work. A set of equations was found to describe a grid pattern that produces straight fringes. Straight fringes represent a Fourier transform of a point source at infinity. An inverse Fourier transform of this fringe pattern would provide an accurate image (location and intensity) of a point source.
Transient Point Infiltration In The Unsaturated Zone
NASA Astrophysics Data System (ADS)
Buecker-Gittel, M.; Mohrlok, U.
The risk assessment of leaking sewer pipes is becoming more and more important for urban groundwater management and for environmental as well as health safety. This requires the quantification and balancing of transport and transformation processes based on the water flow in the unsaturated zone. The water flow from a single sewer leak can be described as a point infiltration with time-varying hydraulic conditions, both external and internal. External variations are caused by the discharge in the sewer pipe as well as the state of the leak itself. Internal variations are the result of microbiological clogging effects associated with the transformation processes. Technical-scale as well as small-scale laboratory experiments were conducted in order to investigate the water transport from a transient point infiltration. The technical-scale experiment gave evidence that the water flow takes place under transient conditions when sewage infiltrates into an unsaturated soil, whereas the small-scale experiments investigated in detail the hydraulics of the water transport and the associated solute and particle transport in unsaturated soils. The small-scale experiment was a two-dimensional representation of such a point infiltration source, where the distributed water transport could be measured by several tensiometers in the soil as well as by a selective measurement of the discharge at the bottom of the experimental setup. Several series of experiments were conducted with varying boundary and initial conditions in order to derive the important parameters controlling the infiltration of pure water from the point source. The results showed that there is a significant difference between the infiltration rate at the point source and the discharge rate at the bottom, which could be explained by storage processes due to an outflow resistance at the bottom. This effect is overlain by a water content that decreases over time, correlated with a decreasing infiltration rate. As expected, the initial conditions mainly affect the time scale of the water transport. Additionally, an influence of preferential flow paths on the discharge distribution could be identified, due to heterogeneities caused by the filling and compaction of the sandy soil.
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model, with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows far fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to far more efficient optimization or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume, with and without a multi-resolution scheme, and find a substantial benefit of using smart grid initialization.
NASA Astrophysics Data System (ADS)
Bradley, A. M.; Segall, P.
2012-12-01
We describe software, in development, to calculate elastostatic displacement Green's functions and their derivatives for point and polygonal dislocations in three-dimensional homogeneous elastic layers above an elastic or a viscoelastic halfspace. The steps to calculate a Green's function for a point source at depth zs are as follows. 1. A grid in wavenumber space is chosen. 2. A six-element complex rotated stress-displacement vector x is obtained at each grid point by solving a two-point boundary value problem (2P-BVP). If the halfspace is viscoelastic, the solution is inverse Laplace transformed. 3. For each receiver, x is propagated to the receiver depth zr (often zr = 0) and then, in step 4, inverse Fourier transformed, with the Fourier component corresponding to the receiver's horizontal position. 5. The six elements are linearly combined into displacements and their derivatives. The dominant work is in step 2. The grid is chosen to represent the wavenumber-space solution with as few points as possible. First, the wavenumber space is transformed to increase sampling density near 0 wavenumber. Second, a tensor-product grid of Chebyshev points of the first kind is constructed in each quadrant of the transformed wavenumber space. Moment-tensor-dependent symmetries further reduce work. The numerical solution of the 2P-BVP problem in step 2 involves solving a linear equation A x = b. Half of the elements of x are of geophysical interest; the subset depends on whether zr ≤ zs. Denote these x̂. As wavenumber k increases, x̂ can become inaccurate in finite precision arithmetic for two reasons: 1. The condition number of A becomes too large. 2. The norm-wise relative error (NWRE) in x̂ is large even though it is small in x. To address this problem, a number of researchers have used determinants to obtain x. This may be the best approach for 6-dimensional or smaller 2P-BVP, where the combinatorial increase in work is still moderate. But there is an alternative. Let Ā be the matrix after scaling its columns to unit infinity norm and x̄ the scaled x. If Ā is well conditioned, as it often is in (visco)elastostatic problems, then using determinants is unnecessary. Multiply each side of A x = b by a propagator matrix to the computation depth zcd prior to storing the matrix in finite precision. zcd is determined by the rule that zr and zcd must be on opposite sides of zs. Let the resulting matrix be A(zcd). Three facts imply that this rule controls the NWRE in x̂: 1. Diagonally scaling a matrix changes the accuracy of an element of the solution by about one ULP (unit in the last place). 2. If the NWRE of x̄ is small, then the largest elements are accurate. 3. zcd controls the magnitude of elements in x̄. In step 4, to avoid numerically Fourier transforming the (nearly) non-square-integrable functions that arise when the receiver and source depths are (nearly) the same, a function is divided into an analytical part and a numerical part that goes quickly to 0 as k → ∞. Our poster will describe these calculations, present a preliminary interface to a C-language package in development, and show some physical results.
Automated Mounting Bias Calibration for Airborne LIDAR System
NASA Astrophysics Data System (ADS)
Zhang, J.; Jiang, W.; Jiang, S.
2012-07-01
Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points hardly exist between different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints. Two rules are defined to calculate tie point coordinates from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
Current point cloud registration software has high hardware requirements and a heavy workload, requires multiple interactive definitions, and the source code of the packages with better processing results is not open. A two-step registration method based on a normal-vector distribution feature and a coarse-feature-based iterative closest point (ICP) algorithm is therefore proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with a calculation model for the distribution of normal vectors over a defined point cloud neighbourhood, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
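The fine-registration stage is standard point-to-point ICP; a self-contained sketch is given below (the FPFH/normal-vector coarse alignment described above is assumed to have been applied already and is not reproduced here).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-7):
    """Point-to-point ICP: returns the 4x4 transform and the transformed source."""
    src = source.copy()
    T = np.eye(4)
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)                      # nearest neighbours
        mu_s, mu_t = src.mean(0), target[idx].mean(0)
        H = (src - mu_s).T @ (target[idx] - mu_t)        # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                         # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T, src
```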
User's Guide for the MapImage Reprojection Software Package, Version 1.01
Finn, Michael P.; Trent, Jason R.
2004-01-01
Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets (such as 30-m data) for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Recently, Usery and others (2003a) expanded on the previously limited empirical work with real geographic data by compiling and tabulating the accuracy of categorical areas in projected raster datasets of global extent. Geographers and applications programmers at the U.S. Geological Survey's (USGS) Mid-Continent Mapping Center (MCMC) undertook an effort to expand and evolve an internal USGS software package, MapImage, or mapimg, for raster map projection transformation (Usery and others, 2003a). Daniel R. Steinwand of Science Applications International Corporation, Earth Resources Observation Systems Data Center in Sioux Falls, S. Dak., originally developed mapimg for the USGS, basing it on the USGS's General Cartographic Transformation Package (GCTP). It operated as a command line program on the Unix operating system. Through efforts at MCMC, and in coordination with Mr. Steinwand, this program has been transformed from an application based on a command line into a software package based on a graphic user interface for Windows, Linux, and Unix machines. Usery and others (2003b) pointed out that many commercial software packages do not use exact projection equations and that even when exact projection equations are used, the software often results in error and sometimes does not complete the transformation for specific projections, at specific resampling resolutions, and for specific singularities. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in these software packages, but implementation with data other than points requires specific adaptation of the equations or prior preparation of the data to allow the transformation to succeed. Additional constraints apply to global raster data. It appears that some packages use the USGS's GCTP or similar point transformations without adaptation to the specific characteristics of raster data (Usery and others, 2003b). It is most common for programs to compute transformations of raster data in an inverse fashion. Such mapping can result in an erroneous position and replicate data or create pixels not in the original space. As Usery and others (2003a) indicated, mapimg performs a corresponding forward transformation to ensure the same location results from both methods. The primary benefit of this function is to mask cells outside the domain. MapImage 1.01 is now on the Web. You can download the User's Guide, source, and binaries from the following site: http://mcmcweb.er.usgs.gov/carto_research/projection/acc_proj_data.html
An FBG acoustic emission source locating system based on PHAT and GA
NASA Astrophysics Data System (ADS)
Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun
2017-09-01
Using acoustic emission locating technology to monitor structural health is important for ensuring the continuous and healthy operation of complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array to locate the acoustic emission source. Firstly, the nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on phase transform (PHAT) weighted generalized cross-correlation provides the necessary conditions for accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. In this paper, twenty points are tested on the marble plate surface, and the results show that the absolute locating error is within 10 mm, which proves the accuracy of this locating method.
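The PHAT-weighted generalized cross-correlation step is well defined independently of the paper; a generic sketch of the time-difference extraction is shown below (the small regularization constant is an assumption).

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """PHAT-weighted generalized cross-correlation: whiten the cross-power
    spectrum and return the delay (seconds) of `sig` relative to `ref`."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n)
    REF = np.fft.rfft(ref, n)
    R = SIG * np.conj(REF)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n)        # PHAT weighting
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```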
Performance of Four-Leg VSC based DSTATCOM using Single Phase P-Q Theory
NASA Astrophysics Data System (ADS)
Jampana, Bangarraju; Veramalla, Rajagopal; Askani, Jayalaxmi
2017-02-01
This paper presents single-phase P-Q theory for a four-leg VSC based distributed static compensator (DSTATCOM) in the distribution system. The proposed DSTATCOM provides unity power factor at the source, zero voltage regulation, current harmonic elimination, load balancing and neutral current compensation. The advantage of using a four-leg VSC based DSTATCOM is that it eliminates the isolated/non-isolated transformer connection at the point of common coupling (PCC) for neutral current compensation. Eliminating the transformer connection at the PCC with the proposed topology reduces the cost of the DSTATCOM. The single-phase P-Q theory control algorithm is used to extract the fundamental components of the active and reactive currents for generation of the reference source currents, based on the indirect current control method. The proposed DSTATCOM is modelled and the results are validated with various consumer loads under unity power factor and zero voltage regulation modes in the MATLAB R2013a environment using the SimPowerSystems toolbox.
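For orientation, one common single-phase p-q formulation builds a fictitious orthogonal pair by shifting the measured voltage and current by 90 electrical degrees; sign conventions vary between authors, so the following is indicative rather than the exact control law of this paper:

```latex
p = v_{\alpha} i_{\alpha} + v_{\beta} i_{\beta}, \qquad
q = v_{\beta} i_{\alpha} - v_{\alpha} i_{\beta},
```

with v_beta and i_beta the 90-degree-shifted versions of the measured waveforms; low-pass filtering p and q then yields the fundamental active and reactive components used to form the reference source currents.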
Exact solutions for sound radiation from a moving monopole above an impedance plane.
Ochmann, Martin
2013-04-01
The acoustic field of a monopole source moving with constant velocity at constant height above an infinite locally reacting plane can be expressed in analytical form by combining the Lorentz transformation with the method of superimposing complex or real point sources. For a plane with masslike response, the solution in Lorentz space consists of a superposition of monopoles only and therefore does not differ in principle from the solution of the corresponding stationary boundary value problem. However, by considering a frequency-independent surface impedance, e.g., with purely absorbing behavior, the half-space Green's function now comprises not only a line of monopoles but also dipoles. For certain field points on a special line g, this solution can be written explicitly by using an exponential integral. For arbitrary field points, the method of stationary phase leads to an asymptotic solution for the reflection coefficient which agrees with prior results from the literature.
NASA Astrophysics Data System (ADS)
Lohn, Stefan B.; Dong, Xin; Carminati, Federico
2012-12-01
Chip multiprocessors are going to support massive parallelism through many additional physical and logical cores. Performance gains can no longer be obtained by increasing the clock frequency, because the technical limits are almost reached; instead, parallel execution must be used. Resources like main memory, the cache hierarchy, the bandwidth of the memory bus, and the links between cores and sockets are not going to improve as fast. Hence, parallelism can only translate into performance gains if memory usage is optimized and the communication between threads is minimized. Besides, concurrent programming has become a domain for experts: implementing multi-threading is error prone and labor-intensive, and a full reimplementation of the whole AliRoot source code is unaffordable. This paper describes the effort to evaluate the adaptation of AliRoot to the needs of multi-threading and to provide the capability of parallel processing by using a semi-automatic source-to-source transformation to address the problems described above and to provide a straightforward way of parallelization with almost no interference between threads. This makes the approach simple and reduces the required manual changes in the code. In a first step, unconditional thread-safety will be introduced to bring the original sequential and thread-unaware source code into a position to utilize multi-threading. Afterwards, further investigations have to be performed to identify candidate classes that are useful to share amongst threads. Then, in a second step, the transformation has to change the code to share these classes and finally to verify that no invalid interferences between threads remain.
Beyond crystallography: Diffractive imaging using coherent x-ray light sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, J.; Ishikawa, T.; Robinson, I. K.
X-ray crystallography has been central to the development of many fields of science over the past century. It has now matured to a point that as long as good-quality crystals are available, their atomic structure can be routinely determined in three dimensions. However, many samples in physics, chemistry, materials science, nanoscience, geology, and biology are noncrystalline, and thus their three-dimensional structures are not accessible by traditional x-ray crystallography. Overcoming this hurdle has required the development of new coherent imaging methods to harness new coherent x-ray light sources. Here we review the revolutionary advances that are transforming x-ray sources and imaging in the 21st century.
Estimating vehicle height using homographic projections
Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter
2013-07-16
Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
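The selection rule, choosing the homography whose ground-plane mapping keeps inter-salient-point distances most nearly constant over time, can be sketched as follows; the array shapes and names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def best_height(tracks, homographies, heights):
    """Pick the height whose homography keeps inter-salient-point distances most
    nearly constant over time. `tracks` is (frames, points, 2) image coordinates;
    `homographies` is a list of 3x3 arrays, one per candidate height."""
    def apply_h(H, pts):                               # pts: (..., 2) pixel coords
        ph = np.concatenate([pts, np.ones(pts.shape[:-1] + (1,))], axis=-1)
        w = ph @ H.T
        return w[..., :2] / w[..., 2:3]
    scores = []
    for H in homographies:
        ground = apply_h(H, tracks)                    # map every frame to the plane
        d = np.linalg.norm(ground[:, :, None, :] - ground[:, None, :, :], axis=-1)
        scores.append(np.var(d, axis=0).mean())        # distance variance over time
    return heights[int(np.argmin(scores))]
```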
SOURCES AND TRANSFORMATIONS OF NITROGEN, CARBON, AND PHOSPHORUS IN THE POTOMAC RIVER ESTUARY
NASA Astrophysics Data System (ADS)
Pennino, M. J.; Kaushal, S.
2009-12-01
Global transport of nitrogen (N), carbon (C), and phosphorus (P) in river ecosystems has been dramatically altered due to urbanization. We examined the capacity of a major tributary of the Chesapeake Bay, the Potomac River, to transform carbon, nitrogen, and phosphorus inputs from the world’s largest advanced wastewater treatment facility (Washington D.C. Water and Sewer Authority). Surface water and effluent samples were collected along longitudinal transects of the Potomac River seasonally and compared to long-term interannual records of carbon, nitrogen, and phosphorus. Water samples from seasonal longitudinal transects were analyzed for dissolved organic and inorganic nitrogen and phosphorus, total organic carbon, and particulate carbon, nitrogen, and phosphorus. The source and quality of organic matter was characterized using fluorescence spectroscopy, excitation emission matrices (EEMs), and PARAFAC modeling. Sources of nitrate were tracked using stable isotopes of nitrogen and oxygen. Along the river network stoichiometric ratios of C, N, and P were determined across sites and related to changes in flow conditions. Land use data and historical water chemistry data were also compared to assess the relative importance of non-point sources from land-use change versus point-sources of carbon, nitrogen, and phosphorus. Preliminary data from EEMs suggested that more humic-like organic matter was important above the wastewater treatment plant, but more protein-like organic matter was present below the treatment plant. Levels of nitrate and ammonia showed increases within the vicinity of the wastewater treatment outfall, but decreased rapidly downstream, potentially indicating nutrient uptake and/or denitrification. Phosphate levels decreased gradually along the river with a small increase near the wastewater treatment plant and a larger increase and decrease further downstream near the high salinity zone. Total organic carbon levels show a small decrease downstream. Ecological stoichiometric ratios along the river indicate increases in C/N ratios downstream, but no corresponding trend with C/P ratios. The N/P ratios increased directly below the treatment plant and then decreased gradually downstream. The C/N/P ratios remained level until the last two sampling stations within 20 miles of the Chesapeake Bay, where there is a large increase. Despite large inputs, there may be large variations in sources and ecological stoichiometry along rivers and estuaries, and knowledge of these transformations will be important in predicting changes in the amounts, forms, and stoichiometry of nutrient loads to coastal waters.
Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.
Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai
2016-02-01
The study of biology and medicine in a noisy environment is an evolving direction in biological data analysis. Among these studies, analysis of electrocardiogram (ECG) signals in a noisy environment is a challenging direction in personalized medicine. Due to its periodic character, an ECG signal can be roughly regarded as a sparse biomedical signal. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance. Then, by exploiting these subspaces, the mixing matrix is estimated accurately. In the second stage, based on the number of active sources at each time point, the time points are divided into different layers. Next, by constructing transformation matrices, these time points form a row echelon-like system. After that, the sources at each layer can be solved explicitly by the corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition, namely that the number of active sources is less than the number of observations. Experimental results show that the proposed method has a better performance for the sparse ECG signal recovery problem.
NASA Technical Reports Server (NTRS)
Cutri, Roc M.; Low, Frank J.; Marvel, Kevin B.
1992-01-01
The PDS/Monet measuring engine at the National Optical Astronomy Observatory was used to obtain photometry of nearly 10,000 stars on the NGS/POSS and 2000 stars on the ESO/SRC Survey glass plates. These measurements have been used to show that global transformation functions exist that allow calibration of stellar photometry from any blue or red plate to equivalent Johnson B and Cousins R photoelectric magnitudes. The four transformation functions appropriate for the POSS O and E and ESO/SRC J and R plates were characterized, and it was found that, within the measurement uncertainties, they vary from plate to plate only by photometric zero-point offsets. A method is described to correct for the zero-point shifts and to obtain calibrated B and R photometry of stellar sources to an average accuracy of 0.3-0.4 mag within the ranges R = 8-19.5 for red plates in both surveys, B = 9-20.5 on POSS blue plates, and B = 10-20.5 on ESO/SRC blue plates. This calibration procedure makes it possible to obtain rapid photometry of very large numbers of stellar sources.
Portable Fourier Transform Spectroscopy for Analysis of Surface Contamination and Quality Control
NASA Technical Reports Server (NTRS)
Pugel, Diane
2012-01-01
Progress has been made in adapting and enhancing a commercially available infrared spectrometer for the development of a handheld device for in-field measurements of the chemical composition of various samples of materials. The intent is to duplicate the functionality of a benchtop Fourier transform infrared spectrometer (FTIR) within the compactness of a handheld instrument with significantly improved spectral responsivity. Existing commercial technology, like deuterated L-alanine-doped triglycine sulfate (DLATGS) detectors, is capable of sensitive in-field chemical analysis. The proposed approach compares several subsystem elements of the FTIR inside the commercial non-benchtop system to those of commercial benchtop systems. These subsystem elements are the detector, the preamplifier and associated electronics of the detector, the interferometer, associated readout parameters, and cooling. This effort will examine these different detector subsystem elements to look for limitations in each. These limitations will be explored collaboratively with the commercial provider, and will be prioritized to meet the deliverable objectives. The tool design will be that of a handheld gun containing the IR filament source and associated optics. It will operate in a point-and-shoot manner, pointing the source and optics at the sample under test and capturing the reflected response of the material in the same handheld gun. Data will be captured via the gun and ported to a laptop.
Transforming Care at the Bedside (TCAB): enhancing direct care and value-added care.
Dearmon, Valorie; Roussel, Linda; Buckner, Ellen B; Mulekar, Madhuri; Pomrenke, Becky; Salas, Sheri; Mosley, Aimee; Brown, Stephanie; Brown, Ann
2013-05-01
The purpose of this study was to examine the effectiveness of a Transforming Care at the Bedside initiative from a unit perspective. Improving patient outcomes and nurses' work environments are the goals of Transforming Care at the Bedside. Transforming Care at the Bedside creates programs of change originating at the point of care and directly promoting engagement of nurses to transform work processes and quality of care on medical-surgical units. This descriptive comparative study draws on multiple data sources from two nursing units: a Transforming Care at the Bedside unit where staff tested, adopted and implemented improvement ideas, and a control unit where staff continued traditional practices. Change theory provided the framework for the study. Direct care and value-added care increased on Transforming Care at the Bedside unit compared with the control unit. Transforming Care at the Bedside unit decreased in incidental overtime. Nurses reported that the process challenged old ways of thinking and increased nursing innovations. Hourly rounding, bedside reporting and the use of pain boards were seen as positive innovations. Evidence supported the value-added dimension of the Transforming Care at the Bedside process at the unit level. Nurses recognized the significance of their input into processes of change. Transformational leadership and frontline projects provide a vehicle for innovation through application of human capital. © 2012 Blackwell Publishing Ltd.
Prediction of vortex shedding from circular and noncircular bodies in subsonic flow
NASA Technical Reports Server (NTRS)
Mendenhall, Michael R.; Lesieutre, Daniel J.
1987-01-01
An engineering prediction method and associated computer code VTXCLD are presented which predict nose vortex shedding from circular and noncircular bodies in subsonic flow at angles of attack and roll. The axisymmetric body is represented by point sources and doublets, and noncircular cross sections are transformed to a circle by either analytical or numerical conformal transformations. The leeward vortices are modeled by discrete vortices in crossflow planes along the body; thus, the three-dimensional steady flow problem is reduced to a two-dimensional, unsteady, separated flow problem for solution. Comparison of measured and predicted surface pressure distributions, flowfield surveys, and aerodynamic characteristics are presented for bodies with circular and noncircular cross sectional shapes.
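As an illustration of the kind of analytical conformal step mentioned above (not the VTXCLD implementation), the inverse of a Joukowski-type map sends an elliptical cross section back to a circle, where the crossflow vortex tracking can then be carried out; the semi-axis values below are arbitrary.

```python
import numpy as np

# The map z = zeta + a^2/zeta sends the circle |zeta| = r to an ellipse with
# semi-axes (r + a^2/r, r - a^2/r); invert it by picking the quadratic root
# that lies outside the circle |zeta| = a.
a, r = 0.5, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
ellipse = (r + a**2 / r) * np.cos(theta) + 1j * (r - a**2 / r) * np.sin(theta)

w = np.sqrt(ellipse.astype(complex) ** 2 - 4.0 * a**2)
roots = np.stack([(ellipse + w) / 2.0, (ellipse - w) / 2.0])
zeta = np.where(np.abs(roots[0]) >= np.abs(roots[1]), roots[0], roots[1])
print(np.allclose(np.abs(zeta), r))   # True: the section maps back onto a circle
```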
NASA Technical Reports Server (NTRS)
McGill, Matthew J. (Inventor); Scott, Vibart S. (Inventor); Marzouk, Marzouk (Inventor)
2001-01-01
A holographic optical element transforms a spectral distribution of light to image points. The element comprises areas, each of which acts as a separate lens to image the light incident in its area to an image point. Each area contains the recorded hologram of a point source object. The image points can be made to lie in a line in the same focal plane so as to align with a linear array detector. A version of the element has been developed that has concentric equal areas to match the circular fringe pattern of a Fabry-Perot interferometer. The element has high transmission efficiency, and when coupled with high quantum efficiency solid state detectors, provides an efficient photon-collecting detection system. The element may be used as part of the detection system in a direct detection Doppler lidar system or multiple field of view lidar system.
Effective organizational transformation in psychiatric rehabilitation and recovery.
Clossey, Laurene; Rowlett, Al
2008-01-01
The recovery model represents a new paradigm in the treatment of psychiatric disability. Many states have mandated the model's adoption by their public mental health agencies. As organizational transformation toward this new approach is rapidly occurring, guidance to make successful change is necessary. The recovery model is readily misunderstood and may be resisted by professional occupational cultures that perceive it as a threat to their expertise. Successful change agents need to understand likely sources of resistance to agency transformation, and be knowledgeable and skilled in organizational development to facilitate service conversion to the recovery model. Change agents need to carefully consider how to transform agency structure and culture and how to develop committed leadership that empowers staff. Recovery values and principles must infuse the entire organization. Guidelines to assist change agents are discussed and distilled through the example of a successful northern California recovery model mental health agency called Turning Point Community Programs. The guidance provided seeks to help make the recovery model portable across many types of mental health settings.
NASA Astrophysics Data System (ADS)
Yu, Xiaojun; Liu, Xinyu; Chen, Si; Wang, Xianghong; Liu, Linbo
2016-03-01
High-resolution optical coherence tomography (OCT) is of critical importance to disease diagnosis because it is capable of providing detailed microstructural information on biological tissues. However, a compromise usually has to be made between its spatial resolutions and sensitivity due to the suboptimal spectral response of the system components, such as the linear camera, the dispersion grating, and the focusing lenses. In this study, we demonstrate an OCT system that achieves both high spatial resolutions and enhanced sensitivity through utilizing a spectrally encoded source. The system achieves a lateral resolution of 3.1 μm and an axial resolution of 2.3 μm in air; when a simple dispersive prism is placed in the infinity space of the sample arm optics, the illumination beam on the sample is transformed into a line source with a visual angle of 10.3 mrad. Such an extended source technique allows a ~4 times larger maximum permissible exposure (MPE) than its point source counterpart, which improves the system sensitivity by ~6 dB. In addition, the dispersive prism can be conveniently switched to a reflector. Such flexibility helps increase the penetration depth of the system without increasing the complexity of current point source devices. We conducted experiments to characterize the system's imaging capability using the human fingertip in vivo and the swine eye optic nerve disc ex vivo. The higher penetration depth of such a system over the conventional point source OCT system is also demonstrated in these two tissues.
Kamali, Tschackad; Považay, Boris; Kumar, Sunil; Silberberg, Yaron; Hermann, Boris; Werkmeister, René; Drexler, Wolfgang; Unterhuber, Angelika
2014-10-01
We demonstrate a multimodal optical coherence tomography (OCT) and online Fourier transform coherent anti-Stokes Raman scattering (FTCARS) platform using a single sub-12 femtosecond (fs) Ti:sapphire laser, enabling simultaneous extraction of structural and chemical ("morphomolecular") information of biological samples. Spectral domain OCT prescreens the specimen, providing a fast ultrahigh (4×12 μm axial and transverse) resolution wide field morphologic overview. Additional complementary intrinsic molecular information is obtained by zooming into regions of interest for fast label-free chemical mapping with online FTCARS spectroscopy. Background-free CARS is based on a Michelson interferometer in combination with a highly linear piezo stage, which allows for quick point-to-point extraction of CARS spectra in the fingerprint region in less than 125 ms with a resolution better than 4 cm⁻¹ without the need for averaging. OCT morphology and CARS spectral maps indicating phosphate and carbonate bond vibrations from human bone samples are extracted to demonstrate the performance of this hybrid imaging platform.
NASA Astrophysics Data System (ADS)
Li, Qiangkun; Hu, Yawei; Jia, Qian; Song, Changji
2018-02-01
The estimation of pollutant concentration in agricultural drainage is the key point of quantitative research on agricultural non-point source pollution load. Guided by uncertainty theory, the combination of fertilization and irrigation is treated as an impulse input to the farmland, while the pollutant concentration in the agricultural drainage is regarded as the response process corresponding to that impulse input. The migration and transformation of pollutants in soil is expressed by an inverse Gaussian probability density function, and the behaviour of pollutant migration and transformation in soil at different crop growth periods is reflected by adjusting the parameters of the inverse Gaussian distribution. On this basis, an estimation model for pollutant concentration in agricultural drainage at the field scale was constructed. Taking the Qing Tong Xia Irrigation District in Ningxia as an example, the concentrations of nitrate nitrogen and total phosphorus in agricultural drainage were simulated by this model. The results show that the simulated results agree approximately with the measured data, with Nash-Sutcliffe coefficients of 0.972 and 0.964, respectively.
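A minimal sketch of the modelling idea described above, with made-up kernel parameters and event magnitudes standing in for the calibrated, growth-period-dependent values of the paper, is shown below.

```python
import numpy as np
from scipy.stats import invgauss

# Treat each fertilization/irrigation event as an impulse and the drain
# concentration as the convolution of those impulses with an inverse Gaussian
# kernel; mu, scale and the event magnitudes are assumptions for illustration.
days = np.arange(1, 121)                        # simulation horizon (days)
kernel = invgauss.pdf(days, mu=0.5, scale=20.0) # assumed transfer-function shape

impulses = np.zeros_like(days, dtype=float)     # fertilization + irrigation inputs
impulses[[5, 40, 75]] = [1.0, 0.6, 0.8]         # hypothetical event magnitudes

concentration = np.convolve(impulses, kernel)[:len(days)]  # drain response
```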
Overlapped optics induced perfect coherent effects.
Li, Jian Jie; Zang, Xiao Fei; Mao, Jun Fa; Tang, Min; Zhu, Yi Ming; Zhuang, Song Lin
2013-12-20
For traditional coherent effects, two separated identical point sources can interfere with each other only when the optical path difference is an integer number of wavelengths, leading to alternating dark and bright fringes for different optical path differences. For hundreds of years, such a perfect coherent condition has seemed insurmountable. However, in this paper, based on transformation optics, two separated in-phase identical point sources can induce perfect interference with each other without satisfying the traditional coherent condition. The shifting illusion medium is realized by an inductor-capacitor transmission-line network. Theoretical analysis, numerical simulations, and experimental results confirm this kind of perfect coherent effect, and it is found that the total radiation power of a multiple-element system can be greatly enhanced. Our investigation may be applicable to the National Ignition Facility (NIF), Inertial Confined Fusion (ICF) of China, LED lighting technology, terahertz communication, and so on.
Fast underdetermined BSS architecture design methodology for real time applications.
Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R
2015-01-01
In this paper, we propose a high-speed architecture design methodology for the Underdetermined Blind Source Separation (UBSS) algorithm using our recently proposed high-speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of greater interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute the M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The implementation and experimental comparison results show that the proposed DHT design is two times faster than the state-of-the-art architecture.
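The hardware sub-matrix decomposition of the M-point DHT is not reproduced here; the sketch below is only a software reference discrete Hilbert transform via the FFT (imaginary part of the analytic signal), against which such an architecture could be checked.

```python
import numpy as np

def dht(x):
    """Reference N-point discrete Hilbert transform via the FFT."""
    x = np.asarray(x, dtype=float)
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h).imag     # imaginary part of the analytic signal

x = np.cos(2 * np.pi * 5 * np.arange(64) / 64)
print(np.allclose(dht(x), np.sin(2 * np.pi * 5 * np.arange(64) / 64)))   # True
```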
Beyond crystallography: diffractive imaging using coherent x-ray light sources.
Miao, Jianwei; Ishikawa, Tetsuya; Robinson, Ian K; Murnane, Margaret M
2015-05-01
X-ray crystallography has been central to the development of many fields of science over the past century. It has now matured to a point that as long as good-quality crystals are available, their atomic structure can be routinely determined in three dimensions. However, many samples in physics, chemistry, materials science, nanoscience, geology, and biology are noncrystalline, and thus their three-dimensional structures are not accessible by traditional x-ray crystallography. Overcoming this hurdle has required the development of new coherent imaging methods to harness new coherent x-ray light sources. Here we review the revolutionary advances that are transforming x-ray sources and imaging in the 21st century. Copyright © 2015, American Association for the Advancement of Science.
Chaotic scattering in an open vase-shaped cavity: Topological, numerical, and experimental results
NASA Astrophysics Data System (ADS)
Novick, Jaison Allen
We present a study of trajectories in a two-dimensional, open, vase-shaped cavity in the absence of forces. The classical trajectories freely propagate between elastic collisions. Bound trajectories, regular scattering trajectories, and chaotic scattering trajectories are present in the vase. Most importantly, we find that classical trajectories passing through the vase's mouth escape without return. In our simulations, we propagate bursts of trajectories from point sources located along the vase walls. We record the time for escaping trajectories to pass through the vase's neck. Constructing a plot of escape time versus the initial launch angle for the chaotic trajectories reveals a vastly complicated recursive structure, or fractal. This fractal structure can be understood by a suitable coordinate transform. Reducing the dynamics to two dimensions reveals that the chaotic dynamics are organized by a homoclinic tangle, which is formed by the union of infinitely long, intersecting stable and unstable manifolds. This study is broken down into three major components. We first present a topological theory that extracts the essential topological information from a finite subset of the tangle and encodes this information in a set of symbolic dynamical equations. These equations can be used to predict a topologically forced minimal subset of the recursive structure seen in numerically computed escape time plots. We present three applications of the theory and compare these predictions to our simulations. The second component is a presentation of an experiment in which the vase was constructed from Teflon walls and an ultrasound transducer was used as a point source. We compare the escaping signal to a classical simulation and find agreement between the two. Finally, we present an approximate solution to the time-independent Schrödinger Equation for escaping waves. We choose a set of points at which to evaluate the wave function and interpolate trajectories connecting the source point to each "detector point". We then construct the wave function directly from these classical trajectories using the two-dimensional WKB approximation. The wave function is Fourier Transformed using a Fast Fourier Transform algorithm, resulting in a spectrum in which each peak corresponds to an interpolated trajectory. Our predictions are based on an imagined experiment that uses microwave propagation within an electromagnetic waveguide. Such an experiment exploits the fact that under suitable conditions both Maxwell's Equations and the Schrödinger Equation can be reduced to the Helmholtz Equation. Therefore, our predictions, while compared to the electromagnetic experiment, contain information about the quantum system. Identifying peaks in the transmission spectrum with chaotic trajectories will allow for an additional experimental verification of the intermediate recursive structure. Finally, we summarize our results and discuss possible extensions of this project.
Acoustic propagation in a thermally stratified atmosphere
NASA Technical Reports Server (NTRS)
Vanmoorhem, W. K.
1988-01-01
Acoustic propagation in an atmosphere with a specific form of temperature profile has been investigated by analytical means. The temperature profile used is representative of an actual atmospheric profile and contains three free parameters. Both lapse and inversion cases have been considered. Although ray solutions have been considered, the primary emphasis has been on solutions of the acoustic wave equation with a point source, where the sound speed varies with height above the ground corresponding to the assumed temperature profile. The method used to obtain the solution of the wave equation is based on Hankel transformation of the wave equation, approximate solution of the transformed equation for wavelengths small compared to the scale of the temperature (or sound speed) profile, and approximate or numerical inversion of the Hankel-transformed solution. The solution displays the characteristics found in experimental data, but extensive comparison between the models and experimental data has not been carried out.
Analysis of separation test for automatic brake adjuster based on linear radon transformation
NASA Astrophysics Data System (ADS)
Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi
2015-01-01
The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has a strong ability of anti-noise and anti-interference by fitting the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature-point extraction error of the gradient-maximum optimal method is approximately ±0.100, while that of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.
Registration methods for nonblind watermark detection in digital cinema applications
NASA Astrophysics Data System (ADS)
Nguyen, Philippe; Balter, Raphaele; Montfort, Nicolas; Baudry, Severine
2003-06-01
Digital watermarking may be used to enforce copyright protection of digital cinema by embedding in each projected movie a unique identifier (fingerprint). By identifying the source of illegal copies, watermarking will thus encourage movie theatre managers to enforce copyright protection, in particular by preventing people from coming in with a handheld camera. We propose here a non-blind watermarking method to improve watermark detection on very impaired sequences. We first present a study of the picture impairments caused by projection on a screen and then acquisition with a handheld camera. We show that images undergo geometric deformations, which are fully described by a projective geometry model. The sequence also undergoes spatial and temporal luminance variation. Based on this study and on the impairment models which follow, we propose a method to match the retrieved sequence to the original one. First, temporal registration is performed by comparing the average luminance variation of both sequences. To compensate for geometric transformations, we use paired points from both sequences, obtained by applying a feature-point detector. The matching of the feature points then enables retrieval of the geometric transform parameters. Tests show that watermark retrieval on rectified sequences is greatly improved.
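A small sketch of the geometric-registration step only, assuming the feature points have already been matched (the paper's feature detector and temporal registration are not reproduced): a direct linear transform (DLT) recovers the projective transform from point pairs.

```python
import numpy as np

def homography_dlt(src, dst):
    """Projective transform H mapping src -> dst from >= 4 matched point pairs
    (unnormalised DLT, solved via SVD)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# quick self-check with a known projective transform
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H_true = np.array([[1.2, 0.1, 3.0], [0.0, 0.9, -1.0], [0.001, 0.002, 1.0]])
uvw = np.column_stack([src, np.ones(4)]) @ H_true.T
dst = uvw[:, :2] / uvw[:, 2:]
print(np.allclose(homography_dlt(src, dst), H_true))   # True
```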
Alternate energy source usage for in situ heat treatment processes
Stone, Jr., Francis Marion; Goodwin, Charles R [League City, TX; Richard, Jr., James
2011-03-22
Systems, methods, and heaters for treating a subsurface formation are described herein. At least one system for providing power to one or more subsurface heaters is described herein. The system may include an intermittent power source, a transformer coupled to the intermittent power source, and a tap controller coupled to the transformer. The transformer may be configured to transform power from the intermittent power source to power with appropriate operating parameters for the heaters. The tap controller may be configured to monitor and control the transformer so that a constant voltage is provided to the heaters from the transformer regardless of the load of the heaters and the power output provided by the intermittent power source.
Wavespace-Based Coherent Deconvolution
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Cattafesta, Louis N., III
2012-01-01
Array deconvolution is commonly used in aeroacoustic analysis to remove the influence of a microphone array's point spread function from a conventional beamforming map. Unfortunately, the majority of deconvolution algorithms assume that the acoustic sources in a measurement are incoherent, which can be problematic for some aeroacoustic phenomena with coherent, spatially-distributed characteristics. While several algorithms have been proposed to handle coherent sources, some are computationally intractable for many problems while others require restrictive assumptions about the source field. Newer generalized inverse techniques hold promise, but are still under investigation for general use. An alternate coherent deconvolution method is proposed based on a wavespace transformation of the array data. Wavespace analysis offers advantages over curved-wave array processing, such as providing an explicit shift-invariance in the convolution of the array sampling function with the acoustic wave field. However, usage of the wavespace transformation assumes the acoustic wave field is accurately approximated as a superposition of plane wave fields, regardless of true wavefront curvature. The wavespace technique leverages Fourier transforms to quickly evaluate a shift-invariant convolution. The method is derived for and applied to ideal incoherent and coherent plane wave fields to demonstrate its ability to determine magnitude and relative phase of multiple coherent sources. Multi-scale processing is explored as a means of accelerating solution convergence. A case with a spherical wave front is evaluated. Finally, a trailing edge noise experiment case is considered. Results show the method successfully deconvolves incoherent, partially-coherent, and coherent plane wave fields to a degree necessary for quantitative evaluation. Curved wave front cases warrant further investigation. A potential extension to nearfield beamforming is proposed.
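The full wavespace deconvolution is not reproduced here; the toy sketch below only illustrates the shortcut the abstract relies on, namely that a shift-invariant (circular) convolution can be evaluated quickly as a product of Fourier transforms.

```python
import numpy as np

def fft_convolve2d(a, b):
    """Shift-invariant (circular) 2-D convolution via the convolution theorem."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

rng = np.random.default_rng(0)
a, b = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
brute = np.zeros_like(a)                       # brute-force circular convolution for comparison
for i in range(8):
    for j in range(8):
        for k in range(8):
            for l in range(8):
                brute[i, j] += a[k, l] * b[(i - k) % 8, (j - l) % 8]
print(np.allclose(fft_convolve2d(a, b), brute))   # True
```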
Algorithm for Wavefront Sensing Using an Extended Scene
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Green, Joseph; Ohara, Catherine
2008-01-01
A recently conceived algorithm for processing image data acquired by a Shack-Hartmann (SH) wavefront sensor is not subject to the restriction, previously applicable in SH wavefront sensing, that the image be formed from a distant star or other equivalent of a point light source. That is to say, the image could be of an extended scene. (One still has the option of using a point source.) The algorithm can be implemented in commercially available software on ordinary computers. The steps of the algorithm are the following: 1. Suppose that the image comprises M sub-images. Determine the x,y Cartesian coordinates of the centers of these sub-images and store them in a 2xM matrix. 2. Within each sub-image, choose an NxN-pixel cell centered at the coordinates determined in step 1. For the ith sub-image, let this cell be denoted si(x,y). Let the cell of another sub-image (preferably near the center of the whole extended-scene image) be designated the reference cell, denoted r(x,y). 3. Calculate the fast Fourier transforms of the sub-sub-images in the central N'xN' portions (where N' < N and both are preferably powers of 2) of r(x,y) and si(x,y). 4. Multiply the two transforms to obtain a cross-correlation function Ci(u,v) in the Fourier domain. Then let the phase of Ci(u,v) constitute a phase function, phi(u,v). 5. Fit u and v slopes to phi(u,v) over a small u,v subdomain. 6. Compute the fast Fourier transform Si(u,v) of the full NxN cell si(x,y). Multiply this transform by the u and v phase slopes obtained in step 5. Then compute the inverse fast Fourier transform of the product. 7. Repeat steps 4 through 6 in an iteration loop, accumulating the u and v slopes, until a maximum iteration number is reached or the change in image shift becomes smaller than a predetermined tolerance. 8. Repeat steps 4 through 7 for the cells of all other sub-images.
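A compact sketch of the core of steps 3-5, assuming square cells and small integer test shifts (the conjugate-product form of the cross-correlation is used here so that the phase is a plane whose slopes give the shift directly):

```python
import numpy as np

def estimate_shift(ref, sub, kmax=4):
    """Estimate the (dy, dx) shift of `sub` relative to `ref` from the phase
    slopes of their cross-correlation spectrum."""
    C = np.fft.fft2(ref) * np.conj(np.fft.fft2(sub))     # cross-correlation in the Fourier domain
    phase = np.angle(C)
    fy = np.fft.fftfreq(ref.shape[0])                    # cycles per sample
    fx = np.fft.fftfreq(ref.shape[1])
    FY, FX = np.meshgrid(fy, fx, indexing='ij')
    # fit slopes over a small low-frequency subdomain where the phase is unwrapped
    mask = (np.abs(FY) <= kmax / ref.shape[0]) & (np.abs(FX) <= kmax / ref.shape[1])
    A = np.column_stack([FY[mask], FX[mask]])
    slopes, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
    return slopes / (2 * np.pi)          # phase = 2*pi*(fy*dy + fx*dx)

rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))
sub = np.roll(ref, (3, -2), axis=(0, 1))     # true shift (dy, dx) = (3, -2)
print(estimate_shift(ref, sub))              # approximately [ 3. -2.]
```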
In-Stream Microbial Denitrification Potential at Wastewater Treatment Plant Discharge Sites
NASA Astrophysics Data System (ADS)
Hill, N. B.; Rahm, B. G.; Shaw, S. B.; Riha, S. J.
2014-12-01
Reactive nitrogen loading from municipal sewage discharge provides point sources of nitrate (NO3-) to rivers and streams. Through microbially mediated denitrification, NO3- can be converted to dinitrogen (N2) and nitrous oxide (N2O) gases, which are released to the atmosphere. Preliminary observations made throughout summer 2011 near a wastewater treatment plant (WWTP) outfall in the Finger Lakes region of New York indicated that NO3- concentrations downstream of the discharge pipe were lower relative to upstream concentrations. This suggested that nitrate processing was occurring more rapidly and completely than predicted by current models and that point "sources" can in some cases be point "sinks". Molecular assays and stable isotope analyses were combined with laboratory microcosm experiments and water chemistry analyses to better understand the mechanism of nitrate transformation. Nitrite reductase (nirS and nirK) and nitrous oxide reductase (nosZ) genes were detected in water and sediment samples using qPCR. Denitrification genes were present attached to stream sediment, in pipe biofilm, and in WWTP discharge water. A comparison of δ18O and δ15N signatures also supported the hypothesis that stream NO3- had been processed biotically. Results from microcosm experiments indicated that the NO3- transformations occur at the sediment-water interface rather than in the water column. In some instances, denitrification genes were at higher concentrations attached to sediment downstream of the discharge pipe than upstream of the pipe, suggesting that the wastewater discharge may be enriching the downstream sediment and could promote in-stream denitrification.
McCollom, Brittany A; Collis, Jon M
2014-09-01
A normal mode solution to the ocean acoustic problem of the Pekeris waveguide with an elastic bottom using a Green's function formulation for a compressional wave point source is considered. Analytic solutions to these types of waveguide propagation problems are strongly dependent on the eigenvalues of the problem; these eigenvalues represent horizontal wavenumbers, corresponding to propagating modes of energy. The eigenvalues arise as singularities in the inverse Hankel transform integral and are specified by roots to a characteristic equation. These roots manifest themselves as poles in the inverse transform integral and can be both subtle and difficult to determine. Following methods previously developed [S. Ivansson et al., J. Sound Vib. 161 (1993)], a root finding routine has been implemented using the argument principle. Using the roots to the characteristic equation in the Green's function formulation, full-field solutions are calculated for scenarios where an acoustic source lies in either the water column or elastic half space. Solutions are benchmarked against laboratory data and existing numerical solutions.
Medical libraries, bioinformatics, and networked information: a coming convergence?
Lynch, C
1999-10-01
Libraries will be changed by technological and social developments that are fueled by information technology, bioinformatics, and networked information. Libraries in highly focused settings such as the health sciences are at a pivotal point in their development as the synthesis of historically diverse and independent information sources transforms health care institutions. Boundaries are breaking down between published literature and research data, between research databases and clinical patient data, and between consumer health information and professional literature. This paper focuses on the dynamics that are occurring with networked information sources and the roles that libraries will need to play in the world of medical informatics in the early twenty-first century.
Transforming Functions by Rescaling Axes
ERIC Educational Resources Information Center
Ferguson, Robert
2017-01-01
Students are often asked to plot a generalised parent function from their knowledge of a parent function. One approach is to sketch the parent function, choose a few points on the parent function curve, transform and plot these points, and use the transformed points as a guide to sketching the generalised parent function. Another approach is to…
NASA Astrophysics Data System (ADS)
Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.
2017-12-01
Atmospheric methane (CH4) plays an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system, and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we picked two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The cattle feedlot in Chino, California, has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of the GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. We propose that the next-generation instruments for accurate anthropogenic CO2 and CH4 flux estimation have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling-pattern optimization study that combines local emission source and global survey observations.
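A back-of-the-envelope sketch of the kind of mass-balance estimate the abstract describes, with every number assumed for illustration (these are not GOSAT or paper values): emission rate ≈ excess column mass × plume width × wind speed.

```python
# illustrative single-source mass-balance estimate (all inputs are assumed values)
M_CH4 = 16.04e-3                 # kg per mol of CH4
N_A = 6.022e23                   # molecules per mol
excess_column = 2.0e19           # molecules/m^2 enhancement above the reference point
plume_width = 9.3e3              # m, of the order of the TANSO-FTS footprint diameter
wind_speed = 3.0                 # m/s, from a surface station or a WRF estimate

excess_mass = excess_column / N_A * M_CH4                  # kg/m^2 of excess CH4
emission_rate = excess_mass * plume_width * wind_speed     # kg/s crossing the footprint
print(f"~{emission_rate * 3600:.0f} kg CH4 per hour")
```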
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained using principal component analysis. Then a feature descriptor for each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences in a pair of point clouds are determined according to the descriptor similarity of key points in the source point cloud and the target point cloud. Correspondences are optimized by using a random sample consensus algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better anti-noise performance.
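The multiscale-descriptor and correspondence-optimization stages are not reproduced here; the sketch below only shows the final step named in the abstract, assuming matched correspondences are already available: the rigid transformation recovered via singular value decomposition (the Kabsch procedure).

```python
import numpy as np

def rigid_transform(src, dst):
    """Rotation R and translation t minimising ||R @ src_i + t - dst_i|| via SVD."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(2)
P = rng.standard_normal((50, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -1.0, 2.0])
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, [0.5, -1.0, 2.0]))   # True True
```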
Sim, Won-Jin; Kim, Hee-Young; Choi, Sung-Deuk; Kwon, Jung-Hwan; Oh, Jeong-Eun
2013-03-15
We investigated 33 pharmaceuticals and personal care products (PPCPs), with emphasis on anthelmintics and their metabolites, in human sanitary waste treatment plants (HTPs), sewage treatment plants (STPs), hospital wastewater treatment plants (HWTPs), livestock wastewater treatment plants (LWTPs), river water, and seawater. The PPCPs showed characteristic occurrence patterns according to the wastewater sources. The LWTPs and HTPs showed higher levels (up to 3000 times higher in influents) of anthelmintics than other wastewater treatment plants, indicating that livestock wastewater and human sanitary waste are among the principal sources of anthelmintics. Among anthelmintics, fenbendazole and its metabolites were relatively high in the LWTPs, while human anthelmintics such as albendazole and flubendazole were most dominant in the HTPs, STPs, and HWTPs. The occurrence pattern of fenbendazole's metabolites in water differed from that reported in pharmacokinetic studies, suggesting the possibility of transformation mechanisms other than metabolism in animal bodies, through processes as yet unknown. The river water and seawater are generally affected by the point sources, but the distribution patterns in some receiving waters differ slightly from those of the effluent, indicating the influence of non-point sources. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lachat, E.; Landes, T.; Grussenmeyer, P.
2018-05-01
Terrestrial and airborne laser scanning, photogrammetry, and more generally 3D recording techniques are used in a wide range of applications. After recording several individual 3D datasets known in local systems, one of the first crucial processing steps is the registration of these data into a common reference frame. To perform such a 3D transformation, commercial and open-source software as well as programs from the academic community are available. Because of shortcomings in terms of computational transparency and quality assessment in these solutions, it was decided to develop the open-source algorithm presented in this paper. It is dedicated to the simultaneous registration of multiple point clouds as well as their georeferencing. The idea is to use this algorithm as a starting point for further implementations, including the possibility of combining 3D data from different sources. In parallel to the presentation of the global registration methodology which has been employed, the aim of this paper is to compare the results achieved this way with the above-mentioned existing solutions. For this purpose, first results obtained with the proposed algorithm for the global registration of ten laser scanning point clouds are presented. An analysis of the quality criteria delivered by two selected software packages used in this study, and a reflection on these criteria, is also performed to complete the comparison of the obtained results. The final aim of this paper is to validate the current efficiency of the proposed method through these comparisons.
Prediction of subsonic vortex shedding from forebodies with chines
NASA Technical Reports Server (NTRS)
Mendenhall, Michael R.; Lesieutre, Daniel J.
1990-01-01
An engineering prediction method and associated computer code VTXCHN to predict nose vortex shedding from circular and noncircular forebodies with sharp chine edges in subsonic flow at angles of attack and roll are presented. Axisymmetric bodies are represented by point sources and doublets, and noncircular cross sections are transformed to a circle by either analytical or numerical conformal transformations. The lee side vortex wake is modeled by discrete vortices in crossflow planes along the body; thus the three-dimensional steady flow problem is reduced to a two-dimensional, unsteady, separated flow problem for solution. Comparison of measured and predicted surface pressure distributions, flow field surveys, and aerodynamic characteristics are presented for noncircular bodies alone and forebodies with sharp chines.
Early Warning Signals of Social Transformation: A Case Study from the US Southwest.
Spielmann, Katherine A; Peeples, Matthew A; Glowacki, Donna M; Dugmore, Andrew
2016-01-01
Recent research in ecology suggests that generic indicators, referred to as early warning signals (EWS), may occur before significant transformations, both critical and non-critical, in complex systems. Up to this point, research on EWS has largely focused on simple models and controlled experiments in ecology and climate science. When humans are considered in these arenas they are invariably seen as external sources of disturbance or management. In this article we explore ways to include societal components of socio-ecological systems directly in EWS analysis. Given the growing archaeological literature on 'collapses,' or transformations, in social systems, we investigate whether any early warning signals are apparent in the archaeological records of the build-up to two contemporaneous cases of social transformation in the prehistoric US Southwest, Mesa Verde and Zuni. The social transformations in these two cases differ in scope and severity, thus allowing us to explore the contexts under which warning signals may (or may not) emerge. In both cases our results show increasing variance in settlement size before the transformation, but increasing variance in social institutions only before the critical transformation in Mesa Verde. In the Zuni case, social institutions appear to have managed the process of significant social change. We conclude that variance is of broad relevance in anticipating social change, and the capacity of social institutions to mitigate transformation is critical to consider in EWS research on socio-ecological systems.
Clinical Pharmacology & Therapeutics: Past, Present and Future
Waldman, SA; Terzic, A
2016-01-01
Clinical Pharmacology & Therapeutics (CPT), the definitive and timely source for advances in human therapeutics, transcends the drug discovery, development, regulation and utilization continuum to catalyze, evolve and disseminate discipline-transformative knowledge. Prioritized themes and multidisciplinary content drive the science and practice of clinical pharmacology, offering a trusted point of reference. An authoritative herald across global communities, CPT is a timeless information vehicle at the vanguard of discovery, translation and application ushering therapeutic innovation into modern health care. PMID:28194770
Transient pressure analysis of fractured well in bi-zonal gas reservoirs
NASA Astrophysics Data System (ADS)
Zhao, Yu-Long; Zhang, Lie-Hui; Liu, Yong-hui; Hu, Shu-Yong; Liu, Qi-Guo
2015-05-01
For a hydraulically fractured well, evaluating the properties of the fracture and the formation is always a tough job, and it is very complex to do so with conventional methods, especially for a partially penetrating fractured well. Although the source function is a very powerful tool for analyzing the transient pressure of wells with complex structures, the corresponding reports on gas reservoirs are rare. In this paper, the continuous point-source functions in anisotropic reservoirs are derived on the basis of source-function theory, the Laplace transform method, and the Duhamel principle. By applying the construction method, the continuous point-source functions in a bi-zonal gas reservoir with closed upper and lower boundaries are obtained. Subsequently, physical models and transient pressure solutions are developed for fully and partially penetrating fractured vertical wells in this reservoir. Type curves of dimensionless pseudo-pressure and its derivative as functions of dimensionless time are plotted using a numerical inversion algorithm, and the flow periods and sensitive factors are also analyzed. The source functions and fractured-well solutions have both theoretical and practical applications in well-test interpretation for such gas reservoirs, especially for wells with a stimulated reservoir volume created by massive hydraulic fracturing in unconventional gas reservoirs, which can usually be described with the composite model.
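The paper's Laplace-domain solutions are not reproduced, but the numerical-inversion step it mentions can be illustrated with the Gaver-Stehfest algorithm, a common choice for well-test type curves; the sketch below inverts a known transform as a sanity check.

```python
import numpy as np
from math import factorial, log

def stehfest_coeffs(N=12):
    """Gaver-Stehfest weights (N must be even)."""
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + N // 2) * s
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace-domain image F(s)."""
    V = stehfest_coeffs(N)
    s_values = np.arange(1, N + 1) * log(2.0) / t
    return log(2.0) / t * np.sum(V * np.array([F(s) for s in s_values]))

# sanity check: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0), np.exp(-1.0))
```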
Photoacoustic image reconstruction: a quantitative analysis
NASA Astrophysics Data System (ADS)
Sperl, Jonathan I.; Zell, Karin; Menzenbach, Peter; Haisch, Christoph; Ketzer, Stephan; Marquart, Markus; Koenig, Hartmut; Vogel, Mika W.
2007-07-01
Photoacoustic imaging is a promising new way to generate unprecedented contrast in ultrasound diagnostic imaging. It differs from other medical imaging approaches in that it provides spatially resolved information about the optical absorption of targeted tissue structures. Because the data acquisition process deviates from standard clinical ultrasound, the choice of a proper image reconstruction method is crucial for successful application of the technique. In the literature, multiple approaches have been advocated, and the purpose of this paper is to compare four reconstruction techniques, focusing on resolution limits, stability, reconstruction speed, and SNR. We generated experimental and simulated data and reconstructed images of the pressure distribution using four different methods: delay-and-sum (DnS), circular backprojection (CBP), generalized 2D Hough transform (HTA), and Fourier transform (FTA). All methods were able to depict the point sources properly. DnS and CBP produce blurred images containing typical superposition artifacts. The HTA provides excellent SNR and allows good point-source separation. The FTA is the fastest and shows the best FWHM. In our study, we found the FTA to show the best overall performance: it allows a very fast and theoretically exact reconstruction. Only a hardware-implemented DnS might be faster and enable real-time imaging. A commercial system may also implement several methods to fully utilize the new contrast mechanism and guarantee optimal resolution and fidelity.
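Of the four methods compared, delay-and-sum is the simplest to sketch; the version below backprojects sensor time traces onto image pixels by time of flight (array geometry, sampling rate, and sound speed are all placeholders to be supplied by the caller).

```python
import numpy as np

def delay_and_sum(signals, sensor_xy, pixel_xy, c, fs):
    """Delay-and-sum reconstruction: sum each sensor trace at the sample
    corresponding to the pixel-to-sensor time of flight.
    signals: (n_sensors, n_samples); sensor_xy, pixel_xy: (n, 2) positions in metres;
    c: speed of sound in m/s; fs: sampling rate in Hz."""
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(pixel_xy))
    for s in range(n_sensors):
        dist = np.linalg.norm(pixel_xy - sensor_xy[s], axis=1)
        idx = np.round(dist / c * fs).astype(int)        # time-of-flight sample index
        valid = idx < n_samples
        image[valid] += signals[s, idx[valid]]
    return image
```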
Polarization transformation as an algorithm for automatic generalization and quality assessment
NASA Astrophysics Data System (ADS)
Qian, Haizhong; Meng, Liqiu
2007-06-01
For decades it has been a dream of cartographers to computationally mimic the generalization processes in human brains for the derivation of various small-scale target maps or databases from a large-scale source map or database. This paper addresses in a systematic way the polarization transformation (PT) - a new algorithm that serves both the purpose of automatic generalization of discrete features and that of quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system, which can then be unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle, respectively. After the transformation, the original features will correspond to nodes on the spectrum line delimited between 0° and 360° along the horizontal axis, and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions, so that automatic generalization and quality assurance can be done in this way. Examples illustrate that the PT algorithm meets the requirements of generalization of discrete spatial features and puts it on a more rigorous footing.
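A minimal sketch of the forward transformation as described (the generalization operators and quality measures themselves are not included): 2-D points are mapped to polar coordinates about a chosen origin and sorted by angle, giving the nodes of the spectrum line r = f(α).

```python
import numpy as np

def polarization_transform(points, origin=None):
    """Map 2-D points to the spectrum line r = f(alpha): polar radius versus
    polar angle (degrees, 0-360) about `origin`, sorted by angle."""
    pts = np.asarray(points, dtype=float)
    o = pts.mean(axis=0) if origin is None else np.asarray(origin, dtype=float)
    d = pts - o
    r = np.hypot(d[:, 0], d[:, 1])
    alpha = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    order = np.argsort(alpha)
    return alpha[order], r[order]

alpha, r = polarization_transform([[2, 0], [0, 3], [-1, -1]])
print(alpha, r)     # angles in [0, 360) and radii about the cluster centroid
```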
Characteristics of laser-induced plasma as a spectroscopic light emission source
NASA Astrophysics Data System (ADS)
Ma, Q. L.; Motto-Ros, V.; Lei, W. Q.; Wang, X. C.; Boueri, M.; Laye, F.; Zeng, C. Q.; Sausy, M.; Wartelle, A.; Bai, X. S.; Zheng, L. J.; Zeng, H. P.; Baudelet, M.; Yu, J.
2012-05-01
Laser-induced plasma is today a widespread spectroscopic emission source. It can be easily generated using compact and reliable nanosecond pulsed lasers and finds applications in various domains with laser-induced breakdown spectroscopy (LIBS). It is, however, a particular medium that is intrinsically a transient and non-point light-emitting source. Its time- and space-resolved diagnostics are therefore crucial for its optimized use. In this paper, we review our work on the investigation of the morphology and the evolution of the plasma. Different time scales relevant to the description of the plasma's kinetics and dynamics are covered by suitable techniques. Our results show the detailed evolution and transformation of the plasma with high temporal and spatial resolution. The effects of the laser parameters as well as the background gas are particularly studied.
Error reduction in three-dimensional metrology combining optical and touch probe data
NASA Astrophysics Data System (ADS)
Gerde, Janice R.; Christens-Barry, William A.
2010-08-01
Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample's outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer-surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting-line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates, error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set, and error from the reference frame transformation. These error sources can influence the calculated surface areas. We describe experiments to assess the error magnitude, the sensitivity of calculated results to these errors, and ways of minimizing the error impact on calculated quantities. Ultimately, we must ensure that the statistical error from these procedures is minimized and within acceptance criteria.
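A sketch of the insole-surface step only, assuming the touch-probe points are already in the common reference frame: a low-order polynomial surface z ≈ f(x, y) fitted by least squares, returned as a callable that can be extrapolated toward the outer-surface mesh.

```python
import numpy as np

def fit_poly_surface(x, y, z, deg=2):
    """Least-squares polynomial surface z ~ f(x, y) of total degree `deg`."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x ** i * y ** j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return lambda xq, yq: sum(c * xq ** i * yq ** j for c, (i, j) in zip(coef, terms))
```

Extrapolating such a fit beyond the probed region is exactly the error source the abstract discusses, which argues for keeping the polynomial degree low.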
Mechanisms of cell transformation in the embryonic heart.
Huang, J X; Potts, J D; Vincent, E B; Weeks, D L; Runyan, R B
1995-03-27
The process of cell transformation in the heart is a complex one. By use of the invasion bioassay, we have been able to identify several critical components of the cell transformation process in the heart. TGF beta 3 can be visualized as a switch in the environment that contributes to the initial process of cell transformation. Our data show that it is a critical switch in the transformation process. Even so, it is apparently only one of the factors involved. Others may include other TGF beta family members, the ES antigens described by Markwald and co-workers, and additional unknown substances. Given the sensitivity of the process to pertussis toxin, there is likely to be a G-protein-linked receptor involved, yet we have not identified a known ligand for this type of receptor. Clearly, there are several different signal transduction processes involved. The existence of multiple pathways is consistent with the idea that the target endothelial cells receive a variety of environmental inputs, the sum of which will produce cell transformation at the correct time and place. Adjacent endothelial cells of the ventricle that do not undergo cell transformation are apparently refractory to one or more of the stimuli. Figure 4 depicts a summary diagram of this invasion process with localization of most of the molecules mentioned in this narrative. As hypothesized here, elements of the transformation process may recapitulate aspects of gastrulation. Since some conservation of mechanism is expected in cells, it is not surprising that cells undergoing phenotypic change might reutilize mechanisms used previously to produce mesenchyme from the blastodisk. Though we have preliminary data to suggest this point, confirmation of the hypothesis by perturbation of genes such as brachyury, msx-1, etc. will be required to establish it. The advantage of this hypothesis is that it provides, from the work of others in the area of gastrulation, a ready source of molecules and mechanisms that can be tested in the transforming heart. Whereas perturbation of such mechanisms at gastrulation may be lethal to the embryo, such molecules and mechanisms may be responsible for the high incidence of birth defects in the heart.
NASA Astrophysics Data System (ADS)
Kholmetskii, Alexander; Missevitch, Oleg; Yarman, Tolga
2016-02-01
We address the Poynting theorem for the bound (velocity-dependent) electromagnetic field and demonstrate that the standard expressions for the electromagnetic energy flux and the related field momentum, in general, come into contradiction with the relativistic transformation of the four-vector of total energy-momentum. We show that this inconsistency stems from the incorrect application of the Poynting theorem to a system of discrete point-like charges, when the terms of self-interaction in the product j · E (where the current density j and the bound electric field E are generated by the same source charge) are exogenously omitted. Implementing a transformation of the Poynting theorem to a form in which the terms of self-interaction are eliminated via the Maxwell equations and vector calculus in a mathematically rigorous way (Kholmetskii et al., Phys Scr 83:055406, 2011), we obtain a novel expression for the field momentum, which is fully compatible with the Lorentz transformation of the total energy-momentum. The results obtained are discussed along with the novel expression for the electromagnetic energy-momentum tensor.
The Coordinate Transformation Method of High Resolution dem Data
NASA Astrophysics Data System (ADS)
Yan, Chaode; Guo, Wang; Li, Aimin
2018-04-01
Coordinate transformation methods for DEM data can be divided into two categories. One reconstructs the DEM from the original vector elevation data; the other transforms DEM data blocks using transformation parameters. However, the former does not work in the absence of the original vector data, and the latter may cause errors at the joints between adjoining blocks of high-resolution DEM data. In view of this problem, a method for the coordinate transformation of high-resolution DEM data is proposed. The method converts the DEM data into discrete vector elevation points and then adjusts the positions of the points by bi-linear interpolation. Finally, a TIN is generated from the transformed points, and the new DEM data in the target coordinate system are reconstructed from the TIN. An algorithm which can find blocks and transform them automatically is given in this paper. The method is tested on different terrains and proved to be feasible and valid.
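A rough sketch of the point-based pipeline described above, assuming a user-supplied coordinate transform (the paper's bi-linear seam adjustment and automatic block detection are not reproduced): the grid is scattered to elevation points, the points are mapped to the target system, and the new grid is rebuilt by Delaunay-based linear interpolation, which plays the role of the TIN.

```python
import numpy as np
from scipy.interpolate import griddata

def transform_dem(x, y, z, transform, grid_x, grid_y):
    """Rebuild a DEM in a target coordinate system from transformed elevation points.
    x, y, z: source grid arrays; transform: callable (x, y) -> (tx, ty);
    grid_x, grid_y: target-system grids (e.g. from np.meshgrid)."""
    tx, ty = transform(x.ravel(), y.ravel())          # user-supplied coordinate transform
    return griddata(np.column_stack([tx, ty]), z.ravel(),
                    (grid_x, grid_y), method='linear')   # TIN-like linear interpolation
```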
Aureole radiance field about a source in a scattering-absorbing medium.
Zachor, A S
1978-06-15
A technique is described for computing the aureole radiance field about a point source in a medium that absorbs and scatters according to an arbitrary phase function. When applied to an isotropic source in a homogeneous medium, the method uses a double-integral transform which is evaluated recursively to obtain the aureole radiances contributed by successive scattering orders, as in the Neumann solution of the radiative transfer equation. The normalized total radiance field distribution and the variation of flux with field of view and range are given for three wavelengths in the UV and one in the visible, for a sea-level model atmosphere assumed to scatter according to a composite of the Rayleigh and modified Henyey-Greenstein phase functions. These results have application to the detection and measurement of uncollimated UV and visible sources at short ranges in the lower atmosphere.
Geometric registration of images by similarity transformation using two reference points
NASA Technical Reports Server (NTRS)
Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)
2011-01-01
A method for registering a first image to a second image using a similarity transformation. Each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined. The second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is performed. This transformation scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
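A small sketch following the two steps in the claim, with made-up coordinates: translate the second image's coordinates so the first reference point matches, then scale and rotate about that point (conveniently done with complex arithmetic) so the second reference point matches as well.

```python
import numpy as np

def two_point_similarity(p1_a, p2_a, p1_b, p2_b, pts_b):
    """Map image-B coordinates into image A using two reference points."""
    p1_a, p2_a = np.asarray(p1_a, float), np.asarray(p2_a, float)
    p1_b, p2_b = np.asarray(p1_b, float), np.asarray(p2_b, float)
    pts = np.asarray(pts_b, float) + (p1_a - p1_b)           # step 1: Cartesian translation
    p2_bt = p2_b + (p1_a - p1_b)
    w = complex(*(p2_a - p1_a)) / complex(*(p2_bt - p1_a))   # combined scale + rotation factor
    z = (pts - p1_a) @ np.array([1.0, 1.0j]) * w             # step 2: about the first reference point
    return np.column_stack([z.real, z.imag]) + p1_a

A1, A2 = (10.0, 20.0), (30.0, 40.0)      # reference points in the first image
B1, B2 = (0.0, 0.0), (10.0, 0.0)         # the same features in the second image
print(two_point_similarity(A1, A2, B1, B2, [B1, B2]))   # -> [[10. 20.] [30. 40.]]
```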
The inverse of winnowing: a FORTRAN subroutine and discussion of unwinnowing discrete data
Bracken, Robert E.
2004-01-01
This report describes an unwinnowing algorithm that utilizes a discrete Fourier transform, and a resulting Fortran subroutine that winnows or unwinnows a 1-dimensional stream of discrete data; the source code is included. The unwinnowing algorithm effectively increases (by integral factors) the number of available data points while maintaining the original frequency spectrum of a data stream. This has utility when an increased data density is required together with an availability of higher order derivatives that honor the original data.
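The subroutine itself is in Fortran and is not reproduced here; the sketch below shows the same idea in outline for uniformly sampled real data: zero-padding the discrete Fourier spectrum increases the number of samples by an integral factor while preserving the original frequency content.

```python
import numpy as np

def unwinnow(x, factor):
    """Upsample a real series by an integral factor with an unchanged spectrum."""
    x = np.asarray(x, dtype=float)
    n = x.size
    X = np.fft.rfft(x)
    X_pad = np.zeros(n * factor // 2 + 1, dtype=complex)
    X_pad[:X.size] = X
    if factor > 1 and n % 2 == 0:
        X_pad[n // 2] *= 0.5          # split the old Nyquist bin between +/- frequencies
    return np.fft.irfft(X_pad, n * factor) * factor

t = np.arange(16)
x = np.cos(2 * np.pi * 3 * t / 16)
y = unwinnow(x, 4)                    # 64 samples with the same spectrum
print(np.allclose(y[::4], x))         # the original samples are preserved -> True
```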
Clinical Pharmacology & Therapeutics: Past, Present, and Future.
Waldman, S A; Terzic, A
2017-03-01
Clinical Pharmacology & Therapeutics (CPT), the definitive and timely source for advances in human therapeutics, transcends the drug discovery, development, regulation, and utilization continuum to catalyze, evolve, and disseminate discipline-transformative knowledge. Prioritized themes and multidisciplinary content drive the science and practice of clinical pharmacology, offering a trusted point of reference. An authoritative herald across global communities, CPT is a timeless information vehicle at the vanguard of discovery, translation, and application ushering therapeutic innovation into modern healthcare. © 2017 American Society for Clinical Pharmacology and Therapeutics.
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources (so that so-called "data gaps" can appear), and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. The proposed technique is found to give the most accurate results, and its computational time is short. Thus, this simple method can feasibly address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
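A short sketch of the interpolation recipe (PCHIP stands in here for the Steffen spline used in the paper; both are local, monotonicity-preserving interpolants that avoid spurious oscillations): the data are fitted on log-log axes and mapped back to linear units.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def elf_interpolator(energy, elf):
    """Interpolate a sampled energy-loss function on log-log axes with a
    monotonicity-preserving cubic; returns a callable in linear units."""
    logE = np.log10(np.asarray(energy, dtype=float))
    logF = np.log10(np.asarray(elf, dtype=float))
    spline = PchipInterpolator(logE, logF)
    return lambda e: 10.0 ** spline(np.log10(e))
```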
New fast DCT algorithms based on Loeffler's factorization
NASA Astrophysics Data System (ADS)
Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon
2012-10-01
This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of the 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required for the realization, our proposed full factorization is 3~4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.
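Loeffler's factorization itself is not reproduced here; the check below only illustrates the baseline the speed-ups are measured against, a direct matrix-multiplication 32-point DCT-II, verified against scipy's fast transform.

```python
import numpy as np
from scipy.fft import dct

N = 32
n, k = np.meshgrid(np.arange(N), np.arange(N))
C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))      # unnormalised 32-point DCT-II basis
x = np.random.default_rng(1).standard_normal(N)
# scipy's dct (type 2, no normalisation) returns twice the plain cosine sum
print(np.allclose(C @ x, dct(x, type=2, norm=None) / 2))   # True
```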
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The endeavor of digital image fusion is to combine the important visual parts from various sources to enhance the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to obtain a fused image. This process mainly involves two steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using the HVS weights. Hence, qualitative sub-bands are selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority across state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
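A minimal single-level sketch using PyWavelets, with the plain maximum-selection rule only (the HVS weighting of sub-bands described in the paper is not reproduced): detail coefficients are chosen by larger magnitude and the approximation bands are averaged.

```python
import numpy as np
import pywt

def dwt_max_fusion(img_a, img_b, wavelet='db2'):
    """Single-level DWT fusion of two registered images of the same size."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, wavelet)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # maximum-selection rule
    fused = (0.5 * (cA1 + cA2),
             (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)
```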
Predict Brain MR Image Registration via Sparse Learning of Appearance and Transformation
Wang, Qian; Kim, Minjeong; Shi, Yonghong; Wu, Guorong; Shen, Dinggang
2014-01-01
We propose a new approach to register the subject image with the template by leveraging a set of intermediate images that are pre-aligned to the template. We argue that, if points in the subject and the intermediate images share similar local appearances, they may have a common correspondence in the template. In this way, we learn the sparse representation of a certain subject point to reveal several similar candidate points in the intermediate images. Each selected intermediate candidate can bridge the correspondence from the subject point to the template space, thus predicting the transformation associated with the subject point at a confidence level that relates to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points and retain multiple predictions for each key point, instead of allowing only a single correspondence. Then, by utilizing all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. We further embed this prediction-reconstruction protocol into a multi-resolution hierarchy. Finally, we refine the estimated transformation field via an existing registration method in an effective manner. We apply our method to registering brain MR images and conclude that the proposed framework is able to improve registration performance substantially. PMID:25476412
NASA Astrophysics Data System (ADS)
Jones, H. F.; Rivers, R. J.
2007-01-01
In the Schrödinger formulation of non-Hermitian quantum theories a positive-definite metric operator η ≡ e^{-Q} must be introduced in order to ensure their probabilistic interpretation. This operator also gives an equivalent Hermitian theory, by means of a similarity transformation. If, however, quantum mechanics is formulated in terms of functional integrals, we show that the Q operator makes only a subliminal appearance and is not needed for the calculation of expectation values. Instead, the relation to the Hermitian theory is encoded via the external source j(t). These points are illustrated and amplified for two non-Hermitian quantum theories: the Swanson model, a non-Hermitian transform of the simple harmonic oscillator, and the wrong-sign quartic oscillator, which has been shown to be equivalent to a conventional asymmetric quartic oscillator.
Lin, Chitsan; Liou, Naiwei; Sun, Endy
2008-06-01
An open-path Fourier transform infrared spectroscopy (OP-FTIR) system was set up for 3-day continuous line-averaged volatile organic compound (VOC) monitoring in a paint manufacturing plant. Seven VOCs (toluene, m-xylene, p-xylene, styrene, methanol, acetone, and 2-butanone) were identified in the ambient environment. The daytime-only batch operation mode was well explained by the time-series concentration plots. Major sources of methanol, m-xylene, acetone, and 2-butanone were identified in the southeast direction, where the paint-solvent manufacturing processes are located. However, an attempt to uncover sources of styrene was not successful because the method detection limit (MDL) of the OP-FTIR system was not sensitive enough to produce conclusive data. In the second scenario, the OP-FTIR system was set up in an industrial complex to distinguish the origins of several VOCs. Eight major VOCs were identified in the ambient environment. The pollutant-detection wind-rose percentage plots clearly showed that ethylene, propylene, 2-butanone, and toluene mainly originated from the tank storage area, whereas n-butane came mainly from the butadiene manufacturing processes of the refinery plant, and ammonia was identified as an accompanying reduction product of the gasoline desulfurization process. Advantages of OP-FTIR include its ability to simultaneously and continuously analyze many compounds, and its long path-length monitoring has also shown advantages in obtaining more comprehensive data than traditional multiple single-point monitoring methods.
Early Warning Signals of Social Transformation: A Case Study from the US Southwest
2016-01-01
Recent research in ecology suggests that generic indicators, referred to as early warning signals (EWS), may occur before significant transformations, both critical and non-critical, in complex systems. Up to this point, research on EWS has largely focused on simple models and controlled experiments in ecology and climate science. When humans are considered in these arenas they are invariably seen as external sources of disturbance or management. In this article we explore ways to include societal components of socio-ecological systems directly in EWS analysis. Given the growing archaeological literature on ‘collapses,’ or transformations, in social systems, we investigate whether any early warning signals are apparent in the archaeological records of the build-up to two contemporaneous cases of social transformation in the prehistoric US Southwest, Mesa Verde and Zuni. The social transformations in these two cases differ in scope and severity, thus allowing us to explore the contexts under which warning signals may (or may not) emerge. In both cases our results show increasing variance in settlement size before the transformation, but increasing variance in social institutions only before the critical transformation in Mesa Verde. In the Zuni case, social institutions appear to have managed the process of significant social change. We conclude that variance is of broad relevance in anticipating social change, and the capacity of social institutions to mitigate transformation is critical to consider in EWS research on socio-ecological systems. PMID:27706200
Vanishing points detection using combination of fast Hough transform and deep learning
NASA Astrophysics Data System (ADS)
Sheshkus, Alexander; Ingacheva, Anastasia; Nikolaev, Dmitry
2018-04-01
In this paper we propose a novel method for vanishing point detection based on a convolutional neural network (CNN) and the fast Hough transform algorithm. We show how to define a fast Hough transform neural network layer and how to use it to increase the usability of the neural network approach for the vanishing point detection task. Our algorithm consists of a CNN with a sequence of convolutional and fast Hough transform layers, and it builds an estimator for the distribution of possible vanishing points in the image. This distribution can be used to find vanishing point candidates. We provide experimental results from tests of the suggested method using images collected from videos of road trips. Our approach shows stable results on test images with different projective distortions and noise. The described approach can be implemented effectively on mobile GPUs and CPUs.
Damage Identification in Beam Structure using Spatial Continuous Wavelet Transform
NASA Astrophysics Data System (ADS)
Janeliukstis, R.; Rucevskis, S.; Wesolowski, M.; Kovalovs, A.; Chate, A.
2015-11-01
In this paper the applicability of the spatial continuous wavelet transform (CWT) technique for damage identification in a beam structure is analyzed by applying different types of wavelet functions and scaling factors. The proposed method uses exclusively mode shape data from the damaged structure. To examine the limitations of the method and to ascertain its sensitivity to noisy experimental data, several sets of simulated data are analyzed. Simulated test cases include numerical mode shapes corrupted by different levels of random noise as well as mode shapes with different numbers of measurement points used for the wavelet transform. A broad comparison of the ability of different wavelet functions to detect and locate damage in a beam structure is given. The effectiveness and robustness of the proposed algorithms are demonstrated experimentally on two aluminum beams, each containing a single mill-cut damage. The modal frequencies and the corresponding mode shapes are obtained via finite element models for the numerical simulations and by using a scanning laser vibrometer with a PZT actuator as the vibration excitation source for the experimental study.
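A minimal, synthetic sketch (not the authors' code) of the core operation described above: a spatial CWT of a damaged-beam mode shape, where a small localized irregularity at an assumed position stands in for the mill-cut damage and the peak wavelet modulus at fine scales indicates its location. All values are illustrative.

```python
# Hedged, synthetic sketch: spatial CWT of a mode shape with a local irregularity.
import numpy as np
import pywt

n = 401
x = np.linspace(0.0, 1.0, n)                        # positions of the "measurement points"
mode = np.sin(np.pi * x)                            # idealized first bending mode
mode -= 3e-3 * np.exp(-((x - 0.62) / 0.01) ** 2)    # local irregularity at x = 0.62 (assumed damage)
mode += np.random.normal(0.0, 1e-5, n)              # measurement noise

scales = np.arange(1, 17)                           # fine scaling factors
coeffs, _ = pywt.cwt(mode, scales, 'gaus2')         # 2nd-derivative-of-Gaussian wavelet

response = np.abs(coeffs).max(axis=0)               # max modulus over scales at each point
peak = response[20:-20].argmax() + 20               # ignore edge-effect regions
print("suspected damage near x =", round(x[peak], 3))
```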
Transformation of chlorpyrifos and chlorpyrifos-methyl in prairie pothole pore waters.
Adams, Rachel M; McAdams, Brandon C; Arnold, William A; Chin, Yu-Ping
2016-11-09
Non-point source pesticide pollution is a concern for wetlands in the prairie pothole region (PPR). Recent studies have demonstrated that reduced sulfur species (e.g., bisulfide and polysulfides) in PPR wetland pore waters directly undergo reactions with chloroacetanilide and dinitroaniline compounds. In this paper, the abiotic transformation of two organophosphate compounds, chlorpyrifos and chlorpyrifos-methyl, was studied in PPR wetland pore waters. Chlorpyrifos-methyl reacted significantly faster (up to 4 times) in pore water with reduced sulfur species relative to hydrolysis. No rate enhancement was observed in the transformation of chlorpyrifos in pore water with reduced sulfur species. The lack of reactivity was most likely caused by steric hindrance from the ethyl groups and partitioning to dissolved organic matter (DOM), thereby shielding chlorpyrifos from nucleophilic attack. Significant decreases in reaction rates were observed for chlorpyrifos in pore water with high concentrations of DOM. Rate enhancement due to other reactive species (e.g., organo-sulfur compounds) in pore water was minor for both compounds relative to the influence of bisulfide and DOM.
Konishi, Tatsunori; Harata, Masahiko
2014-01-01
We show here that the transformation efficiency of Saccharomyces cerevisiae is improved by altering carbon sources in media for pre-culturing cells prior to the transformation reactions. The transformation efficiency was increased up to sixfold by combination with existing transformation protocols. This method is widely applicable for yeast research since efficient transformation can be performed easily without changing any of the other procedures in the transformation.
Financing Renewable Energy Projects in Developing Countries: A Critical Review
NASA Astrophysics Data System (ADS)
Donastorg, A.; Renukappa, S.; Suresh, S.
2017-08-01
Access to clean and stable energy, meeting sustainable development goals, and fossil fuel dependency and depletion are some of the reasons that have pushed developing countries to transform the business-as-usual economy into a more sustainable economy. However, access to and availability of finance are a major challenge for many developing countries. Financing renewable energy projects requires access to significant resources, by multiple parties, at varying points in the project life cycle. This research aims to investigate sources of, and new trends in, financing RE projects in developing countries. For this purpose, a detailed and in-depth literature review has been conducted to explore the sources and trends of current RE financial investment and projects and to understand the gaps and limitations. This paper concludes that there are various internal and external sources of finance available for RE projects in developing countries.
NASA Astrophysics Data System (ADS)
de Oliveira, Lília M.; Santos, Nádia A. P.; Maillard, Philippe
2013-10-01
Non-point source pollution (NPSP) is perhaps the leading cause of water quality problems and one of the most challenging environmental issues, given the difficulty of modeling and controlling it. In this article, we applied the Manning equation, a hydraulic concept, to improve models of non-point source pollution and to determine its influence as a function of slope and land-cover roughness on runoff reaching the stream. In our study the equation is somewhat taken out of its usual context and applied to the flow of an entire watershed. Here a digital elevation model (DEM) from the SRTM mission was used to compute the slope, and data from the RapidEye satellite constellation were used to produce a land cover map later transformed into a roughness surface. The methodology is applied to a 1433 km2 watershed in Southeast Brazil mostly covered by forest, pasture, urban areas and wetlands. The model was used to create slope-based buffers of varying width in which the proportions of land cover and the roughness coefficients were obtained. Next we correlated these data, through regression, with four water quality parameters measured in situ: nitrate, phosphorous, faecal coliform and turbidity. We compared our results with the ones obtained with fixed buffers. It was found that the slope-based buffers outperformed the fixed buffers, with coefficients of determination up to 15% higher.
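For illustration only, a worked use of Manning's equation v = (1/n) R^(2/3) S^(1/2) as the slope/roughness index of how quickly overland runoff can reach the stream. The roughness coefficients and hydraulic radius below are typical textbook values, not those used in the study.

```python
# Illustrative Manning-equation velocities for different land covers and slopes.
import numpy as np

manning_n = {"forest": 0.10, "pasture": 0.05, "urban": 0.015, "wetland": 0.07}  # assumed values
hydraulic_radius = 0.05                      # m, assumed thin sheet flow
slope = np.array([0.02, 0.08, 0.15])         # rise/run derived from a DEM

for cover, n in manning_n.items():
    v = (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)
    print(cover, np.round(v, 3), "m/s for slopes", slope)
```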
Navigational Aids: The Phenomenology of Transformative Learning
ERIC Educational Resources Information Center
Mälkki, Kaisu; Green, Larry
2014-01-01
Although the notion of transformative learning points to a desirable destination for educational endeavors, the difficulty in the journey is often neglected. Our intention is to map the experiential micro-processes involved in transformative learning such that the phenomenon is illuminated from a first-person rather than third-person point of…
NASA Astrophysics Data System (ADS)
Aubrey, A. D.; Thorpe, A. K.; Christensen, L. E.; Dinardo, S.; Frankenberg, C.; Rahn, T. A.; Dubey, M.
2013-12-01
It is critical to constrain both natural and anthropogenic sources of methane to better predict the impact on global climate change. Critical technologies for this assessment include those that can detect methane point and concentrated diffuse sources over large spatial scales. Airborne spectrometers can potentially fill this gap for large-scale remote sensing of methane, while in situ sensors, both ground-based and mounted on aerial platforms, can monitor and quantify at small to medium spatial scales. The Jet Propulsion Laboratory (JPL) and collaborators recently conducted a field test located near Casper, WY, at the Rocky Mountain Oilfield Test Center (RMOTC). These tests were focused on demonstrating the performance of remote and in situ sensors for quantification of point-sourced methane. A series of three controlled release points was set up at RMOTC, and over the course of six experiment days the point source flux rates were varied from 50 LPM to 2400 LPM (liters per minute). During these releases, in situ sensors measured real-time methane concentration from field towers (downwind from the release point) and from a small Unmanned Aerial System (sUAS) used to characterize the spatiotemporal variability of the plume structure. Concurrent with these methane point source controlled releases, airborne sensor overflights were conducted using three aircraft. The NASA Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) participated with a payload consisting of a Fourier Transform Spectrometer (FTS) and an in situ methane sensor. Two imaging spectrometers provided assessment of optical and thermal infrared detection of methane plumes. The AVIRIS-next generation (AVIRIS-ng) sensor has been demonstrated for detection of atmospheric methane in the short wave infrared region, specifically using the absorption features at ~2.3 μm. Detection of methane in the thermal infrared region was evaluated by flying the Hyperspectral Thermal Emission Spectrometer (HyTES), with retrievals that interrogate spectral features in the 7.5 to 8.5 μm region. Here we discuss preliminary results from the JPL activities during the RMOTC controlled release experiment, including capabilities of airborne sensors for total columnar atmospheric methane detection and comparison to results from ground measurements and dispersion models. Potential application areas for these remote sensing technologies include assessment of anthropogenic and natural methane sources over wide spatial scales that represent significant unconstrained factors in the global methane budget.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torcellini, Paul A.; Bonnema, Eric; Goldwasser, David
Building energy consumption can only be measured at the site or at the point of utility interconnection with a building. Often, to evaluate the total energy impact, this site-based energy consumption is translated into source energy, that is, the energy at the point of fuel extraction. Consistent with this approach, the U.S. Department of Energy's (DOE) definition of zero energy buildings uses source energy as the metric to account for energy losses from the extraction, transformation, and delivery of energy. Other organizations, as well, use source energy to characterize the energy impacts. Four methods of making the conversion from site energy to source energy were investigated in the context of the DOE definition of zero energy buildings. These methods were evaluated based on three guiding principles--improve energy efficiency, reduce and stabilize power demand, and use power from nonrenewable energy sources as efficiently as possible. This study examines relative trends between strategies as they are implemented on very low-energy buildings to achieve zero energy. A typical office building was modeled and variations to this model were performed. The photovoltaic output that was required to create a zero energy building was calculated. Trends were examined with these variations to study the impacts of the calculation method on the building's ability to achieve zero energy status. The paper will highlight the different methods and give conclusions on the advantages and disadvantages of the methods studied.
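A minimal sketch of one possible site-to-source accounting, using illustrative (not official DOE) multipliers and synthetic consumption figures, to show how on-site PV generation credited at the electric conversion factor can net annual source energy to zero or below.

```python
# Illustrative site-to-source conversion; factors and quantities are assumptions.
site_use_kwh = {"electricity": 120_000, "natural_gas": 40_000}        # annual site energy (assumed)
pv_generation_kwh = 135_000                                           # on-site PV output (assumed)
factor = {"electricity": 3.15, "natural_gas": 1.09}                   # assumed site-to-source multipliers

source_use = sum(site_use_kwh[fuel] * factor[fuel] for fuel in site_use_kwh)
source_credit = pv_generation_kwh * factor["electricity"]             # PV credited at the electric factor
print("net source energy (kWh):", source_use - source_credit)         # <= 0 -> zero energy by this method
```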
Configuration Analysis of the ERS Points in Large-Volume Metrology System
Jin, Zhangjun; Yu, Cijun; Li, Jiangxiong; Ke, Yinglin
2015-01-01
In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers' coordinate systems and the assembly coordinate system are calculated by measuring the enhanced reference system (ERS) points. This article aims to understand how the configuration of the ERS points affects the transformation matrix errors, and then to optimize the deployment of the ERS points to reduce these errors. To this end, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by an experiment implemented on the factory floor. Based on the proposed model, a group of sensitivity coefficients is derived to evaluate the quality of the configuration of the ERS points, and several typical configurations of the ERS points are analyzed in detail with the sensitivity coefficients. Finally, general guidance is established for the deployment of the ERS points in terms of the layout, the volume size and the number of ERS points, as well as the position and orientation of the assembly coordinate system. PMID:26402685
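A hedged sketch (not the paper's error model) of the underlying computation: the best-fit rigid transformation between ERS point coordinates measured in a tracker frame and their coordinates in the assembly frame, obtained with the standard SVD (Kabsch) solution. Point values are synthetic.

```python
# Best-fit rigid transformation (rotation R, translation t) from point correspondences.
import numpy as np

def fit_rigid(P, Q):
    """P, Q: (n, 3) corresponding points; returns R, t such that Q ~= P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

P = np.random.rand(6, 3) * 10.0                                  # ERS points in the tracker frame
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)     # synthetic ground truth
Q = P @ R_true.T + np.array([5.0, 2.0, 0.3]) + np.random.normal(0, 0.02, P.shape)
R, t = fit_rigid(P, Q)
print("residual RMS:", np.sqrt(((P @ R.T + t - Q) ** 2).sum(axis=1).mean()))
```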
A Modular Multilevel Converter with Power Mismatch Control for Grid-Connected Photovoltaic Systems
Duman, Turgay; Marti, Shilpa; Moonem, M. A.; ...
2017-05-17
A modular multilevel power converter configuration for grid-connected photovoltaic (PV) systems is proposed. The converter configuration replaces the conventional bulky line-frequency transformer with several high-frequency transformers, potentially reducing the balance-of-systems cost of PV systems. The front-end converter for each port is a neutral-point diode clamped (NPC) multi-level dc-dc dual-active bridge (ML-DAB) which allows maximum power point tracking (MPPT). The integrated high-frequency transformer provides the galvanic isolation between the PV and grid sides and also steps up the low dc voltage from the PV source. Following the ML-DAB stage, in each port, is an NPC inverter. The outputs of N NPC inverters are cascaded to attain the per-phase line-to-neutral voltage to connect directly to the distribution grid (i.e., 13.8 kV). The cascaded NPC (CNPC) inverters have the inherent advantages of using lower-rated devices, smaller filters and the low total harmonic distortion required for PV grid interconnection. The proposed converter system is modular, scalable, and serviceable with zero downtime, with a lower footprint and lower overall cost. A novel voltage balance control at each module, based on the power mismatch among the N ports, has been presented and verified in simulation. Analysis and simulation results are presented for the N-port converter. The converter performance has also been verified on a hardware prototype.
Screening for biosurfactant production by 2,4,6-trinitrotoluene-transforming bacteria.
Avila-Arias, H; Avellaneda, H; Garzón, V; Rodríguez, G; Arbeli, Z; Garcia-Bonilla, E; Villegas-Plazas, M; Roldan, F
2017-08-01
To isolate and identify TNT-transforming cultures from explosive-contaminated soils with the ability to produce biosurfactants. Bacteria (pure and mixed cultures) were selected based on their ability to transform TNT in minimum media with TNT as the sole nitrogen source and an additional carbon source. TNT-transforming bacteria were identified by 16S rRNA gene sequencing. TNT transformation rates were significantly lower when no additional carbon or nitrogen sources were added. Surfactant production was enabled by the presence of TNT. Fourteen cultures were able to transform the explosive (>50%); of these, five showed a high transformation capacity (>90%), and six produced surfactants. All explosive-transforming cultures contained Proteobacteria of the genera Achromobacter, Stenotrophomonas, Pseudomonas, Sphingobium, Raoultella, Rhizobium and Methylopila. These cultures transformed TNT when an additional carbon source was added. Remarkably, Achromobacter spanius S17 and Pseudomonas veronii S94 have high TNT transformation rates and are surfactant producers. TNT is a highly toxic, mutagenic and carcinogenic nitroaromatic explosive; therefore, bioremediation to eliminate or mitigate its presence in the environment is essential. TNT-transforming cultures that produce surfactants are a promising method for remediation. To the best of our knowledge, this is the first report that links surfactant production and TNT transformation by bacteria. © 2017 The Society for Applied Microbiology.
Object-Based Coregistration of Terrestrial Photogrammetric and ALS Point Clouds in Forested Areas
NASA Astrophysics Data System (ADS)
Polewski, P.; Erickson, A.; Yao, W.; Coops, N.; Krzystek, P.; Stilla, U.
2016-06-01
Airborne Laser Scanning (ALS) and terrestrial photogrammetry are methods applicable for mapping forested environments. While ground-based techniques provide valuable information about the forest understory, the measured point clouds are normally expressed in a local coordinate system, whose transformation into a georeferenced system requires additional effort. In contrast, ALS point clouds are usually georeferenced, yet the point density near the ground may be poor under dense overstory conditions. In this work, we propose to combine the strengths of the two data sources by co-registering the respective point clouds, thus enriching the georeferenced ALS point cloud with detailed understory information in a fully automatic manner. Due to markedly different sensor characteristics, coregistration methods that expect a high geometric similarity between keypoints are not suitable in this setting. Instead, our method focuses on the object (tree stem) level. We first calculate approximate stem positions in the terrestrial and ALS point clouds and construct, for each stem, a descriptor which quantifies the 2D and vertical distances to other stem centers (at ground height). Then, the similarities between all descriptor pairs from the two point clouds are calculated, and standard graph maximum matching techniques are employed to compute corresponding stem pairs (tiepoints). Finally, the tiepoint subset yielding the optimal rigid transformation between the terrestrial and ALS coordinate systems is determined. We test our method on simulated tree positions and on a plot situated in the northern interior of the Coast Range in western Oregon, USA, using ALS data (76 x 121 m2) and a photogrammetric point cloud (33 x 35 m2) derived from terrestrial photographs taken with a handheld camera. Results on both simulated and real data show that the proposed stem descriptors are discriminative enough to derive good correspondences. Specifically, for the real plot data, 24 corresponding stems were coregistered with an average 2D position deviation of 66 cm.
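A sketch under simplifying assumptions (not the authors' implementation) of the matching step: each stem center gets a descriptor built from distances to its nearest neighbours, descriptor dissimilarities form a cost matrix, and an assignment solver proposes candidate tiepoint pairs. Stem positions are synthetic.

```python
# Candidate stem tiepoints from neighbour-distance descriptors and Hungarian matching.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def stem_descriptor(stems, k=4):
    """For each 2D stem centre, the sorted distances to its k nearest neighbours."""
    d = cdist(stems, stems)
    return np.sort(d, axis=1)[:, 1:k + 1]          # drop the zero self-distance

als_stems = np.random.rand(15, 2) * 50.0                          # stem centres from ALS (m)
shift = np.array([120.0, -40.0])                                  # unknown offset between frames
tls_stems = als_stems[:12] + shift + np.random.normal(0, 0.3, (12, 2))  # terrestrial subset

cost = cdist(stem_descriptor(tls_stems), stem_descriptor(als_stems))
rows, cols = linear_sum_assignment(cost)                          # candidate tiepoint pairs
print("first matched pairs (tls -> als):", list(zip(rows[:5], cols[:5])))
```

Because the descriptor uses only relative distances, it is invariant to the unknown translation between the two coordinate systems, which is what makes it usable before any transformation is known.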
NASA Technical Reports Server (NTRS)
Fink, P. W.; Khayat, M. A.; Wilton, D. R.
2005-01-01
It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly singular integrals. Recently, the authors have introduced the transformation u(x') = sinh^{-1}(x'/√{y'^2 + z^2}) for integrating functions of the form I = ∫_D Λ(r') e^{-jkR}/(4π R) dD, where Λ(r') is a vector or scalar basis function and R = √{x'^2 + y'^2 + z^2} is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with 1/R^2-type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
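A hedged one-dimensional illustration of the cancellation idea, where a constant offset d stands in for √{y'^2 + z^2}: with u = sinh^{-1}(x/d) one has dx/R = du, so the near-singular 1/R factor disappears from the integrand and ordinary Gauss-Legendre quadrature converges rapidly. This is a toy analogue, not the authors' 2-D scheme.

```python
# Singularity cancellation via the asinh substitution versus naive quadrature.
import numpy as np

d = 1e-4                                    # observation point just off the source segment
g = lambda x: np.cos(0.5 * x)               # smooth basis-function-like factor

xg, wg = np.polynomial.legendre.leggauss(16)     # Gauss-Legendre nodes/weights on [-1, 1]

# naive quadrature of g(x)/sqrt(x^2 + d^2): misses the sharp near-singular peak at x = 0
naive = np.sum(wg * g(xg) / np.sqrt(xg**2 + d**2))

# transformed quadrature: x = d*sinh(u), dx/R = du, integrand becomes smooth in u
u_max = np.arcsinh(1.0 / d)
u = u_max * xg                                   # map nodes to [-u_max, u_max]
transformed = np.sum(u_max * wg * g(d * np.sinh(u)))

print("naive:", round(naive, 3), " transformed:", round(transformed, 3))
# The transformed result is close to the true value (~19.7); the naive one badly underestimates it.
```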
NASA Astrophysics Data System (ADS)
Kraus, Adam H.
Moisture within a transformer's insulation system has been proven to degrade its dielectric strength. When installing a transformer in situ, one method used to calculate the moisture content of the transformer insulation is to measure the dew point temperature of the internal gas volume of the transformer tank. There are two instruments commercially available that are designed for dew point temperature measurement: the Alnor Model 7000 Dewpointer and the Vaisala DRYCAP® Hand-Held Dewpoint Meter DM70. Although these instruments perform an identical task, the design technology behind each instrument is vastly different. When the Alnor Dewpointer and Vaisala DM70 instruments are used to measure the dew point of the internal gas volume simultaneously from a pressurized transformer, their differences in dew point measurement have been observed to vary by as much as 30 °F. There is minimal scientific research available that focuses on the process of measuring the dew point of a gas inside a pressurized transformer, let alone this observed phenomenon. The primary objective of this work was to determine what effect certain factors potentially have on dew point measurements of a transformer's internal gas volume, in hopes of understanding the root cause of this phenomenon. Three factors that were studied include (1) human error, (2) the use of calibrated and out-of-calibration instruments, and (3) the presence of oil vapor gases in the dry air sample, and their subsequent effects on the Q-value of the sampled gas. After completing this portion of testing, none of the selected variables proved to be a direct cause of the observed discrepancies between the two instruments. The secondary objective was to validate the accuracy of each instrument as compared to its respective published range by testing against a known dew point temperature produced by a humidity generator. In a select operating range of -22 °F to -4 °F, both instruments were found to be accurate and within their specified tolerances. This temperature range is frequently encountered in oil-soaked transformers, and demonstrates that both instruments can measure accurately over a limited, yet common, range despite their different design methodologies. It is clear that there is another unknown factor present in oil-soaked transformers that is causing the observed discrepancy between these instruments. Future work will include testing on newly manufactured or rewound transformers in order to investigate other variables that could be causing this discrepancy.
Zhang, Mingjing; Wen, Ming; Zhang, Zhi-Min; Lu, Hongmei; Liang, Yizeng; Zhan, Dejian
2015-03-01
Retention time shift is one of the most challenging problems during the preprocessing of massive chromatographic datasets. Here, an improved version of the moving window fast Fourier transform cross-correlation algorithm is presented to perform nonlinear and robust alignment of chromatograms by analyzing the shifts matrix generated by the moving window procedure. The shifts matrix in retention time can be estimated by fast Fourier transform cross-correlation with a moving window procedure. The refined shift of each scan point can be obtained by calculating the mode of the corresponding column of the shifts matrix. This version is simple, but more effective and robust than the previously published moving window fast Fourier transform cross-correlation method. It can handle nonlinear retention time shifts robustly if a proper window size is selected. The window size is the only parameter that needs to be adjusted and optimized. The properties of the proposed method are investigated by comparison with the previous moving window fast Fourier transform cross-correlation and recursive alignment by fast Fourier transform using chromatographic datasets. The pattern recognition results of a gas chromatography mass spectrometry dataset of metabolic syndrome can be improved significantly after preprocessing by this method. Furthermore, the proposed method is available as an open source package at https://github.com/zmzhang/MWFFT2. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
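A minimal sketch (not the published MWFFT2 code) of the core operation applied window by window: estimating the retention-time shift of one chromatogram segment against a reference with FFT cross-correlation. The peak shape and shift are synthetic.

```python
# FFT cross-correlation shift estimate between a reference and a shifted segment.
import numpy as np

def fft_shift(reference, segment):
    """Lag (in scan points) by which `segment` is shifted relative to `reference`."""
    n = len(reference)
    xcorr = np.fft.ifft(np.fft.fft(segment) * np.conj(np.fft.fft(reference))).real
    lag = int(np.argmax(xcorr))
    return lag if lag <= n // 2 else lag - n          # wrap circular lag to a signed shift

t = np.arange(512)
reference = np.exp(-0.5 * ((t - 200) / 6.0) ** 2)     # a Gaussian chromatographic peak
sample = np.roll(reference, 7)                        # same peak shifted by 7 scan points
print("estimated shift:", fft_shift(reference, sample))   # -> 7
```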
Ramin, Pedram; Libonati Brock, Andreas; Polesel, Fabio; Causanilles, Ana; Emke, Erik; de Voogt, Pim; Plósz, Benedek Gy
2016-12-20
Sewer pipelines, although primarily designed for sewage transport, can also be considered as bioreactors. In-sewer processes may lead to significant variations of chemical loadings from source release points to the treatment plant influent. In this study, we assessed in-sewer utilization of growth substrates (primary metabolic processes) and transformation of illicit drug biomarkers (secondary metabolic processes) by suspended biomass. Sixteen drug biomarkers were targeted, including mephedrone, methadone, cocaine, heroin, codeine, and tetrahydrocannabinol (THC) and their major human metabolites. Batch experiments were performed under aerobic and anaerobic conditions using raw wastewater. Abiotic biomarker transformation and partitioning to suspended solids and reactor wall were separately investigated under both redox conditions. A process model was identified by combining and extending the Wastewater Aerobic/anaerobic Transformations in Sewers (WATS) model and Activated Sludge Model for Xenobiotics (ASM-X). Kinetic and stoichiometric model parameters were estimated using experimental data via the Bayesian optimization method DREAM (ZS) . Results suggest that biomarker transformation significantly differs from aerobic to anaerobic conditions, and abiotic conversion is the dominant mechanism for many of the selected substances. Notably, an explicit description of biomass growth during batch experiments was crucial to avoid significant overestimation (up to 385%) of aerobic biotransformation rate constants. Predictions of in-sewer transformation provided here can reduce the uncertainty in the estimation of drug consumption as part of wastewater-based epidemiological studies.
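For illustration only (parameter values and model form are assumptions, not the calibrated WATS/ASM-X model): a biomarker removed by an abiotic first-order path plus a biotic path proportional to growing biomass, the coupling that makes neglect of biomass growth overestimate the biotransformation rate constant.

```python
# Toy in-sewer biomarker kinetics with explicit biomass growth.
import numpy as np
from scipy.integrate import solve_ivp

k_abio, k_bio, mu = 0.02, 0.005, 0.08          # 1/h, L/(gSS*h), 1/h (assumed values)

def rhs(t, y):
    c, x = y                                    # biomarker (ug/L), suspended biomass (gSS/L)
    return [-(k_abio + k_bio * x) * c, mu * x]  # abiotic + biotic removal; exponential growth

sol = solve_ivp(rhs, (0.0, 24.0), [100.0, 0.3], t_eval=np.linspace(0.0, 24.0, 7))
print(np.round(sol.y[0], 1))                    # biomarker concentration over a day in-sewer
```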
Guo, Mengyue; Wang, Huanyu; Xie, Nengbin
2015-01-01
ABSTRACT Natural plasmid transformation of Escherichia coli is a complex process that occurs strictly on agar plates and requires the global stress response factor σS. Here, we showed that additional carbon sources could significantly enhance the transformability of E. coli. Inactivation of phosphotransferase system genes (ptsH, ptsG, and crr) caused an increase in the transformation frequency, and the addition of cyclic AMP (cAMP) neutralized the promotional effect of carbon sources. This implies a negative role of cAMP in natural transformation. Further study showed that crp and cyaA mutations conferred a higher transformation frequency, suggesting that the cAMP-cAMP receptor protein (CRP) complex has an inhibitory effect on transformation. Moreover, we observed that rpoS is negatively regulated by cAMP-CRP in early log phase and that both crp and cyaA mutants show no transformation superiority when rpoS is knocked out. Therefore, it can be concluded that both the crp and cyaA mutations derepress rpoS expression in early log phase, whereby they aid in the promotion of natural transformation ability. We also showed that the accumulation of RpoS during early log phase can account for the enhanced transformation aroused by additional carbon sources. Our results thus demonstrated that the presence of additional carbon sources promotes competence development and natural transformation by reducing cAMP-CRP and, thus, derepressing rpoS expression during log phase. This finding could contribute to a better understanding of the relationship between nutrition state and competence, as well as the mechanism of natural plasmid transformation in E. coli. IMPORTANCE Escherichia coli, which is not usually considered to be naturally transformable, was found to spontaneously take up plasmid DNA on agar plates. Researching the mechanism of natural transformation is important for understanding the role of transformation in evolution, as well as in the transfer of pathogenicity and antibiotic resistance genes. In this work, we found that carbon sources significantly improve transformation by decreasing cAMP. Then, the low level of cAMP-CRP derepresses the general stress response regulator RpoS via a biphasic regulatory pattern, thereby contributing to transformation. Thus, we demonstrate the mechanism by which carbon sources affect natural transformation, which is important for revealing information about the interplay between nutrition state and competence development in E. coli. PMID:26260461
Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images
Pu, Shi; Vosselman, George
2009-01-01
Laser data and optical data have a complementary nature for three-dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three-dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close-range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using the Canny extractor and the Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the collaborative reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed. PMID:22408539
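A hedged sketch (not the authors' pipeline) of the line-extraction step named above: a Canny edge detector followed by a probabilistic Hough transform pulls strong straight lines out of a facade image, which could then be compared with model edges. The image here is a synthetic stand-in; thresholds are assumptions.

```python
# Canny edges + probabilistic Hough transform on a synthetic facade-like image.
import cv2
import numpy as np

img = np.zeros((400, 600), dtype=np.uint8)
cv2.rectangle(img, (100, 80), (500, 320), 255, 2)        # stand-in for facade/window edges

edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60, minLineLength=80, maxLineGap=5)
print("candidate facade edge segments:", 0 if lines is None else len(lines))
```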
A new route for the synthesis of titanium silicalite-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasile, Aurelia, E-mail: aurelia_vasile@yahoo.com; Busuioc-Tomoiaga, Alina Maria; Catalysis Research Department, ChemPerformance SRL, Iasi 700337
2012-01-15
Graphical abstract: Well-prepared TS-1 was synthesized by an innovative procedure using inexpensive reagents such as fumed silica and TPABr as the structure-directing agent. This is the first time that highly crystalline TS-1 has been obtained in basic medium, using sodium hydroxide as the HO⁻ ion source required for the crystallization process. Hydrolysis of the titanium source was prevented by titanium complexation with acetylacetone before structuring the gel. Highlights: TS-1 was obtained using cheap reagents such as fumed silica and tetrapropylammonium bromide. For the first time, NaOH was used as the source of OH⁻ ions required for the crystallization process. The hydrolysis of Ti alkoxides was controlled by Ti complexation with 2,4-pentanedione. Abstract: A new and efficient route using inexpensive reagents such as fumed silica and tetrapropylammonium bromide is proposed for the synthesis of titanium silicalite-1. Highly crystalline titanium silicalite-1 was obtained in alkaline medium, using sodium hydroxide as the HO⁻ ion source required for the crystallization process. Hydrolysis of the titanium source with formation of insoluble oxide species was prevented by titanium complexation with acetylacetone before structuring the gel. The final solids were fully characterized by powder X-ray diffraction, scanning electron microscopy, Fourier transform infrared, ultraviolet-visible diffuse reflectance, Raman and atomic absorption spectroscopies, as well as nitrogen sorption analysis. It was found that a molar ratio Ti:Si of about 0.04 in the initial reaction mixture is the upper limit to which well-formed titanium silicalite-1 with channels free of crystalline or amorphous material can be obtained. Above this value, solids with an MFI-type structure containing both Ti isomorphously substituted in the framework and extralattice anatase nanoparticles inside the channels are formed.
Topological transformation of fractional optical vortex beams using computer generated holograms
NASA Astrophysics Data System (ADS)
Maji, Satyajit; Brundavanam, Maruthi M.
2018-04-01
Optical vortex beams with fractional topological charges (TCs) are generated by the diffraction of a Gaussian beam using computer generated holograms embedded with mixed screw-edge dislocations. When the input Gaussian beam has a finite wave-front curvature, the generated fractional vortex beams show distinct topological transformations in comparison to the integer charge optical vortices. The topological transformations at different fractional TCs are investigated through the birth and evolution of the points of phase singularity, the azimuthal momentum transformation, occurrence of critical points in the transverse momentum and the vorticity around the singular points. This study is helpful to achieve better control in optical micro-manipulation applications.
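A minimal sketch, with assumed grid and grating parameters, of a computer-generated hologram (fork grating) carrying a fractional topological charge: the hologram phase mixes the screw term ell*theta with a linear carrier along x, the kind of mixed screw-edge dislocation referred to above. This is an illustrative construction, not the authors' holograms.

```python
# Binary fork-grating CGH with a fractional topological charge.
import numpy as np

n, ell, period = 512, 2.5, 16.0                     # grid size, fractional TC, carrier period (px)
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
theta = np.arctan2(y, x)                            # azimuthal coordinate

phase = np.mod(ell * theta + 2.0 * np.pi * x / period, 2.0 * np.pi)
hologram = (phase > np.pi).astype(np.uint8) * 255   # binarized pattern for an SLM or film
print(hologram.shape, hologram.dtype)
```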
Combining points and lines in rectifying satellite images
NASA Astrophysics Data System (ADS)
Elaksher, Ahmed F.
2017-09-01
The rapid advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high-resolution satellite images. Remote sensing satellite images with a pixel size of less than one meter are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS values computed using the point- and line-based transformation models are equivalent and satisfy the requirements for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results also showed a high correlation between the differences in ground elevation and the RMS.
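A hedged sketch of the point-based parallel projection idea in its common eight-parameter affine form, x = A1*X + A2*Y + A3*Z + A4 and y = A5*X + A6*Y + A7*Z + A8, recovered from ground control points by linear least squares. The coefficients and control points below are synthetic, not values from the article.

```python
# Eight-parameter affine (parallel projection) fit from ground control points.
import numpy as np

rng = np.random.default_rng(0)
gcp_ground = rng.uniform([0, 0, 100], [5000, 5000, 400], size=(10, 3))   # X, Y, Z (m), assumed
A_true = np.array([[2e-3, 1e-4, 5e-4, 12.0],
                   [-1e-4, 2e-3, 3e-4, 34.0]])                           # synthetic parameters
gcp_image = gcp_ground @ A_true[:, :3].T + A_true[:, 3]                  # x, y (pixels)

G = np.hstack([gcp_ground, np.ones((len(gcp_ground), 1))])               # design matrix
params_x, *_ = np.linalg.lstsq(G, gcp_image[:, 0], rcond=None)
params_y, *_ = np.linalg.lstsq(G, gcp_image[:, 1], rcond=None)
print(np.round(params_x, 6), np.round(params_y, 6))
```

A line-based form replaces individual point correspondences with constraints that image points of a control line must lie on the projected object-space line, but the unknowns remain the same eight parameters.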
The hydrological consequences of human impact in the Lublin Region
NASA Astrophysics Data System (ADS)
Michalczyk, Zdzisław; Mięsiak-Wójcik, Katarzyna; Sposób, Joanna; Turczyński, Marek
2012-01-01
The Lublin Region is an area where local transformations of the natural environment, including the hydrosphere, occur. They result from the impact of agriculture and industry as well as water supply and sewage disposal. These activities lead to changes in the water network resulting from land improvement works, channel straightening and runoff acceleration, as well as to the formation of local point and diffuse sources of water pollution. The consequences of human impact are manifested in local transformations of the quality or quantity of water resources. As a result of intense groundwater draw-off, hydrogeological conditions are transformed, which is reflected in the persistence of depression cones of varied size and depth, noticeable in the vicinity of the water intakes for Lublin, Chełm, Zamość and Kraśnik. The lowering of the first-level groundwater table also occurs as a consequence of the drainage of chalk and marl mine workings in Chełm and Rejowiec, whereas in the area of the hard coal mine both shallow and deep groundwater has been transformed. It is important to document these human-induced changes in water conditions, as hydrosphere resources should be used according to the principles of sustainable development.
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
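A hedged sketch of the weight-window element such hybrid schemes rely on: particles above the window are split into several lower-weight copies and particles below it play Russian roulette, so the population is redistributed without changing the transport physics. Bounds and survival weight below are arbitrary illustrative choices.

```python
# Weight-window splitting and Russian roulette for a single particle weight.
import numpy as np

rng = np.random.default_rng(7)

def apply_weight_window(weight, w_low, w_high, survival=None):
    """Return the list of post-window particle weights for one incoming particle."""
    survival = survival if survival is not None else 0.5 * (w_low + w_high)
    if weight > w_high:                           # split into n copies of equal weight
        n = int(np.ceil(weight / w_high))
        return [weight / n] * n
    if weight < w_low:                            # roulette: survive with prob weight/survival
        return [survival] if rng.random() < weight / survival else []
    return [weight]                               # inside the window: unchanged

for w in (0.01, 0.2, 5.0):
    print(w, "->", apply_weight_window(w, w_low=0.1, w_high=1.0))
```

Both branches preserve the expected weight, which is why the estimator stays unbiased while the variance is shifted toward the regions the user cares about.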
Induced over voltage test on transformers using enhanced Z-source inverter based circuit
NASA Astrophysics Data System (ADS)
Peter, Geno; Sherine, Anli
2017-09-01
The normal life of a transformer is well above 25 years. The economical operation of the distribution system has its roots in the equipment being used, and it can be financially advantageous to replace transformers with more than 15 years of service on the secondary market. Testing of transformers is required, as it indicates the extent to which a transformer can comply with the customer's specified requirements and the respective standards (IEC 60076-3). In this paper, induced overvoltage testing of transformers using an enhanced Z-source inverter is discussed. Power electronic circuits are now essential for a whole array of industrial electronic products. The bulky motor-generator set, which is traditionally used to generate the required frequency for the induced overvoltage testing of transformers, is nowadays replaced by a static frequency converter. First a conventional Z-source inverter and then an enhanced Z-source inverter are used to generate the voltage and frequency required for the induced overvoltage test, and their characteristics are analysed.
Preliminary Design and Analysis of the GIFTS Instrument Pointing System
NASA Technical Reports Server (NTRS)
Zomkowski, Paul P.
2003-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Instrument is the next-generation spectrometer for remote sensing weather satellites. The GIFTS instrument will be used to perform scans of the Earth's atmosphere by assembling a series of fields of view (FOVs) into a larger pattern. Realization of this process is achieved by step scanning the instrument FOV in a contiguous fashion across any desired portion of the visible Earth. A 2.3 arc second pointing stability, with respect to the scanning instrument, must be maintained for the duration of the FOV scan. A star tracker producing attitude data at a 100 Hz rate will be used by the autonomous pointing algorithm to precisely track target FOVs on the surface of the Earth. The main objective is to validate the pointing algorithm in the presence of spacecraft disturbances and determine acceptable disturbance limits from expected noise sources. Proof-of-concept validation of the pointing system algorithm is carried out with a full system simulation developed using Matlab Simulink. Models for the following components function within the full system simulation: inertial reference unit (IRU), attitude control system (ACS), reaction wheels, star tracker, and mirror controller. With the spacecraft orbital position and attitude maintained to within specified limits, the pointing algorithm receives quaternion, ephemeris, and initialization data that are used to construct the required mirror pointing commands at a 100 Hz rate. This comprehensive simulation will also aid in obtaining a thorough understanding of spacecraft disturbances and other sources of pointing system errors. Parameter sensitivity studies and disturbance analysis will be used to obtain limits of operability for the GIFTS instrument. The culmination of this simulation development and analysis will be used to validate the specified performance requirements outlined for this instrument.
When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.
Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui
2018-05-01
In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both the disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding two shortest paths that run from the vanishing point to two pixels in the last row of the image. The proposed approach has been implemented and tested over 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly, and it achieves promising performance.
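A simplified sketch of the core mechanism (not the authors' cost function): a Dijkstra minimum-cost map grown from a single vanishing-point source over a pixel grid, with a toy neighbour cost based on image value differences; the cheapest pixel in the last row then suggests a border endpoint.

```python
# Dijkstra minimum-cost map from a single source pixel on a grid.
import heapq
import numpy as np

def dijkstra_cost_map(img, source):
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                                   # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + abs(img[nr, nc] - img[r, c]) + 1e-3   # toy edge cost
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

gray = np.random.rand(60, 80)
cost = dijkstra_cost_map(gray, source=(5, 40))         # "vanishing point" near the horizon row
print("cheapest last-row pixel:", (59, int(np.argmin(cost[59]))))
```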
11. Photographic copy of drawing dated February 17, 1908 (Source: Salt River Project) Transformer building, first floor plan and sections (Transformer floor) - Theodore Roosevelt Dam, Transformer House, Salt River, Tortilla Flat, Maricopa County, AZ
Yang, Xiaoying; Tan, Lit; He, Ruimin; Fu, Guangtao; Ye, Jinyin; Liu, Qun; Wang, Guoqing
2017-12-01
It is increasingly recognized that climate change could impose both direct and indirect impacts on the quality of the water environment. Previous studies have mostly concentrated on evaluating the impacts of climate change on non-point source pollution in agricultural watersheds. Few studies have assessed the impacts of climate change on the water quality of river basins with complex point and non-point pollution sources. In view of this gap, this paper aims to establish a framework for stochastic assessment of the sensitivity of water quality to future climate change in a river basin with complex pollution sources. A sub-daily soil and water assessment tool (SWAT) model was developed to simulate the discharge, transport, and transformation of nitrogen from multiple point and non-point pollution sources in the upper Huai River basin of China. A weather generator was used to produce 50 years of synthetic daily weather data series for all 25 combinations of precipitation change (-10, 0, 10, 20, and 30%) and temperature increase (0, 1, 2, 3, and 4 °C) scenarios. The generated daily rainfall series was disaggregated to the hourly scale and then used to drive the sub-daily SWAT model to simulate the nitrogen cycle under different climate change scenarios. Our results in the study region have indicated that (1) both total nitrogen (TN) loads and concentrations are insensitive to temperature change; (2) TN loads are highly sensitive to precipitation change, while TN concentrations are moderately sensitive; (3) the impacts of climate change on TN concentrations are more spatiotemporally variable than the impacts on TN loads; and (4) wide distributions of TN loads and TN concentrations under each individual climate change scenario illustrate the important role of climatic variability in affecting water quality conditions. In summary, the large variability in SWAT simulation results within and between each climate change scenario highlights the uncertainty of the impacts of climate change and the need to incorporate extreme conditions in managing the water environment and developing climate change adaptation and mitigation strategies.
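A simplified sketch of the delta-change scenario grid described above: the 25 combinations of precipitation scaling and temperature offset applied to a synthetic daily weather series. The weather series here is randomly generated, not the study's weather-generator output.

```python
# Building the 5 x 5 precipitation/temperature delta-change scenario grid.
import itertools
import numpy as np

rng = np.random.default_rng(3)
base_precip = rng.gamma(0.6, 8.0, size=365)                       # synthetic daily rainfall (mm)
base_temp = 15.0 + 10.0 * np.sin(np.arange(365) / 365.0 * 2.0 * np.pi)   # synthetic daily temp (°C)

scenarios = {}
for dp, dt in itertools.product((-10, 0, 10, 20, 30), (0, 1, 2, 3, 4)):
    scenarios[(dp, dt)] = (base_precip * (1.0 + dp / 100.0), base_temp + dt)

print(len(scenarios), "scenarios; e.g. (+20%, +3C) annual rainfall =",
      round(scenarios[(20, 3)][0].sum(), 1), "mm")
```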
Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets
NASA Astrophysics Data System (ADS)
Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.
2016-10-01
Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is applied to aerial images or their derivatives through onboard GPS (Global Positioning System) geotagging, or through tying of models to GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter levels of accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only in instrument acquisition and survey operations but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm. It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a `skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations, given the differing dates of acquisition, are considered consistent. Transformation parameters are computed for the skeleton cloud and then applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distances between the CANUPO and manual skeleton clouds averaged around 0.67 meters for both, with a standard deviation of 1.73 meters.
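A hedged sketch of the final step: once ICP on the skeleton cloud has produced a 4x4 rigid transformation, the same matrix is applied to the full UAS point cloud to move it into the LiDAR datum. The matrix and cloud below are placeholders, not values from the study.

```python
# Applying a 4x4 rigid transformation (from skeleton-cloud ICP) to the full UAS cloud.
import numpy as np

T = np.array([[0.9998, -0.0175, 0.0, 312045.2],      # rotation + translation (assumed values)
              [0.0175,  0.9998, 0.0, 1620876.8],
              [0.0,     0.0,    1.0, 42.6],
              [0.0,     0.0,    0.0, 1.0]])

uas_cloud = np.random.rand(100000, 3) * [35.0, 33.0, 20.0]      # local UAS coordinates (synthetic)
homogeneous = np.hstack([uas_cloud, np.ones((len(uas_cloud), 1))])
georeferenced = (homogeneous @ T.T)[:, :3]                       # now in the LiDAR datum
print(georeferenced[:2])
```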
Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode
NASA Astrophysics Data System (ADS)
Seibert, P.; Frank, A.
2003-08-01
The possibility of calculating linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is demonstrated with many tests and examples. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, ...). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for the release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas heavily contaminated in the Chernobyl disaster is included.
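A toy illustration (not the LPDM itself) of the linearity the backward mode exploits: receptor concentrations are c = M s, where M is the source-receptor sensitivity matrix, and a single backward (adjoint) run delivers one full row of M, which is why the backward mode pays off when receptors are fewer than sources. All numbers are synthetic.

```python
# Linear source-receptor relationship c = M @ s, with one row of M per receptor.
import numpy as np

n_sources, n_receptors = 1000, 3
rng = np.random.default_rng(1)
M = rng.exponential(1e-9, size=(n_receptors, n_sources))   # sensitivities (kg/s -> kg/m^3), assumed
s = rng.uniform(0.0, 5.0, n_sources)                       # emission rates (kg/s), assumed

c_forward = M @ s                 # forward view: every receptor sums over all sources
row = M[0]                        # what one backward run delivers for receptor 0
print(np.allclose(c_forward[0], row @ s))                  # True: same concentration either way
```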
NASA Astrophysics Data System (ADS)
Montereali, R. M.; Bonfigli, F.; Menchini, F.; Vincenti, M. A.
2012-08-01
Broad-band light-emitting radiation-induced F2 and F3+ electronic point defects, which are stable and laser-active at room temperature in lithium fluoride crystals and films, are used in dosimeters, tuneable color-center lasers, broad-band miniaturized light sources and novel radiation imaging detectors. A brief review of their photoemission properties is presented, and their behavior at liquid nitrogen temperatures is discussed. Some experimental data from optical spectroscopy and fluorescence microscopy of these radiation-induced point defects in LiF crystals and thin films are used to obtain information about the coloration curves, the efficiency of point defect formation, the effects of photo-bleaching processes, etc. Control of the local formation, stabilization, and transformation of radiation-induced light-emitting defect centers is crucial for the development of optically active micro-components and nanostructures. Some of the advantages of low temperature measurements for novel confocal laser scanning fluorescence microscopy techniques, widely used for spatial mapping of these point defects through the optical reading of their visible photoluminescence, are highlighted.
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third in a series of papers, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
Spontaneous transformation of adult mesenchymal stem cells from cynomolgus macaques in vitro
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Zhenhua; Key Laboratory of Neurodegeneration, Ministry of Education, Beijing; Department of Anatomy, Anhui Medical University, Hefei, 230032
2011-12-10
Mesenchymal stem cells (MSCs) have shown potential clinical utility in cell therapy and tissue engineering, due to their ability to proliferate as well as to differentiate into multiple lineages, including osteogenic, adipogenic, and chondrogenic specifications. Therefore, it is crucial to assess the safety of MSCs while extensive expansion ex vivo is a prerequisite to obtain the cell numbers for cell transplantation. Here we show that MSCs derived from adult cynomolgus monkey can undergo spontaneous transformation following in vitro culture. In comparison with MSCs, the spontaneously transformed mesenchymal cells (TMCs) display significantly different growth pattern and morphology, reminiscent of the characteristics of tumor cells. Importantly, TMCs are highly tumorigenic, causing subcutaneous tumors when injected into NOD/SCID mice. Moreover, no multiple differentiation potential of TMCs is observed in vitro or in vivo, suggesting that spontaneously transformed adult stem cells may not necessarily turn into cancer stem cells. These data indicate a direct transformation of cynomolgus monkey MSCs into tumor cells following long-term expansion in vitro. The spontaneous transformation of the cultured cynomolgus monkey MSCs may have important implications for ongoing clinical trials and for models of oncogenesis, thus warranting a more strict assessment of MSCs prior to cell therapy. Highlights: Spontaneous transformation of cynomolgus monkey MSCs in vitro. Transformed mesenchymal cells lack multipotency. Transformed mesenchymal cells are highly tumorigenic. Transformed mesenchymal cells do not have the characteristics of cancer stem cells.
NASA Astrophysics Data System (ADS)
Mahanthesh, B.; Gireesha, B. J.; Shashikumar, N. S.; Hayat, T.; Alsaedi, A.
2018-06-01
The present work investigates the features of an exponential space-dependent heat source (ESHS) and cross-diffusion effects in Marangoni convective heat and mass transfer flow due to an infinite disk. The flow analysis includes magnetohydrodynamic (MHD) effects, and Joule heating, viscous dissipation and solar radiation are also accounted for. The thermal and solute fields on the disk surface vary in a quadratic manner. The governing equations are reduced to ordinary differential equations using the von Kármán transformations. The resulting problem is solved numerically via a Runge-Kutta-Fehlberg based shooting scheme. The effects of the pertinent flow parameters are explored through graphical illustrations. The results point out that the ESHS effect dominates the thermal-dependent heat source effect on thermal boundary layer growth. The concentration and temperature distributions and their associated layer thicknesses are enhanced by the Marangoni effect.
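The shooting scheme mentioned above pairs an initial-value integrator with a root search on the unknown surface gradients. Since the similarity equations themselves are not reproduced in the abstract, the sketch below (Python/SciPy) only illustrates that Runge-Kutta-plus-shooting structure on a hypothetical toy boundary-value problem, y'' = -y with y(0) = 0 and y(1) = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy BVP standing in for the similarity equations: y'' = -y, y(0)=0, y(1)=1.
def rhs(x, y):                        # y = [f, f']
    return [y[1], -y[0]]

def residual(slope):
    """Integrate from x=0 with a guessed initial slope and return the
    mismatch with the far-boundary condition y(1) = 1."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], method="RK45",
                    rtol=1e-8, atol=1e-10)
    return sol.y[0, -1] - 1.0

slope = brentq(residual, 0.1, 5.0)    # shoot on the unknown initial slope
print("f'(0) =", slope)               # exact value is 1/sin(1) ~ 1.1884
```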
NASA Astrophysics Data System (ADS)
Koiter, A. J.; Owens, P. N.; Petticrew, E. L.; Lobb, D. A.
2013-10-01
Sediment fingerprinting is a technique that is increasingly being used to improve the understanding of sediment dynamics within river basins. At present, one of the main limitations of the technique is the ability to link sediment back to its sources, due to the non-conservative nature of many sediment properties. The processes that occur between the sediment source locations and the point of collection downstream are not well understood or quantified and currently represent a black box in the sediment fingerprinting approach. The literature on sediment fingerprinting tends to assume that there is a direct connection between sources and sinks, while much of the broader environmental sedimentology literature identifies that numerous chemical, biological and physical transformations and alterations can occur as sediment moves through the landscape. The focus of this paper is on the processes that drive particle size and organic matter selectivity and biological, geochemical and physical transformations, and on how understanding these processes can be used to guide sampling protocols, fingerprint selection and data interpretation. The application of statistical approaches without consideration of how unique sediment fingerprints have developed and how robust they are within the environment is a major limitation of many recent studies. This review summarises the current information, identifies areas that need further investigation and provides recommendations for sediment fingerprinting that should be considered for adoption in future studies if the full potential and utility of the approach are to be realised.
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
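The hierarchical, Haar-like step can be pictured as a weighted two-point butterfly applied recursively up the octree. Below is a minimal sketch (Python/NumPy) of one such merge, assuming the commonly published RAHT-style weighting; the exact normalization and traversal order in the paper may differ, and the colors and weights are made up for illustration.

```python
import numpy as np

def raht_merge(c1, w1, c2, w2):
    """One butterfly of a RAHT-style adaptive Haar step: combine the color
    vectors of two sibling voxels whose occupancy weights are w1 and w2."""
    a = np.sqrt(w1 / (w1 + w2))
    b = np.sqrt(w2 / (w1 + w2))
    low = a * c1 + b * c2       # weighted 'DC' term passed up the hierarchy
    high = -b * c1 + a * c2     # detail coefficient sent to entropy coding
    return low, high, w1 + w2

# toy example: two sibling voxels with RGB colors; weights count merged points
low, high, w = raht_merge(np.array([200., 120., 40.]), 3,
                          np.array([190., 130., 50.]), 1)
```

Repeating this merge level by level yields one DC coefficient per occupied root node plus a tree of detail coefficients, which is what the per-sub-band Laplace model is then fitted to.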
Equivalent circuit of radio frequency-plasma with the transformer model
NASA Astrophysics Data System (ADS)
Nishida, K.; Mochizuki, S.; Ohta, M.; Yasumoto, M.; Lettry, J.; Mattei, S.; Hatayama, A.
2014-02-01
The LINAC4 H- source is a radio-frequency (RF) driven source. In the RF system, it is required to match the load impedance, which includes the H- source, to that of the final amplifier. We model the RF plasma inside the H- source as circuit elements using a transformer model, so that the characteristics of the load impedance become calculable. It has been shown that the modeling based on the transformer model works well in predicting the resistance and inductance of the plasma.
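In a transformer model of this kind, the plasma acts as a lossy one-turn secondary, and the impedance it reflects into the antenna is (wM)^2 / (R_p + jwL_p). The sketch below (Python/NumPy) evaluates the plasma-loaded impedance seen by the matching network; all component values are hypothetical illustrations, not LINAC4 parameters.

```python
import numpy as np

def transformer_load_impedance(f, L1, R1, L2, R_p, k):
    """Impedance at the RF antenna terminals for a transformer model:
    primary coil (L1, R1) coupled with coefficient k to a one-turn plasma
    'secondary' of inductance L2 and resistance R_p."""
    w = 2 * np.pi * f
    M = k * np.sqrt(L1 * L2)                   # mutual inductance
    Z_secondary = R_p + 1j * w * L2
    return R1 + 1j * w * L1 + (w * M) ** 2 / Z_secondary

# hypothetical values: 2 MHz drive, 3 uH antenna, weakly coupled plasma loop
Z = transformer_load_impedance(f=2e6, L1=3e-6, R1=0.2,
                               L2=1e-8, R_p=0.5, k=0.3)
print(Z.real, Z.imag / (2 * np.pi * 2e6))      # plasma-loaded resistance, inductance
```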
Fixed Point Problems for Linear Transformations on Pythagorean Triples
ERIC Educational Resources Information Center
Zhan, M.-Q.; Tong, J.-C.; Braza, P.
2006-01-01
In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z]^T with y being even) into a standard Pythagorean triple, which have [3 4 5]^T as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
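A stripped-down version of the constrained restoration loop described above, a gradient step under a smoothness prior followed by projection of the transform coefficients back into their quantization cells, can be sketched as follows. This toy version (Python/SciPy) uses a global DCT and a quadratic smoothness surrogate rather than the paper's block DCT and non-Gaussian MRF model; `img_q` (the decompressed image) and `q_step` (the quantizer step size) are hypothetical inputs.

```python
import numpy as np
from scipy.fft import dctn, idctn

def constrained_smooth(img_q, q_step, iters=30, lam=0.1):
    """Toy constrained restoration: alternate a smoothing (gradient) step with
    projection of the DCT coefficients back into their quantization cells."""
    coeffs_q = dctn(img_q, norm='ortho')
    lo = coeffs_q - q_step / 2.0        # cell bounds around the dequantized
    hi = coeffs_q + q_step / 2.0        # reconstruction points
    x = img_q.astype(float).copy()
    for _ in range(iters):
        # gradient step: discrete Laplacian pushes toward a smoother image
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x = x + lam * lap
        # projection: keep the estimate consistent with the compressed data
        c = np.clip(dctn(x, norm='ortho'), lo, hi)
        x = idctn(c, norm='ortho')
    return x
```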
Analysis, Thematic Maps and Data Mining from Point Cloud to Ontology for Software Development
NASA Astrophysics Data System (ADS)
Nespeca, R.; De Luca, L.
2016-06-01
The primary purpose of a survey for the restoration of Cultural Heritage is the interpretation of the state of building preservation. For this, the advantages of remote sensing systems that generate dense point clouds (range-based or image-based) are not limited to the acquired data alone. The paper shows that it is possible to extract very useful diagnostic information using spatial annotation and algorithms already implemented in open-source software. Generally, the drawing of degradation maps is the result of manual work, and is therefore dependent on the subjectivity of the operator. This paper describes a method of extraction and visualization of information obtained by mathematical procedures that are quantitative, repeatable and verifiable. The case study is a part of the east facade of the Eglise collégiale Saint-Maurice, also called Notre Dame des Grâces, in Caromb, in southern France. The work was conducted on the matrix of information contained in the point cloud in ASCII format. The first result is the extraction of new geometric descriptors. First, we create digital maps of the calculated quantities. Subsequently, we move to semi-quantitative analyses that transform the new data into useful information. We have written algorithms for accurate selection, for the segmentation of the point cloud, and for automatic calculation of the real surface area and the volume. Furthermore, we have created graphs of the spatial distribution of the descriptors. This work shows that by working during the data processing we can transform the point cloud into an enriched database: its use, management and data mining are easy, fast and effective for everyone involved in the restoration process.
Source imaging of potential fields through a matrix space-domain algorithm
NASA Astrophysics Data System (ADS)
Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio
2017-01-01
Imaging of potential fields yields a fast 3D representation of the source distribution of potential fields. Imaging methods are all based on multiscale methods, allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. We here describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, just the first row of each matrix needs to be computed, which decreases the computation cost dramatically. Our approach allows a simple procedure, with the advantage of not involving large data extension or tapering, as would instead be required for Fourier-domain computation. It also allows level-to-drape upward continuation and a stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multi-scale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate largely at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
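As a concrete reminder of what the space-domain operator discretizes, the sketch below (Python/SciPy) builds the standard upward-continuation kernel h / (2*pi*(x^2 + y^2 + h^2)^(3/2)) on the data grid and applies it by direct convolution. The grid spacing, continuation height and random test field are hypothetical, and this plain convolution does not reproduce the paper's matrix formulation or its border-error treatment.

```python
import numpy as np
from scipy.signal import convolve2d

def upward_continuation_kernel(nx, ny, dx, dy, h):
    """Space-domain upward-continuation kernel sampled on the data grid.
    Because the operator is symmetric, this single centered 'row' defines
    the whole matrix operator discussed in the abstract."""
    x = (np.arange(nx) - nx // 2) * dx
    y = (np.arange(ny) - ny // 2) * dy
    X, Y = np.meshgrid(x, y, indexing='ij')
    return h / (2 * np.pi * (X**2 + Y**2 + h**2) ** 1.5) * dx * dy

# hypothetical grid: 64 x 64 samples, 100 m spacing, continue upward by 500 m
field = np.random.default_rng(0).normal(size=(64, 64))
K = upward_continuation_kernel(64, 64, 100.0, 100.0, 500.0)
field_up = convolve2d(field, K, mode='same', boundary='symm')
```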
NASA Astrophysics Data System (ADS)
Kato, Y.; Takenaka, T.; Yano, K.; Kiriyama, R.; Kurisu, Y.; Nozaki, D.; Muramatsu, M.; Kitagawa, A.; Uchida, T.; Yoshida, Y.; Sato, F.; Iida, T.
2012-11-01
Multiply charged ions for prospective applications are produced from pure solid materials in an electron cyclotron resonance ion source (ECRIS). Recently, a pure iron source has also been required for the production of caged iron ions in fullerene, in order to control cells in vivo in bio-nano science and technology. We adopt direct heating of an iron rod by induction heating (IH) because it avoids contact with insulating materials, which are sources of impurity gases. We choose molybdenum wire for the IH coils because it does not need water cooling. To improve power efficiency and temperature control, we propose a new circuit that does not use the serial and parallel dummy coils (SPD) previously employed for matching and safety. The circuit consists of inductively coupled coils, one thin-flat and one helical, which insulate the IH power source from the evaporator. This coupled-coil circuit, i.e. the insulated induction heating coil transformer (IHCT), can be moved mechanically, and the secondary current can be adjusted precisely and continuously. The heating efficiency obtained with the IHCT is much higher than in previous experiments using the SPD, because leakage flux is decreased and matching is improved simultaneously. We are able to adjust the temperature when heating the vapor source around the melting point, and the vapor pressure can then be controlled precisely by using the IHCT. We can control the temperature to within ±10 K around 1500 °C by this method, and have also confirmed experimentally that the iron vapor flux can be controlled at extremely low pressures. We are now entering the next stage of developing an induction heating vapor source for materials with melting points above 2000 K using the IHCT, which will then be applied in our ECRIS.
Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng
2014-08-01
Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with a modified Gerschgorin disk estimator. The multiple signal classification method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and a global optimization search. Both a 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
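The direction-finding stage is a subspace method of the MUSIC family: estimate the array covariance, separate signal and noise subspaces, and scan steering vectors against the noise subspace. The sketch below (Python/NumPy) shows that core for a simple uniform linear array rather than the 4 × 4 planar array used in the study; the broadband focusing and Gerschgorin-disk source-number estimation steps are omitted, and all parameters are illustrative.

```python
import numpy as np

def music_spectrum(X, n_sources, d, wavelength, angles_deg):
    """Narrowband MUSIC pseudo-spectrum for an M-element uniform linear array.
    X: (M, snapshots) complex array data; d: element spacing."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)          # ascending eigenvalues
    En = eigvecs[:, :M - n_sources]               # noise subspace
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th) / wavelength)
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spec)                          # peaks mark arrival angles
```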
An Improved Method for Real-Time 3D Construction of DTM
NASA Astrophysics Data System (ADS)
Wei, Yi
This paper discusses the real-time optimal construction of DTM through two measures. One is to improve the coordinate transformation of discrete points acquired from lidar: after processing a total of 10000 data points, the formula-based transformation took 0.810 s, while the table look-up transformation took 0.188 s, indicating that the latter is superior to the former. The other is to adjust the density of the point cloud acquired from lidar: an appropriate proportion of the data points is used for 3D construction in order to meet different needs for 3D imaging, ultimately increasing the efficiency of DTM construction while saving system resources.
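The formula-versus-table comparison can be reproduced in spirit with a small benchmark. The sketch below (Python/NumPy) converts hypothetical lidar range/bearing samples to Cartesian coordinates once by direct trigonometric evaluation and once with precomputed sine/cosine tables indexed by a quantized bearing. The paper's actual transformation, data and timing environment are not specified above, so the measured numbers will differ.

```python
import numpy as np, time

rng = np.random.default_rng(1)
n = 10_000
angle_idx = rng.integers(0, 3600, n)         # bearings in 0.1-degree steps
ranges = rng.uniform(1.0, 100.0, n)

# formula: trigonometric functions evaluated for every point
t0 = time.perf_counter()
theta = np.deg2rad(angle_idx / 10.0)
x, y = ranges * np.cos(theta), ranges * np.sin(theta)
t_formula = time.perf_counter() - t0

# table look-up: sin/cos precomputed once for every possible bearing step
cos_tab = np.cos(np.deg2rad(np.arange(3600) / 10.0))
sin_tab = np.sin(np.deg2rad(np.arange(3600) / 10.0))
t0 = time.perf_counter()
x2, y2 = ranges * cos_tab[angle_idx], ranges * sin_tab[angle_idx]
t_lookup = time.perf_counter() - t0

print(t_formula, t_lookup)
```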
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1997-07-01
We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector ``ribs,'' strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it on various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms those of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
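The detection step amounts to correlating the image with a zero-mean, Mexican-hat-like kernel at several scales and thresholding the coefficient maps for significance. The sketch below (Python/SciPy) uses a difference-of-Gaussians surrogate for the wavelet and a naive global threshold; the algorithm in the paper instead calibrates thresholds on simulated backgrounds and folds in the exposure map, neither of which is reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mexican_hat_map(image, scale):
    """Wavelet coefficient map at one scale, approximated as a difference of
    Gaussian smoothings (a standard surrogate for the Mexican-hat kernel)."""
    return gaussian_filter(image, scale) - gaussian_filter(image, 2.0 * scale)

def detect(image, scales=(1, 2, 4, 8), nsigma=4.0):
    """Return candidate point-source pixels per scale from a simple
    n-sigma cut on the coefficient maps."""
    hits = []
    for s in scales:
        w = mexican_hat_map(image.astype(float), s)
        thr = nsigma * np.std(w)      # in practice calibrated on simulations
        ys, xs = np.where(w > thr)
        hits.append((s, list(zip(xs, ys))))
    return hits
```

The scale at which a candidate reaches its maximum signal-to-noise ratio then serves as a rough size estimate, which is the quantity compared against the point-spread-function calibration in the text.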
Glick, S J; Hawkins, W G; King, M A; Penney, B C; Soares, E J; Byrne, C L
1992-01-01
The application of stationary restoration techniques to SPECT images assumes that the modulation transfer function (MTF) of the imaging system is shift invariant. It was hypothesized that using intrinsic attenuation correction (i.e., methods which explicitly invert the exponential radon transform) would yield a three-dimensional (3-D) MTF which varies less with position within the transverse slices than the combined conjugate view two-dimensional (2-D) MTF varies with depth. Thus the assumption of shift invariance would become less of an approximation for 3-D post- than for 2-D pre-reconstruction restoration filtering. SPECT acquisitions were obtained from point sources located at various positions in three differently shaped, water-filled phantoms. The data were reconstructed with intrinsic attenuation correction, and 3-D MTFs were calculated. Four different intrinsic attenuation correction methods were compared: (1) exponentially weighted backprojection, (2) a modified exponentially weighted backprojection as described by Tanaka et al. [Phys. Med. Biol. 29, 1489-1500 (1984)], (3) a Fourier domain technique as described by Bellini et al. [IEEE Trans. ASSP 27, 213-218 (1979)], and (4) the circular harmonic transform (CHT) method as described by Hawkins et al. [IEEE Trans. Med. Imag. 7, 135-148 (1988)]. The dependence of the 3-D MTF obtained with these methods, on point source location within an attenuator, and on shape of the attenuator, was studied. These 3-D MTFs were compared to: (1) those MTFs obtained with no attenuation correction, and (2) the depth dependence of the arithmetic mean combined conjugate view 2-D MTFs.(ABSTRACT TRUNCATED AT 250 WORDS)
Detecting and Locating Partial Discharges in Transformers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shourbaji, A.; Richards, R.; Kisner, R. A.
A collaborative research effort between the Oak Ridge National Laboratory (ORNL), the American Electric Power (AEP), the Tennessee Valley Authority (TVA), and the State of Ohio Energy Office (OEO) has been formed to conduct a feasibility study to detect and locate partial discharges (PDs) inside large transformers. Early detection of PDs is necessary to avoid the costly catastrophic failures that can occur if the PD process is ignored. The detection method under this research is based on an innovative technology developed by ORNL researchers using optical methods to sense the acoustical energy produced by the PDs. ORNL researchers conducted experimental studies to detect PD using an optical fiber as an acoustic sensor capable of detecting acoustical disturbances at any point along its length. This technical approach also has the potential to locate the point at which the PD was sensed within the transformer. Several optical approaches were experimentally investigated, including interferometric detection of acoustical disturbances along the sensing fiber, light detection and ranging (LIDAR) techniques using frequency modulation continuous wave (FMCW), a frequency modulated (FM) laser with a multimode fiber, an FM laser with a single-mode fiber, and an amplitude modulated (AM) laser with a multimode fiber. The implementation of the optical fiber-based acoustic measurement technique would include installing a fiber inside a transformer, allowing real-time detection of PDs and determination of their locations. The fibers are nonconductive and very small (core plus cladding diameters of 125 μm for single-mode fibers and 230 μm for multimode fibers). The research identified the capabilities and limitations of using optical technology to detect and locate sources of acoustical disturbances such as PDs in large transformers. Amplitude modulation techniques showed the most promising results and deserve further research to better quantify the technique's sensitivity and its ability to characterize a PD event. Other sensing techniques have also been identified, such as wavelength-shifting fiber optics and custom fabricated fibers with special coatings.
Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition
NASA Technical Reports Server (NTRS)
Amador, Jose J (Inventor)
2007-01-01
A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.
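One way to realize such a reference-point parametrization is to express every pattern point in the similarity-invariant frame spanned by the two reference points, so that the stored table entries do not change under rotation, translation or scaling. The sketch below (Python/NumPy) is a plausible illustration only; the patent's exact pair parametrization, inverse transform and distance-transform scoring are not reproduced here, and the coordinates used are hypothetical.

```python
import numpy as np

def invariant_table(points, r1, r2):
    """Express pattern points in the frame defined by reference points r1, r2:
    origin at r1, x-axis along (r2 - r1), unit length = |r2 - r1|.  The
    resulting (u, v) entries are invariant to rotation, translation and scale."""
    r1 = np.asarray(r1, float)
    r2 = np.asarray(r2, float)
    b = r2 - r1
    L = np.linalg.norm(b)
    e1 = b / L
    e2 = np.array([-e1[1], e1[0]])             # perpendicular axis
    rel = np.asarray(points, float) - r1
    return np.stack([rel @ e1, rel @ e2], axis=1) / L

# toy pattern with two reference points at (0, 0) and (2, 0)
table = invariant_table([(4, 5), (7, 1)], r1=(0, 0), r2=(2, 0))
```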
Electrical distribution studies for the 200 Area tank farms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisler, J.B.
1994-08-26
This is an engineering study providing reliability numbers for various design configurations as well as computer analyses (Captor/Dapper) of the existing distribution system to the 480V side of the unit substations. The objective of the study was to assure the adequacy of the existing electrical system components from the connection at the high voltage supply point through the transformation and distribution equipment to the point where it is reduced to its useful voltage level. It also was to evaluate the reasonableness of proposed solutions of identified deficiencies and recommendations of possible alternate solutions. The electrical utilities are normally considered the most vital of the utility systems on a site because all other utility systems depend on electrical power. The system accepts electric power from the external sources, reduces it to a lower voltage, and distributes it to end-use points throughout the site. By classic definition, all utility systems extend to a point 5 feet from the facility perimeter. An exception is made to this definition for the electric utilities at this site. The electrical Utility System ends at the low voltage section of the unit substation, which reduces the voltage from 13.8 kV to 2,400, 480, 277/480 or 120/208 volts. These transformers are located at various distances from existing facilities. The adequacy of the distribution system which transports the power from the main substation to the individual area substations and other load centers is evaluated and factored into the impact of the future load forecast.
Interstitial loop transformations in FeCr
Béland, Laurent Karim; Osetsky, Yuri N.; Stoller, Roger E.; ...
2015-03-27
Here, we improve the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC) algorithm by integrating the Activation Relaxation Technique nouveau (ARTn), a powerful open-ended saddle-point search method, into the algorithm. We use it to investigate the reaction of two 37-interstitial loops of different 1/2⟨1 1 1⟩ variants in FeCr at 10 at.% Cr. They transform into 74-interstitial clusters of the two 1/2⟨1 1 1⟩ variants, [1 0 0] and [0 1 0], with an overall barrier of 0.85 eV. We find that Cr decoration locally inhibits the rotation of crowdions, which dictates the final loop orientation. Moreover, the final loop orientation depends on the details of the Cr decoration. Generally, a region of a given orientation is favored if Cr near its interface with a region of another orientation is able to inhibit reorientation at this interface more than the Cr present at the other interfaces. Also, we find that substitutional Cr atoms can diffuse from energetically unfavorable to energetically favorable sites within the interlocked 37-interstitial loop configuration with barriers of less than 0.35 eV.
Optimizing transformations of stencil operations for parallel cache-based architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassetti, F.; Davis, K.
This paper describes a new technique for optimizing serial and parallel stencil and stencil-like operations for cache-based architectures. The technique takes advantage of the semantic knowledge implicit in stencil-like computations. It is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling on a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case. However, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
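For reference, the kernel being transformed is the classic Jacobi update for the 5-point Poisson stencil. The sketch below (Python/NumPy, with a hypothetical grid and tile size) shows one sweep traversed in 1-D row tiles; the cache-level speedups studied in the paper only appear in compiled C-like loops, so this is purely an illustration of the loop structure.

```python
import numpy as np

def jacobi_sweep_tiled(u, f, h, tile=64):
    """One Jacobi sweep for the 5-point Poisson stencil, traversed in 1-D row
    tiles (the blocking whose cache behaviour the paper studies)."""
    n, _ = u.shape
    new = u.copy()
    for i0 in range(1, n - 1, tile):              # 1-D tiling over rows
        i1 = min(i0 + tile, n - 1)
        new[i0:i1, 1:-1] = 0.25 * (u[i0-1:i1-1, 1:-1] + u[i0+1:i1+1, 1:-1] +
                                   u[i0:i1, :-2] + u[i0:i1, 2:] -
                                   h * h * f[i0:i1, 1:-1])
    return new

# hypothetical problem: 512 x 512 grid, unit spacing, zero right-hand side
u = np.zeros((512, 512)); u[0, :] = 1.0           # simple boundary condition
u = jacobi_sweep_tiled(u, np.zeros_like(u), h=1.0)
```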
Normalization methods in time series of platelet function assays
Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham
2016-01-01
Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and by viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, discussing the approach best suited to platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation demonstrated the correlation as calculated by the Spearman correlation test when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts, as was the case when using all data as one dataset for normalization. PMID:27428217
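The four normalizations named above have simple closed forms; a minimal sketch applied to a single series is given below (Python/NumPy). The article's exact conventions, e.g. population versus sample standard deviation, are not specified above, so details may differ.

```python
import numpy as np

def z_transform(x):           # zero mean, unit variance
    return (x - np.mean(x)) / np.std(x, ddof=1)

def range_transform(x):       # rescale to the interval [0, 1]
    return (x - np.min(x)) / (np.max(x) - np.min(x))

def proportion_transform(x):  # each value as a fraction of the series total
    return x / np.sum(x)

def iqr_transform(x):         # center on the median, scale by the IQR
    q1, q3 = np.percentile(x, [25, 75])
    return (x - np.median(x)) / (q3 - q1)
```

Applying these either per assay across all time points or per time point across all assays, as in the article, is then a matter of which axis of the data matrix the function is mapped over.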
The Sedov Blast Wave as a Radial Piston Verification Test
Pederson, Clark; Brown, Bart; Morgan, Nathaniel
2016-06-22
The Sedov blast wave is of great utility as a verification problem for hydrodynamic methods. The typical implementation uses an energized cell of finite dimensions to represent the energy point source. We avoid this approximation by directly finding the effects of the energy source as a boundary condition (BC). Furthermore, the proposed method transforms the Sedov problem into an outward moving radial piston problem with a time-varying velocity. A portion of the mesh adjacent to the origin is removed and the boundaries of this hole are forced with the velocities from the Sedov solution. This verification test is implemented on two types of meshes, and convergence is shown. Our results from the typical initial condition (IC) method and the new BC method are compared.
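The piston boundary condition follows from the Sedov similarity scaling, in which the shock radius grows as R(t) = xi0 * (E t^2 / rho)^(1/5) and radial velocities scale as R/t. The sketch below (Python/NumPy) evaluates that shock-front scaling with a hypothetical xi0 for gamma = 1.4; the actual test drives the hole boundary with the full Sedov velocity profile evaluated at the hole radius, which is more involved than this.

```python
import numpy as np

def sedov_shock(t, E, rho, xi0=1.15):
    """Shock radius and its time derivative from the Sedov similarity
    solution R(t) = xi0 * (E t^2 / rho)^(1/5); xi0 ~ 1.15 for gamma = 1.4."""
    R = xi0 * (E * t**2 / rho) ** 0.2
    v = 0.4 * R / t                   # dR/dt = (2/5) R / t
    return R, v

# illustrative values: unit energy and density, a handful of times
R, v = sedov_shock(t=np.linspace(0.01, 1.0, 5), E=1.0, rho=1.0)
```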
Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode
NASA Astrophysics Data System (ADS)
Seibert, P.; Frank, A.
2004-01-01
The possibility of calculating linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is shown and illustrated with many tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for the application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas contaminated heavily in the Chernobyl disaster is included.
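In matrix form, the relationship described above is c = M q: each backward run fills one row of the source-receptor matrix M (one run per receptor), while forward runs would fill it column by column (one run per source), which is why the backward mode wins when receptors are few. The toy numerical sketch below (Python/NumPy) only illustrates that bookkeeping; all sensitivities and source strengths are made-up values with indicative units.

```python
import numpy as np

# Hypothetical sensitivities: one backward LPDM run per receptor yields one
# row of the source-receptor matrix M (units, say, s m^-3 kg^-1 -> here just
# illustrative numbers).
M = np.array([[2.1e-9, 0.0,    4.3e-10],   # receptor 1 vs sources 1..3
              [7.5e-10, 1.2e-9, 0.0]])     # receptor 2 vs sources 1..3

q = np.array([5.0e3, 1.0e4, 2.0e3])        # hypothetical source strengths (kg/s)

c = M @ q                                  # modeled concentrations at receptors
print(c)
```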
NASA Astrophysics Data System (ADS)
Pasten-Zapata, Ernesto; Ledesma-Ruiz, Rogelio; Ramirez, Aldo; Harter, Thomas; Mahlknecht, Jürgen
2014-05-01
To effectively manage groundwater quality it is essential to understand sources of contamination and underground processes. The objective of the study was to identify sources and fate of nitrate pollution occurring in an aquifer underneath a sub-humid to humid region in NE Mexico which provides 10% of national citrus production. Nitrate isotopes and halide ratios were applied to understand nitrate sources and transformations in relation to land use/land cover. It was found that the study area is subject to diverse nitrate sources including organic waste and wastewater, synthetic fertilizers and soil processes. Animal manure and sewage from septic tanks were the causes of groundwater nitrate pollution within orchards and vegetable agriculture. Dairy activities within a radius of 1,000m from a sampling point increased nitrate pollution. Leachates from septic tanks incited nitrate pollution in residential areas. Soil nitrogen and animal waste were the sources of nitrate in groundwater under shrubland and grassland. Partial denitrification processes were evidenced. The denitrification process helped to attenuate nitrate concentration in the agricultural lands and grassland particularly during summer months.
Wavelet transform analysis of the small-scale X-ray structure of the cluster Abell 1367
NASA Technical Reports Server (NTRS)
Grebeney, S. A.; Forman, W.; Jones, C.; Murray, S.
1995-01-01
We have developed a new technique based on a wavelet transform analysis to quantify the small-scale (less than a few arcminutes) X-ray structure of clusters of galaxies. We apply this technique to the ROSAT position sensitive proportional counter (PSPC) and Einstein high-resolution imager (HRI) images of the central region of the cluster Abell 1367 to detect sources embedded within the diffuse intracluster medium. In addition to detecting sources and determining their fluxes and positions, we show that the wavelet analysis allows a characterization of the source extents. In particular, the wavelet scale at which a given source achieves a maximum signal-to-noise ratio in the wavelet images provides an estimate of the angular extent of the source. To account for the widely varying point response of the ROSAT PSPC as a function of off-axis angle requires a quantitative measurement of the source size and a comparison to a calibration derived from the analysis of a Deep Survey image. Therefore, we assume that each source could be described as an isotropic two-dimensional Gaussian and used the wavelet amplitudes, at different scales, to determine the equivalent Gaussian Full Width Half-Maximum (FWHM) (and its uncertainty) appropriate for each source. In our analysis of the ROSAT PSPC image, we detect 31 X-ray sources above the diffuse cluster emission (within a radius of 24 min), 16 of which are apparently associated with cluster galaxies and two with serendipitous, background quasars. We find that the angular extents of 11 sources exceed the nominal width of the PSPC point-spread function. Four of these extended sources were previously detected by Bechtold et al. (1983) as 1 sec scale features using the Einstein HRI. The same wavelet analysis technique was applied to the Einstein HRI image. We detect 28 sources in the HRI image, of which nine are extended. Eight of the extended sources correspond to sources previously detected by Bechtold et al. Overall, using both the PSPC and the HRI observations, we detect 16 extended features, of which nine have galaxies coincident with the X-ray-measured positions (within the positional error circles). These extended sources have luminosities lying in the range (3 - 30) x 10^40 ergs/s and gas masses of approximately (1 - 30) x 10^9 solar masses, if the X-rays are of thermal origin. We confirm the presence of extended features in A1367 first reported by Bechtold et al. (1983). The nature of these systems remains uncertain. The luminosities are large if the emission is attributed to single galaxies, and several of the extended features have no associated galaxy counterparts. The extended features may be associated with galaxy groups, as suggested by Canizares, Fabbiano, & Trinchieri (1987), although the number required is large.
Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling
Cordell, Lindrith
1994-01-01
Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.
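Both steps lean on Euler's homogeneity equation, (x - x0) dT/dx + (y - y0) dT/dy + (z - z0) dT/dz = N (B - T), with structural index N. The sketch below (Python/NumPy) shows the familiar windowed least-squares solution of that equation for a source position and background level; the method in the abstract instead evaluates the equation iteratively on the largest residual data value, which is not reproduced here, and the inputs are assumed to be precomputed field values and gradients.

```python
import numpy as np

def euler_solve(x, y, z, T, Tx, Ty, Tz, N):
    """Least-squares solution of Euler's homogeneity equation
    (x - x0) Tx + (y - y0) Ty + (z - z0) Tz = N (B - T)
    for the source position (x0, y0, z0) and background B in one data window.
    Tx, Ty, Tz are the field gradients at the observation points."""
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B
```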
Combining catalytical and biological processes to transform cellulose into high value-added products
NASA Astrophysics Data System (ADS)
Gavilà, Lorenc; Güell, Edgar J.; Maru, Biniam T.; Medina, Francesc; Constantí, Magda
2017-04-01
Cellulose, the most abundant polymer in biomass, has enormous potential as a source of chemicals and energy. However, its nature does not facilitate its exploitation in industry. As an entry point, two different strategies to hydrolyse cellulose are proposed here, using either a solid or a liquid acid catalyst. As solid acid catalysts, zirconia and various doped zirconias are tested, while sulfuric acid is used as the liquid acid catalyst. Sulfuric acid proved to hydrolyse 78% of the cellulose, while sulfur-doped zirconia converted 22% of the cellulose. Both hydrolysates were used for fermentation with different microbial strains depending on the desired product: Citrobacter freundii H3 and Lactobacillus delbrueckii, for H2 or lactic acid production respectively. A yield of 2 mol H2/mol of glucose was obtained from the zirconia hydrolysate with Citrobacter freundii, and Lactobacillus delbrueckii transformed all the glucose into optically pure D-lactic acid.
Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)
NASA Astrophysics Data System (ADS)
Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto
An automatic speech-to-text transcoder system, suited to unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes corresponding to the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence, produced by a preceding step of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences between some regions of Brazil are considered, but only those that cause differences in the phonological transcription, because those at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from the orthographic and grammatical points of view, to eliminate the incorrect ones.
NASA Astrophysics Data System (ADS)
Bradshaw, A. M.; Reuter, B.; Hamacher, T.
2015-08-01
The energy transformation process beginning to take place in many countries as a response to climate change will reduce substantially the consumption of fossil fuels, but at the same time cause a large increase in the demand for other raw materials. Whereas it is difficult to estimate the quantities of, for example, iron, copper and aluminium required, the situation is somewhat simpler for the rare elements that might be needed in a sustainable energy economy based largely on photovoltaic sources, wind and possibly nuclear fusion. We consider briefly each of these technologies and discuss the supply risks associated with the rare elements required, if they were to be used in the quantities that might be required for a global energy transformation process. In passing, we point out the need in resource studies to define the terms "rare", "scarce" and "critical" and to use them in a consistent way.
Discrete cosine and sine transforms generalized to honeycomb lattice
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Motlochová, Lenka
2018-06-01
The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.
Error assessment of local tie vectors in space geodesy
NASA Astrophysics Data System (ADS)
Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald
2014-05-01
For the computation of the ITRF, the data of the geometric space-geodetic techniques on co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". The realization of local ties is usually achieved by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real uncertainty of local tie determination. To fulfil the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, an accuracy of the local tie at the sub-mm level will be mandatory, which is currently not achievable. To assess the local tie effects on ITRF computations, investigations of the error sources will be done to realistically assess and consider them. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors and technique-specific error contributions to uncertainties and thus help assess the accuracy of the space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.
Infrared divergences for free quantum fields in cosmological spacetimes
NASA Astrophysics Data System (ADS)
Higuchi, Atsushi; Rendell, Nicola
2018-06-01
We investigate the nature of infrared divergences for the free graviton and inflaton two-point functions in flat Friedmann-Lemaître-Robertson-Walker spacetime. These divergences arise because the momentum integral for these two-point functions diverges in the infrared. It is straightforward to see that the power of the momentum in the integrand can be increased by 2 in the infrared using large gauge transformations, which is sufficient for rendering these two-point functions infrared finite for slow-roll inflation. In other words, whatever power of the momentum p the integrand carries in the infrared, large gauge transformations can raise that power by 2. On the other hand, it is known that, if one smears these two-point functions in a gauge-invariant manner, the power of the momentum in the integrand is increased by 4. This fact suggests that the power of the momentum in the integrand for these two-point functions can also be increased by 4 using large gauge transformations. In this paper we show that this is indeed the case. Thus, the two-point functions for the graviton and inflaton fields can be made finite by large gauge transformations for a large class of potentials and states in single-field inflation.
Alternate energy source usage methods for in situ heat treatment processes
Stone, Jr., Francis Marion; Goodwin, Charles R; Richard, Jr., James E
2014-10-14
Systems, methods, and heaters for treating a subsurface formation are described herein. At least one method for providing power to one or more subsurface heaters is described herein. The method may include monitoring one or more operating parameters of the heaters, the intermittent power source, and a transformer coupled to the intermittent power source that transforms power from the intermittent power source to power with appropriate operating parameters for the heaters; and controlling the power output of the transformer so that a constant voltage is provided to the heaters regardless of the load of the heaters and the power output provided by the intermittent power source.
Spatial transformation abilities and their relation to later mathematics performance.
Frick, Andrea
2018-04-10
Using a longitudinal approach, this study investigated the relational structure of different spatial transformation skills at kindergarten age, and how these spatial skills relate to children's later mathematics performance. Children were tested at three time points, in kindergarten, first grade, and second grade (N = 119). Exploratory factor analyses revealed two subcomponents of spatial transformation skills: one representing egocentric transformations (mental rotation and spatial scaling), and one representing allocentric transformations (e.g., cross-sectioning, perspective taking). Structural equation modeling suggested that egocentric transformation skills showed their strongest relation to the part of the mathematics test tapping arithmetic operations, whereas allocentric transformations were strongly related to Numeric-Logical and Spatial Functions as well as geometry. The present findings point to a tight connection between early mental transformation skills, particularly the ones requiring a high level of spatial flexibility and a strong sense for spatial magnitudes, and children's mathematics performance at the beginning of their school career.
A three dimensional point cloud registration method based on rotation matrix eigenvalue
NASA Astrophysics Data System (ADS)
Wang, Chao; Zhou, Xiang; Fei, Zixuan; Gao, Xiaofei; Jin, Rui
2017-09-01
In traditional optical three-dimensional measurement, an object usually needs to be measured from multiple angles because of occlusions, and point cloud registration methods are then used to obtain the complete three-dimensional shape of the object. Turntable-based point cloud registration requires calculating the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. In the traditional method, this transformation matrix is calculated by fitting the rotation center and the rotation axis direction of the turntable, which is limited by the measurement field of view: the exact feature points used for fitting the rotation center and axis are distributed over an arc of less than about 120 degrees, resulting in low fitting accuracy. In this paper, we propose a better method, based on the invariance of the rotation matrix eigenvalues in the turntable coordinate system and on the coordinate transformation matrices of corresponding points. First, we control the rotation angle of a calibration plate with the turntable and calibrate the coordinate transformation of the corresponding points using the least-squares method. We then use eigendecomposition to calculate the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. Compared with the traditional method, it has higher accuracy and better robustness, and it is not affected by the camera field of view. With this method, the coincidence error of the corresponding points on the calibration plate after registration is less than 0.1 mm.
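The "invariant eigenvalue" referred to above is the unit eigenvalue that every proper rotation matrix has; its eigenvector is the rotation axis, which is what links the camera frame to the turntable frame. A minimal sketch of extracting that axis and the rotation angle from a 3 × 3 rotation matrix is given below (Python/NumPy); the calibration-plate least-squares step of the paper is not reproduced.

```python
import numpy as np

def rotation_axis_angle(R):
    """Rotation axis = eigenvector of R for the eigenvalue 1 (the invariant
    eigenvalue of any proper rotation); angle from the matrix trace."""
    w, V = np.linalg.eig(R)
    k = np.argmin(np.abs(w - 1.0))             # eigenvalue closest to 1
    axis = np.real(V[:, k])
    axis /= np.linalg.norm(axis)
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return axis, angle

# toy check: 30-degree rotation about the z-axis
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(rotation_axis_angle(R))                  # axis ~ [0, 0, 1], angle ~ 0.5236
```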
Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie
2017-01-01
Spherical microphone arrays have received increasing attention for their ability to locate a sound source with an arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located by using spherical near-field acoustic holography. In the conventional sound field transformation based on the generalized Fourier transform, the reconstruction surface and the holography surface are conformal surfaces. When the sound source lies on a cylindrical surface, it is therefore difficult to locate using a spherical conformal transform. This paper proposes a non-conformal sound field transformation that builds a transfer matrix based on spherical harmonic wave decomposition, enabling the transformation from a spherical surface to a cylindrical surface using spherical array data. The theoretical expressions of the proposed method are deduced, and its performance is simulated. Moreover, an experiment on sound source localization using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal sound field transformation from a spherical surface to a cylindrical surface is realized by using the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and the localization ability of the spherical array is improved. PMID:28489065
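The transfer-matrix construction starts from a spherical harmonic decomposition of the pressure sampled on the array. The sketch below (Python/SciPy) estimates those coefficients by least squares from samples at known directions; the subsequent radial propagation onto a cylindrical surface, which is the paper's actual contribution, is not reproduced, and SciPy's sph_harm(m, n, azimuth, polar) convention is assumed.

```python
import numpy as np
from scipy.special import sph_harm

def sh_coefficients(p, azim, polar, n_max):
    """Least-squares spherical-harmonic coefficients of pressure samples p
    measured at directions (azim in [0, 2pi), polar in [0, pi]) on the array."""
    cols, orders = [], []
    for n in range(n_max + 1):
        for m in range(-n, n + 1):
            cols.append(sph_harm(m, n, azim, polar))
            orders.append((n, m))
    Y = np.column_stack(cols)                  # sampling (steering) matrix
    coeffs, *_ = np.linalg.lstsq(Y, p, rcond=None)
    return dict(zip(orders, coeffs))
```

With randomly but uniformly distributed elements, as in the experiment, the least-squares fit plays the role that discrete quadrature weights would play on a regular sampling grid.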
NASA Astrophysics Data System (ADS)
Nikulin, Igor F.; Dumin, Yurii V.
2016-02-01
The basic observational properties of "coronal partings", a special type of quasi-one-dimensional magnetic structure identified by comparing coronal X-ray and EUV images with solar magnetograms, are investigated. They represent channels of opposite polarity inside unipolar large-scale magnetic fields, formed by rows of magnetic arcs directed to the neighboring sources of the background polarity. The most important characteristics of the partings are discussed. It can be naturally assumed that, from the evolutionary and spatial points of view, the partings can transform into coronal holes and vice versa. The classes of global, intersecting, and complex partings are identified.
Graves, J. Anthony; Rothermund, Kristi; Wang, Tao; Qian, Wei; Van Houten, Bennett; Prochownik, Edward V.
2010-01-01
Deregulation of c-Myc (Myc) occurs in many cancers. In addition to transforming various cell types, Myc also influences additional transformation-associated cellular phenotypes including proliferation, survival, genomic instability, reactive oxygen species production, and metabolism. Although Myc is wild type in most cancers (wtMyc), it occasionally acquires point mutations in certain lymphomas. Some of these mutations confer a survival advantage despite partially attenuating proliferation and transformation. Here, we have evaluated four naturally-occurring or synthetic point mutations of Myc for their ability to affect these phenotypes, as well as to promote genomic instability, to generate reactive oxygen species and to up-regulate aerobic glycolysis and oxidative phosphorylation. Our findings indicate that many of these phenotypes are genetically and functionally independent of one another and are not necessary for transformation. Specifically, the higher rate of glucose metabolism known to be associated with wtMyc deregulation was found to be independent of transformation. One mutation (Q131R) was greatly impaired for nearly all of the studied Myc phenotypes, yet was able to retain some ability to transform. These findings indicate that, while the Myc phenotypes examined here make additive contributions to transformation, none, with the possible exception of increased reliance on extracellular glutamine for survival, are necessary for achieving this state. PMID:21060841
Automatic co-registration of 3D multi-sensor point clouds
NASA Astrophysics Data System (ADS)
Persad, Ravi Ancil; Armenakis, Costas
2017-08-01
We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.
Linear Power-Flow Models in Multiphase Distribution Networks: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Andrey; Dall'Anese, Emiliano
This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus voltages, line currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
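To make the fixed-point idea concrete, here is a minimal single-phase sketch (the paper treats multiphase systems with wye/delta connections, which this toy does not): the load-bus voltage equation is written in fixed-point form and its one-step evaluation at the no-load voltage serves as the linear model. The feeder impedances and injections are illustrative assumptions.

```python
# Sketch: fixed-point form V = w + Y_LL^{-1} conj(s / V) for a 3-bus radial feeder,
# and its linearization about the no-load voltage w. Data are made up for illustration.
import numpy as np

z = 0.01 + 0.03j                           # per-segment series impedance (assumed)
y = 1.0 / z
Y = np.array([[ y, -y,   0],
              [-y, 2*y, -y],
              [ 0, -y,   y]], dtype=complex)
Y_L0, Y_LL = Y[1:, :1], Y[1:, 1:]
v0 = np.array([1.0 + 0j])                  # slack-bus voltage
w = -np.linalg.solve(Y_LL, Y_L0 @ v0)      # no-load voltage profile
s = np.array([0.05 + 0.02j, 0.03 + 0.01j]) # net complex power injections (assumed, per unit)

# "Exact" solution via fixed-point (Z-bus) iteration.
V = w.copy()
for _ in range(50):
    V = w + np.linalg.solve(Y_LL, np.conj(s / V))

# Linear model: a single evaluation of the fixed-point map at V = w.
V_lin = w + np.linalg.solve(Y_LL, np.conj(s / w))
print(np.abs(V), np.abs(V_lin))            # exact and linearized magnitudes closely agree at light loading
```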
A technique for phase correction in Fourier transform spectroscopy
NASA Astrophysics Data System (ADS)
Artsang, P.; Pongchalee, P.; Palawong, K.; Buisset, C.; Meemon, P.
2018-03-01
Fourier transform spectroscopy (FTS) is a type of spectroscopy that can be used to analyze the components of a sample. The basic setup commonly used in this technique is the Michelson interferometer. The interference signal obtained from the interferometer can be Fourier transformed into the spectral pattern of the illuminating light source. To study the concept of Fourier transform spectroscopy experimentally, the project started by setting up a Michelson interferometer in the laboratory. The implemented system used a broadband light source in the near-infrared region (0.81-0.89 μm), and the movable mirror was driven by a computer-controlled motorized translation stage. In this early study there was no sample in the interference path, so the spectrum obtained after Fourier transformation of the captured interferogram should be the spectral shape of the light source. One main challenge of FTS is to retrieve the correct phase information of the interferogram, which relates to the correct spectral shape of the light source. The main source of phase distortion that we observed in our system is the non-linear movement of the movable reference mirror of the Michelson interferometer. Therefore, to improve the result, we coupled a monochromatic light source into the implemented interferometer and simultaneously measured the interferograms of the monochromatic and broadband light sources. The interferogram of the monochromatic light source was used to correct the phase of the interferogram of the broadband light source. The result shows significant improvement in the computed spectral shape.
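A compact sketch of the reference-channel idea described above (not the authors' code): the zero crossings of the monochromatic interferogram mark equal optical-path increments, so sampling the broadband interferogram at those crossings removes the non-linear mirror motion before the Fourier transform. The reference wavelength, mirror jitter, and source spectrum are assumed.

```python
# Sketch: phase/path correction of a broadband interferogram using a monochromatic
# reference channel. All waveforms are synthetic placeholders.
import numpy as np

lam_ref = 0.633e-6                          # reference laser wavelength (assumed HeNe)
x_true = np.linspace(-2e-4, 2e-4, 20001)    # intended optical path difference (m)
jitter = 2e-7 * np.sin(2 * np.pi * x_true / 5e-5)
x_actual = x_true + jitter                  # non-linear mirror motion distorts the sampling grid

ref = np.cos(2 * np.pi * x_actual / lam_ref)                                   # monochromatic interferogram
sig = np.exp(-(x_actual / 8e-5) ** 2) * np.cos(2 * np.pi * x_actual / 0.85e-6) # broadband, centred near 0.85 um

# Zero crossings of the reference occur every lam_ref/2 of optical path.
zc = np.where(np.diff(np.sign(ref)) != 0)[0]
interferogram = sig[zc]                     # now (approximately) uniformly sampled in path
spectrum = np.abs(np.fft.rfft(interferogram * np.hanning(interferogram.size)))
wavenumber = np.fft.rfftfreq(interferogram.size, d=lam_ref / 2)   # cycles per metre
```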
Transformation Systems at NASA Ames
NASA Technical Reports Server (NTRS)
Buntine, Wray; Fischer, Bernd; Havelund, Klaus; Lowry, Michael; Pressburger, Tom; Roach, Steve; Robinson, Peter; VanBaalen, Jeffrey
1999-01-01
In this paper, we describe the experiences of the Automated Software Engineering Group at the NASA Ames Research Center in the development and application of three different transformation systems. The systems span the entire technology range, from deductive synthesis, to logic-based transformation, to almost compiler-like source-to-source transformation. These systems also span a range of NASA applications, including solving solar system geometry problems, generating data analysis software, and analyzing multi-threaded Java code.
Studies on the coupling transformer to improve the performance of microwave ion source.
Misra, Anuraag; Pandit, V S
2014-06-01
A 2.45 GHz microwave ion source has been developed and installed at the Variable Energy Cyclotron Centre to produce high intensity proton beam. It is operational and has already produced more than 12 mA of proton beam with just 350 W of microwave power. In order to optimize the coupling of microwave power to the plasma, a maximally flat matching transformer has been used. In this paper, we first describe an analytical method to design the matching transformer and then present the results of rigorous simulation performed using ANSYS HFSS code to understand the effect of different parameters on the transformed impedance and reflection and transmission coefficients. Based on the simulation results, we have chosen two different coupling transformers which are double ridged waveguides with ridge widths of 24 mm and 48 mm. We have fabricated these transformers and performed experiments to study the influence of these transformers on the coupling of microwave to plasma and extracted beam current from the ion source.
NASA Astrophysics Data System (ADS)
Lorek, Dariusz
2016-12-01
The article presents a framework for integrating historical sources with elements of the geographical space recorded in unique cartographic materials. The aim of the project was to elaborate a method of integrating spatial data sources that would facilitate studying and presenting phenomena of economic history. The proposed methodology for the multimedia integration of old materials made it possible to demonstrate the successive stages of the transformation characteristic of 19th-century space. The point of reference for this process of integration was topographic maps from the first half of the 19th century, while the research area comprised the castle complex in Kórnik together with the small town, a pre-industrial landscape in Wielkopolska (Greater Poland). The collected source material was integrated on the basis of map and plan transformation, graphic processing of scans of old drawings, texture mapping of the facades of historic buildings, and a 360° panorama. The final product is a few-minute-long video composed of nine sequences. It captures the changing form of the castle building together with its facades, the castle park, and its wider topographic and urban surroundings, from the beginning of the 19th century until the present day. For a topographic map sheet dating back to the first half of the 19th century, in which hachuring had been used to present land relief, a terrain model was generated. The transition from a parallel to a bird's-eye-view perspective served to demonstrate the distinctive character of the pre-industrial landscape.
User's Guide for MapIMG 2: Map Image Re-projection Software Package
Finn, Michael P.; Trent, Jason R.; Buehler, Robert A.
2006-01-01
BACKGROUND Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in commercial software packages, but implementation with data other than points requires specific adaptation of the transformation equations or prior preparation of the data to allow the transformation to succeed. It seems that some of these packages use the U.S. Geological Survey's (USGS) General Cartographic Transformation Package (GCTP) or similar point transformations without adaptation to the specific characteristics of raster data (Usery and others, 2003a). Usery and others (2003b) compiled and tabulated the accuracy of categorical areas in projected raster datasets of global extent. Based on the shortcomings identified in these studies, geographers and applications programmers at the USGS expanded and evolved a USGS software package, MapIMG, for raster map projection transformation (Finn and Trent, 2004). Daniel R. Steinwand of Science Applications International Corporation, National Center for Earth Resources Observation and Science, originally developed MapIMG for the USGS, basing it on GCTP. Through previous and continuing efforts at the USGS' National Geospatial Technical Operations Center, this program has been transformed from an application based on command line input into a software package based on a graphical user interface for Windows, Linux, and other UNIX machines.
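As a schematic of the raster adaptation the text alludes to, the sketch below fills every output cell by inverse-mapping its centre into input coordinates and sampling the input grid with nearest-neighbour lookup, which keeps categorical values intact; the "projection" used here is a placeholder rotation, not GCTP or MapIMG code.

```python
# Sketch: inverse-mapping raster reprojection. A toy transform stands in for a real
# map projection; grid origins, cell sizes, and extents are assumed.
import numpy as np

def inverse_transform(x_out, y_out):
    """Placeholder inverse map from output to input coordinates (not GCTP)."""
    c, s = np.cos(0.3), np.sin(0.3)
    return x_out * c + y_out * s, -x_out * s + y_out * c

src = np.arange(100 * 100, dtype=float).reshape(100, 100)   # input raster
x0, y0, cell = -50.0, -50.0, 1.0                             # input grid origin and cell size

rows, cols = np.mgrid[0:120, 0:120]
x_out = -60.0 + (cols + 0.5) * 1.0                           # output cell centres
y_out = -60.0 + (rows + 0.5) * 1.0
x_in, y_in = inverse_transform(x_out, y_out)

ci = np.floor((x_in - x0) / cell).astype(int)                # nearest input column/row
ri = np.floor((y_in - y0) / cell).astype(int)
inside = (ri >= 0) & (ri < src.shape[0]) & (ci >= 0) & (ci < src.shape[1])
dst = np.full(rows.shape, np.nan)
dst[inside] = src[ri[inside], ci[inside]]                    # categorical data keeps its classes
```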
C-point and V-point singularity lattice formation and index sign conversion methods
NASA Astrophysics Data System (ADS)
Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.
2017-06-01
The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars, in a polarization distribution with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by a Poincare-Hopf index of integer value. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of the three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in the whole spatial SOP distribution. Further, there is no change in the lattice structure, and the C- and V-points appear at the locations where they were present earlier. Hence, to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly, a positive V-point can be converted into a negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an orderly fashion. It shows degeneracy as long as the SOPs of the three beams are drawn from polarization distributions that have a Poincare-Hopf index of the same magnitude. Various topological aspects of these lattices are presented with the help of the Stokes field S12, which is constructed using generalized Stokes parameters of fully polarized light. We envisage that such polarization lattice structures may lead to novel concepts of structured-polarization illumination in super-resolution microscopy.
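The following is an illustrative computation (not the authors' experimental parameters): three linearly polarized plane waves whose transverse wavevectors sit 120° apart are superposed, and the Stokes field S12 = arctan2(S2, S1) used to locate polarization singularities is evaluated. The amplitudes, beam tilt, and the three SOPs are assumptions.

```python
# Sketch: interference of three linearly polarized plane waves and the Stokes field S12.
import numpy as np

k_t = 2 * np.pi * 5.0                       # transverse wavenumber (arbitrary units)
angles = np.deg2rad([90, 210, 330])         # three beams, 120 degrees apart
jones = [np.array([1.0, 0.0]),              # three different linear SOPs (assumed)
         np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)]),
         np.array([np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)])]

x = np.linspace(-1, 1, 400)
X, Y = np.meshgrid(x, x)
Ex = np.zeros_like(X, dtype=complex)
Ey = np.zeros_like(X, dtype=complex)
for a, j in zip(angles, jones):
    phase = np.exp(1j * k_t * (X * np.cos(a) + Y * np.sin(a)))
    Ex += j[0] * phase
    Ey += j[1] * phase

S0 = np.abs(Ex) ** 2 + np.abs(Ey) ** 2       # intensity: maxima host C-points, minima V-points
S1 = np.abs(Ex) ** 2 - np.abs(Ey) ** 2
S2 = 2 * np.real(Ex * np.conj(Ey))
S12 = np.arctan2(S2, S1)                     # Stokes phase; its vortices mark the lattice sites
```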
Modernisation of the Narod fluxgate electronics at Budkov Geomagnetic Observatory
NASA Astrophysics Data System (ADS)
Vlk, Michal
2013-04-01
From the signal point of view, a fluxgate unit is a low-frequency parametric up-converter in which the output signal is picked up in bands near the second harmonic of the pump frequency fp (sometimes called the idler for historical reasons), and the purity of the idler is augmented by the orthogonal construction of the pump and pick-up coils. In our concept, the pump source uses a Heegner quartz oscillator near 8 MHz, a synchronous divider to 16 kHz (fp) and a switched current booster. A rectangular pulse is used to feed the original ferroresonant pump source, with a neutralizing transformer in the case of symmetric shielded cabling. The input transformer has a split primary winding for use with symmetrical shielded input cabling, and a secondary winding tuned by a polystyrene capacitor and loaded by an inverting integrator bridged by a capacitor; this structure behaves like a resistor cooled to low temperature. The next stage is a bandpass filter (derivator) with its gain tuned to 2 fp, built with leaky FDNRs and followed by a current booster. Another part of the system is the low-noise peak-elimination and bias circuit. The heart of the system is a 120-V precision source which uses a 3.3-V Zener diode chain and thermistor bridge in the feedback. The peak-elimination logic consists of an envelope detector, comparators, an asynchronous counter in hardwired logic, a set of weighted resistor chains and discrete MOS switches operating in current mode. All HV components use airy montage to prevent ground leaks. After a 200-m-long coaxial line, the signal is galvanically separated by a transformer and fed into the A/D converter, which is an ordinary HD audio (96 kHz) soundcard. The real sample rate is reconstructed by a posteriori data processing once the statistical properties of the incoming samples are known. The sampled signal is band-pass filtered with a 200-Hz filter centered at 2 fp and then fed through a first-order allpass centered at 2 fp; the result approximates the Hilbert transform well enough to detect the envelope via the square-sum-root rule. The signal is further decimated via IIR filters to a sample rate of 187.5 Hz. Raw instrument data files are saved hourly as floating-point binary files and are marked by time stamps obtained from an NTP server. A posteriori processing of the (plesiochronous) instrument data consists of downsampling by IIRs to 12 Hz, irrational (time-mark driven) upsampling to 13 Hz, and then applying the INTERMAGNET standard FIR filter (5 s to 1 min) to obtain 1-min data. Because the range of the signal processing system is about 60 nT (the range of the peak-elimination circuit is 3.8 uT), the resulting magnetograms resemble the La Cour ones.
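A compact sketch of the digital back end described above, under assumed parameters (fp = 16 kHz, 96 kHz soundcard, synthetic field variation): band-pass selection around 2 fp, envelope detection, and IIR decimation. The envelope is taken from scipy's analytic signal rather than the instrument's first-order allpass approximation of the Hilbert transform.

```python
# Sketch: 2*fp band-pass, envelope detection, and decimation of the fluxgate signal chain.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, decimate

fs, fp = 96_000, 16_000
t = np.arange(0, 1.0, 1 / fs)
field = 30e-9 * np.sin(2 * np.pi * 0.2 * t)                 # slow field variation (placeholder)
raw = (1 + field / 60e-9) * np.sin(2 * np.pi * 2 * fp * t)  # idler amplitude tracks the field
raw += 0.01 * np.random.randn(t.size)

sos = butter(4, [2 * fp - 100, 2 * fp + 100], btype="bandpass", fs=fs, output="sos")
carrier = sosfiltfilt(sos, raw)                  # 200-Hz band-pass centred on 2*fp
envelope = np.abs(hilbert(carrier))              # analytic-signal envelope

baseband = envelope
for q in (8, 8, 8):                              # 96 kHz -> 187.5 Hz in staged IIR decimation
    baseband = decimate(baseband, q)
```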
Poisson Noise Removal in Spherical Multichannel Images: Application to Fermi data
NASA Astrophysics Data System (ADS)
Schmitt, Jérémy; Starck, Jean-Luc; Fadili, Jalal; Digel, Seth
2012-03-01
The Fermi Gamma-ray Space Telescope, which was launched by NASA in June 2008, is a powerful space observatory which studies the high-energy gamma-ray sky [5]. Fermi's main instrument, the Large Area Telescope (LAT), detects photons in an energy range between 20 MeV and >300 GeV. The LAT is much more sensitive than its predecessor, the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma-ray Observatory, and is expected to find several thousand gamma-ray point sources, which is an order of magnitude more than EGRET [13]. Even with its relatively large acceptance (∼2 m² sr), the number of photons detected by the LAT outside the Galactic plane and away from intense sources is relatively low, and the sky overall has a diffuse glow from cosmic-ray interactions with interstellar gas and low-energy photons that forms a background against which point sources need to be detected. In addition, the per-photon angular resolution of the LAT is relatively poor and strongly energy dependent, ranging from >10° at 20 MeV to ∼0.1° above 100 GeV. Consequently, the spherical photon count images obtained by Fermi are degraded by fluctuations in the number of detected photons. This kind of noise is strongly signal-dependent: on the brightest parts of the image, like the Galactic plane or the brightest sources, there are many photons per pixel, so the photon noise is low. Outside the Galactic plane, the number of photons per pixel is low, which means that the photon noise is high. Such signal-dependent noise cannot be accurately modeled by a Gaussian distribution. The basic photon-imaging model assumes that the number of detected photons at each pixel location is Poisson distributed; more specifically, the image is considered a realization of an inhomogeneous Poisson process. This statistical noise makes source detection more difficult, so an efficient denoising method for spherical Poisson data is highly desirable. Several techniques have been proposed in the literature to estimate Poisson intensity in two dimensions (2D). A major class of methods adopts a multiscale Bayesian framework specifically tailored for Poisson data [18], independently initiated by Timmerman and Nowak [23] and Kolaczyk [14]. Lefkimmiatis et al. [15] proposed an improved Bayesian framework for analyzing Poisson processes, based on a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities in adjacent scales are modeled as mixtures of conjugate parametric distributions. Another approach preprocesses the count data with a variance stabilizing transform (VST) such as the Anscombe [4] and Fisz [10] transforms, applied respectively in the spatial [8] or in the wavelet domain [11]. The transform reshapes the data so that the noise approximately becomes Gaussian with constant variance; standard techniques for independent identically distributed Gaussian noise are then used for denoising. Zhang et al. [25] proposed a powerful method called the multiscale VST (MS-VST). It combines a VST with a multiscale transform (wavelets, ridgelets, or curvelets), yielding asymptotically normally distributed coefficients with known variances. The interest of using a multiscale method is to exploit the sparsity properties of the data: the data are transformed into a domain in which they are sparse, and, as the noise is not sparse in any transform domain, it is easy to separate it from the signal.
When the noise is Gaussian with known variance, it is easy to remove with a high threshold in the wavelet domain. The choice of the multiscale transform depends on the morphology of the data: wavelets represent regular structures and isotropic singularities more efficiently, whereas ridgelets are designed to represent global lines in an image, and curvelets efficiently represent curvilinear contours. Significant coefficients are then detected with binary hypothesis testing, and the final estimate is reconstructed with an iterative scheme. In Ref
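As a minimal illustration of the VST route described above (the Anscombe transform only; the MS-VST couples the stabilization with a wavelet/ridgelet/curvelet transform, which is not reproduced here), the sketch below stabilizes synthetic Poisson counts, thresholds in a Fourier domain as a stand-in for the wavelet-domain detection, and inverts. The intensity profile and threshold are assumptions.

```python
# Sketch: Anscombe variance-stabilizing transform, hard thresholding, inverse transform.
import numpy as np

def anscombe(x):
    """Approximately Gaussianizes Poisson counts: variance ~ 1 after the transform."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at low counts; unbiased inverses exist)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(1)
intensity = 5.0 + 20.0 * np.exp(-((np.arange(256) - 128) / 20.0) ** 2)  # 1-D "source" profile
counts = rng.poisson(intensity)

stab = anscombe(counts)                    # noise now approximately N(0, 1)
coeffs = np.fft.rfft(stab)                 # stand-in for a multiscale transform
coeffs[np.abs(coeffs) < 3 * np.sqrt(stab.size / 2)] = 0   # keep only significant coefficients
denoised = inverse_anscombe(np.fft.irfft(coeffs, n=stab.size))
```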
Can satellite-based monitoring techniques be used to quantify volcanic CO2 emissions?
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Carn, Simon A.; Kuze, Akihiko; Kataoka, Fumie; Shiomi, Kei; Goto, Naoki; Popp, Christoph; Ajiro, Masataka; Suto, Hiroshi; Takeda, Toru; Kanekon, Sayaka; Sealing, Christine; Flower, Verity
2014-05-01
Since 2010, we have been investigating and improving methods to regularly target volcanic centers from space in order to detect volcanic carbon dioxide (CO2) point-source anomalies, using the Japanese Greenhouse gases Observing SATellite (GOSAT). Our long-term goals are: (a) better spatial and temporal coverage of volcano monitoring techniques; (b) improvement of the currently highly uncertain global CO2 emission inventory for volcanoes; and (c) use of volcanic CO2 emissions for high-altitude, strong point-source emission and dispersion studies in atmospheric science. The difficulties posed by strong relief, orographic clouds, and aerosols are minimized by a small field of view, enhanced spectral resolving power, repeat target-mode observation strategies, and comparison to continuous ground-based sensor network validation data. GOSAT is a single-instrument Earth-observing greenhouse gas mission aboard JAXA's IBUKI satellite in sun-synchronous polar orbit. GOSAT's Fourier-Transform Spectrometer (TANSO-FTS) has been producing total-column XCO2 data since January 2009 at a repeat cycle of 3 days, offering great opportunities for temporal monitoring of point sources. GOSAT's 10 km field of view can spatially integrate an entire volcanic edifice within one 'shot' in precise target mode. While it does not have any spatial scanning or mapping capability, it does have strong spectral resolving power and agile pointing capability to focus on several targets of interest per orbit. Sufficient uncertainty reduction is achieved through comprehensive in-flight vicarious calibration, in close collaboration between NASA and JAXA. Challenges with the on-board pointing mirror system have been compensated for by employing custom observation planning strategies, including repeat sacrificial upstream reference points to control pointing mirror motion, empirical individualized target offset compensation, and observation pattern simulations to minimize view-angle azimuth. Since summer 2010 we have conducted repeated target-mode observations of almost 40 persistently active volcanoes and other point sources worldwide, including Etna (Italy), Mayon (Philippines), Hawaii (USA), Popocatepetl (Mexico), and Ambrym (Vanuatu), using GOSAT FTS SWIR data. In this presentation we summarize results from over three years of measurements and progress toward understanding detectability with this method. In emerging collaboration with the Deep Carbon Observatory's DECADE program, the World Organization of Volcano Observatories (WOVO) global database of volcanic unrest (WOVOdat), and country-specific observatories and agencies, we see a growing potential for ground-based validation synergies. Complementing the ongoing GOSAT mission, NASA is on schedule to launch its OCO-2 satellite in July 2014, which will provide higher spatial but lower temporal resolution. Further orbiting and geostationary satellite sensors are being planned at JAXA, NASA, and ESA.
The method for homography estimation between two planes based on lines and points
NASA Astrophysics Data System (ADS)
Shemiakina, Julia; Zhukovsky, Alexander; Nikolaev, Dmitry
2018-04-01
The paper considers the problem of estimating a transform connecting two images of a planar object. A method based on RANSAC is proposed for calculating the parameters of the projective transform using point and line correspondences simultaneously. A series of experiments was performed on synthesized data. The presented results show that the algorithm convergence rate is significantly higher when actual lines are used instead of points of line intersection. When using both lines and feature points, the convergence rate is shown not to depend on the ratio between lines and feature points in the input dataset.
9. Photographic copy of drawing dated June 24, 1908 (Source: ...
9. Photographic copy of drawing dated June 24, 1908 (Source: Salt River Project) Transformer house, general drawing - Theodore Roosevelt Dam, Transformer House, Salt River, Tortilla Flat, Maricopa County, AZ
An X-band parabolic antenna based on gradient metasurface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Wang; Yang, Helin, E-mail: emyang@mail.ccnu.edu.cn; Tian, Ying
We present a novel parabolic antenna employing a reflection gradient metasurface composed of a series of circular patches on a grounded dielectric substrate. Similar to a traditional parabolic antenna, the proposed antenna takes the metasurface as a "parabolic reflector", and a patch antenna placed at the focal point of the metasurface acts as the feed source; the quasi-spherical wave emitted by the source is then reflected and transformed into a plane wave with high efficiency. Due to the focusing effect on reflection, the beam width of the antenna is decreased from 85.9° to 13° and the gain is increased from 6.5 dB to 20.8 dB. Simulation and measurement results of both near- and far-field plots demonstrate the good focusing properties of the proposed parabolic antenna.
Pursuing optimal electric machines transient diagnosis: The adaptive slope transform
NASA Astrophysics Data System (ADS)
Pons-Llinares, Joan; Riera-Guasp, Martín; Antonino-Daviu, Jose A.; Habetler, Thomas G.
2016-12-01
The aim of this paper is to introduce a new linear time-frequency transform to improve the detection of fault components in electric machines transient currents. Linear transforms are analysed from the perspective of the atoms used. A criterion to select the atoms at every point of the time-frequency plane is proposed, taking into account the characteristics of the searched component at each point. This criterion leads to the definition of the Adaptive Slope Transform, which enables a complete and optimal capture of the different components evolutions in a transient current. A comparison with conventional linear transforms (Short-Time Fourier Transform and Wavelet Transform) is carried out, showing their inherent limitations. The approach is tested with laboratory and field motors, and the Lower Sideband Harmonic is captured for the first time during an induction motor startup and subsequent load oscillations, accurately tracking its evolution.
Pacaci, Anil; Gonul, Suat; Sinaci, A Anil; Yuksel, Mustafa; Laleci Erturkmen, Gokce B
2018-01-01
Background: Utilization of the available observational healthcare datasets is key to complementing and strengthening postmarketing safety studies. Use of common data models (CDM) is the predominant approach to enable large-scale systematic analyses on disparate data models and vocabularies. Current CDM transformation practices depend on proprietarily developed Extract-Transform-Load (ETL) procedures, which require knowledge of both the semantics and the technical characteristics of the source datasets and the target CDM. Purpose: In this study, our aim is to develop a modular but coordinated transformation approach that separates the semantic and technical steps of the transformation process, which are not strictly separated in traditional ETL approaches. Such an approach discretizes the operations that extract data from source electronic health record systems, the alignment of the source and target models on the semantic level, and the operations that populate target common data repositories. Approach: To separate the activities required to transform heterogeneous data sources to a target CDM, we introduce a semantic transformation approach composed of three steps: (1) transformation of source datasets to Resource Description Framework (RDF) format, (2) application of semantic conversion rules to obtain the data as instances of the ontological model of the target CDM, and (3) population of repositories which comply with the specifications of the CDM by processing the RDF instances from step 2. The proposed approach has been implemented in real healthcare settings where the Observational Medical Outcomes Partnership (OMOP) CDM was chosen as the common data model, and a comprehensive comparative analysis between the native and transformed data has been conducted. Results: Health records of ~1 million patients have been successfully transformed from the source database to an OMOP CDM based database. Descriptive statistics obtained from the source and target databases present analogous and consistent results. Discussion and Conclusion: Our method goes beyond traditional ETL approaches by being more declarative and rigorous. Declarative, because the use of RDF-based mapping rules makes each mapping more transparent and understandable to humans while retaining logic-based computability. Rigorous, because the mappings are based on computer-readable semantics which are amenable to validation through logic-based inference methods.
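A toy version of the three-step flow described above, written with rdflib: the namespaces, field names, and the "OMOP-like" target vocabulary are invented for illustration, and the conversion rule is hard-coded in Python rather than expressed declaratively as in the paper.

```python
# Sketch: (1) lift a source record to RDF, (2) apply a semantic conversion rule,
# (3) populate the target repository (serialized here instead of a database load).
from rdflib import Graph, Literal, Namespace, RDF

SRC = Namespace("http://example.org/source-ehr#")     # hypothetical source vocabulary
TGT = Namespace("http://example.org/omop-like#")      # hypothetical target vocabulary

# Step 1: source record as RDF triples.
record = {"patient_id": "p42", "sex": "F", "birth_year": 1980}
g_src = Graph()
subj = SRC[record["patient_id"]]
g_src.add((subj, RDF.type, SRC.Patient))
g_src.add((subj, SRC.sex, Literal(record["sex"])))
g_src.add((subj, SRC.birthYear, Literal(record["birth_year"])))

# Step 2: semantic conversion to instances of the target model.
g_tgt = Graph()
for s, _, year in g_src.triples((None, SRC.birthYear, None)):
    g_tgt.add((s, RDF.type, TGT.Person))
    g_tgt.add((s, TGT.year_of_birth, year))
for s, _, sex in g_src.triples((None, SRC.sex, None)):
    g_tgt.add((s, TGT.gender_concept, Literal("FEMALE" if str(sex) == "F" else "MALE")))

# Step 3: hand the converted instances to the target repository.
print(g_tgt.serialize(format="turtle"))
```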
Parallel and pipeline computation of fast unitary transforms
NASA Technical Reports Server (NTRS)
Fino, B. J.; Algazi, V. R.
1975-01-01
The letter discusses the parallel and pipeline organization of fast-unitary-transform algorithms such as the fast Fourier transform, and points out the efficiency of a combined parallel-pipeline processor for a transform such as the Haar transform, in which 2^n - 1 hardware 'butterflies' generate a transform of order 2^n every computation cycle.
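To make the butterfly count concrete, here is a small software illustration (not the hardware design in the letter): an ordered Haar transform of length 2^n written stage by stage, so that the pairs processed by each butterfly are explicit; a length-2^n transform uses 2^(n-1) + 2^(n-2) + ... + 1 = 2^n - 1 butterflies in total.

```python
# Sketch: ordered Haar transform built from (a+b)/sqrt(2), (a-b)/sqrt(2) butterflies.
import numpy as np

def haar_transform(x):
    x = np.asarray(x, dtype=float).copy()
    out = np.empty_like(x)
    n = x.size                                   # must be a power of two
    while n > 1:
        a, b = x[0:n:2], x[1:n:2]
        out[:n // 2] = (a + b) / np.sqrt(2)      # scaling (approximation) outputs
        out[n // 2:n] = (a - b) / np.sqrt(2)     # wavelet (detail) outputs
        x[:n] = out[:n]
        n //= 2
    return x

y = haar_transform([4, 6, 10, 12, 8, 6, 5, 5])
print(y)   # first coefficient is the scaled mean; total energy is preserved
```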
Arsenite induces cell transformation by reactive oxygen species, AKT, ERK1/2, and p70S6K1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Richard L.; Jiang, Yue; Jing, Yi
2011-10-28
Highlights: • Chronic exposure to arsenite induces cell proliferation and transformation. • Arsenite-induced transformation increases ROS production and downstream signaling. • Inhibition of ROS levels via catalase reduces arsenite-induced cell transformation. • Interruption of AKT, ERK, or p70S6K1 inhibits arsenite-induced cell transformation. -- Abstract: Arsenic is a naturally occurring element that exists in both organic and inorganic formulations. The inorganic form, arsenite, has a positive association with development of multiple cancer types. There are significant populations throughout the world with high exposure to arsenite via drinking water, so human exposure to arsenic has become a significant public health problem. Recent evidence suggests that reactive oxygen species (ROS) mediate multiple changes to cell behavior after acute arsenic exposure, including activation of proliferative signaling and angiogenesis. However, the role of ROS in mediating cell transformation by chronic arsenic exposure is unknown. We found that cells chronically exposed to sodium arsenite increased proliferation and gained anchorage-independent growth. This cell transformation phenotype required constitutive activation of AKT, ERK1/2, mTOR, and p70S6K1. We also observed that these cells constitutively produce ROS, which was required for the constitutive activation of AKT, ERK1/2, mTOR, and p70S6K1. Suppression of ROS levels by forced expression of catalase also reduced cell proliferation and anchorage-independent growth. These results indicate that cell transformation induced by chronic arsenic exposure is mediated by increased cellular levels of ROS, which mediate activation of AKT, ERK1/2, and p70S6K1.
Ardila-Rey, Jorge Alfredo; Rojas-Moreno, Mónica Victoria; Martínez-Tarifa, Juan Manuel; Robles, Guillermo
2014-02-19
Partial discharge (PD) detection is a standardized technique to qualify electrical insulation in machines and power cables. Several techniques that analyze the waveform of the pulses have been proposed to discriminate noise from PD activity. Among them, spectral power ratio representation shows great flexibility in the separation of the sources of PD. Mapping spectral power ratios in two-dimensional plots leads to clusters of points which group pulses with similar characteristics. The position in the map depends on the nature of the partial discharge, the setup and the frequency response of the sensors. If these clusters are clearly separated, the subsequent task of identifying the source of the discharge is straightforward so the distance between clusters can be a figure of merit to suggest the best option for PD recognition. In this paper, two inductive sensors with different frequency responses to pulsed signals, a high frequency current transformer and an inductive loop sensor, are analyzed to test their performance in detecting and separating the sources of partial discharges.
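The sketch below shows the spectral power ratio idea in its simplest form (the band edges and the damped-oscillation pulse model are assumptions, not the sensors' actual responses): each acquired pulse is reduced to two band-power ratios, giving one point of the two-dimensional separation map whose clusters distinguish PD sources from noise.

```python
# Sketch: spectral power ratios of a single pulse for 2-D cluster mapping.
import numpy as np

def power_ratios(pulse, fs, low_band=(0.5e6, 5e6), high_band=(5e6, 30e6)):
    spec = np.abs(np.fft.rfft(pulse)) ** 2
    f = np.fft.rfftfreq(pulse.size, d=1 / fs)
    total = spec.sum()
    pr_low = spec[(f >= low_band[0]) & (f < low_band[1])].sum() / total
    pr_high = spec[(f >= high_band[0]) & (f < high_band[1])].sum() / total
    return pr_low, pr_high

fs = 100e6
t = np.arange(0, 10e-6, 1 / fs)
pulse = np.exp(-t / 0.5e-6) * np.sin(2 * np.pi * 8e6 * t)   # damped-oscillation PD-like pulse
print(power_ratios(pulse, fs))   # one (PRL, PRH) point; clusters of such points separate sources
```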
Temporal and frequency characteristics of a narrow light beam in sea water.
Luchinin, Alexander G; Kirillin, Mikhail Yu
2016-09-20
The structure of a light field in sea water excited by a unidirectional, point-sized pulsed source is studied by the Monte Carlo technique. The pulse shape registered at distances of up to 120 m from the source, on the beam axis and in its axial region, is calculated with a time resolution of 1 ps. It is shown that, as the distance from the source increases, the pulse splits into two parts formed by components of different scattering orders. The frequency and phase responses of the beam are calculated by means of the fast Fourier transform. It is also shown that the attenuation of the harmonic components of the field is larger at higher frequencies. In the range of parameters corresponding to pulse splitting on the beam axis, the attenuation of harmonic components in particular spectral ranges exceeds the attenuation predicted by Bouguer's law. In this case, the transverse distribution of the amplitudes of these harmonics is minimal on the beam axis.
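A sketch of the post-processing step described above: the frequency and phase responses of the channel follow from the Fourier transform of the received pulse relative to the source pulse. The pulse shapes here are analytic placeholders, not Monte Carlo output, and the regularization constant is an assumption.

```python
# Sketch: channel amplitude/phase response from the FFT of source and received pulses.
import numpy as np

dt = 1e-12                                    # 1 ps resolution, as in the paper
t = np.arange(0, 2e-9, dt)
source = np.exp(-((t - 0.2e-9) / 20e-12) ** 2)            # emitted pulse (placeholder)
received = 0.3 * np.exp(-((t - 0.9e-9) / 60e-12) ** 2)    # broadened, delayed, attenuated pulse

S = np.fft.rfft(source)
R = np.fft.rfft(received)
f = np.fft.rfftfreq(t.size, d=dt)
H = R * np.conj(S) / (np.abs(S) ** 2 + 1e-12)  # regularized transfer function
amplitude_response = np.abs(H)                 # attenuation grows with modulation frequency
phase_response = np.unwrap(np.angle(H))
```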
The XMM-SERVS survey: new XMM-Newton point-source catalog for the XMM-LSS field
NASA Astrophysics Data System (ADS)
Chen, C.-T. J.; Brandt, W. N.; Luo, B.; Ranalli, P.; Yang, G.; Alexander, D. M.; Bauer, F. E.; Kelson, D. D.; Lacy, M.; Nyland, K.; Tozzi, P.; Vito, F.; Cirasuolo, M.; Gilli, R.; Jarvis, M. J.; Lehmer, B. D.; Paolillo, M.; Schneider, D. P.; Shemmer, O.; Smail, I.; Sun, M.; Tanaka, M.; Vaccari, M.; Vignali, C.; Xue, Y. Q.; Banerji, M.; Chow, K. E.; Häußler, B.; Norris, R. P.; Silverman, J. D.; Trump, J. R.
2018-04-01
We present an X-ray point-source catalog from the XMM-Large Scale Structure survey region (XMM-LSS), one of the XMM-Spitzer Extragalactic Representative Volume Survey (XMM-SERVS) fields. We target the XMM-LSS region with 1.3 Ms of new XMM-Newton AO-15 observations, transforming the archival X-ray coverage in this region into a 5.3 deg2 contiguous field with uniform X-ray coverage totaling 2.7 Ms of flare-filtered exposure, with a 46 ks median PN exposure time. We provide an X-ray catalog of 5242 sources detected in the soft (0.5-2 keV), hard (2-10 keV), and/or full (0.5-10 keV) bands with a 1% expected spurious fraction determined from simulations. A total of 2381 new X-ray sources are detected compared to previous source catalogs in the same area. Our survey has flux limits of 1.7 × 10-15, 1.3 × 10-14, and 6.5 × 10-15 erg cm-2 s-1 over 90% of its area in the soft, hard, and full bands, respectively, which is comparable to those of the XMM-COSMOS survey. We identify multiwavelength counterpart candidates for 99.9% of the X-ray sources, of which 93% are considered as reliable based on their matching likelihood ratios. The reliabilities of these high-likelihood-ratio counterparts are further confirmed to be ≈97% reliable based on deep Chandra coverage over ≈5% of the XMM-LSS region. Results of multiwavelength identifications are also included in the source catalog, along with basic optical-to-infrared photometry and spectroscopic redshifts from publicly available surveys. We compute photometric redshifts for X-ray sources in 4.5 deg2 of our field where forced-aperture multi-band photometry is available; >70% of the X-ray sources in this subfield have either spectroscopic or high-quality photometric redshifts.
A power transformer as a source of noise.
Zawieska, Wiktor Marek
2007-01-01
This article presents selected results of analyses and simulations carried out as part of research performed at the Central Institute of Labor Protection - the National Research Institute (CIOP-PIB) in connection with the development of a system for active reduction of noise emitted by high power electricity transformers. This analysis covers the transformer as a source of noise as well as a mathematical description of the phenomenon of radiation of vibroacoustic energy through a transformer enclosure modeled as a vibrating rectangular plate. Also described is an acoustic model of the transformer in the form of an array of loudspeakers.
Fipronil is a phenylpyrazole insecticide that is widely used in residential and agricultural settings to control ants, roaches, termites, and other pests. Fipronil and its transformation products have been found in a variety of environmental matrices, but the source[s] which make...
Vázquez-Rodríguez, Adiari I.; Hansel, Colleen M.; Zhang, Tong; Lamborg, Carl H.; Santelli, Cara M.; Webb, Samuel M.; Brooks, Scott C.
2015-01-01
Mercury (Hg) is a toxic heavy metal that poses significant environmental and human health risks. Soils and sediments, where Hg can exist as the Hg sulfide mineral metacinnabar (β-HgS), represent major Hg reservoirs in aquatic environments. Metacinnabar has historically been considered a sink for Hg in all but severely acidic environments, and thus disregarded as a potential source of Hg back to aqueous or gaseous pools. Here, we conducted a combination of field and laboratory incubations to identify the potential for metacinnabar as a source of dissolved Hg within near neutral pH environments and the underpinning (a)biotic mechanisms at play. We show that the abundant and widespread sulfur-oxidizing bacteria of the genus Thiobacillus extensively colonized metacinnabar chips incubated within aerobic, near neutral pH creek sediments. Laboratory incubations of axenic Thiobacillus thioparus cultures led to the release of metacinnabar-hosted Hg(II) and subsequent volatilization to Hg(0). This dissolution and volatilization was greatly enhanced in the presence of thiosulfate, which served a dual role by enhancing HgS dissolution through Hg complexation and providing an additional metabolic substrate for Thiobacillus. These findings reveal a new coupled abiotic-biotic pathway for the transformation of metacinnabar-bound Hg(II) to Hg(0), while expanding the sulfide substrates available for neutrophilic chemosynthetic bacteria to Hg-laden sulfides. They also point to mineral-hosted Hg as an underappreciated source of gaseous elemental Hg to the environment. PMID:26157421
Critical indices for reversible gamma-alpha phase transformation in metallic cerium
NASA Astrophysics Data System (ADS)
Soldatova, E. D.; Tkachenko, T. B.
1980-08-01
Critical indices for cerium have been determined within the framework of the pseudobinary solution theory along the phase equilibrium curve, the critical isotherm, and the critical isobar. The results obtained verify the validity of relationships proposed by Rushbrook (1963), Griffiths (1965), and Coopersmith (1968). It is concluded that reversible gamma-alpha transformation in metallic cerium is a critical-type transformation, and cerium has a critical point on the phase diagram similar to the critical point of the liquid-vapor system.
NASA Astrophysics Data System (ADS)
Sedghi, Mohammad Mahdi; Samani, Nozar; Sleep, Brent
2009-06-01
The Laplace domain solutions have been obtained for three-dimensional groundwater flow to a well in confined and unconfined wedge-shaped aquifers. The solutions take into account partial penetration effects, instantaneous drainage or delayed yield, vertical anisotropy and the water table boundary condition. As a basis, the Laplace domain solutions for drawdown created by a point source in uniform, anisotropic confined and unconfined wedge-shaped aquifers are first derived. Then, by the principle of superposition the point source solutions are extended to the cases of partially and fully penetrating wells. Unlike the previous solution for the confined aquifer that contains improper integrals arising from the Hankel transform [Yeh HD, Chang YC. New analytical solutions for groundwater flow in wedge-shaped aquifers with various topographic boundary conditions. Adv Water Resour 2006;26:471-80], numerical evaluation of our solution is relatively easy using well known numerical Laplace inversion methods. The effects of wedge angle, pumping well location and observation point location on drawdown and the effects of partial penetration, screen location and delay index on the wedge boundary hydraulic gradient in unconfined aquifers have also been investigated. The results are presented in the form of dimensionless drawdown-time and boundary gradient-time type curves. The curves are useful for parameter identification, calculation of stream depletion rates and the assessment of water budgets in river basins.
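One of the "well known numerical Laplace inversion methods" the abstract refers to is the Gaver-Stehfest algorithm; the sketch below applies it to a Laplace-domain function with a known time-domain counterpart purely as a self-check, since the wedge-aquifer drawdown solution itself is not reproduced here.

```python
# Sketch: Gaver-Stehfest numerical Laplace inversion, validated on F(p) = 1/(p+1).
import numpy as np
from math import factorial

def stehfest_weights(N):
    """Stehfest coefficients V_i for an even number of terms N."""
    V = np.zeros(N)
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)) / (
                factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                * factorial(i - k) * factorial(2 * k - i))
        V[i - 1] = (-1) ** (N // 2 + i) * s
    return V

def invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(p)."""
    V = stehfest_weights(N)
    ln2_t = np.log(2.0) / t
    return ln2_t * sum(V[i] * F((i + 1) * ln2_t) for i in range(N))

F = lambda p: 1.0 / (p + 1.0)           # Laplace transform of exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, invert(F, t), np.exp(-t))  # the last two columns agree to several digits
```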
AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM
NASA Technical Reports Server (NTRS)
Miko, J.
1994-01-01
Scientists at Goddard have developed an efficient and powerful program-- An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform-- which combines the performance of real and complex valued one-dimensional Fast Fourier Transforms (FFT's) to execute a two-dimensional FFT and its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed on an IBM PC/AT compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64 point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing rate to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 mSec. Documentation is included in the price of the program. Source code is written in C, 8086 Assembly, and Texas Instruments TMS320C30 Assembly Languages. This program is available on a 5.25 inch 360K MS-DOS format diskette. IBM and IBM PC are registered trademarks of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
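For reference, the computation the program performs (row-column decomposition of a 64x64 two-dimensional FFT followed by the power spectrum) can be reproduced in a few lines of NumPy; this is only a numerical cross-check, not the DSP-board assembly the package actually uses.

```python
# Sketch: row-column 2-D FFT and power spectrum of a 64x64 real-valued frame.
import numpy as np

frame = np.random.rand(64, 64)                    # 64x64 real-valued input frame

row_fft = np.fft.fft(frame, axis=1)               # 1-D FFT of every row (complex results)
full_fft = np.fft.fft(row_fft, axis=0)            # 1-D FFT of every column of those results

power = full_fft.real ** 2 + full_fft.imag ** 2   # 64x64 power spectrum coefficients
assert np.allclose(power, np.abs(np.fft.fft2(frame)) ** 2)   # matches the direct 2-D FFT
```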
A cascade method for TFT-LCD defect detection
NASA Astrophysics Data System (ADS)
Yi, Songsong; Wu, Xiaojun; Yu, Zhiyang; Mo, Zhuoya
2017-07-01
In this paper, we propose a novel cascade detection algorithm which focuses on point and line defects on TFT-LCD panels. In the first step of the algorithm, we use the gray-level difference of sub-images to segment the abnormal area. The second step is based on the phase-only transform (POT), which corresponds to the Discrete Fourier Transform (DFT) normalized by its magnitude; it can remove regularities such as texture and noise. After that, we improve the method of setting regions of interest (ROI) with edge segmentation and a polar transformation. The algorithm has outstanding performance in both computation speed and accuracy. It can handle most defect detection cases, including dark points, light points, dark lines, etc.
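A minimal version of the phase-only transform step described above (the test image and defect are synthetic, and the full cascade is not reproduced): normalizing the DFT by its magnitude suppresses the regular texture so that an isolated defect stands out in the inverse transform.

```python
# Sketch: phase-only transform (POT) saliency for a synthetic textured image with a defect.
import numpy as np

img = np.sin(np.linspace(0, 40 * np.pi, 256))[None, :] * np.ones((256, 1))  # regular texture
img = img + 0.05 * np.random.randn(256, 256)
img[120:123, 60:63] += 2.0                        # a small point defect

F = np.fft.fft2(img)
pot = np.real(np.fft.ifft2(F / (np.abs(F) + 1e-12)))   # phase-only reconstruction
saliency = np.abs(pot)                                  # the defect region should dominate
print(np.unravel_index(np.argmax(saliency), saliency.shape))  # expected to fall in or near the defect patch
```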
Representational momentum in perception and grasping: translating versus transforming objects.
Brouwer, Anne-Marie; Franz, Volker H; Thornton, Ian M
2004-07-14
Representational momentum is the tendency to misremember the stopping point of a moving object as further forward in the direction of movement. Results of several studies suggest that this effect is typical for changes in position (e.g., translation) and not for changes in object shape (transformation). Additionally, the effect seems to be stronger in motor tasks than in perceptual tasks. Here, participants judged the final distance between two spheres after this distance had been increasing or decreasing. The spheres were two separately translating objects or were connected to form a single transforming object (a dumbbell). Participants also performed a motor task in which they grasped virtual versions of the final objects. We found representational momentum for the visual judgment task for both stimulus types. As predicted, it was stronger for the spheres than for the dumbbells. In contrast, for grasping, only the dumbbells produced representational momentum (larger maximum grip aperture when the dumbbells had been growing compared to when they had been shrinking). Because type of stimulus change had these different effects on representational momentum for perception and action, we conclude that different sources of information are used in the two tasks or that they are governed by different mechanisms.
Todorović, Dejan
2008-01-01
Every image of a scene produced in accord with the rules of linear perspective has an associated projection centre. Only if observed from that position does the image provide the stimulus which is equivalent to the one provided by the original scene. According to the perspective-transformation hypothesis, observing the image from other vantage points should result in specific transformations of the structure of the conveyed scene, whereas according to the vantage-point compensation hypothesis it should have little effect. Geometrical analyses illustrating the transformation theory are presented. An experiment is reported to confront the two theories. The results provide little support for the compensation theory and are generally in accord with the transformation theory, but also show systematic deviations from it, possibly due to cue conflict and asymmetry of visual angles.
An investigation of ride quality rating scales
NASA Technical Reports Server (NTRS)
Dempsey, T. K.; Coates, G. D.; Leatherwood, J. D.
1977-01-01
An experimental investigation was conducted for the combined purposes of determining the relative merits of various category scales for the prediction of human discomfort response to vibration and for determining the mathematical relationships whereby subjective data are transformed from one scale to other scales. There were 16 category scales analyzed representing various parametric combinations of polarity, that is, unipolar and bipolar, scale type, and number of scalar points. Results indicated that unipolar continuous-type scales containing either seven or nine scalar points provide the greatest reliability and discriminability. Transformations of subjective data between category scales were found to be feasible with unipolar scales of a larger number of scalar points providing the greatest accuracy of transformation. The results contain coefficients for transformation of subjective data between the category scales investigated. A result of particular interest was that the comfort half of a bipolar scale was seldom used by subjects to describe their subjective reaction to vibration.
Leverage points for sustainability transformation.
Abson, David J; Fischer, Joern; Leventon, Julia; Newig, Jens; Schomerus, Thomas; Vilsmaier, Ulli; von Wehrden, Henrik; Abernethy, Paivi; Ives, Christopher D; Jager, Nicolas W; Lang, Daniel J
2017-02-01
Despite substantial focus on sustainability issues in both science and politics, humanity remains on largely unsustainable development trajectories. Partly, this is due to the failure of sustainability science to engage with the root causes of unsustainability. Drawing on ideas by Donella Meadows, we argue that many sustainability interventions target highly tangible, but essentially weak, leverage points (i.e. using interventions that are easy, but have limited potential for transformational change). Thus, there is an urgent need to focus on less obvious but potentially far more powerful areas of intervention. We propose a research agenda inspired by systems thinking that focuses on transformational 'sustainability interventions', centred on three realms of leverage: reconnecting people to nature, restructuring institutions and rethinking how knowledge is created and used in pursuit of sustainability. The notion of leverage points has the potential to act as a boundary object for genuinely transformational sustainability science.
Visco-elastic controlled-source full waveform inversion without surface waves
NASA Astrophysics Data System (ADS)
Paschke, Marco; Krause, Martin; Bleibinhaus, Florian
2016-04-01
We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
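The on-the-fly Fourier transform mentioned above can be sketched as a running DFT sum accumulated during time stepping, so that no full time series needs to be stored at the grid points; in the snippet below the "time step" is a dummy update rather than the visco-elastic finite-difference solver, and grid sizes and frequencies are assumptions.

```python
# Sketch: accumulate monochromatic wavefields at selected frequencies during time stepping.
import numpy as np

nx, nz, nt, dt = 100, 80, 500, 1e-3
freqs = np.array([5.0, 10.0, 20.0])                   # inversion frequencies in Hz (assumed)
field_freq = np.zeros((freqs.size, nz, nx), dtype=complex)

field = np.zeros((nz, nx))
for it in range(nt):
    t = it * dt
    # ... one finite-difference time step would update `field` here ...
    field = np.sin(2 * np.pi * 10.0 * t) * np.ones((nz, nx))   # placeholder wavefield
    weights = np.exp(-2j * np.pi * freqs * t) * dt             # DFT weights for this time level
    field_freq += weights[:, None, None] * field[None, :, :]   # running Fourier sum

# field_freq[k] now holds the discrete Fourier component of the wavefield at freqs[k].
```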
A note on parallel and pipeline computation of fast unitary transforms
NASA Technical Reports Server (NTRS)
Fino, B. J.; Algazi, V. R.
1974-01-01
The parallel and pipeline organization of fast unitary transform algorithms such as the Fast Fourier Transform is discussed. The efficiency of a combined parallel-pipeline processor is pointed out for a transform such as the Haar transform, in which 2^n - 1 hardware butterflies generate a transform of order 2^n every computation cycle.
A programmable metasurface with dynamic polarization, scattering and focusing control
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell of the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, normally based on binary coding, is coupled with the scattering-pattern analysis to optimize the coding matrix. Besides, an inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process for a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.
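As a back-of-the-envelope version of the pattern-analysis step used inside such an optimization (element spacing, frequency and matrix size are assumed, and element patterns are ignored): for a binary 0/π coding matrix under plane-wave illumination, the far-field scattering is proportional to the two-dimensional Fourier transform of the element reflection phases, which is why an FFT/IFFT makes each genetic-algorithm evaluation cheap.

```python
# Sketch: array-factor scattering pattern of a binary coding matrix via a zero-padded 2-D FFT.
import numpy as np

n, d, lam = 16, 0.006, 0.03                  # 16x16 coding lattice, 6 mm spacing, 10 GHz (assumed)
coding = np.random.randint(0, 2, (n, n))     # a candidate 0/1 coding matrix (one GA individual)
reflection = np.exp(1j * np.pi * coding)     # "1" elements add a pi reflection phase

pad = 8 * n                                  # zero padding for a smooth angular pattern
af = np.fft.fftshift(np.fft.fft2(reflection, s=(pad, pad)))    # array factor over (u, v)
u = np.fft.fftshift(np.fft.fftfreq(pad, d=d / lam))            # direction cosines u = sin(theta)cos(phi)
U, V = np.meshgrid(u, u)
visible = (np.abs(U) <= 1) & (np.abs(V) <= 1)                  # only this region radiates
pattern_db = 20 * np.log10(np.abs(af) / np.abs(af).max() + 1e-12)
# A GA would score pattern_db over the visible region (e.g. peak level for diffusion)
# and iterate on `coding`; the FFT makes each evaluation cheap, as the abstract notes.
```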
Multiprocessor computer overset grid method and apparatus
Barnette, Daniel W.; Ober, Curtis C.
2003-01-01
A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.
10. Photographic copy of drawing dated January 22, 1908 (Source: ...
10. Photographic copy of drawing dated January 22, 1908 (Source: Salt River Project) General plans, index to detail plans and sections, transformer house - Theodore Roosevelt Dam, Transformer House, Salt River, Tortilla Flat, Maricopa County, AZ
Characterization of the Atacama B-mode Search
NASA Astrophysics Data System (ADS)
Simon, S. M.; Raghunathan, S.; Appel, J. W.; Becker, D. T.; Campusano, L. E.; Cho, H. M.; Essinger-Hileman, T.; Ho, S. P.; Irwin, K. D.; Jarosik, N.; Kusaka, A.; Niemack, M. D.; Nixon, G. W.; Nolta, M. R.; Page, L. A.; Palma, G. A.; Parker, L. P.; Sievers, J. L.; Staggs, S. T.; Visnjic, K.
2014-07-01
The Atacama B-mode Search (ABS), which began observations in February of 2012, is a crossed-Dragone telescope located at an elevation of 5190 m in the Atacama Desert in Chile. ABS is searching for the B-mode polarization spectrum of the cosmic microwave background (CMB) at large angular scales, from multipole moments of ℓ ~ 50 to ℓ ~ 500, a range that includes the primordial B-mode peak from inflationary gravity waves at ℓ ~ 100. The ABS focal plane consists of 240 pixels sensitive to 145 GHz, each containing two transition-edge sensor bolometers coupled to orthogonal polarizations with a planar ortho-mode transducer. An ambient-temperature continuously rotating half-wave plate and 4 K optics make the ABS instrument unique. We discuss the characterization of the detector spectral responses with a Fourier transform spectrometer and demonstrate that the pointing model is adequate. We also present measurements of the beam from point sources and compare them with simulations.
Beamforming Concepts: Source Localization Using the Bispectrum, Gabor Transform, Wigner-Ville Distribution, and Nonstationary Signal Representations
Allen, J. C.
1991-12-01
...bispectrum yields a bispectral direction finder. Estimates of time-frequency distributions produce Wigner-Ville and Gabor direction finders. Some types...
Coarse Point Cloud Registration by Egi Matching of Voxel Clusters
NASA Astrophysics Data System (ADS)
Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo
2016-06-01
Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in the source and target data are obtained according to the similarity between their EGI descriptors. The random sampling consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point-to-point distance between the two input point clouds. The two presented tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
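As an illustration of the PCA-based dimensionality features mentioned above (the exact feature definitions used by the authors may differ), the following Python sketch computes the common linearity/planarity/sphericity measures for the points falling in one voxel:

```python
import numpy as np

def dimensionality_features(points):
    """PCA-based dimensionality of a voxel's points.
    Returns (linearity, planarity, sphericity) built from the sorted
    eigenvalues l1 >= l2 >= l3 of the covariance matrix."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity

# Points lying roughly on a plane: planarity should dominate.
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                  0.01 * rng.standard_normal(500)]
print([round(v, 3) for v in dimensionality_features(plane_pts)])
```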
Exploring Function Transformations Using the Common Core
ERIC Educational Resources Information Center
Hall, Becky; Giacin, Rich
2013-01-01
When examining transformations of the plane in geometry, teachers typically have students experiment with transformations of polygons. Students are usually quick to notice patterns with ordered pairs. The Common Core State Standard, Geometry, Congruence 2 (G-CO.2), requires students to describe transformations as functions that take points in the…
A Robust False Matching Points Detection Method for Remote Sensing Image Registration
NASA Astrophysics Data System (ADS)
Shan, X. J.; Tang, P.
2015-04-01
Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points. However, the RANSAC method cannot detect all false matching points in some remote sensing images. Therefore, a robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph (KGD) is proposed in this paper to obtain robust and highly accurate results. The KGD method starts with the construction of the K-NN graph in one image: a K-NN graph is first generated for each matching point and its K nearest matching points. A local transformation model for each matching point is then obtained by using its K nearest matching points, and the error of each matching point is computed by using its transformation model. Last, the L matching points with the largest errors are identified as false matching points and removed. This process is iterated until all errors are smaller than the given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiment. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
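A minimal Python sketch of the iterative idea described above (fit a local affine model to each match from its K nearest matches, score the residual, and repeatedly drop the L worst matches) is given below. The parameter names (k, drop, tol) and the plain least-squares local model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_affine_error(src, dst, k=8):
    """Residual of each match under an affine model fitted to its K nearest
    neighbours (neighbourhoods taken in the source image)."""
    n = len(src)
    errors = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(src - src[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]
        A = np.c_[src[nbrs], np.ones(k)]                    # k x 3 design matrix
        M, *_ = np.linalg.lstsq(A, dst[nbrs], rcond=None)   # 3 x 2 local affine
        pred = np.r_[src[i], 1.0] @ M
        errors[i] = np.linalg.norm(pred - dst[i])
    return errors

def kgd_filter(src, dst, k=8, drop=2, tol=1.0):
    """Iteratively remove the 'drop' matches with the largest local-model error
    until every residual is below 'tol'. Returns the surviving match indices."""
    keep = np.arange(len(src))
    while True:
        err = local_affine_error(src[keep], dst[keep], k)
        if err.max() <= tol or len(keep) <= k + 1:
            return keep
        keep = keep[np.argsort(err)[:-drop]]
```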
Selective thermal transformation of old computer printed circuit boards to Cu-Sn based alloy.
Shokri, Ali; Pahlevani, Farshid; Cole, Ivan; Sahajwalla, Veena
2017-09-01
This study investigates, verifies and determines the optimal parameters for the selective thermal transformation of problematic electronic waste (e-waste) to produce value-added copper-tin (Cu-Sn) based alloys; thereby demonstrating a novel new pathway for the cost-effective recovery of resources from one of the world's fastest growing and most challenging waste streams. Using outdated computer printed circuit boards (PCBs), a ubiquitous component of e-waste, we investigated transformations across a range of temperatures and time frames. Results indicate a two-step heat treatment process, using a low temperature step followed by a high temperature step, can be used to produce and separate off, first, a lead (Pb) based alloy and, subsequently, a Cu-Sn based alloy. We also found a single-step heat treatment process at a moderate temperature of 900 °C can be used to directly transform old PCBs to produce a Cu-Sn based alloy, while capturing the Pb and antimony (Sb) as alloying elements to prevent the emission of these low melting point elements. These results demonstrate old computer PCBs, large volumes of which are already within global waste stockpiles, can be considered a potential source of value-added metal alloys, opening up a new opportunity for utilizing e-waste to produce metal alloys in local micro-factories. Copyright © 2017 Elsevier Ltd. All rights reserved.
6. Photographic copy of photograph (Source: Salt River Project Archives, ...
6. Photographic copy of photograph (Source: Salt River Project Archives, Tempe, Lubken collection, #R-295) Transformer house under construction. View looking north. October 5, 1908. - Theodore Roosevelt Dam, Transformer House, Salt River, Tortilla Flat, Maricopa County, AZ
5. Photographic copy of photograph (Source: Salt River Project Archives, ...
5. Photographic copy of photograph (Source: Salt River Project Archives, Tempe, Lubken collection, #R-273) Transformer house under construction. View looking north. July 1, 1908. - Theodore Roosevelt Dam, Transformer House, Salt River, Tortilla Flat, Maricopa County, AZ
8. Photographic copy of photograph (Source: Salt River Project Archives, ...
8. Photographic copy of photograph (Source: Salt River Project Archives, Tempe, Box 8040, File 29) View of transformer house looking north. No date. CA. 1920. - Theodore Roosevelt Dam, Transformer House, Salt River, Tortilla Flat, Maricopa County, AZ
A hardware implementation of the discrete Pascal transform for image processing
NASA Astrophysics Data System (ADS)
Goodman, Thomas J.; Aburdene, Maurice F.
2006-02-01
The discrete Pascal transform is a polynomial transform with applications in pattern recognition, digital filtering, and digital image processing. It already has been shown that the Pascal transform matrix can be decomposed into a product of binary matrices. Such a factorization leads to a fast and efficient hardware implementation without the use of multipliers, which consume large amounts of hardware. We recently developed a field-programmable gate array (FPGA) implementation to compute the Pascal transform. Our goal was to demonstrate the computational efficiency of the transform while keeping hardware requirements at a minimum. Images are uploaded into memory from a remote computer prior to processing, and the transform coefficients can be offloaded from the FPGA board for analysis. Design techniques like as-soon-as-possible scheduling and adder sharing allowed us to develop a fast and efficient system. An eight-point, one-dimensional transform completes in 13 clock cycles and requires only four adders. An 8x8 two-dimensional transform completes in 240 cycles and requires only a top-level controller in addition to the one-dimensional transform hardware. Finally, through minor modifications to the controller, the transform operations can be pipelined to achieve 100% utilization of the four adders, allowing one eight-point transform to complete every seven clock cycles.
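For readers unfamiliar with the transform, here is a small Python sketch of one common definition of the discrete Pascal transform (the signed lower-triangular Pascal matrix, which is involutory, so the same matrix inverts the transform). The FPGA design above instead exploits a factorization into binary matrices, which this sketch does not reproduce.

```python
import numpy as np
from math import comb

def pascal_transform_matrix(n):
    """Signed lower-triangular Pascal matrix, one common definition of the
    discrete Pascal transform: P[i, j] = (-1)**j * C(i, j) for j <= i."""
    P = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1):
            P[i, j] = (-1) ** j * comb(i, j)
    return P

P = pascal_transform_matrix(8)
x = np.arange(8)
y = P @ x                                            # forward 8-point transform
print(np.array_equal(P @ P, np.eye(8, dtype=int)))   # involutory: P is its own inverse
print(P @ y)                                         # recovers x
```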
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Secondly, the LiveWire shortest path is calculated using a direction search over the control-point set, utilizing the spatial relationship between the two control points that users provide in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool for the points when optimizing their shortest-path values, thus reducing the complexity of the algorithm from O(n^2) to O(n). Finally, a region-iterative backward-projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform and of the optimal path search based on the control-point-set direction search: the former offers fast image decomposition and reconstruction and is more consistent with the texture features of the image, while the latter reduces the time complexity of the original algorithm. As a result, the algorithm improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. Both aspects play a large role in improving the execution efficiency and robustness of the algorithm.
METHOD AND MEANS FOR RECOGNIZING COMPLEX PATTERNS
Hough, P.V.C.
1962-12-18
This patent relates to a method and means for recognizing a complex pattern in a picture. The picture is divided into framelets, each framelet being sized so that any segment of the complex pattern therewithin is essentially a straight line. Each framelet is scanned to produce an electrical pulse for each point scanned on the segment therewithin. Each of the electrical pulses of each segment is then transformed into a separate straight line to form a plane transform in a pictorial display. Each line in the plane transform of a segment is positioned laterally so that a point on the line midway between the top and the bottom of the pictorial display occurs at a distance from the left edge of the pictorial display equal to the distance of the generating point in the segment from the left edge of the framelet. Each line in the plane transform of a segment is inclined in the pictorial display at an angle to the vertical whose tangent is proportional to the vertical displacement of the generating point in the segment from the center of the framelet. The coordinate position of the point of intersection of the lines in the pictorial display for each segment is determined and recorded. The sum total of said recorded coordinate positions is representative of the complex pattern. (AEC)
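The construction described in this patent is what is now called the Hough transform. The short Python sketch below uses the later rho-theta parametrization rather than the patent's slope/offset construction, but it illustrates the same voting principle: collinear points concentrate their votes in one accumulator cell.

```python
import numpy as np

def hough_lines(points, max_rho, n_theta=180, n_rho=200):
    """Accumulate straight-line votes for (x, y) points using the
    rho = x*cos(theta) + y*sin(theta) parametrization."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_bins = np.linspace(-max_rho, max_rho, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.digitize(rho, rho_bins) - 1
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, rho_bins

# Points on the line y = 2x + 1 concentrate their votes near one (rho, theta) cell.
pts = [(x, 2 * x + 1) for x in range(20)]
acc, thetas, rho_bins = hough_lines(pts, max_rho=50.0)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(f"peak: {acc[i, j]} votes at theta={np.degrees(thetas[j]):.1f} deg, "
      f"rho={rho_bins[i]:.1f}")
```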
NASA Technical Reports Server (NTRS)
Wolfe, R. H., Jr.; Juday, R. D.
1982-01-01
Interimage matching is the process of determining the geometric transformation required to conform spatially one image to another. In principle, the parameters of that transformation are varied until some measure of the difference between the two images is minimized or some measure of sameness (e.g., cross-correlation) is maximized. The number of such parameters to vary is fairly large (six for merely an affine transformation), and it is customary to attempt an a priori transformation reducing the complexity of the residual transformation, or to subdivide the image into match zones (control points or patches) small enough that a simple transformation (e.g., pure translation) is applicable, yet large enough to facilitate matching. In the latter case, a complex mapping function is fit to the results (e.g., translation offsets) in all the patches. The methods reviewed have all chosen one or both of the above options, ranging from a priori along-line correction for line-dependent effects (the high-frequency correction) to a full sensor-to-geobase transformation with subsequent subdivision into a grid of match points.
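The per-patch step described above (vary a pure translation until a similarity measure peaks) can be sketched in a few lines of Python. The brute-force normalized cross-correlation below is for illustration only; practical matchers typically use FFT-based correlation or coarse-to-fine search.

```python
import numpy as np

def match_patch(reference, patch):
    """Find the (row, col) offset of 'patch' inside 'reference' by maximizing
    the zero-mean normalized cross-correlation over all translations."""
    ph, pw = patch.shape
    p = patch - patch.mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(reference.shape[0] - ph + 1):
        for c in range(reference.shape[1] - pw + 1):
            w = reference[r:r + ph, c:c + pw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (p * p).sum())
            score = (w * p).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(2)
image = rng.random((64, 64))
patch = image[20:36, 30:46]          # a 16x16 control-point chip
print(match_patch(image, patch))     # expected offset (20, 30), score ~1.0
```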
Nanoscale heat transfer and phase transformation surrounding intensely heated nanoparticles
NASA Astrophysics Data System (ADS)
Sasikumar, Kiran
Over the last decade there has been significant ongoing research to use nanoparticles for hyperthermia-based destruction of cancer cells. In this regard, the investigation of highly non-equilibrium thermal systems created by ultrafast laser excitation is a particularly challenging and important aspect of nanoscale heat transfer. It has been observed experimentally that noble metal nanoparticles, illuminated by radiation at the plasmon resonance wavelength, can act as localized heat sources at nanometer-length scales. Achieving biological response by delivering heat via nanoscale heat sources has also been demonstrated. However, an understanding of the thermal transport at these scales and associated phase transformations is lacking. A striking observation made in several laser-heating experiments is that embedded metal nanoparticles heated to extreme temperatures may even melt without an associated boiling of the surrounding fluid. This unusual phase stability is not well understood and designing experiments to understand the physics of this phenomenon is a challenging task. In this thesis, we will resort to molecular dynamics (MD) simulations, which offer a powerful tool to investigate this phenomenon, without assumptions underlying continuum-level model formulations. We present the results from a series of steady state and transient non-equilibrium MD simulations performed on an intensely heated nanoparticle immersed in a model liquid. For small nanoparticles (1-10 nm in diameter) we observe a stable liquid phase near the nanoparticle surface, which can be at a temperature well above the boiling point. Furthermore, we report the existence of a critical nanoparticle size (4 nm in diameter) below which we do not observe formation of vapor even when local fluid temperatures exceed the critical temperature. Instead, we report the existence of a stable fluid region with a density much larger than that of the vapor phase. We explain this stability in terms of the Laplace pressure associated with the formation of a vapor nanocavity and the associated effect on the Gibbs free energy. Separately, we also demonstrate the role of extreme temperature gradients (10^8-10^10 K/m) in elevating the boiling point of liquids. We show that, assuming local thermal equilibrium, the observed elevation of the boiling point is associated with the interplay between the "bulk" driving force for the phase change and surface tension of the liquid-vapor interface that suppresses the transformation. In transient simulations that mimic laser-heating experiments we observe the formation and collapse of vapor bubbles around the nanoparticles beyond a threshold. Detailed analysis of the cavitation dynamics indicates adiabatic formation followed by an isothermal final stage of growth and isothermal collapse.
NASA Astrophysics Data System (ADS)
Sawicki, Jean-Paul; Saint-Eve, Frédéric; Petit, Pierre; Aillerie, Michel
2017-02-01
This paper presents results of experiments aimed at verifying a formula for computing the duty cycle under pulse-width-modulation control of a DC-DC converter designed and built in the laboratory. This converter, called the Magnetically Coupled Boost (MCB), is sized to step up the voltage of a single photovoltaic module so as to supply grid inverters directly. The duty cycle formula is checked first by identifying an internal parameter, the auto-transformer ratio, and then by checking the stability of the operating point on the photovoltaic-module side. Consideration of the nature of the generator source and of the load connected to the converter suggests additional experiments to decide whether the auto-transformer ratio should be used with a fixed value or, on the contrary, with an adaptive value. Effects of load variations on converter behavior and the impact of possible shading on the photovoltaic module are also discussed, with the aim of designing robust control laws that, in the case of parallel association, compensate unwanted effects due to output-voltage coupling.
Statistical Mechanical Proof of the Second Law of Thermodynamics based on Volume Entropy
NASA Astrophysics Data System (ADS)
Campisi, Michele
2007-10-01
As pointed out in [M. Campisi. Stud. Hist. Phil. M. P. 36 (2005) 275-290], the volume entropy (that is, the logarithm of the volume of phase space enclosed by the constant-energy hypersurface) provides a good mechanical analogue of thermodynamic entropy because it satisfies the heat theorem and it is an adiabatic invariant. This property explains the "equal" sign in Clausius' principle (Sf >= Si) in a purely mechanical way and suggests that the volume entropy might explain the "larger than" sign (i.e. the Law of Entropy Increase) if non-adiabatic transformations were considered. Based on the principles of quantum mechanics, we prove here that, provided the initial equilibrium satisfies the natural condition of decreasing ordering of probabilities, the expectation value of the volume entropy cannot decrease for arbitrary transformations performed by some external sources of work on an insulated system. This can be regarded as a rigorous quantum mechanical proof of the Second Law.
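For reference, the volume entropy referred to above can be written explicitly as follows; this is the standard definition (phase-space volume enclosed by the constant-energy hypersurface), stated with conventional symbols rather than the paper's exact notation.

```latex
% Volume entropy: \Omega(E;\lambda) is the phase-space volume enclosed by the
% constant-energy hypersurface of the Hamiltonian H(q,p;\lambda); k_B is
% Boltzmann's constant and \Theta is the Heaviside step function.
\begin{align}
  \Omega(E;\lambda) &= \int \Theta\!\left(E - H(q,p;\lambda)\right)\,\mathrm{d}q\,\mathrm{d}p, \\
  S(E;\lambda)      &= k_B \ln \Omega(E;\lambda), \qquad
  \frac{1}{T} = \frac{\partial S}{\partial E}.
\end{align}
```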
Lotka-Volterra representation of general nonlinear systems.
Hernández-Bermejo, B; Fairén, V
1997-02-01
In this article we elaborate on the structure of the generalized Lotka-Volterra (GLV) form for nonlinear differential equations. We discuss here the algebraic properties of the GLV family, such as the invariance under quasimonomial transformations and the underlying structure of classes of equivalence. Each class possesses a unique representative under the classical quadratic Lotka-Volterra form. We show how other standard modeling forms of biological interest, such as S-systems or mass-action systems, are naturally embedded into the GLV form, which thus provides a formal framework for their comparison and for the establishment of transformation rules. We also focus on the issue of recasting of general nonlinear systems into the GLV format. We present a procedure for doing so and point at possible sources of ambiguity that could make the resulting Lotka-Volterra system dependent on the path followed. We then provide some general theorems that define the operational and algorithmic framework in which this is not the case.
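For readers unfamiliar with the format, the generalized Lotka-Volterra (quasimonomial) form discussed above is conventionally written as below; the matrix names A, B, C follow the standard quasimonomial literature and may differ from this article's notation.

```latex
% GLV format: n variables, m quasimonomials, matrices A (n x m), B (m x n),
% and vector \lambda of length n.
\begin{equation}
  \dot{x}_i \;=\; x_i\!\left(\lambda_i + \sum_{j=1}^{m} A_{ij}
                  \prod_{k=1}^{n} x_k^{\,B_{jk}}\right),
  \qquad i = 1,\dots,n .
\end{equation}
% Quasimonomial transformations, x_i = \prod_k y_k^{C_{ik}} with C invertible,
% map this form into itself and define the equivalence classes mentioned above.
```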
Monitoring trace gases in downtown Toronto using open-path Fourier transform infrared spectroscopy
NASA Astrophysics Data System (ADS)
Byrne, B.; Strong, K.; Colebatch, O.; Fogal, P.; Mittermeier, R. L.; Wunch, D.; Jones, D. B. A.
2017-12-01
Emissions of greenhouse gases (GHGs) in urban environments can be highly heterogeneous. For example, vehicles produce point source emissions which can result in heterogeneous GHG concentrations on scales <10 m. The highly localized scale of these emissions can make it difficult to measure mean GHG concentrations on scales of 100-1000 m. Open-Path Fourier Transform Infrared Spectroscopy (OP-FTIR) measurements offer spatial averaging and continuous measurements of several trace gases simultaneously in the same airmass. We have set up an open-path system in downtown Toronto to monitor trace gases in the urban boundary layer. Concentrations of CO2, CO, CH4, and N2O are derived from atmospheric absorption spectra recorded over a two-way atmospheric open path of 320 m using non-linear least squares fitting. Using a simple box model and co-located boundary layer height measurements, we estimate surface fluxes of these gases in downtown Toronto from our OP-FTIR observations.
NASA Technical Reports Server (NTRS)
Traub, W. A.
1984-01-01
The first physical demonstration of the principle of image reconstruction using a set of images from a diffraction-blurred elongated aperture is reported. This is an optical validation of previous theoretical and numerical simulations of the COSMIC telescope array (coherent optical system of modular imaging collectors). The present experiment utilizes 17 diffraction blurred exposures of a laboratory light source, as imaged by a lens covered by a narrow-slit aperture; the aperture is rotated 10 degrees between each exposure. The images are recorded in digitized form by a CCD camera, Fourier transformed, numerically filtered, and added; the sum is then filtered and inverse Fourier transformed to form the final image. The image reconstruction process is found to be stable with respect to uncertainties in values of all physical parameters such as effective wavelength, rotation angle, pointing jitter, and aperture shape. Future experiments will explore the effects of low counting rates, autoguiding on the image, various aperture configurations, and separated optics.
Nonlinear Bogolyubov-Valatin transformations: Two modes
NASA Astrophysics Data System (ADS)
Scharnhorst, K.; van Holten, J.-W.
2011-11-01
Extending our earlier study of nonlinear Bogolyubov-Valatin transformations (canonical transformations for fermions) for one fermionic mode, in the present paper, we perform a thorough study of general (nonlinear) canonical transformations for two fermionic modes. We find that the Bogolyubov-Valatin group for n=2 fermionic modes, which can be implemented by means of unitary SU(2^n = 4) transformations, is isomorphic to SO(6;R)/Z2. The investigation touches on a number of subjects. As a novelty from a mathematical point of view, we study the structure of nonlinear basis transformations in a Clifford algebra [specifically, in the Clifford algebra C(0,4)] entailing (supersymmetric) transformations among multivectors of different grades. A prominent algebraic role in this context is being played by biparavectors (linear combinations of products of Dirac matrices, quadriquaternions, sedenions) and spin bivectors (antisymmetric complex matrices). The studied biparavectors are equivalent to Eddington's E-numbers and can be understood in terms of the tensor product of two commuting copies of the division algebra of quaternions H. From a physical point of view, we present a method to diagonalize arbitrary two-fermion Hamiltonians. Relying on Jordan-Wigner transformations for two-spin-1/2 and single-spin-3/2 systems, we also study nonlinear spin transformations and the related problem of diagonalizing arbitrary two-spin-1/2 and single-spin-3/2 Hamiltonians. Finally, from a calculational point of view, we pay due attention to explicit parametrizations of SU(4) and SO(6;R) matrices (of respective sizes 4×4 and 6×6) and their mutual relation.
Efficient matrix approach to optical wave propagation and Linear Canonical Transforms.
Shakir, Sami A; Fried, David L; Pease, Edwin A; Brennan, Terry J; Dolash, Thomas M
2015-10-05
The Fresnel diffraction integral form of optical wave propagation and the more general Linear Canonical Transforms (LCT) are cast into a matrix transformation form. Taking advantage of recent efficient matrix multiply algorithms, this approach promises an efficient computational and analytical tool that is competitive with FFT-based methods but offers better behavior in terms of aliasing and transparent boundary conditions, and allows the number of sampling points and the computational window sizes of the input and output planes to be chosen independently. This flexibility makes the method significantly faster than FFT-based propagators when only a single point, as in Strehl metrics, or a limited number of points, as in power-in-the-bucket metrics, are needed in the output observation plane.
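A minimal Python sketch of the matrix viewpoint for the 1-D Fresnel case follows: the diffraction integral is discretized into a dense kernel matrix, so evaluating the field at a handful of output points costs only a few row-vector products instead of a full-grid FFT. The grid sizes and the single-slit example are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fresnel_matrix(x_in, x_out, wavelength, z):
    """Dense matrix form of the 1-D Fresnel diffraction integral:
    U_out = K @ U_in. Input and output grids may differ in size and extent."""
    dx = x_in[1] - x_in[0]
    k = np.pi / (wavelength * z)
    quad = np.exp(1j * k * (x_out[:, None] - x_in[None, :]) ** 2)
    return quad * dx / np.sqrt(1j * wavelength * z)

# Propagate a 1-mm slit 0.5 m at 633 nm, but evaluate only 3 near-axis points:
# with the matrix form this costs 3 dot products, no full-grid FFT.
wavelength, z = 633e-9, 0.5
x_in = np.linspace(-2e-3, 2e-3, 2048)
u_in = (np.abs(x_in) < 0.5e-3).astype(complex)
x_out = np.array([-1e-4, 0.0, 1e-4])
K = fresnel_matrix(x_in, x_out, wavelength, z)
print(np.abs(K @ u_in) ** 2)       # intensities at the three output points
```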
Algorithms used in the Airborne Lidar Processing System (ALPS)
Nagle, David B.; Wright, C. Wayne
2016-05-23
The Airborne Lidar Processing System (ALPS) analyzes Experimental Advanced Airborne Research Lidar (EAARL) data (digitized laser-return waveforms, position, and attitude data) to derive point clouds of target surfaces. A full-waveform airborne lidar system, the EAARL seamlessly and simultaneously collects mixed environment data, including submerged, sub-aerial bare earth, and vegetation-covered topographies. ALPS uses three waveform target-detection algorithms to determine target positions within a given waveform: centroid analysis, leading edge detection, and bottom detection using water-column backscatter modeling. The centroid analysis algorithm detects opaque hard surfaces. The leading edge algorithm detects topography beneath vegetation and shallow, submerged topography. The bottom detection algorithm uses water-column backscatter modeling for deeper submerged topography in turbid water. The report describes slant range calculations and explains how ALPS uses laser range and orientation measurements to project measurement points into the Universal Transverse Mercator coordinate system. Parameters used for coordinate transformations in ALPS are described, as are Interactive Data Language-based methods for gridding EAARL point cloud data to derive digital elevation models. Noise reduction in point clouds through use of a random consensus filter is explained, and detailed pseudocode, mathematical equations, and Yorick source code accompany the report.
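As an illustration of the centroid target-detection idea (not ALPS source code), the sketch below converts a digitized return waveform into a one-way range using the amplitude-weighted centroid of above-threshold samples; the sample interval and threshold rule are assumed for the example.

```python
import numpy as np

def centroid_range(waveform, sample_ns=1.0, threshold=None):
    """Amplitude-weighted centroid of a digitized laser return, converted to
    one-way range. 'sample_ns' is the assumed digitizer sample interval in ns."""
    w = np.asarray(waveform, dtype=float)
    w = w - np.median(w)                      # crude noise-floor removal
    if threshold is None:
        threshold = 3.0 * w.std()
    w = np.where(w > threshold, w, 0.0)
    if w.sum() == 0:
        return None                           # no detectable return
    t_ns = (w * np.arange(w.size) * sample_ns).sum() / w.sum()
    c = 0.299792458                           # speed of light in m/ns
    return 0.5 * c * t_ns                     # one-way range in metres

pulse = np.exp(-0.5 * ((np.arange(200) - 120) / 4.0) ** 2) * 80 + \
        np.random.default_rng(3).normal(0, 1, 200)
print(centroid_range(pulse))                  # ~0.5 * c * 120 ns, about 18 m
```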
Structure of Lie point and variational symmetry algebras for a class of odes
NASA Astrophysics Data System (ADS)
Ndogmo, J. C.
2018-04-01
It is known, for scalar ordinary differential equations and for systems of ordinary differential equations of order not higher than the third, that their Lie point symmetry algebra is of maximal dimension if and only if they can be reduced by a point transformation to the trivial equation y^(n) = 0. For arbitrary systems of ordinary differential equations of order n ≥ 3 reducible by point transformations to the trivial equation, we determine the complete structure of their Lie point symmetry algebras as well as that of their variational and their divergence symmetry algebras. As a corollary, we obtain the maximal dimension of the Lie point symmetry algebra for any system of linear or nonlinear ordinary differential equations.
The Stellar Imager (SI) Project: Resolving Stellar Surfaces, Interiors, and Magnetic Activity
NASA Technical Reports Server (NTRS)
Carpenter, Kenneth G.; Schrijver, K.; Karovska, M.
2007-01-01
The Stellar Imager (SI) is a UV/optical, space-based interferometer designed to enable 0.1 milli-arcsec (mas) spectral imaging of stellar surfaces and, via asteroseismology, of stellar interiors, and of the Universe in general. The ultra-sharp images of SI will revolutionize our view of many dynamic astrophysical processes by transforming point sources into extended sources, and snapshots into evolving views. The science of SI focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. Its prime goal is to enable long-term forecasting of solar activity and the space weather that it drives. SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes in the Universe. In this paper we discuss the science goals, technology needs, and baseline design of the SI mission.
NASA Technical Reports Server (NTRS)
Carpenter, Kenneth G.; Schrijver, Carolus J.; Karovska, Margarita
2006-01-01
The ultra-sharp images of the Stellar Imager (SI) will revolutionize our view of many dynamic astrophysical processes: The 0.1 milliarcsec resolution of this deep-space telescope will transform point sources into extended sources, and simple snapshots into spellbinding evolving views. SI s science focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. SI s prime goal is to enable long-term forecasting of solar activity and the space weather that it drives in support of the Living With a Star program in the Exploration Era by imaging a sample of magnetically active stars with enough resolution to map their evolving dynamo patterns and their internal flows. By exploring the Universe at ultra-high resolution, SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magnetohydrodynamically controlled structures and processes in the Universe.
Recent wetland land loss due to hurricanes: improved estimates based upon multiple source images
Kranenburg, Christine J.; Palaseanu-Lovejoy, Monica; Barras, John A.; Brock, John C.; Wang, Ping; Rosati, Julie D.; Roberts, Tiffany M.
2011-01-01
The objective of this study was to provide a moderate resolution 30-m fractional water map of the Chenier Plain for 2003, 2006 and 2009 by using information contained in high-resolution satellite imagery of a subset of the study area. Indices and transforms pertaining to vegetation and water were created using the high-resolution imagery, and a threshold was applied to obtain a categorical land/water map. The high-resolution data was used to train a decision-tree classifier to estimate percent water in a lower resolution (Landsat) image. Two new water indices based on the tasseled cap transformation were proposed for IKONOS imagery in wetland environments and more than 700 input parameter combinations were considered for each Landsat image classified. Final selection and thresholding of the resulting percent water maps involved over 5,000 unambiguous classified random points using corresponding 1-m resolution aerial photographs, and a statistical optimization procedure to determine the threshold at which the maximum Kappa coefficient occurs. Each selected dataset has a Kappa coefficient, percent correctly classified (PCC) water, land and total greater than 90%. An accuracy assessment using 1,000 independent random points was performed. Using the validation points, the PCC values decreased to around 90%. The time series change analysis indicated that due to Hurricane Rita, the study area lost 6.5% of marsh area, and transient changes were less than 3% for either land or water. Hurricane Ike resulted in an additional 8% land loss, although not enough time has passed to discriminate between persistent and transient changes.
NASA Astrophysics Data System (ADS)
Yang, Shengfeng; Zhou, Naixie; Zheng, Hui; Ong, Shyue Ping; Luo, Jian
2018-02-01
First-order interfacial phaselike transformations that break the mirror symmetry of the symmetric Σ5 (210) tilt grain boundary (GB) are discovered by combining a modified genetic algorithm with hybrid Monte Carlo and molecular dynamics simulations. Density functional theory calculations confirm this prediction. This first-order coupled structural and adsorption transformation, which produces two variants of asymmetric bilayers, vanishes at an interfacial critical point. A GB complexion (phase) diagram is constructed via semigrand canonical ensemble atomistic simulations for the first time.
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point (ICP) algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
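The core of the registration step above (iteratively pairing contour points with their nearest neighbours and refitting an affine transform) can be sketched as follows in Python, single-threaded and using SciPy's k-d tree; the paper's multithreaded implementation and preprocessing are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_affine(source, target, iters=30):
    """ICP-style loop: match each source point to its nearest target point,
    refit a 3-D affine transform by least squares, and iterate."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    M = np.eye(4)
    for _ in range(iters):
        moved = src @ M[:3, :3].T + M[:3, 3]
        _, idx = tree.query(moved)
        A = np.c_[src, np.ones(len(src))]                  # N x 4 design matrix
        X, *_ = np.linalg.lstsq(A, tgt[idx], rcond=None)   # 4 x 3 affine fit
        M[:3, :3], M[:3, 3] = X[:3].T, X[3]
    return M

rng = np.random.default_rng(4)
cloud = rng.random((500, 3)) * 100
true_shift = np.array([2.0, -1.5, 1.0])
M = icp_affine(cloud, cloud + true_shift)
print(np.round(M[:3, 3], 2))    # should approximately recover the applied shift
```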
NASA Astrophysics Data System (ADS)
Tong, Daniel Quansong; Kang, Daiwen; Aneja, Viney P.; Ray, John D.
2005-01-01
We present in this study both measurement-based and modeling analyses for elucidation of source attribution, influence areas, and process budget of reactive nitrogen oxides at two rural southeast United States sites (Great Smoky Mountains national park (GRSM) and Mammoth Cave national park (MACA)). Availability of nitrogen oxides is considered as the limiting factor to ozone production in these areas and the relative source contribution of reactive nitrogen oxides from point or mobile sources is important in understanding why these areas have high ozone. Using two independent observation-based techniques, multiple linear regression analysis and emission inventory analysis, we demonstrate that point sources contribute a minimum of 23% of total NOy at GRSM and 27% at MACA. The influence areas for these two sites, or origins of nitrogen oxides, are investigated using trajectory-cluster analysis. The result shows that air masses from the West and Southwest sweep over GRSM most frequently, while pollutants transported from the eastern half (i.e., East, Northeast, and Southeast) have limited influence (<10% out of all air masses) on air quality at GRSM. The processes responsible for formation and removal of reactive nitrogen oxides are investigated using a comprehensive 3-D air quality model (Multiscale Air Quality SImulation Platform (MAQSIP)). The NOy contribution associated with chemical transformations to NOz and O3, based on process budget analysis, is as follows: 32% and 84% for NOz, and 26% and 80% for O3 at GRSM and MACA, respectively. The similarity between NOz and O3 process budgets suggests a close association between nitrogen oxides and effective O3 production at these rural locations.
Logarithmic Transformations in Regression: Do You Transform Back Correctly?
ERIC Educational Resources Information Center
Dambolena, Ismael G.; Eriksen, Steven E.; Kopcso, David P.
2009-01-01
The logarithmic transformation is often used in regression analysis for a variety of purposes such as the linearization of a nonlinear relationship between two or more variables. We have noticed that when this transformation is applied to the response variable, the computation of the point estimate of the conditional mean of the original response…
Poerschmann, Juergen; Koschorreck, Matthias; Górecki, Tadeusz
2017-02-01
Natural neutralization of acidic mining lakes is often limited by organic matter. The knowledge of the sources and degradability of organic matter is crucial for understanding alkalinity generation in these lakes. Sediments collected at different depths (surface sediment layer from 0 to 1 cm and deep sediment layer from 4 to 5 cm) from an acidic mining lake were studied in order to characterize sedimentary organic matter based on neutral signature markers. Samples were exhaustively extracted, subjected to pre-chromatographic derivatizations and analyzed by GC/MS. Herein, molecular distributions of diagnostic alkanes/alkenes, terpenes/terpenoids, polycyclic aromatic hydrocarbons, aliphatic alcohols and ketones, sterols, and hopanes/hopanoids were addressed. Characterization of the contribution of natural vs. anthropogenic sources to the sedimentary organic matter in these extreme environments was then possible based on these distributions. With the exception of polycyclic aromatic hydrocarbons, combined concentrations across all marker classes proved higher in the surface sediment layer as compared to those in the deep sediment layer. Alkane and aliphatic alcohol distributions pointed to predominantly allochthonous over autochthonous contribution to sedimentary organic matter. Sterol patterns were dominated by phytosterols of terrestrial plants including stigmasterol and β-sitosterol. Hopanoid markers with the ββ-biohopanoid "biological" configuration were more abundant in the surface sediment layer, which pointed to higher bacterial activity. The pattern of polycyclic aromatic hydrocarbons pointed to prevailing anthropogenic input. Pyrolytic markers were likely due to atmospheric deposition from a nearby former coal combustion facility. The combined analysis of the array of biomarkers provided new insights into the sources and transformations of organic matter in lake sediments. Copyright © 2016 Elsevier B.V. All rights reserved.
Yang, S A
2002-10-01
This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating an auxiliary interior surface that satisfies certain boundary condition into the body surface. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical calculations consist of the acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.
NASA Astrophysics Data System (ADS)
Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.
2017-10-01
The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners, or keypoints, in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor. The method was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Keypoints are matched based on their descriptor vectors: nearest-neighbour matching is employed based on a metric distance between the descriptors, where the metrics include Euclidean and city block, among others. Rough matching outputs not only correct matches but also faulty ones. A previous work in automatic georeferencing incorporates a geometric restriction; in this work, we applied a simplified version of that learning method. Random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived: it identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved using affine, projective, and polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of interest points, selected randomly, between the master image and the transformed slave image.
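A compact OpenCV-based sketch of the pipeline described above (FAST keypoints, SIFT descriptors, nearest-neighbour matching, RANSAC-filtered affine estimation) is shown below. The file names, FAST threshold, ratio-test value and reprojection threshold are placeholders rather than the values used in the study, and the images are assumed to be available on disk.

```python
import cv2
import numpy as np

# Placeholder file names; the master is the georeferenced reference image.
master = cv2.imread("master_band.tif", cv2.IMREAD_GRAYSCALE)
slave = cv2.imread("slave_band.tif", cv2.IMREAD_GRAYSCALE)

fast = cv2.FastFeatureDetector_create(threshold=25)
sift = cv2.SIFT_create()

kp_m = fast.detect(master, None)
kp_s = fast.detect(slave, None)
kp_m, des_m = sift.compute(master, kp_m)
kp_s, des_s = sift.compute(slave, kp_s)

# Nearest-neighbour matching on Euclidean descriptor distance, with a ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des_s, des_m, k=2)
good = []
for pair in raw:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

src = np.float32([kp_s[m.queryIdx].pt for m in good])
dst = np.float32([kp_m[m.trainIdx].pt for m in good])

# RANSAC rejects fall-out matches while estimating the affine transform.
A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)
warped = cv2.warpAffine(slave, A, (master.shape[1], master.shape[0]))
print("inlier matches:", int(inliers.sum()), "of", len(good))
```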
An intelligent data model for the storage of structured grids
NASA Astrophysics Data System (ADS)
Clyne, John; Norton, Alan
2013-04-01
With support from the U.S. National Science Foundation we have developed, and currently maintain, VAPOR: a geosciences-focused, open source visual data analysis package. VAPOR enables highly interactive exploration, as well as qualitative and quantitative analysis of high-resolution simulation outputs using only a commodity, desktop computer. The enabling technology behind VAPOR's ability to interact with a data set, whose size would overwhelm all but the largest analysis computing resources, is a progressive data access file format, called the VAPOR Data Collection (VDC). The VDC is based on the discrete wavelet transform and their information compaction properties. Prior to analysis, raw data undergo a wavelet transform, concentrating the information content into a fraction of the coefficients. The coefficients are then sorted by their information content (magnitude) into a small number of bins. Data are reconstructed by applying an inverse wavelet transform. If all of the coefficient bins are used during reconstruction the process is lossless (up to floating point round-off). If only a subset of the bins are used, an approximation of the original data is produced. A crucial point here is that the principal benefit to reconstruction from a subset of wavelet coefficients is a reduction in I/O. Further, if smaller coefficients are simply discarded, or perhaps stored on more capacious tertiary storage, secondary storage requirements (e.g. disk) can be reduced as well. In practice, these reductions in I/O or storage can be on the order of tens or even hundreds. This talk will briefly describe the VAPOR Data Collection, and will present real world success stories from the geosciences that illustrate how progressive data access enables highly interactive exploration of Big Data.
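The progressive-access principle described above (wavelet-transform the field, retain only the most significant coefficients, reconstruct an approximation) can be demonstrated with PyWavelets in a few lines. This is only a conceptual sketch, not the VDC file format; the wavelet, decomposition level and 5% retention fraction are arbitrary choices for the example.

```python
import numpy as np
import pywt

# Synthetic 2-D field standing in for one slice of a simulation output.
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 512), np.linspace(0, 4 * np.pi, 512))
field = np.sin(x) * np.cos(2 * y) + 0.1 * np.random.default_rng(5).standard_normal(x.shape)

# Forward wavelet transform; flatten coefficients to rank them by magnitude.
coeffs = pywt.wavedec2(field, "haar", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

# Keep only the largest 5% of coefficients (one "bin"); zero the rest.
keep = int(0.05 * arr.size)
thresh = np.sort(np.abs(arr).ravel())[-keep]
arr_small = np.where(np.abs(arr) >= thresh, arr, 0.0)

# Reconstruct an approximation of the field from the retained coefficients.
approx = pywt.waverec2(pywt.array_to_coeffs(arr_small, slices, output_format="wavedec2"), "haar")
approx = approx[:field.shape[0], :field.shape[1]]   # crop any boundary padding
rel_err = np.linalg.norm(approx - field) / np.linalg.norm(field)
print(f"kept {keep} of {arr.size} coefficients, relative error {rel_err:.3f}")
```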
NASA Astrophysics Data System (ADS)
Voss, Anja; Bärlund, Ilona; Punzet, Manuel; Williams, Richard; Teichert, Ellen; Malve, Olli; Voß, Frank
2010-05-01
Although catchment-scale modelling of water and solute transport and transformations is a widely used technique to study pollution pathways and the effects of natural changes, policies and mitigation measures, there are only a few examples of global water quality modelling. This work provides a description of the new continental-scale water quality model WorldQual and the analysis of model simulations under changed climate and anthropogenic conditions with respect to changes in diffuse and point loading as well as surface water quality. BOD is used as an indicator of the level of organic pollution and its oxygen-depleting potential, and of the overall health of aquatic ecosystems. The first application of this new water quality model is to the river systems of Europe. The model itself is being developed as part of the EU-funded SCENES Project, which has the principal goal of developing new scenarios of the future of freshwater resources in Europe. The aim of the model is to determine chemical fluxes in different pathways, combining analysis of water quantity with water quality. Simple equations, consistent with the availability of data on the continental scale, are used to simulate the response of in-stream BOD concentrations to diffuse and anthropogenic point loadings as well as flow dilution. Point sources are divided into manufacturing, domestic and urban loadings, whereas diffuse loadings come from scattered settlements, agricultural input (for instance livestock farming), and also from natural background sources. The model is tested against measured longitudinal gradients and time series data at specific river locations with different loading characteristics, such as the Thames, which is dominated by domestic loading, and the Ebro, with a relatively high share of diffuse loading. With scenario studies the influence of climate and anthropogenic changes on European water resources is investigated, addressing the following questions: 1. What percentage of river systems will have degraded water quality due to different driving forces? 2. How will climate change and changes in wastewater discharges affect water quality? For the analysis these scenario aspects are included: 1. climate, with changed runoff (affecting diffuse pollution and loading from sealed areas), river discharge (causing dilution or concentration of point-source pollution) and water temperature (affecting BOD degradation); 2. point sources, with changed population (affecting domestic pollution) and connectivity to treatment plants (influencing domestic and manufacturing pollution as well as input from sealed areas and scattered settlements).
Method and apparatus for reducing the harmonic currents in alternating-current distribution networks
Beverly, Leon H.; Hance, Richard D.; Kristalinski, Alexandr L.; Visser, Age T.
1996-01-01
An improved apparatus and method reduce the harmonic content of AC line and neutral line currents in polyphase AC source distribution networks. The apparatus and method employ a polyphase Zig-Zag transformer connected between the AC source distribution network and a load. The apparatus and method also employs a mechanism for increasing the source neutral impedance of the AC source distribution network. This mechanism can consist of a choke installed in the neutral line between the AC source and the Zig-Zag transformer.
Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M
2017-02-15
Understanding ambient background concentrations in soil, at a local scale, is an essential part of environmental risk assessment. Where high resolution geochemical soil surveys have not been undertaken, soil data from alternative sources, such as environmental site assessment reports, can be used to support an understanding of ambient background conditions. Concentrations of metals/metalloids (As, Mn, Ni, Pb and Zn) were extracted from open-source environmental site assessment reports, for soils derived from the Newer Volcanics basalt, of Melbourne, Victoria, Australia. A manual screening method was applied to remove samples that were indicated to be contaminated by point sources and hence not representative of ambient background conditions. The manual screening approach was validated by comparison to data from a targeted background soil survey. Statistical methods for exclusion of contaminated samples from background soil datasets were compared to the manual screening method. The statistical methods tested included the Median plus Two Median Absolute Deviations, the upper whisker of a normal and log transformed Tukey boxplot, the point of inflection on a cumulative frequency plot and the 95th percentile. We have demonstrated that where anomalous sample results cannot be screened using site information, the Median plus Two Median Absolute Deviations is a conservative method for derivation of ambient background upper concentration limits (i.e. expected maximums). The upper whisker of a boxplot and the point of inflection on a cumulative frequency plot, were also considered adequate methods for deriving ambient background upper concentration limits, where the percentage of contaminated samples is <25%. Median ambient background concentrations of metals/metalloids in the Newer Volcanic soils of Melbourne were comparable to ambient background concentrations in Europe and the United States, except for Ni, which was naturally enriched in the basalt-derived soils of Melbourne. Copyright © 2016 Elsevier B.V. All rights reserved.
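A small numpy sketch of the screening statistics compared above is given below; the synthetic lead-like data and the use of the Tukey fence (Q3 + 1.5 IQR) as a stand-in for the boxplot's upper whisker are assumptions made for illustration.

```python
import numpy as np

def ambient_background_limits(conc):
    """Candidate upper limits of ambient background from a soil dataset."""
    c = np.asarray(conc, dtype=float)
    med = np.median(c)
    mad = np.median(np.abs(c - med))
    q1, q3 = np.percentile(c, [25, 75])
    logc = np.log(c)
    lq1, lq3 = np.percentile(logc, [25, 75])
    return {
        "median_plus_2MAD": med + 2 * mad,
        "tukey_upper_fence": q3 + 1.5 * (q3 - q1),
        "tukey_upper_fence_log": float(np.exp(lq3 + 1.5 * (lq3 - lq1))),
        "p95": float(np.percentile(c, 95)),
    }

# Mostly background Pb-like values plus a few point-source-affected samples.
rng = np.random.default_rng(6)
data = np.r_[rng.lognormal(np.log(20), 0.4, 180), rng.uniform(200, 800, 20)]
print({k: round(v, 1) for k, v in ambient_background_limits(data).items()})
```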
NASA Astrophysics Data System (ADS)
González, C. M.; Gómez, C. D.; Rojas, N. Y.; Acevedo, H.; Aristizábal, B. H.
2017-03-01
Cities in emerging countries are facing a fast growth and urbanization; however, the study of air pollutant emissions and its dynamics is scarce, making their populations vulnerable to potential effects of air pollution. This situation is critical in medium-sized urban areas built along the tropical Andean mountains. This work assesses the contribution of on-road vehicular and point-source industrial activities in the medium-sized Andean city of Manizales, Colombia. Annual fluxes of criteria pollutants, NMVOC, and greenhouse gases were estimated. Emissions were dominated by vehicular activity, with more than 90% of total estimated releases for the majority of air pollutants. On-road vehicular emissions for CO (43.4 Gg/yr) and NMVOC (9.6 Gg/yr) were mainly associated with the use of motorcycles (50% and 81% of total CO and NMVOC emissions respectively). Public transit buses were the main source of PM10 (47%) and NOx (48%). The per-capita emission index was significantly higher in Manizales than in other medium-sized cities, especially for NMVOC, CO, NOx and CO2. The unique mountainous terrain of Andean cities suggest that a methodology based on VSP model could give more realistic emission estimates, with additional model components that include slope and acceleration. Food and beverage facilities were the main contributors of point-source industrial emissions for PM10 (63%), SOx (55%) and NOx (45%), whereas scrap metal recycling had high emissions of CO (73%) and NMVOC (47%). Results provide the baseline for ongoing research in atmospheric modeling and urban air quality, in order to improve the understanding of air pollutant fluxes, transport and transformation in the atmosphere. In addition, this emission inventory could be used as a tool to identify areas of public health exposure and provide information for future decision makers.
Converter topologies and control
Rodriguez, Fernando; Qin, Hengsi; Chapman, Patrick
2018-05-01
An inverter includes a transformer that includes a first winding, a second winding, and a third winding, a DC-AC inverter electrically coupled to the first winding of the transformer, a cycloconverter electrically coupled to the second winding of the transformer, an active filter electrically coupled to the third winding of the transformer. The DC-AC inverter is adapted to convert the input DC waveform to an AC waveform delivered to the transformer at the first winding. The cycloconverter is adapted to convert an AC waveform received at the second winding of the transformer to the output AC waveform having a grid frequency of the AC grid. The active filter is adapted to sink and source power with one or more energy storage devices based on a mismatch in power between the DC source and the AC grid.
Transformations of Wordsworth's Nature in Nineteenth and Early Twentieth Century British Literature.
ERIC Educational Resources Information Center
Dodson, Charles B.
One way of making connections among various authors in a survey course is to emphasize recurring themes, images, and tropes; the instructor can point out how they are transformed by a constantly changing ethos and set of historical circumstances. A case in point is the second part of a British survey, typically going from William Blake or William…
NASA Astrophysics Data System (ADS)
Wang, Wenke; Wang, Zhan; Hou, Rongzhe; Guan, Longyao; Dang, Yan; Zhang, Zaiyong; Wang, Hao; Duan, Lei; Wang, Zhoufeng
2018-05-01
The hydrodynamic processes and impacts exerted by river-groundwater transformation need to be studied at regional and catchment scale, especially with respect to diverse geology and lithology. This work adopted an integrated method to study four typical modes (characterized primarily by lithology, flow subsystems, and gaining/losing river status) and the associated hydrodynamic processes and ecological impacts in the southern part of Junggar Basin, China. River-groundwater transformation occurs one to four times along the basin route. For mode classification, such transformation occurs: once or twice, controlled by lithological factors (mode 1); twice, impacted by geomorphic features and lithological structures (mode 2); and three or four times, controlled by both geological and lithological structures (modes 3 and 4). Results also suggest: (1) there exist local and regional groundwater flow subsystems at 400 m depth, which form a multistage nested groundwater flow system. The groundwater flow velocities are 0.1-1.0 and <0.1 m/day for each of two subsystems; (2) the primary groundwater hydro-chemical type takes on apparent horizontal and vertical zoning characteristics, and the TDS of the groundwater evidently increases along the direction of groundwater flow, driven by hydrodynamic processes; (3) the streams, wetland and terminal lakes are the end-points of the local and regional groundwater flow systems. This work indicates that not only are groundwater and river water derived from the same source, but also hydrodynamic and hydro-chemical processes and ecological effects, as a whole in arid areas, are controlled by stream-groundwater transformation.
7. Photographic copy of photograph (Source: National Archives, Rocky Mountain ...
7. Photographic copy of photograph (Source: National Archives, Rocky Mountain Region, Denver, Salt River Project History, Final History to 1916. p. 506) Interior view of transformer house. No date. CA. 1916. - Theodore Roosevelt Dam, Transformer House, Salt River, Tortilla Flat, Maricopa County, AZ
LUO, BAO; TANG, LIPING; WANG, ZHISHAN; ZHANG, JUNLAN; LING, YIQUN; FENG, WENGUANG; SUN, JU-ZHONG; STOCKARD, CECIL R.; FROST, ANDRA R.; CHEN, YIU-FAI; GRIZZLE, WILLIAM E.; FALLON, MICHAEL B.
2010-01-01
Background & Aims Hepatic production and release of endothelin 1 plays a central role in experimental hepatopulmonary syndrome after common bile duct ligation by stimulating pulmonary endothelial nitric oxide production. In thioacetamide-induced nonbiliary cirrhosis, hepatic endothelin 1 production and release do not occur, and hepatopulmonary syndrome does not develop. However, the source and regulation of hepatic endothelin 1 after common bile duct ligation are not fully characterized. We evaluated the sources of hepatic endothelin 1 production after common bile duct ligation in relation to thioacetamide cirrhosis and assessed whether transforming growth factor β1 regulates endothelin 1 production. Methods Hepatopulmonary syndrome and hepatic and plasma endothelin 1 levels were evaluated after common bile duct ligation or thioacetamide administration. Cellular sources of endothelin 1 were assessed by immunohistochemistry and laser capture microdissection of cholangiocytes. Transforming growth factor β1 expression and signaling were assessed by using immunohistochemistry and Western blotting and by evaluating normal rat cholangiocytes. Results Hepatic and plasma endothelin 1 levels increased and hepatopulmonary syndrome developed only after common bile duct ligation. Hepatic endothelin 1 and transforming growth factor β1 levels increased over a similar time frame, and cholangiocytes were a major source of each peptide. Transforming growth factor β1 signaling in cholangiocytes in vivo was evident by increased phosphorylation and nuclear localization of Smad2, and hepatic endothelin 1 levels correlated directly with liver transforming growth factor β1 and phosphorylated Smad2 levels. Transforming growth factor β1 also stimulated endothelin 1 promoter activity, expression, and production in normal rat cholangiocytes. Conclusions Cholangiocytes are a major source of hepatic endothelin 1 production during the development of hepatopulmonary syndrome after common bile duct ligation, but not in thioacetamide-induced cirrhosis. Transforming growth factor β1 stimulates cholangiocyte endothelin 1 expression and production. Cholangiocyte-derived endothelin 1 may be an important endocrine mediator of experimental hepatopulmonary syndrome. PMID:16083721
Di Vito, Alessia; Fanfoni, Massimo; Tomellini, Massimo
2010-12-01
Starting from a stochastic two-dimensional process, we studied the transformation of points into disks and squares following a protocol in which, at any step, the island size increases proportionally to the corresponding Voronoi tessera. Two interaction mechanisms among islands are dealt with: coalescence and impingement. We studied the evolution of the island density and of the island size distribution functions as a function of the island collision mechanism, for both Poissonian and correlated spatial distributions of points. The island size distribution functions are found to be invariant with the fraction of transformed phase for a given stochastic process. The n(Θ) curve describing the island decay is found to be independent of the island shape (except at high correlation degrees) and of the interaction mechanism.
Transforming Aggregate Object-Oriented Formal Specifications to Code
1999-03-01
integration issues associated with a formal-based software transformation system, such as the source specification, the problem space architecture, design architecture ... design transforms, and target software transforms. Software is critical in today's Air Force, yet its specification, design, and development
Automatic extraction of the mid-sagittal plane using an ICP variant
NASA Astrophysics Data System (ADS)
Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus
2008-03-01
Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
Linear approximations of nonlinear systems
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Su, R.
1983-01-01
The development of a method for designing an automatic flight controller for short and vertical takeoff aircraft is discussed. This technique involves transformations of nonlinear systems to controllable linear systems and takes into account the nonlinearities of the aircraft. In general, the transformations cannot always be given in closed form. Using partial differential equations, an approximate linear system called the modified tangent model is introduced. A linear transformation of this tangent model to Brunovsky canonical form can be constructed, and from this the linear part (about a state-space point x_0) of an exact transformation for the nonlinear system can be found. It is shown that a canonical expansion in Lie brackets about the point x_0 yields the same modified tangent model.
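As a minimal numerical illustration of linearizing a nonlinear system about a state-space point x_0 (not the report's modified tangent model or the Brunovsky construction, and with a purely hypothetical toy dynamics), one can form the Jacobians of xdot = f(x, u) by finite differences:

```python
import numpy as np

def jacobian(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians A = df/dx and B = df/du evaluated at (x0, u0)."""
    f0 = f(x0, u0)
    A = np.zeros((f0.size, x0.size))
    B = np.zeros((f0.size, u0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    for j in range(u0.size):
        du = np.zeros_like(u0); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

# Toy nonlinear dynamics (illustrative only): xdot = f(x, u)
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
A, B = jacobian(f, np.array([0.1, 0.0]), np.array([0.0]))
```

The pair (A, B) is the ordinary tangent linearization; the paper's point is that a modified tangent model, rather than this naive one, matches the linear part of an exact transformation.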
Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.
Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing
2016-10-01
The method, based on the rotation of the angular spectrum in the frequency domain, is generally used for the diffraction simulation between the tilted planes. Due to the rotation of the angular spectrum, the interval between the sampling points in the Fourier domain is not even. For the conventional fast Fourier transform (FFT)-based methods, a spectrum interpolation is needed to get the approximate sampling value on the equidistant sampling points. However, due to the numerical error caused by the spectrum interpolation, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between the tilted planes is transformed into a problem about the discrete Fourier transform on the uneven sampling points, which can be evaluated effectively and precisely through the nonuniform fast Fourier transform method (NUFFT). The most important advantage of this method is that the conventional spectrum interpolation is avoided and the high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Also, its calculation efficiency is comparable with that of the conventional FFT-based methods. Numerical examples as well as a discussion about the calculation accuracy and the sampling method are presented.
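The following sketch shows the core difficulty in the plainest form: evaluating the Fourier sum of a sampled field at non-equidistant frequency points. It is a brute-force nonuniform DFT, which an NUFFT library would compute approximately but much faster; the function name, grid, and normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nudft2(field, fx, fy, dx):
    """Evaluate the spectrum of a sampled 2-D field at arbitrary, non-equidistant
    spatial frequencies (fx, fy).  Brute force, O(N^2 * M); an NUFFT computes
    the same sums to controlled accuracy in roughly O(N^2 log N)."""
    ny, nx = field.shape
    x = (np.arange(nx) - nx // 2) * dx
    y = (np.arange(ny) - ny // 2) * dx
    X, Y = np.meshgrid(x, y)
    out = np.empty(fx.size, dtype=complex)
    for k in range(fx.size):
        out[k] = np.sum(field * np.exp(-2j * np.pi * (fx.flat[k] * X + fy.flat[k] * Y)))
    return out * dx * dx
```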
Cosmological information in Gaussianized weak lensing signals
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.; Kiessling, A.
2011-11-01
Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ= 1500, and a decrease in the area of the confidence region in the Ωm-σ8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non-linear regime and resort to an exploration of parameter space via simulations.
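A minimal sketch of the Gaussianization step, assuming a toy lognormal-like "convergence" map and using scipy.stats.boxcox (the offset value and field are illustrative; the paper's pipeline, noise and smoothing treatment are not reproduced):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
kappa = np.exp(rng.normal(size=(256, 256))) - 1.0        # toy convergence field

# Box-Cox requires strictly positive input, so shift by an (illustrative) offset
offset = 1.0 - kappa.min() + 1e-3
gaussianized, lam = stats.boxcox((kappa + offset).ravel())
print("optimal Box-Cox lambda:", lam)

# Power spectrum of the Gaussianized field (flat-sky, arbitrary units)
field = (gaussianized - gaussianized.mean()).reshape(kappa.shape)
pk2d = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
```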
What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.
2012-12-01
A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.
Improving Photometric Calibration of Meteor Video Camera Systems.
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
2017-09-01
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera band pass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
Improving Photometric Calibration of Meteor Video Camera Systems
NASA Technical Reports Server (NTRS)
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
2017-01-01
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
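As an illustration of the zero-point step described above (not the MEO pipeline itself; the function name, sigma-clipping threshold and iteration count are assumptions), a photometric zero point can be estimated as a robust offset between catalog synthetic magnitudes and instrumental magnitudes:

```python
import numpy as np

def zero_point(m_catalog, m_instrumental, clip=3.0, iters=3):
    """Zero point as the sigma-clipped median of (catalog - instrumental) magnitudes.
    Returns the zero point and its standard error over the surviving stars."""
    diff = np.asarray(m_catalog) - np.asarray(m_instrumental)
    mask = np.ones(diff.size, dtype=bool)
    for _ in range(iters):
        med, std = np.median(diff[mask]), np.std(diff[mask])
        mask = np.abs(diff - med) < clip * std
    return np.median(diff[mask]), np.std(diff[mask]) / np.sqrt(mask.sum())
```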
Two Different Squeeze Transformations
NASA Technical Reports Server (NTRS)
Han, D. (Editor); Kim, Y. S.
1996-01-01
Lorentz boosts are squeeze transformations. While these transformations are similar to those in squeezed states of light, they are fundamentally different from both physical and mathematical points of view. The difference is illustrated in terms of two coupled harmonic oscillators, and in terms of the covariant harmonic oscillator formalism.
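The statement that a Lorentz boost is a squeeze can be made explicit with a standard identity (a textbook fact, included here only as an illustration of the abstract's claim): in light-cone coordinates the boost rescales the two axes reciprocally.

```latex
\begin{pmatrix} t' \\ x' \end{pmatrix}
  = \begin{pmatrix} \cosh\eta & \sinh\eta \\ \sinh\eta & \cosh\eta \end{pmatrix}
    \begin{pmatrix} t \\ x \end{pmatrix},
\qquad
u = t + x,\quad v = t - x
\;\Longrightarrow\;
u' = e^{\eta} u,\quad v' = e^{-\eta} v,
\qquad u'v' = uv = t^{2} - x^{2}.
```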
Management of reforming of housing-and-communal services
NASA Astrophysics Data System (ADS)
Skripnik, Oksana
2017-10-01
The international experience of reforming housing and communal services is considered. The main scientific and methodological approaches to systemic transformation of the housing sphere are analyzed. The main reform models are identified; the interaction of the participants in the structural change process is characterized from the point of view of their commercial and social importance; advantages and shortcomings are revealed; and the elements of the reform transformations are singled out with respect to investment appeal, competitiveness, energy efficiency and the social importance of the measures carried out.
Conformal structure of massless scalar amplitudes beyond tree level
NASA Astrophysics Data System (ADS)
Banerjee, Nabamita; Banerjee, Shamik; Bhatkar, Sayali Atul; Jain, Sachin
2018-04-01
We show that the one-loop on-shell four-point scattering amplitude of massless ϕ^4 scalar field theory in 4D Minkowski spacetime, when Mellin transformed to the celestial sphere at infinity, transforms covariantly under the global conformal group (SL(2, ℂ)) on the sphere. The unitarity of the four-point scalar amplitudes is recast into this Mellin basis. We show that the same conformal structure also appears for the two-loop Mellin amplitude. Finally we comment on some universal structure for all-loop four-point Mellin amplitudes specific to this theory.
Rapid update of discrete Fourier transform for real-time signal processing
NASA Astrophysics Data System (ADS)
Sherlock, Barry G.; Kakad, Yogendra P.
2001-10-01
In many identification and target recognition applications, the incoming signal will have properties that render it amenable to analysis or processing in the Fourier domain. In such applications, however, it is usually essential that the identification or target recognition be performed in real time. An important constraint upon real-time processing in the Fourier domain is the time taken to perform the Discrete Fourier Transform (DFT). Ideally, a new Fourier transform should be obtained after the arrival of every new data point. However, the Fast Fourier Transform (FFT) algorithm requires on the order of N log2 N operations, where N is the length of the transform, and this usually makes calculation of the transform for every new data point computationally prohibitive. In this paper, we develop an algorithm to update the existing DFT to represent the new data series that results when a new signal point is received. Updating the DFT in this way reduces the computational cost by a factor on the order of log2 N. The algorithm can be modified to work in the presence of data window functions. This is a considerable advantage, because windowing is often necessary to reduce edge effects that occur because the implicit periodicity of the Fourier transform is not exhibited by the real-world signal. Versions are developed in this paper for use with the boxcar, split triangular, Hanning, Hamming, and Blackman windows. Generalization of these results to 2D is also presented.
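The idea of updating an existing DFT when one sample leaves and one enters the window can be illustrated with the standard sliding-DFT recurrence, which updates each of the N bins in O(1); this is a generic sketch for the unwindowed (boxcar) case and may differ in detail from the paper's algorithm, which also handles other window functions.

```python
import numpy as np

def sliding_dft_update(X, x_old, x_new, N):
    """Update the length-N DFT X of a window after dropping x_old and appending x_new.
    Each bin obeys X_k <- (X_k - x_old + x_new) * exp(+2j*pi*k/N)."""
    k = np.arange(N)
    return (X - x_old + x_new) * np.exp(2j * np.pi * k / N)

# Check against a direct FFT of the shifted window
rng = np.random.default_rng(1)
N = 64
x = rng.normal(size=N + 1)
X0 = np.fft.fft(x[:N])
X1 = sliding_dft_update(X0, x[0], x[N], N)
assert np.allclose(X1, np.fft.fft(x[1:N + 1]))
```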
Translational illusion of acoustic sources by transformation acoustics.
Sun, Fei; Li, Shichao; He, Sailing
2017-09-01
An acoustic illusion of creating a translated acoustic source is designed by utilizing transformation acoustics. An acoustic source shifter (ASS) composed of layered acoustic metamaterials is designed to achieve such an illusion. A practical example where the ASS is made with naturally available materials is also given. Numerical simulations verify the performance of the proposed device. The designed ASS may have some applications in, e.g., anti-sonar detection.
Bidirectional Elastic Image Registration Using B-Spline Affine Transformation
Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao
2014-01-01
A registration scheme termed as B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-Spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes but with large deformation, a bi-directional instead of the traditional unidirectional objective / cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210
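The building block that BSAT attaches to each control point is an affine map estimated from matched points. As a minimal sketch (a plain least-squares affine fit between correspondences, not the BSAT/bidirectional-ICP formulation itself; the names and toy data are illustrative):

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst.
    Returns A (2x2) and t (2,) such that dst ~= src @ A.T + t."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # (3, 2) solution
    return M[:2].T, M[2]

# Toy usage: recover a known affine map from noisy correspondences
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, size=(200, 2))
A_true, t_true = np.array([[1.05, 0.10], [-0.08, 0.95]]), np.array([3.0, -2.0])
dst = src @ A_true.T + t_true + rng.normal(scale=0.1, size=src.shape)
A_est, t_est = fit_affine_2d(src, dst)
```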
Analysis and Countermeasure Study on DC Bias of Main Transformer in a City
NASA Astrophysics Data System (ADS)
Wang, PengChao; Wang, Hongtao; Song, Xinpu; Gu, Jun; Liu, yong; Wu, weili
2017-07-01
Motivated by the DC magnetic bias observed in December 2015 at the Guohua Beijing thermal power plant transformer, 24 hours of monitored direct current data are analyzed. We find that the maximum DC current reaches up to 25 and that the trend cycle is about 30 s. On this basis, the possible causes, namely geomagnetic storms, HVDC operation and subway operation, are compared in terms of their mechanisms, together with a comprehensive analysis of the thermal power plant's geographical location, surrounding environment and electrical connections. The results show that the main cause of the DC bias of the Guohua thermal power transformer is the operation of the subway, and that the DC bias current changes periodically. Finally, methods for suppressing the DC magnetic bias of the Guohua thermal power transformer are studied; the simulation results show that connecting a small resistance or a capacitance at the neutral point can effectively suppress the neutral-point current of the main transformer.
Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting
NASA Technical Reports Server (NTRS)
Bergman, Eric A.; Solomon, Sean C.
1987-01-01
The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compression jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.
Genetic transformation of mature citrus plants.
Cervera, Magdalena; Juárez, José; Navarro, Luis; Peña, Leandro
2005-01-01
Most woody fruit species have long juvenile periods that drastically prolong the time required to analyze mature traits. Evaluation of characteristics related to fruits is a requisite to release any new variety into the market. Because of a decline in regenerative and transformation potential, genetic transformation procedures usually employ juvenile material as the source of plant tissue, therefore resulting in the production of juvenile plants. Direct transformation of mature material could ensure the production of adult transgenic plants, bypassing in this way the juvenile phase. Invigoration of the source adult material, establishment of adequate transformation and regeneration conditions, and acceleration of plant development through grafting allowed us to produce transgenic mature sweet orange trees flowering and bearing fruits in a short time period.
Ardila-Rey, Jorge Alfredo; Rojas-Moreno, Mónica Victoria; Martínez-Tarifa, Juan Manuel; Robles, Guillermo
2014-01-01
Partial discharge (PD) detection is a standardized technique to qualify electrical insulation in machines and power cables. Several techniques that analyze the waveform of the pulses have been proposed to discriminate noise from PD activity. Among them, spectral power ratio representation shows great flexibility in the separation of the sources of PD. Mapping spectral power ratios in two-dimensional plots leads to clusters of points which group pulses with similar characteristics. The position in the map depends on the nature of the partial discharge, the setup and the frequency response of the sensors. If these clusters are clearly separated, the subsequent task of identifying the source of the discharge is straightforward so the distance between clusters can be a figure of merit to suggest the best option for PD recognition. In this paper, two inductive sensors with different frequency responses to pulsed signals, a high frequency current transformer and an inductive loop sensor, are analyzed to test their performance in detecting and separating the sources of partial discharges. PMID:24556674
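A minimal sketch of the spectral-power-ratio mapping described above: each detected pulse is reduced to a point in a 2-D plane given by the fractions of its spectral power falling in a low band and a high band. The band edges and sampling rate are illustrative assumptions; in practice they depend on the sensors used.

```python
import numpy as np

def spectral_power_ratios(pulse, fs, low=(0.0, 10e6), high=(10e6, 40e6)):
    """Map a PD pulse (1-D array sampled at fs) to (low-band ratio, high-band ratio)."""
    spec = np.abs(np.fft.rfft(pulse))**2
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    total = spec.sum()
    band_power = lambda band: spec[(freqs >= band[0]) & (freqs < band[1])].sum() / total
    return band_power(low), band_power(high)
```

Plotting these two ratios for many pulses produces the cluster maps whose separation the paper uses as a figure of merit.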
NASA Astrophysics Data System (ADS)
Su, Yun-Ting; Hu, Shuowen; Bethel, James S.
2017-05-01
Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.
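For orientation, a generic cylinder fit can be written as a small nonlinear least-squares problem: parameterize the axis by a point and a direction, and minimize the spread of point-to-axis distances about the radius. This is a hedged, generic sketch (it is neither the paper's angular-distance formulation nor the Hough/inlier baselines), and the initial guesses are assumed to come from elsewhere.

```python
import numpy as np
from scipy.optimize import least_squares

def cylinder_residuals(params, pts):
    """Residuals: distance of each point to the axis line minus the radius."""
    p0, d, r = params[:3], params[3:6], params[6]
    d = d / np.linalg.norm(d)
    diff = pts - p0
    dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
    return dist - r

def fit_cylinder(pts, p0_init, d_init, r_init):
    x0 = np.r_[p0_init, d_init, r_init]
    sol = least_squares(cylinder_residuals, x0, args=(pts,))
    axis = sol.x[3:6] / np.linalg.norm(sol.x[3:6])
    return sol.x[:3], axis, sol.x[6]
```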
Pastén-Zapata, Ernesto; Ledesma-Ruiz, Rogelio; Harter, Thomas; Ramírez, Aldo I; Mahlknecht, Jürgen
2014-02-01
Nitrate isotopic values are often used as a tool to understand sources of contamination in order to effectively manage groundwater quality. However, recent literature describes that biogeochemical reactions may modify these values. Therefore, data interpretation is difficult and often vague. We provide a discussion on this topic and complement the study using halides as comparative tracers assessing an aquifer underneath a sub-humid to humid region in NE Mexico. Hydrogeological information and stable water isotopes indicate that active groundwater recharge occurs in the 8000km(2) study area under present-day climatic and hydrologic conditions. Nitrate isotopes and halide ratios indicate a diverse mix of nitrate sources and transformations. Nitrate sources include organic waste and wastewater, synthetic fertilizers and soil processes. Animal manure and sewage from septic tanks were the causes of groundwater nitrate pollution within orchards and vegetable agriculture. Dairy activities within a radius of 1,000 m from a sampling point significantly contributed to nitrate pollution. Leachates from septic tanks caused nitrate pollution in residential areas. Soil nitrogen and animal waste were the sources of nitrate in groundwater under shrubland and grassland. Partial denitrification processes helped to attenuate nitrate concentration underneath agricultural lands and grassland, especially during summer months.
Innovative design of parabolic reflector light guiding structure
NASA Astrophysics Data System (ADS)
Whang, Allen J.; Tso, Chun-Hsien; Chen, Yi-Yung
2008-02-01
With the growing emphasis on sustainable green architecture, guiding natural light indoors is of increasing importance. The advantages are manifold: a better color rendering index, substantial energy savings from an environmental viewpoint, improved human health, and so on. Our goal is to design an innovative structure that converts outdoor sunlight impinging on a large surface into a nearly linear beam source, and then into a near point source that enters the indoor space and can be used for interior lighting. No opto-electrical conversion is involved; the light is guided into the building to perform the illumination as well as the imaging function. Non-imaging optics, well known from its application to solar concentrators, provides structures that fulfill these needs and can also serve as energy collectors in solar energy devices. We have designed a pair of large and small parabolic reflectors that collect daylight and reduce the beam area from large to small. Using these parabolic reflectors we then build a light-guide system that guides the collected light, improving the conversion of a large surface source into a nearly linear source while providing a larger collection area.
NASA Astrophysics Data System (ADS)
Levit, Creon; Gazis, P.
2006-06-01
The graphics processing units (GPUs) built in to all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform (windows, linux, Apple OSX) application which leverages some of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application area is the interactive analysis of complex, multivariate space science and astrophysics data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10^6-10^8.
NASA Astrophysics Data System (ADS)
Liu, Yi; Chen, Dong-Feng; Wang, Hong-Li; Chen, Na; Li, Dan; Han, Bu-Xing; Rong, Li-Xia; Zhao, Hui; Wang, Jun; Dong, Bao-Zhong
2002-10-01
The conformation of polystyrene in the anti-solvent process of supercritical fluids (compressed CO2 + polystyrene + toluene) has been studied by small angle x-ray scattering with synchrotron radiation as an x-ray source. Coil-to-globule transformation of the polystyrene chain was observed with the increase of the anti-solvent CO2 pressure; i.e. polystyrene coiled at a pressure lower than the cloud point pressure (Pc) and turned into a globule with a uniform density at pressures higher than Pc. Fractal behaviour was also found in the chain contraction and the mass fractal dimension increased with increasing CO2 pressure.
NASA Astrophysics Data System (ADS)
Hou, Bo-Yu; Peng, Dan-Tao; Shi, Kang-Jie; Yue, Rui-Hong
For the noncommutative torus T, in the case of the noncommutative parameter θ = Z/n, we construct the basis of the Hilbert space H_n in terms of θ functions of the positions z_i of n solitons. The wrapping around the torus generates the algebra A_n, which is the Z_n × Z_n Heisenberg group on θ functions. We find the generators g of a local elliptic su(n), which transform covariantly under the global gauge transformation of A_n. By acting on H_n we establish the isomorphism of A_n and g. We embed this g into the L-matrix of the elliptic Gaudin and Calogero-Moser models to give the dynamics. The moment map of this twisted cotangent su_n(T) bundle is matched to the D-equation with the Fayet-Iliopoulos source term, so the dynamics of the noncommutative solitons become that of the brane. The geometric configuration (k, u) of the spectral curve det|L(u) - k| = 0 describes the brane configuration, with the dynamical variables z_i of the noncommutative solitons as the moduli T^⊗n/S_n. Furthermore, in the noncommutative Chern-Simons theory for the quantum Hall effect, the constraint equation with a quasiparticle source is also identified with the moment map equation of the noncommutative su_n(T) cotangent bundle with marked points. The eigenfunction of the Gaudin differential L-operators, as the Laughlin wave function, is solved by the Bethe ansatz.
Fourier transform infrared spectroscopic analysis of cell differentiation
NASA Astrophysics Data System (ADS)
Ishii, Katsunori; Kimura, Akinori; Kushibiki, Toshihiro; Awazu, Kunio
2007-02-01
Stem cells and their differentiation have received a great deal of attention in regenerative medicine. The process of differentiation, and the resulting formation of tissues, has become progressively better understood through studies using many cell types. These studies of cell and tissue dynamics at the molecular level are carried out through various approaches such as histochemical methods and the application of molecular biology and immunology. However, when regenerative sources (cells, tissues, biomaterials, etc.) are used clinically, they must be measured and quality-controlled by non-invasive methods from the viewpoint of safety. Recently, Fourier Transform Infrared spectroscopy (FT-IR) has been used to monitor biochemical changes in cells and has gained considerable importance. The objective of this study is to establish infrared spectroscopy of cell differentiation as a quality control of cell sources for regenerative medicine. In the present study, as a basic investigation, we examined the adipose differentiation kinetics of preadipocytes (3T3-L1) and the osteoblast differentiation kinetics of bone marrow mesenchymal stem cells (Kusa-A1) by analyzing their infrared absorption spectra. As a result, we were able to analyze the adipose differentiation kinetics using the infrared absorption peak at 1739 cm-1, derived from the ester bonds of triglyceride, and the osteoblast differentiation kinetics using the infrared absorption peak at 1030 cm-1, derived from the phosphate groups of calcium phosphate.
Robust non-rigid registration algorithm based on local affine registration
NASA Astrophysics Data System (ADS)
Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng
2018-04-01
To address the low precision and slow convergence of traditional point-set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative scheme to perform the point-set non-rigid registration from coarse to fine. In each iteration, the sub data point sets and sub model point sets are partitioned and the shape control points of each sub point set are updated. A control-point-guided affine ICP algorithm is then used to solve for the local affine transformation between the corresponding sub point sets. Next, the local affine transformation obtained in the previous step is used to update the sub data point sets and their shape control point sets. When the algorithm reaches the maximum iteration level K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point-set non-rigid registration algorithms.
NASA Astrophysics Data System (ADS)
Abdelrahman, El-Sayed Mohamed; Soliman, Khalid; Essa, Khalid Sayed; Abo-Ezz, Eid Ragab; El-Araby, Tarek Mohamed
2009-06-01
This paper develops a least-squares minimisation approach to determine the depth of a buried structure from numerical second horizontal derivative anomalies obtained from self-potential (SP) data using filters of successive window lengths. The method is based on using a relationship between the depth and a combination of observations at symmetric points with respect to the coordinate of the projection of the centre of the source in the plane of the measurement points with a free parameter (graticule spacing). The problem of depth determination from second derivative SP anomalies has been transformed into the problem of finding a solution to a non-linear equation of the form f(z)=0. Formulas have been derived for horizontal cylinders, spheres, and vertical cylinders. Procedures are also formulated to determine the electric dipole moment and the polarization angle. The proposed method was tested on synthetic noisy and real SP data. In the case of the synthetic data, the least-squares method determined the correct depths of the sources. In the case of practical data (SP anomalies over a sulfide ore deposit, Sariyer, Turkey and over a Malachite Mine, Jefferson County, Colorado, USA), the estimated depths of the buried structures are in good agreement with the results obtained from drilling and surface geology.
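The paper reduces depth estimation to solving a nonlinear equation f(z) = 0. As a generic illustration only (the paper's specific f depends on the source type, the symmetric observation points and the window length, and the stand-in function below is purely hypothetical), a bracketing root finder can be applied once f has been formed from the second-derivative SP observations:

```python
from scipy.optimize import brentq

def solve_depth(f, z_min, z_max):
    """Solve f(z) = 0 for the source depth on a bracketing interval [z_min, z_max].
    f encodes the combination of symmetric second-derivative SP observations
    for the assumed source type (sphere, horizontal or vertical cylinder)."""
    return brentq(f, z_min, z_max)

# Hypothetical stand-in f(z) with a root at z = 12.5 (profile units)
depth = solve_depth(lambda z: z**2 - 156.25, 1.0, 50.0)
```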
Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization
Lyu, Siwei; Simoncelli, Eero P.
2011-01-01
We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent nongaussian sources. Here, we examine a complementary case, in which the source is nongaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons. PMID:19191599
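A minimal sketch of radial Gaussianization under the elliptical-symmetry assumption: whiten the data, then map the empirical distribution of radii onto the radial distribution of a standard Gaussian vector (a chi distribution with d degrees of freedom) while leaving directions unchanged. This simplified empirical-CDF version is offered as an illustration and is not the authors' exact estimator.

```python
import numpy as np
from scipy import stats

def radial_gaussianize(X):
    """X: (n, d) samples assumed elliptically symmetric. Returns RG-transformed data."""
    X = X - X.mean(axis=0)
    C = np.cov(X, rowvar=False)
    W = np.linalg.cholesky(np.linalg.inv(C))      # whitening transform: cov(X @ W) = I
    Y = X @ W
    r = np.linalg.norm(Y, axis=1)
    # empirical radial CDF -> radius quantiles of a standard d-dim Gaussian (chi law)
    ranks = (np.argsort(np.argsort(r)) + 0.5) / len(r)
    r_new = stats.chi.ppf(ranks, df=X.shape[1])
    return Y * (r_new / r)[:, None]
```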
Simplified Relativistic Force Transformation Equation.
ERIC Educational Resources Information Center
Stewart, Benjamin U.
1979-01-01
A simplified relativistic force transformation equation is derived and then used to obtain the equation for the electromagnetic forces on a charged particle, calculate the electromagnetic fields due to a point charge with constant velocity, transform electromagnetic fields in general, derive the Biot-Savart law, and relate it to Coulomb's law.…
ERIC Educational Resources Information Center
Carmeli, Abraham; Sheaffer, Zachary; Binyamin, Galy; Reiter-Palmon, Roni; Shimoni, Tali
2014-01-01
Previous research has pointed to the importance of transformational leadership in facilitating employees' creative outcomes. However, the mechanism by which transformational leadership cultivates employees' creative problem-solving capacity is not well understood. Drawing on theories of leadership, information processing and creativity,…
Critical N = (1, 1) general massive supergravity
NASA Astrophysics Data System (ADS)
Deger, Nihat Sadik; Moutsopoulos, George; Rosseel, Jan
2018-04-01
In this paper we study the supermultiplet structure of N = (1, 1) General Massive Supergravity at non-critical and critical points of its parameter space. To do this, we first linearize the theory around its maximally supersymmetric AdS3 vacuum and obtain the full linearized Lagrangian including fermionic terms. At generic values, the linearized modes can be organized into two massless and two massive multiplets, where supersymmetry relates them in the standard way. At critical points logarithmic modes appear, and we find that at three of these points some of the supersymmetry transformations are non-invertible in the logarithmic multiplets. However, at the fourth critical point, there is a massive logarithmic multiplet with invertible supersymmetry transformations.
Fourier spectroscopy with a one-million-point transformation
NASA Technical Reports Server (NTRS)
Connes, J.; Delouis, H.; Connes, P.; Guelachvili, G.; Maillard, J.; Michel, G.
1972-01-01
A new type of interferometer for use in Fourier spectroscopy has been devised at the Aime Cotton Laboratory of the National Center for Scientific Research (CNRS), Orsay, France. With this interferometer and newly developed computational techniques, interferograms comprising as many as one million samples can now be transformed. The techniques are described, and examples of spectra of thorium and holmium, derived from one million-point interferograms, are presented.
Simulation study on the lightning overvoltage invasion control transformer intelligent substation
NASA Astrophysics Data System (ADS)
Xi, Chuyan; Hao, Jie; Zhang, Ying
2018-04-01
By simulating lightning strikes on an incoming line of an intelligent substation, we investigate the influence of different striking points on the overvoltage caused by the invading lightning wave, as well as the necessity of installing a surge arrester for the main transformer. The results show that, under the given lightning protection measures, installing an arrester near the main transformer can effectively reduce the overvoltage on the bus and the main transformer [1].
How the Way We Talk Can Change the Way We Work: Seven Languages for Transformation.
ERIC Educational Resources Information Center
Kegan, Robert; Lahey, Lisa Laskow
This book proposes the possibility of "extraordinary change" in individuals and organizations and suggests a source of "boundless energy" for bringing those changes into being. That source is freed by removing internal barriers to change within oneself. It provides personal experiences of transformational learning to introduce…
Nature and transformation of dissolved organic matter in treatment wetlands
Barber, L.B.; Leenheer, J.A.; Noyes, T.I.; Stiles, E.A.
2001-01-01
This investigation into the occurrence, character, and transformation of dissolved organic matter (DOM) in treatment wetlands in the western United States shows that (i) the nature of DOM in the source water has a major influence on transformations that occur during treatment, (ii) the climate factors have a secondary effect on transformations, (iii) the wetlands receiving treated wastewater can produce a net increase in DOM, and (iv) the hierarchical analytical approach used in this study can measure the subtle DOM transformations that occur. As wastewater treatment plant effluent passes through treatment wetlands, the DOM undergoes transformation to become more aromatic and oxygenated. Autochthonous sources are contributed to the DOM, the nature of which is governed by the developmental stage of the wetland system as well as vegetation patterns. Concentrations of specific wastewater-derived organic contaminants such as linear alkylbenzene sulfonate, caffeine, and ethylenediaminetetraacetic acid were significantly attenuated by wetland treatment and were not contributed by internal loading.
Converter topologies for common mode voltage reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Fernando
An inverter includes a three-winding transformer, a DC-AC inverter electrically coupled to the first winding of the transformer, a cycloconverter electrically coupled to the second winding of the transformer, and an active filter electrically coupled to the third winding of the transformer. The DC-AC inverter is adapted to convert the input DC waveform to an AC waveform delivered to the transformer at the first winding. The cycloconverter is adapted to convert an AC waveform received at the second winding of the transformer to the output AC waveform having a grid frequency of the AC grid. The active filter is adapted to sink and source power with one or more energy storage devices based on a mismatch in power between the DC source and the AC grid. At least two of the DC-AC inverter, the cycloconverter, or the active filter are electrically coupled via a common reference electrical interconnect.
Discrete frequency infrared microspectroscopy and imaging with a tunable quantum cascade laser
Kole, Matthew R.; Reddy, Rohith K.; Schulmerich, Matthew V.; Gelber, Matthew K.; Bhargava, Rohit
2012-01-01
Fourier-transform infrared imaging (FT-IR) is a well-established modality but requires the acquisition of a spectrum over a large bandwidth, even in cases where only a few spectral features may be of interest. Discrete frequency infrared (DF-IR) methods are now emerging in which a small number of measurements may provide all the analytical information needed. The DF-IR approach is enabled by the development of new sources integrating frequency selection, in particular of tunable, narrow-bandwidth sources with enough power at each wavelength to successfully make absorption measurements. Here, we describe a DF-IR imaging microscope that uses an external cavity quantum cascade laser (QCL) as a source. We present two configurations, one with an uncooled bolometer as a detector and another with a liquid nitrogen cooled Mercury Cadmium Telluride (MCT) detector and compare their performance to a commercial FT-IR imaging instrument. We examine the consequences of the coherent properties of the beam with respect to imaging and compare these observations to simulations. Additionally, we demonstrate that the use of a tunable laser source represents a distinct advantage over broadband sources when using a small aperture (narrower than the wavelength of light) to perform high-quality point mapping. The two advances highlight the potential application areas for these emerging sources in IR microscopy and imaging. PMID:23113653
[A landscape ecological approach for urban non-point source pollution control].
Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing
2005-05-01
Urban non-point source pollution is a new problem that has appeared with the rapid development of urbanization. The particular character of urban land use and the increase of impervious surface area make urban non-point source pollution different from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the practices commonly and effectively applied in controlling urban non-point source pollution, mainly adopting local remediation practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas, and second, adjusting the existing landscape structure and/or adding new landscape elements to form a new landscape pattern, combining landscape planning and management by incorporating BMPs into planning so as to improve urban landscape heterogeneity and control urban non-point source pollution.
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
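A minimal output-error sketch in the spirit described above: simulate a first-order top-oil temperature-rise model over the whole record from candidate parameters and fit the parameters by minimizing the simulated-minus-measured output. The model form, parameter names and initial guesses below are illustrative assumptions, not MIT's exact model.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_top_oil(params, load, dt):
    """First-order model: tau * dtheta/dt = theta_u(load) - theta,
    with ultimate rise theta_u = theta_fl * load**(2*n).  Illustrative form only."""
    tau, theta_fl, n = params
    theta = np.zeros(len(load))
    for k in range(1, len(load)):
        theta_u = theta_fl * load[k] ** (2 * n)
        theta[k] = theta[k - 1] + dt / tau * (theta_u - theta[k - 1])
    return theta

def output_error_fit(load, theta_meas, dt, x0=(180.0, 40.0, 0.9)):
    """Fit (tau, theta_fl, n) by minimizing the output error over the full record."""
    residuals = lambda p: simulate_top_oil(p, load, dt) - theta_meas
    return least_squares(residuals, x0).x
```

The key design point, matching the paper's argument, is that the residual is formed against a simulation driven only by the inputs, rather than against one-step-ahead predictions as in an equation-error (least squares) formulation.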
Image Tiling for Profiling Large Objects
NASA Technical Reports Server (NTRS)
Venkataraman, Ajit; Schock, Harold; Mercer, Carolyn R.
1992-01-01
Three dimensional surface measurements of large objects are required in a variety of industrial processes. The nature of these measurements is changing as optical instruments are beginning to replace conventional contact probes scanned over the objects. A common characteristic of the optical surface profilers is the trade off between measurement accuracy and field of view. In order to measure a large object with high accuracy, multiple views are required. An accurate transformation between the different views is needed to bring about their registration. In this paper, we demonstrate how the transformation parameters can be obtained precisely by choosing control points which lie in the overlapping regions of the images. A good starting point for the transformation parameters is obtained by having a knowledge of the scanner position. The selection of the control points is independent of the object geometry. By successively recording multiple views and obtaining transformations with respect to a single coordinate system, a complete physical model of an object can be obtained. Since all data are in the same coordinate system, they can thus be used for building automatic models for free form surfaces.
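As an illustration of the view-to-view registration step (a standard SVD-based rigid fit between matched 3-D control points, offered as a sketch rather than the authors' exact procedure):

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Least-squares rotation R and translation t such that Q ~= P @ R.T + t,
    from matched control points P, Q of shape (n, 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - cP @ R.T
    return R, t
```

The knowledge of the scanner position mentioned above would supply the starting estimate; the control-point fit then refines it.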
New descriptor for skeletons of planar shapes: the calypter
NASA Astrophysics Data System (ADS)
Pirard, Eric; Nivart, Jean-Francois
1994-05-01
The mathematical definition of the skeleton as the locus of centers of maximal inscribed discs is a nondigitizable one. The idea presented in this paper is to incorporate the skeleton information and the chain-code of the contour into a single descriptor by associating to each point of a contour the center and radius of the maximum inscribed disc tangent at that point. This new descriptor is called calypter. The encoding of a calypter is a three stage algorithm: (1) chain coding of the contour; (2) euclidean distance transformation, (3) climbing on the distance relief from each point of the contour towards the corresponding maximal inscribed disc center. Here we introduce an integer euclidean distance transform called the holodisc distance transform. The major interest of this holodisc transform is to confer 8-connexity to the isolevels of the generated distance relief thereby allowing a climbing algorithm to proceed step by step towards the centers of the maximal inscribed discs. The calypter has a cyclic structure delivering high speed access to the skeleton data. Its potential uses are in high speed euclidean mathematical morphology, shape processing, and analysis.
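The distance-relief stage can be illustrated with scipy.ndimage.distance_transform_edt, whose value at a skeleton point is the radius of the maximal inscribed disc. This is only the generic Euclidean distance transform on a toy shape; the paper's integer holodisc transform and the climbing step are not reproduced here.

```python
import numpy as np
from scipy import ndimage

# Binary shape: a filled disc (illustrative)
yy, xx = np.mgrid[:128, :128]
shape = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2

# Euclidean distance of each interior pixel to the background:
# at a skeleton point this equals the radius of its maximal inscribed disc.
dist = ndimage.distance_transform_edt(shape)
print("largest inscribed-disc radius:", dist.max())
```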
A Data Cleaning Method for Big Trace Data Using Movement Consistency
Tang, Luliang; Zhang, Xia; Li, Qingquan
2018-01-01
Given the popularization of GPS technologies, the massive amounts of spatiotemporal GPS traces collected by vehicles are becoming a new kind of big data source for urban geographic information extraction. The growing volume of the dataset, however, creates processing and management difficulties, while its low quality generates uncertainties when investigating human activities. Based on the error distribution law and position accuracy of GPS data, we propose in this paper a data cleaning method for this kind of spatial big data using movement consistency. First, a trajectory is partitioned into a set of sub-trajectories using movement characteristic points; in this process, GPS points at which the motion status of the vehicle changes from one state to another are regarded as the movement characteristic points. Then, GPS data are cleaned based on the similarities of GPS points and the movement consistency model of the sub-trajectory. The movement consistency model is built using the random sample consensus algorithm, exploiting the high spatial consistency of high-quality GPS data. The proposed method is evaluated in extensive experiments, using GPS trajectories generated by a sample of vehicles over a 7-day period in Wuhan city, China. The results show the effectiveness and efficiency of the proposed method. PMID:29522456
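A minimal sketch of a movement-consistency model for one sub-trajectory: fit a consensus model with RANSAC and flag points far from it as low quality. scikit-learn's RANSACRegressor with its default linear estimator is used for brevity, and the linear model and threshold are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

def clean_subtrajectory(lon, lat, residual_threshold=1e-4):
    """Fit a consistency model lat = f(lon) for one sub-trajectory with RANSAC and
    return a boolean inlier mask; lon and lat are 1-D numpy arrays in degrees."""
    ransac = RANSACRegressor(residual_threshold=residual_threshold)
    ransac.fit(lon.reshape(-1, 1), lat)
    return ransac.inlier_mask_
```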
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.
2013-10-01
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images onto the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After determining the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We use local features to register the iPhone image to the generated range image; the registration process is based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the possible use of the proposed algorithm framework for 3D urban map updating and enhancement.
Effect of point defects and disorder on structural phase transitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toulouse, J.
1997-06-01
Since its beginning in 1986, the object of this project has been Structural Phase Transitions (SPTs) in real as opposed to ideal materials. The first stage of the study has been centered around the role of point defects in SPTs. Our intent was to use the previous knowledge we had acquired in the study of point defects in non-transforming insulators and apply it to the study of point defects in insulators undergoing phase transitions. In non-transforming insulators, point defects in low concentrations marginally affect the bulk properties of the host. It is nevertheless possible, by resonance or relaxation methods, to study the point defects themselves via their local motion. In transforming solids, however, close to a phase transition, atomic motions become correlated over very large distances; there, even point defects far removed from one another can undergo correlated motions which may strongly affect the transition behavior of the host. Near a structural transition, the elastic properties will be most strongly affected, so as to either raise or lower the transition temperature, prevent the transition from taking place altogether, or simply modify its nature and the microstructure or domain structure of the resulting phase. One well-known practical example is calcium-stabilized zirconia, in which the high-temperature cubic phase is stabilized at room temperature with greatly improved mechanical properties.
Registration of opthalmic images using control points
NASA Astrophysics Data System (ADS)
Heneghan, Conor; Maguire, Paul
2003-03-01
A method for registering pairs of digital ophthalmic images of the retina is presented using anatomical features as control points present in both images. The anatomical features chosen are blood vessel crossings and bifurcations. These control points are identified by a combination of local contrast enhancement, and morphological processing. In general, the matching between control points is unknown, however, so an automated algorithm is used to determine the matching pairs of control points in the two images as follows. Using two control points from each image, rigid global transform (RGT) coefficients are calculated for all possible combinations of control point pairs, and the set of RGT coefficients is identified. Once control point pairs are established, registration of two images can be achieved by using linear regression to optimize an RGT, bilinear or second order polynomial global transform. An example of cross-modal image registration using an optical image and a fluorescein angiogram of an eye is presented to illustrate the technique.
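Once matched control points are available, the global transform fit reduces to linear least squares. The sketch below fits the second-order polynomial case (the rigid and bilinear cases follow by dropping terms); function names and the design-matrix layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def poly2_design(pts):
    """Design matrix [1, x, y, xy, x^2, y^2] for a second-order polynomial transform."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_poly2_transform(src, dst):
    """Least-squares second-order polynomial transform mapping src -> dst control points."""
    coef, *_ = np.linalg.lstsq(poly2_design(src), dst, rcond=None)  # (6, 2) coefficients
    return coef

def apply_poly2(coef, pts):
    return poly2_design(pts) @ coef
```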
Direct Linear Transformation Method for Three-Dimensional Cinematography
ERIC Educational Resources Information Center
Shapiro, Robert
1978-01-01
The ability of the Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)
Selective structural source identification
NASA Astrophysics Data System (ADS)
Totaro, Nicolas
2018-04-01
In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of a virtual subsystem can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is demonstrated only on planar structures. However, the final example shows how the method can be applied to a planar structure of complex shape with spot-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution only exhibits the positions of the externally applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution makes it possible to localize the forces due to the coupling between the structure and the stiffeners through the welded points as well as those due to the external forces. This is why this approach is considered here as a selective structural source identification method. It is demonstrated that this approach clearly falls in the same framework as the Force Analysis Technique, the Virtual Fields Method or the 2D spatial Fourier transform. Even if this approach has a lot in common with the latter methods, it has some interesting particularities, such as its low sensitivity to measurement noise.
Non-invertible transformations of differential-difference equations
NASA Astrophysics Data System (ADS)
Garifullin, R. N.; Yamilov, R. I.; Levi, D.
2016-09-01
We discuss aspects of the theory of non-invertible transformations of differential-difference equations and, in particular, the notion of Miura-type transformation. We introduce the concept of a non-Miura-type linearizable transformation and present techniques that allow one to construct simple linearizable transformations and might help one to solve classification problems. This theory is illustrated by the example of a new integrable differential-difference equation depending on five lattice points, which is interesting from the viewpoint of the non-invertible transformation that relates it to an Itoh-Narita-Bogoyavlensky equation.
Štys, Dalibor; Urban, Jan; Vaněk, Jan; Císař, Petr
2011-06-01
We report an objective analysis of the information in the microscopic image of a cell monolayer. The process of transfer of information about the cell by the microscope is analyzed in terms of the classical Shannon information transfer scheme. The information source is the biological object, the information transfer channel is the whole microscope including the camera chip. The destination is the model of the biological system. The information contribution is analyzed as the information carried by a point relative to the overall information in the image. Subsequently we obtain an information reflection of the biological object. This is transformed into the biological model which, in information terminology, is the destination. This, we propose, should be constructed as state transitions in individual cells modulated by information bonds between the cells. We show examples of detected cell states in a multidimensional state space. This space is reflected as a colour-channel intensity phenomenological state space. We have also observed information bonds and show examples of them.
Stys, Dalibor; Urban, Jan; Vanek, Jan; Císar, Petr
2010-07-01
We report an objective analysis of the information in the microscopic image of a cell monolayer. The process of transfer of information about the cell by the microscope is analyzed in terms of the classical Shannon information transfer scheme. The information source is the biological object, the information transfer channel is the whole microscope including the camera chip. The destination is the model of the biological system. The information contribution is analyzed as the information carried by a point relative to the overall information in the image. Subsequently we obtain an information reflection of the biological object. This is transformed into the biological model which, in information terminology, is the destination. This, we propose, should be constructed as state transitions in individual cells modulated by information bonds between the cells. We show examples of detected cell states in a multidimensional state space, reflected as a colour-channel intensity phenomenological state space. We have also observed information bonds and show examples of them. Copyright 2010 Elsevier Ltd. All rights reserved.
Recombinational inactivation of the gene encoding nitrate reductase in Aspergillus parasiticus.
Wu, T S; Linz, J E
1993-01-01
Functional disruption of the gene encoding nitrate reductase (niaD) in Aspergillus parasiticus was conducted by two strategies, one-step gene replacement and integrative disruption. Plasmid pPN-1, in which an internal DNA fragment of the niaD gene was replaced by a functional gene encoding orotidine monophosphate decarboxylase (pyrG), was constructed. Plasmid pPN-1 was introduced in linear form into A. parasiticus CS10 (ver-1 wh-1 pyrG) by transformation. Approximately 25% of the uridine prototrophic transformants (pyrG+) were chlorate resistant (Chlr), demonstrating their inability to utilize nitrate as a sole nitrogen source. The genetic block in nitrate utilization was confirmed to occur in the niaD gene by the absence of growth of the A. parasiticus CS10 transformants on medium containing nitrate as the sole nitrogen source and their ability to grow on several alternative nitrogen sources. Southern hybridization analysis of Chlr transformants demonstrated that the resident niaD locus was replaced by the nonfunctional allele in pPN-1. To generate an integrative disruption vector (pSKPYRG), an internal fragment of the niaD gene was subcloned into a plasmid containing the pyrG gene as a selectable marker. Circular pSKPYRG was transformed into A. parasiticus CS10. Chlr pyrG+ transformants were screened for nitrate utilization and by Southern hybridization analysis. Integrative disruption of the genomic niaD gene occurred in less than 2% of the transformants. Three gene replacement disruption transformants and two integrative disruption transformants were tested for mitotic stability after growth under nonselective conditions. All five transformants were found to stably retain the Chlr phenotype after growth on nonselective medium. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:8215371
NASA Astrophysics Data System (ADS)
Lu, Y.; Li, C. F.
2017-12-01
The Arctic Ocean remains at the forefront of geological exploration. Here we investigate its deep geological structures and geodynamics on the basis of gravity, magnetic and bathymetric data. We estimate Curie-point depth and lithospheric effective elastic thickness to understand deep geothermal structures and Arctic lithospheric evolution. A fractal exponent of 3.0 for the 3D magnetization model is used in the Curie-point depth inversion. The result shows that Curie-point depths are between 5 and 50 km. Curie depths are mostly small near the active mid-ocean ridges, corresponding well to high heat flow and active shallow volcanism. Large Curie depths are distributed mainly at the continental marginal seas around the Arctic Ocean. We present a map of effective elastic thickness (Te) of the lithosphere using a multitaper coherence technique; Te values are between 5 and 110 km. Te primarily depends on geothermal gradient and composition, as well as structures in the lithosphere. We find that Te and Curie-point depths are often correlated. Large Te values are distributed mainly in the continental region and small Te values in the oceanic region. The Alpha-Mendeleyev Ridge (AMR) and the Svalbard Archipelago (SA) are symmetric about the mid-ocean ridge. AMR and SA were formed before an early stage of Eurasian Basin spreading, and they are considered conjugate large igneous provinces, which show small Te and small Curie-point depths. The Novaya Zemlya region has large Curie-point depths and small Te. We consider that faulting and fracturing near the Novaya Zemlya orogenic belt cause the small Te. A series of transform faults connect the Arctic mid-ocean ridge with the North Atlantic mid-ocean ridge. We see large Te near the transform faults, but small Curie-point depths. We consider that although the temperature near the transform faults is high, the lithosphere there is mechanically strengthened.
Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint
NASA Astrophysics Data System (ADS)
Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.
2017-09-01
For obtaining full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast and coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: voxelization of the point cloud, approximation of planar patches, matching of corresponding patches, and estimation of the transformation parameters. In the voxelization step, the point cloud of each scan is organized in a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel by an approximating plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied for matching the corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar patches in order to build a coordinate frame from their normal vectors and their intersection point. The transformation parameters between scans are calculated from these two coordinate frames. The set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate for estimating the correct transformation parameters. The experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for the fast orientation between scans, our proposed method achieves a registration error of less than around 2 degrees on the testing datasets and is much more efficient than the classical baseline methods.
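The plane approximation and frame construction in the method above can be sketched in a few lines; the SVD-based plane fit, the flatness measure, and the QR orthonormalization of the three normals are our own assumptions about how such steps are commonly implemented, not the authors' code.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to the points of one voxel.  Returns (centroid, unit normal,
    flatness); a small flatness value indicates a planar patch."""
    c = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - c, full_matrices=False)
    return c, vt[-1], s[-1] / (s.sum() + 1e-12)

def frame_from_three_patches(planes):
    """Build a coordinate frame from three roughly orthogonal planar patches:
    the origin is the intersection point of the three planes, and the axes are
    the orthonormalized patch normals.  `planes` is a list of
    (centroid, normal, flatness) tuples as returned by fit_plane."""
    normals = np.array([n for _, n, _ in planes])        # rows: n_i
    offsets = np.array([n @ c for c, n, _ in planes])    # plane i: n_i . x = d_i
    origin = np.linalg.solve(normals, offsets)           # three-plane intersection
    axes, _ = np.linalg.qr(normals.T)                    # orthonormalize the normals
    return origin, axes
```

Given frames (o1, Q1) and (o2, Q2) from two scans, a candidate rotation and translation follow as R = Q2 @ Q1.T and t = o2 - R @ o1, which can then be scored by the number of coplanar patches it aligns.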
A fast discrete S-transform for biomedical signal processing.
Brown, Robert A; Frayne, Richard
2008-01-01
Determining the frequency content of a signal is a basic operation in signal and image processing. The S-transform provides both the true frequency and globally referenced phase measurements characteristic of the Fourier transform and also generates local spectra, as does the wavelet transform. Due to this combination, the S-transform has been successfully demonstrated in a variety of biomedical signal and image processing tasks. However, the computational demands of the S-transform have limited its application in medicine to this point in time. This abstract introduces the fast S-transform, a more efficient discrete implementation of the classic S-transform with dramatically reduced computational requirements.
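For orientation, the baseline discrete S-transform that the fast version accelerates can be written directly from its frequency-domain definition. The sketch below is a straightforward reference implementation, not the fast algorithm introduced in the abstract.

```python
import numpy as np

def s_transform(x):
    """Reference discrete S-transform (Stockwell transform) of a 1-D signal:
    each frequency voice is the inverse FFT of the shifted spectrum multiplied
    by a frequency-domain Gaussian window.  Returns S[frequency, time]."""
    N = len(x)
    H = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                    # signed frequency indices
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = np.mean(x)                         # zero-frequency (mean) voice
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2 * np.pi**2 * m**2 / n**2)      # Gaussian window for voice n
        S[n, :] = np.fft.ifft(np.roll(H, -n) * gauss)    # shifted spectrum * window
    return S
```

This reference version costs one FFT-sized operation per frequency voice; the fast S-transform of the abstract reduces that cost, with details specific to that paper.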
NASA Astrophysics Data System (ADS)
Sorokina, N. P.; Kozlov, D. N.; Kuznetsova, I. V.
2013-10-01
The results of experimental studies of the postagrogenic transformation of loamy soddy-podzolic soils on the southern slope of the Klin-Dmitrov Moraine Ridge are discussed. A chronosequence of soils (arable soils (cropland)-soils under fallow with meadow vegetation-soils under secondary forests of different ages-soils under a conventionally initial native forest) was examined, and the stages of the postagrogenic transformation of the automorphic soddy-podzolic soils were identified. The differentiation of the former plow horizon into the A1 and A1A2 horizons (according to the differences in the humus content, texture, and acidity) served as the major criterion of the soil transformation. A stage of textural differentiation with clay depletion from the uppermost layer was identified in the soils of the 20- to 60-year-old fallows. The specificity of the postagrogenic transformation of the soils on the slopes was demonstrated. From the methodological point of view, it was important to differentiate between the chronosequences of automorphic and semihydromorphic soils of the leveled interfluves and the soils of the slopes. For this purpose, a series of maps reflecting the history of the land use and the soil cover pattern was analyzed. The cartographic model included the attribute data of the soil surveys, the cartographic sources (a series of historical maps of the land use, topographic maps, remote sensing data, and a digital elevation model), and two base maps: (a) the integral map of the land use and (b) the map of the soil combinations with the separation of the zonal automorphic, semihydromorphic, and erosional soil combinations. This scheme served as a matrix for the organization and analysis of the already available and new materials.
Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M
2010-03-15
A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
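The core of such a point-to-point matching step is a nearest-spectrum search over a user-selected spectral window followed by subtraction. The sketch below shows that idea under our own assumptions (Euclidean distance, one matched reference per sample spectrum), which may differ from the published algorithm.

```python
import numpy as np

def background_correct(sample, reference, window):
    """For each sample spectrum (rows of `sample`, shape Ns x P), pick the
    reference spectrum (rows of `reference`, shape Nr x P) that is closest in
    the selected spectral window (index array `window`) and subtract it."""
    diffs = sample[:, None, window] - reference[None, :, window]
    dist = np.linalg.norm(diffs, axis=2)        # Ns x Nr point-to-point distances
    best = dist.argmin(axis=1)                  # matched reference for each spectrum
    return sample - reference[best], best
```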
MHD stagnation-point flow over a nonlinearly shrinking sheet with suction effect
NASA Astrophysics Data System (ADS)
Awaludin, Izyan Syazana; Ahmad, Rokiah; Ishak, Anuar
2018-04-01
The stagnation-point flow over a shrinking permeable sheet in the presence of a magnetic field is investigated numerically in this paper. The system of partial differential equations is transformed into a nonlinear ordinary differential equation using a similarity transformation and solved numerically using the boundary value problem solver bvp4c in Matlab. It is found that dual solutions exist for a certain range of the shrinking strength.
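To illustrate the numerical approach only (not the paper's exact equations or parameter values), the sketch below solves a Hiemenz-type MHD stagnation-point similarity equation with suction and a shrinking wall using SciPy's solve_bvp, the Python counterpart of Matlab's bvp4c. The equation form and the parameters M, s, lam and eta_inf are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameters: magnetic parameter M, suction s, shrinking parameter lam < 0.
M, s, lam = 0.5, 2.0, -0.5
eta_inf = 10.0                       # numerical stand-in for eta -> infinity

def ode(eta, y):
    # y = [f, f', f''];  assumed similarity equation:
    # f''' + f f'' - f'^2 + 1 + M (1 - f') = 0
    f, fp, fpp = y
    return np.vstack([fp, fpp, -f * fpp + fp**2 - 1.0 - M * (1.0 - fp)])

def bc(ya, yb):
    # f(0) = s (suction), f'(0) = lam (shrinking wall), f'(inf) = 1
    return np.array([ya[0] - s, ya[1] - lam, yb[1] - 1.0])

eta = np.linspace(0.0, eta_inf, 200)
y_guess = np.zeros((3, eta.size))
y_guess[1] = 1.0 - np.exp(-eta)      # guess already satisfying the far-field condition
sol = solve_bvp(ode, bc, eta, y_guess)
print("converged:", sol.success, " f''(0) =", sol.sol(0.0)[2])
```

Dual solutions, where they exist, are typically found by restarting the solver from different initial guesses.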
NASA Astrophysics Data System (ADS)
Zheng, Donghui; Chen, Lei; Li, Jinpeng; Sun, Qinyuan; Zhu, Wenhua; Anderson, James; Zhao, Jian; Schülzgen, Axel
2018-03-01
Circular carrier squeezing interferometry (CCSI) is proposed and applied to suppress phase-shift error in a simultaneous phase-shifting point-diffraction interferometer (SPSPDI). By introducing a defocus, four phase-shifting point-diffraction interferograms with a circular carrier are acquired and then converted into linear-carrier interferograms by a coordinate transform. Rearranging the transformed interferograms into a spatial-temporal fringe (STF) separates the error lobe from the phase lobe in the Fourier spectrum of the STF; filtering the phase lobe to calculate the extended phase and applying the corresponding inverse coordinate transform then exactly retrieves the initial phase. Both simulations and experiments validate the ability of CCSI to suppress the ripple error generated by the phase-shift error. Compared with carrier squeezing interferometry (CSI), CCSI is effective in situations where a linear carrier is difficult to introduce, with the added benefit of eliminating retrace error.
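The coordinate transform that turns circular fringes into approximately straight ones is essentially a Cartesian-to-polar resampling. The sketch below shows only that step, under our own assumptions about grid sizes and interpolation; it is not the full CCSI processing chain.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def circular_to_linear_carrier(interferogram, center, n_r=256, n_theta=1024):
    """Resample an interferogram with a circular carrier onto a (radius, angle)
    grid, where concentric fringes become nearly parallel straight fringes
    along the radius axis."""
    h, w = interferogram.shape
    cy, cx = center
    r = np.linspace(0.0, min(h, w) / 2 - 1, n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    rows = cy + R * np.sin(T)
    cols = cx + R * np.cos(T)
    # Bilinear interpolation at the transformed coordinates.
    return map_coordinates(interferogram, [rows, cols], order=1, mode="nearest")
```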
Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor
NASA Astrophysics Data System (ADS)
Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul
2017-05-01
Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point-cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes has to be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point-cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
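A depth-layer hologram of the kind mentioned above can be assembled by slicing the depth map into discrete layers and FFT-propagating each layer to the hologram plane. The sketch below uses the angular-spectrum method; the layer assignment, the random initial phase and all parameter names are our own assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z: one FFT, a transfer-function
    multiplication, one inverse FFT (evanescent components are discarded)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def depth_layer_hologram(intensity, depth_map, layer_depths, wavelength, dx, z0):
    """Assign each pixel to its nearest depth layer, propagate every layer to
    the hologram plane at distance z0, and sum the complex wavefronts."""
    layer_depths = np.asarray(layer_depths)
    idx = np.abs(depth_map[..., None] - layer_depths).argmin(axis=-1)
    holo = np.zeros_like(intensity, dtype=complex)
    for i, z in enumerate(layer_depths):
        mask = (idx == i)
        # A random initial phase spreads the light and reduces artefacts in the sum.
        layer = intensity * mask * np.exp(2j * np.pi * np.random.rand(*intensity.shape))
        holo += angular_spectrum_propagate(layer, wavelength, dx, z0 + z)
    return holo
```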
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom-made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as an MR-compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics, mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.
Coherent pulses in the diffusive transport of charged particles
NASA Technical Reports Server (NTRS)
Kota, J.
1994-01-01
We present exact solutions to the diffusive transport of charged particles following impulsive injection for a simple model of scattering. A modified, two-parameter relaxation-time model is considered that simulates the low rate of scattering through perpendicular pitch angle. Scattering is taken to be isotropic within each of the forward- and backward-pointing hemispheres, respectively, but, at the same time, a reduced rate of scattering is assumed from one hemisphere to the other. By applying a technique of Fourier and Laplace transforms, the inverse transformation can be performed and exact solutions can be reached. By contrast with the first, and so far only, exact solutions of Federov and Shakov, this wider class of solutions gives rise to coherent pulses. The present work addresses omnidirectional densities for isotropic injection from an instantaneous and localized source. The dispersion relations are briefly discussed. We find, for this particular model, that two diffusive modes exist up to a certain limiting wavenumber. The corresponding eigenvalues are real at the lowest wavenumbers. Complex eigenvalues, which are responsible for the coherent pulses, appear at higher wavenumbers.
Localization of virtual sound at 4 Gz.
Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L
2005-02-01
Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.
Penrose junction conditions extended: Impulsive waves with gyratons
NASA Astrophysics Data System (ADS)
Podolský, J.; Švarc, R.; Steinbauer, R.; Sämann, C.
2017-09-01
We generalize the classical junction conditions for constructing impulsive gravitational waves by the Penrose "cut and paste" method. Specifically, we study nonexpanding impulses which propagate in spaces of constant curvature with any value of the cosmological constant (that is, Minkowski, de Sitter, or anti-de Sitter universes) when additional off-diagonal metric components are present. Such components encode a possible angular momentum of the ultrarelativistic source of the impulsive wave—the so-called gyraton. We explicitly derive and analyze a specific transformation that relates the distributional form of the metric to a new form which is (Lipschitz) continuous. Such a transformation automatically implies an extended version of the Penrose junction conditions. It turns out that the conditions for identifying points of the background spacetime across the impulse are the same as in the original Penrose cut and paste construction, but their derivatives now directly represent the influence of the gyraton on the axial motion of test particles. Our results apply both for vacuum and nonvacuum solutions of Einstein's field equations and can also be extended to other theories of gravity.
Ray propagation in oblate atmospheres. [for Jupiter
NASA Technical Reports Server (NTRS)
Hubbard, W. B.
1976-01-01
Phinney and Anderson's (1968) exact theory for the inversion of radio-occultation data for planetary atmospheres breaks down seriously when applied to occultations by oblate atmospheres because of departures from Bouguer's law. It has been proposed that this breakdown can be overcome by transforming the theory to a local spherical symmetry which osculates a ray's point of closest approach. The accuracy of this transformation procedure is assessed by evaluating the size of terms which are intrinsic to an oblate atmosphere and which are not eliminated by a local spherical approximation. The departures from Bouguer's law are analyzed, and it is shown that in the lowest-order deviation from that law, the plane of refraction is defined by the normal to the atmosphere at closest approach. In the next order, it is found that the oblateness of the atmosphere 'warps' the ray path out of a single plane, but the effect appears to be negligible for most purposes. It is concluded that there seems to be no source of serious error in making an approximation of local spherical symmetry with the refraction plane defined by the normal at closest approach.
Prognostic characteristics of the lowest-mode internal waves in the Sea of Okhotsk
NASA Astrophysics Data System (ADS)
Kurkin, Andrey; Kurkina, Oxana; Zaytsev, Andrey; Rybin, Artem; Talipova, Tatiana
2017-04-01
The nonlinear dynamics of short-period internal waves on ocean shelves is well described by generalized nonlinear evolutionary models of Korteweg-de Vries type. Parameters of these models, such as the long-wave propagation speed and the nonlinear and dispersive coefficients, can be calculated from hydrological data (sea-water density stratification), and therefore have geographical and seasonal variations. The internal wave parameters for the basin of the Sea of Okhotsk are computed on the basis of a recent version of the hydrological data source GDEM V3.0. Geographical and seasonal variability of the internal wave characteristics is investigated. It is shown that annually or seasonally averaged data can be used for the linear parameters. The nonlinear parameters are more sensitive to temporal averaging of the hydrological data, and detailed data are preferable. The zones where the nonlinear parameters change sign (so-called "turning points") are identified. Possible internal waveforms appearing in the process of internal tide transformation, including solitary waves changing polarity, are simulated for the hydrological conditions of the Sea of Okhotsk shelf to demonstrate different scenarios of internal wave adjustment, transformation, refraction and cylindrical divergence.
Fast precalculated triangular mesh algorithm for 3D binary computer-generated holograms.
Yang, Fan; Kaczorowski, Andrzej; Wilkinson, Tim D
2014-12-10
A new method for constructing computer-generated holograms using a precalculated triangular mesh is presented. The speed of calculation can be increased dramatically by exploiting both the precalculated base triangle and GPU parallel computing. Unlike algorithms using point-based sources, this method can reconstruct a more vivid 3D object instead of a "hollow image." In addition, there is no need to do a fast Fourier transform for each 3D element every time. A ferroelectric liquid crystal spatial light modulator is used to display the binary hologram within our experiment and the hologram of a base right triangle is produced by utilizing just a one-step Fourier transform in the 2D case, which can be expanded to the 3D case by multiplying by a suitable Fresnel phase plane. All 3D holograms generated in this paper are based on Fresnel propagation; thus, the Fresnel plane is treated as a vital element in producing the hologram. A GeForce GTX 770 graphics card with 2 GB memory is used to achieve parallel computing.
ERIC Educational Resources Information Center
Fernández-Balboa, Juan-Miguel
2017-01-01
The purpose of this paper is to analyze the concept of "transformative pedagogy" (TP) in physical education and sport pedagogy (PESP) and research in order to provide an alternative perspective on freedom, justice and the limits of transformation. Although some of the limits of TP have already been pointed out in the literature, such…
The 2.5-dimensional equivalent sources method for directly exposed and shielded urban canyons.
Hornikx, Maarten; Forssén, Jens
2007-11-01
When a domain in outdoor acoustics is invariant in one direction, an inverse Fourier transform can be used to transform solutions of the two-dimensional Helmholtz equation to a solution of the three-dimensional Helmholtz equation for arbitrary source and observer positions, thereby reducing the computational costs. This previously published approach [D. Duhamel, J. Sound Vib. 197, 547-571 (1996)] is called a 2.5-dimensional method and has here been extended to the urban geometry of parallel canyons, thereby using the equivalent sources method to generate the two-dimensional solutions. No atmospheric effects are considered. To keep the error arising from the transform small, two-dimensional solutions with a very fine frequency resolution are necessary due to the multiple reflections in the canyons. Using the transform, the solution for an incoherent line source can be obtained much more efficiently than by using the three-dimensional solution. It is shown that the use of a coherent line source for shielded urban canyon observer positions leads mostly to an overprediction of levels and can yield erroneous results for noise abatement schemes. Moreover, the importance of multiple facade reflections in shielded urban areas is emphasized by vehicle pass-by calculations, where cases with absorptive and diffusive surfaces have been modeled.
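The 2.5-dimensional idea referenced above assembles the 3-D (point-source) field from a family of 2-D (line-source) solutions by an inverse Fourier transform over the out-of-plane wavenumber. The sketch below is a generic version of that synthesis under our own simplifications (propagating components only, a simple rectangle-rule quadrature); `solve_2d` is a placeholder for whatever 2-D solver is used, e.g. the equivalent sources method.

```python
import numpy as np

def p3d_from_2d(solve_2d, k, z, n_kz=512, kz_frac=0.999):
    """Synthesize the 3-D pressure at out-of-plane offset z from 2-D solutions.

    solve_2d(k_eff) -- returns the complex 2-D pressure at the receiver for
                       in-plane wavenumber k_eff (placeholder for a 2-D solver)
    k               -- free-field wavenumber at the frequency of interest
    """
    kz = np.linspace(-kz_frac * k, kz_frac * k, n_kz)   # propagating components only
    k_eff = np.sqrt(k**2 - kz**2)                       # reduced in-plane wavenumber
    p2d = np.array([solve_2d(ke) for ke in k_eff])
    dkz = kz[1] - kz[0]
    # Inverse Fourier transform over kz: p3D = (1/2pi) * sum p2D(k_eff) e^{i kz z} dkz
    return (p2d * np.exp(1j * kz * z)).sum() * dkz / (2 * np.pi)
```

The very fine frequency resolution mentioned in the abstract enters through the density of the kz sampling needed when the 2-D solutions contain long multiple-reflection tails.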
Self-Consistent Sources for Integrable Equations Via Deformations of Binary Darboux Transformations
NASA Astrophysics Data System (ADS)
Chvartatskyi, Oleksandr; Dimakis, Aristophanes; Müller-Hoissen, Folkert
2016-08-01
We reveal the origin and structure of self-consistent source extensions of integrable equations from the perspective of binary Darboux transformations. They arise via a deformation of the potential that is central in this method. As examples, we obtain in particular matrix versions of self-consistent source extensions of the KdV, Boussinesq, sine-Gordon, nonlinear Schrödinger, KP, Davey-Stewartson, two-dimensional Toda lattice and discrete KP equation. We also recover a (2+1)-dimensional version of the Yajima-Oikawa system from a deformation of the pKP hierarchy. By construction, these systems are accompanied by a hetero binary Darboux transformation, which generates solutions of such a system from a solution of the source-free system and additionally solutions of an associated linear system and its adjoint. The essence of all this is encoded in universal equations in the framework of bidifferential calculus.
Inferring Models of Bacterial Dynamics toward Point Sources
Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve
2015-01-01
Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373
Lombari, P; Ercolano, E; El Alaoui, H; Chiurazzi, M
2003-04-01
We describe herein a simple and efficient transformation procedure for the production of transgenic Lotus japonicus plants. In this new procedure, dedifferentiated root explants, used as starting material, are the source of a large number of cells that are competent for the regeneration procedure, with a high susceptibility to Agrobacterium infection. The application of this protocol resulted in a tenfold increase in the number of transformants produced by a single plant in comparison to the widely used hypocotyl transformation procedure. Furthermore, our procedure allowed the use of intact plants stored for a long time at 4 degrees C, thus providing a potential continuous supply of explants for transformation experiments. The overall time of incubation under tissue culture conditions required to obtain a plant transferable into soil is 4 months. The transgenic nature of the transformants was demonstrated by the detection of beta-glucuronidase (GUS) activity in the primary transformants and by molecular analysis. Stable transformation was indicated by Mendelian segregation of the hygromycin selectable marker and of the gusA activity after selfing of the transgenic plants.
Moranda, Arianna
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities. PMID:29270328
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities.
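The PCA screening stage described in this abstract can be sketched as follows; the standardization, the number of components, and the way candidate point sources are flagged from extreme component scores are our own illustrative choices and are not the published ratio-matching procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_source_screen(concentrations, n_components=8):
    """PCA of a sediment-chemistry matrix (sampling points x chemicals).
    Returns component scores, loadings, and for each component the sampling
    point with the most extreme score (a candidate point source)."""
    X = (concentrations - concentrations.mean(axis=0)) / concentrations.std(axis=0)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)        # sampling points in component space
    loadings = pca.components_           # chemical signature of each component
    candidates = np.abs(scores).argmax(axis=0)
    return scores, loadings, candidates
```

A subsequent ratio-matching step would then compare concentration ratios at the candidate points with those of the suspected sources.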
NASA Astrophysics Data System (ADS)
Voitovich, A. P.; Kalinov, V. S.; Stupak, A. P.; Runets, L. P.
2015-03-01
Isosbestic and isoemission points are recorded in the combined absorption and luminescence spectra of two types of radiation defects involved in complex processes consisting of several simultaneous parallel and sequential reactions. These points are observed if a constant sum of two terms is conserved, each term being the product of the concentration of the corresponding defect and a characteristic integral coefficient associated with it. The complicated processes involved in the transformation of radiation defects in lithium fluoride are studied using these points. It is found that the ratio of the changes in the concentrations of one of the components and the reaction product remains constant in the course of several simultaneous reactions.
Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...
Annoyance by transportation noise: The effects of source identity and tonal components.
White, Kim; Bronkhorst, Adelbert W; Meeter, Martijn
2017-05-01
Aircraft noise is consistently rated as more annoying than noise from other sources with similar intensity. In three experiments, it was investigated whether this penalty is due to the source identity of the noise. In the first experiment, four samples were played to participants engaged in a working memory task: road traffic noise, an Airbus 320 flyover, and unidentifiable, transformed versions of these samples containing the same spectral content and envelope. Original, identifiable samples were rated as more annoying than the transformed samples. A second experiment tested whether these results were due to the absence of tonal components in the transformed samples. This was partly the case: an additional sample, created from the A320 flyover by filtering out major tonal components, was rated as less annoying than the original A320 sample, but as more annoying than the transformed sample. In a third experiment, participants either received full disclosure of the generation of the samples or no information to identify the transformed samples. The transformed sample was rated as most annoying when the A320 identity was disclosed, but as least annoying when it was not. Therefore, it was concluded that annoyance is influenced by both identifiability and the presence of tonal components.
Siebers, Nina; Martius, Christopher; Eckhardt, Kai-Uwe; Garcia, Marcos V. B.; Leinweber, Peter; Amelung, Wulf
2015-01-01
The impact of termites on nutrient cycling and tropical soil formation depends on their feeding habits and related material transformation. The identification of food sources, however, is difficult, because they are variable and changed by termite activity and nest construction. Here, we related the sources and alteration of organic matter in nests from seven different termite genera and feeding habits in the Terra Firme rainforests to the properties of potential food sources soil, wood, and microepiphytes. Chemical analyses comprised isotopic composition of C and N, cellulosic (CPS), non-cellulosic (NCPS), and N-containing saccharides, and molecular composition screening using pyrolysis-field ionization mass spectrometry (Py-FIMS). The isotopic analysis revealed higher soil δ13C (-27.4‰) and δ15N (6.6‰) values in nests of wood feeding Nasutitermes and Cornitermes than in wood samples (δ13C = -29.1‰, δ15N = 3.4‰), reflecting stable-isotope enrichment with organic matter alterations during or after nest construction. This result was confirmed by elevated NCPS:CPS ratios, indicating a preferential cellulose decomposition in the nests. High portions of muramic acid (MurAc) pointed to the participation of bacteria in the transformation processes. Non-metric multidimensional scaling (NMDS) revealed increasing geophagy in the sequence Termes < Embiratermes < Anoplotermes and increasing xylophagy for Cornitermes < Nasutitermes, and that the nest material of Constrictotermes was similar to the microepiphytes sample, confirming the report that Constrictotermes belongs to the microepiphyte-feeders. We therewith document that nest chemistry of rainforest termites shows variations and evidence of modification by microbial processes, but nevertheless it primarily reflects the trophic niches of the constructors. PMID:25909987
Siebers, Nina; Martius, Christopher; Eckhardt, Kai-Uwe; Garcia, Marcos V B; Leinweber, Peter; Amelung, Wulf
2015-01-01
The impact of termites on nutrient cycling and tropical soil formation depends on their feeding habits and related material transformation. The identification of food sources, however, is difficult, because they are variable and changed by termite activity and nest construction. Here, we related the sources and alteration of organic matter in nests from seven different termite genera and feeding habits in the Terra Firme rainforests to the properties of potential food sources soil, wood, and microepiphytes. Chemical analyses comprised isotopic composition of C and N, cellulosic (CPS), non-cellulosic (NCPS), and N-containing saccharides, and molecular composition screening using pyrolysis-field ionization mass spectrometry (Py-FIMS). The isotopic analysis revealed higher soil δ13C (-27.4‰) and δ15N (6.6‰) values in nests of wood-feeding Nasutitermes and Cornitermes than in wood samples (δ13C = -29.1‰, δ15N = 3.4‰), reflecting stable-isotope enrichment with organic matter alterations during or after nest construction. This result was confirmed by elevated NCPS:CPS ratios, indicating a preferential cellulose decomposition in the nests. High portions of muramic acid (MurAc) pointed to the participation of bacteria in the transformation processes. Non-metric multidimensional scaling (NMDS) revealed increasing geophagy in the sequence Termes < Embiratermes < Anoplotermes and increasing xylophagy for Cornitermes < Nasutitermes, and that the nest material of Constrictotermes was similar to the microepiphytes sample, confirming the report that Constrictotermes belongs to the microepiphyte-feeders. We therewith document that nest chemistry of rainforest termites shows variations and evidence of modification by microbial processes, but nevertheless it primarily reflects the trophic niches of the constructors.
Lin, Chitsan; Liou, Naiwei; Chang, Pao-Erh; Yang, Jen-Chin; Sun, Endy
2007-04-01
Although most coke oven research has focused on the emission of polycyclic aromatic hydrocarbons, well-known carcinogens, little has been done on the emission of volatile organic compounds, some of which are also thought to be hazardous to workers and the environment. To profile coke oven gas (COG) emissions, we set up an open-path Fourier transform infrared (OP-FTIR) system on top of a battery of coke ovens at a steel mill located in southern Taiwan and monitored average emissions in a coke processing area for 16.5 hr. Nine COGs were identified, including ammonia, CO, methane, ethane, ethylene, acetylene, propylene, cyclohexane, and o-xylene. Time series plots indicated that the type of pollutants differed over time, suggesting that different emission sources (e.g., coke pushing, quench tower, etc.) were involved at different times over the study period. This observation was confirmed by the low cross-correlation coefficients of the COGs. It was also found that, with the help of meteorological analysis, the data collected by the OP-FTIR system could be analyzed effectively to characterize differences in the location of sources. Although traditional single-point sampling of emissions, which involves sampling various sources in a coke processing area at several different times, provides a credible emission profile, our findings strongly suggest that it is not nearly as efficient or as cost-effective as the continuous line-averaged method used in this study. This method would make it easier and cheaper for engineers and health risk assessors to identify and control fugitive volatile organic compound emissions and to improve environmental health.
Optical properties of honeycomb photonic structures
NASA Astrophysics Data System (ADS)
Sinelnik, Artem D.; Rybin, Mikhail V.; Lukashenko, Stanislav Y.; Limonov, Mikhail F.; Samusev, Kirill B.
2017-06-01
We study, theoretically and experimentally, optical properties of different types of honeycomb photonic structures, also known as "photonic graphene." First, we employ the two-photon polymerization method to fabricate the honeycomb structures. In the experiment, we observe a strong diffraction from a finite number of elements, thus providing a unique tool to determine the exact number of scattering elements in the structure with the naked eye. Next, we study theoretically the transmission spectra of both honeycomb single-layer and two-dimensional (2D) structures of parallel dielectric circular rods. As the dielectric constant of the rod material ɛ increases, we reveal that a 2D photonic graphene structure transforms into a metamaterial when the lowest TE01 Mie gap opens up below the lowest Bragg band gap. We also observe two Dirac points in the band structure of 2D photonic graphene at the K point of the Brillouin zone and demonstrate a manifestation of Dirac lensing for the TM polarization: the 2D photonic graphene layer converts a wave from a point source into a beam with flat phase surfaces at the Dirac frequency.
Computer measurement and representation of the heart in two and three dimensions
NASA Technical Reports Server (NTRS)
Rasmussen, D.
1976-01-01
Methods for the measurement and display by minicomputer of cardiac images obtained from fluoroscopy to permit an accurate assessment of functional changes are discussed. Heart contours and discrete points can be digitized automatically or manually, with the recorded image in a video, cine, or print format. As each frame is digitized it is assigned a code name identifying the data source, experiment, run, view, and frame, and the images are filed for future reference in any sequence. Two views taken at the same point in the heart cycle are used to compute the spatial position of the ventricle apex and the midpoint of the aortic valve. The remainder of the points on the chamber border are corrected for the linear distortion of the X-rays by projection to a plane containing the chord between the apex and the aortic valve center and oriented so that lines perpendicular to the chord are parallel to the image intensifier face. The image of the chamber surface is obtained by generating circular cross sections with diameters perpendicular to the major chord. The transformed two- and three-dimensional imagery can be displayed in either static or animated form using a graphics terminal.
An Open Source Tool for Game Theoretic Health Data De-Identification.
Prasser, Fabian; Gaupp, James; Wan, Zhiyu; Xia, Weiyi; Vorobeychik, Yevgeniy; Kantarcioglu, Murat; Kuhn, Klaus; Malin, Brad
2017-01-01
Biomedical data continues to grow in quantity and quality, creating new opportunities for research and data-driven applications. To realize these activities at scale, data must be shared beyond its initial point of collection. To maintain privacy, healthcare organizations often de-identify data, but they assume worst-case adversaries, inducing high levels of data corruption. Recently, game theory has been proposed to account for the incentives of data publishers and recipients (who attempt to re-identify patients), but this perspective has been more hypothetical than practical. In this paper, we report on a new game theoretic data publication strategy and its integration into the open source software ARX. We evaluate our implementation with an analysis on the relationship between data transformation, utility, and efficiency for over 30,000 demographic records drawn from the U.S. Census Bureau. The results indicate that our implementation is scalable and can be combined with various data privacy risk and quality measures.
Hawaii Clean Energy Initiative 2008-2018: Celebrating 10 Years of Success
DOE Office of Scientific and Technical Information (OSTI.GOV)
Launched in January 2008, the Hawaii Clean Energy Initiative (HCEI) set out to transform Hawaii into a world model for energy independence and sustainability. With its leading-edge vision to transition to a Hawaii-powered clean energy economy within a single generation, HCEI established the most aggressive clean energy goals in the nation. Ten years after its launch, HCEI has significantly outdistanced the lofty targets established as Hawaii embarked on its ambitious quest for energy independence. The state now generates 27 percent of its electricity sales from clean energy sources like wind and solar, placing it 12 percentage points ahead of HCEI's original 2015 RPS target of 15 percent. This brochure highlights some of HCEI's key accomplishments and impacts during its first decade and reveals how its new RPS goal of 100 percent by 2045, which the Hawaii state legislature adopted in May 2015, has positioned Hawaii to become the first U.S. state to produce all of its electricity from indigenous renewable sources.
Biological properties of propolis extracts: Something new from an ancient product.
Zabaiou, Nada; Fouache, Allan; Trousson, Amalia; Baron, Silvère; Zellagui, Amar; Lahouel, Mesbah; Lobaccaro, Jean-Marc A
2017-10-01
Natural products are an interesting source of new therapeutics, especially for cancer therapy, as 70% of them are of botanical origin. Propolis, a resinous mixture that honey bees collect and transform from tree buds, sap flows, or other botanical sources, has been used by ethnobotanical and traditional practitioners since as early as 3000 BCE in Egypt. Enriched in flavonoids, phenolic acids and terpene derivatives, propolis has been widely used for its antibacterial, antifungal and anti-inflammatory properties. Even though it is a challenge to standardize propolis composition, chemical analyses have pointed out interesting molecules that also present anti-oxidant and anti-proliferative properties of interest in the field of anti-cancer therapy. This review describes the various geographical origins and compositions of propolis, and analyzes how the main compounds of propolis could modulate cell signaling. A focus is made on the putative use of propolis in prostate cancer. Copyright © 2017 Elsevier B.V. All rights reserved.
Changing the Culture of Academic Medicine: Critical Mass or Critical Actors?
Helitzer, Deborah L; Newbill, Sharon L; Cardinali, Gina; Morahan, Page S; Chang, Shine; Magrane, Diane
2017-05-01
By 2006, women constituted 34% of academic medical faculty, reaching a critical mass. Theoretically, with critical mass, culture and policy supportive of gender equity should be evident. We explore whether having a critical mass of women transforms institutional culture and organizational change. Career development program participants were interviewed to elucidate their experiences in academic health centers (AHCs). Focus group discussions were held with institutional leaders to explore their perceptions about contemporary challenges related to gender and leadership. Content analysis of both data sources revealed points of convergence. Findings were interpreted using the theory of critical mass. Two nested domains emerged: the individual domain included the rewards and personal satisfaction of meaningful work, personal agency, tensions between cultural expectations of family and academic roles, and women's efforts to work for gender equity. The institutional domain depicted the sociocultural environment of AHCs that shaped women's experience, both personally and professionally, lack of institutional strategies to engage women in organizational initiatives, and the influence of one leader on women's ascent to leadership. The predominant evidence from this research demonstrates that the institutional barriers and sociocultural environment continue to be formidable obstacles confronting women, stalling the transformational effects expected from achieving a critical mass of women faculty. We conclude that the promise of critical mass as a turning point for women should be abandoned in favor of "critical actor" leaders, both women and men, who individually and collectively have the commitment and power to create gender-equitable cultures in AHCs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Xionggao; Department of Ophthalmology, Hainan Medical College, Haikou; Wei, Yantao
2012-03-09
Highlights: • Vitreous induces morphological changes and cytoskeletal rearrangements in RPE cells. • Rac1 is activated in vitreous-transformed RPE cells. • Rac inhibition prevents morphological changes in vitreous-transformed RPE cells. • Rac inhibition suppresses cytoskeletal rearrangements in vitreous-transformed RPE cells. • The vitreous-induced effects are mediated by a Rac1 GTPase/LIMK1/cofilin pathway. -- Abstract: Proliferative vitreoretinopathy (PVR) is mainly caused by retinal pigment epithelial (RPE) cell migration, invasion, proliferation and transformation into fibroblast-like cells that produce the extracellular matrix (ECM). The vitreous humor is known to play an important role in PVR. An epithelial-to-mesenchymal transdifferentiation (EMT) of human RPE cells induced by 25% vitreous treatment has been linked to stimulation of the mesenchymal phenotype, migration and invasion. Here, we characterized the effects of the vitreous on the cell morphology and cytoskeleton in human RPE cells. The signaling pathway that mediates these effects was investigated. Serum-starved RPE cells were incubated with 25% vitreous, and the morphological changes were examined by phase-contrast microscopy. Filamentous actin (F-actin) was examined by immunofluorescence and confocal microscopy. Protein phosphorylation of AKT, ERK1/2, Smad2/3, LIM kinase (LIMK) 1 and cofilin was analyzed by Western blot analysis. Vitreous treatment induced cytoskeletal rearrangements, activated Rac1 and enhanced the phosphorylation of AKT, ERK1/2 and Smad2/3. When the cells were treated with a Rac activation-specific inhibitor, the cytoskeletal rearrangements were prevented, and the phosphorylation of Smad2/3 was blocked. Vitreous treatment also enhanced the phosphorylation of LIMK1 and cofilin, and the Rac inhibitor blocked this effect. We propose that vitreous-transformed human RPE cells undergo cytoskeletal rearrangements via Rac1 GTPase-dependent pathways that modulate LIMK1 and cofilin activity. The TGFβ-like activity of the vitreous may participate in this effect. Actin polymerization causes the cytoskeletal rearrangements that lead to the plasticity of vitreous-transformed RPE cells in PVR.
Geochemistry of Intra-Transform Lavas from the Galápagos Transform Fault
NASA Astrophysics Data System (ADS)
Morrow, T. A.; Mittelstaedt, E. L.; Harpp, K. S.
2013-12-01
The Galápagos plume has profoundly affected the development and evolution of the nearby (<250 km) Galápagos Transform Fault (GTF), a ~100 km right-stepping offset in the Galápagos Spreading Center (GSC). The GTF can be divided into two sections that represent different stages of transform evolution: the northern section exhibits fully developed transform fault morphology, whereas the southern section is young, and deformation is more diffuse. Both segments are faulted extensively and include numerous small (<0.5 km3) monogenetic volcanic cones, though volcanic activity is more common in the south. To examine the composition of the mantle source and melting conditions responsible for the intra-transform lavas, as well as the influence of the plume on GTF evolution, we present major element, trace element, and radiogenic isotope analysis of samples collected during the SON0158, EWI0004, and MV1007 cruises. Radiogenic isotope ratio variations in the Galápagos Archipelago require four distinct mantle reservoirs across the region: PLUME, DM, FLO, and WD. We find that Galápagos Transform lavas are chemically distinct from nearby GSC lavas and neighboring seamounts. They have radiogenic isotopic compositions that lie on a mixing line between DM and PLUME, with little to no contribution from any other mantle reservoirs despite their geographic proximity to WD-influenced lavas erupted along the GSC and at nearby (<50 km away) seamounts. Within the transform, lavas from the northern section are more enriched in radiogenic isotopes than lavas sampled in the southern section. Transform lavas are anomalously depleted in incompatible trace elements (ITEs) relative to GSC lavas, suggesting unique melting conditions within the transform. Isotopic variability along the transform axis indicates that mantle sources and/or melting mechanisms vary between the northern and southern sections, which may relate to their distances from the plume or the two-stage development and evolution of the Galápagos Transform Fault. We present a melting model that reproduces GTF lava chemistry from a mixture of two partial melts of PLUME and DM. We assume that the DM source has an ITE composition similar to the depleted upper mantle, melting is purely fractional, and lavas do not fractionate during ascent. Solutions were achieved using a Metropolis algorithm and constrained by observed GTF lava chemistry. Model results predict that GTF lavas are produced by a mixture of a ~3 ± 1% partial melt of the PLUME source and a ~5 ± 4% partial melt of the DM source. Our model predicts that a larger proportion of PLUME melts contribute to GTF lavas than DM melts. Absence of the WD component and relatively low concentrations of ITEs may indicate that lavas in the GTF are produced from a source that has already undergone partial melting and is being re-melted beneath the TF. Re-melting may be caused by extension across the GTF, or development of the southern section of the GTF via the ~1 Ma ridge jump.
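The Metropolis-constrained two-melt mixing inversion described above can be illustrated with a short, hedged sketch. The batch-melting relation, source concentrations, partition coefficients, misfit model and synthetic "observed" lava below are all assumed placeholder values, not the study's data or likelihood; only the random-walk accept/reject logic is the standard Metropolis algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative source compositions (ppm) and bulk partition coefficients for a
# handful of incompatible trace elements -- assumed values, not data from the study.
C0_PLUME = np.array([15.0, 1.2, 0.8])    # enriched source
C0_DM    = np.array([5.0, 0.3, 0.15])    # depleted source
D        = np.array([0.01, 0.02, 0.05])  # bulk partition coefficients

def batch_melt(c0, d, f):
    """Trace-element concentration in a batch partial melt of degree f."""
    return c0 / (d + f * (1.0 - d))

def predict(theta):
    """Mix a PLUME melt (degree f1) and a DM melt (degree f2) in proportion x."""
    f1, f2, x = theta
    return x * batch_melt(C0_PLUME, D, f1) + (1 - x) * batch_melt(C0_DM, D, f2)

# Synthetic "observed" lava composition with noise (stand-in for GTF data).
true_theta = np.array([0.03, 0.05, 0.6])
sigma = 0.05 * predict(true_theta)
obs = predict(true_theta) + rng.normal(0, sigma)

def log_likelihood(theta):
    f1, f2, x = theta
    if not (0 < f1 < 0.3 and 0 < f2 < 0.3 and 0 <= x <= 1):
        return -np.inf
    resid = (predict(theta) - obs) / sigma
    return -0.5 * np.sum(resid ** 2)

# Random-walk Metropolis over (f_plume, f_dm, mixing proportion).
theta = np.array([0.05, 0.05, 0.5])
logL = log_likelihood(theta)
step = np.array([0.005, 0.005, 0.05])
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, step)
    logL_prop = log_likelihood(prop)
    if np.log(rng.random()) < logL_prop - logL:   # Metropolis accept/reject
        theta, logL = prop, logL_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])                # discard burn-in
print("posterior means (f_plume, f_dm, x):", samples.mean(axis=0))
```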
Equivalent Hamiltonian for the Lee model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, H. F.
2008-03-15
Using the techniques of quasi-Hermitian quantum mechanics and quantum field theory we use a similarity transformation to construct an equivalent Hermitian Hamiltonian for the Lee model. In the field theory confined to the V/Nθ sector it effectively decouples V, replacing the three-point interaction of the original Lee model by an additional mass term for the V particle and a four-point interaction between N and θ. While the construction is originally motivated by the regime where the bare coupling becomes imaginary, leading to a ghost, it applies equally to the standard Hermitian regime where the bare coupling is real. In that case the similarity transformation becomes a unitary transformation.
On E-discretization of tori of compact simple Lie groups. II
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Juránek, Michal
2017-10-01
Ten types of discrete Fourier transforms of Weyl orbit functions are developed. Generalizing one-dimensional cosine, sine, and exponential, each type of the Weyl orbit function represents an exponential symmetrized with respect to a subgroup of the Weyl group. Fundamental domains of even affine and dual even affine Weyl groups, governing the argument and label symmetries of the even orbit functions, are determined. The discrete orthogonality relations are formulated on finite sets of points from the refinements of the dual weight lattices. Explicit counting formulas for the number of points of the discrete transforms are deduced. Real-valued Hartley orbit functions are introduced, and all ten types of the corresponding discrete Hartley transforms are detailed.
Methods for genetic transformation of filamentous fungi.
Li, Dandan; Tang, Yu; Lin, Jun; Cai, Weiwen
2017-10-03
Filamentous fungi have been of great interest because of their excellent ability as cell factories to manufacture useful products for human beings. The development of genetic transformation techniques is a precondition that enables scientists to target and modify genes efficiently and may reveal the function of target genes. The method used to deliver foreign nucleic acid into cells is the sticking point for fungal genome modification. To date, there are several general methods of genetic transformation for fungi, including protoplast-mediated transformation, Agrobacterium-mediated transformation, electroporation, the biolistic method and shock-wave-mediated transformation. This article reviews the basic protocols and principles of these transformation methods, as well as their advantages and disadvantages.
ERIC Educational Resources Information Center
Francik, Wendy A.
2012-01-01
The purpose of the research was to explore the self-directed learning and transformational learning experiences among persons with bipolar disorder. A review of previous research pointed out how personal experiences with self-directed learning and transformational learning facilitated individuals' learning to manage HIV, Methicillin-resistant…
Transformative Inquiry While Learning-Teaching: Entry Points Through Mentor-Mentee Vulnerability
ERIC Educational Resources Information Center
Tanaka, Michele T. D.; Farish, Maureen; Nicholson, Diana; Tse, Vanessa; Doll, Jenn; Archer, Elizabeth
2014-01-01
In Transformative Inquiry (TI), pre-service teachers explore issues about which they are personally passionate in order to enter into the delicate work of transformation. We examine how shared vulnerability within three mentor-mentee pairs leads to new pedagogical possibilities. Michele and Vanessa discuss poetry as a way of entering into TI and…
Transformation of Corporate Culture in Conditions of Transition to Knowledge Economics
ERIC Educational Resources Information Center
Korsakova, Tatiana V.; Chelnokova, Elena A.; Kaznacheeva, Svetlana N.; Bicheva, Irena B.; Lazutina, Antonina L.; Perova, Tatyana V.
2016-01-01
This article is devoted to the problem of corporate culture transformations which are conditioned by changes in social-economic situation. The modern paradigm of knowledge management is assumed to become the main value for forming a new vision of corporate culture. The starting point for transformations can be found in the actual corporate culture…
Galaxy Redshifts from Discrete Optimization of Correlation Functions
NASA Astrophysics Data System (ADS)
Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi
2016-12-01
We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. Our novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints but also is readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.
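A minimal sketch of an integer-linear-programming formulation in gurobipy (the Gurobi Python interface named above) is given below. The binary variables assign each source to one redshift bin; the linear cost coefficients are random placeholders standing in for the dynamically built correlation-function objective, and the capacity constraint is an assumed regularisation rather than the paper's actual constraint set. Gurobi requires a license (a size-limited one ships with pip-installed gurobipy).

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

rng = np.random.default_rng(1)
n_gal, n_bins = 50, 8            # galaxies and candidate redshift bins (toy sizes)

# Placeholder linear cost of assigning galaxy i to bin j.  In the paper the
# objective is built dynamically from angular two-point cross- and
# autocorrelation statistics; these random numbers only stand in for it.
cost = rng.random((n_gal, n_bins))

m = gp.Model("redshift_assignment")
x = m.addVars(n_gal, n_bins, vtype=GRB.BINARY, name="x")

# Each galaxy gets exactly one redshift bin.
m.addConstrs((x.sum(i, "*") == 1 for i in range(n_gal)), name="one_bin")

# Optional occupancy cap per bin (an assumed regularisation for the toy problem).
cap = int(0.4 * n_gal)
m.addConstrs((x.sum("*", j) <= cap for j in range(n_bins)), name="capacity")

m.setObjective(gp.quicksum(cost[i, j] * x[i, j]
                           for i in range(n_gal) for j in range(n_bins)),
               GRB.MINIMIZE)
m.optimize()

assignment = [next(j for j in range(n_bins) if x[i, j].X > 0.5)
              for i in range(n_gal)]
print(assignment)
```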
Bagnato, Giuseppe; Iulianelli, Adolfo; Sanna, Aimaro; Basile, Angelo
2017-03-23
Glycerol represents an emerging renewable bio-derived feedstock, which could be used as a source for producing hydrogen through steam reforming reaction. In this review, the state-of-the-art about glycerol production processes is reviewed, with particular focus on glycerol reforming reactions and on the main catalysts under development. Furthermore, the use of membrane catalytic reactors instead of conventional reactors for steam reforming is discussed. Finally, the review describes the utilization of the Pd-based membrane reactor technology, pointing out the ability of these alternative fuel processors to simultaneously extract high purity hydrogen and enhance the whole performances of the reaction system in terms of glycerol conversion and hydrogen yield.
NASA Astrophysics Data System (ADS)
Tudora, C.; Abrudeanu, M.; Stanciu, S.; Anghel, D.; Plaiaşu, G. A.; Rizea, V.; Ştirbu, I.; Cimpoeşu, N.
2018-06-01
It is highly accepted that martensitic transformation can be induced by temperature variation and by stress solicitation. Using a solar concentrator, we managed to increase the material surface temperature (to 573 and 873 K, respectively) in very short periods of time in order to analyze the material behavior under thermal shocks. The heating/cooling process was registered and analyzed during the experiments. The material surface was analyzed before and after the thermal shocks from a microstructural point of view using scanning electron microscopy (SEM) and atomic force microscopy (AFM). The experiments follow the material behavior during fast heating and propose the possibility of activating smart materials using solar heat for aerospace applications.
Infrared and visible image fusion with spectral graph wavelet transform.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo
2015-09-01
Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of the different source images, but also excellently represents the irregular areas of the source images. On the other hand, a novel weighted average method based on the bilateral filter is proposed to fuse low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
Multispectral data compression through transform coding and block quantization
NASA Technical Reports Server (NTRS)
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
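As a hedged illustration of the transform coding and block quantization family discussed above, the sketch below learns a Karhunen-Loeve basis from the 8 × 8 blocks of a single band, applies uniform quantization in the transform domain, and reconstructs. It is a generic single-band toy, not the authors' multispectral encoder; the block size and quantization step are assumed values.

```python
import numpy as np

def klt_block_code(image, block=8, step=8.0):
    """Toy KLT transform coding: learn the KLT from the image's own blocks,
    transform, uniformly quantize, then invert."""
    h, w = (np.array(image.shape) // block) * block
    img = image[:h, :w].astype(float)

    # Collect blocks as vectors (one row per block).
    blocks = (img.reshape(h // block, block, w // block, block)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, block * block))

    # KLT basis = eigenvectors of the block covariance matrix.
    mean = blocks.mean(axis=0)
    cov = np.cov(blocks - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    basis = eigvec[:, ::-1]                 # order by decreasing variance

    coeff = (blocks - mean) @ basis         # forward transform
    q = np.round(coeff / step)              # uniform (block) quantization
    rec_blocks = (q * step) @ basis.T + mean

    rec = (rec_blocks.reshape(h // block, w // block, block, block)
                     .transpose(0, 2, 1, 3)
                     .reshape(h, w))
    return rec

img = np.random.default_rng(0).normal(128, 20, (64, 64))
rec = klt_block_code(img)
print("RMS error:", np.sqrt(np.mean((img - rec) ** 2)))
```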
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interface and anomaly boundaries. The ultrasonic or the electromagnetic field at any point is computed by superimposing the contributions of different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent. Complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, and by the proposed modified technique, which nullifies the contributions of the point sources in the shadow region, are compared. One application of this research can be found in the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
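The FWHM measurement itself reduces to locating the half-maximum crossings of a reconstructed point-source profile after subtracting the background level. A minimal sketch is below; the Gaussian-plus-background profile is synthetic and only stands in for a slice through a reconstructed image.

```python
import numpy as np

def fwhm_1d(profile, pixel_size=1.0, background=0.0):
    """FWHM of a 1-D profile via linear interpolation at half maximum,
    after subtracting a known (or estimated) background level."""
    p = np.asarray(profile, float) - background
    peak = p.argmax()
    half = p[peak] / 2.0

    # Walk left and right from the peak until the half-maximum is crossed.
    left = peak
    while left > 0 and p[left - 1] > half:
        left -= 1
    right = peak
    while right < len(p) - 1 and p[right + 1] > half:
        right += 1

    # Linear interpolation between the samples bracketing each crossing.
    xl = left - (half - p[left]) / (p[left - 1] - p[left]) if left > 0 else left
    xr = right + (half - p[right]) / (p[right + 1] - p[right]) if right < len(p) - 1 else right
    return (xr - xl) * pixel_size

# Synthetic point-source profile on a warm background (contrast ~ 10:1).
x = np.arange(101)
profile = 100.0 * np.exp(-0.5 * ((x - 50) / 2.0) ** 2) + 10.0
print("FWHM [pixels]:", fwhm_1d(profile, background=10.0))
```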
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue that the fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm) by integrating the merit of genetic arithmetic with the advantage of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm considers the wavelet transform of the translation invariance as the model operator and regards the contrast pyramid conversion as the observed operator. The algorithm then designs the objective function by making use of the weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to get a higher resolution RS image. The bullet points of the text are summarized as follows:
• The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
• This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
• This text comes up with the model operator and the observed operator as the fusion scheme for RS image fusion based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Isolation transformers for utility-interactive photovoltaic systems
NASA Astrophysics Data System (ADS)
Kern, E. C., Jr.
1982-12-01
Isolation transformers are used in some photovoltaic systems to isolate the photovoltaic system common mode voltage from the utility distribution system. In early system experiments with grid connected photovoltaics, such transformers were the source of significant power losses. A project at the Lincoln Laboratory and at Allied Chemical Corporation developed an improved isolation transformer to minimize such power losses. Experimental results and an analytical model of conventional and improved transformers are presented, showing considerable reductions of losses associated with the improved transformer.
Transformer coupling for transmitting direct current through a barrier
Brown, Ralph L.; Guilford, Richard P.; Stichman, John H.
1988-01-01
The transmission system for transmitting direct current from an energy source on one side of an electrical and mechanical barrier to a load on the other side of the barrier utilizes a transformer comprising a primary core on one side of the transformer and a secondary core on the other side of the transformer. The cores are magnetically coupled selectively by moving a magnetic ferrite coupler in and out of alignment with the poles of the cores. The direct current from the energy source is converted to a time varying current by an oscillating circuit, which oscillating circuit is optically coupled to a secondary winding on the secondary core to interrupt oscillations upon the voltage in the secondary winding exceeding a preselected level.
Transformer coupling for transmitting direct current through a barrier
Brown, R.L.; Guilford, R.P.; Stichman, J.H.
1987-06-29
The transmission system for transmitting direct current from an energy source on one side of an electrical and mechanical barrier to a load on the other side of the barrier utilizes a transformer comprising a primary core on one side of the transformer and a secondary core on the other side of the transformer. The cores are magnetically coupled selectively by moving a magnetic ferrite coupler in and out of alignment with the poles of the cores. The direct current from the energy source is converted to a time varying current by an oscillating circuit, which oscillating circuit is optically coupled to a secondary winding on the secondary core to interrupt oscillations upon the voltage in the secondary winding exceeding a preselected level. 4 figs.
NASA Astrophysics Data System (ADS)
Winczek, J.; Makles, K.; Gucwa, M.; Gnatowska, R.; Hatala, M.
2017-08-01
In the paper, the model of the thermal and structural strain calculation in a steel element during single-pass SAW surfacing is presented. The temperature field is described analytically assuming a bimodal volumetric model of heat source and a semi-infinite body model of the surfaced (rebuilt) workpiece. The electric arc is treated physically as one heat source. Part of the heat is transferred by the direct impact of the electric arc, while another part of the heat is transferred to the weld by the melted material of the electrode. Kinetics of phase transformations during heating is limited by temperature values at the beginning and at the end of austenitic transformation, while the progress of phase transformations during cooling is determined on the basis of the TTT-welding diagram and the JMA-K law for diffusive transformations, and the K-M law for martensitic transformation. Total strains equal the sum of thermal and structural strains induced by phase transformations in the welding cycle.
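The JMA-K and K-M laws cited above have simple closed forms, sketched below with illustrative coefficients (the rate constant k, exponent n, martensite start temperature Ms and the K-M constant are assumed placeholders, not the values calibrated for the SAW surfacing model).

```python
import numpy as np

def jmak_fraction(t, k, n):
    """JMA-K law for diffusive transformations: X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - np.exp(-k * np.asarray(t, float) ** n)

def koistinen_marburger(T, Ms, alpha=0.011):
    """K-M law for the martensite fraction on cooling below Ms:
    f(T) = 1 - exp(-alpha * (Ms - T)), zero above Ms."""
    T = np.asarray(T, float)
    return np.where(T < Ms, 1.0 - np.exp(-alpha * (Ms - T)), 0.0)

# Illustrative numbers only (not calibrated to the SAW surfacing case).
t = np.linspace(0.0, 60.0, 7)            # s
print("JMA-K X(t):", np.round(jmak_fraction(t, k=1e-3, n=2.0), 3))

T = np.linspace(450.0, 250.0, 5)         # degC, cooling through Ms = 400 degC
print("K-M f(T):", np.round(koistinen_marburger(T, Ms=400.0), 3))
```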
NASA Astrophysics Data System (ADS)
Beyene, F.; Knospe, S.; Busch, W.
2015-04-01
Landslide detection and monitoring remain difficult with conventional differential radar interferometry (DInSAR) because most pixels of radar interferograms around landslides are affected by different error sources. These are mainly related to the nature of high radar viewing angles and the related spatial distortions (such as layover and shadow), temporal decorrelation owing to vegetation cover, and the speed and direction of the sliding masses. On the other hand, GIS can be used to integrate spatial datasets obtained from many sources (including radar and non-radar sources). In this paper, a GRID data model is proposed to integrate deformation data derived from DInSAR processing with other radar-origin data (coherence, layover and shadow, slope and aspect, local incidence angle) and external datasets collected from field studies of landslide sites and other sources (geology, geomorphology, hydrology). After coordinate transformation and merging of data, candidate landslide-representing pixels with high-quality radar signals were selected by applying a GIS-based multicriteria filtering analysis (GIS-MCFA), which excludes grid points in areas of shadow and layover, low coherence, non-detectable and non-landslide deformations, and other possible sources of errors in the DInSAR data processing. At the end, the results obtained from GIS-MCFA have been verified by using the external datasets (existing landslide sites collected from fieldwork, geological and geomorphologic maps, rainfall data, etc.).
Characterization of methane emissions in Los Angeles with airborne hyperspectral imaging
NASA Astrophysics Data System (ADS)
Saad, K.; Tratt, D. M.; Buckland, K. N.; Roehl, C. M.; Wennberg, P. O.; Wunch, D.
2017-12-01
As urban areas develop regulations to limit atmospheric methane (CH4), accurate quantification of anthropogenic emissions will be critical for program development and evaluation. However, relating emissions derived from process-level metadata to those determined from assimilating atmospheric observations of CH4 concentrations into models is particularly difficult. Non-methane hydrocarbons (NMHCs) can help differentiate between thermogenic and biogenic CH4 emissions, as they are primarily co-emitted with the former; however, these trace gases are subject to the same limitations as CH4. Remotely-sensed hyperspectral imaging bridges these approaches by measuring emissions plumes directly with spatial coverage on the order of 10 km2 min-1. We identify the sources of and evaluate emissions plumes measured by airborne infrared hyperspectral imagers flown over the Los Angeles (LA) metropolitan area, which encompasses various CH4 sources, including petroleum and natural gas wells and facilities. We quantify total CH4 and NMHC emissions, as well as their relative column densities, at the point-source level to create fingerprints of source types. We aggregate these analyses to estimate the range of variability in chemical composition across source types. These CH4 and NMHC emissions factors are additionally compared to their tropospheric column abundances measured by the Total Carbon Column Observing Network (TCCON) Pasadena Fourier transform infrared spectrometer, which provides a footprint for the LA basin.
Data-Driven Residential Load Modeling and Validation in GridLAB-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gotseff, Peter; Lundstrom, Blake
Accurately characterizing the impacts of high penetrations of distributed energy resources (DER) on the electric distribution system has driven modeling methods from traditional static snapshots, often representing a critical point in time (e.g., summer peak load), to quasi-static time series (QSTS) simulations capturing all the effects of variable DER, associated controls and, hence, impacts on the distribution system over a given time period. Unfortunately, the high time resolution DER source and load data required for model inputs is often scarce or non-existent. This paper presents work performed within the GridLAB-D model environment to synthesize, calibrate, and validate 1-second residential load models based on measured transformer loads and physics-based models suitable for QSTS electric distribution system modeling. The modeling and validation approach taken was to create a typical GridLAB-D model home that, when replicated to represent multiple diverse houses on a single transformer, creates a statistically similar load to a measured load for a given weather input. The model homes are constructed to represent the range of actual homes on an instrumented transformer: square footage, thermal integrity, heating and cooling system definition as well as realistic occupancy schedules. House model calibration and validation was performed using the distribution transformer load data and corresponding weather. The modeled loads were found to be similar to the measured loads for four evaluation metrics: 1) daily average energy, 2) daily average and standard deviation of power, 3) power spectral density, and 4) load shape.
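Three of the four evaluation metrics listed above (daily average energy, daily average and standard deviation of power, and power spectral density; load shape is omitted) are easy to compute from a load time series. The sketch below assumes 1-second sampling and uses synthetic series in place of the GridLAB-D output and the transformer measurements.

```python
import numpy as np
from scipy.signal import welch

FS = 1.0                                   # 1-second sampling [Hz]
SAMPLES_PER_DAY = int(24 * 3600 * FS)

def validation_metrics(power_w):
    """Daily average energy, daily average/std of power, and PSD of a load series."""
    p = np.asarray(power_w, float)
    days = p[: (len(p) // SAMPLES_PER_DAY) * SAMPLES_PER_DAY].reshape(-1, SAMPLES_PER_DAY)
    daily_energy_kwh = days.sum(axis=1) / FS / 3.6e6     # J -> kWh per day
    f, psd = welch(p, fs=FS, nperseg=4096)               # power spectral density
    return {
        "daily_energy_kWh": daily_energy_kwh.mean(),
        "daily_mean_W": days.mean(axis=1).mean(),
        "daily_std_W": days.std(axis=1).mean(),
        "psd": (f, psd),
    }

# Synthetic measured vs. modeled transformer loads (two noisy daily cycles).
rng = np.random.default_rng(0)
t = np.arange(2 * SAMPLES_PER_DAY)
base = 3000 + 1500 * np.sin(2 * np.pi * t / SAMPLES_PER_DAY)
measured = base + rng.normal(0, 200, t.size)
modeled = base + rng.normal(0, 250, t.size)

for name, series in [("measured", measured), ("modeled", modeled)]:
    m = validation_metrics(series)
    print(name, round(m["daily_energy_kWh"], 1), "kWh/day,",
          round(m["daily_mean_W"], 0), "+/-", round(m["daily_std_W"], 0), "W")
```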
Slit Function Measurement of An Imaging Spectrograph Using Fourier Transform Techniques
NASA Technical Reports Server (NTRS)
Park, Hongwoo; Swimyard, Bruce; Jakobsen, Peter; Moseley, Harvey; Greenhouse, Matthew
2004-01-01
Knowledge of a spectrograph slit function is necessary to interpret the unresolved lines in an observed spectrum. A theoretical slit function can be calculated from the sizes of the entrance slit, the detector aperture when it functions as an exit slit, the dispersion characteristic of the disperser, and the point spread function of the spectrograph. A measured slit function is preferred to the theoretical one for the correct interpretation of the spectral data. In a scanning spectrometer with a single exit slit, the slit function is easily measured. In a fixed-grating (or disperser) spectrograph, the slit function can be measured by illuminating the entrance slit with near-monochromatic light from a pre-monochromator or a tunable laser and varying the wavelength of the incident light. Even though the latter technique has been used successfully for slit function measurements, it has been very laborious, and it would be prohibitive for an imaging spectrograph or a multi-object spectrograph that has a large field of view. We explore an alternative technique that is manageable for these measurements. In the proposed technique, the imaging spectrograph is used as a detector of a Fourier transform spectrometer. This method can be applied not only to an IR spectrograph but also, potentially, to a visible/UV spectrograph including a wedge filter spectrograph. This technique will require a blackbody source of known temperature and a bolometer to characterize the interferometer part of the Fourier transform spectrometer. This paper will describe the alternative slit function measurement technique using a Fourier transform spectrometer.
Study of preparation of TiB₂ by TiC in Al melts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding Haimin; Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061; Liu Xiangfa, E-mail: xfliu@sdu.edu.cn
2012-01-15
TiB₂ particles are prepared by TiC in Al melts and their characteristics are studied. It is found that TiC particles are unstable when boron exists in Al melts at high temperature and will transform to TiB₂ and Al₄C₃. Most of the synthesized TiB₂ particles are regular hexagonal prisms with submicron size. The diameter of the undersurfaces of these prisms ranges from 200 nm to 1 μm and the height ranges from 100 nm to 300 nm. It is considered that controlling the transformation from TiC to TiB₂ is an effective method to prepare small and uniform TiB₂ particles. - Highlights: • TiC can easily transform into TiB₂ in Al melts. • TiB₂ formed by TiC will grow into regular hexagonal prisms with submicron size. • Controlling the transformation from TiC to TiB₂ is an effective method to prepare small and uniform TiB₂ particles.
Using shape contexts method for registration of contra lateral breasts in thermal images.
Etehadtavakol, Mahnaz; Ng, Eddie Yin-Kwee; Gheissari, Niloofar
2014-12-10
The aim is to achieve symmetric boundaries for the left and right breasts in thermal images by registration. The proposed registration method consists of two steps. In the first step, shape context, an approach presented by Belongie and Malik, is applied for registration of the two breast boundaries. The shape context is an approach to measure shape similarity. Two finite sets of sample points from the shape contours of the two breasts are presented, and the correspondences between the two shapes are found. By finding correspondences, the sample point which has the most similar shape context is obtained. In this study, an aligning transformation which maps one shape onto the other is then estimated to complete the shape matching. The use of a thin-plate spline permitted good estimation of a plane transformation capable of mapping arbitrary points from one shape onto the other. The obtained aligning transformation of the boundary points has been applied successfully to map the interior points of the two breasts. Some advantages of using the shape context method in this work are as follows: (1) no special landmarks or key points are needed; (2) it is tolerant to all common shape deformations; and (3) although it is uncomplicated and straightforward to use, it gives a remarkably powerful descriptor for point sets, significantly improving point-set registration. Results are very promising. The proposed algorithm was implemented for 32 cases, and boundary registration was done perfectly for 28 of them.
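A hedged sketch of the shape context descriptor underlying the method above is given below: for each sample point, a log-polar histogram of the relative positions of all other points is built, and a chi-square cost compares descriptors between the two point sets. This illustrates Belongie and Malik's descriptor only; the correspondence search and the thin-plate-spline warp of the full pipeline are not reproduced, and the two arc-shaped contours are invented test data.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar histograms of relative point positions (one per point)."""
    pts = np.asarray(points, float)
    n = len(pts)
    d = pts[None, :, :] - pts[:, None, :]          # pairwise offsets
    dist = np.hypot(d[..., 0], d[..., 1])
    ang = np.arctan2(d[..., 1], d[..., 0]) % (2 * np.pi)

    mean_dist = dist[dist > 0].mean()              # scale normalisation
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_dist
    hists = np.zeros((n, n_r, n_theta))
    for i in range(n):
        mask = np.arange(n) != i
        r_bin = np.digitize(dist[i, mask], r_edges) - 1
        t_bin = (ang[i, mask] / (2 * np.pi / n_theta)).astype(int) % n_theta
        ok = (r_bin >= 0) & (r_bin < n_r)          # drop out-of-range radii
        np.add.at(hists[i], (r_bin[ok], t_bin[ok]), 1)
    return hists.reshape(n, -1)

def chi2_cost(h1, h2):
    """Chi-square matching cost between two sets of shape-context histograms."""
    a, b = h1[:, None, :], h2[None, :, :]
    return 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-12), axis=-1)

# Two toy breast-boundary-like contours (circular arcs) as point sets.
t = np.linspace(0, np.pi, 40)
left = np.c_[np.cos(t), np.sin(t)]
right = np.c_[np.cos(t) + 2.1, np.sin(t)] * 1.05   # shifted, slightly scaled
cost = chi2_cost(shape_context(left), shape_context(right))
print("best-matching counterpart of point 0:", cost[0].argmin())
```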
INTERIOR OF NORTH ENTRY VESTIBULE, SHOWING TRANSFORMER ROOM BEHIND WIRE ...
INTERIOR OF NORTH ENTRY VESTIBULE, SHOWING TRANSFORMER ROOM BEHIND WIRE MESH, VIEW FACING EAST-SOUTHEAST. - Naval Air Station Barbers Point, Telephone Exchange, Coral Sea Road north of Bismarck Sea Road, Ewa, Honolulu County, HI
AS Migration and Optimization of the Power Integrated Data Network
NASA Astrophysics Data System (ADS)
Zhou, Junjie; Ke, Yue
2018-03-01
In the transformation of an integrated data network, the impact on the business carried by the network has always been the most important reference factor for measuring the quality of the network transformation. Given the importance of the business carried by the data network, specific design proposals must be put forward during the transformation, supported by extensive demonstration and practice, to ensure that the transformation program meets the requirements of the enterprise data network. This paper mainly demonstrates a scheme for migrating point-to-point access equipment in the reconstruction of the power integrated data network: the BGP autonomous domain is migrated to the domain specified in the industry standard, and the intranet OSPF protocol is smoothly migrated to the IS-IS protocol. Through the optimized design, the performance of the power data network was improved in terms of traffic forwarding, the traffic forwarding paths were optimized, the scalability became larger, the risk of potential loops was lowered, network stability was improved, and operational costs were saved.
The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design
NASA Astrophysics Data System (ADS)
Borrmann, Dorit; Elseberg, Jan; Lingemann, Kai; Nüchter, Andreas
2011-03-01
The Hough Transform is a well-known method for detecting parameterized objects. It is the de facto standard for detecting lines and circles in 2-dimensional data sets. For 3D data it has attracted little attention so far. Even for the 2D case, high computational costs have led to the development of numerous variations of the Hough Transform. In this article we evaluate different variants of the Hough Transform with respect to their applicability to detect planes in 3D point clouds reliably. Apart from computational costs, the main problem is the representation of the accumulator. Usual implementations favor geometrical objects with certain parameters due to uneven sampling of the parameter space. We present a novel approach to design the accumulator focusing on achieving the same size for each cell and compare it to existing designs.
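For reference, a minimal sketch of the standard 3D Hough transform for planes is given below, using the normal form rho = x sin(theta) cos(phi) + y sin(theta) sin(phi) + z cos(theta) and a uniformly sampled (theta, phi, rho) accumulator. This uniform-angle accumulator is exactly the kind of design whose uneven cell sizes the article criticises; the proposed equal-size-cell accumulator is not reproduced here, and the noisy test plane is synthetic.

```python
import numpy as np

def hough_planes(points, n_theta=30, n_phi=60, n_rho=100):
    """Vote every point into a (theta, phi, rho) accumulator; return the best plane."""
    pts = np.asarray(points, float)
    theta = np.linspace(0, np.pi, n_theta)            # polar angle of the normal
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    # Unit normals for every (theta, phi) cell.
    nx = np.outer(np.sin(theta), np.cos(phi))
    ny = np.outer(np.sin(theta), np.sin(phi))
    nz = np.outer(np.cos(theta), np.ones(n_phi))

    rho_max = np.linalg.norm(pts, axis=1).max()
    rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)

    ti, pj = np.meshgrid(np.arange(n_theta), np.arange(n_phi), indexing="ij")
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)
    for x, y, z in pts:
        rho = x * nx + y * ny + z * nz                # signed distance per normal
        r_bin = np.clip(np.digitize(rho, rho_edges) - 1, 0, n_rho - 1)
        acc[ti, pj, r_bin] += 1                       # one vote per (theta, phi) cell

    t, p, r = np.unravel_index(acc.argmax(), acc.shape)
    normal = np.array([nx[t, p], ny[t, p], nz[t, p]])
    rho0 = 0.5 * (rho_edges[r] + rho_edges[r + 1])
    return normal, rho0, acc.max()

# Noisy points on the plane z = 0.5*x + 2 plus some outliers.
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, (400, 2))
plane_pts = np.c_[xy, 0.5 * xy[:, 0] + 2 + rng.normal(0, 0.05, 400)]
outliers = rng.uniform(-5, 5, (50, 3))
normal, rho, votes = hough_planes(np.vstack([plane_pts, outliers]))
print("normal:", np.round(normal, 2), "rho:", round(rho, 2), "votes:", votes)
```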
The parallel algorithm for the 2D discrete wavelet transform
NASA Astrophysics Data System (ADS)
Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel
2018-04-01
The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing using single-core CPUs. However, considering parallel processing using multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of points that represent the exchange of data. Consequently, these points often form a performance bottleneck. Our approach appropriately rearranges calculations inside the transform, and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluating on multi-core CPUs, we consistently outperform the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
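The separable lifting scheme referred to above can be illustrated with its simplest instance, a one-level 2D Haar lifting (predict and update steps applied to rows, then columns). This is only the baseline scheme the paper improves on, not the proposed parallel rearrangement.

```python
import numpy as np

def haar_lift_1d(x):
    """One lifting step of the Haar wavelet along the last axis.
    Predict: d = odd - even; update: s = even + d/2 (= pairwise mean)."""
    even, odd = x[..., 0::2], x[..., 1::2]
    d = odd - even            # predict step (detail coefficients)
    s = even + d / 2.0        # update step (approximation coefficients)
    return s, d

def haar_lift_2d(img):
    """One separable 2D level: rows first, then columns, giving four subbands."""
    s, d = haar_lift_1d(img)                  # along rows
    ll, lh = haar_lift_1d(s.T)                # along columns of the low band
    hl, hh = haar_lift_1d(d.T)                # along columns of the high band
    return ll.T, lh.T, hl.T, hh.T

def haar_unlift_1d(s, d):
    """Invert haar_lift_1d (perfect reconstruction)."""
    even = s - d / 2.0
    odd = d + even
    x = np.empty(s.shape[:-1] + (2 * s.shape[-1],))
    x[..., 0::2], x[..., 1::2] = even, odd
    return x

img = np.random.default_rng(0).random((8, 8))
ll, lh, hl, hh = haar_lift_2d(img)

# Perfect reconstruction check for the row stage.
s, d = haar_lift_1d(img)
print(np.allclose(haar_unlift_1d(s, d), img))   # True
```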
Wavelet-like bases for thin-wire integral equations in electromagnetics
NASA Astrophysics Data System (ADS)
Francomano, E.; Tortorici, A.; Toscano, E.; Ala, G.; Viola, F.
2005-03-01
In this paper, wavelets are used in solving, by the method of moments, a modified version of the thin-wire electric field integral equation, in the frequency domain. The time domain electromagnetic quantities are obtained by using the inverse discrete fast Fourier transform. The retarded scalar electric and vector magnetic potentials are employed in order to obtain the integral formulation. The discretized model generated by applying the direct method of moments via a point-matching procedure results in a linear system with a dense matrix which has to be solved for each frequency of the Fourier spectrum of the time domain impressed source. Therefore, an orthogonal wavelet-like basis transform is used to sparsify the moment matrix. In particular, dyadic and M-band wavelet transforms have been adopted, so generating different sparse matrix structures. This leads to an efficient solution of the resulting sparse matrix equation. Moreover, a wavelet preconditioner is used to accelerate the convergence rate of the iterative solver employed. These numerical features are used in analyzing the transient behavior of a lightning protection system. In particular, the focus is on the transient performance of the earth termination system of a lightning protection system, or of the earth electrode of an electric power substation, during its operation. The numerical results, obtained by running a complex structure, are discussed and the features of the used method are underlined.
A seismically active section of the Southwest Indian Ridge
NASA Astrophysics Data System (ADS)
Wald, David J.; Wallace, Terry C.
1986-10-01
The section of the Southwest Indian Ocean Ridge west of the Prince Edward Fracture zone has a large ridge axis offset and a complicated ridge-transform morphology. We have determined the source mechanisms of transform earthquakes along this portion of the ridge from an inversion of long-period P and SH waveforms. The seismicity is characterized by anomalous faulting mechanisms, source complexity and an unexpectedly large seismic moment release. Several earthquakes with dip-slip components of faulting have been recognized on the central section of the Andrew Bain and 32° E transforms suggesting geometrical complexity along the transform. This region has experienced a Mw = 8.0 transform earthquake in 1942, yet we observe a seismic slip rate during the last 20 years that is still comparable to the predicted spreading rate (1.6 cm/yr). The calculated slip rate over a period of 60 years is three times greater than the expected rate of spreading.
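The slip-rate comparison above follows the standard relation M0 = mu * A * D, so cumulative seismic slip is the summed moment divided by rigidity times fault area. The sketch below shows only the arithmetic; the rigidity, fault dimensions and event moments are assumed placeholders, not the catalogue used for the Andrew Bain region.

```python
# Seismic slip rate from cumulative moment release: D = sum(M0) / (mu * A).
# Numbers below are illustrative placeholders, not the study's catalogue.
MU = 3.0e10                  # rigidity [Pa]
LENGTH = 200e3               # transform length [m] (assumed)
WIDTH = 10e3                 # seismogenic width [m] (assumed)
YEARS = 20.0

moments = [2.0e19, 8.0e18, 5.0e18, 1.5e19]   # N*m, hypothetical events

area = LENGTH * WIDTH
slip = sum(moments) / (MU * area)            # cumulative seismic slip [m]
rate_cm_per_yr = 100.0 * slip / YEARS
print(f"seismic slip rate: {rate_cm_per_yr:.2f} cm/yr "
      f"(to be compared with a spreading rate of 1.6 cm/yr)")
```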
Polarization singularity indices in Gaussian laser beams
NASA Astrophysics Data System (ADS)
Freund, Isaac
2002-01-01
Two types of point singularities in the polarization of a paraxial Gaussian laser beam are discussed in detail. V-points, which are vector point singularities where the direction of the electric vector of a linearly polarized field becomes undefined, and C-points, which are elliptic point singularities where the ellipse orientations of elliptically polarized fields become undefined. Conventionally, V-points are characterized by the conserved integer valued Poincaré-Hopf index η, with generic value η=±1, while C-points are characterized by the conserved half-integer singularity index IC, with generic value IC=±1/2. Simple algorithms are given for generating V-points with arbitrary positive or negative integer indices, including zero, at arbitrary locations, and C-points with arbitrary positive or negative half-integer or integer indices, including zero, at arbitrary locations. Algorithms are also given for generating continuous lines of these singularities in the plane, V-lines and C-lines. V-points and C-points may be transformed one into another. A topological index based on directly measurable Stokes parameters is used to discuss this transformation. The evolution under propagation of V-points and C-points initially embedded in the beam waist is studied, as is the evolution of V-dipoles and C-dipoles.
Yang-Mills gauge conditions from Witten's open string field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Haidong; Siegel, Warren
2007-02-15
We construct the Zinn-Justin-Batalin-Vilkovisky action for tachyons and gauge bosons from Witten's 3-string vertex of the bosonic open string without gauge fixing. Through canonical transformations, we find the off-shell, local, gauge-covariant action up to 3-point terms, satisfying the usual field theory gauge transformations. Perturbatively, it can be extended to higher-point terms. It also gives a new gauge condition in field theory which corresponds to the Feynman-Siegel gauge on the world-sheet.
Nonlocal symmetries and Bäcklund transformations for the self-dual Yang-Mills system
NASA Astrophysics Data System (ADS)
Papachristou, C. J.; Harrison, B. Kent
1988-01-01
The observation is made that generalized evolutionary isovectors of the self-dual Yang-Mills equation, obtained by ``verticalization'' of the geometrical isovectors derived in a previous paper [J. Math. Phys. 28, 1261 (1987)], generate Bäcklund transformations for the self-dual system. In particular, new Bäcklund transformations are obtained by ``verticalizing'' the generators of point transformations on the solution manifold. A geometric ansatz for the derivation of such (generally nonlocal) symmetries is proposed.
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Han, Chunying; Chi, Yue
2018-06-01
In a simultaneous source survey, no limitation is imposed on the shot scheduling of nearby sources and thus a huge acquisition efficiency can be obtained, but at the same time the recorded seismic data are contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contains multiple dips, e.g., containing multiple reflections. The multi-dip seislet frame strategy solves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose to use a robust dip estimation algorithm that is based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders; then a fairly accurate slope estimation can be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis over both numerical synthetic and field data examples.
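The velocity-slope conversion relied on above follows from hyperbolic NMO moveout: with t(x) = sqrt(t0^2 + x^2/v^2), the local slope is p(x) = dt/dx = x / (v^2 t(x)). The sketch below evaluates this for two events; the t0 and velocity values are invented examples and the PWD-based alternative is not shown.

```python
import numpy as np

def slope_from_nmo(offsets, t0, v_nmo):
    """Local slope p(x) = dt/dx for a hyperbolic event t(x) = sqrt(t0^2 + x^2/v^2)."""
    x = np.asarray(offsets, float)
    t = np.sqrt(t0 ** 2 + (x / v_nmo) ** 2)
    return x / (v_nmo ** 2 * t), t

# Example: a primary-like event (v = 2000 m/s) and a multiple-like event with a
# slower effective velocity (1400 m/s) at the same t0 -- assumed values.
offsets = np.linspace(0, 3000, 7)            # m
for v in (2000.0, 1400.0):
    p, t = slope_from_nmo(offsets, t0=0.8, v_nmo=v)
    print(f"v = {v:.0f} m/s, slopes [s/m]:", np.round(p, 6))
```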
A quality control system for digital elevation data
NASA Astrophysics Data System (ADS)
Knudsen, Thomas; Kokkendorf, Simon; Flatman, Andrew; Nielsen, Thorbjørn; Rosenkranz, Brigitte; Keller, Kristian
2015-04-01
In connection with the introduction of a new version of the Danish national coverage Digital Elevation Model (DK-DEM), the Danish Geodata Agency has developed a comprehensive quality control (QC) and metadata production (MP) system for LiDAR point cloud data. The architecture of the system reflects its origin in a national mapping organization where raw data deliveries are typically outsourced to external suppliers. It also reflects a design decision of aiming at, whenever conceivable, doing full spatial coverage tests, rather than scattered sample checks. Hence, the QC procedure is split in two phases: a reception phase and an acceptance phase. The primary aim of the reception phase is to do a quick assessment of things that can typically go wrong, and which are relatively simple to check: data coverage, data density, strip adjustment. If a data delivery passes the reception phase, the QC continues with the acceptance phase, which checks five different aspects of the point cloud data: vertical accuracy, vertical precision, horizontal accuracy, horizontal precision, and point classification correctness. The vertical descriptors are comparatively simple to measure: The vertical accuracy is checked by direct comparison with previously surveyed patches. The vertical precision is derived from the observed variance on well defined flat surface patches. These patches are automatically derived from the road centerlines registered in FOT, the official Danish map data base. The horizontal descriptors are less straightforward to measure, since potential reference material for direct comparison is typically expected to be less accurate than the LiDAR data. The solution selected is to compare photogrammetrically derived roof centerlines from FOT with LiDAR derived roof centerlines. These are constructed by taking the 3D Hough transform of a point cloud patch defined by the photogrammetrical roof polygon. The LiDAR derived roof centerline is then the intersection line of the two primary planes of the transformed data. Since the photogrammetrical and the LiDAR derived roof centerline sets are independently derived, a low RMS difference indicates that both data sets are of very high accuracy. The horizontal precision is derived by doing a similar comparison between LiDAR derived roof centerlines in the overlap zone of neighbouring flight strips. Contrary to the vertical and horizontal descriptors, the point classification correctness is neither geometric, nor well defined. In this case we resolve this by introducing a human in the loop and presenting data in a form that is as useful as possible to this human. Hence, the QC system produces maps of suspicious patterns such as vegetation below buildings, points classified as buildings where no building is registered in the map data base, building polygons from the map data base without any building points, and buildings on roads. All elements of the QC process are carried out in smaller tiles (typically 1 km × 1 km) and hence it is trivially parallelizable. Results from the parallel executing processes are collected in a geospatial data base system (PostGIS) and the progress can be analyzed and visualized in a desktop GIS while the processes run. Implementation-wise, the system is based on open source components, primarily from the OSGeo stack (GDAL, PostGIS, QGIS, NumPy, SciPy, etc.). The system specific code is also being open sourced.
This open source distribution philosophy supports the parallel execution paradigm, since all available hardware can be utilized without any licensing problems. As yet, the system has only been used for QC of the first part of a new Danish elevation model. The experience has, however, been very positive. Especially notable is the utility of doing full spatial coverage tests (rather than scattered sample checks). This means that error detection and error reports are exactly as spatial as the point cloud data they concern. This makes it very easy for both data receiver and data provider, to discuss and reason about the nature and causes of irregularities.
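The last step of the roof-centerline construction described above, intersecting the two primary roof planes, can be sketched as follows. Least-squares plane fits stand in for the 3D Hough transform step, the intersection line comes from the cross product of the two normals, and the gabled-roof point cloud is synthetic.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns unit normal n and offset d (n.p = d)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - centroid)
    n = vh[-1]                       # direction of smallest variance
    return n, n @ centroid

def plane_intersection(n1, d1, n2, d2):
    """Line of intersection of planes n1.p=d1 and n2.p=d2 as (point, direction)."""
    direction = np.cross(n1, n2)
    # A particular point: solve the two plane equations plus one pinning equation.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)

# Synthetic gabled roof: two noisy planes meeting along the ridge (y-axis, z = 5).
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, (200, 2))
left = xy[xy[:, 0] < 0]
right = xy[xy[:, 0] >= 0]
roof_left = np.c_[left, 5 + 0.5 * left[:, 0] + rng.normal(0, 0.02, len(left))]
roof_right = np.c_[right, 5 - 0.5 * right[:, 0] + rng.normal(0, 0.02, len(right))]

n1, d1 = fit_plane(roof_left)
n2, d2 = fit_plane(roof_right)
point, direction = plane_intersection(n1, d1, n2, d2)
print("ridge point:", np.round(point, 2), "ridge direction:", np.round(direction, 2))
```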
ERIC Educational Resources Information Center
Brown, Julie
2017-01-01
This article presents an overview of the findings of a recently completed study exploring the potentially transformative impact upon learners of recognition of prior informal learning (RPL). The specific transformative dimension being reported is learner identity. In addition to providing a starting point for an evidence base within Scotland, the…
Curricular Transformation of Education in the Field of Physical and Sport Education in Slovakia
ERIC Educational Resources Information Center
Bendíková, Elena
2016-01-01
The study presents basic information on the curricular transformation of physical and sport education in Slovakia after the year 1989, which is related to the education process in the 21st century. What is more, it points to the basis for modern transformation in relation to sports as well as to insufficient undergraduate teacher training and its…
NASA Astrophysics Data System (ADS)
Lu, Peng; Lin, Wenpeng; Niu, Zheng; Su, Yirong; Wu, Jinshui
2006-10-01
Nitrogen (N) is one of the main factors affecting environmental pollution. In recent years, non-point source pollution and water body eutrophication have become increasing concerns for both scientists and policy-makers. In order to assess the environmental hazard of soil total N pollution, a typical ecological unit was selected as the experimental site. This paper showed that the Box-Cox transformation achieved normality in the data set and dampened the effect of outliers. The best theoretical model of soil total N was a Gaussian model. Spatial variability of soil total N in the NE60° and NE150° directions showed that it had a strip anisotropic structure. The ordinary kriging estimate of soil total N concentration was mapped. The spatial distribution pattern of soil total N in the direction of NE150° displayed a strip-shaped structure. Kriging standard deviations (KSD) provided valuable information that will increase the accuracy of total N mapping. The probability kriging method is useful to assess the hazard of N pollution by providing the conditional probability of the N concentration exceeding the threshold value, here taken as soil total N > 2.0 g/kg. The probability distribution of soil total N will be helpful to conduct hazard assessment, optimal fertilization, and develop management practices to control the non-point sources of N pollution.
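A small sketch of the Box-Cox normalisation step is given below, with scipy estimating the transform parameter by maximum likelihood on a simulated right-skewed sample (a stand-in for the soil total N data) and a simple normal-model exceedance probability for the 2.0 g/kg threshold; the kriging and probability-kriging steps themselves are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic right-skewed soil total-N concentrations (g/kg), stand-ins for field data.
total_n = rng.lognormal(mean=0.3, sigma=0.5, size=300)

transformed, lam = stats.boxcox(total_n)      # MLE of the Box-Cox lambda
print(f"estimated lambda: {lam:.3f}")
print("skewness before:", round(stats.skew(total_n), 3),
      "after:", round(stats.skew(transformed), 3))

# Probability of exceeding the 2.0 g/kg hazard threshold under a normal model
# fitted in the transformed domain (a simplification of probability kriging).
thr = (2.0 ** lam - 1) / lam if lam != 0 else np.log(2.0)   # Box-Cox of 2.0
mu, sd = transformed.mean(), transformed.std(ddof=1)
print("P(total N > 2.0 g/kg) ~", round(1 - stats.norm.cdf(thr, mu, sd), 3))
```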
Morillas, Héctor; Maguregui, Maite; García-Florentino, Cristina; Marcaida, Iker; Madariaga, Juan Manuel
2016-04-15
Dry deposition is one of the most dangerous processes that can take place in the environment, whereby compounds suspended in the atmosphere react directly with different surrounding materials, promoting decay processes. Usually this process is related to industrial/urban fog and/or marine aerosol in coastal areas. In particular, marine aerosol transports different types of salts, which can be deposited on building materials and, through dry deposition, promote different decay pathways. A new analytical methodology based on the combined use of Raman Spectroscopy and SEM-EDS (point-by-point and imaging) was applied. For that purpose, firstly evaporated seawater (presence of Primary Marine Aerosol (PMA)) was analyzed. After that, using a self-made passive sampler (SMPS), different suspended particles coming from the marine aerosol (particles transformed in the atmosphere, i.e. Secondary Marine Aerosol (SMA)) and metallic airborne particulate matter coming from anthropogenic sources were analyzed. Finally, in order to observe whether the SMA and metallic particles identified in the SMPS can be deposited on a building, sandstone samples from La Galea Fortress (Getxo, north of Spain), located in front of the sea and at the place where the passive sampler was mounted, were analyzed.
Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality contr
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2017-12-01
Huaxian Section is the last hydrological and water quality monitoring section of Weihe River Watershed. Weihe River Watershed above Huaxian Section is taken as the research objective in this paper and COD is chosen as the water quality parameter. According to the discharge characteristics of point source pollutions and non-point source pollutions, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and point source pollution and non-point source pollution loads of Weihe River Watershed above Huaxian Section are calculated in the rainy, normal and dry season in the year 2007. The results show that the monthly point source pollution loads of Weihe River Watershed above Huaxian Section discharge stably, the monthly non-point source pollution loads of Weihe River Watershed above Huaxian Section change greatly, and the non-point source pollution load proportions of the total pollution load of COD decrease in the normal, rainy and wet period in turn.
Calculating NH3-N pollution load of wei river watershed above Huaxian section using CSLD method
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2018-02-01
Huaxian Section is the last hydrological and water quality monitoring section of Weihe River Watershed. So it is taken as the research objective in this paper and NH3-N is chosen as the water quality parameter. According to the discharge characteristics of point source pollutions and non-point source pollutions, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested and point source pollution and non-point source pollution loads of Weihe River Watershed above Huaxian Section are calculated in the rainy, normal and dry season in the year 2007. The results show that the monthly point source pollution loads of Weihe River Watershed above Huaxian Section discharge stably and the monthly non-point source pollution loads of Weihe River Watershed above Huaxian Section change greatly. The non-point source pollution load proportions of total pollution load of NH3-N decrease in the normal, rainy and wet period in turn.
Transformative Learning: A Case for Using Grounded Theory as an Assessment Analytic
ERIC Educational Resources Information Center
Patterson, Barbara A. B.; Munoz, Leslie; Abrams, Leah; Bass, Caroline
2015-01-01
Transformative Learning Theory and pedagogies leverage disruptive experiences as catalysts for learning and teaching. By facilitating processes of critical analysis and reflection that challenge assumptions, transformative learning reframes what counts as knowledge and the sources and processes for gaining and producing it. Students develop a…
Gaia: "Thinking Like a Planet" as Transformative Learning
ERIC Educational Resources Information Center
Haigh, Martin
2014-01-01
Transformative learning may involve gentle perspective widening or something more traumatic. This paper explores the impact of a transformative pedagogy in a course that challenges learners to "think like a planet". Among six sources of intellectual anxiety, learners worry about: why Gaia Theory is neglected by their other courses; the…
NASA Astrophysics Data System (ADS)
Ju, Kyong-Sik; Ryo, Hyok-Su; Pak, Sung-Nam; Pak, Chang-Su; Ri, Sung-Guk; Ri, Dok-Hwan
2018-07-01
By using the generalized inverse-pole-figure model, the numbers of crystalline particles involved in different domain-switching events near the triple tetragonal-rhombohedral-orthorhombic (T-R-O) points of three-phase polycrystalline ferroelectrics have been analytically calculated, and the domain-switching events which can bring about phase transformations have been considered. Through polarization by an electric field, different numbers of crystalline particles can be involved in different phase transformations. According to the phase equilibrium conditions, the phase equilibrium compositions of the three phases coexisting near the T-R-O triple point have been evaluated from the results for the numbers of crystalline particles involved in different phase transformations.
Calculation of power spectrums from digital time series with missing data points
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.
1980-01-01
Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
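A hedged sketch of the modified Blackman-Tukey idea described above: mean lagged products are formed only from sample pairs where both values are present (assuming a uniform average spacing), and the spectrum is the Fourier transform of those mean lagged products. Windowing and the 50-80% lag truncation details are simplified, and the gappy test signal is synthetic.

```python
import numpy as np

def bt_spectrum_missing(x, max_lag_frac=0.5):
    """Blackman-Tukey-style spectrum from mean lagged products of a series
    with missing samples (marked as NaN), assuming a uniform average spacing."""
    x = np.asarray(x, float)
    good = ~np.isnan(x)
    xm = np.where(good, x - np.nanmean(x), 0.0)      # demeaned, zeros at gaps
    n = len(x)
    max_lag = int(max_lag_frac * n)

    acf = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        pair = good[: n - k] & good[k:]              # both samples present
        acf[k] = np.mean(xm[: n - k][pair] * xm[k:][pair]) if pair.any() else 0.0

    # Even extension of the ACF, then FFT of the mean lagged products.
    acf_full = np.concatenate([acf, acf[-2:0:-1]])
    psd = np.real(np.fft.fft(acf_full))
    freqs = np.fft.fftfreq(len(acf_full))            # cycles per (average) sample
    keep = freqs >= 0
    return freqs[keep], psd[keep]

# Synthetic periodic signal with 30% of samples missing and added noise.
rng = np.random.default_rng(0)
t = np.arange(512.0)
sig = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.normal(size=t.size)
sig[rng.random(t.size) < 0.3] = np.nan

f, psd = bt_spectrum_missing(sig)
print("dominant frequency ~", f[np.argmax(psd)], "cycles/sample (true value 0.05)")
```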
Toward perception-based navigation using EgoSphere
NASA Astrophysics Data System (ADS)
Kawamura, Kazuhiko; Peters, R. Alan; Wilkes, Don M.; Koku, Ahmet B.; Sekman, Ali
2002-02-01
A method for perception-based egocentric navigation of mobile robots is described. Each robot has a local short-term memory structure called the Sensory EgoSphere (SES), which is indexed by azimuth, elevation, and time. Directional sensory processing modules write information on the SES at the location corresponding to the source direction. Each robot has a partial map of its operational area that it has received a priori. The map is populated with landmarks and is not necessarily metrically accurate. Each robot is given a goal location and a route plan. The route plan is a set of via-points that are not used directly. Instead, a robot uses each point to construct a Landmark EgoSphere (LES), a circular projection of the landmarks from the map onto an EgoSphere centered at the via-point. Under normal circumstances, the LES will be mostly unaffected by slight variations in the via-point location. Thus, the route plan is transformed into a set of via-regions, each described by an LES. A robot navigates by comparing the next LES in its route plan to the current contents of its SES. It heads toward the indicated landmarks until its SES matches the LES sufficiently to indicate that the robot is near the suggested via-point. The proposed method is particularly useful for enabling the exchange of robust route information between robots under low data rate communications constraints. An example of such an exchange is given.
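The Landmark EgoSphere construction described above amounts to projecting each map landmark onto an egocentric sphere centred at a via-point, i.e. computing its azimuth and elevation. A minimal sketch is below; the landmark coordinates and via-point are invented, and the SES matching step is not shown.

```python
import numpy as np

def landmark_egosphere(via_point, landmarks):
    """Project map landmarks onto an egosphere centred at via_point.
    Returns (azimuth, elevation) in degrees for each landmark."""
    via = np.asarray(via_point, float)
    pts = np.asarray(landmarks, float) - via          # landmark offsets
    azimuth = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))
    horiz = np.hypot(pts[:, 0], pts[:, 1])
    elevation = np.degrees(np.arctan2(pts[:, 2], horiz))
    return np.c_[azimuth, elevation]

# Hypothetical map landmarks (x, y, z in metres) and a route via-point.
landmarks = np.array([
    [10.0,  0.0, 1.5],    # doorway
    [ 0.0,  8.0, 0.5],    # pillar
    [-6.0, -6.0, 2.0],    # sign
])
via_point = [1.0, 1.0, 0.0]

les = landmark_egosphere(via_point, landmarks)
for (az, el), name in zip(les, ["doorway", "pillar", "sign"]):
    print(f"{name}: azimuth {az:6.1f} deg, elevation {el:5.1f} deg")
```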
Transition and mixing in axisymmetric jets and vortex rings
NASA Technical Reports Server (NTRS)
Allen, G. A., Jr.; Cantwell, B. J.
1986-01-01
A class of impulsively started, axisymmetric, laminar jets produced by a time-dependent point source of momentum is considered. These jets are distinct flows, each starting from rest in an unbounded fluid. The study is conducted at three levels of detail. First, a generalized set of analytic creeping-flow solutions is derived, together with a method of flow classification. Second, from this set, three specific creeping-flow solutions are studied in detail: the vortex ring, the round jet, and the ramp jet. This study involves derivation of the vorticity, stream function, and entrainment diagrams, and the evolution of time lines through computer animation. From the entrainment diagrams, critical points are derived and analyzed. The flow geometry is dictated by the properties and location of critical points, which undergo bifurcation and topological transformation (a form of transition) with changing Reynolds number. Transition Reynolds numbers were calculated. A state-space trajectory was derived describing the topological behavior of these critical points. This state-space derivation yielded three states of motion which are universal for all axisymmetric jets. Third, the axisymmetric round jet is solved numerically using the unsteady laminar Navier-Stokes equations. These equations were shown to be self-similar for the round jet. Numerical calculations were performed up to a Reynolds number of 30 on a 60x60-point mesh. Animations generated from the numerical solution showed each of the three states of motion for the round jet, including the Re = 30 case.
Contour-Based Corner Detection and Classification by Using Mean Projection Transform
Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein
2014-01-01
Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images. PMID:24590354
Contour-based corner detection and classification by using mean projection transform.
Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein
2014-02-28
Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lei, E-mail: wanglei2239@126.com; Gao, Yi-Tian; State Key Laboratory of Software Development Environment, Beijing University of Aeronautics and Astronautics, Beijing 100191
2012-08-15
Under investigation in this paper is a variable-coefficient modified Korteweg-de Vries (vc-mKdV) model describing certain situations in fluid mechanics, ocean dynamics and plasma physics. An N-fold Darboux transformation (DT) of a variable-coefficient Ablowitz-Kaup-Newell-Segur (AKNS) spectral problem is constructed via a gauge transformation. Multi-solitonic solutions in terms of the double Wronskian for the vc-mKdV model are derived by reduction of the N-fold DT. Three types of solitonic interactions are discussed through figures: (1) overtaking collision; (2) head-on collision; (3) parallel solitons. The nonlinear, dispersive and dissipative terms affect the velocities of the solitonic waves, while the amplitudes of the waves depend on the perturbation term. Highlights: The N-fold DT is applied for the first time to a vc-AKNS spectral problem. Seeking a double Wronskian solution is reduced to solving two systems. Effects of the variable coefficients on the multi-solitonic waves are discussed in detail. This work solves the problem posed by Yi Zhang [Ann. Phys. 323 (2008) 3059].
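For context, the gauge-transformation construction mentioned above follows the standard pattern for Lax pairs (a general textbook relation, not a result specific to this paper): if $\Psi_x = U\Psi$ and $\Psi_t = V\Psi$, then under $\tilde{\Psi} = T\Psi$ the new coefficient matrices are

$$\tilde{U} = (T_x + TU)\,T^{-1}, \qquad \tilde{V} = (T_t + TV)\,T^{-1},$$

and the Darboux matrix $T$ is chosen so that $\tilde{U}$ and $\tilde{V}$ keep the form of the original spectral problem with new potentials.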
Histological Transformation and Progression in Follicular Lymphoma: A Clonal Evolution Study.
Kridel, Robert; Chan, Fong Chun; Mottok, Anja; Boyle, Merrill; Farinha, Pedro; Tan, King; Meissner, Barbara; Bashashati, Ali; McPherson, Andrew; Roth, Andrew; Shumansky, Karey; Yap, Damian; Ben-Neriah, Susana; Rosner, Jamie; Smith, Maia A; Nielsen, Cydney; Giné, Eva; Telenius, Adele; Ennishi, Daisuke; Mungall, Andrew; Moore, Richard; Morin, Ryan D; Johnson, Nathalie A; Sehn, Laurie H; Tousseyn, Thomas; Dogan, Ahmet; Connors, Joseph M; Scott, David W; Steidl, Christian; Marra, Marco A; Gascoyne, Randy D; Shah, Sohrab P
2016-12-01
Follicular lymphoma (FL) is an indolent, yet incurable B cell malignancy. A subset of patients experience an increased mortality rate driven by two distinct clinical end points: histological transformation and early progression after immunochemotherapy. The nature of tumor clonal dynamics leading to these clinical end points is poorly understood, and previously determined genetic alterations do not explain the majority of transformed cases or accurately predict early progressive disease. We contend that detailed knowledge of the expansion patterns of specific cell populations plus their associated mutations would provide insight into therapeutic strategies and disease biology over the time course of FL clinical histories. Using a combination of whole genome sequencing, targeted deep sequencing, and digital droplet PCR on matched diagnostic and relapse specimens, we deciphered the constituent clonal populations in 15 transformation cases and 6 progression cases, and measured the change in clonal population abundance over time. We observed widely divergent patterns of clonal dynamics in transformed cases relative to progressed cases. Transformation specimens were generally composed of clones that were rare or absent in diagnostic specimens, consistent with dramatic clonal expansions that came to dominate the transformation specimens. This pattern was independent of time to transformation and treatment modality. By contrast, early progression specimens were composed of clones that were already present in the diagnostic specimens and exhibited only moderate clonal dynamics, even in the presence of immunochemotherapy. Analysis of somatic mutations impacting 94 genes was undertaken in an extension cohort consisting of 395 samples from 277 patients in order to decipher disrupted biology in the two clinical end points. We found 12 genes that were more commonly mutated in transformed samples than in the preceding FL tumors, including TP53, B2M, CCND3, GNA13, S1PR2, and P2RY8. Moreover, ten genes were more commonly mutated in diagnostic specimens of patients with early progression, including TP53, BTG1, MKI67, and XBP1. Our results illuminate contrasting modes of evolution shaping the clinical histories of transformation and progression. They have implications for interpretation of evolutionary dynamics in the context of treatment-induced selective pressures, and indicate that transformation and progression will require different clinical management strategies.
Histological Transformation and Progression in Follicular Lymphoma: A Clonal Evolution Study
Mottok, Anja; Boyle, Merrill; Tan, King; Meissner, Barbara; Bashashati, Ali; Roth, Andrew; Shumansky, Karey; Nielsen, Cydney; Giné, Eva; Moore, Richard; Morin, Ryan D.; Sehn, Laurie H.; Tousseyn, Thomas; Dogan, Ahmet; Scott, David W.; Steidl, Christian; Gascoyne, Randy D.; Shah, Sohrab P.
2016-01-01
Background Follicular lymphoma (FL) is an indolent, yet incurable B cell malignancy. A subset of patients experience an increased mortality rate driven by two distinct clinical end points: histological transformation and early progression after immunochemotherapy. The nature of tumor clonal dynamics leading to these clinical end points is poorly understood, and previously determined genetic alterations do not explain the majority of transformed cases or accurately predict early progressive disease. We contend that detailed knowledge of the expansion patterns of specific cell populations plus their associated mutations would provide insight into therapeutic strategies and disease biology over the time course of FL clinical histories. Methods and Findings Using a combination of whole genome sequencing, targeted deep sequencing, and digital droplet PCR on matched diagnostic and relapse specimens, we deciphered the constituent clonal populations in 15 transformation cases and 6 progression cases, and measured the change in clonal population abundance over time. We observed widely divergent patterns of clonal dynamics in transformed cases relative to progressed cases. Transformation specimens were generally composed of clones that were rare or absent in diagnostic specimens, consistent with dramatic clonal expansions that came to dominate the transformation specimens. This pattern was independent of time to transformation and treatment modality. By contrast, early progression specimens were composed of clones that were already present in the diagnostic specimens and exhibited only moderate clonal dynamics, even in the presence of immunochemotherapy. Analysis of somatic mutations impacting 94 genes was undertaken in an extension cohort consisting of 395 samples from 277 patients in order to decipher disrupted biology in the two clinical end points. We found 12 genes that were more commonly mutated in transformed samples than in the preceding FL tumors, including TP53, B2M, CCND3, GNA13, S1PR2, and P2RY8. Moreover, ten genes were more commonly mutated in diagnostic specimens of patients with early progression, including TP53, BTG1, MKI67, and XBP1. Conclusions Our results illuminate contrasting modes of evolution shaping the clinical histories of transformation and progression. They have implications for interpretation of evolutionary dynamics in the context of treatment-induced selective pressures, and indicate that transformation and progression will require different clinical management strategies. PMID:27959929
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Aardt, Jan; Romanczyk, Paul; van Leeuwen, Martin
Terrestrial laser scanning (TLS) has emerged as an effective tool for rapid comprehensive measurement of object structure. Registration of TLS data is an important prerequisite to overcome the limitations of occlusion. However, due to the high dissimilarity of point cloud data collected from disparate viewpoints in the forest environment, adequate marker-free registration approaches have not been developed. The majority of studies instead rely on the utilization of artificial tie points (e.g., reflective tooling balls) placed within a scene to aid in coordinate transformation. We present a technique for generating view-invariant feature descriptors that are intrinsic to the point cloud data and, thus, enable blind marker-free registration in forest environments. To overcome the limitation of initial pose estimation, we employ a voting method to blindly determine the optimal pairwise transformation parameters, without an a priori estimate of the initial sensor pose. To provide embedded error metrics, we developed a set theory framework in which a circular transformation is traversed between disjoint tie point subsets. This provides an upper estimate of the Root Mean Square Error (RMSE) confidence associated with each pairwise transformation. Output RMSE errors are commensurate with the RMSE of input tie point locations. Thus, while the mean output RMSE = 16.3 cm, improved results could be achieved with a more precise laser scanning system. This study 1) quantifies the RMSE of the proposed marker-free registration approach, 2) assesses the validity of embedded confidence metrics using receiver operator characteristic (ROC) curves, and 3) informs optimal sample spacing considerations for TLS data collection in New England forests. Furthermore, while the implications for rapid, accurate, and precise forest inventory are obvious, the conceptual framework outlined here could potentially be extended to built environments.
Van Aardt, Jan; Romanczyk, Paul; van Leeuwen, Martin; ...
2016-04-04
Terrestrial laser scanning (TLS) has emerged as an effective tool for rapid comprehensive measurement of object structure. Registration of TLS data is an important prerequisite to overcome the limitations of occlusion. However, due to the high dissimilarity of point cloud data collected from disparate viewpoints in the forest environment, adequate marker-free registration approaches have not been developed. The majority of studies instead rely on the utilization of artificial tie points (e.g., reflective tooling balls) placed within a scene to aid in coordinate transformation. We present a technique for generating view-invariant feature descriptors that are intrinsic to the point cloud data and, thus, enable blind marker-free registration in forest environments. To overcome the limitation of initial pose estimation, we employ a voting method to blindly determine the optimal pairwise transformation parameters, without an a priori estimate of the initial sensor pose. To provide embedded error metrics, we developed a set theory framework in which a circular transformation is traversed between disjoint tie point subsets. This provides an upper estimate of the Root Mean Square Error (RMSE) confidence associated with each pairwise transformation. Output RMSE errors are commensurate with the RMSE of input tie point locations. Thus, while the mean output RMSE = 16.3 cm, improved results could be achieved with a more precise laser scanning system. This study 1) quantifies the RMSE of the proposed marker-free registration approach, 2) assesses the validity of embedded confidence metrics using receiver operator characteristic (ROC) curves, and 3) informs optimal sample spacing considerations for TLS data collection in New England forests. Furthermore, while the implications for rapid, accurate, and precise forest inventory are obvious, the conceptual framework outlined here could potentially be extended to built environments.
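The pairwise-transformation step discussed in both records above can be illustrated with a generic least-squares rigid fit between matched tie points (Kabsch algorithm) that also reports the residual RMSE; the paper's blind voting scheme and circular set-theory RMSE framework are not reproduced here, and the function name is a placeholder.

```python
import numpy as np

def rigid_fit_rmse(P, Q):
    """Least-squares rigid transform Q ~ R @ P + t between matched 3-D tie
    points (Kabsch algorithm); returns rotation, translation and residual RMSE."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    resid = Q - (P @ R.T + t)
    rmse = np.sqrt((resid ** 2).sum(axis=1).mean())
    return R, t, rmse
```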
Thermal stabilization of static single-mirror Fourier transform spectrometers
NASA Astrophysics Data System (ADS)
Schardt, Michael; Schwaller, Christian; Tremmel, Anton J.; Koch, Alexander W.
2017-05-01
Fourier transform spectroscopy has become a standard method for spectral analysis of infrared light. With this method, an interferogram is created by two-beam interference and is subsequently Fourier-transformed. Most Fourier transform spectrometers used today provide the interferogram in the temporal domain. In contrast, static Fourier transform spectrometers generate interferograms in the spatial domain. One example of this type of spectrometer is the static single-mirror Fourier transform spectrometer, which offers a high etendue in combination with a simple, miniaturized optics design. As no moving parts are required, it also features a high vibration resistance and high measurement rates. However, it is susceptible to temperature variations. In this paper, we therefore discuss the main sources for temperature-induced errors in static single-mirror Fourier transform spectrometers: changes in the refractive index of the optical components used, variations of the detector sensitivity, and thermal expansion of the housing. As these errors manifest themselves in temperature-dependent wavenumber shifts and intensity shifts, they prevent static single-mirror Fourier transform spectrometers from delivering long-term stable spectra. To eliminate these shifts, we additionally present a work concept for the thermal stabilization of the spectrometer. With this stabilization, static single-mirror Fourier transform spectrometers are made suitable for infrared process spectroscopy under harsh thermal environmental conditions. As the static single-mirror Fourier transform spectrometer uses the so-called source-doubling principle, many of the mentioned findings are transferable to other designs of static Fourier transform spectrometers based on the same principle.
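A minimal sketch of the core processing step for a spatial-domain interferogram, assuming uniform sampling in optical path difference; this is the generic static-FTS recipe (mean removal, apodisation, FFT), not the instrument's calibrated pipeline, and the parameter names are illustrative.

```python
import numpy as np

def spectrum_from_spatial_interferogram(interferogram, opd_step):
    """Recover a spectrum from a spatially sampled interferogram.

    interferogram: 1-D intensity samples across the detector, one sample per
    optical path difference step opd_step (in cm).  Returns wavenumbers (cm^-1)
    and spectral magnitude.  Uniform OPD sampling assumed.
    """
    y = np.asarray(interferogram, float)
    y = y - y.mean()                              # remove the DC (mean intensity) term
    y = y * np.hanning(len(y))                    # apodisation to reduce ringing
    spectrum = np.abs(np.fft.rfft(y))
    wavenumbers = np.fft.rfftfreq(len(y), d=opd_step)   # cycles per cm = cm^-1
    return wavenumbers, spectrum
```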
Gong, Tingting; Tao, Yuxian; Zhang, Xiangru; Hu, Shaoyang; Yin, Jinbao; Xian, Qiming; Ma, Jian; Xu, Bin
2017-09-19
Aromatic iodinated disinfection byproducts (DBPs) are a newly identified category of highly toxic DBPs. Among the identified aromatic iodinated DBPs, 2,4,6-triiodophenol and 2,6-diiodo-4-nitrophenol have shown relatively widespread occurrence and high toxicity. In this study, we found that 4-iodophenol underwent transformation to form 2,4,6-triiodophenol and 2,6-diiodo-4-nitrophenol in the presence of monochloramine. The transformation pathways were investigated, the decomposition kinetics of 4-iodophenol and the formation of 2,4,6-triiodophenol and 2,6-diiodo-4-nitrophenol were studied, the factors affecting the transformation were examined, the toxicity change during the transformation was evaluated, and the occurrence of the proposed transformation pathways during chloramination of source water was verified. The results revealed that 2,4,6-triiodophenol and 2,6-diiodo-4-nitrophenol, which could account for 71.0% of iodine in the transformed 4-iodophenol, were important iodinated transformation products of 4-iodophenol in the presence of monochloramine. The transformation pathways of 4-iodophenol in the presence of monochloramine were proposed and verified. The decomposition of 4-iodophenol in the presence of monochloramine followed a pseudo-second-order decay. Various factors including monochloramine dose, pH, temperature, nitrite concentration, and free chlorine contact time (before chloramination) affected the transformation. The cytotoxicity of the chloraminated 4-iodophenol samples increased continuously with contact time. The proposed transformation pathways occurred during chloramination of source water.
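For reference, a pseudo-second-order decay as mentioned above takes the generic integrated form below (rate constant written here as k_obs; the study's fitted values are not reproduced):

$$-\frac{d[\mathrm{C}]}{dt} = k_{\mathrm{obs}}[\mathrm{C}]^{2} \;\Longrightarrow\; \frac{1}{[\mathrm{C}]_t} = \frac{1}{[\mathrm{C}]_0} + k_{\mathrm{obs}}\,t,$$

so a plot of 1/[C] against contact time is linear with slope k_obs.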
NASA Astrophysics Data System (ADS)
Haas, Fernando
2016-11-01
A didactic and systematic derivation of Noether point symmetries and conserved currents is put forward in special relativistic field theories, without a priori assumptions about the transformation laws. Given the Lagrangian density, the invariance condition develops as a set of partial differential equations determining the symmetry transformation. The solution is provided in the case of real scalar, complex scalar, free electromagnetic, and charged electromagnetic fields. Besides the usual conservation laws, a less popular symmetry is analyzed: the symmetry associated with the linear superposition of solutions, whenever applicable. The role of gauge invariance is emphasized. The case of the charged scalar particle under external electromagnetic fields is considered, and the accompanying Noether point symmetries determined. Noether point symmetries for a dynamical system in extended gravity cosmology are also deduced.
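As a concrete illustration of the conserved currents discussed above, for a real scalar field with Lagrangian density $\mathcal{L}(\phi,\partial_\mu\phi)$ and a point symmetry generated by $x^\mu \to x^\mu + \epsilon\,\xi^\mu$, $\phi \to \phi + \epsilon\,\eta$, the standard Noether current (textbook form, not a result specific to this paper) is

$$J^{\mu} = \frac{\partial\mathcal{L}}{\partial(\partial_{\mu}\phi)}\left(\eta - \xi^{\nu}\partial_{\nu}\phi\right) + \xi^{\mu}\mathcal{L}, \qquad \partial_{\mu}J^{\mu} = 0 \ \text{on-shell},$$

and the invariance condition on $(\xi^\mu,\eta)$ is the set of partial differential equations referred to in the abstract.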
A modular theory of multisensory integration for motor control
Tagliabue, Michele; McIntyre, Joseph
2014-01-01
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single, optimal estimation of the target's position to be compared with a single optimal estimation of the hand. Rather, it employs a more modular approach in which the overall behavior is built by computing multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine at a computational level two formulations of concurrent models for sensory integration and compare these to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance on one sensory modality toward a greater reliance on another and investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals will co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame. PMID:24550816
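The maximum-likelihood combination rule referred to above has the familiar inverse-variance-weighting form (a standard result, independent of the concurrent-versus-converging question examined in the article): for two unbiased estimates $\hat{x}_1, \hat{x}_2$ with variances $\sigma_1^2, \sigma_2^2$,

$$\hat{x} = \frac{\sigma_2^{2}\,\hat{x}_1 + \sigma_1^{2}\,\hat{x}_2}{\sigma_1^{2} + \sigma_2^{2}}, \qquad \sigma^{2} = \frac{\sigma_1^{2}\sigma_2^{2}}{\sigma_1^{2} + \sigma_2^{2}},$$

so every sensorimotor transformation that adds noise inflates the corresponding $\sigma_i^2$ and reduces that modality's weight in the combined estimate.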
Husserl, Johana; Hughes, Joseph B.
2012-01-01
Flavoprotein reductases that catalyze the transformation of nitroglycerin (NG) to dinitro- or mononitroglycerols enable bacteria containing such enzymes to use NG as the nitrogen source. The inability to use the resulting mononitroglycerols limits most strains to incomplete denitration of NG. Recently, Arthrobacter strain JBH1 was isolated for the ability to grow on NG as the sole source of carbon and nitrogen, but the enzymes and mechanisms involved were not established. Here, the enzymes that enable the Arthrobacter strain to incorporate NG into a productive pathway were identified. Enzyme assays indicated that the transformation of nitroglycerin to mononitroglycerol is NADPH dependent and that the subsequent transformation of mononitroglycerol is ATP dependent. Cloning and heterologous expression revealed that a flavoprotein catalyzes selective denitration of NG to 1-mononitroglycerol (1-MNG) and that 1-MNG is transformed to 1-nitro-3-phosphoglycerol by a glycerol kinase homolog. Phosphorylation of the nitroester intermediate enables the subsequent denitration of 1-MNG in a productive pathway that supports the growth of the isolate and mineralization of NG. PMID:22427495
Husserl, Johana; Hughes, Joseph B; Spain, Jim C
2012-05-01
Flavoprotein reductases that catalyze the transformation of nitroglycerin (NG) to dinitro- or mononitroglycerols enable bacteria containing such enzymes to use NG as the nitrogen source. The inability to use the resulting mononitroglycerols limits most strains to incomplete denitration of NG. Recently, Arthrobacter strain JBH1 was isolated for the ability to grow on NG as the sole source of carbon and nitrogen, but the enzymes and mechanisms involved were not established. Here, the enzymes that enable the Arthrobacter strain to incorporate NG into a productive pathway were identified. Enzyme assays indicated that the transformation of nitroglycerin to mononitroglycerol is NADPH dependent and that the subsequent transformation of mononitroglycerol is ATP dependent. Cloning and heterologous expression revealed that a flavoprotein catalyzes selective denitration of NG to 1-mononitroglycerol (1-MNG) and that 1-MNG is transformed to 1-nitro-3-phosphoglycerol by a glycerol kinase homolog. Phosphorylation of the nitroester intermediate enables the subsequent denitration of 1-MNG in a productive pathway that supports the growth of the isolate and mineralization of NG.
The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm
NASA Astrophysics Data System (ADS)
Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero
1999-10-01
We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.
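A minimal sketch of the general idea behind such wavelet-space detection (Mexican-hat wavelet correlation of a counts image at several scales, then thresholding); the Brera algorithm's background subtraction, decimation, multi-source fitting, and Monte Carlo thresholds are not reproduced, and the threshold rule below is a placeholder.

```python
import numpy as np
from scipy import ndimage

def mexican_hat_kernel(scale, size=None):
    """2-D Mexican-hat (negative Laplacian-of-Gaussian-like) wavelet kernel."""
    if size is None:
        size = int(8 * scale) | 1                 # odd support
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    q = (X**2 + Y**2) / (2.0 * scale**2)
    return (1.0 - q) * np.exp(-q)

def wavelet_detect(image, scales=(2, 4, 8), nsigma=5.0):
    """Flag pixels whose wavelet coefficient exceeds nsigma times a robust
    (MAD-based) noise estimate at any scale.  Illustrative only: real
    thresholds would come from Monte Carlo simulations of the background."""
    mask = np.zeros(image.shape, bool)
    for s in scales:
        w = ndimage.convolve(image.astype(float), mexican_hat_kernel(s), mode="reflect")
        sigma = 1.4826 * np.median(np.abs(w - np.median(w)))   # robust noise scale
        mask |= w > np.median(w) + nsigma * sigma
    return mask
```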
Classification of footwear outsole patterns using Fourier transform and local interest points.
Richetelli, Nicole; Lee, Mackenzie C; Lasky, Carleen A; Gump, Madison E; Speir, Jacqueline A
2017-06-01
Successful classification of questioned footwear has tremendous evidentiary value; the result can minimize the potential suspect pool and link a suspect to a victim, a crime scene, or even multiple crime scenes to each other. With this in mind, several different automated and semi-automated classification models have been applied to the forensic footwear recognition problem, with superior performance commonly associated with two different approaches: correlation of image power (magnitude) or phase, and the use of local interest points transformed using the Scale Invariant Feature Transform (SIFT) and compared using Random Sample Consensus (RANSAC). Despite the distinction associated with each of these methods, all three have not been cross-compared using a single dataset, of limited quality (i.e., characteristic of crime scene-like imagery), and created using a wide combination of image inputs. To address this question, the research presented here examines the classification performance of the Fourier-Mellin transform (FMT), phase-only correlation (POC), and local interest points (transformed using SIFT and compared using RANSAC), as a function of inputs that include mixed media (blood and dust), transfer mechanisms (gel lifters), enhancement techniques (digital and chemical) and variations in print substrate (ceramic tiles, vinyl tiles and paper). Results indicate that POC outperforms both FMT and SIFT+RANSAC, regardless of image input (type, quality and totality), and that the difference in stochastic dominance detected for POC is significant across all image comparison scenarios evaluated in this study. Copyright © 2017 Elsevier B.V. All rights reserved.
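Of the three approaches compared above, phase-only correlation is the simplest to state: normalise the cross-power spectrum to unit magnitude so that only phase survives, then inverse-transform. The sketch below is that generic POC core, not the study's full pipeline (no windowing, band-limiting, or Fourier-Mellin log-polar stage).

```python
import numpy as np

def phase_only_correlation(img1, img2):
    """Phase-only correlation surface between two equally sized grayscale images.
    The peak location gives the translation estimate; the peak height is a match score."""
    F1 = np.fft.fft2(np.asarray(img1, float))
    F2 = np.fft.fft2(np.asarray(img2, float))
    cross = F1 * np.conj(F2)
    cross /= np.maximum(np.abs(cross), 1e-12)     # keep phase only
    poc = np.fft.fftshift(np.real(np.fft.ifft2(cross)))
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    return poc, peak, poc[peak]
```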
ERIC Educational Resources Information Center
Touval, Ayana
1992-01-01
Introduces the concept of maximum and minimum function values as turning points on the function's graphic representation and presents a method for finding these values without using calculus. The process of utilizing transformations to find the turning point of a quadratic function is extended to find the turning points of cubic functions. (MDH)
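The transformation argument for the quadratic case can be made concrete by completing the square (a standard identity, included here only as illustration):

$$f(x) = ax^{2} + bx + c = a\left(x + \frac{b}{2a}\right)^{2} + c - \frac{b^{2}}{4a},$$

so the graph is a translated copy of $y = ax^{2}$ and the turning point sits at $\left(-\frac{b}{2a},\, c - \frac{b^{2}}{4a}\right)$, with no calculus required; the cubic case in the article extends the same translation idea.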
ERIC Educational Resources Information Center
Barnes, Julie; Jaqua, Kathy
2011-01-01
A kinesthetic approach to developing ideas of function transformations can get students physically and intellectually involved. This article presents low- or no-cost activities which use kinesthetics to support high school students' mathematical understanding of transformations of function graphs. The important point of these activities is to help…
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, as in the tilted-wave interferometer (TWI). It is necessary to improve measurement efficiency by obtaining the optimum point-source array for different test pieces before TWI measurements. To form a point-source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point-source array and the test surface. However, the optimal point sources are irregularly distributed. To achieve a flexible point-source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples for two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proves the validity of the proposed interference system.
Leadership in transformation: a longitudinal study in a nursing organization.
Viitala, Riitta
2014-01-01
Not only does leadership produce changes, but those changes produce leadership in organisations. The purpose of this paper is to present a theoretical and empirical analysis of the transformation of leadership at two different historical points in a health care organisation. It leans on the perspective of social constructionism, drawing especially from the ideas of Berger and Luckmann (1966). The paper seeks to improve understanding of how leaders themselves construct leadership in relation to organisational change. The empirical material was gathered in a longitudinal case study in a nursing organisation in two different historical and situational points. It consists of written narratives produced by nurse leaders that are analysed by applying discourse analysis. The empirical study revealed that the constructions of leadership were dramatically different at the two different historical and situational points. Leadership showed up as a complex, fragile and changing phenomenon, which fluctuates along with the other organisational changes. The results signal the importance of agency in leadership and the central role of "significant others". The paper questions the traditional categorisation and labelling of leadership as well as the cross-sectional studies in understanding leadership transformation. Its originality relates to the longitudinal perspective on transformation of leadership in the context of a health care organisation.
Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China
NASA Astrophysics Data System (ADS)
Zhu, Lei; Liu, WanQing
2018-02-01
The TongGuan Section of the Weihe River Watershed is a provincial boundary section between Shaanxi Province and Henan Province, China. The Weihe River Watershed above the TongGuan Section is taken as the study area in this paper, and COD is chosen as the water quality parameter. Based on the discharge characteristics of point-source and non-point-source pollution, a characteristic section load (CSLD) method is proposed, and the point-source and non-point-source pollution loads of the watershed above the TongGuan Section are calculated for the rainy, normal and dry seasons of 2013. The results show that the monthly point-source pollution loads discharge stably, whereas the monthly non-point-source pollution loads vary greatly, and that the proportion of the total COD load contributed by non-point sources decreases in turn from the rainy to the normal to the dry season.
Three Dimensional Orbital Stability About the Earth-Moon Equilateral Libration Points.
1980-12-01
need to be rotated to the ecliptic. If ε is the obliquity of the ecliptic, then the transformation matrix for this is the standard rotation about the X-axis (the vernal equinox direction) through ε, R_x(ε) = [[1, 0, 0], [0, cos ε, sin ε], [0, -sin ε, cos ε]] ... the use of the above transformation matrix. The frame for the analysis of the problem will be an Earth-centered ecliptic nonrotating rectangular system ... The X-axis will point toward the vernal equinox and the Z-axis will be perpendicular to the ecliptic, with the XY-plane coincident with the ecliptic.
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed into a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm and a generalized primal-dual interior point linear programming algorithm.
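A minimal sketch of the primal affine-scaling iteration mentioned above (Dikin's method) for a standard-form LP; this is the textbook version under the assumption of a known strictly feasible starting point, not the report's generalized primal-dual scheme.

```python
import numpy as np

def affine_scaling_lp(A, b, c, x0, gamma=0.9, tol=1e-8, max_iter=200):
    """Primal affine-scaling (Dikin) iteration for  min c^T x  s.t.  Ax = b, x > 0.
    x0 must be strictly feasible (A x0 = b, x0 > 0)."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        X2 = np.diag(x * x)                         # scaling matrix X^2
        w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
        r = c - A.T @ w                             # reduced costs
        d = -X2 @ r                                 # search direction (keeps A d = 0)
        if np.abs(x * r).sum() < tol:               # approximate complementarity: stop
            break
        neg = d < 0
        if not neg.any():                           # no blocking component: LP unbounded
            raise ValueError("problem appears unbounded")
        alpha = gamma * np.min(-x[neg] / d[neg])    # stay strictly inside x > 0
        x = x + alpha * d
    return x
```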
GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, M.
1959-06-01
GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point-source buildup factor inside the integral over the point elements. Between the main shield and the line source, additional shields can be introduced, which are either plane slabs parallel to the main shield or cylindrical rings coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited to the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
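The quantity GARLIC integrates can be sketched with a generic point-kernel formula: each line element contributes an attenuated, buildup-corrected inverse-square term, integrated along the line. The snippet below is a simplified illustration (uniform line source parallel to a single slab shield, a linear buildup factor by default); GARLIC's cosine source distributions, arbitrary orientations, and additional shields are not modelled, and all names are placeholders.

```python
import numpy as np

def line_source_dose(S_per_len, L, d, mu, t, buildup=lambda mfp: 1.0 + mfp, n=2001):
    """Point-kernel dose-rate integral for a uniform line source of length L,
    parallel to a slab shield of thickness t (attenuation coefficient mu),
    with the detector at perpendicular distance d from the line's midpoint."""
    x = np.linspace(-L / 2, L / 2, n)            # position of each line element
    r = np.sqrt(x**2 + d**2)                     # element-to-detector distance
    sec = r / d                                  # slant-path factor through the slab
    mfp = mu * t * sec                           # mean free paths along the slant path
    kernel = S_per_len / (4.0 * np.pi * r**2) * buildup(mfp) * np.exp(-mfp)
    return np.trapz(kernel, x)
```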
3D craniofacial registration using thin-plate spline transform and cylindrical surface projection
Chen, Yucong; Deng, Qingqiong; Duan, Fuqing
2017-01-01
Craniofacial registration is used to establish the point-to-point correspondence in a unified coordinate system among human craniofacial models. It is the foundation of craniofacial reconstruction and other craniofacial statistical analysis research. In this paper, a non-rigid 3D craniofacial registration method using thin-plate spline transform and cylindrical surface projection is proposed. First, the gradient descent optimization is utilized to improve a cylindrical surface fitting (CSF) for the reference craniofacial model. Second, the thin-plate spline transform (TPST) is applied to deform a target craniofacial model to the reference model. Finally, the cylindrical surface projection (CSP) is used to derive the point correspondence between the reference and deformed target models. To accelerate the procedure, the iterative closest point ICP algorithm is used to obtain a rough correspondence, which can provide a possible intersection area of the CSP. Finally, the inverse TPST is used to map the obtained corresponding points from the deformed target craniofacial model to the original model, and it can be realized directly by the correspondence between the original target model and the deformed target model. Three types of registration, namely, reflexive, involutive and transitive registration, are carried out to verify the effectiveness of the proposed craniofacial registration algorithm. Comparison with the methods in the literature shows that the proposed method is more accurate. PMID:28982117
3D craniofacial registration using thin-plate spline transform and cylindrical surface projection.
Chen, Yucong; Zhao, Junli; Deng, Qingqiong; Duan, Fuqing
2017-01-01
Craniofacial registration is used to establish the point-to-point correspondence in a unified coordinate system among human craniofacial models. It is the foundation of craniofacial reconstruction and other craniofacial statistical analysis research. In this paper, a non-rigid 3D craniofacial registration method using thin-plate spline transform and cylindrical surface projection is proposed. First, the gradient descent optimization is utilized to improve a cylindrical surface fitting (CSF) for the reference craniofacial model. Second, the thin-plate spline transform (TPST) is applied to deform a target craniofacial model to the reference model. Finally, the cylindrical surface projection (CSP) is used to derive the point correspondence between the reference and deformed target models. To accelerate the procedure, the iterative closest point ICP algorithm is used to obtain a rough correspondence, which can provide a possible intersection area of the CSP. Finally, the inverse TPST is used to map the obtained corresponding points from the deformed target craniofacial model to the original model, and it can be realized directly by the correspondence between the original target model and the deformed target model. Three types of registration, namely, reflexive, involutive and transitive registration, are carried out to verify the effectiveness of the proposed craniofacial registration algorithm. Comparison with the methods in the literature shows that the proposed method is more accurate.
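A compact sketch of the TPST step used in both versions of this work: fit a 3-D thin-plate/polyharmonic spline from control points on one model to the corresponding points on the other, then warp arbitrary vertices. The kernel choice U(r) = r, the absence of regularisation, and the function names are assumptions of this sketch rather than the authors' exact formulation.

```python
import numpy as np

def tps_3d_fit(src, dst, kernel=lambda r: r):
    """Fit a 3-D thin-plate/polyharmonic spline mapping src control points onto
    dst points and return a warp function for arbitrary 3-D point sets."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    K = kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])            # affine part [1, x, y, z]
    A = np.zeros((n + 4, n + 4))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T      # bordered system
    b = np.vstack([dst, np.zeros((4, 3))])
    coef = np.linalg.solve(A, b)
    w, a = coef[:n], coef[n:]

    def warp(pts):
        pts = np.asarray(pts, float)
        U = kernel(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
        return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

    return warp
```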
Development of Simulated Disturbing Source for Isolation Switch
NASA Astrophysics Data System (ADS)
Cheng, Lin; Liu, Xiang; Deng, Xiaoping; Pan, Zhezhe; Zhou, Hang; Zhu, Yong
2018-01-01
To simulate the harsh electromagnetic environment of an actual substation and to support research on electromagnetic compatibility testing of electronic instrument transformers, an isolating-switch simulated disturbance-source system was developed on the basis of the original test system, which used an isolating switch as the disturbance source for electronic instrument transformer electromagnetic compatibility tests, in order to promote standardization of the original test. In this paper, a circuit breaker is used to control the opening and closing of a gap arc to simulate the operation of an isolating switch, and the simulated disturbance-source system is designed accordingly. Comparison with actual isolating-switch test results proves that the system can meet the test requirements and that the simulated disturbance-source system has good stability and high reliability.
Changing the Culture of Academic Medicine: Critical Mass or Critical Actors?
Newbill, Sharon L.; Cardinali, Gina; Morahan, Page S.; Chang, Shine; Magrane, Diane
2017-01-01
Abstract Purpose: By 2006, women constituted 34% of academic medical faculty, reaching a critical mass. Theoretically, with critical mass, culture and policy supportive of gender equity should be evident. We explore whether having a critical mass of women transforms institutional culture and organizational change. Methods: Career development program participants were interviewed to elucidate their experiences in academic health centers (AHCs). Focus group discussions were held with institutional leaders to explore their perceptions about contemporary challenges related to gender and leadership. Content analysis of both data sources revealed points of convergence. Findings were interpreted using the theory of critical mass. Results: Two nested domains emerged: the individual domain included the rewards and personal satisfaction of meaningful work, personal agency, tensions between cultural expectations of family and academic roles, and women's efforts to work for gender equity. The institutional domain depicted the sociocultural environment of AHCs that shaped women's experience, both personally and professionally, lack of institutional strategies to engage women in organizational initiatives, and the influence of one leader on women's ascent to leadership. Conclusions: The predominant evidence from this research demonstrates that the institutional barriers and sociocultural environment continue to be formidable obstacles confronting women, stalling the transformational effects expected from achieving a critical mass of women faculty. We conclude that the promise of critical mass as a turning point for women should be abandoned in favor of “critical actor” leaders, both women and men, who individually and collectively have the commitment and power to create gender-equitable cultures in AHCs. PMID:28092473
Two frameworks for integrating knowledge in induction
NASA Technical Reports Server (NTRS)
Rosenbloom, Paul S.; Hirsh, Haym; Cohen, William W.; Smith, Benjamin D.
1994-01-01
The use of knowledge in inductive learning is critical for improving the quality of the concept definitions generated, reducing the number of examples required in order to learn effective concept definitions, and reducing the computation needed to find good concept definitions. Relevant knowledge may come in many forms (such as examples, descriptions, advice, and constraints) and from many sources (such as books, teachers, databases, and scientific instruments). How to extract the relevant knowledge from this plethora of possibilities, and then to integrate it so as to appropriately affect the induction process, is perhaps the key issue at this point in inductive learning. Here the focus is on the integration part of this problem; that is, how induction algorithms can, and do, utilize a range of extracted knowledge. Preliminary work on a transformational framework for defining knowledge-intensive inductive algorithms out of relatively knowledge-free algorithms is described, as is a more tentative problem-space framework that attempts to cover all induction algorithms within a single general approach. These frameworks help to organize what is known about current knowledge-intensive induction algorithms, and to point towards new algorithms.
Compensation for loads during arm movements using equilibrium-point control.
Gribble, P L; Ostry, D J
2000-12-01
A significant problem in motor control is how information about movement error is used to modify control signals to achieve desired performance. A potential source of movement error and one that is readily controllable experimentally relates to limb dynamics and associated movement-dependent loads. In this paper, we have used a position control model to examine changes to control signals for arm movements in the context of movement-dependent loads. In the model, based on the equilibrium-point hypothesis, equilibrium shifts are adjusted directly in proportion to the positional error between desired and actual movements. The model is used to simulate multi-joint movements in the presence of both "internal" loads due to joint interaction torques, and externally applied loads resulting from velocity-dependent force fields. In both cases it is shown that the model can achieve close correspondence to empirical data using a simple linear adaptation procedure. An important feature of the model is that it achieves compensation for loads during movement without the need for either coordinate transformations between positional error and associated corrective forces, or inverse dynamics calculations.
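A one-degree-of-freedom sketch of the adaptation rule described above: the equilibrium-point trajectory is shifted, trial by trial, in proportion to the positional error produced under a velocity-dependent load. The dynamics, gains, and desired trajectory below are illustrative choices, not the paper's fitted model.

```python
import numpy as np

def simulate_trial(lmbda, dt=0.005, m=1.0, k=30.0, b=5.0, load=8.0):
    """Integrate 1-DOF limb dynamics m*x'' = -k*(x - lambda(t)) - b*x' - load*x'
    (a velocity-dependent external force) for one movement; return x(t)."""
    x, v = 0.0, 0.0
    xs = np.empty_like(lmbda)
    for i, lam in enumerate(lmbda):
        a = (-k * (x - lam) - b * v - load * v) / m
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

# Desired minimum-jerk-like ramp, used as both the target and the initial
# equilibrium-point trajectory (illustrative choice).
T, dt = 1.0, 0.005
t = np.arange(0.0, T, dt)
desired = 0.1 * (10 * (t / T) ** 3 - 15 * (t / T) ** 4 + 6 * (t / T) ** 5)

lmbda = desired.copy()
for trial in range(20):
    actual = simulate_trial(lmbda, dt=dt)
    lmbda = lmbda + 0.5 * (desired - actual)      # linear, error-proportional adaptation
print("final max error (m):", np.abs(desired - simulate_trial(lmbda, dt=dt)).max())
```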
High level language-based robotic control system
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Inventor); Kruetz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)
1994-01-01
This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.
High level language-based robotic control system
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Inventor); Kreutz, Kenneth K. (Inventor); Jain, Abhinandan (Inventor)
1996-01-01
This invention is a robot control system based on a high level language implementing a spatial operator algebra. There are two high level languages included within the system. At the highest level, applications programs can be written in a robot-oriented applications language including broad operators such as MOVE and GRASP. The robot-oriented applications language statements are translated into statements in the spatial operator algebra language. Programming can also take place using the spatial operator algebra language. The statements in the spatial operator algebra language from either source are then translated into machine language statements for execution by a digital control computer. The system also includes the capability of executing the control code sequences in a simulation mode before actual execution to assure proper action at execution time. The robot's environment is checked as part of the process and dynamic reconfiguration is also possible. The languages and system allow the programming and control of multiple arms and the use of inward/outward spatial recursions in which every computational step can be related to a transformation from one point in the mechanical robot to another point to name two major advantages.
30 CFR 75.900-2 - Approved circuit schemes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... device installed in the main secondary circuit at the source transformer may be used to provide undervoltage protection for each circuit that receives power from that transformer. (c) One circuit breaker may...
30 CFR 75.900-2 - Approved circuit schemes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... device installed in the main secondary circuit at the source transformer may be used to provide undervoltage protection for each circuit that receives power from that transformer. (c) One circuit breaker may...
30 CFR 75.900-2 - Approved circuit schemes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... device installed in the main secondary circuit at the source transformer may be used to provide undervoltage protection for each circuit that receives power from that transformer. (c) One circuit breaker may...
30 CFR 75.900-2 - Approved circuit schemes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... device installed in the main secondary circuit at the source transformer may be used to provide undervoltage protection for each circuit that receives power from that transformer. (c) One circuit breaker may...
30 CFR 75.900-2 - Approved circuit schemes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... device installed in the main secondary circuit at the source transformer may be used to provide undervoltage protection for each circuit that receives power from that transformer. (c) One circuit breaker may...
Local Mechanical Response of Superelastic NiTi Shape-Memory Alloy Under Uniaxial Loading
NASA Astrophysics Data System (ADS)
Xiao, Yao; Zeng, Pan; Lei, Liping; Du, Hongfei
2015-11-01
In this paper, we focus on the local mechanical response of superelastic NiTi SMA at different temperatures under uniaxial loading. In situ digital image correlation (DIC) is applied to measure the local strain of the specimen. Based on the experimental results, two types of mechanical response, characterized by localized and homogeneous phase transformation respectively, are identified. Motivated by the residual-strain accumulation observed in the superelastic response, we conduct controlled experiments and infer that, for a given material point, all (or most) of the irreversibility is accumulated while the transformation front is traversing that point. A robust constitutive model is established to explain the experimental phenomena, and we successfully simulate the evolution of local strain in close agreement with the experimental results.
Hsi-Ping, Liu
1990-01-01
Impulse responses including near-field terms have been obtained in closed form for the zero-offset vertical seismic profiles generated by a horizontal point force acting on the surface of an elastic half-space. The method is based on the correspondence principle. Through transformation of variables, the Fourier transform of the elastic impulse response is put in a form such that the Fourier transform of the corresponding anelastic impulse response can be expressed as elementary functions and their definite integrals involving distance, angular frequency, phase velocities, and attenuation factors. These results are used for accurate calculation of shear-wave arrival rise times of synthetic seismograms needed for data interpretation of anelastic-attenuation measurements in near-surface sediment. -Author
Extreme ultraviolet interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Kenneth A.
EUV lithography is a promising and viable candidate for circuit fabrication with 0.1-micron critical dimension and smaller. In order to achieve diffraction-limited performance, all-reflective multilayer-coated lithographic imaging systems operating near 13-nm wavelength and 0.1 NA have system wavefront tolerances of 0.27 nm, or 0.02 waves RMS. Owing to the highly-sensitive resonant reflective properties of multilayer mirrors and extraordinarily tight tolerances set forth for their fabrication, EUV optical systems require at-wavelength EUV interferometry for final alignment and qualification. This dissertation discusses the development and successful implementation of high-accuracy EUV interferometric techniques. Proof-of-principle experiments with a prototype EUV point-diffraction interferometer for the measurement of Fresnel zoneplate lenses first demonstrated sub-wavelength EUV interferometric capability. These experiments spurred the development of the superior phase-shifting point-diffraction interferometer (PS/PDI), which has been implemented for the testing of an all-reflective lithographic-quality EUV optical system. Both systems rely on pinhole diffraction to produce spherical reference wavefronts in a common-path geometry. Extensive experiments demonstrate EUV wavefront-measuring precision beyond 0.02 waves RMS. EUV imaging experiments provide verification of the high-accuracy of the point-diffraction principle, and demonstrate the utility of the measurements in successfully predicting imaging performance. Complementary to the experimental research, several areas of theoretical investigation related to the novel PS/PDI system are presented. First-principles electromagnetic field simulations of pinhole diffraction are conducted to ascertain the upper limits of measurement accuracy and to guide selection of the pinhole diameter. Investigations of the relative merits of different PS/PDI configurations accompany a general study of the most significant sources of systematic measurement errors. To overcome a variety of experimental difficulties, several new methods in interferogram analysis and phase-retrieval were developed: the Fourier-Transform Method of Phase-Shift Determination, which uses Fourier-domain analysis to improve the accuracy of phase-shifting interferometry; the Fourier-Transform Guided Unwrap Method, which was developed to overcome difficulties associated with a high density of mid-spatial-frequency blemishes and which uses a low-spatial-frequency approximation to the measured wavefront to guide the phase unwrapping in the presence of noise; and, finally, an expedient method of Gram-Schmidt orthogonalization which facilitates polynomial basis transformations in wave-front surface fitting procedures.
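For orientation, the baseline reconstruction that phase-shifting interferometry (and hence the PS/PDI analysis) rests on is the synchronous N-step estimator sketched below; it is the textbook formula for equally spaced phase steps spanning a full cycle, not the dissertation's Fourier-Transform Method of Phase-Shift Determination, which is designed precisely to correct errors in the assumed steps.

```python
import numpy as np

def n_step_phase(frames, shifts):
    """Synchronous phase-shifting estimator for I_k = A + B*cos(phi + delta_k).

    frames: stack of interferograms (N, H, W); shifts: the N nominal phase
    shifts delta_k in radians, assumed equally spaced over a full 2*pi cycle.
    Returns the wrapped test-beam phase phi.
    """
    I = np.asarray(frames, float)
    d = np.asarray(shifts, float)[:, None, None]
    num = np.sum(I * np.sin(d), axis=0)
    den = np.sum(I * np.cos(d), axis=0)
    return np.arctan2(-num, den)                  # wrapped phase in (-pi, pi]
```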
Christofilos, N.C.; Ehlers, K.W.
1960-04-01
A pulsed electron gun capable of delivering pulses at voltages of the order of 1 MV and currents of the order of 100 amperes is described. The principal novelty resides in a transformer construction which is disposed in the same vacuum housing as the electron source and accelerating electrode structure of the gun to supply the accelerating potential thereto. The transformer is provided by a plurality of magnetic cores disposed in circumferentially spaced relation and having a plurality of primary windings each inductively coupled to a different one of the cores, and a helical secondary winding which is disposed coaxially of the cores and passes therethrough in circumferential succession. Additional novelty resides in the disposition of the electron source cathode filament input leads interiorly of the transformer secondary winding which is hollow, as well as in the employment of a half-wave filament supply which is synchronously operated with the transformer supply such that the transformer is pulsed during the zero current portions of the half-wave cycle.
Natural Transformation of Campylobacter jejuni Occurs Beyond Limits of Growth
Vegge, Christina S.; Brøndsted, Lone; Ligowska-Marzęta, Małgorzata; Ingmer, Hanne
2012-01-01
Campylobacter jejuni is a human bacterial pathogen. While poultry is considered to be a major source of foodborne campylobacteriosis, C. jejuni is frequently found in the external environment, and water is another well-known source of human infections. Natural transformation is considered to be one of the main mechanisms mediating transfer of genetic material and evolution of the organism. Given the diverse habitats of C. jejuni, we set out to examine how environmental conditions and physiological processes affect natural transformation of C. jejuni. We show that the efficiency of transformation is correlated with the growth conditions, but more importantly that transformation occurs under growth-restrictive conditions as well as in the late stationary phase, revealing that growth per se is not required for C. jejuni to be competent. Yet, natural transformation of C. jejuni is an energy-dependent process that occurs in the absence of transcription but requires an active translational machinery. Moreover, we show the ATP-dependent ClpP protease to be important for transformation, which possibly could be associated with reduced protein glycosylation in the ClpP mutant. In contrast, competence of C. jejuni was neither found to be involved in DNA repair following DNA damage nor to provide a growth benefit. Kinetic studies revealed that several transformation events occur per cell cycle, indicating that natural transformation of C. jejuni is a highly efficient process. Thus, our findings suggest that horizontal gene transfer by natural transformation takes place in various habitats occupied by C. jejuni. PMID:23049803
Post-earthquake relaxation using a spectral element method: 2.5-D case
Pollitz, Fred
2014-01-01
The computation of quasi-static deformation for axisymmetric viscoelastic structures on a gravitating spherical earth is addressed using the spectral element method (SEM). A 2-D spectral element domain is defined with respect to spherical coordinates of radius and angular distance from a pole of symmetry, and 3-D viscoelastic structure is assumed to be azimuthally symmetric with respect to this pole. A point dislocation source that is periodic in azimuth is implemented with a truncated sequence of azimuthal order numbers. Viscoelasticity is limited to linear rheologies and is implemented with the correspondence principle in the Laplace transform domain. This leads to a series of decoupled 2-D problems which are solved with the SEM. Inverse Laplace transform of the independent 2-D solutions leads to the time-domain solution of the 3-D equations of quasi-static equilibrium imposed on a 2-D structure. The numerical procedure is verified through comparison with analytic solutions for finite faults embedded in a laterally homogeneous viscoelastic structure. This methodology is applicable to situations where the predominant structure varies in one horizontal direction, such as a structural contrast across (or parallel to) a long strike-slip fault.
Image-based spectroscopy for environmental monitoring
NASA Astrophysics Data System (ADS)
Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind
2014-03-01
An image-processing algorithm for use with a nano-featured spectrometer chemical agent detection configuration is presented. The spectrometer chip acquired from Nano-Optic Devices™ can reduce the size of the spectrometer down to that of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. The images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within the image. This transform is a method for detecting curves by exploiting the duality between points on a curve and the parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.
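As a rough illustration of the processing chain just described (threshold filtering followed by shape detection in the diffraction image), the sketch below uses scikit-image in place of the MATLAB tools named in the abstract, and a circular Hough transform as one concrete member of the Hough-transform family; the input file name and the candidate radius range are assumptions.

```python
# Hypothetical sketch: suppress random background with a threshold filter,
# then detect circular sub-patterns in the diffraction image with a circular
# Hough transform (a specific instance of the Hough-transform family).
import numpy as np
from skimage import io, filters
from skimage.transform import hough_circle, hough_circle_peaks

image = io.imread("diffraction_pattern.png", as_gray=True)  # assumed input file

# Threshold filter: keep only pixels well above the background noise level.
threshold = filters.threshold_otsu(image)
binary = image > threshold

# Circular Hough transform over an assumed range of candidate radii (pixels).
radii = np.arange(10, 60, 2)
accumulator = hough_circle(binary, radii)
accums, cx, cy, found_radii = hough_circle_peaks(accumulator, radii,
                                                 total_num_peaks=5)

for x, y, r in zip(cx, cy, found_radii):
    print(f"diffraction feature at ({x}, {y}) with radius {r} px")
```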
Hybrid diffusion-P3 equation in N-layered turbid media: steady-state domain.
Shi, Zhenzhi; Zhao, Huijuan; Xu, Kexin
2011-10-01
This paper discusses light propagation in N-layered turbid media. The hybrid diffusion-P3 equation is solved for an N-layered finite or infinite turbid medium in the steady-state domain for one point source using the extrapolated boundary condition. The Fourier transform formalism is applied to derive the analytical solutions of the fluence rate in Fourier space. Two inverse Fourier transform methods are developed to calculate the fluence rate in real space. In addition, the solutions of the hybrid diffusion-P3 equation are compared to the solutions of the diffusion equation and the Monte Carlo simulation. For the case of small absorption coefficients, the solutions of the N-layered diffusion equation and hybrid diffusion-P3 equation are almost equivalent and are in agreement with the Monte Carlo simulation. For the case of large absorption coefficients, the model of the hybrid diffusion-P3 equation is more precise than that of the diffusion equation. In conclusion, the model of the hybrid diffusion-P3 equation can replace the diffusion equation for modeling light propagation in the N-layered turbid media for a wide range of absorption coefficients.
Enhanced K-means clustering with encryption on cloud
NASA Astrophysics Data System (ADS)
Singh, Iqjot; Dwivedi, Prerna; Gupta, Taru; Shynu, P. G.
2017-11-01
This paper addresses the problem of storing and managing big files on the cloud by implementing hashing on Hadoop for big data, while ensuring security during file upload and download. Cloud computing is a term that emphasizes data sharing and facilitates the sharing of infrastructure and resources. [10] Hadoop is open-source software that allows big files to be stored and managed on the cloud according to user needs. The K-means clustering algorithm assigns data points to clusters based on the distance between each data point and the cluster centroids. Hashing is a technique for storing and retrieving data using hash keys; the hashing algorithm, called a hash function, maps the original data to a key and is later used to fetch the data stored at that key. [17] Encryption is a process that transforms electronic data into a non-readable form known as cipher text. Decryption is the opposite process: it transforms the cipher text back into plain text that the end user can read and understand. For encryption and decryption a symmetric-key cryptographic algorithm is used; specifically, the DES algorithm provides secure storage of the files. [3]
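The sketch below is an illustrative Python stand-in for the workflow described above, not the paper's Hadoop implementation: K-means clustering with scikit-learn, hash-keyed storage via SHA-256, and symmetric encryption with Fernet from the cryptography package substituting for the DES cipher named in the abstract (DES is considered weak today). All file names, keys, and feature vectors are made up.

```python
# Illustrative sketch: cluster file feature vectors with K-means, store each
# encrypted chunk under a hash key, and encrypt/decrypt with a symmetric
# cipher (Fernet standing in for DES, which is obsolete).
import hashlib
import numpy as np
from sklearn.cluster import KMeans
from cryptography.fernet import Fernet

# Assumed toy data: one feature vector per file (e.g. size, type, usage stats).
features = np.random.rand(100, 3)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
print("cluster of file 0:", kmeans.labels_[0])

# Symmetric-key encryption of a file chunk before upload.
key = Fernet.generate_key()
cipher = Fernet(key)
plaintext = b"contents of big file chunk"
ciphertext = cipher.encrypt(plaintext)

# Hash key under which the encrypted chunk is stored and later retrieved.
hash_key = hashlib.sha256(b"user42/bigfile.dat#chunk0").hexdigest()
store = {hash_key: ciphertext}

# Download and decrypt: fetch by hash key, then transform back to plain text.
assert cipher.decrypt(store[hash_key]) == plaintext
```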
Numerical investigations of low-density nozzle flow by solving the Boltzmann equation
NASA Technical Reports Server (NTRS)
Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen
1995-01-01
A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u(sub z), with weighting factors 1 and u(sub z)(sup 2); (2) solving the transformed equations in the physical space based on the time-marching technique and the four-stage Runge-Kutta time integration, for a given discrete ordinate. Roe's second-order upwind difference scheme is used to discretize the convective terms, and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in the physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Repeating steps 2 and 3, the time-marching procedure stops when the convergence criterion is reached. A low-density nozzle flow field has been calculated by this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement. This demonstrates that numerical solutions of the BGK-Boltzmann equation are ready to be experimentally validated.
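For step (2), a minimal sketch of a classical four-stage Runge-Kutta time-marching loop is shown below. The right-hand side is a placeholder (in the actual solver it would hold the upwind-differenced convective terms plus the collision source terms), and the grid size, time step, and convergence tolerance are assumptions.

```python
# Minimal sketch of four-stage (classical) Runge-Kutta time marching for a
# semi-discretized equation du/dt = R(u). The right-hand side is a placeholder
# standing in for the convective plus collision terms of the BGK solver.
import numpy as np

def rhs(u):
    # Placeholder spatial operator (simple linear decay); assumed for the demo.
    return -0.5 * u

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

u = np.ones(64)            # discrete distribution function at one ordinate
dt, n_steps = 0.01, 1000
for _ in range(n_steps):
    u_new = rk4_step(u, dt)
    if np.max(np.abs(u_new - u)) < 1e-10:   # simple convergence criterion
        break
    u = u_new
```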
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-18
...-up transformer; (4) a new 300- foot-long, 69-kilovolt (kV) transmission line extending from the local transformer to the local grid (the point of interconnection) which is owned and operated by the City of...
The Kinetics of Bainitic Transformation of Roll Steel 75Kh3MF
NASA Astrophysics Data System (ADS)
Kletsova, O. A.; Krylova, S. E.; Priymak, E. Yu.; Gryzunov, V. I.; Kamantsev, S. V.
2018-01-01
The critical points of steel 75Kh3MF and the temperature of the start of martensitic transformation are determined by a dilatometric method. The thermokinetic and isothermal diagrams of decomposition of supercooled austenite are plotted. The microstructure and microhardness of steel specimens cooled at different rates are studied. The kinetics of the bainitic transformation in the steel are calculated using the Austin-Rickett equation.
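For reference, one common form of the Austin-Rickett equation relates the transformed fraction X to the isothermal holding time t through a rate constant k and a time exponent n (the exact parameterization varies between sources):

```latex
% One common form of the Austin-Rickett kinetic equation:
\[
  \frac{X}{1-X} = (k\,t)^{n}
  \qquad\text{equivalently}\qquad
  X(t) = \frac{(k\,t)^{n}}{1+(k\,t)^{n}} .
\]
```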
Finite-mode analysis by means of intensity information in fractional optical systems.
Alieva, Tatiana; Bastiaans, Martin J
2002-03-01
It is shown how a coherent optical signal that contains only a finite number of Hermite-Gauss modes can be reconstructed from knowledge of its Radon-Wigner transform (associated with the intensity distribution in a fractional-Fourier-transform optical system) at only two transversal points. The proposed method can be generalized to any fractional system whose generator transform has a complete orthogonal set of eigenfunctions.
Sex differences in components of imagined perspective transformation.
Gardner, Mark R; Sorhus, Ingrid; Edmonds, Caroline J; Potts, Rosalind
2012-05-01
Little research to date has examined whether sex differences in spatial ability extend to the mental self rotation involved in taking on a third party perspective. This question was addressed in the present study by assessing components of imagined perspective transformations in twenty men and twenty women. Participants made speeded left-right judgements about the hand in which an object was held by front- and back-facing schematic human figures in an "own body transformation task." Response times were longer when the figure did not share the same spatial orientation as the participant, and were substantially longer than those made for a control task requiring left-right judgements about the same stimuli from the participant's own point of view. A sex difference in imagined perspective transformation favouring males was found to be restricted to the speed of imagined self rotation, and was not observed for components indexing readiness to take a third party point of view, nor in left-right confusion. These findings indicate that the range of spatial abilities for which a sex difference has been established should be extended to include imagined perspective transformations. They also suggest that imagined perspective transformations may not draw upon those empathic social-emotional perspective taking processes for which females show an advantage. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Martinis, C.; Baumgardner, J.; Wroten, J.; Mendillo, M.
2018-04-01
Optical signatures of ionospheric disturbances exist at all latitudes on Earth, the most well-known case being visible aurora at high latitudes. Sub-visual emissions occur equatorward of the auroral zones and also indicate periods and locations of severe Space Weather effects. These fall into three magnetic latitude domains in each hemisphere: (1) sub-auroral latitudes ∼40-60°, (2) mid-latitudes (20-40°) and (3) equatorial-to-low latitudes (0-20°). Boston University has established a network of all-sky-imagers (ASIs) with sites at opposite ends of the same geomagnetic field lines in each hemisphere, called geomagnetic conjugate points. Our ASIs are autonomous instruments that operate in mini-observatories situated at four conjugate pairs in North and South America, plus one pair linking Europe and South Africa. In this paper, we describe instrument design, data-taking protocols, data transfer and archiving issues, image processing, science objectives and early results for each latitude domain. This unique capability addresses how a single source of disturbance is transformed into similar or different effects based on the unique "receptor" conditions (seasonal effects) found in each hemisphere. Applying optical conjugate point observations to Space Weather problems offers a new diagnostic approach for understanding the global system response functions operating in the Earth's upper atmosphere.
Visualization of conserved structures by fusing highly variable datasets.
Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred
2002-01-01
Skill, effort, and time are required to identify and visualize anatomic structures in three-dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step of development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used for reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusion 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2 respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1 and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual Reality (VR) environment. The accuracy of the fusions was determined qualitatively by comparing the transformed atlas overlaid on the appropriate CT. It was examined for where the transformed structure atlas was incorrectly overlaid (false positive) and where it was incorrectly not overlaid (false negative). According to this method, fusions 1 and 2 were correct roughly 50-75% of the time, while fusions 3 and 4 were correct roughly 75-100%. The CT dataset augmented with transformed dataset was viewed arbitrarily in user-centered perspective stereo taking advantage of features such as scaling, windowing and volumetric region of interest selection. This process of auto-coloring conserved structures in variable datasets is a step toward the goal of a broader, standardized automatic structure visualization method for radiological data. If successful it would permit identification, visualization or deletion of structures in radiological data by semi-automatically applying canonical structure information to the radiological data (not just processing and visualization of the data's intrinsic dynamic range). More sophisticated selection of control points and patterns of warping may allow for more accurate transforms, and thus advances in visualization, simulation, education, diagnostics, and treatment planning.
Entanglement-assisted transformation is asymptotically equivalent to multiple-copy transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan Runyao; Feng Yuan; Ying Mingsheng
2005-08-15
We show that two ways of manipulating quantum entanglement, namely entanglement-assisted local transformation [D. Jonathan and M. B. Plenio, Phys. Rev. Lett. 83, 3566 (1999)] and multiple-copy transformation [S. Bandyopadhyay, V. Roychowdhury, and U. Sen, Phys. Rev. A 65, 052315 (2002)], are equivalent in the sense that they can asymptotically simulate each other's ability to implement a desired transformation from a given source state to another given target state with the same optimal success probability. As a consequence, this yields a feasible method to evaluate the optimal conversion probability of an entanglement-assisted transformation.
Kim, Sangmin; Raphael, Patrick D; Oghalai, John S; Applegate, Brian E
2016-04-01
Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms.
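As an illustration of the filter comparison discussed above, the sketch below estimates the phase of a synthetic reference interferogram both with the FFT-based Hilbert transform and with an FIR Hilbert transformer designed via scipy.signal.remez; the sample rate, signal model, and filter length are assumptions, and this is not the authors' FPGA implementation.

```python
# Sketch of the calibration comparison: estimate interferogram phase with
# (a) the FFT-IFFT based Hilbert transform and (b) an FIR Hilbert transformer.
import numpy as np
from scipy.signal import hilbert, lfilter, remez

fs = 1.0e6                                    # assumed sample rate, Hz
t = np.arange(4096) / fs
interferogram = np.cos(2 * np.pi * 100e3 * t + 3 * np.sin(2 * np.pi * 500 * t))

# (a) Traditional FFT-based analytic signal and unwrapped phase.
phase_fft = np.unwrap(np.angle(hilbert(interferogram)))

# (b) Linear-phase FIR Hilbert transformer designed with the Remez algorithm.
numtaps = 101
h = remez(numtaps, [0.05, 0.45], [1.0], type='hilbert', fs=1.0)
quadrature = lfilter(h, 1.0, interferogram)
delay = (numtaps - 1) // 2                    # group delay of the FIR filter
analytic_fir = interferogram[:-delay] + 1j * quadrature[delay:]
phase_fir = np.unwrap(np.angle(analytic_fir))

# Instantaneous frequency (proportional to the instantaneous wavenumber).
f_fft = np.diff(phase_fft) * fs / (2 * np.pi)
f_fir = np.diff(phase_fir) * fs / (2 * np.pi)
print(f"FFT-based mean frequency: {f_fft.mean():.0f} Hz")
print(f"FIR-based mean frequency: {f_fir[numtaps:-numtaps].mean():.0f} Hz")
```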
Image Registration: A Necessary Evil
NASA Technical Reports Server (NTRS)
Bell, James; McLachlan, Blair; Hermstad, Dexter; Trosin, Jeff; George, Michael W. (Technical Monitor)
1995-01-01
Registration of test and reference images is a key component of nearly all PSP data reduction techniques. This is done to ensure that a test image pixel viewing a particular point on the model is ratioed by the reference image pixel which views the same point. Typically registration is needed to account for model motion due to differing airloads when the wind-off and wind-on images are taken. Registration is also necessary when two cameras are used for simultaneous acquisition of data from a dual-frequency paint. This presentation will discuss the advantages and disadvantages of several different image registration techniques. In order to do so, it is necessary to propose both an accuracy requirement for image registration and a means for measuring the accuracy of a particular technique. High contrast regions in the unregistered images are most sensitive to registration errors, and it is proposed that these regions be used to establish the error limits for registration. Once this is done, the actual registration error can be determined by locating corresponding points on the test and reference images, and determining how well a particular registration technique matches them. An example of this procedure is shown for three transforms used to register images of a semispan model. Thirty control points were located on the model. A subset of the points were used to determine the coefficients of each registration transform, and the error with which each transform aligned the remaining points was determined. The results indicate the general superiority of a third-order polynomial over other candidate transforms, as well as showing how registration accuracy varies with number of control points. Finally, it is proposed that image registration may eventually be done away with completely. As more accurate image resection techniques and more detailed model surface grids become available, it will be possible to map raw image data onto the model surface accurately. Intensity ratio data can then be obtained by a "model surface ratio," rather than an image ratio. The problems and advantages of this technique will be discussed.
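A minimal sketch of the control-point procedure just described is given below: fit a third-order polynomial transform to a subset of control points by least squares and report the residual alignment error on the held-out points. The control-point coordinates, shift, and noise level are synthetic stand-ins, not wind-tunnel data.

```python
# Sketch: fit a third-order polynomial registration transform to 20 control
# points and measure how well it aligns the remaining 10 held-out points.
import numpy as np

def poly_terms(x, y, order=3, scale=512.0):
    # All monomials (x/scale)**i * (y/scale)**j with i + j <= order;
    # scaling keeps the design matrix well conditioned.
    x, y = x / scale, y / scale
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

rng = np.random.default_rng(0)
ref_pts = rng.uniform(0, 512, size=(30, 2))                    # wind-off points
true_shift = np.array([3.5, -2.0])
test_pts = ref_pts + true_shift + rng.normal(0, 0.2, (30, 2))  # wind-on points

fit, hold = slice(0, 20), slice(20, 30)        # fit vs validation split
A = poly_terms(test_pts[fit, 0], test_pts[fit, 1])
coeff_x, *_ = np.linalg.lstsq(A, ref_pts[fit, 0], rcond=None)
coeff_y, *_ = np.linalg.lstsq(A, ref_pts[fit, 1], rcond=None)

A_hold = poly_terms(test_pts[hold, 0], test_pts[hold, 1])
mapped = np.column_stack([A_hold @ coeff_x, A_hold @ coeff_y])
rms_error = np.sqrt(np.mean(np.sum((mapped - ref_pts[hold]) ** 2, axis=1)))
print(f"registration RMS error on held-out points: {rms_error:.3f} px")
```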
NASA Astrophysics Data System (ADS)
Darrh, A.; Downs, C. M.; Poppeliers, C.
2017-12-01
Born Scattering Inversion (BSI) of electromagnetic (EM) data is a geophysical imaging methodology for mapping weak conductivity, permeability, and/or permittivity contrasts in the subsurface. The high computational cost of full waveform inversion is reduced by adopting the First Born Approximation for scattered EM fields. This linearizes the inverse problem in terms of Born scattering amplitudes for a set of effective EM body sources within a 3D imaging volume. Estimation of scatterer amplitudes is subsequently achieved by solving the normal equations. Our present BSI numerical experiments entail Fourier transforming real-valued synthetic EM data to the frequency domain, and minimizing the L2 residual between complex-valued observed and predicted data. We are testing the ability of BSI to resolve simple scattering models. For our initial experiments, synthetic data are acquired by three-component (3C) electric field receivers distributed on a plane above a single point electric dipole within a homogeneous and isotropic wholespace. To suppress artifacts, candidate Born scatterer locations are confined to a volume beneath the receiver array. Also, we explore two different numerical linear algebra algorithms for solving the normal equations: Damped Least Squares (DLS) and Non-Negative Least Squares (NNLS). Results from NNLS accurately recover the source location only for a large, dense 3C receiver array, but fail when the array is decimated or is restricted to horizontal-component data. Using all receiver stations and all components per station, NNLS results are relatively insensitive to a sub-sampled frequency spectrum, suggesting that coarse frequency-domain sampling may be adequate for good target resolution. Results from DLS are insensitive to diminishing array density, but contain spatially oscillatory structure. DLS-generated images are consistently centered at the known point source location, despite an abundance of surrounding structure.
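A toy contrast of the two solvers named above on a linearized problem G m = d is sketched below; the random matrix G stands in for actual Born scattering kernels, and the damping parameter and dimensions are arbitrary.

```python
# Toy comparison: damped least squares (Tikhonov) versus non-negative least
# squares for recovering a single point scatterer from noisy linear data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_data, n_cells = 200, 50
G = rng.normal(size=(n_data, n_cells))           # assumed sensitivity matrix
m_true = np.zeros(n_cells)
m_true[17] = 1.0                                 # single point scatterer
d = G @ m_true + 0.05 * rng.normal(size=n_data)  # noisy synthetic data

# Damped least squares: solve (G^T G + eps^2 I) m = G^T d.
eps = 0.5
m_dls = np.linalg.solve(G.T @ G + eps**2 * np.eye(n_cells), G.T @ d)

# Non-negative least squares: minimize ||G m - d|| subject to m >= 0.
m_nnls, residual = nnls(G, d)

print("DLS  peak cell:", np.argmax(np.abs(m_dls)))
print("NNLS peak cell:", np.argmax(m_nnls))
```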
NASA Astrophysics Data System (ADS)
Okumura, Hiroshi; Suezaki, Masashi; Sueyasu, Hideki; Arai, Kohei
2003-03-01
An automated method that can select corresponding point candidates is developed. This method has the following three features: 1) employment of the RIN-net for corresponding point candidate selection; 2) employment of multi-resolution analysis with Haar wavelet transformation for improvement of selection accuracy and noise tolerance; 3) employment of context information about corresponding point candidates for screening of selected candidates. Here, 'RIN-net' denotes a back-propagation-trained, feed-forward, three-layer artificial neural network that takes rotation invariants as input data. In our system, pseudo Zernike moments are employed as the rotation invariants. The RIN-net has an N x N-pixel field of view (FOV). Some experiments are conducted to evaluate the corresponding point candidate selection capability of the proposed method using various kinds of remotely sensed images. The experimental results show that the proposed method requires fewer training patterns and less training time, and achieves higher selection accuracy, than the conventional method.
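The sketch below illustrates only the multiresolution step (feature 2 above) using PyWavelets on a random patch; the field-of-view size is an assumption, and the pseudo Zernike moment computation that feeds the RIN-net is not shown.

```python
# Sketch of the multiresolution step: decompose an N x N field-of-view patch
# with the Haar wavelet so candidate selection can start on a coarse
# approximation and be refined on finer detail levels.
import numpy as np
import pywt

N = 32                                        # assumed FOV size in pixels
patch = np.random.rand(N, N)

# Two-level 2-D Haar multiresolution analysis.
coeffs = pywt.wavedec2(patch, wavelet='haar', level=2)
approx2 = coeffs[0]                           # coarsest approximation (8 x 8)
detail2, detail1 = coeffs[1], coeffs[2]       # (cH, cV, cD) tuples per level

print("coarse approximation shape:", approx2.shape)
print("level-1 horizontal detail shape:", detail1[0].shape)
```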
Liang, Shih-Hsiung; Hsu, Duen-Wei; Lin, Chia-Ying; Kao, Chih-Ming; Huang, Da-Ji; Chien, Chih-Ching; Chen, Ssu-Ching; Tsai, Isheng Jason; Chen, Chien-Cheng
2017-04-01
In this study, the bacterial strain Citrobacter youngae strain E4 was isolated from 2,4,6-trinitrotoluene (TNT)-contaminated soil and used to assess the capacity of TNT transformation with/without exogenous nutrient amendments. C. youngae E4 poorly degraded TNT without an exogenous amino nitrogen source, whereas the addition of an amino nitrogen source considerably increased the efficacy of TNT transformation in a dose-dependent manner. The enhanced TNT transformation of C. youngae E4 was mediated by increased cell growth and up-regulation of TNT nitroreductases, including NemA, NfsA and NfsB. This result indicates that the increase in TNT transformation by C. youngae E4 via nitrogen nutrient stimulation is a cometabolism process. Consistently, TNT transformation was effectively enhanced when C. youngae E4 was subjected to a TNT-contaminated soil slurry in the presence of an exogenous amino nitrogen amendment. Thus, effective enhancement of TNT transformation via the coordinated inoculation of the nutrient-responsive C. youngae E4 and an exogenous nitrogen amendment might be applicable for the remediation of TNT-contaminated soil. Although the TNT transformation was significantly enhanced by C. youngae E4 in concert with biostimulation, the 96-h LC50 value of the TNT transformation product mixture on the aquatic invertebrate Tigriopus japonicus was higher than the LC50 value of TNT alone. Our results suggest that exogenous nutrient amendment can enhance microbial TNT transformation; however, additional detoxification processes may be needed due to the increased toxicity after reduced TNT transformation. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann
2017-04-01
Lacunarity, a measure of the spatial distribution of empty space, is found to be a useful descriptive quantity of forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set. The evaluation of results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that the log(log(L(r))) transformation is suitable for analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we have used a number of laser-scanned point clouds of various forests. The lacunarity distribution has been calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube has then been logarithm-transformed, and the resulting values became the input of parameter estimation at each point (point of interest, POI). In this way, a parameter set suitable for spatial analysis is generated at each POI. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can typically be approximated by exponentials individually, and the residual values remain low in most cases. However, (1) in most cases the residuals may vary considerably, and (2) neighbouring POIs often give rather differing estimates both in horizontal and in vertical directions, of which the vertical variation seems to be more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes in places, presumably related to the vertical structure of the forest. In low-relief areas horizontal similarity is more typical; in higher-relief areas horizontal similarity fades out over short distances. Some of the input data have been acquired in the framework of the ChangeHabitats2 project financed by the European Union. BS contributed as an Alexander von Humboldt Research Fellow.
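A minimal sketch of the per-POI fitting step is given below: a synthetic lacunarity curve L(r) is log-log transformed and approximated by an exponential with scipy.optimize.curve_fit. The curve, model form, and starting values are assumptions chosen only for illustration.

```python
# Sketch of the per-POI parameter estimation: log(log(L(r))) transformation
# followed by an exponential fit; L(r) here is a synthetic stand-in.
import numpy as np
from scipy.optimize import curve_fit

r = np.linspace(1.0, 30.0, 30)
L = 1.0 + 4.0 * r**-0.8            # synthetic power-law-like lacunarity curve

y = np.log(np.log(L))              # log(log(L(r))) transformation

def model(r, a, b, c):
    # Assumed exponential approximation in the transformed domain.
    return a * np.exp(-b * r) + c

params, cov = curve_fit(model, r, y, p0=(1.0, 0.1, -1.0), maxfev=10000)
residual_rms = np.sqrt(np.mean((model(r, *params) - y) ** 2))
print("fitted (a, b, c):", params, " residual RMS:", residual_rms)
```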
Microbial nitrogen transformation potential in surface run-off leachate from a tropical landfill
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mangimbulude, Jubhar C.; Straalen, Nico M. van; Roeling, Wilfred F.M., E-mail: wilfred.roling@falw.vu.nl
2012-01-15
Highlights: ▸ Microbial nitrogen transformations can alleviate toxic ammonium discharge. ▸ Aerobic ammonium oxidation was rate-limiting in Indonesian landfill leachate. ▸ Organic nitrogen ammonification was most dominant. ▸ Anaerobic nitrate reduction and ammonium oxidation potential were also high. ▸ A two-stage aerobic-anaerobic nitrogen removal system needs to be implemented. Abstract: Ammonium is one of the major toxic compounds and a critical long-term pollutant in landfill leachate. Leachate from the Jatibarang landfill in Semarang, Indonesia, contains ammonium in concentrations ranging from 376 to 929 mg N L⁻¹. The objective of this study was to determine seasonal variation in the potential for organic nitrogen ammonification, aerobic nitrification, anaerobic nitrate reduction and anaerobic ammonium oxidation (anammox) at this landfilling site. Seasonal samples from leachate collection treatment ponds were used as an inoculum to feed synthetic media to determine potential rates of nitrogen transformations. Aerobic ammonium oxidation potential (<0.06 mg N L⁻¹ h⁻¹) was more than a hundred times lower than the anaerobic nitrogen transformation processes and organic nitrogen ammonification, which were of the same order of magnitude. Anaerobic nitrate reduction did not proceed beyond nitrite; isolates grown with nitrate as electron acceptor did not degrade nitrite further. Effects of season were only observed for aerobic nitrification and anammox, and were relatively minor: rates were up to three times higher in the dry season. To completely remove the excess ammonium from the leachate, we propose a two-stage treatment system to be implemented. Aeration in the first leachate pond would strongly contribute to aerobic ammonium oxidation to nitrate by providing the currently missing oxygen in the anaerobic leachate and allowing for the growth of ammonium oxidisers. In the second pond the remaining ammonium and produced nitrate can be converted by a combination of nitrate reduction to nitrite and anammox. Such optimization of microbial nitrogen transformations can contribute to alleviating the ammonium discharge to surface water draining the landfill.
The τq-Fourier transform: Covariance and uniqueness
NASA Astrophysics Data System (ADS)
Kalogeropoulos, Nikolaos
2018-05-01
We propose an alternative definition for a Tsallis entropy composition-inspired Fourier transform, which we call the “τq-Fourier transform”. We comment on the underlying “covariance” on the set of algebraic fields that motivates its introduction. We see that the definition of the τq-Fourier transform is automatically invertible in the proper context. Based on recent results in Fourier analysis, it turns out that the τq-Fourier transform is essentially unique under the assumption that it exchanges the point-wise product of functions with their convolution.
Code orange: Towards transformational leadership of emergency management systems.
Caro, Denis H J
2015-09-01
The 21st century calls upon health leaders to recognize and respond to emerging threats and systemic emergency management challenges through transformative processes inherent in the LEADS in a caring environment framework. Using a grounded theory approach, this qualitative study explores key informant perspectives of leaders in emergency management across Canada on pressing needs for relevant systemic transformation. The emerging model points to eight specific attributes of transformational leadership central to emergency management and suggests that contextualization of health leadership is of particular import. © 2015 The Canadian College of Health Leaders.
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2010 CFR
2010-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2014 CFR
2014-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
Wang, Junxia; Liu, Lili; Wang, Jinfu; Pan, Bishu; Fu, Xiaoxu; Zhang, Gang; Zhang, Long; Lin, Kuangfei
2015-01-01
Brominated flame retardants (BFRs, including polybrominated diphenyl ethers (PBDEs) and tetrabromobisphenol-A (TBBPA)) and metals (Cu, Zn, Pb, Cd, Ni, Hg and As) in sediments, soils and herb plants from unregulated e-waste disposal sites were examined. The metal concentrations, ∑PBDE and TBBPA concentrations in all samples from the examined e-waste dismantling sites were relatively high in comparison with those of rural and urban areas around the world. The PBDE and TBBPA levels in soils significantly decreased with increasing distance from the e-waste dismantling sites, indicating that PBDEs and TBBPA had similar transport potential from the e-waste dismantling process as a point source to the surrounding region. BDE-209 and TBBPA predominated in all samples, which is consistent with the evidence that the deca-BDE and TBBPA commercial mixtures were extensively used in electronic products. Metals, PBDEs and TBBPA displayed significant positive correlations with TOC, whereas the correlations with pH were insignificant, indicating that TOC was a major factor governing the spatial distribution, transportation and fate in sediments and soils. A significant relationship between log-transformed metals and BFR concentrations indicated common pollution sources. Moreover, cluster analysis and principal component analysis further confirmed that the metals and BFRs had a common source, and penta- and deca-BDE commercial products may be two sources of PBDEs in this region.
46 CFR 111.05-23 - Location of ground indicators.
Code of Federal Regulations, 2014 CFR
2014-10-01
... affected) for each feeder circuit that is isolated from the main source by a transformer or other device... control cable, that allows the detecting equipment to remain near the transformer or other isolating...
46 CFR 111.05-23 - Location of ground indicators.
Code of Federal Regulations, 2011 CFR
2011-10-01
... affected) for each feeder circuit that is isolated from the main source by a transformer or other device... control cable, that allows the detecting equipment to remain near the transformer or other isolating...
46 CFR 111.05-23 - Location of ground indicators.
Code of Federal Regulations, 2012 CFR
2012-10-01
... affected) for each feeder circuit that is isolated from the main source by a transformer or other device... control cable, that allows the detecting equipment to remain near the transformer or other isolating...
46 CFR 111.05-23 - Location of ground indicators.
Code of Federal Regulations, 2010 CFR
2010-10-01
... affected) for each feeder circuit that is isolated from the main source by a transformer or other device... control cable, that allows the detecting equipment to remain near the transformer or other isolating...
46 CFR 111.05-23 - Location of ground indicators.
Code of Federal Regulations, 2013 CFR
2013-10-01
... affected) for each feeder circuit that is isolated from the main source by a transformer or other device... control cable, that allows the detecting equipment to remain near the transformer or other isolating...
Confidence intervals for the first crossing point of two hazard functions.
Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng
2009-12-01
The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing-hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. The proposed procedure and the one based on Cox proportional hazard modeling with Box-Cox transformation of the time to event are both evaluated by Monte-Carlo simulations and applied to two clinical trial datasets.
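A toy sketch of the nonparametric idea is given below: kernel-smooth the hazard estimates of two groups and locate the first time their ratio crosses one. The data are synthetic, censoring is ignored, and the paper's confidence-interval construction is not reproduced.

```python
# Toy sketch: smoothed Nelson-Aalen hazard estimates for two groups and the
# first time point at which the estimated hazard ratio crosses one.
import numpy as np

rng = np.random.default_rng(2)
t1 = rng.weibull(0.7, 400) * 5.0      # group 1: decreasing hazard
t2 = rng.weibull(1.8, 400) * 5.0      # group 2: increasing hazard

def kernel_hazard(times, grid, bandwidth=0.5):
    # Gaussian-kernel smoothing of Nelson-Aalen increments: each event time
    # contributes a kernel weighted by 1 / (number still at risk).
    times = np.sort(times)
    at_risk = len(times) - np.arange(len(times))
    u = (grid[:, None] - times[None, :]) / bandwidth
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (k / at_risk).sum(axis=1) / bandwidth

grid = np.linspace(0.2, 8.0, 400)
ratio = kernel_hazard(t1, grid) / kernel_hazard(t2, grid)

crossings = np.where(np.diff(np.sign(ratio - 1.0)) != 0)[0]
if crossings.size:
    print("estimated first crossing time:", grid[crossings[0]])
```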
The Psychic Organ Point of Autistic Syntax
ERIC Educational Resources Information Center
Amir, Dana
2013-01-01
This paper deals with autistic syntax and its expressions both in the fully fledged autistic structure and in the autistic zones of other personality structures. The musical notion of the organ point serves as a point of departure in an attempt to describe how autistic syntax transforms what was meant to constitute the substrate for linguistic…
Case management and quality: have we reached a tipping point?
Dulworth, Sherrie
2006-01-01
In The Tipping Point, Malcolm Gladwell describes a phenomenon in which a niche market or fad undergoes transformation into mainstream acceptability, resulting in widespread social change. He concludes that a "tipping point" occurs when a series of small events results in a critical mass of acceptance that produces sudden major changes.
Pointing History Engine for the Spitzer Space Telescope
NASA Technical Reports Server (NTRS)
Bayard, David; Ahmed, Asif; Brugarolas, Paul
2007-01-01
The Pointing History Engine (PHE) is a computer program that provides mathematical transformations needed to reconstruct, from downlinked telemetry data, the attitude of the Spitzer Space Telescope (formerly known as the Space Infrared Telescope Facility) as a function of time. The PHE also serves as an example for development of similar pointing reconstruction software for future space telescopes. The transformations implemented in the PHE take account of the unique geometry of the Spitzer telescope-pointing chain, including all data on relative alignments of components, and all information available from attitude-determination instruments. The PHE makes it possible to coordinate attitude data with observational data acquired at the same time, so that any observed astronomical object can be located for future reference and re-observation. The PHE is implemented as a subroutine used in conjunction with telemetry-formatting services of the Mission Image Processing Laboratory of NASA's Jet Propulsion Laboratory to generate the Boresight Pointing History File (BPHF). The BPHF is an archival database designed to serve as Spitzer's primary astronomical reference documenting where the telescope was pointed at any time during its mission.
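To make the kind of transformation concrete, the sketch below composes a made-up downlinked attitude quaternion with a fixed instrument-alignment rotation to obtain a boresight direction in inertial coordinates; it uses scipy's Rotation class and is purely illustrative, not the actual PHE code or Spitzer geometry.

```python
# Illustrative composition of an attitude quaternion with a fixed alignment
# rotation to reconstruct where a telescope boresight points on the sky.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Spacecraft attitude from telemetry (body -> inertial), quaternion [x, y, z, w].
attitude = R.from_quat([0.02, -0.15, 0.40, 0.904])

# Fixed telescope-to-body alignment from ground calibration (assumed values, rad).
alignment = R.from_euler('xyz', [0.01, -0.02, 0.005])

# Boresight unit vector in the telescope frame, mapped to inertial coordinates.
boresight_body = alignment.apply([0.0, 0.0, 1.0])
boresight_inertial = attitude.apply(boresight_body)

ra = np.degrees(np.arctan2(boresight_inertial[1], boresight_inertial[0])) % 360
dec = np.degrees(np.arcsin(boresight_inertial[2]))
print(f"reconstructed boresight: RA {ra:.3f} deg, Dec {dec:.3f} deg")
```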
The GeoDataPortal: A Standards-based Environmental Modeling Data Access and Manipulation Toolkit
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Kunicki, T.; Booth, N.; Suftin, I.; Zoerb, R.; Walker, J.
2010-12-01
Environmental modelers from fields of study such as climatology, hydrology, geology, and ecology rely on many data sources and processing methods that are common across these disciplines. Interest in inter-disciplinary, loosely coupled modeling and data sharing is increasing among scientists from the USGS, other agencies, and academia. For example, hydrologic modelers need downscaled climate change scenarios and land cover data summarized for the watersheds they are modeling. Subsequently, ecological modelers are interested in soil moisture information for a particular habitat type as predicted by the hydrologic modeler. The USGS Center for Integrated Data Analytics Geo Data Portal (GDP) project seeks to facilitate this loose model coupling data sharing through broadly applicable open-source web processing services. These services simplify and streamline the time consuming and resource intensive tasks that are barriers to inter-disciplinary collaboration. The GDP framework includes a catalog describing projects, models, data, processes, and how they relate. Using newly introduced data, or sources already known to the catalog, the GDP facilitates access to sub-sets and common derivatives of data in numerous formats on disparate web servers. The GDP performs many of the critical functions needed to summarize data sources into modeling units regardless of scale or volume. A user can specify their analysis zones or modeling units as an Open Geospatial Consortium (OGC) standard Web Feature Service (WFS). Utilities to cache Shapefiles and other common GIS input formats have been developed to aid in making the geometry available for processing via WFS. Dataset access in the GDP relies primarily on the Unidata NetCDF-Java library’s common data model. Data transfer relies on methods provided by Unidata’s Thematic Real-time Environmental Data Distribution System Data Server (TDS). TDS services of interest include the Open-source Project for a Network Data Access Protocol (OPeNDAP) standard for gridded time series, the OGC’s Web Coverage Service for high-density static gridded data, and Unidata’s CDM-remote for point time series. OGC WFS and Sensor Observation Service (SOS) are being explored as mechanisms to serve and access static or time series data attributed to vector geometry. A set of standardized XML-based output formats allows easy transformation into a wide variety of “model-ready” formats. Interested users will have the option of submitting custom transformations to the GDP or transforming the XML output as a post-process. The GDP project aims to support simple, rapid development of thin user interfaces (like web portals) to commonly needed environmental modeling-related data access and manipulation tools. Standalone, service-oriented components of the GDP framework provide the metadata cataloging, data subset access, and spatial-statistics calculations needed to support interdisciplinary environmental modeling.
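As an example of the standards-based access pattern described above, the snippet below issues a plain OGC WFS 1.1.0 GetFeature request over HTTP to retrieve analysis-zone geometry; the endpoint URL and feature type name are placeholders rather than real Geo Data Portal addresses.

```python
# Example of a key-value OGC WFS GetFeature request for analysis-zone geometry.
import requests

wfs_endpoint = "https://example.org/geoserver/wfs"   # placeholder endpoint
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "demo:watersheds",                   # placeholder feature type
    "outputFormat": "GML2",
    "maxFeatures": 10,
}
response = requests.get(wfs_endpoint, params=params, timeout=60)
response.raise_for_status()
with open("analysis_zones.gml", "wb") as f:
    f.write(response.content)
```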
Discrete Fourier Transform in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2015-01-01
An image-based phase-retrieval technique has been developed that can be used on board a space-based iterative transformation system. Image-based wavefront sensing is computationally demanding due to the floating-point nature of the process. The discrete Fourier transform (DFT) calculation is presented in "diagonal" form. By diagonal we mean that a transformation of basis is introduced by an application of the similarity transform of linear algebra. The current method exploits the diagonal structure of the DFT in a special way, so that parts of the calculation do not have to be repeated at each iteration while converging to an acceptable solution for focusing an image.
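The numerical sketch below illustrates the underlying linear-algebra fact: under the change of basis given by the DFT matrix, a circulant (convolution-type) operator becomes diagonal, which is why parts of such a calculation can be reused across iterations. It is a generic demonstration, not a reproduction of the flight implementation.

```python
# Demonstration that the similarity transform by the (unitary) DFT matrix
# diagonalizes a circulant operator, with eigenvalues given by fft(c).
import numpy as np
from scipy.linalg import dft, circulant

n = 8
c = np.random.rand(n)
C = circulant(c)                    # convolution operator in the pixel basis
F = dft(n, scale='sqrtn')           # unitary DFT matrix (change of basis)

# F C F^{-1} should be (numerically) diagonal.
D = F @ C @ F.conj().T
off_diag = D - np.diag(np.diag(D))
print("max off-diagonal magnitude:", np.abs(off_diag).max())
print("diagonal matches fft(c):", np.allclose(np.diag(D), np.fft.fft(c)))
```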
NASA Astrophysics Data System (ADS)
Johns, Jesse M.; Burkes, Douglas
2017-07-01
In this work, a multilayered perceptron (MLP) network is used to develop predictive isothermal time-temperature-transformation (TTT) models covering a range of U-Mo binary and ternary alloys. The selected ternary alloys for model development are U-Mo-Ru, U-Mo-Nb, U-Mo-Zr, U-Mo-Cr, and U-Mo-Re. These models predict 'novel' U-Mo alloys quite well despite the discrepancies between literature sources for similar alloys, which likely arise from different thermal-mechanical processing conditions. These models are developed with the primary purpose of informing experimental decisions. Additional experimental insight is necessary in order to reduce the number of experiments required to isolate ideal alloys. These models allow test planners to evaluate areas of experimental interest; once initial tests are conducted, the model can be updated to further improve follow-on testing decisions. The model also improves analysis capabilities by reducing the number of data points necessary from any particular test. For example, if one or two isotherms are measured during a test, the model can construct the rest of the TTT curve over a wide range of temperature and time. This modeling capability reduces the cost of experiments while also improving the value of the results from the tests. The reduced costs could result in improved material characterization and therefore improved fundamental understanding of TTT dynamics. As additional understanding of the phenomena driving TTTs is acquired, this type of MLP model can be used to populate unknowns (such as material impurity and other thermal-mechanical properties) from past literature sources.
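The sketch below shows the general type of model described, a multilayer perceptron regressor mapping composition and temperature to a (log) transformation-start time; it uses scikit-learn and synthetic placeholder data, not the authors' network architecture or the U-Mo literature data.

```python
# Schematic MLP regression from (Mo content, ternary addition, temperature)
# to log10 of the transformation-start time; training data are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([
    rng.uniform(6, 12, n),      # wt% Mo
    rng.uniform(0, 4, n),       # wt% ternary addition (e.g. Nb, Zr, ...)
    rng.uniform(400, 600, n),   # isothermal temperature, deg C
])
# Placeholder "C-curve"-like target: log10 of transformation-start time.
y = 1.0 + 0.2 * X[:, 0] + 0.1 * X[:, 1] + 2e-5 * (X[:, 2] - 500) ** 2

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
model.fit(X, y)

# Reconstruct an isothermal TTT start curve for one assumed alloy composition.
temps = np.linspace(400, 600, 21)
query = np.column_stack([np.full_like(temps, 10.0), np.full_like(temps, 2.0), temps])
log_t_start = model.predict(query)
```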
NASA Astrophysics Data System (ADS)
Vaughan, A. R.; Lee, J. D.; Lewis, A. C.; Purvis, R.; Carslaw, D.; Misztal, P. K.; Metzger, S.; Beevers, S.; Goldstein, A. H.; Hewitt, C. N.; Shaw, M.; Karl, T.; Davison, B.
2015-12-01
The emission of pollutants is a major problem in today's cities. Emission inventories are a key tool for air quality management, with the United Kingdom's National and London Atmospheric Emission Inventories (NAEI & LAEI) being good examples. Assessing the validity of such inventories is important. Here we report on the technical methodology of matching flux measurements of NOx over a city to inventory estimates. We used an eddy covariance technique to directly measure NOx fluxes from central London on an aircraft flown at low altitude. NOx mixing ratios were measured at 10 Hz time resolution using chemiluminescence (to measure NO) and highly specific photolytic conversion of NO2 to NO (to measure NO2). Wavelet transformation was used to calculate instantaneous fluxes along the flight track for each flight leg. The transformation allows both frequency and time information to be extracted from a signal, and we quantify the covariance between the de-trended vertical wind and concentration to derive a flux. Comparison between the calculated fluxes and emission inventory data was achieved using a footprint model, which accounts for the contributing source area. Using both a backward Lagrangian model and a cross-wind dispersion function, we find that the footprint extends from 5 to 11 km from the sample point. We then calculate a relative weighting matrix for each emission inventory within the calculated footprint. The inventories are split into their contributing source sectors, each scaled using up-to-date emission factors, giving monthly, daily and hourly scaled estimates which are then compared to the measurements.
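A minimal sketch of the covariance step is shown below: a leg-averaged eddy-covariance flux from detrended synthetic 10 Hz vertical-wind and NOx series. This simplifies the wavelet approach described above, which additionally resolves the flux in time along the flight track; all numbers are made up.

```python
# Minimal eddy-covariance flux: mean of the product of detrended vertical
# wind and NOx fluctuations over one flight leg (synthetic data).
import numpy as np
from scipy.signal import detrend

fs = 10.0                                   # 10 Hz sampling
t = np.arange(0, 600.0, 1.0 / fs)           # one 10-minute flight leg (assumed)

rng = np.random.default_rng(4)
w = rng.normal(0.0, 0.8, t.size)            # vertical wind, m s-1
nox = 20.0 + 0.5 * w + rng.normal(0.0, 2.0, t.size)   # NOx mixing ratio, ppb

# Remove the linear trend (mean and drift) before forming the covariance.
w_prime = detrend(w)
c_prime = detrend(nox)

flux = np.mean(w_prime * c_prime)           # kinematic flux, ppb m s-1
print(f"leg-averaged NOx flux: {flux:.3f} ppb m s-1")
```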
Transformational and derivational strategies in analogical problem solving.
Schelhorn, Sven-Eric; Griego, Jacqueline; Schmid, Ute
2007-03-01
Analogical problem solving is mostly described as transfer of a source solution to a target problem based on the structural correspondences (mapping) between source and target. Derivational analogy (Carbonell, Machine learning: an artificial intelligence approach. Los Altos: Morgan Kaufmann, 1986) proposes an alternative view: a target problem is solved by replaying a remembered problem-solving episode. Thus, the experience with the source problem is used to guide the search for the target solution by applying the same solution technique rather than by transferring the complete solution. We report an empirical study using the path-finding problems presented in Novick and Hmelo (J Exp Psychol Learn Mem Cogn 20:1296-1321, 1994) as material. We show that both transformational and derivational analogy are problem-solving strategies realized by human problem solvers. Which strategy is evoked in a given problem-solving context depends on the constraints guiding object-to-object mapping between source and target problem. Specifically, if constraints facilitating mapping are available, subjects are more likely to employ a transformational strategy; otherwise they are more likely to use a derivational strategy.
Livieratos, L; Stegger, L; Bloomfield, P M; Schafers, K; Bailey, D L; Camici, P G
2005-07-21
High-resolution cardiac PET imaging with emphasis on quantification would benefit from eliminating the problem of respiratory movement during data acquisition. Respiratory gating on the basis of list-mode data has been employed previously as one approach to reduce motion effects. However, it results in poor count statistics with degradation of image quality. This work reports on the implementation of a technique to correct for respiratory motion in the area of the heart at no extra cost for count statistics and with the potential to maintain ECG gating, based on rigid-body transformations on list-mode data event-by-event. A motion-corrected data set is obtained by assigning, after pre-correction for detector efficiency and photon attenuation, individual lines-of-response to new detector pairs with consideration of respiratory motion. Parameters of respiratory motion are obtained from a series of gated image sets by means of image registration. Respiration is recorded simultaneously with the list-mode data using an inductive respiration monitor with an elasticized belt at chest level. The accuracy of the technique was assessed with point-source data showing a good correlation between measured and true transformations. The technique was applied on phantom data with simulated respiratory motion, showing successful recovery of tracer distribution and contrast on the motion-corrected images, and on patient data with C15O and 18FDG. Quantitative assessment of preliminary C15O patient data showed improvement in the recovery coefficient at the centre of the left ventricle.
Occurrence Characteristics of Microplastic in Secondary Sewage Treatment Plant in Shanghai,China.
NASA Astrophysics Data System (ADS)
Bai, M.; Zhao, S.; Li, D.
2017-12-01
As emerging pollutants, microplastics (MPs) are of concern worldwide. Because plenty of microbeads and synthetic fibers are present in the effluent of waste water treatment plants (WWTPs), WWTPs have been regarded as important point sources of MPs entering the sea. Currently, information on microplastics from WWTPs in China is limited. Herein, we studied the MP contamination of a sewage plant in Shanghai by analyzing water and sludge samples with Fourier transform infrared spectroscopy. The MP abundances in the four treatment stages, influent, mixed water, effluent and sludge, are 117 n/L, 90 n/L, 52 n/L and 181 n/50 g (wet weight), respectively. The removal efficiency of MP in the current WWTP is 55.6%. Fiber is the most common shape type. Rayon is the most common material in the effluent and mixed water, while synthetic leather accounts for the largest percentage in the influent and sludge. This study is the first to discuss the occurrence characteristics of microplastics in a WWTP in China and confirms that WWTPs are a source of MPs entering aquatic environments.
NASA Astrophysics Data System (ADS)
Gibbons, Gary W.; Volkov, Mikhail S.
2017-05-01
We study solutions obtained via applying dualities and complexifications to the vacuum Weyl metrics generated by massive rods and by point masses. Rescaling them and extending to complex parameter values yields axially symmetric vacuum solutions containing singularities along circles that can be viewed as singular matter sources. These solutions have wormhole topology with several asymptotic regions interconnected by throats and their sources can be viewed as thin rings of negative tension encircling the throats. For a particular value of the ring tension the geometry becomes exactly flat although the topology remains non-trivial, so that the rings literally produce holes in flat space. To create a single ring wormhole of one metre radius one needs a negative energy equivalent to the mass of Jupiter. Further duality transformations dress the rings with the scalar field, either conventional or phantom. This gives rise to large classes of static, axially symmetric solutions, presumably including all previously known solutions for a gravity-coupled massless scalar field, as for example the spherically symmetric Bronnikov-Ellis wormholes with phantom scalar. The multi-wormholes contain infinite struts everywhere at the symmetry axes, apart from solutions with locally flat geometry.