Topology based methods for vector field comparisons
NASA Astrophysics Data System (ADS)
Batra, Rajesh Kumar
Vector fields are commonly found in almost all branches of the physical sciences. Aerodynamics, dynamical systems, electromagnetism, and global climate modeling are a few examples. These multivariate data fields are often large, and no general, automated method exists for comparing them. Existing methods require subjective visual judgments, data-interface compatibility, or domain-specific knowledge. A topology-based method intrinsically eliminates all of these limitations and has the additional advantage of significantly compressing the vector field by representing only the key features of the flow. Large databases can therefore be compactly represented and quickly searched. Topology is a natural framework for the study of many vector fields. It provides an organizing principle, a flow grammar, that can describe and connect the properties common to flows. Helman and Hesselink first introduced automated methods to extract and visualize this grammar. This work extends their method by introducing automated methods for vector-topology comparison. Basic two-dimensional flows are compared first. The theory is then extended to compare three-dimensional flow fields and the topology on no-slip surfaces. Concepts from graph theory and linear programming are used to solve these problems. Finally, the first automated method for higher-order singularity comparison is introduced, using mathematical theories from geometric (Clifford) algebra.
Sensitivity-based virtual fields for the non-linear virtual fields method
NASA Astrophysics Data System (ADS)
Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice
2017-04-01
The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.
DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?
Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...
DO TIE LABORATORY BASED METHODS REALLY REFLECT FIELD CONDITIONS
Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...
An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System
Singh, Parth Raj; Wang, Yide; Chargé, Pascal
2017-01-01
In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method. PMID:28358326
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resulting matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the singular value decomposition, which can be expensive to compute when the matrix system is large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence": the optimal regularized solution is obtained after a few iterations, but if the iteration is not stopped, the method converges to a solution that is generally corrupted by the measurement errors. For these methods, the number of iterations plays the role of the regularization parameter. We focus our attention on the regularizing properties of Krylov subspace methods such as conjugate gradients, least-squares QR, and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
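The semi-convergence behaviour described above can be illustrated on a synthetic ill-posed system. The sketch below is a hedged toy example, not the paper's holography setup: a Gaussian-blur matrix stands in for the discretized boundary integral operator, and plain CGLS (conjugate gradients on the normal equations) is run while tracking the reconstruction error as the iteration count, the de facto regularization parameter, grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-posed system: a smooth (severely ill-conditioned) kernel.
n = 64
x = np.linspace(0.0, 1.0, n)
A = np.exp(-80.0 * (x[:, None] - x[None, :]) ** 2)   # Gaussian-blur operator
f_true = np.sin(2 * np.pi * x)                        # smooth "source" to recover
b = A @ f_true + 1e-3 * rng.standard_normal(n)        # noisy measurement

def cgls(A, b, iters):
    """Plain CGLS: conjugate gradients applied to A^T A f = A^T b."""
    f = np.zeros(A.shape[1])
    r = b - A @ f
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        f += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return f

# Semi-convergence: the error first drops as smooth modes are recovered,
# then grows again once the iteration starts fitting the noise.
errors = [np.linalg.norm(cgls(A, b, k) - f_true) for k in (1, 5, 20, 100)]
```

The stopping rules discussed in the abstract are precisely about picking the iteration count near the bottom of this error curve without access to `f_true`.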
A fast and flexible library-based thick-mask near-field calculation method
NASA Astrophysics Data System (ADS)
Ma, Xu; Gao, Jie; Chen, Xuanbo; Dong, Lisong; Li, Yanqiu
2015-03-01
Aerial image calculation is the basis of current lithography simulation. As the critical dimension (CD) of integrated circuits continuously shrinks, the thick-mask near-field calculation has an increasing influence on the accuracy and efficiency of the entire aerial image calculation process. This paper develops a flexible library-based approach that significantly improves the efficiency of the thick-mask near-field calculation compared to the rigorous modeling method, while achieving much higher accuracy than the Kirchhoff approximation method. Specifically, a set of typical features on the full chip is selected to serve as the training data, whose near-fields are pre-calculated and saved in the library. Given an arbitrary test mask, we first decompose it into convex corners, concave corners, and edges, and then match each patch to the training layouts using nonparametric kernel regression. Subsequently, we use the matched near-fields in the library to replace the mask patches and rapidly synthesize the near-field for the entire test mask. Finally, a data-fitting method based on least-squares estimation (LSE) is proposed to improve the accuracy of the synthesized near-field. We use a pair of two-dimensional mask patterns to test our method. Simulations show that the proposed method significantly speeds up the current FDTD method and effectively improves on the accuracy of the Kirchhoff approximation method.
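The patch-matching step rests on nonparametric kernel regression. The following is a minimal Nadaraya-Watson sketch of that idea only, with entirely made-up feature vectors and stand-in "near-fields"; the actual geometric descriptors, library size, and field representation used in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training library: feature vector -> precomputed near-field patch.
feats = rng.uniform(0.0, 1.0, size=(50, 4))           # geometric patch features
fields = np.sin(feats @ np.array([3.0, 1.0, 2.0, 0.5]))[:, None] * np.ones((1, 8))
# (stand-in for rigorously simulated near-fields, 8 samples per patch)

def kernel_match(f_test, feats, fields, h=0.15):
    """Nadaraya-Watson kernel regression: blend library near-fields with
    Gaussian weights on the squared feature-space distance."""
    d2 = ((feats - f_test) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))
    return (w[:, None] * fields).sum(axis=0) / w.sum()

f_test = rng.uniform(0.0, 1.0, size=4)
nf = kernel_match(f_test, feats, fields)   # synthesized near-field for the patch
```

Because the output is a convex combination of library entries, it always stays within the range of the stored fields; the paper's subsequent LSE data-fitting step then corrects the residual error of this blend.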
A new gradient shimming method based on undistorted field map of B0 inhomogeneity.
Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang
2016-04-01
Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by the B0 inhomogeneity that always exists in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on an undistorted field map of the B0 inhomogeneity, obtained by a more accurate field-map estimation technique. Compared to the traditional field-map estimation method, the new method exploits both the positive and negative polarities of the frequency-encoding gradient to eliminate the distortions caused by B0 inhomogeneity in the field map. A corresponding automatic post-processing procedure is then introduced to obtain the undistorted B0 field map, based on the invariance of the B0 inhomogeneity and the alternating polarity of the encoding gradient. Experimental results on both simulated and real gradient shimming tests demonstrate the high performance of the new method.
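The dual-echo field-map estimate that the paper improves on can be sketched as follows, assuming the standard relation Δφ = γ·ΔB0·ΔTE between accrued phase and field offset. The geometry, echo times, and inhomogeneity profile are illustrative, and the distortion/polarity correction that is the paper's actual contribution is not modeled here.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.577e6     # proton gyromagnetic ratio, rad/s/T

# Simulated B0 inhomogeneity along a 1-D profile (in tesla).
z = np.linspace(-1.0, 1.0, 128)
delta_b0_true = 1e-7 * (z ** 2 - 0.3 * z)     # smooth, ~0.1 uT scale

# Dual gradient-echo acquisition: phase accrued at echo time TE is
#   phi(TE) = phi0 + GAMMA * delta_b0 * TE   (phi0 is TE-independent).
te1, te2 = 5e-3, 10e-3
phi0 = 0.5
phase1 = np.angle(np.exp(1j * (phi0 + GAMMA * delta_b0_true * te1)))
phase2 = np.angle(np.exp(1j * (phi0 + GAMMA * delta_b0_true * te2)))

# Field map from the phase difference; taking np.angle of the complex
# ratio handles 2*pi wrapping as long as |delta phi| < pi.
delta_phi = np.angle(np.exp(1j * (phase2 - phase1)))
delta_b0_est = delta_phi / (GAMMA * (te2 - te1))
```

The phase difference cancels the TE-independent offset `phi0`, so the map depends only on ΔB0; geometric distortion of the GRE images themselves, the error source the paper targets, enters when these phases are sampled on a warped spatial grid.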
A comparison of field-based similarity searching methods: CatShape, FBSS, and ROCS.
Moffat, Kirstin; Gillet, Valerie J; Whittle, Martin; Bravi, Gianpaolo; Leach, Andrew R
2008-04-01
Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required.
FLASHFLOOD: A 3D Field-based similarity search and alignment method for flexible molecules
NASA Astrophysics Data System (ADS)
Pitman, Michael C.; Huber, Wolfgang K.; Horn, Hans; Krämer, Andreas; Rice, Julia E.; Swope, William C.
2001-07-01
A three-dimensional field-based similarity search and alignment method for flexible molecules is introduced. The conformational space of a flexible molecule is represented in terms of fragments and torsional angles of allowed conformations. A user-definable property field is used to compute features of fragment pairs. Features are generalizations of CoMMA descriptors (Silverman, B.D. and Platt, D.E., J. Med. Chem., 39 (1996) 2129.) that characterize local regions of the property field by its local moments. The features are invariant under coordinate system transformations. Features taken from a query molecule are used to form alignments with fragment pairs in the database. An assembly algorithm is then used to merge the fragment pairs into full structures, aligned to the query. Key to the method is the use of a context adaptive descriptor scaling procedure as the basis for similarity. This allows the user to tune the weights of the various feature components based on examples relevant to the particular context under investigation. The property fields may range from simple, phenomenological fields, to fields derived from quantum mechanical calculations. We apply the method to the dihydrofolate/methotrexate benchmark system, and show that when one injects relevant contextual information into the descriptor scaling procedure, better results are obtained more efficiently. We also show how the method works and include computer times for a query from a database that represents approximately 23 million conformers of seventeen flexible molecules.
Design of a reaction field using a linear-combination-based isotropic periodic sum method.
Takahashi, Kazuaki Z
2014-04-30
In our previous study (Takahashi et al., J. Chem. Theory Comput. 2012, 8, 4503), we developed the linear-combination-based isotropic periodic sum (LIPS) method. The LIPS method is based on the extended isotropic periodic sum theory, which produces a ubiquitous interaction potential function for estimating homogeneous and heterogeneous systems. The LIPS theory also provides a procedure for designing a periodic reaction field. To demonstrate this, a novel reaction field for the LIPS method was developed in the present work. It was labeled LIPS-SW because it provides an interaction potential function whose shape resembles that of the switch-function method. To evaluate the ability of the LIPS-SW method to describe homogeneous and heterogeneous systems, we carried out molecular dynamics (MD) simulations of bulk water and water-vapor interfacial systems. The results show that the LIPS-SW method gives higher accuracy than the conventional interaction potential function of the LIPS method: the accuracy for water-vapor interfacial systems was greatly improved, while that for bulk water systems was maintained. We conclude that the LIPS-SW method shows great potential for high-accuracy, high-performance computing in large-scale MD simulations. © 2014 Wiley Periodicals, Inc.
Systems and Methods for Implementing Robust Carbon Nanotube-Based Field Emitters
NASA Technical Reports Server (NTRS)
Manohara, Harish (Inventor); Kristof, Valerie (Inventor); Toda, Risaku (Inventor)
2015-01-01
Systems and methods in accordance with embodiments of the invention implement carbon nanotube-based field emitters. In one embodiment, a method of fabricating a carbon nanotube field emitter includes: patterning a substrate with a catalyst, where the substrate has thereon disposed a diffusion barrier layer; growing a plurality of carbon nanotubes on at least a portion of the patterned catalyst; and heating the substrate to an extent where it begins to soften such that at least a portion of at least one carbon nanotube becomes enveloped by the softened substrate.
An optical flow-based method for velocity field of fluid flow estimation
NASA Astrophysics Data System (ADS)
Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz
2017-06-01
The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to the optical characterization of velocity fields are based on computing partial derivatives of the image intensity with finite differences. However, the accuracy of the resulting velocity fields is low, because an exact estimation of spatial derivatives is very difficult in the presence of the rapid intensity changes caused by small-diameter particles in PIV images. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation and, more importantly, allows the derivatives to be evaluated at intermediate points between pixels. Numerical analysis proves that the method is able to estimate a separate vector for each particle with a 5 × 5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With a specialized multi-step hybrid approach to data analysis, the method improves the estimation of particle displacements far above 1 px.
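The baseline that the paper improves on, Lucas-Kanade with plain finite-difference derivatives, can be sketched in a few lines. This is a hedged single-window toy (one Gaussian "particle" with a known sub-pixel shift), not the paper's RBF-interpolated variant; all sizes and values are illustrative.

```python
import numpy as np

def lucas_kanade_window(I1, I2):
    """Single-window Lucas-Kanade: least-squares fit of one (u, v) to the
    brightness-constancy constraint  Ix*u + Iy*v + It = 0."""
    Iavg = 0.5 * (I1 + I2)                 # averaged gradients reduce bias
    Ix = np.gradient(Iavg, axis=1)         # finite-difference spatial derivatives
    Iy = np.gradient(Iavg, axis=0)
    It = I2 - I1                           # temporal derivative
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic "particle image": a Gaussian blob shifted by (0.6, 0.3) px.
y, x = np.mgrid[0:32, 0:32].astype(float)
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8.0)
I1 = blob(16.0, 16.0)
I2 = blob(16.6, 16.3)                      # displaced by u = 0.6, v = 0.3

u, v = lucas_kanade_window(I1, I2)
```

On smooth synthetic data the finite differences already work; the paper's point is that they degrade on real PIV images with sharp small-particle intensity peaks, which is where the Gaussian RBF interpolation of the image pays off.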
Mixture model and Markov random field-based remote sensing image unsupervised clustering method
NASA Astrophysics Data System (ADS)
Hou, Y.; Yang, Y.; Rao, N.; Lun, X.; Lan, J.
2011-03-01
In this paper, a novel method for remote sensing image clustering based on a mixture model and a Markov random field (MRF) is proposed. A remote sensing image can be modeled as a Gaussian mixture, and the image clustering result, which corresponds to the image label field, is an MRF. The clustering procedure is therefore transformed into a maximum a posteriori (MAP) problem by Bayes' theorem. The intensity difference and the spatial distance between the two pixels in the same clique are introduced into the traditional MRF potential function. The iterated conditional modes (ICM) algorithm is employed to find the MAP solution, and the maximum entropy criterion is used to choose the optimal number of clusters. In the experiments, the method is compared with traditional MRF clustering using ICM and simulated annealing (SA). The results show that the proposed method outperforms the traditional MRF model in both noise filtering and misclassification ratio.
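The GMM-likelihood-plus-MRF-prior MAP formulation with ICM can be sketched on a toy two-class "image". This is a generic Potts-style baseline with known class means, as a minimal sketch; the paper's modified potential (weighting neighbours by intensity difference and spatial distance) and its entropy-based model selection are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "image": two Gaussian intensity classes on a 32x32 grid.
labels_true = np.zeros((32, 32), dtype=int)
labels_true[:, 16:] = 1
means, sigma = np.array([0.2, 0.8]), 0.1
img = means[labels_true] + sigma * rng.standard_normal(labels_true.shape)

beta = 1.5                          # MRF smoothness weight (Potts prior)
labels = (img > 0.5).astype(int)    # crude threshold initialisation

def icm_sweep(labels):
    """One ICM pass: each pixel takes the label minimising
    Gaussian data term + beta * (number of disagreeing 4-neighbours)."""
    h, w = labels.shape
    new = labels.copy()
    for i in range(h):
        for j in range(w):
            best, best_cost = new[i, j], np.inf
            for k in range(2):
                data = (img[i, j] - means[k]) ** 2 / (2.0 * sigma ** 2)
                prior = sum(
                    new[i + di, j + dj] != k
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < h and 0 <= j + dj < w
                )
                cost = data + beta * prior
                if cost < best_cost:
                    best, best_cost = k, cost
            new[i, j] = best
    return new

for _ in range(5):
    labels = icm_sweep(labels)

miss_ratio = (labels != labels_true).mean()
```

ICM is a greedy coordinate ascent on the posterior, so it converges fast but only locally, which is why the abstract also compares against simulated annealing.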
Design method for a distributed Bragg resonator based evanescent field sensor
NASA Astrophysics Data System (ADS)
Bischof, David; Kehl, Florian; Michler, Markus
2016-12-01
This paper presents an analytic design method for a distributed Bragg resonator based evanescent field sensor. Such sensors can, for example, be used to measure changing refractive indices of the cover medium of a waveguide, as well as molecule adsorption at the sensor surface. For given starting conditions, the presented design method allows the analytical calculation of optimized sensor parameters for quantitative simulation and fabrication. The design process is based on the Fabry-Pérot resonator and analytical solutions of coupled mode theory.
FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves
Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng
2016-01-01
In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements. PMID:27657066
A universal parameterized gradient-based method for photon beam field size determination.
Lebron, Sharon; Yan, Guanghua; Li, Jonathan; Lu, Bo; Liu, Chihray
2017-09-09
We propose a universal, parameterized gradient-based method (PGM) for radiation field size determination. The PGM locates the beam profile's edge by parameterizing its penumbra region with a modified sigmoid function whose inflection point can be determined in closed form. The parameterization was validated with filter-flattened (FF), flattening-filter-free (FFF), and wedged profiles measured on two Elekta linac models (Synergy and Versa HD). Gamma analysis with the delta dose function set to zero was used to quantitatively assess the parameterization accuracy. Field sizes of FF beams were calculated with the PGM and the full width at half maximum (FWHM) method for comparison. To assess the consistency of the PGM and the FWHM method with geometric scaling across depths, the calculated field size at a reference depth was scaled to other depths and compared with the field sizes calculated from the measured profiles. The method was also validated against a maximum-slope method (MSM) with wedged and FFF profiles. We also evaluated the robustness of the three methods with respect to measurement noise, scanning step size, detector characteristics, and beam energy/modality. Small distance-to-agreement (0.02±0.02 mm) between the measured and parameterized penumbra regions was observed for all profiles. The differences between the field sizes calculated with the FWHM method and the PGM were consistent (0.9±0.3 mm), with the FWHM method yielding larger values. With geometric scaling, the PGM and the FWHM method produced maximum differences of 0.26 and 1.16 mm, respectively. For wedged and FFF beams, the mean differences relative to FF fields were 0.15±0.09 mm and 0.57±0.91 mm for the PGM and the MSM, respectively. The PGM was also found to produce more consistent results than the FWHM method and the MSM when measurement noise, scanning step size, detector characteristics, and beam energy/modality changed. The proposed PGM is universally applicable to
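The two edge definitions being compared can be illustrated on a single synthetic edge. The sketch below uses a plain logistic penumbra (a stand-in for the paper's modified sigmoid, whose exact form is not reproduced here); for any sigmoid of the form A/(1+exp(-(x-x0)/w)) the inflection point is exactly x0, which is what a closed-form gradient method exploits.

```python
import numpy as np

# One beam-profile edge with a logistic penumbra (illustrative values, mm).
A, x0, w = 1.0, 50.0, 2.5
x = np.linspace(0.0, 100.0, 2001)          # 0.05 mm scan step
edge = A / (1.0 + np.exp(-(x - x0) / w))

def fwhm_edge(x, profile, level=0.5):
    """Edge position where the profile crosses level*max (FWHM convention),
    by linear interpolation between the bracketing samples."""
    half = level * profile.max()
    i = int(np.argmax(profile >= half))     # first sample at or above the level
    x1, x2, y1, y2 = x[i - 1], x[i], profile[i - 1], profile[i]
    return x1 + (half - y1) * (x2 - x1) / (y2 - y1)

def inflection_edge(x, profile):
    """Edge position at the maximum gradient (the sigmoid's inflection point)."""
    return x[int(np.argmax(np.gradient(profile, x)))]

edge_fwhm = fwhm_edge(x, edge)
edge_infl = inflection_edge(x, edge)
```

For this symmetric penumbra the two definitions coincide at x0; the abstract's point is that they diverge on FFF and wedged beams, where the profile has no flat plateau to anchor the "half maximum", while the inflection-point definition stays consistent under geometric scaling.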
NASA Astrophysics Data System (ADS)
Zhang, Weidong; Cui, Xiangqun; Yao, Ruoya
1996-09-01
The advantages and disadvantages of conventional methods for detecting electric- or magnetic-field sensing signals in fiber-optic sensing systems are discussed. A new digital detection method for electric- or magnetic-field sensing signals, based on optical modulation and electrical demodulation, is proposed in order to eliminate power-frequency and low-frequency noise. The digital detection circuit is designed and the detection software is developed. Because closed-loop synchronous sampling is used in the circuit design, the leakage effect is controlled and the detection accuracy is increased, while the signal processing is also simplified, giving better real-time characteristics. Experiments show that results obtained with this method have good linearity.
Time-domain incident-field extrapolation technique based on the singularity-expansion method
Klaasen, J.J.
1991-05-01
In this report, a method is presented for extrapolating measurements from Nuclear Electromagnetic Pulse (NEMP) assessments directly in the time domain. The method is based on a time-domain extrapolation function obtained from the Singularity Expansion Method representation of the measured incident field of the NEMP simulator. Once the time-domain extrapolation function is determined, the responses recorded during an assessment can be extrapolated simply by convolving them with it. It is found that, to obtain useful extrapolated responses, the incident-field measurements need to be made minimum phase; otherwise unbounded results can be obtained. Results obtained with this technique are presented, using data from actual assessments.
Consistency check method for sighting axis and laser detection axis based on field testing
NASA Astrophysics Data System (ADS)
Guo, Hao; Zhao, Linfeng; Liu, Yanfang; Yin, Ruiguang
2016-09-01
Optical axis consistency is an important index of multi-axis equipment. Most optical-axis consistency tests address the consistency of multiple sighting axes, or the consistency between a sighting axis and a laser emission axis; consistency testing between a sighting axis and a laser detection axis remains difficult. A new method based on field testing was put forward to solve this difficulty. First, the sighting axis is taken as the reference and a high-precision numerical turntable is used to adjust the laser detection heading; the total field of view of the laser detection channel is then measured and the laser detection axis is obtained from it. Finally, the consistency error between the sighting axis and the laser detection axis is worked out by comparing the angular positions of the two axes. The method has several merits, such as high precision, wide applicability, and ease of operation; the field of view of the laser detection channel is also measured in the process. This paper shows that the proposed method can meet the demands of consistency checking between a sighting axis and a laser detection axis.
Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method
Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni
2017-01-01
The real-time, accurate measurement of the geomagnetic field is the foundation of high-precision geomagnetic navigation. Existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model that introduces the previously unconsidered geomagnetic daily-variation field. An extended Kalman-filter based compensation method is proposed, which allows a large amount of measurement data to be used to estimate the parameters and obtain the statistically optimal solution. Experimental results showed that the compensated geomagnetic-field strength remained close to the real value, with the measurement error basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its removal of the dependence on a high-precision measurement instrument. PMID:28445508
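The filtering idea can be sketched in a deliberately reduced form. The toy below estimates only a constant hard-iron-like bias from noisy magnetometer readings given a known reference field; in this linear special case the extended Kalman filter reduces to a plain Kalman filter. The paper's full model (daily variation, additional instrument errors) is much richer, and all values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: measured field = reference field in the sensor frame
#            + constant bias + white noise.
bias_true = np.array([120.0, -80.0, 45.0])       # nT
noise_std = 25.0                                 # nT
R_noise, Q = noise_std ** 2, 1e-6                # measurement / process variances

x = np.zeros(3)                  # bias estimate (filter state)
P = 1e6 * np.eye(3)              # state covariance (uninformative start)

for _ in range(200):
    # Reference field in the sensor frame (assumed known from a model + attitude).
    B_ref = rng.uniform(-3e4, 3e4, size=3)
    z = B_ref + bias_true + noise_std * rng.standard_normal(3)   # measurement
    # Predict: the bias is modelled as constant, so only P inflates.
    P = P + Q * np.eye(3)
    # Update with H = I, since (z - B_ref) observes the bias directly.
    y = z - B_ref - x                        # innovation
    S = P + R_noise * np.eye(3)
    K = P @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ y
    P = (np.eye(3) - K) @ P

residual = np.abs(x - bias_true).max()       # worst-axis bias error, nT
```

With 200 updates the per-axis uncertainty shrinks roughly like noise_std/sqrt(200) ≈ 1.8 nT, which is consistent in spirit with the sub-5 nT residual reported in the abstract, though the real problem involves estimating many more parameters.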
Spatial sound field synthesis and upmixing based on the equivalent source method.
Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang
2014-01-01
Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model; the array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multichannel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments were performed. The results demonstrate that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.
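The core ESM step, fitting equivalent-source strengths to microphone pressures with a regularized inverse and then re-radiating them, can be sketched at a single frequency. This is a hedged toy with placeholder geometry (one monopole, small planar arrays) and a simple Tikhonov solve, not the paper's upmixing pipeline or its optimized array spacing.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 2 * np.pi * 500 / 343.0                      # wavenumber at 500 Hz (c = 343 m/s)

def green(src, rec):
    """Free-field Green's function between source and receiver point sets."""
    d = np.linalg.norm(rec[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

def plane(n, z):
    """n x n grid of points on a 0.4 m square plane at height z."""
    g = np.linspace(-0.2, 0.2, n)
    xx, yy = np.meshgrid(g, g)
    return np.column_stack([xx.ravel(), yy.ravel(), np.full(xx.size, z)])

esrc = plane(5, -0.10)                           # equivalent-source plane
mics = plane(4, 0.00)                            # 16-channel microphone array
field = plane(6, 0.30)                           # target (re-synthesis) points

true_src = np.array([[0.03, -0.02, -0.05]])      # the actual source to reproduce
p_mic = green(true_src, mics)[:, 0]
noise = rng.standard_normal(len(p_mic)) + 1j * rng.standard_normal(len(p_mic))
p_mic = p_mic + 0.01 * np.abs(p_mic).mean() * noise   # ~1% measurement noise

# Regularized (underdetermined) equivalent-source strengths:
#   q = G^H (G G^H + lam I)^-1 p  -- Tikhonov handles the ill-posedness.
G = green(esrc, mics)
lam = 1e-4 * np.trace(G @ G.conj().T).real / G.shape[0]
q = G.conj().T @ np.linalg.solve(G @ G.conj().T + lam * np.eye(G.shape[0]), p_mic)

# Re-radiate from the equivalent sources to the target points.
p_est = green(esrc, field) @ q
p_ref = green(true_src, field)[:, 0]
fit = np.linalg.norm(G @ q - p_mic) / np.linalg.norm(p_mic)
err = np.linalg.norm(p_est - p_ref) / np.linalg.norm(p_ref)
```

The regularization parameter `lam` plays the same role as the regularized inverse filtering in the abstract: it trades fidelity to the (noisy) microphone data against amplification of ill-conditioned source components.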
A novel autonomous real-time position method based on polarized light and geomagnetic field.
Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Wang, Lu; Wang, Zhiwen
2015-04-08
Many animals exploit polarized light in order to calibrate their magnetic compasses for navigation. For example, some birds are equipped with biological magnetic and celestial compasses enabling them to migrate between the Western and Eastern Hemispheres, and the Vikings' ability to derive true direction from polarized light is also widely accepted, although their amazing navigational capabilities are still not completely understood. Inspired by these ancient navigational skills, we present here a combined real-time positioning method based on the use of polarized light and the geomagnetic field. The new method works independently of any artificial signal source, accumulates no errors, and obtains the position and the orientation directly. The device simply consists of two polarized-light sensors, a 3-axis compass, and a computer. Field experiments demonstrate the device's performance.
[A detection method of liver iron overload based on static field magnetization principle].
Zhang, Ziyi; Liu, Peiguo; Zhang, Liang; Ding, Liang; Lin, Xiaohong
2014-02-01
The magnetic induction method aims at noninvasive detection of liver iron overload by measuring hepatic magnetic susceptibility. To overcome the difficulty that eddy current effects interfere with the measurement of magnetic susceptibility, we propose an improved coil system based on the static field magnetization principle. We use a direct current excitation to eliminate the eddy current effect, and a rotary receiver coil to obtain the induced voltage. The magnetic field of a cylindrical object due to the magnetization effect was calculated, and the relative change of the maximum induced voltage was derived. The correlations between the magnetic susceptibility of the object and the maximum magnetic flux, the maximum induced voltage, and the relative change of the maximum induced voltage of the receiver coil were obtained by simulation experiments, and the results were compared with the theoretical calculations. The comparison shows that the simulation results fit the theoretical results well, which demonstrates that our method can effectively eliminate the eddy current effect.
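The rotating-receiver-coil measurement can be illustrated with a small sketch; the linearized susceptibility term (`geometry_factor`) is a hypothetical simplification of the paper's cylindrical-object model:

```python
import math

def peak_induced_voltage(b_field_T, area_m2, turns, omega_rad_s):
    # A coil rotating at angular rate omega in a static field B sees flux
    # N*B*A*cos(omega*t), so the induced voltage peaks at N*B*A*omega.
    return turns * b_field_T * area_m2 * omega_rad_s

def relative_voltage_change(chi, geometry_factor):
    # A weakly magnetic sample of susceptibility chi perturbs the local field
    # by roughly geometry_factor * chi, so the peak voltage shifts by the same
    # relative amount (hypothetical linearized model).
    return geometry_factor * chi

v_peak = peak_induced_voltage(0.1, 0.01, 100, 2 * math.pi)  # 1 rev/s coil
dv_rel = relative_voltage_change(1e-5, 3.0)
```

Because the excitation field is static, the only time variation comes from the coil rotation, which is the mechanism that sidesteps eddy currents in the abstract.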
A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Few studies have paid attention to failure prognostics for analog circuits, and the existing methods lack correlation with circuit analysis when extracting and calculating features, so the fault indicator (FI) calculation often lacks rationality, degrading prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Motivated by the fact that single-component faults are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and builds a complex field model. Then, using a parameter scanning model in the complex field, it analyzes the relationship between parameter variation and the degradation of single components in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degradation trend of single components of analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the FI feature set calculation is more reasonable, prediction accuracy is improved to some extent. The conclusions are verified by experiments.
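A bootstrap particle filter for tracking a degradation parameter can be sketched as follows; the linear degradation model and all parameters are illustrative assumptions, not the paper's circuit model:

```python
import math
import random

def particle_filter_theta(observations, n_particles=500, sigma=0.2, seed=0):
    # Bootstrap particle filter estimating the drift theta of a toy linear
    # degradation model y_t = theta * t + noise.
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 5.0) for _ in range(n_particles)]
    for t, y in enumerate(observations, start=1):
        # Weight each particle by the Gaussian likelihood of the observation.
        weights = [math.exp(-(y - p * t) ** 2 / (2.0 * sigma ** 2))
                   for p in particles]
        if sum(weights) == 0.0:
            weights = [1.0] * n_particles  # degenerate step: keep all
        # Resample with replacement, then jitter to keep particle diversity.
        resampled = rng.choices(particles, weights, k=n_particles)
        particles = [p + rng.gauss(0.0, 0.01) for p in resampled]
    return sum(particles) / n_particles

# Noise-free synthetic degradation trajectory with true theta = 2.0.
obs = [2.0 * t for t in range(1, 11)]
est = particle_filter_theta(obs)
```

Each filter step reweights candidate parameter values against the newest FI observation, which is the role the PF plays in updating the degradation-trend model above.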
Force-Field Based Quasi-Chemical Method for Rapid Evaluation of Binary Phase Diagrams.
Sweere, Augustinus J M; Fraaije, Johannes G E M
2015-11-05
We present the Pair Configurations to Molecular Activity Coefficients (PAC-MAC) method. The method is based on the pair sampling technique of Blanco (Fan, C. F.; Olafson, B. D.; Blanco, M.; Hsu, S. L. Application of Molecular Simulation to Derive Phase Diagrams of Binary Mixtures. Macromolecules 1992, 25, 3667-3676), extended with a free energy model that takes the packing of the molecules into account. The intermolecular energy is calculated using classical force fields. PAC-MAC is able to predict activity coefficients and the corresponding vapor-liquid equilibrium diagrams at least 4 orders of magnitude faster than molecular simulations. The accuracy of the PAC-MAC method is tested by comparing its results with experimental data and with the results of the COSMO-SAC model (Lin, S.-T.; Sandler, S. I. A Priori Phase Equilibrium Prediction from a Segment Contribution Solvation Model. Ind. Eng. Chem. Res. 2002, 41, 899-913). PAC-MAC (using the OPLS-aa force field) is shown to be comparable in accuracy to COSMO-SAC, with the considerable advantage that PAC-MAC in principle requires no quantum calculations, provided suitable force fields are available.
Transparent Conductive Coating Based on Carbon Nanotubes Using Electric Field Deposition Method
Latununuwe, Altje; Hattu, Nikmans; Setiawan, Andhy; Winata, Toto; Abdullah, Mikrajuddin; Darma, Yudi
2010-10-24
A transparent conductive coating based on carbon nanotubes (CNTs) was fabricated using the electric field deposition method. Scanning electron microscope (SEM) images show fairly uniform CNTs on Corning glass substrates. Moreover, the X-ray diffraction (XRD) results show a peak at around 25°, which confirms the presence of CNT material. CNT thin films obtained with different deposition times have different transmittance coefficients at a wavelength of 550 nm. I-V measurements show that higher sheet resistance values correspond to larger transmittance coefficients and vice versa.
Standard-wheel-based field calibration method for railway wheelset diameter online measuring system.
Chen, Yuejian; Xing, Zongyi; Li, Yifan; Yang, Zhi
2017-04-01
Laser displacement sensor (LDS)-based online measurement of the wheel diameter has been widely adopted in engineering for advantages such as noncontact operation, high efficiency, and high precision. Almost all of these online measuring systems require calibration to obtain the extrinsic parameters of the sensors. A field-based, easy-to-operate, economical, and efficient calibration method is proposed for an LDS-based wheel diameter online measuring system. Only one standard wheelset is used to build the 3D calibration target, which is also the measurement target of the system. The extrinsic parameters of each LDS are obtained by minimizing the residual sum of squares. A multistart framework, combining the generation of a number of uniformly distributed starting points with a nonlinear programming solver, is adopted to solve the minimization problem and obtain the global optimizer. Factors that cause calibration error, including the number of standard wheelset placements and sensor noise, are analyzed. Field experiments are carried out, and the correctness of the calibration method is verified through comparisons with manual caliper measurements.
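The multistart strategy described above (uniformly distributed starting points feeding a local solver, keeping the global optimizer) can be sketched on a toy one-dimensional objective; the gradient-descent local solver stands in for the paper's nonlinear programming solver:

```python
import random

def multistart_minimize(f, grad, bounds, n_starts=20, step=0.01,
                        iters=2000, seed=1):
    # Draw uniformly distributed starting points inside the bounds, run a
    # local gradient descent from each, and keep the best local minimum.
    rng = random.Random(seed)
    lo, hi = bounds
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        for _ in range(iters):
            x -= step * grad(x)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Toy "residual sum of squares" with two local minima near x = +/-2;
# the tilt term makes the minimum near -2 the global optimizer.
f = lambda x: (x * x - 4.0) ** 2 + x
grad = lambda x: 4.0 * x * (x * x - 4.0) + 1.0
x_star, f_star = multistart_minimize(f, grad, (-5.0, 5.0))
```

A single local run started near +2 would get stuck in the shallower basin; the multistart loop is what recovers the global optimizer.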
A new method for direction finding based on Markov random field model
NASA Astrophysics Data System (ADS)
Ota, Mamoru; Kasahara, Yoshiya; Goto, Yoshitaka
2015-07-01
Investigating the characteristics of plasma waves observed by scientific satellites in the Earth's plasmasphere/magnetosphere is effective for understanding the mechanisms for generating waves and the plasma environment that influences wave generation and propagation. In particular, finding the propagation directions of waves is important for understanding the mechanisms of VLF/ELF waves. To find these directions, the wave distribution function (WDF) method has been proposed. This method is based on the idea that observed signals consist of a number of elementary plane waves that define the wave energy density distribution. However, the resulting equations constitute an ill-posed problem in which the solution is not determined uniquely; hence, an adequate model must be assumed for a solution. Although many models have been proposed, the most suitable model for a given situation must be selected, because each model has its own advantages and disadvantages. In the present study, we propose a new method for direction finding of plasma waves measured by plasma wave receivers. Our method is based on the assumption that the WDF can be represented by a Markov random field model, with inference of the model parameters performed using a variational Bayesian learning algorithm. Using computer-generated spectral matrices, we evaluated the performance of the model and compared the results with those obtained from two conventional methods.
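The ill-posedness mentioned above is commonly handled by regularization. As a generic illustration (the paper itself uses an MRF prior with variational Bayesian learning instead), a tiny Tikhonov-regularized least-squares solve for a nearly rank-deficient system looks like:

```python
def tikhonov_solve_2x2(A, b, lam):
    # Solve min ||Ax - b||^2 + lam*||x||^2 for two unknowns by forming the
    # regularized normal equations (A^T A + lam*I) x = A^T b and inverting
    # the 2x2 matrix in closed form. A is a list of rows [a0, a1].
    m00 = sum(r[0] * r[0] for r in A) + lam
    m01 = sum(r[0] * r[1] for r in A)
    m11 = sum(r[1] * r[1] for r in A) + lam
    t0 = sum(r[0] * y for r, y in zip(A, b))
    t1 = sum(r[1] * y for r, y in zip(A, b))
    det = m00 * m11 - m01 * m01
    return ((m11 * t0 - m01 * t1) / det, (m00 * t1 - m01 * t0) / det)

A = [[1.0, 1.0], [1.0, 1.0001]]   # nearly rank-deficient -> ill-posed
b = [2.0, 2.0001]                 # consistent with x = (1, 1)
x = tikhonov_solve_2x2(A, b, 1e-6)
```

The penalty `lam` damps the unstable direction of the problem while leaving the well-determined direction essentially untouched, which is the generic role any prior model plays in WDF inversion.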
Using geotypes for landslide hazard assessment and mapping: a coupled field and GIS-based method
NASA Astrophysics Data System (ADS)
Bilgot, S.; Parriaux, A.
2009-04-01
Switzerland is particularly prone to landslides; indeed, about 10% of its area is considered unstable. In view of this, its Department of the Environment (BAFU) introduced in 1997 a method for producing landslide hazard maps. It is routinely used but, like most of the methods applied in Europe to map unstable areas, it is mainly based on the signs of previous or current phenomena (geomorphologic mapping, archive consultation, etc.), even though instabilities can appear where there is nothing to show that they existed earlier. Furthermore, the transcription from the geomorphologic map to the hazard map can vary according to the geologist or geographer who carries it out: the method suffers from a certain lack of transparency. The aim of this project is to lay the groundwork for a new landslide hazard mapping method; based on instability predisposition assessment, it involves designating the main factors of landslide susceptibility, integrating them in a GIS to calculate a landslide predisposition index, and implementing new methods to evaluate these factors; to be competitive, these processes have to be both cheap and quick. To identify the most important parameters for assessing slope stability, we chose a large panel of topographic, geomechanical and hydraulic parameters and tested their importance by calculating safety factors on theoretical landslides using Geostudio 2007®; we could thus determine that slope, cohesion, hydraulic conductivity and saturation play an important role in soil stability. After showing that the cohesion and hydraulic conductivity of loose materials are strongly linked to their granulometry and plasticity index, we implemented two new field tests, one based on teledetection and one coupling a sedimentometric test with a methylene blue test, to evaluate these parameters. From these data, we could deduce approximate values of maximum cohesion and saturated hydraulic conductivity. The hydraulic conductivity of
Simulation on Temperature Field of Radiofrequency Lesions System Based on Finite Element Method
NASA Astrophysics Data System (ADS)
Xiao, D.; Qian, L.; Qian, Z.; Li, W.
2011-01-01
This paper describes how to obtain a volume model of the damaged region from simulations of the temperature field of a radiofrequency ablation lesion system used to treat Parkinson's disease, based on the finite element method. This volume model reflects, to some degree, the shape and size of the damaged tissue during treatment, and its evolution with time and core temperature. Using the Pennes equation as the heat conduction equation for radiofrequency ablation of biological tissue, the temperature distribution field of the tissue is obtained by solving the equations with the finite element method. To establish damage models at temperature points of 60°C, 65°C, 70°C, 75°C, 80°C, 85°C and 90°C, with time points of 30 s, 60 s, 90 s and 120 s, the Parkinson's disease nucleus model is reduced to a uniform, infinite model with the RF pin at the origin. Theoretical simulations of these models are presented, focusing on the effective lesion size in the horizontal and vertical directions. The results yield bivariate complete quadratic nonlinear joint temperature-time models of the maximum damage diameter and maximum height. The models comprehensively reflect the degeneration of target tissue caused by radiofrequency temperature and duration. This lays the foundation for accurate monitoring of clinical RF treatment of Parkinson's disease in the future.
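A minimal explicit finite-difference discretization of a 1-D Pennes bioheat equation illustrates the kind of simulation described; all tissue and source parameters here are hypothetical round numbers, not the paper's values:

```python
def pennes_1d(n=51, dx=1e-3, dt=0.01, steps=3000,
              k=0.5, rho_c=3.6e6, w_b=2e4, t_a=37.0, q_src=5e6):
    # Explicit scheme for rho*c*dT/dt = k*T_xx + w_b*(T_a - T) + q.
    # rho_c: volumetric heat capacity [J/(m^3 K)]; w_b: perfusion term
    # already multiplied by blood rho*c [W/(m^3 K)]; q_src: RF power
    # density [W/m^3] deposited in the central node only.
    T = [37.0] * n
    alpha = k / rho_c
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            lap = (T[i - 1] - 2.0 * T[i] + T[i + 1]) / dx ** 2
            src = q_src if i == n // 2 else 0.0
            Tn[i] = T[i] + dt * (alpha * lap
                                 + (w_b * (t_a - T[i]) + src) / rho_c)
        T = Tn  # boundaries held at 37 C (body temperature)
    return T

T = pennes_1d()  # 30 s of simulated heating
```

The chosen `dt` satisfies the explicit stability limit (`alpha*dt/dx**2` is well below 0.5); the temperature at the source node rises toward the level where conduction and perfusion losses balance the RF deposition.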
Numerical focusing methods for full field OCT: a comparison based on a common signal model.
Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A
2014-06-30
In this paper a theoretical model of the full field swept source (FF SS) OCT signal is presented, based on the angular spectrum wave propagation approach, which accounts for the defocus error with imaging depth. It is shown that, using the same theoretical signal model, numerical defocus correction methods based on a simple forward model (FM) and on inverse scattering (IS), the latter being similar to interferometric synthetic aperture microscopy (ISAM), can be derived. Both FM and IS are compared quantitatively with sub-aperture based digital adaptive optics (DAO). FM has the least numerical complexity and is the fastest of the three in terms of computational speed. An SNR improvement of more than 10 dB is shown for all three methods over a sample depth of 1.5 mm. For a sample with a refractive index that is non-uniform with depth, FM and IS both improved the depth of focus (DOF) by a factor of 7 at an imaging NA of 0.1. DAO performs best in the case of a non-uniform refractive index, improving the DOF by a factor of 11.
ERIC Educational Resources Information Center
Laman, Tasha Tropp; Miller, Erin T.; Lopez-Robertson, Julia
2012-01-01
This qualitative study examines what early childhood preservice teachers enrolled in a field-based literacy methods course deemed relevant regarding teaching, literacy, and learning. This study is based on postcourse interviews with 7 early childhood preservice teachers. Findings suggest that "contextualized field experiences" facilitate…
NASA Astrophysics Data System (ADS)
Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang
2008-03-01
Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suited only to particular scenes and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, orthogonal polynomial interpolation and sub-pixel edge detection in the neighborhood of each rough edge pixel are subsequently adopted to locate the rough edges at sub-pixel accuracy. Numerical simulations show that this method can locate target edges accurately and robustly.
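A DoG filter of the kind described can be sketched in a few lines on a 1-D signal; the kernel size and sigmas are illustrative choices:

```python
import math

def dog_kernel(size, sigma_c, sigma_s):
    # Difference-of-Gaussians: narrow excitatory center minus wide inhibitory
    # surround, mimicking an ON-center receptive field.
    half = size // 2
    g = lambda x, s: math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    return [g(i - half, sigma_c) - g(i - half, sigma_s) for i in range(size)]

def convolve(signal, kernel):
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - half, 0), len(signal) - 1)  # clamp borders
            acc += signal[idx] * kv
        out.append(acc)
    return out

# A step edge: the DoG response magnitude peaks around the transition.
step = [0.0] * 20 + [1.0] * 20
resp = convolve(step, dog_kernel(9, 1.0, 2.0))
```

Because the kernel is roughly zero-mean, flat regions give near-zero response while the step produces a localized positive/negative swing around the edge, which is what the subsequent sub-pixel refinement stage would then sharpen.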
Schall, Mark C; Fethke, Nathan B; Chen, Howard; Gerr, Fred
2015-05-01
The performance of an inertial measurement unit (IMU) system for directly measuring thoracolumbar trunk motion was compared to that of the Lumbar Motion Monitor (LMM). Thirty-six male participants completed a simulated material handling task with both systems deployed simultaneously. Estimates of thoracolumbar trunk motion obtained with the IMU system were processed using five common methods for estimating trunk motion characteristics. Results of measurements obtained from IMUs secured to the sternum and pelvis had smaller root-mean-square differences and mean bias estimates in comparison to results obtained with the LMM than results of measurements obtained solely from a sternum mounted IMU. Fusion of IMU accelerometer measurements with IMU gyroscope and/or magnetometer measurements was observed to increase comparability to the LMM. Results suggest investigators should consider computing thoracolumbar trunk motion as a function of estimates from multiple IMUs using fusion algorithms rather than using a single accelerometer secured to the sternum in field-based studies.
Study on Two Methods for Nonlinear Force-Free Extrapolation Based on Semi-Analytical Field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.; Song, M. T.
2011-03-01
In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352, 343, 1990) have been used to test two nonlinear force-free extrapolation methods. One is the boundary integral equation (BIE) method developed by Yan and Sakurai (Solar Phys. 195, 89, 2000), and the other is the approximate vertical integration (AVI) method developed by Song et al. (Astrophys. J. 649, 1084, 2006). Some improvements have been made to the AVI method to avoid singular points in the calculation. It is found that the correlation coefficients between the first semi-analytical field and the field extrapolated with the BIE method, and also with the improved AVI method, are greater than 90% below a height of 10 grid points above the 64×64 lower boundary. For the second semi-analytical field, these correlation coefficients are greater than 80% below the same relative height. Although differences between the semi-analytical solutions and the extrapolated fields exist for both the BIE and AVI methods, the two methods give reliable results up to heights of about 15% of the extent of the lower boundary.
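The correlation coefficient used above to compare the semi-analytical and extrapolated fields is the standard Pearson correlation, computed slice by slice over flattened field values, e.g.:

```python
import math

def correlation(u, v):
    # Pearson correlation between two flattened field slices.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

exact = [0.0, 1.0, 2.0, 3.0, 4.0]
extrapolated = [0.1, 0.9, 2.2, 2.8, 4.1]   # hypothetical noisy reconstruction
c = correlation(exact, extrapolated)
```

Evaluating this at each height layer gives the "greater than 90% below a given height" profiles the abstract reports.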
NASA Astrophysics Data System (ADS)
Diaz, P. M. A.; Feitosa, R. Q.; Sanches, I. D.; Costa, G. A. O. P.
2016-06-01
This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of the temporal interaction parameters is treated as an optimization problem whose goal is to find the transition matrix that maximizes CRF performance on a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall accuracy or average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrate that the proposed method substantially outperforms estimates based on joint or conditional class transition probabilities derived from training samples.
Zhang, Yu-Cun; Fu, Xian-Bin; Liu, Bin; Qi, Yan-De; Zhou, Shan
2013-01-01
To track the changes of a forging's temperature field during heat treatment, a temperature field detection method based on infrared spectra is proposed for large cylindrical forgings. On the basis of heat transfer theory, a temperature field model of large barrel forgings was established by the method of separation of variables. Using infrared spectroscopy, a temperature measurement system for large forgings was built based on a three-level interference filter. Temperature field detection of the forging during heat treatment was realized by combining the temperature data with the forging temperature field detection model. Finally, the feasibility of the method is demonstrated by simulation experiments. The method can provide a theoretical basis for the correct implementation of the heat treatment process.
An Entropy-Based Propagation Speed Estimation Method for Near-Field Subsurface Radar Imaging
NASA Astrophysics Data System (ADS)
Flores-Tapia, Daniel; Pistorius, Stephen
2010-12-01
During the last forty years, Subsurface Radar (SR) has been used in an increasing number of noninvasive/nondestructive imaging applications, ranging from landmine detection to breast imaging. To properly assess the dimensions and locations of the targets within the scan area, SR data sets have to be reconstructed. This process usually requires knowledge of the propagation speed in the medium, which is usually obtained by performing an offline measurement on a representative sample of the materials that form the scan region. Nevertheless, in some novel near-field SR scenarios, such as Microwave Wood Inspection (MWI) and Breast Microwave Radar (BMR), the extraction of a representative sample is not an option due to the noninvasive requirements of the application. A novel technique to determine the propagation speed of the medium, based on an information theory metric, is proposed in this paper. The proposed method uses the Shannon entropy of the reconstructed images as the focal quality metric to generate an estimate of the propagation speed in a given scan region. The performance of the proposed algorithm was assessed using data sets collected from experimental setups that mimic the dielectric contrast found in BMR and MWI scenarios. The proposed method yielded accurate results and exhibited an execution time on the order of seconds.
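The entropy-as-focal-quality idea can be sketched as follows; the toy reconstruction (a point target that blurs as the assumed speed departs from the true speed) is a hypothetical stand-in for the actual SR image formation:

```python
import math

def shannon_entropy(image):
    # Entropy of a normalized intensity image: a focused (concentrated)
    # image has lower entropy than a defocused one.
    total = sum(image)
    probs = [p / total for p in image if p > 0.0]
    return -sum(p * math.log(p) for p in probs)

def toy_reconstruction(speed, true_speed=3e8 / math.sqrt(9.0)):
    # Hypothetical model: a point target blurs in proportion to the
    # relative speed mismatch used in the reconstruction.
    width = 1.0 + 50.0 * abs(speed - true_speed) / true_speed
    return [math.exp(-(x - 32) ** 2 / (2.0 * width ** 2)) for x in range(64)]

def estimate_speed(candidates):
    # Pick the candidate whose reconstruction has minimum entropy.
    return min(candidates, key=lambda v: shannon_entropy(toy_reconstruction(v)))

c0 = 3e8
speeds = [c0 / math.sqrt(er) for er in (4.0, 6.0, 9.0, 12.0, 16.0)]
best = estimate_speed(speeds)
```

Sweeping candidate permittivities and keeping the minimum-entropy reconstruction recovers the medium speed without any physical sample extraction, which is the core of the proposed technique.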
Correlation-based methods in calibrating an FBG sensor with strain field non-uniformity
NASA Astrophysics Data System (ADS)
Cieszczyk, S.
2015-12-01
Fibre Bragg gratings have many sensing applications, mainly for measuring strain and temperature. A physical quantity that influences the grating uniformly along its length causes a corresponding shift of the Bragg wavelength. Many peak detection algorithms have been proposed, among which the most popular are maximum intensity detection, centroid detection, the least squares method, cross-correlation, auto-correlation and fast phase correlation. Non-uniform grating elongation deforms the spectrum. Such non-uniformity can be intentional or appear as an unintended effect of placing sensing elements in the tested structure. Heterogeneous influences on the grating may introduce additional errors and make it difficult to track the Bragg wavelength from a distorted spectrum. This paper presents the application of correlation methods to estimate peak wavelength shifts under non-uniform Bragg grating elongation. The auto-correlation, cross-correlation and fast phase correlation algorithms are considered, and experimental spectra measured for an axisymmetric strain field along the Bragg grating are analyzed. The strain profile consists of constant and variable components. The results of this study indicate the properties of correlation algorithms applied to moderately non-uniform elongation of an FBG sensor.
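A cross-correlation estimate of the Bragg peak shift, one of the algorithms compared above, can be sketched as follows (integer-sample resolution only; the spectra here are synthetic Gaussians):

```python
import math

def cross_corr_shift(ref, meas):
    # Find the lag (in samples) that maximizes the cross-correlation between
    # a reference spectrum and the measured, possibly distorted spectrum.
    n = len(ref)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n // 2, n // 2):
        acc = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                acc += ref[i] * meas[j]
        if acc > best_val:
            best_lag, best_val = lag, acc
    return best_lag

# Synthetic Gaussian "spectrum" shifted by 5 samples.
ref = [math.exp(-(i - 40) ** 2 / 50.0) for i in range(100)]
meas = [math.exp(-(i - 45) ** 2 / 50.0) for i in range(100)]
shift = cross_corr_shift(ref, meas)
```

Because the whole spectrum contributes to the correlation, this estimator degrades more gracefully under moderate spectral distortion than single-point maximum detection, which is the behavior the paper investigates.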
NASA Astrophysics Data System (ADS)
Templeton, Jeremy A.; Jones, Reese E.; Wagner, Gregory J.
2010-12-01
This paper derives a methodology to enable spatial and temporal control of thermally inhomogeneous molecular dynamics (MD) simulations. The primary goal is to perform non-equilibrium MD of thermal transport analogous to continuum solutions of heat flow with complex initial and boundary conditions, moving MD beyond quasi-equilibrium simulations with periodic boundary conditions. In our paradigm, the entire spatial domain is filled with atoms and overlaid with a finite element (FE) mesh. The representation of continuous variables on this mesh allows fixed-temperature and fixed-heat-flux boundary conditions to be applied, non-equilibrium initial conditions to be imposed, and source terms to be added to the atomistic system. In effect, the FE mesh defines a large length scale over which atomic quantities can be locally averaged to derive continuous fields. Unlike coupling methods that require a surrogate model of thermal transport such as Fourier's law, in this work the FE grid is employed only for its projection, averaging and interpolation properties. Inherent in this approach is the assumption that MD observables of interest, e.g. temperature, can be mapped to a continuous representation in a non-equilibrium setting. This assumption is exploited to derive a single, unified set of control forces based on Gaussian isokinetic thermostats that regulate the temperature and heat flux locally in the MD. Example problems are used to illustrate potential applications. In addition to the physical results, data relevant to understanding the numerical effects of the method on these systems are also presented.
The value of mindfulness-based methods in teaching at a clinical field placement.
Gökhan, Nurper; Meehan, Edward F; Peters, Kevin
2010-04-01
The value of mindfulness-based methods in an undergraduate field placement was investigated in relation to the acquisition of self-care and other basic clinical competencies. The participants were 22 students in an applied behavioral analysis course, which included a mindfulness-based training module, and 20 students enrolled in an experimental psychology course without mindfulness training. The Mindful Attention Awareness Scale, the Freiburg Mindfulness Inventory, and the Kentucky Inventory of Mindfulness Skills were used as measures before and after the intervention. Mindfulness-trained participants kept records and were asked to share their personal experiences during supervision and an exit interview. Results demonstrated that the training significantly increased mindfulness. Qualitative data indicated enhanced self-care, attention to well-being, self-awareness, active involvement in acquiring skills, and empathy and compassion. The need to expand the utility of mindfulness to the realm of education and the importance of including comparison groups with other self-care modules in future studies are discussed.
Index cost estimate based BIM method - Computational example for sports fields
NASA Astrophysics Data System (ADS)
Zima, Krzysztof
2017-07-01
The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, and the geometry of construction objects, together with unit costs of sports facilities, is shown. Calculations with the Index Cost Estimate Based BIM method, using Case Based Reasoning, are presented. The article describes local and global similarity measurement and gives an example of a BIM-based quantity takeoff process. The outcome of cost calculations based on the CBR method is presented as the final result.
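Local and global similarity in Case Based Reasoning can be sketched as follows; the attribute set, value ranges and weights for sports-field cases are hypothetical illustrations, not the paper's database schema:

```python
def local_similarity(a, b, value_range):
    # Normalized distance-based similarity for one numeric attribute.
    return 1.0 - abs(a - b) / value_range

def global_similarity(case_a, case_b, ranges, weights):
    # Weighted aggregate of local similarities, used to retrieve the most
    # similar historical cost record from the case base.
    sims = [w * local_similarity(x, y, r)
            for x, y, r, w in zip(case_a, case_b, ranges, weights)]
    return sum(sims) / sum(weights)

# Hypothetical sports-field cases: (area m^2, perimeter m, number of layers)
new_case = (7000.0, 350.0, 4.0)
db_case = (6500.0, 340.0, 4.0)
sim = global_similarity(new_case, db_case, ranges=(10000.0, 500.0, 6.0),
                        weights=(0.5, 0.3, 0.2))
```

The retrieved case with the highest global similarity supplies the unit costs from which the early-phase index estimate is assembled.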
An automatic detection method to the field wheat based on image processing
NASA Astrophysics Data System (ADS)
Wang, Yu; Cao, Zhiguo; Bai, Xiaodong; Yu, Zhenghong; Li, Yanan
2013-10-01
Automatic observation of field crops has attracted increasing attention recently. Using image processing technology instead of the existing manual observation method allows timely observation and consistent management. Extracting the wheat from field wheat images is the basis of this task. To improve the accuracy of wheat segmentation, a novel two-stage wheat image segmentation method is proposed. The training stage adjusts several key thresholds, which will be used in the segmentation stage, to achieve the best segmentation results, and records these thresholds. The segmentation stage compares different values of a color index to determine the class of each pixel. To verify the superiority of the proposed algorithm, we compared our method with other crop segmentation methods. Experimental results show that the proposed method has the best performance.
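One common color-index segmentation is the excess-green index; the paper's specific indices and trained thresholds may differ, so this is only a representative sketch:

```python
def excess_green_mask(pixels, threshold=20):
    # Classify pixels as vegetation/background with the excess-green index
    # ExG = 2G - R - B, a common choice for separating crop canopy from soil.
    # The threshold is the kind of parameter the paper's training stage tunes.
    return [(2 * g - r - b) > threshold for (r, g, b) in pixels]

pixels = [(60, 120, 50),    # green canopy pixel
          (110, 100, 95)]   # brownish soil pixel
mask = excess_green_mask(pixels)
```

Running such an index over the whole image yields the binary wheat mask; the two-stage design then replaces the fixed threshold with values learned on labeled training images.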
NASA Astrophysics Data System (ADS)
Shin, Jaemin; Lee, Hyun Geun; Lee, June-Yub
2016-12-01
The phase-field crystal equation derived from the Swift-Hohenberg energy functional is a sixth order nonlinear equation. We propose numerical methods based on a new convex splitting for the phase-field crystal equation. The first order convex splitting method based on the proposed splitting is unconditionally gradient stable, which means that the discrete energy is non-increasing for any time step. The second order scheme is unconditionally weakly energy stable, which means that the discrete energy is bounded by its initial value for any time step. We prove mass conservation, unique solvability, energy stability, and the order of truncation error for the proposed methods. Numerical experiments are presented to show the accuracy and stability of the proposed splitting methods compared to the existing other splitting methods. Numerical tests indicate that the proposed convex splitting is a good choice for numerical methods of the phase-field crystal equation.
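For orientation, a standard first-order convex splitting of the phase-field crystal equation, in the style the paper builds on (the authors' new splitting may group terms differently; assume ε < 1 so the quadratic term is convex), treats the convex part of the Swift-Hohenberg energy implicitly and the concave gradient term explicitly:

```latex
E(\phi) = \int_\Omega \left( \frac{1}{4}\phi^4 + \frac{1-\epsilon}{2}\phi^2
        - |\nabla\phi|^2 + \frac{1}{2}(\Delta\phi)^2 \right) d\mathbf{x},
\qquad
\phi_t = \Delta \frac{\delta E}{\delta \phi},

\frac{\phi^{n+1} - \phi^n}{\Delta t}
  = \Delta \left( (\phi^{n+1})^3 + (1-\epsilon)\,\phi^{n+1}
      + \Delta^2 \phi^{n+1} + 2\,\Delta \phi^n \right).
```

Because every implicitly treated term comes from a convex energy contribution and the explicit term from the concave one, the discrete energy is non-increasing for any time step, which is the unconditional gradient stability property discussed in the abstract.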
The Corpus: A Data-Based Device for Teaching Field Methods.
ERIC Educational Resources Information Center
Stoddart, Kenneth
1987-01-01
Notes that one-semester field methods courses in sociology often lack adequate time for students to learn appropriate techniques and still collect and report their data. Describes how undergraduate students bypass this problem by using multiple observations of a single event to quickly form a corpus of ethnographic data. (JDH)
NASA Astrophysics Data System (ADS)
Zhang, Lihui; Wang, Dongchuan; Huang, Mingxiang; Gong, Jianhua; Fang, Liqun; Cao, Wuchun
2008-10-01
With the development of mobile technologies and their integration with spatial information technologies, it becomes possible to develop new techno-support solutions for epidemiological field investigation, especially for the disposal of emergent public health events. Based on mobile technologies and a virtual geographic environment, the authors have designed a model for collaborative work in four communication patterns, namely S2S (Static to Static), M2S (Mobile to Static), S2M (Static to Mobile), and M2M (Mobile to Mobile). Building on this model, this paper explores mobile online mapping for mobile collaboration, conducts an experimental case study of HFRS (Hemorrhagic Fever with Renal Syndrome) fieldwork, and then develops a prototype of an emergency response disposition information system to test the effectiveness and usefulness of field surveys based on mobile collaboration.
A new method for matched field localization based on two-hydrophone
NASA Astrophysics Data System (ADS)
Li, Kun; Fang, Shi-liang
2015-03-01
Conventional matched field processing (MFP) uses large vertical arrays to locate an underwater acoustic target. However, the use of large vertical arrays increases equipment and computational cost, and introduces problems such as element failures and array tilt that degrade localization performance. In this paper, a matched field localization method using two hydrophones is proposed for underwater acoustic pulse signals with an unknown emitted signal waveform. Using the received signals of the hydrophones and the ocean channel impulse response, which can be calculated from an acoustic propagation model, the spectral matrix of the emitted signal for different source locations can be estimated by frequency-domain least squares. The resulting spectral matrix of the emitted signal for every grid region is then multiplied by the ocean channel frequency response matrix to generate the spectral matrix of the replica signal. Finally, the source location is estimated by comparing the difference between the spectral matrices of the received signal and the replica signal. Simulated results for broadband signals in a shallow water environment demonstrate the significant localization performance of the proposed method. In addition, the localization accuracy in five different cases is analyzed in simulation trials, and the results show that the proposed method has a sharp peak and low sidelobes, overcoming the high-sidelobe problem of conventional MFP caused by the small number of elements.
Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude
2012-10-01
A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, thus providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain, the proposed method does not need the two-dimensional spatial fast Fourier transform generally used in time-domain holography and real-time near-field acoustic holography; it therefore avoids, in theory, some errors associated with that transform and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and an experiment with two speakers.
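The Tikhonov step used at each time instant can be illustrated in isolation. The sketch below is not the paper's full holography formulation; it only shows the regularized least-squares solve that maps measured pressures to a spectrum estimate, with a generic propagation matrix `A` and regularization parameter `lam` as assumed placeholders.

```python
import numpy as np

def tikhonov_step(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 in closed form.

    A   : (m, n) linear propagation matrix for one time step (assumed known)
    b   : (m,) measured pressures at the microphones
    lam : Tikhonov regularization parameter
    """
    n = A.shape[1]
    # Normal equations with a ridge term; lam trades bias against noise amplification
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Illustrative use on a well-posed synthetic system
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true
x_est = tikhonov_step(A, b, lam=1e-8)
```

With noisy data, `lam` would be chosen per time step (e.g. by an L-curve or discrepancy criterion), which is the role regularization plays in the iterative reconstruction described above.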
Localization of incipient tip vortex cavitation using ray based matched field inversion method
NASA Astrophysics Data System (ADS)
Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon
2015-10-01
Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for localizing the source of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replica pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller, and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
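The incoherent broadband averaging described above can be sketched with a standard Bartlett-style correlation; this is a generic matched-field processor, not the authors' exact inversion scheme, and the array/grid sizes are illustrative assumptions.

```python
import numpy as np

def bartlett_broadband(p_meas, replicas):
    """Incoherently averaged normalized correlation over frequencies.

    p_meas   : (F, N) measured complex pressures, F frequencies, N sensors
    replicas : (G, F, N) modeled fields for G candidate source positions
    Returns a (G,) ambiguity value per grid point; the peak is the source estimate.
    """
    # Normalize per frequency so each frequency contributes equally
    w = replicas / np.linalg.norm(replicas, axis=-1, keepdims=True)
    d = p_meas / np.linalg.norm(p_meas, axis=-1, keepdims=True)
    corr = np.abs(np.einsum('gfn,fn->gf', w.conj(), d)) ** 2
    return corr.mean(axis=1)  # incoherent average over frequency

# Toy check: the grid point matching the measured field should score highest
rng = np.random.default_rng(1)
replicas = rng.standard_normal((5, 3, 4)) + 1j * rng.standard_normal((5, 3, 4))
p_meas = replicas[2]                     # pretend the source sits at grid point 2
amb = bartlett_broadband(p_meas, replicas)
best = int(np.argmax(amb))
```

Averaging the per-frequency correlations (rather than multiplying complex fields coherently) is what suppresses frequency-dependent sidelobes, the effect the abstract attributes to the broadband technique.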
Evaluation of Three Field-Based Methods for Quantifying Soil Carbon
Izaurralde, Roberto C.; Rice, Charles W.; Wielopolski, Lucian; Ebinger, Michael H.; Reeves, James B.; Thomson, Allison M.; Francis, Barry; Mitra, Sudeep; Rappaport, Aaron G.; Etchevers, Jorge D.; Sayre, Kenneth D.; Govaerts, Bram; McCarty, Gregory W.
2013-01-01
Three advanced technologies to measure soil carbon (C) density (g C m−2) are deployed in the field and the results compared against those obtained by the dry combustion (DC) method. The advanced methods are: a) Laser Induced Breakdown Spectroscopy (LIBS), b) Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), and c) Inelastic Neutron Scattering (INS). The measurements and soil samples were acquired at Beltsville, MD, USA and at Centro International para el Mejoramiento del Maíz y el Trigo (CIMMYT) at El Batán, Mexico. At Beltsville, soil samples were extracted at three depth intervals (0–5, 5–15, and 15–30 cm) and processed for analysis in the field with the LIBS and DRIFTS instruments. The INS instrument determined soil C density to a depth of 30 cm via scanning and stationary measurements. Subsequently, soil core samples were analyzed in the laboratory for soil bulk density (kg m−3), C concentration (g kg−1) by DC, and results reported as soil C density (kg m−2). Results from each technique were derived independently and contributed to a blind test against results from the reference (DC) method. A similar procedure was employed at CIMMYT in Mexico, but only with the LIBS and DRIFTS instruments. Following conversion to common units, we found that the LIBS, DRIFTS, and INS results can be compared directly with those obtained by the DC method. The first two methods and the standard DC require soil sampling and need soil bulk density information to convert soil C concentrations to soil C densities, while the INS method does not require soil sampling. We conclude that, in comparison with the DC method, the three instruments (a) showed acceptable performances, although further work is needed to improve calibration techniques, and (b) demonstrated their portability and their capacity to perform under field conditions. PMID:23383225
The development of field-based measurement methods for radioactive fallout assessment.
Miller, Kevin M; Larsen, Richard J
2002-05-01
An overview is provided of the development of field equipment, instrument systems, and methods of analysis that were used to assess the impact of radioactive fallout from atmospheric weapons tests. Included in this review are developments in fallout collection, aerosol measurements in surface air, and high-altitude sampling with aircraft and balloons. In addition, developments in radiation measurement are covered in such areas as survey and monitoring instruments, in situ gamma-ray spectrometry, and aerial measurement systems. The history of these developments and the interplay with general advances in the field of radiation and radioactivity metrology are highlighted. Emphasis is given to how modifications and improvements in the instruments and methods over time led to their adaptation to present-day radiation and radioactivity measurements.
NASA Astrophysics Data System (ADS)
Hano, Mitsuo; Hotta, Masashi
A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An algorithm for eliminating the constant term using a null space of the coefficient matrix is also described. For three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with the numerical human model using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We focus, therefore, on general purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time in comparison with that using a conventional CPU, even a native GPU implementation of the three-dimensional FDTD method, while the GPU/CPU speed ratio varies with the calculation domain and thread block size.
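The leapfrog update that GPU implementations parallelize (one thread per grid cell in CUDA) can be shown in one dimension. This is a generic free-space Yee-scheme sketch in NumPy, not the authors' three-dimensional CUDA code; the grid size, source position, and Courant number are illustrative assumptions.

```python
import numpy as np

def fdtd_1d(nsteps, nx=200, src=100):
    """1-D FDTD (Yee scheme) for normalized Ez/Hy fields in free space."""
    ez = np.zeros(nx)
    hy = np.zeros(nx)
    c = 0.5  # Courant number (must be <= 1 for stability in 1-D)
    for n in range(nsteps):
        # Leapfrog: H is updated from the spatial difference of E, then E from H.
        # The vectorized slice updates mirror the per-cell parallelism a GPU exploits.
        hy[:-1] += c * (ez[1:] - ez[:-1])
        ez[1:] += c * (hy[1:] - hy[:-1])
        ez[src] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez

ez = fdtd_1d(100)
```

In the three-dimensional case the same two-step update runs over six field components, which is why run time scales strongly with domain size and why thread-block shape affects GPU throughput, as the abstract notes.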
GPU-based parallel method of temperature field analysis in a floor heater with a controller
NASA Astrophysics Data System (ADS)
Forenc, Jaroslaw
2016-06-01
A parallel method enabling acceleration of the numerical analysis of the transient temperature field in an air floor heating system is presented in this paper. An initial-boundary value problem for the heater regulated by an on/off controller is formulated. The analogue model is discretized using the implicit finite difference method, and the resulting system of equations is solved with the BiCGStab method. A computer program implementing simultaneous computations on the CPU and GPU (GPGPU technology) was developed, using the CUDA environment and linear algebra libraries (CUBLAS and CUSPARSE). The computation time was reduced by a factor of eight compared with a program executed on the CPU only. Results are presented in the form of time profiles and temperature field distributions. The influence of the heat transfer coefficient model on the simulated system operation was examined, and the physical interpretation of the obtained results is presented. The computations were verified by comparison with solutions obtained using a commercial program, COMSOL Multiphysics.
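The core numerical step, solving the sparse system produced by an implicit finite-difference discretization with BiCGStab, can be sketched on a CPU with SciPy. This is a minimal 1-D heat-conduction analogue, not the paper's 2-D floor-heater model; the grid size and the dimensionless parameter `r` are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import bicgstab

# Implicit (backward Euler) 1-D heat conduction: (I - r*L) T_new = T_old,
# where L is the discrete Laplacian and r = alpha*dt/dx^2.
nx, r = 50, 0.4
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx))
A = identity(nx) - r * L

T_old = np.zeros(nx)
T_old[nx // 2] = 100.0        # hot spot as the initial condition

# BiCGStab iterates on the sparse system; info == 0 signals convergence
T_new, info = bicgstab(A, T_old)
```

On a GPU, the same matrix-vector products inside BiCGStab are what CUBLAS/CUSPARSE accelerate, which is where the reported eight-fold speedup comes from.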
Refraction-based X-ray Computed Tomography for Biomedical Purpose Using Dark Field Imaging Method
NASA Astrophysics Data System (ADS)
Sunaguchi, Naoki; Yuasa, Tetsuya; Huo, Qingkai; Ichihara, Shu; Ando, Masami
We have proposed a tomographic x-ray imaging system using DFI (dark field imaging) optics, along with a data-processing method to extract refraction information from the measured intensities and a reconstruction algorithm to recover the refractive-index field from projections generated from the extracted refraction information. The DFI imaging system consists of a tandem optical system of Bragg- and Laue-case crystals, a positioning device system for the sample, and two CCD (charge-coupled device) cameras. We developed a software code to simulate the data-acquisition, data-processing, and reconstruction methods in order to investigate their feasibility. Finally, to demonstrate its efficacy, we imaged a sample with DCIS (ductal carcinoma in situ) excised from a breast cancer patient using a system constructed at the vertical wiggler beamline BL-14C in KEK-PF. The CT images depicted a variety of fine histological structures, such as milk ducts, duct walls, secretions, and adipose and fibrous tissue, and correlate well with histological sections.
NASA Astrophysics Data System (ADS)
Sun, Xu; Yang, Lina; Gao, Lianru; Zhang, Bing; Li, Shanshan; Li, Jun
2015-01-01
Center-oriented hyperspectral image clustering methods have been widely applied to hyperspectral remote sensing image processing; however, the drawbacks are obvious, including over-simple computational models and underutilized spatial information. In recent years, some studies have tried to improve this situation. We introduce the artificial bee colony (ABC) and Markov random field (MRF) algorithms and propose an ABC-MRF-cluster model to solve the problems mentioned above. In this model, a typical ABC algorithm framework is adopted in which cluster centers and the results of the iterated conditional modes (ICM) algorithm are treated as the feasible solutions and objective function, respectively, and MRF is modified to handle the clustering problem. Finally, four datasets and two indices are used to show that the ABC-cluster and ABC-MRF-cluster methods obtain better image accuracy than conventional methods. Specifically, the ABC-cluster method is superior in terms of spectral discrimination power, whereas the ABC-MRF-cluster method provides better results on the adjusted Rand index. In experiments on simulated images with different signal-to-noise ratios, both methods showed good stability.
ERIC Educational Resources Information Center
Siry, Christina A.
2011-01-01
This article details a field-based methods course for preservice teachers that has been designed to integrate shared teaching experiences in elementary classrooms with ongoing critical dialogues that highlight the complexities of teaching. I describe the structure of the course and explore the use of coteaching and cogenerative…
Meliza, C. Daniel; Keen, Sara C.; Rubenstein, Dustin R.
2013-01-01
Quantitative measures of acoustic similarity can reveal patterns of shared vocal behavior in social species. Many methods for computing similarity have been developed, but their performance has not been extensively characterized in noisy environments or for vocalizations with complex frequency modulations. This paper describes methods of bioacoustic comparison based on dynamic time warping (DTW) of the fundamental frequency or spectrogram. Fundamental frequency is estimated using a Bayesian particle filter adaptation of harmonic template matching. The methods were tested on field recordings of flight calls from superb starlings, Lamprotornis superbus, for how well they could separate distinct categories of call elements (motifs). The fundamental-frequency-based method performed best, but the spectrogram-based method was less sensitive to noise. Both DTW methods provided better separation of categories than spectrographic cross correlation, likely due to substantial variability in the duration of superb starling flight call motifs. PMID:23927136
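DTW's advantage over cross correlation for duration-variable calls comes from its ability to stretch one sequence against the other. A minimal sketch of the classic dynamic-programming recurrence, applied to 1-D fundamental-frequency tracks with an absolute-difference cost (the paper's actual cost function and normalization are not reproduced here):

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping distance between two 1-D sequences
    (e.g. fundamental-frequency tracks), absolute-difference local cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # allow insertion, deletion, or match: warps absorb duration changes
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = [1.0, 2.0, 3.0, 2.0]
b = [1.0, 1.0, 2.0, 3.0, 2.0]       # same contour, stretched by one frame
d_same = dtw_distance(a, b)
d_diff = dtw_distance(a, [5.0] * 5)  # a flat, dissimilar track
```

Because `b` is just a time-stretched copy of `a`, the warped distance is zero, whereas rigid cross correlation would penalize the length mismatch.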
On-orbit assembly of a team of flexible spacecraft using potential field based method
NASA Astrophysics Data System (ADS)
Chen, Ti; Wen, Hao; Hu, Haiyan; Jin, Dongping
2017-04-01
In this paper, a novel control strategy is developed based on artificial potential fields for the on-orbit autonomous assembly of four flexible spacecraft without inter-member collision. Each flexible spacecraft is simplified as a hub-beam model with truncated beam modes in the floating frame of reference, and the communication graph among the four spacecraft is assumed to be a ring topology. The four spacecraft are driven first to a pre-assembly configuration and then to the assembly configuration. To design the artificial potential field for the first step, each spacecraft is outlined by an ellipse and a virtual leader circle is introduced. The potential field depends mainly on the attitude error between the flexible spacecraft and its neighbor, the radial Euclidean distance between the ellipse and the circle, and the classical Euclidean distance between the centers of the ellipse and the circle. It can be demonstrated that the potential function has no local minima and that its global minimum is zero; the zero set is not a single state but a set of states, all corresponding to the desired configurations. Lyapunov analysis guarantees that the four spacecraft asymptotically converge to the target configuration. Moreover, an additional potential field is included to avoid inter-member collision. In the control design of the second step, only a small modification of the first-step controller is required. Finally, the successful application of the proposed control law to the assembly mission is verified by two case studies.
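The attract-toward-goal, repel-from-obstacle structure of an artificial potential field can be shown with a planar point agent. This is a textbook quadratic-well plus short-range-repulsion sketch, not the authors' spacecraft controller with attitude terms; all gains, ranges, and positions are illustrative assumptions.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, d0=2.0, dt=0.02):
    """One gradient-descent step on U = 0.5*k_att*|pos-goal|^2
    + sum of 0.5*k_rep*(1/d - 1/d0)^2 over obstacles within range d0."""
    grad = k_att * (pos - goal)                    # attractive quadratic well
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                          # repulsion only inside d0
            grad += -k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return pos - dt * grad

# Drive the agent from the origin to (5, 5) past an obstacle near the path
pos = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([2.0, 3.0])]
for _ in range(5000):
    pos = apf_step(pos, goal, obstacles)
```

The abstract's claim that the spacecraft potential has no spurious local minima is a design property of their specific field; the generic field above can, in unlucky geometries, trap a gradient follower, which is exactly the pitfall such designs must rule out.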
Szeliski, Richard; Zabih, Ramin; Scharstein, Daniel; Veksler, Olga; Kolmogorov, Vladimir; Agarwala, Aseem; Tappen, Marshall; Rother, Carsten
2008-06-01
Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/.
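The ICM baseline mentioned above is simple enough to sketch in full: greedily assign each pixel the label that minimizes its local energy, holding neighbors fixed. The sketch below uses a binary Ising-style denoising energy as an assumed example; the benchmark paper's actual energies (stereo, stitching, etc.) are multi-label and more elaborate.

```python
import numpy as np

def icm_denoise(obs, beta=2.0, n_iter=5):
    """Iterated conditional modes for binary (+/-1) image denoising.

    Energy: E(x) = -sum_i x_i * obs_i - beta * sum_{i~j} x_i * x_j
    Each sweep sets every pixel to the label with lower local energy.
    """
    x = obs.copy()
    H, W = x.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                s = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        s += x[ni, nj]
                # data term (agree with observation) + smoothness term (agree with neighbors)
                x[i, j] = 1 if obs[i, j] + beta * s >= 0 else -1
    return x

# A +1 square on a -1 background, with two flipped pixels as noise
clean = -np.ones((8, 8))
clean[2:6, 2:6] = 1
noisy = clean.copy()
noisy[0, 0] = 1
noisy[4, 4] = -1
restored = icm_denoise(noisy)
```

ICM converges to a local minimum of the energy in a few sweeps; the paper's point is that graph cuts and message passing reach far lower energies on realistic multi-label problems.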
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Yin, Hui; Yu, Dejie
2017-02-01
The response of the acoustic field, especially the mid-frequency response, is very sensitive to uncertainties arising from manufacturing/construction tolerances, aggressive environmental factors, and unpredictable excitations. To quantify these uncertainties effectively with limited information, two nondeterministic models (the interval model and the hybrid probability-interval model) are introduced, and two corresponding nondeterministic numerical methods are developed for the low- and mid-frequency response analysis of the acoustic field under these models. The first is the interval perturbation wave-based method (IPWBM), proposed to predict the maximal values of the low- and mid-frequency responses of the acoustic field under the interval model. The second is the hybrid perturbation wave-based method (HPWBM), proposed to predict the maximal values of the expectations and standard variances of the low- and mid-frequency responses under the hybrid probability-interval model. The effectiveness and efficiency of the proposed methods are investigated by a numerical example.
NASA Astrophysics Data System (ADS)
Chen, Haijun; Houkes, Zweitze
1998-09-01
In this paper, a segmentation method for agricultural fields in aerial image sequences based on the Circular Symmetric Auto-Regressive (CSAR) model is presented. The image sequences are assumed to be acquired by a video camera (RGB-CCD system) from an aeroplane moving linearly over the scene. The objects considered in the scenes are agricultural fields, whose classes are determined by the type of crop, e.g. potatoes, sugar beet, or wheat. Recognizing and classifying these fields from aerial image sequences requires reliable segmentation, for which texture features are used. The segmentation is based on the CSAR model for texture analysis: by comparing the estimated CSAR model parameters from different areas of an image, the characteristics and class of a texture may be determined. The paper describes the segmentation method and its evaluation through experiments. Based on the segmentation results, classification of the surface texture of vegetation from aerial image sequences is realized.
Image restoration method based on Hilbert transform for full-field optical coherence tomography
NASA Astrophysics Data System (ADS)
Na, Jihoon; Choi, Woo June; Choi, Eun Seo; Ryu, Seon Young; Lee, Byeong Ha
2008-01-01
A full-field optical coherence tomography (FF-OCT) system utilizing a simple but novel image restoration method suitable for high-speed operation is demonstrated. An en-face image is retrieved from only two phase-shifted interference fringe images using the Hilbert transform. With a thermal light source, a high-resolution FF-OCT system having axial and transverse resolutions of 1 and 2.2 μm, respectively, was implemented. The feasibility of the proposed scheme is confirmed by the obtained en-face images of biological samples such as a piece of garlic and a gold beetle. The proposed method is robust to error in the amount of the phase shift and does not leave residual fringes. The use of just two interference images and the strong immunity to phase errors provide great advantages in imaging speed and design flexibility for a high-speed, high-resolution FF-OCT system.
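The two-image envelope retrieval can be illustrated on a synthetic fringe profile: subtracting two π-phase-shifted images cancels the incoherent background, and the Hilbert transform supplies the quadrature needed to take the magnitude. This is a 1-D sketch of the principle with assumed carrier frequency and envelope, not the paper's 2-D processing chain.

```python
import numpy as np
from scipy.signal import hilbert

# Simulate two pi-phase-shifted fringe profiles along one image line
x = np.linspace(0, 1, 512)
envelope = np.exp(-((x - 0.5) / 0.1) ** 2)           # coherence envelope (the OCT signal)
background = 2.0                                      # incoherent (DC) intensity
i1 = background + envelope * np.cos(2 * np.pi * 40 * x)
i2 = background + envelope * np.cos(2 * np.pi * 40 * x + np.pi)

d = i1 - i2                       # subtraction removes the background, leaves 2*env*cos
analytic = hilbert(d)             # analytic signal: quadrature via Hilbert transform
recovered = np.abs(analytic) / 2  # fringe envelope, i.e. the en-face amplitude
```

Because only the magnitude of the analytic signal is used, a small error in the actual phase shift between the two frames mainly rescales `d` rather than corrupting the envelope, which is consistent with the robustness the abstract claims.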
NASA Astrophysics Data System (ADS)
Ahrens, T.; Matson, P.; Lobell, D.
2006-12-01
Sensitivity analyses (SA) of biogeochemical and agricultural models are often used to identify the importance of input variables for variance in model outputs, such as crop yield or nitrate leaching. Identification of these factors can aid in prioritizing efforts in research or decision support. Many types of sensitivity analyses are available, ranging from simple One-At-A-Time (OAT) screening exercises to more complex local and global variance-based methods (see Saltelli et al. 2004). The purpose of this study was to determine the influence of the type of SA on factor prioritization in the Yaqui Valley, Mexico, using the Water and Nitrogen Management Model (WNMM; Chen et al. 2005). WNMM, a coupled plant-growth and biogeochemistry simulation model, was calibrated to reproduce crop growth, soil moisture, and gaseous N emission dynamics in experimental plots of irrigated wheat in the Yaqui Valley, Mexico, from 1994-1997. Three types of SA were carried out using 16 input variables, including parameters related to weather, soil properties, and crop management. The methods used were local OAT, Monte Carlo (MC), and a global variance-based method (orthogonal input; OI). Results of the SA were based on typical interpretations used for each test: maximum absolute ratio of variation (MAROV) for OAT analyses; first- and second-order regressions for MC analyses; and a total effects index for OI. The three most important factors identified by the MC and OI methods were generally in agreement, although the order of importance was not always consistent and there was little agreement for variables of less importance. OAT over-estimated the importance of two factors (planting date and pH) for many outputs. The biggest differences between the OAT results and those from MC and OI were likely due to the inability of OAT methods to account for non-linearity (e.g., pH and ammonia volatilization), interactions among variables (e.g., pH and timing of fertilization) and an over-reliance on baseline
NASA Astrophysics Data System (ADS)
Matsumoto, S.
2016-09-01
The stress field is a key factor controlling earthquake occurrence and crustal evolution. In this study, we propose an approach for determining the stress field in a region from seismic moment tensors, based on the classical constitutive equation of plasticity theory. Seismic activity relaxes crustal stress and creates plastic strain in the medium through faulting, which suggests that the medium can be treated as a plastic body. Under the constitutive relation of plasticity theory, the increment of the plastic strain tensor is proportional to the deviatoric stress tensor. Simple mathematical manipulation then yields an inversion method for estimating the stress field in a region. The method is tested on shallow earthquakes occurring on Kyushu Island, Japan.
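The proportionality between plastic-strain increments and deviatoric stress suggests a toy estimator: under that assumption, the normalized sum of the deviatoric parts of the observed moment tensors recovers the deviatoric stress direction (up to a positive scale). This is a deliberately simplified sketch, not the authors' full inversion; the synthetic tensors below are assumed test data.

```python
import numpy as np

def deviatoric(m):
    """Remove the isotropic part of a 3x3 tensor."""
    return m - np.trace(m) / 3.0 * np.eye(3)

def stress_direction(moment_tensors):
    """Toy estimator: if each moment tensor is c_k * (deviatoric stress)
    with c_k > 0, the normalized sum recovers the stress direction."""
    s = sum(deviatoric(m) for m in moment_tensors)
    return s / np.linalg.norm(s)

# Synthetic check: noisy positive multiples of a known deviatoric stress
rng = np.random.default_rng(0)
true_dev = deviatoric(np.diag([3.0, 1.0, -1.0]))
tensors = [c * true_dev + 0.01 * rng.standard_normal((3, 3))
           for c in (0.5, 1.0, 2.0)]
est = stress_direction(tensors)
cosine = float(np.sum(est * true_dev / np.linalg.norm(true_dev)))
```

A real inversion must additionally handle spatial variation of the stress field and the unknown scalar multipliers, which is what the regional formulation in the study addresses.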
Latouche, Gwendal; Debord, Christian; Raynal, Marc; Milhade, Charlotte; Cerovic, Zoran G
2015-10-01
Early detection of fungal pathogen presence in the field would help to better time or avoid some of the fungicide treatments used to prevent crop production losses. We recently introduced a new phytoalexin-based method for a non-invasive detection of crop diseases using their fluorescence. The causal agent of grapevine downy mildew, Plasmopara viticola, induces the synthesis of stilbenoid phytoalexins by the host, Vitis vinifera, early upon infection. These stilbenoids emit violet-blue fluorescence under UV light. A hand-held solid-state UV-LED-based field fluorimeter, named Multiplex 330, was used to measure stilbenoid phytoalexins in a vineyard. It allowed us to non-destructively detect and monitor the naturally occurring downy mildew infections on leaves in the field.
NASA Astrophysics Data System (ADS)
Kim, Sungho; Ahn, Jae-Hyuk; Park, Tae Jung; Lee, Sang Yup; Choi, Yang-Kyu
2009-06-01
A unique direct electrical detection method for biomolecules, charge pumping, was demonstrated using a nanogap-embedded field-effect transistor (FET). With the aid of the charge pumping method, sensitivity can fall below the 1 ng/ml concentration regime for antigen-antibody binding in an avian influenza case. Biomolecules immobilized in the nanogap are mainly responsible for the acute changes in the interface trap density due to modulation of the trap energy level, a finding supported by numerical simulation. The proposed detection method using a nanogap-embedded FET represents a foundation for a chip-based biosensor capable of high sensitivity.
Liu, H H; McCullough, E C; Mackie, T R
1998-01-01
A convolution/superposition based method was developed to calculate dose distributions and wedge factors in photon treatment fields generated by dynamic wedges. This algorithm used a dual source photon beam model that accounted for both primary photons from the target and secondary photons scattered from the machine head. The segmented treatment tables (STT) were used to calculate realistic photon fluence distributions in the wedged fields. The inclusion of the extra-focal photons resulted in more accurate dose calculation in high dose gradient regions, particularly in the beam penumbra. The wedge factors calculated using the convolution method were also compared to the measured data and showed good agreement within 0.5%. The wedge factor varied significantly with the field width along the moving jaw direction, but not along the static jaw or the depth direction. This variation was found to be determined by the ending position of the moving jaw, or the STT of the dynamic wedge. In conclusion, the convolution method proposed in this work can be used to accurately compute dose for a dynamic or an intensity modulated treatment based on the fluence modulation in the treatment field.
Variational methods for field theories
Ben-Menahem, S.
1986-09-01
Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1)-dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening and confinement through tunneling between degenerate vacua, including better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for the ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path-integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.
Li, Ming; Li, Jingyun; He, Zihuai; Lu, Qing; Witte, John S; Macleod, Stewart L; Hobbs, Charlotte A; Cleves, Mario A
2016-05-01
Family-based association studies are commonly used in genetic research because they can be robust to population stratification (PS). Recent advances in high-throughput genotyping technologies have produced a massive amount of genomic data in family-based studies. However, current family-based association tests are mainly focused on evaluating individual variants one at a time. In this article, we introduce a family-based generalized genetic random field (FB-GGRF) method to test the joint association between a set of autosomal SNPs (i.e., single-nucleotide polymorphisms) and disease phenotypes. The proposed method is a natural extension of a recently developed GGRF method for population-based case-control studies. It models offspring genotypes conditional on parental genotypes, and, thus, is robust to PS. Through simulations, we showed that under various disease scenarios the FB-GGRF has improved power over a commonly used family-based sequence kernel association test (FB-SKAT). Further, similar to GGRF, the proposed FB-GGRF method is asymptotically well-behaved, and does not require empirical adjustment of the type I error rates. We illustrate the proposed method using a study of congenital heart defects with family trios from the National Birth Defects Prevention Study (NBDPS).
NASA Astrophysics Data System (ADS)
Davis, L. E.; Eves, R. L.
2006-12-01
Transitioning students from learner to investigator is best accomplished by incorporating research into the undergraduate classroom as a collaborative enterprise between students and faculty. Our course is a two-part design with a focus on a modern carbonate ecosystem and depositional environment on San Salvador Island, Bahamas in order to integrate geology, biology, and environmental science. Content background is provided in the classroom, which focuses on the geology of the Bahamian platform; the biological aspects of Caribbean island marine ecosystems; and the impact of human development on tropical islands. Application of course content is focused during an integrated field study of a specific carbonate environment, e.g. carbonate production in a tidal lagoon. The ultimate goals of the course are (1) identifying and acquiring both disciplinary and interdisciplinary research methodologies, (2) defining a specific investigative problem, (3) conducting `real' [meaningful] research, and (4) communicating research findings in the form of presentations at national meetings and publication in research journals. Assessment is based on specific criteria to be achieved during the research project. Criteria are determined through collaboration between faculty mentors and student researchers. Students are evaluated throughout the research phase with particular attention paid to an understanding of appropriate planning and background research, originality of thought; use of project-specific and appropriate data collection and sampling techniques; and analysis and interpretation of data. Students are expected to submit a final written report containing appropriate conclusions from data analysis and recommendations for further studies. Each student is also required to complete a self-assessment. The interdisciplinary experiences gained by faculty and students have already been incorporated into other courses and have led to publication of results. The course stimulates both
Method of depositing multi-layer carbon-based coatings for field emission
Sullivan, John P.; Friedmann, Thomas A.
1999-01-01
A novel field emitter device for cold cathode field emission applications, comprising a multi-layer resistive carbon film. The multi-layered film of the present invention is comprised of at least two layers of a resistive carbon material, preferably amorphous-tetrahedrally coordinated carbon, such that the resistivities of adjacent layers differ. For electron emission from the surface, the preferred structure comprises a top layer having a lower resistivity than the bottom layer. For edge emitting structures, the preferred structure of the film comprises a plurality of carbon layers, wherein adjacent layers have different resistivities. Through selection of deposition conditions, including the energy of the depositing carbon species and the presence or absence of certain elements such as H, N, inert gases or boron, carbon layers having desired resistivities can be produced. Field emitters made according to the present invention display improved electron emission characteristics in comparison to conventional field emitter materials.
Method of depositing multi-layer carbon-based coatings for field emission
Sullivan, J.P.; Friedmann, T.A.
1999-08-10
A novel field emitter device is disclosed for cold cathode field emission applications, comprising a multi-layer resistive carbon film. The multi-layered film of the present invention is comprised of at least two layers of a resistive carbon material, preferably amorphous-tetrahedrally coordinated carbon, such that the resistivities of adjacent layers differ. For electron emission from the surface, the preferred structure comprises a top layer having a lower resistivity than the bottom layer. For edge emitting structures, the preferred structure of the film comprises a plurality of carbon layers, wherein adjacent layers have different resistivities. Through selection of deposition conditions, including the energy of the depositing carbon species and the presence or absence of certain elements such as H, N, inert gases or boron, carbon layers having desired resistivities can be produced. Field emitters made according to the present invention display improved electron emission characteristics in comparison to conventional field emitter materials. 8 figs.
Novel Texture-based Visualization Methods for High-dimensional Multi-field Data Sets
2013-07-06
gradient image is multiplied with the first texture image, resulting in the second texture image appearing as “bumps”. The concept of Gestalt … originates from the fine arts and expresses the notion that the whole contains more information than the parts. Perception of Gestalt is influenced by … We exploit Gestalt perception by using different masks, which subdivide the domain of two fields, and show for each section only one field. By
Shen, Hujun; Czaplewski, Cezary; Liwo, Adam; Scheraga, Harold A
2008-08-01
The kinetic-trapping problem in simulating protein folding can be overcome by using a Replica Exchange Method (REM). However, in implementing REM in molecular dynamics simulations, synchronization between processors on parallel computers is required, and communication between processors limits its ability to sample conformational space in a complex system efficiently. To minimize communication between processors during the simulation, a Serial Replica Exchange Method (SREM) has been proposed recently by Hagan et al. (J. Phys. Chem. B2007, 111, 1416-1423). Here, we report the implementation of this new SREM algorithm with our physics-based united-residue (UNRES) force field. The method has been tested on the protein 1E0L with a temperature-independent UNRES force field and on terminally blocked deca-alanine (Ala(10)) and 1GAB with the recently introduced temperature-dependent UNRES force field. With the temperature-independent force field, SREM reproduces the results of REM but is more efficient in terms of wall-clock time and scales better on distributed-memory machines. However, exact application of SREM to the temperature-dependent UNRES algorithm requires the determination of a four-dimensional distribution of UNRES energy components instead of a one-dimensional energy distribution for each temperature, which is prohibitively expensive. Hence, we assumed that the temperature dependence of the force field can be ignored for neighboring temperatures. This version of SREM worked for Ala(10) which is a simple system but failed to reproduce the thermodynamic results as well as regular REM on the more complex 1GAB protein. Hence, SREM can be applied to the temperature-independent but not to the temperature-dependent UNRES force field.
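The replica-exchange machinery underlying both REM and SREM rests on a Metropolis swap criterion between configurations held at neighboring temperatures. A minimal sketch of that criterion (illustrative only; the UNRES implementation, and especially the temperature-dependent variant with its multi-dimensional energy distributions, involves far more bookkeeping):

```python
import math
import random

def exchange_accepted(beta_i, beta_j, e_i, e_j, rng=random.random):
    """Metropolis criterion for swapping configurations between two
    replicas at inverse temperatures beta_i and beta_j with potential
    energies e_i and e_j: accept with probability min(1, exp(d_beta * d_E))."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0 or rng() < math.exp(delta)
```

In SREM the same criterion is evaluated against stored energy distributions rather than a live partner replica, which is what removes the need for processor synchronization.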
Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...
NASA Astrophysics Data System (ADS)
Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme
2016-04-01
We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm in which the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of the Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).
Energy-based method for near-real time modeling of sound field in complex urban environments.
Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A
2012-12-01
Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no real description of the phenomena at issue has been given. Here the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustics (GA) solutions, showing good overall agreement. The role played by diffraction near building edges close to the source is discussed, and suggestions are made on the possibility of accurately predicting the sound field in complex urban environments in near-real-time simulations.
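A diffusion equation of the kind described, dw/dt = D ∇²w − σw for an acoustic energy density w with diffusion coefficient D and absorption σ, can be advanced with a simple explicit finite-difference scheme. A minimal sketch on a periodic grid (an assumption for brevity; the paper's scheme, its boundary handling at building walls, and its source terms are more involved):

```python
import numpy as np

def diffusion_step(w, dt, dx, D, absorption):
    """One explicit finite-difference time step of the acoustic diffusion
    equation  dw/dt = D * laplacian(w) - absorption * w  on a uniform 3-D
    grid with spacing dx. np.roll imposes periodic boundaries (a
    simplification for this sketch)."""
    lap = (-6.0 * w
           + np.roll(w, 1, 0) + np.roll(w, -1, 0)
           + np.roll(w, 1, 1) + np.roll(w, -1, 1)
           + np.roll(w, 1, 2) + np.roll(w, -1, 2)) / dx**2
    return w + dt * (D * lap - absorption * w)
```

A uniform field with zero absorption is a fixed point of this update, which is a quick sanity check on the stencil.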
Methods in field chronobiology.
Dominoni, Davide M; Åkesson, Susanne; Klaassen, Raymond; Spoelstra, Kamiel; Bulla, Martin
2017-11-19
Chronobiological research has seen a continuous development of novel approaches and techniques to measure rhythmicity at different levels of biological organization, from locomotor activity (e.g. migratory restlessness) to physiology (e.g. temperature and hormone rhythms, and relatively recently also genes, proteins and metabolites). However, the methodological advancements in this field have been mostly, and sometimes exclusively, used in indoor laboratory settings. In parallel, there has been an unprecedented and rapid improvement in our ability to track animals and their behaviour in the wild. However, while the spatial analysis of tracking data is widespread, its temporal aspect is largely unexplored. Here, we review the tools that are available or have potential to record rhythms in wild animals, with emphasis on currently overlooked approaches and monitoring systems. We then demonstrate, in three question-driven case studies, how the integration of traditional and newer approaches can help answer novel chronobiological questions in free-living animals. Finally, we highlight unresolved issues in field chronobiology that may benefit from technological development in the future. As most of the studies in the field are descriptive, the future challenge lies in applying the diverse technologies to experimental set-ups in the wild. This article is part of the themed issue 'Wild clocks: integrating chronobiology and ecology to understand timekeeping in free-living animals'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Tang, Min; Wang, Yihong
2017-02-01
In magnetized plasma, the magnetic field confines the particles around the field lines. The anisotropy intensity in the viscosity and heat conduction may reach the order of 10¹². When the boundary conditions are periodic or Neumann, the strong diffusion leads to an ill-posed limiting problem. To remove the ill-conditionedness in the highly anisotropic diffusion equations, we introduce a simple but very efficient asymptotic-preserving reformulation in this paper. The key idea is that, instead of discretizing the Neumann boundary conditions locally, we replace one of the Neumann boundary conditions by the integration of the original problem along the field line; the singular 1/ɛ terms can then be replaced by O(1) terms after the integration, which yields a well-posed problem. Only small modifications to the original code are required, and no change of coordinates or mesh adaptation is needed. Uniform convergence with respect to the anisotropy strength 1/ɛ is observed numerically, and the condition number does not scale with the anisotropy.
Variational Methods for Field Theories.
NASA Astrophysics Data System (ADS)
Ben-Menahem, Shahar
The thesis has four parts, dealing with four field theory models: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1) dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening, and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. In the second part, we use free field theory as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Without the adiabatic method, a much larger number of states per block must be kept to get comparable results. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. This transition is a rudimentary version of the actual transition known to occur in the XY model, and is
Wee, S-H; Nam, H-M; Moon, O-K; Yoon, H; Park, J-Y; More, S J
2008-12-01
Relevant to foot and mouth disease (FMD), most published epidemiological studies have been conducted using quantitative methods and substantial regional or national datasets. Veterinary epidemiology also plays a critical role during outbreak investigations, both to assist with herd-level decision-making and to contribute relevant information to ongoing national or regional control strategies. Despite the importance of this role, however, little information has been published on the use of applied (field-based) epidemiological methods during disease outbreaks. In this study, we outline an investigative template for FMD, and a case study of its use during the 2002 FMD outbreak in Korea. Suitable for use during field-based epidemiological investigations of individual farms within a broader regional/national response, the template considers three steps: confirming infection, estimating the date of introduction and determining the method of introduction. A case study was conducted on IP13 (the 13th infected premises), the only IP during the 2002 FMD outbreak in Korea that was geographically isolated from all other known cases. The authorities first became aware of FMD on IP13 on 2 June; however, infection may have been present from 12 May. Infection was confirmed on 3 June 2002. FMD was probably spread to IP13 by a contract worker who had participated during 2-4 May in the culling operations on IP1. Other routes of spread were ruled out during the investigation. The contract worker lived in the locality of IP13 and worked on a part-time basis at a pork-processing plant adjacent to this farm. The contractor became heavily contaminated during the cull, but did not comply fully with cleaning and disinfection requirements once the cull had been completed. The investigative template contributed structure and focus to the field-based investigation. Results from this case study demonstrate the need for strict management of personnel in disease control and
Correlation Based Geomagnetic Field Modeling
NASA Astrophysics Data System (ADS)
Holschneider, M.; Mauerberger, S.; Lesur, V.; Baerenzung, J.
2015-12-01
We present a new method for determining geomagnetic field models. It is based on the construction of an a priori correlation structure derived from our knowledge about characteristic length scales and sources of the geomagnetic field. The magnetic field measurements are then seen as correlated random variables too, and the inversion process amounts to computing the a posteriori correlation structure using Bayes' theorem. We show how this technique allows the statistical separation of the various field contributions and the assessment of their uncertainties.
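For jointly Gaussian prior and noise, the a posteriori correlation structure follows in closed form from Bayes' theorem. A generic sketch of that update (the geomagnetic application builds the prior covariance from physically motivated length scales and source separations; here it is simply an input matrix):

```python
import numpy as np

def posterior(K_prior, H, y, noise_var):
    """Bayes update for a Gaussian field model: given prior covariance
    K_prior over field coefficients, linear observation operator H, data
    vector y (zero prior mean assumed), and i.i.d. noise variance, return
    the posterior mean and posterior covariance."""
    S = H @ K_prior @ H.T + noise_var * np.eye(len(y))  # data covariance
    G = K_prior @ H.T @ np.linalg.inv(S)                # gain matrix
    mean = G @ y
    cov = K_prior - G @ H @ K_prior
    return mean, cov
```

The diagonal of the posterior covariance is what supplies the per-contribution uncertainty assessment mentioned in the abstract.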
Field-based evaluation of a male-specific (F+) RNA coliphage concentration method
Fecal contamination of water poses a significant risk to public health due to the potential presence of pathogens, including enteric viruses. Thus, sensitive, reliable and easy to use methods for the detection of microorganisms are needed to evaluate water quality. In this stud...
NASA Astrophysics Data System (ADS)
Schnetger, Bernhard; Dellwig, Olaf
2012-02-01
Experiments with water samples from the redoxclines of the Black Sea and the Baltic Sea identified a fraction of dissolved Mn which is completely oxidised to solid MnOx within less than 48 h under laboratory conditions. Disproportionation of this dissolved reactive Mn (dMnreact) into Mn (II) and Mn (IV) did not occur. Our data suggest that bacteria using oxygen are responsible for the fast oxidation of dMnreact. The operational definition of dMnreact is a Mn phase that passes a 0.45 μm filter, but can be separated from remaining dissolved Mn (II) by filtration 48 h after exposure to atmospheric oxygen. The application of this method to water samples from the redoxcline of the Black Sea reveals dMnreact profiles comparable to published Mn (III) profiles analysed by polarography, thus identifying Mn (III) as the dominant constituent of dMnreact. As the degree of autocatalytic oxidation of dissolved Mn (II) by readily produced MnOx and of microbial Mn (II) oxidation within the applied oxidation period is unknown, dMnreact is at least a semi-quantitative measure of dissolved Mn (III). Furthermore, the present method helps to assess the full potential for oxidation of dissolved Mn within aquatic ecosystems. This method has the advantage that sample preparation can easily be done on site, followed by analysis of dissolved Mn by conventional methods.
Liu, Cui; Wang, Yang; Zhao, Dongxia; Gong, Lidong; Yang, Zhongzhi
2014-02-01
The integrity of the genetic information is constantly threatened by oxidizing agents, and oxidized guanines have been linked to different types of cancer. Theoretical approaches supplement the assorted experimental techniques and bring new insight and opportunities to investigate the underlying microscopic mechanisms. Unfortunately, there is no specific force field for DNA systems including oxidized guanines. Taking high-level ab initio calculations as a benchmark, we developed the ABEEMσπ fluctuating charge force field, which uses multiple fluctuating charges per atom, and applied it to study the energies, structures and mutations of base pairs containing oxidized guanines. The geometries were obtained in reference to other studies or using B3LYP/6-31+G* level optimization, which proved the most rational and timesaving among the 24 quantum mechanical methods selected and tested in this work. The energies were determined at the MP2/aug-cc-pVDZ level with BSSE corrections. Results show that the constructed potential function can accurately simulate the changes in hydrogen bonds and in the buckle angle formed by the two base planes induced by oxidized guanine, and it provides reliable information on hydrogen bonding, stacking interactions and the mutation processes. The performance of the ABEEMσπ polarizable force field in predicting bond lengths, bond angles, dipole moments, etc. is generally better than that of common force fields, and its accuracy is close to that of the MP2 method. This shows that the ABEEMσπ model is a reliable choice for further research on the dynamic behavior of DNA fragments including oxidized guanine.
Zhang, Yu-Cun; Wei, Bin; Fu, Xian-Bin
2014-02-01
A temperature field detection method based on the long-wavelength infrared spectrum is proposed for hot forging. This method combines primary spectrum pyrometry with a three-stage FP-cavity LCTF. By optimizing the solutions of three groups of nonlinear equations in the mathematical model of temperature detection, the errors are reduced, making the measurement results more objective and accurate. The three-stage FP-cavity LCTF system was designed on the principle of crystal birefringence; it enables rapid selection of any wavelength within a certain range, making the response of the temperature measuring system rapid and accurate. As a result, without knowing the emissivity of the hot forging, the method can acquire exact information on the temperature field and effectively suppress the background light radiation around the hot forging and the ambient light that affect the temperature detection accuracy. MATLAB results showed that the infrared spectroscopy through the three-stage FP-cavity LCTF meets the design requirements, and experiments verified the feasibility of the temperature measurement method. Compared with a traditional single-band thermal infrared imager, the accuracy of the measuring result was improved.
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of the code constructed by this method has no cycle of length 4, which ensures that the obtained code has a good distance property. Simulation results show that at a bit error rate (BER) of 10⁻⁶, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is respectively 0.2 dB and 0.4 dB higher than those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
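A common way to obtain a 4-cycle-free QC-LDPC parity-check matrix from finite-field structure is to pick circulant shifts i·j mod p for a prime p: any length-4 cycle would require (i1−i2)(j1−j2) ≡ 0 (mod p), which is impossible for distinct block indices below a prime p. A textbook-style sketch of that expansion (illustrative only; the paper's multiplicative-group construction and code parameters differ):

```python
import numpy as np

def circulant(shift, p):
    """p x p circulant permutation matrix: the identity with its columns
    cyclically shifted by `shift`."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_ldpc_base(rows, cols, p):
    """Build a (rows*p) x (cols*p) QC-LDPC parity-check matrix by expanding
    the exponent matrix E[i][j] = i*j mod p into circulants. For prime p and
    rows, cols <= p the Tanner graph contains no length-4 cycle."""
    return np.block([[circulant((i * j) % p, p) for j in range(cols)]
                     for i in range(rows)])
```

Each column of the result has weight `rows` and each row has weight `cols`, which is easy to verify directly.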
Field by field hybrid upwind splitting methods
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1993-01-01
A new and general approach to upwind splitting is presented. The design principle combines the robustness of flux vector splitting schemes in the capture of nonlinear waves and the accuracy of some flux difference splitting schemes in the resolution of linear waves. The new schemes are derived following a general hybridization technique performed directly at the basic level of the field by field decomposition involved in FDS methods. The scheme does not use a spatial switch to be tuned up according to the local smoothness of the approximate solution.
Tsuji, Tadasuke; Kitagawa, Shinya; Ohtani, Hajime
2009-06-01
Voltage-induced impedance variation of a minicolumn (i.d. 0.53 mm, length 2 mm) packed with a cation exchanger was investigated to develop a sensing method. An aqueous sample solution containing the metal cations was continuously supplied to the minicolumn during the impedance measurement, with simultaneous application of both an alternating current voltage (amplitude, 1.0 V; frequency, 200 kHz to 6 Hz) and a direct current (DC) offset voltage (0.1 to 1.0 V). On a complex plane plot, the profile of the column impedance consisted of a semicircle (200 kHz to 100 Hz) and a straight line (<100 Hz), whose slope varied with the magnitude of the applied DC offset voltage (V(DC)). The slope-V(DC) relation depended on the kind of metal cation and its concentration; in particular, the slope-V(DC) relations of monovalent cations (Na(+) and K(+)) and divalent ones (Mg(2+) and Ca(2+)) were significantly different. With the change in the concentration of a minor divalent salt of MgCl(2) or CaCl(2) (60 to 140 microM) in the sample solution containing 10 mM NaCl, the slopes showed almost linear relationships between those with application of V(DC) = 0.1 V and 1.0 V for both magnesium and calcium additions. In the case of addition of both MgCl(2) and CaCl(2) to the solution, the data points in the slope(0.1 V)-slope(1.0 V) plot were located between the two proportional lines for single additions of magnesium and calcium, reflecting both the mixing ratio and the net concentrations of the divalent cations. Thus, simultaneous determination of Mg(2+) and Ca(2+) can be attained on the basis of the slope(0.1 V)-slope(1.0 V) relation obtained by impedance measurements of the minicolumn. The contents of both magnesium and calcium cations in bottled mineral waters determined simultaneously using the proposed method were almost equivalent to those obtained by atomic absorption spectrometric measurement.
Tavakkoli, Ehsan; Fatehi, Foad; Rengasamy, Pichu; McDonald, Glenn K
2012-06-01
Success in breeding crops for yield and other quantitative traits depends on the use of methods to evaluate genotypes accurately under field conditions. Although many screening criteria have been suggested to distinguish between genotypes for their salt tolerance under controlled environmental conditions, there is a need to test these criteria in the field. In this study, the salt tolerance, ion concentrations, and accumulation of compatible solutes of genotypes of barley with a range of putative salt tolerance were investigated using three growing conditions (hydroponics, soil in pots, and natural saline field). Initially, 60 genotypes of barley were screened for their salt tolerance and uptake of Na(+), Cl(-), and K(+) at 150 mM NaCl and, based on this, a subset of 15 genotypes was selected for testing in pots and in the field. Expression of salt tolerance in saline solution culture was not a reliable indicator of the differences in salt tolerance between barley plants that were evident in saline soil-based comparisons. Significant correlations were observed in the rankings of genotypes on the basis of their grain yield production at a moderately saline field site and their relative shoot growth in pots at EC(e) 7.2 [Spearman's rank correlation (rs)=0.79] and EC(e) 15.3 (rs=0.82) and the crucial parameter of leaf Na(+) (rs=0.72) and Cl(-) (rs=0.82) concentrations at EC(e) 7.2 dS m(-1). This work has established screening procedures that correlated well with grain yield at sites with moderate levels of soil salinity. This study also showed that both salt exclusion and osmotic tolerance are involved in salt tolerance and that the relative importance of these traits may differ with the severity of the salt stress. In soil, ion exclusion tended to be more important at low to moderate levels of stress but osmotic stress became more important at higher stress levels. Salt exclusion coupled with a synthesis of organic solutes were shown to be important components of
Tavakkoli, Ehsan; Fatehi, Foad; Rengasamy, Pichu; McDonald, Glenn K.
2012-01-01
Success in breeding crops for yield and other quantitative traits depends on the use of methods to evaluate genotypes accurately under field conditions. Although many screening criteria have been suggested to distinguish between genotypes for their salt tolerance under controlled environmental conditions, there is a need to test these criteria in the field. In this study, the salt tolerance, ion concentrations, and accumulation of compatible solutes of genotypes of barley with a range of putative salt tolerance were investigated using three growing conditions (hydroponics, soil in pots, and natural saline field). Initially, 60 genotypes of barley were screened for their salt tolerance and uptake of Na+, Cl–, and K+ at 150 mM NaCl and, based on this, a subset of 15 genotypes was selected for testing in pots and in the field. Expression of salt tolerance in saline solution culture was not a reliable indicator of the differences in salt tolerance between barley plants that were evident in saline soil-based comparisons. Significant correlations were observed in the rankings of genotypes on the basis of their grain yield production at a moderately saline field site and their relative shoot growth in pots at ECe 7.2 [Spearman’s rank correlation (rs)=0.79] and ECe 15.3 (rs=0.82) and the crucial parameter of leaf Na+ (rs=0.72) and Cl– (rs=0.82) concentrations at ECe 7.2 dS m−1. This work has established screening procedures that correlated well with grain yield at sites with moderate levels of soil salinity. This study also showed that both salt exclusion and osmotic tolerance are involved in salt tolerance and that the relative importance of these traits may differ with the severity of the salt stress. In soil, ion exclusion tended to be more important at low to moderate levels of stress but osmotic stress became more important at higher stress levels. Salt exclusion coupled with a synthesis of organic solutes were shown to be important components of salt
An inversion method of 2D NMR relaxation spectra in low fields based on LSQR and L-curve
NASA Astrophysics Data System (ADS)
Su, Guanqun; Zhou, Xiaolong; Wang, Lijia; Wang, Yuanjun; Nie, Shengdong
2016-04-01
The low-field nuclear magnetic resonance (NMR) inversion method based on the traditional least-squares QR decomposition (LSQR) algorithm often produces oscillating spectra. Moreover, the solution obtained by the traditional LSQR algorithm often fails to reflect the true distribution of all the components. Hence, a good solution requires some manual intervention, especially for low signal-to-noise ratio (SNR) data. An approach based on the LSQR algorithm and the L-curve is presented to solve this problem. The L-curve method is applied to obtain an improved initial optimal solution by balancing the residual norm against the complexity of the solution, instead of manually adjusting the smoothing parameters. First, the traditional LSQR algorithm is applied to 2D NMR T1-T2 data to obtain the resultant spectra and corresponding residuals, whose norms are used to plot the L-curve. Second, the corner of the L-curve is located and taken as the initial optimal solution for the non-negative constraint. Finally, a 2D map is corrected and calculated iteratively from the initial optimal solution. The proposed approach is tested on both simulated and measured data. The results show that this algorithm is robust, accurate and promising for NMR analysis.
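The LSQR-plus-L-curve workflow described above can be sketched in Python. The exponential kernel, the scale ranges, and the maximum-curvature criterion for locating the corner are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Hypothetical 1D relaxation problem: data d = K f, with K_ij = exp(-t_i / T2_j).
rng = np.random.default_rng(0)
t = np.linspace(0.01, 1.0, 200)          # echo times (s)
T2 = np.logspace(-3, 0, 50)              # candidate relaxation times (s)
K = np.exp(-np.outer(t, 1.0 / T2))
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.2) ** 2)
d = K @ f_true + 0.01 * rng.standard_normal(t.size)

# Sweep the damping parameter; record residual and solution norms for the L-curve.
lambdas = np.logspace(-4, 1, 30)
res_norms, sol_norms = [], []
for lam in lambdas:
    f = lsqr(K, d, damp=lam)[0]
    res_norms.append(np.linalg.norm(K @ f - d))
    sol_norms.append(np.linalg.norm(f))

# Take the L-curve corner as the point of maximum curvature in log-log space.
x, y = np.log(res_norms), np.log(sol_norms)
dx, dy = np.gradient(x), np.gradient(y)
ddx, ddy = np.gradient(dx), np.gradient(dy)
curvature = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
lam_opt = lambdas[np.argmax(curvature)]
# Crude stand-in for the non-negative constraint: project onto f >= 0.
f_opt = np.clip(lsqr(K, d, damp=lam_opt)[0], 0.0, None)
```

The projection in the last line is a simplification; the paper iterates from the corner solution under a proper non-negativity constraint.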
NASA Astrophysics Data System (ADS)
Yin, Gang; Zhang, Yingtang; Mi, Songlin; Fan, Hongbo; Li, Zhining
2016-11-01
To obtain accurate magnetic gradient tensor data, a fast and robust calculation method based on a regularized method in the frequency domain was proposed. Using potential field theory, the transform formula in the frequency domain was deduced in order to calculate the magnetic gradient tensor from pre-existing total magnetic anomaly data. By analyzing the filter characteristics of the vertical vector transform operator (VVTO) and the gradient tensor transform operator (GTTO), we showed that the conventional transform process is unstable, because it amplifies the high-frequency part of the data, where measurement noise is located. Because this instability leads to a low signal-to-noise ratio (SNR) in the calculated result, a regularized method is introduced in this paper. By selecting the optimum regularization parameters for the different transform phases using the C-norm approach, high-frequency noise is restrained and the SNR is improved effectively. Numerical analysis demonstrates that most values and characteristics of the data calculated by the proposed method compare favorably with reference magnetic gradient tensor data. In addition, magnetic gradient tensor components calculated from a real aeromagnetic survey provided better resolution of the magnetic sources than the original profile.
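The instability described above can be illustrated with a small sketch: a spectral derivative stands in for the tensor transform operators (it likewise amplifies high frequencies where noise lives), and a Tikhonov-style damping factor stands in for the regularization. The damping form and parameter are assumptions, not the paper's C-norm scheme:

```python
import numpy as np

# Apply a high-frequency-amplifying operator H(k) with and without damping.
def regularized_apply(data, H, alpha):
    F = np.fft.fft(data)
    damp = 1.0 / (1.0 + alpha * np.abs(H) ** 4)  # assumed Tikhonov-style filter
    return np.real(np.fft.ifft(F * H * damp))

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
rng = np.random.default_rng(1)
noisy = np.sin(3 * x) + 0.05 * rng.standard_normal(n)

H = 1j * k                                       # derivative: amplifies noise
raw = np.real(np.fft.ifft(np.fft.fft(noisy) * H))  # unregularized transform
reg = regularized_apply(noisy, H, alpha=1e-3)      # damped transform
exact = 3 * np.cos(3 * x)
```

With the damping in place, the error against the exact derivative drops sharply, mirroring the SNR improvement the authors report.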
NASA Astrophysics Data System (ADS)
Lee, J. H.; Lee, Sang Young
2006-10-01
In obtaining the intrinsic surface resistance (RS) from the effective surface resistance (RS,eff) measured at microwave frequencies by using the dielectric resonator method, the impedance transformation method reported by Klein et al. [N. Klein, H. Chaloupka, G. Muller, S. Orbach, H. Piel, B. Roas, L. Schultz, U. Klein, M. Peiniger, J. Appl. Phys. 67 (1990) 6940] has been very useful. Here we compared the RS of YBa2Cu3O7-δ (YBCO) films on dielectric substrates obtained by a rigorous field analysis based on the TE-mode matching method with those by the impedance transformation method. The two methods produced almost the same RS,eff vs. RS relation in most practical cases of the substrate thickness being less than 1 mm and sapphire and rutile used as the materials for the dielectric rod. However, when the resonant frequency of the dielectric resonator became close to that of the resonant structure formed by the substrates and the metallic surroundings, the RS,eff vs. RS relations appeared strikingly different between the two methods. Effects of the TE011-mode cutoff frequency inside the substrate region, which could not be considered in the impedance transformation method, on the relation between the RS,eff and RS of superconductor films are also investigated. We confirmed our arguments by demonstrating a case where existence of evanescent modes should be considered for obtaining the RS of YBCO films from the RS,eff.
NASA Astrophysics Data System (ADS)
Chen, Yuan-Ho
2017-05-01
In this work, we propose a counting-weighted calibration method for field-programmable gate array (FPGA)-based time-to-digital converters (TDCs) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGAs, we developed a counting-weighted delay line (CWD) that counts the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) based on code density counts. The linearity of the proposed CWD-TDC far exceeds that of a TDC with a traditional tapped delay line (TDL) architecture, without the need for non-linearity calibration. When implemented in a Xilinx Virtex-5 FPGA device, the proposed CWD-TDC achieved a time resolution of 60 ps with integral non-linearity (INL) and DNL of [-0.54, 0.24] and [-0.66, 0.65] least-significant bits (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
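The code-density test behind the DNL/INL figures quoted above can be sketched briefly: with uniformly distributed input events, every TDC bin should collect the same count, deviations give the DNL, and the running sum of DNL gives the INL (both in LSB). The histograms below are made-up examples, not measured data:

```python
import numpy as np

def dnl_inl(hist):
    """Estimate DNL and INL (in LSB) from a code-density histogram."""
    hist = np.asarray(hist, dtype=float)
    ideal = hist.sum() / hist.size       # expected count per bin if linear
    dnl = hist / ideal - 1.0             # per-bin deviation from ideal width
    inl = np.cumsum(dnl)                 # accumulated deviation
    return dnl, inl

# A bin that is 10% too narrow and one 10% too wide show up directly in DNL.
dnl, inl = dnl_inl([900, 1100, 1000, 1000])
```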
NASA Astrophysics Data System (ADS)
Liu, Xiaoming; Mei, Ming; Liu, Jun; Hu, Wei
2015-12-01
Clustered microcalcifications (MCs) in mammograms are an important early sign of breast cancer in women, and their accurate detection is important in computer-aided detection (CADe). In this paper, we integrated the possibilistic fuzzy c-means (PFCM) clustering algorithm and a weighted support vector machine (WSVM) for the detection of MC clusters in full-field digital mammograms (FFDM). For each image, suspicious MC regions are extracted with region growing and active contour segmentation. Then geometry and texture features are extracted for each suspicious MC, a mutual information-based supervised criterion is used to select important features, and PFCM is applied to cluster the samples into two clusters. Weights of the samples are calculated based on the possibility and typicality values from the PFCM and the ground truth labels, and a weighted nonlinear SVM is trained. During the test process, when an unknown image is presented, suspicious regions are located with the segmentation step, the selected features are extracted, and the suspicious MC regions are classified as containing MC or not by the trained weighted nonlinear SVM. Finally, the MC regions are analyzed with spatial information to locate MC clusters. The proposed method was evaluated using a database of 410 clinical mammograms and compared with a standard unweighted support vector machine (SVM) classifier. The detection performance was evaluated using receiver operating characteristic (ROC) curves and free-response receiver operating characteristic (FROC) curves. The proposed method obtained an area under the ROC curve of 0.8676 for MC detection, while the standard SVM obtained an area of 0.8268. For MC cluster detection, the proposed method obtained a high sensitivity of 92% with a false-positive rate of 2.3 clusters/image, compared with 4.7 false-positive clusters/image for the standard SVM at the same sensitivity.
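The weighting step described above — turning PFCM possibility and typicality values plus ground-truth labels into per-sample training weights — can be sketched as follows. The simple averaging formula is our illustrative assumption, not the paper's exact weighting:

```python
import numpy as np

def sample_weights(possibility, typicality, labels):
    """Down-weight ambiguous samples before training a weighted SVM.

    `possibility` and `typicality` are the PFCM outputs for the positive (MC)
    cluster; `labels` are the ground-truth labels (1 = MC, 0 = not MC).
    """
    conf = 0.5 * (possibility + typicality)   # confidence of being a true MC
    # Weight = confidence for positive-labelled samples, 1 - confidence otherwise,
    # so samples that agree with their label dominate training.
    return np.where(labels == 1, conf, 1.0 - conf)

# A clear positive and a clear negative both receive high weight (0.85 each).
w = sample_weights(np.array([0.9, 0.2]), np.array([0.8, 0.1]), np.array([1, 0]))
```

Such weights would then be passed to the SVM training routine as per-sample penalties.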
Teaching Geographic Field Methods Using Paleoecology
ERIC Educational Resources Information Center
Walsh, Megan K.
2014-01-01
Field-based undergraduate geography courses provide numerous pedagogical benefits including an opportunity for students to acquire employable skills in an applied context. This article presents one unique approach to teaching geographic field methods using paleoecological research. The goals of this course are to teach students key geographic…
NASA Astrophysics Data System (ADS)
Li, J. H.; Zhu, Z. Q.; Liu, S. C.; Zeng, S. H.
2011-12-01
Based on the principle of anomalous-field algorithms, Helmholtz equations for the electromagnetic field were deduced. We took the electric-field Helmholtz equation as the governing equation and derived the corresponding system of vector finite element equations using the Galerkin method. To solve the governing equation with the vector finite element method, we divided the computational domain into homogeneous brick elements and used Whitney-type vector basis functions. After obtaining the anomalous electric field in the Laplace domain with the vector finite element method, we used the Gaver-Stehfest algorithm to transform it to the time domain, and obtained the impulse response of the anomalous magnetic field through Faraday's law of electromagnetic induction. The accuracy of the vector finite element method was tested by comparison with 1D analytic solutions for quasi-H-type geoelectric models. For a low-resistivity brick geoelectric model, the electromotive force computed with the vector finite element method coincides with the integral equation and finite-difference time-domain solutions.
A DNA-based method for studying root responses to drought in field-grown wheat genotypes
Huang, Chun Y.; Kuchel, Haydn; Edwards, James; Hall, Sharla; Parent, Boris; Eckermann, Paul; Herdina; Hartley, Diana M.; Langridge, Peter; McKay, Alan C.
2013-01-01
Root systems are critical for water and nutrient acquisition by crops. Current methods measuring root biomass and length are slow and labour-intensive for studying root responses to environmental stresses in the field. Here, we report the development of a method that measures changes in the root DNA concentration in soil and detects root responses to drought in controlled environment and field trials. To allow comparison of soil DNA concentrations from different wheat genotypes, we also developed a procedure for correcting genotypic differences in the copy number of the target DNA sequence. The new method eliminates the need for separation of roots from soil and permits large-scale phenotyping of root responses to drought or other environmental and disease stresses in the field. PMID:24217242
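The copy-number correction described above — making soil DNA concentrations comparable across genotypes that carry different numbers of the target sequence — reduces to a simple normalization. All values below are hypothetical:

```python
def corrected_dna(measured_pg_per_g_soil, relative_copy_number):
    """Normalize a soil root-DNA signal by the genotype's relative copy number
    of the target sequence, so genotypes can be compared directly."""
    return measured_pg_per_g_soil / relative_copy_number

# Genotype B carries twice as many target copies per genome as genotype A,
# so its raw qPCR signal must be halved before comparing root abundance.
genotype_a = corrected_dna(120.0, 1.0)
genotype_b = corrected_dna(240.0, 2.0)
```

After correction the two genotypes show the same root DNA concentration, as they should if their root biomass is equal.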
NASA Astrophysics Data System (ADS)
Dai, Qianwei; Lin, Fangpeng; Wang, Xiaoping; Feng, Deshan; Bayless, Richard C.
2017-05-01
An integrated geophysical investigation was performed at S dam located at Dadu basin in China to assess the condition of the dam curtain. The key methodology of the integrated technique used was flow-field fitting method, which allowed identification of the hydraulic connections between the dam foundation and surface water sources (upstream and downstream), and location of the anomalous leakage outlets in the dam foundation. Limitations of the flow-field fitting method were complemented with resistivity logging to identify the internal erosion which had not yet developed into seepage pathways. The results of the flow-field fitting method and resistivity logging were consistent when compared with data provided by seismic tomography, borehole television, water injection test, and rock quality designation.
Field method for sulfide determination
Wilson, B L; Schwarser, R R; Chukwuenye, C O
1982-01-01
A simple and rapid method was developed for determining the total sulfide concentration in water in the field. Direct measurements were made using a silver/sulfide ion-selective electrode in conjunction with a double-junction reference electrode connected to an Orion Model 407A/F Specific Ion Meter. The method also made use of a sulfide anti-oxidant buffer (SAOB II) consisting of ascorbic acid, sodium hydroxide, and disodium EDTA. Preweighed sodium sulfide crystals were sealed in airtight plastic volumetric flasks and used in the standardization process in the field. Field standards were prepared by adding SAOB II to the flask containing the sulfide crystals and diluting to the mark with deionized, deaerated water. Serial dilutions of these standards were used to prepare standards of lower concentrations. Concentrations as low as 6 ppb were measured in lake samples with a reproducibility better than ±10%.
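The standardization step above amounts to fitting a Nernstian calibration line: the electrode potential varies linearly with log10(concentration), so the field standards define a line that is inverted to read unknown samples. The ideal divalent-ion slope of -29.6 mV/decade and the intercept below are illustrative values, not figures from the paper:

```python
import numpy as np

def fit_calibration(conc_ppb, potential_mV):
    """Fit E = slope * log10(C) + intercept to the field standards."""
    slope, intercept = np.polyfit(np.log10(conc_ppb), potential_mV, 1)
    return slope, intercept

def read_concentration(potential_mV, slope, intercept):
    """Invert the calibration line to get concentration from a reading."""
    return 10.0 ** ((potential_mV - intercept) / slope)

standards = np.array([10.0, 100.0, 1000.0])        # serial dilutions, ppb
emf = -29.6 * np.log10(standards) - 850.0          # synthetic readings (mV)
slope, intercept = fit_calibration(standards, emf)
```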
Variational methods for field theories
NASA Astrophysics Data System (ADS)
Ben-Menahem, Shahar
1986-09-01
The thesis is presented in four parts dealing with field theory models: Periodic Quantum Electrodynamics (PQED) in (2+1) dimensions, free scalar field theory in (1+1) dimensions, the quantum XY model in (1+1) dimensions, and the (1+1)-dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening and confinement through tunneling between degenerate vacua. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. In the fourth part, the transfer matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1+1) dimensions.
ERIC Educational Resources Information Center
Varma, Tina; Volkmann, Mark; Hanuscin, Deborah
2009-01-01
Literature indicates that the "National Science Education Standards" ("NSES") teaching standards and inquiry-based teaching strategies for science are not uniformly incorporated into the elementary science methods (eSEM) courses across the U.S. and that field experiences might not provide appropriate models of the inquiry-based science pedagogy…
An efficient direction field-based method for the detection of fasteners on high-speed railways.
Yang, Jinfeng; Tao, Wei; Liu, Manhua; Zhang, Yongjie; Zhang, Haibo; Zhao, Hui
2011-01-01
Railway inspection is an important task in railway maintenance to ensure safety. The fastener is a major railway component that fastens the tracks to the ground. This article presents an efficient method to detect fasteners on the basis of image processing and pattern recognition techniques, which can be used to detect the absence of fasteners on the corresponding track at high speed (up to 400 km/h). The direction field is extracted as the feature descriptor for recognition. In addition, an appropriate weight coefficient matrix is presented for robust and rapid matching in a complex environment. Experimental results show that the proposed method is computationally efficient and robust for the detection of fasteners in a complex environment. Using a practical device fixed on the track inspection train, sufficient fastener samples were obtained, and the feasibility of the method was verified at 400 km/h.
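A minimal sketch of direction-field matching: gradient orientations serve as the feature, and a weight matrix emphasizes reliable pixels when comparing a candidate region against a fastener template. The uniform weights and the squared angular distance below are assumptions for illustration, not the paper's exact weight coefficient matrix:

```python
import numpy as np

def direction_field(img):
    """Per-pixel gradient orientation, the feature descriptor."""
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx)

def weighted_distance(df1, df2, weights):
    """Weighted mean squared angular difference between two direction fields."""
    diff = np.angle(np.exp(1j * (df1 - df2)))    # wrap differences to (-pi, pi]
    return np.sum(weights * diff ** 2) / np.sum(weights)

template = np.add.outer(np.arange(8), np.arange(8)) % 5.0   # toy texture
w = np.ones_like(template)                                  # uniform weights
df = direction_field(template)
```

A candidate whose direction field matches the template gives distance 0; a missing fastener would produce a large distance and trigger an alarm.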
NASA Astrophysics Data System (ADS)
Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei
2009-10-01
In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.
NASA Astrophysics Data System (ADS)
Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi
Large-area seafloor imaging will bring significant benefits to various fields such as academic research, resource survey, marine development, security, and search-and-rescue. The authors have proposed a navigation method of an autonomous underwater vehicle for seafloor imaging, and verified its performance through mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at the Tagiri vent field, Kagoshima Bay, Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method first removes lens distortion and non-uniformities of color and lighting from each image, and then performs ortho-rectification based on the camera pose and seafloor shape estimated from navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an extension of the image-based method (Pizarro et al., 2003). Using the two types of information realizes an image alignment that is consistent both globally and locally, and makes the method applicable to data sets with few visual keys. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in September 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, covering unique features of the field such as bacteria mats and tubeworm colonies.
Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; Bettencourt, Matthew
2016-08-09
Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.
NASA Astrophysics Data System (ADS)
Welland, M. J.; Tenuta, E.; Prudil, A. A.
2017-06-01
This article describes a phase-field model for an isothermal multicomponent, multiphase system which avoids implicit interfacial energy contributions by starting from a grand potential formulation. A method is developed for incorporating arbitrary forms of the equilibrium thermodynamic potentials in all phases to determine an explicit relationship between chemical potentials and species concentrations. The model incorporates variable densities between adjacent phases, defect migration, and dependence of internal pressure on object dimensions ranging from the macro- to nanoscale. A demonstrative simulation of an overpressurized nanoscopic intragranular bubble in nuclear fuel migrating to a grain boundary under kinetically limited vacancy diffusion is shown.
Hohenstein, Edward G.; Luehr, Nathan; Ufimtsev, Ivan S.; Martínez, Todd J.
2015-06-14
Despite its importance, state-of-the-art algorithms for performing complete active space self-consistent field (CASSCF) computations have lagged far behind those for single reference methods. We develop an algorithm for the CASSCF orbital optimization that uses sparsity in the atomic orbital (AO) basis set to increase the applicability of CASSCF. Our implementation of this algorithm uses graphical processing units (GPUs) and has allowed us to perform CASSCF computations on molecular systems containing more than one thousand atoms. Additionally, we have implemented analytic gradients of the CASSCF energy; the gradients also benefit from GPU acceleration as well as sparsity in the AO basis.
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
La Delfa, Nicholas J; Potvin, Jim R
2017-03-01
This paper describes the development of a novel method (termed the 'Arm Force Field' or 'AFF') to predict manual arm strength (MAS) for a wide range of body orientations, hand locations and any force direction. The method uses an artificial neural network (ANN) to predict the effects of hand location and force direction on MAS, and includes a method to estimate the contribution of the arm's weight to the predicted strength. The AFF method predicted the MAS values very well (r² = 0.97, RMSD = 5.2 N, n = 456) and maintained good generalizability with external test data (r² = 0.842, RMSD = 13.1 N, n = 80). The AFF can be readily integrated within any DHM ergonomics software, and appears to be a more robust, reliable and valid method of estimating the strength capabilities of the arm than current approaches.
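The AFF decomposition, as we read it, is: predicted strength = an ANN term capturing hand-location and force-direction effects, plus the gravitational contribution of the arm's weight resolved along the force direction. The tiny fixed-weight network and every number below are hypothetical stand-ins for the trained model, purely to show the structure:

```python
import numpy as np

def ann_strength(features, W1, b1, W2, b2):
    """One-hidden-layer network mapping posture features to a strength term (N)."""
    h = np.tanh(features @ W1 + b1)
    return float(h @ W2 + b2)

def arm_weight_contribution(arm_weight_N, force_dir):
    """Gravity helps when pushing downward, hinders when pushing upward."""
    gravity_dir = np.array([0.0, 0.0, -1.0])
    return arm_weight_N * float(force_dir @ gravity_dir)

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)    # hypothetical trained weights
W2, b2 = rng.normal(size=6), 50.0
feats = np.array([0.3, -0.1, 0.5, 0.2])          # encoded hand location/direction
force_dir = np.array([0.0, 0.0, -1.0])           # pressing straight down
mas = ann_strength(feats, W1, b1, W2, b2) + arm_weight_contribution(40.0, force_dir)
```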
Modified methods of stellar magnetic field measurements
NASA Astrophysics Data System (ADS)
Kholtygin, A. F.
2014-12-01
The standard methods of magnetic field measurement, based on an analysis of the relation between the Stokes V parameter and the first derivative of the total line profile intensity, were modified by applying a linear integral operator L̂ to both sides of this relation. As the operator L̂, the wavelet-transform operator with DOG wavelets is used. The key advantage of the proposed method is the effective suppression of the noise contribution to the line profile and the Stokes V parameter. The efficiency of the method was studied using model line profiles with various noise contributions. To test the proposed method, spectropolarimetric observations of the A0 star α2 CVn, the Of?p star HD 148937, and the A0 supergiant HD 92207 were used. The longitudinal magnetic field strengths calculated by our method are in good agreement with those determined by other methods.
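The noise-suppression idea can be sketched directly: convolving a noisy model line profile with a DOG (derivative-of-Gaussian) wavelet damps uncorrelated pixel noise while preserving the smooth profile signal. The wavelet scale and profile widths below are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

def dog_wavelet(n, s):
    """First-derivative-of-Gaussian wavelet of scale s, sampled on n points."""
    t = np.arange(n) - n // 2
    return -t / s ** 2 * np.exp(-0.5 * (t / s) ** 2)

def wavelet_filter(signal, s):
    """Convolve a signal with the DOG wavelet (one scale of the transform)."""
    return np.convolve(signal, dog_wavelet(signal.size, s), mode="same")

rng = np.random.default_rng(3)
x = np.linspace(-5, 5, 201)
profile = np.exp(-x ** 2)                        # smooth model line profile
noisy = profile + 0.1 * rng.standard_normal(x.size)
clean_t = wavelet_filter(profile, s=10.0)        # transform of the true profile
noisy_t = wavelet_filter(noisy, s=10.0)          # transform of the noisy profile
```

After the transform, the noisy and noise-free versions agree far more closely than the raw signals do, which is what lets the field-strength fit tolerate noisy spectra.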
ERIC Educational Resources Information Center
Yerrick, Randy K.; Hoving, Timothy J.
2003-01-01
Investigates preservice science teachers' beliefs about science teaching and learning through reflections on teaching lower track science students. Studies preservice teachers enrolled in a field-based secondary science methods course working with rural Black children. Discusses implications for teacher education and research. (Author/KHR)
Xu, Peng; Haves, Philip
2002-05-16
An automated fault detection and diagnosis tool for HVAC systems is being developed, based on an integrated, life-cycle approach to commissioning and performance monitoring. The tool uses component-level HVAC equipment models implemented in the SPARK equation-based simulation environment. The models are configured using design information and component manufacturers' data and then fine-tuned to match the actual performance of the equipment using data measured during functional tests of the sort used in commissioning. This paper presents the results of field tests of mixing box and VAV fan system models in an experimental facility and a commercial office building. The models were found to be capable of representing the performance of correctly operating mixing box and VAV fan systems and of detecting several types of incorrect operation.
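Model-based fault detection of this kind reduces to comparing measured equipment output with the calibrated model's prediction and flagging a fault when the residual stays large. The threshold, persistence window, and temperature values below are illustrative, not the SPARK tool's actual logic:

```python
def detect_fault(measured, predicted, threshold, persistence):
    """Flag a fault when |measured - predicted| exceeds `threshold`
    for `persistence` consecutive samples."""
    run = 0
    for m, p in zip(measured, predicted):
        run = run + 1 if abs(m - p) > threshold else 0
        if run >= persistence:
            return True
    return False

# Hypothetical supply-air temperatures (deg C): a stuck damper makes the
# measurement drift away from the model prediction.
pred = [16.0] * 8
ok   = [16.1, 15.9, 16.2, 16.0, 15.8, 16.1, 16.0, 15.9]
bad  = [16.1, 15.9, 17.5, 17.8, 18.2, 18.5, 18.9, 19.1]
```

The persistence requirement avoids alarming on single-sample sensor glitches.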
NASA Astrophysics Data System (ADS)
Yang, Kang; Guo, Zhaoli
2016-04-01
In this paper, a lattice Boltzmann equation (LBE) model is proposed for binary fluids based on a quasi-incompressible phase-field model [J. Shen et al., Commun. Comput. Phys. 13, 1045 (2013), 10.4208/cicp.300711.160212a]. Compared with other incompressible LBE models based on the incompressible phase-field theory, the quasi-incompressible model conserves mass locally. A series of numerical simulations are performed to validate the proposed model, and comparisons with an incompressible LBE model [H. Liang et al., Phys. Rev. E 89, 053320 (2014), 10.1103/PhysRevE.89.053320] are also carried out. It is shown that the proposed model can track the interface accurately. For the stationary droplet and rising bubble problems, the quasi-incompressible LBE gives nearly the same predictions as the incompressible model, but the compressible effect in the present model plays a significant role in the phase separation problem. Therefore, in general cases the present mass-conserving model should be adopted.
NASA Astrophysics Data System (ADS)
Pakniat, R.; Tavassoly, M. K.; Zandi, M. H.
2017-01-01
In this paper, we outline a scheme for entanglement swapping based on the concept of cavity QED. The atom-field entangled state in our study is produced in the nonlinear regime. In this scheme, the exploited cavities are prepared in a hybrid entangled state (a combination of coherent and number states) and the swapping process is investigated using two different methods, i.e., the detecting and Bell-state measurement methods, through cavity QED. Then, we make use of the atom-field entangled state obtained by the detecting method to show how atom-atom entanglement, as well as teleportation of atomic and field states, can be achieved with complete fidelity.
ERIC Educational Resources Information Center
Richards, Janet C.
2006-01-01
As part of course requirements twenty-eight preservice teachers in a field-based content reading course created a series of self-portraits that illustrated their concerns and perceptions about teaching content reading. They accompanied their drawings with dialogue. Analysis of the portraits indicates that arts-based techniques have the potential…
Babu, Binoy; Washburn, Brian K; Ertek, Tülin Sarigül; Miller, Steven H; Riddle, Charles B; Knox, Gary W; Ochoa-Corona, Francisco M; Olson, Jennifer; Katırcıoğlu, Yakup Zekai; Paret, Mathews L
2017-09-01
Rose rosette disease, caused by Rose rosette virus (RRV; genus Emaravirus), is a major threat to the rose industry in the U.S. The only strategy currently available for disease management is early detection and eradication of infected plants, thereby limiting the virus's potential spread. Current RT-PCR based diagnostic methods for RRV are time consuming and are inconsistent in detecting the virus from symptomatic plants. Real-time RT-qPCR assay is highly sensitive for detection of RRV, but it is expensive and requires well-equipped laboratories. Neither RT-PCR nor RT-qPCR can be used for field-based RRV testing. Hence, a novel probe-based isothermal reverse transcription-recombinase polymerase amplification (RT-exoRPA) assay, using a primer/probe designed from the nucleocapsid gene of RRV, has been developed. The assay is highly specific and did not give a positive reaction to other viruses infecting roses, either within or outside the genus. Dilution assays using the in vitro transcript showed that the primer/probe set is highly sensitive, with a detection limit of 1 fg/μl. In addition, a rapid technique for the extraction of viral RNA (<5 min) from RRV-infected tissue sources has been standardized, using PBS-T buffer (pH 7.4), which facilitates virus adsorption onto the PCR tubes at 4°C for 2 min, followed by denaturation to release the RNA. RT-exoRPA analysis of infected plants using the primer/probe indicated that the virus could be detected from leaves, stems, petals, pollen, primary roots and secondary roots. In addition, the assay was efficiently used in the diagnosis of RRV from different rose varieties collected from different states in the U.S. The entire process, including the extraction, can be completed in 25 min with less sophisticated equipment. The developed assay can be used with high efficiency in large-scale field testing for rapid detection of RRV in commercial nurseries and landscapes.
Field evaluation of a VOST sampling method
Jackson, M.D.; Johnson, L.D.; Fuerst, R.G.; McGaughey, J.F.; Bursey, J.T.; Merrill, R.G.
1994-12-31
The VOST (SW-846 Method 0030) specifies the use of Tenax® and a particular petroleum-based charcoal (SKC Lot 104, or its equivalent) that is no longer commercially available. In field evaluation studies of VOST methodology, a replacement petroleum-based charcoal has been used: candidate replacement sorbents for charcoal were studied, and Anasorb® 747, a carbon-based sorbent, was selected for field testing. The sampling train was modified to use only Anasorb® in the back tube and Tenax® in the two front tubes to avoid analytical difficulties associated with the analysis of the sequential-bed back tube used in the standard VOST train. The standard (SW-846 Method 0030) and the modified VOST methods were evaluated at a chemical manufacturing facility using a quadruple probe system with quadruple trains. In this field test, known concentrations of the halogenated volatile organic compounds listed in the Clean Air Act Amendments of 1990, Title 3, were introduced into the VOST train and the modified VOST train, using the same certified gas cylinder as a source of test compounds. Statistical tests of the comparability of the methods were performed on a compound-by-compound basis. For most compounds, the VOST and modified VOST methods were found to be statistically equivalent.
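The abstract does not name the statistical test behind the compound-by-compound comparison; a paired t-statistic on matched per-run recoveries is one common choice for this kind of two-train design and is sketched below with hypothetical recovery data.

```python
# Hedged sketch: a paired t-statistic comparing recoveries of one compound
# from the standard and modified VOST trains, one pair per sampling run.
# The recovery percentages are hypothetical, not data from the field test.
from math import sqrt
from statistics import mean, stdev

def paired_t(standard, modified):
    """Paired t-statistic for matched recovery series (one pair per run)."""
    diffs = [a - b for a, b in zip(standard, modified)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical percent recoveries over four runs for a single compound:
t = paired_t([98.0, 101.5, 97.2, 99.8], [97.5, 100.9, 98.0, 99.1])
```

A |t| below the critical value for n-1 degrees of freedom would support the paper's "statistically equivalent" conclusion for that compound.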
NASA Astrophysics Data System (ADS)
Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.
2017-03-01
We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD), which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact, non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, actually show that the statistical measures obtained from PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
Tokonami, Shiho; Iida, Takuya
2017-10-02
For sustainable human life, biosensing systems for contaminants or disease-causing bacteria are crucial for food security, environmental improvement, and disease prevention. With an aim of enhancing the sensitivity and detection speed, many researchers have developed efficient detection methods for target bacteria. In this review, we discuss recent topics related to active and passive bacterial detection methods, including (1) optical approaches with unique functional nano- and micro-structures, and (2) electrical approaches involving mechanical modulation and electrochemical reactions. Particularly, we discuss the prospects in the development of label-free, rapid, and highly sensitive biosensors based on active detection principles with light-induced dynamics, in conjunction with dielectrophoresis-induced selective trapping.
Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang
2012-09-01
Pulsed TIG welding is widely used in industry due to its superior properties, and measurement of the arc temperature is important for analysis of the welding process. Based on spectral theory, the relationship between Ar particle densities and temperature was calculated, as was the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature. Arc images at the 794.8 nm line were captured by a high-speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding.
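A minimal sketch of the Abel-inversion step, assuming an "onion-peeling" discretization (the paper's exact scheme is not stated in the abstract): the arc is treated as a set of concentric rings, and the radial emission-coefficient profile eps(r) is recovered from the line-of-sight-integrated intensity profile I(y) by solving the resulting triangular system from the outermost ring inward.

```python
# Hedged sketch of Abel inversion by onion peeling for an axisymmetric arc.
# profile[j] is the measured, line-integrated intensity at lateral offset j*dr;
# eps[k] is the recovered emission coefficient of ring k.
from math import sqrt

def chord(j: int, k: int, dr: float) -> float:
    """Length of the line of sight at offset y=j*dr passing through ring k."""
    return 2.0 * (sqrt(((k + 1) * dr) ** 2 - (j * dr) ** 2)
                  - sqrt(max((k * dr) ** 2 - (j * dr) ** 2, 0.0)))

def abel_invert(profile, dr=1.0):
    """Peel rings from the outside in: each ring sees only outer contributions."""
    n = len(profile)
    eps = [0.0] * n
    for j in reversed(range(n)):
        outer = sum(eps[k] * chord(j, k, dr) for k in range(j + 1, n))
        eps[j] = (profile[j] - outer) / chord(j, j, dr)
    return eps
```

In the Fowler-Milne step the recovered eps(r) would then be mapped to temperature through the computed emission-coefficient-versus-temperature curve.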
NASA Astrophysics Data System (ADS)
Liu, Wei; Sneeuw, Nico; Jiang, Weiping
2017-04-01
The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling from the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites are in a non-repeat orbit, which precludes alias-error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at intervals that are not strictly monthly and has occasional gaps, resulting in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
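One building block of such a data-driven approach can be sketched as a least-squares fit of a single harmonic at a known alias period to an unevenly sampled series, which requires neither even sampling nor a repeat orbit. The period and the time series below are hypothetical, not GRACE values.

```python
# Minimal sketch: fit y ~ a*cos(w*t) + b*sin(w*t) at a known alias period to
# an unevenly sampled series by solving the 2x2 normal equations directly.
# Subtracting the fitted harmonic would then mitigate that alias line.
from math import sin, cos, pi

def fit_harmonic(t, y, period):
    """Least-squares amplitudes (a, b) of one harmonic on uneven samples."""
    w = 2.0 * pi / period
    c = [cos(w * ti) for ti in t]
    s = [sin(w * ti) for ti in t]
    scc = sum(ci * ci for ci in c)
    sss = sum(si * si for si in s)
    scs = sum(ci * si for ci, si in zip(c, s))
    scy = sum(ci * yi for ci, yi in zip(c, y))
    ssy = sum(si * yi for si, yi in zip(s, y))
    det = scc * sss - scs * scs
    a = (scy * sss - ssy * scs) / det
    b = (ssy * scc - scy * scs) / det
    return a, b
```

Fitting and removing one line per known tidal alias frequency is only a caricature of the paper's method, but it shows why uneven sampling is not an obstacle for a least-squares formulation.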
NASA Astrophysics Data System (ADS)
Kulchitsky, A. V.
2009-12-01
Space weather modeling and forecasting techniques are important for a variety of applications, such as satellite operations, GPS navigation, magnetosphere and ionosphere modeling, etc. The work described here is intended to help provide a better IMF data forecast near the Earth's magnetosphere by measurements at the L1 Lagrange point. Theory, as well as observations made from different satellites simultaneously, shows that the interplanetary magnetic field (IMF) may consist of current layers and wave fronts along which changes in magnetic field are small on the scale of the diameter of the ACE satellite's orbit. Knowledge of the current layers and wave fronts along which the changes of IMF are minimal would significantly improve IMF forecasts near the Earth from measurements at the L1 Lagrange point. Many methods have been developed to determine these structures using measurements at a single spacecraft, based on different fundamental properties of the solar wind and IMF. However, most solar wind parameters, such as density and velocity, cannot be measured with time resolution comparable to magnetic field measurements. For this reason, methods based on the magnetic field are most frequently used for practical calculations and forecasting. There are two known methods for IMF calculations, MVAB-0 and the upstream-downstream magnetic field cross-product method. In this work, we propose two new methods based on physical laws of the solar wind and magnetic field measurements. We demonstrate their usefulness through comparison of data from the ACE and WIND satellites over long continuous periods of time. We used model skill analysis based on RMS and correlation between the model and measurements. All of these methods depend on a series of 4-6 free parameters, depending on the method. We analyzed all free parameters across a wide range. All analysis was performed on massively parallel computers. Computations revealed that there is no set of constant parameters that allow
Vives, Alejandra; Ferreccio, Catterina; Marshall, Guillermo
2009-01-01
Unit non-response is a growing problem in sample surveys that can bias survey estimates if respondents and non-respondents differ systematically. The aim was to compare the results of two non-response adjustment methods: field substitution and weighting adjustment based on response propensity. Field substitution and response-propensity weights are used to adjust for non-response, and their effect on the prevalence of six survey outcomes is compared. Although significant differences are found between respondents and non-respondents, only slight changes in prevalence estimates are observed after adjustment, with both techniques showing similar results. In the sole case of smoking, substitution seems to have further biased survey estimates. Our results suggest that when information is available for both respondents and non-respondents, or if a careful sample substitution process is performed, weighting adjustments based on response propensity and field substitution produce comparable prevalence estimates.
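The weighting adjustment can be sketched in a few lines: each respondent is weighted by the inverse of their estimated response propensity, so groups that respond less count proportionally more in the prevalence estimate. The propensities and outcomes below are hypothetical, not the survey's data.

```python
# Hedged sketch of inverse-response-propensity weighting for a binary outcome.
# outcomes[i] is 1/0 for respondent i; propensities[i] is that respondent's
# estimated probability of responding (hypothetical values).

def weighted_prevalence(outcomes, propensities):
    """Inverse-propensity-weighted prevalence of a binary survey outcome."""
    weights = [1.0 / p for p in propensities]
    return sum(w * y for w, y in zip(weights, outcomes)) / sum(weights)

# Two respondents from a low-propensity group (p=0.4) who smoke, three from a
# high-propensity group (p=0.8) who do not: the weighted prevalence (4/7)
# exceeds the unweighted 2/5 because under-responding smokers are up-weighted.
prev = weighted_prevalence([1, 1, 0, 0, 0], [0.4, 0.4, 0.8, 0.8, 0.8])
```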
New Methods of Magnetic Field Measurements
NASA Astrophysics Data System (ADS)
Kholtygin, A. F.
2015-04-01
The standard methods of magnetic field measurements, based on the relation between the Stokes V parameter and the first derivative of the line profile intensity were modified by applying a linear integral transform to both sides of this relation. We used the wavelet integral transform with the DOG wavelets. The key advantage of the proposed method is the effective suppression of the noise contribution both to the line profile and the Stokes V parameter. To test the proposed method, spectropolarimetric observations of the young O star θ1 Ori C were used. We also demonstrate that the smoothed Time Variation Spectra (smTVS) can be used as a tool for detecting the local stellar magnetic fields.
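The noise-suppression idea can be illustrated with a toy version of the wavelet step, assuming the first-derivative-of-Gaussian member of the DOG family (the abstract does not state which order was used): convolving a profile with this zero-mean wavelet responds to profile slopes while averaging out white noise.

```python
# Illustrative sketch (not the authors' pipeline): filtering a line profile
# with a first-derivative-of-Gaussian (DOG) wavelet at one scale.
from math import exp

def dog_wavelet(x: float, scale: float) -> float:
    """First derivative of a Gaussian; antisymmetric, zero mean."""
    u = x / scale
    return -u * exp(-0.5 * u * u)

def wavelet_filter(signal, scale, half_width):
    """Filter the signal with the DOG wavelet taps (edges truncated)."""
    offsets = range(-half_width, half_width + 1)
    taps = [dog_wavelet(k, scale) for k in offsets]
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in zip(offsets, taps):
            if 0 <= i + k < n:
                acc += w * signal[i + k]
        out.append(acc)
    return out
```

Because the taps sum to zero, a flat (pure-continuum) stretch of the profile maps to zero, while a spectral-line edge produces a clear response; applying the same transform to both sides of the Stokes V relation is what preserves the relation while suppressing noise.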
Nguyen, Lam; Stoter, Stein; Baum, Thomas; Kirschke, Jan; Ruess, Martin; Yosibash, Zohar; Schillinger, Dominik
2017-03-11
The voxel finite cell method uses unfitted finite element meshes and voxel quadrature rules to seamlessly transfer computed tomography data into patient-specific bone discretizations. The method, however, still requires the explicit parametrization of boundary surfaces to impose traction and displacement boundary conditions, which constitutes a potential roadblock to automation. We explore a phase-field-based formulation for imposing traction and displacement constraints in a diffuse sense. Its essential component is a diffuse geometry model generated from metastable phase-field solutions of the Allen-Cahn problem that assumes the imaging data as initial condition. Phase-field approximations of the boundary and its gradient are then used to transfer all boundary terms in the variational formulation into volumetric terms. We show that in the context of the voxel finite cell method, diffuse boundary conditions achieve the same accuracy as boundary conditions defined over explicit sharp surfaces, if the inherent length scales, i.e., the interface width of the phase field, the voxel spacing, and the mesh size, are properly related. We demonstrate the flexibility of the new method by analyzing stresses in a human femur and a vertebral body.
NASA Astrophysics Data System (ADS)
Kyllmar, K.; Mårtensson, K.; Johnsson, H.
2005-03-01
A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in, e.g., small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19-47 and 8-38 kg ha-1 yr-1, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.
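The coefficient-building step described above reduces to a grouped average: for each (main crop, following crop, fertilisation) combination within one region and soil type, average the simulated annual leaching over all years and crop sequences. A minimal sketch, with hypothetical values rather than SOILNDB output:

```python
# Minimal sketch of deriving N leaching coefficients (NLCs) as grouped means.
# Each simulation record is ((main_crop, following_crop, fertilisation), kg_N_per_ha_yr).
from collections import defaultdict
from statistics import mean

def leaching_coefficients(simulations):
    """Mean annual N leaching per crop/following-crop/fertilisation combination."""
    by_combo = defaultdict(list)
    for combo, leaching in simulations:
        by_combo[combo].append(leaching)
    return {combo: mean(vals) for combo, vals in by_combo.items()}

runs = [(("spring barley", "winter wheat", "manure autumn"), 42.0),
        (("spring barley", "winter wheat", "manure autumn"), 38.0),
        (("ley", "ley", "mineral"), 11.0)]
nlc = leaching_coefficients(runs)  # kg N ha-1 yr-1 per combination (hypothetical)
```

Field-scale leaching for an actual rotation is then a lookup of the relevant coefficients, which is what makes the method weather-independent.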
Liu, Ji; Yu, Li-xia; Zhang, Bin; Zhao, Dong-e; Li, Xiao-yan; Wang, Heng-fei
2016-03-01
Deflagration fires that last a long time and cover a large area during large-equivalent explosions make it difficult to obtain fragment velocity parameters in the near field. To solve this problem, this paper proposes an integrated photoelectric transceiver method that uses a laser screen as the sensing area. Analysis of the explosion flame spectral radiation of three different warhead types shows that intensity is relatively low within the 0.3 to 1.0 μm band. On this basis, the optical system applies the principle of determining velocity from a fixed distance and a measured time, together with reflector technology, and consists of a single-longitudinal-mode laser, a cylindrical Fresnel lens, narrow-band filters, high-speed optical sensors, etc. The system's advantages, such as the integrated transceiver, compact structure, and the combination of a narrow-band filter with a single-longitudinal-mode laser, effectively suppress interference from the fire's spectrum and from background light. Large numbers of experiments with different models and explosive equivalents were conducted to measure the velocities of different kinds of warhead fragments, and high signal-to-noise waveform signals were obtained after a series of signal de-noising and recognition steps using an NI data acquisition and recording system. The experimental results show that this method can accurately measure fragment velocities around the center of the explosion. Specifically, the minimum fragment size that can be measured is 4 mm, speeds of up to 1200 m x s(-1) can be obtained, and the capture rate is better than 95% compared with target-plate test results. At the same time, the system adopts Fresnel lenses to form a rectangular screen, which makes the rectangular light distribution uniform in the vertical direction, with light-intensity uniformity in the horizontal direction above 80%. Consequently, the system can
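The underlying measurement principle ("determining velocity from a fixed distance and a measured time") reduces to time-of-flight between laser-screen crossings. A minimal sketch with hypothetical numbers, not data from the paper's trials:

```python
# Hedged sketch of the time-of-flight principle behind a laser-screen
# velocity measurement: mean fragment velocity over the screen spacing.

def fragment_velocity(screen_distance_m: float, dt_s: float) -> float:
    """v = d / dt, the mean velocity between the two trigger pulses."""
    return screen_distance_m / dt_s

# Hypothetical: 2 m between screens, 1.6 ms between photoelectric pulses.
v = fragment_velocity(2.0, 1.6e-3)  # 1250 m/s, within the reported capability
```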
Apparatuses and methods for generating electric fields
Scott, Jill R; McJunkin, Timothy R; Tremblay, Paul L
2013-08-06
Apparatuses and methods relating to generating an electric field are disclosed. An electric field generator may include a semiconductive material configured in a physical shape substantially different from a shape of an electric field to be generated thereby. The electric field is generated when a voltage drop exists across the semiconductive material. A method for generating an electric field may include applying a voltage to a shaped semiconductive material to generate a complex, substantially nonlinear electric field. The shape of the complex, substantially nonlinear electric field may be configured for directing charged particles to a desired location. Other apparatuses and methods are disclosed.
Lockie, Robert G; Murphy, Aron J; Scott, Brendan R; Janse de Jonge, Xanne A K
2012-10-01
Session ratings of perceived exertion (session RPE) are commonly used to assess global training intensity for team sports. However, there is little research quantifying the intensity of field-based training protocols for speed development. The study's aim was to determine the session RPE of popular training protocols (free sprint [FST], resisted sprint [RST], and plyometrics [PT]) designed to improve sprint acceleration over 10 m in team sport athletes. Twenty-seven men (age = 23.3 ± 4.7 years; mass = 84.5 ± 8.9 kg; height = 1.83 ± 0.07 m) were divided into 3 groups according to 10-m velocity. Training consisted of an incremental program featuring two 1-hour sessions per week for 6 weeks. Subjects recorded session RPE 30 minutes post training using the Borg category-ratio 10 scale. Repeated measures analysis of variance found significant (p < 0.05) changes in sprint velocity and session RPE over 6 weeks. All groups significantly increased 0- to 5-m velocity and 0- to 10-m velocity by 4-7%, with no differences between groups. There were no significant differences in session RPE between the groups, suggesting that protocols were matched for intensity. Session RPE significantly increased over the 6 weeks for all groups, ranging from 3.75 to 5.50. This equated to intensities of somewhat hard to hard. Post hoc testing revealed few significant weekly increases, suggesting that session RPE may not be sensitive to weekly load increases in sprint and plyometric training programs. Another explanation, however, could be that the weekly load increments used were not great enough to increase perceived exertion. Nonetheless, the progressive overload of each program was sufficient to improve 10-m sprint performance. The session RPE values from the present study could be used to assess workload for speed training periodization within a team sports conditioning program.
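Session RPE is commonly converted into a session training load by multiplying the CR-10 rating by session duration in minutes (Foster's method). The abstract does not state that this study computed loads, so the sketch below is illustrative only, using ratings in the reported 3.75 to 5.50 range and the study's 1-hour sessions.

```python
# Illustrative sketch (assumption: Foster's session-load convention, not a
# calculation reported in the study): session load in arbitrary units (AU).

def session_load(rpe: float, duration_min: float) -> float:
    """Session training load = session RPE (CR-10) x duration in minutes."""
    return rpe * duration_min

# One hypothetical 60-minute session per week across the 6-week program,
# with ratings spanning the study's reported 3.75-5.50 range:
weekly_rpe = [3.75, 4.0, 4.5, 4.75, 5.0, 5.5]
loads = [session_load(r, 60.0) for r in weekly_rpe]  # 225 AU rising to 330 AU
```

Tracking such loads week to week is one way a conditioning coach could apply the session-RPE values the study reports when periodizing speed training.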
NASA Astrophysics Data System (ADS)
Leiva Lopez, Josue Nahun
In general, the nursery industry lacks an automated inventory control system. Object-based image analysis (OBIA) software and aerial images could be used to count plants in nurseries. The objectives of this research were: 1) to evaluate the effect of an unmanned aerial vehicle (UAV) flight altitude and plant canopy separation of container-grown plants on count accuracy using aerial images and 2) to evaluate the effect of plant canopy shape, presence of flowers, and plant status (living and dead) on counting accuracy of container-grown plants using remote sensing images. Images were analyzed using Feature Analyst® (FA) and an algorithm trained using MATLAB®. Total count error, false positives and unidentified plants were recorded from output images using FA; only total count error was reported for the MATLAB algorithm. For objective 1, images were taken at 6, 12 and 22 m above the ground using a UAV. Plants were placed on black fabric and gravel, and spaced as follows: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. In general, when both methods were considered, total count error was smaller [ranging from -5 (undercount) to 4 (over count)] when plants were fully separated with the exception of images taken at 22 m. FA showed a smaller total count error (-2) than MATLAB (-5) when plants were placed on black fabric than those placed on gravel. For objective 2, the plan was to continue using the UAV, however, due to the unexpected disruption of the GPS-based navigation by heightened solar flare activity in 2013, a boom lift that could provide images on a more reliable basis was used. When images obtained using a boom lift were analyzed using FA there was no difference between variables measured when an algorithm trained with an image displaying regular or irregular plant canopy shape was applied to images displaying both plant canopy shapes even though the canopy shape of 'Sea Green' juniper is less compact than 'Plumosa Compacta
Computer Based Virtual Field Trips.
ERIC Educational Resources Information Center
Clark, Kenneth F.; Hosticka, Alice; Schriver, Martha; Bedell, Jackie
This paper discusses computer based virtual field trips that use technologies commonly found in public schools in the United States. The discussion focuses on the advantages of both using and creating these field trips for an instructional situation. A virtual field trip to Cumberland Island National Seashore, St. Marys, Georgia is used as a point…
Galanti, Eli; Kaspi, Yohai
2016-04-01
During 2016–17, the Juno and Cassini spacecraft will both perform close eccentric orbits of Jupiter and Saturn, respectively, obtaining high-precision gravity measurements for these planets. These data will be used to estimate the depth of the observed surface flows on these planets. All models to date, relating the winds to the gravity field, have been in the forward direction, thus only allowing the calculation of the gravity field from given wind models. However, there is a need to do the inverse problem since the new observations will be of the gravity field. Here, an inverse dynamical model is developed to relate the expected measurable gravity field, to perturbations of the density and wind fields, and therefore to the observed cloud-level winds. In order to invert the gravity field into the 3D circulation, an adjoint model is constructed for the dynamical model, thus allowing backward integration. This tool is used for the examination of various scenarios, simulating cases in which the depth of the wind depends on latitude. We show that it is possible to use the gravity measurements to derive the depth of the winds, both on Jupiter and Saturn, also taking into account measurement errors. Calculating the solution uncertainties, we show that the wind depth can be determined more precisely in the low-to-mid-latitudes. In addition, the gravitational moments are found to be particularly sensitive to flows at the equatorial intermediate depths. Therefore, we expect that if deep winds exist on these planets they will have a measurable signature by Juno and Cassini.
Hwa, Jae Hwa; Yoon, Young Jun; Lee, Hwan Gi; Yoo, Gwan Min; Cho, Eou-Sik; Cho, Seongjae; Lee, Jung-Hee; Kang, In Man
2014-11-01
This paper presents a new extraction method for source and drain (S/D) series resistances of silicon nanowire (SNW) metal-oxide-semiconductor field-effect transistors (MOSFETs) based on small-signal radio-frequency (RF) analysis. The proposed method can be applied to the extraction of S/D series resistances for SNW MOSFETs with finite off-state channel resistance as well as gate bias-dependent on-state resistive components realized by 3-dimensional (3-D) device simulation. The series resistances as a function of frequency and gate voltage are presented and compared with the results obtained by an existing method with infinite off-state channel resistance model. The accuracy of the newly proposed parameter extraction method has been successfully verified by Z22- and Y-parameters up to 100 GHz operation frequency.
Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan
2016-09-01
Ionic mixtures, measured as specific conductivity, have raised increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging given that laboratory test systems can examine neither the more salt-intolerant species nor effects occurring in streams. Large data sets used for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark by using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by USEPA to derive aquatic life benchmarks for specific conductivity for basin-scale application, and may provide useful information for water pollution control and management.
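A plausible reading of the 2-point interpolation step, sketched with hypothetical XC95 values rather than the Hun-Tai data: rank the genus XC95 values and linearly interpolate between the two values that bracket the 5th percentile of genera, giving the benchmark expected to protect 95% of taxa.

```python
# Hedged sketch of a 2-point interpolation for a hazardous-concentration
# benchmark (HC05-style) from ranked genus XC95 values. The exact ranking
# convention used in the study is an assumption here.

def hc05(xc95_values, p=0.05):
    """Benchmark protecting (1-p) of taxa via 2-point linear interpolation."""
    xs = sorted(xc95_values)
    pos = p * (len(xs) - 1)      # fractional rank of the p-th percentile
    lo = int(pos)
    frac = pos - lo
    return xs[lo] + frac * (xs[lo + 1] - xs[lo])

# 20 hypothetical genus XC95 values (uS/cm):
benchmark = hc05([210, 240, 260, 300, 350, 420, 500, 640, 800, 990,
                  1100, 1250, 1400, 1600, 1800, 2000, 2300, 2600, 3000, 3500])
```

With more genera (the study used 60) the interpolation lands between the 3rd and 4th most sensitive genera rather than the first two.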
NASA Astrophysics Data System (ADS)
Galanti, E.; Finocchiaro, S.; Kaspi, Y.; Iess, L.
2013-12-01
The upcoming high-precision measurements of the Juno flybys around Jupiter have the potential of improving the estimation of Jupiter's gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be over a limited latitudinal and longitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially with regards to the Jovian wind structure and its depth at high latitudes. In this work we propose a new iterative method for the estimation of the Jupiter gravity field, using the Juno expected measurements, a trajectory estimation model, and an adjoint-based inverse thermal wind model. Beginning with an artificial gravitational field, the trajectory estimation model together with an optimization procedure is used to obtain an initial solution of the gravitational moments. As upper limit constraints, the model applies the gravity harmonics obtained from a thermal wind model in which the winds are assumed to penetrate barotropically along the direction of the spin axis. The solution from the trajectory model is then used as an initial guess for the thermal wind model, and together with an adjoint optimization method, the optimal penetration depth of the winds is computed. As a final step, the gravity harmonics solution from the thermal wind model is given back to the trajectory model, along with an uncertainties estimate, to be used as constraints for a new calculation of the gravity field. We test this method for several cases, some with zonal harmonics only, and some with the full gravity field including longitudinal variations that include the tesseral harmonics as well. The results show that using this method some of the gravitational moments are fitted better to the 'observed' ones, mainly due to the fact that the thermal wind model takes into consideration the wind structure and depth.
Meyer, Frans J C; Davidson, David B; Jakobus, Ulrich; Stuchly, Maria A
2003-02-01
A hybrid finite-element method (FEM)/method of moments (MoM) technique is employed for specific absorption rate (SAR) calculations in a human phantom in the near field of a typical Global System for Mobile Communications (GSM) base-station antenna. The MoM is used to model the metallic surfaces and wires of the base-station antenna, and the FEM is used to model the heterogeneous human phantom. The advantages of each of these frequency domain techniques are, thus, exploited, leading to a highly efficient and robust numerical method for addressing this type of bioelectromagnetic problem. The basic mathematical formulation of the hybrid technique is presented. This is followed by a discussion of important implementation details, in particular the linear algebra routines for sparse, complex FEM matrices combined with dense MoM matrices. The implementation is validated by comparing results to MoM (surface equivalence principle implementation) and finite-difference time-domain (FDTD) solutions of human exposure problems. A comparison of the computational efficiency of the different techniques is presented. The FEM/MoM implementation is then used for whole-body and critical-organ SAR calculations in a phantom at different positions in the near field of a base-station antenna. This problem cannot, in general, be solved using the MoM or FDTD due to computational limitations. This paper shows that the specific hybrid FEM/MoM implementation is an efficient numerical tool for accurate assessment of human exposure in the near field of base-station antennas.
Casti, Paola; Mencattini, Arianna; Salmeri, Marcello; Ancona, Antonietta; Mangieri, Fabio Felice; Pepe, Maria Luisa; Rangayyan, Rangaraj Mandayam
2013-10-01
Automatic detection of the nipple in mammograms is an important step in computerized systems that combine multiview information for accurate detection and diagnosis of breast cancer. Locating the nipple is a difficult task owing to variations in image quality, presence of noise, and distortion and displacement of the breast tissue due to compression. In this work, we propose a novel Hessian-based method to automatically locate the nipple in screen-film and full-field digital mammograms (FFDMs). The method includes detection of a plausible nipple/retroareolar area in a mammogram using geometrical constraints, analysis of the gradient vector field by mean and Gaussian curvature measurements, and local shape-based conditions. The proposed procedure was tested on 566 mammographic images consisting of 372 randomly selected scanned films from two public databases (mini-MIAS and DDSM), and 194 digital mammograms acquired with a GE Senographe 2000D FFDM system. A radiologist independently marked the centers of the nipples for evaluation of the results. The average error obtained was 6.7 mm (22 pixels) with reference to the center of the nipple as identified by the radiologist. Only two of the 566 detected nipples (0.35%) had an error larger than 50 mm. The method was also directly compared with two other techniques for the detection of the nipple. The results indicate that the proposed method outperforms other algorithms presented in the literature and can be used to accurately identify the nipple on various types of mammographic images.
Third-order aberrations in GRIN crystalline lens: A new method based on axial and field rays
Río, Arturo Díaz del; Gómez-Reino, Carlos; Flores-Arias, M. Teresa
2014-01-01
This paper presents a new procedure for calculating the third-order aberrations of gradient-index (GRIN) lenses. The procedure combines an iterative numerical method, applied to two paraxial rays with boundary conditions on general curved end surfaces, with a second, algebraic step based on the Hamiltonian theory of aberrations. Application of this new method to a GRIN human crystalline lens is analyzed in the framework of the bi-elliptical model. The different third-order aberrations are determined, except those whose calculation requires skew rays, because the study is restricted to meridional rays. PMID:25444647
Optimization methods in control of electromagnetic fields
NASA Astrophysics Data System (ADS)
Angell, Thomas S.; Kleinman, Ralph E.
1991-05-01
This program is developing constructive methods for certain constrained optimization problems arising in the design and control of electromagnetic fields and in the identification of scattering objects. The problems addressed fall into three categories: (1) the design of antennas with optimal radiation characteristics measured in terms of directivity; (2) the control of the electromagnetic scattering characteristics of an object, in particular the minimization of its radar cross section, by the choice of material properties; and (3) the determination of the shape of scattering objects with various electromagnetic properties from scattered field data. The main thrust of the program is toward the development of constructive methods based on the use of complete families of solutions of the time-harmonic Maxwell equations in the infinite domain exterior to the radiating or scattering body. During the course of the work an increasing amount of attention has been devoted to the use of iterative methods for the solution of various direct and inverse problems. The continued investigation and development of these methods and their application in parameter identification has become a significant part of the program.
NASA Astrophysics Data System (ADS)
Kitauchi, H.; Nozaki, K.; Ito, H.; Kondo, T.; Tsuchiya, S.; Imamura, K.; Nagatsuma, T.; Ishii, M.
2014-12-01
We present our recent efforts to evaluate the numerical method for predicting the electric field strength of ionospherically propagated low-frequency (LF) radio waves, based on the wave-hop propagation theory described in Section 2.4 of Recommendation ITU-R P.684-6 (2012), "Prediction of field strength at frequencies below about 150 kHz," issued by the International Telecommunication Union Radiocommunication Sector (ITU-R). As part of the Japanese Antarctic Research Expedition (JARE), we continuously measure the electric field strengths and phases of the LF 40 kHz and 60 kHz radio signals (call sign JJY) along both legs of the voyage between Tokyo, Japan and Syowa Station, the Japanese Antarctic station at 69° 00' S, 39° 35' E on East Ongul Island, Lützow-Holm Bay, East Antarctica. The measurements are made by a newly developed, highly sensitive receiving system, comprising an orthogonally crossed double-loop antenna and digital-signal-processing lock-in amplifiers, installed on board the Japanese Antarctic research vessel (RV) Shirase. During the 55th JARE, from November 2013 to April 2014, we obtained new data sets of the electric field strength for propagation of the JJY 40 kHz and 60 kHz waves out to approximately 13,000-14,000 km. Comparisons between these on-board measurements and the numerical field-strength predictions show that our results qualitatively support the recommended wave-hop theory for great-circle paths of approximately 7,000-8,000 km and 13,000-14,000 km.
Third-order aberrations in GRIN crystalline lens: a new method based on axial and field rays.
Río, Arturo Díaz Del; Gómez-Reino, Carlos; Flores-Arias, M Teresa
2015-01-01
This paper presents a new procedure for calculating the third-order aberrations of gradient-index (GRIN) lenses. The procedure combines an iterative numerical method, applied to two paraxial rays with boundary conditions on general curved end surfaces, with a second, algebraic step based on the Hamiltonian theory of aberrations. Application of this new method to a GRIN human crystalline lens is analyzed in the framework of the bi-elliptical model. The different third-order aberrations are determined, except those whose calculation requires skew rays, because the study is restricted to meridional rays. Copyright © 2013 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.
Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru
2016-10-11
An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of measured volumes using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image − mean volume in each subject)/(mean volume in each subject)]. The average percentage change was calculated as the percentage change in the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
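The percentage-change definition above is straightforward to compute from two repeated scans per subject; a short sketch with hypothetical grey-matter volumes (not the study's data):

```python
import numpy as np

def percentage_changes(first_scan, second_scan):
    """Percentage change of the first scan's measured volume from the
    per-subject mean of the two scans, following the abstract's definition:
    100 * (measured - mean) / mean."""
    first = np.asarray(first_scan, dtype=float)
    second = np.asarray(second_scan, dtype=float)
    mean = (first + second) / 2.0
    return 100.0 * (first - mean) / mean

# Hypothetical grey-matter volumes (mL) for three subjects, scanned twice.
first = [610.0, 655.2, 598.4]
second = [612.4, 653.0, 601.1]
pc = percentage_changes(first, second)
print(np.round(np.abs(pc), 3))  # per-subject repeatability, in percent
```

Averaging the absolute values of these per-subject changes over an ROI gives the "average percentage change" the abstract reports per region.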
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
According to the requirements of the increasing development of optical transmission systems, a novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method can effectively avoid girth-4 phenomena and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of the code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3 780,3 540) code with the code rate of 93.7% constructed by the proposed method is excellent: at a bit error rate (BER) of 10^-7, its net coding gain is 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher, respectively, than those of the QC-LDPC(5 334,4 962) code constructed by the method based on inverse-element characteristics of the finite field multiplicative group, the SCG-LDPC(3 969,3 720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32 640,30 592) code in ITU-T G.975.1, and the classic RS(255,239) code of ITU-T G.975 that is widely used in optical transmission systems. Therefore, the constructed QC-LDPC(3 780,3 540) code is more suitable for optical transmission systems.
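The paper's exact exponent-matrix construction is not reproduced in the abstract, but the underlying algebraic object, a cyclic subgroup of the multiplicative group of a finite field, is easy to illustrate. The exponent-matrix step below is a hypothetical stand-in for how subgroup elements might index circulant shift values, not the authors' actual scheme:

```python
def multiplicative_subgroup(q, g):
    """Cyclic subgroup of GF(q)* generated by g (q prime, g not divisible by q)."""
    elems, x = [], 1
    while True:
        x = (x * g) % q
        elems.append(x)
        if x == 1:  # we are back at the identity: subgroup complete
            return sorted(elems)

# In GF(31)*, the element 2 has order 5, so it generates a 5-element subgroup.
sub = multiplicative_subgroup(31, 2)
print(sub)  # [1, 2, 4, 8, 16]

# Hypothetical exponent matrix for circulant blocks: entry (i, j) is
# (a_i * b_j) mod q, with row labels a_i and column labels b_j drawn
# from the subgroup.
rows, cols = sub[:3], sub[:4]
E = [[(a * b) % 31 for b in cols] for a in rows]
for row in E:
    print(row)
```

Product-form exponent matrices of this kind are a common way to guarantee structure (and control girth) in QC-LDPC designs, which is why the subgroup's closure under multiplication matters.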
Jiang, Su; Liu, Ya-Feng; Wang, Xiao-Min; Liu, Ke-Fei; Zhang, Ding-Hong; Li, Yi-Ding; Yu, Ai-Ping; Zhang, Xiao-Hui; Zhang, Jia-Yi; Xu, Jian-Guang; Gu, Yu-Dong; Xu, Wen-Dong; Zeng, Shao-Qun
2016-01-01
We introduce a more flexible optogenetics-based mapping system attached on a stereo microscope, which offers automatic light stimulation to individual regions of interest in the cortex that expresses light-activated channelrhodopsin-2 in vivo. Combining simultaneous recording of electromyography from specific forelimb muscles, we demonstrate that this system offers much better efficiency and precision in mapping distinct domains for controlling limb muscles in the mouse motor cortex. Furthermore, the compact and modular design of the system also yields a simple and flexible implementation to different commercial stereo microscopes, and thus could be widely used among laboratories. PMID:27699114
NASA Astrophysics Data System (ADS)
Cottura, M.; Appolaire, B.; Finel, A.; Le Bouar, Y.
2016-09-01
A phase field model is coupled to strain gradient crystal plasticity based on dislocation densities. The resulting model includes anisotropic plasticity and the size dependence of plastic activity, required when plasticity is confined to regions below a few microns in size. These two features are important for handling microstructure evolution during diffusive phase transformations that involve plastic deformation in confined areas, such as Ni-based superalloys undergoing rafting. The model also uses a storage-recovery law for the evolution of the dislocation density of each glide system and a hardening matrix to account for the short-range interactions between dislocations. First, it is shown that the unstable modes during the morphological destabilization of a growing misfitting circular precipitate are selected by the anisotropy of plasticity. Then, the rafting of γ′ precipitates in a Ni-based superalloy is investigated during [100] creep loadings. Our model includes most of the important physical phenomena at play during the microstructure evolution, such as the presence of different crystallographic γ′ variants, their misfit with the γ matrix, the elastic inhomogeneity and anisotropy, and the hardening, anisotropy and viscosity of plasticity. In agreement with experiments, the model predicts that rafting proceeds perpendicularly to the tensile loading axis, and plasticity is shown to significantly slow the evolution of the rafts.
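Storage-recovery laws of the kind mentioned above are commonly written in Kocks-Mecking form, where storage scales with the square root of the dislocation density and dynamic recovery scales linearly with it. A sketch with assumed coefficients (not the paper's), integrated by explicit Euler:

```python
import math

def evolve_dislocation_density(rho0, gamma_dot, k1, k2, dt, steps):
    """Explicit Euler integration of a generic Kocks-Mecking-type
    storage-recovery law:
        d(rho)/dt = (k1 * sqrt(rho) - k2 * rho) * |gamma_dot|
    Storage (k1 term) models forest hardening; recovery (k2 term)
    models annihilation, giving saturation at rho = (k1 / k2)**2."""
    rho = rho0
    history = [rho]
    for _ in range(steps):
        rho += (k1 * math.sqrt(rho) - k2 * rho) * abs(gamma_dot) * dt
        history.append(rho)
    return history

# Hypothetical parameters: density should saturate at (3e7 / 3.0)**2 = 1e14 m^-2.
hist = evolve_dislocation_density(rho0=1e12, gamma_dot=1e-3, k1=3e7, k2=3.0,
                                  dt=1.0, steps=20000)
print(f"initial {hist[0]:.2e}, final {hist[-1]:.2e}")
```

In the full crystal-plasticity model one such equation evolves per glide system, with the hardening matrix coupling the systems through their mutual forest densities.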
NASA Astrophysics Data System (ADS)
Vergallo, P.; Lay-Ekuakille, A.
2013-08-01
Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and a fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially when a seizure occurs, in which case it is important to localize it. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the externally recorded field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at the electrodes, is more difficult: it may not have a unique solution, and the search for a solution is hampered by low spatial resolution, which may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or go undetected, and a known source-localization method such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source-localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power versus location is, as a result, sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, a priori information about the sparsity of the signal must be set. The problem is formulated and solved using a regularization method such as Tikhonov's, which computes a solution that is the best compromise between two cost functions to be minimized: one related to the fit to the data, and another enforcing the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the model considered for the head and brain sources, the result obtained allows to
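The compromise that Tikhonov regularization strikes between fitting the data and keeping the regularization term small can be shown on a toy linear inverse problem. The lead-field matrix, source amplitudes, and noise level below are all invented for illustration, and a plain L2 penalty is used rather than the sparsity-adapted term discussed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear inverse problem: 32 "electrodes", 120 candidate source locations.
n_elec, n_src = 32, 120
A = rng.standard_normal((n_elec, n_src))             # forward (lead-field) matrix
x_true = np.zeros(n_src)
x_true[[10, 55, 90]] = [2.0, -1.5, 1.0]              # few active sources -> sparse
b = A @ x_true + 0.05 * rng.standard_normal(n_elec)  # noisy electrode data

# Tikhonov regularization: minimize ||A x - b||^2 + lam * ||x||^2, with the
# closed-form solution x = (A^T A + lam I)^(-1) A^T b. Increasing lam trades
# data fit (first cost) for a smaller regularized solution (second cost).
misfits, norms = [], []
for lam in (0.1, 1.0, 10.0):
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_src), A.T @ b)
    misfits.append(np.linalg.norm(A @ x_hat - b))
    norms.append(np.linalg.norm(x_hat))
    print(f"lam={lam:5.1f}  data misfit={misfits[-1]:.4f}  "
          f"solution norm={norms[-1]:.4f}")
```

As lam grows, the data misfit rises monotonically while the solution norm shrinks; choosing lam is exactly the compromise the abstract describes. A sparsity-promoting variant would replace the squared-norm penalty with an L1-type term, which no longer has a closed-form solution.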
NASA Astrophysics Data System (ADS)
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang
2015-10-01
GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to the film relative dose image. The dose agreement between calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced to the eight RapidArc cases, respectively, to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to get a good 2D relative dose calibration result. The mean gamma passing rate of the eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, which means some dose error in the film would be falsely corrected to keep the dose in the film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative curve of the dose calibration curve would be non-monotonic, which would expose the dose abnormality. By using the PBC method, we extended the application of more economical RTQA2 film to patient specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the
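The monotonicity test on the derivative of the calibration curve can be sketched with finite differences. The curve shapes below are hypothetical (a smooth dose-vs-pixel-value curve, and the same curve with a local bump standing in for a masked dose error), not measured film data:

```python
import numpy as np

def derivative_is_monotonic(pixel_values, doses):
    """Finite-difference version of the abstract's robustness check: if the
    derivative of the dose-vs-pixel-value calibration curve is non-monotonic
    (its second difference changes sign), the plan-based calibration may be
    masking a dose abnormality."""
    pv = np.asarray(pixel_values, dtype=float)
    d = np.asarray(doses, dtype=float)
    order = np.argsort(pv)
    slope = np.diff(d[order]) / np.diff(pv[order])   # first derivative
    curvature = np.diff(slope)                       # change of the derivative
    return bool(np.all(curvature >= 0) or np.all(curvature <= 0))

# Hypothetical calibration data: dose falls smoothly with pixel value.
pv = np.linspace(20000, 50000, 60)
dose_ok = 300.0 - 0.005 * pv + 1e-8 * (pv - 35000) ** 2
# The same curve with a local bump, as a stand-in for a masked dose error.
dose_bad = dose_ok + 50.0 * np.exp(-(((pv - 35000) / 2000.0) ** 2))

print(derivative_is_monotonic(pv, dose_ok))   # True
print(derivative_is_monotonic(pv, dose_bad))  # False
```

On noisy measured data one would smooth the curve before differencing; the sketch keeps the data noise-free so the sign test is exact.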
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang
2015-10-07
GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to the film relative dose image. The dose agreement between calculated and film dose images were analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced to the eight RapidArc cases respectively to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to get a good 2D relative dose calibration result. The mean gamma passing rate of eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, which means some dose error in the film would be falsely corrected to keep the dose in film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative curve of the dose calibration curve would be non-monotonic which would expose the dose abnormality. By using the PBC method, we extended the application of more economical RTQA2 film to patient specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the calibration
Low field SQUID MRI devices, components and methods
NASA Technical Reports Server (NTRS)
Penanen, Konstantin I. (Inventor); Eom, Byeong H (Inventor); Hahn, Inseob (Inventor)
2010-01-01
Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.
Low field SQUID MRI devices, components and methods
NASA Technical Reports Server (NTRS)
Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)
2011-01-01
Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.
Low Field Squid MRI Devices, Components and Methods
NASA Technical Reports Server (NTRS)
Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)
2013-01-01
Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.
Low Field Squid MRI Devices, Components and Methods
NASA Technical Reports Server (NTRS)
Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)
2014-01-01
Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.
C. Alina Cansler; Donald. McKenzie
2012-01-01
Remotely sensed indices of burn severity are now commonly used by researchers and land managers to assess fire effects, but their relationship to field-based assessments of burn severity has been evaluated only in a few ecosystems. This analysis illustrates two cases in which methodological refinements to field-based and remotely sensed indices of burn severity...
NASA Astrophysics Data System (ADS)
Shackleton, J. R.; Cooke, M. L.; Johnson, G.; Cilona, A.
2012-12-01
latest stages of fold growth during or after deposition of terrestrial deposits (Garumnian sequence). This interpretation is based on the observation of significant iron oxidation that is probably associated with subaerial exposure. Sant Corneli anticline offers a unique opportunity to evaluate the efficacy of structural restoration in predicting sub-seismic scale fractures and faults because each fracturing event is temporally constrained by relationships to growth strata that constrain fold evolution. Thus, the strains predicted by the restorations are compared to the fracture sets that formed over the corresponding time intervals. In this manner, we can directly evaluate the efficacy of the restoration in predicting fracture patterns, although the time intervals identified by the growth strata are typically much larger than the time scales we expect individual fracturing events to occur.
NASA Astrophysics Data System (ADS)
Kitada, K.; Araki, E.; Kimura, T.; Saffer, D. M.; Byrne, T.; McNeill, L. C.; Toczko, S.; Eguchi, N. O.; Takahashi, K.
2009-12-01
environmental conditions, such as shock, acceleration, and vibration during installation; and (2) to confirm sensor installation operational procedures, such as onboard assembly of the sensor tree, ship maneuvers to reenter the sensor tree, and entry into the hole. Acceleration and tilt data were recorded at 500 Hz and recovered after the dummy run test. Preliminary results from the vibration analysis show that strong vibration due to the high Kuroshio Current (~5 knots) occurred during the test. Spectral analysis of the collected acceleration data reveals the drill pipe vibration and the resonance of the instrument carrier. The resonance was much larger than the drill pipe vibration, and its magnitude may depend on the structure of the instrument carrier. Preliminary results on the vibration mode and its amplitude, together with comparisons against current speed and direction, ship's speed, and the depth of the sensor assembly, are also shown to elucidate the cause of the vibration. These results give us an opportunity to establish installation methods, and to develop and refine sensors for the future long-term observatory emplacement.
NASA Astrophysics Data System (ADS)
Gao, Siwen; Rajendran, Mohan Kumar; Fivel, Marc; Ma, Anxin; Shchyglo, Oleg; Hartmaier, Alexander; Steinbach, Ingo
2015-10-01
Three-dimensional discrete dislocation dynamics (DDD) simulations in combination with the phase-field method are performed to investigate the influence of different realistic Ni-base single crystal superalloy microstructures with the same volume fraction of γ′ precipitates on plastic deformation at room temperature. The phase-field method is used to generate realistic microstructures as the boundary conditions for DDD simulations in which a constant high uniaxial tensile load is applied along different crystallographic directions. In addition, the lattice mismatch between the γ and γ′ phases is taken into account as a source of internal stresses. Due to the high antiphase boundary energy and the rare formation of superdislocations, precipitate cutting is not observed in the present simulations. Therefore, the plastic deformation is mainly caused by dislocation motion in γ matrix channels. From a comparison of the macroscopic mechanical response and the dislocation evolution for different microstructures in each loading direction, we found that, for a given γ′ phase volume fraction, the optimal microstructure should possess narrow and homogeneous γ matrix channels.
NASA Astrophysics Data System (ADS)
Maris, Virginie
An existing 3-D magnetotelluric (MT) inversion program written for a single-processor personal computer (PC) has been modified and parallelized using OpenMP, in order to run the program efficiently on a multicore workstation. The program uses the Gauss-Newton inversion algorithm based on a staggered-grid finite-difference forward problem, requiring explicit calculation of the Frechet derivatives. The most time-consuming tasks are calculating the derivatives and determining the model parameters at each iteration. Forward modeling and derivative calculations are parallelized by assigning the calculations for each frequency to separate threads, which execute concurrently. Model parameters are obtained by factoring the Hessian using the LDLT method, implemented using a block-cyclic algorithm and compact storage. MT data from 102 tensor stations over the East Flank of the Coso Geothermal Field, California are inverted. Less than three days are required to invert the dataset for ~55,000 inversion parameters on a 2.66 GHz 8-CPU PC with 16 GB of RAM. Inversion results, obtained starting from a halfspace rather than from initial 2-D inversions, qualitatively resemble models from massively parallel 3-D inversion by other researchers and, overall, exhibit an improved fit. A steeply west-dipping conductor under the western East Flank is tentatively correlated with a zone of high-temperature ionic fluids based on known well production and lost circulation intervals. Beneath the Main Field, vertical and north-trending shallow conductors are correlated with geothermal producing intervals as well.
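The per-frequency parallelization strategy works because each frequency's forward solve is independent of the others. A minimal sketch of the pattern, with a stand-in forward model (the closed-form impedance of a uniform halfspace, not the staggered-grid solver) and Python threads in place of OpenMP:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def forward_model(freq_hz):
    """Stand-in for one frequency's forward MT calculation. Each call is
    independent of the others, which is what makes per-frequency
    parallelism safe with no locking."""
    omega = 2.0 * np.pi * freq_hz
    mu0, rho = 4e-7 * np.pi, 100.0
    # Surface impedance of a uniform halfspace of resistivity rho (ohm-m):
    # Z = sqrt(i * omega * mu0 * rho); note |Z|^2 / (omega * mu0) recovers rho.
    return np.sqrt(1j * omega * mu0 * rho)

freqs = np.logspace(-3, 2, 12)  # 12 frequencies = 12 independent work items

# Serial reference, then the threaded version: results must agree exactly,
# since pool.map preserves the order of its inputs.
serial = [forward_model(f) for f in freqs]
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(forward_model, freqs))

print(np.allclose(serial, threaded))  # True
```

In the real code each work item is a full finite-difference solve, so the threads spend their time in numerical kernels and the scheduling overhead shown here is negligible by comparison.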
Harris, W; Zhang, Y; Ren, L; Yin, F
2014-06-01
Purpose: To investigate the feasibility of using nanoparticle markers to validate liver tumor motion together with a deformation field map-based four dimensional (4D) cone-beam computed tomography (CBCT) reconstruction method. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In this method, each phase of the 4D-CBCT is considered as a deformation of a prior CT volume. The DFM is solved by a motion modeling and free-form deformation (MM-FD) technique, using a data fidelity constraint and deformation energy minimization. For liver imaging, a liver tumor has low contrast in on-board projections. A validation of liver tumor motion using implanted gold nanoparticles, along with the MM-FD deformation technique, is implemented to reconstruct on-board 4D-CBCT liver radiotherapy images. These nanoparticles were placed around the liver tumor to reflect the tumor positions in both CT simulation and on-board image acquisition. When reconstructing each phase of the 4D-CBCT, the migrations of the gold nanoparticles act as a constraint to regularize the deformation field, along with the data fidelity and energy minimization constraints. In this study, multiple tumor diameters and positions were simulated within the liver for on-board 4D-CBCT imaging. The on-board 4D-CBCT reconstructed by the proposed method was compared with the "ground truth" image. Results: The preliminary data, which use the reconstruction developed for lung radiotherapy, suggest that the advanced reconstruction algorithm including the gold nanoparticle constraint will result in volume percentage differences (VPD) of 11.5% (± 9.4%) between lesions in the MM-FD reconstructed images and the "ground truth" on-board images, and a center-of-mass shift of 1.3 mm (± 1.3 mm) for liver radiotherapy. Conclusion: The advanced MM-FD technique, enforcing the additional constraints from gold nanoparticles, results in improved accuracy
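The two agreement metrics quoted above, volume percentage difference (VPD) and center-of-mass shift, can be computed directly from binary lesion masks. The VPD convention below (non-overlapping volume over the reference volume) is an assumption, since the abstract does not spell out the definition, and the masks are synthetic:

```python
import numpy as np

def vpd_and_com_shift(mask_a, mask_b, voxel_mm=(1.0, 1.0, 1.0)):
    """Volume percentage difference and center-of-mass (COM) shift between
    two binary lesion masks. VPD here = 100 * |A xor B| / |A|, one common
    convention (assumed, not taken from the paper)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    vpd = 100.0 * np.logical_xor(a, b).sum() / a.sum()
    com_a = np.array([c.mean() for c in np.nonzero(a)])
    com_b = np.array([c.mean() for c in np.nonzero(b)])
    shift_mm = np.linalg.norm((com_a - com_b) * np.asarray(voxel_mm))
    return vpd, shift_mm

# Hypothetical 3-D masks: a 4x4x4 lesion, and the same lesion shifted one voxel.
a = np.zeros((16, 16, 16), dtype=bool)
a[4:8, 4:8, 4:8] = True
b = np.roll(a, shift=1, axis=0)

vpd, shift = vpd_and_com_shift(a, b, voxel_mm=(2.0, 2.0, 2.0))
print(f"VPD = {vpd:.1f}%, COM shift = {shift:.1f} mm")  # VPD = 50.0%, COM shift = 2.0 mm
```

A one-voxel shift of a 4-voxel-thick lesion leaves half the reference volume non-overlapping, hence the 50% VPD, while the COM moves by exactly one 2 mm voxel.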
Stothard, J R; Pleasant, J; Oguttu, D; Adriko, M; Galimaka, R; Ruggiana, A; Kazibwe, F; Kabatereine, N B
2008-09-01
To ascertain the current status of strongyloidiasis in mothers and their preschool children, a field-based survey was conducted in western Uganda using a combination of diagnostic methods: ELISA, Baermann concentration and Koga agar plate. The prevalence of other soil-transmitted helminthiases and intestinal schistosomiasis were also determined. In total, 158 mothers and 143 children were examined from five villages within Kabale, Hoima and Masindi districts. In mothers and children, the general prevalence of strongyloidiasis inferred by ELISA was approximately 4% and approximately 2%, respectively. Using the Baermann concentration method, two parasitologically proven cases were encountered in an unrelated mother and child, both of whom were sero-negative for strongyloidiasis. No infections were detected by the Koga agar plate method. The general level of awareness of strongyloidiasis was very poor (<5%) in comparison to schistosomiasis (51%) and ascariasis (36%). Strongyloidiasis is presently at insufficient levels to justify inclusion within a community treatment programme targeting maternal and child health. Better epidemiological screening is needed, however, especially identifying infections in HIV-positive women of childbearing age. In the rural clinic setting, further use of the Baermann concentration method would appear to be the most immediate and pragmatic option for disease diagnosis.
Historic Methods for Capturing Magnetic Field Images
NASA Astrophysics Data System (ADS)
Kwan, Alistair
2016-03-01
I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection processes.
Historic Methods for Capturing Magnetic Field Images
ERIC Educational Resources Information Center
Kwan, Alistair
2016-01-01
I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…
NASA Astrophysics Data System (ADS)
Zaccheo, T. S.; Pernini, T.; Botos, C.; Dobler, J. T.; Blume, N.
2015-12-01
The Greenhouse gas Laser Imaging Tomography Experiment (GreenLITE) combines real-time differential Laser Absorption Spectroscopy (LAS) measurements with a lightweight web-based data acquisition and product generation system to provide autonomous 24/7 monitoring of CO2. The current GreenLITE system is comprised of two transceivers and a series of retro-reflectors that continuously measure the differential transmission over a user-defined set of intersecting line-of-sight paths or "chords" that form the plane of interest. These observations are first combined with in situ surface measurements of temperature (T), pressure (P) and relative humidity (RH) to compute the integrated CO2 mixing ratios based on an iterative radiative transfer modeling approach. The retrieved CO2 mixing ratios are then grouped based on observation time and employed in a sparse sample reconstruction method to provide a tomographic-like representation of the 2-D distribution of CO2 over the field of interest. This reconstruction technique defines the field of interest as a set of idealized plumes whose integrated values best match the observations. The GreenLITE system has been deployed at two primary locations: (1) the Zero Emissions Research and Technology (ZERT) center in Bozeman, Montana, in Aug-Sept 2014, where more than 200 hours of data were collected over a wide range of environmental conditions while utilizing a controlled release of CO2 into a segmented underground pipe, and (2) continuously at a carbon sequestration test facility in Feb-Aug 2015. The system demonstrated the ability to identify persistent CO2 sources at the ZERT test facility and showed strong correlation with an independent measurement using a LI-COR based system. Here we describe the measurement approach, algorithm design and extended study results.
Pla, Maria; La Paz, José-Luis; Peñas, Gisela; García, Nora; Palaudelmàs, Montserrat; Esteve, Teresa; Messeguer, Joaquima; Melé, Enric
2006-04-01
Maize is one of the main crops worldwide and an increasing number of genetically modified (GM) maize varieties are cultivated and commercialized in many countries in parallel to conventional crops. Given the labeling rules established e.g. in the European Union and the necessary coexistence between GM and non-GM crops, it is important to determine the extent of pollen dissemination from transgenic maize to other cultivars under field conditions. The most widely used methods for quantitative detection of GMO are based on real-time PCR, which implies that the results are expressed in genome percentages (in contrast to seed or grain percentages). Our objective was to assess the ability of real-time PCR based assays to accurately quantify the content of transgenic grains in non-GM fields in comparison with the real cross-fertilization rate as determined by phenotypical analysis. We performed this study in a region where both GM and conventional maize are normally cultivated, using the predominant transgenic maize Mon810 in combination with a conventional maize variety that displays the characteristic of white grains (therefore allowing cross-pollination to be quantified as the percentage of yellow grains). Our results indicated an excellent correlation between real-time PCR results and the number of cross-fertilized grains at Mon810 levels of 0.1-10%. In contrast, Mon810 percentage estimated by weight of grains produced less accurate results. Finally, we present and discuss the pattern of pollen-mediated gene flow from GM to conventional maize in an example case under field conditions.
A simple calculation method for determination of equivalent square field.
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-04-01
Determination of the equivalent square field for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software, and is usually accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on an analysis of scatter reduction via the inverse square law, for obtaining the equivalent field. Tables published by different agencies, such as the ICRU (International Commission on Radiation Units and Measurements), are based on experimental data; there also exist mathematical formulas that yield the equivalent square of an irregular rectangular field and are used extensively in computational techniques for dose determination. These approaches, however, lead to complicated and time-consuming formulas, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculating the equivalent field. The presented method is an analytical approach by which one can estimate the equivalent square of a rectangular field, and it may also be used for a shielded field or an off-axis point. Moreover, one can calculate the equivalent field of a rectangular field, with good approximation, from the concept of scatter reduced according to the inverse square law. This method may be useful in computing the percentage depth dose (PDD) and tissue-phantom ratio (TPR), which are extensively used in treatment planning.
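The standard rule of thumb behind such tables (not the scatter-integral formula derived in this paper) is the area-to-perimeter approximation, which matches an a × b rectangle to the square with the same area-to-perimeter ratio. A minimal sketch:

```python
def equivalent_square_side(a: float, b: float) -> float:
    """Area-to-perimeter ('4A/P') rule: side of the square field whose
    area-to-perimeter ratio equals that of an a x b rectangle."""
    return 2.0 * a * b / (a + b)

# Example: a 10 cm x 20 cm field maps to roughly a 13.3 cm square.
print(round(equivalent_square_side(10.0, 20.0), 2))  # 13.33
```

For moderate aspect ratios this rule typically agrees with published equivalent-square tables to within a few percent; the paper's inverse-square-law analysis targets cases (shielded fields, off-axis points) where such a simple rule is not sufficient.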
NASA Astrophysics Data System (ADS)
Tiwari, Shashi; Takashima, Wataru; Nagamatsu, S.; Balasubramanian, S. K.; Prakash, Rajiv
2014-09-01
A comparative study on the electrical performance, optical properties, and surface morphology of poly(3-hexylthiophene) (P3HT) and P3HT-nanofiber based "normally on" type p-channel field effect transistors (FETs), fabricated by two different coating techniques, is reported here. Nanofibers are prepared in the laboratory by self-assembly of P3HT molecules into nanofibers in an appropriate solvent. P3HT (0.3 wt. %) and P3HT-nanofibers (˜0.25 wt. %) are used as semiconductor transport materials deposited over the FET channel through spin coating as well as through our recently developed floating film transfer method (FTM). FETs fabricated using FTM show superior performance compared to spin coated devices; however, the mobility of FTM-film based FETs is comparable to that of the spin coated ones. The devices based on P3HT-nanofibers (using both techniques) show much better performance in comparison to P3HT FETs. The best performance among all the fabricated organic field effect transistors is observed for FTM coated P3HT-nanofiber FETs. This improved performance of nanofiber FETs is due to the ordering of the fibers and to the fact that fibers offer excellent charge transport because of point-to-point transmission. The optical properties and structural morphologies (P3HT and P3HT-nanofibers) are studied using UV-visible absorption spectrophotometry and atomic force microscopy, respectively. The coating techniques and the effect of fiber formation for organic conductors provide guidance for the fabrication of organic devices with improved performance.
Tiwari, Shashi; Balasubramanian, S. K.; Takashima, Wataru; Nagamatsu, S.; Prakash, Rajiv
2014-09-07
A comparative study on the electrical performance, optical properties, and surface morphology of poly(3-hexylthiophene) (P3HT) and P3HT-nanofiber based “normally on” type p-channel field effect transistors (FETs), fabricated by two different coating techniques, is reported here. Nanofibers are prepared in the laboratory by self-assembly of P3HT molecules into nanofibers in an appropriate solvent. P3HT (0.3 wt. %) and P3HT-nanofibers (∼0.25 wt. %) are used as semiconductor transport materials deposited over the FET channel through spin coating as well as through our recently developed floating film transfer method (FTM). FETs fabricated using FTM show superior performance compared to spin coated devices; however, the mobility of FTM-film based FETs is comparable to that of the spin coated ones. The devices based on P3HT-nanofibers (using both techniques) show much better performance in comparison to P3HT FETs. The best performance among all the fabricated organic field effect transistors is observed for FTM coated P3HT-nanofiber FETs. This improved performance of nanofiber FETs is due to the ordering of the fibers and to the fact that fibers offer excellent charge transport because of point-to-point transmission. The optical properties and structural morphologies (P3HT and P3HT-nanofibers) are studied using UV-visible absorption spectrophotometry and atomic force microscopy, respectively. The coating techniques and the effect of fiber formation for organic conductors provide guidance for the fabrication of organic devices with improved performance.
Bae, Il Kwon; Kim, Juwon; Sun, Je Young Hannah; Jeong, Seok Hoon; Kim, Yong-Rok; Wang, Kang-Kyun; Lee, Kyungwon
2014-01-01
Background & objectives: PFGE, rep-PCR, and MLST are widely used to identify related bacterial isolates and determine epidemiologic associations during outbreaks. This study was performed to compare the ability of repetitive sequence-based PCR (rep-PCR) and pulsed-field gel electrophoresis (PFGE) to determine the genetic relationships among Escherichia coli isolates assigned to various sequence types (STs) by two multilocus sequence typing (MLST) schemes. Methods: A total of 41 extended-spectrum β-lactamase- (ESBL-) and/or AmpC β-lactamase-producing E. coli clinical isolates were included in this study. MLST experiments were performed following the Achtman's MLST scheme and the Whittam's MLST scheme, respectively. Rep-PCR experiments were performed using the DiversiLab system. PFGE experiments were also performed. Results: A comparison of the two MLST methods demonstrated that these two schemes yielded compatible results. PFGE correctly segregated E. coli isolates belonging to different STs as different types, but did not group E. coli isolates belonging to the same ST in the same group. Rep-PCR accurately grouped E. coli isolates belonging to the same ST together, but this method demonstrated limited ability to discriminate between E. coli isolates belonging to different STs. Interpretation & conclusions: These results suggest that PFGE would be more effective when investigating outbreaks in a limited space, such as a specialty hospital or an intensive care unit, whereas rep-PCR should be used for nationwide or worldwide epidemiology studies. PMID:25579152
Human Biology, A Guide to Field Methods.
ERIC Educational Resources Information Center
Weiner, J. S.; Lourie, J. A.
The aim of this handbook is to provide, in a form suitable for use in the field, instructions on the whole range of methods required for the fulfillment of human biological studies on a comparative basis. Certain of these methods can be used to carry out the rapid surveys on growth, physique, and genetic constitution. They are also appropriate for…
NASA Astrophysics Data System (ADS)
Revel, G. M.; Martarelli, M.; Chiariotti, P.
2010-07-01
The selective intensity technique is a powerful tool for the localization of acoustic sources and for the identification of the structural contribution to the acoustic emission. In practice, the selective intensity method is based on simultaneous measurements of acoustic intensity, by means of a couple of matched microphones, and of the structural vibration of the emitting object. In this paper, high spatial density multi-point vibration data, acquired by using a scanning laser Doppler vibrometer, have been used for the first time. By applying the selective intensity algorithm, the contribution of a large number of structural sources to the acoustic field radiated by the vibrating object can therefore be estimated. The selective intensity represents the distribution of the acoustic monopole sources on the emitting surface, as if each monopole acted separately from the others. This innovative selective intensity approach can be very helpful when the measurement is performed on large panels in highly reverberating environments, such as aircraft cabins, where separating the direct acoustic field (radiated by the vibrating panels of the fuselage) from the reverberant one is difficult with traditional techniques. This work presents part of the results of the European project CREDO (Cabin Noise Reduction by Experimental and Numerical Design Optimization), carried out within the EU framework; the aim of this paper is thus to illustrate a real application of the method to the interior acoustic characterization of an Alenia Aeronautica ATR42 ground test facility, Alenia Aeronautica being a partner of the CREDO project.
Soil Identification using Field Electrical Resistivity Method
NASA Astrophysics Data System (ADS)
Hazreek, Z. A. M.; Rosli, S.; Chitral, W. D.; Fauziah, A.; Azhar, A. T. S.; Aziman, M.; Ismail, B.
2015-06-01
Geotechnical site investigation, with particular reference to soil identification, is important in civil engineering works since it reports the soil condition needed to relate the design and construction of the proposed works. In the past, the electrical resistivity method (ERM) has been widely used in soil characterization, but its results and interpretations have remained something of a black box. Hence, this study performed field electrical resistivity measurements using an ABEM SAS 4000 at two different types of soil (Gravelly SAND and Silty SAND) in order to examine the behavior of the electrical resistivity values (ERV) for the soil types studied. Basic soil physical properties were determined through density (ρ), moisture content (w) and particle size distribution (d) in order to verify the ERV obtained for each type of soil investigated. It was found that the ERV of Gravelly SAND (278 Ωm and 285 Ωm) was slightly higher than that of Silty SAND (223 Ωm and 199 Ωm), reflecting the naturally variable character of soils. This finding shows that results obtained from the ERM need to be interpreted against strongly supported findings, such as direct laboratory tests on the soil. Furthermore, this study demonstrates that the ERM can be established as an alternative tool in soil identification, provided it is verified through other relevant information such as geotechnical properties.
Method for making field-structured memory materials
Martin, James E.; Anderson, Robert A.; Tigges, Chris P.
2002-01-01
A method of forming a dual-level memory material using field-structured materials. The field-structured materials are formed from a dispersion of ferromagnetic particles in a polymerizable liquid medium, such as a urethane acrylate-based photopolymer, which are applied as a film to a support and then exposed in selected portions of the film to an applied magnetic or electric field. The field can be applied either uniaxially or biaxially at field strengths up to 150 G or higher to form the field-structured materials. After polymerizing the field-structured materials, a magnetic field can be applied to selected portions of the polymerized field-structured material to yield a dual-level memory material on the support, wherein the dual-level memory material supports read-and-write binary data memory and write once, read many memory.
Field-theory methods in coagulation theory
Lushnikov, A. A.
2011-08-15
Coagulating systems are systems of chaotically moving particles that collide and coalesce, producing daughter particles of mass equal to the sum of the masses involved in the respective collision event. The present article puts forth basic ideas underlying the application of methods of quantum-field theory to the theory of coagulating systems. Instead of the generally accepted treatment based on the use of a standard kinetic equation that describes the time evolution of concentrations of particles consisting of a preset number of identical objects (monomers in the following), one introduces the probability W(Q, t) to find the system in some state Q at an instant t for a specific rate of transitions between various states. Each state Q is characterized by a set of occupation numbers Q = (n_1, n_2, ..., n_g, ...), where n_g is the total number of particles containing precisely g monomers. Thereupon, one introduces the generating functional Ψ for the probability W(Q, t). The time evolution of Ψ is described by an equation that is similar to the Schrödinger equation for a one-dimensional Bose field. This equation is solved exactly for transition rates proportional to the product of the masses of colliding particles. It is shown that, within a finite time interval, which is independent of the total mass of the entire system, a giant particle of mass about the mass of the entire system may appear in this system. The particle in question is unobservable in the thermodynamic limit, and this explains the well-known paradox of mass-concentration nonconservation in classical kinetic theory. The theory described in the present article is successfully applied in studying the time evolution of random graphs.
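The giant-particle (gelation) behavior described above can be observed directly in a stochastic simulation of the same multiplicative kernel, K(g, g') ∝ g·g' (the Marcus-Lushnikov process). The following is an illustrative Monte Carlo sketch, not the field-theoretic calculation of the article:

```python
import random

def coagulate(n_monomers, rng):
    """Stochastic coagulation with multiplicative kernel K(i, j) ~ i*j:
    merge pairs chosen with probability proportional to the product of
    their masses, until a single particle remains. Also records, after
    each merge, the largest mass fraction."""
    masses = [1] * n_monomers
    total = n_monomers          # total mass is conserved by every merge
    largest_fraction = []
    while len(masses) > 1:
        # sample an unordered pair (i, j), i != j, with P ~ m_i * m_j
        while True:
            i = rng.choices(range(len(masses)), weights=masses)[0]
            j = rng.choices(range(len(masses)), weights=masses)[0]
            if i != j:
                break
        i, j = min(i, j), max(i, j)
        masses[i] += masses.pop(j)
        largest_fraction.append(max(masses) / total)
    return masses, largest_fraction

rng = random.Random(1)
masses, history = coagulate(200, rng)
print(masses)  # [200]
```

With the multiplicative kernel the largest mass fraction climbs sharply after roughly n/2 merge events, the discrete analogue of the gelation transition; this process is equivalent to edge addition in a random graph, which is why, as the abstract notes, the same machinery applies to the time evolution of random graphs.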
A method for longitudinal relaxation time measurement in inhomogeneous fields
NASA Astrophysics Data System (ADS)
Chen, Hao; Cai, Shuhui; Chen, Zhong
2017-08-01
The spin-lattice relaxation time (T1) plays a crucial role in the study of spin dynamics, signal optimization and data quantification. However, the measurement of chemical shift-specific T1 constants is hampered by magnetic field inhomogeneity due to poorly shimmed external magnetic fields or intrinsic magnetic susceptibility heterogeneity in samples. In this study, we present a new protocol to determine chemical shift-specific T1 constants in inhomogeneous fields. Based on intermolecular double-quantum coherences, the new method can resolve overlapped peaks in inhomogeneous fields. The measurement results are consistent with those obtained in homogeneous fields using the conventional method. Since a spatial encoding technique is involved, the experimental time for the new method is very close to that of the conventional method. With the aid of T1 knowledge, concealed information can be exploited in T1-weighted experiments.
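For context, the conventional homogeneous-field measurement referred to above is typically an inversion-recovery experiment, in which the longitudinal signal recovers as S(t) = S0(1 − 2e^(−t/T1)) and T1 can be read off the null time t_null = T1 ln 2. A generic sketch of that relation (not the authors' intermolecular double-quantum coherence sequence):

```python
import math

def ir_signal(t, s0, t1):
    """Inversion-recovery signal after an ideal 180-degree pulse."""
    return s0 * (1.0 - 2.0 * math.exp(-t / t1))

def t1_from_null(t_null):
    """The signal crosses zero at t = T1 * ln(2)."""
    return t_null / math.log(2.0)

# Example: a spin with T1 = 1.2 s nulls at about 0.832 s.
t_null = 1.2 * math.log(2.0)
print(round(t1_from_null(t_null), 3))  # 1.2
```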
Rapid field-screening method for PCBs
NASA Astrophysics Data System (ADS)
Vo-Dinh, Tuan; Watts, Wendi; Miller, Gordon H.; Pal, A.; Eastwood, DeLyle; Lidberg, Russell L.
1993-03-01
The analysis of polychlorinated biphenyls (PCBs) generally requires selectivity and sensitivity. Even after cleanup, PCBs are usually at ultratrace levels in field samples, mixed in with other halocarbons, hydrocarbons, lipids, etc. The levels of PCBs typically found in water, soil, tissue, food, biota, and other matrices of interest are in the parts per billion (ppb) range. Most current measurement techniques for PCBs require chromatographic separations and are not practical for routine analysis. There is a strong need to have rapid and simple techniques to screen for PCBs under field conditions. The use of field screening analysis allows rapid decisions in remedial actions and reduces the need for sample preparations and time-consuming laboratory analyses. Field screening techniques also reduce the cost of clean-up operations. This paper describes a simple screening technique based on room temperature phosphorescence (RTP) and provides an overview of this analytical procedure to detect trace levels of PCBs in environmental samples.
An Experimental Method for Semantic Field Study.
ERIC Educational Resources Information Center
Cutler, Anne
This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained. The aim of the methodology described is to provide a description of the internal structure of a semantic field by eliciting the description--in an objective, standardized manner--from…
NASA Astrophysics Data System (ADS)
Ferreira, Vagner G.; Montecino, Henry D. C.; Yakubu, Caleb I.; Heck, Bernhard
2016-01-01
Currently, various satellite processing centers produce extensive data, with different solutions of the same field being available. For instance, the Gravity Recovery and Climate Experiment (GRACE) has been monitoring terrestrial water storage (TWS) since April 2002, while the Center for Space Research (CSR), the Jet Propulsion Laboratory (JPL), the GeoForschungsZentrum (GFZ), and the Groupe de Recherche de Géodésie Spatiale (GRGS) provide individual monthly solutions in the form of Stokes coefficients. The inverted TWS maps (or the regionally averaged values) from these coefficients are being used in many applications; however, as no ground truth data exist, the uncertainties are unknown. Consequently, the purpose of this work is to assess the quality of each processing center by estimating their uncertainties using a generalized formulation of the three-cornered hat (TCH) method. Overall, the TCH results for the study period of August 2002 to June 2014 indicate that at a global scale, the CSR, GFZ, GRGS, and JPL presented uncertainties of 9.4, 13.7, 14.8, and 13.2 mm, respectively. At a basin scale, the overall good performance of the CSR was observed at 91 river basins. The TCH-based results were confirmed by a comparison with an ensemble solution from the four GRACE processing centers.
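In its classic three-series form (the paper uses a generalized formulation covering all four centers), the TCH method recovers each solution's error variance from the variances of the pairwise differences, under the assumption that the errors are mutually uncorrelated. A minimal sketch with synthetic data and illustrative noise levels:

```python
import math
import random
import statistics

def three_cornered_hat(x1, x2, x3):
    """Classic TCH: with uncorrelated errors, var(x_i - x_j) is the sum
    of the two error variances, so the three pairwise difference
    variances determine the three individual ones."""
    v12 = statistics.variance(a - b for a, b in zip(x1, x2))
    v13 = statistics.variance(a - b for a, b in zip(x1, x3))
    v23 = statistics.variance(a - b for a, b in zip(x2, x3))
    return ((v12 + v13 - v23) / 2.0,
            (v12 + v23 - v13) / 2.0,
            (v13 + v23 - v12) / 2.0)

# Synthetic check: one 'true' TWS signal observed by three centers with
# independent noise of standard deviation 1, 2 and 3 (variances 1, 4, 9).
rng = random.Random(7)
truth = [10.0 * math.sin(0.01 * k) for k in range(5000)]
series = [[t + rng.gauss(0.0, s) for t in truth] for s in (1.0, 2.0, 3.0)]
print([round(v, 1) for v in three_cornered_hat(*series)])
```

Because only differences enter the calculation, the common true signal cancels, which is exactly what makes the method usable when no ground truth exists.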
Constantinou, Marios; Stolojan, Vlad; Rajeev, Kiron Prabha; Hinder, Steven; Fisher, Brett; Bogart, Timothy D; Korgel, Brian A; Shkunov, Maxim
2015-10-14
In this letter, we demonstrate a solution-based method for one-step deposition and surface passivation of as-grown silicon nanowires (Si NWs). Using N,N-dimethylformamide (DMF) as a mild oxidizing agent, the NWs' surface trap density was reduced by over 2 orders of magnitude, from 1×10^13 cm^-2 in pristine NWs to 3.7×10^10 cm^-2 in DMF-treated NWs, leading to a dramatic hysteresis reduction in NW field-effect transistors (FETs) from up to 32 V to near-zero hysteresis. The change in the polyphenylsilane NW shell stoichiometric composition was confirmed by X-ray photoelectron spectroscopy analysis, showing a 35% increase in fully oxidized Si4+ species for DMF-treated NWs compared to dry NW powder. Additionally, the shell oxidation effect induced by DMF resulted in more stable NW FET performance, with steady transistor currents and only 1.5 V hysteresis after 1000 h of air exposure.
ERIC Educational Resources Information Center
Napier, John D.; Vansickle, Ronald L.
1978-01-01
Comparison of pre-service social studies teachers in field and non-field based methods courses indicated no significant differences with regard to teaching skills, attitudes, or behaviors teachers should exhibit in the classroom. (Author/DB)
Electric Field Quantitative Measurement System and Method
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2016-01-01
A method and system are provided for making a quantitative measurement of an electric field. A plurality of antennas separated from one another by known distances are arrayed in a region that extends in at least one dimension. A voltage difference between at least one selected pair of antennas is measured. Each voltage difference is divided by the known distance associated with the selected pair of antennas corresponding thereto to generate a resulting quantity. The plurality of resulting quantities defined over the region quantitatively describe an electric field therein.
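The core arithmetic is a finite-difference estimate: each voltage difference divided by the known antenna separation approximates the field component along the array. A one-dimensional sketch (the sign convention E = −dV/dx and all names here are illustrative, not taken from the patent text):

```python
def field_estimates(positions_m, voltages_v):
    """E ≈ -ΔV/Δx for each adjacent antenna pair, in V/m."""
    return [-(voltages_v[i + 1] - voltages_v[i])
            / (positions_m[i + 1] - positions_m[i])
            for i in range(len(positions_m) - 1)]

# A uniform 50 V/m field along x gives potential V(x) = -50*x volts:
print(field_estimates([0.0, 0.5, 1.0], [0.0, -25.0, -50.0]))  # [50.0, 50.0]
```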
Meng, Yuguang; Lei, Hao
2008-12-01
T2*-weighted imaging (T2*WI) and quantitative T2* mapping with conventional gradient-echo acquisition are often hindered by severe signal loss induced by macroscopic field inhomogeneity. Various z-shimming approaches have been developed for T2*WI/T2* mapping in which the effects of macroscopic field inhomogeneity are suppressed while the sensitivity of T2*-related signal intensity to alterations in the microscopic susceptibility is maintained. However, this is often done at the cost of significantly increased imaging time. In this work, a fast T2* mapping method with compensation for macroscopic field inhomogeneity was developed. A proton density-weighted image and a composite T2*-weighted image, both of which were essentially free from macroscopic field inhomogeneity-induced signal loss, were used for the T2* calculation. The composite T2*-weighted image was reconstructed from a number of gradient-echo images acquired with successively incremented z-shimming compensation. Because acquisition of the two images and z-shimming compensation were realized in a single scan, the total acquisition time for obtaining a T2* map with the proposed method is the same as the time taken for a conventional multiecho gradient-echo imaging sequence without compensation. The performance and efficiency of the proposed method were demonstrated and evaluated at 4.7 T. (c) 2008 Wiley-Liss, Inc.
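Whatever the acquisition scheme, the underlying calculation is the monoexponential model S(TE) = S0·exp(−TE/T2*); in the proposed method the proton-density image plays the role of the short-TE point and the composite image that of the later echo. A generic two-point sketch (echo times and names are illustrative, not the z-shimmed sequence itself):

```python
import math

def t2_star_two_point(s_early, s_late, te_early, te_late):
    """Monoexponential two-point estimate:
    T2* = (TE2 - TE1) / ln(S1 / S2)."""
    return (te_late - te_early) / math.log(s_early / s_late)

# Simulated voxel: S0 = 100, T2* = 30 ms, echoes at 10 ms and 40 ms.
s1 = 100.0 * math.exp(-10.0 / 30.0)
s2 = 100.0 * math.exp(-40.0 / 30.0)
print(round(t2_star_two_point(s1, s2, 10.0, 40.0), 6))  # 30.0
```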
A New Method for Coronal Magnetic Field Reconstruction
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwangson; Lim, Daye
2015-08-01
We present a new, simple, variational method for the reconstruction of coronal force-free magnetic fields based on vector magnetogram data. Our method employs vector potentials for the magnetic field description in order to ensure the divergence-free condition. As boundary conditions, it only requires the normal components of the magnetic field and current density, so that the boundary conditions are not over-specified as in many other methods. The boundary normal current distribution is fixed once and for all at initialization and does not need continual adjustment as in stress-and-relax type methods. We have tested the computational code based on our new method in problems with known solutions and in those with actual photospheric data. When solutions are fully given at all boundaries, the accuracy of our method is comparable to that of the best-performing methods available. When magnetic field data are given only at the photospheric boundary, our method outperforms other methods in most “figures of merit” devised by Schrijver et al. (2006). Furthermore, the residual force in the solution is at least an order of magnitude smaller than that of any other method. The method can also accommodate the source-surface boundary condition at the top boundary, and is expected to contribute to the real-time monitoring of the sun required for future space weather forecasts.
Got Mud? Field-based Learning in Wetland Ecology.
ERIC Educational Resources Information Center
Baldwin, Andrew H.
2001-01-01
Describes methods for teaching wetland ecology classes based mainly on direct, hands-on field experiences for students. Makes the case that classroom lectures are necessary but there is no substitute for field and laboratory experiences. (Author/MM)
Field emission from graphene based composite thin films
NASA Astrophysics Data System (ADS)
Eda, Goki; Emrah Unalan, H.; Rupesinghe, Nalin; Amaratunga, Gehan A. J.; Chhowalla, Manish
2008-12-01
Field emission from graphene is challenging because the existing deposition methods lead to sheets that lie flat on the substrate surface, which limits the field enhancement. Here we describe a simple and general solution-based method for the deposition of field-emitting graphene/polymer composite thin films. The graphene sheets are oriented at various angles with respect to the substrate surface, leading to field emission at low threshold fields (∼4 V μm⁻¹). Our method provides a route for the deposition of graphene-based thin-film field emitters on different substrates, opening up avenues for a variety of applications.
Lattice Methods and Effective Field Theory
NASA Astrophysics Data System (ADS)
Nicholson, Amy
Lattice field theory is a non-perturbative tool for studying properties of strongly interacting field theories, which is particularly amenable to numerical calculations and has quantifiable systematic errors. In these lectures we apply these techniques to nuclear Effective Field Theory (EFT), a non-relativistic theory for nuclei involving the nucleons as the basic degrees of freedom. The lattice formulation of Endres et al. (Phys Rev A 84:043644, 2011; Phys Rev A 87:023615, 2013) for so-called pionless EFT is discussed in detail, with portions of code included to aid the reader in code development. Systematic and statistical uncertainties of these methods are discussed at length, and extensions beyond pionless EFT are introduced in the final section.
Overlay control methodology comparison: field-by-field and high-order methods
NASA Astrophysics Data System (ADS)
Huang, Chun-Yen; Chiu, Chui-Fu; Wu, Wen-Bin; Shih, Chiang-Lin; Huang, Chin-Chou Kevin; Huang, Healthy; Choi, DongSub; Pierson, Bill; Robinson, John C.
2012-03-01
Overlay control in advanced integrated circuit (IC) manufacturing is becoming one of the leading lithographic challenges of the 3x and 2x nm process nodes. Production overlay control can no longer meet the stringent emerging requirements using linear composite wafer and field models with sampling of 10 to 20 fields and 4 to 5 sites per field, which was the industry standard for many years. Methods that have emerged include overlay metrology in many or all fields, the high-order field model method called high-order control (HOC), and field-by-field control (FxFc), also called correction per exposure. The HOC and FxFc methods were initially introduced as relatively infrequent scanner qualification activities meant to supplement linear production schemes. More recently, however, it is clear that production control also requires intense sampling and similar high-order and FxFc methods. The added control benefits of high-order and FxFc overlay methods need to be balanced against the increased metrology requirements, however, without putting material at risk. Of critical importance is the proper control of edge fields, which requires intensive sampling in order to minimize signatures. In this study we compare various methods of overlay control, including the performance levels that can be achieved.
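For a concrete reference point, the linear composite models mentioned above fit a few terms (translation, magnification, rotation) to sampled overlay errors by least squares. A self-contained sketch for the x-component of a linear wafer model; the term names are the conventional ones and the numbers are synthetic, not tied to any scanner vendor:

```python
def fit_linear_overlay(points, dx):
    """Least-squares fit of dx = Tx + Mx*x - R*y (translation,
    magnification, rotation) to measured x-overlay errors, via the
    normal equations and Gaussian elimination."""
    rows = [[1.0, x, -y] for x, y in points]
    n = 3
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    atb = [sum(r[i] * d for r, d in zip(rows, dx)) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    p = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        p[r] = (atb[r] - sum(ata[r][c] * p[c] for c in range(r + 1, n))) / ata[r][r]
    return p  # [Tx, Mx, R]

# Synthetic 5 x 5 site grid (mm): Tx = 3e-6 mm (3 nm), Mx = 2 ppm, R = 1 urad.
grid = [(x, y) for x in (-100, -50, 0, 50, 100) for y in (-100, -50, 0, 50, 100)]
dx = [3e-6 + 2e-6 * x - 1e-6 * y for x, y in grid]
tx, mx, r = fit_linear_overlay(grid, dx)
print(round(tx, 9), round(mx, 9), round(r, 9))  # 3e-06 2e-06 1e-06
```

HOC extends the same fit with higher-order polynomial terms per wafer or per field, and FxFc replaces the single composite fit with one correction set per exposure field.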
Ding, Cheng; Chen, Tianming; Li, Zhaoxia; Yan, Jinlong
2015-05-01
Using the standardized polyurethane foam unit (PFU) method, a preliminary investigation was carried out on the bioaccumulation and the ecotoxic effects of pulp and paper wastewater used for irrigating reed fields. Static ecotoxicity tests showed that protozoal communities were very sensitive to variations in exposure time and effective concentration (EC) of the pulp and paper wastewater. The Shannon-Wiener diversity index (H) was a more suitable indicator of the extent of water pollution than the Gleason and Margalef diversity index (d), Simpson's diversity index (D), and Pielou's index (J). The regression equation between S_eq and EC was S_eq = -0.118 EC + 18.554. The relatively safe concentration and the maximum acceptable toxicant concentration (MATC) of the wastewater for the protozoal communities were about 20% and 42%, respectively. To safely use this wastewater for irrigation, more than 58% of the toxins must be removed or diluted by further processing. Monitoring of the wastewater in representative irrigated reed fields showed that the pattern of the protozoal colonization process was similar to that in the static ecotoxicity tests, indicating that the toxicity of the irrigating pulp and paper wastewater was not lethal to protozoal communities in the reed fields. This study demonstrated the applicability of the PFU method in monitoring the ecotoxic effects of pulp and paper wastewater at the level of microbial communities and may guide the supervision and control of pulp and paper wastewater irrigation within the reed-field ecological system (RFES).
Domain decomposition methods in FVM approach to gravity field modelling.
NASA Astrophysics Data System (ADS)
Macák, Marek
2017-04-01
The finite volume method (FVM) can be straightforwardly implemented for global or local gravity field modelling. This discretization method solves the geodetic boundary value problem in the space domain. In order to obtain precise numerical solutions, it usually requires a very refined discretization, leading to large-scale parallel computations. To optimize such computations, we present a special class of numerical techniques that are based on a physical decomposition of the global solution domain. Domain decomposition (DD) methods such as the multiplicative Schwarz method and the additive Schwarz method are very efficient for solving partial differential equations. We briefly present their mathematical formulations and test their efficiency. The numerical experiments presented deal with gravity field modelling. Since there is no need to solve special interface problems between neighbouring subdomains, we use the overlapping DD methods in our applications.
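For illustration only (not the authors' geodetic solver), the overlapping additive Schwarz iteration referred to above can be sketched on a 1-D Poisson model problem, where each subdomain correction is an independent local solve:

```python
# Damped additive Schwarz sketch for -u'' = 1 on [0,1], u(0) = u(1) = 0,
# with two overlapping subdomains (illustrative, not the paper's FVM code).
import numpy as np

n = 101                            # grid points including the two boundaries
h = 1.0 / (n - 1)
f = np.ones(n - 2)                 # RHS at interior nodes
u = np.zeros(n - 2)                # global iterate on the interior nodes

def apply_A(v):
    """Global FD operator -d2/dx2 with homogeneous Dirichlet BCs."""
    w = 2.0 * v
    w[:-1] -= v[1:]
    w[1:] -= v[:-1]
    return w / h**2

# two overlapping subdomains (index sets into the interior unknowns)
subdomains = [np.arange(0, 60), np.arange(40, n - 2)]

def local_solve(idx, r):
    """Direct solve of the subdomain problem with zero Dirichlet data."""
    m = len(idx)
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    return np.linalg.solve(A, r[idx])

for _ in range(1000):              # damped additive Schwarz iteration
    r = f - apply_A(u)             # global residual
    corr = np.zeros_like(u)
    for idx in subdomains:         # local solves are independent -> parallel
        corr[idx] += local_solve(idx, r)
    u += 0.5 * corr                # damping keeps the additive update convergent
```

The local solves within each sweep are mutually independent, which is what makes the additive variant attractive for the large-scale parallel computations mentioned above; the multiplicative variant instead uses each fresh correction immediately and is inherently more sequential.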
Tahmasebi Birgani, Mohamad J.; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima
2014-04-01
Equivalent fields are frequently used for central axis depth-dose calculations of rectangular and irregularly shaped photon beams. As most of the proposed models for calculating the equivalent square field are dosimetry based, a simple physics-based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the equivalent squares for rectangular fields was constructed and then compared with the well-known tables of BJR and of Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, percentage depth doses (PDDs) were measured for several irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator at both energies, 6 and 18 MV. The mean relative difference between the PDDs measured for these fields and for their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field.
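The study's own physical method is not detailed in the abstract; for orientation, a widely used textbook rule derives the equivalent square side of a rectangular field from its area-to-perimeter ratio (the common Sterling approximation, not necessarily the method evaluated here):

```python
def equivalent_square_side(a, b):
    """Side of the equivalent square for an a x b rectangular field via the
    common area-to-perimeter (Sterling) rule s = 4A/P = 2ab/(a + b).
    A textbook approximation, not necessarily the method of this study."""
    return 2.0 * a * b / (a + b)

side = equivalent_square_side(10.0, 20.0)   # a 10 x 20 cm field ~ 13.3 cm square
```

Tables such as those of BJR and Venselaar et al. refine this kind of rule with measured depth-dose data, which is why the paper benchmarks against them.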
A field day of soil regulation methods
NASA Astrophysics Data System (ADS)
Kempter, Axel; Kempter, Carmen
2015-04-01
Soil plays an important role in school geography. In the upper classes in particular, students are expected to apply knowledge about soil in other subjects as well. For example, assessing economic and agricultural development potential requires interweaving physical-geographic and human-geographic factors. Treating the topic of soil also requires integrating results from different fields such as physics, chemistry, and biology. The subject therefore lends itself to cross-disciplinary lessons and offers opportunities for practical work as well as excursions. Besides conveying specialist knowledge and supporting methodological and action competences, the field excursion placed special emphasis on independent learning and practical work, using stimulating, problem-oriented exercises. This aim was pursued through the interdisciplinary, task-oriented treatment of soil during the field day. The methods and experiments were selected to fit both the time available and the material constraints. During the field day the pupils characterized soil texture, soil colour, soil profile, soil skeleton, lime content, ion exchange capacity (soils as filter materials), pH value, water retention capacity, and the presence of different ions such as Fe3+, Mg2+, Cl- and NO3-. The pupils worked at stations and evaluated the data to obtain an overall picture of the soil at the end. Depending on the number of locations, the time available, and the group size, different procedures can be used: groups of experts can carry out the same experiment at all locations and then split into different groups for the evaluation, or each group can pass through all stations. The results were compared and discussed at the end.
Improved methods for fan sound field determination
NASA Technical Reports Server (NTRS)
Cicon, D. E.; Sofrin, T. G.; Mathews, D. C.
1981-01-01
Several methods for determining acoustic mode structure in aircraft turbofan engines using wall microphone data were studied. A method for reducing data was devised and implemented which makes the definition of discrete coherent sound fields measured in the presence of engine speed fluctuation more accurate. For the analytical methods, algorithms were developed to define the dominant circumferential modes from full and partial circumferential arrays of microphones. Axial arrays were explored to define mode structure as a function of cutoff ratio, and the use of data taken at several constant speeds was also evaluated in an attempt to reduce instrumentation requirements. Sensitivities of the various methods to microphone density, array size and measurement error were evaluated and results of these studies showed these new methods to be impractical. The data reduction method used to reduce the effects of engine speed variation consisted of an electronic circuit which windowed the data so that signal enhancement could occur only when the speed was within a narrow range.
Tegze, György; Bansel, Gurvinder; Tóth, Gyula I.; Pusztai, Tamás; Fan, Zhongyun; Gránásy, László
2009-03-20
We present an efficient method to solve numerically the equations of dissipative dynamics of the binary phase-field crystal model proposed by Elder et al. [K.R. Elder, M. Katakowski, M. Haataja, M. Grant, Phys. Rev. B 75 (2007) 064107] characterized by variable coefficients. Using the operator splitting method, the problem has been decomposed into sub-problems that can be solved more efficiently. A combination of non-trivial splitting with a spectral semi-implicit solution leads to sets of algebraic equations of diagonal matrix form. Extensive testing of the method has been carried out to find the optimum balance among errors associated with time integration, spatial discretization, and splitting. We show that our method speeds up the computations by orders of magnitude relative to the conventional explicit finite difference scheme, while the cost of the pointwise implicit solution per timestep remains low. We also show that, due to its numerical dissipation, finite differencing cannot compete with spectral differencing in terms of accuracy. In addition, we demonstrate that our method can efficiently be parallelized for distributed memory systems, where excellent scalability with the number of CPUs is observed.
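The diagonal semi-implicit spectral update described above can be illustrated on a generic 1-D stiff PDE (a sketch with an assumed cubic nonlinearity, not the binary phase-field crystal equations themselves): the stiff linear term is inverted trivially in Fourier space, where it is diagonal, while the nonlinearity is advanced explicitly.

```python
# Semi-implicit spectral step for u_t = D*u_xx + N(u): the linear term is
# treated implicitly (diagonal in k-space), the nonlinearity explicitly.
# Equation, nonlinearity and parameters are illustrative, not the paper's.
import numpy as np

n, L, D, dt = 256, 2.0 * np.pi, 1.0, 1.0e-3
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # spectral wavenumbers
u = np.sin(x)                                  # initial condition

def nonlinear(u):
    return -u**3                               # assumed example nonlinearity

for _ in range(100):
    u_hat = np.fft.fft(u)
    n_hat = np.fft.fft(nonlinear(u))
    # implicit treatment of the stiff linear term is diagonal in k-space:
    # (1 + dt*D*k^2) * u_hat_new = u_hat + dt * n_hat
    u_hat = (u_hat + dt * n_hat) / (1.0 + dt * D * k**2)
    u = np.real(np.fft.ifft(u_hat))
```

Because the implicit solve is a pointwise division in Fourier space, its cost per timestep stays low, which is the property the abstract highlights.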
Inverse field-based approach for simultaneous B₁ mapping at high fields - a phantom based study.
Jin, Jin; Liu, Feng; Zuo, Zhentao; Xue, Rong; Li, Mingyan; Li, Yu; Weber, Ewald; Crozier, Stuart
2012-04-01
Based on computational electromagnetics and multi-level optimization, an inverse approach for attaining accurate mapping of both the transmit and receive sensitivity of radiofrequency coils is presented. This paper extends our previous study of inverse methods for receptivity mapping at low fields to allow accurate mapping of RF magnetic fields (B(1)) for high-field applications. Accurate receive sensitivity mapping is essential for image domain parallel imaging methods, such as sensitivity encoding (SENSE), to reconstruct high quality images. Accurate transmit sensitivity mapping will facilitate RF shimming and parallel transmission techniques that directly address the RF inhomogeneity issue, arguably the most challenging issue of high-field magnetic resonance imaging (MRI). The inverse field-based approach proposed herein is based on computational electromagnetics and iterative optimization. It fits an experimental image to the numerically calculated signal intensity by iteratively optimizing the coil-subject geometry to better resemble the experiments. Accurate transmit and receive sensitivities are derived as intermediate results of the optimization process. The method is validated by imaging studies using a homogeneous saline phantom at 7 T. A simulation study at 300 MHz demonstrates that the proposed method is able to obtain receptivity mapping with errors an order of magnitude less than those of the conventional method. The more accurate receptivity mapping and simultaneously obtained transmit sensitivity mapping could enable artefact-reduced and intensity-corrected image reconstructions. It is hoped that, by providing an approach to the accurate mapping of both transmit and receive sensitivity, the proposed method will facilitate a range of applications in high-field MRI and parallel imaging.
Laboratory and field based evaluation of chromatography ...
The Monitor for AeRosols and GAses in ambient air (MARGA) is an on-line ion-chromatography-based instrument designed for speciation of the inorganic gas and aerosol ammonium-nitrate-sulfate system. Previous work to characterize the performance of the MARGA has been primarily based on field comparison to other measurement methods to evaluate accuracy. While such studies are useful, the underlying reasons for disagreement among methods are not always clear. This study examines aspects of MARGA accuracy and precision specifically related to automated chromatography analysis. Using laboratory standards, analytical accuracy, precision, and method detection limits derived from the MARGA chromatography software are compared to an alternative software package (Chromeleon, Thermo Scientific Dionex). Field measurements are used to further evaluate instrument performance, including the MARGA’s use of an internal LiBr standard to control accuracy. Using gas/aerosol ratios and aerosol neutralization state as a case study, the impact of chromatography on measurement error is assessed. The new generation of on-line chromatography-based gas and particle measurement systems have many advantages, including simultaneous analysis of multiple pollutants. The Monitor for Aerosols and Gases in Ambient Air (MARGA) is such an instrument that is used in North America, Europe, and Asia for atmospheric process studies as well as routine monitoring. While the instrument has been evaluat
Ogura, Toshihiko
2014-08-08
Highlights: • We developed a high-sensitivity frequency transmission electric-field (FTE) system. • The output signal was highly enhanced by applying voltage to a metal layer on SiN. • The spatial resolution of the new FTE method is 41 nm. • The new FTE system enables observation of intact bacteria and viruses in water. - Abstract: The high-resolution structural analysis of biological specimens by scanning electron microscopy (SEM) presents several advantages. Until now, wet bacterial specimens have been examined using atmospheric sample holders. However, images of unstained specimens in water using these holders exhibit very poor contrast and heavy radiation damage. Recently, we developed the frequency transmission electric-field (FTE) method, which facilitates the SEM observation of biological specimens in water without radiation damage. However, its signal detection system has low sensitivity; a high electron-beam (EB) current is therefore required to generate clear images, which reduces spatial resolution and induces thermal damage to the samples. Here, a high-sensitivity detection system is developed for the FTE method, which enhances the output signal amplitude a hundredfold. The detection signal was highly enhanced when voltage was applied to the metal layer on the silicon nitride thin film. This enhancement reduced the EB current and improved the spatial resolution as well as the signal-to-noise ratio. The spatial resolution of the high-sensitivity FTE system is 41 nm, considerably better than that of the previous FTE system. The new FTE system can easily be utilised to examine various unstained biological specimens in water, such as living bacteria and viruses.
The virtual fields method applied to spalling tests on concrete
NASA Astrophysics Data System (ADS)
Pierron, F.; Forquin, P.
2012-08-01
For a decade, spalling techniques based on the use of a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain rates ranging from a few tens to two hundred s-1. However, the processing method, mainly based on the velocity profile measured on the rear free surface of the sample (Novikov formula), remains quite basic, and identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the Virtual Fields Method (VFM). First, a digital high speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field is defined in the VFM equation to use the acceleration map as an alternative `load cell'. This method, applied to three spalling tests, allowed Young's modulus to be identified during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that in such a simple inertial test it is possible to reconstruct average axial stress profiles using only the acceleration data. It was then possible to construct local stress-strain curves and derive a tensile strength value.
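The "acceleration as a load cell" step follows from 1-D momentum balance: with a free surface at x = 0, the mean axial stress at a section x0 is sigma(x0, t) = rho times the integral from 0 to x0 of the section-averaged acceleration. A sketch with synthetic data follows; the density value and acceleration profile are illustrative, not taken from the tests.

```python
# Reconstructing the mean axial stress at a section from a (synthetic)
# section-averaged acceleration field, via 1-D momentum balance.
import numpy as np

rho = 2400.0                               # assumed concrete density, kg/m^3
x = np.linspace(0.0, 0.12, 121)            # axial coordinate, free surface at x = 0, m
a_bar = 1.0e5 * np.sin(np.pi * x / 0.12)   # synthetic section-averaged acceleration, m/s^2

def mean_axial_stress(x, a_bar, rho, i0):
    """Mean axial stress at section x[i0]: rho times the integral of the
    measured acceleration from the free surface to x[i0] (trapezoidal rule)."""
    dx = np.diff(x[: i0 + 1])
    return rho * float(np.sum(0.5 * (a_bar[1 : i0 + 1] + a_bar[:i0]) * dx))

sigma_mid = mean_axial_stress(x, a_bar, rho, i0=60)   # stress at mid-sample, Pa
```

In the actual tests, a_bar comes from twice-differentiated full-field displacement measurements, and combining the stress profile with the measured strain yields the local stress-strain curves mentioned above.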
A new method of field MRTD test
NASA Astrophysics Data System (ADS)
Chen, Zhibin; Song, Yan; Liu, Xianhong; Xiao, Wenjian
2014-09-01
MRTD is an important indicator of the imaging performance of an infrared camera. In the traditional laboratory test, a blackbody is used as the simulated heat source; it is not only expensive and bulky but also ill-suited to field testing requirements for online, automatic measurement of infrared camera MRTD. To solve this problem, this paper introduces a new MRTD detection device, which uses an LED as the simulated heat source and a four-bar target carved in plated zinc sulfide glass as the simulated target. Using a Cassegrain collimation system with good high-temperature adaptability, the target is projected to infinity so that it can be observed by the human eye to complete the subjective test, or captured and processed by image processing to complete an objective measurement. This method replaces the blackbody with an LED. The colour temperature of the LED is calibrated with a thermal imager, and the relation curve between the LED temperature-controlling current and the simulated blackbody temperature difference is thereby established, accurately achieving temperature control of the infrared target. Experimental results show that the accuracy of the device in field testing of thermal imager MRTD is within 0.1 K, which greatly reduces cost and meets the project requirements, giving the method wide application value.
Narrow field electromagnetic sensor system and method
McEwan, Thomas E.
1996-01-01
A narrow field electromagnetic sensor system and method of sensing a characteristic of an object provide the capability to realize a characteristic of an object such as density, thickness, or presence, for any desired coordinate position on the object. One application is imaging. The sensor can also be used as an obstruction detector or an electronic trip wire with a narrow field without the disadvantages of impaired performance when exposed to dirt, snow, rain, or sunlight. The sensor employs a transmitter for transmitting a sequence of electromagnetic signals in response to a transmit timing signal, a receiver for sampling only the initial direct RF path of the electromagnetic signal while excluding all other electromagnetic signals in response to a receive timing signal, and a signal processor for processing the sampled direct RF path electromagnetic signal and providing an indication of the characteristic of an object. Usually, the electromagnetic signal is a short RF burst and the obstruction must provide a substantially complete eclipse of the direct RF path. By employing time-of-flight techniques, a timing circuit controls the receiver to sample only the initial direct RF path of the electromagnetic signal while not sampling indirect path electromagnetic signals. The sensor system also incorporates circuitry for ultra-wideband spread spectrum operation that reduces interference to and from other RF services while allowing co-location of multiple electronic sensors without the need for frequency assignments.
A field method for measurement of infiltration
Johnson, A.I.
1963-01-01
The determination of infiltration--the downward entry of water into a soil (or sediment)--is receiving increasing attention in hydrologic studies because of the need for more quantitative data on all phases of the hydrologic cycle. A measure of infiltration, the infiltration rate, is usually determined in the field by flooding basins or furrows, sprinkling, or measuring water entry from cylinders (infiltrometer rings). Rates determined by ponding in large areas are considered most reliable, but the high cost usually dictates that infiltrometer rings, preferably 2 feet in diameter or larger, be used. The hydrology of subsurface materials is critical in the study of infiltration. The zone controlling the rate of infiltration is usually the least permeable zone. Many other factors affect infiltration rate--the sediment (soil) structure, the condition of the sediment surface, the distribution of soil moisture or soil-moisture tension, the chemical and physical nature of the sediments, the head of applied water, the depth to ground water, the chemical quality and the turbidity of the applied water, the temperature of the water and the sediments, the percentage of entrapped air in the sediments, the atmospheric pressure, the length of time of application of water, the biological activity in the sediments, and the type of equipment or method used. It is concluded that specific values of the infiltration rate for a particular type of sediment are probably nonexistent and that measured rates are primarily for comparative use. A standard field-test method for determining infiltration rates by means of single- or double-ring infiltrometers is described and the construction, installation, and operation of the infiltrometers are discussed in detail.
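Since the report concludes that measured rates are primarily comparative, ring-infiltrometer readings are often summarized with an empirical decay law such as Horton's equation f(t) = fc + (f0 - fc)·exp(-k·t). This is a standard textbook model, not one prescribed by this report, and all parameter values below are illustrative.

```python
# Horton's empirical infiltration model: rate decays from an initial value
# f0 toward a steady final rate fc with decay constant k (all values assumed).
import math

def horton_rate(t_hr, f0=10.0, fc=1.0, k=2.0):
    """Infiltration rate (e.g. cm/hr) at time t_hr after ponding begins."""
    return fc + (f0 - fc) * math.exp(-k * t_hr)
```

Fitting f0, fc, and k to repeated ring readings gives a compact way to compare sites, in line with the report's view that the numbers are best used comparatively.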
Far-field method for the characterisation of three-dimensional fields: vectorial polarimetry
NASA Astrophysics Data System (ADS)
Rodríguez, O.; Lara, D.; Dainty, C.
2010-06-01
The first attempt to completely characterise a three-dimensional field was made by Ellis and Dogariu, with excellent results reported [1]. However, their method is based on near-field techniques, which limits its range of applications. In this work, we present an alternative far-field method for the characterisation of the three-dimensional field that results from the interaction of a tightly focused three-dimensional field [2] with a sub-resolution specimen. Our method is based on the analysis of the scattering-angle-resolved polarisation state distribution across the exit pupil of a high numerical aperture (NA) collector lens using standard polarimetry techniques. Details of the method, the experimental setup built to verify its capabilities, and numerical and first experimental evidence demonstrating that the method allows high sensitivity to sub-resolution displacements of a sub-resolution specimen shall be presented [3]. This work is funded by Science Foundation Ireland grant No. 07/IN.1/I906 and Shimadzu Corporation, Japan. Oscar Rodríguez is grateful to the National Council for Science and Technology (CONACYT, Mexico) for the PhD scholarship 177627.
Kussmann, Jörg; Luenser, Arne; Beer, Matthias; Ochsenfeld, Christian
2015-03-07
An analytical method to calculate the molecular vibrational Hessian matrix at the self-consistent field level is presented. By analysis of the multipole expansions of the relevant derivatives of Coulomb-type two-electron integral contractions, we show that the effect of the perturbation on the electronic structure due to the displacement of nuclei decays at least as r^-2 instead of r^-1. The perturbation is asymptotically local, and the computation of the Hessian matrix can, in principle, be performed with O(N) complexity. Our implementation exhibits linear scaling in all time-determining steps, with some rapid but quadratic-complexity steps remaining. Sample calculations illustrate linear or near-linear scaling in the construction of the complete nuclear Hessian matrix for sparse systems. For more demanding systems, scaling is still considerably sub-quadratic to quadratic, depending on the density of the underlying electronic structure.
Thomas P. Holmes; Wiktor L. Adamowicz
2003-01-01
Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...
Ogura, Toshihiko
2014-08-08
The high-resolution structural analysis of biological specimens by scanning electron microscopy (SEM) presents several advantages. Until now, wet bacterial specimens have been examined using atmospheric sample holders. However, images of unstained specimens in water using these holders exhibit very poor contrast and heavy radiation damage. Recently, we developed the frequency transmission electric-field (FTE) method, which facilitates the SEM observation of biological specimens in water without radiation damage. However, its signal detection system has low sensitivity; a high electron-beam (EB) current is therefore required to generate clear images, which reduces spatial resolution and induces thermal damage to the samples. Here, a high-sensitivity detection system is developed for the FTE method, which enhances the output signal amplitude a hundredfold. The detection signal was highly enhanced when voltage was applied to the metal layer on the silicon nitride thin film. This enhancement reduced the EB current and improved the spatial resolution as well as the signal-to-noise ratio. The spatial resolution of the high-sensitivity FTE system is 41 nm, considerably better than that of the previous FTE system. The new FTE system can easily be utilised to examine various unstained biological specimens in water, such as living bacteria and viruses.
Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You
2017-09-12
The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.
Field methods for measuring concentrated flow erosion
NASA Astrophysics Data System (ADS)
Castillo, C.; Pérez, R.; James, M. R.; Quinton, J. N.; Taguas, E. V.; Gómez, J. A.
2012-04-01
Many studies have stressed the importance of gully erosion in the overall soil loss and sediment yield of agricultural catchments, particularly in recent years (Vandaele and Poesen, 1995; De Santisteban et al., 2006; Wu et al., 2008). Several techniques have been used for determining gully erosion in field studies. The conventional techniques involve the use of different devices (i.e. ruler, pole, tape, micro-topographic profilers, total station) to calculate rill and gully volumes through the determination of cross-sectional areas and reach lengths (Casalí et al., 1999; Hessel and van Asch, 2003). Optical devices (i.e. laser profilemeters) have also been designed for rapid and detailed assessment of cross-sectional areas in gully networks (Giménez et al., 2009). These conventional 2D methods provide a simple and inexpensive approach to erosion evaluation but are time consuming to carry out if good accuracy is required. On the other hand, remote sensing techniques are being increasingly applied to gully erosion investigation, such as aerial photography for large-scale, long-term investigations (e.g. Martínez-Casasnovas et al., 2004; Ionita, 2006), airborne and terrestrial LiDAR datasets for gully volume evaluation (James et al., 2007; Evans and Lindsay, 2010) and, recently, major advances in 3D photo-reconstruction techniques (Welty et al., 2010; James et al., 2011). Despite its interest, few studies simultaneously compare the accuracies of the range of conventional and remote sensing techniques used, or define the most suitable method for a particular scale, given time and cost constraints. That was the reason behind the International Workshop "Innovations in the evaluation and measurement of rill and gully erosion", held in Cordoba in May 2011, from which part of the materials presented in this abstract derive. The main aim of this work was to compare the accuracy and time requirements of traditional (2D) and recently developed
Andreuccetti, D; Zoppetti, N
2004-01-01
An advanced numerical evaluation tool is proposed for calculating the magnetic flux density dispersed by high-voltage power lines. Compared to existing software packages based on the application of standardized methods, this tool turned out to be particularly suitable for making accurate evaluations over vast portions of territory, especially when the contributions of numerous aerial and/or underground lines must be taken into account. The aspects of the tool of greatest interest are (1) the interaction with an electronic archive of power lines, from which all the information necessary for the calculation is obtained; (2) the use of three-dimensional models of both the power lines and the territory they cross; (3) the direct interfacing with electronic cartography; and finally (4) the use of a representation procedure for the results that is based on contour maps. The tool has proven very useful, especially for Environmental Impact Assessment procedures relating to new power lines.
Intermediate electrostatic field for the generalized elongation method.
Liu, Kai; Korchowiec, Jacek; Aoki, Yuriko
2015-05-18
An intermediate electrostatic field is introduced to improve the accuracy of fragment-based quantum-chemical computational methods by including long-range polarizations of biomolecules. The point charge distribution of the intermediate field is generated by a charge sensitivity analysis that is parameterized for five different population analyses, namely, atoms-in-molecules, Hirshfeld, Mulliken, natural orbital, and Voronoi population analysis. Two model systems are chosen to demonstrate the performance of the generalized elongation method (ELG) combined with the intermediate electrostatic field. The calculations are performed for the STO-3G, 6-31G, and 6-31G(d) basis sets and compared with reference Hartree-Fock calculations. It is shown that the error in the total energy is reduced by one order of magnitude, independently of the population analyses used. This demonstrates the importance of long-range polarization in electronic-structure calculations by fragmentation techniques.
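Operationally, the intermediate field amounts to evaluating, at each fragment, the Coulomb potential of the point charges that represent the distant environment. A minimal sketch follows; the charges and positions are invented for illustration, and atomic units are assumed.

```python
# Coulomb potential of an environment point-charge model at a fragment site
# (charges/positions invented; in ELG they would come from a population analysis).
import math

env_charges = [(-0.8, (0.0, 0.0, 5.0)),
               (0.4, (1.0, 0.0, 6.0)),
               (0.4, (-1.0, 0.0, 6.0))]   # (charge, position) in a.u.

def potential_at(r):
    """Coulomb potential (a.u.) at point r from the environment point charges."""
    return sum(q / math.dist(r, pos) for q, pos in env_charges)

v_origin = potential_at((0.0, 0.0, 0.0))   # potential felt at a fragment atom
```

In the method described above, such potentials enter the fragment Hamiltonians as one-electron terms, which is how the long-range polarization is carried into the fragment calculations.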
A novel background field removal method for MRI using projection onto dipole fields (PDF).
Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi
2011-11-01
For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method.
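The projection step can be illustrated with a 1-D toy problem (not the authors' implementation): the field inside the ROI is fitted, in the least-squares sense, by the fields of candidate unit sources restricted to the outside region, and the residual is taken as the local field. The kernel, geometry, and amplitudes below are all invented for illustration.

```python
# Toy 1-D sketch of projecting the ROI field onto outside-source fields;
# the fit is the "background", the residual the estimated local field.
import numpy as np

x = np.linspace(-10.0, 10.0, 401)
roi = np.abs(x) < 4.0                 # region of interest
outside = ~roi

def kernel(x, x_src):
    """Toy regularized 1/r^2 kernel standing in for the unit dipole field."""
    return 1.0 / ((x - x_src) ** 2 + 0.5)

# background from sources outside the ROI, plus a local field from inside
background = 3.0 * kernel(x, -7.0) + 2.0 * kernel(x, 8.0)
local = 1.5 * kernel(x, 1.0)
field = background + local

# basis: candidate unit sources on a coarse grid covering the outside region
src_grid = x[outside][::40]
A = np.stack([kernel(x[roi], s) for s in src_grid], axis=1)

# least-squares projection of the ROI field onto the outside-source fields
coef, *_ = np.linalg.lstsq(A, field[roi], rcond=None)
local_est = field[roi] - A @ coef     # residual = estimated local field
```

The approximate orthogonality the paper reports between inside- and outside-dipole fields is what keeps such a projection from swallowing the local field of interest; in 3-D MRI the kernel is the actual unit dipole field and the sources fill the whole region outside the ROI.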
Field testing method for photovoltaic modules
NASA Astrophysics Data System (ADS)
Ramos, Gerber N.
For remote areas, where solar photovoltaic modules are the only source of power, it is essential to perform preventive maintenance to ensure that the PV system works properly; unfortunately, prices for PV testers range from $1,700 to $8,000. To address this issue, a portable inexpensive tester and analysis methodology have been developed. Assembling a simple tester, which costs $530 and weighs about 5 pounds, and using the Four-Parameter PV Model, we characterized the current-voltage (I-V) curve at environmental testing conditions; then, employing radiation, temperature, and age degradation sensitivity equations, we extrapolated the I-V curve to standard testing conditions. After applying the methodology to three kinds of silicon modules (mono-crystalline, multi-crystalline, and thin-film), we obtained maximum power points up to 97% of the manufacturer's specifications. These results indicate that the performance of solar modules can be verified in the field with reasonable accuracy at an affordable cost.
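A four-parameter single-diode model of the kind used for I-V characterization can be sketched as follows. The parameter values here are hypothetical placeholders, and the thesis' exact formulation and fitted values may differ; the point is only how the implicit diode equation is solved numerically at each voltage.

```python
import numpy as np
from scipy.optimize import brentq

# Four-parameter single-diode PV model (assumed form):
#   I = IL - I0 * (exp((V + I*Rs) / a) - 1)
IL, I0, Rs, a = 5.0, 1e-9, 0.02, 1.2   # photocurrent (A), saturation current (A),
                                       # series resistance (ohm), modified ideality (V)

def current(V):
    """Solve the implicit diode equation for the current at voltage V."""
    f = lambda I: IL - I0 * (np.exp((V + I * Rs) / a) - 1.0) - I
    return brentq(f, -1.0, IL + 1.0)

Voc = a * np.log(IL / I0 + 1.0)              # open-circuit voltage
V = np.linspace(0.0, Voc, 200)
I = np.array([current(v) for v in V])
P = V * I
V_mpp, P_mpp = V[np.argmax(P)], P.max()      # maximum power point
```

The maximum power point found this way is what would be compared, after irradiance and temperature corrections, against the manufacturer's rating.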
Edison, John R; Monson, Peter A
2014-07-14
Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from the PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.
von Papen, M; Dafsari, H; Florin, E; Gerick, F; Timmermann, L; Saur, J
2017-11-01
Local field potentials (LFP) reflect the integrated electrophysiological activity of large neuron populations and may thus reflect the dynamics of spatially and functionally different networks. We introduce the wavelet-based phase-coherence classification (PCC), which separates LFP into volume-conducted, local incoherent and local coherent components. It allows the computation of power spectral densities for each component associated with local or remote electrophysiological activity. We use synthetic time series to estimate optimal parameters for the application to LFP from within the subthalamic nucleus of eight Parkinson patients. With PCC we identify multiple local tremor clusters and quantify the relative power of local and volume-conducted components. We analyze the electrophysiological response to an apomorphine injection during rest and hold. Here we show a significant medication-induced decrease of incoherent activity in the low beta band and an increase of coherent activity in the high beta band. On medication, significant movement-induced changes occur in the high beta band of the local coherent signal. It increases during isometric hold tasks and decreases during phasic wrist movement. The power spectra of the local PCC components are compared to bipolar recordings. In contrast to bipolar recordings, PCC can distinguish local incoherent and coherent signals. We further compare our results with classification based on the imaginary part of coherency and the weighted phase lag index. The low and high beta bands are more susceptible to medication- and movement-related changes, reflected by incoherent and local coherent activity, respectively. PCC components may thus reflect functionally different networks. Copyright © 2017 Elsevier B.V. All rights reserved.
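The core classification logic can be illustrated with Fourier-domain coherence and phase on synthetic two-channel data. This is a simplified stand-in for the authors' wavelet implementation, with assumed thresholds: high coherence at near-zero phase lag suggests volume conduction, high coherence at non-zero lag suggests a local coherent source, and low coherence suggests incoherent activity.

```python
import numpy as np
from scipy import signal

# Two synthetic channels: a shared zero-lag source at 10 Hz (mimicking
# volume conduction) and a 20 Hz source seen with a 0.8 rad phase lag.
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
common = np.sin(2 * np.pi * 10.0 * t)
ch1 = common + np.sin(2 * np.pi * 20.0 * t) + 0.5 * rng.standard_normal(t.size)
ch2 = common + np.sin(2 * np.pi * 20.0 * t - 0.8) + 0.5 * rng.standard_normal(t.size)

f, Cxy = signal.coherence(ch1, ch2, fs=fs, nperseg=1024)
_, Pxy = signal.csd(ch1, ch2, fs=fs, nperseg=1024)
phase = np.angle(Pxy)

def classify(idx, coh_thresh=0.5, phase_thresh=0.2):
    """Assumed thresholds, for illustration only."""
    if Cxy[idx] < coh_thresh:
        return "incoherent"
    return "volume-conducted" if abs(phase[idx]) < phase_thresh else "local coherent"

i10 = int(np.argmin(np.abs(f - 10.0)))
i20 = int(np.argmin(np.abs(f - 20.0)))
```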
Shi, Xu; Barnes, Robert O.; Chen, Li; Shajahan-Haq, Ayesha N.; Hilakivi-Clarke, Leena; Clarke, Robert; Wang, Yue; Xuan, Jianhua
2015-01-01
Summary: Identification of protein interaction subnetworks is an important step to help us understand complex molecular mechanisms in cancer. In this paper, we develop a BMRF-Net package, implemented in Java and C++, to identify protein interaction subnetworks based on a bagging Markov random field (BMRF) framework. By integrating gene expression data and protein–protein interaction data, this software tool can be used to identify biologically meaningful subnetworks. A user friendly graphic user interface is developed as a Cytoscape plugin for the BMRF-Net software to deal with the input/output interface. The detailed structure of the identified networks can be visualized in Cytoscape conveniently. The BMRF-Net package has been applied to breast cancer data to identify significant subnetworks related to breast cancer recurrence. Availability and implementation: The BMRF-Net package is available at http://sourceforge.net/projects/bmrfcjava/. The package is tested under Ubuntu 12.04 (64-bit), Java 7, glibc 2.15 and Cytoscape 3.1.0. Contact: xuan@vt.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25755273
Potential theoretic methods for far field sound radiation calculations
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Stenger, Edward J.; Scott, J. R.
1995-01-01
In the area of computational acoustics, procedures which accurately predict the far-field sound radiation are much sought after. A systematic development of such procedures is found in a sequence of papers by Atassi. The method presented here is an alternate approach to predicting far-field sound based on simple-layer potential theoretic methods. The main advantages of this method are that it requires only a simple free-space Green's function, it can accommodate arbitrary shapes of Kirchhoff surfaces, and it is readily extendable to three-dimensional problems. Moreover, the procedure presented here, though tested for unsteady lifting airfoil problems, can easily be adapted to other areas of interest, such as jet noise radiation problems. Results are presented for lifting airfoil problems and comparisons are made with the results reported by Atassi. Direct comparisons are also made for the flat plate case.
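A minimal simple-layer potential in 2D can be sketched as below. This is an assumed setup, not the paper's formulation: the circular Kirchhoff surface and the dipole-like source density are hypothetical, but the superposition of free-space Green's functions G = (i/4) H0^(1)(kr) is the ingredient the abstract highlights, and the resulting far field exhibits the expected 2D 1/sqrt(r) decay.

```python
import numpy as np
from scipy.special import hankel1

k = 2.0 * np.pi                                     # acoustic wavenumber
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
surface = np.column_stack([np.cos(theta), np.sin(theta)])  # unit-circle "Kirchhoff surface"
sigma = np.cos(theta)                               # hypothetical dipole-like density

def field(x):
    """Simple-layer potential: quadrature of sigma * G over the surface."""
    r = np.linalg.norm(x - surface, axis=1)
    G = 0.25j * hankel1(0, k * r)                   # 2D free-space Green's function
    return np.sum(sigma * G) * (2.0 * np.pi / theta.size)

p1 = abs(field(np.array([50.0, 0.0])))              # far-field observers on the x-axis
p2 = abs(field(np.array([200.0, 0.0])))
# In 2D the far field decays like 1/sqrt(r), so p1/p2 should approach 2.
```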
School's IN for Summer: An Alternative Field Experience for Elementary Science Methods Students
ERIC Educational Resources Information Center
Hanuscin, Deborah L.; Musikul, Kusalin
2007-01-01
Field experiences are critical to teacher learning and enhance the effectiveness of methods courses; however, when methods courses are offered in the summer, traditional school-based field experiences are not possible. This article describes an alternative campus-based experience created as part of an elementary science methods course. The Summer…
Gravitational collapse of scalar fields via spectral methods
Oliveira, H. P. de; Rodrigues, E. L.; Skea, J. E. F.
2010-11-15
In this paper we present a new numerical code based on the Galerkin method to integrate the field equations for the spherical collapse of massive and massless scalar fields. By using a spectral decomposition in terms of the radial coordinate, the field equations were reduced to a finite set of ordinary differential equations in the space of modes associated with the Galerkin expansion of the scalar field, together with algebraic sets of equations connecting modes associated with the metric functions. The set of ordinary differential equations with respect to the null coordinate is then integrated using an eighth-order Runge-Kutta method. The numerical tests have confirmed the high accuracy and fast convergence of the code. As an application we have evaluated the whole spectrum of black hole masses which ranges from infinitesimal to large values obtained after varying the amplitude of the initial scalar field distribution. We have found strong numerical evidence that this spectrum is described by a nonextensive distribution law.
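The Galerkin reduction plus high-order Runge-Kutta marching can be shown on a toy problem. The paper treats the full spherical-collapse field equations; here a 1D heat equation stands in as an assumption for illustration: expanding in sine modes reduces the PDE to decoupled ODEs for the mode amplitudes, which are then marched with the eighth-order DOP853 Runge-Kutta integrator.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Galerkin sketch: u_t = u_xx on sine modes -> da_n/dt = -n^2 a_n.
N = 8
n = np.arange(1, N + 1)
a0 = 1.0 / n                                 # initial mode amplitudes
rhs = lambda t, a: -(n ** 2) * a             # reduced ODE system for the modes

# March with an eighth-order Runge-Kutta method (DOP853).
sol = solve_ivp(rhs, (0.0, 0.5), a0, method="DOP853", rtol=1e-10, atol=1e-12)
exact = a0 * np.exp(-(n ** 2) * 0.5)         # exact decay for comparison
```

The agreement with the exact mode decay mirrors the "high accuracy and fast convergence" checks described in the abstract, though on a far simpler equation.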
Abnormality degree detection method using negative potential field group detectors
NASA Astrophysics Data System (ADS)
Zhang, Hongli; Liu, Shulin; Li, Dong; Shi, Kunju; Wang, Bo; Cui, Jiqiang
2015-09-01
Online monitoring methods have been widely used in many major devices; however, the normal and abnormal states of equipment are usually estimated only by whether the monitored parameters exceed preset thresholds. Such monitoring methods may produce serious false positive or false negative results. In order to monitor the state of equipment precisely, the problem of abnormality degree detection without fault samples is studied with a new detection method called negative potential field group detectors (NPFG-detectors). This method achieves a quantitative expression of the abnormality degree and provides better detection results than other methods. In simulations on the Iris data set, the new algorithm obtains successful results in abnormality detection. The detection rates for the 3 types of the Iris data set reach 100%, 91.6%, and 95.24%, respectively, with 50% training samples. The problem of bearing abnormality degree detection via an abnormality degree curve is also successfully solved.
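The general idea of a potential-field abnormality score can be sketched as follows. This is an illustrative analogue, not the authors' NPFG-detector algorithm, and the Gaussian potential, the synthetic training data, and the score mapping are all assumptions: normal training samples exert a "potential" on a test point, and points in low-potential regions receive a high, continuously graded abnormality score rather than a binary threshold decision.

```python
import numpy as np

rng = np.random.default_rng(2)
train = rng.normal(0.0, 1.0, size=(200, 2))      # normal-state training samples

def abnormality(x, sigma=1.0):
    """Continuous abnormality degree in (0, 1): high when far from all samples."""
    d2 = np.sum((train - x) ** 2, axis=1)
    potential = np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))
    return 1.0 / (1.0 + potential)

inlier = abnormality(np.array([0.1, -0.2]))      # inside the normal region
outlier = abnormality(np.array([6.0, 6.0]))      # far from all normal samples
```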
Magnetic field transfer device and method
Wipf, S.L.
1990-02-13
A magnetic field transfer device includes a pair of oppositely wound inner coils which each include at least one winding around an inner coil axis, and an outer coil which includes at least one winding around an outer coil axis. The windings may be formed of superconductors. The axes of the two inner coils are parallel and laterally spaced from each other so that the inner coils are positioned in side-by-side relation. The outer coil is outwardly positioned from the inner coils and rotatable relative to the inner coils about a rotational axis substantially perpendicular to the inner coil axes to generate a hypothetical surface which substantially encloses the inner coils. The outer coil rotates relative to the inner coils between a first position in which the outer coil axis is substantially parallel to the inner coil axes and the outer coil augments the magnetic field formed in one of the inner coils, and a second position 180° from the first position, in which the augmented magnetic field is transferred into the other inner coil and reoriented 180° from the original magnetic field. The magnetic field transfer device allows a magnetic field to be transferred between volumes with negligible work being required to rotate the outer coil with respect to the inner coils. 16 figs.
Magnetic field transfer device and method
Wipf, Stefan L.
1990-01-01
A magnetic field transfer device includes a pair of oppositely wound inner coils which each include at least one winding around an inner coil axis, and an outer coil which includes at least one winding around an outer coil axis. The windings may be formed of superconductors. The axes of the two inner coils are parallel and laterally spaced from each other so that the inner coils are positioned in side-by-side relation. The outer coil is outwardly positioned from the inner coils and rotatable relative to the inner coils about a rotational axis substantially perpendicular to the inner coil axes to generate a hypothetical surface which substantially encloses the inner coils. The outer coil rotates relative to the inner coils between a first position in which the outer coil axis is substantially parallel to the inner coil axes and the outer coil augments the magnetic field formed in one of the inner coils, and a second position 180° from the first position, in which the augmented magnetic field is transferred into the other inner coil and reoriented 180° from the original magnetic field. The magnetic field transfer device allows a magnetic field to be transferred between volumes with negligible work being required to rotate the outer coil with respect to the inner coils.
Process system and method for fabricating submicron field emission cathodes
Jankowski, A.F.; Hayes, J.P.
1998-05-05
A process method and system for making field emission cathodes exists. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape. 3 figs.
Process system and method for fabricating submicron field emission cathodes
Jankowski, Alan F.; Hayes, Jeffrey P.
1998-01-01
A process method and system for making field emission cathodes exists. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape.
Knowledge-based flow field zoning
NASA Technical Reports Server (NTRS)
Andrews, Alison E.
1988-01-01
Automating flow field zoning in two dimensions is an important step towards easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.
Studies on Partially Coherent Fields and Coherence Measurement Methods
NASA Astrophysics Data System (ADS)
Cho, Seongkeun
The concept of coherence in optics means how closely an optical field oscillates in unison at the same position at different times (temporal coherence) or at different positions at the same time (spatial coherence). Since all optical fields oscillate very rapidly with random fluctuations, coherence theory has been developed to describe the state of coherence of those optical fields through the usage of time-averaged correlation functions. This thesis reviews and applies coherence theory for an accurate and improved modeling in field-propagation and coherence measurement for partially coherent fields. The first half of this thesis discusses the study of phase-space distributions and phase-space tomography. Phase-space distributions such as the Wigner and the ambiguity functions can be used as simple mathematical tools for describing the propagation of an optical field for any state of coherence, as those functions incorporate wave effects with the simplicity of ray optics. However, the Wigner and the ambiguity functions require a paraxial condition for the field description. To overcome this limitation, the nonparaxial extensions of the Wigner function have been studied and applied to nonparaxial fields. In this thesis, a simple series expression for calculating a nonparaxial generalization of the Wigner function from the standard Wigner function is developed in both two- and three-dimensional free space. A nonparaxial generalization of the ambiguity function that retains properties analogous to the standard ambiguity function is also proposed in both two and three dimensions. This generalization extends phase-space tomography to the nonparaxial regime. The second half of this thesis proposes a new method of coherence measurement based on diffraction. By measuring the radiant intensity of a field with and without a binary transparent phase mask, one can estimate the coherence of a field at all pairs of the points centered at the mask's edge. This method is proposed in
Investigation of drag effect using the field signature method
NASA Astrophysics Data System (ADS)
Wan, Zhengjun; Liao, Junbi; Tian, Gui Yun; Cheng, Liang
2011-08-01
The potential drop (PD) method is an established non-destructive evaluation (NDE) technique. The monitoring of internal corrosion, erosion and cracks in piping systems, based on electrical field mapping or a direct current potential drop array, is also known as the field signature method (FSM). The FSM has been applied in the field of submarine pipe monitoring and land-based oil and gas transmission pipes and containers. In the experimental studies, to detect and calculate the degree of pipe corrosion, the FSM analyses the relationships between the electrical resistance and pipe thickness using an electrode matrix. The relevant drag effect, or trans-resistance, will cause a large margin of error in the application of resistance arrays. In this paper, the drag effect is investigated and analysed in resistance networks for the first time with the help of the FSM, and a method to calculate the drag factors and eliminate the resulting errors is proposed. Theoretical analysis, simulation and experimental results show that the measurement accuracy can be improved by eliminating the errors caused by the drag effect.
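The basic FSM bookkeeping, before any drag-effect correction, can be sketched as follows. The simplified form and all voltage values are assumptions for illustration: each electrode-pair voltage is normalised by a reference pair, and local wall thinning shows up as a relative increase of the normalised voltage (a "fingerprint coefficient", here expressed in parts per thousand).

```python
import numpy as np

v0 = np.array([1.00, 1.01, 0.99, 1.00])   # baseline pair voltages (mV), hypothetical
vref0 = 1.00                              # baseline reference-pair voltage
v1 = np.array([1.00, 1.06, 1.00, 1.00])   # voltages after local corrosion, hypothetical
vref1 = 1.00

# Fingerprint coefficients: relative change of normalised voltages (ppt).
fc = 1000.0 * ((v1 / vref1) / (v0 / vref0) - 1.0)
worst = int(np.argmax(fc))                # electrode pair over the thinnest wall
```

In a real resistance array, the drag effect studied in the paper perturbs neighbouring pairs as well, which is why the authors derive drag factors to correct these raw coefficients.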
NASA Astrophysics Data System (ADS)
Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David
2015-03-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
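The fully implicit time-stepping idea can be reduced to a 1D deterministic sketch. This is a simplification of the paper's 3D stochastic scheme: the noise term, adaptive stepping, and the Newton-Krylov-Schwarz solver are omitted, and a general-purpose nonlinear solver stands in for the Newton iteration, but each step still solves the implicit cell-centered finite-difference system.

```python
import numpy as np
from scipy.optimize import fsolve

# 1D Cahn-Hilliard, periodic grid: u_t = lap(u^3 - u - gamma * lap(u)).
N, dx, dt, gamma = 32, 1.0, 0.1, 1.0
lap = lambda u: (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx ** 2

def residual(u_new, u_old):
    """Implicit (backward Euler) residual for one time step."""
    mu = u_new ** 3 - u_new - gamma * lap(u_new)   # chemical potential
    return u_new - u_old - dt * lap(mu)

rng = np.random.default_rng(3)
u = 0.1 * rng.standard_normal(N)                   # near-critical quench
mass0 = u.sum()                                    # CH dynamics conserve mass
for _ in range(5):                                 # a few implicit steps
    u = fsolve(lambda v: residual(v, u), u)
```

Mass conservation of the discrete Laplacian provides a quick correctness check on the implicit solve, analogous to the validation against an explicit scheme described in the abstract.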
Wide-field TCSPC: methods and applications
NASA Astrophysics Data System (ADS)
Hirvonen, Liisa M.; Suhling, Klaus
2017-01-01
Time-correlated single photon counting (TCSPC) is a widely used, robust and mature technique to measure the photon arrival time in applications such as fluorescence spectroscopy and microscopy, LIDAR and optical tomography. In the past few years there have been significant developments with wide-field TCSPC detectors, which can record the position as well as the arrival time of the photon simultaneously. In this review, we summarise different approaches used in wide-field TCSPC detection, and discuss their merits for different applications, with emphasis on fluorescence lifetime imaging.
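The TCSPC principle the review builds on can be simulated in a few lines. The 2.5 ns lifetime, count number, and histogram settings are assumed values for illustration: photon arrival times after the excitation pulse follow the fluorescence decay, so the lifetime can be read off a log-linear fit to the arrival-time histogram.

```python
import numpy as np

rng = np.random.default_rng(6)
tau = 2.5                                        # fluorescence lifetime (ns), assumed
arrivals = rng.exponential(tau, size=100_000)    # simulated photon arrival times (ns)

# Histogram the arrival times, as a TCSPC electronics chain would.
counts, edges = np.histogram(arrivals, bins=100, range=(0.0, 12.0))
centers = 0.5 * (edges[:-1] + edges[1:])

# Log-linear fit of the exponential decay gives the lifetime.
mask = counts > 5                                # avoid low-count log bias
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)
tau_est = -1.0 / slope
```

A wide-field TCSPC detector records such a time stamp together with the photon's (x, y) position, so this fit can be repeated per pixel to form a lifetime image.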
Cloud field classification based on textural features
NASA Technical Reports Server (NTRS)
Sengupta, Sailes Kumar
1989-01-01
An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and textural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution. These are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive and thresholded local extremes separated by a given pixel distance d. Textural measures are then computed based on this matrix in much the same manner as is done in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near IR visible channel. The classification algorithm used is the well known Stepwise Discriminant Analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features, and at any given spatial resolution, give approximately the same classification accuracy. A neural network based classifier with a feed forward architecture and a back propagation training algorithm is used to increase the classification accuracy, using these two classes
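GLDV-style features can be sketched as below. The exact feature set of the study may differ; the definitions here are assumed, but the mechanism matches the abstract: grey-level differences D at a horizontal offset d are histogrammed, and statistics of that distribution serve as texture features.

```python
import numpy as np

def gldv_features(img, d=1):
    """Statistics of the grey level difference vector at horizontal offset d."""
    D = img[:, d:].astype(float) - img[:, :-d].astype(float)
    hist = np.bincount(np.abs(D).astype(int).ravel())
    p = hist[hist > 0] / D.size
    return {"mean_abs": float(np.mean(np.abs(D))),
            "contrast": float(np.mean(D ** 2)),
            "entropy": float(-np.sum(p * np.log(p)))}

smooth = np.tile(np.arange(16), (16, 1))                    # gentle horizontal ramp
speckled = np.random.default_rng(4).integers(0, 16, size=(16, 16))  # busy texture
```

A smooth ramp gives a single-valued difference distribution (zero entropy, low contrast), while a speckled field spreads the differences widely; that separation is what makes these statistics usable as classification features.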
Hyperbolic Methods for Surface and Field Grid Generation
NASA Technical Reports Server (NTRS)
Chan, William M.; VanDalsem, William R. (Technical Monitor)
1996-01-01
This chapter describes the use of hyperbolic partial differential equation methods for structured surface grid generation and field grid generation. While the surface grid generation equations are inherently three dimensional, the field grid generation equations can be formulated in two or three dimensions. The governing equations are derived from orthogonality relations and cell area/volume constraints, and are solved numerically by marching from an initial curve or surface. The marching step size and marching distance can be prescribed by the user. Exact specifications of the side and outer boundaries are not possible with a one-sweep marching scheme, but limited control is achievable. Excellent orthogonality and grid clustering characteristics are provided by hyperbolic methods, with one to two orders of magnitude savings in time over typical elliptic methods. Since hyperbolic grid generation methods do not require the exact specifications of the side and outer boundaries of a grid, these methods are particularly well suited for the overlapping grid approach for solving problems on complex configurations. Grid generation software based on hyperbolic methods and its applications to several complex configurations will be described.
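One marching sweep can be sketched in a strongly simplified form. Production hyperbolic generators solve the coupled orthogonality and cell-area equations; here, as an assumption for illustration, orthogonality alone is imposed by stepping each grid level along its local unit normals with a user-prescribed step size.

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 41)
curve = np.column_stack([np.cos(theta), np.sin(theta)])  # initial body curve
levels = [curve]
step = 0.1                                               # user-prescribed step size
for _ in range(10):                                      # marching distance = 1.0
    c = levels[-1]
    tang = np.gradient(c, axis=0)                        # discrete tangents
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)
    normal = np.column_stack([tang[:, 1], -tang[:, 0]])  # outward unit normal
    levels.append(c + step * normal)
grid = np.stack(levels)                                  # (levels, points, 2) grid
```

Marching a half-circle outward this way produces concentric, orthogonal grid lines; the side and outer boundaries emerge from the march rather than being specified exactly, as the chapter notes.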
A field-based method for simultaneous measurements of the δ18O and δ13C of soil CO2 efflux
NASA Astrophysics Data System (ADS)
Mortazavi, B.; Prater, J. L.; Chanton, J. P.
determined from soil CO2. There were close agreements between the three methods for the determination of the δ13C of soil efflux CO2. Results suggest that the mini-towers can be effectively used in the field for determining the δ18O and the δ13C of soil-respired CO2.
Grassmann phase space methods for fermions. II. Field theory
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Jeffers, J.; Barnett, S. M.
2017-02-01
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation and creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker-Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.
Stream temperature investigations: field and analytic methods
Bartholow, J.M.
1989-01-01
Alternative public domain stream and reservoir temperature models are contrasted with SNTEMP. A distinction is made between steady-flow and dynamic-flow models and their respective capabilities. Regression models are offered as an alternative approach for some situations, with appropriate mathematical formulas suggested. Appendices provide information on State and Federal agencies that are good data sources, vendors for field instrumentation, and small computer programs useful in data reduction.
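The regression alternative the report suggests can be sketched as a simple air-to-stream temperature fit. The linear form and the synthetic observations below are assumptions for illustration; practitioners would substitute field data and whatever regression form the site supports.

```python
import numpy as np

rng = np.random.default_rng(5)
t_air = np.linspace(0.0, 30.0, 60)                       # daily mean air temp (C)
t_stream = 2.0 + 0.7 * t_air + rng.normal(0.0, 0.5, 60)  # synthetic stream temps (C)

# Ordinary least-squares fit: t_stream ~ a + b * t_air.
b, a = np.polyfit(t_air, t_stream, 1)
predict = lambda ta: a + b * ta
```

For a steady-flow site with limited data, such a regression can stand in for a full heat-budget model like SNTEMP, at the cost of losing the physical interpretability of the model parameters.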
Ab initio based polarizable force field parametrization
NASA Astrophysics Data System (ADS)
Masia, Marco
2008-05-01
Experimental and simulation studies of anion-water systems have pointed out the importance of molecular polarization for many phenomena, ranging from hydrogen-bond dynamics to the structure of water interfaces. The study of such systems at the molecular level is usually made with classical molecular dynamics simulations. Structural and dynamical features are deeply influenced by molecular and ionic polarizability, whose parametrization in classical force fields has been the object of long-standing efforts. However, when classical models are compared with ab initio calculations in the condensed phase, it is found that the water dipole moments are underestimated by ~30%, while the anion shows an overpolarization at short distances. A model for the chloride-water polarizable interaction is parametrized here, making use of Car-Parrinello simulations in the condensed phase. The results hint at an innovative approach to polarizable force field development, based on ab initio simulations, which does not suffer from the mentioned drawbacks. The method is general and can be applied to the modeling of different systems ranging from biomolecular to solid state simulations.
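The point-polarizable mechanics underlying such force fields can be sketched as below. The constants and geometry are illustrative, not the fitted chloride-water model: an ion's field induces dipoles mu = alpha * E on nearby polarizable sites, and mutual dipole-dipole induction is iterated to self-consistency. Gaussian-style units are assumed (charge in e, distances in Angstrom, alpha in Angstrom^3).

```python
import numpy as np

q = -1.0                                    # chloride charge (e)
alpha = 1.44                                # site polarizability, assumed value
sites = np.array([[3.0, 0.0, 0.0],
                  [0.0, 3.0, 0.0]])         # polarizable sites; ion at the origin

def dipole_field(mu, rvec):
    """Field at displacement rvec from a point dipole mu."""
    r = np.linalg.norm(rvec)
    return 3.0 * rvec * (rvec @ mu) / r ** 5 - mu / r ** 3

E0 = [q * s / np.linalg.norm(s) ** 3 for s in sites]   # ion's Coulomb field
mus = np.zeros((2, 3))
for _ in range(50):                         # self-consistent induction loop
    mus = np.array([alpha * (E0[i] + dipole_field(mus[1 - i], sites[i] - sites[1 - i]))
                    for i in range(2)])
```

Fitting alpha (and any damping at short range, omitted here) against condensed-phase ab initio dipoles rather than gas-phase data is the change of approach the abstract advocates.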
Junction-based field emission structure for field emission display
Dinh, Long N.; Balooch, Mehdi; McLean, II, William; Schildbach, Marcus A.
2002-01-01
A junction-based field emission display, wherein the junctions are formed by depositing a semiconducting or dielectric, low work function, negative electron affinity (NEA) silicon-based compound film (SBCF) onto a metal or n-type semiconductor substrate. The SBCF can be doped to become a p-type semiconductor. A small forward bias voltage is applied across the junction so that electron transport is from the substrate into the SBCF region. Upon entering into this NEA region, many electrons are released into the vacuum level above the SBCF surface and accelerated toward a positively biased phosphor screen anode, hence lighting up the phosphor screen for display. To turn off, simply switch off the applied potential across the SBCF/substrate. May be used for field emission flat panel displays.
Thomer, Andrea; Vaidya, Gaurav; Guralnick, Robert; Bloom, David; Russell, Laura
2012-01-01
Part diary, part scientific record, biological field notebooks often contain details necessary to understanding the location and environmental conditions present during collecting events. Despite their clear value for (and recent use in) global change studies, the text-mining outputs from field notebooks have been idiosyncratic to specific research projects, and impossible to discover or re-use. Best practices and workflows for digitization, transcription, extraction, and integration with other sources are nascent or non-existent. In this paper, we demonstrate a workflow to generate structured outputs while also maintaining links to the original texts. The first step in this workflow was to place already digitized and transcribed field notebooks from the University of Colorado Museum of Natural History founder, Junius Henderson, on Wikisource, an open text transcription platform. Next, we created Wikisource templates to document places, dates, and taxa to facilitate annotation and wiki-linking. We then requested help from the public, through social media tools, to take advantage of volunteer efforts and energy. After three notebooks were fully annotated, content was converted into XML and annotations were extracted and cross-walked into Darwin Core compliant record sets. Finally, these record sets were vetted, to provide valid taxon names, via a process we call “taxonomic referencing.” The result is identification and mobilization of 1,068 observations from three of Henderson’s thirteen notebooks and a publishable Darwin Core record set for use in other analyses. Although challenges remain, this work demonstrates a feasible approach to unlock observations from field notebooks that enhances their discovery and interoperability without losing the narrative context from which those observations are drawn. “Compose your notes as if you were writing a letter to someone a century in the future.” Perrine and Patton (2011) PMID:22859891
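The "cross-walk" step described above can be sketched in a few lines. This is a minimal, illustrative mapping from a parsed annotation to Darwin Core terms; the input field names (`taxon`, `place`, `date`) and the record identifier are hypothetical, while the output term names (`scientificName`, `locality`, `eventDate`, `occurrenceID`, `basisOfRecord`) are standard Darwin Core.

```python
# Hedged sketch: map one parsed notebook annotation onto Darwin Core terms,
# keeping a link back to the source text via occurrenceID.

DWC_MAP = {
    "taxon": "scientificName",
    "place": "locality",
    "date": "eventDate",
}

def to_darwin_core(annotation, source_id):
    # Translate only the fields we have a Darwin Core term for.
    record = {DWC_MAP[k]: v for k, v in annotation.items() if k in DWC_MAP}
    record["occurrenceID"] = source_id        # link to the original passage
    record["basisOfRecord"] = "HumanObservation"
    return record

obs = {"taxon": "Sciurus niger", "place": "Boulder, Colorado", "date": "1905-11-10"}
print(to_darwin_core(obs, "henderson-notebook-1-p12"))
```

The point of the design is that the narrative context survives: every structured record carries an identifier pointing back at the transcribed page it came from.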
Graphene-based field-effect transistor biosensors
Chen, Junhong; Mao, Shun; Lu, Ganhua
2017-06-14
The disclosure provides a field-effect transistor (FET)-based biosensor and uses thereof. In particular, it relates to FET-based biosensors using thermally reduced graphene-based sheets as a conducting channel decorated with nanoparticle-biomolecule conjugates. The present disclosure also relates to FET-based biosensors using metal nitride/graphene hybrid sheets. The disclosure further provides a method for detecting a target biomolecule in a sample using the FET-based biosensor described herein.
A data base of geologic field spectra
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Goetz, A. F. H.; Paley, H. N.; Alley, R. E.; Abbott, E. A.
1981-01-01
It is noted that field samples measured in the laboratory do not always present an accurate picture of the ground surface sensed by airborne or spaceborne instruments because of the heterogeneous nature of most surfaces and because samples are disturbed and surface characteristics changed by collection and handling. The development of new remote sensing instruments relies on the analysis of surface materials in their natural state. The existence of thousands of Portable Field Reflectance Spectrometer (PFRS) spectra has necessitated a single, all-inclusive data base that permits greatly simplified searching and sorting procedures and facilitates further statistical analyses. The data base developed at JPL for cataloging geologic field spectra is discussed.
Dispersion Method Using Focused Ultrasonic Field
NASA Astrophysics Data System (ADS)
Kim, Jungsoon; Kim, Moojoon; Ha, Kanglyel; Chu, Minchul
2010-07-01
The dispersion of powders into liquids has become one of the most important techniques in high-tech industries, and it is a common process in the formulation of products such as paint, ink, shampoo, beverages, and polishing media. In this study, an ultrasonic system with a cylindrical transducer is introduced for pure nanoparticle dispersion. The acoustic pressure field and the characteristics of the shock pulse caused by cavitation are investigated. The frequency spectrum of the pulse from the collapse of air bubbles during cavitation is analyzed theoretically. It was confirmed that a TiO2 water suspension can be dispersed effectively using the suggested system.
Computational Methods for Complex Flow Fields.
1986-06-28
James J. Riley, Joel H. Ferziger, "Turbulent Flow Simulation - Future Needs"; Micha Wolfshtein, "Numerical Calculation of the Reynolds Stress and Turbulent..." July 1983. Also in RECENT ADVANCES IN NUMERICAL METHODS IN FLUIDS, Vol. 3, Editor W.G. Habashi, Pineridge Press. 2. Usab, W.J., "Embedded Mesh Solutions..." ...tridiagonal matrices applicable to approximate factorization methods. Explicit algorithms are also easier to adapt to multiprocessor architectures as the...
NASA Astrophysics Data System (ADS)
Lee, Sebastian J. R.; Miyamoto, Kaito; Ding, Feizhi; Manby, Frederick R.; Miller, Thomas F.
2017-09-01
We consider mean-field electronic structure calculations with subsystems that employ different atomic-orbital basis sets. A major source of error arises in charge-manifestation reactions (including ionization, electron attachment, or deprotonation) due to electronic density artifacts at the subsystem interface. The underlying errors in the electronic density can be largely eliminated with Fock-matrix corrections or by avoiding the use of a minimal basis set in the low-level region. These corrections succeed by balancing the electronegativity of atoms at the subsystem interface, much as link-atoms in QM/MM calculations rely upon balancing the electronegativity of atoms in the truncated QM region.
Shin, Jicheol; Hong, Tae Ryang; Lee, Tae Wan; Kim, Aryeon; Kim, Yun Ho; Cho, Min Ju; Choi, Dong Hoon
2014-09-10
Template-guided solution-shearing (TGSS) is used to fabricate field-effect transistors (FETs) composed of micropatterned prisms as active channels. The prisms comprise highly crystalline PTDPP-DTTE, in which diketopyrrolopyrrole (DPP) is flanked by thiophene. The FET has a maximum mobility of approximately 7.43 cm² V⁻¹ s⁻¹, which is much higher than the mobility values of the thin-film transistors with solution-sheared or spin-coated films of PTDPP-DTTE annealed at 200 °C. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Determination of traces of cobalt in soils: A field method
Almond, H.
1953-01-01
The growing use of geochemical prospecting methods in the search for ore deposits has led to the development of a field method for the determination of cobalt in soils. The determination is based on the fact that cobalt reacts with 2-nitroso-1-naphthol to yield a pink compound that is soluble in carbon tetrachloride. The carbon tetrachloride extract is shaken with dilute cyanide to complex interfering elements and to remove excess reagent. The cobalt content is estimated by comparing the pink color in the carbon tetrachloride with a standard series prepared from standard solutions. The cobalt 2-nitroso-1-naphtholate system in carbon tetrachloride follows Beer's law. As little as 1 p.p.m. can be determined in a 0.1-gram sample. The method is simple and fast and requires only simple equipment. More than 40 samples can be analyzed per man-day with an accuracy within 30% or better.
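Because the cobalt 2-nitroso-1-naphtholate system follows Beer's law, the color-comparison step reduces to a linear relationship between absorbance and concentration. The following is a minimal sketch of that arithmetic, not the published field procedure; the standard-series values are illustrative.

```python
# Hedged sketch: Beer's law says absorbance A = k * c, so a slope k fitted
# through the standard series lets a sample reading be converted to ppm.

def fit_beers_law(standards):
    """Least-squares slope through the origin for A = k * c."""
    num = sum(c * a for c, a in standards)
    den = sum(c * c for c, _ in standards)
    return num / den

def estimate_ppm(absorbance, k):
    """Invert Beer's law for the unknown concentration."""
    return absorbance / k

# (ppm, absorbance) pairs for the standard series -- illustrative values
standards = [(1.0, 0.05), (5.0, 0.25), (10.0, 0.50)]
k = fit_beers_law(standards)
print(round(estimate_ppm(0.15, k), 2))  # -> 3.0 ppm for a reading of 0.15
```

In the field the comparison is done by eye against the standard series rather than with a spectrophotometer, but the underlying linearity is what makes the visual estimate quantitative.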
NASA Astrophysics Data System (ADS)
Chirouze, J.; Boulet, G.; Jarlan, L.; Fieuzal, R.; Rodriguez, J. C.; Ezzahar, J.; Er-Raki, S.; Bigeard, G.; Merlin, O.; Garatuza-Payan, J.; Watts, C.; Chehbouni, G.
2013-01-01
Remotely sensed surface temperature can provide a good proxy for water stress level and is therefore particularly useful to estimate spatially distributed evapotranspiration. Instantaneous stress levels or instantaneous latent heat flux are deduced from the surface energy balance equation constrained by this equilibrium temperature. Pixel average surface temperature depends on two main factors: stress and vegetation fraction cover. Methods estimating stress vary according to the way they treat each factor. Two families of methods can be defined: the contextual methods, where stress levels are scaled on a given image between hot/dry and cool/wet pixels for a particular vegetation cover, and single-pixel methods which evaluate latent heat as the residual of the surface energy balance for one pixel independently from the others. Four models, two contextual (S-SEBI and a triangle method, inspired by Moran et al., 1994) and two single-pixel (TSEB, SEBS) are applied at seasonal scale over a four by four km irrigated agricultural area in semi-arid northern Mexico. Their performances, both at local and spatial standpoints, are compared relatively to energy balance data acquired at seven locations within the area, as well as a more complex soil-vegetation-atmosphere transfer model forced with true irrigation and rainfall data. Stress levels are not always well retrieved by most models, but S-SEBI as well as TSEB, although slightly biased, show good performances. Drop in model performances is observed when vegetation is senescent, mostly due to a poor partitioning both between turbulent fluxes and between the soil/plant components of the latent heat flux and the available energy. As expected, contextual methods perform well when extreme hydric and vegetation conditions are encountered in the same image (therefore, esp. in spring and early summer) while they tend to exaggerate the spread in water status in more homogeneous conditions (esp. in winter).
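The "single-pixel" family of methods mentioned above closes the surface energy balance per pixel: latent heat flux is the residual LE = Rn - G - H. A minimal sketch of that closure follows; the flux values are illustrative, not from the study.

```python
# Hedged sketch of the single-pixel residual idea behind TSEB/SEBS-type models:
# latent heat flux LE is whatever energy is left after soil heat flux G and
# sensible heat flux H are subtracted from net radiation Rn (all in W m^-2).

def latent_heat_residual(rn, g, h):
    """Latent heat flux as the residual of the surface energy balance."""
    return rn - g - h

def evaporative_fraction(le, rn, g):
    """Stress indicator: fraction of available energy used for evaporation."""
    return le / (rn - g)

le = latent_heat_residual(rn=500.0, g=50.0, h=150.0)   # -> 300.0 W m^-2
print(le, round(evaporative_fraction(le, 500.0, 50.0), 2))
```

Contextual methods differ in that they do not compute this residual pixel by pixel; instead they interpolate the stress level between the hottest/driest and coolest/wettest pixels found in the same image.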
An Efficient Method for Transferring Adult Mosquitoes during Field Tests
Keywords: Culicidae, collecting methods, blood sucking insects, field tests, hand held, efficiency, laboratory equipment, mortality rates, adults, Aedes, aspirators, test and evaluation, reprints
Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance
Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W.
2016-01-01
An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller—advanced fuzzy potential field method (AFPFM)—that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot. PMID:27123001
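The conventional potential field method that the AFPFM builds on can be sketched briefly. This is the classic attractive/repulsive-potential formulation (Khatib-style), not the paper's fuzzy-rule controller; gains and influence ranges are illustrative.

```python
import math

# Hedged sketch of the conventional potential field method: the goal exerts an
# attractive force, each obstacle within range d0 exerts a repulsive force
# derived from U_rep = 0.5 * eta * (1/d - 1/d0)^2, and the robot follows the sum.

def attractive_force(pos, goal, k=1.0):
    return tuple(k * (g - p) for p, g in zip(pos, goal))

def repulsive_force(pos, obstacle, eta=1.0, d0=2.0):
    d = math.dist(pos, obstacle)
    if d >= d0 or d == 0.0:
        return (0.0, 0.0)
    # -grad(U_rep): magnitude factor times the unit vector away from the obstacle
    mag = eta * (1.0 / d - 1.0 / d0) / d**3
    return tuple(mag * (p - o) for p, o in zip(pos, obstacle))

def net_force(pos, goal, obstacles):
    fx, fy = attractive_force(pos, goal)
    for ob in obstacles:
        rx, ry = repulsive_force(pos, ob)
        fx, fy = fx + rx, fy + ry
    return (fx, fy)

f = net_force((0.0, 0.0), (5.0, 0.0), [(1.0, 0.5)])
print(f)  # attraction toward the goal plus a push away from the nearby obstacle
```

The fuzzy layer in the paper replaces the fixed gains with rule-based ones; the rule-explosion problem it addresses arises because the number of such rules grows combinatorially with the number of linguistic input variables.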
Handbook of field methods for monitoring landbirds
C.J. Ralph; G.R. Geupel; P. Pyle; T.E. Martin; D.F. DeSante
1993-01-01
The increased attention devoted to the status and possible declines of populations of smaller species of terrestrial birds, known collectively as "landbirds," has resulted in an immediate need for specific methodology for monitoring their populations. This handbook is derived from several sources and is based on the authors' collective experiences in...
This paper summarizes and discusses recent available U.S. and European information on ammonia (NH3) emissions from swine farms and assesses the applicability for general use in the United States. The emission rates for the swine barns calculated by various methods show g...
NASA Astrophysics Data System (ADS)
Chirouze, J.; Boulet, G.; Jarlan, L.; Fieuzal, R.; Rodriguez, J. C.; Ezzahar, J.; Er-Raki, S.; Bigeard, G.; Merlin, O.; Garatuza-Payan, J.; Watts, C.; Chehbouni, G.
2014-03-01
Instantaneous evapotranspiration rates and surface water stress levels can be deduced from remotely sensed surface temperature data through the surface energy budget. Two families of methods can be defined: the contextual methods, where stress levels are scaled on a given image between hot/dry and cool/wet pixels for a particular vegetation cover, and single-pixel methods, which evaluate latent heat as the residual of the surface energy balance for one pixel independently from the others. Four models, two contextual (S-SEBI and a modified triangle method, named VIT) and two single-pixel (TSEB, SEBS) are applied over one growing season (December-May) for a 4 km × 4 km irrigated agricultural area in the semi-arid northern Mexico. Their performance, both at local and spatial standpoints, are compared relatively to energy balance data acquired at seven locations within the area, as well as an uncalibrated soil-vegetation-atmosphere transfer (SVAT) model forced with local in situ data including observed irrigation and rainfall amounts. Stress levels are not always well retrieved by most models, but S-SEBI as well as TSEB, although slightly biased, show good performance. The drop in model performance is observed for all models when vegetation is senescent, mostly due to a poor partitioning both between turbulent fluxes and between the soil/plant components of the latent heat flux and the available energy. As expected, contextual methods perform well when contrasted soil moisture and vegetation conditions are encountered in the same image (therefore, especially in spring and early summer) while they tend to exaggerate the spread in water status in more homogeneous conditions (especially in winter). Surface energy balance models run with available remotely sensed products prove to be nearly as accurate as the uncalibrated SVAT model forced with in situ data.
A Comprehensive Expedient Methods Field Manual.
1984-09-01
provide surface shelters that offer protection against the elements. These kits contain modular, expandable, and canvas shelters. "Modular and expandable... shelters and canvas tents provide all the structures needed on a bare base to provide billeting, shops, hangars, and storage... interconnecting stringer light cables. 4. Each spider box contains enough outlets to supply each tent with at least two power receptacles. 5. All equipment
Field Applicable Method to Reduce Dental Emergencies.
1987-07-31
aphthous ulcers (Yeoman, Greenspan, and Harding, 1978); anti-fungal drugs for the management of denture stomatitis (Douglas and Walker, 1973; Thomas and... In vitro studies into the use of denture base and soft liner materials as carriers for drugs in the mouth. Journal of Oral Rehabilitation, 8:131... cement. (1980) Journal of the American Dental Association, 101:669. Douglas, W. H. and Walker, D. M. (1973) Nystatin in denture liners, an alternative
Yoshikawa, Takeshi; Nakai, Hiromi
2015-01-30
Graphical processing units (GPUs) are emerging in computational chemistry to include Hartree-Fock (HF) methods and electron-correlation theories. However, ab initio calculations of large molecules face technical difficulties such as slow memory access between central processing unit and GPU and other shortfalls of GPU memory. The divide-and-conquer (DC) method, which is a linear-scaling scheme that divides a total system into several fragments, could avoid these bottlenecks by separately solving local equations in individual fragments. In addition, the resolution-of-the-identity (RI) approximation enables an effective reduction in computational cost with respect to the GPU memory. The present study implemented the DC-RI-HF code on GPUs using math libraries, which guarantee compatibility with future development of the GPU architecture. Numerical applications confirmed that the present code using GPUs significantly accelerated the HF calculations while maintaining accuracy. © 2014 Wiley Periodicals, Inc.
Path planning in uncertain flow fields using ensemble method
NASA Astrophysics Data System (ADS)
Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.
2016-10-01
An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
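The ensemble idea above can be illustrated with a toy version: parametrize the uncertain flow by a random variable, draw an ensemble, evaluate the travel time per member, and summarize the statistics. The real method solves a Pontryagin boundary value problem per realization; here, as a stated simplification, the path is fixed and the current is a single along-track speed with an assumed uniform density.

```python
import random
import statistics

# Hedged toy sketch of ensemble-based travel-time statistics under an
# uncertain current. One canonical random variable (the along-track current)
# is sampled; each realization yields one travel time.

def travel_time(distance, boat_speed, current):
    """Time to cover `distance` when the current adds to the boat speed."""
    return distance / (boat_speed + current)

def ensemble_travel_times(distance, boat_speed, n=1000, seed=0):
    rng = random.Random(seed)
    # Assumed density: current ~ Uniform(-0.5, 0.5) m/s (illustrative only)
    return [travel_time(distance, boat_speed, rng.uniform(-0.5, 0.5))
            for _ in range(n)]

times = ensemble_travel_times(distance=1000.0, boat_speed=2.0)
print(statistics.mean(times), statistics.stdev(times))
```

Note the asymmetry the statistics capture: because travel time is convex in the current, the ensemble mean exceeds the travel time of the mean current, which is exactly the kind of effect a deterministic single-forecast planner misses.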
Background field method in the gradient flow
NASA Astrophysics Data System (ADS)
Suzuki, Hiroshi
2015-10-01
In perturbative consideration of the Yang-Mills gradient flow, it is useful to introduce a gauge non-covariant term (“gauge-fixing term”) to the flow equation that gives rise to a Gaussian damping factor also for gauge degrees of freedom. In the present paper, we consider a modified form of the gauge-fixing term that manifestly preserves covariance under the background gauge transformation. It is shown that our gauge-fixing term does not affect gauge-invariant quantities as does the conventional gauge-fixing term. The formulation thus allows a background gauge covariant perturbative expansion of the flow equation that provides, in particular, a very efficient computational method of expansion coefficients in the small flow time expansion. The formulation can be generalized to systems containing fermions.
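For context, the conventional gauge-fixed flow equation (in Lüscher's standard form from the gradient-flow literature, not quoted verbatim from this paper) reads:

```latex
% Conventional gauge-fixed Yang--Mills gradient flow (Luescher's form):
\partial_t B_\mu = D_\nu G_{\nu\mu} + \alpha_0\, D_\mu \partial_\nu B_\nu,
\qquad
G_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu + [B_\mu, B_\nu].
```

The second term is the "gauge-fixing term" that supplies the Gaussian damping for gauge modes; it breaks background-gauge covariance through the ordinary divergence, and the modification described in the abstract replaces that divergence with one covariant under the background gauge transformation (the precise covariantized term used in the paper is not reproduced here).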
Inverse methods for stellarator error-fields and emission
NASA Astrophysics Data System (ADS)
Hammond, K. C.; Anichowski, A.; Brenner, P. W.; Diaz-Pacheco, R.; Volpe, F. A.; Wei, Y.; Kornbluth, Y.; Pedersen, T. S.; Raftopoulos, S.; Traverso, P.
2016-10-01
Work at the CNT stellarator at Columbia University has resulted in the development of two inverse diagnosis techniques that infer difficult-to-measure properties from simpler measurements. First, CNT's error-field is determined using a Newton-Raphson algorithm to infer coil misalignments based on measurements of flux surfaces. This is obtained by reconciling the computed flux surfaces (a function of coil misalignments) with the measured flux surfaces. Second, the plasma emissivity profile is determined based on a single CCD camera image using an onion-peeling method. This approach posits a system of linear equations relating pixel brightness to emission from a discrete set of plasma layers bounded by flux surfaces. Results for both of these techniques as applied to CNT will be shown, and their applicability to large modular coil stellarators will be discussed.
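The onion-peeling step amounts to solving a triangular linear system: the outermost chord samples only the outermost layer, so the system A e = b (path-length matrix times per-layer emissivity equals chord brightness) can be solved from the edge inward by substitution. The 3-layer geometry below is illustrative, not CNT data.

```python
# Hedged sketch of onion-peeling: solve the lower-triangular system A e = b
# from the outermost chord inward by forward substitution.

def onion_peel(A, b):
    """Per-layer emissivities e from chord brightnesses b and path lengths A."""
    n = len(b)
    e = [0.0] * n
    for i in range(n):
        # Subtract contributions of the already-solved outer layers.
        s = sum(A[i][j] * e[j] for j in range(i))
        e[i] = (b[i] - s) / A[i][i]
    return e

# Rows: chords from the plasma edge inward; columns: layers from the edge inward.
A = [[2.0, 0.0, 0.0],
     [1.0, 2.0, 0.0],
     [1.0, 1.0, 2.0]]
b = [2.0, 5.0, 7.0]
print(onion_peel(A, b))  # -> [1.0, 2.0, 2.0]
```

The same triangular structure is what makes the method well posed from a single camera image, provided the flux-surface geometry bounding the layers is known.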
Bootstrapping conformal field theories with the extremal functional method.
El-Showk, Sheer; Paulos, Miguel F
2013-12-13
The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.
Deformation methods in modelling of the inner magnetospheric electromagnetic fields
NASA Astrophysics Data System (ADS)
Toivanen, P. K.
2007-12-01
Various deformation methods have been widely used in animation and image processing. In common terms, they are mathematical representations of the deformation of an image drawn on an elastic material under stretching or compression of the material. Such a method has also been used in modelling magnetospheric magnetic fields, and it has recently been generalized to include electric fields as well. In this presentation, the theory of the deformation method and an application in the form of a new global magnetospheric electromagnetic field model are reviewed. The main focus of the presentation is on the inner magnetospheric current systems and the associated electromagnetic fields during quiet and disturbed periods. Finally, a short look at modern deformation methods in image processing is taken. These methods include Free Form Deformations and Moving Least Squares Deformations, and their future applications in magnetospheric field modelling are discussed.
Comparison of induction motor field efficiency evaluation methods
Hsu, J.S.; Kueck, J.D.; Olszewski, M.; Casada, D.A.; Otaduy, P.J.; Tolbert, L.M.
1996-10-01
Unlike testing motor efficiency in a laboratory, certain methods given in IEEE Std 112 cannot be used to evaluate motor efficiency in the field. For example, it is difficult to load a motor in the field with a dynamometer when the motor is already coupled to driven equipment. Motor efficiency field evaluation faces a different environment from that for which IEEE Std 112 was chiefly written. A field evaluation method consists of one or several basic methods according to their physical natures. The intrusiveness and accuracy of these methods are also discussed. This study is useful for field engineers seeking to select or establish a proper efficiency evaluation method by understanding the theories and error sources of the methods.
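One of the simplest basic field techniques in this family is the slip method: the load fraction is estimated from measured shaft slip relative to nameplate slip, and efficiency follows from estimated output over measured input power. The sketch below is a generic illustration of that method, not the paper's evaluation procedure, and all numbers are illustrative.

```python
# Hedged sketch of the "slip method" for in-service motors: load fraction is
# proportional to slip (measured vs. nameplate), and efficiency is estimated
# output power divided by measured electrical input power.

def slip_load_fraction(sync_rpm, measured_rpm, rated_rpm):
    """Load fraction estimated from slip relative to nameplate slip."""
    return (sync_rpm - measured_rpm) / (sync_rpm - rated_rpm)

def field_efficiency(load_fraction, rated_output_kw, input_kw):
    """Estimated output (load fraction x rated output) over measured input."""
    return load_fraction * rated_output_kw / input_kw

# Illustrative 4-pole, 60 Hz motor: 1800 rpm synchronous, 1750 rpm nameplate.
load = slip_load_fraction(sync_rpm=1800, measured_rpm=1775, rated_rpm=1750)
print(round(field_efficiency(load, rated_output_kw=37.3, input_kw=21.0), 3))
```

The method is attractive precisely because it is non-intrusive (a tachometer and a power meter suffice), but its accuracy is limited by the known inaccuracy of nameplate slip, which is one of the error sources such a study must weigh.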
Symstad, Amy J.; Wienk, Cody L.; Thorstenson, Andy
2006-01-01
The Northern Great Plains Inventory & Monitoring (I&M) Network (Network) of the National Park Service (NPS) consists of 13 NPS units in North Dakota, South Dakota, Nebraska, and eastern Wyoming. The Network is in the planning phase of a long-term program to monitor the health of park ecosystems. Plant community composition is one of the 'Vital Signs,' or indicators, that will be monitored as part of this program for three main reasons. First, plant community composition is information-rich; a single sampling protocol can provide information on the diversity of native and non-native species, the abundance of individual dominant species, and the abundance of groups of plants. Second, plant community composition is of specific management concern. The abundance and diversity of exotic plants, both absolute and relative to native species, is one of the greatest management concerns in almost all Network parks (Symstad 2004). Finally, plant community composition reflects the effects of a variety of current or anticipated stressors on ecosystem health in the Network parks including invasive exotic plants, large ungulate grazing, lack of fire in a fire-adapted system, chemical exotic plant control, nitrogen deposition, increased atmospheric carbon dioxide concentrations, and climate change. Before the Network begins its Vital Signs monitoring, a detailed plan describing specific protocols used for each of the Vital Signs must go through rigorous development and review. The pilot study on which we report here is one of the components of this protocol development. The goal of the work we report on here was to determine a specific method to use for monitoring plant community composition of the herb layer (< 2 m tall).
Learning from Participants in Field Based Research
ERIC Educational Resources Information Center
Francis, Dawn
2004-01-01
This paper takes a critically reflective look at field research done in the early career of an academic and in so doing uncovers the dilemmas of a novice researcher that are rarely acknowledged in texts that address qualitative methods. It addresses issues of power in research associations surrounding different paradigms and the ways in which the…
NASA Astrophysics Data System (ADS)
Patsourakos, S.; Georgoulis, M. K.
2017-07-01
Patsourakos et al. (Astrophys. J. 817, 14, 2016) and Patsourakos and Georgoulis (Astron. Astrophys. 595, A121, 2016) introduced a method to infer the axial magnetic field in flux-rope coronal mass ejections (CMEs) in the solar corona and farther away in the interplanetary medium. The method, based on the conservation principle of magnetic helicity, uses the relative magnetic helicity of the solar source region as input estimates, along with the radius and length of the corresponding CME flux rope. The method was initially applied to cylindrical force-free flux ropes, with encouraging results. We hereby extend our framework along two distinct lines. First, we generalize our formalism to several possible flux-rope configurations (linear and nonlinear force-free, non-force-free, spheromak, and torus) to investigate the dependence of the resulting CME axial magnetic field on input parameters and the employed flux-rope configuration. Second, we generalize our framework to both Sun-like and active M-dwarf stars hosting superflares. In a qualitative sense, we find that Earth may not experience severe atmosphere-eroding magnetospheric compression even for eruptive solar superflares with energies ≈ 10^4 times higher than those of the largest Geostationary Operational Environmental Satellite (GOES) X-class flares currently observed. In addition, the two recently discovered exoplanets with the highest Earth-similarity index, Kepler 438b and Proxima b, seem to lie in the prohibitive zone of atmospheric erosion due to interplanetary CMEs (ICMEs), except when they possess planetary magnetic fields that are much higher than that of Earth.
NASA Astrophysics Data System (ADS)
Su, Xiaoru; Shu, Longcang; Chen, Xunhong; Lu, Chengpeng; Wen, Zhonghui
2016-12-01
Interactions between surface waters and groundwater are of great significance for evaluating water resources and protecting ecosystem health. Heat as a tracer method is widely used in determination of the interactive exchange with high precision, low cost and great convenience. The flow in a river-bank cross-section occurs in vertical and lateral directions. In order to depict the flow path and its spatial distribution in bank areas, a genetic algorithm (GA) two-dimensional (2-D) heat-transport nested-loop method for variably saturated sediments, GA-VS2DH, was developed based on Microsoft Visual Basic 6.0. VS2DH was applied to model a 2-D bank-water flow field and GA was used to calibrate the model automatically by minimizing the difference between observed and simulated temperatures in bank areas. A hypothetical model was developed to assess the reliability of GA-VS2DH in inverse modeling in a river-bank system. Some benchmark tests were conducted to recognize the capability of GA-VS2DH. The results indicated that the simulated seepage velocity and parameters associated with GA-VS2DH were acceptable and reliable. Then GA-VS2DH was applied to two field sites in China with different sedimentary materials, to verify the reliability of the method. GA-VS2DH could be applied in interpreting the cross-sectional 2-D water flow field. The estimates of horizontal hydraulic conductivity at the Dawen River and Qinhuai River sites are 1.317 and 0.015 m/day, which correspond to sand and clay sediment in the two sites, respectively.
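The outer loop of a GA-calibrated model like the one described above can be illustrated with a toy analogue: a genetic algorithm searches a parameter (here a single "conductivity" K) to minimize the misfit between observed and simulated temperatures. The exponential forward model below stands in for VS2DH and is purely illustrative, as are all values.

```python
import math
import random

# Hedged toy analogue of a GA calibration loop: minimize the RMSE between
# observed temperatures and a simple forward model T(z) = 20 + 5*exp(-K*z).

def forward(K, depths):
    return [20.0 + 5.0 * math.exp(-K * z) for z in depths]

def rmse(K, depths, observed):
    sim = forward(K, depths)
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, observed)) / len(observed))

def ga_calibrate(depths, observed, pop=30, gens=40, seed=1):
    rng = random.Random(seed)
    population = [rng.uniform(0.01, 5.0) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda K: rmse(K, depths, observed))
        parents = population[:pop // 2]                    # selection: keep the fittest half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                          # crossover: blend two parents
            child += rng.gauss(0.0, 0.05)                  # mutation: small perturbation
            children.append(max(child, 1e-3))
        population = parents + children
    return min(population, key=lambda K: rmse(K, depths, observed))

depths = [0.1, 0.3, 0.6, 1.0]
observed = forward(1.2, depths)       # synthetic "truth" generated with K = 1.2
print(round(ga_calibrate(depths, observed), 2))
```

In GA-VS2DH the same structure holds, except the forward model is a full 2-D variably saturated heat-transport simulation and several parameters are calibrated simultaneously.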
Section summary: Ground-based field measurements
Nophea Sasaki
2013-01-01
Although deforestation has been the main focus of international debate in REDD+, forest degradation could produce even more carbon emissions because forest degradation can take place in any accessible forest. Accounting for emission factors requires the use of a stock-change or gain-loss approach, depending on the forests in question. Ground-based field measurements are a...
On conductance-based neural field models
Pinotsis, Dimitris A.; Leite, Marco; Friston, Karl J.
2013-01-01
This technical note introduces a conductance-based neural field model that combines biologically realistic synaptic dynamics—based on transmembrane currents—with neural field equations describing the propagation of spikes over the cortical surface. This model allows for fairly realistic inter- and intra-laminar intrinsic connections that underlie spatiotemporal neuronal dynamics. We focus on the response functions of expected neuronal states (such as depolarization) that generate observed electrophysiological signals (like LFP recordings and EEG). These response functions characterize the model's transfer functions and implicit spectral responses to (uncorrelated) input. Our main finding is that both the evoked responses (impulse response functions) and induced responses (transfer functions) show qualitative differences depending upon whether one uses a neural mass or field model. Furthermore, there are differences between the equivalent convolution and conductance models. Overall, all models reproduce a characteristic increase in frequency when inhibition is increased by increasing the rate constants of inhibitory populations. However, convolution and conductance-based models show qualitatively different changes in power, with convolution models showing decreases with increasing inhibition, while conductance models show the opposite effect. These differences suggest that conductance-based field models may be important in empirical studies of cortical gain control or pharmacological manipulations. PMID:24273508
Wind field model-based estimation of Seasat scatterometer winds
NASA Technical Reports Server (NTRS)
Long, David G.
1993-01-01
A model-based approach to estimating near-surface wind fields over the ocean from Seasat scatterometer (SASS) measurements is presented. The approach is a direct assimilation technique in which wind field model parameters are estimated directly from the scatterometer measurements of the radar backscatter of the ocean's surface using maximum likelihood principles. The wind field estimate is then computed from the estimated model parameters. The wind field model used in this approach is based on the geostrophic approximation and on simplistic assumptions about the wind field vorticity and divergence, but includes ageostrophic winds. Nine days of SASS data were processed to obtain unique wind estimates. The performance of the model-based method is compared with that of the traditional two-step method (point-wise wind retrieval followed by ambiguity removal), using both simulated radar backscatter measurements and actual SASS measurements. In the latter case the results are compared to wind fields determined using subjective ambiguity removal. While the traditional approach results in missing measurements and reduced effective swath width due to fore/aft beam cell coregistration problems, the model-based approach uses all available measurements to increase the effective swath width and to reduce data gaps. The results reveal that the model-based wind estimates have accuracy comparable to traditionally estimated winds, with less 'noise' in the directional estimates, particularly at low wind speeds.
Ocean Wave Simulation Based on Wind Field.
Li, Zhongyi; Wang, Hao
2016-01-01
Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them consider constructing the ocean surface height field from the perspective of wind force driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results showed that the new method is capable of obtaining a realistic ocean scene under the influence of wind fields at real-time rates. PMID:26808718
IR photodetector based on rectangular quantum wire in magnetic field
Jha, Nandan
2014-04-24
In this paper we study a rectangular-quantum-wire-based IR detector with a magnetic field applied along the wires. The energy spectrum of a particle in a rectangular box shows level repulsions and crossings when an external magnetic field is applied. Due to this complex level dynamics, the spacing between any two levels can be tuned by varying the magnetic field, allowing the user to change the detector parameters according to specific requirements. We numerically calculate the energy sub-band levels of a square quantum wire in a constant magnetic field along the wire and quantify the operating wavelength range that can be obtained by varying the magnetic field. We also calculate the photon absorption probability at different magnetic fields and give the efficiency at different wavelengths, assuming the transition occurs between the two lowest levels.
Classical-field methods for atom-molecule systems
NASA Astrophysics Data System (ADS)
Sahlberg, Catarina E.; Gardiner, C. W.
2013-02-01
We extend classical-field methods [Blakie et al., Adv. Phys. 57, 363 (2008)] to provide a description of atom-molecule systems. We use a model of Bose-Einstein condensation of atoms close to a Feshbach resonance, in which the tunable scattering length of the atoms is described using a system of coupled atom and molecule fields [Holland et al., Phys. Rev. Lett. 86, 1915 (2001)]. We formulate the basic theoretical methods for a coupled atom-molecule system, including the determination of the phenomenological parameters in our system, the Thomas-Fermi description of the Bose-Einstein condensate, the Bogoliubov-de Gennes equations, and the Bogoliubov excitation spectrum for a homogeneous condensed system. We apply this formalism to the special case of Bragg scattering from a uniform condensate and find that, for moderate and large scattering lengths, there is a dramatic difference in the shift of the peak of the Bragg spectra compared to that based on a structureless atom model. The result is compatible with the experimental results of Papp et al. [Phys. Rev. Lett. 101, 135301 (2008)] for Bragg scattering from a nonuniform condensate.
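For orientation, the textbook Bogoliubov excitation spectrum of a homogeneous single-species condensate, which the coupled atom-molecule treatment generalizes, has the familiar form (this is the standard single-field result, not the paper's coupled-field spectrum):

```latex
E(k) = \sqrt{\epsilon_k \left( \epsilon_k + 2 g n \right)},
\qquad \epsilon_k = \frac{\hbar^2 k^2}{2m},
\qquad g = \frac{4\pi\hbar^2 a}{m},
```

with a the s-wave scattering length and n the condensate density. At small k the spectrum is phonon-like, E ≈ ħck with c = √(gn/m); deviations from this structureless-atom form near the Feshbach resonance are what the coupled atom-molecule model captures.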
An improved reconstruction method for cosmological density fields
NASA Technical Reports Server (NTRS)
Gramann, Mirt
1993-01-01
This paper proposes some improvements to existing reconstruction methods for recovering the initial linear density and velocity fields of the universe from the present large-scale density distribution. We derive the Eulerian continuity equation in the Zel'dovich approximation and show that, by applying this equation, we can trace the evolution of the gravitational potential of the universe more exactly than is possible with previous approaches based on the Zel'dovich-Bernoulli equation. The improved reconstruction method is tested using N-body simulations. While the Zel'dovich-Bernoulli equation describes the formation of filaments, the Zel'dovich continuity equation also follows the clustering of clumps inside the filaments. Our reconstruction method recovers the true initial gravitational potential with an rms error about 3 times smaller than previous methods. We examine the recovery of the initial distribution of Fourier components and find the scale at which the recovered phases are scrambled with respect to their true initial values. Integrating the Zel'dovich continuity equation back in time, we can improve the spatial resolution of the reconstruction by a factor of about 2.
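The kinematics underlying both reconstruction schemes is the Zel'dovich approximation, which maps Lagrangian positions q to Eulerian positions x through the linear growth factor D(t) and a displacement potential Ψ (standard notation, sketched here rather than copied from the paper):

```latex
\mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\nabla_{\mathbf{q}}\Psi(\mathbf{q}),
\qquad
\frac{\partial \delta}{\partial t} + \nabla \cdot \bigl[(1+\delta)\,\mathbf{v}\bigr] = 0,
\qquad
\mathbf{v} = \dot{D}(t)\,\nabla_{\mathbf{q}}\Psi .
```

Integrating the continuity equation backward in time, with the velocity tied to the displacement field in this way, is what distinguishes the continuity-equation route from the Zel'dovich-Bernoulli approach.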
Geostatistical joint inversion of seismic and potential field methods
NASA Astrophysics Data System (ADS)
Shamsipour, Pejman; Chouteau, Michel; Giroux, Bernard
2016-04-01
Interpretation of geophysical data needs to integrate different types of information to make the proposed model geologically realistic. Multiple data sets can reduce the uncertainty and non-uniqueness present in separate geophysical data inversions. Seismic data can play an important role in mineral exploration; however, processing and interpretation of seismic data are difficult due to the complexity of hard-rock geology. On the other hand, the model recovered from potential field methods is affected by an inherent non-uniqueness caused by the nature of the physics and by the underdetermination of the problem. Joint inversion of seismic and potential field data can mitigate the weaknesses of inverting these data sets separately. A stochastic joint inversion method based on geostatistical techniques is applied to estimate density and velocity distributions from gravity and travel-time data. The method fully integrates the physical relations between density and gravity, on one hand, and slowness and travel time, on the other. As a consequence, when the data are considered noise-free, the responses from the inverted slowness and density data exactly reproduce the observed data. The required density and velocity auto- and cross-covariances are assumed to follow a linear model of coregionalization (LCM); recently developed nonlinear models of coregionalization could also be applied if needed. The kernel function for the gravity method is obtained from a closed-form formulation. For ray tracing, we use the shortest-path method (SPM) to calculate the operator matrix. The joint inversion is performed on a structured grid; however, it is possible to extend it to unstructured grids. The method is tested on two synthetic models: a model consisting of two objects buried in a homogeneous background, and a model with a stochastic distribution of parameters. The results illustrate the capability of the method to improve the inverted model compared to the separate inverted models with either gravity
A component compensation method for magnetic interferential field
NASA Astrophysics Data System (ADS)
Zhang, Qi; Wan, Chengbiao; Pan, Mengchun; Liu, Zhongyan; Sun, Xiaoyong
2017-04-01
A new component searching with scalar restriction method (CSSRM) is proposed for magnetometers, to compensate the magnetic interferential field caused by the ferromagnetic material of the platform and improve measurement performance. In CSSRM, the objective function for parameter estimation minimizes the difference between the measured and reference values of the magnetic field (components and magnitude). Two scalar compensation methods are compared with CSSRM, and the simulation results indicate that CSSRM can estimate all interferential parameters and the external magnetic field vector with high accuracy. The magnetic field magnitude and components compensated with CSSRM coincide with the true values very well. An experiment was carried out on a tri-axial fluxgate magnetometer mounted in a measurement system together with inertial sensors. After compensation, the error standard deviations of both the magnetic field components and the magnitude are reduced from thousands of nT to less than 20 nT. This suggests that CSSRM provides an effective way to improve the performance of magnetic interferential field compensation.
Oriented Connectivity-Based Method for Segmenting Solar Loops
NASA Technical Reports Server (NTRS)
Lee, J. K.; Newman, T. S.; Gary, G. A.
2005-01-01
A method based on oriented connectivity that can automatically segment arc-like structures (solar loops) from intensity images of the Sun's corona is introduced. The method is a constructive approach that uses model-guided processing to enable extraction of credible loop structures. Since the solar loops are vestiges of the solar magnetic field, the model-guided processing exploits external estimates of this field's local orientations that are derived from a physical magnetic field model. Empirical studies of the method's effectiveness are also presented. The Oriented Connectivity-Based Method is the first automatic method for the segmentation of solar loops.
New Methods of Low-Field Magnetic Resonance Imaging for Application to Traumatic Brain Injury
2015-02-15
Award Number: W81XWH-11-2-0076. Title: New Methods of Low-Field Magnetic Resonance Imaging for Application to Traumatic Brain Injury. Reporting period: 2014 - 9 Jan 2015. Excerpt: "...MRI-based in vivo free radical imaging using OMRI is impossible at high field due to the inability of the ESR pulse to penetrate into tissue, and..."
Method of using triaxial magnetic fields for making particle structures
Martin, James E.; Anderson, Robert A.; Williamson, Rodney L.
2005-01-18
A method of producing three-dimensional particle structures with enhanced magnetic susceptibility in three dimensions by applying a triaxial energetic field to a magnetic particle suspension and subsequently stabilizing said particle structure. Combinations of direct current and alternating current fields in three dimensions produce particle gel structures, honeycomb structures, and foam-like structures.
Methods of measuring soil moisture in the field
Johnson, A.I.
1962-01-01
For centuries, the amount of moisture in the soil has been of interest in agriculture. The subject of soil moisture is also of great importance to the hydrologist, forester, and soils engineer. Much equipment and many methods have been developed to measure soil moisture under field conditions. This report discusses and evaluates the various methods for measurement of soil moisture and describes the equipment needed for each method. The advantages and disadvantages of each method are discussed, and an extensive list of references is provided for those desiring to study the subject in more detail. The gravimetric method is concluded to be the most satisfactory method for most problems requiring one-time moisture-content data. The radioactive method is normally best for obtaining repeated measurements of soil moisture in place. It is concluded that all methods have some limitations and that the ideal method for measurement of soil moisture under field conditions has yet to be perfected.
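The gravimetric method recommended above reduces to a single ratio: the mass of water lost on oven-drying divided by the dry soil mass. A minimal sketch (the sample masses are illustrative):

```python
def gravimetric_water_content(wet_mass_g, dry_mass_g):
    """Gravimetric water content w = (m_wet - m_dry) / m_dry,
    i.e. mass of water lost on oven-drying per unit dry soil mass."""
    if dry_mass_g <= 0:
        raise ValueError("dry mass must be positive")
    return (wet_mass_g - dry_mass_g) / dry_mass_g

# Example: a 120 g field-moist sample that oven-dries to 100 g.
w = gravimetric_water_content(120.0, 100.0)  # 20% moisture by dry weight
```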
FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - INNOVATIVE TECHNOLOGY REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...
New Method for Solving Inductive Electric Fields in the Ionosphere
NASA Astrophysics Data System (ADS)
Vanhamäki, H.
2005-12-01
We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large-scale current systems are generally quite slow, on timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing the curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.
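The curl-free/divergence-free split that the CECS basis represents is the 2-D Helmholtz decomposition of the horizontal electric field (generic form; the CECS basis functions themselves are defined in the paper):

```latex
\mathbf{E} = \mathbf{E}_{\mathrm{pot}} + \mathbf{E}_{\mathrm{ind}},
\qquad \nabla \times \mathbf{E}_{\mathrm{pot}} = 0,
\qquad \nabla \cdot \mathbf{E}_{\mathrm{ind}} = 0,
\qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}.
```

The potential part is the prescribed input; the rotational part is the induced output, driven through Faraday's law by the time-varying magnetic field of the ionospheric currents.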
Wavelet-based hierarchical surface approximation from height fields
Sang-Mook Lee; A. Lynn Abbott; Daniel L. Schmoldt
2004-01-01
This paper presents a novel hierarchical approach to triangular mesh generation from height fields. A wavelet-based multiresolution analysis technique is used to estimate local shape information at different levels of resolution. Using predefined templates at the coarsest level, the method constructs an initial triangulation in which underlying object shapes are well...
Method for using germanium thermometers in moderately high magnetic fields
NASA Astrophysics Data System (ADS)
Roy, A.; Buchanan, D. S.; Ginsberg, D. M.
1985-03-01
We have devised a simple method for extending the zero-field calibration of a germanium resistance thermometer to include the effects of magnetic fields up to 5 T. We describe the application of this method to the use of a germanium thermometer at liquid-helium temperatures. We outline a similar procedure to take into account the temperature variation of the calibration of a Hall probe.
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor)
2010-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
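The wavelet step in the progressive-mesh records above can be illustrated with a one-level 2-D Haar transform of a height field. This is a generic sketch of wavelet-encoding a raster height field, not the transform or encoding specified in the patents:

```python
def haar2d_forward(h):
    """One-level 2-D Haar transform of an even-sized height field.
    Returns (LL, LH, HL, HH): a quarter-resolution coarse grid plus
    horizontal, vertical, and diagonal detail bands."""
    n, m = len(h), len(h[0])
    LL, LH, HL, HH = ([[0.0] * (m // 2) for _ in range(n // 2)]
                      for _ in range(4))
    for i in range(0, n, 2):
        for j in range(0, m, 2):
            a, b = h[i][j], h[i][j + 1]
            c, d = h[i + 1][j], h[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # coarse height
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

def haar2d_inverse(LL, LH, HL, HH):
    """Exact inverse: refine the coarse grid by adding the detail bands."""
    n, m = 2 * len(LL), 2 * len(LL[0])
    h = [[0.0] * m for _ in range(n)]
    for i in range(0, n, 2):
        for j in range(0, m, 2):
            ll, lh = LL[i // 2][j // 2], LH[i // 2][j // 2]
            hl, hh = HL[i // 2][j // 2], HH[i // 2][j // 2]
            h[i][j]         = ll + lh + hl + hh
            h[i][j + 1]     = ll - lh + hl - hh
            h[i + 1][j]     = ll + lh - hl - hh
            h[i + 1][j + 1] = ll - lh - hl + hh
    return h

height = [[float(i * 4 + j) for j in range(4)] for i in range(4)]
coeffs = haar2d_forward(height)
restored = haar2d_inverse(*coeffs)
```

Because the transform is exactly invertible, a decoder can render terrain from the LL band alone and progressively add detail bands per terrain block as the required level of detail increases.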
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)
2011-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
Graphical methods for quantifying macromolecules through bright field imaging.
Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D; Parvin, Bahram
2009-04-15
Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixation, ambiguities introduced by color composition and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images to gray scale and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph cut algorithm. In rare cases where the nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, the proposed method: (i) improves color decomposition, (ii) has better noise immunity, (iii) is more invariant to initial conditions and (iv) has superior computing performance.
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios due to their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods on our light field dataset, together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083
Artificial terraced field extraction based on high resolution DEMs
NASA Astrophysics Data System (ADS)
Na, Jiaming; Yang, Xin; Xiong, Liyang; Tang, Guoan
2017-04-01
With the increase of human activities, artificial landforms have become one of the main terrain features, with special geographical and hydrological value. Terraced fields, the most important artificial landscapes of the Loess Plateau, play an important role in conserving soil and water. With the development of digital terrain analysis (DTA), there is a current and future need for a robust, repeatable and cost-effective research methodology for terraced fields. In this paper, a novel method using bidirectional DEM shaded relief is proposed for terraced field identification based on high resolution DEMs, taking the Zhifanggou watershed, Shaanxi Province, as the study area. Firstly, a 1 m DEM is obtained by low-altitude aerial photogrammetry using an Unmanned Aerial Vehicle (UAV), and a 0.1 m DOM is also obtained as the test data. Then, positive and negative terrain segmentation is performed to acquire the terraced field areas. Finally, a bidirectional DEM shaded relief is simulated to extract the ridges of each terraced field stage. The method in this paper yields not only polygon features of the terraced field areas but also line features of the terraced field ridges. The accuracy is 89.7% compared with the artificial interpretation result from the DOM, and an additional experiment shows that the method has strong robustness as well as high accuracy.
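Shaded relief of the kind used above can be sketched as a Lambertian hillshade computed from DEM slope and aspect. This is a generic single-azimuth hillshade with a naive two-azimuth combination at the end; the authors' actual bidirectional-relief formulation is not reproduced here:

```python
import math

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded relief of a DEM (list of rows) with values in [0, 1].
    Central differences give slope/aspect; standard Lambertian shading."""
    az = math.radians(360.0 - azimuth_deg + 90.0)   # to math convention
    alt = math.radians(altitude_deg)
    n, m = len(dem), len(dem[0])
    shade = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            i0, i1 = max(i - 1, 0), min(i + 1, n - 1)
            j0, j1 = max(j - 1, 0), min(j + 1, m - 1)
            dzdx = (dem[i][j1] - dem[i][j0]) / (((j1 - j0) * cellsize) or 1.0)
            dzdy = (dem[i1][j] - dem[i0][j]) / (((i1 - i0) * cellsize) or 1.0)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            s = (math.sin(alt) * math.cos(slope) +
                 math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            shade[i][j] = max(0.0, s)
    return shade

def bidirectional_relief(dem, az=315.0):
    """Average shadings from two opposite azimuths, so that terrace ridges
    are highlighted on both their sunlit and shadowed sides."""
    a = hillshade(dem, azimuth_deg=az)
    b = hillshade(dem, azimuth_deg=(az + 180.0) % 360.0)
    return [[0.5 * (x + y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

flat = [[5.0] * 4 for _ in range(4)]
relief = hillshade(flat)   # flat terrain shades uniformly to sin(altitude)
```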
Double-sensor method for detection of oscillating electric field.
Ohkuma, Yasunori; Ikeyama, Taeko; Nogi, Yasuyuki
2011-04-01
An electric-field sensor consisting of thin copper plates is designed to measure an oscillating electric field produced by charge separations on a plasma column. The sensor installed in a vacuum region around plasma detects charges induced by the electric field on the copper plates. The value of the induced charges depends not only on the strength of the electric field, but also on the design of the sensor. To obtain the correct strength of the electric field, a correction factor arising from the design of the sensor must be known. The factor is calculated numerically using Laplace's equation and compared with a value measured using a uniform electric field in the frequency range of 10-500 kHz. When an external circuit is connected to the sensor to measure the induced charges, the electric field around the sensor is disturbed. Therefore, a double-sensor method for excluding a disturbed component in the measured electric field is proposed. The reliability of the double-sensor method is confirmed by measuring dipole-like and quadrupole-like electric fields. © 2011 American Institute of Physics
A Novel Method of Localization for Moving Objects with an Alternating Magnetic Field.
Gao, Xiang; Yan, Shenggang; Li, Bin
2017-04-21
Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between the problem of localizing moving objects with alternating magnetic fields and localization with a static magnetic field is rarely studied. A novel method of target localization based on coherent demodulation is proposed in this paper. The problem of localizing moving objects with an alternating magnetic field is transformed into localization with a static magnetic field. The Levenberg-Marquardt (L-M) algorithm is applied to calculate the position of the target from magnetic field data measured by a single three-component magnetic sensor. Theoretical simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:28430153
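Once the alternating-field problem has been reduced to a static one, localization amounts to fitting a dipole model to the field seen by a single three-component sensor. The sketch below uses a coarse grid search in place of the paper's Levenberg-Marquardt refinement, with the source geometry and unit conventions chosen purely for illustration; the search is restricted to z > 0 because a dipole observed at a single point has an exact +/- position ambiguity in noise-free data:

```python
import math

def dipole_field(sensor, source, moment=(0.0, 0.0, 1.0)):
    """Field of a point magnetic dipole at `source` observed at `sensor`.
    The mu_0/(4*pi) prefactor is absorbed into `moment` for illustration."""
    d = [sensor[k] - source[k] for k in range(3)]
    r = math.sqrt(sum(x * x for x in d))
    md = sum(moment[k] * d[k] for k in range(3))
    return [(3.0 * md * d[k] / (r * r) - moment[k]) / r ** 3 for k in range(3)]

def locate(sensor, measured, step=0.1):
    """Grid search for the source position minimizing the squared misfit
    between measured and modeled field components (L-M would refine this)."""
    xy_axis = [-1.0 + i * step for i in range(21)]
    z_axis = [step * i for i in range(1, 11)]          # z restricted to (0, 1]
    best, best_err = None, float("inf")
    for x in xy_axis:
        for y in xy_axis:
            for z in z_axis:
                model = dipole_field(sensor, (x, y, z))
                err = sum((model[k] - measured[k]) ** 2 for k in range(3))
                if err < best_err:
                    best, best_err = (x, y, z), err
    return best

sensor = (0.0, 0.0, 0.0)
truth = (0.5, -0.3, 0.8)
measured = dipole_field(sensor, truth)    # synthetic noise-free measurement
estimate = locate(sensor, measured)
```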
Comparison of electric field exposure measurement methods under power lines.
Korpinen, Leena; Kuisti, Harri; Tarao, Hiroo; Pääkkönen, Rauno; Elovaara, Jarmo
2014-01-01
The objective of the study was to investigate extremely low frequency (ELF) electric field exposure measurement methods under power lines. The authors compared two different methods under power lines: in Method A, the sensor was placed on a tripod; in Method B, the measurer held the meter horizontally so that its distance from him/her was at least 1.5 m. The study includes 20 measurements at three locations under 400 kV power lines. The authors used two commercial three-axis meters, EFA-3 and EFA-300. In statistical analyses, they did not find significant differences between Methods A and B. However, in the future, it is important to take into account that measurement methods can, in some cases, influence ELF electric field measurement results, and it is important to report the methods used so that the measurements can be repeated.
New light field camera based on physical based rendering tracing
NASA Astrophysics Data System (ADS)
Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung
2014-03-01
Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, this limitation has been lifted, and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of the traditional optical simulation approach to studying light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distributions and typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provides a way to link virtual scenes with real measurement results. Several images developed with the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The operational constraints, performance metrics, and computation resources needed for this newly developed light field camera technique are presented in detail.
Characterizing ice crystal growth behavior under electric field using phase field method.
He, Zhi Zhu; Liu, Jing
2009-07-01
In this article, microscale ice crystal growth behavior under an electrostatic field is investigated via a phase field method that also incorporates the effects of anisotropy and thermal noise. The competitive growth of multiple ice nuclei, as observed in existing experiments, is successfully predicted. The present approach offers a highly efficient theoretical tool for probing the freeze-injury mechanisms of biological materials due to ice formation during cryosurgery or cryopreservation when an external electric field is involved.
FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT
The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...
Individual SWCNT based ionic field effect transistor
NASA Astrophysics Data System (ADS)
Pang, Pei; He, Jin; Park, Jae Hyun; Krstic, Predrag; Lindsay, Stuart
2011-03-01
Here we report that the ionic current through a single-walled carbon nanotube (SWCNT) can be effectively gated by a perpendicular electric field from a top gate electrode, so that the device works as an ionic field-effect transistor. Both our experiments and simulations confirm that the electroosmotic flow (EOF) current is the main component of the ionic current through the SWCNT and is responsible for the gating effect. We also studied the gating efficiency as a function of solution concentration and pH and demonstrated that the device can work effectively under physiologically relevant conditions. This work opens the door to using CNT-based nanofluidics for ion and molecule manipulation. This work was supported by the DNA Sequencing Technology Program of the National Human Genome Research Institute (1RC2HG005625-01, 1R21HG004770-01), Arizona Technology Enterprises and the Biodesign Institute.
Geochemical field method for determination of nickel in plants
Reichen, L.E.
1951-01-01
The use of biogeochemical data in prospecting for nickel emphasizes the need for a simple, moderately accurate field method for the determination of nickel in plants. In order to follow leads provided by plants of unusual nickel content without loss of time, the plants should be analyzed and the results given to the field geologist promptly. The method reported in this paper was developed to meet this need. Speed is gained by eliminating the customary drying and controlled ashing; the fresh vegetation is ashed in an open dish over a gasoline stove. The ash is put into solution with hydrochloric acid and the solution buffered. A chromograph is used to make a confined spot with an aliquot of the ash solution on dimethylglyoxime reagent paper. As little as 0.025% nickel in plant ash can be determined; with a simple modification, 0.003% can be detected. Data are given comparing the results with those obtained by an accepted laboratory procedure; results by the field method are within 30% of the laboratory values. The field method for nickel in plants meets the requirements of biogeochemical prospecting with respect to accuracy, simplicity, speed, and ease of performance in the field. With experience, an analyst can make 30 determinations in an 8-hour work day in the field.
1973-08-01
placed on the development of field test kits based on two improved colorimetric methods involving the use of methylene blue and Azure A. The simplified and improved Methylene Blue Method and Azure A Method require only 5 or 6 ml of aqueous reagent and 25 ml of chloroform for analyzing one sample
Stevens, Fred J.
1992-01-01
A novel method of electric field flow fractionation for separating solute molecules from a carrier solution is disclosed. The method of the invention utilizes an electric field that is periodically reversed in polarity, in a time-dependent, wave-like manner. The parameters of the waveform, including amplitude, frequency and wave shape may be varied to optimize separation of solute species. The waveform may further include discontinuities to enhance separation.
A comparison of methods for estimating the geoelectric field
NASA Astrophysics Data System (ADS)
Weigel, R. S.
2017-02-01
The geoelectric field is the primary input used for estimation of geomagnetically induced currents (GICs) in conducting systems. We compare three methods for estimating the geoelectric field given the measured geomagnetic field at four locations in the U.S. during time intervals with average Kp in the range of 2-3 and when the measurements had few data spikes and no baseline jumps. The methods include using (1) a preexisting 1-D conductivity model, (2) a conventional 3-D frequency domain method, and (3) a robust and remote reference 3-D frequency domain method. The quality of the estimates is determined using the power spectrum (in the period range 9.1 to 18,725 s) of estimation errors along with the prediction efficiency summary statistic. It is shown that with respect to these quality metrics, Method 1 produces average out-of-sample electric field estimation errors with a variance that can be equal to or larger than the average measured variance (due to underestimation or overestimation, respectively), and Method 3 produces reliable but slightly lower quality estimates than Method 2 for the time intervals and locations considered.
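The prediction efficiency summary statistic used to score the estimates is commonly defined as one minus the ratio of the error variance to the variance of the measured field; a minimal sketch under that assumption (the variable names are illustrative, not from the paper):

```python
import numpy as np

def prediction_efficiency(measured, estimated):
    """Prediction efficiency (PE): 1 minus the ratio of the error
    variance to the variance of the measured signal.  PE = 1 means a
    perfect estimate; PE <= 0 means the error variance is as large as
    (or larger than) the variance of the measurement itself."""
    err = np.asarray(measured) - np.asarray(estimated)
    return 1.0 - np.var(err) / np.var(measured)

# A perfect estimate scores PE = 1; estimating a zero-mean signal
# with all zeros scores PE = 0.
t = np.linspace(0.0, 1.0, 1000)
e_meas = np.sin(2 * np.pi * 5 * t)  # synthetic geoelectric field
print(prediction_efficiency(e_meas, e_meas))            # → 1.0
print(prediction_efficiency(e_meas, np.zeros_like(t)))  # → 0.0
```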
Direct field method for root biomass quantification in agroecosystems.
Frasier, Ileana; Noellemeyer, Elke; Fernández, Romina; Quiroga, Alberto
2016-01-01
The present article describes a field auger sampling method for row-crop root measurements. In agroecosystems where crops are planted in rows, sampling procedures for root biomass quantification need to account for the spatial variability of the root system. This article explains in detail how to sample and how to calculate root biomass, considering the sampling position in the field and the differential weight of the root biomass in the inter-row compared to the crop row when expressing data per unit area. The method is highly reproducible in the field and requires no expensive equipment or special skills. It uses a narrow auger, thus reducing field labor with less destructive sampling, and decreases laboratory time because samples are smaller. The small sample size also facilitates washing and root separation with tweezers. The method is suitable for either winter- or summer-crop roots.
•Description of a direct field method for row-crop root measurements.
•Description of data calculation for total root-biomass estimation per unit area.
•The proposed method is simple and less labor- and time-consuming.
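The area-weighted combination of row and inter-row samples described above can be sketched as follows; the function name, parameter names, and numbers are illustrative and not taken from the published protocol:

```python
def root_biomass_per_area(row_density, interrow_density, row_width, row_spacing):
    """Area-weighted root biomass (g/m^2) for a row crop.

    row_density / interrow_density: root biomass per unit area measured
    on the crop row and in the inter-row (g/m^2).
    row_width: width of the strip treated as 'row' (m); the remainder of
    the inter-row spacing is weighted by the inter-row value.
    row_spacing: distance between adjacent crop rows (m).
    """
    f_row = row_width / row_spacing  # fraction of field area on the row
    return f_row * row_density + (1.0 - f_row) * interrow_density

# Illustrative numbers: rows 0.70 m apart, a 0.10 m row strip, and
# roots ten times denser on the row than between rows.
print(root_biomass_per_area(200.0, 20.0, 0.10, 0.70))  # ≈ 45.7 g/m^2
```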
The emergence of mixing methods in the field of evaluation.
Greene, Jennifer C
2015-06-01
When and how did the contemporary practice of mixing methods in social inquiry get started? What events transpired to catalyze the explosive conceptual development and practical adoption of mixed methods social inquiry over recent decades? How has this development progressed? What "next steps" would be most constructive? These questions are engaged in this personally narrative account of the beginnings of the contemporary mixed methods phenomenon in the field of evaluation from the perspective of a methodologist who was there.
Non-perturbative methods in relativistic field theory
Franz Gross
2013-03-01
This talk reviews relativistic methods used to compute bound and low energy scattering states in field theory, with emphasis on approaches that John Tjon and I discussed (and argued about) together. I compare the Bethe–Salpeter and Covariant Spectator equations, show some applications, and then report on some of the things we have learned from the beautiful Feynman–Schwinger technique for calculating the exact sum of all ladder and crossed ladder diagrams in field theory.
Method of determining interwell oil field fluid saturation distribution
Donaldson, Erle C.; Sutterfield, F. Dexter
1981-01-01
A method of determining the oil and brine saturation distribution in an oil field by taking electrical current and potential measurements among a plurality of open-hole wells geometrically distributed throughout the oil field. Poisson's equation is utilized to develop fluid saturation distributions from the electrical current and potential measurements. Both signal-generating equipment and chemical means are used to develop current flow among the several open-hole wells.
Li, Yunhan; Sun, Yonghai; Jaffray, David A; Yeow, John T W
2017-04-18
Field emission (FE) uniformity and the mechanism of emitter failure of freestanding carbon nanotube (CNT) arrays have not been well studied due to the difficulty of observing and quantifying the FE performance of each emitter in CNT arrays. Herein, a field emission microscopy (FEM) method based on a poly(methyl methacrylate) (PMMA) thin film is proposed to study the FE uniformity and CNT emitter failure of freestanding CNT arrays. FE uniformity of freestanding CNT arrays and different levels of FE current contributions from each emitter in the arrays are recorded and visualized. FEM patterns on the PMMA thin film contain the details of the CNT emitter tip shape and whether multiple CNT emitters occur at an emission site. Observation of real-time FE performance and the CNT emitter failure process in freestanding CNT arrays is successfully achieved using a microscopic camera. High emission currents through CNT emitters cause Joule heating and light emission followed by an explosion of the CNTs. The proposed approach is capable of resolving the major challenge of building the relationship between FE performance and CNT morphologies, which can significantly facilitate the study of FE non-uniformity, the emitter failure mechanism and the development of stable and reliable FE devices in practical applications.
Kazachenko, Maria D.; Fisher, George H.; Welsch, Brian T.
2014-11-01
Photospheric electric fields, estimated from sequences of vector magnetic field and Doppler measurements, can be used to estimate the flux of magnetic energy (the Poynting flux) into the corona and as time-dependent boundary conditions for dynamic models of the coronal magnetic field. We have modified and extended an existing method to estimate photospheric electric fields that combines a poloidal-toroidal decomposition (PTD) of the evolving magnetic field vector with Doppler and horizontal plasma velocities. Our current, more comprehensive method, which we dub the 'PTD-Doppler-FLCT Ideal' (PDFI) technique, can now incorporate Doppler velocities from non-normal viewing angles. It uses the FISHPACK software package to solve several two-dimensional Poisson equations, a faster and more robust approach than our previous implementations. Here, we describe systematic, quantitative tests of the accuracy and robustness of the PDFI technique using synthetic data from anelastic MHD (ANMHD) simulations, which have been used in similar tests in the past. We find that the PDFI method has less than 1% error in the total Poynting flux and a 10% error in the helicity flux rate at a normal viewing angle (θ = 0) and less than 25% and 10% errors, respectively, at large viewing angles (θ < 60°). We compare our results with other inversion methods at zero viewing angle and find that our method's estimates of the fluxes of magnetic energy and helicity are comparable to or more accurate than other methods. We also discuss the limitations of the PDFI method and its uncertainties.
Method of improving field emission characteristics of diamond thin films
Krauss, Alan R.; Gruen, Dieter M.
1999-01-01
A method of preparing diamond thin films with improved field emission properties. The method includes preparing a diamond thin film on a substrate, such as Mo, W, Si and Ni. An atmosphere of hydrogen (molecular or atomic) can be provided above the already deposited film to form absorbed hydrogen to reduce the work function and enhance field emission properties of the diamond film. In addition, hydrogen can be absorbed on intergranular surfaces to enhance electrical conductivity of the diamond film. The treated diamond film can be part of a microtip array in a flat panel display.
Method of improving field emission characteristics of diamond thin films
Krauss, A.R.; Gruen, D.M.
1999-05-11
A method of preparing diamond thin films with improved field emission properties is disclosed. The method includes preparing a diamond thin film on a substrate, such as Mo, W, Si and Ni. An atmosphere of hydrogen (molecular or atomic) can be provided above the already deposited film to form absorbed hydrogen to reduce the work function and enhance field emission properties of the diamond film. In addition, hydrogen can be absorbed on intergranular surfaces to enhance electrical conductivity of the diamond film. The treated diamond film can be part of a microtip array in a flat panel display. 3 figs.
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.; Reiners, S. J.
1975-01-01
A computerized procedure for predicting sonic boom from experimental near-field overpressure data has been developed. The procedure extrapolates near-field pressure signatures for a specified flight condition to the ground by the Thomas method. Near-field pressure signatures are interpolated from a data base of experimental pressure signatures. The program is an independently operated ODIN (Optimal Design Integration) program which obtains flight path information from other ODIN programs or from input.
Multigrid Methods for the Computation of Propagators in Gauge Fields
NASA Astrophysics Data System (ADS)
Kalkreuter, Thomas
Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18⁴ conjugate gradient is superior.
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.
2013-08-01
A new method for estimating the wall diffusion time of non-axisymmetric fields is developed and tested in EXTRAP T2R. The method is based on rotating external fields and on measurement of the wall frequency response. It allows the experimental estimation of the wall diffusion time for each Fourier harmonic and of the toroidal asymmetries of the wall diffusion, and it intrinsically accounts for the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with partial wall coverage and active coils of large toroidal extent. The comparison with the full-coverage results shows good agreement if the effects of the relevant sidebands are considered.
Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kipp, C. R.; Bernhard, R. J.
1985-01-01
A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2002-01-01
A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.
Field Science Ethnography: Methods For Systematic Observation on an Expedition
NASA Technical Reports Server (NTRS)
Clancey, William J.; Clancy, Daniel (Technical Monitor)
2001-01-01
The Haughton-Mars expedition is a multidisciplinary project, exploring an impact crater in an extreme environment to determine how people might live and work on Mars. The expedition seeks to understand and field test Mars facilities, crew roles, operations, and computer tools. I combine an ethnographic approach, to establish a baseline understanding of how scientists prefer to live and work when relatively unencumbered, with a participatory design approach of experimenting with procedures and tools in the context of use. This paper focuses on field methods for systematically recording and analyzing the expedition's activities. Systematic photography and time-lapse video are combined with concept mapping to organize and present information. This hybrid approach is generally applicable to the study of modern field expeditions having a dozen or more multidisciplinary participants, spread over a large terrain during multiple field seasons.
Hyperspectral Imaging and Related Field Methods: Building the Science
NASA Technical Reports Server (NTRS)
Goetz, Alexander F. H.; Steffen, Konrad; Wessman, Carol
1999-01-01
The proposal requested funds for the computing power to bring hyperspectral image processing into undergraduate and graduate remote sensing courses. This upgrade made it possible to handle more students in these oversubscribed courses and to enhance CSES' summer short course entitled "Hyperspectral Imaging and Data Analysis" provided for government, industry, university and military. Funds were also requested to build field measurement capabilities through the purchase of spectroradiometers, canopy radiation sensors and a differential GPS system. These instruments provided systematic and complete sets of field data for the analysis of hyperspectral data with the appropriate radiometric and wavelength calibration as well as atmospheric data needed for application of radiative transfer models. The proposed field equipment made it possible to team-teach a new field methods course, unique in the country, that took advantage of the expertise of the investigators rostered in three different departments, Geology, Geography and Biology.
Background field method and the cohomology of renormalization
NASA Astrophysics Data System (ADS)
Anselmi, Damiano
2016-03-01
Using the background field method and the Batalin-Vilkovisky formalism, we prove a key theorem on the cohomology of perturbatively local functionals of arbitrary ghost numbers in renormalizable and nonrenormalizable quantum field theories whose gauge symmetries are general covariance, local Lorentz symmetry, non-Abelian Yang-Mills symmetries and Abelian gauge symmetries. Interpolating between the background field approach and the usual, nonbackground approach by means of a canonical transformation, we take advantage of the properties of both approaches and prove that a closed functional is the sum of an exact functional plus a functional that depends only on the physical fields and possibly the ghosts. The assumptions of the theorem are the mathematical versions of general properties that characterize the counterterms and the local contributions to the potential anomalies. This makes the outcome a theorem on the cohomology of renormalization, rather than the whole local cohomology. The result supersedes numerous involved arguments that are available in the literature.
Field and laboratory methods in human milk research.
Miller, Elizabeth M; Aiello, Marco O; Fujita, Masako; Hinde, Katie; Milligan, Lauren; Quinn, E A
2013-01-01
Human milk is a complex and variable fluid of increasing interest to human biologists who study nutrition and health. The collection and analysis of human milk poses many practical and ethical challenges to field workers, who must balance both appropriate methodology with the needs of participating mothers and infants and logistical challenges to collection and analysis. In this review, we address various collection methods, volume measurements, and ethical considerations and make recommendations for field researchers. We also review frequently used methods for the analysis of fat, protein, sugars/lactose, and specific biomarkers in human milk. Finally, we address new technologies in human milk research, the MIRIS Human Milk Analyzer and dried milk spots, which will improve the ability of human biologists and anthropologists to study human milk in field settings.
Methane generation in tropical landfills: simplified methods and field results.
Machado, Sandro L; Carvalho, Miriam F; Gourc, Jean-Pierre; Vilar, Orencio M; do Nascimento, Julio C F
2009-01-01
This paper deals with the use of simplified methods to predict methane generation in tropical landfills. Methane recovery data obtained on site as part of a research program being carried out at the Metropolitan Landfill, Salvador, Brazil, are analyzed and used to obtain field methane generation over time. Laboratory data from MSW samples of different ages are presented and discussed, and simplified procedures to estimate the methane generation potential, L0, and the first-order biodegradation rate constant, k, are applied. The first-order decay method is used to fit field and laboratory results. It is demonstrated that, despite the assumptions and the simplicity of the adopted laboratory procedures, the values of L0 and k obtained are very close to those measured in the field, making this kind of analysis very attractive for first-approach purposes.
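The first-order decay model referred to above gives the methane generation rate of a waste mass M at time t after disposal as Q(t) = k · L0 · M · e^(−kt). A minimal sketch; the parameter values are illustrative and are not the Salvador landfill's fitted values:

```python
import math

def methane_rate(mass, L0, k, age_years):
    """First-order decay model: annual methane generation (m^3 CH4/yr)
    from a mass of waste 'age_years' years after disposal.
    mass: wet waste mass (Mg); L0: methane generation potential
    (m^3 CH4 per Mg waste); k: first-order decay constant (1/yr)."""
    return k * L0 * mass * math.exp(-k * age_years)

# Illustrative values: 1000 Mg of waste, L0 = 70 m^3/Mg, k = 0.2 /yr.
# Generation peaks at disposal and decays exponentially afterwards.
for t in (0, 5, 10):
    print(f"year {t}: {methane_rate(1000.0, 70.0, 0.2, t):.0f} m^3 CH4/yr")
```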
New Methods of Low-Field MRI for Application to Traumatic Brain Injury
2014-04-01
Principal Investigator: Matthew S. Rosen, Ph.D. Grant number: W81XWH-11-2-0076. Topics include imaging in a 100 lb scanner based on a rotating permanent magnet array, and free-radical imaging ex vivo as a path toward in vivo applications.
A field theoretical approach to the quasi-continuum method
NASA Astrophysics Data System (ADS)
Iyer, Mrinal; Gavini, Vikram
2011-08-01
The quasi-continuum method has provided many insights into the behavior of lattice defects in the past decade. However, recent numerical analysis suggests that the approximations introduced in various formulations of the quasi-continuum method lead to inconsistencies—namely, appearance of ghost forces or residual forces, non-conservative nature of approximate forces, etc.—which affect the numerical accuracy and stability of the method. In this work, we identify the source of these errors to be the incompatibility of using quadrature rules, which is a local notion, on a non-local representation of energy. We eliminate these errors by first reformulating the extended interatomic interactions into a local variational problem that describes the energy of a system via potential fields. We subsequently introduce the quasi-continuum reduction of these potential fields using an adaptive finite-element discretization of the formulation. We demonstrate that the present formulation resolves the inconsistencies present in previous formulations of the quasi-continuum method, and show using numerical examples the remarkable improvement in the accuracy of solutions. Further, this field theoretic formulation of quasi-continuum method makes mathematical analysis of the method more amenable using functional analysis and homogenization theories.
A Field Method for Investigating the Cultural Landscape.
ERIC Educational Resources Information Center
Parson, Helen E.; McKay, Ian A.
1989-01-01
Outlines a method for conducting a rural cultural-landscape field project. Notes that this activity is especially useful with students whose life experiences are primarily urban. Describes a cemetery survey, a small town reconnaissance, and rural land and building survey. Provides examples of student generated materials. (KO)
General Anisotropy Identification of Paperboard with Virtual Fields Method
J.M. Considine; F. Pierron; K.T. Turner; D.W. Vahey
2014-01-01
This work extends previous efforts in plate bending of Virtual Fields Method (VFM) parameter identification to include a general 2-D anisotropic material. Such an extension was needed for instances in which material principal directions are unknown or when specimen orientation is not aligned with material principal directions. A new fixture with a multiaxial force...
Work function measurements by the field emission retarding potential method
NASA Technical Reports Server (NTRS)
Swanson, L. W.; Strayer, R. W.; Mackie, W. A.
1971-01-01
Using the field emission retarding potential method true work functions have been measured for the following monocrystalline substrates: W(110), W(111), W(100), Nb(100), Ni(100), Cu(100), Ir(110) and Ir(111). The electron elastic and inelastic reflection coefficients from several of these surfaces have also been examined near zero primary beam energy.
Field test of a new Australian method of rangeland monitoring
Suzanne Mayne; Neil West
2001-01-01
Managers need more efficient means of monitoring changes on the lands they manage. Accordingly, a new Australian approach was field tested and compared to the Daubenmire method of assessing plant cover, litter, and bare soil. The study area was a 2 mile wide by 30.15 mile long strip, mostly covered by salt desert shrub ecosystem types, centered along the SE boundary of...
Unsaturated soil hydraulic conductivity: The field infiltrometer method
USDA-ARS?s Scientific Manuscript database
Theory: Field methods to measure the unsaturated soil hydraulic conductivity assume the presence of steady-state water flow. Soil infiltrometers are designed to apply water onto the soil surface at a constant negative pressure. Water is applied to the soil from the Mariotte device through a porous membrane...
Longitudinal Field Research Methods for Studying Processes of Organizational Change.
ERIC Educational Resources Information Center
Van de Ven, Andrew H.; Huber, George P.
1990-01-01
This and the next issue of "Organization Science" contain eight papers that deal with the process of organizational change. The five papers in this issue feature the theory of method and practice of researchers engaged in longitudinal field studies aimed at understanding processes of organizational change. (MLF)
Field Deployable Method for Arsenic Speciation in Water.
Voice, Thomas C; Flores Del Pino, Lisveth V; Havezov, Ivan; Long, David T
2011-01-01
Contamination of drinking water supplies by arsenic is a world-wide problem. Total arsenic measurements are commonly used to investigate and regulate arsenic in water, but it is well understood that arsenic occurs in several chemical forms, and these exhibit different toxicities. It is problematic to use laboratory-based speciation techniques to assess exposure, as it has been suggested that the distribution of species is not stable during transport in some types of samples. A method was developed in this study for the on-site speciation of the most toxic dissolved arsenic species: As (III), As (V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA). Development criteria included ease of use under field conditions, applicability at levels of concern for drinking water, and analytical performance. The approach is based on selective retention of arsenic species on specific ion-exchange chromatography cartridges followed by selective elution and quantification using graphite furnace atomic absorption spectroscopy. Water samples can be delivered to a set of three cartridges using either syringes or peristaltic pumps. The species distribution is stable at this point, and the cartridges can be transported to the laboratory for elution and quantitative analysis. A set of ten replicate spiked samples of each compound, having concentrations between 1 and 60 µg/L, were analyzed. Arsenic recoveries ranged from 78-112% and relative standard deviations were generally below 10%. Resolution between species was shown to be outstanding, with the only limitation being that the capacity for As (V) was limited to approximately 50 µg/L. This could be easily remedied by changes in either cartridge design or the extraction procedure. Recoveries were similar for two spiked hard groundwater samples, indicating that dissolved minerals are not likely to be problematic. These results suggest that this methodology can be used for analysis of the four primary arsenic species of concern in
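The recovery and relative-standard-deviation figures quoted above are straightforward to compute from replicate measurements of a spiked sample; a minimal sketch with hypothetical replicate data (not the study's actual measurements):

```python
import statistics

def recovery_and_rsd(measured, spiked):
    """Percent recovery (mean measured / spiked concentration) and
    relative standard deviation (RSD, stdev / mean) for a set of
    replicate measurements of a spiked sample."""
    mean = statistics.mean(measured)
    recovery_pct = 100.0 * mean / spiked
    rsd_pct = 100.0 * statistics.stdev(measured) / mean
    return recovery_pct, rsd_pct

# Hypothetical replicate results (µg/L) for a 10 µg/L As(III) spike.
reps = [9.1, 9.8, 10.2, 9.5, 9.9, 10.4, 9.0, 9.6, 10.1, 9.7]
rec, rsd = recovery_and_rsd(reps, 10.0)
print(f"recovery {rec:.1f}%, RSD {rsd:.1f}%")
```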
Field Deployable Method for Arsenic Speciation in Water
Voice, Thomas C.; Flores del Pino, Lisveth V.; Havezov, Ivan; Long, David T.
2010-01-01
Contamination of drinking water supplies by arsenic is a world-wide problem. Total arsenic measurements are commonly used to investigate and regulate arsenic in water, but it is well understood that arsenic occurs in several chemical forms, and these exhibit different toxicities. It is problematic to use laboratory-based speciation techniques to assess exposure, as it has been suggested that the distribution of species is not stable during transport in some types of samples. A method was developed in this study for the on-site speciation of the most toxic dissolved arsenic species: As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA). Development criteria included ease of use under field conditions, applicability at levels of concern for drinking water, and analytical performance. The approach is based on selective retention of arsenic species on specific ion-exchange chromatography cartridges, followed by selective elution and quantification using graphite furnace atomic absorption spectroscopy. Water samples can be delivered to a set of three cartridges using either syringes or peristaltic pumps. The species distribution is stable at this point, and the cartridges can be transported to the laboratory for elution and quantitative analysis. A set of ten replicate spiked samples of each compound, having concentrations between 1 and 60 µg/L, was analyzed. Arsenic recoveries ranged from 78–112% and relative standard deviations were generally below 10%. Resolution between species was shown to be outstanding, with the only limitation being that the capacity for As(V) was limited to approximately 50 µg/L. This could be easily remedied by changes in either cartridge design or the extraction procedure. Recoveries were similar for two spiked hard groundwater samples, indicating that dissolved minerals are not likely to be problematic. These results suggest that this methodology can be used for analysis of the four primary arsenic species of concern in drinking water.
Direct magnetic field estimation based on echo planar raw data.
Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim
2010-07-01
Gradient recalled echo echo planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities, which manifest themselves as artifacts in the images. Typically used correction methods have the common deficit that the data for the correction are acquired only once, at the beginning of the experiment, assuming that the field inhomogeneity distribution B0 does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison and demonstrates the equivalence of the k-space and the image-phase based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction, and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B0 maps in phantoms and in vivo. The k-space filtering analysis presented in this work proved to be the most sensitive method for detecting field inhomogeneities.
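The phase-image route to a field map can be illustrated with a minimal sketch: given two gradient-echo phase images at different echo times, the off-resonance frequency follows from the wrapped phase difference. This is a generic two-echo field-map estimate, not the authors' k-space filtering analysis; the function name, array shapes and values are illustrative assumptions.

```python
import numpy as np

def b0_map_from_phase(phase1, phase2, delta_te):
    """Estimate an off-resonance (B0) map in Hz from two gradient-echo
    phase images whose echo times differ by delta_te (seconds).
    Assumes the inter-echo phase evolution stays within (-pi, pi]."""
    dphi = np.angle(np.exp(1j * (phase2 - phase1)))  # wrapped difference
    return dphi / (2.0 * np.pi * delta_te)           # Hz

# Synthetic check: a uniform 50 Hz off-resonance seen at two echo times.
te1, te2 = 0.005, 0.007
true_hz = 50.0
p1 = 2 * np.pi * true_hz * te1 * np.ones((4, 4))
p2 = 2 * np.pi * true_hz * te2 * np.ones((4, 4))
est = b0_map_from_phase(p1, p2, te2 - te1)
```

In practice the phase images would come from reconstructed complex echo-planar data, and phase unwrapping would be needed for larger off-resonance values.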
Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.
Xiao Yang; Jianjiang Feng; Jie Zhou
2014-05-01
The dictionary-based orientation field estimation approach has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Recognizing that ridge orientations at different locations of a fingerprint have different characteristics, we propose a localized-dictionaries-based orientation field estimation algorithm, in which a noisy orientation patch at a location, output by a local estimation approach, is replaced by a real orientation patch from the local dictionary at the same location. A precondition for applying localized dictionaries is that the pose of the latent fingerprint must be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method markedly outperforms previous ones.
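The localized-dictionary replacement step can be sketched as a nearest-patch lookup: a noisy orientation patch is swapped for the most similar reference patch in the dictionary attached to that location. The snippet below is a simplified stand-in (it ignores pose estimation and neighbouring-patch compatibility); the distance function and data layout are assumptions.

```python
import numpy as np

def correct_orientation_patch(noisy_patch, local_dictionary):
    """Replace a noisy orientation patch (angles in radians) with the
    closest reference patch from a location-specific dictionary.
    The distance respects the pi-periodicity of ridge orientations."""
    def dist(a, b):
        d = np.abs(a - b) % np.pi
        return np.sum(np.minimum(d, np.pi - d))
    idx = int(np.argmin([dist(noisy_patch, ref) for ref in local_dictionary]))
    return local_dictionary[idx]

# Toy dictionary for one location: horizontal vs vertical ridge flow.
dictionary = [np.full((3, 3), 0.0), np.full((3, 3), np.pi / 2)]
noisy = np.full((3, 3), 0.1)  # closer to the horizontal patch
fixed = correct_orientation_patch(noisy, dictionary)
```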
Interferometric methods for mapping static electric and magnetic fields
NASA Astrophysics Data System (ADS)
Pozzi, Giulio; Beleggia, Marco; Kasama, Takeshi; Dunin-Borkowski, Rafal E.
2014-02-01
The mapping of static electric and magnetic fields using electron probes with a resolution and sensitivity that are sufficient to reveal nanoscale features in materials requires the use of phase-sensitive methods such as the shadow technique, coherent Foucault imaging and the Transport of Intensity Equation. Among these approaches, image-plane off-axis electron holography in the transmission electron microscope has acquired a prominent role thanks to its quantitative capabilities and broad range of applicability. After a brief overview of the main ideas and methods behind field mapping, we focus on theoretical models that form the basis of the quantitative interpretation of electron holographic data. We review the application of electron holography to a variety of samples (including electric fields associated with p-n junctions in semiconductors, quantized magnetic flux in superconductors and magnetization topographies in nanoparticles and other magnetic materials) and electron-optical geometries (including multiple biprism, amplitude and mixed-type set-ups). We conclude by highlighting the emerging perspectives of (i) three-dimensional field mapping using electron holographic tomography and (ii) the model-independent determination of the locations and magnitudes of field sources (electric charges and magnetic dipoles) directly from electron holographic data.
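Off-axis holography encodes the object phase on an interference carrier, and reconstruction amounts to isolating one sideband and demodulating it. A toy one-dimensional version conveys the idea; this is an illustrative sketch, not any particular instrument's reconstruction pipeline, and the carrier frequency and filter width are arbitrary choices.

```python
import numpy as np

def extract_phase_1d(intensity, carrier, x):
    """Toy 1-D sideband demodulation for an off-axis hologram:
    shift the carrier sideband to baseband, low-pass filter in
    Fourier space, and read off the phase of the result."""
    analytic = intensity * np.exp(-2j * np.pi * carrier * x)
    spectrum = np.fft.fft(analytic)
    keep = len(x) // 8            # crude low-pass: keep only low bins
    spectrum[keep:-keep] = 0.0
    return np.angle(np.fft.ifft(spectrum))

x = np.arange(256.0)
q = 0.25                          # carrier frequency, cycles per sample
phi = 0.5                         # constant object phase to recover
hologram = 2.0 + 2.0 * np.cos(2 * np.pi * q * x + phi)
phase = extract_phase_1d(hologram, q, x)
```

A real electron hologram is two-dimensional and the recovered phase must usually be unwrapped, but the Fourier-sideband structure is the same.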
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
Nielson, Gregory M.
1997-01-01
This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.
Extending methods: using Bourdieu's field analysis to further investigate taste
NASA Astrophysics Data System (ADS)
Schindel Dimick, Alexandra
2015-06-01
In this commentary on Per Anderhag, Per-Olof Wickman and Karim Hamza's article Signs of taste for science, I consider how their study is situated within the concern for the role of science education in the social and cultural production of inequality. Their article provides a finely detailed methodology for analyzing the constitution of taste within science education classrooms. Nevertheless, because the authors' socially situated methodology draws upon Bourdieu's theories, it seems equally important to extend these methods to consider how and why students make particular distinctions within a relational context—a key aspect of Bourdieu's theory of cultural production. By situating the constitution of taste within Bourdieu's field analysis, researchers can explore the ways in which students' tastes and social positionings are established and transformed through time, space, place, and their ability to navigate the field. I describe the process of field analysis in relation to the authors' paper and suggest that combining the authors' methods with a field analysis can provide a strong methodological and analytical framework in which theory and methods combine to create a detailed understanding of students' interest in relation to their context.
Field-based physiological testing of wheelchair athletes.
Goosey-Tolfrey, Victoria L; Leicht, Christof A
2013-02-01
The volume of literature on field-based physiological testing of wheelchair sports, such as basketball, rugby and tennis, is considerably smaller than that available for individual and team athletes in able-bodied (AB) sports. In analogy to the AB literature, it is recognized that performance in wheelchair sports relies not only on fitness, but also on sport-specific skills, experience and technical proficiency. However, in contrast to AB sports, two major components contribute towards 'wheeled sports' performance: the athlete and the wheelchair. It is the interaction of these two that enables wheelchair propulsion and the sporting movements required within a given sport. Like any other athlete, participants in wheelchair sports are looking for efficient ways to train and/or analyse their technique and fitness to improve their performance. Consequently, laboratory and/or field-based physiological monitoring tools used at regular intervals at key time points throughout the year must be considered to help with training evaluation. The present review examines methods available in the literature to assess wheelchair sports fitness in a field-based environment, with special attention to outcome variables, validity and reliability issues, and non-physiological influences on performance. It also lays out the context of field-based testing by providing details about the Paralympic court sports and the impacts of a disability on sporting performance. Due to the limited availability of specialized equipment for testing wheelchair-dependent participants in the laboratory, the adoption of field-based testing has become the preferred option of team coaches of wheelchair athletes. An obvious advantage of field-based testing is that large groups of athletes can be tested in less time. Furthermore, athletes are tested in their natural environment (using their normal sports wheelchair set-up and floor surface), potentially making the results of such testing
Lagrangian based methods for coherent structure detection
Allshouse, Michael R.; Peacock, Thomas
2015-09-15
There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.
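The canonical double-gyre flow mentioned above has a standard closed form, which makes it easy to reproduce as a test bed for Lagrangian methods. The sketch below uses the commonly quoted parameters (A = 0.1, ε = 0.25, period 10) and a deliberately crude forward-Euler advection step for illustration only; any of the four reviewed approaches would start from trajectories like these.

```python
import numpy as np

A, EPS, OMEGA = 0.1, 0.25, 2 * np.pi / 10  # standard double-gyre parameters

def velocity(x, y, t):
    """Time-periodic double-gyre velocity field on [0, 2] x [0, 1]."""
    a = EPS * np.sin(OMEGA * t)
    b = 1 - 2 * EPS * np.sin(OMEGA * t)
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def advect(x, y, t0, t1, steps=100):
    """Forward-Euler particle advection (coarse; for illustration only)."""
    dt = (t1 - t0) / steps
    for k in range(steps):
        u, v = velocity(x, y, t0 + k * dt)
        x, y = x + dt * u, y + dt * v
    return x, y

xf, yf = advect(1.0, 0.5, 0.0, 5.0)
```

In practice, coherent-structure detection would advect a dense grid of particles with a higher-order integrator and post-process the resulting flow map.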
Analysis of Double Ring Resonators using Method of Equating Fields
NASA Astrophysics Data System (ADS)
Althaf, Shahana
Optical ring resonators have the potential to be integral parts of large scale photonic circuits. My thesis theoretically analyzes parallel coupled double ring resonators (DRRs) in detail. The analysis is performed using the method of equating fields (MEF) which provides an in depth understanding about the transmitted and reflected light paths in the structure. Equations for the transmitted and reflected fields are derived; these equations allow for unequal ring lengths and coupling coefficients. Sanity checks including comparison with previously studied structures are performed in the final chapter in order to prove the correctness of the obtained results.
An inpainting-based deinterlacing method.
Ballester, Coloma; Bertalmío, Marcelo; Caselles, Vicent; Garrido, Luis; Marques, Adrián; Ranchin, Florent
2007-10-01
Video is usually acquired in interlaced format, where each image frame is composed of two image fields, each holding lines of one parity. However, many display devices require progressive video as input; also, many video processing tasks perform better on progressive material than on interlaced video. In the literature there exists a great number of algorithms for interlaced-to-progressive video conversion, spanning a wide tradeoff between the speed and the quality of the results. The best algorithms in terms of image quality require motion compensation; hence, they are computationally very intensive. In this paper, we propose a novel deinterlacing algorithm based on ideas from the image inpainting arena. We view the lines to interpolate as gaps that we need to inpaint. Numerically, this is implemented using a dynamic programming procedure, which ensures a complexity of O(S), where S is the number of pixels in the image. The results obtained with our algorithm compare favorably, in terms of image quality, with state-of-the-art methods, but at a lower computational cost, since we do not need to perform motion field estimation.
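The idea of treating the missing lines as gaps to fill can be illustrated with a far simpler stand-in than the paper's dynamic-programming inpainting: reconstructing each absent line from its vertical neighbours within one field. The function below is that baseline (essentially line averaging), not the proposed algorithm; the data layout (one field, known parity) is an assumption.

```python
import numpy as np

def deinterlace_field(field_lines, parity, height):
    """Rebuild a progressive frame from a single field by filling the
    missing-parity lines with the average of their vertical neighbours
    (a simple stand-in for inpainting-based interpolation)."""
    frame = np.zeros((height, field_lines.shape[1]))
    frame[parity::2] = field_lines            # copy the known lines
    for row in range(1 - parity, height, 2):  # fill the missing lines
        above = frame[row - 1] if row > 0 else frame[row + 1]
        below = frame[row + 1] if row < height - 1 else frame[row - 1]
        frame[row] = 0.5 * (above + below)
    return frame

# Even lines of a 4-line, 2-pixel-wide frame.
field = np.array([[0.0, 0.0], [2.0, 2.0]])
frame = deinterlace_field(field, parity=0, height=4)
```

An inpainting-based method would instead choose, per pixel, an interpolation direction that continues image structures across the gap, which is where the dynamic programming comes in.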
[Sub-field imaging spectrometer design based on Offner structure].
Wu, Cong-Jun; Yan, Chang-Xiang; Liu, Wei; Dai, Hu
2013-08-01
To satisfy the miniaturization, light weight, and large-field requirements of imaging spectrometers in space applications, the current optical design of imaging spectrometers with the Offner structure was analyzed, and a simple method to design an imaging spectrometer with a concave grating, based on current approaches, was given. Using the method offered, a sub-field imaging spectrometer was designed for a 400 km orbital altitude, with a 0.4-1.0 µm wavelength range, an F-number of 5, a 720 mm focal length, and a 4.3° total field of view. Optical fiber was used to transfer the image at the telescope's focal plane to three slits arranged in the same plane so as to achieve the sub-field division. A CCD detector with 1024 x 1024 pixels of 18 µm x 18 µm was used to receive the image of the three slits after dispersion. After optimization and tolerance analysis in ZEMAX, the system satisfies a 5 nm spectral resolution and a 5 m spatial resolution, and the MTF is over 0.62 at 28 lp/mm. The field of view of the system is almost 3 times that of similar instruments used in space probes.
Evanescent Field Based Photoacoustics: Optical Property Evaluation at Surfaces
Goldschmidt, Benjamin S.; Rudy, Anna M.; Nowak, Charissa A.; Tsay, Yowting; Whiteside, Paul J. D.; Hunt, Heather K.
2016-01-01
Here, we present a protocol to estimate material and surface optical properties using the photoacoustic effect combined with total internal reflection. Optical property evaluation of thin films and the surfaces of bulk materials is an important step in understanding new optical material systems and their applications. The method presented can estimate thickness and refractive index, and can use the absorptive properties of materials for detection. This metrology system uses evanescent field-based photoacoustics (EFPA), a field of research based upon the interaction of an evanescent field with the photoacoustic effect. This interaction and its resulting family of techniques allow optical properties to be probed within a few hundred nanometers of the sample surface. This optical near field allows for the highly accurate estimation of material properties on the same scale as the field itself, such as refractive index and film thickness. With the use of EFPA and its sub-techniques, such as total internal reflection photoacoustic spectroscopy (TIRPAS) and optical tunneling photoacoustic spectroscopy (OTPAS), it is possible to evaluate a material at the nanoscale in a consolidated instrument, without the need for many instruments and experiments that may be cost prohibitive. PMID:27500652
DC-based magnetic field controller
Kotter, D.K.; Rankin, R.A.; Morgan, J.P.
1994-05-31
A magnetic field controller is described for laboratory devices, and in particular a dc-operated magnetic field controller for mass spectrometers, comprising a dc power supply in combination with improvements to a Hall probe subsystem, display subsystem, preamplifier, field control subsystem, and an output stage. 1 fig.
DC-based magnetic field controller
Kotter, Dale K.; Rankin, Richard A.; Morgan, John P.
1994-01-01
A magnetic field controller for laboratory devices, and in particular a dc-operated magnetic field controller for mass spectrometers, comprising a dc power supply in combination with improvements to a Hall probe subsystem, display subsystem, preamplifier, field control subsystem, and an output stage.
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
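The greedy-pursuit core of this family of methods, together with a crude non-negativity step, can be sketched as follows. Note that this is plain orthogonal matching pursuit with clipping, not the authors' StOMP extension, their prior-information mechanism, or their compressive-sensing domain limiting; all names and the toy problem are illustrative.

```python
import numpy as np

def omp_nonneg(A, y, n_iter=10, tol=1e-10):
    """Greedy sparse reconstruction in the spirit of (St)OMP, with a
    crude non-negativity step: after each atom selection the active
    coefficients are re-fit by least squares and clipped at zero."""
    m, n = A.shape
    x = np.zeros(n)
    support = []
    residual = y.copy()
    for _ in range(n_iter):
        corr = A.T @ residual          # correlate atoms with residual
        j = int(np.argmax(corr))
        if corr[j] <= tol or j in support:
            break                      # nothing useful left to add
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        coef = np.clip(coef, 0.0, None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# Trivial sanity problem: identity dictionary, sparse non-negative target.
A = np.eye(4)
y = np.array([0.0, 3.0, 0.0, 1.0])
x = omp_nonneg(A, y)
```

StOMP proper selects all atoms above a per-stage threshold at once rather than one per iteration, which is what makes it fast on large fields.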
Field methods to measure surface displacement and strain with the Video Image Correlation method
NASA Technical Reports Server (NTRS)
Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.
1994-01-01
The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.
Markov random field method for dynamic PET image segmentation
NASA Astrophysics Data System (ADS)
Lin, Kang-Ping; Lou, Shyhliang A.; Yu, Chin-Lung; Chung, Being-Tau; Wu, Liang-Chi; Liu, Ren-Shyan
1998-06-01
In this paper, a Markov random field (MRF) clustering method for segmenting highly noisy medical images is presented. In the MRF method, the image to be segmented is analyzed probabilistically: an image model is established through the a posteriori probability density function via Bayes' theorem, involving both pixel positions and gray levels. An adaptive threshold parameter is determined during the iterative clustering process to achieve globally optimal segmentation. The presented method and other segmentation methods in use are tested on simulated images at different noise levels, and a numerical comparison is presented. The method is also applied to highly noisy positron emission tomography images, in which the diagnostic hypoxia fraction is automatically calculated. The experimental results are acceptable, and show that the presented method is suitable and robust for noisy image segmentation.
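A common way to optimize such an MRF posterior is iterated conditional modes (ICM): sweep the pixels and greedily relabel each one to minimize a data term plus a Potts smoothness term over its neighbours. The sketch below is a generic two-term ICM, not the paper's adaptive-threshold clustering; the class means, the 4-neighbourhood, and beta are illustrative choices.

```python
import numpy as np

def icm_segment(image, means, beta=1.0, n_iter=5):
    """Iterated conditional modes for a simple MRF energy: squared
    data fidelity to a class mean plus a Potts penalty beta for each
    4-neighbour whose label disagrees with the candidate label."""
    labels = np.argmin([(image - m) ** 2 for m in means], axis=0)
    h, w = image.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                costs = []
                for k, m in enumerate(means):
                    data = (image[i, j] - m) ** 2
                    smooth = sum(
                        labels[i + di, j + dj] != k
                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= i + di < h and 0 <= j + dj < w
                    )
                    costs.append(data + beta * smooth)
                labels[i, j] = int(np.argmin(costs))
    return labels

img = np.array([[0.1, 0.0, 0.9], [0.0, 0.2, 1.0]])
seg = icm_segment(img, means=[0.0, 1.0], beta=0.1)
```

ICM only finds a local optimum; the spatial prior is what lets it suppress isolated noisy pixels that a per-pixel threshold would mislabel.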
Diamond-based field sensor for nEDM experiment
NASA Astrophysics Data System (ADS)
Sharma, Sarvagya; Hovde, Chris; Beck, Douglas H.
2016-02-01
Ensembles of negatively charged nitrogen vacancy centers in diamonds are investigated as optical sensors for electric and magnetic fields in the interaction region of a neutron electric dipole moment experiment. As a first step towards measuring electric fields, the Stark shift is investigated in the ground electronic state, using optically detected magnetic resonance (ODMR) to measure hyperfine-resolved fine structure transitions. One detection approach is to modulate the electric field and demodulate the ODMR signal at the modulation frequency or its harmonic. Models indicate that the ratio of the amplitudes of these signals provides information about the magnitude of the electric field. Experiments show line shapes consistent with the models. Methods are considered for extending this technique to all-optical measurement of fields. Additionally, progress is reported towards an all-optical, fiberized sensor based on electromagnetically-induced transparency (EIT), which may be suitable for measuring magnetic fields. The design uses total internal reflection to provide a long optical path through the diamond for both the 637 nm EIT laser and a green repump laser.
NASA Astrophysics Data System (ADS)
Gressier, V.; Lacoste, V.; Martin, A.; Pepino, M.
2014-10-01
The variation in the response of instruments with neutron energy has to be determined in well-characterized monoenergetic neutron fields. The quantities associated with these fields are the neutron fluence and the mean energy of the monoenergetic neutron peak needed to determine the related dosimetric quantities. At the IRSN AMANDE facility, the reference measurement standard for neutron fluence is based on a long counter calibrated in the IRSN reference 252Cf neutron field. In this paper, the final characterization of this device is presented as well as the method used to determine the reference fluence at the calibration point in monoenergetic neutron fields.
New method of asymmetric flow field measurement in hypersonic shock tunnel.
Yan, D P; He, A Z; Ni, X W
1991-03-01
In this paper a method of large-aperture (⌀500 mm), high-sensitivity moire deflectometry is used to obtain multidirectional deflectograms of the asymmetric flow field in a hypersonic (M = 10.29) shock tunnel. At the same time, a 3-D reconstructive method for the asymmetric flow field is presented, based on the integration of the moire deflection angle and double-cubic many-knot interpolating splines; it is used to calculate the 3-D density distribution of the asymmetric flow field.
A one-field monolithic fictitious domain method for fluid-structure interactions
NASA Astrophysics Data System (ADS)
Wang, Yongxing; Jimack, Peter K.; Walkley, Mark A.
2017-04-01
In this article, we present a one-field monolithic fictitious domain (FD) method for simulation of general fluid-structure interactions (FSI). One-field means only one velocity field is solved in the whole domain, based upon the use of an appropriate L2 projection. Monolithic means the fluid and solid equations are solved synchronously (rather than sequentially). We argue that the proposed method has the same generality and robustness as FD methods with distributed Lagrange multiplier (DLM) but is significantly more computationally efficient (because of one-field) whilst being very straightforward to implement. The method is described in detail, followed by the presentation of multiple computational examples in order to validate it across a wide range of fluid and solid parameters and interactions.
Fast Field Calibration of MIMU Based on the Powell Algorithm
Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang
2014-01-01
The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The calibration rests on two key constraints: the norm of the accelerometer measurement vector is equal to the gravity magnitude, and the norm of the gyro measurement vector is equal to the magnitude of the rotational velocity input. A mathematical error model of the novel calibration is established, and the Powell algorithm is applied to resolve the error parameters by judging the convergence of the nonlinear equations. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed calibration method, which also saves time compared with the traditional approach. PMID:25177801
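The norm constraints at the heart of this calibration translate directly into a least-squares cost. The sketch below shows the accelerometer half only, with a simplified scale-and-bias model (no misalignment terms); the parameter layout is an assumption, and in practice the cost would be handed to a Powell-type minimizer such as scipy.optimize.minimize(method='Powell').

```python
import numpy as np

G = 9.80665  # gravity magnitude, m/s^2

def calibration_cost(params, raw_samples):
    """Norm-based field-calibration cost: for each static pose, the
    calibrated accelerometer vector should have norm equal to gravity.
    params = (sx, sy, sz, bx, by, bz): per-axis scale factors and
    biases; a full model would also include misalignment terms."""
    s, b = np.asarray(params[:3]), np.asarray(params[3:])
    total = 0.0
    for raw in raw_samples:
        calibrated = s * (np.asarray(raw) - b)
        total += (np.linalg.norm(calibrated) - G) ** 2
    return total

# Synthetic static poses from an ideal sensor with a 0.1 m/s^2 bias.
bias = np.array([0.1, 0.1, 0.1])
poses = [np.array([G, 0, 0]) + bias, np.array([0, 0, G]) + bias]
cost_true = calibration_cost([1, 1, 1, 0.1, 0.1, 0.1], poses)
cost_bad = calibration_cost([1, 1, 1, 0.0, 0.0, 0.0], poses)
```

The gyro half has the same shape, with the commanded rotation rate taking the place of gravity; no precision turntable alignment is required, which is what makes the scheme field-deployable.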
A self-consistent field method for galactic dynamics
NASA Technical Reports Server (NTRS)
Hernquist, Lars; Ostriker, Jeremiah P.
1992-01-01
The present study describes an algorithm for evolving collisionless stellar systems in order to investigate the evolution of systems with density profiles like the R^1/4 law, using only a few terms in the expansions. A good fit is obtained for a truncated isothermal distribution, which renders the method appropriate for galaxies with flat rotation curves. Calculations employing N of about 10^6-10^7 are straightforward on existing supercomputers, making possible simulations having significantly smoother fields than with direct methods such as tree-codes. Orbits are found in a given static or time-dependent gravitational field; the potential, φ(r, t), is revised from the resultant density, ρ(r, t). Possible scientific uses of this technique are discussed, including tidal perturbations of dwarf galaxies, the adiabatic growth of central masses in spheroidal galaxies, instabilities in realistic galaxy models, and secular processes in galactic evolution.
A Method for Evaluating Volt-VAR Optimization Field Demonstrations
Schneider, Kevin P.; Weaver, T. F.
2014-08-31
In a regulated business environment a utility must be able to validate that deployed technologies provide quantifiable benefits to the end-use customers. For traditional technologies there are well-established procedures for determining what benefits will be derived from the deployment. But for many emerging technologies, procedures for determining benefits are less clear, and in some cases completely absent. Volt-VAR Optimization is a technology that is being deployed across the nation, but there are still numerous discussions about potential benefits and how they are achieved. This paper presents a method for the evaluation and quantification of benefits of field deployments of Volt-VAR Optimization technologies. In addition to the basic methodology, the paper presents a summary of results and observations from two separate Volt-VAR Optimization field evaluations using the proposed method.
Method for imaging with low frequency electromagnetic fields
Lee, Ki H.; Xie, Gan Q.
1994-01-01
A method for imaging with low frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters, at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H is then transformed into wavefield data U. The traveltimes corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter α for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography.
The reduced basis method for the electric field integral equation
Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.
2011-06-20
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
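The two-step offline/online structure can be sketched on a toy parametrized linear system standing in for the discretized EFIE. The snapshot-SVD basis and Galerkin projection below are generic reduced-basis ingredients; the system itself is illustrative (a real EFIE system would be complex-valued and dense, with the wavenumber and incidence angle as parameters).

```python
import numpy as np

def offline_basis(solve, params, n_modes=3):
    """Offline RBM stage: collect snapshot solutions at training
    parameter values and compress them with an SVD into a reduced basis."""
    snapshots = np.column_stack([solve(p) for p in params])
    u, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :n_modes]

def online_solve(assemble, rhs, basis, p):
    """Online RBM stage: Galerkin-project the full system onto the
    reduced basis and solve the small system for a new parameter."""
    full = assemble(p)
    reduced = basis.T @ full @ basis
    coeffs = np.linalg.solve(reduced, basis.T @ rhs)
    return basis @ coeffs

# Toy parametrized SPD system A(p) x = b standing in for the EFIE.
n = 20
b = np.ones(n)
assemble = lambda p: np.diag(np.arange(1.0, n + 1)) + p * np.eye(n)
solve = lambda p: np.linalg.solve(assemble(p), b)

basis = offline_basis(solve, params=[0.1, 0.5, 1.0, 2.0], n_modes=3)
x_rb = online_solve(assemble, b, basis, p=0.7)   # unseen parameter
x_full = solve(0.7)

# With all four modes, a training-parameter solution is recovered exactly.
basis4 = offline_basis(solve, params=[0.1, 0.5, 1.0, 2.0], n_modes=4)
x_exact = online_solve(assemble, b, basis4, p=0.5)
```

The payoff is exactly the one the abstract describes: the reduced system's size is set by the number of modes, not the discretization, so many-query outputs such as an RCS sweep become cheap after the one-time offline stage.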
A geologic approach to field methods in fluvial geomorphology
Fitzpatrick, Faith A.; Thornbush, Mary J.; Allen, Casey D.
2014-01-01
A geologic approach to field methods in fluvial geomorphology is useful for understanding causes and consequences of past, present, and possible future perturbations in river behavior and floodplain dynamics. Field methods include characterizing river planform and morphology changes and floodplain sedimentary sequences over long periods of time along a longitudinal river continuum. Techniques include topographic and bathymetric surveying of fluvial landforms in valley bottoms and describing floodplain sedimentary sequences through coring, trenching, and examining pits and exposures. Historical sediment budgets that include floodplain sedimentary records can characterize past and present sources and sinks of sediment along a longitudinal river continuum. Describing paleochannels and floodplain vertical accretion deposits, estimating long-term sedimentation rates, and constructing historical sediment budgets can assist in management of aquatic resources, habitat, sedimentation, and flooding issues.
Method for imaging with low frequency electromagnetic fields
Lee, K.H.; Xie, G.Q.
1994-12-13
A method is described for imaging with low-frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H is then transformed into wavefield data U. The travel times corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter alpha for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography. 13 figures.
Bringing the Field into the Classroom: A Field Methods Course on Saudi Arabian Sign Language
ERIC Educational Resources Information Center
Stephen, Anika; Mathur, Gaurav
2012-01-01
The methodology used in one graduate-level linguistics field methods classroom is examined through the lens of the students' experiences. Four male Deaf individuals from the Kingdom of Saudi Arabia served as the consultants for the course. After brief background information about their country and its practices surrounding deaf education, both…
Simulating unsaturated flow fields based on saturation measurements
Kitterod, Nils-Otto; Finsterle, Stefan
2003-12-15
Large amounts of de-icing chemicals are applied at the airport of Oslo, Norway. These chemicals pose a potential hazard to the groundwater because the airport is located on a delta deposit over an unconfined aquifer. Under normal flow conditions, most of the chemicals degrade in the vadose zone, but during periods of intensive infiltration, the residence time of contaminants in the unsaturated zone may be too short for sufficient degradation. To assess the potential for groundwater contamination and to design remedial actions, it is essential to quantify flow velocities in the vadose zone. The main purpose of this study is to evaluate theoretical possibilities of using measurements of liquid saturation in combination with inverse modeling for the estimation of unsaturated flow velocities. The main stratigraphic units and their geometry were identified from ground penetrating radar (GPR) measurements and borehole logs. These observations are included as a priori information in the inverse modeling. The liquid saturation measurements reveal the smaller-scale heterogeneities within each stratigraphic unit. The relatively low sensitivity of flow velocities to the observable saturation limits the direct inference of hydraulic parameters. However, even an approximate estimate of flow velocities is valuable as long as the estimate is qualified by an uncertainty measure. A method referred to as simulation by Empirical Orthogonal Functions (EOF) was adapted for uncertainty propagation analyses. The EOF method is conditional in the sense that statistical moments are reproduced independent of second-order stationarity. This implies that unlikely parameter combinations are discarded from the uncertainty propagation analysis. Simple forward simulations performed with the most likely parameter set are qualitatively consistent with the apparent fast flow of contaminants from an accidental spill. A field tracer test performed close to the airport will be used as an independent
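The EOF machinery named above can be sketched compactly: given an ensemble of simulated saturation fields, the leading empirical orthogonal function is the dominant eigenvector of the anomaly covariance, here obtained by power iteration (a generic sketch, not the authors' conditional EOF implementation; the ensemble values are hypothetical).

```python
import math

def leading_eof(fields):
    """Leading empirical orthogonal function of an ensemble of flattened
    fields, via power iteration applied as A^T (A v) on the anomalies."""
    m, n = len(fields), len(fields[0])
    mean = [sum(f[i] for f in fields) / m for i in range(n)]
    anom = [[f[i] - mean[i] for i in range(n)] for f in fields]
    v = [1.0] * n
    for _ in range(200):
        proj = [sum(a[i] * v[i] for i in range(n)) for a in anom]
        w = [sum(anom[k][i] * proj[k] for k in range(m)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# a rank-1 toy ensemble: every realization is the same spatial pattern
ensemble = [[0.6, 0.8], [1.2, 1.6], [1.8, 2.4]]
pattern = leading_eof(ensemble)
```

On this rank-1 ensemble the EOF recovers the shared spatial pattern (up to sign), which is the sense in which a few EOFs can summarize the between-realization variability used in the uncertainty propagation.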
Evolutionary Based Techniques for Fault Tolerant Field Programmable Gate Arrays
NASA Technical Reports Server (NTRS)
Larchev, Gregory V.; Lohn, Jason D.
2006-01-01
The use of SRAM-based Field Programmable Gate Arrays (FPGAs) is becoming more and more prevalent in space applications. Commercial-grade FPGAs are potentially susceptible to permanently debilitating Single-Event Latchups (SELs). Repair methods based on Evolutionary Algorithms may be applied to FPGA circuits to enable successful fault recovery. This paper presents the experimental results of applying such methods to repair four commonly used circuits (quadrature decoder, 3-by-3-bit multiplier, 3-by-3-bit adder, 440-7 decoder) into which a number of simulated faults have been introduced. The results suggest that evolutionary repair techniques can improve the process of fault recovery when used instead of or as a supplement to Triple Modular Redundancy (TMR), which is currently the predominant method for mitigating FPGA faults.
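A minimal sketch of the evolutionary-repair idea, with the FPGA configuration reduced to a lookup table and a single flipped bit standing in for a simulated fault (population size, mutation scheme, and all names are illustrative, not from the paper):

```python
import random

random.seed(0)

TARGET = [0, 1, 1, 0]  # reference behaviour: a 2-input XOR lookup table

def fitness(config):
    """Number of truth-table entries matching the reference circuit."""
    return sum(1 for got, want in zip(config, TARGET) if got == want)

def repair(config, pop_size=20, generations=200):
    """Evolve a damaged configuration back toward reference behaviour."""
    pop = [config[:] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return pop[0]                      # fully repaired
        survivors = pop[:pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(len(child))] ^= 1  # flip one config bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

faulty = [0, 1, 0, 0]     # a simulated upset flipped one table entry
repaired = repair(faulty)  # almost surely recovers [0, 1, 1, 0]
```

Real repair operates on FPGA configuration bitstreams and evaluates fitness against circuit test vectors, but the select-mutate-retest loop has the same shape.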
Work function measurements by the field emission retarding potential method.
NASA Technical Reports Server (NTRS)
Strayer, R. W.; Mackie, W.; Swanson, L. W.
1973-01-01
Description of the theoretical foundation of the field electron retarding potential method, and review of its experimental application to the measurement of single crystal face work functions. The results obtained from several substrates are discussed. An interesting and useful fallout from the experimental approach described is the ability to accurately measure the elastic and inelastic reflection coefficient for impinging electrons to near zero-volt energy.
Lidar Tracking of Multiple Fluorescent Tracers: Method and Field Test
NASA Technical Reports Server (NTRS)
Eberhard, Wynn L.; Willis, Ron J.
1992-01-01
Past research and applications have demonstrated the advantages and usefulness of lidar detection of a single fluorescent tracer to track air motions. Earlier researchers performed an analytical study that showed good potential for lidar discrimination and tracking of two or three different fluorescent tracers at the same time. The present paper summarizes the multiple fluorescent tracer method, discusses its expected advantages and problems, and describes our field test of this new technique.
Tattoli, F.; Casavola, C.; Pierron, F.; Rotinat, R.; Pappalettere, C.
2011-01-17
One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat affected zones is often associated with changes in local material properties. The present work deals with the identification of material parameters governing the elasto-plastic behaviour of the fused and heat affected zones as well as the base material for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and it has been shown to be useful and much less time consuming than classical finite element model updating approaches applied to similar problems. The paper will present results and discuss the problem of selection of the weld zones for the identification.
NASA Astrophysics Data System (ADS)
Tattoli, F.; Pierron, F.; Rotinat, R.; Casavola, C.; Pappalettere, C.
2011-01-01
One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat affected zones is often associated with changes in local material properties. The present work deals with the identification of material parameters governing the elasto-plastic behaviour of the fused and heat affected zones as well as the base material for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and it has been shown to be useful and much less time consuming than classical finite element model updating approaches applied to similar problems. The paper will present results and discuss the problem of selection of the weld zones for the identification.
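The principle behind the Virtual Fields Method can be illustrated in one dimension: for a linear-elastic bar under a known load, a single virtual field turns the principle of virtual work into a direct formula for the modulus (a deliberately simplified sketch; the paper treats heterogeneous elasto-plastic weld zones).

```python
def identify_modulus(strains, dx, area, force, length):
    """1-D linear-elastic Virtual Fields Method sketch.

    Virtual displacement u*(x) = x/L gives constant virtual strain 1/L,
    so the principle of virtual work,
        integral(sigma * eps_virtual) dV = F * u*(L),
    reduces to  E * A * (1/L) * integral(eps dx) = F,
    hence       E = F * L / (A * integral(eps dx)).
    """
    integral = sum(strains) * dx   # rectangle-rule integration of strain
    return force * length / (area * integral)

# synthetic 'measured' strain field for a uniform bar: eps = F/(E*A)
E_true, A, F, L = 70e9, 1e-4, 7000.0, 1.0
measured = [F / (E_true * A)] * 100
E_identified = identify_modulus(measured, L / 100, A, F, L)
```

The attraction over finite element model updating is visible even here: the unknown parameter is obtained in one pass over the measured field, with no iterative forward solves.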
Numerical results for extended field method applications. [thin plates
NASA Technical Reports Server (NTRS)
Donaldson, B. K.; Chander, S.
1973-01-01
This paper presents the numerical results obtained when a new method of analysis, called the extended field method, was applied to several thin plate problems including one with non-rectangular geometry, and one problem involving both beams and a plate. The numerical results show that the quality of the single plate solutions was satisfactory for all cases except those involving a freely deflecting plate corner. The results for the beam and plate structure were satisfactory even though the structure had a freely deflecting corner.
Neutron Field Measurements in Phantom with Foil Activation Methods.
1986-11-29
Report DNA-TR-87-10 (AD-A192 122), Neutron Field Measurements in Phantom with Foil Activation Methods. Recoverable contents listing: SAND II Measurements in Phantom; The 5-Foil Neutron Dosimetry Method; Comparison of SAND II and Simple 5-Foil Dosimetry Methods; Thermal ... quite reasonable. The monkey phantom spectrum differs from the NBS U-235 fission spectrum in that the former has a 1/E tail plus a thermal-neutron peak.
A field calibration method to eliminate the error caused by relative tilt on roll angle measurement
NASA Astrophysics Data System (ADS)
Qi, Jingya; Wang, Zhao; Huang, Junhui; Yu, Bao; Gao, Jianmin
2016-11-01
The roll angle measurement method based on a heterodyne interferometer is an efficient technique owing to its high precision and immunity to environmental noise. The optical layout is based on a polarization-assisted conversion of the roll angle into an optical phase shift, read by a beam passing through an objective plate actuated by the roll rotation. The measurement sensitivity, or gain coefficient G, is calibrated beforehand. However, a relative tilt between the laser and the objective plate always exists in long-rail field measurements, owing to tilt of the laser and roll of the guide. This relative tilt affects the value of G and thus results in a roll angle measurement error. In this paper, a method for field calibration of G is presented to eliminate the measurement error above. The field calibration layout converts the roll angle into an optical path change (OPC) by means of a rotary table, so the roll angle can be obtained from the OPC read by a two-frequency interferometer. Together with the phase shift, an accurate G can be obtained in field measurements and the measurement error corrected. The optical system of the field calibration method is set up and experimental results are given. Compared against a Renishaw XL-80 used for calibration, the proposed field calibration method obtains an accurate G in field rail roll angle measurements.
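The calibration step amounts to fitting the gain G as the slope between known roll angles (from the rotary table and two-frequency interferometer) and the measured phase shifts; a hedged sketch assuming a zero-intercept linear model phase = G * angle:

```python
def calibrate_gain(angles_rad, phase_shifts_rad):
    """Least-squares slope through the origin for phase = G * roll_angle."""
    num = sum(a * p for a, p in zip(angles_rad, phase_shifts_rad))
    den = sum(a * a for a in angles_rad)
    return num / den

# hypothetical calibration run: the instrument's true gain here is 2.5
angles = [0.001, 0.002, 0.003, 0.004]
phases = [2.5 * a for a in angles]
G = calibrate_gain(angles, phases)
roll_estimate = 0.006 / G  # convert a later phase reading back to an angle
```

Refitting G in the field, with the tilt present, is what removes the tilt-induced bias that a laboratory-calibrated G would carry into the measurement.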
NASA Astrophysics Data System (ADS)
Boblest, S.; Meyer, D.; Wunner, G.
2014-11-01
We present a quantum Monte Carlo application for the computation of energy eigenvalues for atoms and ions in strong magnetic fields. The required guiding wave functions are obtained with the Hartree-Fock-Roothaan code described in the accompanying publication (Schimeczek and Wunner, 2014). Our method yields highly accurate results for the binding energies of symmetry subspace ground states and at the same time provides a means for quantifying the quality of the results obtained with the above-mentioned Hartree-Fock-Roothaan method.
Performance of FFT methods in local gravity field modelling
NASA Technical Reports Server (NTRS)
Forsberg, Rene; Solheim, Dag
1989-01-01
Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the method often yields excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential; in practice truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
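A flat-earth frequency-domain operation of the kind these methods rely on, sketched for a 1-D gridded profile with a plain O(n^2) DFT (illustrative only; production codes use true FFTs on 2-D grids, and the continuation height and grid are hypothetical):

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in range(n)) / n for j in range(n)]

def upward_continue(g, dx, h):
    """Flat-earth upward continuation: attenuate each Fourier component
    of a gridded gravity profile by exp(-|k| * h)."""
    n = len(g)
    G = dft(g)
    for k in range(n):
        freq = k if k <= n // 2 else k - n   # signed frequency index
        wavenum = 2 * math.pi * freq / (n * dx)
        G[k] *= math.exp(-abs(wavenum) * h)
    return [c.real for c in idft(G)]
```

The mean (zero-wavenumber) component passes unchanged while short wavelengths decay exponentially with height, the behaviour that makes gridded frequency-domain filtering so cheap compared with collocation.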
[Methods of dosimetry in evaluation of electromagnetic fields' biological action].
Rubtsova, N B; Perov, S Iu
2012-01-01
Theoretical and experimental dosimetry can be used for adequate evaluation of the effects of radiofrequency electromagnetic fields. In view of the tough electromagnetic environment in aircraft, pilots' safety is of particular topicality. The dosimetric evaluation is made from the quantitative characteristics of the EMF interaction with bio-objects depending on EM energy absorption in a unit of tissue volume or mass calculated as a specific absorbed rate (SAR) and measured in W/kg. Theoretical dosimetry employs a number of computational methods to determine EM energy, as well as the augmented method of boundary conditions, iterative augmented method of boundary conditions, moments method, generalized multipolar method, finite-element method, time domain finite-difference method, and hybrid methods combining several decision plans modeling the design philosophy of navigation, radiolocation and human systems. Because of difficulties with the experimental SAR estimate, theoretical dosimetry is regarded as the first step in analysis of the in-aircraft conditions of exposure and possible bio-effects.
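The SAR definition given above reduces to a one-line formula; the sketch assumes sigma is the tissue conductivity (S/m), e_rms the internal electric field (V/m), and density the tissue mass density (kg/m^3), with the numbers below purely illustrative:

```python
def sar(conductivity, e_rms, density):
    """Specific absorption rate in W/kg: SAR = sigma * E_rms**2 / rho."""
    return conductivity * e_rms ** 2 / density

# illustrative tissue: sigma = 0.5 S/m, E_rms = 10 V/m, rho = 1000 kg/m^3
value = sar(0.5, 10.0, 1000.0)  # 0.05 W/kg
```

The numerical methods listed in the abstract (FDTD, method of moments, and so on) differ only in how they obtain the internal E field; the conversion to SAR is always this local expression.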
ALTERNATIVE FIELD METHODS TO TREAT MERCURY IN SOIL
Ernest F. Stine Jr; Steven T. Downey
2002-08-14
U.S. Department of Energy (DOE) used large quantities of mercury in the uranium separation process from the 1950s until the late 1980s in support of national defense. Some of this mercury, as well as other hazardous metals and radionuclides, found its way into and under several buildings, into surface and subsurface soils, and into some of the surface waters. Several of these areas may pose potential health or environmental risks and must be dealt with under current environmental regulations. DOE's National Energy Technology Laboratory (NETL) awarded a contract, ''Alternative Field Methods to Treat Mercury in Soil'', to IT Group, Knoxville, TN (IT) and its subcontractor NFS, Erwin, TN to identify remedial methods to clean up mercury-contaminated high-clay-content soils using proven treatment chemistries. The sites of interest were the Y-12 National Security Complex located in Oak Ridge, Tennessee, the David Witherspoon properties located in Knoxville, Tennessee, and other similarly contaminated sites. The primary laboratory-scale contract objectives were (1) to safely retrieve and test samples of contaminated soil in an approved laboratory and (2) to determine an acceptable treatment method to ensure that the mercury does not leach from the soil above regulatory levels. The leaching requirements were to meet the TC (0.2 mg/l) and UTS (0.025 mg/l) TCLP criteria. In-situ treatments were preferred to control the potential mercury vapor emissions and liquid mercury spills associated with ex-situ treatments. All laboratory work was conducted in IT's and NFS's laboratories. Mercury-contaminated nonradioactive soil from under the Alpha 2 building in the Y-12 complex was used. This soil contained insufficient levels of leachable mercury and resulted in TCLP mercury concentrations that were similar to the applicable LDR limits. The soil was spiked at multiple levels with metallic (up to 6000 mg/l) and soluble mercury compounds (up to 500 mg/kg) to simulate expected ranges of mercury
Magnetic space-based field measurements
NASA Technical Reports Server (NTRS)
Langel, R. A.
1981-01-01
Because the near-Earth magnetic field is a complex combination of fields from outside the Earth, fields from its core, and fields from its crust, measurements from space prove to be the only practical way to obtain timely, global surveys. Due to the difficulty of making accurate vector measurements, early satellites such as Sputnik and Vanguard measured only the field magnitude. The attitude accuracy was 20 arc sec. Both the Earth's core field and the fields arising from its crust were mapped from satellite data. The standard model of the core consists of a scalar potential represented by a spherical harmonic series. Models of the crustal field are relatively new; mathematical representation is achieved in localized areas by arrays of dipoles appropriately located in the Earth's crust. Measurements of the Earth's field are used in navigation, to map charged particles in the magnetosphere, to study fluid properties in the Earth's core, to infer conductivity of the upper mantle, and to delineate regional-scale geological features.
Reconstruction of the sound field above a reflecting plane using the equivalent source method
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Jing, Wen-Qian; Zhang, Yong-Bin; Lin, Wang-Lin
2017-01-01
In practical situations, vibrating objects are usually located above a reflecting plane instead of radiating into a free field. The conventional nearfield acoustic holography (NAH) sometimes fails to identify sound sources under such situations. This paper develops two kinds of equivalent source method (ESM)-based half-space NAH to reconstruct the sound field above a reflecting plane. In the first kind of method, the half-space Green's function is introduced into the ESM-based NAH, and the sound field is reconstructed based on the condition that the surface impedance of the reflecting plane is known a priori. The second kind of method regards the reflections as being radiated by equivalent sources placed under the reflecting plane, and the sound field is reconstructed by matching the pressure on the hologram surface with the equivalent sources distributed within the vibrating object and those substituting for reflections. Thus, this kind of method is independent of the surface impedance of the reflecting plane. Numerical simulations and experiments demonstrate the feasibility of these two kinds of methods for reconstructing the sound field above a reflecting plane.
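The second kind of method can be sketched by pairing each equivalent source with an image source mirrored through the plane; the snippet assumes a rigid (perfectly reflecting) plane at z = 0 and the free-space Green's function G(r) = exp(-jkr)/(4*pi*r), with source strengths and geometry purely illustrative:

```python
import cmath
import math

def halfspace_pressure(field_pt, sources, k):
    """Pressure at field_pt from point sources above a rigid plane z = 0.

    Each reflection is represented by an image source mirrored through
    the plane, so no surface impedance is needed (rigid-plane sketch).
    """
    def green(a, b):
        r = math.dist(a, b)
        return cmath.exp(-1j * k * r) / (4 * math.pi * r)

    p = 0
    for (x, y, z), strength in sources:
        p += strength * green(field_pt, (x, y, z))    # direct path
        p += strength * green(field_pt, (x, y, -z))   # mirrored image
    return p

# one equivalent source 1 m above the plane, evaluated on the plane
p_on_plane = halfspace_pressure((0.0, 0.0, 0.0), [((0.0, 0.0, 1.0), 1.0)], 1.0)
```

In the full method the source strengths are unknowns fitted to hologram pressures by least squares; here they are given, to isolate the image-source construction.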
Methods for classification of agricultural fields in aerial sequences: a comparative study
NASA Astrophysics Data System (ADS)
Houkes, Zweitze; Chen, Haijun; Blacquiere, Jan-Friso
1998-12-01
A comparative study of a selection of classification methods for agricultural fields in sequences of aerial images is presented. The image sequences are acquired by an RGB-CCD video camera which is assumed to be on board an airplane moving linearly over the scene. The objects in the scenes being considered are agricultural fields. The classes of agricultural fields to be distinguished are determined by the type of crop, e.g. potatoes, sugar beet, wheat, etc. In order to recognize and classify these fields obtained from the aerial sequences of images, a common approach is the use of surface texture. Textural features are extracted from the images to effectively characterize the vegetation. Methods based on Circular Symmetric Auto-Regression, the Co-Occurrence Matrix, and Local Binary Patterns are selected for the comparative study. The experiments are carried out with image sequences taken from a scaled model of a landscape and a selection from the Brodatz set. A few training images are used to set up the model bases for the three methods. The methods are tested using the same regions from other images of the sequence, and other sequences of images of similar fields. Comparison of the methods is based on the confusion matrix. Sensitivity to variations in flight direction, variations in altitude, and luminance conditions is also considered.
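Of the three texture models compared, Local Binary Patterns is the simplest to sketch; the basic 3x3 operator below (not necessarily the authors' exact variant) thresholds each pixel's eight neighbours against the centre and packs the bits into a code:

```python
def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern codes for the interior pixels of a
    grayscale image given as a list of equal-length rows."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbours
    h, w = len(img), len(img[0])
    codes = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = img[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di][j + dj] >= centre:
                    code |= 1 << bit
            codes.append(code)
    return codes
```

A histogram of these codes over a field region serves as the texture feature vector that the classifier compares against the trained crop models.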
Wave Field Continuation Methods for Passive Imaging Under Deep Basins
NASA Astrophysics Data System (ADS)
Langston, C. A.
2009-12-01
The coastal plains of the central and eastern United States contain deep sections of unconsolidated to poorly consolidated sediments. These sediments mask deeper crustal and upper mantle converted phases in teleseismic receiver functions through large amplitude, near-surface reverberations, and also amplify ambient noise levels to generally reduce data signal-to-noise ratios. Removing shallow sediment wave propagation effects is critical for imaging deep lithospheric structure and will be a major hurdle to overcome when the EarthScope Transportable array and related flex array experiments are deployed within these areas. Targets include the Mississippi embayment to examine the lithosphere under a failed rift zone and along the Gulf and Atlantic coasts to illuminate the transition from continental to oceanic lithosphere. A propagator matrix formalism is used to downward continue the wave field for teleseismic P waves into the mid-crust in order to separate the upgoing S wave field from the total teleseismic response of the P wave, exposing deep Sp conversions. This method requires that the earth model from the surface to the reference depth be known. Synthetic tests show that imperfect knowledge of the earth model is not critical for calculating the upgoing P wave and downgoing P and S waves within the structure. However, the upgoing S wave field may contain large non-causal S wave arrivals before the P wave arrival. An improved earth model may be found by minimizing these non-causal arrivals. Model perturbations also show interesting effects where velocity parameters for the true model may be bracketed by stacking calculated upgoing S waves to approximately remove the non-causal arrivals. Decomposing the teleseismic wave field also yields another method for estimating the upgoing P wave that can be used in receiver function deconvolution. Upward continuation of the wave field from bedrock into the sediment section is useful for understanding the effect of thick
Modelling of induced electric fields based on incompletely known magnetic fields
NASA Astrophysics Data System (ADS)
Laakso, Ilkka; De Santis, Valerio; Cruciani, Silvano; Campi, Tommaso; Feliziani, Mauro
2017-08-01
Determining the induced electric fields in the human body is a fundamental problem in bioelectromagnetics that is important for both evaluation of safety of electromagnetic fields and medical applications. However, existing techniques for numerical modelling of induced electric fields require detailed information about the sources of the magnetic field, which may be unknown or difficult to model in realistic scenarios. Here, we show how induced electric fields can accurately be determined in the case where the magnetic fields are known only approximately, e.g. based on field measurements. The robustness of our approach is shown in numerical simulations for both idealized and realistic scenarios featuring a personalized MRI-based head model. The approach allows for modelling of the induced electric fields in biological bodies directly based on real-world magnetic field measurements.
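For the idealized case of a spatially uniform time-varying B, Faraday's law gives the induced azimuthal E field in closed form, the kind of analytic case a dosimetry solver should reproduce (a textbook special case, not the paper's method; symbols and values are illustrative):

```python
def induced_e_field(radius, db_dt):
    """Induced azimuthal E (V/m) on a circle of given radius inside a
    spatially uniform B(t): Faraday's law gives E = (r/2) * dB/dt."""
    return 0.5 * radius * db_dt

# 0.1 m path radius, dB/dt = 2 T/s -> E = 0.1 V/m
e = induced_e_field(0.1, 2.0)
```

Realistic modelling replaces this symmetry argument with a numerical solve over a heterogeneous body model, with the approximately known measured B as the source term.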
A telluric method for natural field induced polarization studies
NASA Astrophysics Data System (ADS)
Zorin, Nikita; Epishkin, Dmitrii; Yakovlev, Andrey
2016-12-01
Natural field induced polarization (NFIP) is a branch of low-frequency electromagnetics designed for detection of buried polarizable objects from magnetotelluric (MT) data. The conventional approach to the method deals with normalized MT apparent resistivity. We show that it is more favorable to extract the IP effect from solely electric (telluric) transfer functions instead. For lateral localization of polarizable bodies it is convenient to work with the telluric tensor determinant, which does not depend on the rotation of the receiving electric dipoles. Applicability of the new method was verified in the course of a large-scale field research. The field work was conducted in a well-explored area in East Kazakhstan known for the presence of various IP sources such as graphite, magnetite, and sulfide mineralization. A new multichannel processing approach allowed the determination of the telluric tensor components with very good accuracy. This holds out hope that in some cases NFIP data may be used not only for detection of polarizable objects, but also for a rough estimation of their spectral IP characteristics.
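The rotation invariance that motivates working with the telluric tensor determinant is easy to verify numerically: det(R T R^T) = det(T) for any rotation R. A generic 2x2 sketch with hypothetical tensor values:

```python
import math

def rotate_tensor(t, theta):
    """Similarity transform R T R^T of a 2x2 telluric tensor, as induced
    by rotating the receiving electric dipoles through angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    r = [[c, -s], [s, c]]
    rt = [[c, s], [-s, c]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(2))
                 for j in range(2)] for i in range(2)]

    return matmul(matmul(r, t), rt)

def det2(t):
    return t[0][0] * t[1][1] - t[0][1] * t[1][0]

telluric = [[1.2, 0.3], [-0.4, 0.9]]   # hypothetical tensor components
```

Because the determinant is unchanged by dipole orientation, a frequency-dependent dip in it can be attributed to the IP response of the subsurface rather than to survey geometry.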
Pseudorange Measurement Method Based on AIS Signals
Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng
2017-01-01
In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurements solution is presented in this paper. Through the mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. Monte Carlo simulation was carried out to compare the accuracy of the zero-crossing and differential peak, which are two timestamp detection methods in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to achieve the fusion of the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system. PMID:28531153
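The zero-crossing timestamp detector compared in the simulations can be sketched with linear interpolation between the two samples that bracket the crossing (an illustrative reimplementation, not the prototype firmware; sampling rate and waveform are hypothetical):

```python
def zero_crossing_time(samples, fs):
    """First rising zero-crossing instant (seconds) of a sampled signal,
    estimated by linear interpolation between the bracketing samples."""
    for i in range(1, len(samples)):
        if samples[i - 1] < 0 <= samples[i]:
            frac = -samples[i - 1] / (samples[i] - samples[i - 1])
            return (i - 1 + frac) / fs
    return None   # no rising crossing found

# crossing lies halfway between samples 1 and 2 at fs = 1 kHz
t = zero_crossing_time([-1.0, -0.5, 0.5, 1.0], 1000.0)
```

Multiplying the recovered timestamp offset by the speed of light gives a pseudorange; the paper's optimal estimator then fuses such raw detections to suppress noise at low SNR.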
A novel colorimetric method for field arsenic speciation analysis.
Hu, Shan; Lu, Jinsuo; Jing, Chuanyong
2012-01-01
Accurate on-site determination of arsenic (As) concentration as well as its speciation presents a great environmental challenge, especially to developing countries. To meet the need of routine field monitoring, we developed a rapid colorimetric method with a wide dynamic detection range and high precision. The novel application of KMnO4 and CH4N2S as effective As(III) oxidant and As(V) reductant, respectively, in the formation of molybdenum blue complexes enabled the differentiation of As(III) and As(V). The detection limit of the method was 8 microg/L with a linear range (R2 = 0.998) of four orders of magnitude in total As concentrations. The As speciation in groundwater samples determined with the colorimetric method in the field was consistent with the results using high performance liquid chromatography atomic fluorescence spectrometry (HPLC-AFS), as evidenced by a linear correlation in paired analysis with a slope of 0.9990-0.9997 (p < 0.0001, n = 28). Recoveries of 96%-116% for total As, 85%-122% for As(III), and 88%-127% for As(V) were achieved for groundwater samples with a total As concentration range of 100-800 microg/L. The colorimetric result showed that 3.61 g/L As(III) existed as the only As species in a real industrial wastewater, which was in good agreement with the HPLC-AFS result of 3.56 g/L As(III). No interference with the color development was observed in the presence of sulfate, phosphate, silicate, humic acid, and heavy metals from the complex water matrix. This accurate, sensitive, and easy-to-use method is especially suitable for field As determination.
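Quantitation with a molorimetric method of this kind rests on a linear absorbance-concentration calibration; a minimal sketch with ordinary least squares (the standards and coefficients below are hypothetical, not the paper's data):

```python
def fit_line(x, y):
    """Ordinary least squares fit y = a*x + b for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def concentration(absorbance, slope, intercept):
    """Invert the calibration to read a sample concentration."""
    return (absorbance - intercept) / slope

# hypothetical standards (microg/L) and their measured absorbances
std_conc = [0.0, 100.0, 200.0, 400.0]
std_abs = [0.01, 0.21, 0.41, 0.81]
a, b = fit_line(std_conc, std_abs)
sample = concentration(0.41, a, b)   # reads back 200 microg/L
```

Speciation then follows by difference: As(V) is measured directly, total As after KMnO4 oxidation, and As(III) as the difference of the two readings.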
A method of analysis of distributions of local electric fields in composites
NASA Astrophysics Data System (ADS)
Kolesnikov, V. I.; Yakovlev, V. B.; Bardushkin, V. V.; Lavrov, I. V.; Sychev, A. P.; Yakovleva, E. N.
2016-03-01
A method of prediction of distributions of local electric fields in composite media based on analysis of the tensor operators of the concentration of intensity and induction is proposed. Both general expressions and the relations for calculating these operators are obtained in various approximations. The analytical expressions are presented for the operators of the concentration of electric fields in various types of inhomogeneous structures obtained in the generalized singular approximation.
Field Methods for the Study of Slope and Fluvial Processes
Leopold, Luna Bergere; Leopold, Luna Bergere
1967-01-01
In Belgium during the summer of 1966 the Commission on Slopes and the Commission on Applied Geomorphology of the International Geographical Union sponsored a joint symposium, with field excursions, and meetings of the two commissions. As a result of the conference and associated discussions, the participants expressed the view that it would be a contribution to scientific work relating to the subject area if the Commission on Applied Geomorphology could prepare a small manual describing the methods of field investigation being used by research scientists throughout the world in the study of various aspects of slope development and fluvial processes. The Commission then assumed this responsibility and asked as many persons as were known to be working on this subject to contribute whatever they wished in the way of descriptions of methods being employed. The purpose of the present manual is to show the variety of study methods now in use, to describe from the experience gained the limitations and advantages of different techniques, and to give pertinent detail which might be useful to other investigators. Some details that would be useful to know are not included in scientific publications, but in a manual on methods the details of how best to use a method have a place. Various persons have learned certain things which cannot be done, as well as some methods that are successful. It is our hope that comparison of methods tried will give the reader suggestions as to how a particular method might best be applied to his own circumstance. The manual does not purport to include methods used by all workers. In particular, it does not interfere with a more systematic treatment of the subject (1) or with various papers already published in the present journal. In fact we are sure that there are pertinent research methods that we do not know of, and the Commission would be glad to receive additions and other ideas from those who find they have something to contribute. Also, the
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Zhang, Zhiyong; Wang, Kaiyu; Cai, Shuhui; Chen, Zhong
2014-08-01
Three-dimensional (3D) NMR plays an important role in structural elucidations of complex samples, whereas difficulty remains in its applications to inhomogeneous fields. Here, we propose an NMR approach based on intermolecular zero-quantum coherences (iZQCs) to obtain high-resolution 3D J-resolved-COSY spectra in inhomogeneous fields. Theoretical analyses are presented for verifying the proposed method. Experiments on a simple chemical solution and a complex brain phantom are performed under non-ideal field conditions to show the ability of the proposed method. This method is an application of iZQCs to high-resolution 3D NMR, and is useful for studies of complex samples in inhomogeneous fields.
Magnetic field reconstruction based on sunspot oscillations
NASA Astrophysics Data System (ADS)
Löhner-Böttcher, J.; Bello González, N.; Schmidt, W.
2016-11-01
The magnetic field of a sunspot guides magnetohydrodynamic waves toward higher atmospheric layers. In the upper photosphere and lower chromosphere, wave modes with periods longer than the acoustic cut-off period become evanescent. The cut-off period essentially changes due to the atmospheric properties, e.g., increases for larger zenith inclinations of the magnetic field. In this work, we aim at introducing a novel technique of reconstructing the magnetic field inclination on the basis of the dominating wave periods in the sunspot chromosphere and upper photosphere. On 2013 August 21, we observed an isolated, circular sunspot (NOAA11823) for 58 min in a purely spectroscopic multi-wavelength mode with the Interferometric Bidimensional Spectro-polarimeter (IBIS) at the Dunn Solar Telescope. By means of a wavelet power analysis, we retrieved the dominating wave periods and reconstructed the zenith inclinations in the chromosphere and upper photosphere. The results are in good agreement with the lower photospheric HMI magnetograms. The sunspot's magnetic field in the chromosphere inclines from almost vertical (0°) in the umbra to around 60° in the outer penumbra. With increasing altitude in the sunspot atmosphere, the magnetic field of the penumbra becomes less inclined. We conclude that the reconstruction of the magnetic field topology on the basis of sunspot oscillations yields consistent and conclusive results. The technique opens up a new possibility to infer the magnetic field inclination in the solar chromosphere.
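The reconstruction technique above rests on the standard result that the effective acoustic cut-off period lengthens with the zenith inclination of the field, roughly as P_c = P_c0 / cos(θ), so that the dominant period maps back to an inclination. A minimal sketch of that inversion is given below; the function name and the vertical cut-off period of ~196 s are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def inclination_from_period(p_dom, p_cutoff_vertical=196.0):
    """Estimate the magnetic field zenith inclination (degrees) from the
    dominant wave period p_dom (seconds), assuming the effective acoustic
    cut-off period scales as P_c = P_c0 / cos(theta).  The vertical
    cut-off period (~196 s) is an assumed representative value."""
    p_dom = np.asarray(p_dom, dtype=float)
    # ratio > 1 (dominant period shorter than the vertical cut-off)
    # is clipped, yielding a vertical field
    ratio = np.clip(p_cutoff_vertical / p_dom, -1.0, 1.0)
    return np.degrees(np.arccos(ratio))

# Shorter (3-min) umbral periods map to nearly vertical field;
# longer (5-min) penumbral periods map to inclined field.
print(inclination_from_period([200.0, 300.0]))
```

In this picture, the wavelet power analysis supplies p_dom per pixel, and the arccos inversion yields an inclination map comparable to photospheric magnetograms.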
METHOD AND APPARATUS FOR TRAPPING IONS IN A MAGNETIC FIELD
Luce, J.S.
1962-04-17
A method and apparatus are described for trapping ions within an evacuated container and within a magnetic field utilizing dissociation and/or ionization of molecular ions to form atomic ions and energetic neutral particles. The atomic ions are magnetically trapped as a result of a change of charge-to-mass ratio. The molecular ions are injected into the container and into the path of an energetic carbon arc discharge which dissociates and/or ionizes a portion of the molecular ions into atomic ions and energetic neutrals. The resulting atomic ions are trapped by the magnetic field to form a circulating beam of atomic ions, and the energetic neutrals pass out of the system and may be utilized in a particle accelerator. (AEC)
Magnetic field adjustment structure and method for a tapered wiggler
Halbach, Klaus
1988-03-01
An improved method and structure is disclosed for adjusting the magnetic field generated by a group of electromagnet poles spaced along the path of a charged particle beam to compensate for energy losses in the charged particles which comprises providing more than one winding on at least some of the electromagnet poles; connecting one respective winding on each of several consecutive adjacent electromagnet poles to a first power supply, and the other respective winding on the electromagnet pole to a different power supply in staggered order; and independently adjusting one power supply to independently vary the current in one winding on each electromagnet pole in a group whereby the magnetic field strength of each of a group of electromagnet poles may be changed in smaller increments.
NASA Astrophysics Data System (ADS)
Nishida, Hidetoshi
In order to reconstruct arbitrarily shaped incompressible velocity fields with noise, a new data-processing fluid dynamics (DFD) method based upon the seamless immersed boundary method is proposed. The noisy velocity field is reconstructed via the Helmholtz decomposition. The performance of the DFD is demonstrated first for the reconstruction of velocity fields with noise and erroneous vectors. Also, the seamless immersed boundary method is incorporated into the velocity reconstruction for complicated flow geometries. Some fundamental flow fields, i.e., square cavity flows with a circular cylinder and with a square cylinder, are considered. As a result, it is concluded that the present DFD based upon the seamless immersed boundary method is a very versatile technique for reconstructing arbitrarily shaped incompressible velocity fields with noise.
Magnetic irreversibility: An important amendment in the zero-field-cooling and field-cooling method
NASA Astrophysics Data System (ADS)
Teixeira Dias, Fábio; das Neves Vieira, Valdemar; Esperança Nunes, Sabrina; Pureur, Paulo; Schaf, Jacob; Fernanda Farinela da Silva, Graziele; de Paiva Gouvêa, Cristol; Wolff-Fabris, Frederik; Kampert, Erik; Obradors, Xavier; Puig, Teresa; Roa Rovira, Joan Josep
2016-02-01
The present work reports on experimental procedures to correct significant deviations of magnetization data, caused by magnetic relaxation, due to small field cycling by sample transport in the inhomogeneous applied magnetic field of commercial magnetometers. The extensively used method for measuring the magnetic irreversibility by first cooling the sample in zero field, switching on a constant applied magnetic field and measuring the magnetization M(T) while slowly warming the sample, and subsequently measuring M(T) while slowly cooling it back in the same field, is very sensitive even to small displacements of the magnetization curve. In our melt-processed YBaCuO superconducting sample we observed displacements of the irreversibility limit up to 7 K in high fields. Such displacements are detected only on confronting the magnetic irreversibility limit with other measurements, for instance zero resistance, in which the sample remains fixed and so is not affected by such relaxation. We measured the magnetic irreversibility, Tirr(H), using a vibrating sample magnetometer (VSM) from Quantum Design. The zero resistance data, Tc0(H), were obtained using a PPMS from Quantum Design. On confronting our irreversibility lines with those of zero resistance, we observed that the Tc0(H) data fell several kelvin above the Tirr(H) data, which obviously contradicts the well known properties of superconductivity. In order to get consistent Tirr(H) data in the H-T plane, it was necessary to perform many additional measurements as a function of the amplitude of the sample transport and extrapolate the Tirr(H) data for each applied field to zero amplitude.
A Method to Localize RF B1 Field in High-Field Magnetic Resonance Imaging Systems
Yoo, Hyoungsuk; Gopinath, Anand; Vaughan, J. Thomas
2014-01-01
In high-field magnetic resonance imaging (MRI) systems, B0 fields of 7 and 9.4 T, the RF field shows greater inhomogeneity compared to clinical MRI systems with B0 fields of 1.5 and 3.0 T. In multichannel RF coils, the magnitude and phase of the input to each coil element can be controlled independently to reduce the nonuniformity of the RF field. The convex optimization technique has been used to obtain the optimum excitation parameters with iterative solutions for homogeneity in a selected region of interest. The pseudoinverse method has also been used to find a solution. The simulation results for 9.4- and 7-T MRI systems are discussed in detail for the head model. Variation of the simulation results in a 9.4-T system with the number of RF coil elements for different positions of the regions of interest in a spherical phantom are also discussed. Experimental results were obtained in a phantom in the 9.4-T system and are compared to the simulation results and the specific absorption rate has been evaluated. PMID:22929360
Iterative Methods to Solve Linear RF Fields in Hot Plasma
NASA Astrophysics Data System (ADS)
Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo
2014-10-01
Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding design and development of future devices. Prior attempts at this modeling have mostly used direct solvers to solve the formulated linear equations. Full-wave modeling of RF fields in hot plasma with 3D nonuniformities is largely out of reach for direct solvers, whose memory demands place a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution. We explore the feasibility of using iterative methods in 3D full-wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.
NASA Astrophysics Data System (ADS)
Wu, Shudong; Wan, Li
2012-03-01
The electronic structures of a CdSe spherical quantum dot in a magnetic field are obtained by using an exact diagonalization method and a variational method within the effective-mass approximation. The dependences of the energies and wave functions of electron states, exciton binding energy, exciton transition energy, and exciton diamagnetic shift on the applied magnetic field are investigated theoretically in detail. It is observed that the degeneracy of magnetic quantum number m is removed due to the Zeeman effect when the magnetic field is present. For the states with m ≥ 0, the electron energies increase as the magnetic field increases. However, for the states with m < 0, the electron energies decrease to a minimum, and then increase with increasing the magnetic field. The energies and wave functions of electron states obtained from the variational method based on the variational functions we proposed are in excellent agreement with the results obtained from the exact diagonalization method we presented. A comparison between the results obtained from the variational functions proposed by us and Xiao is also verified.
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
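The greedy matching-pursuit idea behind this inversion can be illustrated with a plain (non-stagewise) orthogonal matching pursuit with a non-negativity clip; the sketch below is a simplification of StOMP, which selects several columns per stage by thresholding rather than one at a time, and the matrix A stands in for a generic sensing operator:

```python
import numpy as np

def nonneg_omp(A, y, n_iter=10, tol=1e-8):
    """Greedy sparse reconstruction in the spirit of (St)OMP: repeatedly
    select the column of A most positively correlated with the residual,
    then re-fit the selected coefficients by least squares with a
    non-negativity clip (a stand-in for the paper's enforced
    non-negative emission field)."""
    m, n = A.shape
    support, x = [], np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        corr = A.T @ r
        j = int(np.argmax(corr))
        if corr[j] <= tol or j in support:
            break                          # residual (nearly) explained
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        coef = np.clip(coef, 0.0, None)    # enforce non-negativity
        x[:] = 0.0
        x[support] = coef
        r = y - A @ x
    return x
```

Stopping as soon as no column correlates with the residual is what performs the data-driven model simplification mentioned above: wavelet coefficients that cannot be reliably estimated are simply never selected.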
Nondestructive acoustic electric field probe apparatus and method
Migliori, Albert
1982-01-01
The disclosure relates to a nondestructive acoustic electric field probe and its method of use. A source of acoustic pulses of arbitrary but selected shape is placed in an oil bath along with material to be tested across which a voltage is disposed and means for receiving acoustic pulses after they have passed through the material. The received pulses are compared with voltage changes across the material occurring while acoustic pulses pass through it and analysis is made thereof to determine preselected characteristics of the material.
Field method for the determination of molybdenum in plants
Reichen, Laura E.; Ward, F.N.
1951-01-01
Fresh plant material is ashed directly by heating in nickel or platinum dishes over a flame. An acid solution of 25 milligrams of ash is treated with stannous chloride and potassium thiocyanate. The amber-colored molybdenum thiocyanate complex ion is extracted with isopropyl ether, and the intensity of the color of the ether layer over a sample solution is compared with the ether layer over standard molybdenum solutions treated similarly. Field determinations can be made quickly and the method requires no special equipment. As little as 0.25 microgram or 0.001 percent molybdenum can be determined in plant ash.
Sun, Dali
2016-01-01
Nanoparticles have become a powerful tool for cell imaging and for studies of biomolecule, cell, and protein interactions, but are difficult to rapidly and accurately measure in most assays. Dark-field microscope (DFM) image analysis approaches used to quantify nanoparticles require high-magnification near-field (HN) images that are labor intensive due to a requirement for manual image selection and focal adjustments needed when identifying and capturing new regions of interest. Low-magnification far-field (LF) DFM imagery is technically simpler to perform but cannot be used as an alternative to HN-DFM quantification, since it is highly sensitive to surface artifacts and debris that can easily mask nanoparticle signal. We now describe a new noise reduction approach that markedly reduces LF-DFM image artifacts to allow sensitive and accurate nanoparticle signal quantification from LF-DFM images. We have used this approach to develop a “Dark Scatter Master” (DSM) algorithm for the popular NIH image analysis program ImageJ, which can be readily adapted for use with automated high-throughput assay analyses. This method demonstrated robust performance quantifying nanoparticles in different assay formats, including a novel method that quantified extracellular vesicles in patient blood samples to detect pancreatic cancer cases. Based on these results, we believe our LF-DFM quantification method can markedly decrease the analysis time of most nanoparticle-based assays to impact both basic research and clinical analyses. PMID:28177210
DNA-based methods of geochemical prospecting
Ashby, Matthew [Mill Valley, CA
2011-12-06
The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.
Vision Sensor-Based Road Detection for Field Robot Navigation
Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen
2015-01-01
Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514
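The GrowCut stage described above is a cellular automaton in which labeled seeds iteratively "conquer" similar neighbors. A minimal pixel-level sketch is given below; it is a generic GrowCut on a grayscale image, not the paper's superpixel-level variant, and for brevity it uses wrap-around (periodic) neighbor shifts:

```python
import numpy as np

def growcut(image, labels, strength, n_iter=50):
    """Minimal GrowCut cellular automaton on a 2D grayscale image.
    Seed pixels carry a nonzero label (e.g. 1 = road, 2 = background)
    and strength 1.0; all other pixels start unlabeled with strength 0.
    At each step a pixel is conquered by a 4-neighbor whose attack
    strength g(|I_p - I_q|) * strength_q exceeds the pixel's own."""
    img = image.astype(float)
    g_max = np.abs(img).max() or 1.0
    lab, stren = labels.copy(), strength.astype(float).copy()
    for _ in range(n_iter):
        changed = False
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            # shift each neighbor's label, strength and intensity
            # onto the current pixel (periodic boundary via roll)
            nl = np.roll(lab, (dy, dx), axis=(0, 1))
            ns = np.roll(stren, (dy, dx), axis=(0, 1))
            ni = np.roll(img, (dy, dx), axis=(0, 1))
            g = 1.0 - np.abs(img - ni) / g_max   # similarity in [0, 1]
            attack = g * ns
            win = attack > stren                 # strictly stronger attacker
            if win.any():
                lab[win], stren[win] = nl[win], attack[win]
                changed = True
        if not changed:
            break                                # automaton has converged
    return lab
```

Because the attack strength is damped across intensity edges, each seed floods its own homogeneous region and stops at boundaries, which is why the initial seeds are enough to produce a coarse road segment before the CRF refinement.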
A Field-Based Aquatic Life Benchmark for Conductivity in ...
EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
Lattice-based flow field modeling.
Wei, Xiaoming; Zhao, Ye; Fan, Zhe; Li, Wei; Qiu, Feng; Yoakum-Stover, Suzanne; Kaufman, Arie E
2004-01-01
We present an approach for simulating the natural dynamics that emerge from the interaction between a flow field and immersed objects. We model the flow field using the Lattice Boltzmann Model (LBM) with boundary conditions appropriate for moving objects and accelerate the computation on commodity graphics hardware (GPU) to achieve real-time performance. The boundary conditions mediate the exchange of momentum between the flow field and the moving objects resulting in forces exerted by the flow on the objects as well as the back-coupling on the flow. We demonstrate our approach using soap bubbles and a feather. The soap bubbles illustrate Fresnel reflection, reveal the dynamics of the unseen flow field in which they travel, and display spherical harmonics in their undulations. Our simulation allows the user to directly interact with the flow field to influence the dynamics in real time. The free feather flutters and gyrates in response to lift and drag forces created by its motion relative to the flow. Vortices are created as the free feather falls in an otherwise quiescent flow.
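The LBM core of this simulation is a repeated collide-and-stream update of nine particle distributions per grid cell. The sketch below shows a plain D2Q9 BGK step on a periodic grid; the paper's moving-object boundary conditions and GPU acceleration are omitted, and the relaxation time tau is an illustrative choice:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
C = np.array([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
              (1, 1), (-1, -1), (1, -1), (-1, 1)])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distributions."""
    cu = C[:, 0, None, None] * ux + C[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * W[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.6):
    """One BGK collide-and-stream update of distributions f (9, nx, ny)."""
    rho = f.sum(axis=0)                                   # density moment
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho      # velocity moments
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for i, (cx, cy) in enumerate(C):                      # streaming
        f[i] = np.roll(f[i], (cx, cy), axis=(0, 1))
    return f
```

Both steps conserve mass exactly, and the locality of the update (each cell touches only its neighbors) is what makes the method so amenable to the GPU acceleration and real-time interaction described above.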
Method for calculating multidimensional electric fields in photovoltaic modules
NASA Astrophysics Data System (ADS)
Kallis, J. M.; Trucker, D. C.; Cuddihy, E. F.; Garcia, A., III
1984-05-01
A finite element method for evaluating the electrical isolation characteristics of photovoltaic modules was developed; its accuracy was verified by comparison with an exact solution for a geometry similar to that of solar cells. Tests on a square test coupon, employed in electrical isolation tests, and a group of disc-shaped solar cells illustrated the finite element method's usefulness in evaluating module encapsulation designs. Finite element models had to avoid adjacent large and small elements and elements with large aspect ratios, and the NASTRAN output had to be curve fitted to calculate the maximum field. Geometric limits were indicated: cells with very sharp edges, and cells much thinner or thicker than the dielectric pottant layer.
Laboratory and field methods for measuring human energy expenditure.
Leonard, William R
2012-01-01
Energetics research is central to the field of human biology. Energy is an important currency for measuring adaptation, because both its acquisition and allocation for biological processes have important implications for survival and reproduction. Recent technological and methodological advances are now allowing human biologists to study variation in energy dynamics with much greater accuracy in a wide variety of ecological contexts. This article provides an overview of the methods used for measuring human energy expenditure (EE) and considers some of the important ecological and evolutionary questions that can be explored from an energetics perspective. Basic principles of calorimetry are first presented, followed by an overview of the equipment used for measuring human EE and work capacity. Methods for measuring three important dimensions of human EE (resting metabolic rate, working/exercising EE, and total EE) are then presented, highlighting key areas of ongoing research.
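One standard calculation behind the indirect-calorimetry methods surveyed here is the abbreviated Weir equation, which converts measured gas exchange into energy expenditure. A minimal sketch (the example flow rates are illustrative, not data from the article):

```python
def weir_ee(vo2_l_min, vco2_l_min):
    """Energy expenditure (kcal/min) from indirect calorimetry via the
    abbreviated Weir equation, using measured O2 consumption and CO2
    production in L/min (STPD)."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

# Illustrative resting values: VO2 = 0.25 L/min, VCO2 = 0.20 L/min
daily_kcal = weir_ee(0.25, 0.20) * 1440   # scale kcal/min to kcal/day
print(round(daily_kcal))
```

Extrapolating a short resting measurement to a daily total in this way is only a first approximation; the article's discussion of working/exercising and total EE covers the additional measurements needed.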
NASA Astrophysics Data System (ADS)
Chen, Dong; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu
2017-10-01
Respiration-introduced tumor location uncertainty is a challenge in the precise lung biopsy for lung lesions. Current statistical modeling approaches hardly capture the complex local respiratory motion information. In this study, we formulate a statistical respiratory motion model using biplane x-ray images to improve the accuracy of motion field estimation by efficiently preserving local motion details for specific patients. Given CT data sets of 18 healthy subjects at end-expiratory and end-inspiratory breathing phases, the respiratory motion field is constructed based on deformation vector fields which are extracted from these CT data sets, and a respiratory lung contour motion repository is generated dependent on displacements of boundary control points. By varying the sparse weight coefficients of the statistical sparse motion field presentation (SMFP) method, the newly-input motion field is approximately presented by a sparse linear combination of a subset of the motion repository. The SMFP method is employed twice in the coefficient optimization process. Finally, these non-zero coefficients are fine-tuned to maximize the similarity between the projection image of reconstructed volumetric images and the current x-ray image. We applied the proposed respiratory motion field estimation method to ten subject datasets and compared the result with the PCA method. The maximum average target registration errors of the PCA-based and the SMFP-based respiratory motion field estimation are 3.1(2.0) and 2.9(1.6) mm, respectively. The maximum average symmetric surface distances of the two methods are 2.5(1.6) and 2.4(1.3) mm, respectively.
NASA Astrophysics Data System (ADS)
Fletcher, Lauren E.; Valdivia-Silva, Julio E.; Perez-Montaño, Saul; Condori-Apaza, Renee M.; Conley, Catharine A.; Navarro-Gonzalez, Rafael; McKay, Christopher P.
2014-03-01
The objective of this work was to develop a field method for the determination of labile organic carbon in hyper-arid desert soils. Industry standard methods rely on expensive analytical equipment that cannot be taken into the field, while the scientific challenges require fast turn-around of large numbers of samples in order to characterize the soils throughout this region. Here we present a method utilizing acid-hydrolysis extraction of the labile fraction of organic carbon followed by potassium permanganate oxidation, which provides a quick and inexpensive approach to investigate samples in the field. Strict reagent standardization and calibration steps within this method allowed the determination of very low levels of organic carbon in hyper-arid soils, with results similar to those determined by the alternative methods of calcination and pyrolysis-gas chromatography-mass spectrometry. Field testing of this protocol increased the understanding of the role of organic materials in hyper-arid environments and allowed real-time, strategic decision making when planning for more detailed laboratory-based analysis.
B. Julia-Diaz, H. Kamano, T.-S. H. Lee, A. Matsuyama, T. Sato, N. Suzuki
2009-04-01
Within relativistic quantum field theory, we analyze the differences between the $\pi N$ reaction models constructed using (1) three-dimensional reductions of the Bethe-Salpeter equation, (2) the method of unitary transformation, and (3) time-ordered perturbation theory. Their relations with the approach based on the dispersion relations of S-matrix theory are discussed.
The methods and instructions for field operations presented in this manual for surveys of non-wadeable streams and rivers were developed and tested based on 55 sample sites in the Mid-Atlantic region and 53 sites in an Oregon study during two years of pilot and demonstration proj...
Pre-Student Teachers React to Field-Supplemented Methods Courses.
ERIC Educational Resources Information Center
Gantt, Walter N.; Davey, Beth
This document on the value of field experience for preservice teachers is based on a course and an experiment conducted at the University of Maryland in which blocks of a methods course were devoted to elementary school classroom experience. It is reported that school visits progressively involved observation, lesson presentation, and general…
Methodical problems of magnetic field measurements in umbra of sunspots
NASA Astrophysics Data System (ADS)
Lozitska, N. I.; Lozitsky, V. G.; Andryeyeva, O. A.; Akhtemov, Z. S.; Malashchuk, V. M.; Perebeynos, V. A.; Stepanyan, N. N.; Shtertser, N. I.
2015-02-01
Visual measurements of magnetic field strengths in sunspot umbra provide data on the magnetic field strength modulus directly, i.e., irrespective of any solar atmosphere model assumptions. In order to increase the accuracy of calculation of solar magnetic indexes, such as B̄max or Bsp, the inclusion of all available data from different observatories is needed. In such measurements some methodical problems arise, which bring about inconsistency of the data samples combined from different sources; this work describes the problems at hand and proposes solutions to eliminate the inconsistencies. Data sets of sunspot magnetic field strength visual measurements from Mt. Wilson, Crimea and Kyiv observatories in 2010-2012 have been processed. It is found that two measurement modes of Zeeman split, σ → σ and σ → π, yield almost the same results, if data rows are long enough (over ∼100 sunspots in the central area of the Sun, r < 0.7 R). It is generally held that the most reliable measurement results are obtained for magnetic fields that exceed 2400 G. However, the empirical comparison of the internal data consistency of the samples produced by different observers shows that for reliable results this limit can be lowered down to 1100 G. To increase the precision of measurements, empirical calibration of the line-shifter is required using closely positioned telluric lines. Such calibrations have been performed at Kyiv and Crimea, but as far as we know, it has not been carried out at the Mt. Wilson observatory after its diffraction grating was replaced in 1994. Taking into consideration the high quality and coverage of Mt. Wilson sunspot observational data, the authors are convinced that reliable calibration of its instrument by narrow telluric lines is definitely required.
A Method for Field Infestation with Meloidogyne incognita
Xing, L. J.; Westphal, A.
2005-01-01
A field inoculation method was developed to produce Meloidogyne spp. infestation sites with minimal quantities of nematode inoculum and a reduced labor requirement compared to previous techniques. In a preseason methyl bromide-fumigated site, nematode egg suspensions were delivered at concentrations of 0 or 10^x eggs/m of row, where x = 2.12, 2.82, 3.52, or 4.22, through a drip line attached to the seed firmer of a commercial 2-row planter into the open seed furrow while planting cowpea. These treatments were compared to a hand-inoculated treatment, in which 10^3.1 eggs were delivered every 30 cm in 5 ml of water agar suspension 2 weeks after planting. Ten weeks after planting, infection of cowpea roots was measured by gall rating and gall counts. A linear relationship between the inoculation levels and nematode-induced galls was found. At this time, the amount of galling per root system in the hand-inoculated treatment was less than in the machine-applied treatments. Advantages of this new technique include application uniformity and the low population level required to establish the nematode. This method has potential in field-testing of Meloidogyne spp. management strategies by providing uniform infestation of test sites at planting time. PMID:19262898
Field methods for rapidly characterizing paint waste during bridge rehabilitation.
Shu, Zhan; Axe, Lisa; Jahan, Kauser; Ramanujachary, Kandalam V
2015-09-01
For Department of Transportation (DOT) agencies, bridge rehabilitation involving paint removal results in waste that is often managed as hazardous. Hence, an approach that provides field characterization of the waste classification would be beneficial. In this study, an analysis of variables critical to the leaching process was conducted to develop a predictive tool for waste classification. This approach first involved identifying mechanistic processes that control leaching. Because steel grit is used to remove paint, elevated iron concentrations remain in the paint waste. As such, iron oxide coatings provide an important surface for metal adsorption. The diffuse layer model was invoked (log K_Me = 4.65 for Pb and log K_Me = 2.11 for Cr), where 90% of the data were captured within the 95% confidence level. Based on an understanding of mechanistic processes along with principal component analysis (PCA) of data obtained from field-portable X-ray fluorescence (FP-XRF), statistically-based models for leaching from paint waste were developed. Modeling resulted in 96% of the data falling within the 95% confidence level for Pb (R² 0.6-0.9, p ≤ 0.04), Ba (R² 0.5-0.7, p ≤ 0.1), and Zn (R² 0.6-0.7, p ≤ 0.08). However, the regression model obtained for Cr leaching was not significant (R² 0.3-0.5, p ≤ 0.75). The results of this work may assist DOT agencies with applying a predictive tool in the field that addresses the mobility of trace metals as well as disposal and management of paint waste during bridge rehabilitation.
Research on BOM based composable modeling method
NASA Astrophysics Data System (ADS)
Zhang, Mingxin; He, Qiang; Gong, Jianxing
2013-03-01
Composable modeling has been a research hotspot in the area of Modeling and Simulation for a long time. In order to increase the reuse and interoperability of BOM based models, this paper puts forward a composable modeling method based on BOM: it studies the basic theory of the method, designs a general structure for the BOM based coupled model, and traverses the structure of BOM based atomic and coupled models. Finally, the paper introduces the process of BOM based composable modeling and draws conclusions about the method. From the prototype we developed and the accumulated model stock, we found this method could increase the reuse and interoperability of models.
A New Method for Coronal Magnetic Field Reconstruction
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung
2017-08-01
A precise method of coronal magnetic field reconstruction (extrapolation) is an indispensable tool for understanding various solar activities. A variety of reconstruction codes have been developed and are available to researchers, but each bears its own shortcomings. In this paper, a new, efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of the magnetic field and current density at the bottom boundary to avoid overspecification of the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of current density is imposed not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a numerical instability that occasionally arises in codes using A. In real reconstruction problems, information about the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there, as well as various preprocessing steps, brings about a diversity of resulting solutions. We impose a source surface condition at the top boundary to accommodate the flux imbalance that always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to the real active region NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observation shows a sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and that their shackling is released after the CME, with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between
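The divergence-free guarantee from a vector potential can be illustrated with a small numerical toy (an illustrative sketch, not the authors' code): taking B = ∇×A and computing ∇·B with the same finite-difference operators makes the divergence vanish to rounding error, because difference operators along different axes commute.

```python
import numpy as np

def curl(Ax, Ay, Az, h):
    """B = curl A via finite differences on a uniform grid of spacing h."""
    dx = lambda F: np.gradient(F, h, axis=0)
    dy = lambda F: np.gradient(F, h, axis=1)
    dz = lambda F: np.gradient(F, h, axis=2)
    return dy(Az) - dz(Ay), dz(Ax) - dx(Az), dx(Ay) - dy(Ax)

def divergence(Bx, By, Bz, h):
    """div B with the same difference operators used in curl()."""
    return (np.gradient(Bx, h, axis=0) + np.gradient(By, h, axis=1)
            + np.gradient(Bz, h, axis=2))
```

Because ∂x∂y, ∂y∂z, and ∂z∂x commute as discrete operators, div(curl A) cancels term by term regardless of A, which is exactly why parameterizing B through A enforces the solenoidal constraint.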
Phase-field elasticity model based on mechanical jump conditions
NASA Astrophysics Data System (ADS)
Schneider, Daniel; Tschukin, Oleg; Choudhury, Abhik; Selzer, Michael; Böhlke, Thomas; Nestler, Britta
2015-05-01
Computational models based on the phase-field method typically operate on a mesoscopic length scale, resolve structural changes of the material, and provide valuable information about microstructure and mechanical property relations. An accurate calculation of the stresses and mechanical energy at the transition region is therefore indispensable. We derive a quantitative phase-field elasticity model based on force balance and Hadamard jump conditions at the interface. Comparing the simulated stress profiles calculated with Voigt/Taylor (Annalen der Physik 274(12):573, 1889), Reuss/Sachs (Z Angew Math Mech 9:49, 1929) and the proposed model with the theoretically predicted stress fields in a plate with a round inclusion under hydrostatic tension, we show the quantitative characteristics of the model. In order to validate the elastic contribution to the driving force for phase transition, we demonstrate the absence of excess energy, calculated by Durga et al. (Model Simul Mater Sci Eng 21(5):055018, 2013), in a one-dimensional equilibrium condition of serial and parallel material chains. To validate the driving force for systems with curved transition regions, we relate simulations to the Gibbs-Thomson equilibrium condition (Johnson and Alexander, J Appl Phys 59(8):2735, 1986).
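For reference, the Voigt/Taylor and Reuss/Sachs interpolation schemes compared in this abstract reduce, in scalar one-dimensional form, to the familiar arithmetic and harmonic means of the phase stiffnesses (a textbook sketch, not the paper's tensorial implementation):

```python
def voigt(phi, E1, E2):
    """Voigt/Taylor (equal strain) stiffness interpolation: arithmetic mean."""
    return phi * E1 + (1.0 - phi) * E2

def reuss(phi, E1, E2):
    """Reuss/Sachs (equal stress) stiffness interpolation: harmonic mean."""
    return 1.0 / (phi / E1 + (1.0 - phi) / E2)
```

The Reuss value never exceeds the Voigt value for the same phase fraction phi; the spread between the two bounds across the diffuse interface is one source of the excess interfacial energy the jump-condition model is designed to eliminate.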
A sparse equivalent source method for near-field acoustic holography.
Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter
2017-01-01
This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
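The sparsity-promoting formulation behind C-ESM can be illustrated with a generic l1-regularised least-squares solver; the sketch below uses plain ISTA on a random sensing matrix (purely illustrative, under the compressive sensing framework the abstract cites; the paper's actual transfer matrices and solver are not reproduced here):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Solve min_x ||A x - b||^2 / 2 + lam * ||x||_1 by iterative
    shrinkage-thresholding (gradient step followed by soft threshold)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```

With fewer measurements than unknowns, the l1 penalty drives most equivalent-source amplitudes to zero, mirroring how C-ESM concentrates energy on few sources and remains usable beyond conventional sampling limits.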
Field evaluation of endotoxin air sampling assay methods.
Thorne, P S; Reynolds, S J; Milton, D K; Bloebaum, P D; Zhang, X; Whitten, P; Burmeister, L F
1997-11-01
This study tested the importance of filter media, extraction and assay protocol, and bioaerosol source on the determination of endotoxin under field conditions in swine and poultry confinement buildings. Multiple simultaneous air samples were collected using glass fiber (GF) and polycarbonate (PC) filters, and these were assayed using two methods in two separate laboratories: an endpoint chromogenic Limulus amebocyte lysate (LAL) assay (QCL) performed in water and a kinetic chromogenic LAL assay (KQCL) performed in buffer with resistant-parallel line estimation analysis (KLARE). In addition, two aqueous filter extraction methods were compared in the QCL assay: 120 min extraction at 22 degrees C with vigorous shaking and 30 min extraction at 68 degrees C with gentle rocking. These extraction methods yielded endotoxin activities that were not significantly different and were very highly correlated. Reproducibility of endotoxin determinations from duplicate air sampling filters was very high (Cronbach alpha all > 0.94). When analyzed by the QCL method, GF filters yielded significantly higher endotoxin activity than PC filters. QCL and KLARE methods gave similar estimates of endotoxin activity from PC filters; however, GF filters analyzed by the QCL method yielded significantly higher endotoxin activity estimates, suggesting enhancement of the QCL assay or inhibition of the KLARE assay with GF filters. Correlation between QCL-GF and QCL-PC was high (r = 0.98) while that between KLARE-GF and KLARE-PC was moderate (r = 0.68). Analysis of variance demonstrated that assay methodology, filter type, barn type, and interactions between assay and filter type and between assay and barn type were important factors influencing endotoxin exposure assessment.
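The Cronbach alpha reliability figure quoted for the duplicate filters can be computed, for k parallel measurements, as follows (a generic implementation with made-up numbers, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (k items x n observations) array.

    alpha = k/(k-1) * (1 - sum of item variances / variance of item sums).
    """
    items = np.asarray(items, float)
    k = items.shape[0]
    item_vars = items.var(axis=1, ddof=1).sum()
    total_var = items.sum(axis=0).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)
```

For k = 2 duplicate filters, alpha close to 1 means the two filters track the same between-sample variation, which is the sense in which the study reports alpha > 0.94.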
NASA Astrophysics Data System (ADS)
H, Dhaouadi; R, Zgueb; O, Riahi; F, Trabelsi; T, Othman
2016-05-01
In ferroelectric liquid crystals, phase transitions can be induced by an electric field. The constant current method allows these transitions to be quickly localized, and thus the (E,T) phase diagram of the studied product can be obtained. In this work, we make a slight modification to the measurement principles based on this method. This modification allows the characteristic parameters of the ferroelectric liquid crystal to be measured quantitatively. The use of a square current signal highlights a ferroelectric hysteresis phenomenon with remnant polarization at null field, which indicates a memory effect in this compound.
Bi-color near infrared thermoreflectometry: a method for true temperature field measurement.
Sentenac, Thierry; Gilblas, Rémi; Hernandez, Daniel; Le Maoult, Yannick
2012-12-01
In the context of radiative temperature field measurement, this paper deals with an innovative method, called bi-color near infrared thermoreflectometry, for the measurement of true temperature fields without prior knowledge of the emissivity field of an opaque material. This method is achieved by a simultaneous measurement, in the near infrared spectral band, of the radiance temperature fields and of the emissivity fields measured indirectly by reflectometry. The theoretical framework of the method is introduced and the principle of the measurements at two wavelengths is detailed. The crucial features of the indirect measurement of emissivity are the measurement of bidirectional reflectivities in a single direction and the introduction of an unknown variable, called the "diffusion factor." Radiance temperature and bidirectional reflectivities are then merged into a bichromatic system based on Kirchhoff's laws. The assumption of the system, based on the invariance of the diffusion factor for two nearby wavelengths, and the values of the chosen wavelengths are then discussed in relation to a database of several material properties. A thermoreflectometer prototype was developed, dimensioned, and evaluated. Experiments were carried out to outline its trueness in challenging cases. First, experiments were performed on a metallic sample with a high emissivity value. The bidirectional reflectivity was then measured from low signals. The results on erbium oxide demonstrate the power of the method for materials with high emissivity variations in the near infrared spectral band.
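The link between radiance temperature, emissivity, and true temperature that such methods exploit can be sketched with Wien's approximation (a simplified single-wavelength relation, not the paper's full bichromatic system):

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def true_temperature(T_rad, emissivity, wavelength):
    """True temperature from radiance temperature via Wien's approximation:

    1/T = 1/T_rad + (wavelength / C2) * ln(emissivity),  wavelength in metres.
    """
    return 1.0 / (1.0 / T_rad + (wavelength / C2) * np.log(emissivity))
```

For emissivity below 1 the true temperature exceeds the radiance temperature, which is why an independent emissivity estimate (here obtained by reflectometry) is essential for true temperature fields.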
76 FR 28664 - Method 301-Field Validation of Pollutant Measurement Methods From Various Waste Media
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... . The TTN provides information and technology exchange in various areas of air pollution control. A... rules that limit air pollution emission limits. K. Congressional Review Act The Congressional Review Act... protection, Alternative test method, Air pollution control, Field validation, Hazardous air...
Method of recovering oil-based fluid
Brinkley, H.E.
1993-07-13
A method is described of recovering oil-based fluid, said method comprising the steps of: applying an oil-based fluid absorbent cloth of man-made fiber to an oil-based fluid, the cloth having at least a portion thereof that is napped so as to raise ends and loops of the man-made fibers and define voids; and absorbing the oil-based fluid into the napped portion of the cloth.
Surface Profile and Stress Field Evaluation using Digital Gradient Sensing Method
Miao, C.; Sundaram, B. M.; Huang, L.; Tippur, H. V.
2016-08-09
Shape and surface topography evaluation from measured orthogonal slope/gradient data is of considerable engineering significance since many full-field optical sensors and interferometers readily output accurate data of that kind. This has applications ranging from metrology of optical and electronic elements (lenses, silicon wafers, thin film coatings), surface profile estimation, wave front and shape reconstruction, to name a few. In this context, a new methodology for surface profile and stress field determination based on a recently introduced non-contact, full-field optical method called digital gradient sensing (DGS) capable of measuring small angular deflections of light rays coupled with a robust finite-difference-based least-squares integration (HFLI) scheme in the Southwell configuration is advanced here. The method is demonstrated by evaluating (a) surface profiles of mechanically warped silicon wafers and (b) stress gradients near growing cracks in planar phase objects.
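A minimal version of finite-difference least-squares integration in the Southwell spirit can be sketched as follows (an illustrative dense-matrix toy, not the HFLI implementation used in the paper): each equation links two neighbouring heights through the average of the measured slopes at those nodes, and the over-determined system is solved in the least-squares sense.

```python
import numpy as np

def southwell_integrate(gx, gy, h):
    """Reconstruct heights from slope fields on a uniform grid (spacing h)."""
    n, m = gx.shape
    idx = lambda i, j: i * m + j
    rows, rhs = [], []
    for i in range(n - 1):                     # differences along axis 0
        for j in range(m):
            r = np.zeros(n * m)
            r[idx(i + 1, j)], r[idx(i, j)] = 1.0, -1.0
            rows.append(r)
            rhs.append(h * (gx[i, j] + gx[i + 1, j]) / 2.0)
    for i in range(n):                         # differences along axis 1
        for j in range(m - 1):
            r = np.zeros(n * m)
            r[idx(i, j + 1)], r[idx(i, j)] = 1.0, -1.0
            rows.append(r)
            rhs.append(h * (gy[i, j] + gy[i, j + 1]) / 2.0)
    r = np.zeros(n * m)
    r[0] = 1.0                                 # pin z[0,0] = 0 (gauge fix)
    rows.append(r)
    rhs.append(0.0)
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return z.reshape(n, m)
```

Since slopes determine a surface only up to a constant, one node is pinned; production codes use sparse solvers and higher-order stencils for large gradient maps such as DGS output.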
Deasy, William; Shepherd, Tom; Alexander, Colin J; Birch, A Nicholas E; Evans, K Andrew
2016-11-01
Collection of volatiles from plant roots poses technical challenges due to difficulties accessing the soil environment without damaging the roots. To validate a new non-invasive method for passive sampling of root volatiles in situ, from plants grown under field conditions, using solid phase micro-extraction (SPME). SPME fibres were inserted into perforated polytetrafluoroethene (PTFE) tubes positioned in the soil next to broccoli plants for collection of root volatiles pre- and post-infestation with Delia radicum larvae. After sample analysis by gas chromatography-mass spectrometry (GC-MS), principal component analysis (PCA) was applied to determine differences in the profiles of volatiles between samples. GC-MS analysis revealed that this method can detect temporal changes in root volatiles emitted before and after Delia radicum damage. PCA showed that samples collected pre- and post-infestation were compositionally different due to the presence of root volatiles induced by D. radicum feeding. Sulphur-containing compounds, in particular, accounted for the differences observed. Root volatile emission patterns post-infestation are thought to follow the feeding and developmental progress of larvae. This study shows that volatiles released by broccoli roots can be collected in situ using SPME fibres within perforated PTFE tubes under field conditions. Plants damaged by Delia radicum larvae could be distinguished from plants sampled pre-infestation and soil controls on the basis of larval feeding-induced sulphur-containing volatiles. These results show that this new method is a powerful tool for non-invasive sampling of root volatiles below-ground. Copyright © 2016 John Wiley & Sons, Ltd.
Plouff, Donald
2000-01-01
Gravity observations are directly made or are obtained from other sources by the U.S. Geological Survey in order to prepare maps of the anomalous gravity field and consequently to interpret the subsurface distribution of rock densities and associated lithologic or geologic units. Observations are made in the field with gravity meters at new locations and at reoccupations of previously established gravity "stations." This report illustrates an interactively-prompted series of steps needed to convert gravity "readings" to values that are tied to established gravity datums and includes computer programs to implement those steps. Inasmuch as individual gravity readings have small variations, gravity-meter (instrument) drift may not be smoothly variable, and accommodations may be needed for ties to previously established stations, the reduction process is iterative. Decision-making by the program user is prompted by lists of best values and graphical displays. Notes about irregularities of topography, which affect the value of observed gravity but are not shown in sufficient detail on topographic maps, must be recorded in the field. This report illustrates ways to record field notes (distances, heights, and slope angles) and includes computer programs to convert field notes to gravity terrain corrections. This report includes approaches that may serve as models for other applications, for example: portrayal of system flow; style of quality control to document and validate computer applications; lack of dependence on proprietary software except source code compilation; method of file-searching with a dwindling list; interactive prompting; computer code to write directly in the PostScript (Adobe Systems Incorporated) printer language; and highlighting the four-digit year on the first line of time-dependent data sets for assured Y2K compatibility. Computer source codes provided are written in the Fortran scientific language. In order for the programs to operate, they first
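One elementary step of such a reduction, removing linear instrument drift between repeated base-station readings, can be sketched as follows (a deliberate simplification: the report's iterative procedure also handles non-smooth drift and multiple station ties):

```python
def drift_corrected(readings, times, base_first, base_last):
    """Remove linear gravity-meter drift inferred from base-station reoccupation.

    base_first and base_last are (time, reading) pairs taken at the same
    base station at the start and end of the loop; each field reading is
    corrected by the drift accumulated up to its observation time.
    """
    t0, r0 = base_first
    t1, r1 = base_last
    rate = (r1 - r0) / (t1 - t0)  # drift per unit time
    return [r - rate * (t - t0) for r, t in zip(readings, times)]
```

A closed loop (start and end on the same base station) is what makes the drift rate observable; ties to previously established stations then place the corrected readings on an absolute datum.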
Global gravimetric geoid model based a new method
NASA Astrophysics Data System (ADS)
Shen, W. B.; Han, J. C.
2012-04-01
The geoid, defined as the equipotential surface nearest to the mean sea level, plays a key role in physical geodesy and the unification of height datum systems. In this study, we introduce a new method, quite different from the conventional geoid modeling methods (e.g., the Stokes method, the Molodensky method), to determine the global gravimetric geoid (GGG). Based on the new method, using the external Earth gravity field model EGM2008, the digital topographic model DTM2006.0 and the crust density distribution model CRUST2.0, we first determined the inner geopotential field down to a depth D, and then established a GGG model, the accuracy of which is evaluated by comparison with observations from the USA, AUS, some parts of Canada, and some parts of China. The main idea of the new method is stated as follows. Given the geopotential field (e.g. EGM2008) outside the Earth, we may determine the inner geopotential field down to the depth D by using the Newtonian integral, once the density distribution model (e.g. CRUST2.0) of a shallow layer down to the depth D is given. Then, based on the definition of the geoid (i.e. an equipotential surface nearest to the mean sea level) one may determine the GGG. This study is supported by the Natural Science Foundation of China (grant No.40974015; No.41174011; No.41021061; No.41128003).
Reliability of field methods for estimating body fat.
Loenneke, Jeremy P; Barnes, Jeremy T; Wilson, Jacob M; Lowery, Ryan P; Isaacs, Melissa N; Pujol, Thomas J
2013-09-01
When health professionals measure the fitness levels of clients, body composition is usually estimated. In practice, the reliability of the measurement may be more important than the actual validity, as reliability determines how much change is needed to be considered meaningful. Therefore, the purpose of this study was to determine the reliability of two bioelectrical impedance analysis (BIA) devices (in athlete and non-athlete mode) and compare that to 3-site skinfold (SKF) readings. Twenty-one college students attended the laboratory on two occasions and had their measurements taken in the following order: body mass, height, SKF, Tanita body fat-350 (BF-350) and Omron HBF-306C. There were no significant pairwise differences between Visit 1 and Visit 2 for any of the estimates (P>0.05). The Pearson product-moment correlations ranged from r = 0.933 for BF-350 in the athlete mode (A) to r = 0.994 for SKF. The ICCs ranged from 0.93 for BF-350(A) to 0.992 for SKF, and the MDs ranged from 1.8% for SKF to 5.1% for BF-350(A). The current study found that SKF and HBF-306C(A) were the most reliable (<2%) methods of estimating BF%, with the other methods (BF-350, BF-350(A), HBF-306C) producing minimal differences greater than 2%. In conclusion, the SKF method presented the best reliability because of its low minimal difference, suggesting it may be the best field method to track changes over time with an experienced tester. However, if technical error is a concern, the practitioner may use the HBF-306C(A) because it had a minimal difference value comparable to SKF.
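The minimal difference (MD) statistic that drives these conclusions can be sketched as below; this SEM-based formulation is one common choice in the reliability literature, not necessarily the exact computation used in the study:

```python
import numpy as np

def minimal_difference(visit1, visit2):
    """Minimal difference needed to call a test-retest change real.

    MD = 1.96 * SEM * sqrt(2), with SEM = pooled SD * sqrt(1 - r),
    where r is the correlation between the two visits.
    """
    v1, v2 = np.asarray(visit1, float), np.asarray(visit2, float)
    r = np.corrcoef(v1, v2)[0, 1]
    sd = np.std(np.concatenate([v1, v2]), ddof=1)
    sem = sd * np.sqrt(1.0 - r)
    return 1.96 * sem * np.sqrt(2.0)
```

High test-retest correlation shrinks the SEM and hence the MD, which is why the SKF method's near-perfect correlation translates into the smallest change needed to be considered meaningful.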
A Multipole Expansion Method for Analyzing Lightning Field Changes
NASA Technical Reports Server (NTRS)
Koshak, William J.; Krider, E. Philip; Murphy, Martin J.
1999-01-01
Changes in the surface electric field are frequently used to infer the locations and magnitudes of lightning-caused changes in thundercloud charge distributions. The traditional procedure is to assume that the charges that are effectively deposited by the flash can be modeled either as a single point charge (the Q model) or a point dipole (the P model). The Q model has four unknown parameters and provides a good description of many cloud-to-ground (CG) flashes. The P model has six unknown parameters and describes many intracloud (IC) discharges. In this paper we introduce a new analysis method that assumes that the change in the cloud charge can be described by a truncated multipole expansion, i.e., there are both monopole and dipole terms in the unknown source distribution, and both terms are applied simultaneously. This method can be used to analyze CG flashes that are accompanied by large changes in the cloud dipole moment and complex IC discharges. If there is enough information content in the measurements, the model can also be generalized to include quadrupole and higher order terms. The parameters of the charge moments are determined using a grid search in combination with a linear inversion, and because of this, local minima in the error function and the associated solution ambiguities are avoided. The multipole method has been tested on computer-simulated sources and on natural lightning at the NASA Kennedy Space Center and U.S. Air Force Eastern Range.
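The grid-search-plus-linear-inversion idea can be sketched for the simplest case, the Q model (assumptions: a point charge above a perfectly conducting ground plane, with hypothetical sensor positions and grid values; the paper's multipole formulation is richer than this):

```python
import numpy as np

K = 1.0 / (4 * np.pi * 8.854e-12)  # Coulomb constant, SI units

def geometry(src, sensors):
    """Ground-level field change per coulomb for a point charge at height H:
    dE = 2*K*H / ((x-xs)^2 + (y-ys)^2 + H^2)^(3/2)  (image-charge factor 2)."""
    x, y, H = src
    d2 = (sensors[:, 0] - x) ** 2 + (sensors[:, 1] - y) ** 2 + H ** 2
    return 2.0 * K * H / d2 ** 1.5

def fit_q_model(sensors, dE, xs, ys, Hs):
    """Grid search over source position; Q solved linearly at each node,
    so the nonlinear search never has to estimate the charge itself."""
    best = None
    for x in xs:
        for y in ys:
            for H in Hs:
                g = geometry((x, y, H), sensors)
                Q = g.dot(dE) / g.dot(g)       # 1-D least squares for Q
                res = np.linalg.norm(dE - Q * g)
                if best is None or res < best[0]:
                    best = (res, x, y, H, Q)
    return best[1:]
```

Separating the linear unknown (the charge or moment amplitudes) from the nonlinear ones (the position) is what lets the grid search avoid local minima: every grid node gets its globally best amplitude in closed form.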
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several (N) uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
Model-based calculations of fiber output fields for fiber-based spectroscopy
NASA Astrophysics Data System (ADS)
Hernandez, Eloy; Bodenmüller, Daniel; Roth, Martin M.; Kelz, Andreas
2016-08-01
The accurate characterization of the field at the output of optical fibres is of relevance for precision spectroscopy in astronomy. The modal effects of the fibre translate to the illumination of the pupil in the spectrograph and impact the resulting point spread function (PSF). A model is presented, based on the Eigenmode Expansion Method (EEM), that calculates the output field of a given fibre for different manipulations of the input field. The fibre design and mode calculations are done via the commercially available Rsoft-FemSIM software. We developed a Python script to apply the EEM. Results are shown for different configuration parameters, such as spatial and angular displacements of the input field, spot size and propagation length variations, different transverse fibre geometries, and different wavelengths. This work is part of the phase A study of the fibre system for MOSAIC, a proposed multi-object spectrograph for the European Extremely Large Telescope (ELT-MOS).
Transformations based on continuous piecewise-affine velocity fields
Freifeld, Oren; Hauberg, Soren; Batmanghelich, Kayhan; ...
2017-01-11
Here, we propose novel finite-dimensional spaces of well-behaved Rn → Rn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available.
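The core construction, integrating a continuous piecewise-affine (CPA) velocity field into a transformation, can be sketched in 1D with naive fixed-step Euler integration (the paper uses a fast specialized integrator; here np.interp supplies an exactly continuous piecewise-affine velocity from nodal values):

```python
import numpy as np

def cpa_velocity(x, knots, vals):
    """Continuous piecewise-affine velocity: linear interpolation of nodal values."""
    return np.interp(x, knots, vals)

def integrate(x0, knots, vals, T=1.0, steps=200):
    """Transform points x0 by integrating dx/dt = v(x) over [0, T] with Euler steps."""
    x = np.asarray(x0, float).copy()
    dt = T / steps
    for _ in range(steps):
        x = x + dt * cpa_velocity(x, knots, vals)
    return x
```

Because trajectories of a continuous velocity field cannot cross, the resulting 1D map is monotone, which is exactly the property the paper exploits for monotonic regression and CDF modeling; zero velocity at the domain endpoints keeps the boundary fixed.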
Performance of climate field reconstruction methods over multiple seasons and climate variables
NASA Astrophysics Data System (ADS)
Dannenberg, Matthew P.; Wise, Erika K.
2013-09-01
Studies of climate variability require long time series of data but are limited by the absence of preindustrial instrumental records. For such studies, proxy-based climate reconstructions, such as those produced from tree-ring widths, provide the opportunity to extend climatic records into preindustrial periods. Climate field reconstruction (CFR) methods are capable of producing spatially-resolved reconstructions of climate fields. We assessed the performance of three commonly used CFR methods (canonical correlation analysis, point-by-point regression, and regularized expectation maximization) over spatially-resolved fields using multiple seasons and climate variables. Warm- and cool-season geopotential height, precipitable water, and surface temperature were tested for each method using tree-ring chronologies. Spatial patterns of reconstructive skill were found to be generally consistent across each of the methods, but the robustness of the validation metrics varied by CFR method, season, and climate variable. The most robust validation metrics were achieved with geopotential height, the October through March temporal composite, and the Regularized Expectation Maximization method. While our study is limited to assessment of skill over multidecadal (rather than multi-centennial) time scales, our findings suggest that the climate variable of interest, seasonality, and spatial domain of the target field should be considered when assessing potential CFR methods for real-world applications.
Kojovic, L.; Kezunovic, M.; Skendzic, V.; Fromen, C.W.; Sevcik, D.R.
1994-10-01
This paper presents the results of an EPRI study on the development of a new method for coupling capacitor voltage transformer (CCVT) frequency response measurements from the secondary side. The method is especially suitable for field measurements since it does not require any internal CCVT disassembly or access to its individual components. It has been verified by performing field measurements on actual CCVTs installed in substations. The results were compared with those obtained by carrying out frequency response measurements, from the primary side, on the same type of CCVTs in a laboratory. The proposed method is easy to use and gives accurate results. The method may be used for EMTP-based CCVT model development and CCVT performance analysis.
Hybrid star structure with the Field Correlator Method
NASA Astrophysics Data System (ADS)
Burgio, G. F.; Zappalà, D.
2016-03-01
We explore the relevance of the color-flavor locking phase in the equation of state (EoS) built with the Field Correlator Method (FCM) for the description of the quark matter core of hybrid stars. For the hadronic phase, we use the microscopic Brueckner-Hartree-Fock (BHF) many-body theory, and its relativistic counterpart, i.e. the Dirac-Brueckner (DBHF). We find that the main features of the phase transition are directly related to the values of the quark-antiquark potential V1, the gluon condensate G2 and the color-flavor superconducting gap Δ. We confirm that the mapping between the FCM and the CSS (constant speed of sound) parameterization holds true even in the case of paired quark matter. The inclusion of hyperons in the hadronic phase and its effect on the mass-radius relation of hybrid stars is also investigated.
Methods for Quantitative Interpretation of Retarding Field Analyzer Data
Calvey, J.R.; Crittenden, J.A.; Dugan, G.F.; Palmer, M.A.; Furman, M.; Harkay, K.
2011-03-28
Over the course of the CesrTA program at Cornell, over 30 Retarding Field Analyzers (RFAs) have been installed in the CESR storage ring, and a great deal of data has been taken with them. These devices measure the local electron cloud density and energy distribution, and can be used to evaluate the efficacy of different cloud mitigation techniques. Obtaining a quantitative understanding of RFA data requires use of cloud simulation programs, as well as a detailed model of the detector itself. In a drift region, the RFA can be modeled by postprocessing the output of a simulation code, and one can obtain best fit values for important simulation parameters with a chi-square minimization method.
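As a rough illustration of the chi-square minimization step described above, the following sketch scans two parameters of a toy detector model over a grid. The exponential model, noise level, and parameter names are invented for illustration and are not taken from the CesrTA analysis.

```python
import numpy as np

# Toy RFA-like response: signal falling off with retarding voltage.
# The model form and all numbers are illustrative stand-ins.
def model(voltage, density, temp_ev):
    return density * np.exp(-voltage / temp_ev)

rng = np.random.default_rng(0)
voltage = np.linspace(0.0, 50.0, 26)              # retarding-voltage grid (V)
data = model(voltage, 3.0, 12.0) + rng.normal(0.0, 0.05, voltage.size)
sigma = 0.05                                      # assumed measurement error

# chi-square surface over a coarse parameter grid, then pick the minimum
densities = np.linspace(1.0, 5.0, 81)
temps = np.linspace(5.0, 20.0, 76)
chi2 = np.array([[np.sum(((data - model(voltage, n, t)) / sigma) ** 2)
                  for t in temps] for n in densities])
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
best_density, best_temp = densities[i], temps[j]
print(best_density, best_temp)                    # recovers values near (3.0, 12.0)
```

In practice the "model" would be the postprocessed output of a cloud simulation code rather than a closed-form expression, but the best-fit scan has the same shape.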
Level set methods for modelling field evaporation in atom probe.
Haley, Daniel; Moody, Michael P; Smith, George D W
2013-12-01
Atom probe is a nanoscale technique for creating three-dimensional spatially and chemically resolved point datasets, primarily of metallic or semiconductor materials. While atom probe can achieve high resolution locally, the spatial coherence of the technique is highly dependent upon the evaporative physics in the material, which can often result in large geometric distortions in experimental results. The distortions originate from uncertainties in the projection function between the field evaporating specimen and the ion detector. Here we explore the possibility of continuum numerical approximations to the evaporative behavior during an atom probe experiment, and the subsequent propagation of ions to the detector, with particular emphasis placed on the solution of axisymmetric systems, such as isolated particles and multilayer systems. Ultimately, this method may prove critical in rapid modeling of tip shape evolution in atom probe tomography, which itself is a key factor in the rapid generation of spatially accurate reconstructions in atom probe datasets.
Apparatus and method for producing an artificial gravitational field
NASA Technical Reports Server (NTRS)
Mccanna, Jason (Inventor)
1993-01-01
An apparatus and method is disclosed for producing an artificial gravitational field in a spacecraft by rotating the same around a spin axis. The centrifugal force thereby created acts as an artificial gravitational force. The apparatus includes an engine which produces a drive force offset from the spin axis to drive the spacecraft towards a destination. The engine is also used as a counterbalance for a crew cabin for rotation of the spacecraft. Mass of the spacecraft, which may include either the engine or crew cabin, is shifted such that the centrifugal force acting on that mass is no longer directed through the center of mass of the craft. This off-center centrifugal force creates a moment that counterbalances the moment produced by the off-center drive force to eliminate unwanted rotation which would otherwise be precipitated by the offset drive force.
Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye
2017-01-01
The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct the high dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was dug from the main lobe cutting image and used to shift the relative region of the main lobe cutting image within a 100×100 pixel region. The position with the largest correlation coefficient between the side lobe cutting image and the main lobe cutting image with the circle dug out was identified as the best matching point. Finally, the least squares method was used to fit the center of the side lobe schlieren small ball, and the error was less than 1 pixel. The experimental results show that this method enables the accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than the manual splicing of traditional reconstruction methods, this method improves both the efficiency of focal-spot reconstruction and the experimental precision. PMID:28207758
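The best-match search described above can be sketched as a correlation-coefficient scan over a search window. Synthetic random images stand in for the main- and side-lobe data, and the window sizes are illustrative, not the paper's.

```python
import numpy as np

# Correlation-based template matching: scan a search window and pick the
# offset with the largest correlation coefficient (synthetic data only).
rng = np.random.default_rng(1)
image = rng.normal(size=(60, 60))
template = image[20:30, 25:35].copy()     # ground-truth location (20, 25)

def corrcoef(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

best, best_pos = -2.0, None
for r in range(image.shape[0] - 10):
    for c in range(image.shape[1] - 10):
        score = corrcoef(image[r:r + 10, c:c + 10], template)
        if score > best:
            best, best_pos = score, (r, c)
print(best_pos)  # (20, 25)
```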
Behavior of magnetic field and eddy current in a magnetostriction based bi-layered composite
NASA Astrophysics Data System (ADS)
Zhang, Kewei; Zhang, Kehao; Liu, Huifeng; Li, Junlin
2016-12-01
In this paper, we present a theoretical method for studying the behavior of the magnetic field intensity and eddy current inside a magnetostriction-based bi-layered composite. Firstly, the mathematical model for the electromagnetic field in the composite was established. Then, the governing equation for determining the magnetic field intensity and eddy current was solved. Furthermore, the effect of the composite's conductivity on the magnetic field intensity and eddy current was discussed. Lastly, by comparison with the well-known equation of R.L. Stoll, the magnetic field intensity calculated from our equation showed an error of less than 0.5%.
Phase field approaches of bone remodeling based on TIP
NASA Astrophysics Data System (ADS)
Ganghoffer, Jean-François; Rahouadj, Rachid; Boisse, Julien; Forest, Samuel
2016-01-01
The process of bone remodeling includes a cycle of repair, renewal, and optimization. This adaptation process, in response to variations in external loads and chemical driving factors, involves three main types of bone cells: osteoclasts, which remove the old pre-existing bone; osteoblasts, which form the new bone in a second phase; and osteocytes, sensing cells embedded in the bone matrix that trigger the aforementioned sequence of events. The remodeling process involves mineralization of the bone in the diffuse interface separating the marrow, which contains all specialized cells, from the newly formed bone. The main objective advocated in this contribution is the setting up of a modeling and simulation framework relying on the phase field method to capture the evolution of the diffuse interface between the new bone and the marrow at the scale of individual trabeculae. The phase field describes the degree of mineralization of this diffuse interface; it varies continuously between the lower value (no mineral) and unity (fully mineralized phase, e.g. new bone), allowing the consideration of a diffuse moving interface. The modeling framework is the theory of continuous media, for which field equations for the mechanical, chemical, and interfacial phenomena are written, based on the thermodynamics of irreversible processes. Additional models for the cellular activity are formulated to describe the coupling of the cell activity responsible for bone production/resorption to the kinetics of the internal variables. Kinetic equations for the internal variables are obtained from a pseudo-potential of dissipation. The combination of the balance equations for the microforce associated with the phase field and the kinetic equations leads to the Ginzburg-Landau equation satisfied by the phase field, with a source term accounting for the dissipative microforce. Simulations illustrating the proposed framework are performed in a one-dimensional situation showing the evolution of
Acoustic spectroscopy: A powerful analytical method for the pharmaceutical field?
Bonacucina, Giulia; Perinelli, Diego R; Cespi, Marco; Casettari, Luca; Cossi, Riccardo; Blasi, Paolo; Palmieri, Giovanni F
2016-04-30
Acoustics is one of the emerging technologies developed to minimize processing, maximize quality and ensure the safety of pharmaceutical, food and chemical products. The operating principle of acoustic spectroscopy is the measurement of the ultrasound pulse intensity and phase after its propagation through a sample. The main goal of this technique is to characterise concentrated colloidal dispersions without dilution, in such a way as to be able to analyse non-transparent and even highly structured systems. This review presents the state of the art of ultrasound-based techniques in pharmaceutical pre-formulation and formulation steps, showing their potential, applicability and limits. It reports in a simplified version the theory behind acoustic spectroscopy, describes the most common equipment on the market, and finally overviews different studies performed on systems and materials used in the pharmaceutical or related fields.
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS: its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce the concept of origin intensity factors, which isolate singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all three methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors makes it possible to evaluate the disturbing potential and gravity disturbances directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far zones' contributions is applied.
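For readers unfamiliar with this family of methods, here is a minimal sketch of the classical MFS that SBM refines: a 2D Laplace problem on the unit disc with sources on a fictitious circle. All geometry and data are invented for illustration; SBM itself would instead place the sources on the real boundary via origin intensity factors.

```python
import numpy as np

# MFS for the Laplace equation on the unit disc: collocation nodes on the
# boundary (r = 1), fictitious source points outside (r = 1.8), and the 2D
# fundamental solution -ln|x - y| / (2 pi) as basis functions.
n = 64
theta = 2 * np.pi * np.arange(n) / n
bnd = np.c_[np.cos(theta), np.sin(theta)]        # boundary collocation nodes
src = 1.8 * bnd                                  # fictitious source circle

def phi(x, y):                                   # fundamental solution, 2D Laplace
    return -np.log(np.linalg.norm(x - y)) / (2 * np.pi)

A = np.array([[phi(b, s) for s in src] for b in bnd])
g = bnd[:, 0]                                    # Dirichlet data: u = x on boundary
coef = np.linalg.solve(A, g)

# evaluate at an interior point; the exact harmonic solution is u(x, y) = x
pt = np.array([0.3, 0.2])
u = sum(c * phi(pt, s) for c, s in zip(coef, src))
print(u)   # ≈ 0.3
```

The choice of the fictitious radius (1.8 here) is exactly the controversial step the abstract mentions: too close degrades accuracy, too far degrades conditioning.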
How to Plan a Theme Based Field Day
ERIC Educational Resources Information Center
Shea, Scott A.; Fagala, Lisa M.
2006-01-01
Having a theme-based field day is a great way to get away from doing the traditional track-and-field type events, such as the softball throw, 50 yard dash, and sack race, year after year. In a theme-based field day format all stations or events are planned around a particular theme. This allows the teacher to be creative while also adding…
Field-Based Teacher Education: Past, Present, and Future.
ERIC Educational Resources Information Center
Bruce, William C.; And Others
This monograph consists of five papers originating from a 1974 conference entitled, "Field-Based Teacher Education for the '80's." The first paper, "Public School-College Cooperation in the Field-Based Education of Teachers (FBTE)--A Historical Perspective," by James L. Slay, focuses on how the historical development of public school cooperation…
Literature Based Discovery: models, methods, and trends.
Henry, Sam; McInnes, Bridget T.
2017-08-21
This paper provides an introduction and overview of literature based discovery (LBD) in the biomedical domain. It introduces the reader to modern and historical LBD models, key system components, evaluation methodologies, and current trends. After completion, the reader will be familiar with the challenges and methodologies of LBD. The reader will be capable of distinguishing between recent LBD systems and publications, and be capable of designing an LBD system for a specific application. The intended audience ranges from biomedical researchers curious about LBD, to someone looking to design an LBD system, to an LBD expert trying to catch up on trends in the field. The reader need not be familiar with LBD, but knowledge of biomedical text processing tools is helpful. This paper describes a unifying framework for LBD systems. Within this framework, different models and methods are presented to both distinguish and show overlap between systems. Topics include term and document representation, system components, and an overview of models including co-occurrence models, semantic models, and distributional models. Other topics include uninformative term filtering, term ranking, results display, system evaluation, an overview of the application areas of drug development, drug repurposing, and adverse drug event prediction, and challenges and future directions. A timeline showing contributions to LBD and a table summarizing the works of several authors are provided. Topics are presented from a high-level perspective. References are given if more detailed analysis is required. Copyright © 2017. Published by Elsevier Inc.
MEMS cantilever based magnetic field gradient sensor
NASA Astrophysics Data System (ADS)
Dabsch, Alexander; Rosenberg, Christoph; Stifter, Michael; Keplinger, Franz
2017-05-01
This paper presents a MEMS magnetic field gradient sensor. An H-shaped structure supported by four arms with two circuit paths on the surface is designed for measuring two components of the magnetic flux density and one component of the gradient. The structure is produced from silicon wafers by a dry etching process. The gold leads on the surface carry the alternating current, which interacts with the magnetic field component perpendicular to the direction of the current. If the excitation frequency is near a mechanical resonance, vibrations with amplitudes in the range of 1-10³ nm are expected. Both theoretical (simulations and analytic calculations) and experimental analyses have been carried out to optimize the structures for different strengths of the magnetic gradient. Likewise, the impact of the coupling structure on the resonance frequency was examined, and different operating modes for simultaneously measuring two components of the flux density were tested. For measuring the local gradient of the flux density, the structure was operated at the first symmetrical and the first anti-symmetrical mode. Depending on the design, flux densities of approximately 2.5 µT and gradients starting from 1 µT mm⁻¹ can be measured.
NASA Astrophysics Data System (ADS)
Kother, L. K.; Hammer, M. D.; Finlay, C. C.; Olsen, N.
2014-12-01
We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration, we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010. Three-component vector field data are utilized at all latitudes. Estimates of core and large-scale magnetospheric sources are removed from the satellite measurements using the CHAOS-4 model. Quiet-time and night-side data selection criteria are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid, with an increasing grid resolution towards the airborne survey area. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model regularization (either quadratic or maximum entropy) and Huber weighting. Data error covariance matrices are implemented, accounting for the dependence of data error variances on quasi-dipole latitudes. Results show good consistency with the CM5 and MF7 models for spherical harmonic degrees up to n = 95. Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high-resolution investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data are available. To illustrate this possibility, we present preliminary results from a case study combining satellite measurements and local airborne scalar magnetic measurements of the Norwegian coastline.
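The iteratively reweighted least squares scheme with Huber weighting mentioned above can be sketched on a generic linear toy problem. The design matrix, noise level, and outliers here are invented; this is not the lithospheric-field code.

```python
import numpy as np

# IRLS with Huber weights: solve d = G m robustly by repeatedly downweighting
# points with large residuals relative to a robust (MAD-based) scale.
rng = np.random.default_rng(2)
G = rng.normal(size=(200, 3))               # toy design matrix (data kernel)
m_true = np.array([1.0, -2.0, 0.5])
d = G @ m_true + rng.normal(0.0, 0.01, 200)
d[:10] += 5.0                               # gross outliers

def huber_weights(r, k=1.345):
    s = 1.4826 * np.median(np.abs(r))       # robust scale estimate (MAD)
    s = s if s > 0 else 1.0
    u = np.abs(r) / s
    return np.where(u <= k, 1.0, k / u)     # unit weight inside, damped outside

m = np.linalg.lstsq(G, d, rcond=None)[0]    # ordinary least squares start
for _ in range(10):
    w = huber_weights(d - G @ m)
    m = np.linalg.lstsq(np.sqrt(w)[:, None] * G, np.sqrt(w) * d, rcond=None)[0]
print(m)   # close to (1.0, -2.0, 0.5) despite the outliers
```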
Impulse-based methods for fluid flow
Cortez, Ricardo
1995-05-01
A Lagrangian numerical method based on impulse variables is analyzed. A relation between impulse vectors and vortex dipoles with a prescribed dipole moment is presented. This relation is used to adapt the high-accuracy cutoff functions of vortex methods for use in impulse-based methods. A source of error in the long-time implementation of the impulse method is explained and two techniques for avoiding this error are presented. An application of impulse methods to the motion of a fluid surrounded by an elastic membrane is presented.
Virtual fields method coupled with moiré interferometry: Special considerations and application
NASA Astrophysics Data System (ADS)
Zhou, Mengmeng; Xie, Huimin; Wu, Lifu
2016-12-01
The virtual fields method (VFM) is a novel, highly efficient, non-iterative tool for the identification of the constitutive parameters of materials. The VFM can obtain several constitutive parameters from the full-field deformation of the specimen measured in a single test. However, the available results demonstrate that the accuracy of the identification is strongly dependent on the quality of the deformation field, which is generally measured using optical methods. Especially in the case where a small deformation is applied under elastic loading, image noise and measurement error will exhibit a significant influence on the identification results. By combining the VFM with moiré interferometry (MI), a MI-based VFM is used to identify the parameters of an orthotropic linear elastic material. A numerical experiment is conducted to examine the feasibility of this method. From the analysis results, we determine that two factors influence the identification accuracy: the reinforcement direction of the orthotropic material, and the noise in the deformation field. This MI-based VFM is then applied to determine the mechanical parameters of a unidirectional carbon fiber composite material. In the measurement, a three-point bending load is applied to the specimens. A high-density grating with a frequency of 1200 lines/mm is replicated on the specimen surface and used for measuring the in-plane deformation fields with a moiré interferometer. The obtained deformation fields are taken as the inputs of the VFM identification process, and the elastic properties of the materials are identified. The obtained results verify the advantages of the proposed method: high accuracy and good noise immunity.
Characterisation of the acoustic field radiated by a rail with a microphone array: The SWEAM method
NASA Astrophysics Data System (ADS)
Faure, Baldrik; Chiello, Olivier; Pallas, Marie-Agnès; Servière, Christine
2015-06-01
Beamforming methods are widely used for the identification of acoustic sources on rail-bound vehicles with microphone arrays, although they have limitations in the case of spatially extended sources such as the rail. In this paper, an alternative method dedicated to the acoustic field radiated by the rail is presented. The method is called SWEAM, for Structural Wavenumbers Estimation with an Array of Microphones. The main idea is to replace the elementary fields commonly used in beamforming (point sources or plane waves) with specific fields related to point forces applied on the rail. The vertical bending vibration of the rail is modelled using a simple beam assumption, so that the rail vibration depends only on two parameters: the wavenumber and the decay rate of the propagative wave. Together with a radiation model based on a line of coherent monopoles, the acoustic field emitted by the rail is easily derived. The method itself consists in using the signals measured on a microphone array to estimate both the structural parameters and the global amplitude of this specific source. The estimation is achieved by minimising a least squares criterion based on the measured and modelled spectral matrices. Simulations are performed to evaluate the performance of the method considering one or several sources at fixed positions. The comparison of the simulated and reconstructed fields is convincing at most frequencies. The method is finally validated in the case of a single vertical excitation using an original set-up composed of a 30 m long experimental track excited by an electrodynamic shaker. The results show a great improvement of the wavenumber estimation in the whole frequency range compared with the plane wave beamforming method, and a fair estimation of the decay rate. The underestimation of some low decay rates, due to the poor selectivity of the criterion in these cases, requires further study.
Test of Scintillometer Saturation Correction Methods Using Field Experimental Data
NASA Astrophysics Data System (ADS)
Kleissl, J.; Hartogensis, O. K.; Gomez, J. D.
2010-12-01
Saturation of large aperture scintillometer (LAS) signals can result in sensible heat flux measurements that are biased low. A field study with LASs of different aperture sizes and path lengths was performed to investigate the onset of, and corrections for, signal saturation. Saturation already occurs at C_n^2 ≈ 0.074 D^{5/3} λ^{1/3} L^{-8/3}, where C_n^2 is the structure parameter of the refractive index, D is the aperture size, λ is the wavelength, and L is the transect length; this onset is smaller than theoretically derived saturation limits. At a transect length of 1 km, a height of 2.5 m, and an aperture of ≈0.15 m, the correction factor already exceeds 5% at C_n^2 = 2×10^{-12} m^{-2/3}, which will affect many practical applications of scintillometry. The Clifford correction method, which depends only on C_n^2 and the transect geometry, provides good saturation corrections over the range of conditions observed in our study. The saturation correction proposed by Ochs and Hill results in correction factors that are too small in large saturation regimes. An inner length scale dependence of the saturation correction factor was not observed. Thus, for practical applications, the Clifford correction method should be applied.
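A quick numeric check of the quoted saturation-onset formula, using plausible instrument values; the wavelength is an assumed near-infrared value typical of LAS instruments, not taken from the paper.

```python
# Saturation onset: C_n^2 ≈ 0.074 * D^(5/3) * lambda^(1/3) * L^(-8/3)
# with assumed D = 0.15 m, lambda = 880 nm, transect length L = 1 km.
D, lam, L = 0.15, 880e-9, 1000.0
cn2_sat = 0.074 * D ** (5 / 3) * lam ** (1 / 3) * L ** (-8 / 3)
print(cn2_sat)   # of order 1e-13 m^(-2/3)
```

For these values the onset is around 3×10⁻¹³ m⁻²ᐟ³, consistent with the abstract's point that corrections already exceed 5% by C_n^2 = 2×10⁻¹² m⁻²ᐟ³.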
Generalized method of eigenoscillations for near-field optical microscopy
NASA Astrophysics Data System (ADS)
Jiang, Bor-Yuan; Zhang, Lingfeng; Castro Neto, Antonio; Basov, Dimitri; Fogler, Michael
2015-03-01
Electromagnetic interaction between a sub-wavelength particle (the ``probe'') and a material surface (the ``sample'') is studied theoretically. The interaction is shown to be governed by a series of resonances (eigenoscillations), corresponding to surface polariton modes localized near the probe. The resonance parameters depend on the dielectric function and geometry of the probe, as well as the surface reflectivity of the material. Calculation of such resonances is carried out for several axisymmetric particle shapes (spherical, spheroidal, and pear-shaped). For spheroids an efficient numerical method is proposed, capable of handling cases of large or strongly momentum-dependent surface reflectivity. The method is applied to modeling near-field spectroscopy studies of various materials. For highly resonant materials such as aluminum oxide (by itself or covered with graphene) a rich structure of the simulated signal is found, including multi-peak spectra and nonmonotonic approach curves. These features have a strong dependence on physical parameters, e.g., the probe shape. For less resonant materials such as silicon oxide the dependence is weaker, and the spheroid model is generally applicable.
Comparison of aquatic macroinvertebrate samples collected using different field methods
Lenz, Bernard N.; Miller, Michael A.
1996-01-01
Government agencies, academic institutions, and volunteer monitoring groups in the State of Wisconsin collect aquatic macroinvertebrate data to assess water quality. Sampling methods differ among agencies, reflecting the differences in the sampling objectives of each agency. Lack of information about data comparability impedes data sharing among agencies, which can result in duplicated sampling efforts or the underutilization of available information. To address these concerns, comparisons were made of macroinvertebrate samples collected from wadeable streams in Wisconsin by personnel from the U.S. Geological Survey National Water Quality Assessment Program (USGS-NAWQA), the Wisconsin Department of Natural Resources (WDNR), the U.S. Department of Agriculture-Forest Service (USDA-FS), and volunteers from the Water Action Volunteer-Water Quality Monitoring Program (WAV). This project was part of the Intergovernmental Task Force on Monitoring Water Quality (ITFM) Wisconsin Water Resources Coordination Project. The numbers, types, and environmental tolerances of the organisms collected were analyzed to determine if the four different field methods that were used by the different agencies and volunteer groups provide comparable results. Additionally, this study compared the results of samples taken from different locations and habitats within the same streams.
Deng, Yelin; Paraskevas, Dimos; Cao, Shi-Jie
2017-03-22
This study focuses on a detailed Life Cycle Assessment (LCA) for flax cultivation in Northern France. Nitrogen-related field emissions are derived both from a process-oriented DeNitrification-DeComposition (DNDC) method and the generic Intergovernmental Panel on Climate Change (IPCC) method. Since the IPCC method is synthesised from field measurements at sites with various soil types, climate conditions, and crops, it contains significant uncertainties. In contrast, the outputs from the DNDC method are considered more site specific, as it is built on complex models of soil science. As demonstrated in this paper, the emission factors from the DNDC method and the recommended values from the IPCC method exhibit significant variations for the case of flax cultivation. The DNDC-based emission factor for direct N2O emission, which is a strong greenhouse gas, is 0.25-0.5%, significantly lower than the recommended 1% level derived from the IPCC method. The DNDC method leads to a reduction of 17% in the impact category of climate change per kg of retted flax straw production relative to the level obtained with the IPCC method. Much higher reductions are recorded for the particulate matter formation, terrestrial acidification, and marine eutrophication impact categories. Meanwhile, based on the DNDC and IPCC methods, a comparative LCA per kg flax straw is presented. For both methods, sensitivity analysis as well as a comparison of the uncertainty parameterisation of the N2O estimates via Monte-Carlo analysis are performed. The DNDC method incorporates more relevant field emissions from the agricultural life cycle phase, which can also improve the quality of the Life Cycle Inventory and allow more precise uncertainty calibration in the LCA inventory.
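The Monte-Carlo comparison of the two emission factors can be sketched with invented inventory numbers: the nitrogen application rate, the emission-factor spreads, and the AR5 GWP value below are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Propagate an uncertain direct-N2O emission factor through a linear
# climate-change impact calculation (illustrative numbers only).
rng = np.random.default_rng(4)
n_applied = 80.0            # kg N applied per ha (assumed)
gwp_n2o = 265.0             # kg CO2-eq per kg N2O (IPCC AR5, 100-yr, assumed)
n2o_per_n = 44.0 / 28.0     # mass conversion from N to N2O

# IPCC default EF = 1% with an assumed spread; DNDC-based EF sampled
# uniformly over the 0.25-0.5% range reported in the abstract.
ef_ipcc = rng.normal(0.01, 0.002, 100_000).clip(min=0)
ef_dndc = rng.uniform(0.0025, 0.005, 100_000)

impact = lambda ef: n_applied * ef * n2o_per_n * gwp_n2o   # kg CO2-eq per ha
print(impact(ef_ipcc).mean(), impact(ef_dndc).mean())
```

With these assumed inputs the DNDC-based direct-N2O impact comes out well below the IPCC-based one, mirroring the direction (though not the exact magnitude) of the reduction reported above.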
Vocabulary Teaching Based on Semantic-Field
ERIC Educational Resources Information Center
Wangru, Cao
2016-01-01
Vocabulary is an indispensable part of language and it is of vital importance for second language learners. Wilkins (1972) points out: "without grammar very little can be conveyed, without vocabulary nothing can be conveyed." Vocabulary teaching has experienced several stages characterized by grammatical-translation method, audio-lingual…
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
1996-01-01
We first report on our current progress in the area of explicit methods for tangent curve computation. The basic idea of this method is to decompose the domain into a collection of triangles (or tetrahedra) and assume linear variation of the vector field over each cell. With this assumption, the equations which define a tangent curve become a system of linear, constant-coefficient ODEs which can be solved explicitly. There are five different representations of the solution, depending on the eigenvalues of the Jacobian. The analysis of these five cases is somewhat similar to the phase plane analysis often associated with critical point classification within the context of topological methods, but it is not exactly the same; there are some critical differences. Moving from one cell to the next as a tangent curve is tracked requires the computation of the exit point, which is an intersection of the solution of the constant-coefficient ODE and the edge of a triangle. There are two possible approaches to this root computation problem. We can express the tangent curve in parametric form and substitute into an implicit form for the edge, or we can express the edge in parametric form and substitute into an implicit form of the tangent curve. Normally the solution of a system of ODEs is given in parametric form, and so the first approach is the most accessible and straightforward. The second approach requires the 'implicitization' of these parametric curves. The implicitization of parametric curves can often be rather difficult, but in this case we have been successful and have been able to develop algorithms and subsequent computer programs for both approaches. We will give these details, along with some comparisons, in a forthcoming research paper on this topic.
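Under the linear-variation assumption above, the per-cell field is affine, v(x) = A x + b, and the tangent curve through x0 has the explicit form x(t) = e^{At}(x0 - xc) + xc with equilibrium xc = -A^{-1} b (assuming A is invertible). A minimal sketch for one illustrative cell follows; the matrix A, offset b, and start point are invented, and A is chosen to give the pure-rotation (center) eigenvalue case, where e^{At} has a simple closed form.

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])      # Jacobian: imaginary eigenvalues
b = np.array([1.0, 0.0])
xc = -np.linalg.solve(A, b)                  # critical point of the affine field

def expm_At(t):
    # closed-form matrix exponential of A*t for this rotation generator
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def tangent_curve(t, x0=np.array([1.0, 1.0])):
    return expm_At(t) @ (x0 - xc) + xc

# sanity check: the curve's derivative reproduces the vector field along it
t, h = 0.7, 1e-6
deriv = (tangent_curve(t + h) - tangent_curve(t - h)) / (2 * h)
print(deriv, A @ tangent_curve(t) + b)       # the two printed vectors agree
```

The other eigenvalue cases (real distinct, repeated, complex with nonzero real part) swap in the corresponding closed form of e^{At}; the exit-point computation then intersects this curve with a cell edge.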
Bayesian methods for parameter estimation in effective field theories
Schindler, M. R.; Phillips, D. R.
2009-03-15
We demonstrate and explicate Bayesian methods for fitting the parameters that encode the impact of short-distance physics on observables in effective field theories (EFTs). We use Bayes' theorem together with the principle of maximum entropy to account for the prior information that these parameters should be natural, i.e., O(1) in appropriate units. Marginalization can then be employed to integrate the resulting probability density function (pdf) over the EFT parameters that are not of specific interest in the fit. We also explore marginalization over the order of the EFT calculation, M, and over the variable, R, that encodes the inherent ambiguity in the notion that these parameters are O(1). This results in a very general formula for the pdf of the EFT parameters of interest given a data set, D. We use this formula and the simpler 'augmented χ²' in a toy problem for which we generate pseudo-data. These Bayesian methods, when used in combination with the 'naturalness prior', facilitate reliable extractions of EFT parameters in cases where χ² methods are ambiguous at best. We also examine the problem of extracting the nucleon mass in the chiral limit, M₀, and the nucleon sigma term, from pseudo-data on the nucleon mass as a function of the pion mass. We find that Bayesian techniques can provide reliable information on M₀, even if some of the data points used for the extraction lie outside the region of applicability of the EFT.
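A minimal numeric sketch of the 'augmented χ²' idea (our illustration, not the paper's code): with a Gaussian naturalness prior of width R, minimising χ² + Σᵢ aᵢ²/R² is a ridge-regularised least-squares problem with a closed-form solution. The toy expansion, noise level, and function names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# pseudo-data from a "true" EFT-like expansion y = a0 + a1 x + a2 x^2
x = np.linspace(0.0, 0.3, 20)
a_true = np.array([0.8, -1.2, 0.5])
sigma = 0.02
y = np.polynomial.polynomial.polyval(x, a_true) + rng.normal(0, sigma, x.size)

def augmented_chi2_fit(x, y, sigma, order, R=1.0):
    """Minimise chi^2 + sum_i a_i^2 / R^2 (Gaussian naturalness prior).

    The quadratic prior turns the fit into ridge-regularised least
    squares, solved here via the normal equations.
    """
    A = np.vander(x, order + 1, increasing=True) / sigma   # weighted design matrix
    M = A.T @ A + np.eye(order + 1) / R**2                 # prior adds to the diagonal
    return np.linalg.solve(M, A.T @ (y / sigma))

a_fit = augmented_chi2_fit(x, y, sigma, order=2)
```

The prior term keeps the higher-order coefficients O(1) even when the data window constrains them only weakly, which is the behaviour the abstract attributes to the naturalness prior.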
A Calibration Method for Wide-Field Multicolor Photometric Systems
NASA Astrophysics Data System (ADS)
Zhou, Xu; Chen, Jiansheng; Xu, Wen; Zhang, Mei; Jiang, Zhaoji; Zheng, Zhongyuan; Zhu, Jin
1999-07-01
The purpose of this paper is to present a method to self-calibrate the spectral energy distribution (SED) of objects in a survey based on the fitting of a SED library to observed multicolor photometry. We adopt, for illustrative purposes, the Vilnius and Gunn & Stryker SED libraries. The self-calibration technique can improve the quality of observations which are not taken under perfectly photometric conditions. The more passbands used for the photometry, the better the results. This technique has been applied to the BATC 15 passband CCD survey.
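The self-calibration step can be sketched numerically (a toy of our own, not the BATC pipeline): if each object's best-matching library SED is known, a per-band zero-point offset follows from the average residual, and more bands and more objects tighten the estimate, consistent with the abstract's remark that more passbands give better results. All names and the noise model are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy self-calibration: observed magnitudes differ from the best-match
# library SED by an unknown per-band zero point (non-photometric
# conditions) plus noise; averaging residuals over objects recovers it
n_bands, n_stars = 15, 300
library = rng.normal(0.0, 1.0, (n_stars, n_bands))   # best-match library SEDs
zp_true = rng.normal(0.0, 0.3, n_bands)              # unknown zero points
obs = library + zp_true + rng.normal(0.0, 0.05, (n_stars, n_bands))

zp_est = (obs - library).mean(axis=0)                # self-calibration step
```

A real pipeline would iterate this with re-fitting of the best-match SEDs, since the template match itself depends on the zero points.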
Using Field Trips and Field-Based Laboratories to Teach Undergraduate Soil Science
NASA Astrophysics Data System (ADS)
Brevik, Eric C.; Steffan, Joshua; Hopkins, David
2015-04-01
Classroom activities can provide important background information allowing students to understand soils. However, soils are formed in nature; therefore, understanding their properties and spatial relationships in the field is a critical component for gaining a comprehensive and holistic understanding of soils. Field trips and field-based laboratories provide students with the field experiences and skills needed to gain this understanding. Field studies can 1) teach students the fundamentals of soil descriptions, 2) expose students to features (e.g., structure, redoximorphic features, clay accumulation, etc.) discussed in the classroom, and 3) allow students to verify for themselves concepts discussed in the more theoretical setting of the classroom. In each case, actually observing these aspects of soils in the field reinforces and improves upon classroom learning and comprehension. In addition, the United States Department of Agriculture's Natural Resources Conservation Service has identified a lack of fundamental field skills as a problem when they hire recent soil science graduates, thereby demonstrating the need for increased field experiences for the modern soil science student. In this presentation we will provide examples of field trips and field-based laboratories that we have designed for our undergraduate soil science classes, discuss the learning objectives, and provide several examples of comments our students have made in response to these field experiences.
Teaching Geographic Field Methods to Cultural Resource Management Technicians
ERIC Educational Resources Information Center
Mires, Peter B.
2004-01-01
There are perhaps 10,000 technicians in the United States who work in the field known as cultural resource management (CRM). The typical field technician possesses a bachelor's degree in anthropology, geography, or a closely allied discipline. The author's experience has been that few CRM field technicians receive adequate undergraduate training…
A New Method for Reconstruction of Coronal Force-Free Magnetic Fields
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwangson; Lim, Daye; Kim, Kap-Sung
2016-04-01
We present a new method for coronal magnetic field reconstruction based on vector magnetogram data. This method is a variational method in that the magnetic energy of the system decreases as the iteration proceeds. We employ a vector potential rather than the magnetic field vector in order to be free from the numerical divergence-B problem. Whereas most methods employing the three components of the magnetic field vector overspecify the boundary conditions, we impose only the normal components of the magnetic field and current density as the bottom boundary conditions. Previous methods using a vector potential need to adjust the bottom boundary conditions continually, but we fix the bottom boundary conditions once and for all. To minimize the effect of the obscure lateral and top boundary conditions, we have adopted a nested grid system, which can accommodate a large computational domain without consuming excessive computational resources. At the top boundary, we have implemented the source surface condition. We have tested our method against the analytic solution by Low & Lou (1990) as a reference. When the solution is given only at the bottom boundary, our method excels in most figures of merit devised by Schrijver et al. (2006). We have also applied our method to the active region AR 11974, in which two M-class flares and a halo CME took place. Our reconstructed field shows three sigmoid structures in the lower corona and two interwound flux tubes in the upper corona. The former seem to cause the observed flares, and the latter seem to be responsible for the global eruption, i.e., the CME.
The system analysis of light field information collection based on the light field imaging
NASA Astrophysics Data System (ADS)
Wang, Ye; Li, Wenhua; Hao, Chenyang
2016-10-01
Augmented reality (AR) technology is becoming a focus of study, and the AR effect of light field imaging makes research on light field cameras attractive. A micro-array structure has been adopted in most light field information acquisition systems (LFIAS) since the emergence of the light field camera, mainly micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the LFIAS structures commonly used in light field cameras in recent years and analyzes them based on the theory of geometrical optics. We also present a novel LFIAS, a plane grating system, which we call a "micro aperture array (MAA)", and analyze it using information optics. We show that there is little difference among the multiple images produced by the plane grating system, and that the system can collect and record both the amplitude and phase information of the light field.
Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M
2016-09-01
Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data.
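One standard domain-adaptation tool relevant to this lab-to-field setting is importance weighting under covariate shift; the sketch below (our illustration with a one-dimensional Gaussian feature and invented names, not the paper's method) reweights lab samples so that lab-estimated statistics match the field distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

# lab ("source") and field ("target") feature distributions differ:
# lab features ~ N(0, 1), field features ~ N(1, 1)
lab = rng.normal(0.0, 1.0, 20000)

# for two unit-variance Gaussians the exact density ratio is
#   p_field(x) / p_lab(x) = exp(x - 1/2)
weights = np.exp(lab - 0.5)
weights /= weights.sum()            # self-normalised importance weights

# the plain lab mean is ~0, but the reweighted lab mean estimates the
# field mean (~1) without any field labels
field_mean_est = np.sum(weights * lab)
```

In practice the density ratio is unknown and is itself estimated, e.g. with a classifier trained to distinguish lab from field samples; the exact ratio is used here only to keep the sketch self-contained.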
Defeaturing CAD models using a geometry-based size field and facet-based reduction operators.
Quadros, William Roshan; Owen, Steven James
2010-04-01
We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field, and a method to remove the irrelevant features via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction using a geometry-based size field. This is accomplished by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity falls below a user-specified threshold, the entity is identified as an irrelevant feature and is marked for reduction. The reduction of marked facet entities is primarily performed using an edge-collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered, as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and those of the original CAD model is maintained in order to transfer the attributes and boundary conditions applied to the original CAD entities onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.
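A toy version of the mark-and-collapse loop (our sketch, not the paper's implementation; the geometric/topological validity checks are omitted and all names are invented): an edge is marked irrelevant when the size field at its midpoint drops below a threshold, and is then removed by collapsing its endpoints to their midpoint.

```python
import numpy as np

def collapse_small_edges(points, edges, size_at, threshold):
    """Collapse every edge whose size-field value at its midpoint is
    below `threshold`.  `size_at` is any callable giving a local target
    size.  Endpoint merging is tracked with a union-find-style remap."""
    points = points.copy()
    parent = list(range(len(points)))

    def find(i):                              # representative of a merged cluster
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        mid = 0.5 * (points[ra] + points[rb])
        if size_at(mid) < threshold:          # feature smaller than mesh size
            points[ra] = mid                  # edge collapse: merge b into a
            parent[rb] = ra

    kept = sorted({find(i) for i in range(len(points))})
    index = {old: new for new, old in enumerate(kept)}
    new_edges = sorted({tuple(sorted((index[find(a)], index[find(b)])))
                        for a, b in edges if find(a) != find(b)})
    return points[kept], new_edges

pts, eds = collapse_small_edges(
    np.array([[0.0, 0.0], [1.0, 0.0], [1.01, 0.0], [2.0, 0.0]]),
    [(0, 1), (1, 2), (2, 3)],
    size_at=lambda p: 0.05 if 0.9 < p[0] < 1.1 else 1.0,
    threshold=0.1)
# the sliver edge (1, 2) is collapsed; 3 points and 2 edges remain
```

The real method operates on a faceted B-Rep and preserves associativity back to the CAD entities, which this sketch does not attempt.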
NASA Astrophysics Data System (ADS)
Finke, G.; Kujawińska, M.; Kozacki, T.; Zaperty, W.
2016-09-01
In this paper we propose a method to overcome a basic functional problem of holographic displays with naked-eye observation: the delivered images are too small and visible only within narrow viewing angles. The solution is based on combining a spatiotemporal multiplexing method with a 4f optical system. It increases the aperture of a holographic display and extends the angular field of view. The applicability of the modified display is evidenced by a Wigner distribution analysis of holographic imaging with the spatiotemporal multiplexing method and by experiments performed on the display demonstrator.
Fourier method for recovering acoustic sources from multi-frequency far-field data
NASA Astrophysics Data System (ADS)
Wang, Xianchao; Guo, Yukun; Zhang, Deyue; Liu, Hongyu
2017-03-01
We consider an inverse source problem of determining a source term in the Helmholtz equation from multi-frequency far-field measurements. Based on the Fourier series expansion, we develop a novel non-iterative reconstruction method for solving the problem. A promising feature of this method is that it utilizes the data from only a few observation directions for each frequency. Theoretical uniqueness and stability analysis are provided. Numerical experiments are conducted to illustrate the effectiveness and efficiency of the proposed method in both two and three dimensions.
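In one dimension the Fourier-series idea can be illustrated directly (a hedged sketch under our own simplifications, not the paper's algorithm): the multi-frequency data play the role of the Fourier coefficients c_k of the source on a period box, and a truncated series rebuilds the source. All names are invented:

```python
import numpy as np

def fourier_recover(coeffs, K, xs):
    """Rebuild a source on [0, 1) from its Fourier coefficients
    c_k, k = -K..K, which stand in for the multi-frequency data."""
    ks = np.arange(-K, K + 1)
    return np.real(coeffs @ np.exp(2j * np.pi * np.outer(ks, xs)))

K = 8
xs = np.linspace(0.0, 1.0, 200, endpoint=False)
f = np.cos(2 * np.pi * xs) + 0.5 * np.sin(4 * np.pi * xs)   # band-limited source

# "measured" data: Fourier coefficients computed from the forward problem
ks = np.arange(-K, K + 1)
coeffs = np.array([np.mean(f * np.exp(-2j * np.pi * k * xs)) for k in ks])

recovered = fourier_recover(coeffs, K, xs)
```

For a band-limited source the truncated series is exact; for a general source the truncation error decays with K, which is why only a few frequencies (and, in the paper's setting, a few observation directions per frequency) can suffice.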