DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?
Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise in the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularized solution is obtained after a few iterations; if the iteration is not stopped, the method converges to a solution that is generally corrupted by the errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR (LSQR), and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
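The semi-convergence behaviour described in this abstract is easy to reproduce numerically. Below is a minimal CGLS (conjugate gradients on the normal equations) sketch applied to a synthetic ill-posed deblurring problem; the blur matrix, noise level and test signal are illustrative assumptions, not the holography setup of the paper:

```python
import numpy as np

def cgls(A, b, n_iter):
    """CGLS: conjugate gradients applied to the normal equations A^T A x = A^T b.
    Returns every iterate so the iteration count can act as the regularizer."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    iterates = []
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        iterates.append(x.copy())
    return iterates

# Ill-posed toy problem: severe Gaussian blur plus measurement noise.
rng = np.random.default_rng(0)
n = 64
i = np.arange(n)
A = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2 * 3.0 ** 2))
x_true = np.exp(-(i - 20.0) ** 2 / 30) + np.exp(-(i - 45.0) ** 2 / 50)
b_clean = A @ x_true
b = b_clean + 0.05 * np.linalg.norm(b_clean) / np.sqrt(n) * rng.standard_normal(n)
errs = [np.linalg.norm(xk - x_true) / np.linalg.norm(x_true) for xk in cgls(A, b, 200)]
```

Plotting `errs` against the iteration index shows the characteristic dip-then-rise of semi-convergence; a stopping rule such as the discrepancy principle tries to halt the iteration near the minimum.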
A fast and flexible library-based thick-mask near-field calculation method
NASA Astrophysics Data System (ADS)
Ma, Xu; Gao, Jie; Chen, Xuanbo; Dong, Lisong; Li, Yanqiu
2015-03-01
Aerial image calculation is the basis of current lithography simulation. As the critical dimension (CD) of integrated circuits continues to shrink, the thick-mask near-field calculation has an increasing influence on the accuracy and efficiency of the entire aerial image calculation process. This paper develops a flexible library-based approach that significantly improves the efficiency of the thick-mask near-field calculation compared to rigorous modeling, while achieving much higher accuracy than the Kirchhoff approximation. Specifically, a set of typical features on the full chip is selected as training data, whose near-fields are pre-calculated and saved in the library. Given an arbitrary test mask, we first decompose it into convex corners, concave corners and edges, and then match each patch to the training layouts using nonparametric kernel regression. Subsequently, we use the matched near-fields in the library to replace the mask patches and rapidly synthesize the near-field for the entire test mask. Finally, a data-fitting method based on least-squares estimation (LSE) is proposed to improve the accuracy of the synthesized near-field. We use a pair of two-dimensional mask patterns to test our method. Simulations show that the proposed method significantly speeds up the current FDTD method and effectively improves on the accuracy of the Kirchhoff approximation method.
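The matching step can be illustrated with a plain Nadaraya-Watson kernel regression in NumPy. This is a hedged sketch: the feature vectors, the library contents and the bandwidth `h` are invented for illustration and stand in for the paper's actual patch descriptors and pre-computed near-fields:

```python
import numpy as np

def match_near_field(test_feat, lib_feats, lib_fields, h=0.1):
    """Nadaraya-Watson kernel regression: synthesize the near-field of a mask
    patch as a kernel-weighted average of pre-computed library near-fields."""
    d2 = np.sum((lib_feats - test_feat) ** 2, axis=1)   # squared feature distances
    w = np.exp(-d2 / (2 * h ** 2))                      # Gaussian kernel weights
    w /= w.sum()
    # weighted combination of the stored complex near-fields
    return np.tensordot(w, lib_fields, axes=1)
```

With a small bandwidth the regression reduces to a nearest-neighbour lookup; a larger `h` blends several stored near-fields.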
A new gradient shimming method based on undistorted field map of B0 inhomogeneity.
Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang
2016-04-01
Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by the B0 inhomogeneity that is always present in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on an undistorted field map of the B0 inhomogeneity obtained by a more accurate field map estimation technique. Compared to the traditional field map estimation method, the new method exploits both the positive and negative polarities of the frequency-encoding gradients to eliminate the distortions caused by B0 inhomogeneity in the field map. Next, a corresponding automatic post-processing procedure is introduced to obtain an undistorted B0 field map, based on the invariance of the B0 inhomogeneity and the alternating polarity of the encoding gradient. Experimental results on both simulated and real gradient shimming tests demonstrate the high performance of the new method.
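The dual-echo field-map relation that such methods build on can be sketched in a few lines. This is a textbook phase-difference mapping, not the authors' distortion-corrected procedure; the echo spacing and off-resonance values in the demo are arbitrary:

```python
import numpy as np

def b0_field_map(img1, img2, dte):
    """Off-resonance map (Hz) from two complex GRE images acquired at echo
    times TE and TE + dte: the extra phase accrued between the echoes is
    2*pi*f*dte, so f = angle(img2 * conj(img1)) / (2*pi*dte).
    Valid as long as |2*pi*f*dte| < pi (no phase wrapping)."""
    return np.angle(img2 * np.conj(img1)) / (2 * np.pi * dte)
```

Conjugating the first image cancels the shared background phase, so only the off-resonance term remains; choosing `dte` small enough keeps the map free of phase wraps.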
A comparison of field-based similarity searching methods: CatShape, FBSS, and ROCS.
Moffat, Kirstin; Gillet, Valerie J; Whittle, Martin; Bravi, Gianpaolo; Leach, Andrew R
2008-04-01
Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required.
FLASHFLOOD: A 3D Field-based similarity search and alignment method for flexible molecules
NASA Astrophysics Data System (ADS)
Pitman, Michael C.; Huber, Wolfgang K.; Horn, Hans; Krämer, Andreas; Rice, Julia E.; Swope, William C.
2001-07-01
A three-dimensional field-based similarity search and alignment method for flexible molecules is introduced. The conformational space of a flexible molecule is represented in terms of fragments and torsional angles of allowed conformations. A user-definable property field is used to compute features of fragment pairs. Features are generalizations of CoMMA descriptors (Silverman, B.D. and Platt, D.E., J. Med. Chem., 39 (1996) 2129.) that characterize local regions of the property field by its local moments. The features are invariant under coordinate system transformations. Features taken from a query molecule are used to form alignments with fragment pairs in the database. An assembly algorithm is then used to merge the fragment pairs into full structures, aligned to the query. Key to the method is the use of a context adaptive descriptor scaling procedure as the basis for similarity. This allows the user to tune the weights of the various feature components based on examples relevant to the particular context under investigation. The property fields may range from simple, phenomenological fields, to fields derived from quantum mechanical calculations. We apply the method to the dihydrofolate/methotrexate benchmark system, and show that when one injects relevant contextual information into the descriptor scaling procedure, better results are obtained more efficiently. We also show how the method works and include computer times for a query from a database that represents approximately 23 million conformers of seventeen flexible molecules.
Systems and Methods for Implementing Robust Carbon Nanotube-Based Field Emitters
NASA Technical Reports Server (NTRS)
Manohara, Harish (Inventor); Kristof, Valerie (Inventor); Toda, Risaku (Inventor)
2015-01-01
Systems and methods in accordance with embodiments of the invention implement carbon nanotube-based field emitters. In one embodiment, a method of fabricating a carbon nanotube field emitter includes: patterning a substrate with a catalyst, where the substrate has thereon disposed a diffusion barrier layer; growing a plurality of carbon nanotubes on at least a portion of the patterned catalyst; and heating the substrate to an extent where it begins to soften such that at least a portion of at least one carbon nanotube becomes enveloped by the softened substrate.
Design method for a distributed Bragg resonator based evanescent field sensor
NASA Astrophysics Data System (ADS)
Bischof, David; Kehl, Florian; Michler, Markus
2016-12-01
This paper presents an analytic design method for a distributed Bragg resonator based evanescent field sensor. Such sensors can, for example, be used to measure changing refractive indices of the cover medium of a waveguide, as well as molecule adsorption at the sensor surface. For given starting conditions, the presented design method allows the analytical calculation of optimized sensor parameters for quantitative simulation and fabrication. The design process is based on the Fabry-Pérot resonator and analytical solutions of coupled mode theory.
Mixture model and Markov random field-based remote sensing image unsupervised clustering method
NASA Astrophysics Data System (ADS)
Hou, Y.; Yang, Y.; Rao, N.; Lun, X.; Lan, J.
2011-03-01
In this paper, a novel method for remote sensing image clustering based on a mixture model and a Markov random field (MRF) is proposed. A remote sensing image can be modeled as a Gaussian mixture, and the image clustering result, corresponding to the image label field, is an MRF. The image clustering procedure is therefore transformed into a maximum a posteriori (MAP) problem by Bayes' theorem. The intensity difference and the spatial distance between two pixels in the same clique are introduced into the traditional MRF potential function. The iterated conditional modes (ICM) algorithm is employed to find the MAP solution. We use the maximum entropy criterion to choose the optimal number of clusters. In the experiments, the method is compared with traditional MRF clustering using ICM and simulated annealing (SA). The results show that this method outperforms the traditional MRF model in both noise filtering and misclassification ratio.
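The ICM optimization at the core of such methods can be sketched directly. This uses a generic Gaussian likelihood with a Potts-style smoothness penalty, not the paper's modified potential with intensity differences and spatial distances:

```python
import numpy as np

def icm_mrf(img, means, sigma, beta=1.0, n_sweeps=5):
    """Iterated conditional modes for a Gaussian-mixture + Potts MRF label
    field: each pixel takes the label minimizing the negative log-likelihood
    plus beta times the number of disagreeing 4-neighbours."""
    labels = np.abs(img[..., None] - np.array(means)).argmin(-1)  # ML init
    H, W = img.shape
    K = len(means)
    for _ in range(n_sweeps):
        for y in range(H):
            for x in range(W):
                best, best_e = labels[y, x], np.inf
                for k in range(K):
                    e = (img[y, x] - means[k]) ** 2 / (2 * sigma ** 2)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != k:
                            e += beta
                    if e < best_e:
                        best, best_e = k, e
                labels[y, x] = best
    return labels
```

On a noisy two-region image the maximum-likelihood initialization leaves isolated misclassified pixels, which the neighbourhood penalty then cleans up.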
FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves
Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng
2016-01-01
In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the fully sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.
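The fast Gaussian gridding idea behind FGG-NUFFT can be demonstrated in one dimension with plain NumPy. This is a minimal type-1 transform in the style of Greengard and Lee; the oversampling ratio R = 2 and spreading width Msp = 12 are standard illustrative choices, not parameters taken from the paper:

```python
import numpy as np

def nufft1d_type1(x, c, M, Msp=12, R=2):
    """Type-1 NUFFT via fast Gaussian gridding: approximates
    F[k] = sum_j c[j] * exp(-1j * k * x[j]) for k = -M/2 .. M/2 - 1,
    with nonuniform points x in [0, 2*pi)."""
    Mr = R * M                                       # oversampled grid size
    tau = np.pi * Msp / (R * (R - 0.5) * M ** 2)     # Gaussian width parameter
    hx = 2 * np.pi / Mr
    ftau = np.zeros(Mr, dtype=complex)
    for xj, cj in zip(x, c):                         # spread onto nearby grid points
        m0 = int(round(xj / hx))
        m = np.arange(m0 - Msp, m0 + Msp + 1)
        ftau[m % Mr] += cj * np.exp(-(xj - hx * m) ** 2 / (4 * tau))
    Ftau = np.fft.fft(ftau) / Mr                     # uniform FFT on the fine grid
    k = np.arange(-(M // 2), M - M // 2)
    # deconvolve the Gaussian spreading kernel
    return np.sqrt(np.pi / tau) * np.exp(tau * k ** 2) * Ftau[k % Mr]
```

The same spread, FFT, deconvolve pattern extends to the 2-D/3-D transforms used in the paper; production code would use an optimized NUFFT library rather than the Python loop above.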
Spatial sound field synthesis and upmixing based on the equivalent source method.
Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang
2014-01-01
Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model; the array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments were performed. The results demonstrate that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.
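The regularized ESM fit can be sketched with free-field monopoles. This is a hedged illustration: the array geometry, wavenumber and regularization weight below are arbitrary choices, not the paper's optimized configuration:

```python
import numpy as np

def esm_source_strengths(p_mic, mic_pos, src_pos, k, lam=1e-3):
    """Equivalent source method: fit monopole strengths q so that free-field
    Green's functions reproduce the measured microphone pressures.
    Tikhonov-regularized minimum-norm solution for the underdetermined case."""
    r = np.linalg.norm(mic_pos[:, None, :] - src_pos[None, :, :], axis=-1)
    G = np.exp(-1j * k * r) / (4 * np.pi * r)        # monopole transfer matrix
    # q = G^H (G G^H + lam I)^(-1) p  (minimum-norm regularized solution)
    q = G.conj().T @ np.linalg.solve(G @ G.conj().T + lam * np.eye(len(p_mic)), p_mic)
    return q, G
```

The regularization weight `lam` trades reconstruction residual against the norm of the equivalent source strengths, which is the usual remedy for the ill-posedness mentioned in the abstract.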
[A detection method of liver iron overload based on static field magnetization principle].
Zhang, Ziyi; Liu, Peiguo; Zhang, Liang; Ding, Liang; Lin, Xiaohong
2014-02-01
The magnetic induction method aims at noninvasive detection of liver iron overload by measuring hepatic magnetic susceptibility. To address the difficulty that eddy current effects interfere with the measurement of magnetic susceptibility, we proposed an improved coil system based on the static field magnetization principle in this study. We used direct current excitation to eliminate the eddy current effect and a rotary receiver coil to obtain the induced voltage. The magnetic field of a cylindrical object due to the magnetization effect was calculated and the relative change of the maximum induced voltage was derived. The correlations between the magnetic susceptibility of the object and the maximum magnetic flux, the maximum induced voltage, and the relative change of the maximum induced voltage of the receiver coil were obtained by simulation experiments, and the results were compared with those of the theoretical calculation. The comparison shows that the simulation results fit the theoretical results well, which demonstrates that our method can effectively eliminate the eddy current effect.
A novel autonomous real-time position method based on polarized light and geomagnetic field
NASA Astrophysics Data System (ADS)
Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Wang, Lu; Wang, Zhiwen
2015-04-01
Many animals exploit polarized light in order to calibrate their magnetic compasses for navigation. For example, some birds are equipped with biological magnetic and celestial compasses enabling them to migrate between the Western and Eastern Hemispheres. The Vikings' ability to derive true direction from polarized light is also widely accepted, although their amazing navigational capabilities are still not completely understood. Inspired by these ancient navigational skills, here we present a combined real-time positioning method based on polarized light and the geomagnetic field. The new method works independently of any artificial signal source, accumulates no errors, and obtains the position and orientation directly. The novel device simply consists of two polarized light sensors, a 3-axis compass and a computer. Field experiments demonstrate the device's performance.
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed prognostics for analog circuits, and the few existing methods do not tie feature extraction and calculation to circuit analysis, so the FI (fault indicator) calculation often lacks a sound rationale, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since single-component faults are the most common faults in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in the model in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degeneration trend of analog circuits' single components. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of analog circuits' single components. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.
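The particle-filter update at the heart of such prognostic schemes can be sketched generically. This is a bootstrap filter on a toy random-walk degradation model; the process and measurement noise levels are invented, and no circuit model is implied:

```python
import numpy as np

def particle_filter(obs, n_p=500, q=0.02, r=0.1, seed=1):
    """Bootstrap particle filter tracking a slowly drifting degradation
    parameter from noisy scalar observations."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 0.5, n_p)                  # initial particle cloud
    est = []
    for z in obs:
        parts = parts + rng.normal(0, q, n_p)          # random-walk prediction
        w = np.exp(-0.5 * ((z - parts) / r) ** 2)      # Gaussian likelihood
        w /= w.sum()
        est.append(w @ parts)                          # posterior-mean estimate
        idx = rng.choice(n_p, n_p, p=w)                # multinomial resampling
        parts = parts[idx]
    return np.array(est)
```

Propagating the resampled particles forward without further measurement updates gives the kind of remaining-useful-performance extrapolation the abstract describes.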
A new method for direction finding based on Markov random field model
NASA Astrophysics Data System (ADS)
Ota, Mamoru; Kasahara, Yoshiya; Goto, Yoshitaka
2015-07-01
Investigating the characteristics of plasma waves observed by scientific satellites in the Earth's plasmasphere/magnetosphere is effective for understanding the mechanisms that generate the waves and the plasma environment that influences wave generation and propagation. In particular, finding the propagation directions of waves is important for understanding the mechanisms of VLF/ELF waves. To find these directions, the wave distribution function (WDF) method has been proposed. This method is based on the idea that the observed signals consist of a number of elementary plane waves that define a wave energy density distribution. However, the resulting equations constitute an ill-posed problem in which the solution is not determined uniquely; hence, an adequate model must be assumed for the solution. Although many models have been proposed, the optimal model must be selected for each situation because each model has its own advantages and disadvantages. In the present study, we propose a new method for direction finding of plasma waves measured by plasma wave receivers. Our method is based on the assumption that the WDF can be represented by a Markov random field model, with inference of the model parameters performed by a variational Bayesian learning algorithm. Using computer-generated spectral matrices, we evaluated the performance of the model and compared the results with those obtained from two conventional methods.
Using geotypes for landslide hazard assessment and mapping: a coupled field and GIS-based method
NASA Astrophysics Data System (ADS)
Bilgot, S.; Parriaux, A.
2009-04-01
Switzerland is exceptionally prone to landslides; indeed, about 10% of its area is considered unstable. In light of this, its Department of the Environment (BAFU) introduced in 1997 a method for producing landslide hazard maps. It is routinely used but, like most of the methods applied in Europe to map unstable areas, it is mainly based on the signs of previous or current phenomena (geomorphologic mapping, archive consultation, etc.), even though instabilities can appear where there is nothing to show that they existed earlier. Furthermore, the transcription from the geomorphologic map to the hazard map can vary according to the geologist or geographer who produces it, so the method suffers from a certain lack of transparency. The aim of this project is to lay the groundwork for a new landslide hazard mapping method based on instability predisposition assessment; it involves designating the main factors of landslide susceptibility, integrating them in a GIS to calculate a landslide predisposition index, and implementing new methods to evaluate these factors; to be competitive, these processes have to be both cheap and quick. To identify the most important parameters for assessing slope stability, we chose a large panel of topographic, geomechanical and hydraulic parameters and tested their importance by calculating safety factors on theoretical landslides using Geostudio 2007®; we could thus determine that slope, cohesion, hydraulic conductivity and saturation play an important role in soil stability. After showing that the cohesion and hydraulic conductivity of loose materials are strongly linked to their granulometry and plasticity index, we implemented two new field tests, one based on remote sensing and one coupling sedimentometric and methylene blue tests, to evaluate these parameters. From these data, we could deduce approximate values of maximum cohesion and saturated hydraulic conductivity. The hydraulic conductivity of
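As a concrete example of how such stability factors combine into an index, the classical infinite-slope model relates slope angle, cohesion, friction and saturation to a factor of safety. This is a standard textbook formula used purely for illustration; the parameter values in the demo are invented and are not the project's calibrated factors:

```python
import numpy as np

def factor_of_safety(c, phi_deg, beta_deg, z, gamma=19e3, gamma_w=9.81e3, m=0.0):
    """Infinite-slope stability model: FS = resisting / driving shear stress.
    c: cohesion (Pa), phi: friction angle, beta: slope angle, z: failure
    depth (m), gamma: soil unit weight (N/m^3), m: water table height as a
    fraction of z. FS < 1 indicates predicted instability."""
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    resist = c + (gamma - m * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi)
    drive = gamma * z * np.sin(beta) * np.cos(beta)
    return resist / drive
```

Evaluated over raster layers of slope, cohesion and saturation, such a formula yields exactly the kind of GIS-based predisposition index the abstract describes.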
ERIC Educational Resources Information Center
Laman, Tasha Tropp; Miller, Erin T.; Lopez-Robertson, Julia
2012-01-01
This qualitative study examines what early childhood preservice teachers enrolled in a field-based literacy methods course deemed relevant regarding teaching, literacy, and learning. This study is based on postcourse interviews with 7 early childhood preservice teachers. Findings suggest that "contextualized field experiences" facilitate…
Numerical focusing methods for full field OCT: a comparison based on a common signal model.
Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A
2014-06-30
In this paper a theoretical model of the full field swept source (FF SS) OCT signal is presented based on the angular spectrum wave propagation approach, which accounts for the defocus error with imaging depth. It is shown that, using the same theoretical model of the signal, numerical defocus correction methods based on a simple forward model (FM) and inverse scattering (IS) can be derived, the latter being similar to interferometric synthetic aperture microscopy (ISAM). Both FM and IS are compared quantitatively with sub-aperture based digital adaptive optics (DAO). FM has the lowest numerical complexity and is the fastest of the three in terms of computational speed. SNR improvement of more than 10 dB is shown for all three methods over a sample depth of 1.5 mm. For a sample with a refractive index that varies with depth, FM and IS both improved the depth of focus (DOF) by a factor of 7x for an imaging NA of 0.1, while DAO performed best, improving the DOF by 11x.
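The angular spectrum propagation underlying these corrections can be written compactly. This is a generic scalar-diffraction sketch, not the paper's OCT signal model; the grid size, pixel pitch and wavelength are arbitrary, and evanescent components are simply clamped so the numerical refocusing step stays well-posed:

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Propagate a 2-D complex field by distance z with the angular spectrum
    method; a negative z numerically refocuses (inverts a defocus)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx, indexing="ij")
    k = 2 * np.pi / wavelength
    kz = np.sqrt(np.maximum(k ** 2 - kx ** 2 - ky ** 2, 0.0))  # clamp evanescent
    H = np.exp(1j * kz * z)                                    # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the propagating-wave transfer function has unit magnitude, forward propagation followed by propagation with `-z` restores the field exactly, which is what makes depth-resolved numerical refocusing possible.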
NASA Astrophysics Data System (ADS)
Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang
2008-03-01
Because of the complex thermal objects in an infrared image, prevalent image edge detection operators are often suited only to a certain scene and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when a convolution-based receptive field architecture is assumed. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, subsequent orthogonal polynomial interpolation and sub-pixel edge detection in the neighborhood of each rough edge pixel are adopted to locate the foregoing rough edges at the sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.
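The DoG receptive-field model itself is simple to implement. Below is a minimal separable version with reflected borders; the centre/surround widths are common illustrative values, and the eye-tremor and sub-pixel stages of the paper are not included:

```python
import numpy as np

def dog_response(img, s1=1.0, s2=1.6):
    """Difference-of-Gaussians filtering, modelling an ON-centre ganglion cell
    receptive field: a narrow excitatory centre minus a wider inhibitory
    surround, computed as two separable Gaussian convolutions."""
    def gauss_kernel(s):
        r = int(3 * s + 0.5)
        g = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * s * s))
        return g / g.sum()

    def sep_conv(a, g):
        pad = len(g) // 2
        ap = np.pad(a, pad, mode="reflect")
        rows = np.apply_along_axis(lambda v: np.convolve(v, g, "valid"), 1, ap)
        return np.apply_along_axis(lambda v: np.convolve(v, g, "valid"), 0, rows)

    return sep_conv(img, gauss_kernel(s1)) - sep_conv(img, gauss_kernel(s2))
```

The response is near zero in uniform regions and largest in magnitude next to intensity transitions, which is what makes it usable as a contour detector.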
Schall, Mark C; Fethke, Nathan B; Chen, Howard; Gerr, Fred
2015-05-01
The performance of an inertial measurement unit (IMU) system for directly measuring thoracolumbar trunk motion was compared to that of the Lumbar Motion Monitor (LMM). Thirty-six male participants completed a simulated material handling task with both systems deployed simultaneously. Estimates of thoracolumbar trunk motion obtained with the IMU system were processed using five common methods for estimating trunk motion characteristics. Measurements from IMUs secured to both the sternum and pelvis showed smaller root-mean-square differences and mean bias relative to the LMM than measurements obtained solely from a sternum-mounted IMU. Fusing IMU accelerometer measurements with IMU gyroscope and/or magnetometer measurements was observed to increase comparability to the LMM. The results suggest investigators should consider computing thoracolumbar trunk motion as a function of estimates from multiple IMUs using fusion algorithms, rather than using a single accelerometer secured to the sternum, in field-based studies.
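One simple accelerometer-gyroscope fusion scheme of the kind evaluated here is a complementary filter. This is a generic sketch, not one of the paper's five specific methods, and the noise and bias figures in the demo are invented:

```python
import numpy as np

def complementary_filter(acc_angle, gyro_rate, dt, alpha=0.98):
    """Fuse an accelerometer-derived tilt angle (noisy but drift-free) with an
    integrated gyroscope rate (smooth but drifting). The weight alpha trusts
    the gyro at short time scales and the accelerometer at long ones."""
    theta = acc_angle[0]
    out = []
    for a, w in zip(acc_angle, gyro_rate):
        theta = alpha * (theta + w * dt) + (1 - alpha) * a
        out.append(theta)
    return np.array(out)
```

The accelerometer term continuously corrects the gyro's integration drift, which is the basic reason fused estimates track a reference system better than a single accelerometer.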
Study on Two Methods for Nonlinear Force-Free Extrapolation Based on Semi-Analytical Field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.; Song, M. T.
2011-03-01
In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352, 343, 1990) have been used to test two nonlinear force-free extrapolation methods. One is the boundary integral equation (BIE) method developed by Yan and Sakurai (Solar Phys. 195, 89, 2000), and the other is the approximate vertical integration (AVI) method developed by Song et al. (Astrophys. J. 649, 1084, 2006). Some improvements have been made to the AVI method to avoid singular points in the calculation. It is found that the correlation coefficients between the first semi-analytical field and the field extrapolated with the BIE method, and also that obtained with the improved AVI method, are greater than 90% below a height of 10 grid units above the 64×64 lower boundary. For the second semi-analytical field, these correlation coefficients are greater than 80% below the same relative height. Although differences between the semi-analytical solutions and the extrapolated fields exist for both the BIE and AVI methods, these two methods can give reliable results up to heights of about 15% of the extent of the lower boundary.
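The correlation coefficient used to score such extrapolations against a reference field is a one-liner over the vector components (a standard vector correlation metric; the field shapes in the demo are arbitrary):

```python
import numpy as np

def field_correlation(B, b):
    """Vector correlation between a reference field B and an extrapolated
    field b (same shapes): C = sum(B.b) / sqrt(sum|B|^2 * sum|b|^2).
    C = 1 only when the fields are proportional."""
    return np.sum(B * b) / np.sqrt(np.sum(B * B) * np.sum(b * b))
```

Restricting the sums to grid layers below a chosen height reproduces the height-resolved comparison reported in the abstract.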
NASA Astrophysics Data System (ADS)
Diaz, P. M. A.; Feitosa, R. Q.; Sanches, I. D.; Costa, G. A. O. P.
2016-06-01
This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of the temporal interaction parameters is cast as an optimization problem whose goal is to find the transition matrix that maximizes the CRF performance on a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrate that the proposed method substantially outperforms estimates based on joint or conditional class transition probabilities, which rely on training samples.
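The idea of scoring candidate transition matrices by their classification accuracy can be sketched with a toy one-parameter search. This stands in for the paper's Pattern Search over full transition matrices; the label dynamics and unary scores below are synthetic:

```python
import numpy as np

def transition_accuracy(unary, labels, T):
    """Accuracy of per-site prediction when the class at epoch t is the argmax
    of (transition prior from the epoch t-1 label) times (unary score).
    unary: (epochs, sites, K) positive scores; labels: (epochs, sites) ints."""
    correct, total = 0, 0
    for t in range(1, unary.shape[0]):
        pred = np.argmax(T[labels[t - 1]] * unary[t], axis=1)
        correct += np.sum(pred == labels[t])
        total += labels[t].size
    return correct / total

def best_diagonal_transition(unary, labels, candidates):
    """Search a one-parameter family of 'sticky' transition matrices
    (diagonal d, off-diagonal (1-d)/(K-1)) for the best accuracy."""
    K = unary.shape[2]
    best_d, best_acc = None, -1.0
    for d in candidates:
        off = (1 - d) / (K - 1)
        T = np.full((K, K), off) + (d - off) * np.eye(K)
        acc = transition_accuracy(unary, labels, T)
        if acc > best_acc:
            best_d, best_acc = d, acc
    return best_d, best_acc
```

When the true labels persist between epochs, the search favours transition matrices with strong diagonals, mirroring the accuracy-driven estimation in the abstract.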
An Entropy-Based Propagation Speed Estimation Method for Near-Field Subsurface Radar Imaging
NASA Astrophysics Data System (ADS)
Flores-Tapia, Daniel; Pistorius, Stephen
2010-12-01
During the last forty years, Subsurface Radar (SR) has been used in an increasing number of noninvasive/nondestructive imaging applications, ranging from landmine detection to breast imaging. To properly assess the dimensions and locations of the targets within the scan area, SR data sets have to be reconstructed. This process usually requires knowledge of the propagation speed in the medium, which is usually obtained by performing an offline measurement on a representative sample of the materials that form the scan region. Nevertheless, in some novel near-field SR scenarios, such as Microwave Wood Inspection (MWI) and Breast Microwave Radar (BMR), the extraction of a representative sample is not an option due to the noninvasive requirements of the application. A novel technique to determine the propagation speed of the medium based on an information theory metric is proposed in this paper. The proposed method uses the Shannon entropy of the reconstructed images as the focal quality metric to generate an estimate of the propagation speed in a given scan region. The performance of the proposed algorithm was assessed using data sets collected from experimental setups that mimic the dielectric contrast found in BMR and MWI scenarios. The proposed method yielded accurate results and exhibited an execution time on the order of seconds.
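The focal-quality principle behind the method is that a well-focused image concentrates its energy in few pixels and therefore has low Shannon entropy. A minimal version of the metric (the reconstruction loop over candidate speeds is omitted here):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized image intensity. A focused image
    (energy concentrated in a few pixels) scores lower than a defocused,
    smeared one, so entropy can rank candidate propagation speeds."""
    p = np.abs(img) ** 2
    p = p / p.sum()          # treat intensity as a probability distribution
    p = p[p > 0]             # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))
```

Reconstructing the scene for a sweep of candidate propagation speeds and keeping the speed with the minimum entropy gives the estimate described in the abstract.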
An automatic detection method to the field wheat based on image processing
NASA Astrophysics Data System (ADS)
Wang, Yu; Cao, Zhiguo; Bai, Xiaodong; Yu, Zhenghong; Li, Yanan
2013-10-01
Automatic observation of field crops has attracted increasing attention recently. Using image processing technology instead of the existing manual observation method allows timely observation and consistent management. Extracting the wheat from field wheat images is the basis of such observation. In order to improve the accuracy of wheat segmentation, a novel two-stage wheat image segmentation method is proposed. The training stage adjusts several key thresholds, which will be used in the segmentation stage, to achieve the best segmentation results, and records these thresholds. The segmentation stage compares the different values of a color index to determine the class of each pixel. To verify the superiority of the proposed algorithm, we compared our method with other crop segmentation methods. Experimental results show that the proposed method has the best performance.
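A stripped-down version of such a two-stage scheme can be built around the excess-green colour index. This is a simplified stand-in for the paper's trained multi-threshold method; the colour values in the demo are invented:

```python
import numpy as np

def exg_segment(rgb, thresh):
    """Excess-green index 2G - R - B thresholding, a common colour index for
    separating green canopy from soil."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2.0 * g - r - b) > thresh

def train_threshold(rgb, truth, candidates):
    """Training stage: keep the threshold with the best pixel accuracy on a
    labelled training image."""
    accs = [np.mean(exg_segment(rgb, t) == truth) for t in candidates]
    return candidates[int(np.argmax(accs))]
```

The trained threshold is then applied unchanged in the segmentation stage, mirroring the train-then-segment split of the abstract.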
NASA Astrophysics Data System (ADS)
Shin, Jaemin; Lee, Hyun Geun; Lee, June-Yub
2016-12-01
The phase-field crystal equation derived from the Swift-Hohenberg energy functional is a sixth order nonlinear equation. We propose numerical methods based on a new convex splitting for the phase-field crystal equation. The first order convex splitting method based on the proposed splitting is unconditionally gradient stable, which means that the discrete energy is non-increasing for any time step. The second order scheme is unconditionally weakly energy stable, which means that the discrete energy is bounded by its initial value for any time step. We prove mass conservation, unique solvability, energy stability, and the order of the truncation error for the proposed methods. Numerical experiments are presented to show the accuracy and stability of the proposed splitting methods compared with other existing splitting methods. Numerical tests indicate that the proposed convex splitting is a good choice for numerical methods of the phase-field crystal equation.
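A minimal numerical sketch of the phase-field crystal equation phi_t = Lap[(1 + Lap)^2 phi + phi^3 - eps*phi] in one dimension. Note this uses a simple first-order linearly implicit Fourier scheme, not the authors' convex splitting, but it illustrates the mass-conservation property the paper proves (the zero-wavenumber mode is left exactly unchanged by the update).

```python
import numpy as np

# 1-D phase-field crystal equation: phi_t = Lap((1 + Lap)^2 phi + phi^3 - eps*phi)
N, L, eps, dt = 128, 32 * np.pi, 0.25, 0.1
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
lin = k**2 * (1 - k**2) ** 2          # stiff linear operator, treated implicitly

rng = np.random.default_rng(0)
phi = 0.07 + 0.02 * rng.standard_normal(N)   # small perturbation of mean density
mass0 = phi.mean()

for _ in range(200):
    nonlin = phi**3 - eps * phi               # treated explicitly
    phi_hat = (np.fft.fft(phi) - dt * k**2 * np.fft.fft(nonlin)) / (1 + dt * lin)
    phi = np.real(np.fft.ifft(phi_hat))

print(abs(phi.mean() - mass0))   # mass is conserved up to round-off
```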
The Corpus: A Data-Based Device for Teaching Field Methods.
ERIC Educational Resources Information Center
Stoddart, Kenneth
1987-01-01
Notes that one-semester field methods courses in sociology often lack adequate time for students to learn appropriate techniques and still collect and report their data. Describes how undergraduate students bypass this problem by using multiple observations of a single event to quickly form a corpus of ethnographic data. (JDH)
NASA Astrophysics Data System (ADS)
Zhang, Lihui; Wang, Dongchuan; Huang, Mingxiang; Gong, Jianhua; Fang, Liqun; Cao, Wuchun
2008-10-01
With the development of mobile technologies and their integration with spatial information technologies, it has become possible to develop new techno-support solutions for epidemiological field investigation, especially for the disposal of emergent public health events. Based on mobile technologies and a virtual geographic environment, the authors have designed a model for collaborative work in four communication patterns, namely, S2S (Static to Static), M2S (Mobile to Static), S2M (Static to Mobile), and M2M (Mobile to Mobile). Building on this model, the paper explores mobile online mapping for mobile collaboration, conducts an experimental case study of HFRS (Hemorrhagic Fever with Renal Syndrome) fieldwork, and then develops a prototype emergency-response disposition information system to test the effectiveness and usefulness of field surveys based on mobile collaboration.
Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude
2012-10-01
A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, thus providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain, the proposed method does not need to perform the two-dimensional spatial fast Fourier transform that is generally used in time domain holography and real-time near-field acoustic holography; it therefore in theory avoids some errors associated with that transform and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and an experiment with two speakers.
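The role of Tikhonov regularization in such an inverse step can be sketched generically: without regularization, measurement noise amplified by small singular values corrupts the estimate. The matrix below is a synthetic ill-conditioned operator, not the paper's propagation kernel.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    # Regularized least squares: x = argmin ||A x - b||^2 + lam^2 ||x||^2
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam**2 * np.eye(n), A.conj().T @ b)

rng = np.random.default_rng(1)
# Synthetic ill-conditioned forward operator (singular values from 1 to 1e-6)
U, s, Vt = np.linalg.svd(rng.standard_normal((40, 40)))
A = U @ np.diag(np.logspace(0, -6, 40)) @ Vt
x_true = Vt[0] + 0.5 * Vt[1]          # smooth target, well represented by A
b = A @ x_true + 1e-4 * rng.standard_normal(40)   # noisy measurement

x_naive = np.linalg.solve(A, b)               # unregularized: noise blows up
x_reg = tikhonov_solve(A, b, lam=3e-3)        # Tikhonov keeps the estimate stable

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(err_naive, err_reg)
```

The regularization parameter trades bias against noise amplification, which is exactly the choice the method must make at every time step.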
A new method for matched field localization based on two hydrophones
NASA Astrophysics Data System (ADS)
Li, Kun; Fang, Shi-liang
2015-03-01
The conventional matched field processing (MFP) uses large vertical arrays to locate an underwater acoustic target. However, the use of large vertical arrays increases equipment and computational cost, and causes problems such as element failures and array tilting that degrade localization performance. In this paper, a matched field localization method using two hydrophones is proposed for underwater acoustic pulse signals with an unknown emitted signal waveform. Using the received signals of the hydrophones and the ocean channel impulse response, which can be calculated from an acoustic propagation model, the spectral matrix of the emitted signal for different source locations can be estimated by frequency-domain least squares. The resulting spectral matrix of the emitted signal for each grid region is then multiplied by the ocean channel frequency response matrix to generate the spectral matrix of the replica signal. Finally, the source location can be estimated by comparing the difference between the spectral matrices of the received signal and the replica signal. Simulated results for broadband signals in a shallow water environment demonstrate the significant localization performance of the proposed method. In addition, the localization accuracy in five different cases is analyzed in the simulation trial, and the results show that the proposed method has a sharp peak and low sidelobes, overcoming the problem of high sidelobes that the conventional MFP suffers from due to the limited number of elements.
Localization of incipient tip vortex cavitation using ray based matched field inversion method
NASA Astrophysics Data System (ADS)
Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon
2015-10-01
Cavitation of marine propellers is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for localizing the source of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and through a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller, and that it is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
Evaluation of Three Field-Based Methods for Quantifying Soil Carbon
Izaurralde, Roberto C.; Rice, Charles W.; Wielopolski, Lucian; Ebinger, Michael H.; Reeves, James B.; Thomson, Allison M.; Francis, Barry; Mitra, Sudeep; Rappaport, Aaron G.; Etchevers, Jorge D.; Sayre, Kenneth D.; Govaerts, Bram; McCarty, Gregory W.
2013-01-01
Three advanced technologies to measure soil carbon (C) density (g C m−2) are deployed in the field and the results compared against those obtained by the dry combustion (DC) method. The advanced methods are: a) Laser Induced Breakdown Spectroscopy (LIBS), b) Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), and c) Inelastic Neutron Scattering (INS). The measurements and soil samples were acquired at Beltsville, MD, USA and at Centro International para el Mejoramiento del Maíz y el Trigo (CIMMYT) at El Batán, Mexico. At Beltsville, soil samples were extracted at three depth intervals (0–5, 5–15, and 15–30 cm) and processed for analysis in the field with the LIBS and DRIFTS instruments. The INS instrument determined soil C density to a depth of 30 cm via scanning and stationary measurements. Subsequently, soil core samples were analyzed in the laboratory for soil bulk density (kg m−3), C concentration (g kg−1) by DC, and results reported as soil C density (kg m−2). Results from each technique were derived independently and contributed to a blind test against results from the reference (DC) method. A similar procedure, but only with the LIBS and DRIFTS instruments, was employed at CIMMYT in Mexico. Following conversion to common units, we found that the LIBS, DRIFTS, and INS results can be compared directly with those obtained by the DC method. The first two methods and the standard DC require soil sampling and need soil bulk density information to convert soil C concentrations to soil C densities while the INS method does not require soil sampling. We conclude that, in comparison with the DC method, the three instruments (a) showed acceptable performances although further work is needed to improve calibration techniques and (b) demonstrated their portability and their capacity to perform under field conditions. PMID:23383225
The development of field-based measurement methods for radioactive fallout assessment.
Miller, Kevin M; Larsen, Richard J
2002-05-01
An overview is provided on the development of field equipment, instrument systems, and methods of analyses that were used to assess the impact of radioactive fallout from atmospheric weapons tests. Included in this review are developments in fallout collection, aerosols measurements in surface air, and high-altitude sampling with aircraft and balloons. In addition, developments in radiation measurements are covered in such areas as survey and monitoring instruments, in situ gamma-ray spectrometry, and aerial measurement systems. The history of these developments and the interplay with the general advances in the field of radiation and radioactivity metrology are highlighted. An emphasis is given as to how the modifications and improvements in the instruments and methods over time led to their adaptation to present-day applications to radiation and radioactivity measurements.
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with numerical human models using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation is time-consuming. We therefore focus on general-purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with that of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce the run time in comparison with a conventional CPU, even for a naive GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and thread block size.
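The leapfrog update that the GPU parallelizes can be sketched in one dimension (normalized units, Courant number 0.5; the paper's implementation is three-dimensional CUDA code, typically one thread per cell applying the same stencil):

```python
import numpy as np

# 1-D FDTD leapfrog sketch in normalized units (Courant number S = 0.5).
nx, nsteps, S = 400, 300, 0.5
ez = np.zeros(nx)            # E-field at integer grid points
hy = np.zeros(nx - 1)        # H-field at half grid points

for n in range(nsteps):
    hy += S * np.diff(ez)                        # update H from the curl of E
    ez[1:-1] += S * np.diff(hy)                  # update E from the curl of H
    ez[50] += np.exp(-((n - 40) / 12.0) ** 2)    # soft Gaussian source

print(np.abs(ez).max())
```

Because each cell's update depends only on its immediate neighbors from the previous half-step, the stencil maps naturally onto massively parallel GPU threads.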
Refraction-based X-ray Computed Tomography for Biomedical Purpose Using Dark Field Imaging Method
NASA Astrophysics Data System (ADS)
Sunaguchi, Naoki; Yuasa, Tetsuya; Huo, Qingkai; Ichihara, Shu; Ando, Masami
We have proposed a tomographic x-ray imaging system using DFI (dark field imaging) optics along with a data-processing method to extract information on refraction from the measured intensities, and a reconstruction algorithm to reconstruct a refractive-index field from the projections generated from the extracted refraction information. The DFI imaging system consists of a tandem optical system of Bragg- and Laue-case crystals, a positioning device system for a sample, and two CCD (charge coupled device) cameras. Then, we developed a software code to simulate the data-acquisition, data-processing, and reconstruction methods to investigate the feasibility of the proposed methods. Finally, in order to demonstrate its efficacy, we imaged a sample with DCIS (ductal carcinoma in situ) excised from a breast cancer patient using a system constructed at the vertical wiggler beamline BL-14C in KEK-PF. Its CT images depicted a variety of fine histological structures, such as milk ducts, duct walls, secretions, adipose and fibrous tissue. They correlate well with histological sections.
GPU-based parallel method of temperature field analysis in a floor heater with a controller
NASA Astrophysics Data System (ADS)
Forenc, Jaroslaw
2016-06-01
A parallel method enabling acceleration of the numerical analysis of the transient temperature field in an air floor heating system is presented in this paper. An initial-boundary value problem of the heater regulated by an on/off controller is formulated. The analogue model is discretized using the implicit finite difference method. The BiCGStab method is used to solve the obtained system of equations. A computer program implementing simultaneous computations on the CPU and GPU (GPGPU technology) was developed. The CUDA environment and linear algebra libraries (CUBLAS and CUSPARSE) are used by this program. The time of computations was reduced eight times in comparison with a program executed on the CPU only. Results of computations are presented in the form of time profiles and temperature field distributions. The influence of the heat transfer coefficient model on the simulation of the system operation was examined. The physical interpretation of the obtained results is also presented. Results of computations were verified by comparing them with solutions obtained with the commercial program COMSOL Multiphysics.
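The combination of an implicit finite-difference step solved by BiCGStab with an on/off controller can be sketched in one dimension. This is a CPU-only toy (scipy's BiCGStab rather than CUBLAS/CUSPARSE), and all material parameters and the controller setpoint are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import bicgstab

# Backward-Euler step of 1-D heat conduction, (I - r*Lap) T_new = T_old + q,
# solved with BiCGStab, with a bang-bang (on/off) controller on the heater cell.
nx, alpha, dx, dt = 100, 1e-5, 0.01, 1.0
r = alpha * dt / dx**2
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx), format="csc")
A = identity(nx, format="csc") - r * lap

T = np.full(nx, 18.0)                 # initial temperature [deg C]
setpoint, heater_power = 22.0, 0.05   # hypothetical controller parameters
history = []
for _ in range(2000):
    q = np.zeros(nx)
    if T[nx // 2] < setpoint:         # on/off controller at the heater location
        q[nx // 2] = heater_power
    T, info = bicgstab(A, T + q, x0=T)
    assert info == 0                  # 0 means BiCGStab converged
    history.append(T[nx // 2])

print(history[-1])  # heater cell settles near the setpoint
```

Warm-starting each solve with the previous temperature field (`x0=T`) is what makes an iterative solver attractive here: the controller changes the right-hand side only slightly between steps.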
ERIC Educational Resources Information Center
Siry, Christina A.
2011-01-01
This article details a field-based methods course for preservice teachers that has been designed to integrate shared teaching experiences in elementary classrooms with ongoing critical dialogues with a focus on highlighting the complexities of teaching. I describe the structure of the course and explore the use of coteaching and cogenerative…
Meliza, C Daniel; Keen, Sara C; Rubenstein, Dustin R
2013-08-01
Quantitative measures of acoustic similarity can reveal patterns of shared vocal behavior in social species. Many methods for computing similarity have been developed, but their performance has not been extensively characterized in noisy environments and with vocalizations characterized by complex frequency modulations. This paper describes methods of bioacoustic comparison based on dynamic time warping (DTW) of the fundamental frequency or spectrogram. Fundamental frequency is estimated using a Bayesian particle filter adaptation of harmonic template matching. The methods were tested on field recordings of flight calls from superb starlings, Lamprotornis superbus, for how well they could separate distinct categories of call elements (motifs). The fundamental-frequency-based method performed best, but the spectrogram-based method was less sensitive to noise. Both DTW methods provided better separation of categories than spectrographic cross correlation, likely due to substantial variability in the duration of superb starling flight call motifs.
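The DTW comparison of fundamental-frequency tracks can be sketched as follows. The motifs are synthetic, and the paper's Bayesian particle-filter pitch estimation is not reproduced here; the point is that DTW tolerates the duration variability that defeats rigid cross correlation.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic dynamic time warping between two 1-D feature sequences:
    # cumulative cost D[i,j] = d(i,j) + min(D[i-1,j], D[i,j-1], D[i-1,j-1])
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)   # length-normalized distance

# Toy "fundamental frequency" tracks [Hz]: one motif at two tempos, plus an
# unrelated motif in a different frequency band
t = np.linspace(0, 1, 60)
motif_a = 2000 + 500 * np.sin(2 * np.pi * 3 * t)
motif_a_slow = 2000 + 500 * np.sin(2 * np.pi * 3 * np.linspace(0, 1, 90))
motif_b = 3500 - 400 * np.linspace(0, 1, 60)

same = dtw_distance(motif_a, motif_a_slow)    # small: warping absorbs the tempo change
diff = dtw_distance(motif_a, motif_b)         # large: shapes genuinely differ
print(same, diff)
```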
On-orbit assembly of a team of flexible spacecraft using potential field based method
NASA Astrophysics Data System (ADS)
Chen, Ti; Wen, Hao; Hu, Haiyan; Jin, Dongping
2017-04-01
In this paper, a novel control strategy is developed based on artificial potential field for the on-orbit autonomous assembly of four flexible spacecraft without inter-member collision. Each flexible spacecraft is simplified as a hub-beam model with truncated beam modes in the floating frame of reference and the communication graph among the four spacecraft is assumed to be a ring topology. The four spacecraft are driven to a pre-assembly configuration first and then to the assembly configuration. In order to design the artificial potential field for the first step, each spacecraft is outlined by an ellipse and a virtual leader of circle is introduced. The potential field mainly depends on the attitude error between the flexible spacecraft and its neighbor, the radial Euclidian distance between the ellipse and the circle and the classical Euclidian distance between the centers of the ellipse and the circle. It can be demonstrated that there are no local minima for the potential function and the global minimum is zero. If the function is equal to zero, the solution is not a certain state, but a set. All the states in the set correspond to the desired configurations. The Lyapunov analysis guarantees that the four spacecraft asymptotically converge to the target configuration. Moreover, another potential field is included to avoid inter-member collisions. In the control design of the second step, only a small modification is made to the controller used in the first step. Finally, the successful application of the proposed control law to the assembly mission is verified by two case studies.
Szeliski, Richard; Zabih, Ramin; Scharstein, Daniel; Veksler, Olga; Kolmogorov, Vladimir; Agarwala, Aseem; Tappen, Marshall; Rother, Carsten
2008-06-01
Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods, graph cuts, LBP, and tree-reweighted message passing, in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/.
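The ICM baseline evaluated in the benchmark can be sketched on a tiny binary denoising MRF. The energy weights below are illustrative, not the benchmark's published energies; ICM simply sets each pixel to the label that minimizes its local energy, sweeping until convergence, and can get stuck in local minima that graph cuts or LBP escape.

```python
import numpy as np

def icm_denoise(noisy, lam=2.0, labels=(0, 1), sweeps=5):
    # Iterated conditional modes on a binary Potts-style MRF:
    # E(x) = sum_p (x_p - y_p)^2 + lam * sum_{p~q} [x_p != x_q] (4-neighbors)
    x = noisy.copy()
    h, w = x.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                best, best_e = x[i, j], np.inf
                for lab in labels:
                    e = (lab - noisy[i, j]) ** 2          # data term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += lam * (lab != x[ni, nj])  # smoothness term
                    if e < best_e:
                        best, best_e = lab, e
                x[i, j] = best
    return x

rng = np.random.default_rng(2)
clean = np.zeros((20, 20)); clean[5:15, 5:15] = 1            # a white square
noisy = np.where(rng.random(clean.shape) < 0.1, 1 - clean, clean)  # 10% flips
restored = icm_denoise(noisy)
print((restored != clean).mean())  # fraction of mislabeled pixels after ICM
```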
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Yin, Hui; Yu, Dejie
2017-02-01
The response of the acoustic field, especially the mid-frequency response, is very sensitive to uncertainties arising from manufacturing/construction tolerances, aggressive environmental factors and unpredictable excitations. To quantify these uncertainties with limited information effectively, two nondeterministic models (the interval model and the hybrid probability-interval model) are introduced. Then, two corresponding nondeterministic numerical methods are developed for the low- and mid-frequency response analysis of the acoustic field under these two nondeterministic models. The first one is the interval perturbation wave-based method (IPWBM), which is proposed to predict the maximal values of the low- and mid-frequency responses of the acoustic field under the interval model. The second one is the hybrid perturbation wave-based method (HPWBM), which is proposed to predict the maximal values of the expectations and standard variances of the low- and mid-frequency responses of the acoustic field under the hybrid probability-interval model. The effectiveness and efficiency of the proposed nondeterministic numerical methods for the low- and mid-frequency response analysis of the acoustic field under the interval model and the hybrid probability-interval model are investigated by a numerical example.
NASA Astrophysics Data System (ADS)
Ahrens, T.; Matson, P.; Lobell, D.
2006-12-01
Sensitivity analyses (SA) of biogeochemical and agricultural models are often used to identify the importance of input variables for variance in model outputs, such as crop yield or nitrate leaching. Identification of these factors can aid in prioritizing efforts in research or decision support. Many types of sensitivity analyses are available, ranging from simple One-At-A-Time (OAT) screening exercises to more complex local and global variance-based methods (see Saltelli et al 2004). The purpose of this study was to determine the influence of the type of SA on factor prioritization in the Yaqui Valley, Mexico using the Water and Nitrogen Management Model (WNMM; Chen et al 2005). WNMM, a coupled plant-growth and biogeochemistry simulation model, was calibrated to reproduce crop growth, soil moisture, and gaseous N emission dynamics in experimental plots of irrigated wheat in the Yaqui Valley, Mexico from 1994-1997. Three types of SA were carried out using 16 input variables, including parameters related to weather, soil properties and crop management. Methods used for SA were local OAT, Monte Carlo (MC), and a global variance-based method (orthogonal input; OI). Results of the SA were based on typical interpretations used for each test: maximum absolute ratio of variation (MAROV) for OAT analyses; first- and second-order regressions for MC analyses; and a total effects index for OI. The three most important factors identified by MC and OI methods were generally in agreement, although the order of importance was not always consistent and there was little agreement for variables of less importance. OAT over-estimated the importance of two factors (planting date and pH) for many outputs. The biggest differences between the OAT results and those from MC and OI were likely due to the inability of OAT methods to account for non-linearity (e.g., pH and ammonia volatilization), interactions among variables (e.g., pH and timing of fertilization) and an over-reliance on baseline
Image restoration method based on Hilbert transform for full-field optical coherence tomography
NASA Astrophysics Data System (ADS)
Na, Jihoon; Choi, Woo June; Choi, Eun Seo; Ryu, Seon Young; Lee, Byeong Ha
2008-01-01
A full-field optical coherence tomography (FF-OCT) system utilizing a simple but novel image restoration method suitable for a high-speed system is demonstrated. An en-face image is retrieved from only two phase-shifted interference fringe images by using the mathematical Hilbert transform. With a thermal light source, a high-resolution FF-OCT system having axial and transverse resolutions of 1 and 2.2 μm, respectively, was implemented. The feasibility of the proposed scheme is confirmed by presenting the obtained en-face images of biological samples such as a piece of garlic and a gold beetle. The proposed method is robust to error in the amount of the phase shift and does not leave residual fringes. The use of just two interference images and the strong immunity to phase errors provide great advantages in the imaging speed and the system design flexibility of a high-speed high-resolution FF-OCT system.
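The core of the two-frame restoration idea can be sketched in one dimension: subtracting two pi-shifted fringe frames cancels the incoherent background, and the analytic-signal magnitude from the Hilbert transform recovers the interference envelope. The fringe model below is a synthetic stand-in for FF-OCT data, not the paper's optical parameters.

```python
import numpy as np
from scipy.signal import hilbert

# Envelope recovery from two pi-shifted fringe frames via the Hilbert transform.
x = np.linspace(0, 1, 2000)
envelope = np.exp(-((x - 0.5) ** 2) / (2 * 0.05**2))      # sample reflectivity
carrier = 2 * np.pi * 200 * x                             # fringe carrier phase
background = 0.8                                          # incoherent background

frame1 = background + envelope * np.cos(carrier)
frame2 = background + envelope * np.cos(carrier + np.pi)  # pi phase-shifted frame
fringes = 0.5 * (frame1 - frame2)        # background cancels: envelope * cos

recovered = np.abs(hilbert(fringes))     # analytic-signal magnitude = envelope
err = np.max(np.abs(recovered - envelope)[100:-100])      # ignore edge effects
print(err)
```

Because the envelope comes from the analytic-signal magnitude rather than from the exact value of the phase step, a moderate error in the pi shift only rescales the fringe amplitude instead of leaving residual fringes.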
NASA Astrophysics Data System (ADS)
Matsumoto, S.
2016-09-01
The stress field is a key factor controlling earthquake occurrence and crustal evolution. In this study, we propose an approach for determining the stress field in a region using seismic moment tensors, based on the classical equations of plasticity theory. Seismic activity is a phenomenon that relaxes crustal stress and creates plastic strain in a medium because of faulting, which suggests that the medium could behave as a plastic body. According to the constitutive relation in plasticity theory, the increment of the plastic strain tensor is proportional to the deviatoric stress tensor. Simple mathematical manipulation enables the development of an inversion method for estimating the stress field in a region. The method is tested on shallow earthquakes occurring on Kyushu Island, Japan.
Latouche, Gwendal; Debord, Christian; Raynal, Marc; Milhade, Charlotte; Cerovic, Zoran G
2015-10-01
Early detection of fungal pathogen presence in the field would help to better time or avoid some of the fungicide treatments used to prevent crop production losses. We recently introduced a new phytoalexin-based method for a non-invasive detection of crop diseases using their fluorescence. The causal agent of grapevine downy mildew, Plasmopara viticola, induces the synthesis of stilbenoid phytoalexins by the host, Vitis vinifera, early upon infection. These stilbenoids emit violet-blue fluorescence under UV light. A hand-held solid-state UV-LED-based field fluorimeter, named Multiplex 330, was used to measure stilbenoid phytoalexins in a vineyard. It allowed us to non-destructively detect and monitor the naturally occurring downy mildew infections on leaves in the field.
NASA Astrophysics Data System (ADS)
Kim, Sungho; Ahn, Jae-Hyuk; Park, Tae Jung; Lee, Sang Yup; Choi, Yang-Kyu
2009-06-01
A unique direct electrical detection method for biomolecules, charge pumping, was demonstrated using a nanogap-embedded field-effect transistor (FET). With the aid of the charge pumping method, sensitivity can fall below the 1 ng/ml concentration regime in antigen-antibody binding for an avian influenza case. Biomolecules immobilized in the nanogap are mainly responsible for the acute changes of the interface trap density due to modulation of the energy level of the trap. This finding is supported by a numerical simulation. The proposed detection method for biomolecules using a nanogap-embedded FET represents a foundation for a chip-based biosensor capable of high sensitivity.
Li, Ming; Li, Jingyun; He, Zihuai; Lu, Qing; Witte, John S; Macleod, Stewart L; Hobbs, Charlotte A; Cleves, Mario A
2016-05-01
Family-based association studies are commonly used in genetic research because they can be robust to population stratification (PS). Recent advances in high-throughput genotyping technologies have produced a massive amount of genomic data in family-based studies. However, current family-based association tests are mainly focused on evaluating individual variants one at a time. In this article, we introduce a family-based generalized genetic random field (FB-GGRF) method to test the joint association between a set of autosomal SNPs (i.e., single-nucleotide polymorphisms) and disease phenotypes. The proposed method is a natural extension of a recently developed GGRF method for population-based case-control studies. It models offspring genotypes conditional on parental genotypes, and, thus, is robust to PS. Through simulations, we show that under various disease scenarios the FB-GGRF has improved power over a commonly used family-based sequence kernel association test (FB-SKAT). Further, similar to GGRF, the proposed FB-GGRF method is asymptotically well-behaved, and does not require empirical adjustment of the type I error rates. We illustrate the proposed method using a study of congenital heart defects with family trios from the National Birth Defects Prevention Study (NBDPS).
NASA Astrophysics Data System (ADS)
Davis, L. E.; Eves, R. L.
2006-12-01
Transitioning students from learner to investigator is best accomplished by incorporating research into the undergraduate classroom as a collaborative enterprise between students and faculty. Our course is a two-part design with a focus on a modern carbonate ecosystem and depositional environment on San Salvador Island, Bahamas, in order to integrate geology, biology, and environmental science. Content background is provided in the classroom, which focuses on the geology of the Bahamian platform; the biological aspects of Caribbean island marine ecosystems; and the impact of human development on tropical islands. Application of course content is focused during an integrated field study of a specific carbonate environment, e.g., carbonate production in a tidal lagoon. The ultimate goals of the course are (1) identifying and acquiring both disciplinary and interdisciplinary research methodologies, (2) defining a specific investigative problem, (3) conducting 'real' (meaningful) research, and (4) communicating research findings in the form of presentations at national meetings and publication in research journals. Assessment is based on specific criteria to be achieved during the research project. Criteria are determined through collaboration between faculty mentors and student researchers. Students are evaluated throughout the research phase, with particular attention paid to understanding of appropriate planning and background research; originality of thought; use of project-specific and appropriate data collection and sampling techniques; and analysis and interpretation of data. Students are expected to submit a final written report containing appropriate conclusions from data analysis and recommendations for further studies. Each student is also required to complete a self-assessment. The interdisciplinary experiences gained by faculty and students have already been incorporated into other courses and have led to publication of results. The course stimulates both
Variational methods for field theories
Ben-Menahem, S.
1986-09-01
Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1) dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening, and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.
Method of depositing multi-layer carbon-based coatings for field emission
Sullivan, John P.; Friedmann, Thomas A.
1999-01-01
A novel field emitter device for cold cathode field emission applications, comprising a multi-layer resistive carbon film. The multi-layered film of the present invention is comprised of at least two layers of a resistive carbon material, preferably amorphous-tetrahedrally coordinated carbon, such that the resistivities of adjacent layers differ. For electron emission from the surface, the preferred structure comprises a top layer having a lower resistivity than the bottom layer. For edge emitting structures, the preferred structure of the film comprises a plurality of carbon layers, wherein adjacent layers have different resistivities. Through selection of deposition conditions, including the energy of the depositing carbon species, the presence or absence of certain elements such as H, N, inert gases or boron, carbon layers having desired resistivities can be produced. Field emitters made according to the present invention display improved electron emission characteristics in comparison to conventional field emitter materials.
Method of depositing multi-layer carbon-based coatings for field emission
Sullivan, J.P.; Friedmann, T.A.
1999-08-10
A novel field emitter device is disclosed for cold cathode field emission applications, comprising a multi-layer resistive carbon film. The multi-layered film of the present invention is comprised of at least two layers of a resistive carbon material, preferably amorphous-tetrahedrally coordinated carbon, such that the resistivities of adjacent layers differ. For electron emission from the surface, the preferred structure comprises a top layer having a lower resistivity than the bottom layer. For edge emitting structures, the preferred structure of the film comprises a plurality of carbon layers, wherein adjacent layers have different resistivities. Through selection of deposition conditions, including the energy of the depositing carbon species, the presence or absence of certain elements such as H, N, inert gases or boron, carbon layers having desired resistivities can be produced. Field emitters made according to the present invention display improved electron emission characteristics in comparison to conventional field emitter materials. 8 figs.
Novel Texture-based Visualization Methods for High-dimensional Multi-field Data Sets
2013-07-06
… the gradient image is multiplied with the first texture image, resulting in the second texture image appearing as "bumps". The concept of Gestalt … originates from the fine arts and expresses the notion that the whole contains more information than the parts. Perception of Gestalt is influenced by … We exploit Gestalt perception by using different masks, which subdivide the domain of two fields, and show for each section only one field.
Shen, Hujun; Czaplewski, Cezary; Liwo, Adam; Scheraga, Harold A
2008-08-01
The kinetic-trapping problem in simulating protein folding can be overcome by using a Replica Exchange Method (REM). However, implementing REM in molecular dynamics simulations requires synchronization between processors on parallel computers, and communication between processors limits its ability to sample the conformational space of a complex system efficiently. To minimize communication between processors during the simulation, a Serial Replica Exchange Method (SREM) was recently proposed by Hagan et al. (J. Phys. Chem. B 2007, 111, 1416-1423). Here, we report the implementation of this new SREM algorithm with our physics-based united-residue (UNRES) force field. The method has been tested on the protein 1E0L with a temperature-independent UNRES force field and on terminally blocked deca-alanine (Ala(10)) and 1GAB with the recently introduced temperature-dependent UNRES force field. With the temperature-independent force field, SREM reproduces the results of REM but is more efficient in terms of wall-clock time and scales better on distributed-memory machines. However, exact application of SREM to the temperature-dependent UNRES algorithm requires the determination of a four-dimensional distribution of UNRES energy components instead of a one-dimensional energy distribution for each temperature, which is prohibitively expensive. Hence, we assumed that the temperature dependence of the force field can be ignored for neighboring temperatures. This version of SREM worked for Ala(10), which is a simple system, but failed to reproduce the thermodynamic results as well as regular REM on the more complex 1GAB protein. Hence, SREM can be applied to the temperature-independent but not to the temperature-dependent UNRES force field.
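The temperature-swap step at the heart of any replica exchange scheme can be sketched as follows. This is the generic Metropolis exchange criterion, not the UNRES-specific implementation, and the function names are illustrative:

```python
import math
import random

def swap_probability(E_i, E_j, T_i, T_j, k_B=1.0):
    """Metropolis acceptance probability for exchanging the configurations
    of two replicas at temperatures T_i and T_j with energies E_i and E_j."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_j - E_i)
    if delta <= 0.0:          # exchange always accepted in this case
        return 1.0
    return math.exp(-delta)

def attempt_swap(E_i, E_j, T_i, T_j):
    """Accept or reject the exchange with the Metropolis criterion."""
    return random.random() < swap_probability(E_i, E_j, T_i, T_j)
```

In serial REM the key change is that the partner replica's energy is drawn from a precomputed energy distribution for the neighboring temperature instead of being communicated between processors.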
Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...
NASA Astrophysics Data System (ADS)
Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme
2016-04-01
We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright, spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).
Energy-based method for near-real time modeling of sound field in complex urban environments.
Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A
2012-12-01
Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no real description of the phenomena at issue has been given. Here, the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustics (GA) solutions, showing good overall agreement. The role played by diffraction near building edges close to the source is discussed, and suggestions are made on the possibility of accurately predicting the sound field in complex urban environments in near-real-time simulations.
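A minimal sketch of such an explicit finite-difference step, assuming a uniform grid, zero-flux boundaries, and the generic acoustic diffusion equation ∂w/∂t = D∇²w − σw + q (the paper's actual scheme, coefficients, and boundary treatment may differ):

```python
import numpy as np

def laplacian(w, dx):
    """Second-order 3-D Laplacian with zero-flux (edge-replicated) boundaries."""
    p = np.pad(w, 1, mode="edge")
    return (p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1]
            + p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1]
            + p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2]
            - 6.0 * w) / dx**2

def diffuse_step(w, D, dt, dx, sigma=0.0, source=None):
    """One explicit Euler step of dw/dt = D*lap(w) - sigma*w + source.
    Stable roughly for dt <= dx**2 / (6*D)."""
    dw = D * laplacian(w, dx) - sigma * w
    if source is not None:
        dw = dw + source
    return w + dt * dw
```

Stepping this equation on a coarse room- or street-scale grid is what makes the energy-based approach orders of magnitude cheaper than wave-resolving FDTD.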
NASA Astrophysics Data System (ADS)
Tang, Min; Wang, Yihong
2017-02-01
In magnetized plasma, the magnetic field confines the particles around the field lines. The anisotropy intensity in the viscosity and heat conduction may reach the order of 10^12. When the boundary conditions are periodic or Neumann, the strong diffusion leads to an ill-posed limiting problem. To remove the ill-conditioning in the highly anisotropic diffusion equations, we introduce a simple but very efficient asymptotic-preserving reformulation in this paper. The key idea is that, instead of discretizing the Neumann boundary conditions locally, we replace one of them by the integration of the original problem along the field line; the singular 1/ɛ terms can be replaced by O(1) terms after the integration, which yields a well-posed problem. Only small modifications to the original code are required, and no change of coordinates or mesh adaptation is needed. Uniform convergence with respect to the anisotropy strength 1/ɛ is observed numerically, and the condition number does not scale with the anisotropy.
Correlation Based Geomagnetic Field Modeling
NASA Astrophysics Data System (ADS)
Holschneider, M.; Mauerberger, S.; Lesur, V.; Baerenzung, J.
2015-12-01
We present a new method for determining geomagnetic field models. It is based on the construction of an a priori correlation structure derived from our knowledge of the characteristic length scales and sources of the geomagnetic field. The magnetic field measurements are then seen as correlated random variables as well, and the inversion process amounts to computing the a posteriori correlation structure using Bayes' theorem. We show how this technique allows the statistical separation of the various field contributions and the assessment of their uncertainties.
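For a linear Gaussian observation model, this a priori to a posteriori correlation update has a closed form. A sketch under those simplifying assumptions (the actual geomagnetic parameterization and correlation kernels are more elaborate):

```python
import numpy as np

def gaussian_posterior(G, d, C_m, C_n):
    """Posterior mean and covariance of the model m for the linear problem
    d = G @ m + n, with prior m ~ N(0, C_m) and noise n ~ N(0, C_n)."""
    S = G @ C_m @ G.T + C_n              # covariance of the data
    K = C_m @ G.T @ np.linalg.inv(S)     # gain matrix
    mean = K @ d                         # a posteriori mean
    cov = C_m - K @ G @ C_m              # a posteriori correlation structure
    return mean, cov
```

The diagonal of the returned covariance is what quantifies the uncertainty of each separated field contribution.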
Variational Methods for Field Theories.
NASA Astrophysics Data System (ADS)
Ben-Menahem, Shahar
The thesis has four parts, dealing with four field theory models: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1) dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening, and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. In the second part, we use free field theory as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Without the adiabatic method, a much larger number of states per block must be kept to get comparable results. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. This transition is a rudimentary version of the actual transition known to occur in the XY model, and is
NASA Astrophysics Data System (ADS)
Schnetger, Bernhard; Dellwig, Olaf
2012-02-01
Experiments with water samples from the redoxclines of the Black Sea and the Baltic Sea identified a fraction of dissolved Mn which is completely oxidised to solid MnOx within less than 48 h under laboratory conditions. Disproportionation of this dissolved reactive Mn (dMnreact) into Mn(II) and Mn(IV) did not occur. Our data suggest that bacteria using oxygen are responsible for the fast oxidation of dMnreact. The operational definition of dMnreact is a Mn phase that passes a 0.45 μm filter but can be separated from the remaining dissolved Mn(II) by filtration 48 h after exposure to atmospheric oxygen. The application of this method to water samples from the redoxcline of the Black Sea reveals dMnreact profiles comparable to published Mn(III) profiles analysed by polarography, thus identifying Mn(III) as the dominant constituent of dMnreact. As the degree of autocatalytic oxidation of dissolved Mn(II) by readily produced MnOx and of microbial Mn(II) oxidation within the applied oxidation period is unknown, dMnreact is at least a semi-quantitative measure of dissolved Mn(III). Furthermore, the present method helps to assess the full potential for oxidation of dissolved Mn within aquatic ecosystems. This method has the advantage that sample preparation can easily be done on site, followed by analysis of dissolved Mn by conventional methods.
Field-based evaluation of a male-specific (F+) RNA coliphage concentration method
Fecal contamination of water poses a significant risk to public health due to the potential presence of pathogens, including enteric viruses. Thus, sensitive, reliable and easy to use methods for the detection of microorganisms are needed to evaluate water quality. In this stud...
Liu, Cui; Wang, Yang; Zhao, Dongxia; Gong, Lidong; Yang, Zhongzhi
2014-02-01
The integrity of the genetic information is constantly threatened by oxidizing agents, and oxidized guanines have been linked to different types of cancer. Theoretical approaches supplement the assorted experimental techniques and bring new insight and opportunities to investigate the underlying microscopic mechanisms. Unfortunately, there has been no force field specific to DNA systems that include oxidized guanines. Taking high-level ab initio calculations as the benchmark, we developed the ABEEMσπ fluctuating-charge force field, which uses multiple fluctuating charges per atom, and applied it to study the energies, structures, and mutations of base pairs containing oxidized guanines. The geometries were obtained in reference to other studies or by optimization at the B3LYP/6-31+G* level, which was the most rational and time-saving among the 24 quantum mechanical methods selected and tested in this work. The energies were determined at the MP2/aug-cc-pVDZ level with BSSE corrections. Results show that the constructed potential function can accurately simulate the changes in the H-bonds and in the buckle angle formed by the two base planes induced by oxidized guanine, and it provides reliable information on hydrogen bonding, stacking interactions, and the mutation processes. The performance of the ABEEMσπ polarizable force field in predicting bond lengths, bond angles, dipole moments, etc. is generally better than that of common force fields, and its accuracy is close to that of the MP2 method. This shows that the ABEEMσπ model is a reliable choice for further research on the dynamic behavior of DNA fragments including oxidized guanine.
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of the high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of the code constructed by this method has no cycle of length 4, which ensures that the obtained code has a good distance property. Simulation results show that, at a bit error rate (BER) of 10^-6 and in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3 780, 3 540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32 640, 30 592) code in ITU-T G.975.1, respectively. In addition, the NCG of the proposed QC-LDPC(3 780, 3 540) code is 0.2 dB and 0.4 dB higher than those of the SG-QC-LDPC(3 780, 3 540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3 780, 3 540) code based on two arbitrary sets of a finite field, respectively. Thus, the proposed QC-LDPC(3 780, 3 540) code can be well applied in optical communication systems.
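One common way to build such codes is to derive a shift (exponent) matrix from elements of the multiplicative group of GF(q) and expand each entry into a circulant permutation matrix. The sketch below illustrates that generic construction for a prime field; it is not necessarily the specific code of the paper, and suitable choices of a and b are needed to guarantee girth at least 6:

```python
import numpy as np

def exponent_matrix(q, a, b, rows, cols):
    """Shift (exponent) matrix whose (i, j) entry is a^i * b^j mod q,
    with a and b elements of the multiplicative group of GF(q), q prime.
    Each entry indexes a circulant permutation matrix."""
    return [[(pow(a, i, q) * pow(b, j, q)) % q for j in range(cols)]
            for i in range(rows)]

def circulant_permutation(shift, size):
    """size x size identity matrix with its columns cyclically shifted."""
    P = np.zeros((size, size), dtype=int)
    for r in range(size):
        P[r, (r + shift) % size] = 1
    return P

def expand(E, size):
    """Expand an exponent matrix into the full quasi-cyclic parity-check matrix."""
    return np.block([[circulant_permutation(s % size, size) for s in row]
                     for row in E])
```

The quasi-cyclic structure is what makes encoder and decoder hardware for optical links simple: each block row is a shift register rather than a random sparse matrix.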
Field by field hybrid upwind splitting methods
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1993-01-01
A new and general approach to upwind splitting is presented. The design principle combines the robustness of flux vector splitting (FVS) schemes in the capture of nonlinear waves with the accuracy of some flux difference splitting (FDS) schemes in the resolution of linear waves. The new schemes are derived following a general hybridization technique performed directly at the basic level of the field-by-field decomposition involved in FDS methods. The scheme does not use a spatial switch that must be tuned according to the local smoothness of the approximate solution.
Tavakkoli, Ehsan; Fatehi, Foad; Rengasamy, Pichu; McDonald, Glenn K.
2012-01-01
Success in breeding crops for yield and other quantitative traits depends on the use of methods to evaluate genotypes accurately under field conditions. Although many screening criteria have been suggested to distinguish between genotypes for their salt tolerance under controlled environmental conditions, there is a need to test these criteria in the field. In this study, the salt tolerance, ion concentrations, and accumulation of compatible solutes of genotypes of barley with a range of putative salt tolerance were investigated using three growing conditions (hydroponics, soil in pots, and natural saline field). Initially, 60 genotypes of barley were screened for their salt tolerance and uptake of Na+, Cl–, and K+ at 150 mM NaCl and, based on this, a subset of 15 genotypes was selected for testing in pots and in the field. Expression of salt tolerance in saline solution culture was not a reliable indicator of the differences in salt tolerance between barley plants that were evident in saline soil-based comparisons. Significant correlations were observed in the rankings of genotypes on the basis of their grain yield production at a moderately saline field site and their relative shoot growth in pots at ECe 7.2 [Spearman’s rank correlation (rs)=0.79] and ECe 15.3 (rs=0.82) and the crucial parameter of leaf Na+ (rs=0.72) and Cl– (rs=0.82) concentrations at ECe 7.2 dS m−1. This work has established screening procedures that correlated well with grain yield at sites with moderate levels of soil salinity. This study also showed that both salt exclusion and osmotic tolerance are involved in salt tolerance and that the relative importance of these traits may differ with the severity of the salt stress. In soil, ion exclusion tended to be more important at low to moderate levels of stress but osmotic stress became more important at higher stress levels. Salt exclusion coupled with a synthesis of organic solutes were shown to be important components of salt
Tavakkoli, Ehsan; Fatehi, Foad; Rengasamy, Pichu; McDonald, Glenn K
2012-06-01
Success in breeding crops for yield and other quantitative traits depends on the use of methods to evaluate genotypes accurately under field conditions. Although many screening criteria have been suggested to distinguish between genotypes for their salt tolerance under controlled environmental conditions, there is a need to test these criteria in the field. In this study, the salt tolerance, ion concentrations, and accumulation of compatible solutes of genotypes of barley with a range of putative salt tolerance were investigated using three growing conditions (hydroponics, soil in pots, and natural saline field). Initially, 60 genotypes of barley were screened for their salt tolerance and uptake of Na(+), Cl(-), and K(+) at 150 mM NaCl and, based on this, a subset of 15 genotypes was selected for testing in pots and in the field. Expression of salt tolerance in saline solution culture was not a reliable indicator of the differences in salt tolerance between barley plants that were evident in saline soil-based comparisons. Significant correlations were observed in the rankings of genotypes on the basis of their grain yield production at a moderately saline field site and their relative shoot growth in pots at EC(e) 7.2 [Spearman's rank correlation (rs)=0.79] and EC(e) 15.3 (rs=0.82) and the crucial parameter of leaf Na(+) (rs=0.72) and Cl(-) (rs=0.82) concentrations at EC(e) 7.2 dS m(-1). This work has established screening procedures that correlated well with grain yield at sites with moderate levels of soil salinity. This study also showed that both salt exclusion and osmotic tolerance are involved in salt tolerance and that the relative importance of these traits may differ with the severity of the salt stress. In soil, ion exclusion tended to be more important at low to moderate levels of stress but osmotic stress became more important at higher stress levels. Salt exclusion coupled with a synthesis of organic solutes were shown to be important components of
An inversion method of 2D NMR relaxation spectra in low fields based on LSQR and L-curve
NASA Astrophysics Data System (ADS)
Su, Guanqun; Zhou, Xiaolong; Wang, Lijia; Wang, Yuanjun; Nie, Shengdong
2016-04-01
The low-field nuclear magnetic resonance (NMR) inversion method based on the traditional least-squares QR decomposition (LSQR) always produces some oscillating spectra. Moreover, the solution obtained by the traditional LSQR algorithm often cannot reflect the true distribution of all the components. Hence, a good solution requires some manual intervention, especially for low signal-to-noise ratio (SNR) data. An approach based on the LSQR algorithm and the L-curve is presented to solve this problem. The L-curve method is applied to obtain an improved initial optimal solution by balancing the residual against the complexity of the solution, instead of manually adjusting the smoothing parameters. First, the traditional LSQR algorithm is run on 2D NMR T1-T2 data to obtain the resulting spectra and corresponding residuals, whose norms are used to plot the L-curve. Second, the corner of the L-curve is located and taken as the initial optimal solution for the non-negativity constraint. Finally, a 2D map is corrected and calculated iteratively based on the initial optimal solution. The proposed approach is tested on both simulated and measured data. The results show that this algorithm is robust, accurate, and promising for NMR analysis.
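A rough sketch of the corner-selection step, using SciPy's `lsqr` with the iteration count as the regularization parameter and a simple "maximum distance to the chord" heuristic for locating the L-curve corner. This is an illustrative stand-in; the paper's iterative non-negative correction step is omitted:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsqr_lcurve(A, b, max_iters=30):
    """Run LSQR with 1..max_iters iterations, build the L-curve of
    (residual norm, solution norm), and pick as corner the log-log point
    farthest from the chord joining the curve's endpoints."""
    xs, res, sol = [], [], []
    for k in range(1, max_iters + 1):
        x = lsqr(A, b, iter_lim=k)[0]
        xs.append(x)
        res.append(np.linalg.norm(A @ x - b))
        sol.append(np.linalg.norm(x))
    pts = np.log(np.column_stack([res, sol]))
    chord = pts[-1] - pts[0]
    chord /= np.linalg.norm(chord)
    rel = pts - pts[0]
    # perpendicular distance of every point from the chord
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
    corner = int(np.argmax(dist))
    return xs[corner], corner + 1   # solution and its iteration count
```

Stopping at the corner exploits the semi-convergence of LSQR: earlier iterates are over-smoothed, later ones start fitting the noise.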
NASA Astrophysics Data System (ADS)
Yin, Gang; Zhang, Yingtang; Mi, Songlin; Fan, Hongbo; Li, Zhining
2016-11-01
To obtain accurate magnetic gradient tensor data, a fast and robust calculation method based on regularization in the frequency domain is proposed. Using potential field theory, the transform formulas in the frequency domain are derived in order to calculate the magnetic gradient tensor from pre-existing total magnetic anomaly data. By analyzing the filter characteristics of the vertical vector transform operator (VVTO) and the gradient tensor transform operator (GTTO), we show that the conventional transform process is unstable, because it amplifies the high-frequency part of the data in which the measurement noise is located. Since this instability leads to a low signal-to-noise ratio (SNR) in the calculated result, we introduce a regularization method in this paper. By selecting the optimum regularization parameters for the different transform phases using the C-norm approach, the high-frequency noise is restrained and the SNR is improved effectively. Numerical analysis demonstrates that the values and characteristics of the data calculated by the proposed method compare favorably with reference magnetic gradient tensor data. In addition, magnetic gradient tensor components calculated from a real aeromagnetic survey provided better resolution of the magnetic sources than the original total-field profile.
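The flavor of such a regularized frequency-domain transform can be illustrated with a vertical-derivative operator on a gridded anomaly map. The Tikhonov-style damping factor below is an assumption for illustration, not the paper's C-norm parameter selection:

```python
import numpy as np

def vertical_gradient_fft(field, dx, dy, alpha=0.0):
    """Vertical-derivative transform of a gridded potential-field anomaly.
    The exact wavenumber-domain operator is |k|; the factor 1/(1 + alpha*|k|^2)
    is a Tikhonov-style regularizer damping noise-dominated high wavenumbers."""
    ny, nx = field.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    K = np.hypot(kx[None, :], ky[:, None])   # radial wavenumber |k|
    H = K / (1.0 + alpha * K**2)             # regularized transform operator
    return np.fft.ifft2(np.fft.fft2(field) * H).real
```

With alpha = 0 this reduces to the conventional, unstable transform; increasing alpha trades resolution for noise suppression, which is exactly the balance the regularization parameter controls.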
NASA Astrophysics Data System (ADS)
Lee, J. H.; Lee, Sang Young
2006-10-01
In obtaining the intrinsic surface resistance (RS) from the effective surface resistance (RS,eff) measured at microwave frequencies by the dielectric resonator method, the impedance transformation method reported by Klein et al. [N. Klein, H. Chaloupka, G. Muller, S. Orbach, H. Piel, B. Roas, L. Schultz, U. Klein, M. Peiniger, J. Appl. Phys. 67 (1990) 6940] has been very useful. Here we compared the RS of YBa2Cu3O7-δ (YBCO) films on dielectric substrates obtained by a rigorous field analysis based on the TE-mode matching method with those obtained by the impedance transformation method. The two methods produced almost the same RS,eff vs. RS relation in most practical cases, where the substrate thickness is less than 1 mm and sapphire or rutile is used as the material for the dielectric rod. However, when the resonant frequency of the dielectric resonator became close to that of the resonant structure formed by the substrates and the metallic surroundings, the RS,eff vs. RS relations appeared strikingly different between the two methods. Effects of the TE011-mode cutoff frequency inside the substrate region, which could not be considered in the impedance transformation method, on the relation between the RS,eff and RS of superconductor films are also investigated. We confirmed our arguments by demonstrating a case where the existence of evanescent modes must be considered in obtaining the RS of YBCO films from the RS,eff.
NASA Astrophysics Data System (ADS)
Liu, Xiaoming; Mei, Ming; Liu, Jun; Hu, Wei
2015-12-01
Clustered microcalcifications (MCs) in mammograms are an important early sign of breast cancer in women, and their accurate detection is important in computer-aided detection (CADe). In this paper, we integrate the possibilistic fuzzy c-means (PFCM) clustering algorithm and a weighted support vector machine (WSVM) for the detection of MC clusters in full-field digital mammograms (FFDM). For each image, suspicious MC regions are extracted with region growing and active contour segmentation. Then geometry and texture features are extracted for each suspicious MC, a mutual-information-based supervised criterion is used to select important features, and PFCM is applied to cluster the samples into two clusters. Weights of the samples are calculated based on the possibility and typicality values from the PFCM and the ground-truth labels, and a weighted nonlinear SVM is trained. During the test process, when an unknown image is presented, suspicious regions are located with the segmentation step, the selected features are extracted, and the suspicious MC regions are classified as containing MC or not by the trained weighted nonlinear SVM. Finally, the MC regions are analyzed with spatial information to locate MC clusters. The proposed method is evaluated using a database of 410 clinical mammograms and compared with a standard unweighted support vector machine (SVM) classifier. The detection performance is evaluated using receiver operating characteristic (ROC) curves and free-response receiver operating characteristic (FROC) curves. The proposed method obtained an area under the ROC curve of 0.8676 for MC detection, while the standard SVM obtained an area of 0.8268. For MC cluster detection, the proposed method obtained a high sensitivity of 92% with a false-positive rate of 2.3 clusters/image, which is also better than the standard SVM with 4.7 false-positive clusters/image at the same sensitivity.
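The sample-weighting idea can be sketched with a tiny weighted linear SVM trained by subgradient descent on a weighted hinge loss. The paper uses a nonlinear kernel SVM; this stand-in only shows how per-sample weights (e.g. typicality values from a fuzzy clustering step) enter the objective:

```python
import numpy as np

def weighted_linear_svm(X, y, sample_w, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM with per-sample weights, trained by subgradient descent on
    lam/2*||w||^2 + (1/n)*sum_i sample_w[i]*max(0, 1 - y_i*(w.x_i + b)).
    y must be in {-1, +1}; sample_w down-weights less trustworthy samples."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        coef = sample_w * (margins < 1) * y   # weighted hinge subgradient terms
        w -= lr * (lam * w - X.T @ coef / n)
        b += lr * coef.sum() / n
    return w, b
```

Setting a sample's weight near zero effectively removes it from training, which is how atypical or mislabeled suspicious regions are prevented from distorting the decision boundary.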
Teaching Geographic Field Methods Using Paleoecology
ERIC Educational Resources Information Center
Walsh, Megan K.
2014-01-01
Field-based undergraduate geography courses provide numerous pedagogical benefits including an opportunity for students to acquire employable skills in an applied context. This article presents one unique approach to teaching geographic field methods using paleoecological research. The goals of this course are to teach students key geographic…
NASA Astrophysics Data System (ADS)
Li, J. H.; Zhu, Z. Q.; Liu, S. C.; Zeng, S. H.
2011-12-01
Based on the principle of anomalous-field algorithms, Helmholtz equations for the electromagnetic field are deduced. We take the electric field Helmholtz equation as the governing equation, and derive the corresponding system of vector finite element equations using the Galerkin method. For solving the governing equation with the vector finite element method, we divide the computational domain into homogeneous brick elements and use Whitney-type vector basis functions. After obtaining the anomalous electric field in the Laplace domain using the vector finite element method, we use the Gaver-Stehfest algorithm to transform it to the time domain, and obtain the impulse response of the anomalous magnetic field through Faraday's law of electromagnetic induction. The accuracy of the vector finite element method is tested by comparison with 1D analytic solutions for quasi-H-type geoelectric models. For a low-resistivity brick geoelectric model, the plot shape of the electromotive force computed using the vector finite element method coincides with that of the integral equation method and finite-difference time-domain solutions.
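The Gaver-Stehfest transform step mentioned above can be sketched as follows; this is the standard textbook form of the algorithm, not the authors' implementation.

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k (N must be even)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def laplace_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2 = math.log(2.0)
    V = stehfest_coefficients(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# sanity check against a known transform pair: L{e^-t} = 1/(s+1)
f1 = laplace_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

The method evaluates F(s) only at real s values, which is why it pairs well with frequency-domain solvers such as the Laplace-domain finite element solution described in the abstract.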
A DNA-based method for studying root responses to drought in field-grown wheat genotypes
Huang, Chun Y.; Kuchel, Haydn; Edwards, James; Hall, Sharla; Parent, Boris; Eckermann, Paul; Herdina; Hartley, Diana M.; Langridge, Peter; McKay, Alan C.
2013-01-01
Root systems are critical for water and nutrient acquisition by crops. Current methods measuring root biomass and length are slow and labour-intensive for studying root responses to environmental stresses in the field. Here, we report the development of a method that measures changes in the root DNA concentration in soil and detects root responses to drought in controlled environment and field trials. To allow comparison of soil DNA concentrations from different wheat genotypes, we also developed a procedure for correcting genotypic differences in the copy number of the target DNA sequence. The new method eliminates the need for separation of roots from soil and permits large-scale phenotyping of root responses to drought or other environmental and disease stresses in the field. PMID:24217242
Variational methods for field theories
NASA Astrophysics Data System (ADS)
Ben-Menahem, Shahar
1986-09-01
The thesis is presented in four parts dealing with field theory models: Periodic Quantum Electrodynamics (PQED) in (2+1) dimensions, free scalar field theory in (1+1) dimensions, the quantum XY model in (1+1) dimensions, and the (1+1)-dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening and confinement through tunneling between degenerate vacua. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. In the fourth part, the transfer matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1+1) dimensions.
An efficient direction field-based method for the detection of fasteners on high-speed railways.
Yang, Jinfeng; Tao, Wei; Liu, Manhua; Zhang, Yongjie; Zhang, Haibo; Zhao, Hui
2011-01-01
Railway inspection is an important task in railway maintenance to ensure safety. The fastener is a major part of the railway which fastens the tracks to the ground. The current article presents an efficient method to detect fasteners on the basis of image processing and pattern recognition techniques, which can be used to detect the absence of fasteners on the corresponding track at high speed (up to 400 km/h). The direction field is extracted as the feature descriptor for recognition. In addition, an appropriate weight coefficient matrix is presented for robust and rapid matching in a complex environment. Experimental results show that the proposed method is computationally efficient and robust for the detection of fasteners in a complex environment. Using a practical device mounted on a track inspection train, sufficient fastener samples were obtained, and the feasibility of the method was verified at 400 km/h. PMID:22164022
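A direction field of the kind used as the feature descriptor above is commonly built from per-pixel gradient orientations. The sketch below computes one with Sobel kernels on a plain list-of-lists image; it is an illustrative stand-in, not the authors' extraction code.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def direction_field(img):
    """Per-pixel gradient orientation (radians) for an image given as a
    list of rows of intensities; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    field = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            field[y][x] = math.atan2(gy, gx)
    return field

# a horizontal intensity ramp has its gradient along +x (angle ~ 0)
ramp = [[x for x in range(5)] for _ in range(5)]
theta = direction_field(ramp)[2][2]
```

Matching against a template direction field can then weight each pixel's orientation difference by a coefficient matrix, as the abstract describes.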
NASA Astrophysics Data System (ADS)
Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei
2009-10-01
In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.
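The single-image limit of the reaction-field construction above can be sketched with the classical Friedman image approximation for a point charge in a spherical cavity; the function below is an illustrative textbook formula, not the paper's multiple-image implementation.

```python
def image_charge(q, s, a, eps_i=1.0, eps_o=80.0):
    """Friedman single-image approximation for a point charge q at
    distance s (< a) from the centre of a spherical cavity of radius a,
    interior permittivity eps_i, exterior eps_o. Returns the image
    charge and its distance from the centre (outside the cavity)."""
    gamma = (eps_o - eps_i) / (eps_o + eps_i)
    return -gamma * q * a / s, a * a / s

# conductor limit (eps_o -> infinity) recovers the grounded-sphere
# image: charge -q*a/s located at a^2/s
q_im, r_im = image_charge(1.0, 0.5, 1.0, eps_i=1.0, eps_o=1e12)
```

The paper's model refines this idea with multiple image charges for controllable accuracy, combined with a fast multipole method for the direct Coulomb sums.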
NASA Astrophysics Data System (ADS)
Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi
Large-area seafloor imaging will bring significant benefits to various fields such as academic research, resource survey, marine development, security, and search-and-rescue. The authors have previously proposed a navigation method for an autonomous underwater vehicle for seafloor imaging, and verified its performance by mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at the Tagiri vent field, Kagoshima Bay, Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a large number of pictures taken by an underwater platform. The method first removes lens distortion and non-uniformities of color and lighting from each image, and then performs ortho-rectification based on the camera pose and seafloor shape estimated from navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an extension of the image-based method of Pizarro et al. (2003). Using both types of information yields an image alignment that is consistent both globally and locally, and makes the method applicable to data sets with few visual cues. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in September 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, capturing unique features of the field such as bacterial mats and tubeworm colonies.
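The color/lighting equalisation step can be illustrated with a simple gain-and-offset normalisation that brings every tile to a common mean and spread; the target values and the function are hypothetical stand-ins for the paper's correction.

```python
from statistics import mean, pstdev

def normalize_tile(tile, target_mean=128.0, target_std=30.0):
    """Gain/offset normalisation of one image tile (flat list of pixel
    intensities) to a common mean and standard deviation. Illustrative
    stand-in for the colour/lighting equalisation step."""
    m, s = mean(tile), pstdev(tile)
    if s == 0:
        return [target_mean] * len(tile)
    return [(v - m) / s * target_std + target_mean for v in tile]

# two tiles shot under very different lighting end up comparable
a = normalize_tile([10, 20, 30, 40])
b = normalize_tile([100, 140, 180, 220])
```

Normalising tiles before blending is one common way to obtain the "uniformly illuminated" appearance the abstract reports.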
NASA Astrophysics Data System (ADS)
Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; Bettencourt, Matthew
2016-12-01
We propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.
Hohenstein, Edward G.; Luehr, Nathan; Ufimtsev, Ivan S.; Martínez, Todd J.
2015-06-14
Despite its importance, state-of-the-art algorithms for performing complete active space self-consistent field (CASSCF) computations have lagged far behind those for single reference methods. We develop an algorithm for the CASSCF orbital optimization that uses sparsity in the atomic orbital (AO) basis set to increase the applicability of CASSCF. Our implementation of this algorithm uses graphical processing units (GPUs) and has allowed us to perform CASSCF computations on molecular systems containing more than one thousand atoms. Additionally, we have implemented analytic gradients of the CASSCF energy; the gradients also benefit from GPU acceleration as well as sparsity in the AO basis.
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
La Delfa, Nicholas J; Potvin, Jim R
2017-03-01
This paper describes the development of a novel method (termed the 'Arm Force Field' or 'AFF') to predict manual arm strength (MAS) for a wide range of body orientations, hand locations and any force direction. This method used an artificial neural network (ANN) to predict the effects of hand location and force direction on MAS, and included a method to estimate the contribution of the arm's weight to the predicted strength. The AFF method predicted the MAS values very well (r^2 = 0.97, RMSD = 5.2 N, n = 456) and maintained good generalizability with external test data (r^2 = 0.842, RMSD = 13.1 N, n = 80). The AFF can be readily integrated within any DHM ergonomics software, and appears to be a more robust, reliable and valid method of estimating the strength capabilities of the arm, when compared to current approaches.
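The goodness-of-fit figures quoted above (r^2 and RMSD between measured and predicted strengths) can be computed as sketched below; the function and toy data are illustrative, not the study's analysis code.

```python
from statistics import mean

def fit_metrics(measured, predicted):
    """Coefficient of determination (r^2) and root-mean-square
    difference between measured and predicted strength values."""
    n = len(measured)
    mbar = mean(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mbar) ** 2 for m in measured)
    r2 = 1.0 - ss_res / ss_tot
    rmsd = (ss_res / n) ** 0.5
    return r2, rmsd

# perfect predictions give r^2 = 1 and RMSD = 0 (strengths in newtons)
r2, rmsd = fit_metrics([100.0, 150.0, 200.0], [100.0, 150.0, 200.0])
```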
Xu, Peng; Haves, Philip
2002-05-16
An automated fault detection and diagnosis tool for HVAC systems is being developed, based on an integrated, life-cycle approach to commissioning and performance monitoring. The tool uses component-level HVAC equipment models implemented in the SPARK equation-based simulation environment. The models are configured using design information and component manufacturers' data and then fine-tuned to match the actual performance of the equipment by using data measured during functional tests of the sort used in commissioning. This paper presents the results of field tests of mixing box and VAV fan system models in an experimental facility and a commercial office building. The models were found to be capable of representing the performance of correctly operating mixing box and VAV fan systems and of detecting several types of incorrect operation.
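Model-based fault detection of this kind typically compares measured values against the tuned model's predictions and flags large residuals. The sketch below shows the idea; the variable names, threshold, and data are hypothetical.

```python
def detect_faults(measured, predicted, threshold):
    """Flag samples whose |measured - predicted| residual exceeds a
    threshold. A minimal stand-in for model-based fault detection."""
    return [abs(m - p) > threshold for m, p in zip(measured, predicted)]

# hypothetical supply-air flow samples (m^3/s): the third sample
# deviates from the model prediction and is flagged as a fault
flags = detect_faults([1.00, 1.02, 1.45, 0.99],
                      [1.00, 1.00, 1.00, 1.00], threshold=0.1)
```

In practice the threshold is chosen from the model's expected accuracy, so that normal measurement scatter is not reported as a fault.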
NASA Astrophysics Data System (ADS)
Yang, Kang; Guo, Zhaoli
2016-04-01
In this paper, a lattice Boltzmann equation (LBE) model is proposed for binary fluids based on a quasi-incompressible phase-field model [J. Shen et al., Commun. Comput. Phys. 13, 1045 (2013), 10.4208/cicp.300711.160212a]. Compared with other incompressible LBE models based on the incompressible phase-field theory, the quasi-incompressible model conserves mass locally. A series of numerical simulations are performed to validate the proposed model, and comparisons with an incompressible LBE model [H. Liang et al., Phys. Rev. E 89, 053320 (2014), 10.1103/PhysRevE.89.053320] are also carried out. It is shown that the proposed model can track the interface accurately. For the stationary droplet and rising bubble problems, the quasi-incompressible LBE gives nearly the same predictions as the incompressible model, but the compressible effect in the present model plays a significant role in the phase separation problem. Therefore, in general cases the present mass-conserving model should be adopted.
NASA Astrophysics Data System (ADS)
Pakniat, R.; Tavassoly, M. K.; Zandi, M. H.
2017-01-01
In this paper, we outline a scheme for entanglement swapping based on the concept of cavity QED. The atom-field entangled state in our study is produced in the nonlinear regime. In this scheme, the exploited cavities are prepared in a hybrid entangled state (a combination of coherent and number states) and the swapping process is investigated using two different methods, i.e., the detecting and Bell-state measurement methods, through cavity QED. Then, we make use of the atom-field entangled state obtained by the detecting method to show how atom-atom entanglement, as well as teleportation of atomic and field states, can be achieved with complete fidelity.
Field evaluation of a VOST sampling method
Jackson, M.D.; Johnson, L.D.; Fuerst, R.G.; McGaughey, J.F.; Bursey, J.T.; Merrill, R.G.
1994-12-31
The VOST (SW-846 Method 0030) specifies the use of Tenax® and a particular petroleum-based charcoal (SKC Lot 104, or its equivalent) that is no longer commercially available. In field evaluation studies of VOST methodology, a replacement petroleum-based charcoal has been used: candidate replacement sorbents for charcoal were studied, and Anasorb® 747, a carbon-based sorbent, was selected for field testing. The sampling train was modified to use only Anasorb® in the back tube and Tenax® in the two front tubes to avoid analytical difficulties associated with the analysis of the sequential-bed back tube used in the standard VOST train. The standard (SW-846 Method 0030) and the modified VOST methods were evaluated at a chemical manufacturing facility using a quadruple probe system with quadruple trains. In this field test, known concentrations of the halogenated volatile organic compounds listed in the Clean Air Act Amendments of 1990, Title 3, were introduced into the VOST train and the modified VOST train, using the same certified gas cylinder as a source of test compounds. Statistical tests of the comparability of methods were performed on a compound-by-compound basis. For most compounds, the VOST and modified VOST methods were found to be statistically equivalent.
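A compound-by-compound equivalence check of the kind described above is often based on a paired t statistic over co-located measurements from the two trains. The sketch below computes that statistic; the function and data are hypothetical, and the abstract does not state which specific test was applied.

```python
from statistics import mean, stdev

def paired_t(method_a, method_b):
    """Paired t statistic for one compound measured simultaneously by
    two sampling methods; |t| below the critical value for n-1 degrees
    of freedom suggests the methods are statistically equivalent."""
    d = [a - b for a, b in zip(method_a, method_b)]
    n = len(d)
    return mean(d) / (stdev(d) / n ** 0.5)

# hypothetical quadruple-train concentrations for one compound
t = paired_t([10.1, 9.8, 10.3, 10.0], [10.0, 9.9, 10.2, 10.1])
```

With four paired trains per compound, the degrees of freedom are small, so the critical t value is large and only sizable systematic differences are declared significant.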
NASA Astrophysics Data System (ADS)
Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.
2017-01-01
We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD), which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean field limit. The equations we have obtained are of the same type as those recently derived using rigorous techniques of probability theory. Numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
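The "direct integration of the stochastic dynamical system" for a finite network can be sketched as an Euler-Maruyama simulation of noisy FitzHugh-Nagumo neurons with all-to-all mean-field coupling. All parameter values below are hypothetical illustrations, not the paper's settings.

```python
import math, random

def simulate_fhn_network(n=100, steps=1000, dt=0.01, J=0.5, sigma=0.1,
                         a=0.7, b=0.8, eps=0.08, seed=1):
    """Euler-Maruyama simulation of n noisy FitzHugh-Nagumo neurons
    with mean-field (all-to-all) coupling of strength J; returns the
    time course of the population-mean membrane variable."""
    rng = random.Random(seed)
    v = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    w = [0.0] * n
    vbar_t = []
    for _ in range(steps):
        vbar = sum(v) / n
        for i in range(n):
            dv = v[i] - v[i] ** 3 / 3 - w[i] + J * (vbar - v[i])
            dw = eps * (v[i] + a - b * w[i])
            v[i] += dv * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            w[i] += dw * dt
        vbar_t.append(vbar)
    return vbar_t

trace = simulate_fhn_network()
```

As n grows, statistics of such traces should approach those of the McKean-Vlasov-Fokker-Planck solution, which is the comparison the abstract reports.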
Liu, Ji; Yu, Li-xia; Zhang, Bin; Zhao, Dong-e; Li, Xiao-yan; Wang, Heng-fei
2016-03-01
The deflagration fire that lasts a long time and covers a large area during a large-equivalent explosion makes it difficult to obtain velocity parameters of fragments in the near field. To solve this problem, this paper proposes an integrated photoelectric transceiver method that uses a laser screen as the sensing area. Analysis of the explosion flame spectral radiation of three different types of warheads shows that the intensity within the 0.3 to 1.0 μm band is relatively low. On this basis, the optical system applies the principle of measuring time over a fixed distance and uses reflector technology; it consists of a single-longitudinal-mode laser, a cylindrical Fresnel lens, narrow-band filters, high-speed optical sensors, etc. The system's advantages, such as an integrated transceiver, a compact structure, and the combination of narrow-band filtering with a single-longitudinal-mode laser, effectively suppress interference from the background light of the fire. Numerous experiments with different warhead models and explosive equivalents have been conducted to measure the velocities of different kinds of warheads, obtaining waveform signals with a high signal-to-noise ratio after signal de-noising and recognition using a National Instruments data acquisition and recording system. The experimental results show that this method can accurately measure the velocity of fragments around the center of the explosion. Specifically, the minimum fragment size that can be measured is 4 mm, velocities up to 1200 m·s(-1) can be obtained, and the capture rate is better than 95% compared with target-plate test results. At the same time, the system uses Fresnel lenses to form a rectangular light screen, which makes the distribution of the rectangular light uniform in the vertical direction, while the light intensity uniformity in the horizontal direction is better than 80%. Consequently, the system can
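The "time over a fixed distance" principle reduces to a simple calculation once a fragment's passage times through two parallel laser screens are recorded; the spacing and times below are hypothetical.

```python
def fragment_velocity(screen_distance_m, t1_s, t2_s):
    """Mean fragment velocity from the passage times through two
    parallel laser screens a known distance apart."""
    return screen_distance_m / (t2_s - t1_s)

# hypothetical example: 2 m between screens, 2.5 ms transit -> 800 m/s
v = fragment_velocity(2.0, 0.0010, 0.0035)
```

The measurement challenge the paper addresses is not this arithmetic but reliably detecting the screen-crossing events against the intense background light of the deflagration.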
Apparatuses and methods for generating electric fields
Scott, Jill R; McJunkin, Timothy R; Tremblay, Paul L
2013-08-06
Apparatuses and methods relating to generating an electric field are disclosed. An electric field generator may include a semiconductive material configured in a physical shape substantially different from a shape of an electric field to be generated thereby. The electric field is generated when a voltage drop exists across the semiconductive material. A method for generating an electric field may include applying a voltage to a shaped semiconductive material to generate a complex, substantially nonlinear electric field. The shape of the complex, substantially nonlinear electric field may be configured for directing charged particles to a desired location. Other apparatuses and methods are disclosed.
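Because the same current flows through every cross-section of a series-conducting bar, a spatially varying resistivity produces a spatially varying (nonlinear) field profile, which is one intuition behind shaping a semiconductive material to shape its field. The sketch below illustrates this one-dimensional case; it is a simplified model, not the patented apparatus.

```python
def field_profile(resistivities, total_voltage, segment_length):
    """Electric field in each segment of a semiconductive bar whose
    resistivity varies along its length (series conduction, unit
    cross-section): the same current flows everywhere, so the local
    field E_i is proportional to the local resistivity rho_i."""
    total_r = sum(resistivities) * segment_length
    current_density = total_voltage / total_r
    return [current_density * rho for rho in resistivities]

# hypothetical bar with three segments of increasing resistivity
E = field_profile([1.0, 2.0, 5.0], total_voltage=8.0, segment_length=1.0)
```

The segment voltages sum back to the applied voltage, while the field is concentrated in the most resistive region.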
Computer Based Virtual Field Trips.
ERIC Educational Resources Information Center
Clark, Kenneth F.; Hosticka, Alice; Schriver, Martha; Bedell, Jackie
This paper discusses computer based virtual field trips that use technologies commonly found in public schools in the United States. The discussion focuses on the advantages of both using and creating these field trips for an instructional situation. A virtual field trip to Cumberland Island National Seashore, St. Marys, Georgia is used as a point…
NASA Astrophysics Data System (ADS)
Leiva Lopez, Josue Nahun
In general, the nursery industry lacks an automated inventory control system. Object-based image analysis (OBIA) software and aerial images could be used to count plants in nurseries. The objectives of this research were: 1) to evaluate the effect of unmanned aerial vehicle (UAV) flight altitude and plant canopy separation of container-grown plants on count accuracy using aerial images and 2) to evaluate the effect of plant canopy shape, presence of flowers, and plant status (living and dead) on counting accuracy of container-grown plants using remote sensing images. Images were analyzed using Feature Analyst® (FA) and an algorithm trained using MATLAB®. Total count error, false positives, and unidentified plants were recorded from output images using FA; only total count error was reported for the MATLAB algorithm. For objective 1, images were taken at 6, 12 and 22 m above the ground using a UAV. Plants were placed on black fabric and gravel, and spaced as follows: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. In general, when both methods were considered, total count error was smaller [ranging from -5 (undercount) to 4 (overcount)] when plants were fully separated, with the exception of images taken at 22 m. FA showed a smaller total count error (-2) than MATLAB (-5) when plants were placed on black fabric rather than gravel. For objective 2, the plan was to continue using the UAV; however, due to the unexpected disruption of GPS-based navigation by heightened solar flare activity in 2013, a boom lift that could provide images on a more reliable basis was used. When images obtained using a boom lift were analyzed using FA, there was no difference between variables measured when an algorithm trained with an image displaying regular or irregular plant canopy shape was applied to images displaying both plant canopy shapes, even though the canopy shape of 'Sea Green' juniper is less compact than 'Plumosa Compacta
Galanti, Eli; Kaspi, Yohai
2016-04-01
During 2016–17, the Juno and Cassini spacecraft will both perform close eccentric orbits of Jupiter and Saturn, respectively, obtaining high-precision gravity measurements for these planets. These data will be used to estimate the depth of the observed surface flows on these planets. All models to date, relating the winds to the gravity field, have been in the forward direction, thus only allowing the calculation of the gravity field from given wind models. However, there is a need to do the inverse problem since the new observations will be of the gravity field. Here, an inverse dynamical model is developed to relate the expected measurable gravity field, to perturbations of the density and wind fields, and therefore to the observed cloud-level winds. In order to invert the gravity field into the 3D circulation, an adjoint model is constructed for the dynamical model, thus allowing backward integration. This tool is used for the examination of various scenarios, simulating cases in which the depth of the wind depends on latitude. We show that it is possible to use the gravity measurements to derive the depth of the winds, both on Jupiter and Saturn, also taking into account measurement errors. Calculating the solution uncertainties, we show that the wind depth can be determined more precisely in the low-to-mid-latitudes. In addition, the gravitational moments are found to be particularly sensitive to flows at the equatorial intermediate depths. Therefore, we expect that if deep winds exist on these planets they will have a measurable signature by Juno and Cassini.
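Inverting gravity measurements for wind parameters ultimately reduces to fitting a forward model to observed gravitational moments. The toy sketch below solves a two-parameter linear least-squares problem via the normal equations; the forward matrix, parameter meanings, and data are hypothetical illustrations of the inversion idea, not the adjoint model itself.

```python
def invert_2x2(H, y):
    """Solve the normal equations (H^T H) x = H^T y for a toy linear
    forward model y = H x relating two wind parameters to a set of
    observed gravity moments."""
    a = sum(r[0] * r[0] for r in H)
    b = sum(r[0] * r[1] for r in H)
    c = sum(r[1] * r[1] for r in H)
    p = sum(r[0] * yi for r, yi in zip(H, y))
    q = sum(r[1] * yi for r, yi in zip(H, y))
    det = a * c - b * b
    return ((c * p - b * q) / det, (a * q - b * p) / det)

# hypothetical forward model: 4 moments, 2 wind parameters
H = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
true_x = (2.0, 0.5)  # e.g. surface wind amplitude and a decay parameter
y = [r[0] * true_x[0] + r[1] * true_x[1] for r in H]
x_hat = invert_2x2(H, y)
```

The adjoint approach in the abstract plays the role of computing the model sensitivities efficiently for a fully nonlinear 3D forward model, where a closed-form solve like this is not available.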
Meyer, Frans J C; Davidson, David B; Jakobus, Ulrich; Stuchly, Maria A
2003-02-01
A hybrid finite-element method (FEM)/method of moments (MoM) technique is employed for specific absorption rate (SAR) calculations in a human phantom in the near field of a typical Global System for Mobile Communications (GSM) base-station antenna. The MoM is used to model the metallic surfaces and wires of the base-station antenna, and the FEM is used to model the heterogeneous human phantom. The advantages of each of these frequency-domain techniques are thus exploited, leading to a highly efficient and robust numerical method for addressing this type of bioelectromagnetic problem. The basic mathematical formulation of the hybrid technique is presented. This is followed by a discussion of important implementation details, in particular the linear algebra routines for sparse, complex FEM matrices combined with dense MoM matrices. The implementation is validated by comparing results to MoM (surface equivalence principle implementation) and finite-difference time-domain (FDTD) solutions of human exposure problems. A comparison of the computational efficiency of the different techniques is presented. The FEM/MoM implementation is then used for whole-body and critical-organ SAR calculations in a phantom at different positions in the near field of a base-station antenna. This problem cannot, in general, be solved using the MoM or FDTD due to computational limitations. This paper shows that the specific hybrid FEM/MoM implementation is an efficient numerical tool for accurate assessment of human exposure in the near field of base-station antennas.
Third-order aberrations in GRIN crystalline lens: a new method based on axial and field rays.
Río, Arturo Díaz Del; Gómez-Reino, Carlos; Flores-Arias, M Teresa
2015-01-01
This paper presents a new procedure for calculating the third-order aberrations of gradient-index (GRIN) lenses, combining an iterative numerical method with the Hamiltonian theory of aberrations formulated in terms of two paraxial rays with boundary conditions on general curved end surfaces, followed by a second algebraic step. Application of this new method to a GRIN human lens is analyzed in the framework of the bi-elliptical model. The different third-order aberrations are determined, except those that require skew rays for their calculation, because the study is made only for meridional rays.
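The paraxial rays underlying such aberration theory satisfy a simple ray equation in the GRIN medium. For a parabolic index profile n(r) = n0(1 - A r^2/2), the paraxial ray obeys d^2r/dz^2 = -A r, so an axis-parallel ray oscillates as r0 cos(sqrt(A) z). The sketch below integrates this equation numerically; the profile and parameters are illustrative, not the paper's bi-elliptical model.

```python
import math

def trace_paraxial_ray(r0, slope0, A, z_end, dz=1e-4):
    """Integrate the paraxial ray equation d^2r/dz^2 = -A r for a
    parabolic-index GRIN medium with a semi-implicit Euler step;
    returns the ray height at z_end."""
    r, u = r0, slope0
    steps = int(z_end / dz)
    for _ in range(steps):
        u -= A * r * dz
        r += u * dz
    return r

# a ray launched parallel to the axis follows r0 * cos(sqrt(A) z)
A = 0.25
r = trace_paraxial_ray(1.0, 0.0, A, z_end=2.0)
expected = math.cos(math.sqrt(A) * 2.0)
```

Third-order aberration coefficients are then built from pairs of such paraxial solutions (an axial ray and a field ray), which is the role the two rays play in the abstract.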
NASA Astrophysics Data System (ADS)
Kitauchi, H.; Nozaki, K.; Ito, H.; Kondo, T.; Tsuchiya, S.; Imamura, K.; Nagatsuma, T.; Ishii, M.
2014-12-01
We present our recent efforts to evaluate the numerical prediction method of electric field strength for ionospheric propagation of low frequency (LF) radio waves based on the wave-hop propagation theory described in Section 2.4 of Recommendation ITU-R P.684-6 (2012), "Prediction of field strength at frequencies below about 150 kHz," issued by the International Telecommunication Union Radiocommunication Sector (ITU-R). As part of the Japanese Antarctic Research Expedition (JARE), we conduct on-board measurements of the electric field strengths and phases of LF 40 kHz and 60 kHz radio signals (call sign JJY) continuously along both legs of the voyage between Tokyo, Japan, and Syowa Station, the Japanese Antarctic station at 69° 00' S, 39° 35' E on East Ongul Island, Lützow-Holm Bay, East Antarctica. The measurements are made by a newly developed, highly sensitive receiving system, comprising an orthogonally crossed double-loop antenna and digital-signal-processing lock-in amplifiers, installed on board the Japanese Antarctic research vessel (RV) Shirase. Using this system during the 55th JARE from November 2013 to April 2014, we obtained new data sets of the electric field strength for propagation of the JJY 40 kHz and 60 kHz radio waves out to approximately 13,000-14,000 km. Comparisons between these on-board measurements and the numerical predictions of field strength based on the wave-hop theory of Recommendation ITU-R P.684-6 (2012) show that our results qualitatively support the recommended theory for great-circle paths of approximately 7,000-8,000 km and 13,000-14,000 km.
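As a quick sanity check on the propagation distances quoted above, the Tokyo-Syowa great-circle separation can be computed with the haversine formula. The Syowa coordinates come from the abstract; the "Tokyo" coordinates and the mean Earth radius are approximate assumptions:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance in km between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

tokyo = (35.68, 139.77)   # approximate Tokyo coordinates (assumption)
syowa = (-69.00, 39.58)   # 69 deg 00' S, 39 deg 35' E (from the abstract)
d = great_circle_km(*tokyo, *syowa)
```

The result falls in the 13,000-14,000 km range cited for the longest paths, consistent with the abstract.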
Optimization methods in control of electromagnetic fields
NASA Astrophysics Data System (ADS)
Angell, Thomas S.; Kleinman, Ralph E.
1991-05-01
This program is developing constructive methods for certain constrained optimization problems arising in the design and control of electromagnetic fields and in the identification of scattering objects. The problems addressed fall into three categories: (1) the design of antennas with optimal radiation characteristics measured in terms of directivity; (2) the control of the electromagnetic scattering characteristics of an object, in particular the minimization of its radar cross section, by the choice of material properties; and (3) the determination of the shape of scattering objects with various electromagnetic properties from scattered field data. The main thrust of the program is toward the development of constructive methods based on the use of complete families of solutions of the time-harmonic Maxwell equations in the infinite domain exterior to the radiating or scattering body. During the course of the work an increasing amount of attention has been devoted to the use of iterative methods for the solution of various direct and inverse problems. The continued investigation and development of these methods and their application in parameter identification has become a significant part of the program.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
According to the requirements of the increasing development of optical transmission systems, a novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method effectively avoids the girth-4 phenomenon and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of the code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3780,3540) code with a code rate of 93.7% constructed by the proposed method is excellent: its net coding gain is respectively 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher than those of the QC-LDPC(5334,4962) code constructed by the method based on the inverse element characteristics of the finite field multiplicative group, the SCG-LDPC(3969,3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640,30592) code in ITU-T G.975.1, and the classic RS(255,239) code widely used in optical transmission systems in ITU-T G.975, at a bit error rate (BER) of 10⁻⁷. Therefore, the constructed QC-LDPC(3780,3540) code is more suitable for optical transmission systems.
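As an illustration of the general idea (not the paper's exact construction), a Tanner-style QC-LDPC shift matrix can be built from two multiplicative subgroups of GF(p) and screened for girth-4 cycles. GF(31) with subgroup generators 5 (order 3) and 2 (order 5) is a classic small example:

```python
def subgroup_shift_matrix(p, a, b, m, n):
    """Shift (exponent) matrix s[i][j] = a^i * b^j mod p, where a and b
    generate multiplicative subgroups of GF(p) of orders >= m and n."""
    return [[pow(a, i, p) * pow(b, j, p) % p for j in range(n)] for i in range(m)]

def girth4_free(S, L):
    """A QC-LDPC code with circulant size L avoids length-4 cycles iff
    S[i1][j1] - S[i1][j2] + S[i2][j2] - S[i2][j1] != 0 (mod L)
    for every 2x2 submatrix of the shift matrix S."""
    m, n = len(S), len(S[0])
    for i1 in range(m):
        for i2 in range(i1 + 1, m):
            for j1 in range(n):
                for j2 in range(j1 + 1, n):
                    if (S[i1][j1] - S[i1][j2] + S[i2][j2] - S[i2][j1]) % L == 0:
                        return False
    return True

S = subgroup_shift_matrix(31, 5, 2, 3, 5)   # 3x5 base matrix over GF(31)
```

Because the 4-cycle sum factors as (a^i1 - a^i2)(b^j1 - b^j2) mod p, which is never zero in a field, this family is girth-4-free by design; the check above confirms it programmatically.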
Jiang, Su; Liu, Ya-Feng; Wang, Xiao-Min; Liu, Ke-Fei; Zhang, Ding-Hong; Li, Yi-Ding; Yu, Ai-Ping; Zhang, Xiao-Hui; Zhang, Jia-Yi; Xu, Jian-Guang; Gu, Yu-Dong; Xu, Wen-Dong; Zeng, Shao-Qun
2016-01-01
We introduce a more flexible optogenetics-based mapping system attached on a stereo microscope, which offers automatic light stimulation to individual regions of interest in the cortex that expresses light-activated channelrhodopsin-2 in vivo. Combining simultaneous recording of electromyography from specific forelimb muscles, we demonstrate that this system offers much better efficiency and precision in mapping distinct domains for controlling limb muscles in the mouse motor cortex. Furthermore, the compact and modular design of the system also yields a simple and flexible implementation to different commercial stereo microscopes, and thus could be widely used among laboratories. PMID:27699114
NASA Astrophysics Data System (ADS)
Cottura, M.; Appolaire, B.; Finel, A.; Le Bouar, Y.
2016-09-01
A phase field model is coupled to strain gradient crystal plasticity based on dislocation densities. The resulting model includes anisotropic plasticity and the size dependence of plastic activity, required when plasticity is confined to regions below a few microns in size. These two features are important for handling microstructure evolutions during diffusive phase transformations that involve plastic deformation in confined areas, such as Ni-based superalloys undergoing rafting. The model also uses a storage-recovery law for the evolution of the dislocation density of each glide system and a hardening matrix to account for the short-range interactions between dislocations. First, it is shown that the unstable modes during the morphological destabilization of a growing misfitting circular precipitate are selected by the anisotropy of plasticity. Then, the rafting of γ′ precipitates in a Ni-based superalloy is investigated during [100] creep loadings. Our model includes most of the important physical phenomena at play during the microstructure evolution, such as the presence of different crystallographic γ′ variants, their misfit with the γ matrix, the elastic inhomogeneity and anisotropy, and the hardening, anisotropy and viscosity of plasticity. In agreement with experiments, the model predicts that rafting proceeds perpendicularly to the tensile loading axis, and it is shown that plasticity significantly slows down the evolution of the rafts.
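The storage-recovery law mentioned above is typically of Kocks-Mecking type, dρ/dt = (k₁√ρ - k₂ρ)|γ̇|, which saturates at ρ = (k₁/k₂)². The sketch below integrates one such law for a single glide system with illustrative constants, not the paper's calibrated values:

```python
def evolve_dislocation_density(rho0, k1, k2, gamma_rate, dt, steps):
    """Forward-Euler integration of a Kocks-Mecking storage-recovery law:
    d(rho)/dt = (k1*sqrt(rho) - k2*rho) * |gamma_rate|.
    k1 drives dislocation storage, k2 dynamic recovery (illustrative constants)."""
    rho = rho0
    history = [rho]
    for _ in range(steps):
        rho += (k1 * rho ** 0.5 - k2 * rho) * abs(gamma_rate) * dt
        history.append(rho)
    return history

# The density saturates at (k1/k2)**2 regardless of the initial value.
hist = evolve_dislocation_density(rho0=1e12, k1=3e8, k2=10.0,
                                  gamma_rate=1e-3, dt=1.0, steps=20000)
```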
NASA Astrophysics Data System (ADS)
Vergallo, P.; Lay-Ekuakille, A.
2013-08-01
Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity; this is especially important when a seizure occurs and must be identified. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at the different electrodes, is more difficult because it may not have a unique solution, or the search for the solution is hindered by a low spatial resolution that may not allow distinguishing between activities involving sources close to each other. Thus, sources of interest may be obscured or go undetected, and a well-known source localization method such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information about the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov regularization, which calculates a solution that is the best compromise between two cost functions to minimize, one related to the fitting of the data and another concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. Relatively to the model considered for the head and brain sources, the result obtained allows to
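The Tikhonov step described above amounts to penalized least squares, min ||Ax - b||² + λ||x||². A minimal sketch on a toy underdetermined "electrode" problem follows; all dimensions, the noise level and the λ value are illustrative, not the paper's:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares: minimize ||A x - b||^2 + lam * ||x||^2,
    solved via the normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy ill-posed problem: fewer sensors than candidate sources, noisy data.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))                # 8 "electrodes", 20 candidate sources
x_true = np.zeros(20)
x_true[[3, 11]] = 1.0                           # sparse source activity
b = A @ x_true + 0.01 * rng.standard_normal(8)
x_hat = tikhonov_solve(A, b, lam=0.1)
```

In practice λ trades data fit against solution size, exactly the compromise between the two cost functions described in the abstract.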
NASA Astrophysics Data System (ADS)
Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang
2015-10-01
GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient-specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information between the calculated dose image and the film grayscale image to create a dose-versus-pixel-value calibration model. This model was used to calibrate the film grayscale image into a film relative dose image. The dose agreement between the calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested. Moreover, three MLC gap errors and two MLC transmission errors were introduced into the eight RapidArc cases to test the robustness of the method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to obtain a good 2D relative dose calibration result. The mean gamma passing rate of the eight patients was 97.90% ± 1.7%, which showed good dose consistency between the calculated and film dose images. In the error test, the PBC method could over-calibrate the film, meaning that some dose error in the film would be falsely corrected to keep the dose in the film consistent with the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative of the dose calibration curve would be non-monotonic, which would expose the dose abnormality. By using the PBC method, we extended the application of the more economical RTQA2 film to patient-specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the
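The robustness check proposed above, flagging cases where the derivative of the dose calibration curve loses monotonicity, can be sketched with an ordinary polynomial fit. The data, polynomial degree and tolerance below are synthetic and illustrative:

```python
import numpy as np

def fit_calibration(pixel, dose, deg=3):
    """Fit a dose-versus-pixel-value calibration polynomial (PBC-style sketch)."""
    return np.polyfit(pixel, dose, deg)        # highest-power-first coefficients

def derivative_is_monotonic(coeffs, lo, hi, n=200):
    """True if the calibration curve's first derivative is monotonic on [lo, hi];
    a non-monotonic derivative is the abstract's flag for a falsely corrected error."""
    d = np.polyval(np.polyder(coeffs), np.linspace(lo, hi, n))
    steps = np.diff(d)
    return bool(np.all(steps >= -1e-9) or np.all(steps <= 1e-9))

pv = np.linspace(0.2, 0.8, 10)                     # normalized pixel values
normal = fit_calibration(pv, 2.0 * pv**2 + pv)     # derivative 4x + 1: monotonic
flagged = fit_calibration(pv, (pv - 0.5)**3)       # derivative dips, then rises
```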
Low field SQUID MRI devices, components and methods
NASA Technical Reports Server (NTRS)
Penanen, Konstantin I. (Inventor); Eom, Byeong H (Inventor); Hahn, Inseob (Inventor)
2010-01-01
Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.
Harris, W; Zhang, Y; Ren, L; Yin, F
2014-06-01
Purpose: To investigate the feasibility of using nanoparticle markers to validate liver tumor motion together with a deformation field map-based four-dimensional (4D) cone-beam computed tomography (CBCT) reconstruction method. Methods: A technique for lung 4D-CBCT reconstruction has previously been developed using a deformation field map (DFM)-based strategy. In this method, each phase of the 4D-CBCT is considered a deformation of a prior CT volume. The DFM is solved by a motion modeling and free-form deformation (MM-FD) technique, using a data fidelity constraint and deformation energy minimization. For liver imaging, a liver tumor has low contrast in on-board projections. A validation of liver tumor motion using implanted gold nanoparticles, along with the MM-FD deformation technique, is implemented to reconstruct on-board 4D-CBCT liver radiotherapy images. These nanoparticles were placed around the liver tumor to reflect the tumor positions in both CT simulation and on-board image acquisition. When reconstructing each phase of the 4D-CBCT, the migrations of the gold nanoparticles act as a constraint to regularize the deformation field, along with the data fidelity and energy minimization constraints. In this study, multiple tumor diameters and positions were simulated within the liver for on-board 4D-CBCT imaging. The on-board 4D-CBCT reconstructed by the proposed method was compared with the “ground truth” image. Results: The preliminary data, based on reconstruction for lung radiotherapy, suggest that the advanced reconstruction algorithm including the gold nanoparticle constraint will result in volume percentage differences (VPD) between lesions in images reconstructed by MM-FD and “ground truth” on-board images of 11.5% (± 9.4%) and a center-of-mass shift of 1.3 mm (± 1.3 mm) for liver radiotherapy. Conclusion: The advanced MM-FD technique, enforcing the additional constraints from gold nanoparticles, results in improved accuracy
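The two agreement metrics quoted in the Results, volume percentage difference (VPD) and center-of-mass shift, can be computed for binary lesion masks as follows. This is an illustrative implementation, not the authors' code, and assumes isotropic voxels:

```python
import numpy as np

def volume_percent_difference(mask_a, mask_b):
    """Volume percentage difference (VPD) between two binary lesion masks,
    relative to the reference mask_b."""
    va, vb = mask_a.sum(), mask_b.sum()
    return abs(int(va) - int(vb)) / int(vb) * 100.0

def center_of_mass_shift(mask_a, mask_b, voxel_mm=1.0):
    """Euclidean distance (mm) between the centers of mass of two binary masks."""
    ca = np.array(np.nonzero(mask_a)).mean(axis=1)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1)
    return float(np.linalg.norm(ca - cb) * voxel_mm)

# Two equal-volume cubic "lesions" offset by 2 voxels along the first axis.
a = np.zeros((12, 12, 12), dtype=bool); a[4:8, 4:8, 4:8] = True
b = np.zeros((12, 12, 12), dtype=bool); b[6:10, 4:8, 4:8] = True
vpd = volume_percent_difference(a, b)
shift = center_of_mass_shift(a, b)
```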
NASA Astrophysics Data System (ADS)
Zaccheo, T. S.; Pernini, T.; Botos, C.; Dobler, J. T.; Blume, N.
2015-12-01
The Greenhouse gas Laser Imaging Tomography Experiment (GreenLITE) combines real-time differential laser absorption spectroscopy (LAS) measurements with a lightweight web-based data acquisition and product generation system to provide autonomous 24/7 monitoring of CO2. The current GreenLITE system consists of two transceivers and a series of retro-reflectors that continuously measure the differential transmission over a user-defined set of intersecting line-of-sight paths or "chords" that form the plane of interest. These observations are first combined with in situ surface measurements of temperature (T), pressure (P) and relative humidity (RH) to compute the integrated CO2 mixing ratios based on an iterative radiative transfer modeling approach. The retrieved CO2 mixing ratios are then grouped based on observation time and employed in a sparse sample reconstruction method to provide a tomographic-like representation of the 2-D distribution of CO2 over the field of interest. This reconstruction technique defines the field of interest as a set of idealized plumes whose integrated values best match the observations. The GreenLITE system has been deployed at two primary locations: (1) the Zero Emissions Research and Technology (ZERT) center in Bozeman, Montana, in Aug-Sept 2014, where more than 200 hours of data were collected over a wide range of environmental conditions while utilizing a controlled release of CO2 into a segmented underground pipe, and (2) continuously at a carbon sequestration test facility in Feb-Aug 2015. The system demonstrated the ability to identify persistent CO2 sources at the ZERT test facility and showed strong correlation with an independent measurement using a LI-COR based system. Here we describe the measurement approach, algorithm design and extended study results.
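The chord-based reconstruction can be sketched as a linear inverse problem: build a design matrix of per-chord averages of candidate Gaussian "plumes," then solve for plume strengths from the chord observations. The geometry, plume widths and locations below are invented for illustration:

```python
import numpy as np

def chord_design_matrix(chords, centers, sigma=5.0, samples=50):
    """G[k, p]: mean concentration a unit-strength Gaussian plume p contributes
    along chord k, where each chord is ((x0, y0), (x1, y1)) in field coordinates."""
    G = np.zeros((len(chords), len(centers)))
    t = np.linspace(0.0, 1.0, samples)
    for k, ((x0, y0), (x1, y1)) in enumerate(chords):
        xs, ys = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        for p, (cx, cy) in enumerate(centers):
            G[k, p] = np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * sigma**2)).mean()
    return G

# Toy field: 4 intersecting chords over a 20 m x 20 m plane, 2 candidate plumes.
chords = [((0, 0), (20, 20)), ((0, 20), (20, 0)),
          ((0, 10), (20, 10)), ((10, 0), (10, 20))]
centers = [(5.0, 5.0), (15.0, 12.0)]
G = chord_design_matrix(chords, centers)
strength_true = np.array([2.0, 0.5])
obs = G @ strength_true                              # noiseless chord observations
strength_hat, *_ = np.linalg.lstsq(G, obs, rcond=None)
```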
Stothard, J R; Pleasant, J; Oguttu, D; Adriko, M; Galimaka, R; Ruggiana, A; Kazibwe, F; Kabatereine, N B
2008-09-01
To ascertain the current status of strongyloidiasis in mothers and their preschool children, a field-based survey was conducted in western Uganda using a combination of diagnostic methods: ELISA, Baermann concentration and Koga agar plate. The prevalences of other soil-transmitted helminthiases and intestinal schistosomiasis were also determined. In total, 158 mothers and 143 children were examined from five villages within Kabale, Hoima and Masindi districts. In mothers and children, the general prevalence of strongyloidiasis inferred by ELISA was approximately 4% and approximately 2%, respectively. Using the Baermann concentration method, two parasitologically proven cases were encountered in an unrelated mother and child, both of whom were sero-negative for strongyloidiasis. No infections were detected by the Koga agar plate method. The general level of awareness of strongyloidiasis was very poor (<5%) in comparison to schistosomiasis (51%) and ascariasis (36%). Strongyloidiasis is presently at levels insufficient to justify inclusion within a community treatment programme targeting maternal and child health. Better epidemiological screening is needed, however, especially for identifying infections in HIV-positive women of childbearing age. In the rural clinic setting, further use of the Baermann concentration method would appear to be the most immediate and pragmatic option for disease diagnosis.
Pla, Maria; La Paz, José-Luis; Peñas, Gisela; García, Nora; Palaudelmàs, Montserrat; Esteve, Teresa; Messeguer, Joaquima; Melé, Enric
2006-04-01
Maize is one of the main crops worldwide, and an increasing number of genetically modified (GM) maize varieties are cultivated and commercialized in many countries in parallel to conventional crops. Given the labeling rules established e.g. in the European Union and the necessary coexistence between GM and non-GM crops, it is important to determine the extent of pollen dissemination from transgenic maize to other cultivars under field conditions. The most widely used methods for quantitative detection of GMO are based on real-time PCR, which implies the results are expressed in genome percentages (in contrast to seed or grain percentages). Our objective was to assess the accuracy of real-time PCR based assays in quantifying the content of transgenic grains in non-GM fields, in comparison with the real cross-fertilization rate as determined by phenotypic analysis. We performed this study in a region where both GM and conventional maize are normally cultivated, using the predominant transgenic maize Mon810 in combination with a conventional maize variety that displays white grains (therefore allowing cross-pollination to be quantified as the percentage of yellow grains). Our results indicated an excellent correlation between real-time PCR results and the number of cross-fertilized grains at Mon810 levels of 0.1-10%. In contrast, the Mon810 percentage estimated by weight of grains produced less accurate results. Finally, we present and discuss the pattern of pollen-mediated gene flow from GM to conventional maize in an example case under field conditions.
Historic Methods for Capturing Magnetic Field Images
ERIC Educational Resources Information Center
Kwan, Alistair
2016-01-01
I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…
Historic Methods for Capturing Magnetic Field Images
NASA Astrophysics Data System (ADS)
Kwan, Alistair
2016-03-01
I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection processes.
Bae, Il Kwon; Kim, Juwon; Sun, Je Young Hannah; Jeong, Seok Hoon; Kim, Yong-Rok; Wang, Kang-Kyun; Lee, Kyungwon
2014-01-01
Background & objectives: PFGE, rep-PCR, and MLST are widely used to identify related bacterial isolates and determine epidemiologic associations during outbreaks. This study was performed to compare the ability of repetitive sequence-based PCR (rep-PCR) and pulsed-field gel electrophoresis (PFGE) to determine the genetic relationships among Escherichia coli isolates assigned to various sequence types (STs) by two multilocus sequence typing (MLST) schemes. Methods: A total of 41 extended-spectrum β-lactamase- (ESBL-) and/or AmpC β-lactamase-producing E. coli clinical isolates were included in this study. MLST experiments were performed following Achtman's MLST scheme and Whittam's MLST scheme, respectively. Rep-PCR experiments were performed using the DiversiLab system, and PFGE experiments were also performed. Results: A comparison of the two MLST methods demonstrated that the schemes yielded compatible results. PFGE correctly segregated E. coli isolates belonging to different STs as different types, but did not group E. coli isolates belonging to the same ST in the same group. Rep-PCR accurately grouped E. coli isolates belonging to the same ST together, but demonstrated limited ability to discriminate between E. coli isolates belonging to different STs. Interpretation & conclusions: These results suggest that PFGE would be more effective when investigating outbreaks in a limited space, such as a specialty hospital or an intensive care unit, whereas rep-PCR should be used for nationwide or worldwide epidemiology studies. PMID:25579152
Human Biology, A Guide to Field Methods.
ERIC Educational Resources Information Center
Weiner, J. S.; Lourie, J. A.
The aim of this handbook is to provide, in a form suitable for use in the field, instructions on the whole range of methods required for the fulfillment of human biological studies on a comparative basis. Certain of these methods can be used to carry out the rapid surveys on growth, physique, and genetic constitution. They are also appropriate for…
Soil Identification using Field Electrical Resistivity Method
NASA Astrophysics Data System (ADS)
Hazreek, Z. A. M.; Rosli, S.; Chitral, W. D.; Fauziah, A.; Azhar, A. T. S.; Aziman, M.; Ismail, B.
2015-06-01
Geotechnical site investigation, with particular reference to soil identification, is important in civil engineering works since it reports the soil conditions needed to relate the design and construction of the proposed works. In the past, the electrical resistivity method (ERM) has been widely used in soil characterization, but its results and interpretations have suffered from several "black box" uncertainties. Hence, this study performed field electrical resistivity measurements using an ABEM SAS 4000 at two different types of soil (Gravelly SAND and Silty SAND) in order to relate the behavior of electrical resistivity values (ERV) to the soil types studied. Basic physical properties of the soils were determined through density (ρ), moisture content (w) and particle size distribution (d) in order to verify the ERV obtained from each soil type investigated. It was found that the ERV of Gravelly SAND (278 Ωm and 285 Ωm) was slightly higher than that of Silty SAND (223 Ωm and 199 Ωm) due to the uncertain nature of soils. This finding shows that results obtained from ERM need to be interpreted with strong supporting evidence, such as direct laboratory test data on the soils. Furthermore, this study demonstrates that ERM can be established as an alternative tool in soil identification provided it is verified through other relevant information, such as geotechnical properties.
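For context, the apparent resistivity reported by a field resistivity meter depends on the electrode geometry. For a Wenner array (one common configuration, though the abstract does not state which array was used) it is ρₐ = 2πaR, with a the electrode spacing and R the measured resistance:

```python
import math

def wenner_apparent_resistivity(spacing_m, resistance_ohm):
    """Apparent resistivity (ohm-m) for a Wenner electrode array:
    rho_a = 2 * pi * a * R, with spacing a (m) and measured resistance R (ohm)."""
    return 2.0 * math.pi * spacing_m * resistance_ohm

# Hypothetical reading: a 5 m spacing and 8.85 ohm gives a value in the
# Gravelly SAND range quoted above (~278 ohm-m).
rho = wenner_apparent_resistivity(5.0, 8.85)
```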
Method for making field-structured memory materials
Martin, James E.; Anderson, Robert A.; Tigges, Chris P.
2002-01-01
A method of forming a dual-level memory material using field-structured materials. The field-structured materials are formed from a dispersion of ferromagnetic particles in a polymerizable liquid medium, such as a urethane acrylate-based photopolymer, which is applied as a film to a support and then exposed in selected portions of the film to an applied magnetic or electric field. The field can be applied either uniaxially or biaxially at field strengths up to 150 G or higher to form the field-structured materials. After polymerizing the field-structured materials, a magnetic field can be applied to selected portions of the polymerized field-structured material to yield a dual-level memory material on the support, wherein the dual-level memory material supports read-and-write binary data memory and write-once, read-many memory.
Field-theory methods in coagulation theory
Lushnikov, A. A.
2011-08-15
Coagulating systems are systems of chaotically moving particles that collide and coalesce, producing daughter particles of mass equal to the sum of the masses involved in the respective collision event. The present article puts forth basic ideas underlying the application of methods of quantum field theory to the theory of coagulating systems. Instead of the generally accepted treatment based on the use of a standard kinetic equation that describes the time evolution of concentrations of particles consisting of a preset number of identical objects (monomers in the following), one introduces the probability W(Q, t) to find the system in some state Q at an instant t for a specific rate of transitions between various states. Each state Q is characterized by a set of occupation numbers Q = (n_1, n_2, ..., n_g, ...), where n_g is the total number of particles containing precisely g monomers. Thereupon, one introduces the generating functional Ψ for the probability W(Q, t). The time evolution of Ψ is described by an equation that is similar to the Schrödinger equation for a one-dimensional Bose field. This equation is solved exactly for transition rates proportional to the product of the masses of colliding particles. It is shown that, within a finite time interval, which is independent of the total mass of the entire system, a giant particle of mass on the order of the mass of the entire system may appear. The particle in question is unobservable in the thermodynamic limit, and this explains the well-known paradox of mass-concentration nonconservation in classical kinetic theory. The theory described in the present article is successfully applied in studying the time evolution of random graphs.
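The mass-nonconservation phenomenon discussed above can be reproduced numerically from the standard Smoluchowski kinetic equation with the multiplicative kernel K(i, j) = i·j. In the sketch below, a minimal Euler integration truncated at a finite maximum mass with illustrative step sizes, the tracked sol mass stays near 1 before the gel time t_c = 1 and then visibly decays as mass flows to the untracked "giant particle" sizes:

```python
def smoluchowski_multiplicative(gmax=60, dt=0.002, t_end=2.0):
    """Euler integration of the Smoluchowski equations with kernel K(i, j) = i*j,
    truncated at mass gmax, from a monodisperse start n_1 = 1.
    Returns the sol mass sum(g * n_g) after each time step."""
    n = [0.0] * (gmax + 1)
    n[1] = 1.0
    mass = []
    for _ in range(int(t_end / dt)):
        dn = [0.0] * (gmax + 1)
        total_jnj = sum(j * n[j] for j in range(1, gmax + 1))
        for g in range(1, gmax + 1):            # loss: g coagulates with any j
            dn[g] -= g * n[g] * total_jnj
        for i in range(1, gmax):                # gain: i + j -> g (tracked sizes only)
            for j in range(1, gmax + 1 - i):
                dn[i + j] += 0.5 * (i * j) * n[i] * n[j]
        for g in range(1, gmax + 1):
            n[g] += dt * dn[g]
        mass.append(sum(g * n[g] for g in range(1, gmax + 1)))
    return mass

mass = smoluchowski_multiplicative()
```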
Constantinou, Marios; Stolojan, Vlad; Rajeev, Kiron Prabha; Hinder, Steven; Fisher, Brett; Bogart, Timothy D; Korgel, Brian A; Shkunov, Maxim
2015-10-14
In this letter, we demonstrate a solution-based method for one-step deposition and surface passivation of as-grown silicon nanowires (Si NWs). Using N,N-dimethylformamide (DMF) as a mild oxidizing agent, the NW surface trap density was reduced by over two orders of magnitude, from 1×10¹³ cm⁻² in pristine NWs to 3.7×10¹⁰ cm⁻² in DMF-treated NWs, leading to a dramatic hysteresis reduction in NW field-effect transistors (FETs) from up to 32 V to near-zero hysteresis. The change in the stoichiometric composition of the polyphenylsilane NW shell was confirmed by X-ray photoelectron spectroscopy analysis, showing a 35% increase in fully oxidized Si⁴⁺ species for DMF-treated NWs compared to dry NW powder. Additionally, the shell oxidation effect induced by DMF resulted in more stable NW FET performance, with steady transistor currents and only 1.5 V hysteresis after 1000 h of air exposure.
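The qualitative link between gate-voltage hysteresis and trap density can be sketched with the usual parallel-plate estimate N_t ≈ C·ΔV/q (one elementary charge per trap). The gate capacitance below is an assumed illustrative value, not from the letter, so the numbers differ from the authors' extraction (1×10¹³ → 3.7×10¹⁰ cm⁻²); only the orders-of-magnitude trend matters:

```python
Q_E = 1.602e-19   # elementary charge (C)

def trap_density_cm2(gate_capacitance_f_per_cm2, hysteresis_v):
    """Areal density (cm^-2) of traps charged/discharged over a gate-voltage
    hysteresis window dV, assuming one elementary charge per trap."""
    return gate_capacitance_f_per_cm2 * hysteresis_v / Q_E

# Assumed back-gate oxide capacitance of 1.2e-8 F/cm^2 (hypothetical value):
# the 32 V and 1.5 V hysteresis windows quoted above map to roughly
# 2.4e12 and 1.1e11 traps/cm^2, a ~20x reduction.
n_pristine = trap_density_cm2(1.2e-8, 32.0)
n_treated = trap_density_cm2(1.2e-8, 1.5)
```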
NASA Astrophysics Data System (ADS)
Ferreira, Vagner G.; Montecino, Henry D. C.; Yakubu, Caleb I.; Heck, Bernhard
2016-01-01
Currently, various satellite processing centers produce extensive data, with different solutions of the same field being available. For instance, the Gravity Recovery and Climate Experiment (GRACE) has been monitoring terrestrial water storage (TWS) since April 2002, while the Center for Space Research (CSR), the Jet Propulsion Laboratory (JPL), the GeoForschungsZentrum (GFZ), and the Groupe de Recherche de Géodésie Spatiale (GRGS) provide individual monthly solutions in the form of Stokes coefficients. The inverted TWS maps (or the regionally averaged values) from these coefficients are being used in many applications; however, as no ground truth data exist, the uncertainties are unknown. Consequently, the purpose of this work is to assess the quality of each processing center by estimating their uncertainties using a generalized formulation of the three-cornered hat (TCH) method. Overall, the TCH results for the study period of August 2002 to June 2014 indicate that at a global scale, the CSR, GFZ, GRGS, and JPL presented uncertainties of 9.4, 13.7, 14.8, and 13.2 mm, respectively. At a basin scale, the overall good performance of the CSR was observed at 91 river basins. The TCH-based results were confirmed by a comparison with an ensemble solution from the four GRACE processing centers.
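The classic three-series version of the three-cornered hat (the study uses a generalization to four GRACE processing centers) recovers each series' error variance from the variances of pairwise differences, assuming mutually uncorrelated errors. A synthetic sketch with a common "TWS" signal and known noise levels:

```python
import numpy as np

def three_cornered_hat(x1, x2, x3):
    """Classic TCH: error variances of three series observing the same signal,
    from pairwise difference variances (errors assumed uncorrelated)."""
    v12 = np.var(x1 - x2)
    v13 = np.var(x1 - x3)
    v23 = np.var(x2 - x3)
    return (0.5 * (v12 + v13 - v23),
            0.5 * (v12 + v23 - v13),
            0.5 * (v13 + v23 - v12))

rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0, 20, 100_000))          # common signal, cancels in differences
x1 = signal + rng.normal(0, 1.0, signal.size)
x2 = signal + rng.normal(0, 2.0, signal.size)
x3 = signal + rng.normal(0, 3.0, signal.size)
s1, s2, s3 = three_cornered_hat(x1, x2, x3)
```

The recovered standard deviations approximate the injected 1.0, 2.0 and 3.0 noise levels, which is exactly how per-center uncertainties can be estimated without ground truth.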
ERIC Educational Resources Information Center
Napier, John D.; Vansickle, Ronald L.
1978-01-01
Comparison of pre-service social studies teachers in field and non-field based methods courses indicated no significant differences with regard to teaching skills, attitudes, or behaviors teachers should exhibit in the classroom. (Author/DB)
Got Mud? Field-based Learning in Wetland Ecology.
ERIC Educational Resources Information Center
Baldwin, Andrew H.
2001-01-01
Describes methods for teaching wetland ecology classes based mainly on direct, hands-on field experiences for students. Makes the case that classroom lectures are necessary but there is no substitute for field and laboratory experiences. (Author/MM)
Field emission from graphene based composite thin films
NASA Astrophysics Data System (ADS)
Eda, Goki; Emrah Unalan, H.; Rupesinghe, Nalin; Amaratunga, Gehan A. J.; Chhowalla, Manish
2008-12-01
Field emission from graphene is challenging because existing deposition methods lead to sheets that lie flat on the substrate surface, which limits the field enhancement. Here we describe a simple and general solution-based method for the deposition of field-emitting graphene/polymer composite thin films. The graphene sheets are oriented at various angles with respect to the substrate surface, leading to field emission at low threshold fields (~4 V μm^-1). Our method provides a route for the deposition of graphene-based thin-film field emitters on different substrates, opening up avenues for a variety of applications.
Electric Field Quantitative Measurement System and Method
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2016-01-01
A method and system are provided for making a quantitative measurement of an electric field. A plurality of antennas separated from one another by known distances are arrayed in a region that extends in at least one dimension. A voltage difference between at least one selected pair of antennas is measured. Each voltage difference is divided by the known distance associated with the selected pair of antennas corresponding thereto to generate a resulting quantity. The plurality of resulting quantities defined over the region quantitatively describe an electric field therein.
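The measurement principle reduces to dividing each antenna-pair voltage difference by the pair's known separation; a minimal sketch (antenna positions and readings are hypothetical, and the sign convention E = -dV/dx is an assumption, since the abstract specifies only the quotient):

```python
def field_estimates(positions_m, voltages_v):
    """One electric-field estimate (V/m) per adjacent antenna pair:
    the measured voltage difference divided by the known separation."""
    pairs = zip(zip(positions_m, voltages_v),
                zip(positions_m[1:], voltages_v[1:]))
    return [-(v2 - v1) / (p2 - p1) for (p1, v1), (p2, v2) in pairs]

# four antennas 0.1 m apart in a (hypothetical) uniform 50 V/m field
estimates = field_estimates([0.0, 0.1, 0.2, 0.3],
                            [0.0, -5.0, -10.0, -15.0])
```

The plurality of such quotients over the array is what quantitatively describes the field across the region.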
A New Method for Coronal Magnetic Field Reconstruction
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwangson; Lim, Daye
2015-08-01
We present a new, simple, variational method for the reconstruction of coronal force-free magnetic fields based on vector magnetogram data. Our method employs vector potentials for the magnetic field description in order to ensure the divergence-free condition. As boundary conditions, it requires only the normal components of the magnetic field and current density, so that the boundary conditions are not over-specified as in many other methods. The boundary normal current distribution is fixed once and for all at the outset and does not need continual adjustment as in stress-and-relax type methods. We have tested the computational code based on our new method on problems with known solutions and on actual photospheric data. When solutions are fully given at all boundaries, the accuracy of our method is almost comparable to the best-performing methods available. When magnetic field data are given only at the photospheric boundary, our method outperforms other methods in most “figures of merit” devised by Schrijver et al. (2006). Furthermore, the residual force in the solution is at least an order of magnitude smaller than that of any other method. The method can also accommodate the source-surface boundary condition at the top boundary, and is expected to contribute to the real-time monitoring of the Sun required for future space weather forecasts.
Overlay control methodology comparison: field-by-field and high-order methods
NASA Astrophysics Data System (ADS)
Huang, Chun-Yen; Chiu, Chui-Fu; Wu, Wen-Bin; Shih, Chiang-Lin; Huang, Chin-Chou Kevin; Huang, Healthy; Choi, DongSub; Pierson, Bill; Robinson, John C.
2012-03-01
Overlay control in advanced integrated circuit (IC) manufacturing is becoming one of the leading lithographic challenges at the 3x and 2x nm process nodes. Production overlay control can no longer meet the stringent emerging requirements based on linear composite wafer and field models with sampling of 10 to 20 fields and 4 to 5 sites per field, which was the industry standard for many years. Methods that have emerged include overlay metrology in many or all fields, including the high-order field model method called high order control (HOC), and field-by-field control (FxFc) methods, also called correction per exposure. The HOC and FxFc methods were initially introduced as relatively infrequent scanner qualification activities meant to supplement linear production schemes. More recently, however, it has become clear that production control also requires intense sampling and similar high-order and FxFc methods. The added control benefits of high-order and FxFc overlay methods need to be balanced against the increased metrology requirements, however, without putting material at risk. Of critical importance is the proper control of edge fields, which requires intensive sampling in order to minimize signatures. In this study we compare various methods of overlay control, including the performance levels that can be achieved.
Ding, Cheng; Chen, Tianming; Li, Zhaoxia; Yan, Jinlong
2015-05-01
Using the standardized polyurethane foam unit (PFU) method, a preliminary investigation was carried out on the bioaccumulation and ecotoxic effects of pulp and paper wastewater used to irrigate reed fields. Static ecotoxicity tests showed that protozoal communities were very sensitive to variations in exposure time and effective concentration (EC) of the pulp and paper wastewater. The Shannon-Wiener diversity index (H) was a more suitable indicator of the extent of water pollution than the Gleason and Margalef diversity index (d), Simpson's diversity index (D), and Pielou's index (J). The regression equation between S_eq and EC was S_eq = -0.118EC + 18.554. The relatively safe concentration and the maximum acceptable toxicant concentration (MATC) of the wastewater for the protozoal communities were about 20% and 42%, respectively. To safely use this wastewater for irrigation, more than 58% of the toxins must be removed or diluted by further processing. Monitoring of the wastewater in representative irrigated reed fields showed that the protozoal colonization process followed a pattern similar to that in the static ecotoxicity tests, indicating that the toxicity of the irrigating pulp and paper wastewater was not lethal to protozoal communities in the reed fields. This study demonstrated the applicability of the PFU method for monitoring the ecotoxic effects of pulp and paper wastewater at the microbial community level and may guide the supervision and control of pulp and paper wastewater irrigation within the reed fields ecological system (RFES).
Tahmasebi Birgani, Mohamad J.; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima
2014-04-01
Equivalent field is frequently used for central axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physics-based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables of BJR and Venselaar et al., with average relative errors of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, percentage depth doses (PDDs) were measured for some special irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator at both energies, 6 and 18 MV. The mean relative differences of the measured PDDs for these fields and their equivalent squares were approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field.
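One widely used physics-motivated rule for the equivalent square of a rectangular field is the area-to-perimeter rule, s = 4A/P, which reduces to s = 2ab/(a+b) for an a × b rectangle. A sketch for illustration only, since the paper's exact formulation may differ:

```python
def equivalent_square_side(a_cm, b_cm):
    """Area-to-perimeter rule: the equivalent square has the same
    area-to-perimeter ratio as the a x b rectangle, s = 2ab/(a+b)."""
    return 2.0 * a_cm * b_cm / (a_cm + b_cm)

side = equivalent_square_side(10.0, 20.0)  # about 13.3 cm
```

For a square field the rule returns the side itself, and for elongated rectangles it correctly weights the shorter side more heavily, matching the scatter-dominated behaviour of depth dose.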
A field day of soil regulation methods
NASA Astrophysics Data System (ADS)
Kempter, Axel; Kempter, Carmen
2015-04-01
Soil plays an important role in the school subject of geography. In the upper classes in particular, pupils are expected to apply knowledge about soil in other subjects as well. For example, assessing economic and agricultural development potential requires interweaving physical-geographic and human-geographic factors. Treating the topic of soil requires integrating results from different fields such as physics, chemistry and biology. The topic therefore lends itself to cross-disciplinary lessons and offers opportunities for practical work as well as excursions. Beyond conveying specialist knowledge and supporting methodological and practical competences, independent learning and hands-on work were given special emphasis on the field excursion through stimulating, problem-oriented exercises. This aim was pursued through an interdisciplinary, task-oriented treatment of the topic of soil during the field day. The methods and experiments had to be chosen sensibly within both time and material constraints. During the field day the pupils characterized soil texture, soil colour, soil profile, soil skeleton, lime content, ion exchange (soils as filter materials), pH value, water retention capacity and the presence of different ions such as Fe3+, Mg2+, Cl- and NO3-. The pupils worked at stations and evaluated the data to obtain an overall picture of the soil at the end. Depending on the number of locations, the time available and the group size, different procedures can be used: groups of experts can carry out the same experiment at all locations and then split into different groups for the evaluation, or each group can rotate through all stations. The results were compared and discussed at the end.
Improved methods for fan sound field determination
NASA Technical Reports Server (NTRS)
Cicon, D. E.; Sofrin, T. G.; Mathews, D. C.
1981-01-01
Several methods for determining acoustic mode structure in aircraft turbofan engines using wall microphone data were studied. A method for reducing data was devised and implemented which makes the definition of discrete coherent sound fields measured in the presence of engine speed fluctuation more accurate. For the analytical methods, algorithms were developed to define the dominant circumferential modes from full and partial circumferential arrays of microphones. Axial arrays were explored to define mode structure as a function of cutoff ratio, and the use of data taken at several constant speeds was also evaluated in an attempt to reduce instrumentation requirements. Sensitivities of the various methods to microphone density, array size and measurement error were evaluated and results of these studies showed these new methods to be impractical. The data reduction method used to reduce the effects of engine speed variation consisted of an electronic circuit which windowed the data so that signal enhancement could occur only when the speed was within a narrow range.
Tegze, Gyoergy; Bansel, Gurvinder; Toth, Gyula I.; Pusztai, Tamas; Fan, Zhongyun; Granasy, Laszlo
2009-03-20
We present an efficient method to solve numerically the equations of dissipative dynamics of the binary phase-field crystal model proposed by Elder et al. [K.R. Elder, M. Katakowski, M. Haataja, M. Grant, Phys. Rev. B 75 (2007) 064107], characterized by variable coefficients. Using the operator splitting method, the problem has been decomposed into sub-problems that can be solved more efficiently. A combination of non-trivial splitting with a spectral semi-implicit solution leads to sets of algebraic equations of diagonal matrix form. Extensive testing of the method has been carried out to find the optimum balance among the errors associated with time integration, spatial discretization, and splitting. We show that our method speeds up the computations by orders of magnitude relative to the conventional explicit finite difference scheme, while the cost of the pointwise implicit solution per timestep remains low. We also show that, due to its numerical dissipation, finite differencing cannot compete with spectral differencing in terms of accuracy. In addition, we demonstrate that our method can be efficiently parallelized for distributed memory systems, where excellent scalability with the number of CPUs is observed.
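The key ingredient, treating the stiff linear operator implicitly in Fourier space (where it is diagonal) while keeping the nonlinearity explicit, can be illustrated on a toy 1D equation u_t = u_xx - u^3. This is an analogue of the scheme's structure, not the binary phase-field crystal equations themselves:

```python
import numpy as np

def semi_implicit_step(u, dt, length):
    """One semi-implicit spectral step for u_t = u_xx - u**3 on a
    periodic domain: the nonlinear term is advanced explicitly, the
    linear diffusion is inverted per mode (a diagonal solve)."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=length / u.size)
    u_hat = np.fft.fft(u - dt * u**3)
    u_hat /= 1.0 + dt * k**2          # implicit diffusion, diagonal in k
    return np.real(np.fft.ifft(u_hat))

# a small cos(x) perturbation should decay almost like exp(-t)
length = 2.0 * np.pi
x = np.linspace(0.0, length, 128, endpoint=False)
u = 0.01 * np.cos(x)
for _ in range(100):
    u = semi_implicit_step(u, 0.01, length)
```

Because each Fourier mode is updated independently, the implicit solve costs no more than the FFTs themselves, which is what makes this kind of scheme so much faster and less dissipative than explicit finite differencing.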
Ogura, Toshihiko
2014-08-08
Highlights: • We developed a high-sensitivity frequency transmission electric-field (FTE) system. • The output signal was highly enhanced by applying voltage to a metal layer on SiN. • The spatial resolution of the new FTE method is 41 nm. • The new FTE system enables observation of intact bacteria and viruses in water. - Abstract: The high-resolution structural analysis of biological specimens by scanning electron microscopy (SEM) presents several advantages. Until now, wet bacterial specimens have been examined using atmospheric sample holders. However, images of unstained specimens in water obtained using these holders exhibit very poor contrast and heavy radiation damage. Recently, we developed the frequency transmission electric-field (FTE) method, which facilitates the SEM observation of biological specimens in water without radiation damage. However, its signal detection system has low sensitivity, so a high electron-beam (EB) current is required to generate clear images, which reduces spatial resolution and induces thermal damage in the samples. Here, a high-sensitivity detection system is developed for the FTE method, which enhances the output signal amplitude a hundredfold. The detected signal was highly enhanced when voltage was applied to the metal layer on the silicon nitride thin film. This enhancement reduced the EB current and improved the spatial resolution as well as the signal-to-noise ratio. The spatial resolution of the high-sensitivity FTE system is 41 nm, considerably better than that of the previous FTE system. The new FTE system can easily be used to examine various unstained biological specimens in water, such as living bacteria and viruses.
The virtual fields method applied to spalling tests on concrete
NASA Astrophysics Data System (ADS)
Pierron, F.; Forquin, P.
2012-08-01
For a decade, spalling techniques based on the use of a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain rates ranging from a few tens to two hundred s^-1. However, the processing method, mainly based on the velocity profile measured on the rear free surface of the sample (Novikov formula), remains quite basic, and identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the Virtual Fields Method (VFM). First, a digital high-speed camera is used to record pictures of a grid glued to the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field is defined in the VFM equation to use the acceleration map as an alternative `load cell'. This method, applied to three spalling tests, allowed Young's modulus to be identified during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that, in such a simple inertial test, it is possible to reconstruct average axial stress profiles using only the acceleration data. It was then possible to construct local stress-strain curves and derive a tensile strength value.
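For context, the basic Novikov processing that the VFM approach goes beyond estimates the spall (dynamic tensile) strength from the pullback Δu of the rear-free-surface velocity as σ = ½ ρ c₀ Δu. A minimal sketch with assumed, illustrative material values (not from the paper):

```python
def novikov_spall_strength(rho_kg_m3, c0_m_s, pullback_m_s):
    """Novikov formula: sigma = 0.5 * rho * c0 * delta_u_pullback,
    valid under linear-elastic, one-dimensional wave assumptions."""
    return 0.5 * rho_kg_m3 * c0_m_s * pullback_m_s

# assumed concrete-like values: density 2400 kg/m^3, wave speed 4000 m/s,
# pullback velocity 1.5 m/s
sigma_pa = novikov_spall_strength(2400.0, 4000.0, 1.5)
```

The formula yields a single strength number from one velocity trace, which is exactly why it cannot capture the post-peak softening behaviour that the full-field VFM processing targets.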
Yang, Ching-Been; Chiang, Hsiu-Lu
2013-01-01
This study integrated thermally induced super-resolution into near-field photolithography and conducted simulation and analysis of line segment fabrication. The technique involves passing a laser beam through an aluminum-plated optical fiber probe onto a thin film of indium (approximately 10 nm thick). The indium film opens a melted aperture narrower than the width of the laser beam, creating a melted region and a crystalline region; the difference in penetration rate between the two regions generates the thermally induced super-resolution. This paper proposes a combination of the Taguchi method with gray relational analysis, in which S/N ratios obtained using the Taguchi method are converted into gray relational grades to identify an optimal combination of parameters capable of meeting multiple quality objectives. This optimal combination comprises a probe aperture of 100 nm (A1), an exposure energy per μm of 0.002 nJ/μm (B2), a development time of 60 s (C3), and an indium film thickness of 7 nm (D1). The optimal parameters were (A1B2C3D1) for the gray relational analysis and (A1B1C1D1) for the Taguchi method. Results showed a negative improvement of -14.3% in line width, from 126.2 nm (Taguchi method) to 144.2 nm (gray relational analysis). Working depth, however, showed a significant improvement of 140.4%, from 5.7 nm (Taguchi method) to 13.7 nm (gray relational analysis). The proposed approach resolves the conflicts that commonly occur among factor levels in Taguchi analysis under multiple quality requirements.
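The conversion from responses to S/N ratios and grey relational grades can be sketched generically (ζ = 0.5 is the customary distinguishing coefficient; the function names and data are illustrative, not the paper's):

```python
import math

def sn_smaller_better(ys):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2))."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def grey_relational_grades(table, zeta=0.5):
    """table[i][j] = response j of run i, larger-is-better; each column
    must span a nonzero range. Returns one grey relational grade per run."""
    cols = list(zip(*table))
    norm_cols = [[(v - min(c)) / (max(c) - min(c)) for v in c] for c in cols]
    grades = []
    for run in zip(*norm_cols):                 # back to rows
        # deviation from the ideal (1.0); after min-max normalization the
        # global minimum and maximum deviations are 0 and 1
        coeffs = [zeta / ((1.0 - v) + zeta) for v in run]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

Runs are then ranked by grade, and the factor levels of the best runs give the multi-objective optimum, which is how conflicting single-response S/N rankings are reconciled.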
A new method of field MRTD test
NASA Astrophysics Data System (ADS)
Chen, Zhibin; Song, Yan; Liu, Xianhong; Xiao, Wenjian
2014-09-01
MRTD is an important indicator of the imaging performance of an infrared camera. In traditional laboratory testing, a blackbody is used as the simulated heat source, which is not only expensive and bulky but also ill-suited to the requirements of online, automatic field testing of infrared camera MRTD. To solve this problem, this paper introduces a new MRTD detection device, which uses an LED as the simulated heat source and a zinc sulfide glass plate engraved with a four-bar target as the simulated target. Using a high-temperature-tolerant Cassegrain collimation system, the target is projected to infinity so that it can either be observed by the human eye for a subjective test or captured and processed by image processing for an objective measurement. This method replaces the blackbody with an LED: the color temperature of the LED is calibrated with a thermal imager, thereby establishing the relation curve between the LED temperature-controlling current and the simulated blackbody temperature difference and achieving accurate temperature control of the infrared target. Experimental results show that the accuracy of the device in field testing of thermal imager MRTD is within 0.1 K, which greatly reduces cost while meeting project requirements, giving the method wide application value.
Narrow field electromagnetic sensor system and method
McEwan, Thomas E.
1996-01-01
A narrow field electromagnetic sensor system and method of sensing a characteristic of an object provide the capability to realize a characteristic of an object such as density, thickness, or presence, for any desired coordinate position on the object. One application is imaging. The sensor can also be used as an obstruction detector or an electronic trip wire with a narrow field without the disadvantages of impaired performance when exposed to dirt, snow, rain, or sunlight. The sensor employs a transmitter for transmitting a sequence of electromagnetic signals in response to a transmit timing signal, a receiver for sampling only the initial direct RF path of the electromagnetic signal while excluding all other electromagnetic signals in response to a receive timing signal, and a signal processor for processing the sampled direct RF path electromagnetic signal and providing an indication of the characteristic of an object. Usually, the electromagnetic signal is a short RF burst and the obstruction must provide a substantially complete eclipse of the direct RF path. By employing time-of-flight techniques, a timing circuit controls the receiver to sample only the initial direct RF path of the electromagnetic signal while not sampling indirect path electromagnetic signals. The sensor system also incorporates circuitry for ultra-wideband spread spectrum operation that reduces interference to and from other RF services while allowing co-location of multiple electronic sensors without the need for frequency assignments.
Narrow field electromagnetic sensor system and method
McEwan, T.E.
1996-11-19
A narrow field electromagnetic sensor system and method of sensing a characteristic of an object provide the capability to realize a characteristic of an object such as density, thickness, or presence, for any desired coordinate position on the object. One application is imaging. The sensor can also be used as an obstruction detector or an electronic trip wire with a narrow field without the disadvantages of impaired performance when exposed to dirt, snow, rain, or sunlight. The sensor employs a transmitter for transmitting a sequence of electromagnetic signals in response to a transmit timing signal, a receiver for sampling only the initial direct RF path of the electromagnetic signal while excluding all other electromagnetic signals in response to a receive timing signal, and a signal processor for processing the sampled direct RF path electromagnetic signal and providing an indication of the characteristic of an object. Usually, the electromagnetic signal is a short RF burst and the obstruction must provide a substantially complete eclipse of the direct RF path. By employing time-of-flight techniques, a timing circuit controls the receiver to sample only the initial direct RF path of the electromagnetic signal while not sampling indirect path electromagnetic signals. The sensor system also incorporates circuitry for ultra-wideband spread spectrum operation that reduces interference to and from other RF services while allowing co-location of multiple electronic sensors without the need for frequency assignments. 12 figs.
Kussmann, Jörg; Luenser, Arne; Beer, Matthias; Ochsenfeld, Christian
2015-03-07
An analytical method to calculate the molecular vibrational Hessian matrix at the self-consistent field level is presented. By analysis of the multipole expansions of the relevant derivatives of Coulomb-type two-electron integral contractions, we show that the effect of the perturbation on the electronic structure due to the displacement of nuclei decays at least as r^−2 instead of r^−1. The perturbation is asymptotically local, and the computation of the Hessian matrix can, in principle, be performed with O(N) complexity. Our implementation exhibits linear scaling in all time-determining steps, with some rapid but quadratic-complexity steps remaining. Sample calculations illustrate linear or near-linear scaling in the construction of the complete nuclear Hessian matrix for sparse systems. For more demanding systems, scaling is still considerably sub-quadratic to quadratic, depending on the density of the underlying electronic structure.
Far-field method for the characterisation of three-dimensional fields: vectorial polarimetry
NASA Astrophysics Data System (ADS)
Rodríguez, O.; Lara, D.; Dainty, C.
2010-06-01
The first attempt to completely characterise a three-dimensional field was made by Ellis and Dogariu, with excellent results reported [1]. However, their method is based on near-field techniques, which limits its range of applications. In this work, we present an alternative far-field method for the characterisation of the three-dimensional field that results from the interaction of a tightly focused three-dimensional field [2] with a sub-resolution specimen. Our method is based on the analysis of the scattering-angle-resolved polarisation state distribution across the exit pupil of a high numerical aperture (NA) collector lens using standard polarimetry techniques. Details of the method, the experimental setup built to verify its capabilities, and numerical and first experimental evidence demonstrating that the method allows for high sensitivity to sub-resolution displacements of a sub-resolution specimen shall be presented [3]. This work is funded by Science Foundation Ireland grant No. 07/IN.1/I906 and Shimadzu Corporation, Japan. Oscar Rodríguez is grateful to the National Council for Science and Technology (CONACYT, Mexico) for PhD scholarship 177627.
A field method for measurement of infiltration
Johnson, A.I.
1963-01-01
The determination of infiltration--the downward entry of water into a soil (or sediment)--is receiving increasing attention in hydrologic studies because of the need for more quantitative data on all phases of the hydrologic cycle. A measure of infiltration, the infiltration rate, is usually determined in the field by flooding basins or furrows, sprinkling, or measuring water entry from cylinders (infiltrometer rings). Rates determined by ponding in large areas are considered most reliable, but the high cost usually dictates that infiltrometer rings, preferably 2 feet in diameter or larger, be used. The hydrology of subsurface materials is critical in the study of infiltration. The zone controlling the rate of infiltration is usually the least permeable zone. Many other factors affect infiltration rate--the sediment (soil) structure, the condition of the sediment surface, the distribution of soil moisture or soil-moisture tension, the chemical and physical nature of the sediments, the head of applied water, the depth to ground water, the chemical quality and the turbidity of the applied water, the temperature of the water and the sediments, the percentage of entrapped air in the sediments, the atmospheric pressure, the length of time of application of water, the biological activity in the sediments, and the type of equipment or method used. It is concluded that specific values of the infiltration rate for a particular type of sediment are probably nonexistent and that measured rates are primarily for comparative use. A standard field-test method for determining infiltration rates by means of single- or double-ring infiltrometers is described, and the construction, installation, and operation of the infiltrometers are discussed in detail.
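For ring-infiltrometer data, the infiltration rate over each interval is simply the change in cumulative intake depth divided by the elapsed time; a minimal sketch with made-up readings (not from the report):

```python
def infiltration_rates_cm_per_h(times_min, cumulative_cm):
    """Incremental infiltration rate between successive readings of
    cumulative water intake from a ring infiltrometer."""
    pairs = zip(zip(times_min, cumulative_cm),
                zip(times_min[1:], cumulative_cm[1:]))
    return [(d2 - d1) / ((t2 - t1) / 60.0) for (t1, d1), (t2, d2) in pairs]

# hypothetical readings: the rate declines toward a steady final value
rates = infiltration_rates_cm_per_h([0, 10, 30, 60], [0.0, 1.0, 2.2, 3.3])
```

The declining sequence of incremental rates is the usual field result, and the late-time value is the one typically reported for comparative use.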
Andreuccetti, D; Zoppetti, N
2004-01-01
An advanced numerical evaluation tool is proposed for calculating the magnetic flux density produced by high-voltage power lines. Compared with existing software packages based on the application of standardized methods, this tool proved particularly suitable for making accurate evaluations over vast portions of territory, especially when the contributions of numerous aerial and/or underground lines must be taken into account. The aspects of the tool of greatest interest are (1) the interaction with an electronic archive of power lines, from which all the information necessary for the calculation is obtained; (2) the use of three-dimensional models of both the power lines and the territory they cross; (3) the direct interfacing with electronic cartography; and (4) the use of a representation procedure for the results based on contour maps. The tool has proven very useful, especially for Environmental Impact Assessment procedures relating to new power lines.
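Although the tool itself uses full three-dimensional line models and cartographic data, the underlying superposition step can be illustrated with the textbook two-dimensional model of long straight conductors (the geometry and current below are invented for the example):

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def flux_density_tesla(point, wires):
    """Magnitude of B at `point` (x, y) from long straight conductors
    perpendicular to the plane; each wire is (x, y, current_A). Each
    contributes mu0*I/(2*pi*r), directed tangentially (right-hand rule),
    and the contributions add vectorially."""
    bx = by = 0.0
    px, py = point
    for wx, wy, amps in wires:
        dx, dy = px - wx, py - wy
        r2 = dx * dx + dy * dy
        k = MU0 * amps / (2.0 * math.pi * r2)
        bx += -dy * k   # tangential unit vector is (-dy, dx)/r
        by += dx * k
    return math.hypot(bx, by)

# single 1000 A conductor 10 m away: B = mu0*1000/(2*pi*10) = 20 microtesla
b_tesla = flux_density_tesla((0.0, 0.0), [(0.0, 10.0, 1000.0)])
```

Evaluating this sum on a grid of points and contouring the result is, in simplified form, the contour-map representation the tool produces.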
Field methods for measuring concentrated flow erosion
NASA Astrophysics Data System (ADS)
Castillo, C.; Pérez, R.; James, M. R.; Quinton, J. N.; Taguas, E. V.; Gómez, J. A.
2012-04-01
Many studies have stressed the importance of gully erosion in the overall soil loss and sediment yield of agricultural catchments, for instance in recent years (Vandaele and Poesen, 1995; De Santisteban et al., 2006; Wu et al., 2008). Several techniques have been used for determining gully erosion in field studies. The conventional techniques involve the use of different devices (i.e. ruler, pole, tape, micro-topographic profilers, total station) to calculate rill and gully volumes through the determination of cross-sectional areas and reach lengths (Casalí et al., 1999; Hessel and van Asch, 2003). Optical devices (i.e. laser profilemeters) have also been designed for the rapid and detailed assessment of cross-sectional areas in gully networks (Giménez et al., 2009). These conventional 2D methods provide a simple and inexpensive approach to erosion evaluation, but are time consuming to carry out if good accuracy is required. On the other hand, remote sensing techniques are being increasingly applied to gully erosion investigation, such as aerial photography for large-scale, long-term investigations (e.g. Martínez-Casasnovas et al., 2004; Ionita, 2006), airborne and terrestrial LiDAR datasets for gully volume evaluation (James et al., 2007; Evans and Lindsay, 2010) and, recently, major advances in 3D photo-reconstruction techniques (Welty et al., 2010; James et al., 2011). Despite its interest, few studies simultaneously compare the accuracies of the range of conventional and remote sensing techniques used, or define the most suitable method for a particular scale, given time and cost constraints. That was the reason behind the International Workshop Innovations in the evaluation and measurement of rill and gully erosion, held in Cordoba in May 2011 and from which derive part of the materials presented in this abstract. The main aim of this work was to compare the accuracy and time requirements of traditional (2D) and recently developed
Intermediate electrostatic field for the generalized elongation method.
Liu, Kai; Korchowiec, Jacek; Aoki, Yuriko
2015-05-18
An intermediate electrostatic field is introduced to improve the accuracy of fragment-based quantum-chemical computational methods by including long-range polarizations of biomolecules. The point charge distribution of the intermediate field is generated by a charge sensitivity analysis that is parameterized for five different population analyses, namely, atoms-in-molecules, Hirshfeld, Mulliken, natural orbital, and Voronoi population analysis. Two model systems are chosen to demonstrate the performance of the generalized elongation method (ELG) combined with the intermediate electrostatic field. The calculations are performed for the STO-3G, 6-31G, and 6-31G(d) basis sets and compared with reference Hartree-Fock calculations. It is shown that the error in the total energy is reduced by one order of magnitude, independently of the population analyses used. This demonstrates the importance of long-range polarization in electronic-structure calculations by fragmentation techniques.
Edison, John R; Monson, Peter A
2014-07-14
Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.
Shi, Xu; Barnes, Robert O.; Chen, Li; Shajahan-Haq, Ayesha N.; Hilakivi-Clarke, Leena; Clarke, Robert; Wang, Yue; Xuan, Jianhua
2015-01-01
Summary: Identification of protein interaction subnetworks is an important step to help us understand complex molecular mechanisms in cancer. In this paper, we develop a BMRF-Net package, implemented in Java and C++, to identify protein interaction subnetworks based on a bagging Markov random field (BMRF) framework. By integrating gene expression data and protein–protein interaction data, this software tool can be used to identify biologically meaningful subnetworks. A user-friendly graphical user interface is developed as a Cytoscape plugin for the BMRF-Net software to handle the input/output interface. The detailed structure of the identified networks can be visualized in Cytoscape conveniently. The BMRF-Net package has been applied to breast cancer data to identify significant subnetworks related to breast cancer recurrence. Availability and implementation: The BMRF-Net package is available at http://sourceforge.net/projects/bmrfcjava/. The package is tested under Ubuntu 12.04 (64-bit), Java 7, glibc 2.15 and Cytoscape 3.1.0. Contact: xuan@vt.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25755273
Potential theoretic methods for far field sound radiation calculations
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Stenger, Edward J.; Scott, J. R.
1995-01-01
In the area of computational acoustics, procedures which accurately predict the far-field sound radiation are much sought after. A systematic development of such procedures is found in a sequence of papers by Atassi. The method presented here is an alternate approach to predicting far-field sound based on simple layer potential theoretic methods. The main advantages of this method are: it requires only a simple free space Green's function, it can accommodate arbitrary shapes of Kirchhoff surfaces, and it is readily extendable to three-dimensional problems. Moreover, the procedure presented here, though tested for unsteady lifting airfoil problems, can easily be adapted to other areas of interest, such as jet noise radiation problems. Results are presented for lifting airfoil problems and comparisons are made with the results reported by Atassi. Direct comparisons are also made for the flat plate case.
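After quadrature over the Kirchhoff surface, a single-layer potential reduces to a weighted sum of free-space Green's function evaluations. A minimal sketch (midpoint-rule quadrature; function name and the flattening of strength times patch area into one weight are my simplifications, not the paper's formulation):

```python
import cmath
import math

def farfield_pressure(obs, sources, k):
    """Single-layer potential evaluated with the 3-D free-space Green's
    function G(r) = exp(i*k*r) / (4*pi*r): pressure at `obs` is the sum
    over surface patches of (strength * patch_area) * G(|obs - patch|).

    sources : list of (position, strength_times_area) pairs
    k       : acoustic wavenumber
    """
    p = 0.0 + 0.0j
    for pos, q in sources:
        r = math.dist(obs, pos)
        p += q * cmath.exp(1j * k * r) / (4.0 * math.pi * r)
    return p

# One unit monopole at the origin, observed at distance 2
p = farfield_pressure((0.0, 0.0, 2.0), [((0.0, 0.0, 0.0), 1.0)], k=5.0)
```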
School's IN for Summer: An Alternative Field Experience for Elementary Science Methods Students
ERIC Educational Resources Information Center
Hanuscin, Deborah L.; Musikul, Kusalin
2007-01-01
Field experiences are critical to teacher learning and enhance the effectiveness of methods courses; however, when methods courses are offered in the summer, traditional school-based field experiences are not possible. This article describes an alternative campus-based experience created as part of an elementary science methods course. The Summer…
Knowledge-based flow field zoning
NASA Technical Reports Server (NTRS)
Andrews, Alison E.
1988-01-01
Automating flow field zoning in two dimensions is an important step towards easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.
Process system and method for fabricating submicron field emission cathodes
Jankowski, A.F.; Hayes, J.P.
1998-05-05
A process method and system for making field emission cathodes are disclosed. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well-defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape.
Magnetic field transfer device and method
Wipf, S.L.
1990-02-13
A magnetic field transfer device includes a pair of oppositely wound inner coils which each include at least one winding around an inner coil axis, and an outer coil which includes at least one winding around an outer coil axis. The windings may be formed of superconductors. The axes of the two inner coils are parallel and laterally spaced from each other so that the inner coils are positioned in side-by-side relation. The outer coil is outwardly positioned from the inner coils and rotatable relative to the inner coils about a rotational axis substantially perpendicular to the inner coil axes to generate a hypothetical surface which substantially encloses the inner coils. The outer coil rotates relative to the inner coils between a first position in which the outer coil axis is substantially parallel to the inner coil axes and the outer coil augments the magnetic field formed in one of the inner coils, and a second position 180° from the first position, in which the augmented magnetic field is transferred into the other inner coil and reoriented 180° from the original magnetic field. The magnetic field transfer device allows a magnetic field to be transferred between volumes with negligible work being required to rotate the outer coil with respect to the inner coils.
NASA Astrophysics Data System (ADS)
Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David
2015-03-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
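The CHC dynamics can be sketched in far simpler form than the fully implicit finite-difference Newton-Krylov-Schwarz solver described above: the snippet below uses a 2-D semi-implicit spectral scheme, with plain additive (non-conserving) noise for brevity, purely to illustrate the equation being solved. All names and parameter values are mine, not the paper's:

```python
import numpy as np

def chc_step(c, dt, kappa, noise_amp, rng):
    """One semi-implicit spectral step of the Cahn-Hilliard-Cook equation
        dc/dt = laplacian(c**3 - c - kappa * laplacian(c)) + noise
    on a periodic 2-D unit grid. The stiff biharmonic term is treated
    implicitly, the nonlinearity explicitly; the thermal fluctuation is
    simplified to additive Gaussian forcing (true CHC noise is conservative)."""
    n = c.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    f_hat = np.fft.fft2(c ** 3 - c)                 # explicit nonlinearity
    forced = c + np.sqrt(dt) * noise_amp * rng.standard_normal(c.shape)
    c_hat = (np.fft.fft2(forced) - dt * k2 * f_hat) / (1.0 + dt * kappa * k2 ** 2)
    return np.real(np.fft.ifft2(c_hat))

# Spinodal decomposition from a small random perturbation
rng = np.random.default_rng(0)
c = 0.1 * rng.standard_normal((64, 64))
for _ in range(50):
    c = chc_step(c, dt=0.05, kappa=1.0, noise_amp=0.01, rng=rng)
```

Without noise the k = 0 Fourier mode is untouched by the update, so the scheme conserves the mean composition, mirroring the conservation law built into the Cahn-Hilliard operator.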
Investigation of drag effect using the field signature method
NASA Astrophysics Data System (ADS)
Wan, Zhengjun; Liao, Junbi; Tian, Gui Yun; Cheng, Liang
2011-08-01
The potential drop (PD) method is an established non-destructive evaluation (NDE) technique. The monitoring of internal corrosion, erosion and cracks in piping systems, based on electrical field mapping or direct current potential drop arrays, is also known as the field signature method (FSM). The FSM has been applied to the monitoring of submarine pipes and of land-based oil and gas transmission pipes and containers. In the experimental studies, to detect and calculate the degree of pipe corrosion, the FSM analyses the relationships between electrical resistance and pipe thickness using an electrode matrix. The associated drag effect, or trans-resistance, causes a large margin of error in the application of resistance arrays. In this paper, the drag effect in resistance networks is investigated and analysed for the first time with the help of the FSM. Subsequently, a method to calculate the drag factors and eliminate the resulting errors is proposed. Theoretical analysis, simulation and experimental results show that the measurement accuracy can be improved by eliminating the errors caused by the drag effect.
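The FSM's basic inversion from potential drop to wall thickness can be sketched as follows, in a uniform-thinning idealization that ignores the drag effect the paper corrects for (function name and numbers are hypothetical):

```python
def wall_thickness(t0_mm, v0, v):
    """Field signature method, uniform-thinning idealization: at constant
    current the potential drop across an electrode pair is inversely
    proportional to the remaining wall thickness (R = rho*L / (w*t)),
    so t = t0 * V0 / V, where V0 is the baseline 'signature' reading."""
    if v0 <= 0 or v <= 0:
        raise ValueError("potential drops must be positive")
    return t0_mm * v0 / v

# A 10 % rise in potential drop over an 8 mm wall implies ~9 % metal loss
t = wall_thickness(8.0, 100.0, 110.0)
```

In a real electrode matrix the trans-resistance (drag) between neighbouring pairs breaks this one-pair proportionality, which is exactly the error source the paper quantifies and removes.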
Studies on Partially Coherent Fields and Coherence Measurement Methods
NASA Astrophysics Data System (ADS)
Cho, Seongkeun
The concept of coherence in optics describes how closely an optical field oscillates in unison at the same position at different times (temporal coherence) or at different positions at the same time (spatial coherence). Since all optical fields oscillate very rapidly with random fluctuations, coherence theory has been developed to describe the state of coherence of those optical fields through the use of time-averaged correlation functions. This thesis reviews and applies coherence theory for accurate and improved modeling of field propagation and coherence measurement for partially coherent fields. The first half of this thesis discusses the study of phase-space distributions and phase-space tomography. Phase-space distributions such as the Wigner and the ambiguity functions can be used as simple mathematical tools for describing the propagation of an optical field in any state of coherence, as those functions incorporate wave effects with the simplicity of ray optics. However, the Wigner and the ambiguity functions require a paraxial condition for the field description. To overcome this limitation, nonparaxial extensions of the Wigner function have been studied and applied to nonparaxial fields. In this thesis, a simple series expression for calculating a nonparaxial generalization of the Wigner function from the standard Wigner function is developed in both two- and three-dimensional free space. A nonparaxial generalization of the ambiguity function that retains properties analogous to the standard ambiguity function is also proposed in both two and three dimensions. This generalization extends phase-space tomography to the nonparaxial regime. The second half of this thesis proposes a new method of coherence measurement based on diffraction. By measuring the radiant intensity of a field with and without a binary transparent phase mask, one can estimate the coherence of the field at all pairs of points centered at the mask's edge. This method is proposed in
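The standard (paraxial) Wigner function that the thesis generalizes can be computed numerically from a sampled field. A minimal sketch; the discretization choices below are mine, not the thesis's:

```python
import numpy as np

def wigner(field):
    """Discrete Wigner distribution of a 1-D complex field E on a unit
    grid, following W(x,k) ~ Int E(x + s/2) E*(x - s/2) exp(-i*k*s) ds:
    W[i, l] = Re FFT_j( E[i+j] * conj(E[i-j]) ) / N.
    Summing W over its frequency axis recovers the intensity |E|**2."""
    n = len(field)
    W = np.zeros((n, n))
    for i in range(n):
        m = min(i, n - 1 - i)            # largest in-range symmetric shift
        corr = np.zeros(n, dtype=complex)
        for j in range(-m, m + 1):
            corr[j % n] = field[i + j] * np.conj(field[i - j])
        W[i] = np.real(np.fft.fftshift(np.fft.fft(corr))) / n
    return W

# Gaussian field: the frequency marginal must equal the intensity profile
x = np.linspace(-8.0, 8.0, 128)
E = np.exp(-x ** 2 / 2).astype(complex)
W = wigner(E)
```

The marginal property checked below is exact for this discretization, because summing the FFT over all frequencies isolates the zero-shift correlation E[i]E*[i].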
Wide-field TCSPC: methods and applications
NASA Astrophysics Data System (ADS)
Hirvonen, Liisa M.; Suhling, Klaus
2017-01-01
Time-correlated single photon counting (TCSPC) is a widely used, robust and mature technique to measure the photon arrival time in applications such as fluorescence spectroscopy and microscopy, LIDAR and optical tomography. In the past few years there have been significant developments with wide-field TCSPC detectors, which can record the position as well as the arrival time of the photon simultaneously. In this review, we summarise different approaches used in wide-field TCSPC detection, and discuss their merits for different applications, with emphasis on fluorescence lifetime imaging.
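As a minimal illustration of the TCSPC principle (ideal instrument response, no background, single-exponential decay; the data are simulated and the function name is hypothetical), a fluorescence lifetime can be estimated directly from photon arrival times:

```python
import random
import statistics

def fit_lifetime(arrival_times_ns):
    """Maximum-likelihood estimate of a single-exponential fluorescence
    lifetime from TCSPC photon arrival times, assuming an ideal (delta)
    instrument response and no background: for exponentially distributed
    delays the MLE of tau is simply the mean arrival time."""
    return statistics.fmean(arrival_times_ns)

# Simulated decay with a true lifetime of 2.5 ns
rng = random.Random(42)
photons = [rng.expovariate(1.0 / 2.5) for _ in range(20000)]
tau_hat = fit_lifetime(photons)
```

In wide-field TCSPC each photon also carries a position, so the same estimator can be applied per pixel to build a lifetime image.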
A field-based method for simultaneous measurements of the δ18O and δ13C of soil CO2 efflux
NASA Astrophysics Data System (ADS)
Mortazavi, B.; Prater, J. L.; Chanton, J. P.
determined from soil CO2. There was close agreement among the three methods for the determination of the δ13C of soil efflux CO2. Results suggest that the mini-towers can be effectively used in the field for determining the δ18O and the δ13C of soil-respired CO2.
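A standard analysis for such measurements (a general technique, not necessarily the exact mini-tower procedure of this study) is a Keeling-style mixing-line regression: δ13C of sampled air is regressed against 1/[CO2], and the intercept gives the isotopic signature of the respired source. A minimal sketch with synthetic two-end-member mixing data:

```python
def keeling_intercept(co2_ppm, delta13c):
    """Keeling-plot estimate of the isotopic signature of soil-respired
    CO2: ordinary least squares of delta-13C against 1/[CO2]; the
    intercept is the source signature (permil)."""
    x = [1.0 / c for c in co2_ppm]
    n = len(x)
    mx = sum(x) / n
    my = sum(delta13c) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, delta13c)) \
            / sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx

# Synthetic mixing of background air (-8 permil at 400 ppm) with
# respired CO2 (-26 permil); mass balance gives the mixture delta.
d_src, d_bg, c_bg = -26.0, -8.0, 400.0
co2 = [420.0, 480.0, 560.0, 650.0]
delta = [d_src + c_bg * (d_bg - d_src) / c for c in co2]
est = keeling_intercept(co2, delta)
```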
Ab initio based polarizable force field parametrization
NASA Astrophysics Data System (ADS)
Masia, Marco
2008-05-01
Experimental and simulation studies of anion-water systems have pointed out the importance of molecular polarization for many phenomena, ranging from hydrogen-bond dynamics to the structure of water interfaces. The study of such systems at the molecular level is usually made with classical molecular dynamics simulations. Structural and dynamical features are deeply influenced by molecular and ionic polarizability, whose parametrization in classical force fields has been the object of long-standing effort. However, when classical models are compared to ab initio calculations in the condensed phase, it is found that the water dipole moments are underestimated by ~30%, while the anion shows an overpolarization at short distances. A model for chloride-water polarizable interaction is parametrized here, making use of Car-Parrinello simulations in the condensed phase. The results hint at an innovative approach to polarizable force field development, based on ab initio simulations, which does not suffer from the drawbacks mentioned. The method is general and can be applied to the modeling of different systems, ranging from biomolecular to solid-state simulations.
Junction-based field emission structure for field emission display
Dinh, Long N.; Balooch, Mehdi; McLean, II, William; Schildbach, Marcus A.
2002-01-01
A junction-based field emission display, wherein the junctions are formed by depositing a semiconducting or dielectric, low work function, negative electron affinity (NEA) silicon-based compound film (SBCF) onto a metal or n-type semiconductor substrate. The SBCF can be doped to become a p-type semiconductor. A small forward bias voltage is applied across the junction so that electron transport is from the substrate into the SBCF region. Upon entering this NEA region, many electrons are released into the vacuum level above the SBCF surface and accelerated toward a positively biased phosphor screen anode, hence lighting up the phosphor screen for display. To turn off the display, the applied potential across the SBCF/substrate junction is simply switched off. The structure may be used for field emission flat panel displays.
Hyperbolic Methods for Surface and Field Grid Generation
NASA Technical Reports Server (NTRS)
Chan, William M.; VanDalsem, William R. (Technical Monitor)
1996-01-01
This chapter describes the use of hyperbolic partial differential equation methods for structured surface grid generation and field grid generation. While the surface grid generation equations are inherently three-dimensional, the field grid generation equations can be formulated in two or three dimensions. The governing equations are derived from orthogonality relations and cell area/volume constraints, and are solved numerically by marching from an initial curve or surface. The marching step size and marching distance can be prescribed by the user. Exact specification of the side and outer boundaries is not possible with a one-sweep marching scheme, but limited control is achievable. Excellent orthogonality and grid clustering characteristics are provided by hyperbolic methods, with one to two orders of magnitude savings in time over typical elliptic methods. Since hyperbolic grid generation methods do not require the exact specification of the side and outer boundaries of a grid, these methods are particularly well suited for the overlapping grid approach for solving problems on complex configurations. Grid generation software based on hyperbolic methods and its application to several complex configurations will be described.
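The marching idea can be caricatured in a few lines: advance each node of the current curve along its local normal by a prescribed step, layer by layer. This is only a geometric sketch of the marching concept (names and the plain normal-advance rule are mine), not the orthogonality/cell-area PDE system the chapter actually solves:

```python
import math

def march_layer(points, step):
    """One marching step of a toy hyperbolic-style grid generator:
    advance each node of a polyline along its local unit normal by
    `step`, with central-difference tangents (one-sided at the ends)."""
    new = []
    n = len(points)
    for i in range(n):
        x0, y0 = points[max(i - 1, 0)]
        x1, y1 = points[min(i + 1, n - 1)]
        tx, ty = x1 - x0, y1 - y0
        length = math.hypot(tx, ty)
        nx, ny = -ty / length, tx / length       # left-hand unit normal
        x, y = points[i]
        new.append((x + step * nx, y + step * ny))
    return new

# Grow three layers off a straight segment lying on the x-axis
curve = [(i * 1.0, 0.0) for i in range(5)]
layers = [curve]
for _ in range(3):
    layers.append(march_layer(layers[-1], 0.2))
```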
A data base of geologic field spectra
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Goetz, A. F. H.; Paley, H. N.; Alley, R. E.; Abbott, E. A.
1981-01-01
It is noted that field samples measured in the laboratory do not always present an accurate picture of the ground surface sensed by airborne or spaceborne instruments because of the heterogeneous nature of most surfaces and because samples are disturbed and surface characteristics changed by collection and handling. The development of new remote sensing instruments relies on the analysis of surface materials in their natural state. The existence of thousands of Portable Field Reflectance Spectrometer (PFRS) spectra has necessitated a single, all-inclusive data base that permits greatly simplified searching and sorting procedures and facilitates further statistical analyses. The data base developed at JPL for cataloging geologic field spectra is discussed.
Grassmann phase space methods for fermions. II. Field theory
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Jeffers, J.; Barnett, S. M.
2017-02-01
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation and creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for situations in which large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker-Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin-1/2 fermionic atoms and to multi-component Fermi gases with non-zero-range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin-1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.
Thomer, Andrea; Vaidya, Gaurav; Guralnick, Robert; Bloom, David; Russell, Laura
2012-01-01
Part diary, part scientific record, biological field notebooks often contain details necessary to understanding the location and environmental conditions existent during collecting events. Despite their clear value for (and recent use in) global change studies, the text-mining outputs from field notebooks have been idiosyncratic to specific research projects, and impossible to discover or re-use. Best practices and workflows for digitization, transcription, extraction, and integration with other sources are nascent or non-existent. In this paper, we demonstrate a workflow to generate structured outputs while also maintaining links to the original texts. The first step in this workflow was to place already digitized and transcribed field notebooks from the University of Colorado Museum of Natural History founder, Junius Henderson, on Wikisource, an open text transcription platform. Next, we created Wikisource templates to document places, dates, and taxa to facilitate annotation and wiki-linking. We then requested help from the public, through social media tools, to take advantage of volunteer efforts and energy. After three notebooks were fully annotated, content was converted into XML and annotations were extracted and cross-walked into Darwin Core compliant record sets. Finally, these recordsets were vetted, to provide valid taxon names, via a process we call "taxonomic referencing." The result is identification and mobilization of 1,068 observations from three of Henderson's thirteen notebooks and a publishable Darwin Core record set for use in other analyses. Although challenges remain, this work demonstrates a feasible approach to unlock observations from field notebooks that enhances their discovery and interoperability without losing the narrative context from which those observations are drawn. "Compose your notes as if you were writing a letter to someone a century in the future." Perrine and Patton (2011). PMID:22859891
Stream temperature investigations: field and analytic methods
Bartholow, J.M.
1989-01-01
Alternative public domain stream and reservoir temperature models are contrasted with SNTEMP. A distinction is made between steady-flow and dynamic-flow models and their respective capabilities. Regression models are offered as an alternative approach for some situations, with appropriate mathematical formulas suggested. Appendices provide information on State and Federal agencies that are good data sources, vendors for field instrumentation, and small computer programs useful in data reduction.
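As a minimal illustration of the regression alternative mentioned above (the coefficients and weekly data are hypothetical, and real applications often use nonlinear or lagged forms), the simplest air-water temperature regression is an ordinary least squares line:

```python
def linear_regression(x, y):
    """Ordinary least squares fit y = a + b*x; returns (a, b). The
    simplest of the regression alternatives to physically based stream
    temperature models such as SNTEMP."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical weekly mean air vs. water temperatures (deg C)
air = [2.0, 6.0, 11.0, 16.0, 21.0, 24.0]
water = [3.1, 5.8, 9.4, 12.9, 16.4, 18.5]
a, b = linear_regression(air, water)
predicted = a + b * 14.0        # predicted water temp for 14 deg C air
```

Slopes below one, as here, are typical: stream temperature damps air temperature swings because of groundwater inflow and thermal inertia.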
Dispersion Method Using Focused Ultrasonic Field
NASA Astrophysics Data System (ADS)
Jungsoon Kim; Moojoon Kim; Kanglyel Ha; Minchul Chu
2010-07-01
The dispersion of powders into liquids has become one of the most important techniques in high-tech industries and it is a common process in the formulation of various products, such as paint, ink, shampoo, beverages, and polishing media. In this study, an ultrasonic system with a cylindrical transducer is newly introduced for pure nanoparticle dispersion. The acoustic pressure field and the characteristics of the shock pulse caused by cavitation are investigated. The frequency spectrum of the pulse from the collapse of air bubbles in the cavitation is analyzed theoretically. It was confirmed that a TiO2 water suspension can be dispersed effectively using the suggested system.
Determination of traces of cobalt in soils: A field method
Almond, H.
1953-01-01
The growing use of geochemical prospecting methods in the search for ore deposits has led to the development of a field method for the determination of cobalt in soils. The determination is based on the fact that cobalt reacts with 2-nitroso-1-naphthol to yield a pink compound that is soluble in carbon tetrachloride. The carbon tetrachloride extract is shaken with dilute cyanide to complex interfering elements and to remove excess reagent. The cobalt content is estimated by comparing the pink color in the carbon tetrachloride with a standard series prepared from standard solutions. The cobalt 2-nitroso-1-naphtholate system in carbon tetrachloride follows Beer's law. As little as 1 p.p.m. can be determined in a 0.1-gram sample. The method is simple and fast, requires only basic equipment, and allows more than 40 samples to be analyzed per man-day with an accuracy within 30% or better.
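Because the complex follows Beer's law (absorbance proportional to concentration, A = εlc), a standard series defines a linear calibration through the origin. A minimal sketch of the arithmetic behind comparing a sample to the standards; the standard values are hypothetical, and the field method itself compares colors visually rather than instrumentally:

```python
def estimate_concentration(absorbance, standards):
    """Estimate analyte concentration assuming Beer's law.
    `standards` maps known concentrations (ppm) to measured absorbances;
    a least-squares line through the origin gives the calibration slope."""
    num = sum(a * c for c, a in standards.items())
    den = sum(a * a for c, a in standards.items())
    return absorbance * (num / den)          # ppm per absorbance unit

# Hypothetical standard series for the pink cobalt complex
standards = {1.0: 0.11, 2.0: 0.22, 5.0: 0.55}
c = estimate_concentration(0.33, standards)
```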
NASA Astrophysics Data System (ADS)
Chirouze, J.; Boulet, G.; Jarlan, L.; Fieuzal, R.; Rodriguez, J. C.; Ezzahar, J.; Er-Raki, S.; Bigeard, G.; Merlin, O.; Garatuza-Payan, J.; Watts, C.; Chehbouni, G.
2013-01-01
Remotely sensed surface temperature can provide a good proxy for water stress level and is therefore particularly useful for estimating spatially distributed evapotranspiration. Instantaneous stress levels or instantaneous latent heat flux are deduced from the surface energy balance equation constrained by this equilibrium temperature. Pixel-average surface temperature depends on two main factors: stress and vegetation fraction cover. Methods for estimating stress vary according to the way they treat each factor. Two families of methods can be defined: the contextual methods, where stress levels are scaled on a given image between hot/dry and cool/wet pixels for a particular vegetation cover, and single-pixel methods, which evaluate latent heat as the residual of the surface energy balance for one pixel independently from the others. Four models, two contextual (S-SEBI and a triangle method inspired by Moran et al., 1994) and two single-pixel (TSEB, SEBS), are applied at seasonal scale over a 4 km × 4 km irrigated agricultural area in semi-arid northern Mexico. Their performance, from both local and spatial standpoints, is compared relative to energy balance data acquired at seven locations within the area, as well as to a more complex soil-vegetation-atmosphere transfer model forced with true irrigation and rainfall data. Stress levels are not always well retrieved by most models, but S-SEBI as well as TSEB, although slightly biased, show good performance. A drop in model performance is observed when vegetation is senescent, mostly due to poor partitioning both between turbulent fluxes and between the soil/plant components of the latent heat flux and the available energy. As expected, contextual methods perform well when extreme hydric and vegetation conditions are encountered in the same image (therefore, especially in spring and early summer), while they tend to exaggerate the spread in water status in more homogeneous conditions (especially in winter).
NASA Astrophysics Data System (ADS)
Chirouze, J.; Boulet, G.; Jarlan, L.; Fieuzal, R.; Rodriguez, J. C.; Ezzahar, J.; Er-Raki, S.; Bigeard, G.; Merlin, O.; Garatuza-Payan, J.; Watts, C.; Chehbouni, G.
2014-03-01
Instantaneous evapotranspiration rates and surface water stress levels can be deduced from remotely sensed surface temperature data through the surface energy budget. Two families of methods can be defined: contextual methods, where stress levels are scaled on a given image between hot/dry and cool/wet pixels for a particular vegetation cover, and single-pixel methods, which evaluate latent heat as the residual of the surface energy balance for one pixel independently of the others. Four models, two contextual (S-SEBI and a modified triangle method, named VIT) and two single-pixel (TSEB, SEBS), are applied over one growing season (December-May) for a 4 km × 4 km irrigated agricultural area in semi-arid northern Mexico. Their performance, from both local and spatial standpoints, is compared against energy balance data acquired at seven locations within the area, as well as an uncalibrated soil-vegetation-atmosphere transfer (SVAT) model forced with local in situ data including observed irrigation and rainfall amounts. Stress levels are not always well retrieved by most models, but S-SEBI as well as TSEB, although slightly biased, show good performance. A drop in performance is observed for all models when vegetation is senescent, mostly due to poor partitioning both between the turbulent fluxes and between the soil/plant components of the latent heat flux and the available energy. As expected, contextual methods perform well when contrasted soil moisture and vegetation conditions are encountered in the same image (therefore, especially in spring and early summer), while they tend to exaggerate the spread in water status in more homogeneous conditions (especially in winter). Surface energy balance models run with available remotely sensed products prove to be nearly as accurate as the uncalibrated SVAT model forced with in situ data.
Efficient Training Methods for Conditional Random Fields
2008-02-01
Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance
Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W.
2016-01-01
An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller—advanced fuzzy potential field method (AFPFM)—that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot. PMID:27123001
Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance.
Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W
2016-01-01
An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller, the advanced fuzzy potential field method (AFPFM), that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot.
An Efficient Method for Transferring Adult Mosquitoes during Field Tests,
CULICIDAE, *COLLECTING METHODS, REPRINTS, BLOOD SUCKING INSECTS, FIELD TESTS, HAND HELD, EFFICIENCY, LABORATORY EQUIPMENT, MORTALITY RATES, ADULTS, AEDES, ASPIRATORS, TEST AND EVALUATION
A Comprehensive Expedient Methods Field Manual.
1984-09-01
provide surface shelters that offer protection against the elements. These kits contain modular, expandable, and canvas shelters. Modular and expandable...shelters and canvas tents provide all the structures needed on a bare base to provide billeting, shops, hangars, and storage...interconnecting stringer light cables. 4. Each spider box contains enough outlets to supply each tent with at least two power receptacles. 5. All equipment
Path planning in uncertain flow fields using ensemble method
NASA Astrophysics Data System (ADS)
Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.
2016-10-01
An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
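The ensemble idea in the abstract — sample realizations of the uncertain current, compute the travel time for each, then analyze the statistics — can be illustrated with a deliberately simplified sketch. The fixed straight-line path and the Gaussian along-track current model below are hypothetical stand-ins, not the paper's BVP-based planner:

```python
import random
import statistics

def travel_time(distance, boat_speed, current_along_track):
    """Time to cover `distance` at ground speed = boat speed + along-track current."""
    ground_speed = boat_speed + current_along_track
    if ground_speed <= 0:
        return float("inf")  # the vehicle cannot make headway against the current
    return distance / ground_speed

def ensemble_travel_times(distance, boat_speed, n_members, seed=0):
    """Sample an ensemble of current realizations and return travel-time statistics."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_members):
        current = rng.gauss(0.2, 0.3)  # hypothetical along-track current (m/s)
        times.append(travel_time(distance, boat_speed, current))
    finite = [t for t in times if t != float("inf")]
    return statistics.mean(finite), statistics.stdev(finite)

# A 1 km crossing at 2 m/s through an uncertain current, 200 ensemble members.
mean_t, std_t = ensemble_travel_times(distance=1000.0, boat_speed=2.0, n_members=200)
```

In an operational setting the same statistics would be computed over candidate paths, so the planner can trade expected travel time against its spread.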
Background field method in the gradient flow
NASA Astrophysics Data System (ADS)
Suzuki, Hiroshi
2015-10-01
In perturbative treatments of the Yang-Mills gradient flow, it is useful to introduce a gauge non-covariant term ("gauge-fixing term") into the flow equation that gives rise to a Gaussian damping factor also for the gauge degrees of freedom. In the present paper, we consider a modified form of the gauge-fixing term that manifestly preserves covariance under the background gauge transformation. It is shown that, like the conventional gauge-fixing term, our gauge-fixing term does not affect gauge-invariant quantities. The formulation thus allows a background-gauge-covariant perturbative expansion of the flow equation that provides, in particular, a very efficient computational method for the expansion coefficients in the small flow time expansion. The formulation can be generalized to systems containing fermions.
Inverse methods for stellarator error-fields and emission
NASA Astrophysics Data System (ADS)
Hammond, K. C.; Anichowski, A.; Brenner, P. W.; Diaz-Pacheco, R.; Volpe, F. A.; Wei, Y.; Kornbluth, Y.; Pedersen, T. S.; Raftopoulos, S.; Traverso, P.
2016-10-01
Work at the CNT stellarator at Columbia University has resulted in the development of two inverse diagnosis techniques that infer difficult-to-measure properties from simpler measurements. First, CNT's error-field is determined using a Newton-Raphson algorithm to infer coil misalignments based on measurements of flux surfaces. This is obtained by reconciling the computed flux surfaces (a function of coil misalignments) with the measured flux surfaces. Second, the plasma emissivity profile is determined based on a single CCD camera image using an onion-peeling method. This approach posits a system of linear equations relating pixel brightness to emission from a discrete set of plasma layers bounded by flux surfaces. Results for both of these techniques as applied to CNT will be shown, and their applicability to large modular coil stellarators will be discussed.
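The onion-peeling step described above reduces to back-substitution on a triangular system: the outermost chord crosses only the outermost layer, so its emissivity follows immediately, and each inner chord is corrected for the layers already solved. A minimal sketch, where the chord/shell geometry and path lengths are assumed inputs rather than CNT's actual camera model:

```python
def onion_peel(brightness, path_lengths):
    """Onion-peeling inversion for a nested-shell emission model.

    brightness[i]      : line-integrated signal of chord i (impact parameter r_i)
    path_lengths[i][j] : length of chord i inside shell j; zero for j < i, so the
                         system is upper-triangular (chord i only crosses shells
                         j >= i, with shell 0 innermost).
    """
    n = len(brightness)
    emissivity = [0.0] * n
    for i in range(n - 1, -1, -1):       # peel from the outermost chord inward
        residual = brightness[i]
        for j in range(i + 1, n):
            residual -= path_lengths[i][j] * emissivity[j]
        emissivity[i] = residual / path_lengths[i][i]
    return emissivity

# Two-shell example: the inner chord crosses both shells, the outer chord only one.
emissivity = onion_peel([11.0, 4.0], [[1.0, 2.0], [0.0, 1.0]])
```

Because the system is triangular, the solve is exact and O(n²) in the number of layers; noise handling in practice would require regularization on top of this.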
Symstad, Amy J.; Wienk, Cody L.; Thorstenson, Andy
2006-01-01
The Northern Great Plains Inventory & Monitoring (I&M) Network (Network) of the National Park Service (NPS) consists of 13 NPS units in North Dakota, South Dakota, Nebraska, and eastern Wyoming. The Network is in the planning phase of a long-term program to monitor the health of park ecosystems. Plant community composition is one of the 'Vital Signs,' or indicators, that will be monitored as part of this program for three main reasons. First, plant community composition is information-rich; a single sampling protocol can provide information on the diversity of native and non-native species, the abundance of individual dominant species, and the abundance of groups of plants. Second, plant community composition is of specific management concern. The abundance and diversity of exotic plants, both absolute and relative to native species, is one of the greatest management concerns in almost all Network parks (Symstad 2004). Finally, plant community composition reflects the effects of a variety of current or anticipated stressors on ecosystem health in the Network parks including invasive exotic plants, large ungulate grazing, lack of fire in a fire-adapted system, chemical exotic plant control, nitrogen deposition, increased atmospheric carbon dioxide concentrations, and climate change. Before the Network begins its Vital Signs monitoring, a detailed plan describing specific protocols used for each of the Vital Signs must go through rigorous development and review. The pilot study on which we report here is one of the components of this protocol development. The goal of the work we report on here was to determine a specific method to use for monitoring plant community composition of the herb layer (< 2 m tall).
Wind field model-based estimation of Seasat scatterometer winds
NASA Technical Reports Server (NTRS)
Long, David G.
1993-01-01
A model-based approach to estimating near-surface wind fields over the ocean from Seasat scatterometer (SASS) measurements is presented. The approach is a direct assimilation technique in which wind field model parameters are estimated directly from the scatterometer measurements of the radar backscatter of the ocean's surface using maximum likelihood principles. The wind field estimate is then computed from the estimated model parameters. The wind field model used in this approach is based on geostrophic approximation and on simplistic assumptions about the wind field vorticity and divergence but includes ageostrophic winds. Nine days of SASS data were processed to obtain unique wind estimates. Comparisons in performance to the traditional two-step (point-wise wind retrieval followed by ambiguity removal) wind estimate method and the model-based method are provided using both simulated radar backscatter measurements and actual SASS measurements. In the latter case the results are compared to wind fields determined using subjective ambiguity removal. While the traditional approach results in missing measurements and reduced effective swath width due to fore/aft beam cell coregistration problems, the model-based approach uses all available measurements to increase the effective swath width and to reduce data gaps. The results reveal that the model-based wind estimates have accuracy comparable to traditionally estimated winds with less 'noise' in the directional estimates, particularly at low wind speeds.
Deformation methods in modelling of the inner magnetospheric electromagnetic fields
NASA Astrophysics Data System (ADS)
Toivanen, P. K.
2007-12-01
Various deformation methods have been widely used in animation and image processing. In common terms, they are mathematical presentations of the deformations of an image drawn on an elastic material under stretching or compression of the material. Such a method has also been used in modelling the magnetospheric magnetic field, and has recently been generalized to include the electric field as well. In this presentation, the theory of the deformation method and an application in the form of a new global magnetospheric electromagnetic field model are previewed. The main focus of the presentation is on the inner magnetospheric current systems and associated electromagnetic fields during quiet and disturbed periods. Finally, a short look at modern deformation methods in image processing is taken. These methods include Free Form Deformations and Moving Least Squares Deformations, and their future applications in magnetospheric field modelling are discussed.
Comparison of induction motor field efficiency evaluation methods
Hsu, J.S.; Kueck, J.D.; Olszewski, M.; Casada, D.A.; Otaduy, P.J.; Tolbert, L.M.
1996-10-01
Unlike testing motor efficiency in a laboratory, certain methods given in IEEE Std 112 cannot be used to evaluate motor efficiency in the field. For example, it is difficult to load a motor in the field with a dynamometer when the motor is already coupled to driven equipment. Motor efficiency field evaluation faces a different environment from that for which IEEE Std 112 was chiefly written. A field evaluation method consists of one or several basic methods grouped according to their physical nature. Their intrusiveness and accuracy are also discussed. This study is useful for field engineers in selecting or establishing a proper efficiency evaluation method by understanding the theories and error sources of the methods.
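As one concrete example of a basic field method of the kind surveyed above, the slip method infers the load fraction from measured shaft speed and estimates efficiency as estimated output over measured input. The sketch below is illustrative only; all numbers are hypothetical, and the slip method is known to be among the less accurate options:

```python
def slip_method_efficiency(sync_speed_rpm, measured_speed_rpm,
                           rated_speed_rpm, rated_output_w, input_power_w):
    """Rough field estimate of motor efficiency via the slip method: the load
    fraction is approximated by measured slip over rated slip, and efficiency is
    the estimated output power divided by the measured electrical input power."""
    measured_slip = sync_speed_rpm - measured_speed_rpm
    rated_slip = sync_speed_rpm - rated_speed_rpm
    load_fraction = measured_slip / rated_slip
    estimated_output_w = load_fraction * rated_output_w
    return estimated_output_w / input_power_w

# Hypothetical 7.5 kW, 4-pole motor on a 60 Hz supply (1800 rpm synchronous speed).
eta = slip_method_efficiency(1800, 1765, 1750, 7500.0, 6100.0)
```

Only shaft speed and input power need to be measured, which is why the method is minimally intrusive despite its limited accuracy.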
NASA Astrophysics Data System (ADS)
Su, Xiaoru; Shu, Longcang; Chen, Xunhong; Lu, Chengpeng; Wen, Zhonghui
2016-12-01
Interactions between surface water and groundwater are of great significance for evaluating water resources and protecting ecosystem health. Heat as a tracer is widely used to determine this interactive exchange with high precision, low cost and great convenience. The flow in a river-bank cross-section occurs in vertical and lateral directions. In order to depict the flow path and its spatial distribution in bank areas, a genetic algorithm (GA) two-dimensional (2-D) heat-transport nested-loop method for variably saturated sediments, GA-VS2DH, was developed based on Microsoft Visual Basic 6.0. VS2DH was applied to model a 2-D bank-water flow field, and GA was used to calibrate the model automatically by minimizing the difference between observed and simulated temperatures in bank areas. A hypothetical model was developed to assess the reliability of GA-VS2DH for inverse modeling in a river-bank system. Some benchmark tests were conducted to characterize the capability of GA-VS2DH. The results indicated that the seepage velocity and parameters estimated with GA-VS2DH were acceptable and reliable. GA-VS2DH was then applied to two field sites in China with different sedimentary materials to verify the reliability of the method; it can be applied to interpret the cross-sectional 2-D water flow field. The estimates of horizontal hydraulic conductivity at the Dawen River and Qinhuai River sites are 1.317 and 0.015 m/day, which correspond to the sand and clay sediments at the two sites, respectively.
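The GA calibration loop described above — run the forward model, compare simulated and observed temperatures, and evolve the parameter — can be sketched with a toy forward model standing in for VS2DH. The exponential damping model and every constant below are illustrative assumptions, not the paper's physics:

```python
import math
import random

def forward_temperature(v, depths):
    """Toy stand-in for the VS2DH forward model: a 20 degC surface signal damped
    exponentially with depth, with a damping rate that grows with seepage velocity v."""
    return [20.0 * math.exp(-(0.5 + v) * z) for z in depths]

def ga_calibrate(observed, depths, generations=60, pop_size=30, seed=1):
    """Minimal real-coded GA: size-2 tournament selection plus Gaussian mutation,
    minimizing the sum of squared temperature residuals."""
    rng = random.Random(seed)

    def misfit(v):
        return sum((s - o) ** 2
                   for s, o in zip(forward_temperature(v, depths), observed))

    pop = [rng.uniform(0.0, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        next_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)                   # size-2 tournament
            parent = a if misfit(a) < misfit(b) else b
            child = parent + rng.gauss(0.0, 0.05)       # Gaussian mutation
            next_pop.append(min(max(child, 0.0), 2.0))  # keep within bounds
        pop = next_pop
    return min(pop, key=misfit)

depths = [0.1, 0.2, 0.4, 0.8]
observed = forward_temperature(0.8, depths)  # synthetic "measured" temperatures
v_est = ga_calibrate(observed, depths)
```

The real GA-VS2DH nests a full variably saturated heat-transport simulation inside this loop and estimates several parameters at once; the structure of the search is the same.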
Ocean Wave Simulation Based on Wind Field.
Li, Zhongyi; Wang, Hao
2016-01-01
Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy resource for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them consider constructing the ocean surface height field from the perspective of wind force driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results showed that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates.
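The core idea of superposing wind-driven wave particles into a height field can be sketched in one dimension: each particle is a bump advected by the wind, and the surface height is the sum of all bumps. The Gaussian bump shape and all parameters below are hypothetical simplifications of the paper's scheme:

```python
import math

def height(x, t, particles):
    """Ocean surface height at position x and time t as a superposition of
    wind-driven wave particles. Each particle is (x0, speed, amplitude, width):
    a Gaussian bump launched at x0 and advected with the wind speed."""
    h = 0.0
    for x0, speed, amp, width in particles:
        center = x0 + speed * t
        h += amp * math.exp(-((x - center) / width) ** 2)
    return h

# Hypothetical particles spawned by a steady wind blowing in +x at 2 m/s.
particles = [(0.0, 2.0, 0.5, 3.0), (5.0, 2.0, 0.3, 2.0)]
h0 = height(10.0, 0.0, particles)
h1 = height(10.0, 5.0, particles)  # by t = 5 s, the first particle has reached x = 10
```

In 2-D the same superposition runs over a grid of sample points, with particles spawned, grown and killed according to the local wind field.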
IR photodetector based on rectangular quantum wire in magnetic field
Jha, Nandan
2014-04-24
In this paper we study a rectangular-quantum-wire-based IR detector with a magnetic field applied along the wires. The energy spectrum of a particle in a rectangular box shows level repulsions and crossings when an external magnetic field is applied. Due to this complex level dynamics, we can tune the spacing between any two levels by varying the magnetic field. This method allows the user to change the detector parameters according to their requirements. In this paper, we numerically calculate the energy sub-band levels of the square quantum wire in a constant magnetic field along the wire and quantify the possible operating wavelength range that can be obtained by varying the magnetic field. We also calculate the photon absorption probability at different magnetic fields and give the efficiency for different wavelengths, assuming the transition occurs between the two lowest levels.
Classical-field methods for atom-molecule systems
NASA Astrophysics Data System (ADS)
Sahlberg, Catarina E.; Gardiner, C. W.
2013-02-01
We extend classical-field methods [Blakie et al., Adv. Phys. 57, 363 (2008)] to provide a description of atom-molecule systems. We use a model of Bose-Einstein condensation of atoms close to a Feshbach resonance, in which the tunable scattering length of the atoms is described using a system of coupled atom and molecule fields [Holland et al., Phys. Rev. Lett. 86, 1915 (2001)]. We formulate the basic theoretical methods for a coupled atom-molecule system, including the determination of the phenomenological parameters in our system, the Thomas-Fermi description of the Bose-Einstein condensate, the Bogoliubov-de Gennes equations, and the Bogoliubov excitation spectrum for a homogeneous condensed system. We apply this formalism to the special case of Bragg scattering from a uniform condensate and find that, for moderate and large scattering lengths, there is a dramatic difference in the shift of the peak of the Bragg spectra compared to that based on a structureless atom model. The result is compatible with the experimental results of Papp et al. [Phys. Rev. Lett. 101, 135301 (2008)] for Bragg scattering from a nonuniform condensate.
Oriented Connectivity-Based Method for Segmenting Solar Loops
NASA Technical Reports Server (NTRS)
Lee, J. K.; Newman, T. S.; Gary, G. A.
2005-01-01
A method based on oriented connectivity that can automatically segment arc-like structures (solar loops) from intensity images of the Sun's corona is introduced. The method is a constructive approach that uses model-guided processing to enable extraction of credible loop structures. Since the solar loops are vestiges of the solar magnetic field, the model-guided processing exploits external estimates of this field's local orientations that are derived from a physical magnetic field model. Empirical studies of the method's effectiveness are also presented. The Oriented Connectivity-Based Method is the first automatic method for the segmentation of solar loops.
An improved reconstruction method for cosmological density fields
NASA Technical Reports Server (NTRS)
Gramann, Mirt
1993-01-01
This paper proposes some improvements to existing reconstruction methods for recovering the initial linear density and velocity fields of the universe from the present large-scale density distribution. We derive the Eulerian continuity equation in the Zel'dovich approximation and show that, by applying this equation, we can trace the evolution of the gravitational potential of the universe more exactly than is possible with previous approaches based on the Zel'dovich-Bernoulli equation. The improved reconstruction method is tested using N-body simulations. When the Zel'dovich-Bernoulli equation describes the formation of filaments, the Zel'dovich continuity equation also follows the clustering of clumps inside the filaments. Our reconstruction method recovers the true initial gravitational potential with an rms error about 3 times smaller than previous methods. We examine the recovery of the initial distribution of Fourier components and find the scale at which the recovered phases are scrambled with respect to their true initial values. Integrating the Zel'dovich continuity equation back in time, we can improve the spatial resolution of the reconstruction by a factor of about 2.
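For context, the Zel'dovich approximation referred to above is conventionally written as a displacement mapping from Lagrangian coordinates q to Eulerian positions x, with the density evolution governed by a continuity equation. In standard textbook notation (which may differ from the paper's exact conventions):

```latex
% Zel'dovich approximation: straight-line displacement set by the initial potential
\mathbf{x}(\mathbf{q},t) = \mathbf{q} - D(t)\,\nabla_{\mathbf{q}}\Psi(\mathbf{q}),
\qquad
\frac{\partial \delta}{\partial D} + \nabla \cdot \left[ (1+\delta)\,\frac{d\mathbf{x}}{dD} \right] = 0 ,
```

where D(t) is the linear growth factor, Ψ a suitably normalized initial potential, and δ the density contrast; reconstruction then amounts to integrating the continuity equation backward in D.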
Geostatistical joint inversion of seismic and potential field methods
NASA Astrophysics Data System (ADS)
Shamsipour, Pejman; Chouteau, Michel; Giroux, Bernard
2016-04-01
Interpretation of geophysical data needs to integrate different types of information to make the proposed model geologically realistic. Multiple data sets can reduce the uncertainty and non-uniqueness present in separate geophysical data inversions. Seismic data can play an important role in mineral exploration; however, processing and interpretation of seismic data are difficult due to the complexity of hard-rock geology. On the other hand, the model recovered from potential field methods is affected by an inherent non-uniqueness caused by the nature of the physics and by the underdetermination of the problem. Joint inversion of seismic and potential field data can mitigate the weaknesses of inverting these data sets separately. A stochastic joint inversion method based on geostatistical techniques is applied to estimate density and velocity distributions from gravity and travel time data. The method fully integrates the physical relations between density and gravity, on one hand, and slowness and travel time, on the other. As a consequence, when the data are considered noise-free, the responses from the inverted slowness and density data exactly reproduce the observed data. The required density and velocity auto- and cross-covariances are assumed to follow a linear model of coregionalization (LCM); recently developed nonlinear models of coregionalization could also be applied if needed. The kernel function for the gravity method is obtained in closed form. For ray tracing, we use the shortest-path method (SPM) to calculate the operation matrix. The joint inversion is performed on a structured grid; however, it is possible to extend it to an unstructured grid. The method is tested on two synthetic models: a model consisting of two objects buried in a homogeneous background and a model with a stochastic distribution of parameters. The results illustrate the capability of the method to improve the inverted model compared to the separate inverted models with either gravity
A component compensation method for magnetic interferential field
NASA Astrophysics Data System (ADS)
Zhang, Qi; Wan, Chengbiao; Pan, Mengchun; Liu, Zhongyan; Sun, Xiaoyong
2017-04-01
A new component searching with scalar restriction method (CSSRM) is proposed for magnetometers to compensate the magnetic interferential field caused by the ferromagnetic material of the platform and to improve measurement performance. In CSSRM, the objective function for parameter estimation minimizes the difference between measured and reference values of the magnetic field (components and magnitude). Two scalar compensation methods are compared with CSSRM, and the simulation results indicate that CSSRM can estimate all interferential parameters and the external magnetic field vector with high accuracy. The magnetic field magnitude and components compensated with CSSRM coincide with the true values very well. An experiment was carried out with a tri-axial fluxgate magnetometer mounted in a measurement system together with inertial sensors. After compensation, the error standard deviations of both the magnetic field components and the magnitude are reduced from more than a thousand nT to less than 20 nT. This suggests that CSSRM provides an effective way to improve the performance of magnetic interferential field compensation.
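While CSSRM itself estimates a full set of interference parameters, the simplest ingredient of such compensation — removing a constant hard-iron offset so that the compensated field magnitude becomes constant — can be sketched as a linear least-squares sphere fit. The formulation below is a generic textbook approach with synthetic data, not the paper's CSSRM:

```python
import math

def solve_linear(A, y):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def hard_iron_offset(samples):
    """Least-squares sphere fit: measurements m = B + b with |B| constant give the
    linear model 2 m.b + (r^2 - |b|^2) = |m|^2 in the unknowns (b, c)."""
    A, y = [], []
    for (mx, my, mz) in samples:
        A.append([2 * mx, 2 * my, 2 * mz, 1.0])
        y.append(mx * mx + my * my + mz * mz)
    # Normal equations (adequate for a well-conditioned toy problem).
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(4)]
           for i in range(4)]
    Aty = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(4)]
    bx, by, bz, _ = solve_linear(AtA, Aty)
    return bx, by, bz

# Synthetic data: a constant 50 uT field seen at many attitudes, offset by b_true.
b_true = (30.0, -10.0, 5.0)
samples = []
for i in range(40):
    th, ph = 0.3 * i, 0.7 * i
    B = (50 * math.sin(th) * math.cos(ph),
         50 * math.sin(th) * math.sin(ph),
         50 * math.cos(th))
    samples.append((B[0] + b_true[0], B[1] + b_true[1], B[2] + b_true[2]))
b_est = hard_iron_offset(samples)
```

Soft-iron (scale and cross-axis) terms, which CSSRM also handles, would turn the sphere into an ellipsoid and enlarge the linear system accordingly.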
Method of using triaxial magnetic fields for making particle structures
Martin, James E.; Anderson, Robert A.; Williamson, Rodney L.
2005-01-18
A method of producing three-dimensional particle structures with enhanced magnetic susceptibility in three dimensions by applying a triaxial energetic field to a magnetic particle suspension and subsequently stabilizing said particle structure. Combinations of direct current and alternating current fields in three dimensions produce particle gel structures, honeycomb structures, and foam-like structures.
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios due to their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
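The threshold step that clusters specular pixels into “unsaturated” and “saturated” categories can be sketched as a simple per-pixel intensity test. The thresholds and the max-channel criterion below are illustrative assumptions, not the paper's exact strategy:

```python
def classify_specular(pixels, specular_thresh=200, saturation_thresh=250):
    """Split 8-bit RGB pixels into diffuse / unsaturated-specular / saturated
    groups by thresholding the brightest channel (thresholds are illustrative)."""
    diffuse, unsaturated, saturated = [], [], []
    for idx, (r, g, b) in enumerate(pixels):
        peak = max(r, g, b)
        if peak >= saturation_thresh:
            saturated.append(idx)    # clipped: color information is lost
        elif peak >= specular_thresh:
            unsaturated.append(idx)  # bright highlight, but channels not clipped
        else:
            diffuse.append(idx)
    return diffuse, unsaturated, saturated

d, u, s = classify_specular([(120, 90, 80), (210, 205, 190), (255, 255, 250)])
```

The two specular groups then receive different treatment: multi-view color variance analysis can recover unsaturated pixels, while saturated ones need local color refinement from their neighborhood.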
FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - INNOVATIVE TECHNOLOGY REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...
New Method for Solving Inductive Electric Fields in the Ionosphere
NASA Astrophysics Data System (ADS)
Vanhamäki, H.
2005-12-01
We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large-scale current systems are generally quite slow, on timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field together with the Hall and Pedersen conductances. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing the curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)
2011-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
NASA Technical Reports Server (NTRS)
Baxes, Gregory A. (Inventor)
2010-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
Graphical methods for quantifying macromolecules through bright field imaging.
Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D; Parvin, Bahram
2009-04-15
Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixation, ambiguities introduced by color composition and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images into gray scale and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph cut algorithm. In rare cases where the nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, the proposed method (i) improves color decomposition, (ii) has better noise immunity, (iii) is more invariant to initial conditions and (iv) has superior computing performance.
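A common way to realize the color-decomposition idea above is Beer-Lambert optical-density unmixing: convert RGB to optical density and solve a small linear system against stain signature vectors. The sketch below uses that generic approach with illustrative stain vectors; it is not the paper's graph-cut-optimized model:

```python
import math

def optical_density(rgb, background=255.0):
    """Beer-Lambert conversion, channel-wise: OD = -log10(I / I0), with I0 the
    white background level (pixel values clamped away from zero)."""
    return [-math.log10(max(c, 1.0) / background) for c in rgb]

def unmix(od, stain_matrix):
    """Solve stain_matrix^T . c = od for per-stain concentrations c via Cramer's
    rule (3x3). Rows of stain_matrix are the OD signature vectors of the stains."""
    m = [[stain_matrix[j][i] for j in range(3)] for i in range(3)]  # transpose

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    D = det3(m)
    c = []
    for k in range(3):
        mk = [row[:] for row in m]
        for i in range(3):
            mk[i][k] = od[i]   # replace column k with the right-hand side
        c.append(det3(mk) / D)
    return c

# Illustrative stain OD vectors (rows): hematoxylin-like, DAB-like, residual.
stains = [[0.65, 0.70, 0.29], [0.27, 0.57, 0.78], [0.71, 0.42, 0.56]]
od = optical_density((120, 80, 150))
conc = unmix(od, stains)
```

Per-pixel concentrations obtained this way can then be aggregated over each segmented nuclear region to score staining cell by cell.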
Use of boundary element methods in field emission computations
Hartman, R.L.; Mackie, W.A.; Davis, P.R.
1994-03-01
The boundary element method is well suited to deal with some potential field problems encountered in the context of field emission. A boundary element method is presented in the specific case of three-dimensional problems with azimuthal symmetry. As a check, computed results are displayed for well-known theoretical examples. The code is then employed to calculate current from a field emission tip and from the same tip with a protrusion. Finally, an extension of the boundary element code is employed to calculate space-charge effects on emitted current.
New light field camera based on physical based rendering tracing
NASA Astrophysics Data System (ADS)
Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung
2014-03-01
Even though light field technology was first invented more than 50 years ago, it did not gain popularity because of the limitations imposed by the computing technology of the time. With the rapid advancement of computer technology over the last decade, that limitation has been lifted and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of the traditional optical simulation approach to studying light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distributions and typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided a way to link a virtual scene with real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. Detailed operational constraints, performance metrics, computational resources needed, etc. associated with this newly developed light field camera technique are presented in detail.
Characterizing ice crystal growth behavior under electric field using phase field method.
He, Zhi Zhu; Liu, Jing
2009-07-01
In this article, the microscale ice crystal growth behavior under an electrostatic field is investigated via a phase field method, which also incorporates the effects of anisotropy and thermal noise. The competitive growth of multiple ice nuclei disclosed in existing experiments is thus successfully predicted. The present approach offers a highly efficient theoretical tool for probing the freeze-injury mechanisms of biological material due to ice formation during cryosurgery or cryopreservation when an external electric field is involved.
FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT
The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...
Field tests of carbon monitoring methods in forestry projects
1999-07-01
In response to the emerging scientific consensus on the facts of global climate change, the international Joint Implementation (JI) program provided a pilot phase in which utilities and other industries could finance, among other activities, international efforts to sequester carbon dioxide, a major greenhouse gas. To make JI and its successor mechanisms workable, however, cost-effective methods are needed for monitoring progress in the reduction of greenhouse gas emissions. The papers in this volume describe field test experiences with methods for measuring carbon storage by three types of land use: natural forest, plantation forest, and agroforestry. Each test, in a slightly different land-use situation, contributes to the knowledge of carbon-monitoring methods as experienced in the field. The field tests of the agroforestry guidelines in Guatemala and the Philippines, for example, suggested adaptations in terms of plot size and method of delineating the total area for sampling.
Geochemical field method for determination of nickel in plants
Reichen, L.E.
1951-01-01
The use of biogeochemical data in prospecting for nickel emphasizes the need for a simple, moderately accurate field method for the determination of nickel in plants. In order to follow leads provided by plants of unusual nickel content without loss of time, the plants should be analyzed and the results given to the field geologist promptly. The method reported in this paper was developed to meet this need. Speed is gained by elimination of the customary drying and controlled ashing; the fresh vegetation is ashed in an open dish over a gasoline stove. The ash is put into solution with hydrochloric acid and the solution buffered. A chromograph is used to make a confined spot with an aliquot of the ash solution on dimethylglyoxime reagent paper. As little as 0.025% nickel in plant ash can be determined. With a simple modification, 0.003% can be detected. Data are given comparing the results with those obtained by an accepted laboratory procedure. Results by the field method are within 30% of the laboratory values. The field method for nickel in plants meets the requirements of biogeochemical prospecting with respect to accuracy, simplicity, speed, and ease of performance in the field. With experience, an analyst can make 30 determinations in an 8-hour work day in the field.
1973-08-01
placed on the development of field test kits based on two improved colorimetric methods involving the use of methylene blue and Azure A. The...simplified and improved Methylene Blue Method and Azure A Method require only 5 or 6 ml of aqueous reagent and 25 ml of chloroform for analyzing one sample
Stevens, Fred J.
1992-01-01
A novel method of electric field flow fractionation for separating solute molecules from a carrier solution is disclosed. The method of the invention utilizes an electric field that is periodically reversed in polarity, in a time-dependent, wave-like manner. The parameters of the waveform, including amplitude, frequency and wave shape may be varied to optimize separation of solute species. The waveform may further include discontinuities to enhance separation.
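As a rough illustration of such a periodically polarity-reversed drive, the sketch below generates square or sinusoidal reversals with tunable amplitude and frequency. The function name and default parameters are assumptions for illustration, not the patented waveform.

```python
# Illustrative sketch (not the patented implementation): one way to generate
# a polarity-reversing electric field waveform whose amplitude, frequency
# and shape could be tuned to optimize separation of solute species.
import math

def field_waveform(t, amplitude=1.0, frequency=2.0, shape="square"):
    """Field value at time t for a periodically polarity-reversed drive."""
    phase = math.sin(2.0 * math.pi * frequency * t)
    if shape == "square":
        return amplitude * (1.0 if phase >= 0 else -1.0)
    return amplitude * phase  # sinusoidal reversal

samples = [field_waveform(t / 100.0) for t in range(100)]
# The polarity flips each half-period, so the time-averaged field is near
# zero, while species with different mobilities drift apart in each half-cycle.
```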
The emergence of mixing methods in the field of evaluation.
Greene, Jennifer C
2015-06-01
When and how did the contemporary practice of mixing methods in social inquiry get started? What events transpired to catalyze the explosive conceptual development and practical adoption of mixed methods social inquiry over recent decades? How has this development progressed? What "next steps" would be most constructive? These questions are engaged in this personally narrative account of the beginnings of the contemporary mixed methods phenomenon in the field of evaluation from the perspective of a methodologist who was there.
Li, Yunhan; Sun, Yonghai; Jaffray, David; Yeow, John T W
2017-02-17
Field emission (FE) uniformity and the mechanism of emitter failure of freestanding carbon nanotube (CNT) arrays have not been well studied due to the difficulty of observing and quantifying the FE performance of each emitter in CNT arrays. Herein a field emission microscopy (FEM) method based on poly(methyl methacrylate) (PMMA) thin film is proposed to study the FE uniformity and CNT emitter failure of freestanding CNT arrays. FE uniformity of freestanding CNT arrays and the different levels of FE current contributed by each emitter in the arrays are recorded and visualized. FEM patterns on the PMMA thin film capture the details of the CNT emitter tip shape and whether multiple CNT emitters occur at an emission site. Real-time observation of FE performance and of the CNT emitter failure process in freestanding CNT arrays is successfully achieved using a microscopic camera. High emission currents through CNT emitters cause Joule heating and light emission followed by an explosion of the CNTs. The proposed approach is capable of resolving the major challenge of building the relationship between FE performance and CNT morphologies, which can significantly facilitate the study of FE non-uniformity and the emitter failure mechanism and the development of stable and reliable FE devices in practical applications.
NASA Astrophysics Data System (ADS)
Li, Yunhan; Sun, Yonghai; Jaffray, David A.; Yeow, John T. W.
2017-04-01
Field emission (FE) uniformity and the mechanism of emitter failure of freestanding carbon nanotube (CNT) arrays have not been well studied due to the difficulty of observing and quantifying FE performance of each emitter in CNT arrays. Herein a field emission microscopy (FEM) method based on poly(methyl methacrylate) (PMMA) thin film is proposed to study the FE uniformity and CNT emitter failure of freestanding CNT arrays. FE uniformity of freestanding CNT arrays and different levels of FE current contributions from each emitter in the arrays are recorded and visualized. FEM patterns on the PMMA thin film contain the details of the CNT emitter tip shape and whether multiple CNT emitters occur at an emission site. Real-time observation of FE performance and the CNT emitter failure process in freestanding CNT arrays is successfully achieved using a microscopic camera. High emission currents through CNT emitters cause Joule heating and light emission followed by an explosion of the CNTs. The proposed approach is capable of resolving the major challenge of building the relationship between FE performance and CNT morphologies, which can significantly facilitate the study of FE non-uniformity, the emitter failure mechanism and the development of stable and reliable FE devices in practical applications.
Non-perturbative methods in relativistic field theory
Franz Gross
2013-03-01
This talk reviews relativistic methods used to compute bound and low energy scattering states in field theory, with emphasis on approaches that John Tjon and I discussed (and argued about) together. I compare the Bethe–Salpeter and Covariant Spectator equations, show some applications, and then report on some of the things we have learned from the beautiful Feynman–Schwinger technique for calculating the exact sum of all ladder and crossed ladder diagrams in field theory.
Method of determining interwell oil field fluid saturation distribution
Donaldson, Erle C.; Sutterfield, F. Dexter
1981-01-01
A method of determining the oil and brine saturation distribution in an oil field by taking electrical current and potential measurements among a plurality of open-hole wells geometrically distributed throughout the oil field. Poisson's equation is utilized to develop fluid saturation distributions from the electrical current and potential measurements. Both signal-generating equipment and chemical means are used to develop current flow among the several open-hole wells.
Kazachenko, Maria D.; Fisher, George H.; Welsch, Brian T.
2014-11-01
Photospheric electric fields, estimated from sequences of vector magnetic field and Doppler measurements, can be used to estimate the flux of magnetic energy (the Poynting flux) into the corona and as time-dependent boundary conditions for dynamic models of the coronal magnetic field. We have modified and extended an existing method to estimate photospheric electric fields that combines a poloidal-toroidal decomposition (PTD) of the evolving magnetic field vector with Doppler and horizontal plasma velocities. Our current, more comprehensive method, which we dub the 'PTD-Doppler-FLCT Ideal' (PDFI) technique, can now incorporate Doppler velocities from non-normal viewing angles. It uses the FISHPACK software package to solve several two-dimensional Poisson equations, a faster and more robust approach than our previous implementations. Here, we describe systematic, quantitative tests of the accuracy and robustness of the PDFI technique using synthetic data from anelastic MHD (ANMHD) simulations, which have been used in similar tests in the past. We find that the PDFI method has less than 1% error in the total Poynting flux and a 10% error in the helicity flux rate at a normal viewing angle (θ = 0) and less than 25% and 10% errors, respectively, at large viewing angles (θ < 60°). We compare our results with other inversion methods at zero viewing angle and find that our method's estimates of the fluxes of magnetic energy and helicity are comparable to or more accurate than other methods. We also discuss the limitations of the PDFI method and its uncertainties.
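The core numerical kernel of the PDFI technique is the solution of several two-dimensional Poisson equations, which the authors delegate to FISHPACK. A minimal stand-in, assuming a uniform square grid and zero Dirichlet boundaries, is plain Jacobi iteration; this is far slower than FISHPACK's solvers, but it makes explicit the equation being solved.

```python
# Minimal sketch of a 2-D Poisson solve, laplacian(u) = rhs, via Jacobi
# iteration on a uniform grid with u = 0 on the boundary. Checked against
# the known solution u(x, y) = sin(pi x) sin(pi y), for which
# laplacian(u) = -2 pi^2 u.
import math

def solve_poisson(rhs, h=1.0, iterations=2000):
    """Solve laplacian(u) = rhs on a square grid, u = 0 on the boundary."""
    n = len(rhs)
    u = [[0.0] * n for _ in range(n)]
    for _ in range(iterations):
        new = [[0.0] * n for _ in range(n)]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # 5-point stencil: u_ij = (sum of neighbors - h^2 rhs) / 4
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1]
                                    - h * h * rhs[i][j])
        u = new
    return u

n = 17
h = 1.0 / (n - 1)
rhs = [[-2.0 * math.pi ** 2
        * math.sin(math.pi * i * h) * math.sin(math.pi * j * h)
        for j in range(n)] for i in range(n)]
u = solve_poisson(rhs, h=h, iterations=2000)
center_error = abs(u[n // 2][n // 2] - 1.0)  # exact solution is 1 at center
```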
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.; Reiners, S. J.
1975-01-01
A computerized procedure for predicting sonic boom from experimental near-field overpressure data has been developed. The procedure extrapolates near-field pressure signatures for a specified flight condition to the ground by the Thomas method. Near-field pressure signatures are interpolated from a data base of experimental pressure signatures. The program is an independently operated ODIN (Optimal Design Integration) program which obtains flight path information from other ODIN programs or from input.
Method of improving field emission characteristics of diamond thin films
Krauss, Alan R.; Gruen, Dieter M.
1999-01-01
A method of preparing diamond thin films with improved field emission properties. The method includes preparing a diamond thin film on a substrate, such as Mo, W, Si and Ni. An atmosphere of hydrogen (molecular or atomic) can be provided above the already deposited film to form absorbed hydrogen to reduce the work function and enhance field emission properties of the diamond film. In addition, hydrogen can be absorbed on intergranular surfaces to enhance electrical conductivity of the diamond film. The treated diamond film can be part of a microtip array in a flat panel display.
Method of improving field emission characteristics of diamond thin films
Krauss, A.R.; Gruen, D.M.
1999-05-11
A method of preparing diamond thin films with improved field emission properties is disclosed. The method includes preparing a diamond thin film on a substrate, such as Mo, W, Si and Ni. An atmosphere of hydrogen (molecular or atomic) can be provided above the already deposited film to form absorbed hydrogen to reduce the work function and enhance field emission properties of the diamond film. In addition, hydrogen can be absorbed on intergranular surfaces to enhance electrical conductivity of the diamond film. The treated diamond film can be part of a microtip array in a flat panel display.
Multigrid Methods for the Computation of Propagators in Gauge Fields
NASA Astrophysics Data System (ADS)
Kalkreuter, Thomas
Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are "good", using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in the case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18^4 conjugate gradient is superior.
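The multigrid idea, stripped of gauge fields, can be sketched as a two-grid cycle for the 1-D Laplace equation: smooth on the fine grid, restrict the residual to a coarser grid, solve there, and interpolate the correction back. In the work above, the simple restriction and prolongation below would be replaced by the gauge-covariant, ground-state-projection kernels C.

```python
# Two-grid sketch for -u'' = f with u(0) = u(1) = 0 (no gauge field):
# the simplest instance of the multigrid idea described in the abstract.
import math

def smooth(u, f, h, sweeps=3):
    """A few Gauss-Seidel sweeps on -u'' = f with fixed zero boundaries."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] + (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction to a grid with half as many intervals."""
    return [0.0] + [0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
                    for i in range(1, len(r) // 2)] + [0.0]

def prolong(e, n_fine):
    """Linear interpolation of the coarse correction back to the fine grid."""
    out = [0.0] * n_fine
    for i in range(1, len(e) - 1):
        out[2 * i] = e[i]
    for i in range(1, n_fine - 1, 2):
        out[i] = 0.5 * (out[i - 1] + out[i + 1])
    return out

def two_grid(u, f, h):
    u = smooth(u, f, h)                                  # pre-smoothing
    r = restrict(residual(u, f, h))                      # coarse residual
    e = smooth([0.0] * len(r), r, 2.0 * h, sweeps=50)    # near-exact coarse solve
    u = [a + b for a, b in zip(u, prolong(e, len(u)))]   # coarse-grid correction
    return smooth(u, f, h)                               # post-smoothing

# Check against the known solution u(x) = sin(pi x), where f = pi^2 sin(pi x).
n = 17
h = 1.0 / (n - 1)
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(n)]
u = [0.0] * n
for _ in range(10):
    u = two_grid(u, f, h)
```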
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.
2013-08-01
A new method for estimating the wall diffusion time of non-axisymmetric fields is developed. The method, based on rotating external fields and on measurement of the wall frequency response, is tested in EXTRAP T2R. The method allows the experimental estimate of the wall diffusion time for each Fourier harmonic and of the toroidal asymmetries in wall diffusion. It intrinsically accounts for the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with partial wall coverage and active coils of large toroidal extent. The comparison with the full-coverage results shows good agreement if the effects of the relevant sidebands are considered.
Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kipp, C. R.; Bernhard, R. J.
1985-01-01
A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2002-01-01
A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.
Field Science Ethnography: Methods For Systematic Observation on an Expedition
NASA Technical Reports Server (NTRS)
Clancey, William J.; Clancy, Daniel (Technical Monitor)
2001-01-01
The Haughton-Mars expedition is a multidisciplinary project, exploring an impact crater in an extreme environment to determine how people might live and work on Mars. The expedition seeks to understand and field test Mars facilities, crew roles, operations, and computer tools. I combine an ethnographic approach to establish a baseline understanding of how scientists prefer to live and work when relatively unencumbered, with a participatory design approach of experimenting with procedures and tools in the context of use. This paper focuses on field methods for systematically recording and analyzing the expedition's activities. Systematic photography and time-lapse video are combined with concept mapping to organize and present information. This hybrid approach is generally applicable to the study of modern field expeditions having a dozen or more multidisciplinary participants, spread over a large terrain during multiple field seasons.
Hyperspectral Imaging and Related Field Methods: Building the Science
NASA Technical Reports Server (NTRS)
Goetz, Alexander F. H.; Steffen, Konrad; Wessman, Carol
1999-01-01
The proposal requested funds for the computing power to bring hyperspectral image processing into undergraduate and graduate remote sensing courses. This upgrade made it possible to handle more students in these oversubscribed courses and to enhance CSES' summer short course entitled "Hyperspectral Imaging and Data Analysis" provided for government, industry, university and military. Funds were also requested to build field measurement capabilities through the purchase of spectroradiometers, canopy radiation sensors and a differential GPS system. These instruments provided systematic and complete sets of field data for the analysis of hyperspectral data with the appropriate radiometric and wavelength calibration as well as atmospheric data needed for application of radiative transfer models. The proposed field equipment made it possible to team-teach a new field methods course, unique in the country, that took advantage of the expertise of the investigators rostered in three different departments, Geology, Geography and Biology.
Background field method and the cohomology of renormalization
NASA Astrophysics Data System (ADS)
Anselmi, Damiano
2016-03-01
Using the background field method and the Batalin-Vilkovisky formalism, we prove a key theorem on the cohomology of perturbatively local functionals of arbitrary ghost numbers in renormalizable and nonrenormalizable quantum field theories whose gauge symmetries are general covariance, local Lorentz symmetry, non-Abelian Yang-Mills symmetries and Abelian gauge symmetries. Interpolating between the background field approach and the usual, nonbackground approach by means of a canonical transformation, we take advantage of the properties of both approaches and prove that a closed functional is the sum of an exact functional plus a functional that depends only on the physical fields and possibly the ghosts. The assumptions of the theorem are the mathematical versions of general properties that characterize the counterterms and the local contributions to the potential anomalies. This makes the outcome a theorem on the cohomology of renormalization, rather than the whole local cohomology. The result supersedes numerous involved arguments that are available in the literature.
Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.
Xiao Yang; Jianjiang Feng; Jie Zhou
2014-05-01
The dictionary-based orientation field estimation approach has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Realizing that ridge orientations at different locations of fingerprints have different characteristics, we propose a localized-dictionaries-based orientation field estimation algorithm, in which the noisy orientation patch at a location, output by a local estimation approach, is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint needs to be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about the fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method markedly outperforms previous ones.
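The Hough-style pose accumulation can be illustrated with a toy translation-only example: each observed patch votes for every candidate reference point consistent with its known offset, and the accumulator peak gives the pose. The offsets and positions below are hypothetical; the paper's method votes over position and rotation using orientation patches.

```python
# Toy sketch of Hough-style voting for pose estimation. Each observed patch
# votes for reference-point positions consistent with its own known offset;
# the accumulator peak is the estimated pose. Illustrative data only.
from collections import Counter

def estimate_pose(patch_offsets, observed_positions):
    """Each observed patch votes for reference points at position - offset."""
    votes = Counter()
    for pos in observed_positions:
        for off in patch_offsets:
            votes[(pos[0] - off[0], pos[1] - off[1])] += 1
    return votes.most_common(1)[0][0]

# Patches known to lie at these offsets from the fingerprint reference point:
offsets = [(0, 0), (1, 0), (0, 1), (2, 2)]
# The same patches observed in a latent print whose true reference is (5, 7):
observed = [(5, 7), (6, 7), (5, 8), (7, 9)]
pose = estimate_pose(offsets, observed)  # -> (5, 7)
```

The correct pose collects one vote from every patch, while incorrect candidates collect only accidental coincidences, which is what makes the accumulation robust to noisy individual patches.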
Field and laboratory methods in human milk research.
Miller, Elizabeth M; Aiello, Marco O; Fujita, Masako; Hinde, Katie; Milligan, Lauren; Quinn, E A
2013-01-01
Human milk is a complex and variable fluid of increasing interest to human biologists who study nutrition and health. The collection and analysis of human milk poses many practical and ethical challenges to field workers, who must balance both appropriate methodology with the needs of participating mothers and infants and logistical challenges to collection and analysis. In this review, we address various collection methods, volume measurements, and ethical considerations and make recommendations for field researchers. We also review frequently used methods for the analysis of fat, protein, sugars/lactose, and specific biomarkers in human milk. Finally, we address new technologies in human milk research, the MIRIS Human Milk Analyzer and dried milk spots, which will improve the ability of human biologists and anthropologists to study human milk in field settings.
Methane generation in tropical landfills: simplified methods and field results.
Machado, Sandro L; Carvalho, Miriam F; Gourc, Jean-Pierre; Vilar, Orencio M; do Nascimento, Julio C F
2009-01-01
This paper deals with the use of simplified methods to predict methane generation in tropical landfills. Methane recovery data obtained on site as part of a research program being carried out at the Metropolitan Landfill, Salvador, Brazil, are analyzed and used to obtain field methane generation over time. Laboratory data from MSW samples of different ages are presented and discussed, and simplified procedures to estimate the methane generation potential, Lo, and the constant related to the biodegradation rate, k, are applied. The first-order decay method is used to fit field and laboratory results. It is demonstrated that, despite the assumptions and the simplicity of the adopted laboratory procedures, the values of Lo and k obtained are very close to those measured in the field, thus making this kind of analysis very attractive for first-approach purposes.
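The first-order decay model referred to above treats the methane yet to be generated by waste of age t as decaying exponentially, so the generation rate per unit mass is q(t) = Lo·k·exp(-k·t) and the cumulative yield approaches Lo. The parameter values in the sketch below are illustrative, not the paper's fitted values.

```python
# First-order decay model for landfill methane generation (per unit mass
# of waste). Lo is the methane generation potential; k the decay constant.
# The default values are made-up examples, not the paper's results.
import math

def methane_rate(t_years, lo=70.0, k=0.2):
    """Methane generation rate (e.g., m^3 CH4 / Mg waste / yr) at waste age t."""
    return lo * k * math.exp(-k * t_years)

def cumulative_methane(t_years, lo=70.0, k=0.2):
    """Methane generated up to age t; tends to Lo as t grows large."""
    return lo * (1.0 - math.exp(-k * t_years))
```

Fitting Lo and k to measured recovery data, as done in the paper, amounts to choosing the pair that makes methane_rate best match the field generation curve.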
Longitudinal Field Research Methods for Studying Processes of Organizational Change.
ERIC Educational Resources Information Center
Van de Ven, Andrew H.; Huber, George P.
1990-01-01
This and the next issue of "Organization Science" contain eight papers that deal with the process of organizational change. The five papers in this issue feature the theory of method and practice of researchers engaged in longitudinal field studies aimed at understanding processes of organizational change. (MLF)
Unsaturated soil hydraulic conductivity: The field infiltrometer method
Technology Transfer Automated Retrieval System (TEKTRAN)
Theory: Field methods to measure the unsaturated soil hydraulic conductivity assume the presence of steady-state water flow. Soil infiltrometers are designed to apply water onto the soil surface at constant negative pressure. Water is applied to the soil from the Mariotte device through a porous membrane...
Work function measurements by the field emission retarding potential method
NASA Technical Reports Server (NTRS)
Swanson, L. W.; Strayer, R. W.; Mackie, W. A.
1971-01-01
Using the field emission retarding potential method true work functions have been measured for the following monocrystalline substrates: W(110), W(111), W(100), Nb(100), Ni(100), Cu(100), Ir(110) and Ir(111). The electron elastic and inelastic reflection coefficients from several of these surfaces have also been examined near zero primary beam energy.
A field theoretical approach to the quasi-continuum method
NASA Astrophysics Data System (ADS)
Iyer, Mrinal; Gavini, Vikram
2011-08-01
The quasi-continuum method has provided many insights into the behavior of lattice defects in the past decade. However, recent numerical analysis suggests that the approximations introduced in various formulations of the quasi-continuum method lead to inconsistencies—namely, appearance of ghost forces or residual forces, non-conservative nature of approximate forces, etc.—which affect the numerical accuracy and stability of the method. In this work, we identify the source of these errors to be the incompatibility of using quadrature rules, which is a local notion, on a non-local representation of energy. We eliminate these errors by first reformulating the extended interatomic interactions into a local variational problem that describes the energy of a system via potential fields. We subsequently introduce the quasi-continuum reduction of these potential fields using an adaptive finite-element discretization of the formulation. We demonstrate that the present formulation resolves the inconsistencies present in previous formulations of the quasi-continuum method, and show using numerical examples the remarkable improvement in the accuracy of solutions. Further, this field theoretic formulation of quasi-continuum method makes mathematical analysis of the method more amenable using functional analysis and homogenization theories.
Field Deployable Method for Arsenic Speciation in Water
Voice, Thomas C.; Flores del Pino, Lisveth V.; Havezov, Ivan; Long, David T.
2010-01-01
Contamination of drinking water supplies by arsenic is a world-wide problem. Total arsenic measurements are commonly used to investigate and regulate arsenic in water, but it is well understood that arsenic occurs in several chemical forms, and these exhibit different toxicities. It is problematic to use laboratory-based speciation techniques to assess exposure as it has been suggested that the distribution of species is not stable during transport in some types of samples. A method was developed in this study for the on-site speciation of the most toxic dissolved arsenic species: As (III), As (V), monomethylarsonic acid (MMA) and dimethylarsenic acid (DMA). Development criteria included ease of use under field conditions, applicable at levels of concern for drinking water, and analytical performance. The approach is based on selective retention of arsenic species on specific ion-exchange chromatography cartridges followed by selective elution and quantification using graphite furnace atomic absorption spectroscopy. Water samples can be delivered to a set of three cartridges using either syringes or peristaltic pumps. Species distribution is stable at this point, and the cartridges can be transported to the laboratory for elution and quantitative analysis. A set of ten replicate spiked samples of each compound, having concentrations between 1 and 60 µg/L, were analyzed. Arsenic recoveries ranged from 78–112 % and relative standard deviations were generally below 10%. Resolution between species was shown to be outstanding, with the only limitation being that the capacity for As (V) was limited to approximately 50 µg/L. This could be easily remedied by changes in either cartridge design, or the extraction procedure. Recoveries were similar for two spiked hard groundwater samples indicating that dissolved minerals are not likely to be problematic. These results suggest that this methodology can be used for analysis of the four primary arsenic species of concern in
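The quoted figures of merit, percent recovery and relative standard deviation (RSD), can be reproduced from replicate measurements as follows. The replicate values below are invented for illustration, not the study's data.

```python
# Sketch of the recovery and precision statistics quoted in the abstract,
# computed for a set of replicate spiked samples (made-up example values).
from statistics import mean, stdev

def percent_recovery(measured, spiked):
    """Recovery (%): measured concentration as a fraction of the spike."""
    return 100.0 * measured / spiked

def relative_std_dev(values):
    """RSD (%): sample standard deviation as a percentage of the mean."""
    return 100.0 * stdev(values) / mean(values)

replicates = [9.1, 9.8, 10.4, 9.5, 10.1]  # ug/L measured in 10 ug/L spikes
recoveries = [percent_recovery(m, 10.0) for m in replicates]
rsd = relative_std_dev(replicates)
```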
Field Deployable Method for Arsenic Speciation in Water.
Voice, Thomas C; Flores Del Pino, Lisveth V; Havezov, Ivan; Long, David T
2011-01-01
Contamination of drinking water supplies by arsenic is a world-wide problem. Total arsenic measurements are commonly used to investigate and regulate arsenic in water, but it is well understood that arsenic occurs in several chemical forms, and these exhibit different toxicities. It is problematic to use laboratory-based speciation techniques to assess exposure, as it has been suggested that the distribution of species is not stable during transport in some types of samples. A method was developed in this study for the on-site speciation of the most toxic dissolved arsenic species: As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA). Development criteria included ease of use under field conditions, applicability at levels of concern for drinking water, and analytical performance. The approach is based on selective retention of arsenic species on specific ion-exchange chromatography cartridges, followed by selective elution and quantification using graphite furnace atomic absorption spectroscopy. Water samples can be delivered to a set of three cartridges using either syringes or peristaltic pumps. Species distribution is stable at this point, and the cartridges can be transported to the laboratory for elution and quantitative analysis. A set of ten replicate spiked samples of each compound, at concentrations between 1 and 60 µg/L, was analyzed. Arsenic recoveries ranged from 78 to 112% and relative standard deviations were generally below 10%. Resolution between species was shown to be outstanding, the only limitation being that the capacity for As(V) was limited to approximately 50 µg/L; this could easily be remedied by changes in either the cartridge design or the extraction procedure. Recoveries were similar for two spiked hard groundwater samples, indicating that dissolved minerals are not likely to be problematic. These results suggest that this methodology can be used for analysis of the four primary arsenic species of concern in drinking water.
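The recovery and relative standard deviation figures quoted above are straightforward to reproduce. The sketch below, using entirely hypothetical replicate data (not values from the study), shows how percent recovery and RSD would be computed for a set of spiked samples:

```python
import statistics

def recovery_and_rsd(spiked_ug_L, measured_ug_L):
    """Percent recovery per replicate and relative standard deviation (%)."""
    recoveries = [100.0 * m / spiked_ug_L for m in measured_ug_L]
    mean = statistics.mean(measured_ug_L)
    rsd = 100.0 * statistics.stdev(measured_ug_L) / mean
    return recoveries, rsd

# hypothetical ten replicates of a 10 µg/L As(III) spike
replicates = [9.8, 10.4, 9.5, 10.1, 9.9, 10.6, 9.7, 10.2, 9.6, 10.0]
recoveries, rsd = recovery_and_rsd(10.0, replicates)
```

With data of this quality, every recovery falls inside the 78–112% range reported, and the RSD is well below 10%.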
Field-based physiological testing of wheelchair athletes.
Goosey-Tolfrey, Victoria L; Leicht, Christof A
2013-02-01
The volume of literature on field-based physiological testing in wheelchair sports, such as basketball, rugby and tennis, is considerably smaller than that available for individual and team athletes in able-bodied (AB) sports. As in the AB literature, it is recognized that performance in wheelchair sports relies not only on fitness, but also on sport-specific skills, experience and technical proficiency. However, in contrast to AB sports, two major components contribute towards 'wheeled sports' performance: the athlete and the wheelchair. It is the interaction of these two that enables wheelchair propulsion and the sporting movements required within a given sport. Like any other athletes, participants in wheelchair sports look for efficient ways to train and/or analyse their technique and fitness to improve their performance. Consequently, laboratory- and/or field-based physiological monitoring tools, used at regular intervals at key time points throughout the year, must be considered to help with training evaluation. The present review examines methods available in the literature to assess wheelchair sports fitness in a field-based environment, with special attention to outcome variables, validity and reliability issues, and non-physiological influences on performance. It also lays out the context of field-based testing by providing details about the Paralympic court sports and the impacts of a disability on sporting performance. Due to the limited availability of specialized equipment for testing wheelchair-dependent participants in the laboratory, field-based testing has become the preferred option of team coaches of wheelchair athletes. An obvious advantage of field-based testing is that large groups of athletes can be tested in less time. Furthermore, athletes are tested in their natural environment (using their normal sports wheelchair set-up and floor surface), potentially making the results of such testing
Lagrangian based methods for coherent structure detection
Allshouse, Michael R.; Peacock, Thomas
2015-09-15
There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate their utility by applying them to the same analytical model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.
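The canonical double-gyre flow mentioned above is a standard analytical test case for these Lagrangian methods. The following sketch, a minimal illustration rather than any of the four reviewed approaches, integrates a single tracer trajectory through the time-dependent double-gyre velocity field (the parameter values A = 0.1, ε = 0.25, ω = 2π/10 are common choices, assumed here):

```python
import math

# common double-gyre parameters (assumed, not from the review)
A, EPS, OMEGA = 0.1, 0.25, 2 * math.pi / 10

def velocity(x, y, t):
    """Velocity from the streamfunction psi = A sin(pi f(x,t)) sin(pi y)."""
    a = EPS * math.sin(OMEGA * t)
    b = 1 - 2 * EPS * math.sin(OMEGA * t)
    f = a * x * x + b * x
    dfdx = 2 * a * x + b
    u = -math.pi * A * math.sin(math.pi * f) * math.cos(math.pi * y)
    v = math.pi * A * math.cos(math.pi * f) * math.sin(math.pi * y) * dfdx
    return u, v

def advect(x, y, t0, t1, steps=200):
    """RK4 integration of a tracer through the time-dependent field."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = velocity(x, y, t)
        k2 = velocity(x + h / 2 * k1[0], y + h / 2 * k1[1], t + h / 2)
        k3 = velocity(x + h / 2 * k2[0], y + h / 2 * k2[1], t + h / 2)
        k4 = velocity(x + h * k3[0], y + h * k3[1], t + h)
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return x, y

xf, yf = advect(1.2, 0.5, 0.0, 10.0)
```

All four reviewed methods start from trajectory or velocity data of exactly this kind; coherent-structure detection then post-processes ensembles of such trajectories. The domain boundaries are streamlines, so tracers remain in the [0, 2] x [0, 1] box.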
Interferometric methods for mapping static electric and magnetic fields
NASA Astrophysics Data System (ADS)
Pozzi, Giulio; Beleggia, Marco; Kasama, Takeshi; Dunin-Borkowski, Rafal E.
2014-02-01
The mapping of static electric and magnetic fields using electron probes with a resolution and sensitivity that are sufficient to reveal nanoscale features in materials requires the use of phase-sensitive methods such as the shadow technique, coherent Foucault imaging and the Transport of Intensity Equation. Among these approaches, image-plane off-axis electron holography in the transmission electron microscope has acquired a prominent role thanks to its quantitative capabilities and broad range of applicability. After a brief overview of the main ideas and methods behind field mapping, we focus on theoretical models that form the basis of the quantitative interpretation of electron holographic data. We review the application of electron holography to a variety of samples (including electric fields associated with p-n junctions in semiconductors, quantized magnetic flux in superconductors and magnetization topographies in nanoparticles and other magnetic materials) and electron-optical geometries (including multiple biprism, amplitude and mixed-type set-ups). We conclude by highlighting the emerging perspectives of (i) three-dimensional field mapping using electron holographic tomography and (ii) the model-independent determination of the locations and magnitudes of field sources (electric charges and magnetic dipoles) directly from electron holographic data.
[Sub-field imaging spectrometer design based on Offner structure].
Wu, Cong-Jun; Yan, Chang-Xiang; Liu, Wei; Dai, Hu
2013-08-01
To satisfy the miniaturization, light weight and large-field requirements of imaging spectrometers in space applications, the current optical design of imaging spectrometers with the Offner structure was analyzed, and a simple method to design an imaging spectrometer with a concave grating, based on current approaches, is given. Using the method offered, a sub-field imaging spectrometer was designed with a 400 km altitude, a 0.4-1.0 µm wavelength range, an F-number of 5, a 720 mm focal length and a 4.3 degree total field of view. Optical fiber is used to transfer the image at the telescope's focal plane to three slits arranged in the same plane so as to achieve the sub-field division. A 1024 x 1024 CCD detector with 18 µm x 18 µm pixels receives the image of the three slits after dispersion. With ZEMAX optimization and tolerance analysis, the system satisfies a 5 nm spectral resolution and a 5 m ground resolution, and the MTF is over 0.62 at 28 lp/mm. The field of view of the system is almost 3 times that of similar instruments used in space probes.
Evanescent Field Based Photoacoustics: Optical Property Evaluation at Surfaces
Goldschmidt, Benjamin S.; Rudy, Anna M.; Nowak, Charissa A.; Tsay, Yowting; Whiteside, Paul J. D.; Hunt, Heather K.
2016-01-01
Here, we present a protocol to estimate material and surface optical properties using the photoacoustic effect combined with total internal reflection. Optical property evaluation of thin films and the surfaces of bulk materials is an important step in understanding new optical material systems and their applications. The method presented can estimate thickness and refractive index, and can exploit the absorptive properties of materials for detection. This metrology system uses evanescent field-based photoacoustics (EFPA), a field of research based upon the interaction of an evanescent field with the photoacoustic effect. This interaction and its resulting family of techniques allow optical properties to be probed within a few hundred nanometers of the sample surface. This optical near field allows for highly accurate estimation of material properties on the same scale as the field itself, such as refractive index and film thickness. With the use of EFPA and its sub-techniques, such as total internal reflection photoacoustic spectroscopy (TIRPAS) and optical tunneling photoacoustic spectroscopy (OTPAS), it is possible to evaluate a material at the nanoscale in a consolidated instrument, without the need for many instruments and experiments that may be cost prohibitive. PMID:27500652
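The claim that EFPA probes within a few hundred nanometers of the surface follows from the standard 1/e penetration depth of an evanescent field beyond a totally internally reflecting interface. A minimal sketch of that textbook formula, with illustrative (assumed) refractive indices and angle, not values from the protocol:

```python
import math

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """1/e penetration depth d = lambda / (4*pi*sqrt(n1^2 sin^2(theta) - n2^2)),
    valid only above the critical angle theta_c = asin(n2/n1)."""
    theta = math.radians(theta_deg)
    theta_c = math.asin(n2 / n1)
    if theta <= theta_c:
        raise ValueError("no total internal reflection below the critical angle")
    return wavelength_nm / (
        4 * math.pi * math.sqrt((n1 * math.sin(theta)) ** 2 - n2 ** 2))

# 532 nm light in glass (n ~ 1.52) against water (n ~ 1.33) at 75 degrees
d = penetration_depth(532.0, 1.52, 1.33, 75.0)
```

For these assumed values the depth comes out to roughly a hundred nanometers, consistent with the near-field sensitivity described above.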
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
Nielson, Gregory M.
1997-01-01
This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.
DC-based magnetic field controller
Kotter, Dale K.; Rankin, Richard A.; Morgan, John P.
1994-01-01
A magnetic field controller for laboratory devices and in particular to dc operated magnetic field controllers for mass spectrometers, comprising a dc power supply in combination with improvements to a Hall probe subsystem, display subsystem, preamplifier, field control subsystem, and an output stage.
DC-based magnetic field controller
Kotter, D.K.; Rankin, R.A.; Morgan, J.P.
1994-05-31
A magnetic field controller is described for laboratory devices and in particular to dc operated magnetic field controllers for mass spectrometers, comprising a dc power supply in combination with improvements to a Hall probe subsystem, display subsystem, preamplifier, field control subsystem, and an output stage. 1 fig.
Extending methods: using Bourdieu's field analysis to further investigate taste
NASA Astrophysics Data System (ADS)
Schindel Dimick, Alexandra
2015-06-01
In this commentary on Per Anderhag, Per-Olof Wickman and Karim Hamza's article Signs of taste for science, I consider how their study is situated within the concern for the role of science education in the social and cultural production of inequality. Their article provides a finely detailed methodology for analyzing the constitution of taste within science education classrooms. Nevertheless, because the authors' socially situated methodology draws upon Bourdieu's theories, it seems equally important to extend these methods to consider how and why students make particular distinctions within a relational context—a key aspect of Bourdieu's theory of cultural production. By situating the constitution of taste within Bourdieu's field analysis, researchers can explore the ways in which students' tastes and social positionings are established and transformed through time, space, place, and their ability to navigate the field. I describe the process of field analysis in relation to the authors' paper and suggest that combining the authors' methods with a field analysis can provide a strong methodological and analytical framework in which theory and methods combine to create a detailed understanding of students' interest in relation to their context.
Analysis of Double Ring Resonators using Method of Equating Fields
NASA Astrophysics Data System (ADS)
Althaf, Shahana
Optical ring resonators have the potential to be integral parts of large scale photonic circuits. My thesis theoretically analyzes parallel coupled double ring resonators (DRRs) in detail. The analysis is performed using the method of equating fields (MEF) which provides an in depth understanding about the transmitted and reflected light paths in the structure. Equations for the transmitted and reflected fields are derived; these equations allow for unequal ring lengths and coupling coefficients. Sanity checks including comparison with previously studied structures are performed in the final chapter in order to prove the correctness of the obtained results.
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
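Imposing non-negativity within a linear inverse problem, as the StOMP-based scheme above does, can be illustrated with a much simpler stand-in: a projected Landweber iteration that clips the estimate to non-negative values after each gradient step. The sketch below uses a small hypothetical system, not the authors' StOMP algorithm or the ffCO2 data:

```python
def matvec(A, x):
    """A @ x for a list-of-rows matrix."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def rmatvec(A, r):
    """A^T @ r."""
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(A[0]))]

def projected_landweber(A, y, tau=0.05, iters=2000):
    """Gradient steps on ||y - A x||^2, clipped to x >= 0 after each step."""
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]
        g = rmatvec(A, r)
        x = [max(0.0, xi + tau * gi) for xi, gi in zip(x, g)]  # non-negativity
    return x

# hypothetical underdetermined system with a sparse non-negative truth
A = [[0.9, 0.1, 0.4, 0.2, 0.7, 0.3],
     [0.2, 0.8, 0.1, 0.6, 0.3, 0.5],
     [0.5, 0.3, 0.9, 0.1, 0.2, 0.4],
     [0.1, 0.6, 0.2, 0.8, 0.5, 0.1]]
x_true = [0.0, 2.0, 0.0, 0.0, 1.0, 0.0]
y = matvec(A, x_true)
x_hat = projected_landweber(A, y)
```

The projection keeps the estimate non-negative at every iteration without any log-transform, which is the property the abstract highlights; StOMP adds sparsity-aware support selection on top of this basic idea.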
Fast Field Calibration of MIMU Based on the Powell Algorithm
Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang
2014-01-01
The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The key points of this calibration are that the norm of the accelerometer measurement vector is equal to the gravity magnitude, and the norm of the gyro measurement vector is equal to the rotational velocity input. To resolve the error parameters by judging the convergence of the nonlinear equations, the Powell algorithm is applied by establishing a mathematical error model of the novel calibration. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed calibration method. The proposed calibration method also saves time compared with the traditional calibration method. PMID:25177801
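The core constraint described above (the norm of the corrected accelerometer vector must equal the gravity magnitude) can be written as a cost function over scale and bias parameters. The toy example below minimizes it with a plain coordinate line search, a simplified stand-in for Powell's conjugate-direction method; the sensor model and all numbers are hypothetical:

```python
import math

G = 9.81

def simulate(gravity, scale, bias):
    """Hypothetical sensor model: raw = true/scale + bias, per axis."""
    return [g / s + b for g, s, b in zip(gravity, scale, bias)]

r = G / math.sqrt(2)
ORIENTATIONS = [  # static placements of the gravity vector in the sensor frame
    (G, 0, 0), (-G, 0, 0), (0, G, 0), (0, -G, 0), (0, 0, G), (0, 0, -G),
    (r, r, 0), (0, r, r),
]
TRUE_SCALE, TRUE_BIAS = [1.02, 0.98, 1.01], [0.05, -0.03, 0.02]
SAMPLES = [simulate(g, TRUE_SCALE, TRUE_BIAS) for g in ORIENTATIONS]

def cost(p):
    """Sum of squared errors: norm of corrected measurement should equal G."""
    s, b = p[:3], p[3:]
    total = 0.0
    for m in SAMPLES:
        norm = math.sqrt(sum((si * (mi - bi)) ** 2
                             for si, mi, bi in zip(s, m, b)))
        total += (norm - G) ** 2
    return total

def calibrate(p0, step=0.1, rounds=120, shrink=0.7):
    """Coordinate line search; Powell's method adds conjugate-direction updates."""
    p = list(p0)
    for _ in range(rounds):
        for i in range(len(p)):
            for delta in (step, -step):
                trial = p[:]
                trial[i] += delta
                while cost(trial) < cost(p):
                    p = trial[:]
                    trial[i] += delta
        step *= shrink
    return p

p0 = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]  # nominal scale 1, bias 0
estimate = calibrate(p0)
```

No rate table or precision fixture is needed: only static placements in several orientations, which is what makes the approach attractive for field calibration.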
NASA Astrophysics Data System (ADS)
Gressier, V.; Lacoste, V.; Martin, A.; Pepino, M.
2014-10-01
The variation in the response of instruments with neutron energy has to be determined in well-characterized monoenergetic neutron fields. The quantities associated with these fields are the neutron fluence and the mean energy of the monoenergetic neutron peak needed to determine the related dosimetric quantities. At the IRSN AMANDE facility, the reference measurement standard for neutron fluence is based on a long counter calibrated in the IRSN reference 252Cf neutron field. In this paper, the final characterization of this device is presented as well as the method used to determine the reference fluence at the calibration point in monoenergetic neutron fields.
Evolutionary Based Techniques for Fault Tolerant Field Programmable Gate Arrays
NASA Technical Reports Server (NTRS)
Larchev, Gregory V.; Lohn, Jason D.
2006-01-01
The use of SRAM-based Field Programmable Gate Arrays (FPGAs) is becoming more and more prevalent in space applications. Commercial-grade FPGAs are potentially susceptible to permanently debilitating Single-Event Latchups (SELs). Repair methods based on Evolutionary Algorithms may be applied to FPGA circuits to enable successful fault recovery. This paper presents the experimental results of applying such methods to repair four commonly used circuits (quadrature decoder, 3-by-3-bit multiplier, 3-by-3-bit adder, 440-7 decoder) into which a number of simulated faults have been introduced. The results suggest that evolutionary repair techniques can improve the process of fault recovery when used instead of or as a supplement to Triple Modular Redundancy (TMR), which is currently the predominant method for mitigating FPGA faults.
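Evolutionary repair of the kind described above can be sketched at toy scale: treat the configuration as a bitstring, score it against a reference truth table, and let a simple (1+1) evolutionary loop flip bits until the damaged entries are corrected. The example below uses a 2-bit multiplier truth table rather than the circuits from the paper, and a lookup-table encoding chosen purely for illustration:

```python
import random

random.seed(1)

def multiplier_bits(a, b):
    """Reference output bits of a 2-bit x 2-bit multiplier (4-bit product)."""
    p = (a * b) & 0xF
    return [(p >> k) & 1 for k in range(4)]

# flatten the full truth table into a 64-bit "configuration"
TARGET = [bit for a in range(4) for b in range(4) for bit in multiplier_bits(a, b)]
N = len(TARGET)

def fitness(genome):
    """Fraction of output bits matching the reference truth table."""
    return sum(g == t for g, t in zip(genome, TARGET)) / N

# simulate SEU damage: flip a handful of configuration bits
damaged = TARGET[:]
for i in random.sample(range(N), 6):
    damaged[i] ^= 1

# (1+1) evolutionary repair: mutate one bit, keep the change if no worse
genome = damaged[:]
for _ in range(5000):
    child = genome[:]
    child[random.randrange(N)] ^= 1
    if fitness(child) >= fitness(genome):
        genome = child
```

Real FPGA repair evolves configuration bitstreams against functional test vectors rather than a directly visible truth table, but the fitness-driven mutate-and-select loop is the same mechanism.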
Field methods to measure surface displacement and strain with the Video Image Correlation method
NASA Technical Reports Server (NTRS)
Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.
1994-01-01
The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.
Bringing the Field into the Classroom: A Field Methods Course on Saudi Arabian Sign Language
ERIC Educational Resources Information Center
Stephen, Anika; Mathur, Gaurav
2012-01-01
The methodology used in one graduate-level linguistics field methods classroom is examined through the lens of the students' experiences. Four male Deaf individuals from the Kingdom of Saudi Arabia served as the consultants for the course. After brief background information about their country and its practices surrounding deaf education, both…
Method for imaging with low frequency electromagnetic fields
Lee, Ki H.; Xie, Gan Q.
1994-01-01
A method for imaging with low frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H is then transformed into wavefield data U. The traveltimes corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter α for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography.
Method for imaging with low frequency electromagnetic fields
Lee, K.H.; Xie, G.Q.
1994-12-13
A method is described for imaging with low frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H is then transformed into wavefield data U. The travel times corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter α for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography. 13 figures.
The reduced basis method for the electric field integral equation
Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.
2011-06-20
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
A self-consistent field method for galactic dynamics
NASA Technical Reports Server (NTRS)
Hernquist, Lars; Ostriker, Jeremiah P.
1992-01-01
The present study describes an algorithm for evolving collisionless stellar systems in order to investigate the evolution of systems with density profiles like the R^(1/4) law, using only a few terms in the expansions. A good fit is obtained for a truncated isothermal distribution, which renders the method appropriate for galaxies with flat rotation curves. Calculations employing N of about 10^6-10^7 are straightforward on existing supercomputers, making possible simulations having significantly smoother fields than with direct methods such as tree-codes. Orbits are found in a given static or time-dependent gravitational field; the potential, phi(r, t), is revised from the resultant density, rho(r, t). Possible scientific uses of this technique are discussed, including tidal perturbations of dwarf galaxies, the adiabatic growth of central masses in spheroidal galaxies, instabilities in realistic galaxy models, and secular processes in galactic evolution.
A Method for Evaluating Volt-VAR Optimization Field Demonstrations
Schneider, Kevin P.; Weaver, T. F.
2014-08-31
In a regulated business environment a utility must be able to validate that deployed technologies provide quantifiable benefits to the end-use customers. For traditional technologies there are well-established procedures for determining what benefits will be derived from the deployment. But for many emerging technologies, procedures for determining benefits are less clear, and completely absent in some cases. Volt-VAR Optimization is a technology that is being deployed across the nation, but there are still numerous discussions about potential benefits and how they are achieved. This paper presents a method for evaluating and quantifying the benefits of field deployments of Volt-VAR Optimization technologies. In addition to the basic methodology, the paper presents a summary of results and observations from two separate Volt-VAR Optimization field evaluations using the proposed method.
Tattoli, F.; Casavola, C.; Pierron, F.; Rotinat, R.; Pappalettere, C.
2011-01-17
One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat affected zones is often associated with changes in local material properties. The present work deals with the identification of material parameters governing the elasto-plastic behaviour of the fused and heat affected zones as well as the base material for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and it has been shown to be useful and much less time consuming than classical finite element model updating approaches applied to similar problems. The paper will present results and discuss the problem of selection of the weld zones for the identification.
NASA Astrophysics Data System (ADS)
Tattoli, F.; Pierron, F.; Rotinat, R.; Casavola, C.; Pappalettere, C.
2011-01-01
One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat affected zones is often associated with changes in local material properties. The present work deals with the identification of material parameters governing the elasto-plastic behaviour of the fused and heat affected zones as well as the base material for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and it has been shown to be useful and much less time consuming than classical finite element model updating approaches applied to similar problems. The paper will present results and discuss the problem of selection of the weld zones for the identification.
Lidar Tracking of Multiple Fluorescent Tracers: Method and Field Test
NASA Technical Reports Server (NTRS)
Eberhard, Wynn L.; Willis, Ron J.
1992-01-01
Past research and applications have demonstrated the advantages and usefulness of lidar detection of a single fluorescent tracer to track air motions. Earlier researchers performed an analytical study that showed good potential for lidar discrimination and tracking of two or three different fluorescent tracers at the same time. The present paper summarizes the multiple fluorescent tracer method, discusses its expected advantages and problems, and describes our field test of this new technique.
Work function measurements by the field emission retarding potential method.
NASA Technical Reports Server (NTRS)
Strayer, R. W.; Mackie, W.; Swanson, L. W.
1973-01-01
Description of the theoretical foundation of the field emission retarding potential method, and review of its experimental application to the measurement of single crystal face work functions. The results obtained from several substrates are discussed. An interesting and useful fallout from the experimental approach described is the ability to accurately measure the elastic and inelastic reflection coefficients for impinging electrons down to near zero-volt energy.
A field calibration method to eliminate the error caused by relative tilt on roll angle measurement
NASA Astrophysics Data System (ADS)
Qi, Jingya; Wang, Zhao; Huang, Junhui; Yu, Bao; Gao, Jianmin
2016-11-01
The roll angle measurement method based on a heterodyne interferometer is an efficient technique owing to its high precision and immunity to environmental noise. The optical layout is based on a polarization-assisted conversion of the roll angle into an optical phase shift, read by a beam passing through an objective plate actuated by the roll rotation. The measurement sensitivity, or gain coefficient G, is calibrated beforehand. However, a relative tilt between the laser and the objective plate always exists in long-rail field measurements, due to tilt of the laser and roll of the guide. This relative tilt affects the value of G and thus causes roll angle measurement error. In this paper, a field calibration method for G is presented to eliminate this error. The field calibration layout converts the roll angle into an optical path change (OPC) using a rotary table, so that the roll angle can be obtained from the OPC read by a two-frequency interferometer. Together with the phase shift, an accurate G can be obtained in the field and the measurement error corrected. The optical system of the field calibration method was set up and experimental results are given. Compared against a Renishaw XL-80 used for calibration, the proposed field calibration method obtains an accurate G in field rail roll angle measurements.
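The essence of the calibration is recovering the gain coefficient G that maps roll angle to phase shift. A minimal sketch, with hypothetical rotary-table angles and phase readings (not data from the paper), fits G by least squares and then inverts a phase reading back to a roll angle:

```python
def fit_gain(angles, phases):
    """Least-squares line phase = gain * angle + offset; the offset term
    absorbs any constant phase bias instead of forcing a line through zero."""
    n = len(angles)
    mx = sum(angles) / n
    my = sum(phases) / n
    sxx = sum((x - mx) ** 2 for x in angles)
    sxy = sum((x - mx) * (y - my) for x, y in zip(angles, phases))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# hypothetical calibration run: rotary-table angles (arcsec) vs phase (deg)
angles = [0, 20, 40, 60, 80, 100]
true_gain, true_offset = 0.35, 1.2
noise = [0.02, -0.01, 0.03, -0.02, 0.01, -0.03]
phases = [true_gain * a + true_offset + e for a, e in zip(angles, noise)]

gain, offset = fit_gain(angles, phases)
roll = (phases[3] - offset) / gain  # recover a roll angle from a phase reading
```

In the actual method the rotary table provides the known angles and the two-frequency interferometer provides the optical path change; the arithmetic of recovering G, and of inverting phase to angle, is the same.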
Numerical results for extended field method applications. [thin plates
NASA Technical Reports Server (NTRS)
Donaldson, B. K.; Chander, S.
1973-01-01
This paper presents the numerical results obtained when a new method of analysis, called the extended field method, was applied to several thin plate problems including one with non-rectangular geometry, and one problem involving both beams and a plate. The numerical results show that the quality of the single plate solutions was satisfactory for all cases except those involving a freely deflecting plate corner. The results for the beam and plate structure were satisfactory even though the structure had a freely deflecting corner.
Neutron Field Measurements in Phantom with Foil Activation Methods.
1986-11-29
Report DNA-TR-87-10 (AD-A192 122). Contents include: SAND II measurements in phantom; the 5-foil neutron dosimetry method; a comparison of the SAND II and simple 5-foil dosimetry methods; thermal neutron measurements. The monkey phantom spectrum differs from the NBS U-235 fission spectrum in that the former has a 1/E tail plus a thermal-neutron peak.
Magnetic space-based field measurements
NASA Technical Reports Server (NTRS)
Langel, R. A.
1981-01-01
Because the near-Earth magnetic field is a complex combination of fields from outside the Earth, fields from its core, and fields from its crust, measurements from space prove to be the only practical way to obtain timely, global surveys. Due to the difficulty of making accurate vector measurements, early satellites such as Sputnik and Vanguard measured only the field magnitude. The attitude accuracy was 20 arc sec. Both the Earth's core field and the fields arising from its crust have been mapped from satellite data. The standard model of the core consists of a scalar potential represented by a spherical harmonic series. Models of the crustal field are relatively new. Mathematical representation is achieved in localized areas by arrays of dipoles appropriately located in the Earth's crust. Measurements of the Earth's field are used in navigation, to map charged particles in the magnetosphere, to study fluid properties in the Earth's core, to infer conductivity of the upper mantle, and to delineate regional-scale geological features.
Field Based Centers: The Missing Link?
ERIC Educational Resources Information Center
Walters, Ellen
The implementation of a field experience program for a university situated in a large and sparsely populated area is described. Twelve faculty teams provide the structure for student advising, instruction, and supervision of field experiences. The need for a single organizational unit to facilitate the logistics of integrating the wide-spread…
Performance of FFT methods in local gravity field modelling
NASA Technical Reports Server (NTRS)
Forsberg, Rene; Solheim, Dag
1989-01-01
Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the method often yields excellent results in practice when compared to other, more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential; in practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g; the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
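The planar FFT evaluation that underlies this approach can be sketched as follows: under the flat-earth approximation, geoid undulations N follow from gridded gravity anomalies Δg by spectral division, N(k) = Δg(k)/(γ|k|). This is a minimal sketch with illustrative parameters; operational software additionally handles kernel modifications, tapering, and edge effects.

```python
import numpy as np

def geoid_from_gravity_fft(dg, dx, gamma=9.81):
    """Planar (flat-earth) geoid undulation N [m] from a 2-D grid of gravity
    anomalies dg [m/s^2] with spacing dx [m], via the spectral form of
    Stokes' integral: N(k) = dg(k) / (gamma * |k|)."""
    ny, nx = dg.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    k[0, 0] = np.inf  # suppress the zero wavenumber: the mean is undetermined
    return np.real(np.fft.ifft2(np.fft.fft2(dg) / (gamma * k)))
```

The zero wavenumber is suppressed because the planar kernel leaves the grid mean undetermined; real implementations restore it from a global reference field.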
ALTERNATIVE FIELD METHODS TO TREAT MERCURY IN SOIL
Ernest F. Stine Jr; Steven T. Downey
2002-08-14
The U.S. Department of Energy (DOE) used large quantities of mercury in the uranium separation process from the 1950s until the late 1980s in support of national defense. Some of this mercury, as well as other hazardous metals and radionuclides, found its way into and under several buildings, into surface and subsurface soils, and into some of the surface waters. Several of these areas may pose potential health or environmental risks and must be dealt with under current environmental regulations. DOE's National Energy Technology Laboratory (NETL) awarded a contract, ''Alternative Field Methods to Treat Mercury in Soil'', to IT Group, Knoxville, TN (IT) and its subcontractor NFS, Erwin, TN, to identify remedial methods to clean up mercury-contaminated high-clay-content soils using proven treatment chemistries. The sites of interest were the Y-12 National Security Complex located in Oak Ridge, Tennessee, the David Witherspoon properties located in Knoxville, Tennessee, and other similarly contaminated sites. The primary laboratory-scale contract objectives were (1) to safely retrieve and test samples of contaminated soil in an approved laboratory and (2) to determine an acceptable treatment method to ensure that the mercury does not leach from the soil above regulatory levels. The leaching requirements were to meet the TC (0.2 mg/l) and UTS (0.025 mg/l) TCLP criteria. In-situ treatments were preferred to control the potential mercury vapor emissions and liquid mercury spills associated with ex-situ treatments. All laboratory work was conducted in IT's and NFS's laboratories. Mercury-contaminated nonradioactive soil from under the Alpha 2 building in the Y-12 complex was used. This soil contained insufficient levels of leachable mercury and resulted in TCLP mercury concentrations that were similar to the applicable LDR limits. The soil was therefore spiked at multiple levels with metallic (up to 6000 mg/l) and soluble mercury compounds (up to 500 mg/kg) to simulate expected ranges of mercury
Reconstruction of the sound field above a reflecting plane using the equivalent source method
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Jing, Wen-Qian; Zhang, Yong-Bin; Lin, Wang-Lin
2017-01-01
In practical situations, vibrating objects are usually located above a reflecting plane rather than in a free field. Conventional near-field acoustic holography (NAH) sometimes fails to identify sound sources under such conditions. This paper develops two kinds of equivalent source method (ESM)-based half-space NAH to reconstruct the sound field above a reflecting plane. In the first kind of method, the half-space Green's function is introduced into the ESM-based NAH, and the sound field is reconstructed on the condition that the surface impedance of the reflecting plane is known a priori. The second kind of method regards the reflections as being radiated by equivalent sources placed under the reflecting plane, and the sound field is reconstructed by matching the pressure on the hologram surface with the equivalent sources distributed within the vibrating object and those substituting for the reflections. This kind of method is therefore independent of the surface impedance of the reflecting plane. Numerical simulations and experiments demonstrate the feasibility of both methods for reconstructing the sound field above a reflecting plane.
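A minimal sketch of the second kind of method (the impedance-independent ESM) follows, assuming for illustration a perfectly reflecting plane at z = 0 so that the reflections are represented by mirror equivalent sources; the function names and the Tikhonov parameter are hypothetical choices, not the paper's:

```python
import numpy as np

def greens(r_src, r_fld, k):
    """Free-field Green's function exp(-ikR)/(4*pi*R) between each source
    point (rows of r_src) and each field point (rows of r_fld)."""
    R = np.linalg.norm(r_fld[:, None, :] - r_src[None, :, :], axis=2)
    return np.exp(-1j * k * R) / (4 * np.pi * R)

def esm_halfspace_reconstruct(p_holo, r_holo, r_eq, r_img, r_rec, k, alpha=1e-8):
    """Fit equivalent sources inside the body (r_eq) plus mirror sources
    below the plane (r_img) to the hologram pressure p_holo measured at
    r_holo, then propagate to the reconstruction points r_rec.
    Tikhonov regularization (alpha) stabilizes the ill-posed inversion."""
    G = np.hstack([greens(r_eq, r_holo, k), greens(r_img, r_holo, k)])
    q = np.linalg.solve(G.conj().T @ G + alpha * np.eye(G.shape[1]),
                        G.conj().T @ p_holo)
    G_rec = np.hstack([greens(r_eq, r_rec, k), greens(r_img, r_rec, k)])
    return G_rec @ q
```

With the source strengths q recovered from the hologram alone, no knowledge of the plane's surface impedance enters the reconstruction, which is the point of the second method.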
Magnetic field reconstruction based on sunspot oscillations
NASA Astrophysics Data System (ADS)
Löhner-Böttcher, J.; Bello González, N.; Schmidt, W.
2016-11-01
The magnetic field of a sunspot guides magnetohydrodynamic waves toward higher atmospheric layers. In the upper photosphere and lower chromosphere, wave modes with periods longer than the acoustic cut-off period become evanescent. The cut-off period essentially changes due to the atmospheric properties, e.g., increases for larger zenith inclinations of the magnetic field. In this work, we aim at introducing a novel technique of reconstructing the magnetic field inclination on the basis of the dominating wave periods in the sunspot chromosphere and upper photosphere. On 2013 August 21, we observed an isolated, circular sunspot (NOAA11823) for 58 min in a purely spectroscopic multi-wavelength mode with the Interferometric Bidimensional Spectro-polarimeter (IBIS) at the Dunn Solar Telescope. By means of a wavelet power analysis, we retrieved the dominating wave periods and reconstructed the zenith inclinations in the chromosphere and upper photosphere. The results are in good agreement with the lower photospheric HMI magnetograms. The sunspot's magnetic field in the chromosphere inclines from almost vertical (0°) in the umbra to around 60° in the outer penumbra. With increasing altitude in the sunspot atmosphere, the magnetic field of the penumbra becomes less inclined. We conclude that the reconstruction of the magnetic field topology on the basis of sunspot oscillations yields consistent and conclusive results. The technique opens up a new possibility to infer the magnetic field inclination in the solar chromosphere.
NASA Astrophysics Data System (ADS)
Boblest, S.; Meyer, D.; Wunner, G.
2014-11-01
We present a quantum Monte Carlo application for the computation of energy eigenvalues for atoms and ions in strong magnetic fields. The required guiding wave functions are obtained with the Hartree-Fock-Roothaan code described in the accompanying publication (Schimeczek and Wunner, 2014). Our method yields highly accurate results for the binding energies of symmetry subspace ground states and at the same time provides a means for quantifying the quality of the results obtained with the above-mentioned Hartree-Fock-Roothaan method. Catalogue identifier: AETV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETV_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 72 284 No. of bytes in distributed program, including test data, etc.: 604 948 Distribution format: tar.gz Programming language: C++. Computer: Cluster of 1–500 HP Compaq dc5750. Operating system: Linux. Has the code been vectorized or parallelized?: Yes. Code includes MPI directives. RAM: 500 MB per node Classification: 2.1. External routines: Boost::Serialization, Boost::MPI, LAPACK, BLAS Nature of problem: Quantitative modeling of features observed in the X-ray spectra of isolated neutron stars is hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at high magnetic field strengths. The predominant amount of line data in the literature has been calculated with Hartree-Fock methods, which are intrinsically restricted in precision. Our code is intended to provide a powerful tool for calculating very accurate energy values from, and thereby improving the quality of, existing Hartree-Fock results. Solution method: The fixed-phase quantum Monte Carlo method is used in combination with guiding functions obtained in Hartree-Fock calculations.
A method of analysis of distributions of local electric fields in composites
NASA Astrophysics Data System (ADS)
Kolesnikov, V. I.; Yakovlev, V. B.; Bardushkin, V. V.; Lavrov, I. V.; Sychev, A. P.; Yakovleva, E. N.
2016-03-01
A method of prediction of distributions of local electric fields in composite media based on analysis of the tensor operators of the concentration of intensity and induction is proposed. Both general expressions and the relations for calculating these operators are obtained in various approximations. The analytical expressions are presented for the operators of the concentration of electric fields in various types of inhomogeneous structures obtained in the generalized singular approximation.
A telluric method for natural field induced polarization studies
NASA Astrophysics Data System (ADS)
Zorin, Nikita; Epishkin, Dmitrii; Yakovlev, Andrey
2016-12-01
Natural field induced polarization (NFIP) is a branch of low-frequency electromagnetics designed for detection of buried polarizable objects from magnetotelluric (MT) data. The conventional approach to the method deals with normalized MT apparent resistivity. We show that it is more favorable to extract the IP effect from solely electric (telluric) transfer functions instead. For lateral localization of polarizable bodies it is convenient to work with the telluric tensor determinant, which does not depend on the rotation of the receiving electric dipoles. Applicability of the new method was verified in the course of a large-scale field research. The field work was conducted in a well-explored area in East Kazakhstan known for the presence of various IP sources such as graphite, magnetite, and sulfide mineralization. A new multichannel processing approach allowed the determination of the telluric tensor components with very good accuracy. This holds out a hope that in some cases NFIP data may be used not only for detection of polarizable objects, but also for a rough estimation of their spectral IP characteristics.
A novel colorimetric method for field arsenic speciation analysis.
Hu, Shan; Lu, Jinsuo; Jing, Chuanyong
2012-01-01
Accurate on-site determination of arsenic (As) concentration as well as its speciation presents a great environmental challenge, especially to developing countries. To meet the need for routine field monitoring, we developed a rapid colorimetric method with a wide dynamic detection range and high precision. The novel application of KMnO4 and CH4N2S as effective As(III) oxidant and As(V) reductant, respectively, in the formation of molybdenum blue complexes enabled the differentiation of As(III) and As(V). The detection limit of the method was 8 microg/L, with a linear range (R2 = 0.998) of four orders of magnitude in total As concentration. The As speciation in groundwater samples determined with the colorimetric method in the field was consistent with the results obtained using high performance liquid chromatography-atomic fluorescence spectrometry (HPLC-AFS), as evidenced by a linear correlation in paired analysis with a slope of 0.9990-0.9997 (p < 0.0001, n = 28). Recoveries of 96%-116% for total As, 85%-122% for As(III), and 88%-127% for As(V) were achieved for groundwater samples with total As concentrations in the range 100-800 microg/L. The colorimetric result showed that 3.61 g/L As(III) existed as the only As species in a real industrial wastewater, in good agreement with the HPLC-AFS result of 3.56 g/L As(III). No interference with the color development was observed in the presence of sulfate, phosphate, silicate, humic acid, and heavy metals from a complex water matrix. This accurate, sensitive, and easy-to-use method is especially suitable for field As determination.
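The speciation-by-difference arithmetic behind such colorimetric protocols can be sketched as follows. This is an illustrative simplification, not the paper's exact procedure: a linear calibration stands in for the molybdenum-blue absorbance response, an untreated aliquot is read as As(V), and an oxidized aliquot is read as total As.

```python
import numpy as np

def calibrate(conc_std, abs_std):
    """Fit the linear calibration A = m*C + b from standard solutions
    (Beer-Lambert linearity over the method's working range)."""
    m, b = np.polyfit(conc_std, abs_std, 1)
    return m, b

def speciate_by_difference(abs_native, abs_oxidized, m, b):
    """Sketch of speciation by difference: the untreated aliquot responds
    as As(V); the oxidized aliquot, in which As(III) has been converted
    to As(V), responds as total As; As(III) is the difference.
    Returns (As(III), As(V)) in calibration units."""
    as5 = (abs_native - b) / m
    total = (abs_oxidized - b) / m
    return max(total - as5, 0.0), as5
```

The difference scheme inherits the calibration's error at both readings, which is why the paper reports separate recovery ranges for As(III) and As(V).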
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Zhang, Zhiyong; Wang, Kaiyu; Cai, Shuhui; Chen, Zhong
2014-08-01
Three-dimensional (3D) NMR plays an important role in structural elucidations of complex samples, whereas difficulty remains in its applications to inhomogeneous fields. Here, we propose an NMR approach based on intermolecular zero-quantum coherences (iZQCs) to obtain high-resolution 3D J-resolved-COSY spectra in inhomogeneous fields. Theoretical analyses are presented for verifying the proposed method. Experiments on a simple chemical solution and a complex brain phantom are performed under non-ideal field conditions to show the ability of the proposed method. This method is an application of iZQCs to high-resolution 3D NMR, and is useful for studies of complex samples in inhomogeneous fields.
Field Methods for the Study of Slope and Fluvial Processes
Leopold, Luna Bergere; Leopold, Luna Bergere
1967-01-01
In Belgium during the summer of 1966 the Commission on Slopes and the Commission on Applied Geomorphology of the International Geographical Union sponsored a joint symposium, with field excursions, and meetings of the two commissions. As a result of the conference and associated discussions, the participants expressed the view that it would be a contribution to scientific work relating to the subject area if the Commission on Applied Geomorphology could prepare a small manual describing the methods of field investigation being used by research scientists throughout the world in the study of various aspects of slope development and fluvial processes. The Commission then assumed this responsibility and asked as many persons as were known to be working on this subject to contribute whatever they wished in the way of descriptions of methods being employed. The purpose of the present manual is to show the variety of study methods now in use, to describe from the experience gained the limitations and advantages of different techniques, and to give pertinent detail which might be useful to other investigators. Some details that would be useful to know are not included in scientific publications, but in a manual on methods the details of how best to use a method have a place. Various persons have learned certain things which cannot be done, as well as some methods that are successful. It is our hope that comparison of the methods tried will give the reader suggestions as to how a particular method might best be applied to his own circumstances. The manual does not purport to include methods used by all workers. In particular, it does not interfere with a more systematic treatment of the subject (1) or with various papers already published in the present journal. In fact we are sure that there are pertinent research methods that we do not know of, and the Commission would be glad to receive additions and other ideas from those who find they have something to contribute. Also, the
METHOD AND APPARATUS FOR TRAPPING IONS IN A MAGNETIC FIELD
Luce, J.S.
1962-04-17
A method and apparatus are described for trapping ions within an evacuated container and within a magnetic field utilizing dissociation and/or ionization of molecular ions to form atomic ions and energetic neutral particles. The atomic ions are magnetically trapped as a result of a change of charge-to- mass ratio. The molecular ions are injected into the container and into the path of an energetic carbon arc discharge which dissociates and/or ionizes a portion of the molecular ions into atomic ions and energetic neutrals. The resulting atomic ions are trapped by the magnetic field to form a circulating beam of atomic ions, and the energetic neutrals pass out of the system and may be utilized in a particle accelerator. (AEC)
Magnetic field adjustment structure and method for a tapered wiggler
Halbach, Klaus
1988-03-01
An improved method and structure is disclosed for adjusting the magnetic field generated by a group of electromagnet poles spaced along the path of a charged particle beam to compensate for energy losses in the charged particles which comprises providing more than one winding on at least some of the electromagnet poles; connecting one respective winding on each of several consecutive adjacent electromagnet poles to a first power supply, and the other respective winding on the electromagnet pole to a different power supply in staggered order; and independently adjusting one power supply to independently vary the current in one winding on each electromagnet pole in a group whereby the magnetic field strength of each of a group of electromagnet poles may be changed in smaller increments.
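The fine-adjustment idea can be illustrated with a toy ampere-turns model (all constants below are made up for illustration, not taken from the patent): because a pole's field scales with total ampere-turns, a low-turn trim winding driven by the second supply changes the field in increments much smaller than the main winding alone provides.

```python
def pole_field(i_main, i_trim, n_main=200, n_trim=10, c=0.005):
    """Toy linear model of one electromagnet pole with two windings:
    field [arb. units] proportional to total ampere-turns, with
    illustrative turn counts and field constant."""
    return c * (n_main * i_main + n_trim * i_trim)
```

With these numbers, a 10 mA step on the trim winding moves the field by n_trim/n_main = 1/20 of what the same step on the main winding would, which is the "smaller increments" the patent describes.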
Vision Sensor-Based Road Detection for Field Robot Navigation
Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen
2015-01-01
Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514
A Field-Based Aquatic Life Benchmark for Conductivity in ...
EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
Magnetic irreversibility: An important amendment in the zero-field-cooling and field-cooling method
NASA Astrophysics Data System (ADS)
Teixeira Dias, Fábio; das Neves Vieira, Valdemar; Esperança Nunes, Sabrina; Pureur, Paulo; Schaf, Jacob; Fernanda Farinela da Silva, Graziele; de Paiva Gouvêa, Cristol; Wolff-Fabris, Frederik; Kampert, Erik; Obradors, Xavier; Puig, Teresa; Roa Rovira, Joan Josep
2016-02-01
The present work reports on experimental procedures to correct significant deviations of magnetization data, caused by magnetic relaxation, due to small field cycling by sample transport in the inhomogeneous applied magnetic field of commercial magnetometers. The extensively used method of measuring the magnetic irreversibility by first cooling the sample in zero field, switching on a constant applied magnetic field and measuring the magnetization M(T) while slowly warming the sample, and subsequently measuring M(T) while slowly cooling it back in the same field, is very sensitive even to small displacements of the magnetization curve. In our melt-processed YBaCuO superconducting sample we observed displacements of the irreversibility limit of up to 7 K in high fields. Such displacements are detected only on confronting the magnetic irreversibility limit with other measurements, such as zero resistance, in which the sample remains fixed and so is not affected by such relaxation. We measured the magnetic irreversibility, Tirr(H), using a vibrating sample magnetometer (VSM) from Quantum Design. The zero-resistance data, Tc0(H), were obtained using a PPMS from Quantum Design. On confronting our irreversibility lines with those of zero resistance, we observed that the Tc0(H) data fell several kelvin above the Tirr(H) data, which obviously contradicts the well-known properties of superconductivity. In order to get consistent Tirr(H) data in the H-T plane, it was necessary to perform numerous additional measurements as a function of the amplitude of the sample transport and extrapolate the Tirr(H) data for each applied field to zero amplitude.
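The zero-amplitude extrapolation described at the end can be sketched in a few lines, assuming (as a simplification) a linear dependence of the apparent irreversibility temperature on transport amplitude:

```python
import numpy as np

def tirr_zero_amplitude(amplitudes_mm, tirr_K):
    """Extrapolate measured irreversibility temperatures to zero
    sample-transport amplitude (linear model, an assumption) to remove
    the field-cycling relaxation artifact of transport magnetometers."""
    slope, intercept = np.polyfit(amplitudes_mm, tirr_K, 1)
    return intercept
```

Repeating this fit for each applied field yields a corrected Tirr(H) line that can be consistently compared with fixed-sample measurements such as Tc0(H).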
A Method to Localize RF B1 Field in High-Field Magnetic Resonance Imaging Systems
Yoo, Hyoungsuk; Gopinath, Anand; Vaughan, J. Thomas
2014-01-01
In high-field magnetic resonance imaging (MRI) systems, B0 fields of 7 and 9.4 T, the RF field shows greater inhomogeneity compared to clinical MRI systems with B0 fields of 1.5 and 3.0 T. In multichannel RF coils, the magnitude and phase of the input to each coil element can be controlled independently to reduce the nonuniformity of the RF field. The convex optimization technique has been used to obtain the optimum excitation parameters with iterative solutions for homogeneity in a selected region of interest. The pseudoinverse method has also been used to find a solution. The simulation results for 9.4- and 7-T MRI systems are discussed in detail for the head model. Variation of the simulation results in a 9.4-T system with the number of RF coil elements for different positions of the regions of interest in a spherical phantom are also discussed. Experimental results were obtained in a phantom in the 9.4-T system and are compared to the simulation results and the specific absorption rate has been evaluated. PMID:22929360
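The pseudoinverse solution mentioned above can be sketched as follows. This is a toy version: S is an assumed complex matrix of per-channel B1 maps sampled at voxels, and the actual work uses convex optimization with constraints (and SAR evaluation) that this sketch omits.

```python
import numpy as np

def rf_shim_weights(S, target):
    """Pseudoinverse RF-shimming sketch: S[v, c] holds the complex B1 of
    coil channel c at voxel v; returns per-channel complex drive weights w
    minimizing ||S @ w - target|| in the least-squares sense."""
    return np.linalg.pinv(S) @ target
```

The returned weights encode both the magnitude and phase of each channel's excitation, which is exactly the degree of freedom the multichannel coil exposes.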
Magne, Isabelle; Deschamps, François
2016-09-01
Health guidelines for electric and magnetic fields in the low frequency range define exposure limits in terms of the induced electric field in the human body, which is not directly measurable and therefore requires dosimetry. However, many parameters, such as the human models, calculation codes and post-processing methods, influence the calculation results. Based upon many published papers, and therefore covering a wide range of these influence parameters, this paper proposes a method for conservatively deriving measurable levels of electric and magnetic fields equivalent to the basic restrictions. Following this method, we found that, regarding exposure to uniform fields, the ICNIRP 2010 occupational basic restrictions are equivalent to 2 mT and 7 mT magnetic fields and to 35 kV/m and 35 kV/m electric fields at 50 Hz when applied respectively to the central and peripheral nervous system.
NASA Astrophysics Data System (ADS)
Wu, Shudong; Wan, Li
2012-03-01
The electronic structures of a CdSe spherical quantum dot in a magnetic field are obtained by using an exact diagonalization method and a variational method within the effective-mass approximation. The dependences of the energies and wave functions of the electron states, exciton binding energy, exciton transition energy, and exciton diamagnetic shift on the applied magnetic field are investigated theoretically in detail. It is observed that the degeneracy of the magnetic quantum number m is removed due to the Zeeman effect when the magnetic field is present. For the states with m ≥ 0, the electron energies increase as the magnetic field increases. However, for the states with m < 0, the electron energies decrease to a minimum and then increase with increasing magnetic field. The energies and wave functions of electron states obtained from the variational method, based on the variational functions we propose, are in excellent agreement with the results obtained from the exact diagonalization method. A comparison between the results obtained from our variational functions and those proposed by Xiao is also presented.
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
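A minimal StOMP iteration, the core of the method the authors adapt, can be sketched as follows. This bare version omits the paper's prior information, non-negativity constraint, and wavelet parameterization, and the threshold rule is an illustrative choice:

```python
import numpy as np

def stomp(A, y, n_stages=10, t=2.0):
    """Minimal stagewise orthogonal matching pursuit (StOMP) sketch for
    y ~ A @ x with x sparse. At each stage, columns of A whose correlation
    with the residual exceeds t * (residual scale) join the active set;
    coefficients on the active set are then refit by least squares."""
    m, n = A.shape
    active = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_stages):
        c = A.T @ r
        thresh = t * np.linalg.norm(r) / np.sqrt(m)
        new = np.abs(c) > thresh
        if not new.any():
            break
        active |= new
        x_a, *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
        x = np.zeros(n)
        x[active] = x_a
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10 * np.linalg.norm(y):
            break
    return x
```

Because whole batches of columns enter the active set per stage, StOMP needs far fewer stages than one-atom-at-a-time matching pursuit, which matters at the dimensions of a wavelet-parameterized emission field.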
Sun, Dali
2016-01-01
Nanoparticles have become a powerful tool for cell imaging and for studies of biomolecule, cell, and protein interactions, but are difficult to rapidly and accurately measure in most assays. Dark-field microscope (DFM) image analysis approaches used to quantify nanoparticles require high-magnification near-field (HN) images that are labor intensive due to a requirement for manual image selection and the focal adjustments needed when identifying and capturing new regions of interest. Low-magnification far-field (LF) DFM imagery is technically simpler to perform but cannot be used as an alternative to HN-DFM quantification, since it is highly sensitive to surface artifacts and debris that can easily mask nanoparticle signal. We now describe a new noise reduction approach that markedly reduces LF-DFM image artifacts to allow sensitive and accurate nanoparticle signal quantification from LF-DFM images. We have used this approach to develop a “Dark Scatter Master” (DSM) algorithm for the popular NIH image analysis program ImageJ, which can be readily adapted for use with automated high-throughput assay analyses. This method demonstrated robust performance quantifying nanoparticles in different assay formats, including a novel method that quantified extracellular vesicles in patient blood samples to detect pancreatic cancer cases. Based on these results, we believe our LF-DFM quantification method can markedly decrease the analysis time of most nanoparticle-based assays to impact both basic research and clinical analyses. PMID:28177210
DNA-based methods of geochemical prospecting
Ashby, Matthew [Mill Valley, CA
2011-12-06
The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.
Nondestructive acoustic electric field probe apparatus and method
Migliori, Albert
1982-01-01
The disclosure relates to a nondestructive acoustic electric field probe and its method of use. A source of acoustic pulses of arbitrary but selected shape is placed in an oil bath along with material to be tested across which a voltage is disposed and means for receiving acoustic pulses after they have passed through the material. The received pulses are compared with voltage changes across the material occurring while acoustic pulses pass through it and analysis is made thereof to determine preselected characteristics of the material.
Field method for the determination of molybdenum in plants
Reichen, Laura E.; Ward, F.N.
1951-01-01
Fresh plant material is ashed directly by heating in nickel or platinum dishes over a flame. An acid solution of 25 milligrams of ash is treated with stannous chloride and potassium thiocyanate. The amber-colored molybdenum thiocyanate complex ion is extracted with isopropyl ether, and the intensity of the color of the ether layer over a sample solution is compared with the ether layer over standard molybdenum solutions treated similarly. Field determinations can be made quickly and the method requires no special equipment. As little as 0.25 microgram, or 0.001 percent, molybdenum can be determined in plant ash.
The methods and instructions for field operations presented in this manual for surveys of non-wadeable streams and rivers were developed and tested based on 55 sample sites in the Mid-Atlantic region and 53 sites in an Oregon study during two years of pilot and demonstration proj...
B. Julia-Diaz, H. Kamano, T.-S. H. Lee, A. Matsuyama, T. Sato, N. Suzuki
2009-04-01
Within relativistic quantum field theory, we analyze the differences between the $\pi N$ reaction models constructed using (1) three-dimensional reductions of the Bethe-Salpeter equation, (2) the method of unitary transformation, and (3) time-ordered perturbation theory. Their relations with the approach based on the dispersion relations of S-matrix theory are discussed.
Research on BOM based composable modeling method
NASA Astrophysics Data System (ADS)
Zhang, Mingxin; He, Qiang; Gong, Jianxing
2013-03-01
Composable modeling has long been a research hotspot in the area of Modeling and Simulation. In order to increase the reuse and interoperability of BOM based models, this paper puts forward a composable modeling method based on BOM: it studies the basic theory of the method, designs a general structure for BOM based coupled models, and traverses the structures of BOM based atomic and coupled models. Finally, the paper describes the process of BOM based composable modeling and draws conclusions about the method. The prototype we developed and the accumulated model stocks show that this method can increase the reuse and interoperability of models.
Laboratory and field methods for measuring human energy expenditure.
Leonard, William R
2012-01-01
Energetics research is central to the field of human biology. Energy is an important currency for measuring adaptation, because both its acquisition and allocation for biological processes have important implications for survival and reproduction. Recent technological and methodological advances are now allowing human biologists to study variation in energy dynamics with much greater accuracy in a wide variety of ecological contexts. This article provides an overview of the methods used for measuring human energy expenditure (EE) and considers some of the important ecological and evolutionary questions that can be explored from an energetics perspective. Basic principles of calorimetry are first presented, followed by an overview of the equipment used for measuring human EE and work capacity. Methods for measuring three important dimensions of human EE-resting metabolic rate, working/exercising EE, and total EE-are then presented, highlighting key areas of ongoing research.
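The abstract above surveys calorimetry without giving formulas; one widely used example of how indirect calorimetry converts respiratory gas exchange into energy expenditure is the abbreviated Weir equation. The sketch below uses that standard published formula with hypothetical gas-exchange values (it is not taken from this article):

```python
def weir_kcal_per_min(vo2_l_min, vco2_l_min):
    # Abbreviated Weir equation: EE (kcal/min) from oxygen consumption
    # and carbon dioxide production, both in liters per minute
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

# hypothetical resting gas exchange for an adult
ee_min = weir_kcal_per_min(0.25, 0.20)   # kcal per minute
ee_day = ee_min * 1440.0                 # extrapolated kcal per day
```

The daily extrapolation lands near typical adult resting metabolic rates, which is a useful plausibility check when validating measurement equipment.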
Phase-field elasticity model based on mechanical jump conditions
NASA Astrophysics Data System (ADS)
Schneider, Daniel; Tschukin, Oleg; Choudhury, Abhik; Selzer, Michael; Böhlke, Thomas; Nestler, Britta
2015-05-01
Computational models based on the phase-field method typically operate on a mesoscopic length scale and resolve structural changes of the material and furthermore provide valuable information about microstructure and mechanical property relations. An accurate calculation of the stresses and mechanical energy at the transition region is therefore indispensable. We derive a quantitative phase-field elasticity model based on force balance and Hadamard jump conditions at the interface. Comparing the simulated stress profiles calculated with Voigt/Taylor (Annalen der Physik 274(12):573, 1889), Reuss/Sachs (Z Angew Math Mech 9:49, 1929) and the proposed model with the theoretically predicted stress fields in a plate with a round inclusion under hydrostatic tension, we show the quantitative characteristics of the model. In order to validate the elastic contribution to the driving force for phase transition, we demonstrate the absence of excess energy, calculated by Durga et al. (Model Simul Mater Sci Eng 21(5):055018, 2013), in a one-dimensional equilibrium condition of serial and parallel material chains. To validate the driving force for systems with curved transition regions, we relate simulations to the Gibbs-Thompson equilibrium condition (Johnson and Alexander, J Appl Phys 59(8):2735, 1986).
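The Voigt/Taylor and Reuss/Sachs schemes cited above are the two classical ways of interpolating stiffness across a diffuse interface; a minimal one-dimensional sketch (the moduli are illustrative, not from the paper):

```python
def voigt(phi, e1, e2):
    # Voigt/Taylor: both phases carry equal strain -> arithmetic average
    return phi * e1 + (1.0 - phi) * e2

def reuss(phi, e1, e2):
    # Reuss/Sachs: both phases carry equal stress -> harmonic average
    return 1.0 / (phi / e1 + (1.0 - phi) / e2)

# hypothetical phase moduli; the two schemes bound the effective stiffness
e_hard, e_soft = 200.0, 50.0
mid_voigt = voigt(0.5, e_hard, e_soft)  # 125.0
mid_reuss = reuss(0.5, e_hard, e_soft)  # 80.0
```

The gap between the two averages at the interface is exactly the kind of excess-energy artifact the jump-condition model in the paper is designed to remove.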
NASA Astrophysics Data System (ADS)
Fletcher, Lauren E.; Valdivia-Silva, Julio E.; Perez-Montaño, Saul; Condori-Apaza, Renee M.; Conley, Catharine A.; Navarro-Gonzalez, Rafael; McKay, Christopher P.
2014-03-01
The objective of this work was to develop a field method for the determination of labile organic carbon in hyper-arid desert soils. Industry standard methods rely on expensive analytical equipment that cannot be taken into the field, while scientific challenges require fast turnaround of large numbers of samples in order to characterize the soils throughout this region. Here we present a method utilizing acid-hydrolysis extraction of the labile fraction of organic carbon followed by potassium permanganate oxidation, which provides a quick and inexpensive approach to investigating samples in the field. Strict reagent standardization and calibration steps within this method allowed the determination of very low levels of organic carbon in hyper-arid soils, with results similar to those determined by the alternative methods of calcination and pyrolysis-gas chromatography-mass spectrometry. Field testing of this protocol increased the understanding of the role of organic materials in hyper-arid environments and allowed real-time, strategic decision making when planning more detailed laboratory-based analysis.
A Method for Field Infestation with Meloidogyne incognita
Xing, L. J.; Westphal, A.
2005-01-01
A field inoculation method was developed to produce Meloidogyne spp. infestation sites with minimal quantities of nematode inoculum and with a reduced labor requirement compared to previous techniques. In a preseason methyl bromide-fumigated site, nematode egg suspensions were delivered at concentrations of 0 or 10^x eggs/m of row, where x = 2.12, 2.82, 3.52, or 4.22, through a drip line attached to the seed firmer of a commercial 2-row planter into the open seed furrow while planting cowpea. These treatments were compared to a hand-inoculated treatment, in which 10^3.1 eggs were delivered every 30 cm in 5 ml of water agar suspension 2 weeks after planting. Ten weeks after planting, infection of cowpea roots was measured by gall rating and gall counts. A linear relationship between the inoculation levels and nematode-induced galls was found. At this time, the amount of galling per root system in the hand-inoculated treatment was less than in the machine-applied treatments. Advantages of this new technique include application uniformity and the low population level requisite for establishing the nematode. This method has potential in field-testing of Meloidogyne spp. management strategies by providing uniform infestation of test sites at planting time. PMID:19262898
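Reading the inoculation levels as powers of ten (the superscripts appear to have been flattened in extraction), the delivered egg densities span roughly two orders of magnitude. A quick sketch of the implied densities; the per-meter equivalence of the hand treatment is our own arithmetic, not a figure from the paper:

```python
# Assumed reading: 10^x eggs per meter of row, x = 2.12 ... 4.22
exponents = [2.12, 2.82, 3.52, 4.22]
eggs_per_m = [10.0 ** x for x in exponents]   # roughly 132 up to 16600 eggs/m

# Hand-inoculated control: 10^3.1 eggs delivered every 30 cm of row
hand_per_m = 10.0 ** 3.1 / 0.30               # roughly 4200 eggs/m equivalent
```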
Field methods for rapidly characterizing paint waste during bridge rehabilitation.
Shu, Zhan; Axe, Lisa; Jahan, Kauser; Ramanujachary, Kandalam V
2015-09-01
For Department of Transportation (DOT) agencies, bridge rehabilitation involving paint removal results in waste that is often managed as hazardous. Hence, an approach that provides field characterization of the waste classification would be beneficial. In this study, an analysis of variables critical to the leaching process was conducted to develop a predictive tool for waste classification. This approach first involved identifying the mechanistic processes that control leaching. Because steel grit is used to remove paint, elevated iron concentrations remain in the paint waste; as such, iron oxide coatings provide an important surface for metal adsorption. The diffuse layer model was invoked (log K_Me = 4.65 for Pb and log K_Me = 2.11 for Cr), where 90% of the data were captured within the 95% confidence level. Based on an understanding of the mechanistic processes along with principal component analysis (PCA) of data obtained from field-portable X-ray fluorescence (FP-XRF), statistically based models for leaching from paint waste were developed. Modeling resulted in 96% of the data falling within the 95% confidence level for Pb (R² 0.6-0.9, p ≤ 0.04), Ba (R² 0.5-0.7, p ≤ 0.1), and Zn (R² 0.6-0.7, p ≤ 0.08). However, the regression model obtained for Cr leaching was not significant (R² 0.3-0.5, p ≤ 0.75). The results of this work may assist DOT agencies with applying a predictive tool in the field that addresses the mobility of trace metals as well as the disposal and management of paint waste during bridge rehabilitation.
Method of recovering oil-based fluid
Brinkley, H.E.
1993-07-13
A method of recovering oil-based fluid is described, said method comprising the steps of: applying an oil-based fluid absorbent cloth of man-made fiber to an oil-based fluid, the cloth having at least a portion thereof that is napped so as to raise ends and loops of the man-made fibers and define voids; and absorbing the oil-based fluid into the napped portion of the cloth.
A sparse equivalent source method for near-field acoustic holography.
Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter
2017-01-01
This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
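The sparse formulation described above seeks solutions in which only a few equivalent sources are active. A generic way to promote such sparsity is l1-regularized least squares, solvable by iterative soft thresholding (ISTA); the toy system below is purely illustrative and unrelated to the paper's acoustic transfer matrices:

```python
def soft(v, t):
    # soft-thresholding operator, the proximal map of the l1 norm
    return v - t if v > t else v + t if v < -t else 0.0

def ista(A, y, lam=0.01, iters=2000):
    # minimize 0.5*||A w - y||^2 + lam*||w||_1 by iterative soft thresholding
    m, n = len(A), len(A[0])
    step = 1.0 / sum(a * a for row in A for a in row)  # conservative step size
    w = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * w[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        w = [soft(w[j] - step * g[j], step * lam) for j in range(n)]
    return w

# toy example: only the third "source" is active in the ground truth
A = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [1, -1, 1]]
y = [2.0, 2.0, 0.0, 2.0]   # A applied to [0, 0, 2]
w = ista(A, y)             # recovers approximately [0, 0, 2]
```

The regularization weight plays the same role as the sparsity constraint in the compressive sensing formulation: larger values force more coefficients exactly to zero.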
NASA Astrophysics Data System (ADS)
H, Dhaouadi; R, Zgueb; O, Riahi; F, Trabelsi; T, Othman
2016-05-01
In ferroelectric liquid crystals, phase transitions can be induced by an electric field. The constant-current method allows these transitions to be quickly localized, so that the (E,T) phase diagram of the studied product can be obtained. In this work, we make a slight modification to the measurement principles of this method. The modification allows the characteristic parameters of the ferroelectric liquid crystal to be measured quantitatively. The use of a square current signal highlights a ferroelectric hysteresis phenomenon with remnant polarization at null field, which indicates a memory effect in this compound.
Surface profile and stress field evaluation using digital gradient sensing method
NASA Astrophysics Data System (ADS)
Miao, C.; Sundaram, B. M.; Huang, L.; Tippur, H. V.
2016-09-01
Shape and surface topography evaluation from measured orthogonal slope/gradient data is of considerable engineering significance, since many full-field optical sensors and interferometers readily output such data accurately. This has applications ranging from metrology of optical and electronic elements (lenses, silicon wafers, thin film coatings) to surface profile estimation and wave front and shape reconstruction, to name a few. In this context, a new methodology for surface profile and stress field determination is advanced here, based on a recently introduced non-contact, full-field optical method called digital gradient sensing (DGS), capable of measuring small angular deflections of light rays, coupled with a robust higher-order finite-difference-based least-squares integration (HFLI) scheme in the Southwell configuration. The method is demonstrated by evaluating (a) surface profiles of mechanically warped silicon wafers and (b) stress gradients near growing cracks in planar phase objects.
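The least-squares integration step can be illustrated in one dimension, where the Southwell-style trapezoidal link between neighbouring heights and slopes reduces to a running sum (a sketch only; the paper's HFLI scheme is two-dimensional and higher order):

```python
def integrate_slopes(slopes, dx):
    # reconstruct heights z from sampled slopes dz/dx using the
    # trapezoidal relation z[i+1] - z[i] = 0.5*(s[i] + s[i+1])*dx
    z = [0.0]
    for i in range(len(slopes) - 1):
        z.append(z[-1] + 0.5 * (slopes[i] + slopes[i + 1]) * dx)
    return z

# check on z = x^2, whose linear slope 2x makes the trapezoidal rule exact
dx = 0.1
slopes = [2.0 * i * dx for i in range(11)]
z = integrate_slopes(slopes, dx)  # z[i] equals (i*dx)^2 up to rounding
```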
Correlation theory-based signal processing method for CMF signals
NASA Astrophysics Data System (ADS)
Shen, Yan-lin; Tu, Ya-qing
2016-06-01
The signal processing precision of Coriolis mass flowmeter (CMF) signals directly affects the measurement accuracy of Coriolis mass flowmeters. To improve this accuracy, a correlation theory-based signal processing method for CMF signals is proposed, comprising a correlation-based frequency estimation method and a correlation-based phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integral-period sampling on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation. For frequency estimation it outperforms the adaptive notch filter, discrete Fourier transform, and autocorrelation methods; for phase difference estimation it outperforms the data extension-based correlation, Hilbert transform, quadrature delay estimator, and discrete Fourier transform methods. These improvements contribute to the measurement accuracy of Coriolis mass flowmeters.
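When two equal-frequency sensor signals are sampled over an exact integer number of periods, the phase difference magnitude follows directly from zero-lag correlations; the non-integral-period case is precisely where the extra error that the paper's method suppresses comes from. A minimal sketch with synthetic signals (not the paper's algorithm, just the underlying correlation identity):

```python
import math

def phase_difference(s1, s2):
    # cos(dphi) = <s1*s2> / sqrt(<s1^2><s2^2>) over whole periods;
    # returns the magnitude of the phase difference in radians
    c12 = sum(a * b for a, b in zip(s1, s2))
    c11 = sum(a * a for a in s1)
    c22 = sum(b * b for b in s2)
    return math.acos(max(-1.0, min(1.0, c12 / math.sqrt(c11 * c22))))

# synthetic 50 Hz signals at 5 kHz sampling, exactly 10 periods
fs, f, n, dphi = 5000.0, 50.0, 1000, 0.3
s1 = [math.sin(2 * math.pi * f * k / fs) for k in range(n)]
s2 = [math.sin(2 * math.pi * f * k / fs + dphi) for k in range(n)]
est = phase_difference(s1, s2)  # close to 0.3 rad
```

Truncating the sum mid-period biases the estimate, which motivates methods that are robust to non-integral-period sampling.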
Plouff, Donald
2000-01-01
Gravity observations are directly made or are obtained from other sources by the U.S. Geological Survey in order to prepare maps of the anomalous gravity field and consequently to interpret the subsurface distribution of rock densities and associated lithologic or geologic units. Observations are made in the field with gravity meters at new locations and at reoccupations of previously established gravity "stations." This report illustrates an interactively prompted series of steps needed to convert gravity "readings" to values that are tied to established gravity datums, and includes computer programs to implement those steps. Inasmuch as individual gravity readings have small variations, gravity-meter (instrument) drift may not be smoothly variable, and accommodations may be needed for ties to previously established stations, the reduction process is iterative. Decision-making by the program user is prompted by lists of best values and graphical displays. Notes about irregularities of topography, which affect the value of observed gravity but are not shown in sufficient detail on topographic maps, must be recorded in the field. This report illustrates ways to record field notes (distances, heights, and slope angles) and includes computer programs to convert field notes to gravity terrain corrections. This report includes approaches that may serve as models for other applications, for example: portrayal of system flow; a style of quality control to document and validate computer applications; lack of dependence on proprietary software except for source code compilation; a method of file-searching with a dwindling list; interactive prompting; computer code to write directly in the PostScript (Adobe Systems Incorporated) printer language; and highlighting the four-digit year on the first line of time-dependent data sets for assured Y2K compatibility. The computer source codes provided are written in the Fortran scientific language. In order for the programs to operate, they first
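The reduction described above hinges on distributing instrument drift across a loop of readings that starts and ends at the same base station. A minimal sketch of linear drift removal (the values are illustrative, not from the report; real reductions also involve tide, datum, and terrain corrections):

```python
def remove_drift(readings):
    # readings: (time_in_hours, meter_reading_in_mGal) pairs; the first and
    # last entries reoccupy the same base station, so any difference between
    # them is attributed to linear instrument drift
    t0, g0 = readings[0]
    tn, gn = readings[-1]
    rate = (gn - g0) / (tn - t0)
    return [(t, g - rate * (t - t0)) for t, g in readings]

# hypothetical loop: base, two field stations, base again
loop = [(0.0, 1000.00), (1.5, 1005.05), (3.0, 1002.33), (4.0, 1000.08)]
corrected = remove_drift(loop)  # closure mismatch at the base forced to zero
```

In practice drift is not strictly linear, which is one reason the report's reduction is iterative and supervised by the program user.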
Model-based calculations of fiber output fields for fiber-based spectroscopy
NASA Astrophysics Data System (ADS)
Hernandez, Eloy; Bodenmüller, Daniel; Roth, Martin M.; Kelz, Andreas
2016-08-01
The accurate characterization of the field at the output of optical fibres is of relevance for precision spectroscopy in astronomy. The modal effects of the fibre translate to the illumination of the pupil in the spectrograph and impact the resulting point spread function (PSF). A model is presented, based on the Eigenmode Expansion Method (EEM), that calculates the output field of a given fibre for different manipulations of the input field. The fibre design and mode calculation are done via the commercially available RSoft FemSIM software. We developed a Python script to apply the EEM. Results are shown for different configuration parameters, such as spatial and angular displacements of the input field, spot size and propagation length variations, different transverse fibre geometries, and different wavelengths. This work is part of the phase A study of the fibre system for MOSAIC, a proposed multi-object spectrograph for the European Extremely Large Telescope (ELT-MOS).
Global gravimetric geoid model based a new method
NASA Astrophysics Data System (ADS)
Shen, W. B.; Han, J. C.
2012-04-01
The geoid, defined as the equipotential surface nearest to mean sea level, plays a key role in physical geodesy and in the unification of height datum systems. In this study, we introduce a new method, quite different from the conventional geoid modeling methods (e.g., the Stokes method, the Molodensky method), to determine the global gravimetric geoid (GGG). Based on the new method, and using the external Earth gravity field model EGM2008, the digital topographic model DTM2006.0, and the crust density distribution model CRUST2.0 as the data base, we first determined the inner geopotential field down to a depth D, and then established a GGG model, the accuracy of which is evaluated by comparing with observations from the USA, Australia, parts of Canada, and parts of China. The main idea of the new method is as follows. Given the geopotential field (e.g. EGM2008) outside the Earth, we may determine the inner geopotential field down to a depth D by Newtonian integration, once the density distribution model (e.g. CRUST2.0) of a shallow layer down to depth D is given. Then, based on the definition of the geoid (i.e. an equipotential surface nearest to mean sea level), one may determine the GGG. This study is supported by the Natural Science Foundation of China (grants No. 40974015, No. 41174011, No. 41021061, and No. 41128003).
Behavior of magnetic field and eddy current in a magnetostriction based bi-layered composite
NASA Astrophysics Data System (ADS)
Zhang, Kewei; Zhang, Kehao; Liu, Huifeng; Li, Junlin
2016-12-01
In this paper, we present a theoretical method for studying the behavior of the magnetic field intensity and eddy current inside a magnetostriction-based bi-layered composite. Firstly, the mathematical model for the electromagnetic field in the composite was established. Then, the governing equation determining the magnetic field intensity and eddy current was solved. Furthermore, the effect of the composite's conductivity on the magnetic field intensity and eddy current was discussed. Lastly, by comparison with the well-known R.L. Stoll equation, the magnetic field intensity calculated from our equation showed an error of less than 0.5%.
Phase field approaches of bone remodeling based on TIP
NASA Astrophysics Data System (ADS)
Ganghoffer, Jean-François; Rahouadj, Rachid; Boisse, Julien; Forest, Samuel
2016-01-01
The process of bone remodeling includes a cycle of repair, renewal, and optimization. This adaptation process, in response to variations in external loads and chemical driving factors, involves three main types of bone cells: osteoclasts, which remove the old pre-existing bone; osteoblasts, which form the new bone in a second phase; and osteocytes, sensing cells embedded in the bone matrix, which trigger the aforementioned sequence of events. The remodeling process involves mineralization of the bone in the diffuse interface separating the marrow, which contains all specialized cells, from the newly formed bone. The main objective of this contribution is the setting up of a modeling and simulation framework relying on the phase field method to capture the evolution of the diffuse interface between the new bone and the marrow at the scale of individual trabeculae. The phase field describes the degree of mineralization of this diffuse interface; it varies continuously between zero (no mineral) and unity (fully mineralized phase, i.e. new bone), allowing the consideration of a diffuse moving interface. The modeling framework is the theory of continuous media, for which field equations for the mechanical, chemical, and interfacial phenomena are written based on the thermodynamics of irreversible processes. Additional models for the cellular activity are formulated to describe the coupling of the cell activity responsible for bone production/resorption to the kinetics of the internal variables. Kinetic equations for the internal variables are obtained from a pseudo-potential of dissipation. The combination of the balance equations for the microforce associated with the phase field and the kinetic equations leads to the Ginzburg-Landau equation satisfied by the phase field, with a source term accounting for the dissipative microforce. Simulations illustrating the proposed framework are performed in a one-dimensional situation showing the evolution of
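The Ginzburg-Landau evolution mentioned above can be sketched in one dimension with an explicit update and a standard double-well potential. This is a generic Allen-Cahn sketch, not the paper's full chemo-mechanical model; the potential, mobility M, and gradient coefficient kappa are illustrative choices:

```python
def allen_cahn_step(phi, dx, dt, M=1.0, kappa=1.0):
    # dphi/dt = -M * (dW/dphi - kappa * d2phi/dx2),
    # with double well W = phi^2 (1 - phi)^2 (0 = marrow, 1 = mineralized bone)
    new = phi[:]
    for i in range(1, len(phi) - 1):
        lap = (phi[i - 1] - 2.0 * phi[i] + phi[i + 1]) / dx ** 2
        dW = 2.0 * phi[i] * (1.0 - phi[i]) * (1.0 - 2.0 * phi[i])
        new[i] = phi[i] - dt * M * (dW - kappa * lap)
    return new

# a sharp initial interface relaxes into a smooth diffuse profile
phi = [0.0] * 10 + [1.0] * 10
for _ in range(50):
    phi = allen_cahn_step(phi, dx=1.0, dt=0.2)
```

The explicit time step must satisfy dt < dx^2/(2*M*kappa) for stability; dt = 0.2 respects that bound here.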
Field-Based Teacher Education: Past, Present, and Future.
ERIC Educational Resources Information Center
Bruce, William C.; And Others
This monograph consists of five papers originating from a 1974 conference entitled, "Field-Based Teacher Education for the '80's." The first paper, "Public School-College Cooperation in the Field-Based Education of Teachers (FBTE)--A Historical Perspective," by James L. Slay, focuses on how the historical development of public school cooperation…
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several (N) uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1), at frequencies between 100 and 800 Hz and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
Reliability of field methods for estimating body fat.
Loenneke, Jeremy P; Barnes, Jeremy T; Wilson, Jacob M; Lowery, Ryan P; Isaacs, Melissa N; Pujol, Thomas J
2013-09-01
When health professionals measure the fitness levels of clients, body composition is usually estimated. In practice, the reliability of the measurement may be more important than its validity, as reliability determines how much change is needed to be considered meaningful. Therefore, the purpose of this study was to determine the reliability of two bioelectrical impedance analysis (BIA) devices (in athlete and non-athlete mode) and compare it to that of 3-site skinfold (SKF) readings. Twenty-one college students attended the laboratory on two occasions and had their measurements taken in the following order: body mass, height, SKF, Tanita body fat-350 (BF-350), and Omron HBF-306C. There were no significant pairwise differences between Visit 1 and Visit 2 for any of the estimates (P>0.05). The Pearson product correlations ranged from r = 0.933 for the BF-350 in athlete mode (A) to r = 0.994 for SKF. The ICCs ranged from 0.93 for the BF-350(A) to 0.992 for SKF, and the minimal differences (MDs) ranged from 1.8% for SKF to 5.1% for the BF-350(A). The current study found that SKF and the HBF-306C(A) were the most reliable (<2%) methods of estimating BF%, with the other methods (BF-350, BF-350(A), HBF-306C) producing minimal differences greater than 2%. In conclusion, the SKF method showed the best reliability because of its low minimal difference, suggesting it may be the best field method for tracking changes over time with an experienced tester. However, if technical error is a concern, the practitioner may use the HBF-306C(A), because its minimal difference value is comparable to that of SKF.
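The minimal difference reported above follows from the ICC and the between-subject SD via the standard error of measurement. A sketch with a hypothetical SD (the study's raw SDs are not given in the abstract), using the ICC values it does report:

```python
import math

def minimal_difference(sd, icc, z=1.96):
    # SEM = SD * sqrt(1 - ICC); MD = z * SEM * sqrt(2) for a test-retest pair
    sem = sd * math.sqrt(1.0 - icc)
    return z * sem * math.sqrt(2.0)

# hypothetical between-subject body-fat SD of 6 percentage points
md_skf = minimal_difference(6.0, 0.992)    # high ICC -> small MD
md_bf350 = minimal_difference(6.0, 0.93)   # lower ICC -> larger MD
```

This makes the abstract's point concrete: a device can have a "high" ICC yet still need a several-point change before that change exceeds measurement noise.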
A Multipole Expansion Method for Analyzing Lightning Field Changes
NASA Technical Reports Server (NTRS)
Koshak, William J.; Krider, E. Philip; Murphy, Martin J.
1999-01-01
Changes in the surface electric field are frequently used to infer the locations and magnitudes of lightning-caused changes in thundercloud charge distributions. The traditional procedure is to assume that the charges that are effectively deposited by the flash can be modeled either as a single point charge (the Q model) or a point dipole (the P model). The Q model has four unknown parameters and provides a good description of many cloud-to-ground (CG) flashes. The P model has six unknown parameters and describes many intracloud (IC) discharges. In this paper we introduce a new analysis method that assumes that the change in the cloud charge can be described by a truncated multipole expansion, i.e., there are both monopole and dipole terms in the unknown source distribution, and both terms are applied simultaneously. This method can be used to analyze CG flashes that are accompanied by large changes in the cloud dipole moment and complex IC discharges. If there is enough information content in the measurements, the model can also be generalized to include quadrupole and higher order terms. The parameters of the charge moments are determined using a three-dimensional grid search in combination with a linear inversion; because of this, local minima in the error function and the associated solution ambiguities are avoided. The multipole method has been tested on computer-simulated sources and on natural lightning at the NASA Kennedy Space Center and U.S. Air Force Eastern Range.
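For reference, the single-point-charge (Q) model has a closed-form surface field change: treating the ground as a perfect conductor, an image charge doubles the vertical contribution. A sketch with illustrative charge and altitude values (the sign convention and numbers are ours, not from the paper):

```python
import math

K = 8.9875e9  # Coulomb constant, N m^2 / C^2

def q_model_delta_e(q, h, d):
    # vertical E-field change at the ground a horizontal distance d from a
    # point charge q at altitude h; the factor 2 is the image-charge term
    return 2.0 * K * q * h / (h * h + d * d) ** 1.5

# illustrative CG flash: -20 C of charge lowered from 7 km altitude
de_near = q_model_delta_e(-20.0, 7000.0, 0.0)      # directly underneath
de_far = q_model_delta_e(-20.0, 7000.0, 20000.0)   # 20 km away
```

Fitting this expression to field-change measurements at several stations is what fixes the Q model's four parameters (x, y, z, Q); the charge enters linearly, which is why a grid search over position plus a linear inversion suffices.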
Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye
2017-01-01
The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct the high-dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was dug from the main lobe cutting image and used to scan the side lobe cutting image within a 100×100 pixel region. The position with the largest correlation coefficient between the side lobe cutting image and the dug main lobe cutting image was identified as the best matching point. Finally, the least squares method was used to fit the center of the side lobe schlieren small ball, with an error of less than 1 pixel. The experimental results show that this method enables the accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional reconstruction based on manual splicing, the method improves the efficiency of focal-spot reconstruction and offers better experimental precision. PMID:28207758
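The template matching step can be sketched with a plain normalized cross-correlation search; the tiny synthetic image below stands in for the main/side-lobe cutting images and is not related to the paper's data:

```python
def ncc(a, b):
    # Pearson correlation coefficient between two equal-size patches
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    n = len(fa)
    ma, mb = sum(fa) / n, sum(fb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db)

def best_match(image, template):
    # exhaustive search for the patch most correlated with the template
    th, tw = len(template), len(template[0])
    best, pos = -2.0, (0, 0)
    for i in range(len(image) - th + 1):
        for j in range(len(image[0]) - tw + 1):
            patch = [row[j:j + tw] for row in image[i:i + th]]
            c = ncc(patch, template)
            if c > best:
                best, pos = c, (i, j)
    return pos, best

# synthetic image: smooth gradient with a distinctive pattern planted at (2, 3)
image = [[float(i + j) for j in range(6)] for i in range(6)]
image[2][3], image[2][4], image[3][3], image[3][4] = 9.0, 0.0, 0.0, 9.0
pos, score = best_match(image, [[9.0, 0.0], [0.0, 9.0]])  # finds (2, 3)
```

Normalization by each patch's mean and variance is what makes the score robust to the large brightness difference between main and side lobes.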
Hybrid star structure with the Field Correlator Method
NASA Astrophysics Data System (ADS)
Burgio, G. F.; Zappalà, D.
2016-03-01
We explore the relevance of the color-flavor locking phase in the equation of state (EoS) built with the Field Correlator Method (FCM) for the description of the quark matter core of hybrid stars. For the hadronic phase, we use the microscopic Brueckner-Hartree-Fock (BHF) many-body theory and its relativistic counterpart, the Dirac-Brueckner-Hartree-Fock (DBHF) approach. We find that the main features of the phase transition are directly related to the values of the quark-antiquark potential V1, the gluon condensate G2, and the color-flavor superconducting gap Δ. We confirm that the mapping between the FCM and the CSS (constant speed of sound) parameterization holds true even in the case of paired quark matter. The inclusion of hyperons in the hadronic phase and its effect on the mass-radius relation of hybrid stars is also investigated.
Methods for Quantitative Interpretation of Retarding Field Analyzer Data
Calvey, J.R.; Crittenden, J.A.; Dugan, G.F.; Palmer, M.A.; Furman, M.; Harkay, K.
2011-03-28
Over the course of the CesrTA program at Cornell, over 30 Retarding Field Analyzers (RFAs) have been installed in the CESR storage ring, and a great deal of data has been taken with them. These devices measure the local electron cloud density and energy distribution, and can be used to evaluate the efficacy of different cloud mitigation techniques. Obtaining a quantitative understanding of RFA data requires use of cloud simulation programs, as well as a detailed model of the detector itself. In a drift region, the RFA can be modeled by postprocessing the output of a simulation code, and one can obtain best fit values for important simulation parameters with a chi-square minimization method.
Acoustic spectroscopy: A powerful analytical method for the pharmaceutical field?
Bonacucina, Giulia; Perinelli, Diego R; Cespi, Marco; Casettari, Luca; Cossi, Riccardo; Blasi, Paolo; Palmieri, Giovanni F
2016-04-30
Acoustics is one of the emerging technologies developed to minimize processing, maximize quality and ensure the safety of pharmaceutical, food and chemical products. The operating principle of acoustic spectroscopy is the measurement of the ultrasound pulse intensity and phase after its propagation through a sample. The main goal of this technique is to characterise concentrated colloidal dispersions without dilution, in such a way as to be able to analyse non-transparent and even highly structured systems. This review presents the state of the art of ultrasound-based techniques in pharmaceutical pre-formulation and formulation steps, showing their potential, applicability and limits. It reports in a simplified version the theory behind acoustic spectroscopy, describes the most common equipment on the market, and finally overviews different studies performed on systems and materials used in the pharmaceutical or related fields.
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS - its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce a concept of the origin intensity factors that isolate singularities of the fundamental solution and its derivatives using some appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all three methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors makes it possible to evaluate the disturbing potential and gravity disturbances directly on the Earth's surface where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far zones' contributions is applied.
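A minimal sketch of the closely related MFS for the Laplace equation may make the collocation idea concrete (note it uses the fictitious source boundary that SBM is designed to remove). The unit-disk geometry and Dirichlet data below are illustrative assumptions.

```python
import numpy as np

n = 64
theta = 2.0 * np.pi * np.arange(n) / n
colloc = np.column_stack([np.cos(theta), np.sin(theta)])   # collocation nodes on the real boundary
sources = 2.0 * colloc                                     # fictitious source circle (radius 2)

def G(p, q):
    """Fundamental solution of the 2-D Laplace equation."""
    return -np.log(np.linalg.norm(p - q)) / (2.0 * np.pi)

# collocation: match the boundary data u = x (a harmonic function) at the nodes
A = np.array([[G(c, s) for s in sources] for c in colloc])
u_bdry = colloc[:, 0]
coef, *_ = np.linalg.lstsq(A, u_bdry, rcond=None)

def u(p):
    """Approximate harmonic solution as a superposition of point sources."""
    return sum(a * G(p, s) for a, s in zip(coef, sources))

val = u(np.array([0.3, 0.2]))   # exact harmonic continuation gives u = x = 0.3
```

The least-squares solve stands in for the regularization that a severely ill-conditioned MFS matrix requires; SBM instead places the sources on the boundary itself and handles the resulting singularities via origin intensity factors.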
Vocabulary Teaching Based on Semantic-Field
ERIC Educational Resources Information Center
Wangru, Cao
2016-01-01
Vocabulary is an indispensable part of language and it is of vital importance for second language learners. Wilkins (1972) points out: "without grammar very little can be conveyed, without vocabulary nothing can be conveyed." Vocabulary teaching has experienced several stages characterized by grammatical-translation method, audio-lingual…
NASA Astrophysics Data System (ADS)
Kother, L. K.; Hammer, M. D.; Finlay, C. C.; Olsen, N.
2014-12-01
We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010. Three component vector field data are utilized at all latitudes. Estimates of core and large-scale magnetospheric sources are removed from the satellite measurements using the CHAOS-4 model. Quiet-time and night-side data selection criteria are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid with an increasing grid resolution towards the airborne survey area. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model regularization (either quadratic or maximum entropy) and Huber weighting. Data error covariance matrices are implemented, accounting for the dependence of data error variances on quasi-dipole latitudes. Results show good consistency with the CM5 and MF7 models for spherical harmonic degrees up to n = 95. Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high resolution, investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data is available. To illustrate this possibility, we present preliminary results from a case study combining satellite measurements and local airborne scalar magnetic measurements of the Norwegian coastline.
Virtual fields method coupled with moiré interferometry: Special considerations and application
NASA Astrophysics Data System (ADS)
Zhou, Mengmeng; Xie, Huimin; Wu, Lifu
2016-12-01
The virtual fields method (VFM) is a novel, highly efficient, non-iterative tool for the identification of the constitutive parameters of materials. The VFM can obtain several constitutive parameters based on the full-field deformation of the specimen measured in a single test. However, the available results demonstrate that the accuracy of the identification result is strongly dependent on the quality of the deformation field, which is generally measured using optical methods. Especially, in the case where a small deformation is applied under elastic loading, the image noise and measurement error will exhibit a significant influence on the identification results. By combining the VFM with moiré interferometry (MI), a MI-based VFM is used to identify the parameters of an orthotropic linear elastic material. A numerical experiment is conducted to examine the feasibility of this method. From the analysis results, we determine that two factors exhibit an influence on the identification accuracy. The reinforcement direction of the orthotropic material is one factor, and the other is the noise in the deformation field. This MI-based VFM is then applied to determine the mechanical parameters of a unidirectional carbon fiber composite material. In the measurement, a three-point bending load is applied to the specimens. A high-density grating with a frequency of 1200 lines/mm is replicated on the specimen surface and used for measuring the in-plane deformation fields using a moiré interferometer. The obtained deformation fields are taken as the inputs of the VFM identification process, and the elastic properties of the materials are identified. The obtained results verify the advantages of the proposed method: high accuracy and good noise immunity.
Comparison of aquatic macroinvertebrate samples collected using different field methods
Lenz, Bernard N.; Miller, Michael A.
1996-01-01
Government agencies, academic institutions, and volunteer monitoring groups in the State of Wisconsin collect aquatic macroinvertebrate data to assess water quality. Sampling methods differ among agencies, reflecting the differences in the sampling objectives of each agency. Lack of information about data comparability impedes data sharing among agencies, which can result in duplicated sampling efforts or the underutilization of available information. To address these concerns, comparisons were made of macroinvertebrate samples collected from wadeable streams in Wisconsin by personnel from the U.S. Geological Survey-National Water Quality Assessment Program (USGS-NAWQA), the Wisconsin Department of Natural Resources (WDNR), the U.S. Department of Agriculture-Forest Service (USDA-FS), and volunteers from the Water Action Volunteer-Water Quality Monitoring Program (WAV). This project was part of the Intergovernmental Task Force on Monitoring Water Quality (ITFM) Wisconsin Water Resources Coordination Project. The numbers, types, and environmental tolerances of the organisms collected were analyzed to determine if the four different field methods that were used by the different agencies and volunteer groups provide comparable results. Additionally, this study compared the results of samples taken from different locations and habitats within the same streams.
Test of Scintillometer Saturation Correction Methods Using Field Experimental Data
NASA Astrophysics Data System (ADS)
Kleissl, J.; Hartogensis, O. K.; Gomez, J. D.
2010-12-01
Saturation of large aperture scintillometer (LAS) signals can result in sensible heat flux measurements that are biased low. A field study with LASs of different aperture sizes and path lengths was performed to investigate the onset of, and corrections for, signal saturation. Saturation already occurs at C_n^2 ≈ 0.074 D^{5/3} λ^{1/3} L^{-8/3}, where C_n^2 is the structure parameter of the refractive index, D the aperture size, λ the wavelength, and L the transect length; this onset is smaller than theoretically derived saturation limits. At a transect length of 1 km, a height of 2.5 m, and aperture ≈0.15 m, the correction factor exceeds 5% already at C_n^2 = 2×10^{-12} m^{-2/3}, which will affect many practical applications of scintillometry. The Clifford correction method, which only depends on C_n^2 and the transect geometry, provides good saturation corrections over the range of conditions observed in our study. The saturation correction proposed by Ochs and Hill results in correction factors that are too small in large saturation regimes. An inner length scale dependence of the saturation correction factor was not observed. Thus for practical applications the Clifford correction method should be applied.
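The quoted saturation-onset criterion is straightforward to evaluate; the 940 nm wavelength below is an assumption (a typical LAS operating wavelength), not a value from the abstract.

```python
def cn2_saturation_onset(D, lam, L):
    """Structure parameter at which LAS saturation sets in:
    C_n^2 ≈ 0.074 * D^(5/3) * λ^(1/3) * L^(-8/3)
    (D: aperture size in m, lam: wavelength in m, L: transect length in m).
    """
    return 0.074 * D ** (5.0 / 3.0) * lam ** (1.0 / 3.0) * L ** (-8.0 / 3.0)

# aperture 0.15 m, assumed near-infrared wavelength 940 nm, 1 km transect
onset = cn2_saturation_onset(0.15, 940e-9, 1000.0)
```

The scaling makes the field-design trade-offs explicit: a larger aperture pushes saturation to stronger turbulence, while a longer transect brings it on earlier.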
Generalized method of eigenoscillations for near-field optical microscopy
NASA Astrophysics Data System (ADS)
Jiang, Bor-Yuan; Zhang, Lingfeng; Castro Neto, Antonio; Basov, Dimitri; Fogler, Michael
2015-03-01
Electromagnetic interaction between a sub-wavelength particle (the ``probe'') and a material surface (the ``sample'') is studied theoretically. The interaction is shown to be governed by a series of resonances (eigenoscillations), corresponding to surface polariton modes localized near the probe. The resonance parameters depend on the dielectric function and geometry of the probe, as well as the surface reflectivity of the material. Calculation of such resonances is carried out for several axisymmetric particle shapes (spherical, spheroidal, and pear-shaped). For spheroids an efficient numerical method is proposed, capable of handling cases of large or strongly momentum-dependent surface reflectivity. The method is applied to modeling near-field spectroscopy studies of various materials. For highly resonant materials such as aluminum oxide (by itself or covered with graphene) a rich structure of the simulated signal is found, including multi-peak spectra and nonmonotonic approach curves. These features have a strong dependence on physical parameters, e.g., the probe shape. For less resonant materials such as silicon oxide the dependence is weaker, and the spheroid model is generally applicable.
Deng, Yelin; Paraskevas, Dimos; Cao, Shi-Jie
2017-03-22
This study focuses on a detailed Life Cycle Assessment (LCA) for flax cultivation in Northern France. Nitrogen related field emissions are derived both from a process-oriented DeNitrification-DeComposition (DNDC) method and the generic Intergovernmental Panel on Climate Change (IPCC) method. Since the IPCC method is synthesised from field measurements at sites with various soil types, climate conditions, and crops, it contains significant uncertainties. In contrast, the outputs from the DNDC method are considered more site specific, as it is built according to complex models of soil science. As demonstrated in this paper, the emission factors from the DNDC method and the recommended values from the IPCC method exhibit significant variations for the case of flax cultivation. The DNDC-based emission factor for direct N2O emission, which is a strong greenhouse gas, is 0.25-0.5%, significantly lower than the recommended 1% level derived from the IPCC method. The DNDC method leads to a reduction of 17% in the impact category of climate change per kg retted flax straw production from the level obtained from the IPCC method. Much higher reductions are recorded for particulate matter formation, terrestrial acidification, and marine eutrophication impact categories. Meanwhile, based on the DNDC and IPCC methods, a comparative LCA per kg flax straw is presented. For both methods, sensitivity analyses are performed, together with a Monte Carlo comparison of the uncertainty parameterisations of the N2O estimates. The DNDC method incorporates more relevant field emissions from the agricultural life cycle phase, which can also improve the quality of the Life Cycle Inventory as well as allow more precise uncertainty calibration in the LCA inventory.
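The gap between the two emission factors can be made concrete with a short sketch. The applied-nitrogen amount is an illustrative assumption; the 44/28 factor is the standard molar-mass conversion from N2O-N to N2O.

```python
def direct_n2o_emission(n_applied_kg, emission_factor):
    """Direct N2O emission from applied nitrogen: EF gives kg N2O-N per
    kg N applied, converted to kg N2O by the molar-mass ratio 44/28."""
    return n_applied_kg * emission_factor * 44.0 / 28.0

n_applied = 80.0   # kg N applied per hectare (illustrative assumption)
ipcc = direct_n2o_emission(n_applied, 0.01)         # IPCC default EF = 1%
dndc_low = direct_n2o_emission(n_applied, 0.0025)   # DNDC-based EF = 0.25%
dndc_high = direct_n2o_emission(n_applied, 0.005)   # DNDC-based EF = 0.5%
```

Even at the upper DNDC estimate, the direct N2O flux is half the IPCC default, which propagates into the 17% reduction in the climate-change impact category reported above.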
Using Field Trips and Field-Based Laboratories to Teach Undergraduate Soil Science
NASA Astrophysics Data System (ADS)
Brevik, Eric C.; Steffan, Joshua; Hopkins, David
2015-04-01
Classroom activities can provide important background information allowing students to understand soils. However, soils are formed in nature; therefore, understanding their properties and spatial relationships in the field is a critical component for gaining a comprehensive and holistic understanding of soils. Field trips and field-based laboratories provide students with the field experiences and skills needed to gain this understanding. Field studies can 1) teach students the fundamentals of soil descriptions, 2) expose students to features (e.g., structure, redoximorphic features, clay accumulation, etc.) discussed in the classroom, and 3) allow students to verify for themselves concepts discussed in the more theoretical setting of the classroom. In each case, actually observing these aspects of soils in the field reinforces and improves upon classroom learning and comprehension. In addition, the United States Department of Agriculture's Natural Resources Conservation Service has identified a lack of fundamental field skills as a problem when they hire recent soil science graduates, thereby demonstrating the need for increased field experiences for the modern soil science student. In this presentation we will provide examples of field trips and field-based laboratories that we have designed for our undergraduate soil science classes, discuss the learning objectives, and provide several examples of comments our students have made in response to these field experiences.
The system analysis of light field information collection based on the light field imaging
NASA Astrophysics Data System (ADS)
Wang, Ye; Li, Wenhua; Hao, Chenyang
2016-10-01
Augmented reality(AR) technology is becoming the study focus, and the AR effect of the light field imaging makes the research of light field camera attractive. The micro array structure was adopted in most light field information acquisition system(LFIAS) since emergence of light field camera, micro lens array(MLA) and micro pinhole array(MPA) system mainly included. It is reviewed in this paper the structure of the LFIAS that the Light field camera commonly used in recent years. LFIAS has been analyzed based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, plane grating system, we call it "micro aperture array(MAA." And the LFIAS are analyzed based on the knowledge of information optics; This paper proves that there is a little difference in the multiple image produced by the plane grating system. And the plane grating system can collect and record the amplitude and phase information of the field light.
Defeaturing CAD models using a geometry-based size field and facet-based reduction operators.
Quadros, William Roshan; Owen, Steven James
2010-04-01
We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field and a method to remove the irrelevant features via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction by using a geometry-based size field. This is accomplished by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity goes below a user specified threshold value then it is identified as an irrelevant feature and is marked for reduction. The reduction of marked facet entities is primarily performed using an edge collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and that of original CAD model is maintained in order to decode the attributes and boundary conditions applied on the original CAD entities onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.
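The core reduction step, collapsing entities whose local size falls below a user threshold, can be illustrated in one dimension. The polyline analogue below is a deliberate simplification of the facet-based edge-collapse operator described above; the coordinates and threshold are assumptions.

```python
def collapse_edges(points, threshold):
    """Collapse polyline edges shorter than the threshold by dropping the
    second endpoint: a 1-D analogue of the facet edge-collapse operator
    used to remove features smaller than the size field allows."""
    kept = [points[0]]
    for p in points[1:]:
        q = kept[-1]
        if ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 >= threshold:
            kept.append(p)
    return kept

# a square outline with two tiny (irrelevant) features on its edges
pts = [(0, 0), (0.05, 0), (1, 0), (1, 1), (1.02, 1.0), (0, 1)]
simplified = collapse_edges(pts, 0.1)
```

In the actual method the threshold comes from a geometry-based size field rather than a constant, and validity of the discrete topology must be checked at every collapse.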
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
1996-01-01
We first report on our current progress in the area of explicit methods for tangent curve computation. The basic idea of this method is to decompose the domain into a collection of triangles (or tetrahedra) and assume linear variation of the vector field over each cell. With this assumption, the equations which define a tangent curve become a system of linear, constant coefficient ODE's which can be solved explicitly. There are five different representations of the solution depending on the eigenvalues of the Jacobian. The analysis of these five cases is somewhat similar to the phase plane analysis often associated with critical point classification within the context of topological methods, but it is not exactly the same. There are some critical differences. Moving from one cell to the next as a tangent curve is tracked requires the computation of the exit point, which is an intersection of the solution of the constant coefficient ODE and the edge of a triangle. There are two possible approaches to this root computation problem. We can express the tangent curve in parametric form and substitute into an implicit form for the edge, or we can express the edge in parametric form and substitute into an implicit form of the tangent curve. Normally the solution of a system of ODE's is given in parametric form, and so the first approach is the most accessible and straightforward. The second approach requires the 'implicitization' of these parametric curves. The implicitization of parametric curves can often be rather difficult, but in this case we have been successful and have been able to develop algorithms and subsequent computer programs for both approaches. We will give these details along with some comparisons in a forthcoming research paper on this topic.
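The case analysis on the eigenvalues of the per-cell Jacobian can be sketched via the standard trace/determinant classification of a 2-D linear field; the category names are the usual phase-plane ones, which the abstract notes are similar to but not identical with its five solution representations.

```python
def classify_critical_point(a, b, c, d):
    """Classify the critical point of the linear field v(x) = J x with
    J = [[a, b], [c, d]] from the eigenvalue structure of J
    (real/complex, signs), via the trace and determinant."""
    tr = a + d
    det = a * d - b * c
    disc = tr * tr - 4.0 * det
    if det < 0:
        return "saddle"           # real eigenvalues of opposite sign
    if disc < 0:
        return "center" if tr == 0 else "spiral"   # complex eigenvalues
    return "node"                 # real eigenvalues of the same sign
```

Within a cell of the given eigenvalue type, the tangent curve has a closed-form parametric solution, which is what makes the explicit exit-point computation possible.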
Bayesian methods for parameter estimation in effective field theories
Schindler, M.R.; Phillips, D.R.
2009-03-15
We demonstrate and explicate Bayesian methods for fitting the parameters that encode the impact of short-distance physics on observables in effective field theories (EFTs). We use Bayes' theorem together with the principle of maximum entropy to account for the prior information that these parameters should be natural, i.e., O(1) in appropriate units. Marginalization can then be employed to integrate the resulting probability density function (pdf) over the EFT parameters that are not of specific interest in the fit. We also explore marginalization over the order of the EFT calculation, M, and over the variable, R, that encodes the inherent ambiguity in the notion that these parameters are O(1). This results in a very general formula for the pdf of the EFT parameters of interest given a data set, D. We use this formula and the simpler 'augmented χ²' in a toy problem for which we generate pseudo-data. These Bayesian methods, when used in combination with the 'naturalness prior', facilitate reliable extractions of EFT parameters in cases where χ² methods are ambiguous at best. We also examine the problem of extracting the nucleon mass in the chiral limit, M₀, and the nucleon sigma term, from pseudo-data on the nucleon mass as a function of the pion mass. We find that Bayesian techniques can provide reliable information on M₀, even if some of the data points used for the extraction lie outside the region of applicability of the EFT.
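The 'augmented χ²' mentioned above adds a Gaussian naturalness penalty to the ordinary misfit. A minimal sketch, with an illustrative one-parameter linear model in place of an actual EFT observable:

```python
def augmented_chi2(params, data, model_fn, sigma, R=1.0):
    """Ordinary chi-square plus a Gaussian 'naturalness' penalty,
    sum (a_i / R)^2, encoding the prior that parameters are O(1)
    in units set by R."""
    chi2 = sum((d - m) ** 2 / s ** 2
               for d, m, s in zip(data, model_fn(params), sigma))
    return chi2 + sum((a / R) ** 2 for a in params)

# toy: one-parameter linear model evaluated on exact pseudo-data
xs = [1.0, 2.0]
model = lambda p: [p[0] * x for x in xs]
data = model([1.0])
val = augmented_chi2([1.0], data, model, [1.0, 1.0])   # residuals vanish; only the prior term remains
```

The penalty regularizes fits in which the data alone leave flat directions, which is the situation where plain χ² methods become "ambiguous at best."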
A Calibration Method for Wide-Field Multicolor Photometric Systems
NASA Astrophysics Data System (ADS)
Zhou, Xu; Chen, Jiansheng; Xu, Wen; Zhang, Mei; Jiang, Zhaoji; Zheng, Zhongyuan; Zhu, Jin
1999-07-01
The purpose of this paper is to present a method to self-calibrate the spectral energy distribution (SED) of objects in a survey based on the fitting of a SED library to observed multicolor photometry. We adopt, for illustrative purposes, the Vilnius and Gunn & Stryker SED libraries. The self-calibration technique can improve the quality of observations which are not taken under perfectly photometric conditions. The more passbands used for the photometry, the better the results. This technique has been applied to the BATC 15 passband CCD survey.
Visual field interpretation with a personal computer based neural network.
Mutlukan, E; Keating, D
1994-01-01
The Computer Assisted Touch Screen (CATS) and Computer Assisted Moving Eye Campimeter (CAMEC) are personal computer (PC)-based video-campimeters which employ multiple and single static stimuli on a cathode ray tube respectively. Clinical studies show that CATS and CAMEC provide comparable results to more expensive conventional visual field test devices. A neural network has been designed to classify visual field data from PC-based video-campimeters to facilitate diagnostic interpretation of visual field test results by non-experts. A three-layer back propagation network was designed, with 110 units in the input layer (each unit corresponding to a test point on the visual field test grid), a hidden layer of 40 processing units, and an output layer of 27 units (each one corresponding to a particular type of visual field pattern). The network was trained by a training set of 540 simulated visual field test result patterns, including normal, glaucomatous and neuro-ophthalmic defects, for up to 20,000 cycles. The classification accuracy of the network was initially measured with a previously unseen test set of 135 simulated fields and further tested with a genuine test result set of 100 neurological and 200 glaucomatous fields. Classification accuracies of 91-97% with simulated field results and 65-100% with genuine field results were achieved. This suggests that neural networks incorporated into PC-based video-campimeters may enable correct interpretation of results in non-specialist clinics or in the community.
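The 110-40-27 architecture described above can be sketched as a forward pass. The weights below are random (untrained) and the input is synthetic, so this illustrates only the network shape, not the trained classifier.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# layer sizes from the abstract: 110 field test points -> 40 hidden -> 27 pattern classes
W1 = rng.normal(0.0, 0.1, (40, 110)); b1 = np.zeros(40)
W2 = rng.normal(0.0, 0.1, (27, 40));  b2 = np.zeros(27)

def classify(field_points):
    """Forward pass; returns the index of the most activated field-pattern class."""
    h = sigmoid(W1 @ field_points + b1)
    out = sigmoid(W2 @ h + b2)
    return int(np.argmax(out))

pattern = rng.random(110)   # a synthetic visual-field sensitivity pattern
cls = classify(pattern)
```

Training by back-propagation (the 20,000 cycles mentioned above) would adjust W1, W2, b1, and b2 against the 540 simulated training patterns.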
Teaching Geographic Field Methods to Cultural Resource Management Technicians
ERIC Educational Resources Information Center
Mires, Peter B.
2004-01-01
There are perhaps 10,000 technicians in the United States who work in the field known as cultural resource management (CRM). The typical field technician possesses a bachelor's degree in anthropology, geography, or a closely allied discipline. The author's experience has been that few CRM field technicians receive adequate undergraduate training…
METHOD OF JOINING CARBIDES TO BASE METALS
Krikorian, N.H.; Farr, J.D.; Witteman, W.G.
1962-02-13
A method is described for joining a refractory metal carbide such as UC or ZrC to a refractory metal base such as Ta or Nb. The method comprises carburizing the surface of the metal base and then sintering the base and carbide at temperatures of about 2000 deg C in a non-oxidizing atmosphere, the base and carbide being held in contact during the sintering step. To reduce the sintering temperature and time, a sintering aid such as iron, nickel, or cobalt is added to the carbide, not to exceed 5 wt%. (AEC)
A New Method for Reconstruction of Coronal Force-Free Magnetic Fields
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwangson; Lim, Daye; Kim, Kap-Sung
2016-04-01
We present a new method for coronal magnetic field reconstruction based on vector magnetogram data. This method belongs to a variational method in that the magnetic energy of the system is decreased as the iteration proceeds. We employ a vector potential rather than the magnetic field vector in order to be free from the numerical divergence B problem. Whereas most methods employing three components of the magnetic field vector overspecify the boundary conditions, we only impose the normal components of magnetic field and current density as the bottom boundary conditions. Previous methods using a vector potential need to adjust the bottom boundary conditions continually, but we fix the bottom boundary conditions once and for all. To minimize the effect of the obscure lateral and top boundary conditions, we have adopted a nested grid system, which can accommodate as large a computational domain as needed without consuming excessive computational resources. At the top boundary, we have implemented the source surface condition. We have tested our method with the analytic solution by Low & Lou (1990) as a reference. When the solution is given only at the bottom boundary, our method excels in most figures of merit devised by Schrijver et al. (2006). We have also applied our method to the active region AR 11974, in which two M class flares and a halo CME took place. Our reconstructed field shows three sigmoid structures in the lower corona and two interwound flux tubes in the upper corona. The former seem to cause the observed flares and the latter seem to be responsible for the global eruption, i.e., the CME.
Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M.
2016-01-01
Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data. PMID:28090605
Method for comparing content based image retrieval methods
NASA Astrophysics Data System (ADS)
Barnard, Kobus; Shirahatti, Nikhil V.
2003-01-01
We assume that the goal of content based image retrieval is to find images which are both semantically and visually relevant to users based on image descriptors. These descriptors are often provided by an example image--the query by example paradigm. In this work we develop a very simple method for evaluating such systems based on large collections of images with associated text. Examples of such collections include the Corel image collection, annotated museum collections, news photos with captions, and web images with associated text based on heuristic reasoning on the structure of typical web pages (such as used by Google(tm)). The advantage of using such data is that it is plentiful, and the method we propose can be automatically applied to hundreds of thousands of queries. However, it is critical that such a method be verified against human usage, and to do this we evaluate over 6000 query/result pairs. Our results strongly suggest that at least in the case of the Corel image collection, the automated measure is a good proxy for human evaluation. Importantly, our human evaluation data can be reused for the evaluation of any content based image retrieval system and/or the verification of additional proxy measures.
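The associated-text evaluation can be sketched as a word-overlap score between the annotations of the query image and those of a retrieved image. This particular scoring function is an illustrative assumption; the abstract does not specify the exact measure used.

```python
def annotation_overlap(query_words, result_words):
    """Proxy relevance score: the fraction of the query image's annotation
    words that also annotate the retrieved image (1.0 = full overlap)."""
    q, r = set(query_words), set(result_words)
    return len(q & r) / len(q) if q else 0.0

score = annotation_overlap(["tiger", "grass", "water"],
                           ["tiger", "grass", "sky"])
```

Because the annotations come for free with large collections such as Corel, such a score can be computed automatically over hundreds of thousands of query/result pairs, and then verified against human judgments as the abstract describes.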
ERIC Educational Resources Information Center
Angeli, Charoula; Valanides, Nicos
2013-01-01
The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…
Fourier method for recovering acoustic sources from multi-frequency far-field data
NASA Astrophysics Data System (ADS)
Wang, Xianchao; Guo, Yukun; Zhang, Deyue; Liu, Hongyu
2017-03-01
We consider an inverse source problem of determining a source term in the Helmholtz equation from multi-frequency far-field measurements. Based on the Fourier series expansion, we develop a novel non-iterative reconstruction method for solving the problem. A promising feature of this method is that it utilizes the data from only a few observation directions for each frequency. Theoretical uniqueness and stability analysis are provided. Numerical experiments are conducted to illustrate the effectiveness and efficiency of the proposed method in both two and three dimensions.
NASA Astrophysics Data System (ADS)
Finke, G.; Kujawińska, M.; Kozacki, T.; Zaperty, W.
2016-09-01
In this paper we propose a method which makes it possible to overcome a basic functional problem of holographic displays with naked-eye observation: the delivered images are too small and visible only within narrow viewing angles. The solution is based on combining the spatiotemporal multiplexing method with a 4f optical system. It makes it possible to increase the aperture of a holographic display and extend the angular visual field of view. The applicability of the modified display is evidenced by Wigner distribution analysis of holographic imaging with the spatiotemporal multiplexing method and by the experiments performed at the display demonstrator.
Full field imaging based instantaneous hyperspectral absolute refractive index measurement
Baba, Justin S; Boudreaux, Philip R
2012-01-01
Multispectral refractometers typically measure refractive index (RI) at discrete monochromatic wavelengths via a serial process. We report on the demonstration of a white-light, full field imaging based refractometer capable of instantaneous multispectral measurement of the absolute RI of clear liquid/gel samples across the entire visible light spectrum. The broad optical bandwidth refractometer is capable of hyperspectral measurement of RI in the range 1.30-1.70 between 400 nm and 700 nm, with a maximum error of 0.0036 units (0.24% of actual) at 414 nm for an RI = 1.50 sample. We present system design and calibration method details as well as results from a system validation sample.
Creating analytically divergence-free velocity fields from grid-based data
NASA Astrophysics Data System (ADS)
Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.
2016-10-01
We present a method, based on B-splines, to calculate a C2-continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method yields qualitatively and quantitatively superior trajectories, which in turn result in more accurate identification of Lagrangian coherent structures.
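The curl-of-a-potential idea can be sketched in 2D (a stand-in for the paper's 3D B-spline construction; the potential psi below is an assumed analytic example): any velocity obtained as v = (d psi/dy, -d psi/dx) from a scalar potential is divergence-free by construction, which a numerical check confirms.

```python
import math

# Assumed analytic stand-in for the B-spline vector potential.
def psi(x, y):
    return math.sin(x) * math.cos(y)

def velocity(x, y, h=1e-5):
    # "curl" of the 2D potential via central differences
    u = (psi(x, y + h) - psi(x, y - h)) / (2 * h)    # d(psi)/dy
    v = -(psi(x + h, y) - psi(x - h, y)) / (2 * h)   # -d(psi)/dx
    return u, v

def divergence(x, y, h=1e-4):
    # du/dx + dv/dy should vanish identically for a curl-derived field
    du_dx = (velocity(x + h, y)[0] - velocity(x - h, y)[0]) / (2 * h)
    dv_dy = (velocity(x, y + h)[1] - velocity(x, y - h)[1]) / (2 * h)
    return du_dx + dv_dy

max_div = max(abs(divergence(0.3 * i, 0.2 * j))
              for i in range(5) for j in range(5))
```

The residual divergence is limited only by finite-difference error; the paper's analytic curl makes it exactly zero.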
Evaluation of different field methods for measuring soil water infiltration
NASA Astrophysics Data System (ADS)
Pla-Sentís, Ildefonso; Fonseca, Francisco
2010-05-01
Soil infiltrability, together with rainfall characteristics, is the most important hydrological parameter for the evaluation and diagnosis of the soil water balance and soil moisture regime. Those balances and regimes are the main regulating factors of the on-site water supply to plants and other soil organisms and of other important processes such as runoff, surface and mass erosion, and drainage, which in turn affect sedimentation, flooding, soil and water pollution, and water supply for various purposes (population, agriculture, industry, hydroelectricity). Therefore, the direct measurement of water infiltration rates, or its indirect deduction from other soil characteristics or properties, has become indispensable for the evaluation and modelling of the previously mentioned processes. Indirect deductions from soil characteristics measured under laboratory conditions in the same or other soils, through so-called "pedo-transfer" functions, have proved to be of limited value in most cases. Direct "in situ" field evaluations are to be preferred in any case. In this contribution we present the results of past experience in measuring soil water infiltration rates in many different soils and land conditions, and their use in deducing soil water balances under variable climates. We also present and discuss recent results obtained by comparing different methods, using double- and single-ring infiltrometers, rainfall simulators, and disc permeameters of different sizes, in soils with very contrasting surface and profile characteristics and conditions, including stony soils and steeply sloping lands. It is concluded that no method is universally applicable to every soil and land condition, and that in many cases the results are significantly influenced by the way a particular method or instrument is used and by alterations in soil conditions caused by land management, but also due to the manipulation of the surface
Examples of Information Technology in Field-based Educational Settings
NASA Astrophysics Data System (ADS)
Knoop, P.; van der Pluijm, B.; Dey, E.; Burn, H.
2007-12-01
Over the last five years we have utilized ruggedized Tablet PCs and Pocket PCs in a variety of summer field courses at our Camp Davis Rocky Mountain Field Station, near Jackson, WY, as well as during departmental field trips. The courses involved range from upper-level field geology to lower-level introductory geology, as well as a mid-level environmental science course. During this period we gained considerable experience with how to integrate information technology into field courses and field trips, as we experimented with a range of hardware and software combinations as well as different teaching approaches, some more successful than others. During much of this time we have also collaborated with external educational researchers to help us assess and understand the impact of this evolving approach to field-based instruction. Presented here are some example cases of how information technology can be used in the field for educational purposes: for mapping projects in field courses, as a digital field notebook and reference library on field trips, and to support a mobile classroom while students are dispersed among vehicles or across a field area. We also present results from the educational evaluation of this work, which indicate that students see information technology as an important tool for their work rather than as a novelty, and that it provides them with important visualization capabilities, not available with traditional paper mapping techniques, that enhance their understanding.
Integrating Field-Based Research into the Classroom: An Environmental Sampling Exercise
ERIC Educational Resources Information Center
DeSutter, T.; Viall, E.; Rijal, I.; Murdoff, M.; Guy, A.; Pang, X.; Koltes, S.; Luciano, R.; Bai, X.; Zitnick, K.; Wang, S.; Podrebarac, F.; Casey, F.; Hopkins, D.
2010-01-01
A field-based, soil methods, and instrumentation course was developed to expose graduate students to numerous strategies for measuring soil parameters. Given the northern latitude of North Dakota State University and the rapid onset of winter, this course met once per week for the first 8 weeks of the fall semester and centered on the field as a…
Graphical Methods for Quantifying Macromolecules through Bright Field Imaging
Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D.; Parvin, Bahram
2008-08-14
Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixation, ambiguities introduced by color composition, and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images into gray scale and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph-cut algorithm. In rare cases where the nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, the proposed method (i) improves color decomposition, (ii) has better noise immunity, (iii) is more invariant to initial conditions, and (iv) has superior computing performance.
BTTB-RRCG method for downward continuation of potential field data
NASA Astrophysics Data System (ADS)
Zhang, Yile; Wong, Yau Shu; Lin, Yuanfang
2016-03-01
This paper presents a conjugate gradient (CG) method for accurate and robust downward continuation of potential field data. By exploiting the Block-Toeplitz Toeplitz-Block (BTTB) structure, the storage requirement and the computational complexity can be significantly reduced. Unlike wavenumber-domain regularization methods based on the fast Fourier transform, the BTTB-based conjugate gradient method introduces few artifacts near the boundary. The application of re-weighted regularization in the space domain significantly improves the stability of the CG scheme for noisy data. Synthetic data with different levels of added noise, as well as real field data, are used to validate the effectiveness of the proposed scheme, and the computed results are compared with those of recently proposed wavenumber-domain methods and the Taylor series method. The simulation results verify that the proposed scheme is superior to the existing methods considered in this study in terms of accuracy and robustness. The proposed scheme is a powerful computational tool applicable to large-scale data at modest computational cost.
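The storage advantage of the Toeplitz structure can be sketched with a toy CG solver (this is an illustrative example, not the authors' BTTB code; the 5x5 system and tolerances are assumptions): only the first column of a symmetric Toeplitz matrix is stored, and the matrix-vector product inside CG is built from it.

```python
# First column fully defines the 5x5 symmetric Toeplitz matrix A (assumed
# diagonally dominant, hence symmetric positive definite).
first_col = [2.0, -0.5, 0.1, 0.0, 0.0]
n = len(first_col)

def matvec(x):
    # y_i = sum_j t_{|i-j|} x_j: Toeplitz product from the first column only
    return [sum(first_col[abs(i - j)] * x[j] for j in range(n)) for i in range(n)]

def cg(b, iters=50, tol=1e-12):
    # textbook conjugate gradients on A x = b
    x = [0.0] * n
    r = b[:]              # residual for the zero initial guess
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

b = [1.0, 0.0, 0.0, 0.0, 1.0]
x = cg(b)
residual = max(abs(matvec(x)[i] - b[i]) for i in range(n))
```

For BTTB matrices the same idea applies blockwise, and in practice the matvec is accelerated with FFTs via circulant embedding.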
Regularization methods for near-field acoustical holography.
Williams, E G
2001-10-01
The reconstruction of the pressure and normal surface velocity provided by near-field acoustical holography (NAH) from pressure measurements made near a vibrating structure is a linear, ill-posed inverse problem due to the existence of strongly decaying, evanescent-like waves. Regularization provides a technique for overcoming the ill-posedness and generates a solution to the linear problem in an automated way. We present four robust methods for regularization: the standard Tikhonov procedure along with a novel improved version, Landweber iteration, and the conjugate gradient approach. Each of these approaches can be applied to all forms of interior or exterior NAH problems: planar, cylindrical, spherical, and conformal. We also study two parameter selection procedures, the Morozov discrepancy principle and generalized cross-validation, which are crucial to any regularization theory. In particular, we concentrate here on planar and cylindrical holography. These forms of NAH, which rely on the discrete Fourier transform, are important due to their popularity and their tremendous computational speed. In order to use regularization theory for the separable-geometry problems we reformulate the equations of planar, cylindrical, and spherical NAH into an eigenvalue problem. The resulting eigenvalues and eigenvectors couple easily to regularization theory, which can be incorporated into the NAH software with little sacrifice in computational speed. The resulting complete automation of the NAH algorithm for both separable and nonseparable geometries overcomes the last significant hurdle for NAH.
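The Tikhonov filtering used in such eigenvalue formulations can be illustrated with a minimal diagonal model (an assumed toy spectrum, not NAH data): in the eigenbasis the problem decouples, and regularization replaces 1/s by s/(s^2 + alpha), damping the strongly decaying (evanescent-like) modes instead of amplifying their noise.

```python
import random

# Assumed toy model: a diagonal operator with a rapidly decaying spectrum,
# mimicking the evanescent decay that makes the inverse problem ill-posed.
random.seed(0)
n = 40
s = [0.7 ** k for k in range(n)]                 # decaying "mode gains"
x_true = [1.0 / (1 + k) for k in range(n)]       # assumed true mode amplitudes
b = [s[k] * x_true[k] + random.gauss(0, 1e-4) for k in range(n)]  # noisy data

# Unregularized inversion blows up the noise on the small-gain modes.
naive = [b[k] / s[k] for k in range(n)]

# Tikhonov filter: 1/s is replaced by s/(s^2 + alpha), alpha assumed here.
alpha = 1e-4
tik = [s[k] * b[k] / (s[k] ** 2 + alpha) for k in range(n)]

err_naive = max(abs(naive[k] - x_true[k]) for k in range(n))
err_tik = max(abs(tik[k] - x_true[k]) for k in range(n))
```

The regularized solution trades a small bias on the weak modes for stability against the measurement noise, which is the essence of the automated NAH reconstruction.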
Evidence-Based Practice: Integrating Classroom Curriculum and Field Education
ERIC Educational Resources Information Center
Tuchman, Ellen; Lalane, Monique
2011-01-01
This article describes the use of problem-based learning to teach the scope and consequences of evidence-based practices in mental health through an innovative assignment that integrates classroom and field learning. The authors illustrate the planning and implementation of the Evidence-Based Practice: Integrating Classroom Curriculum and Field…
Matched field localization based on CS-MUSIC algorithm
NASA Astrophysics Data System (ADS)
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, yielding a new, more concise sparse model in which both the scale of the localization problem and the noise level are reduced; the new model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The proposed algorithm can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and, by using the SVD of the observation matrix when the number of snapshots is large, it can also reduce the time complexity and noise level of the localization problem, as proved in this paper.
Cultivating Kuumba: Applying Art Based Strategies to Any Field
ERIC Educational Resources Information Center
Ellis, Auburn Elizabeth
2015-01-01
There are many contemporary issues to address in adult education. This paper explores art-based strategies and the utilization of creativity (Kuumba) to expand learning for global communities in any field of practice. Benefits of culturally grounded approaches to adult education are discussed. Images from ongoing field research can be viewed at…
A new method to measure galaxy bias by combining the density and weak lensing fields
NASA Astrophysics Data System (ADS)
Pujol, Arnau; Chang, Chihway; Gaztañaga, Enrique; Amara, Adam; Refregier, Alexandre; Bacon, David J.; Carretero, Jorge; Castander, Francisco J.; Crocce, Martin; Fosalba, Pablo; Manera, Marc; Vikram, Vinu
2016-10-01
We present a new method to measure redshift-dependent galaxy bias by combining information from the galaxy density field and the weak lensing field. This method is based on the work of Amara et al., who use the galaxy density field to construct a bias-weighted convergence field κg. The main difference between Amara et al.'s work and our new implementation is that here we present another way to measure galaxy bias, using tomography instead of bias parametrizations. The correlation between κg and the true lensing field κ allows us to measure galaxy bias using different zero-lag correlations, such as <κgκ>/<κκ> or <κgκg>/<κgκ>. Our method measures the linear bias factor on linear scales, under the assumption of no stochasticity between galaxies and matter. We use the Marenostrum Institut de Ciències de l'Espai (MICE) simulation to measure the linear galaxy bias for a flux-limited sample (i < 22.5) in tomographic redshift bins using this method. This article is the first to study the accuracy and systematic uncertainties associated with the implementation of the method, and the regime in which it is consistent with the linear galaxy bias defined by projected two-point correlation functions (2PCF). We find that our method is consistent with a linear bias at the per cent level for scales larger than 30 arcmin, while non-linearities appear at smaller scales. This measurement is a good complement to other measurements of bias, since it does not depend strongly on σ8 as the 2PCF measurements do. We will apply this method to the Dark Energy Survey Science Verification data in a follow-up article.
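The zero-lag estimator admits a simple numerical check (synthetic Gaussian stand-ins, not the MICE fields; the bias value, noise level, and sample size are assumptions): if κg = b·κ plus noise uncorrelated with κ, the ratio <κgκ>/<κκ> recovers the linear bias b.

```python
import random

random.seed(1)
b_true = 1.4          # assumed linear bias
n = 10000             # assumed number of field samples

# Synthetic "true" lensing field and bias-weighted convergence with noise.
kappa = [random.gauss(0.0, 1.0) for _ in range(n)]
kappa_g = [b_true * k + random.gauss(0.0, 0.05) for k in kappa]

num = sum(kg * k for kg, k in zip(kappa_g, kappa)) / n   # <kg k>
den = sum(k * k for k in kappa) / n                      # <k k>
b_hat = num / den                                        # zero-lag bias estimate
```

Because the noise is uncorrelated with κ, its contribution to the numerator averages away at the 1/sqrt(n) level, which is why the estimator is insensitive to stochasticity-free noise but degrades if galaxies and matter decorrelate.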
Design for validation, based on formal methods
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1990-01-01
Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of the probability of system failure due to physical failure; (2) establishing that design errors are not present. Methods of design, testing, and analysis of ultra-reliable software are discussed. It is concluded that design-for-validation based on formal methods is needed for the digital flight control systems problem, and that formal methods will play a major role in the development of future high-reliability digital systems.
The application of strain field intensity method in the steel bridge fatigue life evaluation
NASA Astrophysics Data System (ADS)
Zhao, Xuefeng; Wang, Yanhong; Cui, Yanjun; Cao, Kaisheng
2012-04-01
ASCE surveys show that 80-90% of bridge damage is associated with fatigue and fracture problems. As vehicle weights and traffic volumes increase, fatigue of welded steel bridges has become more and more serious in recent years. A large number of studies show that the parts of a steel bridge most prone to fatigue damage are the welded joints. It is therefore important to find a more precise method to assess the fatigue life of steel bridges. Three kinds of fatigue analysis methods are commonly used in engineering practice: the nominal stress method, the local stress-strain method, and the field intensity method. The first two are frequently used for fatigue life assessment of steel bridges, while the field intensity method is used less often, being widely applied instead in aerospace and mechanical engineering. The nominal stress method and the local stress-strain method have been widely applied in engineering, but they do not consider stress gradients or multiaxial stress effects, the accuracy and stability of their calculations are relatively poor, and they cannot fully explain the fatigue damage mechanism. Therefore, the strain field intensity method is used here to evaluate the fatigue life of steel bridges. The fatigue life of the steel bridge was studied based on the strain field intensity method, and the fatigue life of an I-section plate girder was analyzed. An elastoplastic finite element analysis in ANSYS identified the critical location of the structure and provided the stress-strain history at that point. Sub-structuring technology was introduced to refine the mesh. Finally, the K.N. Smith damage equation was applied to calculate the fatigue life at the critical point. To better simulate actual welding defects, small holes were dug in the welded parts from different orientations, with the same load applied, and the resulting fatigue lives were calculated. Comparing the results found that the welding
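The Smith-Watson-Topper (K.N. Smith) damage equation mentioned above can be sketched and solved for life N by bisection: sigma_max * eps_a = (sf^2/E) * (2N)^(2b) + sf * ef * (2N)^(b+c). The material constants and load amplitudes below are generic steel-like placeholders, not values from the paper.

```python
def swt_right(N, E=2.0e5, sf=900.0, ef=0.26, b=-0.095, c=-0.47):
    # right-hand side of the SWT equation as a function of life N
    # (E in MPa, sf in MPa; all constants are assumed placeholders)
    return (sf ** 2 / E) * (2 * N) ** (2 * b) + sf * ef * (2 * N) ** (b + c)

def fatigue_life(sigma_max, eps_amp, lo=1.0, hi=1e9):
    # The RHS decreases monotonically in N (b < 0 and b + c < 0), so a
    # bisection on the log scale converges to the crossing point.
    target = sigma_max * eps_amp
    for _ in range(200):
        mid = (lo * hi) ** 0.5        # geometric mean suits the log scale
        if swt_right(mid) > target:
            lo = mid
        else:
            hi = mid
    return lo

# assumed damage-parameter inputs at the critical (welded) location
life = fatigue_life(sigma_max=400.0, eps_amp=0.002)
```

With these placeholder constants the predicted life falls between 10^4 and 10^6 cycles; the paper's analysis would substitute the ANSYS-derived stress-strain history at the critical point.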
Wei, Zhiliang; Lin, Liangjie; Lin, Yanqin; Chen, Zhong; Chen, Youhe. E-mail: chenz@xmu.edu.cn
2014-09-29
In nuclear magnetic resonance (NMR), it is both necessary and important to obtain high-resolution spectra, especially under inhomogeneous magnetic fields. In this study, a method based on partial homogeneity is proposed for retrieving high-resolution one-dimensional NMR spectra under inhomogeneous fields. Signals from a series of small voxels, which exhibit high resolution owing to their small sizes, are recorded simultaneously. An inhomogeneity correction algorithm based on pattern recognition is then developed to automatically correct the influence of field inhomogeneity, yielding high-resolution information. Experiments on chemical solutions and fish spawn were carried out to demonstrate the performance of the proposed method. The method serves as a single-radiofrequency-pulse high-resolution NMR spectroscopy under inhomogeneous fields and may provide an alternative for obtaining high-resolution spectra of in vivo living systems or chemical-reaction systems, where the performance of conventional techniques is usually degraded by field inhomogeneity.
Quantum Field Energy Sensor based on the Casimir Effect
NASA Astrophysics Data System (ADS)
Ludwig, Thorsten
The Casimir effect converts vacuum fluctuations into a measurable force. Some new energy technologies aim to utilize these vacuum fluctuations in commonly used forms of energy such as electricity or mechanical motion. In order to study these energy technologies it is helpful to have sensors for the energy density of vacuum fluctuations. In today's scientific instrumentation and scanning microscope technologies there are several common methods to measure sub-nanonewton forces. While commercial atomic force microscopes (AFM) mostly work with silicon cantilevers, there are a large number of reports on the use of quartz tuning forks to obtain high-resolution force measurements or to create new force sensors. Each method has certain advantages and disadvantages over the other. In this report the two methods are described and compared with respect to their usability for Casimir force measurements. Furthermore, a design for a quantum field energy sensor based on the Casimir force measurement is described. In addition, some general considerations on extracting energy from vacuum fluctuations are given.
Thermal Diodes Based on Near-Field Radiation
2015-10-01
AFRL-RY-WP-TR-2015-0163, "Thermal Diodes Based on Near-Field Radiation," Michal Lipson, Cornell University, October 2015. Contract FA8650-14-1-7406, program element 61101E; sponsored by the Defense Advanced Research Projects Agency, Arlington, VA, for Air Force Materiel Command, United States Air Force, Wright-Patterson AFB, OH 45433-7320.
Melamine sensing based on evanescent field enhanced optical fiber sensor
NASA Astrophysics Data System (ADS)
Luo, Ji; Yao, Jun; Wang, Wei-min; Zhuang, Xu-ye; Ma, Wen-ying; Lin, Qiao
2013-08-01
Melamine is a harmful chemical that has frequently been added to milk products illegally to make them appear more protein-rich; however, it can cause various diseases, such as kidney stones and bladder cancer. In this paper, a novel high-sensitivity optical fiber sensor based on absorption of the evanescent field is proposed and developed for melamine detection. Melamine concentrations ranging from 0 to 10 mg/mL were detected using a micro/nano sensing fiber decorated with a silver nanoparticle cluster layer. As the concentration increases, the sensing fiber's output intensity gradually decreases and the absorption by the analyte grows. A concentration change of 1 mg/mL causes the absorbance to vary by 0.664, and the detection limit for melamine is 1 µg/mL. In addition, the coupling properties between silver nanoparticles have been analyzed by the FDTD method. Overall, this evanescent-field-enhanced optical fiber sensor has the potential to be used in oligo-analyte detection and should promote the development of biomolecular and chemical sensing applications.
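The intensity-to-absorbance step behind such a sensor can be sketched with the Beer-Lambert relation A = log10(I0/I). The intensity readings below are illustrative numbers chosen to match the reported 0.664 absorbance change per 1 mg/mL, not the authors' calibration data.

```python
import math

def absorbance(i0, i):
    # Beer-Lambert absorbance from reference and detected intensities
    return math.log10(i0 / i)

i0 = 100.0  # assumed reference (zero-concentration) output intensity

# assumed concentration (mg/mL) -> detected intensity pairs; output
# intensity decreases as the evanescent-field absorption grows
readings = {0.0: 100.0, 1.0: 21.7, 2.0: 4.7}
curve = {c: absorbance(i0, i) for c, i in readings.items()}
```

A real calibration would fit this curve over the sensor's working range and invert it to report concentration from a measured intensity.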
Model-Based Method for Sensor Validation
NASA Technical Reports Server (NTRS)
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. These methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also better suited to systems for which it is hard, or even impossible, to find the probability functions of the system. The method starts from a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. It builds on the concept of analytical redundancy relations (ARRs).
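The ARR idea can be sketched with a hypothetical three-sensor example (this is an illustration of the general technique, not NASA's implementation; the sensors, relations, and thresholds are assumptions): each relation among readings that fails implicates only the sensors it involves, so intersecting the violated relations logically narrows down the fault with no probability model.

```python
V_REF = 12.0  # independently known reference voltage (assumed)

def suspect_sensors(v, i, p):
    # Each analytical redundancy relation:
    # (is the relation satisfied?, set of sensors it involves)
    relations = {
        "power_balance": (abs(p - v * i) < 0.5, {"v", "i", "p"}),
        "voltage_check": (abs(v - V_REF) < 0.2, {"v"}),
        "power_model":   (abs(p - V_REF * i) < 0.5, {"i", "p"}),
    }
    violated = [involved for ok, involved in relations.values() if not ok]
    # No violated relation -> no logical evidence of a sensor fault.
    return set.intersection(*violated) if violated else set()

# Healthy readings satisfy every relation; a drifted voltage sensor
# violates exactly the relations that involve "v", isolating it.
healthy = suspect_sensors(12.0, 2.0, 24.0)
faulty_v = suspect_sensors(10.0, 2.0, 24.0)
```

With richer relation sets the intersection shrinks further; here the three assumed relations are enough to isolate a voltage-sensor fault uniquely.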
Trace projection transformation: a new method for measurement of debris flow surface velocity fields
NASA Astrophysics Data System (ADS)
Yan, Yan; Cui, Peng; Guo, Xiaojun; Ge, Yonggang
2016-12-01
Spatiotemporal variation of velocity is important for debris flow dynamics. This paper presents a new method, the trace projection transformation, for accurate, non-contact measurement of a debris-flow surface velocity field based on a combination of dense optical flow and perspective projection transformation. The algorithm for interpretation and processing is implemented in C++ and built in Visual Studio 2012. The method allows quantitative analysis of flow motion from videos taken at various angles (camera facing against the direction of fluid motion). It yields the spatiotemporal distribution of the surface velocity field at the pixel level and thus provides a quantitative description of the surface processes. The trace projection transformation is superior to conventional measurement methods in that it obtains the full surface velocity field by computing the optical flow of all pixels. The results achieve 90% accuracy when compared with observed values. As a case study, the method is applied to the quantitative analysis of the surface velocity field of a specific debris flow.
NASA Astrophysics Data System (ADS)
Filatov, Michael; Huix-Rotllant, Miquel
2014-07-01
Computational investigation of the longest wavelength excitations in a series of cyanines and linear n-acenes is undertaken with the use of standard spin-conserving linear response time-dependent density functional theory (TD-DFT) as well as its spin-flip variant and a ΔSCF method based on the ensemble DFT. The spin-conserving linear response TD-DFT fails to accurately reproduce the lowest excitation energy in these π-conjugated systems by strongly overestimating the excitation energies of cyanines and underestimating the excitation energies of n-acenes. The spin-flip TD-DFT is capable of correcting the underestimation of excitation energies of n-acenes by bringing in the non-dynamic electron correlation into the ground state; however, it does not fully correct for the overestimation of the excitation energies of cyanines, for which the non-dynamic correlation does not seem to play a role. The ensemble DFT method employed in this work is capable of correcting for the effect of missing non-dynamic correlation in the ground state of n-acenes and for the deficient description of differential correlation effects between the ground and excited states of cyanines and yields the excitation energies of both types of extended π-conjugated systems with the accuracy matching high-level ab initio multireference calculations.
Using Problem Fields as a Method of Change.
ERIC Educational Resources Information Center
Pehkonen, Erkki
1992-01-01
Discusses the rationale and use of problem fields which are sets of related and/or connected open-ended problem-solving tasks within mathematics instruction. Polygons with matchsticks and the number triangle are two examples of problem fields presented along with variations in conditions that promote other matchstick puzzles. (11 references) (JJK)
NASA Astrophysics Data System (ADS)
Sukmono, Abdi; Ardiansyah
2017-01-01
Paddy is one of the most important agricultural crops in Indonesia; rice consumption per capita in 2013 amounted to 78.82 kg/capita/year. For 2017, the Indonesian government set itself the mission of making Indonesia self-sufficient in food. The government must therefore be able to secure the stable fulfillment of basic food needs, which requires, among other things, rice field mapping. Accurate rice field mapping can use a quick and easy method such as remote sensing. In this study, multi-temporal Landsat 8 imagery is used to identify rice fields based on rice planting time, combined with other methods for extracting information from the imagery: the Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), and band combination. Image classification uses nine classes: water, settlements, mangrove, gardens, fields, and rice fields 1st through 4th. The results show a rice field area of 50,009 ha from the PCA method, 51,016 ha from band combination, and 45,893 ha from NDVI. The classification accuracies were 84.848% for PCA, 81.818% for band combination, and 75.758% for NDVI.
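The NDVI step used in such classifications is a simple per-pixel formula, NDVI = (NIR - red)/(NIR + red), computed for Landsat 8 from band 5 (NIR) and band 4 (red). The reflectance values and the 0.3 vegetation threshold below are illustrative assumptions, not values from the study.

```python
def ndvi(nir, red):
    # guard against division by zero over pure shadow/water pixels
    return (nir - red) / (nir + red) if (nir + red) != 0 else 0.0

# toy (NIR, red) reflectance pairs: vegetation reflects strongly in NIR
pixels = {
    "paddy": (0.45, 0.08),
    "water": (0.02, 0.05),
    "built-up": (0.20, 0.18),
}

# assumed threshold: NDVI above 0.3 flags a vegetated (candidate paddy) pixel
vegetated = {name: ndvi(nir, red) > 0.3 for name, (nir, red) in pixels.items()}
```

In the multi-temporal setting, repeating this over the planting cycle separates the four rice-field planting-time classes from permanently vegetated cover.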
Integrated atom detector based on field ionization near carbon nanotubes
Gruener, B.; Jag, M.; Stibor, A.; Visanescu, G.; Haeffner, M.; Kern, D.; Guenther, A.; Fortagh, J.
2009-12-15
We demonstrate an atom detector based on field ionization and subsequent ion counting. We make use of field enhancement near the tips of carbon nanotubes to reach extreme electrostatic fields of up to 9×10^9 V/m, which ionize ground-state rubidium atoms. The detector is based on a carpet of multiwall carbon nanotubes grown on a substrate and used for field ionization, and a channel electron multiplier used for ion counting. We measure the field enhancement at the tips of the carbon nanotubes by field emission of electrons. We demonstrate the operation of the field ionization detector by counting atoms from a thermal beam of a rubidium dispenser source. By measuring the ionization rate of rubidium as a function of the applied detector voltage we identify the field ionization distance, which is below a few tens of nanometers in front of the nanotube tips. We deduce from the experimental data that field ionization of rubidium near nanotube tips takes place on a time scale faster than 10^-10 s. This property is particularly interesting for the development of fast atom detectors suitable for measuring correlations in ultracold quantum gases. We also describe an application of the detector as a partial pressure gauge.
Subaperture correlation based digital adaptive optics for full field optical coherence tomography.
Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A
2013-05-06
This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. The method corrects for wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLMs) or additional cameras, and we show that it does not require knowledge of any system parameters. In a simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample as a proof of principle.
Wang, Sijia; Peterson, Daniel J.; Gatenby, J. C.; Li, Wenbin; Grabowski, Thomas J.; Madhyastha, Tara M.
2017-01-01
Correction of echo planar imaging (EPI)-induced distortions (called “unwarping”) improves anatomical fidelity for diffusion magnetic resonance imaging (MRI) and functional imaging investigations. Commonly used unwarping methods require the acquisition of supplementary images during the scanning session. Alternatively, distortions can be corrected by nonlinear registration to a non-EPI acquired structural image. In this study, we compared reliability using two methods of unwarping: (1) nonlinear registration to a structural image using symmetric normalization (SyN) implemented in Advanced Normalization Tools (ANTs); and (2) unwarping using an acquired field map. We performed this comparison in two different test-retest data sets acquired at differing sites (N = 39 and N = 32). In both data sets, nonlinear registration provided higher test-retest reliability of the output fractional anisotropy (FA) maps than field map-based unwarping, even when accounting for the effect of interpolation on the smoothness of the images. In general, field map-based unwarping was preferable if and only if the field maps were acquired optimally. PMID:28270762
NASA Astrophysics Data System (ADS)
Valeri, Guillermo; Koohbor, Behrad; Kidane, Addis; Sutton, Michael A.
2017-04-01
An experimental approach based on Digital Image Correlation (DIC) is successfully applied to predict the uniaxial stress-strain response of 304 stainless steel specimens subjected to nominally uniform temperatures ranging from room temperature to 900 °C. A portable induction heating device equipped with custom made water-cooled copper coils is used to heat the specimen. The induction heater is used in conjunction with a conventional tensile frame to enable high temperature tension experiments. A stereovision camera system equipped with appropriate band pass filters is employed to facilitate the study of full-field deformation response of the material at elevated temperatures. Using the temperature and load histories along with the full-field strain data, a Virtual Fields Method (VFM) based approach is implemented to identify constitutive parameters governing the plastic deformation of the material at high temperature conditions. Results from these experiments confirm that the proposed method can be used to measure the full field deformation of materials subjected to thermo-mechanical loading.
Wavelet-based Multiresolution Particle Methods
NASA Astrophysics Data System (ADS)
Bergdorf, Michael; Koumoutsakos, Petros
2006-03-01
Particle methods offer a robust numerical tool for solving transport problems across disciplines, such as fluid dynamics, quantitative biology or computer graphics. Their strength lies in their stability, as they do not discretize the convection operator, and appealing numerical properties, such as small dissipation and dispersion errors. Many problems of interest are inherently multiscale, and their efficient solution requires either multiscale modeling approaches or spatially adaptive numerical schemes. We present a hybrid particle method that employs a multiresolution analysis to identify and adapt to small scales in the solution. The method combines the versatility and efficiency of grid-based Wavelet collocation methods while retaining the numerical properties and stability of particle methods. The accuracy and efficiency of this method is then assessed for transport and interface capturing problems in two and three dimensions, illustrating the capabilities and limitations of our approach.
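The multiresolution idea behind such schemes can be illustrated with a one-level Haar detail analysis: large detail coefficients flag regions where the solution contains small scales and the particle resolution should be refined. This is a minimal sketch of the refinement indicator only, under that Haar assumption, not the authors' hybrid particle-wavelet collocation method:

```python
def haar_details(values):
    """One level of the Haar wavelet transform: detail coefficients
    of a sequence of point values (length must be even)."""
    return [(values[2 * i] - values[2 * i + 1]) / 2.0
            for i in range(len(values) // 2)]

def refine_mask(values, eps):
    """Flag coarse cells whose Haar detail exceeds eps -- a simple
    multiresolution indicator of where resolution is needed."""
    return [abs(d) > eps for d in haar_details(values)]
```

Applied to a sampled field, only the cells straddling sharp features are flagged, which is the adaptivity the abstract describes.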
Calorimetric method of ac loss measurement in a rotating magnetic field.
Ghoshal, P K; Coombs, T A; Campbell, A M
2010-07-01
A method is described for calorimetric ac-loss measurements of high-Tc superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on the ac loss measurement setup and its calibration in a rotating magnetic field. The experimental setup demonstrates loss measurement by a temperature-rise method under the influence of a rotating magnetic field: the slight temperature increase of the sample in an ac field is used as a measure of the losses. The aim is to simulate the loss in rotating machines using HTS, and this is a unique technique for measuring total ac loss in HTS at power frequencies. The sample is mounted onto a cold finger extended from a liquid nitrogen heat exchanger (HEX). The thermal insulation between the HEX and the sample is provided by a sample holder of low thermal conductivity and low eddy-current heating, in a vacuum vessel. A temperature sensor and a noninductive heater incorporated in the sample holder allow a rapid sample change. The main part of the data obtained in the calorimetric measurement is used for calibration; the focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under a continuously moving field as experienced by rotating machines.
Electrodeless RF Plasma Propulsion by Rotating Magnetic Field Method
NASA Astrophysics Data System (ADS)
Furukawa, Takerku; Takizawa, Kohei; Kuwahara, Daisuke; Shinohara, Shunjiro
2016-10-01
Electric propulsion is promising in the field of space propulsion because of its high fuel efficiency and long operating time. However, this time is limited by the erosion of electrodes in direct contact with the plasma. To solve this problem, we have proposed electrodeless acceleration schemes, e.g., a rotating magnetic field (RMF) scheme. In this RMF scheme, we use two pairs of five-turn RMF coils driven by AC currents with a 90 deg. phase difference. The rotating magnetic field induces an azimuthal current j through a nonlinear effect, and the plasma is then accelerated by the axial Lorentz force arising from the product of j and the radial component of the external magnetic field. We have investigated the effect of the RMF current frequency f, finding a 24% increase in ion velocity at f = 3 MHz. We will present experimental results using lower f and gas pressure, and also discuss the penetration of the RMF into the plasma.
Comparison of dust sampling methods in Estonia and Sweden--a field study.
Berg, P; Jaakmees, V; Bodin, L
1999-09-01
The purpose of this field study was to compare an Estonian dust sampling method, a method also used in other former East Block countries, with a Swedish method and to estimate inter-method agreement with statistical analyses. The Estonian standard method (ESM), used to assess exposure in Estonia since the early 1950s, is based on a strategy where air samples are collected for 10 minutes every hour over a full shift. This method was compared to a Swedish standard method (SSM), a modified NIOSH method, comparable to international standards, where one air sample is collected during a full shift. The study was carried out at a cement plant that in the beginning of the 1990s was subjected to an epidemiological study, including collection of exposure data. The results of the analysis from 31 clusters of parallel samples of the two methods, when dust consisting of Portland cement was collected, showed a relatively weak correlation between the SSM and the ESM, ri = 0.81 (Pearson's intra-class correlation coefficient). A conversion factor between the two methods was estimated, where SSM is 0.69 times ESM and the limits of agreement are 0.25 and 1.84, respectively. These results indicate a substantial inter-method difference. We therefore recommend that measurements obtained from the two methods should not be used interchangeably. Because the present study is of limited extent, our findings are confined to the operations studied and further studies covering other exposure situations will be needed.
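Reading the reported relation as a multiplicative conversion (SSM ≈ 0.69 × ESM, with limits of agreement 0.25 and 1.84 interpreted as bounds on that factor, which is our reading of the abstract), the inter-method spread can be made concrete:

```python
# Reported inter-method relation from the cement-plant comparison.
# Values below are illustrative inputs, not data from the study.
FACTOR = 0.69
LOA_LOW, LOA_HIGH = 0.25, 1.84

def ssm_estimate(esm_mg_m3):
    """Central estimate and agreement interval for a Swedish standard
    method (SSM) concentration predicted from an Estonian standard
    method (ESM) one, assuming the limits of agreement bound the
    individual conversion factor."""
    return (FACTOR * esm_mg_m3,
            (LOA_LOW * esm_mg_m3, LOA_HIGH * esm_mg_m3))
```

For a 10 mg/m³ ESM reading the central SSM estimate is 6.9 mg/m³, but an individual measurement could plausibly fall anywhere between 2.5 and 18.4 mg/m³, which illustrates why the authors advise against using the two methods interchangeably.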
An Irregular-gridded Stable Potential-field Downward Continuation Method
NASA Astrophysics Data System (ADS)
Wang, B.
2004-12-01
Downward continuation of potential fields can increase resolution, but it is an inherently ill-posed inverse problem. We present a fast algorithm to solve for the interpolation coefficients of an arbitrarily spaced, four-variable cubic B-spline. The downward continuation, in both 2D and 3D, is accomplished by solving integral equations in the space domain using B-spline bases. In contrast to the FFT method, our method allows irregular spacing, and the number of knots need not be a power of 2. Comparison with the FFT method on synthetic examples, including noise-contaminated data, shows that our method is more accurate and more stable. Applications of the B-spline downward continuation to real data provide very useful information for further interpretation.
Alternative Methods for Field Corrections in Helical Solenoids
Lopes, M. L.; Krave, S. T.; Tompkins, J. C.; Yonehara, K.; Flanagan, G.; Kahn, S. A.; Melconian, K.
2015-05-01
Helical cooling channels have been proposed for highly efficient 6D muon cooling. Helical solenoids produce solenoidal, helical dipole, and helical gradient field components. Previous studies explored the geometric tunability limits on these main field components. In this paper we present two alternative correction schemes, tilting the solenoids and the addition of helical lines, to reduce the required strength of the anti-solenoid and add an additional tuning knob.
Detection of Inorganic Arsenic in Rice Using a Field Test Kit: A Screening Method.
Bralatei, Edi; Lacan, Severine; Krupp, Eva M; Feldmann, Jörg
2015-11-17
Rice is a staple food eaten by more than 50% of the world's population and is a daily dietary constituent in most South East Asian countries, where 70% of the rice export comes from and where there is a high level of arsenic contamination in groundwater used for irrigation. Research shows that rice can take up and store inorganic arsenic during cultivation, and rice is considered to be one of the major routes of exposure to inorganic arsenic, a class I carcinogen for humans. Here, we report the use of a screening method based on the Gutzeit methodology to detect inorganic arsenic (iAs) in rice within 1 h. After optimization, 30 rice commodities from the United Kingdom market were tested with the field method and compared to the reference method (high-performance liquid chromatography-inductively coupled plasma-mass spectrometry, HPLC-ICP-MS). In all but three rice samples, iAs could be determined, and the results show no bias for iAs using the field method. The quantification limit is about 50 μg kg(-1), with a reproducibility of ±12%, good for a field method. Only a few false positives and negatives (<10%) were recorded at the 2015 European Commission (EC) guideline for baby rice of 100 μg kg(-1), while none were recorded at the maximum level suggested by the World Health Organization (WHO) and implemented by the EC for polished and white rice of 200 μg kg(-1). The method is reliable, fast, and inexpensive; hence, it is suggested for use as a screening method in the field for preselection of rice which violates legislative guidelines.
Li, W. P.; Liu, Y.; Long, Q.; Chen, D. H.; Chen, Y. M.
2008-10-15
The electromagnetic field (both the E and B fields) is calculated for a solenoidal inductively coupled plasma (ICP) discharge. The model is based on two-dimensional cylindrical coordinates, and the finite difference method is used to solve Maxwell's equations in both the radial and axial directions. Based on one-turn coil measurements, and assuming that the electrical conductivity has a constant value in each cross section of the discharge tube, the calculated E and B fields rise sharply near the tube wall. The nonuniform radial distributions imply that the skin effect plays a significant role in the energy balance of the stable ICP. Damped distributions in the axial direction show that the magnetic flux gradually dissipates into the surrounding space. The finite difference calculation allows prediction of the electrical conductivity and plasma permeability, from which the induction coil voltage and plasma current can be calculated and verified for correctness.
Ecologically-Based Invasive Plant Management Field School Workbook 2009
Technology Transfer Automated Retrieval System (TEKTRAN)
A curriculum developed for a field-based course of study in ecologically-based invasive plant management. The curriculum is presented in a modular format, with specific exercises emphasizing the important aspects of applying this decision tool to land management....
Enzyme catalysis enhanced dark-field imaging as a novel immunohistochemical method
NASA Astrophysics Data System (ADS)
Fan, Lin; Tian, Yanyan; Yin, Rong; Lou, Doudou; Zhang, Xizhi; Wang, Meng; Ma, Ming; Luo, Shouhua; Li, Suyi; Gu, Ning; Zhang, Yu
2016-04-01
Conventional immunohistochemistry is limited to subjective judgment based on human experience and thus it is clinically required to develop a quantitative immunohistochemical detection. 3,3'-Diaminobenzidin (DAB) aggregates, a type of staining product formed by conventional immunohistochemistry, were found to have a special optical property of dark-field imaging for the first time, and the mechanism was explored. On this basis, a novel immunohistochemical method based on dark-field imaging for detecting HER2 overexpressed in breast cancer was established, and the quantitative analysis standard and relevant software for measuring the scattering intensity was developed. In order to achieve a more sensitive detection, the HRP (horseradish peroxidase)-labeled secondary antibodies conjugated gold nanoparticles were constructed as nanoprobes to load more HRP enzymes, resulting in an enhanced DAB deposition as a dark-field label. Simultaneously, gold nanoparticles also act as a synergistically enhanced agent due to their mimicry of enzyme catalysis and dark-field scattering properties.
Recommendation advertising method based on behavior retargeting
NASA Astrophysics Data System (ADS)
Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min
2011-10-01
Online advertising has become an important business in e-commerce, and ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting, which can avoid the loss of ad clicks due to objective reasons and can observe changes in the user's interests over time. Experiments show that the new method has a significant effect and can further be applied to online systems.
Model based iterative reconstruction for Bright Field electron tomography
NASA Astrophysics Data System (ADS)
Venkatakrishnan, Singanallur V.; Drummy, Lawrence F.; De Graef, Marc; Simmons, Jeff P.; Bouman, Charles A.
2013-02-01
Bright Field (BF) electron tomography (ET) has been widely used in the life sciences to characterize biological specimens in 3D. While BF-ET is the dominant modality in the life sciences, it has generally been avoided in the physical sciences because of anomalous measurements caused by a phenomenon called "Bragg scatter," visible when crystalline samples are imaged. These measurements cause undesirable artifacts in the reconstruction when typical algorithms such as Filtered Back Projection (FBP) and the Simultaneous Iterative Reconstruction Technique (SIRT) are applied to the data. Model based iterative reconstruction (MBIR) provides a powerful framework for tomographic reconstruction that incorporates a model for data acquisition, a model for measurement noise, and a model for the object, to obtain reconstructions that are qualitatively superior and quantitatively accurate. In this paper we present a novel MBIR algorithm for BF-ET which accounts for the presence of anomalous measurements from Bragg scatter during the iterative reconstruction. Our method accounts for the anomalies by formulating the reconstruction as the minimization of a cost function which rejects measurements that deviate significantly from the typical Beer's law model widely assumed for BF-ET. Results on simulated as well as real data show that our method can dramatically improve the reconstructions compared to FBP and to MBIR without anomaly rejection, suppressing the artifacts due to the Bragg anomalies.
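The anomaly-rejection idea, minimizing a cost that discards measurements deviating too far from the assumed forward model, can be sketched on a toy scalar problem. This stands in for, and greatly simplifies, the Beer's-law BF-ET model; the threshold and data below are illustrative, not the authors' algorithm:

```python
def robust_solve(A, y, threshold, iters=20):
    """Least-squares fit of y ~ A*x (scalar x for simplicity) that
    rejects anomalous measurements: any residual larger than
    `threshold` gets zero weight in the next iteration, mimicking
    an anomaly-rejecting cost in place of plain least squares."""
    n = len(y)
    w = [1.0] * n  # start by trusting every measurement
    x = 0.0
    for _ in range(iters):
        num = sum(w[i] * A[i] * y[i] for i in range(n))
        den = sum(w[i] * A[i] * A[i] for i in range(n))
        if den == 0:
            break  # everything rejected; keep the last estimate
        x = num / den
        # re-weight: reject measurements far from the model prediction
        w = [1.0 if abs(y[i] - A[i] * x) <= threshold else 0.0
             for i in range(n)]
    return x
```

With one gross outlier in the data, the plain least-squares estimate is pulled toward it, while the reweighted solve converges to the consensus of the remaining measurements.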
Methner, M.M.; Bowman, J.D.
1998-03-01
Recent epidemiologic research has suggested that exposure to extremely low frequency (ELF) magnetic fields (MF) may be associated with leukemia, brain cancer, spontaneous abortions, and Alzheimer's disease. A walkaround sampling method for measuring ambient ELF-MF levels was developed for use in conducting occupational hazard surveillance. This survey was designed to determine the range of MF levels at different industrial facilities so they could be categorized by MF levels and identified for possible subsequent personal exposure assessments. Industries were selected based on their annual electric power consumption in accordance with the hypothesis that large power consumers would have higher ambient MFs when compared with lower power consumers. Sixty-two facilities within thirteen 2-digit Standard Industrial Classifications (SIC) were selected based on their willingness to participate. A traditional industrial hygiene walkaround survey was conducted to identify MF sources, with a special emphasis on work stations.
Zhao, Sipei; Qiu, Xiaojun; Cheng, Jianchun
2015-09-01
This paper proposes a different method for calculating the sound field diffracted by a rigid barrier, based on the integral equation method, in which a virtual boundary is assumed above the rigid barrier to divide the whole space into two subspaces. Based on the Kirchhoff-Helmholtz equation, the sound field in each subspace is determined from the source inside it and the boundary conditions on the surface, and the diffracted sound field is then obtained by using the continuation conditions on the virtual boundary. Simulations are carried out to verify the feasibility of the proposed method. Compared to the MacDonald method and other existing methods, the proposed method is a rigorous solution for the whole space and is also much easier to understand.
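For reference, the Kirchhoff-Helmholtz integral equation underlying the subspace construction can be written as follows (exterior form with the free-space Green's function; sign conventions depend on the chosen normal direction and time convention):

```latex
% Kirchhoff--Helmholtz integral equation (exterior form)
p(\mathbf{r}) = \int_S \left[
    p(\mathbf{r}_s)\,\frac{\partial G(\mathbf{r}\mid\mathbf{r}_s)}{\partial n}
  - G(\mathbf{r}\mid\mathbf{r}_s)\,\frac{\partial p(\mathbf{r}_s)}{\partial n}
\right] \mathrm{d}S ,
\qquad
G(\mathbf{r}\mid\mathbf{r}_s) = \frac{e^{ikR}}{4\pi R},
\quad R = \lvert \mathbf{r}-\mathbf{r}_s \rvert .
```

Applying this relation in each subspace, with the virtual boundary supplying the continuation conditions between them, is the construction the abstract describes.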
Advanced materials characterization based on full field deformation measurements
NASA Astrophysics Data System (ADS)
Carpentier, A. Paige
Accurate stress-strain constitutive properties are essential for understanding the complex deformation and failure mechanisms of materials with highly anisotropic mechanical properties. Among such materials, glass-fiber- and carbon-fiber-reinforced polymer-matrix composites play a critical role in advanced structural designs. The large number of different methods and specimen types currently required to generate three-dimensional allowables for structural design slows down material characterization, and some material constitutive properties are never measured due to the prohibitive cost of the specimens needed. This work shows that simple short-beam shear (SBS) specimens are well-suited for measurement of multiple constitutive properties of composite materials and can enable a major shift toward accurate material characterization. The characterization is based on digital image correlation (DIC) full-field deformation measurement, which enables additional flexibility in assessing stress-strain relations compared to conventional strain gages: complex strain distributions, including strong gradients, can be captured. Such flexibility enables simpler test-specimen design and reduces the number of different specimen types required to assess stress-strain constitutive behavior. Two key elements show the advantage of using DIC in SBS tests. First, tensile, compressive, and shear stress-strain relations are measured in a single experiment. Second, a counter-intuitive feasibility of closed-form stress and modulus models, normally applicable to long beams, is demonstrated for short-beam specimens. Modulus and stress-strain data are presented for glass/epoxy and carbon/epoxy material systems, and the applicability of the developed method to static, fatigue, and impact load rates is also demonstrated. In a practical method to determine stress-strain constitutive relations, the stress
NASA Technical Reports Server (NTRS)
Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)
2012-01-01
Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.
NASA Astrophysics Data System (ADS)
Yatabe, Kohei; Ishikawa, Kenji; Oikawa, Yasuhiro
2017-04-01
As an alternative to microphones, optical techniques have been studied for measuring a sound field. They enable contactless, non-invasive acoustical observation by detecting the density variation of the medium caused by sound. Although they have important advantages compared to microphones, they also have some disadvantages. Since sound affects light at every point along the optical path, the optical methods observe an acoustical quantity as a spatial integral, so point-wise information about a sound field cannot be obtained directly. Ordinarily, the computed tomography (CT) method has been applied to reconstruct a sound field from optically measured data. However, the observation process of the optical methods has not been considered explicitly, which limits the accuracy of the reconstruction. In this paper, a physical-model-based sound field reconstruction method is proposed. It explicitly formulates the physical observation process so that a model mismatch of the conventional methods is eliminated.
NASA Astrophysics Data System (ADS)
Ayele, Belayneh
2010-05-01
Soil erosion is one of the greatest challenges for the agricultural sector in particular, and for the general economic development of a country like Ethiopia. Despite this challenge, there have been limited studies of the amount of soil eroded at the watershed level, even though soil erosion prediction for the whole country has been done based on data collected from a few erosion study sites. This has led to ineffective soil conservation planning, and the land degradation problem is still a threat to the country's economy. It calls for an estimation of erosion rates at the watershed level with an easily managed, cost-effective method that enables local farmers to participate in data collection, so that they gain an understanding of the ongoing erosion. The objective of this research was to estimate the rill and interrill erosion rate in the Gelda Watershed, South Gondar, Ethiopia, using a field method (volumetric measurement of rills and interrills). The dominant soil types were nitisols and regosols. The findings indicate that soil loss due to rill and interrill erosion in the cultivated fields was 50.25 ton/ha/yr. The contributions of rills in the upslope, middle-slope and downslope positions were 7%, 15% and 78%, respectively, of the overall rill erosion; in general, rills contributed 54% of the overall erosion rate. The rill density for the nitisols and regosols was 349 and 294 m/ha respectively, indicating a higher rate of erosion in the former soil type. The average area of actual damage due to rills in the watershed was 113 m2/ha. The most intense erosion was recorded in teff fields, at 73 tons/ha/yr, followed by millet fields at 35 tons/ha/yr; maize fields showed the least erosion, at 31 tons/ha/yr. The most important factors contributing to erosion rate variation among crops were time of sowing, hoeing practice, crop morphology and the deliberate compaction practice common on teff fields. The contribution of agroforestry practices (woodlots, scattered
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained one with the help of slack variables, and the functional optimisation method is applied to reformulate it as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. The path planning problem is then solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning: in the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method solves the dead point problem effectively.
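The baseline APF method that the paper improves upon follows the gradient of an attractive potential toward the goal plus repulsive potentials around obstacles. A minimal 2D sketch, where gains, influence radius and step size are illustrative; note that this plain version still exhibits the dead-point (local-minimum) problem the paper addresses:

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0, step=0.5):
    """One gradient step of the classic artificial potential field:
    attraction toward the goal plus repulsion from obstacles within
    influence radius d0. Gains are illustrative, not from the paper."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            # standard repulsive gradient magnitude for U_rep ~ (1/d - 1/d0)^2
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0  # avoid division by zero at equilibria
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

def plan(start, goal, obstacles, max_iters=500, tol=0.6):
    """Follow the potential gradient until close to the goal."""
    path = [start]
    for _ in range(max_iters):
        if math.hypot(goal[0] - path[-1][0], goal[1] - path[-1][1]) < tol:
            break
        path.append(apf_step(path[-1], goal, obstacles))
    return path
```

With a symmetric obstacle directly between start and goal the forces can cancel and the planner stalls at a dead point, which is exactly the failure mode the optimal-control reformulation is designed to remove.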
Geophysics-based method of locating a stationary earth object
Daily, Michael R.; Rohde, Steven B.; Novak, James L.
2008-05-20
A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the gravity vector of the earth caused by the sun- and moon-orbits. Because the local gravity field is highly irregular over a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.
A new method for calculating the scattered field by an arbitrary cross-sectional conducting cylinder
NASA Astrophysics Data System (ADS)
Ragheb, Hassan A.
2011-04-01
Scattering of a plane electromagnetic wave by a perfectly conducting cylinder of arbitrary cross-section must be computed numerically. This article presents a new approach to the problem, based on simulating the arbitrary cross-sectional conducting cylinder by perfectly conducting strips of narrow width. The problem then reduces to calculating the scattered electromagnetic field from N conducting strips. The technique for solving this problem uses an asymptotic method, based on an approximate technique introduced by Karp and Russek (Karp, S.N., and Russek, A. (1956), 'Diffraction by a Wide Slit', Journal of Applied Physics, 27, 886-894) for scattering by a wide slit. The method is applied here to calculate the far-zone scattered field for E-polarised incident waves (transverse magnetic (TM) with respect to the z-axis) on a perfectly conducting cylinder with arbitrary cross-section. Numerical examples are introduced first for comparison, to show the accuracy of the method. Further examples for well-known scattering by conducting cylinders are then introduced, followed by new examples which can only be solved by numerical methods.
NASA Astrophysics Data System (ADS)
Lu, Wenbo; Jiang, Weikang; Yuan, Guoqing; Yan, Li
2013-05-01
Vibration signal analysis is the main technique in machine condition monitoring and fault diagnosis, but in some cases vibration-based diagnosis is constrained by its contact measurement. Acoustic-based diagnosis (ABD), with non-contact measurement, has received little attention, although the sound field may contain abundant information related to fault patterns. A new scheme of ABD for gearboxes based on near-field acoustic holography (NAH) and spatial distribution features of the sound field is presented in this paper. It focuses on applying distribution information of the sound field to gearbox fault diagnosis. A two-stage industrial helical gearbox is experimentally studied in a semi-anechoic chamber and a lab workshop, respectively. Firstly, multi-class faults (mild pitting, moderate pitting, severe pitting and tooth breakage) are simulated. Secondly, sound fields and corresponding acoustic images in different gearbox running conditions are obtained by fast Fourier transform (FFT) based NAH. Thirdly, by introducing texture analysis to fault diagnosis, spatial distribution features are extracted from the acoustic images to capture the fault patterns underlying the sound field. Finally, the features are fed into a multi-class support vector machine for fault pattern identification. The feasibility and effectiveness of the proposed scheme are demonstrated by the good experimental results and by comparison with a traditional ABD method. Even with strong noise interference, spatial distribution features of the sound field can reliably reveal the fault patterns of the gearbox, and thus satisfactory accuracy can be obtained. The combination of histogram features and gray level gradient co-occurrence matrix features is suggested for good diagnosis accuracy and low time cost.
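The pipeline (acoustic image → spatial-distribution features → multi-class classifier) can be sketched as follows; histogram features are one of the feature types the paper uses, while the nearest-centroid classifier here is only a toy stand-in for the multi-class support vector machine, and the 4x4 "acoustic images" are synthetic.

```python
def histogram_features(image, bins=8, lo=0.0, hi=1.0):
    """Normalized gray-level histogram of a 2-D acoustic image: a simple
    spatial-distribution feature (the paper also uses gray-level gradient
    co-occurrence matrix features)."""
    counts = [0] * bins
    n = 0
    for row in image:
        for v in row:
            idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
            n += 1
    return [c / n for c in counts]

def nearest_centroid(train, labels, query):
    """Toy stand-in for the multi-class SVM: assign the label whose mean
    feature vector is closest in squared Euclidean distance."""
    classes = sorted(set(labels))
    centroids = {}
    for c in classes:
        feats = [f for f, l in zip(train, labels) if l == c]
        centroids[c] = [sum(col) / len(feats) for col in zip(*feats)]
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(classes, key=lambda c: dist(query, centroids[c]))

# Two synthetic "acoustic images": a quiet field and one with a hot spot.
quiet = [[0.1] * 4 for _ in range(4)]
hot = [[0.9 if (r, c) == (1, 1) else 0.2 for c in range(4)] for r in range(4)]
train = [histogram_features(quiet), histogram_features(hot)]
label = nearest_centroid(train, ["healthy", "pitting"], histogram_features(hot))
```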
Fan, Zongwei; Mei, Deqing; Yang, Keji; Chen, Zichen
2014-12-01
To eliminate the limitations of the conventional sound field separation methods which are only applicable to regular surfaces, a sound field separation method based on combined integral equations is proposed to separate sound fields directly in the spatial domain. In virtue of the Helmholtz integral equations for the incident and scattering fields outside a sound scatterer, combined integral equations are derived for sound field separation, which build the quantitative relationship between the sound fields on two arbitrary separation surfaces enclosing the sound scatterer. Through boundary element discretization of the two surfaces, corresponding systems of linear equations are obtained for practical application. Numerical simulations are performed for sound field separation on different shaped surfaces. The influences induced by the aspect ratio of the separation surfaces and the signal noise in the measurement data are also investigated. The separated incident and scattering sound fields agree well with the original corresponding fields described by analytical expressions, which validates the effectiveness and accuracy of the combined integral equations based separation method.
Comparison of dust sampling methods in Estonia and Sweden -- A field study
Berg, P.; Jaakmees, V.; Bodin, L.
1999-09-01
The purpose of this field study was to compare an Estonian dust sampling method, a method also used in other former Eastern Bloc countries, with a Swedish method, and to estimate inter-method agreement with statistical analyses. The Estonian standard method (ESM), used to assess exposure in Estonia since the early 1950s, is based on a strategy where air samples are collected for 10 minutes every hour over a full shift. This method was compared to a Swedish standard method (SSM), a modified NIOSH method comparable to international standards, where one air sample is collected during a full shift. The study was carried out at a cement plant that at the beginning of the 1990s was the subject of an epidemiological study, including collection of exposure data. The analysis of 31 clusters of parallel samples of the two methods, collecting dust consisting of Portland cement, gave a correlation between the SSM and the ESM of r_i = 0.91 (intra-class correlation coefficient). A conversion factor between the two methods was estimated, where SSM is 0.69 times ESM, with limits of agreement of 0.25 and 1.84, respectively. These results indicate a substantial inter-method difference. The authors therefore recommend that measurements obtained from the two methods should not be used interchangeably. Because the present study is of limited extent, the findings are confined to the operations studied, and further studies covering other exposure situations will be needed.
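A conversion factor with limits of agreement of this kind is commonly computed on the log of the paired ratios; a sketch with synthetic numbers (not the study's data):

```python
import math

def ratio_agreement(ssm, esm):
    """Conversion factor and 95% limits of agreement for paired samples,
    computed on the log of the SSM/ESM ratio, a common approach for
    method-comparison data. Inputs below are synthetic."""
    logs = [math.log(s / e) for s, e in zip(ssm, esm)]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    factor = math.exp(mean)               # SSM ~= factor * ESM
    return factor, math.exp(mean - 1.96 * sd), math.exp(mean + 1.96 * sd)

esm = [1.0, 2.0, 4.0, 8.0, 3.0]   # hypothetical ESM concentrations
ssm = [0.7, 1.3, 2.9, 5.6, 2.0]   # hypothetical paired SSM concentrations
factor, lo, hi = ratio_agreement(ssm, esm)
```

Wide limits of agreement around the conversion factor, as reported here (0.25 to 1.84), are what justifies the recommendation against using the methods interchangeably even when the correlation is high.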
ERIC Educational Resources Information Center
Flores, Ingrid M.
2015-01-01
Thirty preservice teachers enrolled in a field-based science methods course were placed at a public elementary school for coursework and for teaching practice with elementary students. Candidates focused on building conceptual understanding of science content and pedagogical methods through innovative curriculum development and other course…
Characterizing the complex permittivity of high-κ dielectrics using enhanced field method.
Chao, Hsien-Wen; Wong, Wei-Syuan; Chang, Tsun-Hsu
2015-11-01
This paper proposes a method to characterize the complex permittivities of samples based on enhancement of the electric field strength. The enhanced field method significantly improves the measuring range and accuracy of the samples' electrical properties. Full-wave simulations reveal that the resonant frequency is closely related to the dielectric constant of the sample. In addition, the loss tangent can be determined from the measured quality factor and the dielectric constant obtained above. Materials with low dielectric constant and very low loss tangent are measured for benchmarking, and the measured results agree well with previous understanding. Interestingly, materials with extremely high dielectric constants (εr > 50), such as titanium dioxide, calcium titanate, and strontium titanate, differ greatly as expected.
NASA Astrophysics Data System (ADS)
Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe
2017-06-01
Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in speed and reproducibility compared with manual digitization of geological structures. The potential-field method consists of interpolating a scalar function that indicates to which side of a geological boundary a given point belongs, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which relaxes some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided through the use of maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.
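The classification view, one interpolated "potential" per geological class normalized into class probabilities, can be illustrated with inverse-distance weighting standing in for the cokriging/maximum-likelihood machinery of the actual method; the points and unit names below are hypothetical.

```python
def idw_class_potentials(points, labels, query, power=2.0):
    """Toy stand-in for the multi-class view of the potential-field
    method: interpolate a per-class indicator ("potential") by inverse-
    distance weighting and normalize into class probabilities. The real
    method uses cokriging with maximum-likelihood training; this is only
    a structural illustration."""
    classes = sorted(set(labels))
    weights = []
    for (x, y), lab in zip(points, labels):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:  # query coincides with a data point
            return {c: 1.0 if c == lab else 0.0 for c in classes}
        weights.append((d2 ** (-power / 2.0), lab))
    total = sum(w for w, _ in weights)
    return {c: sum(w for w, l in weights if l == c) / total for c in classes}

# Hypothetical observations on either side of a geological boundary at x = 0.
pts = [(-2.0, 0.0), (-1.0, 1.0), (1.0, 0.0), (2.0, -1.0)]
labs = ["unit_A", "unit_A", "unit_B", "unit_B"]
probs = idw_class_potentials(pts, labs, (1.5, 0.0))
best = max(probs, key=probs.get)
```

The normalized class probabilities also suggest where an uncertainty measure comes from: near the boundary the probabilities approach each other, far from it one class dominates.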
Magnetic field measurements based on Terfenol coated photonic crystal fibers.
Quintero, Sully M M; Martelli, Cicero; Braga, Arthur M B; Valente, Luiz C G; Kato, Carla C
2011-01-01
A magnetic field sensor based on the integration of a high-birefringence photonic crystal fiber and a composite material made of Terfenol particles and an epoxy resin is proposed. An in-fiber modal interferometer is assembled by evenly exciting both eigenmodes of the HiBi fiber. Changes in the cavity length as well as the effective refractive index are induced by exposing the sensor head to magnetic fields. The magnetic field sensor has a sensitivity of 0.006 nm/mT over a range from 0 to 300 mT, with a resolution of about ±1 mT. A fiber Bragg grating magnetic field sensor is also fabricated and employed to characterize the response of the Terfenol composite to the magnetic field.
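With the reported linear sensitivity, converting a measured wavelength shift back to field strength is a one-line inversion; the range check mirrors the stated 0-300 mT span. This assumes a strictly linear response, which is only an approximation of the sensor's real behavior.

```python
SENSITIVITY_NM_PER_MT = 0.006   # reported sensor response, nm per mT
RANGE_MT = (0.0, 300.0)         # reported measurement range

def field_from_shift(delta_lambda_nm):
    """Invert the (assumed linear) wavelength-shift response of the
    interferometric sensor: B [mT] = delta_lambda [nm] / sensitivity."""
    b_mt = delta_lambda_nm / SENSITIVITY_NM_PER_MT
    if not (RANGE_MT[0] <= b_mt <= RANGE_MT[1]):
        raise ValueError("field outside the 0-300 mT calibrated range")
    return b_mt

# A 0.9 nm shift corresponds to 150 mT under the linear model.
b = field_from_shift(0.9)
```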
Field-Based Teacher Education in Literacy: Preparing Teachers in Real Classroom Contexts
ERIC Educational Resources Information Center
DeGraff, Tricia L.; Schmidt, Cynthia M.; Waddell, Jennifer H.
2015-01-01
For the past two decades, scholars have advocated for reforms in teacher education that emphasize relevant connections between theory and practice in university coursework and focus on clinical experiences. This paper is based on our experiences in designing and implementing an integrated literacy methods course in a field-based teacher education…
A New Method for Radar Rainfall Estimation Using Merged Radar and Gauge Derived Fields
NASA Astrophysics Data System (ADS)
Hasan, M. M.; Sharma, A.; Johnson, F.; Mariethoz, G.; Seed, A.
2014-12-01
Accurate estimation of rainfall is critical for any hydrological analysis. The advantage of radar rainfall measurements is their ability to cover large areas. However, the uncertainties in the parameters of the power law that links reflectivity to rainfall intensity have to date precluded the widespread use of radars for quantitative rainfall estimates in hydrological studies. There is therefore considerable interest in methods that can combine the strengths of radar and gauge measurements by merging the two data sources. In this work, we propose two new developments to advance this area of research. The first contribution is a non-parametric radar rainfall estimation method (NPZR) based on kernel density estimation. Instead of using a traditional Z-R relationship, the NPZR accounts for the uncertainty in the relationship between reflectivity and rainfall intensity. More importantly, this uncertainty can vary for different values of reflectivity. The NPZR method reduces the Mean Square Error (MSE) of the estimated rainfall by 16% compared to a traditionally fitted Z-R relation. Rainfall estimates are improved at 90% of the gauge locations when the method is applied to the densely gauged Sydney Terrey Hills radar region. A copula-based spatial interpolation method (SIR) is used to estimate rainfall from gauge observations at the radar pixel locations. The gauge-based SIR estimates have low uncertainty in areas with good gauge density, whilst the NPZR method provides more reliable rainfall estimates than the SIR method, particularly in areas of low gauge density. The second contribution of the work is to merge the radar rainfall field with spatially interpolated gauge rainfall estimates. The two rainfall fields are combined using a temporally and spatially varying weighting scheme that can account for the strengths of each method. The weight for each time period at each location is calculated based on the expected estimation error of each method.
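The non-parametric idea of letting the data, rather than a fixed power law, determine the reflectivity-rainfall mapping can be illustrated with a simple Nadaraya-Watson kernel regression on synthetic pairs; the paper's NPZR is a fuller kernel-density formulation, so this is only a structural sketch.

```python
import math

def nw_rainfall(dbz_train, rain_train, dbz_query, bandwidth=2.0):
    """Nadaraya-Watson kernel regression of rain rate on reflectivity:
    a simple non-parametric stand-in for a fixed Z-R power law."""
    weights = [math.exp(-0.5 * ((z - dbz_query) / bandwidth) ** 2)
               for z in dbz_train]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, rain_train)) / total

# Synthetic training pairs loosely following Z = 200 * R^1.6
# (a Marshall-Palmer-like relation, used here only to generate data).
rains = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]
dbzs = [10 * math.log10(200.0 * r ** 1.6) for r in rains]

# Query at the reflectivity corresponding to R = 4 mm/h.
est = nw_rainfall(dbzs, rains, dbz_query=10 * math.log10(200.0 * 4.0 ** 1.6))
```

Because the kernel weights depend on where the query reflectivity falls, the effective Z-R mapping (and its uncertainty) varies with reflectivity, which is the key property the abstract highlights.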
Method of using an electric field controlled emulsion phase contactor
Scott, Timothy C.
1993-01-01
A system for contacting liquid phases comprising a column for transporting a liquid phase contacting system, the column having upper and lower regions. The upper region has a nozzle for introducing a dispersed phase and means for applying thereto a vertically oriented high intensity pulsed electric field. This electric field allows improved flow rates while shattering the dispersed phase into many micro-droplets upon exiting the nozzle to form a dispersion within a continuous phase. The lower region employs means for applying to the dispersed phase a horizontally oriented high intensity pulsed electric field so that the dispersed phase undergoes continuous coalescence and redispersion while being urged from side to side as it progresses through the system, increasing greatly the mass transfer opportunity.
Bioventing Field Initiative at Keesler Air Force Base, Mississippi
2007-11-02
This report describes the activities conducted at Keesler AFB, Mississippi, as part of the Bioventing Field initiative for the U.S. Air Force Center...and installation of bioventing systems. Each site at the base is discussed individually, followed by a description of site activities at the...background area. The purpose of this Bioventing Field initiative is to measure the soil gas permeability and microbial activity at a contaminated site in
FIELD VALIDATION OF SEDIMENT TOXICITY IDENTIFICATION AND EVALUATION METHODS
Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...
An equivalent source method for modelling the global lithospheric magnetic field
NASA Astrophysics Data System (ADS)
Kother, Livia; Hammer, Magnus D.; Finlay, Christopher C.; Olsen, Nils
2015-10-01
We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when it was at its lowest altitude and solar activity was quiet. All three components of the vector field data are utilized at all available latitudes. Estimates of core and large-scale magnetospheric sources are removed from the measurements using the CHAOS-4 model. Quiet-time and night-side data selection criteria are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic equivalent potential field sources (monopoles) arranged in an icosahedron grid at a depth of 100 km below the surface. The corresponding model parameters are estimated using an iteratively reweighted least-squares algorithm that includes model regularization (either quadratic or maximum entropy) and Huber weighting. Data error covariance matrices are implemented, accounting for the dependence of data variances on quasi-dipole latitude. The resulting equivalent source lithospheric field models show a degree correlation to MF7 greater than 0.7 out to spherical harmonic degree 100. Compared to the quadratic regularization approach, the entropy regularized model possesses notably lower power above degree 70 and a lower number of degrees of freedom despite fitting the observations to a very similar level. Advantages of our equivalent source method include its local nature, the possibility for regional grid refinement and the production of local power spectra, the ability to implement constraints and regularization depending on geographical position, and the ease of transforming the equivalent source values into spherical harmonics.
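The iteratively reweighted least-squares step with Huber weighting can be sketched on a deliberately tiny problem, a one-parameter linear fit with an outlier; the lithospheric model itself estimates many monopole amplitudes with regularization, but the reweighting loop has the same shape.

```python
def huber_irls_slope(xs, ys, delta=1.0, iters=20):
    """Iteratively reweighted least squares for a one-parameter model
    y ~= a*x with Huber weights: w = 1 for |r| <= delta, delta/|r|
    otherwise, so outliers are progressively down-weighted. A minimal
    sketch of the reweighting idea, not the lithospheric model itself."""
    # Start from the ordinary least-squares solution.
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    for _ in range(iters):
        w = []
        for x, y in zip(xs, ys):
            r = abs(y - a * x)
            w.append(1.0 if r <= delta else delta / r)
        a = (sum(wi * x * y for wi, x, y in zip(w, xs, ys))
             / sum(wi * x * x for wi, x in zip(w, xs)))
    return a

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.1, 5.9, 8.0, 30.0]   # last point is an outlier off y = 2x
a = huber_irls_slope(xs, ys)
```

Ordinary least squares is pulled strongly toward the outlier; the Huber reweighting recovers a slope close to the underlying value of 2.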
Perspectives on the simulation of protein–surface interactions using empirical force field methods
Latour, Robert A.
2014-01-01
Protein–surface interactions are of fundamental importance for a broad range of applications in the fields of biomaterials and biotechnology. Present experimental methods are limited in their ability to provide a comprehensive depiction of these interactions at the atomistic level. In contrast, empirical force field based simulation methods inherently provide the ability to predict and visualize protein–surface interactions in full atomistic detail. These methods, however, must be carefully developed, validated, and properly applied before confidence can be placed in results from the simulations. In this perspectives paper, I provide an overview of the critical aspects that I consider to be of greatest importance for the development of these methods, with a focus on the research that my combined experimental and molecular simulation groups have conducted over the past decade to address these issues. These critical issues include the tuning of interfacial force field parameters to accurately represent the thermodynamics of interfacial behavior, adequate sampling of these types of complex molecular systems to generate results that are comparable with experimental data, and the generation of experimental data that can be used for simulation results evaluation and validation. PMID:25028242
Identifying work related injuries: comparison of methods for interrogating text fields
2010-01-01
Background Work-related injuries in Australia are estimated to cost around $57.5 billion annually; however, there are currently insufficient surveillance data available to support an evidence-based public health response. Emergency departments (EDs) in Australia are a potential source of information on work-related injuries, though most EDs do not have an 'Activity Code' to identify work-related cases, with information about the presenting problem recorded in a short free-text field. This study compared methods for interrogating text fields to identify work-related injuries presenting at emergency departments, to inform approaches to surveillance of work-related injury. Methods Three approaches were used to interrogate an injury description text field to classify cases as work-related: keyword search, index search, and content analytic text mining. Sensitivity and specificity were examined by comparing cases flagged by each approach to cases coded with an Activity code during triage. Methods to improve the sensitivity and/or specificity of each approach were explored by adjusting the classification techniques within each broad approach. Results The basic keyword search detected 58% of cases (specificity 0.99), the index search detected 62% of cases (specificity 0.87), and the content analytic text mining approach (using adjusted probabilities) detected 77% of cases (specificity 0.95). Conclusions The findings of this study provide strong support for continued development of text searching methods to obtain information from routine emergency department data, to improve the capacity for comprehensive injury surveillance. PMID:20374657
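The keyword-search approach and the sensitivity/specificity comparison against the Activity code can be sketched as follows; the keyword list and the five cases are illustrative, not the study's data.

```python
# Illustrative keyword list; the study's actual list is not reproduced here.
WORK_KEYWORDS = {"work", "forklift", "workplace", "machine", "site", "factory"}

def keyword_flag(text):
    """Basic keyword search over the free-text injury description:
    flag the case as work-related if any keyword appears."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    return any(t in WORK_KEYWORDS for t in tokens)

def sensitivity_specificity(flags, truth):
    """Compare flagged cases against the triage Activity code."""
    tp = sum(f and t for f, t in zip(flags, truth))
    tn = sum((not f) and (not t) for f, t in zip(flags, truth))
    fp = sum(f and (not t) for f, t in zip(flags, truth))
    fn = sum((not f) and t for f, t in zip(flags, truth))
    return tp / (tp + fn), tn / (tn + fp)

cases = ["crushed finger in machine at factory", "fell at home from ladder",
         "forklift ran over foot", "sprained ankle playing netball",
         "back strain lifting boxes"]
truth = [True, False, True, False, True]   # Activity-coded work-relatedness
sens, spec = sensitivity_specificity([keyword_flag(c) for c in cases], truth)
```

The last case illustrates the sensitivity limit of plain keyword search: a genuinely work-related description with no keyword is missed, which is why the study's index-search and text-mining approaches detect more cases.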
Methods for the treatment of acoustic and absorptive/dispersive wave field measurements
NASA Astrophysics Data System (ADS)
Innanen, Kristopher Albert Holm
Many recent methods of seismic wave field processing and inversion concern themselves with the fine detail of the amplitude and phase characteristics of measured events. Processes of absorption and dispersion have a strong impact on both; the impact is particularly deleterious to the effective resolution of images created from the data. There is a need to understand the dissipation of seismic wave energy as it affects such methods. I identify algorithms based on the inverse scattering series, algorithms based on multiresolution analysis, and algorithms based on the estimation of the order of the singularities of seismic data as requiring this kind of study. As it turns out, these approaches may be cast such that they deal directly with issues of attenuation, to the point where they can be seen as tools for viscoacoustic forward modelling and Q estimation, or for viscoacoustic inversion and/or Q compensation. In this thesis I demonstrate these ideas in turn. The forward scattering series is formulated such that a viscoacoustic wave field is represented as an expansion about an acoustic reference; analysis of the convergence properties and scattering diagrams is carried out, and it is shown that (i) the attenuated wave field may be generated by the nonlinear interplay of acoustic reference fields, and (ii) the cumulative effect of certain scattering types is responsible for macroscopic wave field properties; the basic form of the absorptive/dispersive inversion problem is also predicted. Following this, the impact of Q on measurements of the local regularity of a seismic trace, via Lipschitz exponents, is discussed, with the aim of using these exponents as a means to estimate local Q values. The problem of inverse scattering based imaging and inversion is treated next: I present a simple, computable form for the simultaneous imaging and wavespeed inversion of 1D acoustic wave field data. This method is applied to 1D, normal incidence synthetic data: its sensitivity with
Field-structured material media and methods for synthesis thereof
Martin, James E.; Hughes, Robert C.; Anderson, Robert A.
2001-09-18
The present application is directed to a new class of composite materials, called field-structured composite (FSC) materials, which comprise an oriented aggregate structure made of magnetic particles suspended in a nonmagnetic medium, and to a new class of processes for their manufacture. FSC materials have much potential for application, including use in chemical, optical, environmental, and mechanical sensors.
Enhancing Field Research Methods with Mobile Survey Technology
ERIC Educational Resources Information Center
Glass, Michael R.
2015-01-01
This paper assesses the experience of undergraduate students using mobile devices and a commercial application, iSurvey, to conduct a neighborhood survey. Mobile devices offer benefits for enhancing student learning and engagement. This field exercise created the opportunity for classroom discussions on the practicalities of urban research, the…
Polarization-current-based, finite-difference time-domain, near-to-far-field transformation.
Zeng, Yong; Moloney, Jerome V
2009-05-15
A near-to-far-field transformation algorithm for three-dimensional finite-difference time-domain is presented in this Letter. This approach is based directly on the polarization current of the scatterer, not the scattered near fields. It therefore eliminates the numerical errors originating from the spatial offset of the E and H fields, inherent in the standard near-to-far-field transformation. The proposed method is validated via direct comparisons with the analytical Lorentz-Mie solutions of plane waves scattered by large dielectric and metallic spheres with strong forward-scattering lobes.
NASA Astrophysics Data System (ADS)
Ko, Han Seo; Gim, Yeonghyeon; Kang, Seung-Hwan
2015-11-01
A three-dimensional optical correction method was developed to reconstruct droplet-based flow fields. For the numerical simulation, synthetic phantoms were reconstructed by a simultaneous multiplicative algebraic reconstruction technique using three projection images positioned at an offset angle of 45°. If the synthetic phantom lies in a conical object whose refractive index differs from that of the atmosphere, the image can be distorted because light is refracted at the surface of the conical object. Thus, the direction of the projection ray was replaced by the refracted ray arising at the surface of the conical object. To validate the method accounting for this distortion effect, its reconstruction results were compared with the original phantom; the reconstruction with the correction showed smaller error than that without it. The method was applied to a Taylor cone, produced by a high voltage between a droplet and a substrate, to reconstruct the three-dimensional flow fields for analysis of the characteristics of the droplet. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2013R1A2A2A01068653).
Killing vector fields in three dimensions: a method to solve massive gravity field equations
NASA Astrophysics Data System (ADS)
Gürses, Metin
2010-10-01
Killing vector fields in three dimensions play an important role in the construction of the related spacetime geometry. In this work we show that when a three-dimensional geometry admits a Killing vector field then the Ricci tensor of the geometry is determined in terms of the Killing vector field and its scalars. In this way we can generate all products and covariant derivatives at any order of the Ricci tensor. Using this property we give ways to solve the field equations of topologically massive gravity (TMG) and new massive gravity (NMG) introduced recently. In particular when the scalars of the Killing vector field (timelike, spacelike and null cases) are constants then all three-dimensional symmetric tensors of the geometry, the Ricci and Einstein tensors, their covariant derivatives at all orders, and their products of all orders are completely determined by the Killing vector field and the metric. Hence, the corresponding three-dimensional metrics are strong candidates for solving all higher derivative gravitational field equations in three dimensions.
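For reference, the defining condition is that the Lie derivative of the metric along the Killing vector field ξ vanishes:

```latex
\mathcal{L}_{\xi} g_{\mu\nu} \;=\; \nabla_{\mu}\xi_{\nu} + \nabla_{\nu}\xi_{\mu} \;=\; 0 .
```

The scalars referred to in the abstract are built from ξ itself, e.g. its norm ξ^μ ξ_μ, whose sign distinguishes the timelike, spacelike and null cases.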
Killgore, K.J.; Payne, B.S.
1984-04-01
The Aquatic Plant Control Research Program (APCRP) of the U.S. Army Engineer Waterways Experiment Station (WES) is developing field techniques to measure treatment efficacy and to determine site characteristics that influence the treatment efficacy. Treatment efficacy is considered a quantitative determination of the extent and duration of changes in problem aquatic plant populations attributable to the use of a treatment method (i.e., chemical, mechanical, biological, environmental). Depending on the plant species, efficacy can be determined or indicated by changes in biomass, areal distribution, or height of an aquatic plant in response to treatment. Aquatic plant biomass is sampled with a WES aquatic biomass sampler; areal distribution of aquatic plants is determined by aerial photography or with an electronic positioning system; and submersed aquatic plant height is measured with a fathometer (depth recorder) used with an electronic positioning and repositioning system (AGNAV). The APCRP has also developed field techniques to determine site characteristics that influence efficacy using commercially available instrumentation. This instrumentation can be used to measure treatment efficacy and to determine site characteristics simultaneously.
Size-extensive vibrational self-consistent field methods with anharmonic geometry corrections
NASA Astrophysics Data System (ADS)
Hermes, Matthew R.; Keçeli, Murat; Hirata, So
2012-06-01
In the size-extensive vibrational self-consistent field (XVSCF) method introduced earlier [M. Keçeli and S. Hirata, J. Chem. Phys. 135, 134108 (2011)], 10.1063/1.3644895, only a small subset of even-order force constants that can form connected diagrams were used to compute extensive total energies and intensive transition frequencies. The mean-field potentials of XVSCF formed with these force constants have been shown to be effectively harmonic, making basis functions, quadrature, or matrix diagonalization in the conventional VSCF method unnecessary. We introduce two size-consistent VSCF methods, XVSCF(n) and XVSCF[n], for vibrationally averaged geometries in addition to energies and frequencies including anharmonic effects caused by up to the nth-order force constants. The methods are based on our observations that a small number of odd-order force constants of certain types can form open, connected diagrams isomorphic to the diagram of the mean-field potential gradients and that these nonzero gradients shift the potential minima by intensive amounts, which are interpreted as anharmonic geometry corrections. XVSCF(n) evaluates these mean-field gradients and force constants at the equilibrium geometry and estimates this shift accurately, but approximately, neglecting the coupling between these two quantities. XVSCF[n] solves the coupled equations for geometry corrections and frequencies with an iterative algorithm, giving results that should be identical to those of VSCF when applied to an infinite system. We present the diagrammatic and algebraic definitions, algorithms, and initial implementations as well as numerical results of these two methods. The results show that XVSCF(n) and XVSCF[n] reproduce the vibrationally averaged geometries of VSCF for naphthalene and anthracene in their ground and excited vibrational states accurately at fractions of the computational cost.
Geometric and Topological Methods for Quantum Field Theory
NASA Astrophysics Data System (ADS)
Cardona, Alexander; Contreras, Iván.; Reyes-Lega, Andrés. F.
2013-05-01
Introduction
1. A brief introduction to Dirac manifolds (Henrique Bursztyn)
2. Differential geometry of holomorphic vector bundles on a curve (Florent Schaffhauser)
3. Paths towards an extension of Chern-Weil calculus to a class of infinite dimensional vector bundles (Sylvie Paycha)
4. Introduction to Feynman integrals (Stefan Weinzierl)
5. Iterated integrals in quantum field theory (Francis Brown)
6. Geometric issues in quantum field theory and string theory (Luis J. Boya)
7. Geometric aspects of the standard model and the mysteries of matter (Florian Scheck)
8. Absence of singular continuous spectrum for some geometric Laplacians (Leonardo A. Cano García)
9. Models for formal groupoids (Iván Contreras)
10. Elliptic PDEs and smoothness of weakly Einstein metrics of Hölder regularity (Andrés Vargas)
11. Regularized traces and the index formula for manifolds with boundary (Alexander Cardona and César Del Corral)
Index
Magnetic field adjustment structure and method for a tapered wiggler
Halbach, K.
1988-03-01
An improved wiggler is described having means for adjusting the magnetic field generated by electromagnet poles spaced along the path of a charged particle beam, to compensate for energy losses in the charged particles. It comprises: (a) windings on at least some of the electromagnet poles in the wiggler; (b) one of the windings on each of a group of adjacent electromagnet poles connected to a first power supply, and another winding on the electromagnet poles having more than one winding connected to a second power supply; and (c) means for independently adjusting one power supply to independently vary the current in one of the windings on a group of adjacent electromagnet poles, whereby the magnetic field strength of a group of adjacent electromagnet poles in the wiggler may be changed in smaller increments.
Circuitry, systems and methods for detecting magnetic fields
Kotter, Dale K [Shelley, ID; Spencer, David F [Idaho Falls, ID; Roybal, Lyle G [Idaho Falls, ID; Rohrbaugh, David T [Idaho Falls, ID
2010-09-14
Circuitry for detecting magnetic fields includes a first magnetoresistive sensor and a second magnetoresistive sensor configured to form a gradiometer. The circuitry includes a digital signal processor and a first feedback loop coupled between the first magnetoresistive sensor and the digital signal processor. A second feedback loop which is discrete from the first feedback loop is coupled between the second magnetoresistive sensor and the digital signal processor.
PROGRESS ON GENERIC PHASE-FIELD METHOD DEVELOPMENT
Biner, Bullent; Tonks, Michael; Millett, Paul C.; Li, Yulan; Hu, Shenyang Y.; Gao, Fei; Sun, Xin; Martinez, E.; Anderson, D.
2012-09-26
In this report, we summarize our current collaborative efforts, involving three national laboratories: Idaho National Laboratory (INL), Pacific Northwest National Laboratory (PNNL), and Los Alamos National Laboratory (LANL), to develop a computational framework for homogeneous and heterogeneous nucleation mechanisms within the generic phase-field model. During the studies, the Fe-Cr system was chosen as a model system due to its simplicity and the availability of reliable thermodynamic and kinetic data, as well as the range of applications of low-chromium ferritic steels in nuclear reactors. For homogeneous nucleation, the relevant parameters determined from atomistic studies were used directly to determine the energy functional and parameters in the phase-field model. Interfacial energy, critical nucleus size, nucleation rate, and coarsening kinetics were systematically examined in two- and three-dimensional models. For the heterogeneous nucleation mechanism, we studied the nucleation and growth behavior of chromium precipitates due to the presence of dislocations. The results demonstrate that both nucleation schemes can be introduced to a phase-field modeling algorithm with the desired accuracy and computational efficiency.
Endoscopic Skull Base Reconstruction: An Evolution of Materials and Methods.
Sigler, Aaron C; D'Anza, Brian; Lobo, Brian C; Woodard, Troy; Recinos, Pablo F; Sindwani, Raj
2017-03-31
Endoscopic skull base surgery has developed rapidly over the last decade, in large part because of the expanding armamentarium of endoscopic repair techniques. This article reviews the available technologies and techniques, including vascularized and nonvascularized flaps, synthetic grafts, sealants and glues, and multilayer reconstruction. Understanding which of these repair methods is appropriate and under what circumstances is paramount to achieving success in this challenging but rewarding field. A graduated approach to skull base reconstruction is presented to provide a systematic framework to guide selection of repair technique to ensure a successful outcome while minimizing morbidity for the patient.
Test method on infrared system range based on space compression
NASA Astrophysics Data System (ADS)
Chen, Zhen-xing; Shi, Sheng-bing; Han, Fu-li
2016-09-01
An infrared thermal imaging system generates images from the difference in infrared radiation between an object and its background, operating in a passive mode. Range is an important performance characteristic and a required item in appraisal tests of infrared systems. In this paper, the aim is to carry out infrared system range tests in the laboratory. A simulated test ground is designed based on object equivalence, background simulation, object characteristic control, atmospheric attenuation characteristics, infrared jamming simulation, and so on. Repeatable and controllable tests are achieved, solving the problems of the traditional field test method.
NASA Astrophysics Data System (ADS)
Ribaudo, J. T.; Constable, C.; Parker, R. L.
2009-12-01
Scripted finite element methods allow flexible investigations of the influence of asymmetric external source fields and 3-dimensional (3D) internal electrical conductivity structure in the problem of global geomagnetic depth sounding. Our forward modeling is performed in the time and frequency domains via FlexPDE, a commercial finite element modeling package, and the technique has been validated against known solutions to 3D steady state and time-dependent problems. The induction problem is formulated in terms of the magnetic vector potential and electric scalar potential, and mesh density is managed both explicitly and through adaptive mesh refinement. We investigate the effects of 3D Earth conductivity on both satellite and ground-based magnetic field observations in the form of a geographically varying conductance map of the crust and oceans overlying a radially symmetric core and mantle. This map is used in conjunction with a novel boundary condition based on Ampere's Law to model variable near-surface induction without the computational expense of a 3D crust/ocean mesh and is valid for magnetic signals in the frequency range of interest for satellite induction studies. The simulated external magnetic field is aligned with Earth's magnetic pole, rather than its rotational pole, and increases in magnitude along the Earth/Sun axis. Earth rotates through this field with a period of 24 hours. Electromagnetic c-responses estimated from satellite data under the assumption that the primary and induced fields are dipolar in structure are known to be biased with respect to local time. We investigate the influence of Earth's rotation through the non-uniform external field on these c-responses, to determine whether this can explain the observed local time bias.
Gradient shimming based on regularized estimation for B0-field and shim functions.
Song, Kan; Bao, Qingjia; Chen, Fang; Huang, Chongyang; Feng, Jiwen; Liu, Chaoyang
2016-07-01
Spatially mapping the B0-field and shim functions is a crucial step in gradient shimming. The conventional estimation method used in the phase difference imaging technique takes no account of noise and T2* effects and is prone to produce noisy and distorted field maps. This paper describes a new gradient shimming approach based on regularized estimation of the B0-field and shim functions. Based on a statistical model, the B0-field and shim function maps are estimated by a penalized maximum likelihood method that minimizes two regularized least-squares cost functions, respectively. The first cost function, for the B0-field, exploits two facts: the noise in the phase difference measurements is Gaussian, and B0-field maps tend to be smooth. The second adds the additional fact that each shim function corresponds to a given spherical harmonic of the magnetic field. Significant improvements in the quality of field mapping and in the final shimming results are demonstrated through computer simulations as well as experiments, especially when the magnetic field homogeneity is poor.
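The penalized least-squares idea behind this kind of field-map estimation can be sketched in one dimension: a data-fidelity term keeps the estimate near the noisy phase-difference measurements, while a roughness penalty encodes the prior that B0-field maps are smooth. The weights, regularization strength, and synthetic "field" below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Minimal 1-D sketch of penalized-least-squares field mapping: minimize
#   J(x) = sum_i w_i (y_i - x_i)^2 + lam * sum_i (x_{i+1} - x_i)^2,
# where y are noisy phase-difference measurements and w_i model reliability.
def regularized_field_map(y, w, lam):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)      # first-difference operator
    A = np.diag(w) + lam * D.T @ D      # normal equations of the penalized fit
    return np.linalg.solve(A, w * y)

rng = np.random.default_rng(0)
true = np.linspace(0.0, 1.0, 50) ** 2          # smooth stand-in for a B0 map
noisy = true + 0.05 * rng.standard_normal(50)  # Gaussian phase noise
est = regularized_field_map(noisy, np.ones(50), lam=10.0)
# the regularized map varies far less point-to-point than the raw data
print(np.abs(np.diff(est)).max() < np.abs(np.diff(noisy)).max())
```

The same structure carries over to 2-D/3-D maps, with the difference operator replaced by a spatial roughness penalty and the weights derived from image magnitude.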
General purpose, field-portable cell-based biosensor platform.
Gilchrist, K H; Barker, V N; Fletcher, L E; DeBusschere, B D; Ghanouni, P; Giovangrandi, L; Kovacs, G T
2001-09-01
There are several groups of researchers developing cell-based biosensors for chemical and biological warfare agents based on electrophysiologic monitoring of cells. In order to transition such sensors from the laboratory to the field, a general-purpose hardware and software platform is required. This paper describes the design, implementation, and field-testing of such a system, consisting of cell-transport and data acquisition instruments. The cell-transport module is a self-contained, battery-powered instrument that allows various types of cell-based modules to be maintained at a preset temperature and ambient CO2 level while in transit or in the field. The data acquisition module provides 32 channels of action potential amplification, filtering, and real-time data streaming to a laptop computer. At present, detailed analysis of the acquired data is carried out off-line, but sufficient computing power is available in the data acquisition module to enable the most useful algorithms to eventually run in real time in the field. Both modules have sufficient internal power to permit realistic field-testing, such as the example presented in this paper.
Fowler Nordheim theory of carbon nanotube based field emitters
NASA Astrophysics Data System (ADS)
Parveen, Shama; Kumar, Avshish; Husain, Samina; Husain, Mushahid
2017-01-01
Field emission (FE) phenomena are generally explained within the framework of Fowler-Nordheim (FN) theory, which was originally derived for flat metal surfaces. In this work, an effort has been made to present the field emission mechanism in carbon nanotubes (CNTs), which have tip-type geometry at the nanoscale. The high aspect ratio of CNTs leads to a large field enhancement factor and lower operating voltages, because the electric field strength in the vicinity of the nanotube tips can be enhanced a thousandfold. The work function of the nanostructure has been calculated from the FN plot by reverse engineering. With the help of the modified FN equation, a formula for the effective emitting area (the active area for electron emission) has been derived and employed to calculate the active emitting area of CNT field emitters. It is therefore of great interest to present a state-of-the-art study of the complete solution of the FN equation for CNT-based field emitter displays. This manuscript also provides a better understanding of how the different FE parameters of CNT field emitters are calculated using the FN equation.
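The standard FN-plot analysis the abstract alludes to can be sketched as follows: in the simplified form I = a V^2 exp(-b/V), plotting ln(I/V^2) against 1/V gives a straight line whose slope encodes the work function and field enhancement factor. The work function, gap, and enhancement factor below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch of Fowler-Nordheim (FN) plot analysis for a field emitter.
B_FN = 6.83e9          # V m^-1 eV^-3/2, standard FN exponent constant
phi = 5.0              # assumed emitter work function, eV
d = 500e-6             # assumed anode-cathode gap, m
beta = 3000.0          # assumed field-enhancement factor

V = np.linspace(500.0, 1500.0, 50)        # applied voltages, V
b_true = B_FN * phi ** 1.5 * d / beta     # FN slope parameter
I = 1e-12 * V ** 2 * np.exp(-b_true / V)  # synthetic FN-like emission current

# FN plot: ln(I/V^2) versus 1/V is a straight line with slope -b.
slope, intercept = np.polyfit(1.0 / V, np.log(I / V ** 2), 1)
beta_est = B_FN * phi ** 1.5 * d / (-slope)  # recover the enhancement factor
print(round(beta_est))  # 3000 (recovers the assumed beta)
```

In practice one fixes either the work function or the enhancement factor from independent knowledge and extracts the other from the fitted slope, which is the "reverse engineering" step mentioned above.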
Shape-based separation of microparticles with magnetic fields
NASA Astrophysics Data System (ADS)
Wang, Cheng; Zhou, Ran
2016-11-01
Precise manipulation, e.g., sorting and focusing, of nonspherical microparticles in fluidic environments has important applications in the biological sciences and biomedical engineering. However, nonspherical microparticles are hard to manipulate because they tumble in shear flows. Most existing techniques, including traditional filtration and centrifugation as well as recent microfluidic technology, have difficulty separating microparticles by shape. We demonstrate a novel shape-based separation technique that combines external magnetic fields with pressure-driven flows in a microchannel. Due to the magnetic field, prolate ellipsoidal particles migrate laterally at different speeds than spherical ones, leading to effective separation. Our experimental investigations reveal the underlying physical mechanism of the observed shape-dependent migration: the magnetic field breaks the rotational symmetry of the nonspherical particles and induces a shape-dependent lift force and migration velocity.
NASA Astrophysics Data System (ADS)
Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.
2016-08-01
Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, the need for invisible sound sources, and a very specific acoustical environment make open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) less than ideal. In this paper, experiments in an aircraft mock-up with multichannel least-squares methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.
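At a single frequency, the multichannel least-squares approach reduces to a regularized linear solve: given measured transfer functions H from reproduction sources to control microphones and a target pressure field p, the source signals q minimize ||Hq - p||^2 + lam*||q||^2. The sizes, random transfer functions, and regularization weight below are toy assumptions standing in for the paper's 3180 measured transfer paths.

```python
import numpy as np

# Single-frequency sketch of regularized multichannel least-squares
# sound field reproduction. H: mics x sources complex transfer matrix.
rng = np.random.default_rng(1)
M, L = 24, 8                                   # mics, sources (toy sizes)
H = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
p = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # target field

lam = 1e-2                                     # Tikhonov regularization weight
# Regularized normal equations: q = (H^H H + lam*I)^-1 H^H p
q = np.linalg.solve(H.conj().T @ H + lam * np.eye(L), H.conj().T @ p)

err = np.linalg.norm(H @ q - p) / np.linalg.norm(p)  # relative reproduction error
print(err < 1.0)  # regularized solution always beats driving no sources at all
```

The regularization weight trades reproduction accuracy against source effort and robustness to errors in the measured transfer paths, which is why equalization and calibration quality matter so much in the mock-up experiments.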
Matrix-based image reconstruction methods for tomography
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to produce good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
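The inversion-free maximum-likelihood reconstruction mentioned here is conventionally realized by the multiplicative ML-EM update for emission tomography. The tiny random system matrix below is a toy stand-in for an instrument-specific matrix, not the paper's actual ring geometry.

```python
import numpy as np

# Sketch of the ML-EM update x <- x * A^T(y / Ax) / A^T 1 for emission
# tomography; no matrix inversion is required, only matrix-vector products.
rng = np.random.default_rng(2)
A = rng.random((30, 10))          # system matrix: detector bins x image pixels
x_true = rng.random(10) + 0.1     # "true" emission image (strictly positive)
y = A @ x_true                    # noiseless, consistent projection data

x = np.ones(10)                   # flat initial image
res0 = np.linalg.norm(A @ x - y)  # initial data misfit
sens = A.T @ np.ones(30)          # sensitivity image A^T 1
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens   # multiplicative ML-EM update

res = np.linalg.norm(A @ x - y)
print(res < 0.05 * res0)  # iterations drive the projections toward the data
```

Because the update is purely multiplicative, positivity of the image is preserved automatically, one of the reasons ML-EM behaves well where direct matrix inversion is ill-conditioned.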
Towards Making Data Bases Practical for use in the Field
NASA Astrophysics Data System (ADS)
Fischer, T. P.; Lehnert, K. A.; Chiodini, G.; McCormick, B.; Cardellini, C.; Clor, L. E.; Cottrell, E.
2014-12-01
Geological, geochemical, and geophysical research is often field based with travel to remote areas and collection of samples and data under challenging environmental conditions. Cross-disciplinary investigations would greatly benefit from near real-time data access and visualisation within the existing framework of databases and GIS tools. An example of complex, interdisciplinary field-based and data intensive investigations is that of volcanologists and gas geochemists, who sample gases from fumaroles, hot springs, dry gas vents, hydrothermal vents and wells. Compositions of volcanic gas plumes are measured directly or by remote sensing. Soil gas fluxes from volcanic areas are measured by accumulation chamber and involve hundreds of measurements to calculate the total emission of a region. Many investigators also collect rock samples from recent or ancient volcanic eruptions. Structural, geochronological, and geophysical data collected during the same or related field campaigns complement these emissions data. All samples and data collected in the field require a set of metadata including date, time, location, sample or measurement id, and descriptive comments. Currently, most of these metadata are written in field notebooks and later transferred into a digital format. Final results such as laboratory analyses of samples and calculated flux data are tabulated for plotting, correlation with other types of data, modeling and finally publication and presentation. Data handling, organization and interpretation could be greatly streamlined by using digital tools available in the field to record metadata, assign an International Geo Sample Number (IGSN), upload measurements directly from field instruments, and arrange sample curation. Available data display tools such as GeoMapApp and existing data sets (PetDB, IRIS, UNAVCO) could be integrated to direct locations for additional measurements during a field campaign. Nearly live display of sampling locations, pictures
Regularization of identity based solution in string field theory
NASA Astrophysics Data System (ADS)
Zeze, Syoji
2010-10-01
We demonstrate that an Erler-Schnabl-type solution in cubic string field theory can be naturally interpreted as a gauge invariant regularization of an identity based solution. We consider a solution which interpolates between an identity based solution and the ordinary Erler-Schnabl one. Two gauge invariant quantities, the classical action and the closed string tadpole, are evaluated for finite values of the gauge parameter. It is explicitly checked that both of them are independent of the gauge parameter.
Field sampling method for quantifying odorants in humid environments.
Trabue, Steven L; Scoggin, Kenwood D; Li, Hong; Burns, Robert; Xin, Hongwei
2008-05-15
Most air quality studies in agricultural environments use thermal desorption analysis for quantifying semivolatile organic compounds (SVOCs) associated with odor. The objective of this study was to develop a robust sampling technique for measuring SVOCs in humid environments. Test atmospheres were generated at ambient temperatures (23 +/- 1.5 degrees C) and 25, 50, and 80% relative humidity (RH). Sorbent materials used included Tenax, graphitized carbon, and carbon molecular sieve (CMS). Sorbent tubes were challenged with 2, 4, 8, 12, and 24 L of air at the various RHs. Sorbent tubes with CMS material performed poorly at both 50 and 80% RH due to excessive sorption of water. Heating of CMS tubes during sampling or dry-purging of CMS tubes post-sampling effectively reduced water sorption, with heating of tubes being preferred due to the higher recovery and reproducibility. Tenax tubes had breakthrough of the more volatile compounds and tended to form artifacts with increasing volumes of air sampled. Graphitized carbon sorbent tubes containing Carbopack X and Carbopack C performed best, with quantitative recovery of all compounds at all RHs and sampling volumes tested. The graphitized carbon tubes were taken to the field for further testing. Field samples taken from inside swine feeding operations showed that butanoic acid, 4-methylphenol, 4-ethylphenol, indole, and 3-methylindole were the compounds detected most often above their odor threshold values. Field samples taken from a poultry facility demonstrated that butanoic acid, 3-methylbutanoic acid, and 4-methylphenol were the compounds above their odor threshold values detected most often.
Keywords: relative humidity, CAFO, VOC, SVOC, thermal desorption, swine, poultry, air quality, odor.
Solution Deposition Methods for Carbon Nanotube Field-Effect Transistors
2009-06-01
solution prior to spin-coating. A comparison of the results for each deposition method will help to determine which conditions are useful for producing CNT devices for chemical sensing and electronic applications.
Participative Critical Enquiry in Graduate Field-Based Learning
ERIC Educational Resources Information Center
Reilly, Kathy; Clavin, Alma; Morrissey, John
2016-01-01
This paper outlines a critical pedagogic approach to field-based learning (FBL) at graduate level. Drawing on student experience stemming from a FBL module and as part of an MA programme in Environment, Society and Development, the paper addresses the complexities associated with student-led, participative critical enquiry during fieldwork in…
CryoSQUID: A SQUID-Based Magnetic Field Sensor.
1988-03-15
SQUID-based magnetometers. The motivation stems from a variety of applications, including the study of biomagnetic fields (Zimmerman and Radebaugh ... University Press, in press). J.E. Zimmerman and R. Radebaugh (1978). "Operation of a SQUID in a very low-power cryocooler," in: Applications of Closed-Cycle
Field-based Teacher Education for Greater Cultural Sensitivity.
ERIC Educational Resources Information Center
Cwick, Simin; Wooldridge, Deborah; Petch-Hogan, Beverly
2001-01-01
Southeast Missouri State University revised its teacher education program to include field-based experiences in each of its four blocks of courses. Student teachers are placed in rural and urban schools with pupils from various socioeconomic, cultural, racial, and disability groups. A survey of 225 cooperating teachers and student teachers…
Transformative Pathways: Field-based Teacher Educators' Perceptions.
ERIC Educational Resources Information Center
Goodfellow, Joy; Sumsion, Jennifer
2000-01-01
Investigated field-based teacher educators' perceptions of their contribution to preservice teachers' personal-professional development. Focus groups data indicated that respondents perceived wisdom, authenticity, and passion as particularly valuable in working with student teachers. Their work with student teachers constitutes a transformative…
Ethics in Field-Based Research: Contractual and Relational Responsibilities.
ERIC Educational Resources Information Center
Brickhouse, Nancy W.
The desire to abolish the gap between research theory and classroom practice has sparked an increasing interest in field-based research among science educators. Although most researchers are aware of the standard meanings of informed consent and confidentiality, and there are some codes of ethical principles published by such groups as the…
A Collaborative Field-Based Urban Teacher Education Program.
ERIC Educational Resources Information Center
Guyton, Edith; And Others
1993-01-01
Describes a 12-month, field-based, alternative teacher preparation program for individuals holding baccalaureate degrees in areas outside education who want master's degrees in early childhood education. The program involves collaboration between the State Department of Education, the Early Childhood Department of an urban university, and four…
Bioventing Field Initiative at Robins Air Force Base, Georgia
2007-11-02
This report describes the activities conducted at three sites at Robins Air Force Base (AFB), Georgia, as part of the Bioventing Field Initiative for...respiration test, and installation of a bioventing system. The specific objectives of this task are described in the following section. The test sites at the
Field-based Interns' Philosophical Perspectives on Teaching.
ERIC Educational Resources Information Center
Telese, James A.
Elementary and secondary preservice teachers' philosophical perspectives were examined before and after their participation in field-based activities in a professional development school. A philosophical perspective survey was administered at the start and again at the end of the semester. The five categories were existentialism, behaviorism,…
In-vivo performance comparison study of wide-field oxygenation imaging methods
NASA Astrophysics Data System (ADS)
Van de Giessen, Martijn; Angelo, Joseph; Vargas, Christina; Gioux, Sylvain
2015-03-01
Wide-field oxygen saturation (StO2) estimates can be clinically very advantageous. Particularly when implemented in a non-contact manner, applications such as intraoperative assessment of tissue perfusion are very promising. Nevertheless, wide-field optical oxygenation imaging has not yet successfully translated to the clinic. In this work we compare four proposed methods for wide-field imaging that are based on different photon propagation models and that depend on different sets of assumed parameters, such as absorption and reduced scattering coefficients. We investigated these four methods with particular attention to their sensitivity to errors in assumed parameters and calibration estimates. To this end we acquired an in vivo time series of a pig skin flap with a venous occlusion. StO2 estimates of all methods were compared to estimates from spatial frequency domain imaging (SFDI) of the same time series. Correct assumptions on scatter power and accurate calibration were found to be the most important prerequisites for accurate StO2 estimates. Although all models were able to measure relative changes in StO2 when the occlusion was applied and released, only the models that incorporated assumed reduced scattering coefficients estimated StO2 values within 5% of the expected values (estimated using SFDI). An important aspect of the compared methods is their ability to be used for real-time imaging. With the addition of real-time calibration and robust tissue scattering estimates, real-time wide-field imaging of oxygen saturation could provide important added value in the clinic.
Ginsberg, Gary; Toal, Brian; Simcox, Nancy; Bracker, Anne; Golembiewski, Brian; Kurland, Tara; Hedman, Curtis
2011-01-01
Questions have been raised regarding possible exposures when playing sports on synthetic turf fields cushioned with crumb rubber. Rubber is a complex mixture with some components possessing toxic and carcinogenic properties. Exposure is possible via inhalation, given that chemicals emitted from rubber might end up in the breathing zone of players and these players have high ventilation rates. Previous studies provide useful data but are limited with respect to the variety of fields and scenarios evaluated. The State of Connecticut investigated emissions associated with four outdoor and one indoor synthetic turf field under summer conditions. On-field and background locations were sampled using a variety of stationary and personal samplers. More than 20 chemicals of potential concern (COPC) were found to be above background and possibly field-related on both indoor and outdoor fields. These COPC were entered into separate risk assessments (1) for outdoor and indoor fields and (2) for children and adults. Exposure concentrations were prorated for time spent away from the fields and inhalation rates were adjusted for play activity and for children's greater ventilation than adults. Cancer and noncancer risk levels were at or below de minimis levels of concern. The scenario with the highest exposure was children playing on the indoor field. The acute hazard index (HI) for this scenario approached unity, suggesting a potential concern, although there was great uncertainty with this estimate. The main contributor was benzothiazole, a rubber-related semivolatile organic chemical (SVOC) that was 14-fold higher indoors than outdoors. Based upon these findings, outdoor and indoor synthetic turf fields are not associated with elevated adverse health risks. However, it would be prudent for building operators to provide adequate ventilation to prevent a buildup of rubber-related volatile organic chemicals (VOC) and SVOC at indoor fields. The current results are generally
Treecode-based generalized Born method
NASA Astrophysics Data System (ADS)
Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao
2011-02-01
We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW-surface-based Poisson solvation energy with an average relative error of less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins of different sizes (from 2815 to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way to perform implicit-solvent GB simulations of larger biomolecular systems at longer time scales.
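The charge-charge pair interaction that the treecode accelerates is the standard GB double sum with Still's effective distance; a direct O(N^2) version can be sketched as below. The Born radii are taken as given (in GBr6 they come from the R^6 pairwise descreening integral), and the coordinates, charges, radii, and dielectric constants are illustrative assumptions, in units where Coulomb's constant is 1.

```python
import numpy as np

# Direct-sum sketch of the generalized Born pairwise energy (Still's f_GB),
# the quantity a treecode evaluates in O(N log N) instead of O(N^2).
def gb_energy(xyz, q, R, eps_in=1.0, eps_out=80.0):
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    E = 0.0
    n = len(q)
    for i in range(n):
        for j in range(n):   # i == j gives the Born self-energy term q_i^2 / R_i
            r2 = np.sum((xyz[i] - xyz[j]) ** 2)
            # Still's effective distance, smooth between Coulomb and Born limits
            f = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4.0 * R[i] * R[j])))
            E += pref * q[i] * q[j] / f
    return E

xyz = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])  # toy ion pair, distances in A
q = np.array([1.0, -1.0])                           # charges in e
R = np.array([1.5, 1.5])                            # assumed effective Born radii
print(gb_energy(xyz, q, R))  # negative: favorable solvation of the ion pair
```

The treecode replaces the inner sum over distant particles with cluster-multipole approximations, which is what brings the cost down to near-linear scaling for the protein test set.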
A MULTICORE BASED PARALLEL IMAGE REGISTRATION METHOD
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2012-01-01
Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm was shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a non-regular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
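The partitioning idea can be sketched with a bare-bones Lloyd's K-means: 2-D landmark points are grouped into as many clusters as there are processing cores, so each core receives a spatially coherent subset. The synthetic landmark coordinates and the plain K-means variant below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Group landmarks into n_cores spatially coherent workloads via Lloyd's k-means.
def kmeans_partition(points, n_cores, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_cores, replace=False)]
    for _ in range(iters):
        # assign each landmark to its nearest cluster center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_cores):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return labels

rng = np.random.default_rng(3)
# four synthetic blobs of landmarks, 40 points each
landmarks = np.vstack([rng.normal(c, 0.3, (40, 2))
                       for c in ((0, 0), (5, 5), (0, 5), (5, 0))])
labels = kmeans_partition(landmarks, n_cores=4)
print(np.bincount(labels, minlength=4))  # per-core workload sizes
```

Clustering by spatial proximity, rather than a regular grid split, is what keeps each core's working set compact, which is the memory-and-transfer optimization the abstract refers to.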
Field methods for measuring hydraulic properties of peat deposits
NASA Astrophysics Data System (ADS)
Hogan, J. M.; van der Kamp, G.; Barbour, S. L.; Schmidt, R.
2006-11-01
New field techniques were developed and tested to evaluate peat storativity and hydraulic conductivity in a boreal fen. Enclosed drainage tests and pumping tests were successfully completed in the thawed peat above an impermeable frozen layer and then repeated when the peat was fully thawed. A loading test experiment constrained values of vertical hydraulic conductivity to within an order of magnitude for the peat below a depth of 2 m. An inherent advantage of these tests is that volumes of undisturbed peat on the scale of cubic metres may be characterized. Storativity of the fen peat as determined by enclosed drainage tests ranged from about 1.0 at the peat surface to 0.35 at a water table depth of 0.15 m. Laboratory drainage tests of peat cores gave similar but widely scattered results. Hydraulic conductivity near the surface was as high as 9.0 × 10^-3 m s^-1, determined with pumping tests, and in the range of 10^-6 to 10^-5 m s^-1 below a depth of 2 m, estimated with the loading test. Slug tests gave similar results. Pumping tests, enclosed storativity tests, and loading tests are practical large-scale field tests for determining peat properties.
Characterization of structural vibration: Field descriptors based on energy density and intensity
NASA Astrophysics Data System (ADS)
Linjama, Jukka
Measurement of energy flow in acoustical and vibrational fields is usually based on the detection of one linear field quantity (e.g. sound pressure) and its spatial gradient, two transducers being used for the measurement. This report first reviews the quantities which can be obtained from the measurement of acoustical intensity with a two-microphone probe: intensity and the energy densities. A set of 'field descriptors', relative quantities giving a measure of propagating (active) character of the waves in the sound field, is proposed. These energetic quantities are based entirely on the transversal velocity measured and the gradient of that velocity, and are available when the two-transducer method of bending wave intensity is used. Examples of the energy densities and field descriptors measured in an aluminum plate are presented, and proposals for further work are given.
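The two-transducer principle described above can be sketched most simply in its acoustic form: pressure at the probe midpoint from the average of the two signals, particle velocity from the finite-difference gradient via Euler's equation, and active intensity from their product. A single-frequency plane wave is used so the estimate can be checked against the exact value |A|^2/(2*rho*c); all numerical values are illustrative assumptions.

```python
import numpy as np

# Two-transducer estimate of active acoustic intensity at one frequency.
rho, c, f = 1.21, 343.0, 500.0           # air density, sound speed, frequency
omega, k = 2 * np.pi * f, 2 * np.pi * f / c
dx = 0.012                               # transducer spacing, m
A = 1.0                                  # plane-wave pressure amplitude, Pa

# complex pressure amplitudes of a rightward plane wave at the two positions
p1, p2 = A * np.exp(-1j * k * 0.0), A * np.exp(-1j * k * dx)

p_mid = 0.5 * (p1 + p2)                        # midpoint pressure
u_mid = -(p2 - p1) / (1j * omega * rho * dx)   # Euler equation, finite difference
I_active = 0.5 * np.real(p_mid * np.conj(u_mid))

I_exact = A ** 2 / (2 * rho * c)
print(abs(I_active - I_exact) / I_exact < 0.01)  # small finite-difference bias
```

The known finite-difference bias is sin(k*dx)/(k*dx), about 0.2% here; the structural (bending-wave) version in the report uses two transversal velocity signals in the same way, and the "field descriptors" are ratios built from the resulting intensity and energy densities.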
Unidirectional coating technology for organic field-effect transistors: materials and methods
NASA Astrophysics Data System (ADS)
Sun, Huabin; Wang, Qijing; Qian, Jun; Yin, Yao; Shi, Yi; Li, Yun
2015-05-01
Solution-processed organic field-effect transistors (OFETs) are essential for developing organic electronics. The encouraging development in solution-processed OFETs has attracted research interest because of their potential in low-cost devices with performance comparable to polycrystalline-silicon-based transistors. In recent years, unidirectional coating technology, featuring thin-film coating along only one direction and involving specific materials as well as solution-assisted fabrication methods, has attracted intensive interest. Transistors with organic semiconductor layers, which are deposited via unidirectional coating methods, have achieved high performance. In particular, carrier mobility has been greatly enhanced to values much higher than 10 cm^2 V^-1 s^-1. Such significant improvement is mainly attributed to better control in morphology and molecular packing arrangement of organic thin film. In this review, typical materials that are being used in OFETs are discussed, and demonstrations of unidirectional coating methods are surveyed.
Field Analysis of Microbial Contamination Using Three Molecular Methods in Parallel
NASA Technical Reports Server (NTRS)
Morris, H.; Stimpson, E.; Schenk, A.; Kish, A.; Damon, M.; Monaco, L.; Wainwright, N.; Steele, A.
2010-01-01
Advanced technologies with the capability of detecting microbial contamination remain an integral tool for the next stage of exploration missions proposed by space agencies. To maintain a clean, operational spacecraft environment with minimal potential for forward contamination, such technology is a necessity, particularly the ability to analyze samples near the point of collection and in real time, both for conducting biological scientific experiments and for performing routine monitoring operations. Multiple molecular methods for detecting microbial contamination are available, but many are either too large or not validated for use on spacecraft. Two methods, the adenosine triphosphate (ATP) and Limulus Amebocyte Lysate (LAL) assays, have been approved by the NASA Planetary Protection Office for the assessment of microbial contamination on spacecraft surfaces. We present the first parallel field analysis of microbial contamination pre- and post-cleaning using these two methods as well as universal primer-based polymerase chain reaction (PCR).
Phase-stepping method for whole-field photoelastic stress analysis using plane polariscope setup
NASA Astrophysics Data System (ADS)
Zhang, Xusheng; Chen, Lingfeng; He, Chuan
2010-10-01
A new six-step phase-shifting method is presented in this paper to determine the phase retardation for whole-field photoelastic stress analysis of optical glass based on the plane polariscope setup. This new phase-stepping strategy is free of quarter-wave plate errors and involves smaller intensity variations of the emerging light. With this method, it is not necessary to determine the isoclinic angles in advance when measuring the phase retardations, so the data processing is simplified and isoclinic angle errors have no influence on the measurement. A plane polariscope was set up, comprising an LED array light source, a rotatable dichroic polymer film polarizer and analyser, and a digital CCD camera with an image grabbing system. Two mica wave plates with known phase retardances were measured, and the experimental results agree well with the known values. This method is expected to be used for testing stress-induced birefringence in optical glass.
Measuring slope to improve energy expenditure estimates during field-based activities.
Duncan, Glen E; Lester, Jonathan; Migotsky, Sean; Higgins, Lisa; Borriello, Gaetano
2013-03-01
This technical note describes methods to improve activity energy expenditure estimates by using a multi-sensor board (MSB) to measure slope. Ten adults walked over a 4-km (2.5-mile) course wearing an MSB and mobile calorimeter. Energy expenditure was estimated using accelerometry alone (base) and 4 methods to measure slope. The barometer and global positioning system methods improved accuracy by 11% from the base (p < 0.05) to 86% overall. Measuring slope using the MSB improves energy expenditure estimates during field-based activities.
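As a rough illustration of the barometric slope method (not the authors' MSB algorithm; the standard-atmosphere constants and the segment values below are assumptions), slope can be estimated by converting pressure readings to altitude and dividing the elevation change by the horizontal distance travelled:

```python
def pressure_to_altitude(p_pa, p0_pa=101325.0):
    """Approximate altitude (m) from barometric pressure using the
    ISA barometric formula (assumes a standard atmosphere)."""
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** (1.0 / 5.255))

def slope_percent(p_start_pa, p_end_pa, horizontal_dist_m, p0_pa=101325.0):
    """Slope (%) over a segment: elevation change / horizontal distance."""
    dh = pressure_to_altitude(p_end_pa, p0_pa) - pressure_to_altitude(p_start_pa, p0_pa)
    return 100.0 * dh / horizontal_dist_m

# Illustrative segment: a ~12 Pa pressure drop over 100 m of travel
# corresponds to roughly 1 m of climb, i.e. about a 1% grade.
grade = slope_percent(101325.0, 101313.0, 100.0)
```

In practice the barometric signal would be filtered and referenced to a local baseline pressure rather than sea level, but the grade calculation itself is this simple ratio.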
Dynamics of field-aligned currents reconstructed by the ground-based and satellite data
NASA Astrophysics Data System (ADS)
Nikolaeva, V. D.; Kotikov, A. L.; Sergienko, T. I.
2014-09-01
Parameters of field-aligned currents reconstructed from ground-based measurements of the magnetic field in Scandinavia (the IMAGE magnetometer network) and from ionospheric conductivity are presented here for specific events on 6 and 8 December 2004. Ionospheric conductivity was calculated from the precipitating electron flux measured by the DMSP-13 satellite and from direct electron density measurements by the EISCAT incoherent scatter radar. For 6 December 2004, in the presence of a developed ionospheric current system, there is a high correlation between the field-aligned currents calculated from the DMSP-13 satellite data and those calculated from the radar measurements. The comparison of the field-aligned currents reconstructed by the proposed method with the currents calculated from the magnetic field variations on the DMSP satellites confirms the correctness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Lensky, Vadim; Birse, Michael C.; Walet, Niels R.
2016-09-01
We construct a coordinate-space potential based on pionless effective field theory (EFT) with a Gaussian regulator. Charge-symmetry breaking is included through the Coulomb potential and through two- and three-body contact interactions. Starting with the effective field theory potential, we apply the stochastic variational method to determine the ground states of nuclei with mass number A ≤4 . At next-to-next-to-leading order, two out of three independent three-body parameters can be fitted to the three-body binding energies. To fix the remaining one, we look for a simultaneous description of the binding energy of 4He and the charge radii of 3He and 4He. We show that at the order considered we can find an acceptable solution, within the uncertainty of the expansion. We find that the EFT expansion shows good agreement with empirical data within the estimated uncertainty, even for a system as dense as 4He.
Stewart, W.A.
1990-05-01
This thesis describes the development of a design tool for the poloidal field magnet system of a tokamak. Specifically, an existing program for determining the poloidal field coil currents has been modified to: support the general case of asymmetric equilibria and coil sets, determine the coil currents subject to constraints on the maximum values of those currents, and determine the coil currents subject to limits on the forces those coils may carry. The equations representing the current limits and coil force limits are derived and an algorithm based on Newton's method is developed to determine a set of coil currents which satisfies those limits. The resulting program allows the designer to quickly determine whether or not a given coil set is capable of supporting a given equilibrium. 25 refs.
Volumetric calculations in an oil field: The basis method
Olea, R.A.; Pawlowsky, V.; Davis, J.C.
1993-01-01
The basis method for estimating oil reserves in place is compared to a traditional procedure that uses ordinary kriging. In the basis method, auxiliary variables that sum to the net thickness of pay are estimated by cokriging. In theory, the procedure should be more powerful because it makes full use of the cross-correlation between variables and forces the original variables to honor interval constraints. However, at least in our case study, the practical advantages of cokriging for estimating oil in place are marginal. © 1993.
Theory of Carbon Nanotube (CNT)-Based Electron Field Emitters
Bocharov, Grigory S.; Eletskii, Alexander V.
2013-01-01
Theoretical problems arising in connection with the development and operation of electron field emitters based on carbon nanotubes (CNTs) are reviewed. The physical aspects of electron field emission that underlie the unique emission properties of CNTs are considered, and the physical effects and phenomena affecting the emission characteristics of CNT cathodes are analyzed. Effects given particular attention include: electric field amplification near a CNT tip, taking into account the shape of the tip; deviation of nanotubes from vertical orientation and their electric-field-induced alignment; electric field screening by neighboring nanotubes; the statistical spread of the parameters of the individual CNTs comprising the cathode; and the thermal effects resulting in degradation of nanotubes during emission. Simultaneous consideration of these effects permitted the development of an optimization procedure for a CNT array in terms of the maximum reachable emission current density. According to this procedure, the optimum inter-tube distance in the array depends on the range of the applied external voltage. The phenomenon of self-misalignment of nanotubes in an array has been predicted and analyzed in light of recent experiments. A degradation mechanism of CNT-based electron field emitters, consisting of bombardment of the emitters by ions formed through electron-impact ionization of residual gas molecules, has also been analyzed.
Chapter 11. Community analysis-based methods
Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.
2010-05-01
Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.
FIELD MEASUREMENT OF DISSOLVED OXYGEN: A COMPARISON OF METHODS
The ability to confidently measure the concentration of dissolved oxygen (D.O.) in ground water is a key aspect of remedial selection and assessment. Presented here is a comparison of the commonly practiced methods for determining D.O. concentrations in ground water, including c...
Field Evaluation of Personal Sampling Methods for Multiple Bioaerosols
Wang, Chi-Hsun; Chen, Bean T.; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine
2015-01-01
Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols. PMID:25799419
Magnetic field sensor using a polymer-based vibrator
NASA Astrophysics Data System (ADS)
Wu, Jiang; Hasebe, Kazuhiko; Mizuno, Yosuke; Tabaru, Marie; Nakamura, Kentaro
2016-09-01
In this technical note, a polymer-based magnetic sensor with high resolution was devised for sensing high magnetic fields. It consists of a bimorph (vibrator) made of poly(phenylene sulfide) (PPS) and a phosphor-bronze foil glued to the free end of the bimorph. According to Faraday's law of induction, when a magnetic field is applied in the direction perpendicular to the bimorph, the foil cuts the magnetic flux and generates an alternating voltage across the leads at the natural frequency of the bimorph. Because PPS has low mechanical loss, a low elastic modulus, and low density, a high vibration velocity can be achieved when it is employed as the elastomer of the bimorph. The devised sensor was tested in the magnetic field range of 0.1-570 mT and exhibited a minimum detectable magnetic field of 0.1 mT. At a zero-to-peak driving voltage of 60 V, the sensitivity of the PPS-based magnetic sensor reached 10.5 V T-1, which was 1.36 times that of an aluminum-based magnetic sensor with the same principle and dimensions.
Distributed optical fiber dynamic magnetic field sensor based on magnetostriction.
Masoudi, Ali; Newson, Trevor P
2014-05-01
A distributed optical fiber sensor is introduced which is capable of quantifying multiple magnetic fields along a 1 km sensing fiber with a spatial resolution of 1 m. The operation of the proposed sensor is based on measuring the magnetostriction-induced strain of a nickel wire attached to an optical fiber. The strain coupled into the optical fiber was detected by measuring the strain-induced phase variation between the backscattered Rayleigh light from two segments of the sensing fiber. A magnetic field intensity resolution of 0.3 G over a bandwidth of 50-5000 Hz was demonstrated.
A method for real time detecting of non-uniform magnetic field
NASA Astrophysics Data System (ADS)
Marusenkov, Andriy
2015-04-01
The principle of measuring magnetic signatures for observing diverse objects is widely used in near-surface work (unexploded ordnance (UXO); engineering and environmental surveys; archaeology) as well as in security and vehicle detection systems. As a rule, the magnitude of the signals to be measured is much lower than that of the quasi-uniform Earth magnetic field. Magnetometers for these purposes usually contain two or more spatially separated sensors to estimate the full tensor gradient of the magnetic field or, more frequently, only some of the gradient components. Both scalar and vector magnetic sensors can be used. For vector sensors, identical scale factors and proper alignment of the sensitivity axes are very important for deep suppression of the ambient field and detection of weak target signals. As a rule, a periodic calibration procedure is used to keep the sensors' parameters matched as closely as possible. In the present report we propose a technique for detecting magnetic anomalies which is almost insensitive to imperfect matching of the sensors. This method is based on the idea that the difference signal between the two sensors behaves quite differently when the instrument is rotated or moved in uniform versus non-uniform fields. Due to the mismatch of calibration parameters, the difference signal observed during rotation in a uniform field is similar to the total signal, i.e., the sum of the signals of both sensors. Zero change of both the difference and total signals is expected if the instrument moves in a uniform field along a straight line. In contrast, the same movement in a non-uniform field produces a response in each sensor. If one measures dB/dx and moves along the x direction, the sensor signals are shifted in time with a lag equal to the distance between the sensors divided by the speed of movement. This means that the difference signal looks like the derivative of the total signal during movement in a non-uniform field. So, using quite simple ...
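The lag argument above can be checked numerically. The sketch below (assumed Gaussian anomaly, with sensor spacing and speed chosen purely for illustration) shows that the two-sensor difference signal in a non-uniform field tracks the time derivative of the total signal, scaled by d/(2v):

```python
import numpy as np

# Assumed setup: two sensors separated by d metres move along x at speed v,
# sampling a localized field anomaly B(x) (Gaussian bump, illustrative only).
d, v, fs = 0.5, 1.0, 100.0                  # spacing (m), speed (m/s), sample rate (Hz)
t = np.arange(0.0, 20.0, 1.0 / fs)

def B(x):
    """Non-uniform field profile: unit Gaussian anomaly centred at x = 10 m."""
    return np.exp(-((x - 10.0) ** 2) / 2.0)

x_lead = v * t                              # leading sensor position
s1, s2 = B(x_lead), B(x_lead - d)           # the two sensor readings (lagged by d/v)
diff = s1 - s2                              # difference ("gradient") signal
total = s1 + s2                             # total signal

# In a non-uniform field, s1 - s2 ~ d * dB/dx, while d(total)/dt ~ 2v * dB/dx,
# so the difference signal is ~ (d / 2v) times the derivative of the total signal.
deriv = np.gradient(total, t) * d / (2.0 * v)
corr = np.corrcoef(diff, deriv)[0, 1]       # close to 1 for this geometry
```

This is only a one-dimensional illustration of the report's observation, not the detection algorithm itself.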
HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter
Zhu, Ganzheng; Li, Siqi; Gong, Shang; Yang, Benqiang; Zhang, Libo
2016-01-01
Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and guided image filter (GIF). Firstly, a preprocessing including stain normalization and wavelet denoising is performed for Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light for high-frequency part in image and correct the intensity inhomogeneity and detail discontinuity of image. Next, HDR pathological image is generated based on least square method using low dynamic range (LDR) image, H and E channel images. Finally, the fine enhanced image is acquired after the detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of our proposed method as compared with related work. PMID:28116303
Thors, Björn; Thielens, Arno; Fridén, Jonas; Colombi, Davide; Törnevik, Christer; Vermeeren, Günter; Martens, Luc; Joseph, Wout
2014-05-01
In this paper, different methods for practical numerical radio frequency exposure compliance assessments of radio base station products were investigated. Both multi-band base station antennas and antennas designed for multiple input multiple output (MIMO) transmission schemes were considered. For the multi-band case, various standardized assessment methods were evaluated in terms of resulting compliance distance with respect to the reference levels and basic restrictions of the International Commission on Non-Ionizing Radiation Protection. Both single frequency and multiple frequency (cumulative) compliance distances were determined using numerical simulations for a mobile communication base station antenna transmitting in four frequency bands between 800 and 2600 MHz. The assessments were conducted in terms of root-mean-squared electromagnetic fields, whole-body averaged specific absorption rate (SAR) and peak 10 g averaged SAR. In general, assessments based on peak field strengths were found to be less computationally intensive, but lead to larger compliance distances than spatial averaging of electromagnetic fields used in combination with localized SAR assessments. For adult exposure, the results indicated that even shorter compliance distances were obtained by using assessments based on localized and whole-body SAR. Numerical simulations, using base station products employing MIMO transmission schemes, were performed as well and were in agreement with reference measurements. The applicability of various field combination methods for correlated exposure was investigated, and best estimate methods were proposed. Our results showed that field combining methods generally considered as conservative could be used to efficiently assess compliance boundary dimensions of single- and dual-polarized multicolumn base station antennas with only minor increases in compliance distances.
Alignment method for fabricating a parallel flat-field grating used in soft x-ray region.
Wang, Qingbo; Liu, Zhengkun; Zheng, Yanchang; Chen, Huoyao; Wang, Yu; Liu, Ying; Hong, Yilin
2015-06-20
Parallel flat-field gratings consist of two flat-field gratings lying on one substrate, one for 5-20 nm and the other for 2-5 nm spectral regions, and thus can be widely used in various fields to record broader spectra in the soft x-ray region. The alignment of two subgratings directly determines the resolving power of parallel flat-field gratings. The theoretical resolving power is evaluated by means of the ray-tracing method and the maximal allowable alignment error is 0.366°. Alignment is based on diffraction patterns and moiré fringes and the total alignment error in our experiment is within 0.234°. The results demonstrate that this alignment method is an effective way for fabricating parallel flat-field gratings.
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line and planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
Field Testing of Compartmentalization Methods for Multifamily Construction
Ueno, K.; Lstiburek, J.
2015-03-01
The 2012 IECC has an airtightness requirement of 3 air changes per hour at 50 Pascals test pressure for both single-family and multifamily construction in Climate Zones 3-8. Other programs (LEED, ASHRAE 189, ASHRAE 62.2) have similar or tighter compartmentalization requirements, driving the need for easier and more effective methods of compartmentalization in multifamily buildings. Builders and practitioners have found that fire-resistance rated wall assemblies are a major source of difficulty in air sealing/compartmentalization, particularly in townhouse construction. This problem is exacerbated when garages are “tucked in” to the units and living space is located over the garages. In this project, Building Science Corporation examined the taping of exterior sheathing details to improve air sealing results in townhouse and multifamily construction, when coupled with a better understanding of air leakage pathways. Current approaches are cumbersome, expensive, time consuming, and ineffective; these details were proposed as a more effective and efficient method. The effectiveness of these air sealing methods was tested with blower door testing, including “nulled” or “guarded” testing (adjacent units run at equal test pressure to null out inter-unit air leakage, or “pressure neutralization”). Pressure diagnostics were used to evaluate unit-to-unit connections and series leakage pathways (i.e., air leakage from exterior, into the fire-resistance rated wall assembly, and to the interior).
Method for extruding pitch based foam
Klett, James W.
2002-01-01
A method and apparatus for extruding pitch-based foam is disclosed. The method includes the steps of: forming a viscous pitch foam; passing the precursor through an extrusion tube; and subjecting the precursor in the extrusion tube to a temperature gradient which varies along the length of the tube to form an extruded carbon foam. The apparatus includes an extrusion tube having a passageway communicatively connected to a chamber in which the viscous pitch foam is formed, the foam passing through the extrusion tube, and a heating mechanism in thermal communication with the tube for heating the viscous pitch foam along the length of the tube in accordance with a predetermined temperature gradient.
Dreamlet-based interpolation using POCS method
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Wu, Ru-Shan; Geng, Yu; Chen, Xiaohong
2014-10-01
Due to incomplete and non-uniform coverage of the acquisition system and dead traces, real seismic data always has some missing traces which affect the performance of a multi-channel algorithm, such as Surface-Related Multiple Elimination (SRME), imaging and inversion. Therefore, it is necessary to interpolate seismic data. Dreamlet transform has been successfully used in the modeling of seismic wave propagation and imaging, and this paper explains the application of dreamlet transform to seismic data interpolation. In order to avoid spatial aliasing in transform domain thus getting arbitrary under-sampling rate, improved Jittered under-sampling strategy is proposed to better control the dataset. With L0 constraint and Projection Onto Convex Sets (POCS) method, performances of dreamlet-based and curvelet-based interpolation are compared in terms of recovered signal to noise ratio (SNR) and convergence rate. Tests on synthetic and real cases demonstrate that dreamlet transform has superior performance to curvelet transform.
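The POCS iteration itself is transform-agnostic; below is a minimal sketch using the 2-D FFT as a stand-in for the dreamlet transform of the paper (the shrinking threshold schedule and all parameters are illustrative assumptions):

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=100):
    """POCS interpolation sketch: alternate between hard thresholding in a
    sparsifying transform domain (2-D FFT here, standing in for the dreamlet
    transform) and re-insertion of the observed traces.
    `mask` is 1.0 where traces were recorded and 0.0 where they are missing."""
    observed = data * mask
    tmax = np.abs(np.fft.fft2(observed)).max()
    x = observed.copy()
    for k in range(n_iter):
        F = np.fft.fft2(x)
        thr = tmax * (1.0 - (k + 1) / n_iter) ** 2  # threshold shrinks to zero
        F[np.abs(F) < thr] = 0.0                    # keep only strong coefficients
        x = np.real(np.fft.ifft2(F))
        x = observed + x * (1.0 - mask)             # project back onto the data
    return x
```

For a signal that is sparse in the transform domain (e.g. a few plane-wave events) with randomly missing traces, this iteration recovers the gaps nearly exactly; the dreamlet or curvelet versions differ only in the forward/inverse transform pair used.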
Active Graphene-Based Terahertz Dual-Band Modulator Implemented in the Presence of External Fields
NASA Astrophysics Data System (ADS)
Hu, Xiang; Huang, Qiuping; Zhao, Yi; Cai, Honglei; Lu, Yalin
2017-01-01
In this work, we numerically demonstrate a dynamic graphene-based dual-band metamaterial modulator (gDMM) in the presence of an external magnetic field and gate electric field. With the objective of modulating terahertz waves at two separate channels, we utilize the proposed dual-field control method to dynamically modulate the optical conductivity of graphene, and thus the working frequencies of the gDMM. An interpretation for such dependence on the external fields is presented based on a quantum understanding of the energy structure of graphene, and a numerical method based on the finite element method (FEM) is employed to investigate the optical responses of our proposed gDMM. Our results show that, by varying the strength of external fields, one can switch the operation status of the two working channels located at 3.18 THz and 9.04 THz, with modulation depths exceeding 84.4%. Only 30 meV of energy is required for shifting the Fermi level to accomplish the switch, which is extremely low compared with methods in previous works using gate electric control alone. Simultaneous ON/OFF statuses are also realized. Such great tunability and controllability of our proposed gDMM over a wide frequency range may give rise to a new class of dynamic devices for terahertz and microwave applications.
Xu, Xiaojie; Liu, Ming; Zhang, Zhanbin; Jia, Yueling
2014-01-01
Remote field eddy current is an effective non-destructive testing method for ferromagnetic tubular structures. In view of conventional sensors' disadvantages such as low signal-to-noise ratio and poor sensitivity to axial cracks, a novel high sensitivity sensor based on orthogonal magnetic field excitation is proposed. Firstly, through a three-dimensional finite element simulation, the remote field effect under orthogonal magnetic field excitation is determined, and an appropriate configuration which can generate an orthogonal magnetic field for a tubular structure is developed. Secondly, optimized selection of key parameters such as frequency, exciting currents and shielding modes is analyzed in detail, and different types of pick-up coils, including a new self-differential mode pick-up coil, are designed and analyzed. Lastly, the proposed sensor is verified experimentally by various types of defects manufactured on a section of a ferromagnetic tube. Experimental results show that the proposed novel sensor can largely improve the sensitivity of defect detection, especially for axial crack whose depth is less than 40% wall thickness, which are very difficult to detect and identify by conventional sensors. Another noteworthy advantage of the proposed sensor is that it has almost equal sensitivity to various types of defects, when a self-differential mode pick-up coil is adopted. PMID:25615738
A simple field method to identify foot strike pattern during running.
Giandolini, Marlène; Poupard, Thibaut; Gimenez, Philippe; Horvais, Nicolas; Millet, Guillaume Y; Morin, Jean-Benoît; Samozino, Pierre
2014-05-07
Identifying foot strike patterns in running is an important issue for sport clinicians, coaches and the footwear industry. Current methods allow the monitoring of either many steps in laboratory conditions or only a few steps in the field. Because measuring running biomechanics during actual practice is critical, our purpose is to validate a method aiming at identifying foot strike patterns during continuous field measurements. Based on heel and metatarsal accelerations, this method requires two uniaxial accelerometers. The time between heel and metatarsal acceleration peaks (THM) was compared to the foot strike angle in the sagittal plane (αfoot) obtained by 2D video analysis for various conditions of speed, slope, footwear, foot strike and state of fatigue. Acceleration and kinematic measurements were performed at 1000Hz and 120Hz, respectively, during 2-min treadmill running bouts. Significant correlations were observed between THM and αfoot for 14 out of 15 conditions. The overall correlation coefficient was r=0.916 (P<0.0001, n=288). The THM method is thus highly reliable for a wide range of speeds and slopes, for different footwear and states of fatigue, and for all types of foot strike except extreme forefoot strike, during which the heel rarely or never strikes the ground. We proposed a classification based on THM: FFS<-5.49ms
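A minimal sketch of the THM computation, assuming each trace contains a single step whose dominant acceleration peak is the impact (the synthetic traces and the sign-convention reading are illustrative, not the authors' processing pipeline):

```python
import numpy as np

def t_hm(heel_acc, meta_acc, fs=1000.0):
    """Time (ms) between heel and metatarsal acceleration peaks for one step.
    Positive when the heel peak precedes the metatarsal peak (rearfoot-like),
    negative when the metatarsal peak comes first (forefoot-like)."""
    t_heel = np.argmax(heel_acc) / fs
    t_meta = np.argmax(meta_acc) / fs
    return 1000.0 * (t_meta - t_heel)

# Synthetic single-step traces sampled at 1000 Hz:
# heel impact peak at 50 ms, metatarsal peak at 65 ms.
fs = 1000.0
t = np.arange(0.0, 0.2, 1.0 / fs)
heel = np.exp(-((t - 0.050) ** 2) / (2 * 0.004 ** 2))
meta = np.exp(-((t - 0.065) ** 2) / (2 * 0.004 ** 2))
thm = t_hm(heel, meta, fs)   # 15 ms here, i.e. a rearfoot-type pattern
```

Under the paper's convention, strongly negative THM values (the truncated cutoff above) indicate forefoot strike; real traces would of course need per-step segmentation and peak detection robust to secondary oscillations.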
Variational methods in supersymmetric lattice field theory: The vacuum sector
Duncan, A.; Meyer-Ortmanns, H.; Roskies, R.
1987-12-15
The application of variational methods to the computation of the spectrum in supersymmetric lattice theories is considered, with special attention to O(N) supersymmetric sigma models. Substantial cancellations are found between bosonic and fermionic contributions even in approximate Ansätze for the vacuum wave function. The nonlinear limit of the linear sigma model is studied in detail, and it is shown how to construct an appropriate non-Gaussian vacuum wave function for the nonlinear model. The vacuum energy is shown to be of order unity in lattice units in the latter case, after infinite cancellations.
Field Testing of Compartmentalization Methods for Multifamily Construction
Ueno, K.; Lstiburek, J. W.
2015-03-01
The 2012 International Energy Conservation Code (IECC) has an airtightness requirement of 3 air changes per hour at 50 Pascals test pressure (3 ACH50) for single-family and multifamily construction (in climate zones 3–8). The Leadership in Energy & Environmental Design certification program and ASHRAE Standard 189 have comparable compartmentalization requirements. ASHRAE Standard 62.2 will soon be responsible for all multifamily ventilation requirements (low rise and high rise); it has an exceptionally stringent compartmentalization requirement. These code and program requirements are driving the need for easier and more effective methods of compartmentalization in multifamily buildings.
NMR system and method having a permanent magnet providing a rotating magnetic field
Schlueter, Ross D [Berkeley, CA; Budinger, Thomas F [Berkeley, CA
2009-05-19
Disclosed herein are systems and methods for generating a rotating magnetic field. The rotating magnetic field can be used to obtain rotating-field NMR spectra, such as magic angle spinning spectra, without having to physically rotate the sample. This result allows magic angle spinning NMR to be conducted on biological samples such as live animals, including humans.
[Field attraction effects of different trapping methods on Monochamus alternatus].
Wang, Sibao; Liu, Yunpeng; Fan, Meizhen; Miao, Xuexia; Zhao, Xieqiu; Li, Zengzhi; Si, Shengli; Huang, Yongping
2005-03-01
A comparative field study of different attractants, traps, lures, and controlled-release dosages for Monochamus alternatus showed that all four test attractants had some trapping ability. MA2K05 was the strongest, with a mean capture of 26.3 individuals per trap, and was also attractive to other species of Coleoptera and Hemiptera; MA2K13 ranked second, with 21.3 individuals per trap; MA2K11 was the weakest, with 13.8 individuals per trap. Among the three lures tested, lure C (a 60 ml plastic cup with two 5 cm round holes in the cover) and lure B (a 20 ml controlled-release plastic bottle) had comparatively stronger effects, capturing 34.25 and 20.3 individuals per trap, respectively, while lure A (a 20 ml controlled-release plastic bottle with a smaller release rate than lure B) was the weakest, with 14.7 individuals per trap. Because lure C held 1.5 times the attractant volume of lures B and A and had to be refilled every 3-5 d, whereas lures B and A lasted more than a month on a single filling, lure B was judged the best overall. Of the traps tested, the Xuanzhou trap was superior to the imitation Japanese trap, with trapping efficiencies of 36.4 versus 9.7 individuals per trap. Increasing the attractant dosage from 20 ml to 80 ml did not significantly enhance attractiveness, but increasing it to 120 ml did.
NASA Astrophysics Data System (ADS)
Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won
2016-02-01
A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the beam-quality correction factors arising from the difference in size between the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution, in which we modeled the variable, inhomogeneous energy-fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving the resulting system of equations together with the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors $k_{Q_{\mathrm{msr}},Q}^{f_{\mathrm{smf}},f_{\mathrm{ref}}}$ were comparable to the values derived from the MC computations. For example, for a 6 MV photon beam and a field size of 1 × 1 cm², $k_{Q_{\mathrm{msr}},Q}^{f_{\mathrm{smf}},f_{\mathrm{ref}}}$ was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber, while the corresponding MC values were 1.121 and 1.031; the difference between the proposed method and the MC computation is less than 2%. We also determined $k_{Q_{\mathrm{msr}},Q}^{f_{\mathrm{smf}},f_{\mathrm{ref}}}$ for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers. In short, we devised a method for determining $k_{Q_{\mathrm{msr}},Q}^{f_{\mathrm{smf}},f_{\mathrm{ref}}}$ from both measured output factors and model-based mathematical computation. The proposed method can be useful when MC simulation is not applicable in a clinical setting.
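For context, the correction factor $k_{Q_{\mathrm{msr}},Q}^{f_{\mathrm{smf}},f_{\mathrm{ref}}}$ enters the generic small-field output-factor formalism (e.g. IAEA TRS-483; this is the standard framework, not the paper's specific derivation) roughly as:

```latex
\Omega
  \;=\; \frac{D_{w,Q_{\mathrm{smf}}}^{f_{\mathrm{smf}}}}{D_{w,Q_{\mathrm{ref}}}^{f_{\mathrm{ref}}}}
  \;=\; \frac{M_{Q_{\mathrm{smf}}}^{f_{\mathrm{smf}}}}{M_{Q_{\mathrm{ref}}}^{f_{\mathrm{ref}}}}
        \, k_{Q_{\mathrm{msr}},Q}^{f_{\mathrm{smf}},f_{\mathrm{ref}}}
```

i.e., the measured ratio of detector readings $M$ in the small and reference fields is converted into a ratio of absorbed doses $D_w$ by the chamber- and field-size-dependent factor $k$.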
Isarangkool Na Ayutthaya, S; Do, F C; Pannengpetch, K; Junjittakarn, J; Maeght, J-L; Rocheteau, A; Cochard, H
2010-01-01
The transient thermal dissipation (TTD) method developed by Do and Rocheteau (2002b) is a close evolution of the original constant thermal dissipation (CTD) method of Granier (1985). The TTD method has the advantage of limiting the influence of passive natural temperature gradients and of yielding more stable zero-flux references at night. By analogy with the CTD method, the transient method was first calibrated on synthetic porous material (sawdust) on the assumption that the relationship was independent of the woody species. Here, our concern was to test the latter hypothesis with a 10-min heating time in three tropical species: Hevea brasiliensis Müll. Arg., Mangifera indica L. and Citrus maxima Merr. A complementary objective was to compare the field estimates of daily transpiration for mature rubber trees with estimates based on a simplified soil water balance in the dry season. The calibration experiments were carried out in the laboratory on cut stems using an HPFM device and gravimetric control of water flow up to 5 L dm⁻² h⁻¹. Nineteen response curves were assessed on fully conductive xylem, combining 11 cut stems and two probes. The field evaluation comprised five periods from November 2007 to February 2008. Estimates of daily transpiration from the measurement of sap flow were based on the 41 sensors set up on 11 trees. Soil water depletion was monitored by neutron probe and 12 access tubes to a depth of 1.8 m. The calibrations confirmed that the response of the transient thermal index to flow density was independent of the woody species that were tested. The best fit was a simple linear response (R² = 0.88, n = 276, P < 0.0001). The previous calibration performed by Do and Rocheteau (2002b) on sawdust fell within the variability of the multi-species calibration; however, there were substantial differences with the average curve at extreme flow rates. Field comparison with soil water depletion in the dry season validated to a reasonable extent
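The multi-species calibration above reduces to an ordinary least-squares line through (thermal index, flow density) pairs; a self-contained sketch with purely illustrative numbers (the paper's raw calibration data are not reproduced here):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b, returning (a, b, R^2)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical calibration points: transient thermal index vs.
# flow density (L dm^-2 h^-1) -- illustrative values only.
index = [0.11, 0.20, 0.41, 0.59, 0.82, 0.99]
flow = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]
slope, intercept, r2 = linear_fit(index, flow)
```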
Generalized spectral method for near-field optical microscopy
Jiang, B.-Y.; Zhang, L. M.; Basov, D. N.; Fogler, M. M.; Castro Neto, A. H.
2016-02-07
Electromagnetic interaction between a sub-wavelength particle (the “probe”) and a material surface (the “sample”) is studied theoretically. The interaction is shown to be governed by a series of resonances corresponding to surface polariton modes localized near the probe. The resonance parameters depend on the dielectric function and geometry of the probe as well as on the surface reflectivity of the material. Calculation of such resonances is carried out for several types of axisymmetric probes: spherical, spheroidal, and pear-shaped. For spheroids, an efficient numerical method is developed, capable of handling cases of large or strongly momentum-dependent surface reflectivity. Application of the method to highly resonant materials, such as aluminum oxide (by itself or covered with graphene), reveals a rich structure of multi-peak spectra and nonmonotonic approach curves, i.e., the probe-sample distance dependence. These features also strongly depend on the probe shape and optical constants of the model. For less resonant materials such as silicon oxide, the dependence is weak, so that the spheroidal model is reliable. The calculations are done within the quasistatic approximation with radiative damping included perturbatively.
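As a baseline for the probe-sample resonances discussed above, the widely used quasistatic point-dipole model treats the probe as a polarizable sphere coupled to its electrostatic image in the sample (a much simpler stand-in for the paper's generalized spectral method; parameter names are illustrative):

```python
def point_dipole_alpha_eff(eps_probe, eps_sample, radius, gap):
    """Effective polarizability of a small sphere (dielectric
    constant eps_probe, given radius) at height `gap` above a
    half-space with dielectric constant eps_sample, in the
    quasistatic point-dipole approximation."""
    from math import pi
    # Bare polarizability of the sphere (Clausius-Mossotti form).
    alpha = 4.0 * pi * radius**3 * (eps_probe - 1.0) / (eps_probe + 2.0)
    # Quasistatic surface response of the sample.
    beta = (eps_sample - 1.0) / (eps_sample + 1.0)
    # Self-consistent coupling to the image dipole below the surface.
    denom = 1.0 - alpha * beta / (16.0 * pi * (radius + gap) ** 3)
    return alpha * (1.0 + beta) / denom
```

As the abstract notes for weakly resonant samples, the distance dependence in this model is smooth and monotonic; the sharp multi-peak structure requires the fuller treatment.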
Conditional random field-based gesture recognition with depth information
NASA Astrophysics Data System (ADS)
Chung, Hyunsook; Yang, Hee-Deok
2013-01-01
Gesture recognition is useful for human-computer interaction. The difficulty of gesture recognition is that instances of gestures vary both in motion and shape in three-dimensional (3-D) space. We use depth information generated using Microsoft's Kinect in order to detect 3-D human body components and apply a threshold model with a conditional random field in order to recognize meaningful gestures using continuous motion information. Body gesture recognition is achieved through a framework consisting of two steps. First, a human subject is described by a set of features, encoding the angular relationship between body components in 3-D space. Second, a feature vector is recognized using a threshold model with a conditional random field. In order to show the performance of the proposed method, we use a public data set, the Microsoft Research Cambridge-12 Kinect gesture database. The experimental results demonstrate that the proposed method can efficiently and effectively recognize body gestures automatically.
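The angular features described above can be computed directly from three 3-D joint positions; a minimal sketch (joint names are illustrative, not the paper's exact feature set):

```python
import math

def joint_angle(a, b, c):
    """Angle in radians at joint b formed by 3-D points a-b-c,
    e.g. the elbow angle from shoulder, elbow, wrist positions."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# A right-angle configuration: shoulder above the elbow,
# wrist out to the side.
shoulder, elbow, wrist = (0, 1, 0), (0, 0, 0), (1, 0, 0)
elbow_angle = joint_angle(shoulder, elbow, wrist)  # pi/2
```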
Resonant Magnetic Field Sensors Based On MEMS Technology.
Herrera-May, Agustín L; Aguilera-Cortés, Luz A; García-Ramírez, Pedro J; Manjarrez, Elías
2009-01-01
Microelectromechanical systems (MEMS) technology allows the integration of magnetic field sensors with electronic components, offering important advantages such as small size, light weight, minimal power consumption, low cost, good sensitivity, and high resolution. We present a discussion and review of resonant magnetic field sensors based on MEMS technology. In practice, these sensors exploit the Lorentz force to detect external magnetic fields through the displacement of resonant structures, which is measured with optical, capacitive, or piezoresistive sensing techniques. Of these, optical sensing is immune to electromagnetic interference (EMI) and reduces read-out electronic complexity, while piezoresistive sensing requires only a simple fabrication process and standard packaging. The operation mechanisms, advantages, and drawbacks of each sensor are described. MEMS magnetic field sensors are a potential alternative for numerous applications, including the automotive industry, military, medical, telecommunications, oceanographic, space, and environmental sciences. In addition, future markets will need several sensors on a single chip for measuring different parameters such as magnetic field, pressure, temperature, and acceleration.
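A first-order model of the Lorentz-force sensing principle described above (a textbook sketch, not a formula from the review): the force on a current-carrying element perpendicular to the field scales as F = B·I·L, and driving the structure at resonance amplifies the resulting deflection by the quality factor Q:

```python
def lorentz_force(current_a, length_m, b_field_t):
    """Magnitude of the Lorentz force F = B * I * L on a straight
    current-carrying element perpendicular to the magnetic field."""
    return b_field_t * current_a * length_m

def resonant_displacement(b_field_t, current_a, length_m, q_factor, stiffness):
    """First-order resonant deflection: the static Lorentz force,
    amplified by the quality factor Q at resonance, divided by the
    effective spring stiffness k of the structure."""
    return q_factor * lorentz_force(current_a, length_m, b_field_t) / stiffness
```

Since the deflection is linear in B, measuring it (optically, capacitively, or piezoresistively) yields the external field directly.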
A Web-Based Information System for Field Data Management
NASA Astrophysics Data System (ADS)
Weng, Y. H.; Sun, F. S.
2014-12-01
A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were first analyzed and clearly defined: what data are to be stored, who the potential users are, and what system functions are needed to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript implement the user interface in the front-end tier, the Apache web server runs PHP scripts in the middle tier, and MySQL serves as the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text data. It allows users to select data and plot them on either Google Earth or Google Maps to examine spatial relations. It also makes sharing field data easy by converting them into XML, a format that is both human-readable and machine-readable, and thus ready for reuse.
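The XML export step described above can be sketched with the Python standard library (element names are hypothetical; the system's actual schema is not given in the abstract):

```python
import xml.etree.ElementTree as ET

def records_to_xml(records):
    """Serialize field records (a list of dicts) into an XML string
    that is both human-readable and machine-readable."""
    root = ET.Element("fieldData")
    for rec in records:
        node = ET.SubElement(root, "record")
        for key, value in rec.items():
            ET.SubElement(node, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_text = records_to_xml([
    {"site": "Outcrop-1", "lat": 41.08, "lon": -81.51, "note": "granite"},
])
```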
NASA Astrophysics Data System (ADS)
Bellocchi, Alberto; King, Donna T.; Ritchie, Stephen M.
2016-05-01
There is on-going international interest in the relationships between assessment instruments, students' understanding of science concepts and context-based curriculum approaches. This study extends earlier research showing that students can develop connections between contexts and concepts - called fluid transitions - when studying context-based courses. We provide an in-depth investigation of one student's experiences with multiple contextual assessment instruments that were associated with a context-based course. We analyzed the student's responses to context-based assessment instruments to determine the extent to which contextual tests, reports of field investigations, and extended experimental investigations afforded her opportunities to make connections between contexts and concepts. A system of categorizing student responses was developed that can inform other educators when analyzing student responses to contextual assessment. We also refine the theoretical construct of fluid transitions that informed the study initially. Implications for curriculum and assessment design are provided in light of the findings.
Feldwisch-Drentrup, Hinnerk; Schulze-Bonhage, Andreas; Timmer, Jens; Schelter, Bjoern
2011-06-15
The prediction of events is of substantial interest in many research areas. To evaluate the performance of prediction methods, statistical validation of these methods is of utmost importance. Here, we compare an analytical validation method to numerical approaches based on Monte Carlo simulations, in the field of epileptic seizure prediction. In contrast to the analytical validation method, we found that for the numerical validation methods, insufficient but realistic sample sizes can lead to invalidly high rates of false-positive conclusions. Hence we outline necessary preconditions for sound statistical tests of above-chance predictions.
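A toy version of the comparison above: an analytical binomial tail probability versus a Monte Carlo estimate of the same quantity (illustrative only; the paper's seizure-prediction statistics are more involved):

```python
import random
from math import comb

def binom_tail(n, k, p):
    """Analytical P(X >= k) for X ~ Binomial(n, p): the chance of
    k or more correct predictions out of n at hit rate p."""
    return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k, n + 1))

def mc_tail(n, k, p, trials, seed=0):
    """Monte Carlo estimate of the same tail probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if sum(rng.random() < p for _ in range(n)) >= k:
            hits += 1
    return hits / trials

# With too few Monte Carlo trials the estimated p-value fluctuates,
# which is how an undersized simulation sample can falsely declare
# above-chance performance where the analytical test would not.
```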
An Implicit Characteristic Based Method for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
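For reference, the explicit FDTD scheme used above as the comparison baseline can be sketched in one dimension (normalized units, free space; a textbook Yee update, not the paper's LU/AF algorithm):

```python
from math import exp

def fdtd_1d(steps, n=200, courant=0.5):
    """Minimal 1-D FDTD (Yee) simulation in normalized units
    (c = 1, free space) with a soft Gaussian source; the explicit
    scheme is stable only for Courant numbers <= 1, which is the
    limitation the implicit LU/AF method relaxes."""
    ez = [0.0] * n  # electric field at integer grid points
    hy = [0.0] * n  # magnetic field at half-integer grid points
    for t in range(steps):
        for i in range(n - 1):
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, n):
            ez[i] += courant * (hy[i] - hy[i - 1])
        ez[n // 4] += exp(-((t - 30) / 10.0) ** 2)  # soft source
    return ez
```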
A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (Final Report)
EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate g...
Inquiry-Based Field Experiences: Transforming Early Childhood Teacher Candidates' Effectiveness
ERIC Educational Resources Information Center
Linn, Vicki; Jacobs, Gera
2015-01-01
Contemporary teacher preparation programs are challenged to provide transformational learning experiences that enhance the development of highly effective teachers. This mixed-methods case study explored the influence of inquiry-based field experiences as a pedagogical approach to teacher preparation. Four teacher candidates participated in a…
Student-Centred Inquiry "as" Curriculum as a Model for Field-Based Teacher Education
ERIC Educational Resources Information Center
Oliver, Kimberly L.; Oesterreich, Heather A.
2013-01-01
This research project focuses on teacher education in a field-based methods course. We were interested in understanding what "could be" when we worked with pre-service teachers in a high school physical education class to assist them in the process of learning to listen and respond to their students in ways that might better facilitate…
ERIC Educational Resources Information Center
Kea, Cathy D.; Trent, Stanley C.
2013-01-01
This mixed design study chronicles the yearlong outcomes of 27 undergraduate preservice teacher candidates' ability to design and deliver culturally responsive lesson plans during field-based experience lesson observations and student teaching settings after receiving instruction in a special education methods course. While components of…
Geodynamics branch data base for main magnetic field analysis
NASA Technical Reports Server (NTRS)
Langel, Robert A.; Baldwin, R. T.
1991-01-01
The data sets used in geomagnetic field modeling at GSFC are described. Data are measured and obtained from a variety of sources; for clarity, data sets from different sources are categorized and processed separately. The data base is composed of magnetic observatory data, surface data, high-quality aeromagnetic data, high-quality total-intensity marine data, satellite data, and repeat-station data. These individual data categories are described in detail in a series of notebooks in the Geodynamics Branch, GSFC. This catalog reviews the original data sets, the processing history, and the final data sets available for each individual category of the data base, and is to be used as a reference manual for the notebooks. Each data type used in geomagnetic field modeling has a varying level of complexity, requiring specialized processing routines for satellite and observatory data and two general routines for processing aeromagnetic, marine, land-survey, and repeat data.